# Implementation background

## Client connection Queueing

By default lws treats each client connection as completely separate: each is made from scratch with its own independent network connection.

If the user code sets the LCCSCF_PIPELINE bit on `info.ssl_connection` when creating the client connection, though, lws attempts to optimize multiple client connections to the same endpoint by sharing any existing connection and its tls tunnel where possible.
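
As a minimal sketch, requesting a pipelined client connection might look like this; the context is assumed to have been created elsewhere with `lws_create_context()`, and the address, path, and protocol name here are placeholders:

```c
#include <string.h>
#include <libwebsockets.h>

/* sketch: start one pipelined client GET; "context" is assumed to
 * exist already, and address/path/protocol are placeholders */
static struct lws *
start_pipelined_get(struct lws_context *context)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context        = context;
	i.address        = "example.com";	/* placeholder endpoint */
	i.port           = 443;
	i.path           = "/";
	i.host           = i.address;
	i.origin         = i.address;
	i.method         = "GET";
	i.protocol       = "http";		/* placeholder protocol name */
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	return lws_client_connect_via_info(&i);
}
```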

There are two basic approaches. For h1, additional connections of the same type and endpoint queue on a "leader" connection and happen sequentially.

For muxed protocols like h2, they may also queue if the initial connection is not up yet, but subsequently they will all join the existing connection simultaneously, "broadside".

### h1 queueing

The initial wsi to start the network connection becomes the "leader" that subsequent connection attempts will queue against. Each vhost has a dll2_owner `vh->dll_cli_active_conns_owner` that "leaders" that are actually making network connections themselves can register on as "active client connections".

Other client wsi being created that find there is already a leader on the vhost's active client connections list can join their `wsi->dll2_cli_txn_queue` to the leader's `wsi->dll2_cli_txn_queue_owner`, "queueing" on the leader.
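
As an illustrative sketch only (not the actual lws source), the decision can be expressed with the dll2 fields named above; `find_leader()` is a hypothetical helper standing in for the real walk of the vhost's active client connections list:

```c
/* illustrative sketch using the dll2 field names from the text;
 * find_leader() is hypothetical, not a real lws function */
static void
queue_or_lead(struct lws_vhost *vh, struct lws *wsi)
{
	struct lws *leader = find_leader(vh, wsi);	/* hypothetical */

	if (!leader) {
		/* nobody connecting to this endpoint yet: become the
		 * leader and make the network connection ourselves */
		lws_dll2_add_tail(&wsi->dll_cli_active_conns,
				  &vh->dll_cli_active_conns_owner);
		return;
	}

	/* a leader already exists: queue behind it and wait for its
	 * transaction assets to be migrated to us later */
	lws_dll2_add_tail(&wsi->dll2_cli_txn_queue,
			  &leader->dll2_cli_txn_queue_owner);
}
```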

The user code does not know which wsi became the leader and which were queued; it just waits for events, which arrive the same way in either case.

When the "leader" wsi connects, it performs its client transaction as normal, and at the end arrives at lws_http_transaction_completed_client(). Here, it calls through to the lws_mux _lws_generic_transaction_completed_active_conn() helper. This helper sees if anything else is queued, and if so, migrates assets like the SSL *, the socket fd, and any remaining queue from the original leader to the head of the list, which replaces the old leader as the "active client connection" any subsequent connects would queue on.

It has to be done this way so that user code, which may know each client connection by its wsi or have marked it with an `opaque_user_data` pointer, gets its specific request handled by the wsi it expects to handle it.

A side effect of this, also needed to handle POSTs cleanly, is that lws does not attempt to send the headers for the next queued child before the previous child's transaction has finished.

The process of moving the SSL context, fd, etc. between the queued wsi repeats until the whole queue has been handled.

### muxed protocol queueing and stream binding

h2 connections act the same as h1 before the initial connection has been made, but once it is up, all the queued connections immediately join the network connection as child mux streams, "broadside", binding each stream to the existing network connection.
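
From the user side the same pipelining flag covers both cases. A sketch where several h2 requests are issued back-to-back and end up as streams on one network connection (again, the context is assumed to exist and the endpoint is a placeholder):

```c
#include <string.h>
#include <libwebsockets.h>

/* sketch: three GETs to the same h2 origin; with LCCSCF_PIPELINE the
 * first becomes the network connection and the others bind to it as
 * child mux streams once it is up ("broadside") */
static void
start_three_streams(struct lws_context *context)
{
	static const char *paths[] = { "/a", "/b", "/c" };
	struct lws_client_connect_info i;
	int n;

	memset(&i, 0, sizeof(i));
	i.context        = context;		/* assumed created elsewhere */
	i.address        = "example.com";	/* placeholder endpoint */
	i.port           = 443;
	i.host           = i.address;
	i.origin         = i.address;
	i.method         = "GET";
	i.protocol       = "http";		/* placeholder protocol name */
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	for (n = 0; n < (int)LWS_ARRAY_SIZE(paths); n++) {
		i.path = paths[n];
		if (!lws_client_connect_via_info(&i))
			lwsl_err("connect %d failed\n", n);
	}
}
```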