Move the common plugin scanning dir stuff to be based on lws_dir, which already builds for windows. Previously this was done via dirent for unix and libuv for windows. Reduce the dl plat stuff to just wrap instantiation and destruction of dynlibs, and establish common code in lib/misc/dir.c for the plugin scanning itself. Migrate the libuv windows dl stuff to windows-plugins.c, so that he's available even if libuv loop support later becomes an event lib plugin.

Remove the existing api exports scheme for plugins: just export a const struct now, which has a fixed header type followed by whatever you want, depending on the class / purpose of the plugin. Place a "class" string in the header so there can be different kinds of plugins implying different exported types. Make the plugin apis public, and add support for filtering by class string plus per-instantiation / destruction callbacks, so the subclassed header type can do its thing for the plugin class. The user provides a linked-list base for his class of plugins, so he can manage them completely separately and in user code / user export types.

Rip out some last hangers-on from generic sessions / tables. This is all aimed at making the plugins support general enough that it can provide event lib plugins later.
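To make the export scheme concrete, here is a minimal sketch of the shape described above. The struct and field names are illustrative assumptions based on this description, not necessarily the exact definitions that landed:

```c
#include <stddef.h>

/*
 * Hypothetical sketch: every plugin exports one const struct that
 * starts with a fixed header, followed by whatever the plugin class
 * needs.  All names here are illustrative, not the real definitions.
 */
typedef struct lws_plugin_header {
	const char	*name;		/* plugin name */
	const char	*_class;	/* class string, eg, "evlib" */
	unsigned int	api_magic;	/* sanity / abi check */
} lws_plugin_header_t;

/* a subclassed export type for one hypothetical plugin class */
typedef struct my_evlib_plugin {
	lws_plugin_header_t	hdr;	/* must be first */
	int			(*init)(void *opaque);
	void			(*destroy)(void *opaque);
} my_evlib_plugin_t;

static int
my_init(void *opaque)
{
	(void)opaque;
	return 0;
}

static void
my_destroy(void *opaque)
{
	(void)opaque;
}

/* the single const struct the dynlib exports */
const my_evlib_plugin_t my_evlib_plugin = {
	.hdr = {
		.name		= "my-evlib",
		._class		= "evlib",	/* used for class filtering */
		.api_magic	= 0x4c575347,	/* arbitrary example value */
	},
	.init		= my_init,
	.destroy	= my_destroy,
};
```

A scanner built on lws_dir can then dlopen each candidate, look up the single exported symbol, compare the header's class string against the requested filter, and hand matching plugins to the per-class instantiation callback, linking them onto the user-provided list base.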
# Implementation background

## Client connection Queueing
By default lws treats each client connection as completely separate: each is made from scratch, with its own independent network connection.
If the user code sets the `LCCSCF_PIPELINE` bit on `info.ssl_connection` when creating the client connection though, lws attempts to optimize multiple client connections to the same place by sharing any existing connection and its tls tunnel where possible.
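For example, a client connection that opts in might be set up like this (a minimal sketch; context creation is omitted and the endpoint details are placeholders):

```c
#include <libwebsockets.h>
#include <string.h>

/* assumes an already-created struct lws_context *context */
static struct lws *
connect_pipelined(struct lws_context *context)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context	= context;
	i.address	= "example.com";	/* placeholder endpoint */
	i.port		= 443;
	i.path		= "/";
	i.host		= i.address;
	i.origin	= i.address;
	i.method	= "GET";
	/* LCCSCF_PIPELINE asks lws to share any existing connection and
	 * its tls tunnel to the same endpoint where possible */
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	return lws_client_connect_via_info(&i);
}
```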
There are two basic approaches: for h1, additional connections of the same type and endpoint queue on a leader and happen sequentially. For muxed protocols like h2, they may also queue if the initial connection is not up yet, but subsequently they will all join the existing connection simultaneously, "broadside".
## h1 queueing
The initial wsi to start the network connection becomes the "leader" that subsequent connection attempts will queue against. Each vhost has a dll2_owner `wsi->dll_cli_active_conns_owner` that "leaders" who are actually making network connections themselves can register on as "active client connections".
Other client wsi being created that find there is already a leader on the active client connection list for the vhost can join their dll2 `wsi->dll2_cli_txn_queue` to the leader's `wsi->dll2_cli_txn_queue_owner` to "queue" on the leader.
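Conceptually the queueing step is just a dll2 list operation, as sketched below. Note these dll2 members are lws internals rather than user-visible API, so this is only to illustrate the mechanism:

```c
/*
 * Illustration only: the wsi dll2 members named here are internal to
 * lws.  A new client wsi parks itself on the leader's transaction
 * queue using the public dll2 list helper.
 */
static void
queue_on_leader(struct lws *wsi, struct lws *leader)
{
	lws_dll2_add_tail(&wsi->dll2_cli_txn_queue,
			  &leader->dll2_cli_txn_queue_owner);
}
```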
The user code does not know which wsi was first or which is queued; from its point of view events arrive the same either way.
When the "leader" wsi connects, it performs its client transaction as normal,
and at the end arrives at lws_http_transaction_completed_client()
. Here, it
calls through to the lws_mux _lws_generic_transaction_completed_active_conn()
helper. This helper sees if anything else is queued, and if so, migrates assets
like the SSL *, the socket fd, and any remaining queue from the original leader
to the head of the list, which replaces the old leader as the "active client
connection" any subsequent connects would queue on.
It has to be done this way so that user code, which may track each client connection by its wsi or have marked it with an `opaque_user_data` pointer, gets its specific request handled by the wsi it expects to handle it.
As a side effect of this, and in order to handle POSTs cleanly, lws does not attempt to send the headers for the next queued child before the previous child has finished.
The process of moving the SSL context, fd, etc, between the queued wsi continues until the whole queue has been handled.
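A toy model of this hand-over, using stand-in types rather than lws internals, might look like this:

```c
#include <stddef.h>

/* stand-in for a client wsi; only the migrated assets are modeled */
struct conn {
	struct conn	*next_queued;	/* singly-linked txn queue */
	void		*ssl;		/* stands in for the SSL * */
	int		fd;		/* the socket fd */
};

/*
 * Called when the current leader finishes its transaction: promote
 * the head of the queue to be the new leader, handing it the live
 * network assets so the tls tunnel and socket are reused.
 */
static struct conn *
promote_next_queued(struct conn *leader)
{
	struct conn *n = leader->next_queued;

	if (!n)
		return NULL;	/* nothing queued: connection may close */

	n->ssl		= leader->ssl;	/* migrate the tls object */
	n->fd		= leader->fd;	/* migrate the socket fd */
	leader->ssl	= NULL;		/* old leader no longer owns them */
	leader->fd	= -1;
	leader->next_queued = NULL;	/* rest of queue already follows n */

	return n;	/* the new "active client connection" */
}
```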
## muxed protocol queueing and stream binding
h2 connections act the same as h1 before the initial connection has been made, but once it is up, all the queued connections join the network connection immediately as child mux streams, "broadside", binding each stream to the existing network connection.
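Conceptually, once the network connection is established the whole queue drains in one pass, each entry becoming a child mux stream. A sketch using lws' public dll2 iterator, with the internal member names from the text above, for illustration only:

```c
/*
 * Illustration only: bind every queued wsi to the now-established
 * h2 network connection nwsi as a child mux stream.
 */
static int
bind_as_mux_child(struct lws_dll2 *d, void *user)
{
	struct lws *w = lws_container_of(d, struct lws,
					 dll2_cli_txn_queue);
	struct lws *nwsi = (struct lws *)user;

	lws_dll2_remove(d);
	/* attach w as a child stream of nwsi and start its headers
	 * (details omitted) */
	(void)w;
	(void)nwsi;

	return 0;	/* keep iterating */
}

/* on ESTABLISHED of the network connection nwsi, something like:
 *
 *	lws_dll2_foreach_safe(&nwsi->dll2_cli_txn_queue_owner, nwsi,
 *			      bind_as_mux_child);
 */
```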