Presently a vh is allocated per trust store at policy parsing-time. This is no problem on a linux-class device, or if you decide you need a dynamic policy for functionality reasons.

However, if you're in a constrained enough situation that the static policy makes sense, and your trust stores do not have 100% duty cycle, i.e., are not always in use, the currently-unused vhosts and their x.509 stack are sitting there taking up heap for no immediate benefit.

This patch modifies behaviour in ..._STATIC_POLICY_ONLY so that vhosts and associated x.509 tls contexts are not instantiated until a secure stream using them is created; they are refcounted, and when the last logical secure stream using a vhost is destroyed, the vhost and its tls context are also destroyed. If another ss connection is created that wants to use the trust store, the vhost and x.509 context are regenerated as needed.

Currently the refcounting is by ss; it's also possible to move the refcounting to be by connection. The choice is between the delay to generate the vh being visible at logical ss creation-time, or at connection-time. It's anyway not preferable to have ss instantiated and taking up space with no associated connection or connection attempt underway.

NB: you will need to reprocess any static policies after this patch so they conform to the trust_store changes.
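In outline, this is a lazy-create / refcount pattern per trust store. The sketch below is only a minimal illustration of that pattern, with hypothetical struct and helper names; it is not the actual lws implementation.

```c
#include <libwebsockets.h>

/*
 * Illustration only: hypothetical names showing the lazy instantiation and
 * refcounting described above, not the real lws internals.
 */
struct trust_store_vh {
	struct lws_vhost	*vh;		/* NULL until first ss needs it */
	int			refcount;
};

static struct lws_vhost *
trust_store_vh_get(struct lws_context *cx, struct trust_store_vh *tsv)
{
	if (!tsv->vh)
		/* first secure stream using this trust store: create the
		 * vhost and its x.509 tls context on demand */
		tsv->vh = create_vhost_for_trust_store(cx); /* hypothetical */

	if (tsv->vh)
		tsv->refcount++;

	return tsv->vh;
}

static void
trust_store_vh_put(struct trust_store_vh *tsv)
{
	if (--tsv->refcount)
		return;

	/* the last secure stream using this trust store is gone: destroy
	 * the vhost and its tls context to recover the heap */
	lws_vhost_destroy(tsv->vh);
	tsv->vh = NULL;
}
```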
Files changed:

- adopt.c
- client.c
- close.c
- CMakeLists.txt
- connect.c
- detailed-latency.c
- dummy-callback.c
- lws-dsh.c
- network.c
- output.c
- pollfd.c
- private-lib-core-net.h
- README.md
- sequencer.c
- server.c
- service.c
- socks5-client.c
- sorted-usec-list.c
- state.c
- stats.c
- vhost.c
- wsi-timeout.c
- wsi.c
Implementation background
Client connection Queueing
By default lws treats each client connection as completely separate, and each is made from scratch with its own network connection independently.
If the user code sets the LCCSCF_PIPELINE bit on info.ssl_connection when creating the client connection though, lws attempts to optimize multiple client connections to the same place by sharing any existing connection and its tls tunnel where possible.
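For instance, a single pipelined client GET might be set up like this (a minimal sketch; the endpoint, path and protocol name are placeholders, and cx is assumed to be an already-created lws context):

```c
#include <string.h>
#include <libwebsockets.h>

/* Minimal sketch: start one pipelined GET; "example.com", the path and the
 * protocol name are placeholders, cx is an existing struct lws_context. */
static struct lws *
start_pipelined_get(struct lws_context *cx)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context	 = cx;
	i.address	 = "example.com";
	i.port		 = 443;
	i.path		 = "/index.html";
	i.host		 = i.address;
	i.origin	 = i.address;
	i.method	 = "GET";
	i.protocol	 = "my-protocol";	/* local protocol handling the events */
	/*
	 * LCCSCF_PIPELINE asks lws to share or queue on an existing
	 * connection and its tls tunnel to the same endpoint, if there is
	 * one, instead of always opening a fresh network connection.
	 */
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	return lws_client_connect_via_info(&i);
}
```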
There are two basic approaches. For h1, additional connections of the same type and endpoint queue on a "leader" and happen sequentially. For muxed protocols like h2, they may also queue if the initial connection is not up yet, but subsequently they will all join the existing connection simultaneously, "broadside".
h1 queueing
The initial wsi to start the network connection becomes the "leader" that subsequent connection attempts will queue against. Each vhost has a dll2_owner wsi->dll_cli_active_conns_owner that "leaders" who are actually making network connections themselves can register on as "active client connections".

Other client wsi being created that find there is already a leader on the active client connection list for the vhost can join their dll2 wsi->dll2_cli_txn_queue to the leader's wsi->dll2_cli_txn_queue_owner to "queue" on the leader. The user code does not know which wsi was first or is queued; it just waits for stuff to happen the same either way.
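In outline, the queueing decision comes down to which dll2 list the new wsi's element is added to. The following is only an illustrative sketch: the structs are simplified stand-ins for the (opaque) lws ones, and finding a suitable leader is left to the caller.

```c
#include <libwebsockets.h>

/*
 * Illustration only: simplified stand-ins for struct lws / struct lws_vhost,
 * showing how the lists described above relate, not the real lws code.
 */
struct demo_vhost {
	lws_dll2_owner_t dll_cli_active_conns_owner;  /* "leaders" register here */
};

struct demo_wsi {
	lws_dll2_t	 dll_cli_active_conns;	      /* our entry on the vhost list */
	lws_dll2_t	 dll2_cli_txn_queue;	      /* our entry on a leader's queue */
	lws_dll2_owner_t dll2_cli_txn_queue_owner;    /* others queue here if we lead */
};

/* returns 0 if wsi became the leader (caller should connect), 1 if it queued */
static int
queue_or_lead(struct demo_wsi *wsi, struct demo_wsi *leader,
	      struct demo_vhost *vh)
{
	if (!leader) {
		/* no existing "active client connection" to this endpoint:
		 * become the leader that later attempts can queue against */
		lws_dll2_add_tail(&wsi->dll_cli_active_conns,
				  &vh->dll_cli_active_conns_owner);

		return 0;
	}

	/* a leader already exists: queue this wsi's transaction on it */
	lws_dll2_add_tail(&wsi->dll2_cli_txn_queue,
			  &leader->dll2_cli_txn_queue_owner);

	return 1;
}
```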
When the "leader" wsi connects, it performs its client transaction as normal,
and at the end arrives at lws_http_transaction_completed_client()
. Here, it
calls through to the lws_mux _lws_generic_transaction_completed_active_conn()
helper. This helper sees if anything else is queued, and if so, migrates assets
like the SSL *, the socket fd, and any remaining queue from the original leader
to the head of the list, which replaces the old leader as the "active client
connection" any subsequent connects would queue on.
It has to be done this way so that user code, which may track each client connection by its wsi or may have marked it with an opaque_user_data pointer, gets its specific request handled by the wsi it expects to handle it.

As a side effect of this, and in order to be able to handle POSTs cleanly, lws does not attempt to send the headers for the next queued child before the previous child has finished.
The process of moving the SSL context and fd etc between the queued wsi continues until the queue is all handled.
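The handover at transaction completion can be pictured roughly like this, reusing the simplified structs from the earlier sketch; the three transfer helpers are hypothetical stand-ins for what the real lws helper does internally.

```c
/*
 * Illustrative outline only, not the real lws helper: when the leader's
 * transaction completes, the next queued wsi inherits the connection.
 */
static void
handover_to_next_queued(struct demo_wsi *leader)
{
	lws_dll2_t *d = lws_dll2_get_head(&leader->dll2_cli_txn_queue_owner);
	struct demo_wsi *next;

	if (!d)
		return;	/* nothing queued: the connection can simply close */

	next = lws_container_of(d, struct demo_wsi, dll2_cli_txn_queue);
	lws_dll2_remove(&next->dll2_cli_txn_queue);

	/*
	 * Hypothetical transfer steps: the SSL *, the socket fd and any
	 * remaining queued wsi move from the old leader to "next", which
	 * then replaces it as the "active client connection" that later
	 * connects would queue on.
	 */
	move_tls_and_fd(leader, next);		/* hypothetical */
	move_remaining_queue(leader, next);	/* hypothetical */
	promote_to_active_conn(leader, next);	/* hypothetical */
}
```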
muxed protocol queueing and stream binding
h2 connections act the same as h1 before the initial connection has been made, but once it is made all the queued connections join the network connection as child mux streams immediately, "broadside", binding the stream to the existing network connection.
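From the user side this is still just several pipelined client connections to the same endpoint. In a sketch like the one below (same placeholder endpoint and protocol name as earlier), only the first actually opens a socket; once its h2 session is up, the rest bind to it as child streams.

```c
#include <string.h>
#include <libwebsockets.h>

/*
 * Sketch: several pipelined GETs to one placeholder endpoint.  With
 * LCCSCF_PIPELINE, only the first opens a network connection; after its h2
 * session is established, the others join it as child mux streams
 * ("broadside") rather than connecting separately.
 */
static void
start_parallel_gets(struct lws_context *cx)
{
	static const char *paths[] = { "/a.html", "/b.html", "/c.html" };
	struct lws_client_connect_info i;
	size_t n;

	for (n = 0; n < LWS_ARRAY_SIZE(paths); n++) {
		memset(&i, 0, sizeof(i));
		i.context	 = cx;
		i.address	 = "example.com";	/* placeholder */
		i.port		 = 443;
		i.path		 = paths[n];
		i.host		 = i.address;
		i.origin	 = i.address;
		i.method	 = "GET";
		i.protocol	 = "my-protocol";	/* placeholder */
		i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

		if (!lws_client_connect_via_info(&i))
			lwsl_err("connect %s failed\n", paths[n]);
	}
}
```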