h1 and h2 have a bunch of code supporting autobinding outgoing client connections
so that they become streams on, or are queued as pipelined requests on, an existing
single network connection, if it is to the same endpoint.
Adapt this http-specific code and active connection tracking to be usable for
generic muxable protocols the same way.
That logic was already correct; add helpers to isolate and deduplicate the
processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable via the
context creation info .keepalive_timeout member, if nonzero.
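For example (a minimal sketch; .keepalive_timeout is the existing member of
struct lws_context_creation_info, in seconds):

    struct lws_context_creation_info info;
    struct lws_context *context;

    memset(&info, 0, sizeof info);
    /* ... other context setup ... */

    /* nonzero: also used as the h2 network connection timeout
     * in place of the default 31s */
    info.keepalive_timeout = 60;

    context = lws_create_context(&info);
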
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
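A sketch of the client-side usage (the lws_h2_client_stream_long_poll_rxonly()
name is assumed from current lws headers; adjust to the api your version exports):

    static int
    callback_longpoll(struct lws *wsi, enum lws_callback_reasons reason,
                      void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_ESTABLISHED_CLIENT_HTTP:
            /*
             * send the 0-byte DATA with END_STREAM and mark the stream
             * immortal, leaving it readable as a long poll stream if the
             * server permits half-closed long poll
             */
            if (lws_h2_client_stream_long_poll_rxonly(wsi))
                lwsl_warn("%s: long poll transition failed\n", __func__);
            break;
        default:
            break;
        }

        return lws_callback_http_dummy(wsi, reason, user, in, len);
    }
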
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
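For example, when creating the vhost (sketch):

    struct lws_context_creation_info info;

    memset(&info, 0, sizeof info);
    /* ... other vhost setup ... */

    /* allow half-closed remote h2 streams to become immortal long poll streams */
    info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;

    if (!lws_create_vhost(context, &info))
        lwsl_err("vhost creation failed\n");
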
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very detailed
information on every read and write, and allowing the user code to provide
a callback to process the events.
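The collection is off unless enabled at build time, eg (the usual lws cmake
flow assumed):

    $ cmake .. -DLWS_WITH_DETAILED_LATENCY=1 && make
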
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out: poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
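A minimal sketch of the scheduled-callback api (assuming the lws_sul_schedule()
and lws_sorted_usec_list_t names found in recent lws headers):

    static lws_sorted_usec_list_t sul;

    static void
    sul_cb(lws_sorted_usec_list_t *s)
    {
        lwsl_notice("%s: fired\n", __func__);
    }

    /* from the service thread: call sul_cb 250ms (250000us) from now, on pt 0 */
    lws_sul_schedule(context, 0, &sul, sul_cb, 250 * 1000);
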
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
https://github.com/warmcat/libwebsockets/issues/1550
rx flow control needs to handle the situation where it is still draining from
a previous rx flow control period, and the user code reasserts rx flow
control partway through that.
The accounting for the consumed rx then boils down to only trimming the
rxflow buflist we were "replaying", by however much of it we managed
to deliver this time before rx flow control was asserted again.
"Normal" rx consumption accounting is wrong in this case, since the
consumption was accounted for entirely in the rxflow cache buflist.
The patch recognizes this situation, does the accounting in the cache
buflist, and then reports to the caller that there was no rx consumption
to be accounted for at its level.
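The user code pattern that triggers this is simply reasserting flow control
from inside the receive callback while rx is still being replayed from the
earlier buflist; a fragment as a sketch (consume() and backpressure() are
placeholder names for whatever the application does; lws_rx_flow_control()
is the existing public api):

    case LWS_CALLBACK_RECEIVE:
        consume(in, len);

        /*
         * this rx may be a "replay" from the buflist cached during the
         * previous flow control period; stopping rx again here means lws
         * must account for it only in that cache buflist and report no
         * additional rx consumption at the caller's level
         */
        if (backpressure())
            lws_rx_flow_control(wsi, 0);
        break;
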
If you have multiple vhosts with client contexts enabled, under
OpenSSL each one loads its own copy of the system cert bundle.
On libwebsockets.org there are many vhosts, and the waste adds up
to about 9MB of heap.
This patch makes a sha256 from the client context configuration, and
if a suitable client context already exists on another vhost, bumps
a refcount and reuses the client context.
Where the client contexts are configured differently, a new one
is created (and is itself then available for reuse).
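Conceptually the lookup works like this (an illustrative sketch with
hypothetical names, not the actual lws internals):

    struct client_ctx_entry {
        struct client_ctx_entry *next;
        uint8_t                  hash[32]; /* sha256 of the client tls config */
        SSL_CTX                 *ssl_ctx;
        int                      refcount;
    };

    static SSL_CTX *
    reuse_client_ctx(struct client_ctx_entry *list, const uint8_t *hash)
    {
        struct client_ctx_entry *e;

        for (e = list; e; e = e->next)
            if (!memcmp(e->hash, hash, 32)) {
                e->refcount++; /* another vhost shares this client context */
                return e->ssl_ctx;
            }

        /* no match: the caller creates a new SSL_CTX and lists it for reuse */
        return NULL;
    }
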