On some platforms, the logging flow itself may reset errno. If we log errno on
such a platform and afterwards try to query it, we get a nasty surprise: the
errno we logged has been destroyed by the time we come to test it.
In the two cases of this in the tree at the moment, sample errno into a temp,
then log and test the temp, as sketched below.
Thanks to Sakthi Ramabadran for finding this.
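As a sketch, the pattern looks like this (the surrounding function is
illustrative, not one of the two call sites in the tree):

    #include <errno.h>
    #include <unistd.h>
    #include <libwebsockets.h>

    static int
    sample_read(int fd, void *buf, size_t len)
    {
        ssize_t n = read(fd, buf, len);

        if (n < 0) {
            int en = errno; /* sample errno before logging can clobber it */

            lwsl_err("%s: read failed: errno %d\n", __func__, en);

            if (en == EINTR) /* test the sampled copy, not errno itself */
                return 0;

            return -1;
        }

        return (int)n;
    }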
Now that the generic lws_system blobs can cover the client cert + key, let's
add support for applying one of the blob sets to a specific client
connection (rather than doing it via the vhost).
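A hedged usage sketch; the connect-info member name sys_tls_client_cert and
the blob setup below are assumptions, not confirmed by this commit text:

    #include <string.h>
    #include <libwebsockets.h>

    static struct lws *
    connect_with_blob_cert(struct lws_context *cx, const uint8_t *cert_der,
                           size_t cert_len, const uint8_t *key_der,
                           size_t key_len)
    {
        struct lws_client_connect_info i;

        /* stash the client cert + key in the generic lws_system blobs */
        lws_system_blob_direct_set(lws_system_get_blob(cx,
                LWS_SYSBLOB_TYPE_CLIENT_CERT_DER, 0), cert_der, cert_len);
        lws_system_blob_direct_set(lws_system_get_blob(cx,
                LWS_SYSBLOB_TYPE_CLIENT_KEY_DER, 0), key_der, key_len);

        memset(&i, 0, sizeof(i));
        i.context             = cx;
        i.address             = "example.com";
        i.port                = 443;
        i.path                = "/";
        i.host                = i.address;
        i.ssl_connection      = LCCSCF_USE_SSL;
        i.sys_tls_client_cert = 1; /* assumed: 1-based blob set index */

        return lws_client_connect_via_info(&i);
    }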
To Semmle it looks as though the int size can be bigger than the char loop
variable. But the size is the size of an IPv4 or IPv6 address, so it cannot
actually cause a problem.
This teaches the http client stuff how to handle 303 redirects... these
can happen after a POST where the server side wants you to come back with
a GET to the Location: it mentions.
The lws client will follow the redirect and force GET; this works for both
h1 and h2. The client protocol handler has to act differently depending on
whether it is connecting for the initial POST or the subsequent GET; it can
find out which by checking a new api, lws_http_is_redirected_to_get(wsi),
which returns nonzero if in GET mode (see the sketch below).
The minimal example for server form-post has a new --303 switch to enable
this behaviour there, and the client post example has additions to
check lws_http_is_redirected_to_get().
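For instance, a client protocol handler's header-append callback might look
like this sketch (the body-pending handling is the usual lws client POST
pattern; header writing is elided):

    static int
    callback_http(struct lws *wsi, enum lws_callback_reasons reason,
                  void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_CLIENT_APPEND_HANDSHAKE_HEADER:
            /*
             * After the 303, lws reconnects to the Location: target in
             * forced GET mode... in that case there is no body to send,
             * so don't arrange for one a second time.
             */
            if (lws_http_is_redirected_to_get(wsi))
                break;

            /* initial POST: declare a pending body and ask to write it */
            lws_client_http_body_pending(wsi, 1);
            lws_callback_on_writable(wsi);
            break;

        default:
            break;
        }

        return lws_callback_http_dummy(wsi, reason, user, in, len);
    }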
Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service avoided it if the wait was already 0.
With sul, though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is really cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending event, and call it
regardless of whether we already decided we are not going to wait in the poll.
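Conceptually the flow is like this hypothetical standalone sketch (not the
internal lws implementation):

    #include <stdint.h>

    typedef uint64_t usec_t;

    typedef struct ev {
        struct ev  *next;               /* list kept sorted soonest-first */
        usec_t      at;                 /* absolute time the event is due */
        void      (*cb)(struct ev *ev);
    } ev_t;

    /* services everything ripe, and returns the interval to the next
     * pending event (0 if none), cheaply enough to call every time */
    static usec_t
    service_ripe(ev_t **head, usec_t usnow)
    {
        while (*head) {
            ev_t *e = *head;

            if (e->at > usnow)
                return e->at - usnow;   /* interval to next pending */

            *head = e->next;            /* remove ripe event, then fire */
            e->cb(e);
        }

        return 0;                       /* nothing pending */
    }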
After https://github.com/warmcat/libwebsockets/pull/1745
As it stands, if time_t is 32-bit on the platform, the arithmetic may
overflow, so force it to lws_usec_t (uint64_t) even though it works OK
here on x86_64.
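The hazard, in a standalone sketch (lws_usec_t shown as uint64_t per the
text above):

    #include <stdint.h>
    #include <time.h>

    typedef uint64_t lws_usec_t;

    static lws_usec_t
    secs_to_us(time_t t)
    {
        /* widen first: the multiply is then done in 64 bits */
        return (lws_usec_t)t * 1000000;

        /* wrong on 32-bit time_t: (lws_usec_t)(t * 1000000) does the
         * multiply in 32 bits and overflows before the cast */
    }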
Add a minimal example aimed at testing the wsi hrtimer stability
consistently across platforms.
Add hrtimer dump code, disabled by default (this is too expensive and too
specific to internal testing to leave enabled in debug mode, even if nothing
is printed). If you hack it enabled, it will dump the sul
list for the pt and assert if the list is disordered.
The sul callback is allowed to do anything to the sul list, so it's
not possible to safely iterate it in between calls.
Luckily, because the list is sorted, we only ever care about the current
head, so switch to iterating it that way, as sketched below.
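A sketch of the head-only walk, using the lws_dll2 helpers and the public
lws_sorted_usec_list_t fields:

    #include <libwebsockets.h>

    static void
    sul_run_ripe(lws_dll2_owner_t *own, lws_usec_t usnow)
    {
        lws_dll2_t *d;

        /* never hold an iterator across callbacks: re-fetch the head */
        while ((d = lws_dll2_get_head(own))) {
            lws_sorted_usec_list_t *sul =
                lws_container_of(d, lws_sorted_usec_list_t, list);

            if (sul->us > usnow)
                break;  /* head not ripe -> nothing later is either */

            lws_dll2_remove(&sul->list); /* detach before the callback */
            sul->cb(sul);                /* cb may freely mutate the list */
        }
    }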
This shouldn't be necessary; the END_HEADERS flag alone should be enough.
But nghttp2 will not talk to us unless we also end the stream from our side.
Unfortunately, ending the stream at the time we send the headers means
we cannot support the long-poll half-close scheme. So add a quirk
flag the client can set when creating the connection, to optionally
accommodate this behaviour of nghttp2.
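The quirk would be requested via the client connect info; the flag name used
here, LCCSCF_H2_QUIRK_NGHTTP2_END_STREAM, is my assumption of what the
commit adds:

    #include <string.h>
    #include <libwebsockets.h>

    static struct lws *
    connect_quirked(struct lws_context *cx)
    {
        struct lws_client_connect_info i;

        memset(&i, 0, sizeof(i));
        i.context        = cx;
        /* ... address, port, path, host, method ... */
        i.ssl_connection = LCCSCF_USE_SSL |
                           LCCSCF_H2_QUIRK_NGHTTP2_END_STREAM; /* assumed */

        return lws_client_connect_via_info(&i);
    }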
h1 and h2 have a bunch of code supporting autobinding outgoing client
connections to be streams in, or queued as pipelined on, the same / existing
single network connection, if it's to the same endpoint.
Adapt this http-specific code and active connection tracking to be usable for
generic muxable protocols in the same way.
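From the user's side the opt-in is unchanged, via the existing connect flag
(sketch):

    /* in the client connect info: connections marked this way to the
     * same endpoint share one network connection -- queued / pipelined
     * on h1, muxed as separate streams on muxable protocols like h2 */
    i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;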
Introduce a generic lws_state object with notification handlers
that may be registered in a chain.
Implement one of those in the context to manage the "system state".
Allow other pieces of lws and user code to register notification
handlers on a context list. Handlers can object to a system state change, or
take over responsibility for moving it forward and retrying it, if
they know that some dependent action must succeed first.
For example, if the system time is invalid, we cannot move on to
a state where anything can do tls until that has been corrected.
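Registration follows the pattern below (a sketch after the lws minimal
examples; the handler body is illustrative):

    static int
    app_state_nf(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
                 int current, int target)
    {
        /* returning nonzero objects to the transition so it is retried */
        if (current == LWS_SYSTATE_OPERATIONAL &&
            target == LWS_SYSTATE_OPERATIONAL)
            /* dependencies (time, dns, tls...) are satisfied now */
            lwsl_notice("%s: OPERATIONAL: can start work\n", __func__);

        return 0;
    }

    static lws_state_notify_link_t nl = {
        { NULL, NULL, NULL }, app_state_nf, "app"
    };
    static lws_state_notify_link_t * const notifiers[] = { &nl, NULL };

    /* in the lws_context_creation_info, before lws_create_context() */
    info.register_notifier_list = notifiers;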
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent "validity" lws_sul tracking how long it
has been since the last exchange confirming that the network connection is
operating in both directions.
Clean out the periodic role callback and replace the last two role users
with discrete lws_sul for each pt.
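The validity knobs live in the retry / backoff policy; a sketch (field names
as in lws_retry_bo_t):

    static const lws_retry_bo_t retry = {
        .secs_since_valid_ping   = 30, /* if idle 30s, issue a PING probe */
        .secs_since_valid_hangup = 45, /* if still nothing by 45s, close */
    };

    /* applied, eg, in the creation info at context / vhost creation */
    info.retry_and_idle_policy = &retry;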
It was already correct, but add helpers to isolate and deduplicate the
processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable
by .keepalive_timeout if nonzero.
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
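Usage sketch: the vhost flag is as named above; the client-side transition
api name used here, lws_h2_client_stream_long_poll_rxonly(), is an
assumption:

    /* server: allow half-closed remotes to become immortal long-poll
     * streams on this vhost */
    info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;

    /* client: half-close LOCAL and mark the stream immortal (assumed
     * api name), leaving a read-only long-poll stream */
    if (lws_h2_client_stream_long_poll_rxonly(wsi))
        lwsl_warn("%s: long poll transition failed\n", __func__);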
There's no longer any reason to come out of sleep for periodic service,
since that has been eliminated by lws_sul.
With event libs there is no opportunity to do it anyway, since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users are relying on the old poll() service loop as
glue that's difficult to replace. So for now, help that happen by
accepting a timeout_ms of -1 as meaning: sample the poll and service
whatever is already pending, without any wait.
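So an external glue loop can keep doing something like this sketch (the
surrounding loop condition and work are hypothetical user code):

    while (!interrupted) {
        do_other_application_work(); /* hypothetical user work */

        /* -1: sample the poll and service anything pending, returning
         * immediately instead of sleeping inside lws */
        lws_service(context, -1);
    }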
With http, the protocol doesn't indicate where the headers end and the
next transaction or body begins. Until now, we handled that for client
response header parsing by reading from the tls buffer bytewise.
This modernizes the code to read in up to 256-byte chunks and parse
each chunk in one hit (the parse api is already set up for doing this
elsewhere).
Now that we have a generic input buflist, adapt the parser loop to go through
that and arrange for any leftovers to be placed on it.
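The shape of the adapted loop, as a hypothetical sketch (parse_chunk() and
headers_complete() stand in for the real internal apis):

    /* parse in up-to-256-byte chunks instead of bytewise; anything the
     * parser does not consume belongs to the next transaction or the
     * body, so it is stashed on the input buflist, not dropped */
    while (len) {
        size_t chunk = len > 256 ? 256 : len;
        size_t used = parse_chunk(wsi, buf, chunk); /* hypothetical */

        buf += used;
        len -= used;

        if (headers_complete(wsi)) { /* hypothetical */
            if (len) /* leftover: start of next transaction / body */
                lws_buflist_append_segment(&wsi->buflist, buf, len);
            break;
        }
    }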
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very
detailed information on every read and write, and allowing user code to
provide a callback to process the events.
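Enabled at build time via the new cmake option (the callback plumbing is not
shown here):

    $ cmake .. -DLWS_WITH_DETAILED_LATENCY=1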