Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service code avoided it if the wait was
already 0.
With sul though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is really cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending one. And call it
regardless of if we already decided we are not going to wait in the poll.
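A minimal sketch of the idea behind __lws_sul_service_ripe(), using
hypothetical names rather than the real lws internals: walk the sorted
list from the head, service and remove every entry whose time has
arrived, then return the interval to the next pending one.

#include <stdint.h>

typedef int64_t lws_usec_t;

/* hypothetical node: entries are kept sorted by absolute expiry time */
typedef struct sul_entry {
	struct sul_entry	*next;
	lws_usec_t		us;	/* absolute expiry time */
	void			(*cb)(struct sul_entry *e);
} sul_entry_t;

/*
 * Service every "ripe" entry, removing each one before calling it back,
 * then return the interval in us to the next pending entry, or 0 if the
 * list is now empty.  Because the list is sorted, each step and the
 * time-to-next-event lookup are both trivially cheap.
 */
static lws_usec_t
sul_service_ripe(sul_entry_t **head, lws_usec_t usnow)
{
	while (*head && (*head)->us <= usnow) {
		sul_entry_t *e = *head;

		*head = e->next;	/* unlink before the callback */
		e->cb(e);
	}

	if (!*head)
		return 0;

	return (*head)->us - usnow;
}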
After https://github.com/warmcat/libwebsockets/pull/1745
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out: poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
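A hedged usage sketch of the public us-resolution scheduling api
mentioned above, assuming the lws_sul_schedule() / lws_sorted_usec_list_t
names as found in current lws trees (check your version's headers for
the exact signature):

#include <libwebsockets.h>

struct my_timer {
	lws_sorted_usec_list_t	sul;	/* must stay valid while scheduled */
	int			count;
};

static struct my_timer mt;

/* invoked from the event loop when the scheduled time arrives */
static void
my_timer_cb(lws_sorted_usec_list_t *sul)
{
	struct my_timer *t = lws_container_of(sul, struct my_timer, sul);

	lwsl_notice("tick %d\n", ++t->count);
}

/* ask for a callback 250ms from now on service thread 0; the api takes
 * us, though the platform / event lib usually only delivers ms */
static void
start_timer(struct lws_context *context)
{
	lws_sul_schedule(context, 0, &mt.sul, my_timer_cb, 250 * 1000);
}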
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, and they also feed into the common event wait calculation.
lws_dll2 removes the downsides of lws_dll and adds new features like a
running member count and explicit owner type... it's cleaner and more
robust (eg, nodes know their owner, so they can casually switch between
list owners and remove themselves without the code knowing the owner).
This deprecates lws_dll, but since it's public API it can continue to be
built for the 4.0 release if you give cmake LWS_WITH_DEPRECATED_LWS_DLL.
All remaining internal users of lws_dll are migrated to lws_dll2.
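A rough sketch of the shape of the types described above, with
illustrative names rather than the exact lws declarations:

#include <stdint.h>

typedef int64_t lws_usec_t;

/* doubly-linked list node that knows which list it is on */
struct my_dll2 {
	struct my_dll2		*prev, *next;
	struct my_dll2_owner	*owner;	/* lets the node remove itself or
					 * hop between lists casually */
};

/* explicit owner type with a running member count */
struct my_dll2_owner {
	struct my_dll2		*head, *tail;
	uint32_t		count;
};

/* scheduled-event type: an lws_dll2-style node bound to a time in us */
struct my_sorted_usec_list {
	struct my_dll2		list;	/* lives on the pt's sorted list */
	lws_usec_t		us;	/* when it should fire */
	void			(*cb)(struct my_sorted_usec_list *sul);
};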
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
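For illustration, a hypothetical version of such a helper (not the
actual lws function): reduce a us interval to the ms granularity that
poll() and most event libs support, rounding up so the loop never wakes
before anything is actually ripe.

#include <stdint.h>

typedef int64_t lws_usec_t;

static int
us_to_ms_wait(lws_usec_t us)
{
	if (us <= 0)
		return 0;	/* something is already ripe, don't wait */

	/* round up to whole ms so we don't wake early and spin */
	return (int)((us + 999) / 1000);
}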
Otherwise the service often forgets to report event loop interrupts.
Add a flag to the per-thread context, set it in the signal function,
then check and reset it in the service method.
https://github.com/warmcat/libwebsockets/issues/1550
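A hedged sketch of that flag pattern, with illustrative names rather
than the actual lws pt members:

/* illustrative per-service-thread state */
struct my_pt {
	volatile char	event_loop_interrupted;
};

/* the "signal function": note the interrupt, then wake the wait */
static void
my_signal_interrupt(struct my_pt *pt)
{
	pt->event_loop_interrupted = 1;
	/* ... plus whatever actually wakes the poll / event lib wait ... */
}

/* in the service method, after the wait returns */
static int
my_service_saw_interrupt(struct my_pt *pt)
{
	if (!pt->event_loop_interrupted)
		return 0;

	pt->event_loop_interrupted = 0;	/* consume the flag */

	return 1;			/* report the interrupt upward */
}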
rx flow control needs to handle the situation that it is draining from
a previous rx flow control period, and the user code reasserts rx flow
control partway through that.
The accounting for the consumed rx then boils down to trimming only the
rxflow buflist we were "replaying", by however much of it we managed to
deliver this time before rx flow control was asserted again.
"Normal" rx consumption is wrong in this case, since we accounted for
it entirely in the rxflow cache buflist.
The patch recognizes this situation, does the accounting in the cache
buflist, and then lies to the caller that there was no rx consumption to
be accounted for at its level.
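A simplified sketch of that decision, with hypothetical names (the real
code operates on the wsi's rxflow buflist):

#include <stddef.h>

struct my_buflist;	/* opaque, segmented rx cache */

/* hypothetical: consume 'used' bytes from the head of the buflist */
void my_buflist_use_segment(struct my_buflist **head, size_t used);

struct my_wsi {
	struct my_buflist	*rxflow_buflist;
	char			draining_rxflow_cache;
};

/*
 * 'len' bytes were just delivered to user code.  If they came from the
 * rxflow cache we were replaying, trim only that buflist and report
 * zero consumption to the caller, since those bytes were already
 * accounted for when they were cached.
 */
static size_t
account_rx_consumed(struct my_wsi *wsi, size_t len)
{
	if (wsi->draining_rxflow_cache) {
		my_buflist_use_segment(&wsi->rxflow_buflist, len);

		return 0;	/* "nothing consumed" at the caller's level */
	}

	return len;		/* normal case: caller accounts for it */
}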
If you're providing a unix socket service that will be proxied / served
by another process on the same machine, the permissions on the listening
unix socket have to be managed so that only something running under the
server's credentials can open it.
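For example, one way to manage that at the filesystem level after
binding the listener (the path, user and group here are placeholders;
newer lws also grew options to do this for you at context creation,
check your version):

#include <grp.h>
#include <pwd.h>
#include <sys/stat.h>
#include <unistd.h>

/* restrict the listening unix socket so only the proxying server's
 * user / group can connect; everyone else is locked out */
static int
restrict_unix_listener(const char *path)
{
	struct passwd *pw = getpwnam("www-data");	/* server credentials */
	struct group *gr = getgrnam("www-data");

	if (!pw || !gr)
		return -1;

	if (chown(path, pw->pw_uid, gr->gr_gid))
		return -1;

	return chmod(path, 0660);	/* rw for owner and group only */
}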
lws has been able to proxy h2 or h1 inbound connections to an h1 onward
connection for a while now. It's simple to use: just build with
LWS_WITH_HTTP_PROXY and create a mount where the origin gives the onward
connection details. Unix sockets can also be used as the onward
connection.
This patch extends the support to be able to also do the same for
inbound h2 or h1 ws upgrades to an h1 ws onward connection as well.
This allows you to offer completely different services in a
common URL space, including ones that connect back by ws / wss.
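A hedged sketch of such a proxy mount, assuming a build with
LWS_WITH_HTTP_PROXY (the exact origin syntax, especially for unix socket
onward connections, is documented with your lws version):

#include <libwebsockets.h>

/*
 * Proxy everything under /service to an onward h1 connection on
 * localhost:8090; with ws proxying, upgrades arriving on this mount are
 * carried onward as h1 ws too.
 */
static const struct lws_http_mount proxy_mount = {
	.mountpoint		= "/service",
	.mountpoint_len		= 8,
	.origin			= "localhost:8090",
	.origin_protocol	= LWSMPRO_HTTP,	/* onward leg is plain h1 */
};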
With SMP, as soon as we add the new sockfd to the fds table, in the case
we load-balanced the fd onto a different pt, service on it becomes live
immediately and concurrently. This can lead to the
unexpected situation that while we think we're still initing the
new wsi from our thread, it can have lived out its whole life
concurrently from another service thread.
Add a volatile flag to inform the owning pt that if it wants to
service the wsi during this init window, it must wait and retry
next time around the event loop.
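A hedged sketch of the pattern, with illustrative names rather than the
actual lws wsi / pt members:

/* illustrative fragment of the connection struct */
struct my_wsi {
	volatile char	still_initing_from_accept_thread;
	/* ... */
};

/*
 * In the owning pt's service path: if the accepting thread has not yet
 * finished initialising this wsi, leave it alone for now and pick it up
 * on the next trip around the event loop.
 */
static int
maybe_service_wsi(struct my_wsi *wsi)
{
	if (wsi->still_initing_from_accept_thread)
		return 0;	/* wait and retry next time around */

	/* ... normal service of the wsi ... */

	return 1;
}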