Add a generic table-based backoff scheme and a helper to track the
try count and calculate the next delay in ms.
Allow lws_sequencer_t to be given one of these at creation time...
since the number of creation args is getting to be a bit much,
convert creation to take an info struct at the same time.
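As a rough illustration of the idea (a minimal sketch only: the
lws_retry_bo_t field names and the lws_retry_get_delay_ms() signature
here are taken from how the lws public header later settled, so treat
the exact names as assumptions):

    #include <libwebsockets.h>

    /* delay in ms before the 1st, 2nd, ... retry; the last entry is
     * reused once the table runs out */
    static const uint32_t backoff_ms[] = { 1000, 2000, 3000, 4000, 5000 };

    static const lws_retry_bo_t retry = {
        .retry_ms_table       = backoff_ms,
        .retry_ms_table_count = LWS_ARRAY_SIZE(backoff_ms),
        .conceal_count        = 5, /* retries before reporting failure */
    };

    /* given a previously created struct lws_context *context...
     * the helper tracks the try count in a uint16_t the caller owns
     * and returns the next delay in ms; conceal is set nonzero while
     * the failure should still be concealed and retried */
    uint16_t try_count = 0;
    char conceal;
    unsigned int delay_ms =
        lws_retry_get_delay_ms(context, &retry, &try_count, &conceal);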
The logic in the loops for insertion into and deletion from the mini
fd-lookup table (the pt forced to fewer than the ulimit max fds) was
not quite right.
It showed up as a hard-to-reproduce problem on Travis in the ws client
spam test, which uses the mini mode. This should fix the root cause.
Otherwise event loop interrupts often go unreported. Add a flag to the
per-thread context, set it in the signal function, then check and reset
it in the service method.
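The pattern is roughly (an illustrative sketch only; the struct and
function names here are hypothetical, not the actual lws internals):

    /* hypothetical per-thread context */
    struct per_thread {
        volatile int interrupted;   /* set from the signal path */
    };

    /* signal function: only records that an interrupt happened */
    static void pt_signal(struct per_thread *pt)
    {
        pt->interrupted = 1;
    }

    /* service method: notice and consume the flag */
    static int pt_service(struct per_thread *pt)
    {
        if (pt->interrupted) {
            pt->interrupted = 0;    /* reset for the next interrupt */
            return 1;               /* report the event loop interrupt */
        }

        /* ... normal service ... */

        return 0;
    }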
An lws context usually contains a process-wide fd -> wsi lookup table.
This allows any possible fd returned by a *nix type OS to be immediately
converted to a wsi just by indexing an array of struct lws * the size of
the highest possible fd, as found by ulimit -n or similar.
This works modestly for Linux-type systems, where the default ulimit -n
for a process is 1024: it means a 4KB or 8KB lookup table for 32-bit or
64-bit systems respectively.
However, in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this represents a lot of waste. It's
made much worse if the system has a much larger default ulimit -n, eg
1M: then the table occupies 4MB or 8MB, of which you will only ever use
one entry.
Even so, because lws can't be sure the OS won't return a socket fd at any
number up to (ulimit -n - 1), it has to allocate the whole lookup table
at the moment.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it leaves it at the default 0, then
everything is as it was before this patch. However, if it finds that
(info->fd_limit_per_thread * actual_number_of_service_threads), where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, lws switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups then happen by
iterating the table and comparing, rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However, in the case where you know lws will only ever have a very few
wsi, this method can very usefully trade off lookup speed to avoid the
allocation sized by ulimit -n.
The minimal examples for clients that can make use of this are also
modified by this patch to use the smaller context allocations, along
the lines sketched below.
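For an application that only ever makes a single outgoing client
connection, the context creation ends up looking something like this
(a minimal sketch in the style of the lws minimal client examples; how
many slots a given application needs is an assumption it has to make
for itself):

    struct lws_context_creation_info info;
    struct lws_context *context;

    memset(&info, 0, sizeof info);
    info.port = CONTEXT_PORT_NO_LISTEN;  /* client only, no listening */
    info.protocols = protocols;          /* the app's protocols array */

    /*
     * Instead of a lookup table sized by ulimit -n, only allocate
     * enough wsi slots for what will actually be used: here, one
     * client wsi plus a couple of internal fds per service thread.
     */
    info.fd_limit_per_thread = 1 + 1 + 1;

    context = lws_create_context(&info);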
Generic sessions has been overdue for some love to align it with the
progress in the rest of lws:
1) Strict Content Security Policy
2) http2 compatibility
3) fixes and additions for use in a separate process via unix domain socket
4) work on ws and http proxying in lws
5) add minimal example
https://github.com/warmcat/libwebsockets/issues/1550
Rx flow control needs to handle the situation where it is draining from
a previous rx flow control period, and the user code reasserts rx flow
control partway through that.
The accounting for the used rx then boils down to only trimming the
rxflow buflist we were "replaying", by however much of it we managed to
deliver this time before rx flow control was asserted again.
"Normal" rx consumption is wrong in this case, since we accounted for
it entirely in the rxflow cache buflist.
The patch recognizes this situation, does the accounting in the cache
buflist, and then lies to the caller that there was no rx consumption
to be accounted for at its level.
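Schematically the accounting becomes (an illustrative sketch of the
logic described above only; the names are hypothetical and this is not
the actual lws rxflow code):

    /*
     * 'used' bytes were delivered to user code while replaying the
     * cached rxflow buflist, then flow control was reasserted.
     */
    if (draining_from_rxflow_cache) {
        /* trim only what we managed to replay from the cache buflist */
        rxflow_cache_consume(cache, used);

        /*
         * tell the caller zero "normal" rx was consumed: it was
         * already accounted for entirely in the cache buflist
         */
        return 0;
    }

    /* otherwise, account for the rx normally */
    return used;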
This is aimed at allowing a stride to optionally be
given for the parameter name array... this will allow
use of lws_struct metadata as the parameter name
array.
Also introduce the option to put all allocations in
an lwsac instead of via lws_mallocs.
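The stride idea looks roughly like this (an illustrative sketch; the
structs and the helper are hypothetical, not the actual lws_spa /
lws_struct API):

    #include <stddef.h>

    /* lws_struct-style metadata: the member name lives inside a larger
     * per-member struct instead of a plain array of strings */
    struct member_meta {
        const char *name;
        int         type;
    };

    static const struct member_meta meta[] = {
        { "username", 0 },
        { "password", 0 },
    };

    /*
     * With a stride, the "parameter name array" can be walked straight
     * through the metadata: name i lives at (base + i * stride).  A
     * stride of sizeof(const char *) degenerates to a plain name array.
     */
    static const char *
    param_name(const char * const *base, size_t stride, size_t i)
    {
        return *(const char * const *)((const char *)base + (i * stride));
    }

    /* param_name(&meta[0].name, sizeof(struct member_meta), 1)
     * returns "password" */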
If you're providing a unix socket service that will be proxied / served
by another process on the same machine, the permissions on the listening
unix socket have to be managed so that only something running under the
server credentials can open it.
Up until now if you wanted to drop privs, a numeric uid and gid had to be
given in info to control post-init permissions... this adds info.username
and info.groupname where you can do the same using user and group names.
The internal plat helper lws_plat_drop_app_privileges() is updated to directly use
context instead of info both ways it can be called, and to be able to return fatal
errors.
All failures to look up names for a non-0 / -1 uid or gid, or to look
up the uid or gid from a given username or groupname, get an error
message and a fatal exit.
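Setting it up looks something like this (a minimal sketch; the account
names here are made up, and how the names get resolved at init time
depends on the platform support in your build):

    struct lws_context_creation_info info;
    struct lws_context *context;

    memset(&info, 0, sizeof info);
    info.port = 443;
    info.protocols = protocols;

    /* previously, post-init credentials could only be given numerically */
    /* info.uid = 99; info.gid = 99; */

    /* now the same can be expressed by name */
    info.username = "www-data";
    info.groupname = "www-data";

    context = lws_create_context(&info);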
lws has been able to proxy h2 or h1 inbound connections to an h1
onward connection for a while now. It's simple to use: just build with
LWS_WITH_HTTP_PROXY and make a mount where the origin is the onward
connection details. Unix sockets can also be used as the onward
connection.
This patch extends the support to be able to also do the same for
inbound h2 or h1 ws upgrades to an h1 ws onward connection as well.
This allows you to offer completely different services in a
common URL space, including ones that connect back by ws / wss.
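A proxying mount then looks roughly like this (a minimal sketch; the
mountpoint and the onward address are made up, and which other
lws_http_mount members you need filled depends on your setup):

    /* proxy everything under /proxied to an onward h1 server */
    static const struct lws_http_mount proxy_mount = {
        .mountpoint      = "/proxied",
        .mountpoint_len  = 8,                  /* strlen("/proxied") */
        .origin          = "localhost:7682/",  /* onward connection details */
        .origin_protocol = LWSMPRO_HTTP,       /* or LWSMPRO_HTTPS */
    };

    /* hook it into the vhost via info.mounts before creating the context */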
The callback flow is a bit more disruptive than doing the iteration
directly in your function. This helps by passing a user void *, given
as an lws_dll[2]_foreach_safe() argument, through into the callback.
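Usage ends up something like this (a minimal sketch; the item struct
and list owner are hypothetical, and the lws_dll2_foreach_safe()
argument order is taken from how the lws public header later settled):

    struct item {
        lws_dll2_t list;     /* links us into an lws_dll2_owner_t */
        int        value;
    };

    static int
    sum_cb(struct lws_dll2 *d, void *user)
    {
        struct item *it = lws_container_of(d, struct item, list);

        *(int *)user += it->value;    /* user points at the accumulator */

        return 0;                     /* nonzero stops the iteration */
    }

    /* ... given an lws_dll2_owner_t owner holding struct item nodes ... */
    int total = 0;

    lws_dll2_foreach_safe(&owner, &total, sum_cb);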