From the eventfd man page:
Applications can use an eventfd file descriptor instead of a pipe (see
pipe(2)) in all cases where a pipe is used simply to signal events.
The kernel overhead of an eventfd file descriptor is much lower than
that of a pipe, and only one file descriptor is required
(versus the two required for a pipe).
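As a minimal sketch of the pattern the man page describes (plain Linux
API, nothing lws-specific), one eventfd replaces both ends of a
signalling pipe:

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int main(void)
    {
            uint64_t one = 1, n;
            int efd = eventfd(0, 0); /* one fd serves both roles */

            if (efd < 0)
                    return 1;
            /* signaller: add 1 to the counter, waking any poll()er */
            write(efd, &one, sizeof one);
            /* waiter: poll() shows POLLIN; read() drains the counter */
            read(efd, &n, sizeof n);
            close(efd);

            return 0;
    }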
Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service code avoided it if the wait was
already 0.
With sul though, the internal "check" function also services ripe events
and removes them, and finding the interval to the next pending one is
very cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear
it's not just about returning the interval to the next pending event.
And call it regardless of whether we already decided we are not going to
wait in the poll.
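A rough sketch of the poll-loop shape this produces (the
__lws_sul_service_ripe() signature and the pt_sul_owner member name here
reflect my reading of the lws internals of this era and may differ):

    /* always service ripe scheduled events and take the us interval
     * to the next pending one, even if we already chose a 0 wait
     */
    lws_usec_t us = __lws_sul_service_ripe(&pt->pt_sul_owner,
                                           lws_now_usecs());

    if (us && timeout_ms > 0 &&
        (us / LWS_US_PER_MS) < (lws_usec_t)timeout_ms)
            timeout_ms = (int)(us / LWS_US_PER_MS);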
After https://github.com/warmcat/libwebsockets/pull/1745
there's no longer any reason to come out of sleep for periodic service,
since that has been eliminated by lws_sul.
With event libs there is no opportunity to do it anyway, since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users are relying on the old poll() service loop as
glue that's difficult to replace. So for now, help that happen by
accepting a timeout_ms of -1 as meaning: sample the poll and service
whatever is there, without any wait.
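Hedged as one way such glue might look, the legacy outer loop does its
own waiting elsewhere and just asks lws to service whatever is already
pending (context and interrupted are app-defined here):

    /* sample poll() and service anything pending, without blocking */
    while (!interrupted)
            if (lws_service(context, -1))
                    break;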
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of timed poll wakes is thrown out: poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule your own callback from the event
loop with us resolution (usually ms is all the platform can actually do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
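As a minimal sketch of the new api (assuming the lws_sul_schedule()
signature this series introduces, taking the context, service thread
index, sul, callback and us delay):

    #include <libwebsockets.h>

    static lws_sorted_usec_list_t sul;

    static void
    sul_cb(lws_sorted_usec_list_t *s)
    {
            lwsl_notice("%s: sul ripened\n", __func__);
    }

    /* ...after lws_create_context() succeeds: ask for sul_cb on
     * service thread 0, 500ms (expressed in us) from now
     */
    lws_sul_schedule(context, 0, &sul, sul_cb, 500 * LWS_US_PER_MS);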
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle the vh-protocol timer if LWS_MAX_SMP == 1.
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
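The binding type looks roughly like this in the public headers (a
sketch; member order and any extra members may differ):

    typedef void (*sul_cb_t)(struct lws_sorted_usec_list *sul);

    typedef struct lws_sorted_usec_list {
            struct lws_dll2 list; /* pt's sorted list of pending suls */
            lws_usec_t      us;   /* absolute us time the event ripens */
            sul_cb_t        cb;   /* callback that services the event */
    } lws_sorted_usec_list_t;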
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
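A hypothetical sketch of such a helper (the name is illustrative, not
the actual lws symbol):

    /* one place to reduce a us-resolution wait to the coarser
     * granularity the platform poll / event lib can express
     */
    static int
    us_to_platform_wait_ms(lws_usec_t us)
    {
            if (!us)
                    return 0; /* something is already ripe */

            /* round up so we cannot wake before anything ripened */
            return (int)((us + LWS_US_PER_MS - 1) / LWS_US_PER_MS);
    }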
The logic in the loops for insertion and deletion in the "mini" fd
lookup table mode (where the pt is forced to fewer than the ulimit max
fds) was not quite right.
It showed up as a hard-to-reproduce problem with the ws client spam test
that uses the mini mode, on travis. This should fix the root cause.
Otherwise it often forgets to inform about event loop interrupts. Add a flag to the per-thread context, set it in the signal function, then check / reset it in the service method.
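A sketch of the flag pattern (member and function names are
illustrative, not the actual lws ones):

    struct per_thread {
            volatile int event_loop_interrupted;
    };

    static void
    signal_fn(struct per_thread *pt)
    {
            /* e.g. reached from lws_cancel_service() */
            pt->event_loop_interrupted = 1;
    }

    static int
    service_fn(struct per_thread *pt)
    {
            if (pt->event_loop_interrupted) {
                    pt->event_loop_interrupted = 0; /* consume it */
                    return 1; /* report it instead of forgetting it */
            }
            /* ... normal service ... */
            return 0;
    }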
An lws context usually contains a process-wide fd -> wsi lookup table.
This allows any possible fd returned by a *nix type OS to be immediately
converted to a wsi just by indexing an array of struct lws * sized to
the highest possible fd, as found by ulimit -n or similar.
This works modestly for Linux type systems, where the default ulimit -n
for a process is 1024: it means a 4KB or 8KB lookup table for 32-bit or
64-bit systems respectively.
However in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this represents increasing waste. It's
made much worse if the system has a much larger default ulimit -n, eg
1M: then the table occupies 4MB or 8MB, of which you will only ever use
one entry.
Even so, because lws can't be sure the OS won't return a socket fd at any
number up to (ulimit -n - 1), it currently has to allocate the whole
lookup table.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it leaves it at the default 0, then
everything is as it was before this patch. However if it finds that
(info->fd_limit_per_thread * actual_number_of_service_threads), where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, lws switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups then happen by
iterating the table and comparing, rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However in the case where you know lws will only have a very few wsi
maximum, this method can very usefully trade off speed to be able to
avoid the allocation sized by ulimit -n.
The minimal client examples that can make use of this are also modified
by this patch to use the smaller context allocations, as sketched below.
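A sketch of what those examples do at context creation time, using the
info member this patch introduces:

    struct lws_context_creation_info info;
    struct lws_context *context;

    memset(&info, 0, sizeof info);
    info.port = CONTEXT_PORT_NO_LISTEN; /* client-only context */
    /* we know we need at most 1 client wsi plus a couple of
     * internal fds, so size the lookup table for that instead
     * of for ulimit -n
     */
    info.fd_limit_per_thread = 1 + 1 + 1;

    context = lws_create_context(&info);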
If you're providing a unix socket service that will be proxied / served
by another process on the same machine, the permissions on the listening
unix socket fd have to be managed so that only something running under
the server's credentials can open it.
Up until now, if you wanted to drop privs, a numeric uid and gid had to
be given in info to control post-init permissions... this adds
info.username and info.groupname, where you can do the same using user
and group names.
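For example (assuming the new info members take the names given above,
and that the named user and group exist on the system):

    struct lws_context_creation_info info;

    memset(&info, 0, sizeof info);
    /* after init, lws drops privileges to this user / group,
     * looked up by name instead of by numeric uid / gid
     */
    info.username = "www-data";
    info.groupname = "www-data";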
The internal plat helper lws_plat_drop_app_privileges() is updated to use
the context directly instead of info in both the ways it can be called,
and to be able to return fatal errors.
Any failure to look up names from a uid or gid that isn't 0 or -1, or to
look up the uid or gid from a given username or groupname, gets an error
message and a fatal exit.