Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service avoided it if the wait was already 0.
With sul though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is really cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending one, and call it
regardless of whether we already decided we are not going to wait in the poll.
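For illustration, the resulting wait computation looks conceptually like this
(a minimal sketch; the internal argument list of __lws_sul_service_ripe() and
the no_wait flag are assumptions here, not quoted from the source):

/*
 * service anything already ripe and get the us interval to the next
 * pending event... cheap now, so do it even if we won't wait
 */
lws_usec_t us = __lws_sul_service_ripe(&pt->pt_sul_owner, lws_now_usecs());
int timeout_ms = no_wait ? 0 : (int)(us / LWS_US_PER_MS); /* poll() wants ms */

n = poll(pt->fds, pt->fds_count, timeout_ms);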
After https://github.com/warmcat/libwebsockets/pull/1745
As it is, if time_t is 32-bit on the platform, the multiplication may
overflow, so force the arithmetic to lws_usec_t (uint64_t) even
though it happens to work here on x86_64.
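For example (illustrative only; LWS_US_PER_SEC is the 1000000 constant
defined by lws):

time_t t = time(NULL);
lws_usec_t bad  = t * 1000000;                    /* overflows if time_t is 32-bit */
lws_usec_t good = (lws_usec_t)t * LWS_US_PER_SEC; /* widen before multiplying */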
Add a minimal example aimed at testing the wsi hrtimer stability
consistently across platforms.
Add hrtimer dump code, disabled by default (it is too expensive and too
specific to internal testing to leave in even for debug mode, even if
nothing is printed). If you hack it enabled, it dumps the sul
list for the pt and asserts if the list is disordered.
Generic lws_system IPv4 DHCP client
- netif and route control via lib/plat apis
- linux plat pieces implemented
- Uses raw ip socket for UDP broadcast and rx
- security-aware
- usual stuff plus up to 4 x dns server
If it's enabled for build, it holds the system
state at DHCP until at least one registered interface
has acquired a set of IP / mask / router / DNS server addresses.
It uses PF_PACKET, which is Linux-only atm, but those
areas are isolated into plat code.
TODOs
- lease timing and reacquire
- plat pieces for other than Linux
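If you need to know when the system has got past DHCP, the lws_system state
notifier pattern can be used; a minimal sketch (LWS_SYSTATE_OPERATIONAL and
info.register_notifier_list are assumed here from the lws_system pieces, check
your lws version):

static lws_state_notify_link_t nl;

static int
app_system_state_cb(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
                    int current, int target)
{
    /* only interested in the moment we reach OPERATIONAL */
    if (current != LWS_SYSTATE_OPERATIONAL ||
        target != LWS_SYSTATE_OPERATIONAL)
        return 0;

    /* DHCP has completed on at least one interface */
    lwsl_user("system reached OPERATIONAL\n");

    return 0;
}

/* at context creation time */
lws_state_notify_link_t *na[] = { &nl, NULL };
nl.name = "app";
nl.notify_cb = app_system_state_cb;
info.register_notifier_list = na;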
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
Clean out the periodic role callback and replace the last two role users
with a discrete lws_sul for each pt.
There's no longer any reason to come out of sleep for periodic service,
which has been eliminated by lws_sul.
With event libs, there is no opportunity to do it anyway since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users are relying on the old poll() service loop as
glue that's difficult to replace. So for now, help that happen by
accepting a timeout_ms of -1 as meaning: sample the poll and service
whatever is there without any wait.
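So external glue can keep a loop like this (minimal sketch):

while (!interrupted) {
    /* -1: sample the poll and service what's there, without waiting */
    if (lws_service(context, -1) < 0)
        break;
    /* ...other glue work the loop exists to do... */
}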
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very detailed
information on every read and write, and allowing the user code to provide
a callback to process the events.
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. If
you are opening client connections but need to carefully
manage latency, another connection blocking the loop while it does its
name resolution becomes a big problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's really integrated with the lws event loop, it does not spawn
threads or use the libc resolver, and of course no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the first
completes are attached to the first request, rather than spawning
duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
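Usage is transparent: with -DLWS_WITH_SYS_ASYNC_DNS=1 at cmake, normal client
connects get their resolution done on the event loop; a minimal sketch (the
protocol name is hypothetical):

struct lws_client_connect_info i;

memset(&i, 0, sizeof i);
i.context = context;
i.address = "warmcat.com";   /* resolved asynchronously, not via libc */
i.port    = 443;
i.path    = "/";
i.host    = i.address;
i.origin  = i.address;
i.ssl_connection = LCCSCF_USE_SSL;
i.protocol = "my-protocol";  /* hypothetical */

if (!lws_client_connect_via_info(&i))
    lwsl_err("client connect failed\n");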
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out: poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
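The public api is lws_sul_schedule(); a minimal usage sketch (the struct and
callback names here are our own):

struct my_thing {
    lws_sul_t           sul;
    struct lws_context *context;
};

static void
my_sul_cb(lws_sul_t *sul)
{
    struct my_thing *mt = lws_container_of(sul, struct my_thing, sul);

    lwsl_notice("sul fired\n");
    /* reschedule ourselves 500ms (given in us) from now */
    lws_sul_schedule(mt->context, 0, &mt->sul, my_sul_cb, 500 * LWS_US_PER_MS);
}

/* initial arming, eg, after context creation */
lws_sul_schedule(mt->context, 0, &mt->sul, my_sul_cb, 500 * LWS_US_PER_MS);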
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
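The shape of the type is conceptually like this (a sketch, not a verbatim
quote of the header):

typedef struct lws_sul {
    lws_dll2_t  list;   /* links into the pt's single sorted us list */
    lws_usec_t  us;     /* absolute us time at which the event is ripe */
    sul_cb_t    cb;     /* callback to make when it ripens */
} lws_sul_t;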
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
The logic in the loops for insertion and deletion from the mini
fd-to-wsi lookup table mode (forced when the pt is set to fewer
max fds than ulimit allows) was not quite right.
It showed up as a hard-to-reproduce problem with the ws client
spam test that uses the mini mode, on travis. This should
fix the root cause.
An lws context usually contains a processwide fd -> wsi lookup table.
This allows any possible fd returned by a *nix type OS to be immediately
converted to a wsi just by indexing an array of struct lws * the size of
the highest possible fd, as found by ulimit -n or similar.
This works modestly for Linux-type systems, where the default ulimit -n for
a process is 1024: it means a 4KB or 8KB lookup table for 32-bit or
64-bit systems.
However, in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this represents increasing waste. It's
made much worse if the system has a much larger default ulimit -n, eg 1M:
the table then occupies 4MB or 8MB, of which you will only use one entry.
Even so, because lws can't be sure the OS won't return a socket fd at any
number up to (ulimit -n - 1), it has to allocate the whole lookup table
at the moment.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it is left at the default 0, then
everything is as it was before this patch. However, if lws finds that
(info->fd_limit_per_thread * actual_number_of_service_threads), where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, it switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups then happen by
iterating the table and comparing, rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However in the case where you know lws will only have a very few wsi
maximum, this method can very usefully trade off speed to be able to
avoid the allocation sized by ulimit -n.
The minimal client examples that can make use of this are also modified
by this patch to use the smaller context allocations.
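For example, a client-only context that will never need more than a few wsi
can do (sketch):

struct lws_context_creation_info info;

memset(&info, 0, sizeof info);
info.port = CONTEXT_PORT_NO_LISTEN;
/* 3 slots instead of an array sized by ulimit -n */
info.fd_limit_per_thread = 1 + 1 + 1;

context = lws_create_context(&info);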
If you're providing a unix socket service that will be proxied / served by another
process on the same machine, the permissions on the listening unix socket
have to be managed so only something running under the server credentials
can open it.
Up until now if you wanted to drop privs, a numeric uid and gid had to be
given in info to control post-init permissions... this adds info.username
and info.groupname where you can do the same using user and group names.
The internal plat helper lws_plat_drop_app_privileges() is updated to use the
context directly instead of info in both the ways it can be called, and to be
able to return fatal errors.
Any failure to look up the name for a uid or gid (other than 0 or -1), or to
look up the uid or gid for a given username or groupname, gets an error
message and a fatal exit.
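For example (the names are illustrative):

struct lws_context_creation_info info;

memset(&info, 0, sizeof info);
info.port = 443;
/* drop privileges after init, by name instead of numeric uid / gid */
info.username  = "www-data";
info.groupname = "www-data";

context = lws_create_context(&info);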
The retry stuff for bind failures is actually aimed at scenarios where the
interface either doesn't exist yet, or is not yet configured enough (ie, has
no IP yet) to be bindable.
This patch treats EADDRINUSE as fatal at vhost init.
Add a generic http compression layer enabled at cmake with LWS_WITH_HTTP_STREAM_COMPRESSION.
This is wholly a feature of the HTTP role (used by h1 and h2 roles) and doesn't exist
outside that context.
Currently provides 'deflate' and 'br' compression methods for server side only.
'br' also requires -DLWS_WITH_HTTP_BROTLI=1 at cmake, plus the brotli library
(probably already available in your distro) and its dev package.
Other compression methods can be added nicely using an ops struct.
The built-in file serving stuff will use this if the client says it can handle it, and the
mimetype of the file either starts with "text/" (html and css etc) or is the mimetype of
Javascript.
zlib allocates quite a bit while in use, it seems to be around 256KiB per stream. So this
is only useful on relatively strong servers with lots of memory. However for some usecases
where you are serving a lot of css and js assets, it's a nice help.
The patch performs special treatment for http/1.1 pipelining: since the compression is
performed on the fly, the compressed content-length is not known until the end. So for h1
only, chunked transfer-encoding is automatically added so pipelining can continue on the
connection.
For h2 the chunking is neither supported nor required, so it "just works".
User code can also request to add a compression transform before the reply headers are
sent, using the new api
LWS_VISIBLE int
lws_http_compression_apply(struct lws *wsi, const char *name,
                           unsigned char **p, unsigned char *end, char decomp);
... this allows transparent compression of dynamically generated HTTP. The requested
compression (eg, "deflate") is only applied if the client headers indicated it was
supported, otherwise it's a NOP.
name may be NULL, in which case the first compression method in the internal table at
stream.c that is mentioned as acceptable by the client will be used.
NOTE: the compression translation, same as h2 support, relies on the user code using
LWS_WRITE_HTTP and then LWS_WRITE_HTTP_FINAL on the last part written. The internal
lws fileserving code already does this.
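A sketch of applying it to dynamically generated http, following the usual
lws dynamic http header pattern (buffer sizes illustrative):

uint8_t buf[LWS_PRE + 1024], *start = &buf[LWS_PRE], *p = start,
        *end = &buf[sizeof(buf) - 1];

if (lws_add_http_common_headers(wsi, HTTP_STATUS_OK, "text/html",
                                LWS_ILLEGAL_HTTP_CONTENT_LEN, &p, end))
    return 1;
/* NULL: use the first method the client said it accepts; NOP if none */
lws_http_compression_apply(wsi, NULL, &p, end, 0);
if (lws_finalize_write_http_header(wsi, start, &p, end))
    return 1;
/* ...then emit the body with LWS_WRITE_HTTP / LWS_WRITE_HTTP_FINAL... */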
Various kinds of input stashing were replaced with a single buflist before
v3.0... this patch replaces the partial send arrangements with its own buflist
in the same way.
Buflists as the name says are growable lists of allocations in a linked-list
that take care of book-keeping what's added and removed (even if what is
removed is less than the current buffer on the list).
The immediate result is that we no longer have to freak out if we had a partial
send buffered and new output is coming... we can just pile it on the end of the
buflist and keep draining the front of it.
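The public buflist helpers show the pattern (minimal sketch):

struct lws_buflist *bl = NULL;
uint8_t *seg;
size_t len;

/* pile new output on the end (the segments are copied in) */
lws_buflist_append_segment(&bl, (uint8_t *)"hello", 5);
lws_buflist_append_segment(&bl, (uint8_t *)" world", 6);

/* drain from the front; consuming less than a segment is also allowed */
while ((len = lws_buflist_next_segment_len(&bl, &seg)))
    lws_buflist_use_segment(&bl, len); /* ...after writing seg / len out... */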
Likewise we no longer need to be rabid about reporting multiple attempts to
send stuff without going back to the event loop; although doing that
introduces inefficiencies, we don't have to term it "illegal" any more.
Since buflists have proven reliable on the input side, and the logic for dealing
with truncated "non-network events" was already there, this internal-only change
should be relatively self-contained.
1) This moves the service tid detection stuff from context to pt.
2) If LWS_MAX_SMP > 1, a default pthread tid detection callback is provided
on the dummy callback. Callback handlers that call through to the dummy
handler will inherit this. It provides an int truncation of the pthread
tid.
3) If there have been any service calls on the service threads, the pts now
know the low sizeof(int) bytes of their tid. When you ask for a client
connection to be created, it looks through the pts to see if the calling
thread is a pt service thread. If so, the new client is set to use the
same pt as the caller.
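The default is equivalent to handling LWS_CALLBACK_GET_THREAD_ID in your own
callback, roughly like this (sketch; the cast detail may vary by platform
since pthread_t is opaque):

#include <pthread.h>

static int
callback_myproto(struct lws *wsi, enum lws_callback_reasons reason,
                 void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_GET_THREAD_ID:
        /* int truncation of the pthread tid, as the dummy handler does */
        return (int)(lws_intptr_t)pthread_self();
    default:
        break;
    }

    return lws_callback_http_dummy(wsi, reason, user, in, len);
}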