Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service avoided it if the wait was already 0.
With sul though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is very cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending event, and call it
regardless of whether we already decided we are not going to wait in the poll.
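In outline, the idea is something like this (a simplified sketch with
hypothetical types, not the actual lws internals):

#include <stdint.h>

typedef struct sul {
	struct sul	*next;	/* list kept sorted by deadline */
	uint64_t	us;	/* absolute deadline in us */
	void		(*cb)(struct sul *s);
} sul_t;

/* service everything ripe at usnow; return us until the next pending
 * event, or 0 if nothing is left scheduled */
static uint64_t
sul_service_ripe(sul_t **head, uint64_t usnow)
{
	while (*head && (*head)->us <= usnow) {
		sul_t *s = *head;

		*head = s->next; /* remove first, so the cb may reschedule */
		s->cb(s);
	}

	return *head ? (*head)->us - usnow : 0;
}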
After https://github.com/warmcat/libwebsockets/pull/1745
there's no longer any reason to come out of sleep for periodic service,
which has been eliminated by lws_sul.
With event libs, there is no opportunity to do it anyway, since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users are relying on the old poll() service loop as glue that's
difficult to replace. So for now, help that happen by accepting a
timeout_ms of -1 as meaning: sample the poll and service what's there,
without any wait.
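For example, foreign glue code might drive lws like this (a minimal
sketch using the public lws_service() api):

#include <libwebsockets.h>

void
glue_loop_iteration(struct lws_context *context)
{
	/*
	 * timeout_ms of -1: sample the poll and service whatever is
	 * already pending, returning immediately instead of sleeping
	 */
	lws_service(context, -1);
}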
This adds the option to have lws do its own DNS resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. In
the case you are opening client connections but need to carefully
manage latency, another connection opening and doing a blocking
name resolution becomes a big problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on Linux, Windows, FreeRTOS
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's fully integrated with the lws event loop; it does not spawn
  threads or use the libc resolver, and of course there's no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
  on Linux, Windows apis on Windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the first
  completes go on a list on the first request, rather than spawning
  multiple queries; see the sketch after this list)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
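The piggybacking can be sketched like this (simplified, with hypothetical
struct and function names, not the actual lws internals):

#include <string.h>
#include <stdint.h>

typedef struct dns_waiter {
	struct dns_waiter	*next;
	void			(*cb)(const char *name, void *res, void *opq);
	void			*opaque;
} dns_waiter_t;

typedef struct dns_query {
	struct dns_query	*next;
	char			name[64];
	uint16_t		tid;	/* random unique tid */
	dns_waiter_t		*waiters;
} dns_query_t;

/* returns 0 if we piggybacked on an in-flight query for the same name,
 * 1 if the caller must issue a fresh UDP query */
static int
dns_track_query(dns_query_t *pending, const char *name, dns_waiter_t *w)
{
	while (pending) {
		if (!strcmp(pending->name, name)) {
			w->next = pending->waiters;
			pending->waiters = w;	/* piggyback */
			return 0;
		}
		pending = pending->next;
	}

	return 1;
}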
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
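Scheduling with the new api looks something like this (a sketch, assuming
the lws_sul_schedule() signature introduced by this change and
LWS_US_PER_SEC as the us-per-second constant):

#include <libwebsockets.h>

static struct lws_context *context;	/* set at init */
static lws_sorted_usec_list_t sul_tick;

static void
sul_tick_cb(lws_sorted_usec_list_t *sul)
{
	lwsl_notice("%s: fired\n", __func__);

	/* reschedule ourselves one second (in us) from now */
	lws_sul_schedule(context, 0, sul, sul_tick_cb, LWS_US_PER_SEC);
}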
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
Otherwise it often forgets to inform the caller about event loop
interrupts. Add a flag to the per-thread context, set it in the signal
function, then check and reset it in the service method.
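In outline (a sketch with hypothetical names, not the actual lws struct
members):

struct per_thread {
	volatile int interrupt_requested;
	/* ... */
};

/* called from the signal / cancel path, possibly from another thread */
static void
signal_fn(struct per_thread *pt)
{
	pt->interrupt_requested = 1;
	/* ...then wake the event wait by the platform's usual means... */
}

/* called from the service method after the event wait returns */
static int
was_interrupted(struct per_thread *pt)
{
	int n = pt->interrupt_requested;

	pt->interrupt_requested = 0;	/* check and reset */

	return n;	/* nonzero: report the interrupt to the caller */
}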
Up until now if you wanted to drop privs, a numeric uid and gid had to be
given in info to control post-init permissions... this adds info.username
and info.groupname where you can do the same using user and group names.
The internal plat helper lws_plat_drop_app_privileges() is updated to use
the context directly, instead of info, in both the ways it can be called,
and is now able to return fatal errors.
All failures to look up the name for a uid or gid other than 0 or -1, or
to look up the uid or gid for a given username or groupname, get an error
message and a fatal exit.
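Usage is just (a sketch of the relevant part of context creation; the
names used are examples):

#include <libwebsockets.h>
#include <string.h>

struct lws_context *
create_context(void)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof info);
	/* ... ports, protocols etc ... */

	/* drop privileges post-init using names instead of numeric ids */
	info.username = "www-data";
	info.groupname = "www-data";

	return lws_create_context(&info);
}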
The retry stuff for bind failures is actually aimed at scenarios where the
interface either doesn't exist yet, or is not yet configured enough (eg,
doesn't have an IP) to be bindable.
This patch treats EADDRINUSE as fatal at vhost init.
Add a generic http compression layer enabled at cmake with LWS_WITH_HTTP_STREAM_COMPRESSION.
This is wholly a feature of the HTTP role (used by h1 and h2 roles) and doesn't exist
outside that context.
Currently provides 'deflate' and 'br' compression methods for server side only.
'br' additionally requires -DLWS_WITH_HTTP_BROTLI=1 at cmake, plus the brotli
libraries and dev package (already available in your distro).
Other compression methods can be added nicely using an ops struct.
The built-in file serving stuff will use this if the client says it can handle
it, and the mimetype of the file either starts with "text/" (html and css etc)
or is the mimetype of JavaScript.
zlib allocates quite a bit while in use; it seems to be around 256KiB per
stream. So this is only useful on relatively strong servers with lots of
memory. However for some use cases where you are serving a lot of css and js
assets, it's a nice help.
The patch performs special treatment for http/1.1 pipelining: since the
compression is performed on the fly, the compressed content-length is not
known until the end. So for h1 only, chunked transfer-encoding is
automatically added so pipelining can continue on the connection.
For h2 the chunking is neither supported nor required, so it "just works".
User code can also request to add a compression transform before the reply
headers are sent, using the new api
LWS_VISIBLE int
lws_http_compression_apply(struct lws *wsi, const char *name,
unsigned char **p, unsigned char *end, char decomp);
... this allows transparent compression of dynamically generated HTTP. The requested
compression (eg, "deflate") is only applied if the client headers indicated it was
supported, otherwise it's a NOP.
Name may be NULL, in which case the first compression method in the internal
table at stream.c that the client mentioned as acceptable will be used.
NOTE: the compression translation, same as h2 support, relies on the user code using
LWS_WRITE_HTTP and then LWS_WRITE_HTTP_FINAL on the last part written. The internal
lws fileserving code already does this.
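For dynamically generated content, it might be used like this (a sketch
based on the declaration above and the usual lws dynamic-http pattern):

#include <libwebsockets.h>

static int
start_reply(struct lws *wsi)
{
	unsigned char buf[LWS_PRE + 2048], *start = &buf[LWS_PRE],
		      *p = start, *end = &buf[sizeof(buf) - 1];

	if (lws_add_http_common_headers(wsi, HTTP_STATUS_OK, "text/html",
					LWS_ILLEGAL_HTTP_CONTENT_LEN,
					&p, end))
		return 1;

	/* request deflate; it's a NOP if the client didn't offer it */
	lws_http_compression_apply(wsi, "deflate", &p, end, 0);

	return lws_finalize_write_http_header(wsi, start, &p, end);
}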
Various kinds of input stashing were replaced with a single buflist before
v3.0... this patch replaces the partial send arrangements with its own buflist
in the same way.
Buflists, as the name says, are growable lists of allocations in a
linked-list, which take care of book-keeping what's added and removed (even
if what is removed is less than the current buffer at the head of the list).
The immediate result is that we no longer have to freak out if we had a
partial send buffered and new output is coming... we can just pile it on the
end of the buflist and keep draining the front of it.
Likewise we no longer need to be rabid about reporting multiple attempts to
send stuff without going back to the event loop; although not doing that
introduces inefficiencies, we don't have to term it "illegal" any more.
Since buflists have proven reliable on the input side, and the logic for
dealing with truncated "non-network events" was already there, this
internal-only change should be relatively self-contained.
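The output-side pattern is roughly this (a sketch using the public buflist
helpers; the actual partial-send list is internal to lws):

#include <libwebsockets.h>
#include <unistd.h>

/* stash what could not be sent; new output always goes on the end so
 * ordering is preserved while the front of the list is drained */
static int
send_with_stash(struct lws_buflist **bl, int fd, const uint8_t *buf,
		size_t len)
{
	ssize_t n;

	/* already something stashed? just pile on the end */
	if (lws_buflist_next_segment_len(bl, NULL))
		return lws_buflist_append_segment(bl, buf, len) < 0;

	n = write(fd, buf, len);
	if (n < 0)
		n = 0;

	if ((size_t)n < len)	/* partial: stash the remainder */
		return lws_buflist_append_segment(bl, buf + n,
						  len - (size_t)n) < 0;

	return 0;
}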
WSASetEvent(pt->events) just makes WSAWaitForMultipleEvents() return; it
will not set LWS_POLLOUT in pfd->revents and thus has, IMHO, no effect. If
WSAWaitForMultipleEvents() is going to set LWS_POLLOUT, it will also signal
the event automatically.
1) This moves the service tid detection stuff from context to pt.
2) If LWS_MAX_SMP > 1, a default pthread tid detection callback is provided
on the dummy callback. Callback handlers that call through to the dummy
handler will inherit this. It provides an int truncation of the pthread
tid.
3) If there have been any service calls on the service threads, the pts now
know the low sizeof(int) bytes of their tid. When you ask for a client
connection to be created, it looks through the pts to see if the calling
thread is a pt service thread. If so, the new client is set to use the
same pt as the caller.
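The default detection on the dummy callback amounts to something like this
(a sketch; it assumes pthread_t converts to an integer type, and the exact
cast in lws may differ):

#include <pthread.h>
#include <stdint.h>

/* what the dummy callback effectively returns for
 * LWS_CALLBACK_GET_THREAD_ID when LWS_MAX_SMP > 1 */
static int
default_tid(void)
{
	/* an int truncation of the pthread tid */
	return (int)(intptr_t)pthread_self();
}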
When a large deflate frame has been received, WSAEnumNetworkEvents() will
indicate the socket is ready to read. And because the frame is compressed,
it may not be consumed entirely (not all bytes ready to receive have been
received). Since WSAEnumNetworkEvents() is edge-triggered and the socket
read buffer was never drained, it will never indicate the socket is ready
to read again. What is needed here is level-triggered behaviour, so add an
additional recv() with an empty buffer to reset the edge status.
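The reset amounts to something like this (a sketch):

#include <winsock2.h>

/* a zero-length recv() consumes nothing, but resets the edge status so
 * the FD_READ event can signal again while unread data remains */
static void
rearm_fd_read(SOCKET s)
{
	char c;

	recv(s, &c, 0, 0);
}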