- Add low-level system message distribution framework
- Add support for local Secure Streams to participate using the _lws_smd streamtype
- Add api test and minimal example
- Add SS proxy support for _lws_smd
See minimal-secure-streams-smd README.md
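Below is a minimal sketch of direct (non-SS) participation for orientation
only; the lws_smd_register() / lws_smd_msg_printf() signatures and the
LWSSMDCL_SYSTEM_STATE class shown are assumptions here, see the api test and
minimal example for the authoritative usage.

```c
#include <libwebsockets.h>

/* receives any message whose class is in the registration mask */
static int
my_smd_cb(void *opaque, lws_smd_class_t _class, lws_usec_t timestamp,
	  void *buf, size_t len)
{
	lwsl_notice("%s: class 0x%x: %.*s\n", __func__, (unsigned int)_class,
		    (int)len, (const char *)buf);

	return 0;
}

static int
smd_example(struct lws_context *cx)
{
	/* join the distribution as a local participant (signature assumed) */
	if (!lws_smd_register(cx, NULL, 0, LWSSMDCL_SYSTEM_STATE, my_smd_cb))
		return 1;

	/* emit a JSON message to everyone registered for the class */
	return lws_smd_msg_printf(cx, LWSSMDCL_SYSTEM_STATE,
				  "{\"example\":\"hello\"}");
}
```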
Adapt the pt sul owner list to be an array and define two different lists:
one that acts as before and is the default for existing users, and another
that can cooperate with systemwide suspend, restricting the interval spent
suspended so that the system wakes in time for the earliest thing on this
wake-suspend sul list.
Clean up the api a bit and add lws_sul_cancel(), which only needs the sul as
its argument.
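A minimal sketch of the resulting usage, assuming the lws_sul_schedule() form
that takes the context, pt index, sul, callback and microsecond delay:

```c
#include <libwebsockets.h>

struct myobj {
	lws_sorted_usec_list_t	sul;	/* must outlive its scheduling */
	struct lws_context	*cx;
};

static void
myobj_sul_cb(lws_sorted_usec_list_t *sul)
{
	struct myobj *m = lws_container_of(sul, struct myobj, sul);

	lwsl_notice("%s: fired\n", __func__);

	/* reschedule ourselves 5s from now on pt 0 */
	lws_sul_schedule(m->cx, 0, &m->sul, myobj_sul_cb, 5 * LWS_US_PER_SEC);
}

static void
myobj_stop(struct myobj *m)
{
	/* only the sul itself is needed to cancel now */
	lws_sul_cancel(&m->sul);
}
```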
Add a flag to the client creation info indicating that this client connection
is important enough that, eg, validity checking it to detect silently dead
connections should go on the wake-suspend sul list. The flag is exposed in the
secure streams policy so it can be applied to a streamtype with
"swake_validity": true
Deprecate the old vhost timer apis that predate sul. Add a cmake flag
LWS_WITH_DEPRECATED_THINGS so users can get them back temporarily before they
are removed in v4.2.
Adapt all remaining in-tree users to explicit suls.
POSIX connect() specifies that POLLOUT will be signalled when the connect
result is available. But windows has some non-posix nonsense.
Improve the plat support to simulate the missing POLLOUT.
Add a member to the vh init struct allowing control of the overall
connection wait introduced in an earlier patch; it is set to 20s by default.
The timeout_secs member controls the connect timeout for an individual DNS
result and is reduced to 5s by default.
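A sketch of setting both waits at creation time; timeout_secs is the member
named above, while connect_timeout_secs as the name of the new overall-wait
member is an assumption here:

```c
#include <string.h>
#include <libwebsockets.h>

static void
set_connect_waits(struct lws_context_creation_info *info)
{
	memset(info, 0, sizeof(*info));

	/* overall wait across all DNS results for one connection attempt
	 * (member name assumed; 20s is the default described above) */
	info->connect_timeout_secs = 20;

	/* connect timeout for an individual DNS result, 5s default */
	info->timeout_secs = 5;
}
```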
UDP: a recv() call that was in there for unknown reasons was trashing udp
delays: pending tls rx (where data is buffered in the tls layer with no
POLLIN pending) was not properly reflected by forcing the wait to 0.
Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service avoided it if the wait was already 0.
With sul though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is really cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending one, and call it
regardless of whether we already decided we are not going to wait in the poll.
After https://github.com/warmcat/libwebsockets/pull/1745 there's no longer
any reason to come out of sleep for periodic service, which has been
eliminated by lws_sul.
With event libs, there is no opportunity to do it anyway since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users rely on the old poll() service loop as glue that's difficult
to replace. So for now, help that happen by accepting a timeout_ms of -1 as
meaning: sample the poll and service what's there without any wait.
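A sketch of such a glue loop: with a timeout_ms of -1, lws_service() just
samples the poll state, services whatever is already pending and returns
without blocking.

```c
#include <libwebsockets.h>

extern void do_other_application_work(void);	/* hypothetical app work */

static void
glue_loop(struct lws_context *cx, volatile int *interrupted)
{
	while (!*interrupted) {
		do_other_application_work();

		/* -1: do not wait in poll(), just service what is ready */
		if (lws_service(cx, -1))
			break;
	}
}
```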
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
Otherwise the service loop often fails to be informed about event loop
interrupts. Add a flag to the per-thread context, set it in the signal
function, then check / reset it in the service method.
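A generic sketch of that pattern (the names are illustrative, not the actual
lws pt internals): the signalling side sets a per-thread flag, the service
side consumes it exactly once.

```c
#include <stdatomic.h>

struct my_pt {
	atomic_int interrupted;		/* illustrative per-thread flag */
};

/* called from the signal / cancel-service path */
static void
my_pt_signal(struct my_pt *pt)
{
	atomic_store(&pt->interrupted, 1);
	/* ... then wake the wait, eg, WSASetEvent() on windows ... */
}

/* called from the service method around the wait */
static int
my_pt_check_interrupt(struct my_pt *pt)
{
	/* check and reset in one step so it is reported exactly once */
	return atomic_exchange(&pt->interrupted, 0);
}
```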
WSASetEvent(pt->events) just makes WSAWaitForMultipleEvents() return; it
will not set LWS_POLLOUT in pfd->revents and thus, IMHO, has no effect. If
WSAWaitForMultipleEvents() is going to set LWS_POLLOUT, it will also signal
the event automatically.
1) This moves the service tid detection stuff from context to pt.
2) If LWS_MAX_SMP > 1, a default pthread tid detection callback is provided
on the dummy callback. Callback handlers that call through to the dummy
handler will inherit this. It provides an int truncation of the pthread
tid.
3) If there have been any service calls on the service threads, the pts now
know the low sizeof(int) bytes of their tid. When you ask for a client
connection to be created, it looks through the pts to see if the calling
thread is a pt service thread. If so, the new client is set to use the
same pt as the caller.
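A sketch of how a user protocol interacts with this: calling through to
lws_callback_http_dummy() for unhandled reasons inherits the default tid
detection, or the handler can return its own int-truncated tid (the pthread
cast assumes a platform where pthread_t converts to an integer).

```c
#include <stdint.h>
#include <pthread.h>
#include <libwebsockets.h>

static int
my_protocol_cb(struct lws *wsi, enum lws_callback_reasons reason,
	       void *user, void *in, size_t len)
{
	switch (reason) {
#if LWS_MAX_SMP > 1
	case LWS_CALLBACK_GET_THREAD_ID:
		/* int truncation of the pthread tid, as described above */
		return (int)(intptr_t)pthread_self();
#endif
	default:
		break;
	}

	/* unhandled reasons fall through to the dummy handler, which also
	 * provides the default tid detection when LWS_MAX_SMP > 1 */
	return lws_callback_http_dummy(wsi, reason, user, in, len);
}
```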
When a large deflate frame has been received, WSAEnumNetworkEvents will
indicate the socket is ready to read. Because the frame is compressed, it may
not be consumed entirely (not all bytes ready to receive have been received);
since WSAEnumNetworkEvents is edge-triggered and the socket read buffer has
never been drained, WSAEnumNetworkEvents will never indicate the socket is
ready to read again. What is needed here is level-trigger behaviour, so add an
additional recv() with an empty buffer to reset the edge status.
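A sketch of the workaround (names are illustrative): a zero-length recv()
after servicing the socket re-arms the edge-triggered FD_READ event while
data remains buffered, so WSAEnumNetworkEvents() reports readability again.

```c
#include <winsock2.h>

/* call after servicing a readable socket that may still hold buffered rx */
static void
reset_read_edge(SOCKET s)
{
	char c;

	/* zero-length read: consumes nothing, but resets the edge status */
	recv(s, &c, 0, 0);
}
```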