Introduce a generic lws_state object with notification handlers
that may be registered in a chain.
Implement one of those in the context to manage the "system state".
Allow other pieces of lws and user code to register notification
handlers on a context list. Handlers can object to a state change, or
take over responsibility for moving it forward and retrying it, if they
know that some dependent action must succeed first.
For example, if the system time is invalid, we cannot move on to
a state where anything can do tls until that has been corrected.
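A minimal sketch of registering such a notifier follows; the names used
here (lws_state_notify_link_t, lws_state_reg_notifier,
lws_system_get_state_manager, LWS_SYSTATE_TIME_VALID) follow the scheme
this change describes but should be treated as illustrative, and
we_have_ntp_time() is a hypothetical helper.

static int
app_state_notify(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
		 int current, int target)
{
	/* object to the transition until system time is trustworthy */
	if (target == LWS_SYSTATE_TIME_VALID && !we_have_ntp_time())
		return 1;	/* veto... lws will retry the change later */

	return 0;		/* no objection, state change may proceed */
}

static lws_state_notify_link_t nl = {
	.notify_cb	= app_state_notify,
	.name		= "app"
};

/* register on the context's system state manager after creation */
lws_state_reg_notifier(lws_system_get_state_manager(context), &nl);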
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
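As a sketch, a protocol implementation that has just seen proof the
connection works in both directions (eg, a PONG or other protocol rx)
resets the validity sul with lws_validity_confirmed(); the retry-policy
member names here are assumptions for illustration.

/* thresholds assumed to live in the retry / backoff policy */
static const lws_retry_bo_t retry = {
	.secs_since_valid_ping	 = 30, /* if idle 30s, check the connection */
	.secs_since_valid_hangup = 45, /* if no proof by 45s, hang up */
};

/* ... in the protocol callback, when two-way traffic is confirmed ... */
lws_validity_confirmed(wsi);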
Clean out periodic role callback and replace the last two role users
with discrete lws_sul for each pt.
It was already correct, but add helpers to isolate and deduplicate
the processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable
by .keepalive_timeout if nonzero.
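For instance (a sketch, assuming info is the lws_context_creation_info
being prepared):

/* override the default 31s h2 network connection timeout */
info.keepalive_timeout = 95; /* secs */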
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
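Sketched usage of the two halves, where
lws_h2_client_stream_long_poll_rxonly() is the public api this change
adds and wsi is an established client h2 stream:

/* server side: let half-closed remotes become immortal long-poll streams */
info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;

/* client side: send the 0-byte DATA with END_STREAM and go immortal */
if (lws_h2_client_stream_long_poll_rxonly(wsi))
	lwsl_warn("%s: long poll transition failed\n", __func__);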
Old certs were getting near the end of their life and we switched the
server to use letsencrypt. The root and intermediate certs needed for
the mbedtls case changed accordingly.
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very detailed
information on every read and write, and allowing the user code to provide
a callback to process the events.
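A sketch of hooking the event stream from user code, built with
LWS_WITH_DETAILED_LATENCY enabled; the callback prototype and the
lws_detlat_t detail struct are assumptions here and its members are not
shown.

static int
my_detailed_latency_cb(struct lws_context *context, const lws_detlat_t *d)
{
	/* aggregate, log or stream the per-read / per-write event */
	return 0;
}

/* at context creation time */
info.detailed_latency_cb = my_detailed_latency_cb;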
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. In the case
where you are opening client connections but need to carefully manage
latency, another connection blocking the event loop while it does name
resolution becomes a big problem.
Currently it supports:
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages:
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's fully integrated with the lws event loop: it does not spawn
  threads or use the libc resolver, and does no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes are attached to a list on the first request, rather
  than spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
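Client connections use the resolver transparently once it is enabled at
cmake time with LWS_WITH_SYS_ASYNC_DNS; issuing a lookup directly might
look like the sketch below, where the prototype, callback shape and the
LWS_ADNS_RECORD_A constant are simplified assumptions for illustration.

/* hypothetical, simplified callback shape */
static void
found_cb(const char *name, const struct addrinfo *result, void *opaque)
{
	/* result follows getaddrinfo() conventions; NULL means it failed */
}

lws_async_dns_query(context, 0 /* tsi */, "warmcat.com",
		    LWS_ADNS_RECORD_A, found_cb, NULL /* wsi */, NULL);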
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
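For example (a sketch using the new api; LWS_US_PER_MS is the lws
usec-per-ms constant):

static lws_sorted_usec_list_t sul;

static void
sul_cb(lws_sorted_usec_list_t *s)
{
	lwsl_notice("%s: fired\n", __func__);
}

/* ask for the callback 250ms (expressed in us) from now, on pt 0 */
lws_sul_schedule(context, 0, &sul, sul_cb, 250 * LWS_US_PER_MS);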
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle the vh-protocol timer if LWS_MAX_SMP == 1.
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
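Conceptually the type looks like this (field names are assumptions for
illustration):

typedef struct lws_sorted_usec_list {
	struct lws_dll2	list;	/* sorted list membership */
	lws_usec_t	us;	/* when the event should fire */
	sul_cb_t	cb;	/* what to call when it does */
} lws_sorted_usec_list_t;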
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
Since the messages are queued and then read in order from the event loop
thread, it's not generally safe to pass pointers to argument structs,
since there's no guarantee the lifetime of the thing sending the message
lasts until the sequencer reads the message.
This puts pressure on the single void * argument-passed-as-value... this
patch adds a second void * argument-passed-as-value so it's more
practical to put what's needed directly in the arguments.
It's also possible to alloc the argument on the heap and have the sequencer
callback free it after it has read it.
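A sketch of passing two small values directly in the arguments (casts
via intptr_t; the lws_seq_queue_event() spelling and the
LWSSEQ_USER_BASE event base are assumptions here):

/* both values travel by value, so no lifetime problem */
lws_seq_queue_event(seq, LWSSEQ_USER_BASE,
		    (void *)(intptr_t)conn_idx,
		    (void *)(intptr_t)status_code);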
Add a generic table-based backoff scheme and a helper to track the
try count and calculate the next delay in ms.
Allow lws_sequencer_t to be given one of these at creation time...
since the number of creation args is getting a bit much, convert
creation to use an info struct at the same time.
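A sketch of the scheme (the member and helper names follow the
lws_retry pattern but should be treated as illustrative):

static const uint32_t backoff_ms[] = { 1000, 2000, 3000, 4000, 5000 };

static const lws_retry_bo_t retry = {
	.retry_ms_table		= backoff_ms,
	.retry_ms_table_count	= LWS_ARRAY_SIZE(backoff_ms),
	.conceal_count		= 20,	/* conceal this many failures */
};

uint16_t try_count = 0;

/* on each failure, get the next delay and bump the try count */
unsigned int ms = lws_retry_get_delay_ms(context, &retry, &try_count, NULL);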
Travis seems to be restricting the number of outgoing connections or
the rate of them... we have been using 10 concurrent connections and
100 in total:
[2019/08/02 09:26:22:7950] USER: callback_minimal_spam: established (try 10, est 8, closed 0, err 0)
[2019/08/02 09:26:22:8041] USER: callback_minimal_spam: established (try 10, est 9, closed 0, err 0)
[2019/08/02 09:26:23:0098] USER: callback_minimal_spam: reopening (try 11, est 10, closed 1, err 0)
[2019/08/02 09:26:23:0105] USER: callback_minimal_spam: reopening (try 12, est 10, closed 2, err 0)
[2019/08/02 09:26:23:0111] USER: callback_minimal_spam: reopening (try 13, est 10, closed 3, err 0)
[2019/08/02 09:26:23:0117] USER: callback_minimal_spam: ...
[...]
ERR: CLIENT_CONNECTION_ERROR: closed before established (try 25, est 14, closed 14, err 2)
[2019/08/02 09:26:44:6125] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 26, est 14, closed 14, err 3)
[2019/08/02 09:26:44:6129] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 27, est 14, closed 14, err 4)
[2019/08/02 09:26:44:6133] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 28, est 14, closed 14, err 5)
[2019/08/02 09:26:44:6137] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 29, est 14, closed 14, err 6)
[2019/08/02 09:26:45:6152] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 30, est 14, closed 14, err 7)
[2019/08/02 09:26:45:6163] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 31, est 14, closed 14, err 8)
[2019/08/02 09:26:45:6168] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 32, est 14, closed 14, err 9)
[2019/08/02 09:26:45:6174] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 33, est 14, closed 14, err 10)
[2019/08/02 09:26:47:0635] USER: callback_minimal_spam: established (try 34, est 14, closed 14, err 10)
Reduce to 3 concurrent / 15 connections to see if it helps travis get
over the hump.
The logic in the loops for insertion and deletion from the mini
fd-to-wsi lookup table (the mode forced when the pt max fds is set
below the ulimit -n figure) was not quite right.
It showed up as a hard-to-reproduce problem on travis with the ws
client spam test, which uses the mini table mode. This should fix the
root cause.
Latest 1.1.1c (and patched 1.1.1b on Fedora) checks the AES key for
entropy and errors out if it looks bad. Our aes-xts test key was a
by-hand pattern repeated 4 times, and OpenSSL now errors out on it.
Improve the key to a random one.
SMTP was improved to use the new abstract stuff a while ago, but it was
only implemented with the raw-socket abstract transport, and a couple
of 'api cheats' remained, passing network information for the peer
connection through the supposedly abstract apis.
This patch adds a flexible generic token array to supply
abstract transport-specific information through the abstract apis,
removing the network information from the abstract connect() op.
The SMTP minimal example is modified to use this new method to
pass the network information.
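A sketch of what the token array looks like, in the style of the
modified SMTP minimal example; the token index names and the at_tokens
member follow the scheme described here but should be treated as
illustrative.

static const lws_token_map_t smtp_transport_tokens[] = {
	{
		.u = { .value = "127.0.0.1" },
		.name_index = LTMI_PEER_V_DNS_ADDRESS,
	}, {
		.u = { .lvalue = 25 },
		.name_index = LTMI_PEER_LV_PORT,
	}, {
		/* terminator */
	}
};

/* attached to the abstract instance instead of going through connect() */
abs.at_tokens = smtp_transport_tokens;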
The abstract transport struct was opaque, but there are real uses for
overriding it in user code, so this patch also makes it part of the
public abi.
An lws context usually contains a process-wide fd -> wsi lookup table.
This allows any possible fd returned by a *nix-type OS to be immediately
converted to a wsi just by indexing an array of struct lws * sized to
the highest possible fd, as found by ulimit -n or similar.
This works tolerably for Linux-type systems, where the default
ulimit -n for a process is 1024: it means a 4KB or 8KB lookup table on
32-bit or 64-bit systems respectively.
However in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this represents increasing waste.
It's made much worse if the system has a much larger default
ulimit -n, eg 1M: then the table occupies 4MB or 8MB, of which you
will only ever use one entry.
Even so, because lws can't be sure the OS won't return a socket fd at
any number up to (ulimit -n - 1), it currently has to allocate the
whole lookup table.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it is left at the default 0, everything
is as it was before this patch. However, if lws finds that
(info->fd_limit_per_thread * actual_number_of_service_threads), where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, it switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups then happen by
iterating the table and comparing, rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However in the case where you know lws will only have a very few wsi
maximum, this method can very usefully trade off speed to be able to
avoid the allocation sized by ulimit -n.
The minimal examples for clients that can make use of this are also
modified by this patch to use the smaller context allocations.
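For instance (a sketch in the style of the modified examples):

struct lws_context_creation_info info;

memset(&info, 0, sizeof info);
/* we expect a listen-less context with a single client connection,
 * so cap the per-thread fd table at 3 slots instead of ulimit -n */
info.fd_limit_per_thread = 1 + 1 + 1;
/* ... fill in the rest as usual ... */
context = lws_create_context(&info);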
Generic sessions has been overdue for some love to align it with
the progress in the rest of lws.
1) Strict Content Security Policy
2) http2 compatibility
3) fixes and additions for use in a separate process via unix domain socket
4) work on ws and http proxying in lws
5) add minimal example
https://libwebsockets.org/pipermail/libwebsockets/2019-April/007937.html
thanks to Bruce Perens for noting it.
This doesn't change the intention or status of the CC0 files; they were
pure CC0 before (ie, public domain) and they are pure CC0 now. It just
gets rid of the (C) part at the top of the dedication, which may read
as a bit contradictory since the purpose is to make it public domain.
https://github.com/warmcat/libwebsockets/issues/1550
rx flow control needs to handle the situation that it is draining from
a previous rx flow control period, and the user code reasserts rx flow
control partway through that.
The accounting for the used rx then boils down to only trimming the
rxflow buflist we were "replaying", by however much we managed to
deliver of it this time before rx flow control was asserted again.
"Normal" rx consumption accounting is wrong in this case, since we
accounted for it entirely in the rxflow cache buflist.
The patch recognizes this situation, does the accounting in the cache
buflist, and then lies to the caller that there was no rx consumption
to be accounted for at its level.
This is aimed at allowing a stride to optionally be
given for the parameter name array... this will allow
use of lws_struct metadata as the parameter name
array.
Also introduce the option to put all allocations in
an lwsac instead of via lws_mallocs.
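Assuming this refers to the spa (stateful post arguments) apis,
creation via an info struct might look like the sketch below; the
member names (param_names_stride, ac) and the
lws_spa_create_via_info() entry point are assumptions for
illustration.

lws_spa_create_info_t i;
struct lwsac *ac = NULL;

memset(&i, 0, sizeof(i));
i.param_names		= param_names;	/* or lws_struct metadata */
i.count_params		= LWS_ARRAY_SIZE(param_names);
i.max_storage		= 512;
i.param_names_stride	= 0;	/* 0 = tightly-packed char * array */
i.ac			= &ac;	/* all allocations go in this lwsac */

spa = lws_spa_create_via_info(wsi, &i);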