FreeRTOS + lwIP doesn't support pipe2() or pipe()... implement a "pipe"
based on two UDP sockets, one listening on 127.0.0.1:54321 and the other
doing a sendto() there of a single byte to interrupt the event loop wait.
Re-use the arrangements for actual pipe fds and pipe role to deliver
lws_cancel_service() functionality using this.
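A minimal sketch of the idea (standard BSD socket calls; on the FreeRTOS +
lwIP platform these come via the lwIP socket api, and the real version lives
in the plat code):

#include <string.h>
#include <sys/socket.h>		/* on FreeRTOS + lwIP: lwip/sockets.h */
#include <netinet/in.h>

static int pipe_rx_fd, pipe_tx_fd;

static int
fake_pipe_create(void)
{
	struct sockaddr_in sa;

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(54321);
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	pipe_rx_fd = socket(AF_INET, SOCK_DGRAM, 0); /* "read" end, polled */
	pipe_tx_fd = socket(AF_INET, SOCK_DGRAM, 0); /* "write" end */
	if (pipe_rx_fd < 0 || pipe_tx_fd < 0)
		return -1;

	return bind(pipe_rx_fd, (struct sockaddr *)&sa, sizeof(sa));
}

static void
fake_pipe_kick(void)	/* what lws_cancel_service() effectively does here */
{
	struct sockaddr_in sa;
	unsigned char b = 0;

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(54321);
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	sendto(pipe_tx_fd, &b, 1, 0, (struct sockaddr *)&sa, sizeof(sa));
}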
There are some minor public api type improvements, rather than casting
everywhere inside lws and user code to work around them... these changed
from int to size_t:
- lws_buflist_use_segment() return
- lws_tokenize_t .len and .token_len
- lws_tokenize_cstr() length
- lws_get_peer_simple() namelen
- lws_get_peer_simple_fd() namelen, int fd -> lws_sockfd_type fd
- lws_write_numeric_address() len
- lws_sa46_write_numeric_address() len
These changes are typically a NOP for user code
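For example, the common pattern of passing a sizeof() for the length already
produces a size_t, so a call like this is unchanged:

#include <libwebsockets.h>

void
dump_peer(struct lws *wsi)
{
	char name[64];

	/* namelen is now a size_t; sizeof() already yields one */
	if (lws_get_peer_simple(wsi, name, sizeof(name)))
		lwsl_notice("peer: %s\n", name);
}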
This changes the approach of tx credit management to set the
initial stream tx credit window to zero. This is the only way
with RFC7540 to gain the ability to precisely and selectively
rx flow control incoming streams.
At the time the headers are sent, a WINDOW_UPDATE is sent with
the initial tx credit towards us for that specific stream. By
default, this acts as before, with a 256KB window added for both
the stream and the nwsi, and additional window management sent
as data is received.
It's now also possible to set a member in the client info
struct and a new option LCCSCF_H2_MANUAL_RXFLOW to precisely
manage both the initial tx credit for a specific stream and
the ongoing rate limit by meting out further tx credit
manually.
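A sketch of what client code can look like (hedged: the info struct member
name and the credit-adjust api mentioned in the final comment are from
memory, check the current headers; LCCSCF_H2_MANUAL_RXFLOW itself is the
new flag):

#include <string.h>
#include <libwebsockets.h>

struct lws *
open_metered_stream(struct lws_context *cx, struct lws_vhost *vh)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context	= cx;
	i.vhost		= vh;
	i.address	= "example.com";
	i.port		= 443;
	i.path		= "/";
	i.host		= i.address;
	i.origin	= i.address;
	i.method	= "GET";
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_H2_MANUAL_RXFLOW;
	/* assumed member name: initial budget the peer gets to send to us */
	i.manual_initial_tx_credit = 1024;

	return lws_client_connect_via_info(&i);
}

/*
 * Later, eg after draining what we buffered, grant the peer more budget,
 * something like: lws_wsi_tx_credit(wsi, LWSTXCR_PEER_TO_US, 4096);
 */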
Add another minimal example http-client-h2-rxflow demonstrating how
to force a connection's peer's initial budget for transmitting to us,
and how to control it during the connection lifetime to restrict the
amount of incoming data we have to buffer.
Pre-sul, checking the interval to the next pending scheduled event was
expensive and iterative, so the service avoided it if the wait was already 0.
With sul though, the internal "check" function also services ripe events and
removes them, and finding the interval to the next one is really cheap.
Rename the "check" function to __lws_sul_service_ripe() to make it clear it's
not just about returning the interval to the next pending one. And call it
regardless of whether we already decided we are not going to wait in the poll.
Following https://github.com/warmcat/libwebsockets/pull/1745:
as it is, if time_t is 32-bit on the platform the arithmetic might
overflow, so force it to lws_usec_t (uint64_t) even though it happens
to work OK here on x86_64.
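The shape of the fix, ie, promote to 64-bit before multiplying (the helper
and variable names here are just illustrative):

#include <libwebsockets.h>
#include <sys/time.h>

static lws_usec_t
timeval_to_usec(const struct timeval *tv)
{
	/*
	 * with a 32-bit time_t, tv_sec * 1000000 done in time_t / long
	 * arithmetic can overflow; cast up to lws_usec_t first
	 */
	return ((lws_usec_t)tv->tv_sec * 1000000) +
	       (lws_usec_t)tv->tv_usec;
}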
Add a minimal example aimed at testing the wsi hrtimer stability
consistently across platforms.
Add hrtimer dump code, disabled by default (this is too expensive
and too specific to internal testing to leave in for debug mode, even
if it's not printed). If you hack it enabled, it will dump the sul
list for the pt and assert if the list is disordered.
Generic lws_system IPv4 DHCP client
- netif and route control via lib/plat apis
- linux plat pieces implemented
- Uses raw ip socket for UDP broadcast and rx
- security-aware
- usual stuff plus up to 4 x dns server
If it's enabled for build, it holds the system
state at DHCP until at least one registered interface
has acquired a set of IP / mask / router / DNS server.
It uses PF_PACKET, which is Linux-only at the moment, but those
areas are isolated into plat code.
TODOs
- lease timing and reacquire
- plat pieces for other than Linux
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
Clean out periodic role callback and replace the last two role users
with discrete lws_sul for each pt.
There's no longer any reason to come out of sleep for periodic service;
that has been eliminated by lws_sul.
With event libs, there is no opportunity to do it anyway, since their
event loop is atomic and makes callbacks and sleeps until it is stopped.
But some users are relying on the old poll() service loop as
glue that's difficult to replace. So for now, help that happen by
accepting a timeout_ms of -1 to mean: sample the poll and service
whatever is already pending, without any wait.
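So a glue-style loop can look like this, doing its own work each pass and
letting lws service whatever is already pending without sleeping:

#include <libwebsockets.h>

void
glue_loop(struct lws_context *cx, volatile int *interrupted)
{
	while (!*interrupted && lws_service(cx, -1) >= 0) {
		/* ... the non-lws work this loop is really glueing in ... */
	}
}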
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very
detailed information on every read and write, and allowing the user code
to provide a callback to process the events.
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. The existing implementation gets
the name resolution done by the libc, which is blocking. If
you are opening client connections but need to carefully manage
latency, another connection blocking the event loop while it does
name resolution becomes a big problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's really integrated with the lws event loop, it does not spawn
threads or use the libc resolver, and of course no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name issued before the
  first completes are attached to a list on the first request, rather
  than spawning duplicate requests)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
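A sketch of using the new scheduling api from user code (the
lws_sul_schedule() arguments are as introduced with this work; check the
header in your tree for the authoritative signature):

#include <libwebsockets.h>

struct my_timer {
	lws_sorted_usec_list_t	sul;	/* must live while it is scheduled */
	int			fires;
};

static void
my_timer_cb(lws_sorted_usec_list_t *sul)
{
	struct my_timer *t = lws_container_of(sul, struct my_timer, sul);

	lwsl_notice("fired %d times\n", ++t->fires);
}

void
my_timer_start(struct lws_context *cx, struct my_timer *t)
{
	/* call my_timer_cb from the event loop in 250ms (units are us) */
	lws_sul_schedule(cx, 0, &t->sul, my_timer_cb, 250 * 1000);
}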
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
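Roughly, the type binds the two together, plus the callback to fire (see the
real header for the authoritative definition):

struct lws_sorted_usec_list;
typedef void (*sul_cb_t)(struct lws_sorted_usec_list *sul);

typedef struct lws_sorted_usec_list {
	struct lws_dll2	list;	/* sorted position on the pt's sul list */
	lws_usec_t	us;	/* absolute us time the event is due */
	sul_cb_t	cb;	/* what to call when it becomes ripe */
} lws_sorted_usec_list_t;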
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
The logic in the loops for insertion and deletion in the mini fd-lookup
mode (forced when the pt fd limit is below the ulimit max fds) was not
quite right.
It showed up as a hard-to-reproduce problem with the ws client
spam test that uses the mini mode, on travis. This should
fix the root cause.
Otherwise it often forgets to inform about event loop interrupts.
Add a flag to the per-thread context, set it in the signal function,
then check / reset it in the service method.
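Roughly this pattern (the names here are illustrative, not the lws
internals):

#include <stdatomic.h>

struct per_thread {
	atomic_int interrupted;		/* "we were signalled" latch */
	/* ... */
};

static void
signal_fn(struct per_thread *pt)	/* eg, from the cancel / signal path */
{
	atomic_store(&pt->interrupted, 1);
	/* ... plus whatever kicks the event loop out of its wait ... */
}

static void
service_fn(struct per_thread *pt)
{
	if (atomic_exchange(&pt->interrupted, 0)) {
		/* ... report the event loop interrupt, eg, broadcast
		 *     LWS_CALLBACK_EVENT_WAIT_CANCELLED ... */
	}

	/* ... normal servicing ... */
}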
An lws context usually contains a processwide fd -> wsi lookup table.
This allows any possible fd returned by a *nix type OS to be immediately
converted to a wsi just by indexing an array of struct lws * the size of
the highest possible fd, as found by ulimit -n or similar.
This works modestly for Linux type systems, where the default ulimit -n
for a process is 1024: it means a 4KB or 8KB lookup table for 32-bit or
64-bit systems respectively.
However in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this represents increasing waste. It's
made much worse if the system has a much larger default ulimit -n, eg 1M:
then the table occupies 4MB or 8MB, of which you will only use one entry.
Even so, because lws can't be sure the OS won't return a socket fd at any
number up to (ulimit -n - 1), it has to allocate the whole lookup table
at the moment.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it leaves it at the default 0, then
everything is as it was before this patch. However, if it finds that
(info->fd_limit_per_thread * actual_number_of_service_threads), where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, lws switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups then happen by
iterating the table and comparing, rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However in the case where you know lws will only have a very few wsi
maximum, this method can very usefully trade off speed to be able to
avoid the allocation sized by ulimit -n.
The minimal examples for clients that can make use of this are also
modified by this patch to use the smaller context allocations.
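For example, a client-only context that will only ever hold a handful of
wsi can do something like:

#include <string.h>
#include <libwebsockets.h>

struct lws_context *
create_small_context(void)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	info.port = CONTEXT_PORT_NO_LISTEN;	/* client-only, no server */
	/* enough for the internal pipe plus a couple of client wsi */
	info.fd_limit_per_thread = 1 + 2;

	return lws_create_context(&info);
}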