This should be a NOP for h2 support and only affects internal
APIs. But it lets us reuse the working and reliable h2 mux
arrangements directly in other protocols later, and share code
so that building for h2 + new protocols can take advantage of a
common mux child handling struct and code.
- Break out the common mux handling struct into its own type (sketched below).
- Convert all uses of members that used to be in wsi->h2 to wsi->mux.
- Audit all references to those members and break out generic helpers for
  anything that other mux-capable protocols can usefully reuse from the
  wsi->mux related features.
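As a rough sketch of the shape this takes (member names here are illustrative
only, not the actual lws internals), the shared per-wsi mux state amounts to
something like:

/* illustrative only; see the lws private headers for the real layout */

struct my_mux {
	struct lws	*parent_wsi;	/* the network connection carrying us */
	struct lws	*child_list;	/* head of this parent's children */
	struct lws	*sibling_list;	/* next child on the same parent */

	unsigned int	my_sid;		/* stream id inside the parent */
	unsigned int	child_count;	/* live children on this parent */
};

Generic helpers that insert, remove or walk children can then operate on
wsi->mux without caring whether the underlying protocol is h2 or something
newer.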
On some platforms, it's possible that the logging flow itself may reset errno.
In the case where we try to log errno on those platforms and afterwards try to
query it, we get a nasty surprise: the errno we logged has been destroyed by
the time we come to test it.
In the two cases of this in the tree at the moment, sample errno into a temp,
then log and test the temp.
Thanks to Sakthi Ramabadran for finding this.
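The fix follows the usual C idiom of sampling errno into a local before
anything else, including the logging itself, can disturb it; a minimal sketch:

#include <errno.h>
#include <unistd.h>
#include <libwebsockets.h>

static int
write_and_report(int fd, const void *buf, size_t len)
{
	ssize_t n = write(fd, buf, len);

	if (n < 0) {
		int en = errno;	/* sample it before logging can clobber it */

		lwsl_err("%s: write failed: errno %d\n", __func__, en);

		return en == EINTR ? 0 : -1;	/* test the sampled copy */
	}

	return 0;
}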
This teaches the http client code how to handle 303 redirects... these
can happen after a POST where the server side wants you to come back with
a GET to the Location: it mentions.
The lws client will follow the redirect and force GET; this works for both
h1 and h2. The client protocol handler has to act differently depending on
whether it is connecting for the initial POST or the subsequent GET; it can
find out which by checking a new API, lws_http_is_redirected_to_get(wsi),
which returns nonzero if in GET mode.
The minimal example for server form-post has a new --303 switch to enable
this behaviour there, and the client post example has additions to
check lws_http_is_redirected_to_get().
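A client protocol handler can act on it along these lines (a minimal sketch;
the callback reason shown is the usual place a POST prepares its body, and the
rest of the handler is omitted):

#include <libwebsockets.h>

static int
callback_post(struct lws *wsi, enum lws_callback_reasons reason,
	      void *user, void *in, size_t len)
{
	switch (reason) {
	case LWS_CALLBACK_CLIENT_APPEND_HANDSHAKE_HEADER:
		if (lws_http_is_redirected_to_get(wsi))
			break;	/* GET leg after the 303: no body to prepare */

		/* initial POST leg: tell lws a body will follow */
		lws_client_http_body_pending(wsi, 1);
		lws_callback_on_writable(wsi);
		break;

	default:
		break;
	}

	return lws_callback_http_dummy(wsi, reason, user, in, len);
}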
The %.*s format is very handy to print strings where you have a length but
no NUL termination. It's quite widely supported, but at least
one vendor RTOS toolchain doesn't have it.
Since there aren't that many uses of it yet, audit all uses and
convert them to a new helper, lws_strnncpy(), which uses the smaller of
two lengths.
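The idea behind the helper, sketched here with an assumed signature (check
the lws headers for the authoritative one), is to copy at most the smaller of
the source length and the destination size, always NUL terminating:

#include <stddef.h>
#include <string.h>

/* sketch of the idea only; the real lws_strnncpy() lives in the lws headers */
static void
my_strnncpy(char *dest, const char *src, size_t src_len, size_t dest_size)
{
	size_t n = src_len;

	if (!dest_size)
		return;

	if (n > dest_size - 1)
		n = dest_size - 1;	/* the smaller of the two lengths */

	memcpy(dest, src, n);
	dest[n] = '\0';			/* always NUL terminate */
}

So instead of printing with "%.*s" and a length, the string is copied into a
bounded, NUL-terminated buffer first and printed with plain "%s".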
h1 and h2 have a bunch of code supporting autobinding outgoing client connections
to be streams in, or queued as pipelined on, the same / existing single network
connection, if it's to the same endpoint.
Adapt this http-specific code and active connection tracking to be usable by
generic muxable protocols in the same way.
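From the client side, sharing or queuing onto an existing connection to the
same endpoint remains opt-in per connection; a hedged sketch, assuming the
existing LCCSCF_PIPELINE client connect flag and the usual connect-info fields:

#include <string.h>
#include <libwebsockets.h>

static int
connect_shared(struct lws_context *context)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context	 = context;
	i.address	 = "example.com";	/* illustrative endpoint */
	i.port		 = 443;
	i.path		 = "/";
	i.host		 = i.address;
	i.origin	 = i.address;
	i.method	 = "GET";
	/*
	 * ask to bind this logical connection as a stream on, or queue it
	 * as pipelined on, an existing connection to the same endpoint
	 */
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	return !lws_client_connect_via_info(&i);	/* nonzero on failure */
}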
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very detailed
information on every read and write, and allowing user code to provide
a callback to process the events.
This adds the option to have lws do its own DNS resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. If
you are opening client connections but need to carefully
manage latency, another connection opening and blocking the
event loop while it does name resolution becomes a big problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6 (sketched below)
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's really integrated with the lws event loop, it does not spawn
threads or use the libc resolver, and of course no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the first
  completes go on a list on the first request, rather than spawning multiple
  requests)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
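As an illustration of the A record promotion item in the list above (generic
C here, not the lws resolver internals): when the connection wants ipv6 but
the answer is an ipv4 A record, the address is presented in ipv4-mapped form:

#include <string.h>
#include <netinet/in.h>

/* build the ::ffff:1.2.3.4 form of an ipv4 address for use on an ipv6 socket */
static void
promote_a_to_aaaa(struct in6_addr *out, const struct in_addr *v4)
{
	memset(out, 0, sizeof(*out));
	out->s6_addr[10] = 0xff;
	out->s6_addr[11] = 0xff;
	memcpy(&out->s6_addr[12], &v4->s_addr, 4);
}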
Improve the code around stash, getting rid of the strdups for a net
code reduction. Remove the special destroy helper for stash since
it becomes a one-liner.
Trade several stack allocs in the client reset function for a single
short-lived, appropriately-sized heap alloc, reducing peak stack allocation
by around 700 bytes.
lws_dll2 removes the downsides of lws_dll and adds new features like a
running member count and explicit owner type... it's cleaner and more
robust (e.g., nodes know their owner, so they can casually switch between
list owners and remove themselves without the caller needing to know the owner).
This deprecates lws_dll, but since it's public, it can still be built for the
4.0 release if you give cmake LWS_WITH_DEPRECATED_LWS_DLL.
All remaining internal users of lws_dll are migrated to lws_dll2.
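A minimal sketch of what using the new type looks like (struct and member
names here are illustrative; see the lws dll2 header for the authoritative API):

#include <stdlib.h>
#include <string.h>
#include <libwebsockets.h>

struct item {
	lws_dll2_t	list;	/* our place on whichever owner holds us */
	int		value;
};

static int
dump_cb(struct lws_dll2 *d, void *user)
{
	struct item *it = lws_container_of(d, struct item, list);

	lwsl_notice("item %d\n", it->value);

	return 0;	/* nonzero would stop the walk */
}

static void
example(void)
{
	lws_dll2_owner_t owner;
	struct item *it = calloc(1, sizeof(*it));

	if (!it)
		return;

	memset(&owner, 0, sizeof(owner));

	lws_dll2_add_tail(&it->list, &owner);	/* node now knows its owner */
	lws_dll2_foreach_safe(&owner, NULL, dump_cb);
	lwsl_notice("owner holds %u items\n", (unsigned int)owner.count);

	lws_dll2_remove(&it->list);	/* no owner argument needed */
	free(it);
}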
Until now the uv watcher has been composed into the wsi.
This works fine except in the case of a client wsi that
meets a redirect when the event loop is libuv, with its
requirement that handles be closed via the event loop.
We want to reuse the wsi, since the originator of it has
a copy of the wsi pointer, and we want to conceal the
redirect. Since the redirect is commonly to a different
IP, we want to keep the wsi alive while closing its
socket cleanly. That's not too difficult, unless you are
using uv.
With UV the composed watcher is a disaster, since after
the close is requested the wsi will start to reconnect.
We tried to deal with that by copying the uv handle and
freeing it when the handle close finalizes, but it turns
out the handle is part of a linked-list scheme inside uv.
This patch hopefully finally solves it by giving the uv
handle its own allocation from the start. When we want
to close the socket and reuse the wsi, we simply take
responsibility for freeing the handle and set the wsi
watcher pointer to NULL.
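The underlying libuv pattern being leaned on (generic libuv here, not the lws
internals) is that a heap-allocated handle can outlive whatever pointed at it,
and is only freed from its close callback:

#include <stdlib.h>
#include <uv.h>

static void
on_closed(uv_handle_t *h)
{
	free(h);	/* only safe now: uv has finished with the handle */
}

static uv_poll_t *
attach_watcher(uv_loop_t *loop, int fd)
{
	uv_poll_t *w = malloc(sizeof(*w));

	if (w && uv_poll_init(loop, w, fd)) {
		free(w);	/* init failed, handle never entered uv */
		return NULL;
	}

	return w;
}

static void
detach_watcher(uv_poll_t **pw)
{
	uv_poll_t *w = *pw;

	if (!w)
		return;

	*pw = NULL;	/* drop our pointer first, as the wsi now does */
	uv_close((uv_handle_t *)w, on_closed);	/* freed later in on_closed() */
}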