This teaches the http client how to handle 303 redirects... these
can happen after a POST, where the server wants the client to come back
with a GET to the URL given in the Location: header.
The lws client will follow the redirect and force GET; this works for both
h1 and h2. The client protocol handler has to act differently depending on
whether it is connecting for the initial POST or the subsequent GET; it can
find out which by calling the new api lws_http_is_redirected_to_get(wsi),
which returns nonzero if the connection is in GET mode.
The minimal server form-post example has a new --303 switch to enable
this behaviour there, and the client post example has additions that
check lws_http_is_redirected_to_get().
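As a hedged sketch (the callback body here is illustrative, not the exact
minimal example code), a client protocol handler can gate its POST-only
work like this:

    #include <libwebsockets.h>

    /* sketch: distinguish the initial POST from the forced GET after
     * a 303 was followed */
    static int
    callback_post(struct lws *wsi, enum lws_callback_reasons reason,
                  void *user, void *in, size_t len)
    {
            switch (reason) {
            case LWS_CALLBACK_CLIENT_APPEND_HANDSHAKE_HEADER:
                    if (lws_http_is_redirected_to_get(wsi))
                            break; /* GET phase: no POST headers or body */

                    /* ... append the POST content-type headers here ... */
                    lws_client_http_body_pending(wsi, 1);
                    break;

            default:
                    break;
            }

            return lws_callback_http_dummy(wsi, reason, user, in, len);
    }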
Introduce a generic lws_state object with notification handlers
that may be registered in a chain.
Implement one of those in the context to manage the "system state".
Allow other pieces of lws and user code to register notification
handlers on a context list. Handlers can object to a state change, or
take over responsibility for moving it forward and retrying it, if they
know that some dependent action must succeed first.
For example, if the system time is invalid, we cannot move on to
a state where anything can do tls until that has been corrected.
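A hedged sketch of registering such a handler, using the notifier names
from the public lws_system apis (the veto condition is illustrative):

    #include <libwebsockets.h>

    /* sketch: object to reaching OPERATIONAL until a dependency is ready */
    static int
    app_state_nf(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
                 int current, int target)
    {
            if (target == LWS_SYSTATE_OPERATIONAL /* && !dependency_ready */)
                    return 1; /* nonzero objects; the change is retried */

            return 0; /* allow the transition */
    }

    static lws_state_notify_link_t nl = {
            { NULL, NULL, NULL }, /* list linkage, zeroed */
            app_state_nf,
            "app"
    };
    static lws_state_notify_link_t * const notifiers[] = { &nl, NULL };

    /* at context creation time: info.register_notifier_list = notifiers; */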
The logic was already correct, but add helpers to isolate and deduplicate
the processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable
via .keepalive_timeout when that is nonzero.
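For example (the 60s value is illustrative):

    struct lws_context_creation_info info;

    memset(&info, 0, sizeof(info));
    /* sketch: nonzero overrides the 31s default h2 network timeout */
    info.keepalive_timeout = 60;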
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA frame with END_STREAM) and
mark itself as immortal, creating a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
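A hedged sketch of both halves, assuming the client api is
lws_h2_client_stream_long_poll_rxonly() as in the lws public headers
(error handling elided):

    /* client: given an established h2 stream wsi, try to become a
     * read-only long poll stream */
    if (lws_h2_client_stream_long_poll_rxonly(wsi))
            lwsl_warn("%s: long poll not possible\n", __func__);

    /* server: in the creation info before creating the vhost, let it
     * treat half-closed remotes as immortal long poll streams */
    info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;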
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Previously, name resolution was
done by the libc, which is blocking. If you are opening client
connections but need to carefully manage latency, another connection
blocking the event loop while it resolves a name becomes a big
problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- a single nameserver, reached over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- full integration with the lws event loop: it does not spawn
  threads or use the libc resolver, and never blocks
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes are attached to the first request, rather than
  spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
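Enabling it is a build-time choice; client connections created on the
context then resolve names on the event loop transparently. Assuming the
cmake flag follows the naming used in this tree:

    $ cmake .. -DLWS_WITH_SYS_ASYNC_DNS=1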
Improve the code around stash, getting rid of the strdups for a net
code reduction. Remove the special destroy helper for stash, since
destroying it becomes a one-liner.
Trade several stack allocations in the client reset function for a single
sized, short-lived heap allocation, reducing peak stack usage by around
700 bytes.
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
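A minimal sketch of the scheduling api described above (signature as in
the public header; the 1s delay is illustrative):

    #include <libwebsockets.h>

    static lws_sorted_usec_list_t sul;

    /* sketch: fired from the event loop ~1s after scheduling */
    static void
    sul_cb(lws_sorted_usec_list_t *s)
    {
            lwsl_notice("%s: timer fired\n", __func__);
    }

    /* somewhere with a struct lws_context *context in scope;
     * pt 0, us resolution as far as the platform allows */
    lws_sul_schedule(context, 0, &sul, sul_cb, LWS_US_PER_SEC);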
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
There are quite a few linked lists of things that want events after
some period. This introduces a type binding an lws_dll2 for list
membership and an lws_usec_t for the duration.
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
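Roughly, the binding looks like this (a sketch of the idea only; the
authoritative definition lives in the public headers):

    typedef struct lws_sorted_usec_list {
            struct lws_dll2 list; /* sorted into the owner list by time */
            lws_usec_t      us;   /* when the event is due */
    } lws_sorted_usec_list_t;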
lws_dll2 removes the downsides of lws_dll and adds new features like a
running member count and explicit owner type... it's cleaner and more
robust (eg, nodes know their owner, so they can switch between list
owners and remove themselves without the caller needing to know the owner).
This deprecates lws_dll, but since it is public, it can still be built
for the 4.0 release if you give cmake LWS_WITH_DEPRECATED_LWS_DLL.
All remaining internal users of lws_dll are migrated to lws_dll2.
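A small sketch of the lws_dll2 pattern (names from the public api; the
item type is illustrative):

    #include <libwebsockets.h>

    typedef struct item {
            lws_dll2_t list; /* node: knows its owner once added */
            int payload;
    } item_t;

    static lws_dll2_owner_t owner; /* zeroed owner == empty list */
    static item_t it;

    static void
    demo(void)
    {
            lws_dll2_add_tail(&it.list, &owner); /* owner.count becomes 1 */
            lws_dll2_remove(&it.list); /* node removes itself; no owner arg */
    }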
lws has been able to proxy inbound h2 or h1 connections to an
onward h1 connection for a while now. It's simple to use: just
build with LWS_WITH_HTTP_PROXY and create a mount where the origin
gives the onward connection details, as sketched below. Unix sockets
can also be used as the onward connection.
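A hedged sketch of such a mount (the mountpoint and onward origin are
illustrative):

    static const struct lws_http_mount mount = {
            .mountpoint      = "/proxytest",
            .mountpoint_len  = 10,
            .origin          = "127.0.0.1:8081/", /* onward h1 origin */
            .origin_protocol = LWSMPRO_HTTP,      /* proxy over http */
    };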
This patch extends the support to inbound h2 or h1 ws upgrades as
well, proxying them to an h1 ws onward connection.
This allows you to offer completely different services in a
common URL space, including ones that connect back by ws / wss.
The callback flow is a bit more disruptive than doing the iteration
directly in your own function. This helps by passing a user void *
into the callback, set via an lws_dll[2]_foreach_safe() argument.
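A short sketch, reusing the owner from the lws_dll2 sketch above (the
counter is illustrative):

    /* sketch: the user void * arrives as the callback's second arg */
    static int
    count_cb(struct lws_dll2 *d, void *user)
    {
            (*(int *)user)++;

            return 0; /* returning nonzero stops the iteration early */
    }

    int count = 0;
    lws_dll2_foreach_safe(&owner, &count, count_cb);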