This changes the approach to tx credit management so that the
initial stream tx credit window is zero. Under RFC7540 this is the
only way to gain the ability to precisely and selectively rx flow
control individual incoming streams.
At the time the headers are sent, a WINDOW_UPDATE is sent with
the initial tx credit towards us for that specific stream. By
default, this acts as before with a 256KB window added for both
the stream and the nwsi, and additional window management sent
as stuff is received.
It's now also possible to set a member in the client info
struct and a new option LCCSCF_H2_MANUAL_RXFLOW to precisely
manage both the initial tx credit for a specific stream and
the ongoing rate limit by meting out further tx credit
manually.
Add another minimal example, http-client-h2-rxflow, demonstrating how
to set the peer's initial budget for transmitting to us on a connection
and how to control it during the connection lifetime, restricting the
amount of incoming data we have to buffer.
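As a rough sketch of how the pieces fit together, following the
http-client-h2-rxflow example (the connect info member name
manual_initial_tx_credit and the lws_wsi_tx_credit() helper used here
are assumptions and may not match the headers exactly):

  /* at connect time: ask for manual rx flow control and a small
   * initial tx credit towards us */
  struct lws_client_connect_info i;

  memset(&i, 0, sizeof(i));
  i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_H2_MANUAL_RXFLOW;
  i.manual_initial_tx_credit = 1024; /* peer may only send us 1KB at first */
  /* ... fill in address, port, path etc, then lws_client_connect_via_info(&i) */

  /* later, eg, once we have drained what we already buffered, meter out
   * more credit so the peer may send us the next chunk */
  lws_wsi_tx_credit(wsi, LWSTXCR_PEER_TO_US, 1024);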
This should be a NOP for h2 support and only affects internal
apis. But it lets us reuse the working and reliable h2 mux
arrangements directly in other protocols later, and share code
so that building for h2 + new protocols can take advantage of a common
mux child handling struct and shared code.
Break out the common mux handling struct into its own type.
Convert all uses of members that used to be in wsi->h2 to wsi->mux.
Audit all references to those members and break out generic helpers
for anything that is useful for other mux-capable protocols reusing
the wsi->mux related features.
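Purely as an illustration of the kind of per-stream bookkeeping that
moves out of wsi->h2 into the shared wsi->mux area (the struct and
field names below are hypothetical, not lws's actual internal
definition):

  /* hypothetical sketch of generic mux-child state shared by muxable protocols */
  struct mux_child_state {
  	struct lws	*parent_wsi;   /* the network wsi carrying this stream */
  	struct lws	*sibling_list; /* next child stream on the same parent */
  	unsigned int	 my_sid;       /* stream id within the muxed protocol, eg, h2 */
  	unsigned char	 requested_close:1;
  };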
On some platforms, the logging flow itself may reset errno. If we log
errno on those platforms and then try to query it afterwards, we get a
nasty surprise: the errno we logged has already been destroyed by the
time we come to test it.
In the two cases of this in the tree at the moment, sample errno into a
temporary, then log and test the temporary.
Thanks to Sakthi Ramabadran for finding this.
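A minimal sketch of the pattern, here inside an arbitrary receive path:

  n = recv(fd, buf, sizeof(buf), 0);
  if (n < 0) {
  	int en = errno; /* sample before logging: logging may clobber errno */

  	lwsl_warn("%s: recv failed, errno %d\n", __func__, en);

  	if (en == EAGAIN || en == EWOULDBLOCK)
  		return 0; /* test the sampled copy, not errno itself */

  	return -1;
  }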
This teaches the http client code how to handle 303 redirects... these
can happen after a POST where the server side wants you to come back
with a GET to the Location: it mentioned.
The lws client will follow the redirect and force GET; this works for
both h1 and h2. The client protocol handler has to act differently
depending on whether it is connecting for the initial POST or the
subsequent GET; it can find out which by checking a new api,
lws_http_is_redirected_to_get(wsi), which returns nonzero if in GET
mode.
Minimal example for server form-post has a new --303 switch to enable
this behaviour there and the client post example has additions to
check lws_http_is_redirected_to_get().
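For instance, in the client's protocol callback, the POST-specific
pieces are skipped on the second, redirected connection; a sketch along
the lines of the minimal client post example:

  case LWS_CALLBACK_CLIENT_APPEND_HANDSHAKE_HEADER:
  	if (lws_http_is_redirected_to_get(wsi))
  		/* we came back here after a 303: lws has already forced
  		 * GET, so don't add the POST content-type / length headers */
  		break;

  	/* ... append the POST headers and ask for writeable as usual ... */
  	break;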
Resetting the ah and waiting a bit is the right strategy at the end of
an http/1.1 client transaction. But it's wrong for h2... drop the ah
instead if it's the end of a client transaction on h2.
The %.*s format is very handy for printing strings where you have a
length but no NUL termination. It's quite widely supported, but at least
one vendor RTOS toolchain doesn't have it.
Since there aren't that many uses of it yet, audit all uses and
convert to a new helper lws_strnncpy() which uses the smaller of
two lengths.
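Roughly, assuming the helper takes the source length and the
destination size as its two limits:

  char name[32];

  /* previously: lwsl_notice("name: %.*s\n", (int)len, p);
   * ...which at least one vendor RTOS toolchain can't print */

  lws_strnncpy(name, p, len, sizeof(name)); /* copies at most the smaller of
  					     * len and sizeof(name) - 1 bytes,
  					     * and NUL-terminates */
  lwsl_notice("name: %s\n", name);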
lws has been able to generate client multipart mime as shown
in minimal-http-client-post, but it requires a lot of user
boilerplate to handle the boundary, related transaction header,
and multipart headers.
This patch adds a client creation flag to indicate it will
carry multipart mime, which autocreates the boundary string
and applies the transaction header with it, and an api to
form the boundary headers between the different mime parts
and the terminating boundary.
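A sketch of the intended usage, following the minimal-http-client-post
example (the flag and api names, LCCSCF_HTTP_MULTIPART_MIME and
lws_client_http_multipart(), and the exact argument order are
assumptions here):

  /* at connect time: tell lws this client connection carries multipart
   * mime, so it creates the boundary and the transaction header itself */
  i.ssl_connection |= LCCSCF_HTTP_MULTIPART_MIME;

  /* in LWS_CALLBACK_CLIENT_HTTP_WRITEABLE: emit the boundary headers
   * between the different mime parts... */
  if (lws_client_http_multipart(wsi, "text1", NULL, NULL, &p, end))
  	return -1;
  p += lws_snprintf(p, lws_ptr_diff(end, p), "my text field\xd\xa");

  /* ...and after the last part, a NULL name emits the terminating boundary */
  if (lws_client_http_multipart(wsi, NULL, NULL, NULL, &p, end))
  	return -1;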
h1 and h2 have a bunch of code supporting autobinding outgoing client
connections to be streams in, or queued as pipelined on, the same /
existing single network connection, if it's to the same endpoint.
Adapt this http-specific code and active connection tracking to be
usable by generic muxable protocols in the same way.
It was already correct, but add helpers to isolate and deduplicate the
processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable
by .keepalive_timeout if nonzero.
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
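On the server side the vhost flag is set at creation time; on the
client side, a stream that has completed its headers can ask to become
an rx-only long poll stream. Roughly (the client api name
lws_h2_client_stream_long_poll_rxonly() used here is an assumption; the
vhost flag is as named above):

  /* server: let half-closed remote streams live on as immortal long poll */
  info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;

  /* client: send the 0-byte DATA with END_STREAM and mark the stream
   * immortal, so it becomes a read-only long poll stream */
  case LWS_CALLBACK_ESTABLISHED_CLIENT_HTTP:
  	if (lws_h2_client_stream_long_poll_rxonly(wsi))
  		lwsl_warn("long poll not possible on this connection\n");
  	break;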
With http, the protocol doesn't indicate where the headers end and the
next transaction or body begins. Until now, we handled that for client
response header parsing by reading from the tls buffer bytewise.
This modernizes the code to read in up to 256-byte chunks and parse
the chunks in one hit (the parse API is already set up for doing this
elsewhere).
Now that we have a generic input buflist, adapt the parser loop to go
through that and arrange for any leftovers to be placed on it.
The protocol list is no longer a simple sentinel-terminated array but
is composed at vhost creation time in many cases. Use the vhost's count
of how many protocols it has rather than seeking up to the sentinel.
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very
detailed information on every read and write, and allowing the user code
to provide a callback to process the events.
This adds the option to have lws do its own dns resolution on the
event loop, without blocking. Existing implementations get name
resolution done by the libc, which is blocking. If you are opening
client connections but need to carefully manage latency, another
connection opening and blocking in name resolution becomes a big
problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's really integrated with the lws event loop, it does not spawn
threads or use the libc resolver, and of course no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes go on a list on the first request, rather than
  spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
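Most users never touch it directly, since client connections pick it up
transparently when it's enabled, but a direct lookup could look roughly
like this (the entry point, record-type constant and callback shape are
assumptions here, not a confirmed signature):

  /* assumed callback shape: result is a getaddrinfo()-style list, or NULL */
  static void
  lookup_done(struct lws *wsi, const char *ads, const struct addrinfo *result,
  	      int n, void *opaque)
  {
  	if (!result) {
  		lwsl_warn("%s: resolution failed\n", ads);
  		return;
  	}
  	lwsl_notice("%s: resolved\n", ads);
  }

  	/* kick off a non-blocking A record lookup on the event loop */
  	lws_async_dns_query(context, 0, "libwebsockets.org", LWS_ADNS_RECORD_A,
  			    lookup_done, NULL, NULL);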