Resetting the ah (allocated header set) and waiting a bit is the right
strategy at the end of an http/1.1 client transaction. But it's wrong
for h2... drop the ah instead if it's the end of a client transaction
on h2.
The %.*s format is very handy for printing strings where you have a
length but no NUL termination. It's quite widely supported, but at
least one vendor RTOS toolchain doesn't have it.
Since there aren't that many uses of it yet, audit all uses and
convert them to a new helper lws_strnncpy(), which uses the smaller of
two lengths.
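A sketch of the conversion pattern (buffer size illustrative):
lws_strnncpy() takes the source length and the destination size, and
copies using whichever is smaller, always NUL-terminating.

  /* before: depends on the toolchain's printf supporting %.*s */
  lwsl_notice("name: %.*s\n", (int)name_len, name);

  /* after: copy into a NUL-terminated stack buffer first */
  char buf[64];

  lws_strnncpy(buf, name, name_len, sizeof(buf));
  lwsl_notice("name: %s\n", buf);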
lws has been able to generate client multipart mime as shown
in minimal-http-client-post, but it requires a lot of user
boilerplate to handle the boundary, the related transaction header,
and the per-part multipart headers.
This patch adds a client creation flag to indicate the connection
will carry multipart mime, which autocreates the boundary string
and applies the transaction header with it, plus an api to
form the boundary headers between the different mime parts
and the terminating boundary.
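A sketch, assuming the flag and api names lws settled on
(LCCSCF_HTTP_MULTIPART_MIME, lws_client_http_multipart()); the user
side reduces to something like:

  i.ssl_connection |= LCCSCF_HTTP_MULTIPART_MIME;

  /* in LWS_CALLBACK_CLIENT_HTTP_WRITEABLE: open a named part,
   * write its content, then pass all NULLs to emit the
   * terminating boundary */
  if (lws_client_http_multipart(wsi, "text", NULL, NULL, &p, end))
          return -1;
  p += lws_snprintf(p, lws_ptr_diff(end, p), "my text field");
  if (lws_client_http_multipart(wsi, NULL, NULL, NULL, &p, end))
          return -1;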
This affects the max header size, since we use the latter half
of the pt_serv_buf to prepare the (possibly huge) auth token.
Adapt pt_serv_buf_size in the hugeurl example accordingly.
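For example, at context creation time (value illustrative):

  info.pt_serv_buf_size = 8192; /* latter half must hold the
                                 * (possibly huge) auth token */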
Some servers set the tx credit to the absolute max and then add to it... this is illegal
(and checked for in h2spec). Add a quirk flag that works around it by reducing the
initial tx credit size by a factor of 16.
This shouldn't be necessary; the END_HEADERS flag alone should be
enough. But nghttp2 will not talk to us unless we also end the stream
from our side. Unfortunately, ending the stream at the time we send
the headers means we cannot support the long poll half-close scheme.
So add a quirk flag to optionally support this nghttp2 behaviour when
the client is creating the connection.
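Both of the above quirks are opt-in per client connection via the
connect info flags; a sketch (flag names as in lws,
LCCSCF_H2_QUIRK_OVERFLOWS_TXCR and LCCSCF_H2_QUIRK_NGHTTP2_END_STREAM):

  /* server adds to an already-maximum initial tx credit */
  i.ssl_connection |= LCCSCF_H2_QUIRK_OVERFLOWS_TXCR;
  /* peer (eg, nghttp2) requires END_STREAM along with the headers */
  i.ssl_connection |= LCCSCF_H2_QUIRK_NGHTTP2_END_STREAM;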
h1 and h2 have a bunch of code supporting autobinding outgoing client
connections to be streams in, or queued as pipelined on, an existing
single network connection, if it's to the same endpoint.
Adapt this http-specific code and active connection tracking to be
usable for generic muxable protocols in the same way.
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
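A sketch of how the validity timing is expressed (assuming the
lws_retry_bo_t fields and the retry_and_idle_policy hookup):

  static const lws_retry_bo_t retry = {
          /* if nothing has confirmed the connection is valid in
           * both directions for 30s, ask for proof (eg, PING) */
          .secs_since_valid_ping   = 30,
          /* if still nothing by 40s, hang up the connection */
          .secs_since_valid_hangup = 40,
  };

  info.retry_and_idle_policy = &retry;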
Clean out periodic role callback and replace the last two role users
with discrete lws_sul for each pt.
The logic was already correct, but add helpers to isolate and
deduplicate the processing for adding and closing a generically
immortal stream.
Change the default 31s h2 network connection timeout to be settable
by .keepalive_timeout if nonzero.
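For example (value illustrative):

  info.keepalive_timeout = 60; /* h2 network connection idles out
                                * after 60s instead of the 31s
                                * default */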
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
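A sketch of the two sides (client api name as in lws,
lws_h2_client_stream_long_poll_rxonly()):

  /* client: after the stream is established, send a 0-byte DATA
   * with END_STREAM and become immortal / read-only */
  if (lws_h2_client_stream_long_poll_rxonly(wsi))
          lwsl_warn("long poll transition failed\n");

  /* server: let half-closed remotes become immortal long poll
   * streams on this vhost */
  info.options |= LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL;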
With http, the protocol doesn't indicate in advance where the headers
end and the next transaction or body begins. Until now, we handled
that for client header response parsing by reading from the tls
buffer bytewise.
This modernizes the code to read in up to 256-byte chunks and parse
each chunk in one hit (the parse API is already set up for doing this
elsewhere).
Now that we have a generic input buflist, adapt the parser loop to go
through that and arrange for any leftovers to be placed on it.
Old certs were getting near the end of their life and we switched the
server to use letsencrypt. The root and intermediate certs needed for
the mbedtls case changed accordingly.
The protocol list is no longer a simple sentinel-terminated
array but is composed at vhost creation time in many
cases. Use the vhost's count of how many protocols it
has rather than seeking up to the sentinel.
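Inside lws, the iteration pattern becomes something like this sketch
(the count member name is assumed):

  /* walk by the vhost's protocol count, not to a NULL sentinel */
  for (n = 0; n < vh->count_protocols; n++)
          if (vh->protocols[n].callback == cb)
                  return &vh->protocols[n];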
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very
detailed information on every read and write, and allowing the user
code to provide a callback to process the events.
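A sketch of hooking it up (the callback member and event type names
here are assumed):

  static int
  my_detlat_cb(struct lws_context *cx, const lws_detlat_t *d)
  {
          /* eg, forward each latency event to a metrics pipeline */
          lwsl_notice("detlat event type %d\n", (int)d->type);

          return 0;
  }

  info.detailed_latency_cb = my_detlat_cb;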
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. When
you are opening client connections but need to carefully manage
latency, another connection blocking the event loop while it does
name resolution becomes a big problem; see the sketch after the
lists below.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's fully integrated with the lws event loop; it does not spawn
  threads or use the libc resolver, and nothing blocks at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes are attached to a list on the first request, rather
  than spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
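The resolver is enabled at cmake time with -DLWS_WITH_SYS_ASYNC_DNS=1
(option name as in lws cmake). No user code change is needed; an
ordinary client connect then resolves on the event loop, eg:

  struct lws_client_connect_info i;

  memset(&i, 0, sizeof(i));
  i.context = context;
  i.address = "warmcat.com"; /* resolved asynchronously, LRU-cached,
                              * TTL observed */
  i.port = 443;
  i.ssl_connection = LCCSCF_USE_SSL;
  i.path = "/";
  i.host = i.address;
  i.origin = i.address;
  if (!lws_client_connect_via_info(&i))
          lwsl_err("client connect failed\n");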