Teach the http client how to handle 303 redirects... these can happen
after a POST where the server side wants you to come back with a GET
to the Location: it mentions.
The lws client will follow the redirect and force GET; this works for
both h1 and h2. The client protocol handler has to act differently
depending on whether it is connecting for the initial POST or the
subsequent GET; it can find out which by checking a new api,
lws_http_is_redirected_to_get(wsi), which returns nonzero if in GET
mode.
The minimal example for server form-post has a new --303 switch to
enable this behaviour there, and the client post example has additions
to check lws_http_is_redirected_to_get().
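For illustration, a minimal sketch of handling the two legs in a client
protocol handler (abbreviated; the body-pending / writeable pattern is
how the minimal POST example drives the body):

#include <libwebsockets.h>

static int
callback_client(struct lws *wsi, enum lws_callback_reasons reason,
		void *user, void *in, size_t len)
{
	switch (reason) {
	case LWS_CALLBACK_CLIENT_APPEND_HANDSHAKE_HEADER:
		/* post-303 leg: lws forced GET, there is no body to send */
		if (lws_http_is_redirected_to_get(wsi))
			break;

		/* initial POST leg: mark the body as pending and ask for
		 * a writeable callback to emit the form body */
		lws_client_http_body_pending(wsi, 1);
		lws_callback_on_writable(wsi);
		break;

	default:
		break;
	}

	return 0;
}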
The %.*s format is very handy for printing strings where you have a
length but no NUL termination. It's quite widely supported, but at
least one vendor RTOS toolchain doesn't have it.
Since there aren't that many uses of it yet, audit all of them and
convert to a new helper, lws_strnncpy(), which uses the smaller of the
two lengths.
https://github.com/warmcat/libwebsockets/issues/1746
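A sketch of the replacement pattern, assuming the helper takes the
source length and the destination size, copies the smaller and always
NUL-terminates:

#include <libwebsockets.h>

static void
log_fragment(const char *frag, size_t frag_len)
{
	char buf[64];

	/* was: lwsl_notice("%.*s\n", (int)frag_len, frag); */
	lws_strnncpy(buf, frag, frag_len, sizeof(buf));
	lwsl_notice("%s\n", buf);
}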
Adding the final CRLF is a NOP at the JSON level, but it can disrupt
hashing of the JSON if the consumer isn't expecting it.
Add flags to the jwk export so it can be controlled... operation
remains unchanged for the old values 0 and 1, but a second flag can be
OR-ed in to control issue of the final CRLF.
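A sketch of the resulting call shape, given a prepared struct lws_jwk
jwk; the LWSJWKF_EXPORT_* flag names are assumptions to check against
the header:

char buf[1024];
int len = sizeof(buf);

/* the old 0 / 1 argument maps to the private-key flag; OR in the
 * no-CRLF flag when the exported JSON is going to be hashed */
if (lws_jwk_export(&jwk, LWSJWKF_EXPORT_PRIVATE | LWSJWKF_EXPORT_NOCRLF,
		   buf, &len) < 0)
	lwsl_err("jwk export failed\n");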
As it is, if time_t is 32-bit on the platform, the arithmetic might
overflow, so force it to lws_usec_t (uint64_t) even though it works OK
here on x86_64.
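The idea, sketched (LWS_US_PER_SEC is the lws microseconds-per-second
constant):

time_t t = time(NULL);

/* promote to 64-bit before scaling to us, so 32-bit time_t
 * platforms cannot overflow the multiplication */
lws_usec_t us = (lws_usec_t)t * (lws_usec_t)LWS_US_PER_SEC;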
Add a minimal example aimed at testing the wsi hrtimer stability
consistently across platforms.
Add hrtimer dump code, disabled by default (this is too expensive and
too specific to internal testing to leave enabled in debug mode, even
if it's not printed). If you hack it enabled, it will dump the sul
list for the pt and assert if the list is disordered.
Generic lws_system IPv4 DHCP client:
- netif and route control via lib/plat apis
- linux plat pieces implemented
- uses a raw IP socket for the UDP broadcast and rx
- security-aware
- the usual stuff plus up to 4 x DNS servers
If it's enabled for the build, it holds the system state at DHCP until
at least one registered interface has acquired a set of IP / mask /
router / DNS server.
It uses PF_PACKET, which is Linux-only atm, but those areas are
isolated into plat code.
TODOs
- lease timing and reacquire
- plat pieces for platforms other than Linux
lws has been able to generate client multipart mime, as shown in
minimal-http-client-post, but it requires a lot of user boilerplate
to handle the boundary, the related transaction header, and the
multipart headers.
This patch adds a client creation flag to indicate the connection will
carry multipart mime, which autocreates the boundary string and
applies the transaction header with it, and an api to form the
boundary headers between the different mime parts and the terminating
boundary.
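A sketch of how the pieces fit together, abbreviated from the POST
example pattern; i is the client connect info, and p / end bound the
output buffer:

/* at client connect time, declare the connection multipart */
i.ssl_connection |= LCCSCF_HTTP_MULTIPART_MIME;

/* when writing the body: boundary plus part headers for one field */
if (lws_client_http_multipart(wsi, "text1", NULL, NULL, &p, end))
	return -1;
p += lws_snprintf(p, lws_ptr_diff(end, p), "value for text1");

/* a NULL field name emits the terminating boundary */
if (lws_client_http_multipart(wsi, NULL, NULL, NULL, &p, end))
	return -1;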
This affects max header size since we use the latter half
of the pt_serv_buf to prepare the (possibly huge) auth token.
Adapt the pt_serv_buf_size in the hugeurl example.
Rather than handle all the switches by hand in each minimal example,
add a helper that knows some "builtin" ones, like -d, and others that
set context options you might want to use in any example.
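A sketch of the intended usage in an example's main(); the helper
names follow the minimal examples:

#include <libwebsockets.h>
#include <string.h>

int
main(int argc, const char **argv)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof info);

	/* consumes common switches like -d <loglevel> and sets the
	 * corresponding context options on info */
	lws_cmdline_option_handle_builtin(argc, argv, &info);

	/* example-specific switches are still handled individually */
	if (lws_cmdline_option(argc, argv, "--303"))
		lwsl_user("303 redirect mode enabled\n");

	return 0;
}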
Introduce a generic lws_state object with notification handlers
that may be registered in a chain.
Implement one of those in the context to manage the "system state".
Allow other pieces of lws and user code to register notification
handlers on a context list. Handlers can object to a state change, or
take over responsibility for moving it forward and retrying, if they
know that some dependent action must succeed first.
For example, if the system time is invalid, we cannot move on to a
state where anything can do tls until that has been corrected.
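A sketch of a registered notifier along these lines; the LWS_SYSTATE_*
name and the register_notifier_list info member follow the lws_system
scheme, and time_is_synced() is a hypothetical app function:

extern int time_is_synced(void); /* hypothetical app function */

static int
app_notify_cb(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
	      int current, int target)
{
	/* object to moving past "time valid" until we really have time */
	if (target == LWS_SYSTATE_TIME_VALID && !time_is_synced())
		return 1; /* nonzero holds the state change for now */

	return 0;
}

static lws_state_notify_link_t nl = {
	.notify_cb	= app_notify_cb,
	.name		= "app",
};
static lws_state_notify_link_t * const app_notifier_list[] = { &nl, NULL };

/* at context creation: info.register_notifier_list = app_notifier_list; */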
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
Clean out the periodic role callback and replace the last two role
users with discrete lws_sul for each pt.
It was already correct, but add helpers to isolate and deduplicate
the processing for adding and closing a generically immortal stream.
Change the default 31s h2 network connection timeout to be settable
by .keepalive_timeout if nonzero.
Add a public api allowing a client h2 stream to transition to
half-closed LOCAL (by sending a 0-byte DATA with END_STREAM) and
mark itself as immortal to create a read-only long-poll stream
if the server allows it.
Add a vhost server option flag LWS_SERVER_OPTION_VH_H2_HALF_CLOSED_LONG_POLL
which allows the vhost to treat half-closed remotes as immortal long
poll streams.
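Client-side, the transition looks something like this; treat the api
name as an assumption to verify against the header:

/* on an established client h2 stream: send a 0-byte DATA with
 * END_STREAM, go half-closed (local), immortal and rx-only */
if (lws_h2_client_stream_long_poll_rxonly(wsi))
	lwsl_err("%s: long poll transition failed\n", __func__);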
The old certs were getting near the end of their life, and we switched
the server to use letsencrypt. The root and intermediate certs needed
for the mbedtls case changed accordingly.
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very
detailed information on every read and write, and allowing user code
to provide a callback to process the events.
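A rough sketch of hooking the callback at context creation; the type
and member names used here (lws_detlat_t, info.detailed_latency_cb)
are assumptions from this era of the api, not guaranteed:

static int
my_detlat_cb(struct lws_context *context, const lws_detlat_t *d)
{
	/* each read / write event arrives here with its timings;
	 * aggregate or forward them as the application likes */
	return 0;
}

/* at context creation:
 *	info.detailed_latency_cb = my_detlat_cb;
 */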
This adds the option to have lws do its own dns resolution on the
event loop, without blocking. Existing implementations get the name
resolution done by the libc, which is blocking. In the case where you
are opening client connections but need to carefully manage latency,
another connection opening and blocking in name resolution becomes a
big problem.
Currently it supports:
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages:
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's genuinely integrated with the lws event loop; it does not
  spawn threads or use the libc resolver, and there is no blocking
  at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes go on a list on the first request, rather than
  spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
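Once it's enabled for the build (cmake -DLWS_WITH_SYS_ASYNC_DNS=1),
nothing changes at the client api level; a normal client connect now
resolves on the event loop instead of blocking in the libc resolver.
A sketch, given a created context:

struct lws_client_connect_info i;

memset(&i, 0, sizeof i);
i.context	 = context;
i.address	 = "libwebsockets.org"; /* resolved asynchronously */
i.host		 = i.address;
i.origin	 = i.address;
i.port		 = 443;
i.path		 = "/";
i.method	 = "GET";
i.ssl_connection = LCCSCF_USE_SSL;

if (!lws_client_connect_via_info(&i))
	lwsl_err("client connect failed\n");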
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate on a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (usually ms is all the platform can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle the vh-protocol timer if LWS_MAX_SMP == 1.
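The public api in use, scheduling a callback 250ms ahead on the event
loop (LWS_US_PER_MS is the lws constant):

static lws_sorted_usec_list_t sul;

static void
sul_cb(lws_sorted_usec_list_t *psul)
{
	/* runs from the event loop; recover the owning object with
	 * lws_container_of() if the sul is embedded in a struct */
	lwsl_notice("sul fired\n");
}

/* from code holding the context, on pt 0 */
lws_sul_schedule(context, 0, &sul, sul_cb, 250 * LWS_US_PER_MS);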
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, also in the common event wait calculation.
Adapt service loops and event libs to use microsecond waits
internally, for hrtimer and sequencer. Reduce granularity
according to platform / event lib wait.
Add a helper so there's a single place to extend it.
Since the messages are queued and then read in order from the event
loop thread, it's not generally safe to pass pointers to argument
structs, since there's no guarantee the thing that sent the message
still exists by the time the sequencer reads it.
This puts pressure on the single void * argument-passed-as-value...
this patch adds a second void * argument-passed-as-value, making it
more practical to put what's needed directly in the arguments.
It's also possible to alloc the argument on the heap and have the sequencer
callback free it after it has read it.
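A sketch with both values packed directly into the pointer arguments;
the queueing api name follows lws_seq_queue_event(), and result_code /
request_index are hypothetical:

/* both pointers travel by value with the queued message, so small
 * results can ride in the arguments themselves rather than pointing
 * at state whose owner may be gone when the sequencer reads them */
lws_seq_queue_event(seq, LWSSEQ_USER_BASE,
		    (void *)(intptr_t)result_code,	/* arg 1 */
		    (void *)(intptr_t)request_index);	/* arg 2 */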
Add a generic table-based backoff scheme and a helper to track the
try count and calculate the next delay in ms.
Allow lws_sequencer_t to be given one of these at creation time...
since the number of creation args was getting a bit too much, convert
creation to use an info struct at the same time.
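A sketch of the table and helper shape; names follow the lws retry
scheme, but treat the exact signature as something to verify:

/* the ms table is walked once per try; conceal_count bounds how many
 * failures are concealed (retried) before giving up */
static const uint32_t backoff_ms[] = { 1000, 2000, 3000, 4000, 5000 };

static const lws_retry_bo_t retry = {
	.retry_ms_table	      = backoff_ms,
	.retry_ms_table_count = LWS_ARRAY_SIZE(backoff_ms),
	.conceal_count	      = 10,
};

/* per attempt: ctry is the caller's persistent try counter */
uint16_t ctry = 0;
unsigned int ms = lws_retry_get_delay_ms(context, &retry, &ctry, NULL);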
Travis seems to be restricting the number of outgoing connections, or
the rate of them... we have been using 10 concurrent and 100 total
connections:
[2019/08/02 09:26:22:7950] USER: callback_minimal_spam: established (try 10, est 8, closed 0, err 0)
[2019/08/02 09:26:22:8041] USER: callback_minimal_spam: established (try 10, est 9, closed 0, err 0)
[2019/08/02 09:26:23:0098] USER: callback_minimal_spam: reopening (try 11, est 10, closed 1, err 0)
[2019/08/02 09:26:23:0105] USER: callback_minimal_spam: reopening (try 12, est 10, closed 2, err 0)
[2019/08/02 09:26:23:0111] USER: callback_minimal_spam: reopening (try 13, est 10, closed 3, err 0)
...
[...] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 25, est 14, closed 14, err 2)
[2019/08/02 09:26:44:6125] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 26, est 14, closed 14, err 3)
[2019/08/02 09:26:44:6129] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 27, est 14, closed 14, err 4)
[2019/08/02 09:26:44:6133] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 28, est 14, closed 14, err 5)
[2019/08/02 09:26:44:6137] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 29, est 14, closed 14, err 6)
[2019/08/02 09:26:45:6152] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 30, est 14, closed 14, err 7)
[2019/08/02 09:26:45:6163] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 31, est 14, closed 14, err 8)
[2019/08/02 09:26:45:6168] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 32, est 14, closed 14, err 9)
[2019/08/02 09:26:45:6174] ERR: CLIENT_CONNECTION_ERROR: closed before established (try 33, est 14, closed 14, err 10)
[2019/08/02 09:26:47:0635] USER: callback_minimal_spam: established (try 34, est 14, closed 14, err 10)
Reduce to 3 concurrent / 15 total to see if it helps travis get over
the hump.
The logic in the insertion and deletion loops for the mini mode, where
the max fds in the pt is forced lower than the ulimit, was not quite
right.
It showed up as a hard-to-reproduce problem on travis with the ws
client spam test, which uses the mini mode. This should fix the root
cause.