This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
name resolution done by the libc, which is blocking. If you are
opening client connections but need to carefully manage latency,
another connection blocking the event loop while it resolves a
name becomes a big problem.
Currently it supports:
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 (::ffff:1.2.3.4) A record promotion for ipv6
- a single nameserver, queried over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages:
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's fully integrated with the lws event loop; it does not spawn
  threads or use the libc resolver, and never blocks
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the
  first completes are attached to a list on the first request, rather
  than spawning duplicate queries)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
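As a rough usage sketch (the function and callback names follow the
shape of the new api but should be treated as illustrative; check the
async dns public header for the exact signatures):

#include <libwebsockets.h>
#include <netdb.h>

/* illustrative callback: receives the resolved addrinfo list, or NULL */
static struct lws *
resolved_cb(struct lws *wsi, const char *ads, const struct addrinfo *ai,
	    int n, void *opaque)
{
	if (!ai)
		lwsl_err("dns lookup for %s failed\n", ads);
	/* ... otherwise connect using the results in ai ... */

	return wsi;
}

static void
start_lookup(struct lws_context *context)
{
	/* kick off a non-blocking A record lookup on the event loop */
	lws_async_dns_query(context, 0, "warmcat.com", LWS_ADNS_RECORD_A,
			    resolved_cb, NULL, NULL);
}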
Improve the code around stash, getting rid of the strdups for a net
code reduction. Remove the special destroy helper for stash since
it becomes a one-liner.
Trade several stack allocations in the client reset function for a single
sized, short-lived heap allocation, reducing peak stack usage by around
700 bytes.
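The pattern is straightforward; a minimal sketch with illustrative names:

#include <libwebsockets.h>

struct reset_tmp {	/* illustrative: was several large stack arrays */
	char host[256];
	char path[256];
	char method[64];
};

int
reset_example(void)
{
	struct reset_tmp *t = lws_malloc(sizeof(*t), __func__);

	if (!t)
		return -1;

	/* ... use t->host etc where the stack arrays were used ... */

	lws_free(t);

	return 0;
}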
wsi timeout, wsi hrtimer, sequencer timeout and vh-protocol timer
all now participate in a single sorted us list.
The whole idea of polling wakes is thrown out; poll waits ignore the
timeout field and always use infinite timeouts.
Introduce a public api that can schedule its own callback from the event
loop with us resolution (although usually ms is the best the platform
can do).
Upgrade timeouts and sequencer timeouts to also be able to use us resolution.
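A sketch of using the new api, assuming the lws_sul names described
below (lws_sorted_usec_list_t, lws_sul_schedule()):

#include <libwebsockets.h>

struct mystruct {
	lws_sorted_usec_list_t sul;	/* must outlive the pending schedule */
	struct lws_context *context;
};

static void
sul_cb(lws_sorted_usec_list_t *sul)
{
	struct mystruct *m = lws_container_of(sul, struct mystruct, sul);

	lwsl_notice("%s: fired\n", __func__);

	/* reschedule ourselves 500ms (expressed in us) from now */
	lws_sul_schedule(m->context, 0, &m->sul, sul_cb, 500 * LWS_US_PER_MS);
}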
Introduce a prepared fakewsi in the pt, so we don't have to allocate
one on the heap when we need it.
Directly handle vh-protocol timer if LWS_MAX_SMP == 1
There are quite a few linked-lists of things that want events after
some period. This introduces a type binding an lws_dll2 for the
list and a lws_usec_t for the duration.
The wsi timeouts, the hrtimer and the sequencer timeouts are converted
to use these, and they also participate in the common event wait
calculation.
lws_dll2 removes the downsides of lws_dll and adds new features like a
running member count and an explicit owner type... it's cleaner and more
robust (eg, nodes know their owner, so they can casually switch between
list owners and remove themselves without the caller needing to know
the owner).
This deprecates lws_dll, but since it's public, it can still be built
for the 4.0 release by giving cmake LWS_WITH_DEPRECATED_LWS_DLL.
All remaining internal users of lws_dll are migrated to lws_dll2.
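For example (owner and node live wherever suits the caller; only the
lws_dll2_t member is needed per list a node can join):

#include <libwebsockets.h>
#include <string.h>

struct item {
	lws_dll2_t list;	/* node records its owner once added */
	int payload;
};

static void
dll2_example(void)
{
	lws_dll2_owner_t owner;
	struct item a, b;

	memset(&owner, 0, sizeof(owner));
	memset(&a, 0, sizeof(a));
	memset(&b, 0, sizeof(b));

	lws_dll2_add_tail(&a.list, &owner);
	lws_dll2_add_tail(&b.list, &owner);

	/* the owner keeps a running member count */
	lwsl_notice("members: %d\n", (int)owner.count);

	/* nodes know their owner, so removal needs no owner argument */
	lws_dll2_remove(&a.list);
}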
Until now we parsed HEAD requests but didn't properly fulfil them.
This adds enough support that if the request points to a valid mount,
lws will send the headers and complete the transaction without
sending the body.
Test with
$ (echo -n -e "HEAD / HTTP/1.0\r\nHost: default\r\n\r\n"; sleep 2) | nc 127.0.0.1 7681
It's legal and does something important: if the upgrade fails and the
connection stays in http, it describes how the connection should be
handled after the error code is sent. But most ws servers can't cope
with it...
https://github.com/warmcat/libwebsockets/issues/1625
"dead bodies" that were sent but not processed by lws as server
will clog up and destroy transaction tracking if repeated POSTs
with keepalive are sent to nonexistant paths.
This patch introduces a DISCARD_BODY state that follows BODY
except the payload is not signalled to the protocol callback.
Calling transaction_completed() with pending body makes lws
enter DISCARD_BODY and retry transaction completion only after
the pending body is exhausted.
When creating a vhost whose port is already bound by another process,
this flag allows user code to choose to have lws_create_vhost() fail
and return a NULL pointer.
https://github.com/warmcat/libwebsockets/issues/1571
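A sketch of opting in from user code (assuming the flag introduced
here is named LWS_SERVER_OPTION_FAIL_UPON_UNABLE_TO_BIND):

#include <libwebsockets.h>
#include <string.h>

static struct lws_vhost *
try_create_vhost(struct lws_context *context)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	info.port = 7681;
	/* fail and return NULL instead of falling back when the
	 * port is already bound by another process
	 */
	info.options = LWS_SERVER_OPTION_FAIL_UPON_UNABLE_TO_BIND;

	return lws_create_vhost(context, &info);
}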
Although the code exists for non-tls h1 upgrade to h2, it hasn't been
looked after, since all expected uses of h2 are via alpn.
This patch aligns its upgrade actions with the alpn upgrade path so it
works OK via
$ curl --http2 http://localhost:7681/ -v -w "\n"
ie, without tls. Operation via tls is unaffected.
To use the non-tls upgrade path, you have to be listening without tls,
ie, with the test server run without -s. If you're listening in a way
that requires tls, this can't be used to bypass that (or, eg, client
certs), since you have to be able to talk to it in h1 in the first place
to attempt the upgrade to h2.
The common h2 path has some code that drops the ah unconditionally,
apparently after the first service... this is too aggressive, since the
first thing coming on the upgrade path is WINDOW_UPDATE. It looks wrong
anyway; transaction / stream completion will drop the ah and should be
enough.
Generic sessions has been overdue for some love to align it with
the progress in the rest of lws.
1) Strict Content Security Policy
2) http2 compatibility
3) fixes and additions for use in a separate process via unix domain socket
4) work on ws and http proxying in lws
5) add minimal example
- prioritize user-defined mimetypes over predefined server mimetypes
  (see the sketch after this list).
- fix accessing memory outside string bounds.
- prefer case-insensitive comparison for extension matching.
- other minor fixes and improvements.
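For the mimetype priority, user-defined mappings hang off the mount as
an extra_mimetypes list, eg:

#include <libwebsockets.h>

/* user mapping for .mp4; this wins over the built-in one */
static const struct lws_protocol_vhost_options em = {
	NULL, NULL,
	".mp4",			/* extension, matched case-insensitively */
	"video/mp4"		/* mimetype to serve it with */
};

static const struct lws_http_mount mount = {
	.mountpoint		= "/",
	.origin			= "./mount-origin",
	.def			= "index.html",
	.extra_mimetypes	= &em,
	.origin_protocol	= LWSMPRO_FILE,
	.mountpoint_len		= 1,
};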
This is aimed at optionally allowing a stride to be given
for the parameter name array... this allows lws_struct
metadata to be used directly as the parameter name
array.
Also introduce the option to put all allocations in
an lwsac instead of going via individual lws_malloc() calls.
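An illustrative sketch of the stride idea and of the lwsac option (the
names here are not the real api):

#include <libwebsockets.h>

struct member_meta {
	const char *name;	/* what the parser matches against */
	size_t offset;		/* eg, lws_struct-style metadata */
};

/* with stride == sizeof(struct member_meta), an array of member_meta
 * can be handed to the parser directly as its parameter name array
 */
static const char *
param_name(const void *names, size_t stride, int n)
{
	return *(const char * const *)
			((const char *)names + (size_t)n * stride);
}

static void
lwsac_example(void)
{
	struct lwsac *ac = NULL;
	char *p = lwsac_use(&ac, 128, 0);	/* from the lwsac chunks */

	if (!p)
		return;

	/* ... many small allocations later ... */

	lwsac_free(&ac);	/* everything is freed in one call */
}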
Firefox sends HTTP requests with a "Connection: keep-alive" header.
When lws responds with 401 and a WWW-Authenticate header, Firefox
doesn't show an authentication dialog until the connection is closed.
Adding "Content-Length: 0" solves the problem.
If you're providing a unix socket service that will be proxied / served
by another process on the same machine, the permissions on the listening
unix socket fd have to be managed, so that only something running under
the server credentials can open it.
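A sketch of the vhost setup this relates to (unix_socket_perms is the
assumed name of the related info member, taking an "owner:group" string
applied to the listen fd):

#include <libwebsockets.h>
#include <string.h>

static struct lws_context *
create_unix_listener(void)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	info.options = LWS_SERVER_OPTION_UNIX_SOCK;
	info.iface = "/var/run/mysocket";  /* unix socket path to listen on */
	/* assumed member: owner:group applied to the listening fd, so only
	 * something running under those credentials can open it
	 */
	info.unix_socket_perms = "root:www-data";

	return lws_create_context(&info);
}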