Secure Streams is an optional layer on top of lws that separates policy,
like endpoint selection and tls cert validation, out into a device JSON
policy document.
Code that wants to open a client connection just specifies a streamtype name,
and no longer deals with details like the endpoint, the protocol (!) or anything
else other than payloads and optionally generic metadata; the JSON policy
contains all the details for each streamtype. h1, h2, ws and mqtt client
connections are supported.
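As a sketch of what the client code reduces to (shape follows the lws
minimal-secure-streams example; "mintest" is an assumed streamtype name
that the policy document would define):

#include <libwebsockets.h>

typedef struct myss {
	struct lws_ss_handle	*ss;		/* lws fills this in */
	void			*opaque_data;
	/* ... per-stream application members ... */
} myss_t;

static int
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	/* payloads arrive here; no endpoint / protocol details in sight */
	return 0;
}

static int
myss_state(void *userobj, void *sh, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	myss_t *m = (myss_t *)userobj;

	if (state == LWSSSCS_CREATING)
		/* request the onward connection the policy describes */
		return lws_ss_client_connect(m->ss);

	return 0;
}

static const lws_ss_info_t ssi = {
	.handle_offset		 = offsetof(myss_t, ss),
	.opaque_user_data_offset = offsetof(myss_t, opaque_data),
	.rx			 = myss_rx,
	.state			 = myss_state,
	.user_alloc		 = sizeof(myss_t),
	.streamtype		 = "mintest",	/* everything else is policy */
};

	/* ... after lws_create_context() ... */
	struct lws_ss_handle *h;

	if (lws_ss_create(context, 0, &ssi, NULL, &h, NULL, NULL))
		lwsl_err("%s: ss create failed\n", __func__);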
Logical secure streams outlive any particular connection and support
"nailed-up" connectivity regardless of underlying connection stability.
Adds client support for MQTT QoS0 and QoS1, compatible with AWS IoT.
Supports stream binding where independent client connections to the
same endpoint can mux on a single tcp + tls connection with topic
routing managed internally.
This adds support for POST in both h1 and h2 queues / stream binding.
The previous queueing tried to keep the "leader" wsi that made the
actual connection around and have it act on the transaction queue
tail after it had completed its own transaction.
This refactors it so that instead, the "leader" role moves down the
queue and the queued wsis inherit the fd, SSL * and queue from the
old leader as they take over.
This lets them operate in their own wsi identity directly and gets
rid of all the "effective wsi" checks, which were applied incompletely
and were getting out of hand alongside the separate lws_mux checks for
h2 and other muxed protocols.
This change also allows one wsi at a time to own the transaction for
POST. --post is added as an option to lws-minimal-http-client-multi,
and six extra selftests with POST on h1/h2, pipelined or not and
staggered or not, are added to the CI.
Add selectable event lib support to minimal-http-client-multi and
clean up the context destroy flow, so we can use lws_context_destroy()
from inside a callback to indicate we want to end the event loop, without
using the traditional "interrupted" flag and in a way that works no
matter which event loop backend is being used.
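For example (a minimal sketch; the callback reason used to decide to
exit is just illustrative):

static int
callback_http(struct lws *wsi, enum lws_callback_reasons reason,
	      void *user, void *in, size_t len)
{
	if (reason == LWS_CALLBACK_COMPLETED_CLIENT_HTTP) {
		/* directly ends the event loop, on any event lib backend */
		lws_context_destroy(lws_get_context(wsi));
		return -1;
	}

	return lws_callback_http_dummy(wsi, reason, user, in, len);
}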
FreeRTOS + lwip doesn't support pipe2() or pipe(), so implement a "pipe"
based on two UDP sockets: one listening on 127.0.0.1:54321 and the other
doing a sendto() there of a single byte to interrupt the event loop wait.
Re-use the arrangements for actual pipe fds and the pipe role to deliver
lws_cancel_service() functionality using this.
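A sketch of the technique, using the generic sockets apis that lwip also
provides (port 54321 as above; error handling omitted):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int pipe_read_fd, pipe_write_fd;
static struct sockaddr_in pipe_addr;

static int
udp_pipe_create(void)
{
	memset(&pipe_addr, 0, sizeof(pipe_addr));
	pipe_addr.sin_family	  = AF_INET;
	pipe_addr.sin_port	  = htons(54321);
	pipe_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	/* the "read end" is a UDP socket listening on 127.0.0.1:54321... */
	pipe_read_fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (bind(pipe_read_fd, (struct sockaddr *)&pipe_addr,
		 sizeof(pipe_addr)) < 0)
		return 1;

	/* ...and the "write end" just sends datagrams to it */
	pipe_write_fd = socket(AF_INET, SOCK_DGRAM, 0);

	return 0;
}

/* a single byte sent to the read end wakes the event loop wait */
static void
udp_pipe_kick(void)
{
	char c = 0;

	sendto(pipe_write_fd, &c, 1, 0, (struct sockaddr *)&pipe_addr,
	       sizeof(pipe_addr));
}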
Saw this on travis selftests during context destroy:
==18895== Invalid read of size 8
==18895== at 0x415909: __lws_vhost_destroy2 (vhost.c:1063)
==18895== by 0x40E65B: lws_context_destroy2 (context.c:929)
==18895== by 0x40EBE5: lws_context_destroy (context.c:1128)
==18895== by 0x40CC41: main (minimal-http-client-post.c:267)
==18895== Address 0x6168688 is 728 bytes inside a block of size 792 free'd
==18895== at 0x4C2BDEC: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18895== by 0x45B29E: _realloc (alloc.c:120)
==18895== by 0x45B2D6: lws_realloc (alloc.c:130)
==18895== by 0x415ED7: __lws_vhost_destroy2 (vhost.c:1204)
==18895== by 0x419164: lws_vhost_unbind_wsi (wsi.c:82)
==18895== by 0x41236B: __lws_free_wsi (close.c:154)
==18895== by 0x4134CF: __lws_close_free_wsi_final (close.c:650)
==18895== by 0x4133BA: __lws_close_free_wsi (close.c:610)
==18895== by 0x413528: lws_close_free_wsi (close.c:660)
==18895== by 0x4158C7: __lws_vhost_destroy2 (vhost.c:1053)
==18895== by 0x40E65B: lws_context_destroy2 (context.c:929)
==18895== by 0x40EBE5: lws_context_destroy (context.c:1128)
Removing the last wsi from a vhost we had already started to destroy
finalized the vhost destruction; that finalization path is aimed at libuv
async close cleanup. But if we have already entered __lws_vhost_destroy2,
we will definitely destroy the vhost ourselves at the end of that function
anyway, so defeat the wsi close triggering the finalization.
h1 and h2 have a bunch of code supporting autobinding of outgoing client
connections to be streams in, or queued as pipelined on, the same / existing
single network connection, if it's to the same endpoint.
Adapt this http-specific code and active connection tracking to be usable
for generic muxable protocols in the same way.
Introduce a generic lws_state object with notification handlers
that may be registered in a chain.
Implement one of those in the context to manage the "system state".
Allow other pieces of lws and user code to register notification
handlers on a context list. Handlers can object to a system state
change, or take over responsibility for moving it forward and
retrying it, if they know that some dependent action must succeed first.
For example if the system time is invalid, we cannot move on to
a state where anything can do tls until that has been corrected.
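A sketch of registering such a handler, following the shape of the lws
minimal examples (app_have_valid_time is a hypothetical flag standing in
for a real dependency check):

static int app_have_valid_time;	/* hypothetical dependency */

static int
app_system_state_nf(lws_state_manager_t *mgr, lws_state_notify_link_t *link,
		    int current, int target)
{
	if (target == LWS_SYSTATE_OPERATIONAL && !app_have_valid_time)
		return 1;	/* object: hold the transition for now */

	return 0;
}

static lws_state_notify_link_t nl = {
	{ NULL, NULL, NULL },	/* lws_dll2 list linkage */
	app_system_state_nf,
	"app"
};

static lws_state_notify_link_t * const app_notifier_list[] = { &nl, NULL };

	/* at context creation time:
	 * info.register_notifier_list = app_notifier_list; */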
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
Clean out the periodic role callback and replace the last two role users
with a discrete lws_sul for each pt.
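The validity rules live in the retry / backoff policy; a sketch (the 29s
and 49s figures are just example values):

static const lws_retry_bo_t retry = {
	/* if nothing has confirmed the connection is valid in both
	 * directions for this long, issue a protocol-appropriate PING */
	.secs_since_valid_ping	 = 29,
	/* if validity still can't be confirmed by now, hang up */
	.secs_since_valid_hangup = 49,
};

	/* eg, applied vhost-wide with
	 * info.retry_and_idle_policy = &retry; */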
Remove LWS_LATENCY.
Add the option LWS_WITH_DETAILED_LATENCY, allowing lws to collect very detailed
information on every read and write, and allowing user code to provide
a callback to process the events.
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. If
you are opening client connections but need to carefully manage
latency, another connection blocking while it does its name
resolution becomes a big problem.
Currently it supports:
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages:
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's fully integrated with the lws event loop; it does not spawn
  threads or use the libc resolver, and of course there's no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests for the same name made before the first
  completes go on a list on the first request, rather than spawning
  duplicate requests)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
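Client connections use the resolver implicitly when lws is built with
LWS_WITH_SYS_ASYNC_DNS, but queries can also be made directly; a sketch
(the callback shape is an assumption based on lws-adns.h and should be
treated as illustrative):

static void
adns_cb(struct lws *wsi, const char *ads, const struct addrinfo *result,
	int n, void *opaque)
{
	/* walk the cached addrinfo results for "ads" here */
}

	/* ... */
	lws_async_dns_query(context, 0, "libwebsockets.org",
			    LWS_ADNS_RECORD_A, adns_cb, NULL, NULL);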
lws_dll2 removes the downsides of lws_dll and adds new features like a
running member count and an explicit owner type... it's cleaner and more
robust (eg, nodes know their owner, so they can casually switch between
list owners and remove themselves without the code needing to know the
owner).
This deprecates lws_dll, but since it's public, it can continue to be
built for the 4.0 release if you give cmake LWS_WITH_DEPRECATED_LWS_DLL.
All remaining internal users of lws_dll are migrated to lws_dll2.
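A sketch of the lws_dll2 pattern (the item struct and member names are
illustrative):

typedef struct item {
	lws_dll2_t	list;	/* embedded list node */
	int		payload;
} item_t;

static lws_dll2_owner_t owner;	/* knows head, tail and count */

static void
example(item_t *it)
{
	lws_dll2_add_tail(&it->list, &owner);

	/* the running member count is maintained for us */
	lwsl_notice("count now %u\n", (unsigned int)owner.count);

	/* the node knows its owner: no owner arg needed to remove */
	lws_dll2_remove(&it->list);
}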
If you're providing a unix socket service that will be proxied / served by
another process on the same machine, the fd permissions on the listening
unix socket have to be managed so that only something running under the
server's credentials can open it.
Up until now if you wanted to drop privs, a numeric uid and gid had to be
given in info to control post-init permissions... this adds info.username
and info.groupname where you can do the same using user and group names.
The internal plat helper lws_plat_drop_app_privileges() is updated to directly
use the context instead of info in both ways it can be called, and to be able
to return fatal errors.
All failures to look up names for a uid or gid other than 0 or -1, or to look
up the uid or gid for a given username or groupname, get an error message and
a fatal exit.
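A sketch of using the new fields ("www-data" is an assumed account name):

	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	info.username  = "www-data";	/* post-init user, looked up by name */
	info.groupname = "www-data";	/* post-init group, looked up by name */
	/* ... other members, then lws_create_context(&info) ... */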
lws has been able to proxy h2 or h1 inbound connections to an
h1 onward connection for a while now. It's simple to use: just
build with LWS_WITH_HTTP_PROXY and make a mount where the origin
is the onward connection details. Unix sockets can also be
used as the onward connection.
This patch extends the support to be able to also do the same for
inbound h2 or h1 ws upgrades to an h1 ws onward connection as well.
This allows you to offer completely different services in a
common URL space, including ones that connect back by ws / wss.
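A sketch of such a proxy mount (the origin host is an assumed
placeholder; build with LWS_WITH_HTTP_PROXY):

static const struct lws_http_mount mount = {
	.mountpoint		= "/proxytest",
	.mountpoint_len		= 10,		/* strlen("/proxytest") */
	.origin			= "example.com/",
	.origin_protocol	= LWSMPRO_HTTPS,
};

	/* hooked up at vhost creation with info.mounts = &mount;
	 * inbound h1 / h2 and ws upgrades under /proxytest are
	 * proxied to the origin */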
If you have multiple vhosts with client contexts enabled, under
OpenSSL each one brings in the system cert bundle.
On libwebsockets.org, there are many vhosts and the waste adds up
to about 9MB of heap.
This patch makes a sha256 of the client context configuration and, if a
suitable client context already exists on another vhost, bumps a refcount
and reuses that client context.
In the case the client contexts are configured differently, a new one is
created (and becomes available for reuse as well).
info.protocols works okay, but it has an annoying problem... you have to know
the type of each protocol's pss at the top level of the code, so you can set
the per_session_data_size in each struct lws_protocols for it.
Lws already rewrites the protocol tables for a vhost in the case of runtime
protocol plugins... this adapts that already-existing code slightly to give
a new optional way to declare the protocol array.
Everything works as before by default, but now info.protocols may be NULL and
info.pprotocols defined instead (if that's also NULL, as it will be if you
just ignore it after memsetting to 0, then it continues to fall back to the
dummy protocol handler as before).
info.pprotocols is a NULL-terminated array of pointers to struct lws_protocols.
This can be composed at the top level of your code without knowing
anything except the name of the externally-defined struct lws_protocols
instance(s).
The minimal example http-server-dynamic is changed to use the new scheme as
an example.
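A sketch of the new declaration style (the protocol names are
illustrative; the structs live in the files that know their pss types):

extern const struct lws_protocols protocol_http;	/* defined elsewhere */
extern const struct lws_protocols protocol_dumb_increment;

static const struct lws_protocols *pprotocols[] = {
	&protocol_http,
	&protocol_dumb_increment,
	NULL	/* NULL-terminated */
};

	/* ... */
	info.pprotocols = pprotocols;	/* info.protocols left NULL */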