The various stream state transitions for direct ss, SSPC, smd and the
different protocols are all handled in different code. Let's stop
hoping for the best and add a state transition validation function
that is used everywhere we pass a state change to a user callback,
and that knows what is valid for the user state() callback to see
next, given the last state it was shown.
Let's assert if lws manages to violate that, so we can find where the
problem is, and provide a stricter guarantee about what the user
state handler will see, no matter whether it is ss, sspc or one of
the other cases.
To facilitate that, move the states to start from 1, where
0 indicates the state unset.
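A minimal sketch of the shape of such a validator (the helper name and
the exact legal-transition table here are illustrative, not the real
lws internals; it needs libwebsockets.h and assert.h):

static int
check_next_state(lws_ss_constate_t *last, lws_ss_constate_t next)
{
        switch (*last) {
        case 0: /* 0 == state unset: only CREATING may be seen first */
                if (next != LWSSSCS_CREATING)
                        goto bad;
                break;
        case LWSSSCS_CREATING:
                if (next != LWSSSCS_CONNECTING &&
                    next != LWSSSCS_POLL &&
                    next != LWSSSCS_DESTROYING)
                        goto bad;
                break;
        /* ... one case per state, listing its legal successors ... */
        default:
                break;
        }
        *last = next;

        return 0;

bad:
        assert(0); /* lws violated the transition map: find out where */
        return 1;
}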
Let's add a byte on the first message that sspc clients send,
indicating the version of the serialization protocol that the
client was built with.
Start the version at 1; we will add some more changes in other
patches and call v1 (now that it has the versioning baked in) the
first real supported serialization version. This patch must be
applied together with the next patches to actually represent the v1
protocol changes.
This doesn't require any user setting: the client is told what
version it supports via LWS_SSS_CLIENT_PROTOCOL_VERSION. The proxy
knows what version(s) it can support and loudly hangs up on the
client if it doesn't understand the client's protocol version.
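Conceptually, the proxy-side check amounts to something like this (an
illustrative sketch only; MY_SUPPORTED_SER_VERSION and the function are
hypothetical and this is not the real lws-sspc wire handling):

#include <libwebsockets.h>

#define MY_SUPPORTED_SER_VERSION 1 /* hypothetical stand-in */

/* validate the version byte prepended to the client's first message */
static int
proxy_check_client_version(const uint8_t *buf, size_t len)
{
        if (!len || buf[0] != MY_SUPPORTED_SER_VERSION) {
                lwsl_err("%s: unsupported sspc serialization version %u\n",
                         __func__, (unsigned int)(len ? buf[0] : 0));

                return -1; /* caller loudly hangs up on the client */
        }

        return 0;
}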
This is a huge patch that should be a global NOP.
For unix-type platforms it enables -Wconversion to issue warnings
(promoted to errors) for all implicit casts that seem less than ideal
but are normally concealed by the toolchain.
This covers things like passing an int to a size_t argument. Once it
was enabled, I went through all the args on my default build (which
builds most things) and tried to make the formerly implicit casts
explicit.
With that approach it neither changes nor bloats the code, since it
compiles to whatever it was doing before, just with the casts made
explicit... in a few cases I changed some length args from int to
size_t but largely left the underlying causes alone.
From now on, new code that relies on less than ideal casting will
trigger warnings and nudge me to improve it.
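For example, something like the following, which the toolchain
previously accepted silently, now needs the cast spelled out
(illustrative):

#include <string.h>

/* before: the int length is implicitly converted to size_t, which
 * -Wconversion now flags because the signedness changes */
static void
copy_n(char *dst, const char *src, int n)
{
        memcpy(dst, src, n);
}

/* after: same generated code, but the conversion is explicit */
static void
copy_n_explicit(char *dst, const char *src, int n)
{
        memcpy(dst, src, (size_t)n);
}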
This adds some new objects and helpers for keeping and logging info
on grouped allocations; a group is, eg, SS handles or client wsis.
Allocated objects get a context-unique "tag" string intended to replace
%p / wsi pointers etc. Pointers quickly become confusing when
allocations are freed and reused; the tag string won't repeat until
you have produced 2^64 objects in a context.
In addition the tag string documents the object group, with prefixes
like "wsi-" or "vh-", and contains object-specific additional
information like the vhost name, address / port or the role of the wsi.
At creation time the lws code can use a format string and args
to add whatever group-specific info makes sense, eg, a wsi bound
to a secure stream can also append the guid of the secure stream;
it is copied into the new object's tag and so is still available
cleanly after the stream is destroyed, if the wsi outlives it.
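A hypothetical sketch of the idea behind the tags (the real lws helpers
and tag layout may differ):

#include <stdarg.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t tag_ordinal; /* per lws_context in reality */

/* compose "wsi-27|h1|warmcat.com|443" - style tags: group prefix,
 * context-unique ordinal, then whatever per-object info is useful */
static void
make_tag(char *tag, size_t len, const char *group, const char *fmt, ...)
{
        va_list ap;
        int n;

        n = snprintf(tag, len, "%s%llu|", group,
                     (unsigned long long)++tag_ordinal);
        if (n < 0 || (size_t)n >= len)
                return;

        va_start(ap, fmt);
        vsnprintf(tag + n, len - (size_t)n, fmt, ap);
        va_end(ap);
}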
For the LWSSSCS_UNREACHABLE state, the additional ord arg has b0 set
if the reason for the unreachability is that the DNS server itself was
not reachable (implying either that the DNS server is wrongly set, or
that it is not reachable due to a lack of connectivity through to it).
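In the user state callback that can be picked up like this (a sketch
using the standard SS state callback shape):

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
           lws_ss_tx_ordinal_t ack)
{
        if (state == LWSSSCS_UNREACHABLE) {
                if (ack & 1)
                        /* b0 set: the DNS server itself was unreachable */
                        lwsl_warn("unreachable: DNS server not reachable\n");
                else
                        lwsl_warn("unreachable: endpoint not reachable\n");
        }

        return LWSSSSRET_OK;
}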
Since client_connect and request_tx can be called from code that expects
the ss handle to be in scope, these calls can't deal with destroying the
ss handle and must pass the lws_ss_state_return_t disposition back to
the caller to handle.
C++ APIs wrapping SS client
These are intended to provide an experimental protocol-independent C++
api even more abstracted than secure streams, along the lines of
"wget -Omyfile https://example.com/thing"
WIP
Teach lws how to deal with date: and retry-after:
Add a quick selftest into api-test-lws_tokenize.
Expand lws_retry_sul_schedule_retry_wsi() to check for retry_after and
increase the backoff if a larger one is found.
Finally, change SS h1 protocol to handle 503 + retry-after: as a
failure, and apply any increased backoff from retry-after
automatically.
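The backoff adjustment amounts to preferring the server's value when it
is larger, roughly (a sketch, not the actual lws code):

/* pick the larger of the policy's computed backoff and any
 * retry-after: the server supplied, both expressed in ms here */
static unsigned int
choose_backoff_ms(unsigned int policy_ms, unsigned int retry_after_secs)
{
        if (retry_after_secs && retry_after_secs * 1000u > policy_ms)
                return retry_after_secs * 1000u;

        return policy_ms;
}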
This adds a per-streamtype JSON mapping table in the policy.
In addition to the previous flow, it lets you generate custom
SS state notifications for specific http response codes, eg:
"http_resp_map": [ { "530": 1530 }, { "531": 1531 } ],
It's not recommended to overload the transport-layer response
code with application layer responses. It's better to return
a 200 and then in the application protocol inside http, explain
what happened from the application perspective, usually with
JSON. But this is designed to let you handle existing systems
that do overload the transport layer response code.
SS states for user use start at LWSSSCS_USER_BASE, which is
1000.
You can do a basic test with minimal-secure-streams and the --respmap
flag; this will go to httpbin.org and get a 404, and the warmcat.com
policy has a mapping for 404 -> LWSSSCS_USER_BASE (1000).
Since the mapping emits states, these are serialized and handled
like any other state in the proxy case.
The policy2c example / tool is also updated to handle the additional
mapping tables.
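In user code, the mapped state arrives at the state callback like any
other (sketch):

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
           lws_ss_tx_ordinal_t ack)
{
        if (state == LWSSSCS_USER_BASE)
                /* 1000: mapped from the HTTP 404 by "http_resp_map" */
                lwsl_user("got the custom-mapped 404 state\n");

        return LWSSSSRET_OK;
}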
At the moment you can define and set per-stream metadata at the client,
which will be string-substituted and if configured in the policy, set in
related outgoing protocol specific content like h1 headers.
This patch extends the metadata concept to also check incoming protocol-
specific content like h1 headers and, where it matches the binding in
the streamtype's metadata entry, make it available to the client by
name via a new lws_ss_get_metadata() api.
Currently warmcat.com has additional headers for
server: lwsws (well-known header name)
test-custom-header: hello (custom header name)
The minimal-secure-streams test is updated to try to recover both of
these, in the direct and the -client (via proxy) versions. The
corresponding metadata part of the "mintest" stream policy from
warmcat.com is
        {
                "srv": "server:"
        }, {
                "test": "test-custom-header:"
        },
If built direct, or at the proxy, the stream has access to the static
policy metadata definitions and can store the rx metadata in the stream
metadata allocation, with a heap-allocated value. For a client side that
talks to a proxy, only the proxy knows the policy, and it returns rx
metadata inside the serialized link to the client, which stores it on
the heap attached to the stream.
In addition an optimization for mapping static policy metadata definitions
to individual stream handle metadata is changed to match by name.
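Recovering the value in client code then looks roughly like this (a
sketch, assuming the lws_ss_get_metadata() signature of handle, metadata
name, value pointer and length):

static void
dump_rx_metadata(struct lws_ss_handle *h)
{
        const void *value;
        size_t len;

        /* "test" is bound to test-custom-header: in the policy above */
        if (!lws_ss_get_metadata(h, "test", &value, &len))
                lwsl_user("test-custom-header: %.*s\n", (int)len,
                          (const char *)value);
}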
Formalize the LWSSSSRET_ enums into a type "lws_ss_state_return_t"
returned by the rx, tx and state callbacks, and some private helpers
lws_ss_backoff() and lws_ss_event_helper().
Remove the LWSSSSRET_SS_HANDLE_DESTROYED concept... the two helpers that
could have destroyed the ss and returned that now return
LWSSSSRET_DESTROY_ME to the caller, to perform or pass up to their own
caller instead.
Handle helper returns in all the ss protocols and update the rx / tx
calls to have their returns from rx / tx / event helper and ss backoff
all handled by unified code.
Add initial support for defining servers using Secure Streams
policy and api semantics.
Serving h1, h2 and ws should be functional; the new minimal example
shows a combined http + SS server with an incrementing ws message shown
in the browser over tls, in around 200 lines of user code.
NOP out anything to do with plugins; they're not currently used.
Update the docs correspondingly.
You may use separate rx or tx handlers to neatly isolate different
rx or tx state handling; for example, if the connection enters some
mode where you may send a variety of possibly large things, it can
be advantageous to have different code handling each of the
different things.
This allows you to change the rx, tx and / or state handlers to
different ones suitable for the user protocol state, if that is helpful.
With upcoming SS Server support, this has another use: when SS
indicates that the underlying protocol upgraded, eg, http -> ws,
you may want to change the handlers for the different sort of
payloads expected after that, according to your user protocol.
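A sketch of how that might look, assuming a setter along the lines of
lws_ss_change_handlers(handle, rx, tx, state) where NULL leaves a
handler unchanged; myss_t, myss_ws_rx and myss_ws_tx are hypothetical
user code:

typedef struct myss {
        struct lws_ss_handle *ss; /* filled in at ss creation */
} myss_t;

static lws_ss_state_return_t myss_ws_rx(void *userobj, const uint8_t *buf,
                                        size_t len, int flags);
static lws_ss_state_return_t myss_ws_tx(void *userobj, lws_ss_tx_ordinal_t o,
                                        uint8_t *buf, size_t *len, int *flags);

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
           lws_ss_tx_ordinal_t ack)
{
        myss_t *m = (myss_t *)userobj;

        if (state == LWSSSCS_SERVER_UPGRADE)
                /* http -> ws: switch to ws-specific payload handlers,
                 * leaving the state handler as it is */
                lws_ss_change_handlers(m->ss, myss_ws_rx, myss_ws_tx, NULL);

        return LWSSSSRET_OK;
}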
Presently a vh is allocated per trust store at policy parsing-time;
this is no problem on a linux-class device, or if you decide you need
a dynamic policy for functionality reasons.
However, if you're in a constrained enough situation that the static
policy makes sense, and your trust stores do not have a 100% duty
cycle, ie, are not always in use, the currently-unused vhosts and their
x.509 stacks are sitting there taking up heap for no immediate benefit.
This patch modifies behaviour in ..._STATIC_POLICY_ONLY so that vhosts and
associated x.509 tls contexts are not instantiated until a secure stream
using them is created; they are refcounted, and when the last logical
secure stream using a vhost is destroyed, the vhost and its tls context
are also destroyed.
If another ss connection is created that wants to use the trust store,
the vhost and x.509 context are regenerated as needed.
Currently the refcounting is by ss; it would also be possible to move the
refcounting to be by connection. The choice is between the delay to
generate the vh being visible at logical ss creation-time, or at
connection-time. It's anyway not preferable to have ss instantiated and
taking up space with no associated connection or connection attempt
underway.
NB you will need to reprocess any static policies after this patch so they
conform to the trust_store changes.
Callbacks can ask the caller to, eg, destroy the ss handle now. But some
callback returns are handled and produced inside other helper apis; eg,
lws_ss_backoff() may have had to fulfil the callback's request to destroy
the ss... therefore it has to signal that to its caller, and its callers
have to check and exit their flow accordingly.
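Inside lws the resulting pattern is simply to bubble the disposition up,
roughly (sketch of the internal flow):

static lws_ss_state_return_t
some_protocol_step(struct lws_ss_handle *h)
{
        lws_ss_state_return_t r = lws_ss_backoff(h);

        if (r != LWSSSSRET_OK)
                return r; /* eg, DESTROY_ME: let our caller perform it */

        /* ... continue the normal flow ... */

        return LWSSSSRET_OK;
}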
This differentiates between client connections for retry / writeable requests
and explicit lws_ss_client_connect() api calls. The former effectively uses
retry / backoff, and the latter resets the retry / backoff.
If you receive ALL_RETRIES_FAILED due to the retry policy, you can do whatever
you need to do there and call lws_ss_client_connect() to try to connect again
with a fresh, reset retry / backoff state.
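For example, in the state callback (a sketch; myss_t holding the ss
handle is hypothetical user code):

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
           lws_ss_tx_ordinal_t ack)
{
        myss_t *m = (myss_t *)userobj;

        if (state == LWSSSCS_ALL_RETRIES_FAILED)
                /* start over with a fresh, reset retry / backoff state;
                 * the returned disposition is handled by the caller */
                return lws_ss_client_connect(m->ss);

        return LWSSSSRET_OK;
}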
- Add low level system message distribution framework
- Add support for local Secure Streams to participate using the _lws_smd streamtype
- Add api test and minimal example
- Add SS proxy support for _lws_smd
See minimal-secure-streams-smd README.md
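For example, any participant can publish a message to a class of
listeners with something along these lines (a sketch; the class choice
and the JSON body are illustrative):

static void
announce_event(struct lws_context *context)
{
        /* smd messages are small JSON blobs multicast to everything
         * registered for the class, including _lws_smd streams and,
         * via the SS proxy, clients in other processes */
        lws_smd_msg_printf(context, LWSSMDCL_SYSTEM_STATE,
                           "{\"myevent\":\"something-happened\"}");
}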
Sometimes we need to find out the substituted length before we can
allocate and actually store it. Teach strexp that if we set the
output buffer to NULL (and the output length to something big) we
are asking for the substituted length and to not produce output.
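Used for sizing an allocation, that might look like the following
sketch (assuming the lws_strexp api shapes; cb and priv stand for
whatever expansion callback and private pointer your code already uses):

#include <libwebsockets.h>
#include <stdlib.h>
#include <string.h>

static char *
expanded_dup(lws_strexp_expand_cb cb, void *priv, const char *in)
{
        size_t used_in, used_out;
        lws_strexp_t exp;
        char *out;

        /* first pass: NULL output buffer asks only for the needed length */
        lws_strexp_init(&exp, priv, cb, NULL, (size_t)-1);
        if (lws_strexp_expand(&exp, in, strlen(in), &used_in, &used_out) !=
            LSTRX_DONE)
                return NULL;

        out = malloc(used_out + 1);
        if (!out)
                return NULL;

        /* second pass: actually produce the substituted output */
        lws_strexp_init(&exp, priv, cb, out, used_out + 1);
        lws_strexp_expand(&exp, in, strlen(in), &used_in, &used_out);
        out[used_out] = '\0';

        return out;
}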
It's not safe to destroy objects inside a callback from a parent that
still has references to the object.
Formalize what the user code can indicate by its return code from the
callback functions and provide the implementations at the parents.
- LWSSSSRET_OK: no action, OK
- LWSSSSRET_DISCONNECT_ME: disconnect the underlying connection
- LWSSSSRET_DESTROY_ME: destroy the ss object
- LWSSSSRET_TX_DONT_SEND: for tx, give up the tx opportunity since there is nothing to send
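As a sketch, user rx / tx handlers exercising these returns look like
this (looks_fatal() and want_reconnect() are hypothetical checks):

static lws_ss_state_return_t
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
        if (looks_fatal(buf, len))
                return LWSSSSRET_DESTROY_ME;    /* parent destroys the ss */

        if (want_reconnect(buf, len))
                return LWSSSSRET_DISCONNECT_ME; /* parent drops the connection */

        return LWSSSSRET_OK;
}

static lws_ss_state_return_t
myss_tx(void *userobj, lws_ss_tx_ordinal_t ord, uint8_t *buf, size_t *len,
        int *flags)
{
        return LWSSSSRET_TX_DONT_SEND;          /* nothing to send right now */
}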
Some streamtypes do not pass or receive payload meaningfully. Allow them
to just leave their related cb NULL. Ditto for state, although I'm not sure
how useful such a streamtype can be.
Adapt the pt sul owner list to be an array, and define two different lists,
one that acts like before and is the default for existing users, and another
that has the ability to cooperate with systemwide suspend to restrict the
interval spent suspended so that it will wake in time for the earliest
thing on this wake-suspend sul list.
Clean the api a bit and add lws_sul_cancel() that only needs the sul as the
argument.
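A sketch of the cleaned-up usage (assuming the post-cleanup signatures):

static lws_sorted_usec_list_t sul_work;

static void
sul_cb(lws_sorted_usec_list_t *sul)
{
        /* periodic work goes here */
}

static void
start_and_stop(struct lws_context *context)
{
        /* schedule the sul 500ms from now on pt 0 */
        lws_sul_schedule(context, 0, &sul_work, sul_cb, 500 * LWS_US_PER_MS);

        /* later, cancelling needs only the sul itself */
        lws_sul_cancel(&sul_work);
}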
Add a flag for client creation info to indicate that this client connection
is important enough that, eg, validity checking it to detect silently dead
connections should go on the wake-suspend sul list. That flag is exposed in
secure streams policy so it can be added to a streamtype with
"swake_validity": true
Deprecate the old vhost timer stuff that predates sul. Add a cmake flag
LWS_WITH_DEPRECATED_THINGS so users can get it back temporarily before
it is removed in v4.2.
Adapt all remaining in-tree users of it to use explicit suls.
There are a few automatic things that look for streamtypes that may or
may not exist now:
- captive_portal_detect
- fetch_policy
- api_amazon_com_auth
Logging them at notice level on every startup is pretty intrusive; change it to info.
For the general OpenSSL case, we leave connection validity to the system
trust store bundle to decide; even for mbedtls, it may have been passed
a bundle externally, and we don't want to have to list the x.509 stack
explicitly for a server we don't have any control over.
Instead of erroring out, allow the case where no trust store is
specified: just use vhost[0] and let the system trust store decide
whether it likes the server's cert or not.
No ABI change.
The endpoint field in streamtype policy may continue to just be the
hostname, like "warmcat.com".
But it's also possible now for it to be a url-formatted string, like, eg,
"https://warmcat.com:444/mailman/listinfo"
If so (ie, if it contains a ':'), then the decoded elements may override
whether tls is enabled, the endpoint address, the port and the url path.
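For example, both of these are now valid endpoint values in a
streamtype's policy entry (illustrative):

        "endpoint": "warmcat.com"
        "endpoint": "https://warmcat.com:444/mailman/listinfo"

The second form switches tls on and sets the address to warmcat.com, the
port to 444 and the url path to /mailman/listinfo.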
No ABI change.