It's already the case that leaving off the "tls_trust_store" member of the
streamtype definition in the policy causes the streamtype to validate its
tls connections via the OS trust store: usually a CA bundle that OpenSSL
has been configured to load automagically at init, but literally the OS
trust store in the Windows case.
Add tests to confirm that.
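For illustration (the streamtype name and endpoint here are just an example),
such a streamtype entry simply has no "tls_trust_store" member:

	"ostrust-example": {
		"endpoint": "warmcat.com",
		"port": 443,
		"protocol": "h1",
		"http_method": "GET",
		"http_url": "index.html",
		"tls": true
	}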
If the larger application is defining vhosts using lejp-conf JSON, it's
often more convenient to describe the vhost the ss server binds to in that
JSON as well.
If the server policy endpoint (usually used to describe the server
interface bind) begins with '!', take the remainder of the endpoint
string as the name of a preexisting vhost to bind the ss server to at
creation time.
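For example (the vhost name here is illustrative), the streamtype's endpoint
in the policy would then look like:

	"endpoint": "!my-existing-vhost"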
This provides a way to get hold of LWS_WITH_CONMON telemetry from Secure
Streams; it works the same with direct onward connections or via the proxy.
You can mark streamtypes with a "perf": true policy attribute... this
causes the onward connections on those streamtypes to collect information
about the connection performance, and the unsorted DNS results.
Streams with that policy attribute receive extra data in their rx callback,
with the LWSSS_FLAG_PERF_JSON flag set on it, containing JSON describing the
performance of the onward connection taken from the CONMON data. Streams
without the "perf" attribute set never receive this extra rx.
The received JSON is based on the CONMON struct info and looks like
{"peer":"46.105.127.147","dns_us":596,"sockconn_us":31382,"tls_us":28180,"txn_resp_us:23015,"dns":["2001:41d0:2:ee93::1","46.105.127.147"]}
A new minimal example minimal-secure-streams-perf is added that collects
this data on an HTTP GET from warmcat.com; if
LWS_WITH_SECURE_STREAMS_PROXY_API is set, a -client version is also built,
which operates via the ss proxy and produces the same result at the client.
Add .proxy_buflen_rxflow_on_above / .proxy_buflen_rxflow_off_below policy
streamtype options and manage rx flow control for the onward ss wsi
according to how much is buffered in the dsh for the remote client.
The client_buflen_rxflow_... options are defined but not wired up yet.
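Eg, in the streamtype policy (the threshold values here are only
illustrative, proxy_buflen shown for context):

	"proxy_buflen": 32768,
	"proxy_buflen_rxflow_on_above": 24576,
	"proxy_buflen_rxflow_off_below": 8192,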
Let's allow the proxy to pass back what the policy says about
the size of dsh buffer the client side of this streamtype
should have.
Defer client-side dsh generation until we have the info back from the
proxy in the response to the initial packet. If it's zero / unset in
the policy, just go with 32KB.
This adds a per-streamtype JSON mapping table in the policy.
In addition to the previous flow, it lets you generate custom
SS state notifications for specific http response codes, eg:
"http_resp_map": [ { "530": 1530 }, { "531": 1531 } ],
It's not recommended to overload the transport-layer response
code with application layer responses. It's better to return
a 200 and then in the application protocol inside http, explain
what happened from the application perspective, usually with
JSON. But this is designed to let you handle existing systems
that do overload the transport layer response code.
SS states for user use start at LWSSSCS_USER_BASE, which is
1000.
You can do a basic test with minimal-secure-streams and the --respmap
flag; this will go to httpbin.org and get a 404, and the warmcat.com
policy has the mapping for 404 -> LWSSSCS_USER_BASE (1000).
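In the user state callback the mapped code then just appears as a state;
a sketch (the callback name is illustrative):

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	switch ((int)state) {
	case LWSSSCS_USER_BASE: /* 1000: mapped from the http 404 above */
		lwsl_user("got the mapped 404 state\n");
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}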
Since the mapping emits states, these are serialized and handled
like any other state in the proxy case.
The policy2c example / tool is also updated to handle the additional
mapping tables.
At the moment you can define and set per-stream metadata at the client,
which will be string-substituted and if configured in the policy, set in
related outgoing protocol specific content like h1 headers.
This patch extends the metadata concept to also check incoming protocol-
specific content like h1 headers and, where it matches the binding in the
streamtype's metadata entry, make it available to the client by name via
a new lws_ss_get_metadata() api.
Currently warmcat.com has additional headers for
server: lwsws (well-known header name)
test-custom-header: hello (custom header name)
minimal-secure-streams test is updated to try to recover these both
in direct and -client (via proxy) versions. The corresponding metadata
part of the "mintest" stream policy from warmcat.com is
	{
		"srv": "server:"
	}, {
		"test": "test-custom-header:"
	},
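The client can then recover them by name once the response headers have
arrived; a sketch (error handling elided, m->ss is assumed to be the
stream's handle in the example's userobj):

	const void *value;
	size_t len;

	if (!lws_ss_get_metadata(m->ss, "srv", &value, &len))
		lwsl_user("server hdr: %.*s\n", (int)len, (const char *)value);

	if (!lws_ss_get_metadata(m->ss, "test", &value, &len))
		lwsl_user("custom hdr: %.*s\n", (int)len, (const char *)value);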
If built direct, or at the proxy, the stream has access to the static
policy metadata definitions and can store the rx metadata in the stream
metadata allocation, with a heap-allocated value. For a client side that
talks to a proxy, only the proxy knows the policy, and it returns rx
metadata inside the serialized link to the client, which stores it on
the heap attached to the stream.
In addition an optimization for mapping static policy metadata definitions
to individual stream handle metadata is changed to match by name.
Formalize the LWSSSSRET_ enums into a type "lws_ss_state_return_t"
returned by the rx, tx and state callbacks, and some private helpers
lws_ss_backoff() and lws_ss_event_helper().
Remove LWSSSSRET_SS_HANDLE_DESTROYED concept... the two helpers that could
have destroyed the ss and returned that, now return LWSSSSRET_DESTROY_ME
to the caller to perform or pass up to their caller instead.
Handle helper returns in all the ss protocols and update the rx / tx
calls to have their returns from rx / tx / event helper and ss backoff
all handled by unified code.
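So the user callbacks are now typed along these lines (a sketch, the
callback names are illustrative):

static lws_ss_state_return_t
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags);

static lws_ss_state_return_t
myss_tx(void *userobj, lws_ss_tx_ordinal_t ord, uint8_t *buf, size_t *len,
	int *flags);

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack);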
Change the default to not process multipart mime at SS layer.
If it's desired, then set "http_multipart_ss_in" true in the policy on the streamtype.
To test, use lws-minimal-secure-streams-avs, which uses the SS-layer
processing as it is. To check it without the processing, change #if 1 to
#if 0 around the policy for "http_multipart_ss_in" in both places in avs.c,
enable the hexdump in ss_avs_metadata_rx() (also in avs.c), and observe
that the multipart framing is passed through unchanged.
Add initial support for defining servers using Secure Streams
policy and api semantics.
Serving h1, h2 and ws should be functional; the new minimal example
shows a combined http + SS server with an incrementing ws message shown
in the browser over tls, in around 200 lines of user code.
NOP out anything to do with plugins, they're not currently used.
Update the docs correspondingly.
Adapt the pt sul owner list to be an array and define two different lists:
one that acts like before and is the default for existing users, and another
that is able to cooperate with systemwide suspend, restricting the interval
spent suspended so that the system wakes in time for the earliest thing on
this wake-suspend sul list.
Clean up the api a bit and add lws_sul_cancel(), which only needs the sul
as the argument.
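Eg, a sketch of scheduling and cancelling a sul with the cleaned-up api
(the context pointer and callback are illustrative):

#include <libwebsockets.h>

static struct lws_context *context; /* assumed created elsewhere */
static lws_sorted_usec_list_t sul_work;

static void
sul_work_cb(lws_sorted_usec_list_t *sul)
{
	/* ... the periodic work ... */

	/* reschedule ourselves for 500ms ahead */
	lws_sul_schedule(context, 0, &sul_work, sul_work_cb,
			 500 * LWS_US_PER_MS);
}

static void
start_and_stop(void)
{
	/* kick it off almost immediately */
	lws_sul_schedule(context, 0, &sul_work, sul_work_cb, 1);

	/* ... later, cancelling only needs the sul itself now */
	lws_sul_cancel(&sul_work);
}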
Add a flag for client creation info to indicate that this client connection
is important enough that, eg, validity checking it to detect silently dead
connections should go on the wake-suspend sul list. That flag is exposed in
secure streams policy so it can be added to a streamtype with
"swake_validity": true
Deprecate the old vhost timer stuff that predates sul. Add a cmake flag
LWS_WITH_DEPRECATED_THINGS so users can get it back temporarily before it
is removed in v4.2.
Adapt all remaining in-tree users of it to use explicit suls.
You can disconnect the stream by returning -1 from tx(). You can
give up your chance to send anything by returning 1 from tx().
Returning 0 sends `*len` amount of the provided buffer.
Returning <0 from rx() also disconnects the stream.
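A tx() sketch following those conventions (the payload and the static flag
are illustrative):

static char payload_sent;

static lws_ss_state_return_t
myss_tx(void *userobj, lws_ss_tx_ordinal_t ord, uint8_t *buf, size_t *len,
	int *flags)
{
	if (payload_sent)
		return 1; /* nothing further: give up this chance to send */

	*len = (size_t)lws_snprintf((char *)buf, *len, "hello");
	*flags = LWSSS_FLAG_SOM | LWSSS_FLAG_EOM;
	payload_sent = 1;

	return 0; /* send *len bytes of buf (returning -1 would disconnect) */
}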
In some cases devices may be too constrained to handle JSON policies but still
want to use SS apis and methodology.
This introduces an off-by-default cmake option
LWS_WITH_SECURE_STREAMS_STATIC_POLICY_ONLY; if enabled, the JSON parsing
part is excluded and it's assumed the user code provides its policy as
hardcoded policy structs.
Make the policy load apis public with an extra argument that says if you want the
JSON to overlay on an existing policy rather than replace it.
Teach the stream type parser stuff to realize it already has an entry for the
stream type and to modify that rather than create a second one, allowing overlays
to modify stream types.
Add --force-portal and --force-no-internet flags to minimal-secure-streams
and use the new policy overlay support to force the captive portal detection
to believe there is a captive portal, or that there's no internet.
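For illustration, such an overlay is just another fragment of policy JSON
whose streamtype name matches an existing one, so only the members it gives
are changed; eg, something along the lines of (endpoint details invented for
the example):

	{
		"s": [{
			"captive_portal_detect": {
				"endpoint": "unreachable.invalid",
				"http_url": "/",
				"port": 80
			}
		}]
	}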
Implement Captive Portal detection support in lws, with the actual
detection happening in platform code hooked up by lws_system_ops_t.
Add an implementation using Secure Streams as well: if the policy
defines a captive_portal_detect streamtype, an SS using that streamtype
is used to probe whether we're behind a captive portal.
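An illustrative policy entry for that streamtype (the probe endpoint and
member details here are only an example) could look like:

	"captive_portal_detect": {
		"endpoint": "connectivitycheck.android.com",
		"port": 80,
		"protocol": "h1",
		"http_method": "GET",
		"http_url": "generate_204",
		"http_expect": 204,
		"opportunistic": true
	}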
Secure Streams is an optional layer on top of lws that separates policy
like endpoint selection and tls cert validation into a device JSON
policy document.
Code that wants to open a client connection just specifies a streamtype name,
and no longer deals with details like the endpoint, the protocol (!) or anything
else other than payloads and optionally generic metadata; the JSON policy
contains all the details for each streamtype. h1, h2, ws and mqtt client
connections are supported.
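Eg, a minimal sketch of opening a stream by streamtype name (the "mintest"
streamtype, struct and callback names are taken from / modelled on the
minimal examples and are illustrative):

#include <libwebsockets.h>

typedef struct myss {
	struct lws_ss_handle	*ss;
	void			*opaque_data;
	/* ... per-stream user data ... */
} myss_t;

/* trivial stub callbacks just to make the sketch complete */

static lws_ss_state_return_t
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	lwsl_hexdump_notice(buf, len);

	return LWSSSSRET_OK;
}

static lws_ss_state_return_t
myss_tx(void *userobj, lws_ss_tx_ordinal_t ord, uint8_t *buf, size_t *len,
	int *flags)
{
	return LWSSSSRET_TX_DONT_SEND; /* nothing to send in this sketch */
}

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	return LWSSSSRET_OK;
}

static const lws_ss_info_t ssi = {
	.handle_offset		 = offsetof(myss_t, ss),
	.opaque_user_data_offset = offsetof(myss_t, opaque_data),
	.rx			 = myss_rx,
	.tx			 = myss_tx,
	.state			 = myss_state,
	.user_alloc		 = sizeof(myss_t),
	.streamtype		 = "mintest",
};

static int
open_mintest_stream(struct lws_context *context)
{
	/* everything about the connection itself comes from the policy */
	return lws_ss_create(context, 0, &ssi, NULL, NULL, NULL, NULL);
}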
Logical secure streams outlive any particular connection and support
"nailed-up" connectivity regardless of underlying connection stability.