This adds a per-streamtype JSON mapping table in the policy.
In addition to the previous flow, it lets you generate custom
SS state notifications for specific http response codes, eg:
"http_resp_map": [ { "530": 1530 }, { "531": 1531 } ],
It's not recommended to overload the transport-layer response
code with application-layer meanings; it's better to return
a 200 and then, in the application protocol inside http, explain
what happened from the application perspective, usually with
JSON. But this is designed to let you handle existing systems
that do overload the transport-layer response code.
SS states for user use start at LWSSSCS_USER_BASE, which is
1000.
You can do a basic test with minimal-secure-streams and the --respmap
flag: this goes to httpbin.org and gets a 404, and the warmcat.com
policy has the mapping for 404 -> LWSSSCS_USER_BASE (1000).
Since the mapping emits states, these are serialized and handled
like any other state in the proxy case.
The policy2c example / tool is also updated to handle the additional
mapping tables.
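For example, with the mapping above a client's state handler might pick the
mapped states up like this; a minimal sketch only, using the usual minimal-example
userobj layout, the lws_ss_state_return_t callback return type described further
below, and illustrative handler names.  Since LWSSSCS_USER_BASE is 1000, 1530 is
LWSSSCS_USER_BASE + 530:

#include <libwebsockets.h>

typedef struct myss {
	struct lws_ss_handle	*ss;	/* handle must be first, filled at creation */
	void			*opaque_data;
} myss_t;

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	switch ((int)state) {
	case LWSSSCS_USER_BASE + 530:	/* 1530: mapped from http 530 by the policy */
		lwsl_user("%s: server reported 530\n", __func__);
		break;
	case LWSSSCS_USER_BASE + 531:	/* 1531: mapped from http 531 by the policy */
		lwsl_user("%s: server reported 531\n", __func__);
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}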
At the moment you can define and set per-stream metadata at the client,
which will be string-substituted and, if configured in the policy, set in
related outgoing protocol-specific content like h1 headers.
This patch extends the metadata concept to also check incoming protocol-
specific content like h1 headers; where it matches the binding in the
streamtype's metadata entry, it is made available to the client by name, via
a new lws_ss_get_metadata() api.
Currently warmcat.com has additional headers for
server: lwsws (well-known header name)
test-custom-header: hello (custom header name)
The minimal-secure-streams test is updated to try to recover both of these,
in the direct and -client (via proxy) versions. The corresponding metadata
part of the "mintest" stream policy from warmcat.com is
{
"srv": "server:"
}, {
"test": "test-custom-header:"
},
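Given that policy fragment, a client rx handler can recover the two values by
name, roughly like this; a sketch only, with myss_rx and the userobj layout
following the earlier sketch, and the out-param form of the new
lws_ss_get_metadata() api:

static lws_ss_state_return_t
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	myss_t *m = (myss_t *)userobj;	/* same userobj layout as the earlier sketch */
	const void *value;
	size_t vl;

	/* "srv" and "test" are the metadata names bound in the policy above */
	if (!lws_ss_get_metadata(m->ss, "srv", &value, &vl))
		lwsl_user("server: %.*s\n", (int)vl, (const char *)value);

	if (!lws_ss_get_metadata(m->ss, "test", &value, &vl))
		lwsl_user("test-custom-header: %.*s\n", (int)vl, (const char *)value);

	return LWSSSSRET_OK;
}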
If built direct, or at the proxy, the stream has access to the static
policy metadata definitions and can store the rx metadata in the stream
metadata allocation, with a heap-allocated value. For the client side that
talks to a proxy, only the proxy knows the policy, and it returns rx
metadata inside the serialized link to the client, which stores it on
the heap attached to the stream.
In addition, an optimization that maps static policy metadata definitions
to individual stream handle metadata is changed to match by name.
Before this we simply proxy the CREATING state from the proxy
version of the stream to the client version of the stream.
However this can result in disordering: the onward connection
attempt request can happen before the client has been called back with
CREATING in its (*state()), meaning that any metadata set in that
state handler is missed for the onward connection.
This patch suppresses the CREATING forwarded from the proxy
and instead does its own local CREATING state callback at the
time the proxy indicates that the remote stream creation
(ie, with the requested policy streamtype) succeeded.
This then guarantees that the client has seen CREATING, and
had a chance to set metadata there, before the onward connection
request goes out. Since metadata has higher priority at the
writeable than the onward connection request it also means
any metadata set in client CREATING gets sync'd to the proxy
before the onward connection.
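So, for instance, the client can now rely on doing something like this in its
state handler and know the metadata reaches the proxy before the onward
connection goes out; a sketch only, the "ctype" metadata name is illustrative
and must be defined in the streamtype's policy:

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	myss_t *m = (myss_t *)userobj;	/* same userobj layout as earlier */

	switch (state) {
	case LWSSSCS_CREATING:
		/*
		 * happens before the onward connection attempt, so this is
		 * sync'd to the proxy in time to affect the outgoing request
		 */
		if (lws_ss_set_metadata(m->ss, "ctype", "application/json", 16))
			/* metadata name unknown to this streamtype's policy */
			return LWSSSSRET_DESTROY_ME;
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}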
Formalize the LWSSSSRET_ enums into a type "lws_ss_state_return_t"
returned by the rx, tx and state callbacks, and some private helpers
lws_ss_backoff() and lws_ss_event_helper().
Remove LWSSSSRET_SS_HANDLE_DESTROYED concept... the two helpers that could
have destroyed the ss and returned that, now return LWSSSSRET_DESTROY_ME
to the caller to perform or pass up to their caller instead.
Handle the helper returns in all the ss protocols, and update the rx / tx
call sites so that the returns from rx / tx, the event helper and ss backoff
are all handled by unified code.
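From the user callback side it looks like this; a sketch showing the unified
return type and passing a destroy request up to the caller instead of
destroying the handle directly (handler names are illustrative):

static lws_ss_state_return_t
myss_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	if (flags & LWSSS_FLAG_EOM)
		/*
		 * we're finished... rather than destroying the ss ourselves,
		 * ask our caller to do it when it is safe
		 */
		return LWSSSSRET_DESTROY_ME;

	return LWSSSSRET_OK;
}

static lws_ss_state_return_t
myss_tx(void *userobj, lws_ss_tx_ordinal_t ord, uint8_t *buf, size_t *len,
	int *flags)
{
	/* nothing to send right now */
	return LWSSSSRET_TX_DONT_SEND;
}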
Helpers remove casts and derefs.
Add additional pointer arithmetic in the client_pss_to_sspc_h() helper to
remove the dependency on handle_offset being the first thing in the userdata.
Make the helper names explicit for different proxy and client pss handling,
so it should be clearer that client helpers belong in a client section and
vice versa.
We compute the refragmented flags when cutting up large client serialized
payload blocks. But we had a bug where we didn't actually apply them, and
instead applied the original client flags to the fragments.
That causes a crisis because EOM is used to mark the end of the POST body and
complete the transaction, which then happens on the first fragment.
This one-liner corrects it to use the computed, refragmented flags on the
dsh fragments and eliminate the problem.
Correct a comment about payload layout and add detailed comments about
dsh handling at proxy.
Increase the post size so it shows up fragmentation issues at the proxy.
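In outline, the corrected refragmentation derives per-fragment flags from the
original block flags like this; a simplified sketch of the flag handling only,
not the actual proxy dsh code:

#include <libwebsockets.h>

/* simplified sketch: derive each fragment's flags from the original
 * block flags instead of reusing the original flags verbatim */
static void
refragment(const uint8_t *buf, size_t len, int orig_flags, size_t max_frag)
{
	int first = 1;

	while (len) {
		size_t frag = len > max_frag ? max_frag : len;
		int ff = orig_flags & ~(LWSSS_FLAG_SOM | LWSSS_FLAG_EOM);

		if (first)		/* only the first fragment may carry SOM */
			ff |= orig_flags & LWSSS_FLAG_SOM;
		if (frag == len)	/* only the last fragment may carry EOM */
			ff |= orig_flags & LWSSS_FLAG_EOM;

		/* ...queue (buf, frag) on the dsh with ff, NOT orig_flags... */

		buf += frag;
		len -= frag;
		first = 0;
	}
}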
Change the default to not process multipart mime at the SS layer.
If it's desired, set "http_multipart_ss_in" to true on the streamtype in the policy.
To test, use lws-minimal-secure-streams-avs, which uses SS processing as it is.
To check it without the processing, change #if 1 to #if 0 around the policy for
"http_multipart_ss_in" in both places in avs.c, also enable the hexdump in
ss_avs_metadata_rx() in the same file, and observe that the multipart framing is
passed through unchanged.
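For reference, when the policy is embedded statically in the client as a C
string (as the minimal examples do), the flag just sits on the streamtype;
everything below except the last entry is an illustrative placeholder, not a
complete policy:

	static const char * const ss_policy =
		"{"
		  "\"s\": [{"
			"\"mystream\": {"
				"\"endpoint\":             \"example.com\","
				"\"port\":                 443,"
				"\"protocol\":             \"h1\","
				"\"http_method\":          \"GET\","
				"\"http_url\":             \"/\","
				"\"tls\":                  true,"
				"\"http_multipart_ss_in\": true"
			"}"
		  "}]"
		"}";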
We want to manage the proxy txcr, but at the moment the proxy doesn't pass
back information about whether it actually found h1 or h2 across the internet.
Temporarily defeat the txcr wait so we can support h1 until that's improved.
Add initial support for defining servers using Secure Streams
policy and api semantics.
Serving h1, h2 and ws should be functional; the new minimal
example shows a combined http + SS server with an incrementing
ws message shown in the browser over tls, in around 200 lines
of user code.
NOP out anything to do with plugins, they're not currently used.
Update the docs correspondingly.
You may use separate rx or tx handlers to neatly isolate different
rx or tx state handling; for example, if the connection enters some
mode where you may send a variety of possibly large things, it can
be advantageous to have different code handling each of them.
This allows you to change the rx, tx and / or state handlers to
different ones suitable for the user protocol state, if it's helpful.
With upcoming SS Server support, this has another use: when SS
indicates that the underlying protocol upgraded, eg, http -> ws,
you may want to change the handlers for the different sort of
payloads expected after that, according to your user protocol.
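A sketch of what that looks like at the upgrade point; it assumes the
handler-swap helper is the lws_ss_change_handlers() api, that the upgrade is
surfaced as an LWSSSCS_SERVER_UPGRADE state, and that NULL leaves a handler
unchanged:

static lws_ss_state_return_t
myss_ws_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	/* handle ws payloads after the upgrade */
	return LWSSSSRET_OK;
}

static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	myss_t *m = (myss_t *)userobj;	/* same userobj layout as earlier */

	switch (state) {
	case LWSSSCS_SERVER_UPGRADE:
		/*
		 * underlying protocol upgraded, eg, http -> ws: swap in a
		 * handler suited to the new payload format
		 */
		lws_ss_change_handlers(m->ss, myss_ws_rx, NULL, NULL);
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}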
Presently a vh is allocated per trust store at policy parsing-time; this
is no problem on a linux-class device, or if you decide you need a dynamic
policy for functionality reasons.
However if you're in a constrained enough situation that the static policy
makes sense, and your trust stores do not have a 100% duty cycle, ie, are
not all always in use, the currently-unused vhosts and their x.509 stack
are sitting there taking up heap for no immediate benefit.
This patch modifies behaviour in ..._STATIC_POLICY_ONLY so that vhosts and
associated x.509 tls contexts are not instantiated until a secure stream using
them is created; they are refcounted, and when the last logical secure
stream using a vhost is destroyed, the vhost and its tls context are also
destroyed.
If another ss connection is created that wants to use the trust store, the
vhost and x.509 context are regenerated as needed.
Currently the refcounting is by ss; it's also possible to move the refcounting
to be by connection. The choice is between the delay to generate the vh
being visible at logical ss creation-time, or at connection-time. It's anyway
not preferable to have an ss instantiated and taking up space with no associated
connection or connection attempt underway.
NB you will need to reprocess any static policies after this patch so they
conform to the trust_store changes.
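Conceptually the refcounting amounts to something like this; an illustrative
sketch of the idea only, not the internal lws code, and the helper names are
hypothetical:

#include <libwebsockets.h>

struct trust_store_info {
	struct lws_vhost	*vh;		/* NULL while not instantiated */
	int			refcount;
};

/* hypothetical helpers standing in for the real vhost + x.509 bring-up */
struct lws_vhost *instantiate_vhost_and_x509(struct trust_store_info *ts);
void destroy_vhost_and_x509(struct lws_vhost *vh);

static struct lws_vhost *
trust_store_vh_use(struct trust_store_info *ts)
{
	if (!ts->refcount++)
		/* first user: bring up the vhost and its x.509 stack now */
		ts->vh = instantiate_vhost_and_x509(ts);

	return ts->vh;
}

static void
trust_store_vh_unuse(struct trust_store_info *ts)
{
	if (--ts->refcount)
		return;

	/* last logical ss using it is gone: free the vhost + tls context */
	destroy_vhost_and_x509(ts->vh);
	ts->vh = NULL;
}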
Currently we always reserve a fakewsi per pt so that events that don't have a related actual
wsi, like vhost-protocol-init or vhost cert init via the protocol callback, can make callbacks
that look reasonable to user protocol handler code expecting a valid wsi every time.
This patch splits out stuff that user callbacks often unconditionally expect to be in
a wsi, like context pointer, vhost pointer etc into a substructure, which is composed
into struct lws at the top of it. Internal references (struct lws is opaque, so there
are only internal references) are all updated to go via the substructure, so the compiler
should make that a NOP.
Helpers are added when fakewsi is used and referenced.
If not PLAT_FREERTOS, we continue to provide a full fakewsi in the pt as before,
although the helpers improve consistency by zeroing down the substructure. There is
a huge amount of user code out there over the last 10 years that did not always have
the minimal examples to follow, and some of it does some unexpected things.
If it is PLAT_FREERTOS, that is a newer thing in lws and users have the benefit of
being able to follow the minimal examples' approach. For PLAT_FREERTOS we don't
reserve the fakewsi in the pt any more, saving around 800 bytes. The helpers then
create a struct lws_a (the substructure) on the stack, zero it down (but it is only
like 4 pointers) and prepare it with whatever we know, like the context.
Then we cast it to a struct lws * and use it in the user protocol handler call.
In this case, the remainder of the struct lws is undefined. However the amount of
old protocol handlers that might touch things outside of the substructure in
PLAT_FREERTOS is very limited compared to legacy lws user code, and the saving is
significant on constrained devices.
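In outline, the arrangement and the PLAT_FREERTOS on-stack trick look like
this; a conceptual sketch only, the real member set and definitions live in
private headers and differ:

#include <libwebsockets.h>

/* sketch: the members user callbacks commonly expect move into a
 * substructure composed at the very top of struct lws */
struct lws_a {
	struct lws_context		*context;
	struct lws_vhost		*vhost;
	const struct lws_protocols	*protocol;
	void				*opaque_user_data;
};

struct lws {
	struct lws_a	a;	/* must be the first member */
	/* ... the large remainder of the wsi ... */
};

/*
 * PLAT_FREERTOS-style fakewsi sketch: nothing reserved in the pt, just the
 * substructure on the stack, zeroed and filled with what we know, then cast
 * for the duration of the call; only substructure members are valid inside
 * the callback.
 */
static void
fakewsi_call(struct lws_context *cx, struct lws_vhost *vh,
	     const struct lws_protocols *pr, enum lws_callback_reasons reason)
{
	struct lws_a fake = { .context = cx, .vhost = vh, .protocol = pr };

	pr->callback((struct lws *)&fake, reason, NULL, NULL, 0);
}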
User handlers should not be touching everything in a wsi every time anyway; there
are several cases where there is no valid wsi to do the call with. Dereference of
things outside the substructure should only happen when the callback reason shows
there is a valid wsi bound to the activity (as in all the minimal examples).