The --blob option requires GENCRYPTO, which is not on by default, to handle
the hash checks... that's going to cause a lot of confusion, because it
means the simplest ss example won't build by default.
Let's remove the blob support (and GENCRYPTO dependency) from the simplest
example and make a new minimal-secure-streams-blob example that has --blob
support and the GENCRYPTO dependency as well.
Trying to use the opaque pointer in the handle to point to the conn isn't
going to work when we need it to point to the ss handle.
Move it to have its own place in the handle.
It's already the case that leaving off the "tls_trust_store" member of the
streamtype definition in the policy causes the streamtype to validate its
tls connections via the OS trust store, usually a bundle OpenSSL has been
configured to load at init automagically, but also literally the OS trust
store in the Windows case.
Add tests to confirm that.
This fixes the proxy rx flow by adding an lws_dsh helper to hide the
off-by-one in the "kind" array (kind 0 is reserved for tracking the
unallocated dsh blocks).
For testing, it adds a --blob option on minimal-secure-streams[-client]
which uses a streamtype "bulkproxflow" from here
https://warmcat.com/policy/minimal-proxy-v4.2-v2.json
"bulkproxflow": {
"endpoint": "warmcat.com",
"port": 443,
"protocol": "h1",
"http_method": "GET",
"http_url": "blob.bin",
"proxy_buflen": 32768,
"proxy_buflen_rxflow_on_above": 24576,
"proxy_buflen_rxflow_off_below": 8192,
"tls": true,
"retry": "default",
"tls_trust_store": "le_via_dst"
}
This downloads a 51MB blob of random data with the SHA256sum
ed5720c16830810e5829dfb9b66c96b2e24efc4f93aa5e38c7ff4150d31cfbbf
The minimal-secure-streams --blob example client delays the download by
50ms every 10KiB it sees to force rx flow usage at the proxy.
It downloads the whole thing and checks the SHA256 is as expected.
Logs about rxflow status are available at LLL_INFO log level.
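As a hedged sketch of the kind of check involved (the struct and function
names here are illustrative, not the exact example code; gencrypto provides
the genhash apis):

#include <libwebsockets.h>

typedef struct myss {
	struct lws_ss_handle	*ss;
	struct lws_genhash_ctx	hctx;	/* needs LWS_WITH_GENCRYPTO */
} myss_t;

static lws_ss_state_return_t
blob_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	myss_t *m = (myss_t *)userobj;
	uint8_t hash[LWS_GENHASH_LARGEST];

	if ((flags & LWSSS_FLAG_SOM) &&
	    lws_genhash_init(&m->hctx, LWS_GENHASH_TYPE_SHA256))
		return LWSSSSRET_DESTROY_ME;

	if (len && lws_genhash_update(&m->hctx, buf, len))
		return LWSSSSRET_DESTROY_ME;

	if (flags & LWSSS_FLAG_EOM) {
		if (lws_genhash_destroy(&m->hctx, hash))
			return LWSSSSRET_DESTROY_ME;
		/* compare hash against the expected sha256sum here */
	}

	return LWSSSSRET_OK;
}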
This provides a way to get ahold of LWS_WITH_CONMON telemetry from Secure
Streams; it works the same with direct onward connections or via the proxy.
You can mark streamtypes with a "perf": true policy attribute... this
causes the onward connections on those streamtypes to collect information
about the connection performance, and the unsorted DNS results.
Streams with that policy attribute receive extra data in their rx callback,
with the LWSSS_FLAG_PERF_JSON flag set on it, containing JSON describing the
performance of the onward connection taken from CONMON data. Streams
without the "perf" attribute set never receive this extra rx.
The received JSON is based on the CONMON struct info and looks like
{"peer":"46.105.127.147","dns_us":596,"sockconn_us":31382,"tls_us":28180,"txn_resp_us:23015,"dns":["2001:41d0:2:ee93::1","46.105.127.147"]}
A new minimal example, minimal-secure-streams-perf, is added that collects
this data on an HTTP GET from warmcat.com; if
LWS_WITH_SECURE_STREAMS_PROXY_API is set, a -client version is also built
that operates via the ss proxy and produces the same result at the client.
Really not having any logs makes it difficult to know what is really
happening, but if that's your thing, this aligns debug and release
modes to just have ERR and USER log levels when you give WITH_NO_LOGS.
Until now we set metadata value pointers into the onward wsi ah data
area... that's OK until we get a situation where the wsi has gone away
before we have a chance to deliver the metadata over the proxy link.
Add a variant lws_ss_alloc_set_metadata() that allocates space on the heap
and takes a copy of the input metadata. Change ss-h1 to alloc copies of
its metadata so we no longer race the wsi ah lifetime.
lws_ss_set_metadata() can fail, eg, due to a transient OOM situation... if
it does, the caller must take appropriate action like disconnect and retry.
So mark the api as requiring the result to be checked, and make sure all
the examples do it.
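A hedged sketch of the pattern the examples now follow (the metadata name
and value are illustrative):

	/* h is the stream's lws_ss_handle */
	if (lws_ss_set_metadata(h, "ctype", "application/json", 16))
		/*
		 * transient failure, eg, OOM... take appropriate
		 * action, eg, disconnect and retry
		 */
		return LWSSSSRET_DISCONNECT_ME;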
If the DNS lookup fails, we currently just sit out the remaining connect
time. This adapts it to reuse wsi->sul_connect_timeout to schedule DNS
lookup retries until we're out of time.
Eventually we want to try other things as well, this is aligned with that.
Found with fault injection.
There are a few build options that are trying to keep and report
various statistics
- DETAILED_LATENCY
- SERVER_STATUS
- WITH_STATS
remove all those and establish a generic replacement, lws_metrics.
lws_metrics makes its stats available via an lws_system ops function
pointer that the user code can set.
Openmetrics export is supported, for, eg, prometheus scraping.
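A hedged sketch of how user code might consume the stats (the
metric_report member of lws_system_ops_t and lws_metric_pub_t are taken
from the lws_metrics header; the handler body is illustrative):

static int
my_metric_report(lws_metric_pub_t *mp)
{
	/* called as lws reports aggregated metrics */
	lwsl_user("metric: %s\n", mp->name);

	return 0;
}

/* in the lws_system_ops_t the user code registers with the context */
static const lws_system_ops_t system_ops = {
	.metric_report = my_metric_report,
};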
warmcat.com and libwebsockets.org use Let's Encrypt certificates... LE
have changed their CA signing arrangements and after 2021-01-12 (the
point I renewed the LE server certs and received one signed using the
new arrangements) it's required to trust new root certs for the examples
to connect to warmcat.com and libwebsockets.org.
https://letsencrypt.org/2020/09/17/new-root-and-intermediates.html
This updates the in-tree CA copies, the remote policies on warmcat.com
have also been updated.
Just goes to show that for real client infrastructure, you need to run your own
CA (that doesn't have to be trusted by anything outside the clients)
where you can control the CA lifetime.
Add a helper to simplify passing smd ss rx traffic into the local
smd participants, excluding the rx that received it externally to
avoid looping.
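A hedged sketch of using it from an smd streamtype's rx handler (assuming
the helper is lws_smd_ss_rx_forward(), which takes the ss userobj):

static lws_ss_state_return_t
smd_rx(void *userobj, const uint8_t *buf, size_t len, int flags)
{
	/*
	 * forward externally-received smd into the local participants,
	 * excluding the ss that brought it in so it can't loop
	 */
	if (lws_smd_ss_rx_forward(userobj, buf, len))
		return LWSSSSRET_DISCONNECT_ME;

	return LWSSSSRET_OK;
}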
Make the smd readme clearer with three diagrams and more explanation
of how the ss proxying works.
This is a huge patch that should be a global NOP.
For unix type platforms it enables -Wconversion to issue warnings (-> error)
for all automatic casts that seem less than ideal but are normally concealed
by the toolchain.
This is things like passing an int to a size_t argument. Once enabled, I
went through all the args on my default build (which builds most things) and
tried to make the removed default cast explicit.
With that approach it neither changes nor bloats the code, since it compiles
to whatever it was doing before, just with the casts made explicit... in a
few cases I changed some length args from int to size_t but largely left
the underlying causes alone.
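For example, a minimal sketch of the kind of change (names illustrative):

void consume(size_t len);

void demo(int n)
{
	/*
	 * previously the toolchain performed the int -> size_t
	 * conversion silently; -Wconversion now requires it to be
	 * spelled out
	 */
	consume((size_t)n);
}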
From now on, new code that relies on less than ideal casting
will trigger warnings and nudge me to improve it.
This adds some new objects and helpers for keeping and logging
info on grouped allocations, a group is, eg, SS handles or client
wsis.
Allocated objects get a context-unique "tag" string intended to replace
%p / wsi pointers etc. Pointers quickly become confusing when
allocations are freed and reused; the tag string won't repeat
until you produce 2^64 objects in a context.
In addition the tag string documents the object group, with prefixes
like "wsi-" or "vh-", and contains object-specific additional
information like the vhost name, address / port, or the role of the wsi.
At creation time the lws code can use a format string and args
to add whatever group-specific info makes sense, eg, a wsi bound
to a secure stream can also append the guid of the secure stream;
it's copied into the new object tag and so is still available
cleanly after the stream is destroyed if the wsi outlives it.
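A hedged sketch of the idea only (this is not the lws internals): a
context-unique tag built from a 64-bit ordinal plus a group prefix and
group-specific extra info:

#include <stdint.h>
#include <stdio.h>

struct tag_ctx {
	uint64_t ordinal;	/* never reset within the context */
};

static void
make_tag(struct tag_ctx *cx, char *tag, size_t max,
	 const char *group, const char *extra)
{
	/*
	 * composes, eg, "wsi-12-h1:warmcat.com"... won't repeat
	 * until the 64-bit ordinal wraps
	 */
	snprintf(tag, max, "%s%llu-%s", group,
		 (unsigned long long)++cx->ordinal, extra);
}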
Since client_connect and request_tx can be called from code that expects
the ss handle to be in scope, these calls can't deal with destroying the
ss handle and must pass the lws_ss_state_return_t disposition back to
the caller to handle.
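A hedged sketch of a call site respecting that (the return handling is
illustrative):

	lws_ss_state_return_t r;

	r = lws_ss_client_connect(h);
	if (r)
		/*
		 * pass the disposition back to the caller that has the
		 * ss handle in scope, rather than destroying it here
		 */
		return r;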
C++ APIs wrapping SS client
These are intended to provide an experimental protocol-independent C++
api even more abstracted than secure streams, along the lines of
"wget -Omyfile https://example.com/thing"
WIP
CTest does not directly support daemon spawn as part of the test flow;
we have to specify it as a "fixture" dependency and then hack up daemonization
in a shellscript... this last part unfortunately limits it to running on
unix type platforms.
On those though, if the PROXY_API cmake option is enabled, the ctest flow will
spawn the proxy and run lws-minimal-secure-streams-client against it.
This adds a per-streamtype JSON mapping table in the policy.
In addition to the previous flow, it lets you generate custom
SS state notifications for specific http response codes, eg:
"http_resp_map": [ { "530": 1530 }, { "531": 1531 } ],
It's not recommended to overload the transport-layer response
code with application layer responses. It's better to return
a 200 and then in the application protocol inside http, explain
what happened from the application perspective, usually with
JSON. But this is designed to let you handle existing systems
that do overload the transport layer response code.
SS states for user use start at LWSSSCS_USER_BASE, which is
1000.
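A hedged sketch of catching a mapped state in the client's state callback,
using the "530": 1530 mapping above (1530 == LWSSSCS_USER_BASE + 530):

static lws_ss_state_return_t
myss_state(void *userobj, void *sh, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	if ((int)state == LWSSSCS_USER_BASE + 530) {
		/* the policy mapped http 530 to this SS state */
		lwsl_user("%s: got mapped 530\n", __func__);

		return LWSSSSRET_OK;
	}

	return LWSSSSRET_OK;
}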
You can do a basic test with minimal-secure-streams and the --respmap
flag; this will go to httpbin.org and get a 404, and the warmcat.com
policy has the mapping for 404 -> LWSSSCS_USER_BASE (1000).
Since the mapping emits states, these are serialized and handled
like any other state in the proxy case.
The policy2c example / tool is also updated to handle the additional
mapping tables.
At the moment you can define and set per-stream metadata at the client,
which will be string-substituted and if configured in the policy, set in
related outgoing protocol specific content like h1 headers.
This patch extends the metadata concept to also check incoming protocol-
specific content like h1 headers and where it matches the binding in the
streamtype's metadata entry, make it available to the client by name, via
a new lws_ss_get_metadata() api.
Currently warmcat.com has additional headers for
server: lwsws (well-known header name)
test-custom-header: hello (custom header name)
minimal-secure-streams test is updated to try to recover these both
in direct and -client (via proxy) versions. The corresponding metadata
part of the "mintest" stream policy from warmcat.com is
{
	"srv": "server:"
}, {
	"test": "test-custom-header:"
},
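A hedged sketch of recovering those by name at the client
(lws_ss_get_metadata() as above; the metadata names match the policy
fragment, h is the stream's handle):

	const void *value;
	size_t len;

	if (!lws_ss_get_metadata(h, "srv", &value, &len))
		lwsl_user("server hdr: %.*s\n", (int)len,
			  (const char *)value);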
If built direct, or at the proxy, the stream has access to the static
policy metadata definitions and can store the rx metadata in the stream
metadata allocation, with a heap-allocated value. For the client side that
talks to a proxy, only the proxy knows the policy, and it returns rx
metadata inside the serialized link to the client, which stores it on
the heap attached to the stream.
In addition an optimization for mapping static policy metadata definitions
to individual stream handle metadata is changed to match by name.
Event lib support as it has been isn't scaling well: at the low level,
libevent and libev headers have a namespace conflict, so they can't
both be built into the same image, and at the distro level, binding
all the event libs to libwebsockets.so makes a bloaty situation for
packaging, since lws will drag in all the event libs every time.
This patch implements the plan discussed here
https://github.com/warmcat/libwebsockets/issues/1980
and refactors the event lib support so they are built into isolated
plugins and bound at runtime according to what the application says
it wants to use. The event lib plugins can be packaged individually
so that only the needed sets of support are installed (perhaps none
of them if the user code is OK with the default poll() loop). And
dependent user code can mark the specific event loop plugin package
as required so pieces are added as needed.
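A hedged sketch of the selection at context creation (the option flag is
the existing one; that this exact flow binds the plugin at runtime is an
assumption based on the description above):

	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	/*
	 * asking for libuv now binds the evlib_uv plugin at runtime,
	 * instead of all event libs being linked into libwebsockets.so
	 */
	info.options = LWS_SERVER_OPTION_LIBUV;

	cx = lws_create_context(&info);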
The eventlib-foreign example is also refactored to build the selected
lib support isolated.
A readme is added detailing the changes and how to use them.
https://libwebsockets.org/git/libwebsockets/tree/READMEs/README.event-libs.md
Correct a comment about payload layout and add detailed comments about
dsh handling at the proxy.
Increase the post size so that it exposes fragmentation issues at the proxy.