When facing a captive portal, we may appear to get a tcp-level connection OK,
but then find that communication is silently dropped, leaving us to time out
in LRS_WAITING_SERVER_REPLY.
If so, we need to handle it as a connection failure in order to satisfy at
least Captive Portal detection.
Inside the tls validation callback we have access to a simplified report of
the validation problem's name; let's bring it out and use it for OpenSSL CCE
reporting.
Mbedtls does not share OpenSSL's concept of preloading the system trust store
into every SSL_CTX.
This patch allows you to simulate that behaviour by passing in, at context
creation time, a filepath that all client SSL_CTX will be initialized from.
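
A minimal sketch of how that can look at context creation time, assuming the
relevant creation info member is client_ssl_ca_filepath and using an example
bundle path:

#include <libwebsockets.h>
#include <string.h>

static struct lws_context *
create_context_with_ca_preload(void)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof(info));
	info.port = CONTEXT_PORT_NO_LISTEN;
	/*
	 * with mbedtls, every client SSL_CTX created on this context is
	 * initialized from this CA bundle, approximating OpenSSL's
	 * system trust store preload (the path is just an example)
	 */
	info.client_ssl_ca_filepath = "/etc/ssl/certs/ca-certificates.crt";

	return lws_create_context(&info);
}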
Currently the lws_cancel_service() api only manifests itself at lws level.
This adds a state, LWSSSCS_EVENT_WAIT_CANCELLED, that is broadcast to all
SS on the event loop that received the cancel service api call, and allows
SS-level user code to pick up and handle events from other threads.
There's a new example, minimal-secure-streams-threads, which shows the
pattern for other threads to communicate with, and trigger the event in,
the lws service thread.
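
A sketch of the pattern, using the lws apis named above (the queue details
are illustrative):

#include <libwebsockets.h>

/* on some other thread: stash the work somewhere locked, then wake lws */
static void
other_thread_post(struct lws_context *cx)
{
	/* ... add an item to your own mutex-protected queue ... */

	lws_cancel_service(cx); /* thread-safe wakeup of the service loop */
}

/* SS state callback, called back on the lws service thread */
static lws_ss_state_return_t
myss_state(void *userobj, void *h_src, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	switch (state) {
	case LWSSSCS_EVENT_WAIT_CANCELLED:
		/* drain the queue the other thread filled */
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}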
It's already the case that leaving off the "tls_trust_store" member of a
streamtype definition in the policy causes that streamtype to validate its
tls connections via the OS trust store: usually a bundle OpenSSL has been
configured to load at init automagically, but literally the OS trust
store in the Windows case.
Add tests to confirm that.
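
For example, a hypothetical streamtype that validates against the OS trust
store simply omits the member:

"mintest-ostrust": {
	"endpoint": "warmcat.com",
	"port": 443,
	"protocol": "h1",
	"http_method": "GET",
	"http_url": "index.html",
	"tls": true,
	"retry": "default"
}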
Before this commit, line 84 read 'u' before it had a value on the first
for-loop iteration; see the comment on line 84 below:
82	for (n = 0; n < 8; n++) {
83		ctx->gpio->set(ctx->clk, inv);
84		u = (u << 1) | !!ctx->gpio->read(ctx->miso); /* <-- u is used uninitialized here */
85		ctx->gpio->set(ctx->mosi, !!(u & 0x80));
86		ctx->gpio->set(ctx->clk, !inv);
87	}
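
One minimal way to address that (a sketch; the commit itself may fix it
differently) is to give u a defined value before the loop:

	u = 0; /* defined value before the first shift and read */
	for (n = 0; n < 8; n++) {
		ctx->gpio->set(ctx->clk, inv);
		u = (u << 1) | !!ctx->gpio->read(ctx->miso);
		ctx->gpio->set(ctx->mosi, !!(u & 0x80));
		ctx->gpio->set(ctx->clk, !inv);
	}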
Defer recording the ss metrics histogram until wsi close, so it has a
chance to collect all the tags that apply.
Defer dumping metrics until the FINALIZE phase of context destroy, so we
have a chance to get any metrics recorded first.
This fixes the proxy rx flow by adding an lws_dsh helper to hide the
off-by-one in the "kind" array (kind 0 is reserved for tracking the
unallocated dsh blocks).
For testing, it adds a --blob option on minimal-secure-streams[-client]
which uses a streamtype "bulkproxflow" from here
https://warmcat.com/policy/minimal-proxy-v4.2-v2.json
"bulkproxflow": {
"endpoint": "warmcat.com",
"port": 443,
"protocol": "h1",
"http_method": "GET",
"http_url": "blob.bin",
"proxy_buflen": 32768,
"proxy_buflen_rxflow_on_above": 24576,
"proxy_buflen_rxflow_off_below": 8192,
"tls": true,
"retry": "default",
"tls_trust_store": "le_via_dst"
}
This downloads a 51MB blob of random data with the SHA256sum
ed5720c16830810e5829dfb9b66c96b2e24efc4f93aa5e38c7ff4150d31cfbbf
The minimal-secure-streams --blob example client delays the download by
50ms every 10KiB it sees to force rx flow usage at the proxy.
It downloads the whole thing and checks the SHA256 is as expected.
Logs about rxflow status are available at LLL_INFO log level.
On 32-bit Linux compilers, long int == int == 32-bit, so even atol() cannot
handle integers above 0x7fffffff and clips any it finds to that.
There's only one instance, in policy-json.c; use atoll() cast to uint64_t
to allow values up to INT64_MAX even on 32-bit machines.
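
For illustration (a standalone sketch, not code from the patch):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
	/* on 32-bit linux, long is 32-bit, so atol() clips this value */
	long a = atol("5000000000");
	/* atoll() returns long long, 64-bit even on 32-bit targets */
	uint64_t b = (uint64_t)atoll("5000000000");

	printf("atol: %ld, atoll: %llu\n", a, (unsigned long long)b);

	return 0;
}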
If the larger application is defining vhosts using lejp-conf JSON, it's
often more convenient to describe the vhost the ss server should bind to
there.
If the server policy endpoint (usually used to describe the server
interface bind) begins with '!', take the remainder of the endpoint
string as the name of a preexisting vhost to bind the ss server to at
creation time.
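
For example, a hypothetical server streamtype bound to a preexisting vhost
"myvhost" defined in the lejp-conf JSON (the other members shown are
illustrative):

"myserver": {
	"server": true,
	"endpoint": "!myvhost",
	"port": 443,
	"protocol": "h1",
	"tls": true
}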
This provides a way to get hold of LWS_WITH_CONMON telemetry from Secure
Streams; it works the same with direct onward connections or via the proxy.
You can mark streamtypes with a "perf": true policy attribute... this
causes the onward connections on those streamtypes to collect information
about the connection performance, and the unsorted DNS results.
Streams with that policy attribute receive extra data in their rx callback,
with the LWSSS_FLAG_PERF_JSON flag set on it, containing JSON describing the
performance of the onward connection taken from CONMON data. Streams without
the "perf" attribute set never receive this extra rx.
The received JSON is based on the CONMON struct info and looks like
{"peer":"46.105.127.147","dns_us":596,"sockconn_us":31382,"tls_us":28180,"txn_resp_us:23015,"dns":["2001:41d0:2:ee93::1","46.105.127.147"]}
A new minimal example, minimal-secure-streams-perf, is added that collects
this data on an HTTP GET from warmcat.com. If
LWS_WITH_SECURE_STREAMS_PROXY_API is set, a -client version is built as
well, which operates via the ss proxy and produces the same result at the
client.