With synthetic tests, we can have an h1 connection open to a server
and ask for an h2-specific connection to the same endpoint... lws will
bind it to the idle h1 connection, since the endpoint and tls match.
This also makes the test check that the alpn filtering matches h1
before allowing that.
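As a rough sketch of the client side of such a test (not the actual
test code; context creation, error handling and the per-connection
protocol setup are omitted), the first connection is pinned to h1 via
the alpn member, then a second, h2-preferring connection is requested
with pipelining enabled:

#include <string.h>
#include <libwebsockets.h>

/* sketch: assumes "context" was created elsewhere */
static void
try_h2_bind_onto_idle_h1(struct lws_context *context)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof(i));
	i.context        = context;
	i.address        = "example.com";
	i.port           = 443;
	i.path           = "/thing";
	i.host           = i.address;
	i.origin         = i.address;
	i.ssl_connection = LCCSCF_USE_SSL | LCCSCF_PIPELINE;

	i.alpn = "http/1.1";		/* first: h1-limited connection */
	lws_client_connect_via_info(&i);

	i.alpn = "h2";			/* second: h2-specific request */
	lws_client_connect_via_info(&i);
}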
C++ APIs wrapping SS client
These are intended to provide an experimental, protocol-independent C++
API even more abstracted than Secure Streams, along the lines of
"wget -Omyfile https://example.com/thing"
WIP
CTest does not directly support spawning a daemon as part of the test
flow; we have to specify it as a "fixture" dependency and then hack up
the daemonization in a shell script... this last part unfortunately
limits it to running on unix-type platforms.
On those, though, if the PROXY_API cmake option is enabled, the ctest
flow will spawn the proxy and run lws-minimal-secure-streams-client
against it
When using a foreign libuv loop, context creation may fail after
handles have already been added to the foreign loop... if so, lws can
no longer deal with the fatal error by unpicking the created context
and returning NULL... it has to brazen it out with a half-baked context
that has already started the destroy flow, and allow the foreign loop
to close out the handles the usual way for libuv.
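A minimal sketch of the foreign-loop usage pattern this implies
(assuming a build with libuv support; a real application would service
the loop as part of its own lifecycle and call lws_context_destroy()
at its own shutdown point):

#include <string.h>
#include <libwebsockets.h>
#include <uv.h>

int main(void)
{
	struct lws_context_creation_info info;
	struct lws_context *context;
	void *foreign_loops[1];
	uv_loop_t loop;

	uv_loop_init(&loop);
	foreign_loops[0] = &loop;

	memset(&info, 0, sizeof info);
	info.options       = LWS_SERVER_OPTION_LIBUV;
	info.foreign_loops = foreign_loops;

	context = lws_create_context(&info);

	/*
	 * Even if creation hit a fatal error, any handles that already
	 * went into the foreign loop can only be closed by libuv
	 * asynchronously, so the loop must still be serviced; a
	 * half-baked context has already begun its destroy flow and
	 * finishes tearing down as libuv closes the handles out.
	 */
	uv_run(&loop, UV_RUN_DEFAULT);

	uv_loop_close(&loop);

	return 0;
}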
https://github.com/warmcat/libwebsockets/issues/2129
Chrome has started issuing frame type 0x42; we were dropping the
connection before realizing we wanted to ignore it.
This change explicitly ignores it a bit earlier.
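The general rule being applied is RFC 7540 s4.1: frames of an unknown
type must be ignored and discarded. As an illustrative sketch of that
rule (invented names, not the lws h2 parser itself):

#include <stdint.h>

#define H2_FRAME_DATA    0x00
#define H2_FRAME_HEADERS 0x01

/* hypothetical helper standing in for the real payload parser */
static void skip_payload(uint32_t len) { (void)len; }

static int
handle_frame(uint8_t type, uint32_t len)
{
	switch (type) {
	case H2_FRAME_DATA:
	case H2_FRAME_HEADERS:
		/* ... dispatch to the real handlers ... */
		return 0;
	default:
		/*
		 * Unknown frame type, eg, Chrome's 0x42: per RFC 7540
		 * s4.1 it must be discarded, not treated as a protocol
		 * error that kills the connection.
		 */
		skip_payload(len);
		return 0;
	}
}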
When ss is proxied, the handle's CREATING state is deferred until the
handle links up to the proxy, so user code should only start using the
handle when it sees CREATING. If it tries to use it before then, it
won't get anywhere, but we should make sure not to crash on the NULL
proxy link cwsi.
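Following the shape of the minimal SS examples, user code holds off
like this (a sketch; myss_t follows the usual convention where lws
fills in the leading ss member of the userobj):

#include <libwebsockets.h>

typedef struct myss {
	struct lws_ss_handle	*ss;	/* filled in by lws */
	void			*opaque_data;
	/* ... user state below ... */
} myss_t;

static lws_ss_state_return_t
myss_state(void *userobj, void *sh, lws_ss_constate_t state,
	   lws_ss_tx_ordinal_t ack)
{
	myss_t *m = (myss_t *)userobj;

	switch (state) {
	case LWSSSCS_CREATING:
		/*
		 * With a proxied SS this arrives only after the handle
		 * has linked up to the proxy: this is the earliest
		 * point user code may start operations on the handle.
		 */
		(void)lws_ss_request_tx(m->ss);
		break;
	default:
		break;
	}

	return LWSSSSRET_OK;
}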
Fix an assumption that crept in that h2 is present whenever h1 is.
Add a sai scenario to catch this kind of problem; it only needs one
build, since it is testing lws' own consistency... add
WITH_MINIMAL_EXAMPLES as well
role ops are usually only sparsely filled: there are currently 20
function pointers, but several roles only fill in two, and no single
role has more than 14 of the ops. On a 32- / 64-bit build, this part
of the ops struct takes a fixed 80 / 160 bytes per role.
First, reduce the type of the callback reason part from uint16_t to
uint8_t; this saves 12 bytes unconditionally.
Then change to a separate function pointer array with a nybble index
array: it costs 10 bytes for the index and a pointer to the
separate array, so for 32-bit the cost is
2 + (4 x ops_used)
and for 64-bit
6 + (8 x ops_used)
For 2 x ops_used, that means 10 vs 80 bytes on 32-bit and 22 vs 160
on 64-bit.
For a typical system with h1 (9), h2 (14), listen (2), netlink (2),
pipe (1), raw_skt (3) and ws (12), ie, 43 ops_used out of 140, the
.rodata for this reduces from 560 to 174 bytes on 32-bit (a 386 byte
saving) and from 1120 to 350 on 64-bit (a 770 byte saving).
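Illustratively (invented names, not the actual lws code), the
packed-pointer-plus-nybble-index scheme looks something like this:
instead of 20 mostly-NULL pointers per role, only the implemented ops
are stored, and 20 nybbles of index map op number to packed slot, with
0 meaning "not implemented" and n meaning packed[n - 1]:

#include <stdint.h>
#include <stddef.h>

typedef int (*rops_func_t)(void *a);

struct rops {
	const rops_func_t	*packed;  /* ops_used entries only */
	uint8_t			idx[10];  /* 20 x 4-bit indices */
};

static inline rops_func_t
rops_fetch(const struct rops *r, unsigned int op)
{
	uint8_t n = (uint8_t)((r->idx[op >> 1] >> ((op & 1) * 4)) & 0xf);

	return n ? r->packed[n - 1] : NULL;
}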
This doesn't account for the changed function ops calling code; two
ways were tried, a preprocessor macro and explicit accessor functions.
For an x86_64 gcc 10 build with most options, release mode,
.text + .rodata:
before patch:       553282
accessor macro:     552714 (568 byte saving)
accessor functions: 553674 (392 bytes worse than without the patch)
Therefore we went with the macros.
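Continuing the sketch above, the two accessor styles that were
compared look roughly like this; the macro variant resolves and calls
inline at each call site, the function variant puts the same logic
behind a real call:

#define ROPS_HAS(r, op)		(rops_fetch(r, op) != NULL)
#define ROPS_CALL(r, op, a)	(rops_fetch(r, op)(a))

static int
rops_call(const struct rops *r, unsigned int op, void *a)
{
	rops_func_t f = rops_fetch(r, op);

	return f ? f(a) : 0;
}

/* usage: if (ROPS_HAS(r, op)) ROPS_CALL(r, op, a); vs rops_call(...) */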
RFC6724 defines an ipv6-centric DNS result sorting algorithm that
takes the route and source address information for the results given
by the DNS resolution, and sorts them in order of preferability, which
defines the order they should be tried in.
If LWS_WITH_NETLINK is enabled, lws takes care of collecting and
monitoring the interface, route and source address information, and
uses it to perform the RFC6724 sorting to re-sort the DNS results
before trying to make the connections.
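As a shape-only sketch (invented struct, and only two of RFC 6724's
pairwise rules shown; the full algorithm has several more, covering
reachability, matching scope, labels and so on): each destination is
scored against the collected route/source data, then the candidate
list is sorted most-preferable first:

#include <stdlib.h>

struct scored_dest {
	/* resolved address omitted */
	int precedence;		/* from the RFC 6724 s2.1 policy table */
	int common_plen;	/* common prefix len with source address */
};

static int
cmp_dest(const void *a, const void *b)
{
	const struct scored_dest *da = a, *db = b;

	if (da->precedence != db->precedence)		/* Rule 6 */
		return db->precedence - da->precedence;

	return db->common_plen - da->common_plen;	/* Rule 9 */
}

/* qsort(dests, n, sizeof(*dests), cmp_dest) yields the try order */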