(Includes fixes from Yichen Gu)
Currently the incoming ebuf is always replaced to point either to a whole
buflist segment, or to up to (pt_serv_buf - LWS_PRE) bytes in the pt_serv_buf.
This is called on the http read handling path... some user code reasonably wants to
restrict the read size to what it can handle.
Change the other lws_buflist_aware_read() callers to zero ebuf before calling, and
keep the current behaviour for those; but if ebuf.token is non-NULL on entry, as in
the http read path case, restrict both the reported length of buflist content and the
read length to the incoming ebuf.len, so the user code can control how much it will
get at one time.
Additionally, muxed-protocol wsi have no choice but to read what was sent to them,
since leaving it unread HOL-blocks other streams and their own WINDOW_UPDATEs. So add
an internal param to lws_buflist_aware_read() forcing the read even if buflist content
is available.
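
A sketch of the two caller conventions; lws_buflist_aware_read() and its ebuf
argument are internal to lws, so the struct below only mirrors the token / len
pair for illustration:

    #include <string.h>

    /* mirrors the internal ebuf (a token / len pair), for illustration only */
    struct ebuf_sketch {
        unsigned char *token;
        int            len;
    };

    /* generic callers: zero ebuf before the call, keeping the old behaviour
     * where lws picks either a whole buflist segment or the pt_serv_buf
     */
    static void
    ebuf_default(struct ebuf_sketch *ebuf)
    {
        memset(ebuf, 0, sizeof(*ebuf));
    }

    /* http read path: preset token + len so both the reported buflist content
     * and the read length are capped at what the user code can take right now
     */
    static void
    ebuf_restrict(struct ebuf_sketch *ebuf, unsigned char *buf, int max)
    {
        ebuf->token = buf;
        ebuf->len = max;
    }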
This provides support for building lws using the LinkIt 7697 public SDK, available
here: https://docs.labs.mediatek.com/resource/mt7687-mt7697/en/downloads
This toolchain has some challenges: its int32_t / uint32_t are long types, so
assumptions that %u / %d / %x are the right format strings for them all break.
This fixes all such cases for the features enabled by the default cmake
settings.
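
One portable way to handle this (illustrative; not necessarily the exact form
used in the lws sources) is to print fixed-width types via the inttypes.h PRI
macros instead of assuming they map to plain int:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t len = 4096;   /* may be unsigned long on this toolchain */
        int32_t  err = -17;

        /* printf("len %u err %d\n", len, err);  breaks when int32_t is long */
        printf("len %" PRIu32 " err %" PRId32 "\n", len, err);

        return 0;
    }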
Refactor everything around ping / pong handling in ws and h2, so there
is instead a protocol-independent validity lws_sul tracking how long it
has been since the last exchange that confirms the operation of the
network connection in both directions.
Clean out the periodic role callback and replace its last two role users
with discrete lws_sul timers for each pt.
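
For reference, an lws_sul is a sorted-usec-list timer run from the event loop;
a minimal sketch of scheduling one with the public API (the real validity timer
internals differ, this only shows the mechanism):

    #include <libwebsockets.h>

    static lws_sorted_usec_list_t sul_validity;

    /* runs on the event loop when the timer expires */
    static void
    validity_cb(lws_sorted_usec_list_t *sul)
    {
        (void)sul;
        lwsl_notice("no confirmed traffic for a while... probe the peer\n");
        /* issue a protocol-appropriate probe here (ws PING / h2 PING) */
    }

    /* (re)arm the timer for 30s from now on service thread 0 */
    static void
    schedule_validity_check(struct lws_context *cx)
    {
        lws_sul_schedule(cx, 0, &sul_validity, validity_cb, 30 * LWS_US_PER_SEC);
    }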
With http, the protocol doesn't indicate where the headers end and the
next transaction or body begins. Until now, we handled that for client
header response parsing by reading from the tls buffer bytewise.
This modernizes the code to read in up to 256-byte chunks and parse
the chunks in one hit (the parse API is already set up for doing this
elsewhere).
Now that we have a generic input buflist, adapt the parser loop to go through
that and arrange for any leftovers to be placed on it.
This adds the option to have lws do its own dns resolution on
the event loop, without blocking. Existing implementations get
the name resolution done by the libc, which is blocking. If you are opening
client connections but need to carefully manage latency, another connection
opening and blocking on its name resolution becomes a big problem.
Currently it supports
- ipv4 / A records
- ipv6 / AAAA records
- ipv4-over-ipv6 ::ffff:1.2.3.4 A record promotion for ipv6
- only one server supported over UDP :53
- nameserver discovery on linux, windows, freertos
It also has some nice advantages
- lws-style paranoid response parsing
- random unique tid generation to increase difficulty of poisoning
- it's really integrated with the lws event loop, it does not spawn
threads or use the libc resolver, and of course no blocking at all
- platform-specific server address capturing (from /etc/resolv.conf
on linux, windows apis on windows)
- it has LRU caching
- piggybacking (multiple requests made before the first completes join a list
  on the first request, rather than spawning duplicate requests)
- observes TTL in cache
- TTL and timeout use lws_sul timers on the event loop
- ipv6 pieces only built if cmake LWS_IPV6 enabled
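
From the caller's side nothing changes: client connections are made as usual
and the resolution just happens on the event loop. A minimal sketch (the cmake
option name shown, LWS_WITH_SYS_ASYNC_DNS, is my assumption for how the
resolver is enabled):

    /* enable at build time, e.g.:  cmake .. -DLWS_WITH_SYS_ASYNC_DNS=1 */
    #include <libwebsockets.h>
    #include <string.h>

    static struct lws *
    start_client(struct lws_context *cx)
    {
        struct lws_client_connect_info i;

        memset(&i, 0, sizeof(i));
        i.context        = cx;
        i.address        = "example.com";   /* resolved without blocking */
        i.port           = 443;
        i.path           = "/";
        i.host           = i.address;
        i.origin         = i.address;
        i.ssl_connection = LCCSCF_USE_SSL;
        i.protocol       = "my-protocol";   /* hypothetical protocol name */

        return lws_client_connect_via_info(&i);
    }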
Improve the code around stash, getting rid of the strdups for a net
code reduction. Remove the special destroy helper for stash since
it becomes a one-liner.
Trade several stack allocs in the client reset function for a single short-lived,
suitably-sized heap alloc, reducing peak stack allocation by around 700 bytes.
Various kinds of input stashing were replaced with a single buflist before
v3.0... this patch replaces the partial send arrangements with their own buflist
in the same way.
Buflists, as the name says, are growable lists of allocations on a linked list
that take care of the book-keeping for what's added and removed (even if what is
removed is less than the current buffer on the list).
The immediate result is that we no longer have to freak out if we have a partial
buffered and new output is coming... we can just pile the new output on the end of
the buflist and keep draining the front of it.
Likewise we no longer need to be rabid about multiple attempts to send stuff
without going back to the event loop; although not going back to the event loop
introduces inefficiencies, we don't have to term it "illegal" any more.
Since buflists have proven reliable on the input side, and the logic for dealing
with truncated "non-network events" was already there, this internal-only change
should be relatively self-contained.
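
The public buflist helpers show the pattern: new data is appended at the tail,
and the head can be drained in arbitrary amounts. A rough sketch (assuming a
caller-supplied write function; return-value details simplified):

    #include <libwebsockets.h>

    /* queue pending output on the tail of the list (data is copied in) */
    static int
    queue_output(struct lws_buflist **head, const uint8_t *buf, size_t len)
    {
        return lws_buflist_append_segment(head, buf, len) < 0 ? -1 : 0;
    }

    /* drain from the head of the list; stop if the writer takes a partial */
    static void
    drain_output(struct lws_buflist **head,
                 size_t (*write_fn)(const uint8_t *, size_t))
    {
        uint8_t *p;
        size_t len, used;

        while ((len = lws_buflist_next_segment_len(head, &p))) {
            used = write_fn(p, len);            /* may accept less than len */
            lws_buflist_use_segment(head, used);
            if (used < len)
                break;                          /* partial again... retry later */
        }
    }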
HTTP server protocols have for a while had LWS_CALLBACK_HTTP_BIND/DROP_PROTOCOL
callbacks that mark when a wsi is attached to a protocol and detached from it.
It turns out it is generally useful for everything to know when a wsi joins a
protocol and when it is definitively, completely finished with one.
Particularly with client wsi where you provided the userdata externally, this
gives a clear point at which to free() it: when the protocol binding is dropped.
This patch adds protocol bind / unbind callbacks to the role definition and
lets them operate on all roles. For the various roles:
HTTP server: LWS_CALLBACK_HTTP_BIND/DROP_PROTOCOL as before
HTTP client: LWS_CALLBACK_CLIENT_HTTP_BIND/DROP_PROTOCOL
ws server: LWS_CALLBACK_WS_SERVER_BIND/DROP_PROTOCOL
ws client: LWS_CALLBACK_WS_CLIENT_BIND/DROP_PROTOCOL
raw file: LWS_CALLBACK_RAW_FILE_BIND/DROP_PROTOCOL
raw skt: LWS_CALLBACK_RAW_SKT_BIND/DROP_PROTOCOL
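
For example, a client protocol callback might use the drop callback as the
point to release userdata it allocated itself (a sketch; the userdata
ownership scheme here is assumed):

    #include <libwebsockets.h>
    #include <stdlib.h>

    static int
    callback_my_client(struct lws *wsi, enum lws_callback_reasons reason,
                       void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_CLIENT_HTTP_BIND_PROTOCOL:
            lwsl_info("wsi bound to protocol\n");
            break;

        case LWS_CALLBACK_CLIENT_HTTP_DROP_PROTOCOL:
            /* definitively finished with the protocol... safe point to
             * free userdata we provided externally */
            free(user);
            break;

        default:
            break;
        }

        return 0;
    }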
- split raw role into separate skt and file
- remove all special knowledge from the adoption
apis and migrate to core
- remove all special knowledge from client_connect
stuff, and have it discovered by iterating the
role callbacks to let those choose how to bind;
migrate to core
- retire the old deprecated client apis that predate client_connect_info