There are two kinds of reason to call lws_header_table_reset(): one is that we are
reallocating a destroyed ah to another wsi, and the other is that we are moving to the
next pipelined header set still on the same wsi, where we need a "weaker" reset that only
clears down the state related to header parsing, not everything about the ah context
including the ah rx buffer.
This patch moves the resetting of the ah rx buffer's rxpos and rxlen out of
lws_header_table_reset() and makes it the responsibility of the caller. Callers that are
moving the ah to another wsi are patched to reset rxpos and rxlen themselves, and
lws_http_transaction_completed(), which only resets the ah when moving to the next
pipelined header set, no longer wrongly clears the ah rxbuf.
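In sketch form, the split of responsibility now looks like this (the exact internal
member access is assumed for illustration):

    /* "weak" reset: clears the header-parsing state only; the ah rx
     * buffer is deliberately left alone */
    lws_header_table_reset(wsi);

    /* callers moving the ah to another wsi now also do this part */
    ah->rxpos = 0;
    ah->rxlen = 0;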
https://github.com/warmcat/libwebsockets/issues/638
This adds a new member to the context creation info struct "ws_ping_pong_interval".
If nonzero, it sets the number of seconds that established ws connections are
allowed to be idle before a PING is forced to be sent. If zero (the default) then
tracking of idle connections is disabled for backwards compatibility.
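A minimal sketch of a server enabling it at context creation time (protocols and most
error handling elided):

    #include <string.h>
    #include <libwebsockets.h>

    int main(void)
    {
            struct lws_context_creation_info info;
            struct lws_context *context;

            memset(&info, 0, sizeof(info));
            info.port = 7681;
            info.ws_ping_pong_interval = 20; /* force a PING after 20s idle */
            /* ... protocols etc elided ... */
            context = lws_create_context(&info);
            if (!context)
                    return 1;

            while (lws_service(context, 50) >= 0)
                    ;

            lws_context_destroy(context);

            return 0;
    }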
Timeouts cover both the period between decision to send the ping and it being
sent (because it needs the socket to become writeable), and the period between
the ping being sent and the PONG coming back.
INFO debug logs are issued when the timeout handling is operating.
You can test the server side by running the test server hacked to set ws_ping_pong_interval
and a debug log mask of 15. Both the mirror protocol and the server-status protocol are
idle if nothing is happening and will trigger the PING / PONG testing. (You can also
test using lwsws and /etc/lwsws/conf with "ws-pingpong-secs": "20" in the global section)
For client, run the test client with -n -P 20 for 20s interval. -n stops the test client
writing using the mirror protocol, so it will be idle and trigger the PING / PONGs.
The timeout interval may be up to +10s late, as lws checks for affected connections every
10s.
This makes it easy for user code to choose the size of the per-thread
buffer used by various things in lws, including file transfer chunking.
Previously it was fixed at 4096; if you leave info.pt_serv_buf_size as zero,
that is still the default.
With some caveats, you can increase transfer efficiency by increasing it
to, eg, 128KiB, if that makes sense for your memory situation.
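As a one-line sketch at context creation time (alongside the other info members):

    info.pt_serv_buf_size = 128 * 1024; /* leaving it 0 keeps the 4096 default */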
Signed-off-by: Andy Green <andy@warmcat.com>
It can join the free ah list and pick up client connect processing
later when an ah becomes available; this simplifies the code
doing the request, since it won't have to deal with unexpected
failures / retries based on dynamic ah availability.
To do this, though, we have to handle the fact that the connect_info
members may go out of scope after we return from the first connect
call, so we stash them in a malloc'd buffer where the deferred
connect processing can still find them much later.
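A hedged sketch of the stash idea, assuming the lws_client_connect_info struct; the
stash struct and function names here are illustrative, not the actual lws internals:

    #include <stdlib.h>
    #include <string.h>
    #include <libwebsockets.h>

    /* hypothetical stash, not the real internal struct */
    struct client_stash {
            char *address, *path, *host, *origin, *protocol;
    };

    static struct client_stash *
    stash_connect_info(const struct lws_client_connect_info *i)
    {
            struct client_stash *s = calloc(1, sizeof(*s));

            if (!s)
                    return NULL;

            /* deep-copy: the caller's strings may leave scope before
             * the deferred connect processing finally runs */
            s->address  = strdup(i->address ? i->address : "");
            s->path     = strdup(i->path ? i->path : "/");
            s->host     = strdup(i->host ? i->host : "");
            s->origin   = strdup(i->origin ? i->origin : "");
            s->protocol = strdup(i->protocol ? i->protocol : "");

            return s;
    }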
Signed-off-by: Andy Green <andy.green@linaro.org>
Originally this was alright in wsi->u.hdr, because ah implied header
processing. But since we allowed ah to be held across http
keep-alive transactions if we saw we had more header data, it meant
we were trying to read this union member out of scope after the
union had transitioned.
Moving the more_rx_waiting member to be a 1-bit bitfield in the wsi
solves it and lets us check the state any time later at http
transaction completion.
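Illustratively, the fix is shaped like this excerpt (surrounding members elided):

    struct lws {
            /* ... */
            unsigned int more_rx_waiting:1; /* valid whichever union member is live */
            /* ... */
    };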
https://github.com/warmcat/libwebsockets/issues/441
Signed-off-by: Andy Green <andy.green@linaro.org>
Callers should protect it, so this shouldn't cause a problem in
practice. But Coverity is correct that the code is confused about it.
Make it okay if we close a connection before the ah got attached.
Signed-off-by: Andy Green <andy.green@linaro.org>
This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
An ah now contains an rx buffer which is used during header
processing, and the ah may not be detached from the wsi
until the rx buffer is exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
Ahs which have pending rx force POLLIN service on the wsi
they are attached to automatically, so we can interleave
general service / connections with draining each ah rx
buffer.
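Conceptually the forced service amounts to something like this sketch; the names are
made up, not the real lws internals:

    struct my_ah {
            char rx[2048];      /* header-stage rx buffer lives in the ah */
            int rxpos, rxlen;
    };

    struct my_wsi {
            struct my_ah *ah;   /* NULL when no ah is attached */
            int force_pollin;
    };

    static void
    forced_service_for_pending_ah(struct my_wsi **wsi, int n)
    {
            for (int i = 0; i < n; i++)
                    /* buffered rx left in an attached ah? then service
                     * this wsi as if POLLIN fired, even though the
                     * socket itself is idle (rx flow control is still
                     * honoured at read time) */
                    if (wsi[i]->ah && wsi[i]->ah->rxpos < wsi[i]->ah->rxlen)
                            wsi[i]->force_pollin = 1;
    }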
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers come. After processing,
the ah is detached since no pending rx left. If more
headers come later, a fresh ah is acquired when available
and the rx flow control blocks the read until then.
2) more than one whole set of headers come and we remain in
http mode (no upgrade). The ah is left attached and
returns to the service loop after the first set of headers.
We will get forced service due to the ah having pending
content (respecting flow control) and process the pending
rx in the ah. If we use it all up, we will detach the
ah.
3) one set of http headers come with ws traffic appended.
We service the headers, do the upgrade, and keep the ah
until the remaining ws content is used. When we have
exhausted the ws traffic in the ah rx buffer, we
detach the ah.
Since there can be any amount of http/1.1 pipelining on a
connection, and each may be expensive to service, it's now
enforced that there is a return to the service loop after each
header set is serviced on a connection.
When I added the forced service for ah with pending buffering,
I added support for it to the windows plat code. However, this
is untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
Connections must hold an ah for the whole time they are
processing one header set, even if, eg, the headers are
fragmented and it involves network roundtrip times.
However on http1.1 / keepalive, it must drop the ah when
there are no more header sets to deal with, and reacquire
the ah later when more data appears. It's because the
time between header sets / http1.1 requests is unbounded
and the ah would be tied up forever.
But in the case that we got pipelined http1.1 requests, even
partial ones already buffered, we must keep the ah, resetting
it instead of dropping it. Because we store the rx data
conveniently in a per-tsi buffer (each service thread only does
one thing at a time), we cannot go back to the event loop to
await a new ah inside one service action. But that's no problem:
since we definitely already have an ah, we can just reuse it at
http transaction completion time if more rx is already buffered.
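So the disposition at transaction completion becomes, in sketch form (the helper names
are illustrative):

    /* inside something like lws_http_transaction_completed() */
    if (ah_has_buffered_rx(wsi))
            header_table_reset(wsi);  /* keep the ah; parse the next
                                       * pipelined set from its buffer */
    else
            header_table_detach(wsi); /* return the ah for reuse by
                                       * other connections */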
NB: attack.sh makes requests with echo | nc; this accidentally
sends a trailing '\n' from the echo, which exposed this problem.
With this patch attack.sh completes successfully.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request splitting the service part of the
context into n service domains, which are load-balanced so that the most
idle one gets the next listen socket accept.
There's still a single listen socket on one port.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
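A minimal sketch of the user-side pattern, modelled on the pthreads test server and
assuming the lws_service_tsi() per-thread service entry point and the count_threads
creation-info member:

    #include <pthread.h>
    #include <stdint.h>
    #include <string.h>
    #include <libwebsockets.h>

    #define N_THREADS 8

    static struct lws_context *context;

    static void *
    service_thread(void *arg)
    {
            int tsi = (int)(intptr_t)arg; /* thread service index */

            /* each thread runs one of the n service loops / poll()s */
            while (lws_service_tsi(context, 50, tsi) >= 0)
                    ;

            return NULL;
    }

    int
    main(void)
    {
            struct lws_context_creation_info info;
            pthread_t tid[N_THREADS];

            memset(&info, 0, sizeof(info));
            info.port = 7681;
            info.count_threads = N_THREADS; /* n service domains */
            /* ... protocols etc elided ... */
            context = lws_create_context(&info);
            if (!context)
                    return 1;

            for (int i = 0; i < N_THREADS; i++)
                    pthread_create(&tid[i], NULL, service_thread,
                                   (void *)(intptr_t)i);
            for (int i = 0; i < N_THREADS; i++)
                    pthread_join(tid[i], NULL);

            lws_context_destroy(context);

            return 0;
    }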
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads.
Signed-off-by: Andy Green <andy.green@linaro.org>