The existing libev handling, copied around to each change_pollfd
call site, is easy to miss if new change_pollfd uses are added.
Instead, migrate it directly into change_pollfd to guarantee it is
handled and to simplify the code.
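A minimal sketch of the resulting shape, with a simplified signature and
assumed field / constant names (not the exact lws internals): the libev
notification is issued inside change_pollfd() itself, so no caller can
forget it.

/*
 * Illustrative sketch only (simplified signature, assumed field and
 * constant names): the libev notification lives inside change_pollfd()
 * now, so every caller gets it automatically.
 */
static int
change_pollfd(struct lws *wsi, int _and, int _or)
{
	struct lws_pollfd *pfd =
		&wsi->context->fds[wsi->position_in_fds_table];

	pfd->events = (pfd->events & ~_and) | _or;

	/* formerly copy-pasted at every call site */
	if (_or & LWS_POLLIN)
		lws_libev_io(wsi->context, wsi, LWS_EV_START | LWS_EV_READ);
	if (_and & LWS_POLLIN)
		lws_libev_io(wsi->context, wsi, LWS_EV_STOP | LWS_EV_READ);
	if (_or & LWS_POLLOUT)
		lws_libev_io(wsi->context, wsi, LWS_EV_START | LWS_EV_WRITE);
	if (_and & LWS_POLLOUT)
		lws_libev_io(wsi->context, wsi, LWS_EV_STOP | LWS_EV_WRITE);

	return 0;
}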
Signed-off-by: Andy Green <andy.green@linaro.org>
Enforce that there is no more internal use of deprecated APIs
(especially in the test apps).
Also signal clearly to users what is on the way out.
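A hedged sketch of the mechanism (the macro name and the particular
prototype below are assumptions): mark the legacy entry points so the
compiler warns at any remaining call site, internal or user.

/* assumed macro name; expands to nothing on compilers without the
 * attribute so the headers stay portable */
#if defined(__GNUC__)
#define LWS_WARN_DEPRECATED __attribute__ ((deprecated))
#else
#define LWS_WARN_DEPRECATED
#endif

/* legacy name kept for existing users; any internal or test-app use
 * now triggers a compile-time warning */
LWS_EXTERN int
libwebsocket_service(struct libwebsocket_context *context, int timeout_ms)
LWS_WARN_DEPRECATED;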
Signed-off-by: Andy Green <andy.green@linaro.org>
Connections must hold an ah (allocated headers struct) for the whole
time they are processing one header set, even if, eg, the headers are
fragmented and it involves network round-trip times.
However on http1.1 / keepalive, the connection must drop the ah when
there are no more header sets to deal with, and reacquire an ah later
when more data appears, because the time between header sets / http1.1
requests is unbounded and the ah would otherwise be tied up forever.
But in the case that we received pipelined http1.1 requests, even
partial ones already buffered, we must keep the ah, resetting it
instead of dropping it.  Because the rx data is conveniently stored
in a per-tsi buffer (each service thread only does one thing at a
time), we cannot go back to the event loop to await a new ah in the
middle of one service action.  That's no problem, though: we
definitely already hold an ah, so just reuse it at http completion
time if more rx is already buffered.
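A minimal sketch of that decision at http transaction completion; the
rx-pending check is an illustrative helper name, but the keep-and-reset
vs detach split is the idea being described.

/* sketch: at the end of one http transaction, keep and reset the ah
 * if more pipelined rx is already buffered for this wsi, otherwise
 * detach it so other connections can acquire it
 * (lws_buffered_rx_pending() is an illustrative name) */
if (lws_buffered_rx_pending(wsi))
	lws_header_table_reset(wsi);	/* reuse the ah we already hold */
else
	lws_header_table_detach(wsi);	/* return it to the pool */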
NB: attack.sh makes its requests with echo | nc; the echo
accidentally sends a trailing '\n', which exposed this problem.
With this patch attack.sh completes successfully.
Signed-off-by: Andy Green <andy.green@linaro.org>
SO_REUSEPORT means you no longer get any error if another server
instance is already running; that will be quite unexpected for
single-threaded users.
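A short sketch of the behaviour being weighed; the "opt in only when
multithreaded service is requested" guard is an assumption about how to
avoid surprising single-threaded users.

#include <sys/socket.h>

/* sketch: SO_REUSEPORT lets several sockets bind the same port, so a
 * second server instance no longer fails bind() with EADDRINUSE; the
 * guard condition below is an assumption, not the actual lws logic */
static int
maybe_enable_reuseport(int sockfd, int service_threads)
{
	int one = 1;

	if (service_threads < 2)
		return 0;	/* keep the old EADDRINUSE behaviour */

	return setsockopt(sockfd, SOL_SOCKET, SO_REUSEPORT,
			  (const void *)&one, sizeof(one));
}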
Signed-off-by: Andy Green <andy.green@linaro.org>
Context can only be NULL if context creation failed in the first
place and the user error path is broken... there is no network
connectivity at that point anyway...
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can ask to split the service part of the
context into n service domains, which are load-balanced so that the
most idle one gets the next listen socket accept.
There's still a single listen socket on one port.
User code may then spawn n threads running n service loops / poll()s
simultaneously.  Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code.  So the library remains
completely agnostic about the threading / locking scheme.
And by default it's completely compatible with one service thread,
so no changes are required for people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
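In user code the setup looks roughly like the sketch below, along the
lines of the pthreads test app; this is a hedged sketch, so check the
exact field and function names (count_threads, lws_service_tsi, etc.)
against the headers of your lws version.

#include <libwebsockets.h>
#include <pthread.h>
#include <string.h>

#define NUM_THREADS 8

static struct lws_context *context;
static volatile int force_exit;

static void *
thread_service(void *threadid)
{
	/* each thread services only its own tsi (thread service index) */
	while (lws_service_tsi(context, 50, (int)(long)threadid) >= 0 &&
	       !force_exit)
		;

	pthread_exit(NULL);

	return NULL;
}

int main(void)
{
	struct lws_context_creation_info info;
	pthread_t tid[NUM_THREADS];
	void *retval;
	int n;

	memset(&info, 0, sizeof(info));
	info.port = 7681;
	info.count_threads = NUM_THREADS;  /* split service into n domains */
	/* info.protocols = ...;  protocols / extensions as for any server */

	context = lws_create_context(&info);
	if (!context)
		return 1;

	for (n = 0; n < NUM_THREADS; n++)
		pthread_create(&tid[n], NULL, thread_service,
			       (void *)(long)n);

	for (n = 0; n < NUM_THREADS; n++)
		pthread_join(tid[n], &retval);

	lws_context_destroy(context);

	return 0;
}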
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
Signed-off-by: Andy Green <andy.green@linaro.org>
In the case we have a lot of connections, checking them all for
timeout state once a second becomes burdensome.  At the moment, if
you have 100K connections, they all get checked for timeout in a loop
once a second.
This patch adds a doubly-linked list, rooted in the context and
linking the wsi, on which only wsi with pending timeouts appear.  At
checking time we traverse the list, which costs nothing if it is
empty because nobody has a pending timeout.  Similarly, adding to and
removing from the list costs almost nothing, since no iteration is
required no matter how big the list is.
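A self-contained sketch of the scheme (structure and field names are
illustrative, not the exact lws ones); the back-pointer makes removal
O(1) without having to know the previous node.

/* sketch: wsi with a timeout pending sit on a doubly-linked list
 * rooted in the context; add and remove are O(1), and the per-second
 * check walks only wsi that can actually time out */
struct wsi {
	struct wsi *timeout_next;	/* next wsi with a pending timeout */
	struct wsi **timeout_prev;	/* the link that points at us */
	/* ... */
};

struct context {
	struct wsi *timeout_head;	/* head of the pending-timeout list */
	/* ... */
};

static void
timeout_list_add(struct context *ctx, struct wsi *wsi)
{
	/* O(1) insert at the head, no iteration */
	wsi->timeout_prev = &ctx->timeout_head;
	wsi->timeout_next = ctx->timeout_head;
	if (ctx->timeout_head)
		ctx->timeout_head->timeout_prev = &wsi->timeout_next;
	ctx->timeout_head = wsi;
}

static void
timeout_list_remove(struct wsi *wsi)
{
	/* O(1) unlink via the back-pointer, no iteration */
	if (wsi->timeout_next)
		wsi->timeout_next->timeout_prev = wsi->timeout_prev;
	*wsi->timeout_prev = wsi->timeout_next;
}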
The extra 8 or 16 bytes in the wsi are offset a little by demoting
.pps from int to char (saving 3 bytes) and by trimming the max active
extensions to 2: since we only provide one, that alone saves 8 / 16
bytes if extensions are enabled.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds redirect support to the client side. Lws will follow
server redirects (301) up to three deep.
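A tiny illustrative sketch of the "up to three deep" guard (the
structure, field and limit names below are assumptions, not the lws
internals).

/* sketch: cap how many server redirects (301) one client connection
 * will chase before giving up */
#define MAX_REDIRECTS 3

struct client_conn {
	int redirect_count;
	/* ... */
};

static int
client_handle_redirect(struct client_conn *conn, const char *location)
{
	if (conn->redirect_count >= MAX_REDIRECTS)
		return -1;		/* too deep: fail the connection */

	conn->redirect_count++;

	/* reset the connection state and reconnect to 'location' ... */

	return 0;
}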
Signed-off-by: Andy Green <andy.green@linaro.org>
In most cases the close API will see that it should send the CCE
(CLIENT_CONNECTION_ERROR callback) itself, because we are still in
the "waiting for server reply" state until the end of the
interpretation.  Only if we completed the interpretation and moved on
to ESTABLISHED do we need to handle sending it ourselves.
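Roughly, the decision amounts to the following (the state and helper
names here are illustrative only):

/* sketch: once the wsi has reached ESTABLISHED the generic close path
 * no longer knows this began as a client connect, so report the
 * connection error from here; before that, closing while still in
 * "waiting for server reply" already triggers the CCE */
if (connection_state(wsi) == STATE_ESTABLISHED)
	signal_client_connection_error(wsi, "closed during interpretation");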
Signed-off-by: Andy Green <andy.green@linaro.org>