This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
An ah now contains an rx buffer, which is used during header
processing; the ah may not be detached from the wsi until the
rx buffer is exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
Ahs with pending rx automatically force POLLIN service on the
wsi they are attached to, so we can interleave general
connection service with draining each ah's rx buffer.
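To picture the mechanism, here is a minimal sketch; the struct
and field names are hypothetical, not the real lws internals:

struct wsi;                       /* opaque connection handle */

struct ah_sketch {
        struct wsi *wsi;          /* connection this ah is attached to */
        char rx[2048];            /* header / pipelined rx storage */
        int rxpos;                /* how much of rx[] we consumed */
        int rxlen;                /* how much of rx[] is filled */
};

/* any ah for which this is true forces POLLIN service on its
 * wsi at the next service loop pass */
static int
ah_has_pending_rx(const struct ah_sketch *ah)
{
        return ah->rxpos < ah->rxlen;
}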
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers comes. After processing,
the ah is detached since there is no pending rx left. If more
headers come later, a fresh ah is acquired when available
and rx flow control blocks the read until then.
2) more than one whole set of headers comes and we remain in
http mode (no upgrade). The ah is left attached and we
return to the service loop after the first set of headers.
We will get forced service due to the ah having pending
content (respecting flow control) and process the pending
rx in the ah. If we use it all up, we will detach the
ah.
3) one set of http headers comes with ws traffic appended.
We service the headers, do the upgrade, and keep the ah
until the remaining ws content is used. When we have
exhausted the ws traffic in the ah rx buffer, we
detach the ah.
Since there can be any amount of http/1.1 pipelining on a
connection, and each request may be expensive to service, it's
now enforced that we return to the service loop after each
set of headers is serviced on a connection.
When I added the forced service for ahs with pending buffering,
I added support for it to the windows plat code. However, this
is untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
This gets the libuv stuff plumbed in and working.
Currently it's only workable for a single service thread, and
there is an isolated valgrind problem left:
==28425== 128 bytes in 1 blocks are definitely lost in loss record 3 of 3
==28425== at 0x4C28C50: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x4C2AB1E: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x58BBB27: maybe_resize (core.c:748)
==28425== by 0x58BBB27: uv__io_start (core.c:787)
==28425== by 0x58C1B80: uv__signal_loop_once_init (signal.c:225)
==28425== by 0x58C1B80: uv_signal_init (signal.c:260)
==28425== by 0x58BF7A6: uv_loop_init (loop.c:66)
==28425== by 0x4157F5: lws_uv_initloop (libuv.c:89)
==28425== by 0x405536: main (test-server-libuv.c:284)
libuv wants to sign off on all libuv 'handles' that will close, and
calls back to confirm the close asynchronously. The wsi close function
is adapted, when libuv is in use, to work with libuv accordingly and
to exit the uv loop when the number of remaining wsi reaches zero.
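As a standalone sketch of the pattern (assuming the socket is
watched with a uv_poll_t; the counter and function names are
illustrative):

#include <uv.h>

static int remaining_wsi;         /* illustrative count of live wsi */

/* libuv confirms the close asynchronously via this callback */
static void
wsi_close_cb(uv_handle_t *handle)
{
        if (--remaining_wsi == 0)
                uv_stop(handle->loop); /* no wsi left: leave the loop */
}

/* instead of destroying the watcher synchronously, hand it to libuv */
static void
wsi_begin_close(uv_poll_t *watcher)
{
        uv_close((uv_handle_t *)watcher, wsi_close_cb);
}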
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request that the service part of
the context be split into n service domains, which are
load-balanced so that the most idle one gets the next listen
socket accept.
There's still a single listen socket on one port.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default it's completely compatible with a single service
thread, so no changes are required by people uninterested in
multithreaded service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
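The per-thread loop follows the pattern in the pthreads test
server; roughly (a sketch, assuming info.count_threads and the
lws_service_tsi() per-thread service call, see the test server
source for the exact api):

#include <pthread.h>
#include <stdint.h>
#include <libwebsockets.h>

static struct lws_context *context;
static volatile int force_exit;

static void *
thread_service(void *threadid)
{
        /* each thread services only its own service domain (tsi) */
        while (lws_service_tsi(context, 50,
                               (int)(intptr_t)threadid) >= 0 &&
               !force_exit)
                ;

        return NULL;
}

/* at init: info.count_threads = 8; context = lws_create_context(&info);
 * then pthread_create() one thread_service() per thread index */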
Signed-off-by: Andy Green <andy.green@linaro.org>
In the case we have a lot of connections, checking them all for timeout state
once a second becomes burdensome. At the moment if you have 100K connections,
once a second they all get checked for timeout in a loop.
This patch adds to each wsi a doubly-linked list, rooted in the
context; only wsi with pending timeouts appear on it. At checking
time we traverse the list, which costs nothing if it is empty,
because then nobody has a pending timeout.
Similarly adding and removing from the list costs almost nothing since no
iteration is required no matter how big the list.
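The list mechanics are the usual O(1) doubly-linked insert /
remove; a minimal sketch with hypothetical names:

#include <stddef.h>
#include <time.h>

struct timeout_node {
        struct timeout_node *prev, *next;
        time_t expiry;
};

static struct timeout_node *timeout_head; /* rooted in the context */

static void
timeout_add(struct timeout_node *n)
{
        n->prev = NULL;
        n->next = timeout_head;
        if (timeout_head)
                timeout_head->prev = n;
        timeout_head = n;
}

static void
timeout_remove(struct timeout_node *n)
{
        if (n->prev)
                n->prev->next = n->next;
        else
                timeout_head = n->next;
        if (n->next)
                n->next->prev = n->prev;
        n->prev = n->next = NULL;
}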
The extra 8 or 16 bytes in the wsi are offset a little by demoting
.pps from int to char (saving 3 bytes), and by trimming max active
extensions to 2, since we only provide one; that alone saves 8 or
16 bytes if extensions are enabled.
Signed-off-by: Andy Green <andy.green@linaro.org>
Server side has had immediate RX flow control for quite a while.
But the client side made do with RX continuing until whatever had
already been received was exhausted.
For what Autobahn tests, that's not enough.
This patch gives client-side RX flow control the same immediate
effect the server side enjoys, re-using the same code.
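For example, in a client callback (a hedged sketch; with this
patch the throttle bites immediately rather than after buffered
rx drains):

#include <libwebsockets.h>

static int
callback_client(struct lws *wsi, enum lws_callback_reasons reason,
                void *user, void *in, size_t len)
{
        switch (reason) {
        case LWS_CALLBACK_CLIENT_RECEIVE:
                /* stop further rx at once while we are busy */
                lws_rx_flow_control(wsi, 0);
                /* ...queue in / len, later re-enable with
                 * lws_rx_flow_control(wsi, 1) */
                break;
        default:
                break;
        }

        return 0;
}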
Signed-off-by: Andy Green <andy.green@linaro.org>
- Mainly symbol length reduction
- Whitespace clean
- Code refactor for linear flow
- Audit @Context for API docs vs changes
Signed-off-by: Andy Green <andy.green@linaro.org>
Since struct lws (wsi) now has its own context pointer, we were
able to remove the need for passing context almost everywhere
in the apis.
In turn, that means there's no real use for context being passed
to every callback; in the rare cases context is needed, user
code can get it with lws_get_ctx(wsi)
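For instance, a callback needing the context looks like this
sketch:

#include <libwebsockets.h>

static int
callback_http(struct lws *wsi, enum lws_callback_reasons reason,
              void *user, void *in, size_t len)
{
        /* the rare case: recover the context from the wsi */
        struct lws_context *context = lws_get_ctx(wsi);

        (void)context; /* use as needed */

        return 0;
}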
Signed-off-by: Andy Green <andy.green@linaro.org>
Extend the cleanout caused by wsi having a context pointer
into the public api.
There's no point keeping the 1.5 compatibility work: we have
changed the api in several places, and a simple rebuild stopped
being enough a while ago.
Signed-off-by: Andy Green <andy.green@linaro.org>
Now that we have bitten the bullet and given each wsi an
lws_context *, many internal apis that took both a context and
a wsi parameter only need the wsi.
Also simplify the parser code by making a temp var for the
allocated_headers * instead of the long-winded dereference
chain everywhere.
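Illustrative before / after (the member chain shown only
approximates the internals):

/* before: the long-winded chain repeated at every access */
wsi->u.hdr.ah->pos++;

/* after: one temp var at the top of the function */
struct allocated_headers *ah = wsi->u.hdr.ah;

ah->pos++;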
Signed-off-by: Andy Green <andy.green@linaro.org>
This nukes all the oldstyle prefixes except in the compatibility code.
struct libwebsocket becomes struct lws too.
The api docs are updated accordingly as are the READMEs that mention
those apis.
Signed-off-by: Andy Green <andy.green@linaro.org>
Between changing to lws_ a few years ago and the previous two
patches migrating the public apis, there are only a few
internal functions left using libwebsocket_*.
Change those to also use lws_ without regard to compatibility
since they were never visible outside the library.
Signed-off-by: Andy Green <andy.green@linaro.org>
Change all internal uses of rationalized public apis to reflect the
new names.
There are a few things that got changed as a side effect of
search-and-replace matches, but these are almost all internal. I
added a compatibility define for the public enum that got renamed.
Theoretically existing code should not notice the difference from these
two patches. And new code will find the new names.
https://github.com/warmcat/libwebsockets/issues/357
Signed-off-by: Andy Green <andy.green@linaro.org>
It was forgotten in two places that the pending close ack should
be processed when the wsi state is WSI_STATE_RETURNED_CLOSE_ALREADY,
not WSI_STATE_ESTABLISHED. As a result, the close ack wasn't sent
out to the peer.
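The shape of the fix, as a much-simplified sketch
(send_pending_close_ack() is a stand-in name):

/* before: gated on the wrong state, so the ack never went out */
if (wsi->state == WSI_STATE_ESTABLISHED)
        send_pending_close_ack(wsi);

/* after: by this point the wsi has already returned close */
if (wsi->state == WSI_STATE_RETURNED_CLOSE_ALREADY)
        send_pending_close_ack(wsi);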
Read the full incoming TLS/SSL record at once in
libwebsocket_service_fd(): SSL_read() is called until no more
pending data for the current record is buffered in SSL. SSL_read()
is never asked for more than the pending data size for the current
record, ensuring the fd is not read again for new data, which
would otherwise be copied into the SSL buffer.
Noticed by Andrey Pokrovskiy
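The pattern as a standalone sketch using stock OpenSSL calls
(buffer handoff to the parser is elided):

#include <openssl/ssl.h>

static int
drain_record(SSL *ssl, char *buf, int buflen)
{
        int n = SSL_read(ssl, buf, buflen);

        while (n > 0) {
                int pending;

                /* hand buf[0..n) to the parser here */

                /* only read again what is already decrypted and
                 * buffered inside SSL for this record, so the fd
                 * is never touched to pull in a fresh record */
                pending = SSL_pending(ssl);
                if (pending <= 0)
                        break;
                if (pending > buflen)
                        pending = buflen;
                n = SSL_read(ssl, buf, pending);
        }

        return n;
}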
Close processing reused ping processing to save and send the
payload, and set a flag to mark it as a close, but forgot to
change the control packet's opcode accordingly.
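A hedged sketch of the fix; the opcode values are from RFC 6455
and the flag name is a stand-in:

#define WS_OPC_CLOSE 0x8 /* RFC 6455 close control opcode */
#define WS_OPC_PING  0x9 /* RFC 6455 ping control opcode */

/* the payload path is shared with ping, so the outgoing control
 * frame's opcode must follow the "this is a close" flag too */
unsigned char opc = close_is_pending ? WS_OPC_CLOSE : WS_OPC_PING;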
Signed-off-by: Andy Green <andy.green@linaro.org>