This adds a new member, ws_ping_pong_interval, to the context creation info struct.
If nonzero, it sets the number of seconds an established ws connection is
allowed to be idle before a PING is forced to be sent. If zero (the default),
tracking of idle connections is disabled, for backwards compatibility.
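For example, a minimal sketch of enabling it at context creation; the surrounding
fields follow the usual test server pattern and protocols[] is assumed to be
defined by the application:

#include <string.h>
#include <libwebsockets.h>

extern const struct lws_protocols protocols[];	/* defined by the application */

struct lws_context *
create_context_with_pingpong(void)
{
	struct lws_context_creation_info info;

	memset(&info, 0, sizeof info);
	info.port = 7681;
	info.protocols = protocols;
	/* force a PING on established ws connections idle for 20s;
	 * leaving this zero keeps the old behaviour (no idle tracking) */
	info.ws_ping_pong_interval = 20;

	return lws_create_context(&info);
}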
Timeouts cover both the period between the decision to send the PING and it
actually being sent (since that needs the socket to become writeable), and the
period between the PING being sent and the PONG coming back.
INFO debug logs are issued while the timeout handling is operating.
You can test the server side by running the test server hacked to set
ws_ping_pong_interval, with a debug log mask of 15. Both the mirror protocol and
the server-status protocol are idle if nothing is happening and will trigger the
PING / PONG testing. (You can also test using lwsws and /etc/lwsws/conf with
"ws-pingpong-secs": "20" in the global section.)
For the client, run the test client with -n -P 20 for a 20s interval. -n stops the
test client writing using the mirror protocol, so it will be idle and trigger the
PING / PONGs.
The timeout interval may be up to 10s late, since lws only checks for affected
connections every 10s.
This clears up a couple of issues with client connect.
- if CLIENT_CONNECTION_ERROR is coming, the 'in' argument to the callback
now documents which of the many ways the rejection may have happened.
It's still possible that 'in' will be NULL if the peer just hung up on us,
but there are now many more canned strings describing the issue available
at the callback:
"getaddrinfo (ipv6) failed"
"unknown address family"
"getaddrinfo (ipv4) failed"
"set socket opts failed"
"insert wsi failed"
"lws_ssl_client_connect1 failed"
"lws_ssl_client_connect2 failed"
"Peer hung up"
"read failed"
"HS: URI missing"
"HS: Redirect code but no Location"
"HS: URI did not parse"
"HS: Redirect failed"
"HS: Server did not return 200"
"HS: OOM"
"HS: disallowed by client filter"
"HS: disallowed at ESTABLISHED"
"HS: ACCEPT missing"
"HS: ws upgrade response not 101"
"HS: UPGRADE missing"
"HS: Upgrade to something other than websocket"
"HS: CONNECTION missing"
"HS: UPGRADE malformed"
"HS: PROTOCOL malformed"
"HS: Cannot match protocol"
"HS: EXT: list too big"
"HS: EXT: failed setting defaults"
"HS: EXT: failed parsing defaults"
"HS: EXT: failed parsing options"
"HS: EXT: Rejects server options"
"HS: EXT: unknown ext"
"HS: Accept hash wrong"
"HS: Rejected by filter cb"
"HS: OOM"
"HS: SO_SNDBUF failed"
"HS: Rejected at CLIENT_ESTABLISHED"
- until now the user code did not get the new wsi created by the client
connection action until the client connect api returned. However the client
connection action may provoke callbacks like CLIENT_CONNECTION_ERROR before
then, and if multiple client connections are initiated, user code cannot tell
which connection the callback applies to. The wsi is provided in the callback,
but the client connect api has not yet returned to hand that wsi to the user
code.
To solve that there is a new member added to the client connect info struct,
pwsi, which lets you pass a pointer to a struct lws * in the user code that
will get filled in with the new wsi. That happens before any callbacks can be
provoked, and it is set back to NULL if the connect action fails before
returning from the client connect api.
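A minimal sketch of using pwsi together with the richer error strings follows;
the connection target and protocol name are illustrative (taken from the test
apps) and the error handling is only an example:

#include <string.h>
#include <libwebsockets.h>

static struct lws *client_wsi;

static int
callback_client(struct lws *wsi, enum lws_callback_reasons reason,
		void *user, void *in, size_t len)
{
	switch (reason) {
	case LWS_CALLBACK_CLIENT_CONNECTION_ERROR:
		/* 'in' may be NULL if the peer just hung up, otherwise it
		 * points at one of the canned strings listed above */
		lwsl_err("client connection error: %s\n",
			 in ? (const char *)in : "(null)");
		client_wsi = NULL;
		break;
	default:
		break;
	}

	return 0;
}

void
start_client(struct lws_context *context)
{
	struct lws_client_connect_info i;

	memset(&i, 0, sizeof i);
	i.context = context;
	i.address = "localhost";
	i.port = 7681;
	i.path = "/";
	i.host = i.address;
	i.origin = i.address;
	i.protocol = "lws-mirror-protocol";
	/* new member: filled with the new wsi before any callback can be
	 * provoked, and set back to NULL if the connect fails early */
	i.pwsi = &client_wsi;

	lws_client_connect_via_info(&i);
}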
Otherwise lwsws started at boot on a system without an RTC may believe it was
started in 1970, and when the clock is set by ntpd a little later, appear to
have been up for 16,000 days...
Signed-off-by: Andy Green <andy@warmcat.com>
This makes it easy for user code to choose the size of the per-thread
buffer used by various things in lws, including file transfer chunking.
Previously it was fixed at 4096; if you leave info.pt_serv_buf_size as zero,
that is still the default.
With some caveats, you can increase transfer efficiency by increasing it
to, eg, 128KiB, if that makes sense for your memory situation.
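For example, in the context creation info (a sketch; everything else as usual):

/* enlarge the per-thread service buffer; leaving this zero keeps the 4096 default */
info.pt_serv_buf_size = 128 * 1024;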
Signed-off-by: Andy Green <andy@warmcat.com>
This adds support for plugins that are dynamically loaded at runtime and
can expose their own protocols or extensions transparently.
With these changes lwsws defaults to OFF in cmake, and if enabled it
automatically enables plugins and libuv support.
Signed-off-by: Andy Green <andy@warmcat.com>
If you enable -DLWS_WITH_HTTP_PROXY=1 at cmake, the test server has a
new URI path, http://localhost:7681/proxytest. If you visit it, a client
connection to http://example.com:80 is spawned, and the results are piped on
to your original connection.
Also with LWS_WITH_HTTP_PROXY enabled at cmake, lws wants to link to an
additional library, "libhubbub". This allows lws to do html rewriting on the
fly, adjusting proxied urls in a lightweight and fast way.
wsi can now have a full tree relationship with each other using linked
lists; closing the parent ensures the children are closed first.
Cgi is converted to use this instead of its cgi-specific sub-wsi
management.
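Illustratively, the child-first close ordering looks like this (a simplified
sketch with made-up names, not the lws internals):

/* simplified sketch of child-first close over a wsi tree */
struct wsi_sketch {
	struct wsi_sketch *child_list;		/* head of this wsi's children */
	struct wsi_sketch *sibling_list;	/* next child of the same parent */
};

static void
close_one(struct wsi_sketch *wsi)
{
	/* stand-in for the real per-wsi close */
	(void)wsi;
}

static void
close_tree(struct wsi_sketch *wsi)
{
	/* close every child subtree before the parent itself */
	while (wsi->child_list) {
		struct wsi_sketch *child = wsi->child_list;

		close_tree(child);
		wsi->child_list = child->sibling_list;	/* unlink the closed child */
	}

	close_one(wsi);
}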
Signed-off-by: Andy Green <andy.green@linaro.org>
Server support for http[s] as well as ws[s] is implicit.
But until now the client only supported ws[s].
This allows the user code to pass an explicit http method
like "GET" in the connect_info, disabling the ws upgrade logic.
Then you can also use the lws client as an http client, not just ws.
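For example (a sketch; the context and callback setup are as in the usual
client examples and the target values are illustrative):

struct lws_client_connect_info i;

memset(&i, 0, sizeof i);
i.context = context;
i.address = "example.com";
i.port = 80;
i.path = "/";
i.host = i.address;
i.origin = i.address;
i.method = "GET";	/* explicit http method: the ws upgrade logic is skipped */

lws_client_connect_via_info(&i);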
Signed-off-by: Andy Green <andy.green@linaro.org>
This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
An ah now contains an rx buffer which is used during header
processing, and the ah may not be detached from the wsi
until the rx buffer is exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
An ah with pending rx forces POLLIN service on the wsi it
is attached to automatically, so we can interleave
general service / connections with draining each ah rx
buffer.
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers comes. After processing,
the ah is detached since there is no pending rx left. If more
headers come later, a fresh ah is acquired when available,
and rx flow control blocks the read until then.
2) more than one whole set of headers comes and we remain in
http mode (no upgrade). The ah is left attached and we
return to the service loop after the first set of headers.
We will get forced service due to the ah having pending
content (respecting flow control) and process the pending
rx in the ah. If we use it all up, we will detach the
ah.
3) one set of http headers comes with ws traffic appended.
We service the headers, do the upgrade, and keep the ah
until the remaining ws content is used. When we have
exhausted the ws traffic in the ah rx buffer, we
detach the ah.
Since there can be any amount of http/1.1 pipelining on a
connection, and each request may be expensive to service, it's now
enforced that there is a return to the service loop after each
header set serviced on a connection.
When I added the forced service for ah with pending buffering,
I added support for it to the windows plat code. However this
is untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
This gets the libuv stuff plumbed in and working.
Currently it's only workable for one service thread, and there
is an isolated valgrind problem left:
==28425== 128 bytes in 1 blocks are definitely lost in loss record 3 of 3
==28425== at 0x4C28C50: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x4C2AB1E: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x58BBB27: maybe_resize (core.c:748)
==28425== by 0x58BBB27: uv__io_start (core.c:787)
==28425== by 0x58C1B80: uv__signal_loop_once_init (signal.c:225)
==28425== by 0x58C1B80: uv_signal_init (signal.c:260)
==28425== by 0x58BF7A6: uv_loop_init (loop.c:66)
==28425== by 0x4157F5: lws_uv_initloop (libuv.c:89)
==28425== by 0x405536: main (test-server-libuv.c:284)
libuv wants to sign off on all libuv 'handles' that will close, and
calls back to confirm the close asynchronously. The wsi close function
is adapted, when libuv is in use, to work with libuv accordingly and to exit
the uv loop when the number of remaining wsi reaches zero.
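The general shape of that pattern is sketched below using plain libuv calls;
this is illustrative, not the lws-internal code, and the names are made up:

#include <uv.h>

static int live_wsi_handles;

static void
close_cb(uv_handle_t *handle)
{
	/* libuv confirms the close here, asynchronously */
	if (--live_wsi_handles == 0)
		uv_stop(handle->loop);	/* no wsi left: leave the uv loop */
}

static void
begin_close(uv_poll_t *watcher)
{
	/* ask libuv to close the handle; the confirmation comes later */
	uv_close((uv_handle_t *)watcher, close_cb);
}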
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request that the service part of the
context be split into n service domains, which are load-balanced so that the
most idle one gets the next listen socket accept.
There's a single listen socket on one port still.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
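In user code the shape is roughly as follows (a sketch along the lines of the
pthreads test server: info.count_threads selects the number of service threads
and each thread calls lws_service_tsi() with its own thread index; thread
creation and the usual setup are elided):

#include <pthread.h>
#include <libwebsockets.h>

static struct lws_context *context;

/* sketch: one of these runs per service thread; the thread index is
 * passed as the pthread argument */
static void *
thread_service(void *threadid)
{
	while (lws_service_tsi(context, 50, (int)(long)threadid) >= 0)
		;

	pthread_exit(NULL);

	return NULL;
}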
Signed-off-by: Andy Green <andy.green@linaro.org>
In the case we have a lot of connections, checking them all for timeout state
once a second becomes burdensome. At the moment if you have 100K connections,
once a second they all get checked for timeout in a loop.
This patch adds a doubly-linked list, rooted in the context and threaded
through each wsi, on which only wsi with a pending timeout appear. At checking
time we traverse the list, which costs nothing if it is empty because nobody
has a pending timeout.
Similarly, adding to and removing from the list costs almost nothing, since no
iteration is required no matter how big the list.
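Illustratively, such a list has this shape (a simplified sketch with made-up
names, not the actual lws structures):

#include <time.h>

/* simplified sketch of a context-rooted pending-timeout list */
struct wsi_sketch {
	struct wsi_sketch *timeout_prev, *timeout_next;
	time_t timeout_at;	/* 0 = no pending timeout, not on the list */
};

struct context_sketch {
	struct wsi_sketch *timeout_head;
};

/* O(1): only wsi with a pending timeout are ever linked in */
static void
timeout_list_add(struct context_sketch *ctx, struct wsi_sketch *wsi)
{
	wsi->timeout_prev = NULL;
	wsi->timeout_next = ctx->timeout_head;
	if (ctx->timeout_head)
		ctx->timeout_head->timeout_prev = wsi;
	ctx->timeout_head = wsi;
}

/* O(1): no iteration is needed to unlink, however long the list is */
static void
timeout_list_remove(struct context_sketch *ctx, struct wsi_sketch *wsi)
{
	if (wsi->timeout_prev)
		wsi->timeout_prev->timeout_next = wsi->timeout_next;
	else
		ctx->timeout_head = wsi->timeout_next;
	if (wsi->timeout_next)
		wsi->timeout_next->timeout_prev = wsi->timeout_prev;
	wsi->timeout_prev = wsi->timeout_next = NULL;
}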
The extra 8 or 16 bytes per wsi are offset a little bit by demoting .pps
from int to char (saving 3 bytes), and by trimming the max active extensions
to 2, since we only provide one; that alone saves 8 / 16 bytes if extensions
are enabled.
Signed-off-by: Andy Green <andy.green@linaro.org>