Lws maintains a linked-list of the wsi that are on the same vhost protocol;
it walks this list to perform the ..._all_protocol() type apis.
Client connections also participate in this list, but in the case where the
selected protocol is not given during negotiation (a legal case where
the server default protocol is selected) we missed adding the newly
ws-negotiated client wsi to the list.
This patch makes sure we add the wsi to the vhost protocols[0] list
in that case.
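As an illustration (not part of the patch itself), the kind of broadcast
api that depends on this list being complete looks like the sketch below;
"protocols" is assumed to be the user's own lws_protocols array:

  /* request a writeable callback on every connection, client or
   * server, bound to protocols[0] on this context */
  lws_callback_on_writable_all_protocol(context, &protocols[0]);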
https://github.com/warmcat/libwebsockets/issues/716
This adds a new member to the context creation info struct "ws_ping_pong_interval".
If nonzero, it sets the number of seconds that established ws connections are
allowed to be idle before a PING is forced to be sent. If zero (the default) then
tracking of idle connections is disabled for backwards compatibility.
Timeouts cover both the period between decision to send the ping and it being
sent (because it needs the socket to become writeable), and the period between
the ping being sent and the PONG coming back.
INFO debug logs are issued when the timeout handling is operating.
You can test the server side by running the test server hacked to set ws_ping_pong_interval
and debug log mask of 15. Both the mirror protocol and the server-status protocol are
idle if nothing is happening and will trigger the PING / PONG testing. (You can also
test using lwsws and /etc/lwsws/conf with "ws-pingpong-secs": "20" in the global section)
For client, run the test client with -n -P 20 for 20s interval. -n stops the test client
writing using the mirror protocol, so it will be idle and trigger the PING / PONGs.
The timeout interval may be up to +10s late, as lws checks for affected connections every
10s.
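For reference, a minimal sketch of enabling this from user code (only the
new member is specific to this patch; the rest is the usual context setup):

  struct lws_context_creation_info info;

  memset(&info, 0, sizeof(info));
  info.port = 7681;
  info.protocols = protocols;       /* user's own protocols array */
  info.ws_ping_pong_interval = 20;  /* force a PING after 20s idle */

  context = lws_create_context(&info);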
This clears up a couple of issues with client connect.
- if CLIENT_CONNECTION_ERROR is coming, which of the many
ways the rejection may have happened is now documented in the
in argument. It's still possible that in will be NULL if the
connection just got hung up on, but there are now MANY more
canned strings describing the issue available at the callback
(see the sketch after the list):
"getaddrinfo (ipv6) failed"
"unknown address family"
"getaddrinfo (ipv4) failed"
"set socket opts failed"
"insert wsi failed"
"lws_ssl_client_connect1 failed"
"lws_ssl_client_connect2 failed"
"Peer hung up"
"read failed"
"HS: URI missing"
"HS: Redirect code but no Location"
"HS: URI did not parse"
"HS: Redirect failed"
"HS: Server did not return 200"
"HS: OOM"
"HS: disallowed by client filter"
"HS: disallowed at ESTABLISHED"
"HS: ACCEPT missing"
"HS: ws upgrade response not 101"
"HS: UPGRADE missing"
"HS: Upgrade to something other than websocket"
"HS: CONNECTION missing"
"HS: UPGRADE malformed"
"HS: PROTOCOL malformed"
"HS: Cannot match protocol"
"HS: EXT: list too big"
"HS: EXT: failed setting defaults"
"HS: EXT: failed parsing defaults"
"HS: EXT: failed parsing options"
"HS: EXT: Rejects server options"
"HS: EXT: unknown ext"
"HS: Accept hash wrong"
"HS: Rejected by filter cb"
"HS: OOM"
"HS: SO_SNDBUF failed"
"HS: Rejected at CLIENT_ESTABLISHED"
- until now the user code did not get the new wsi that was created
by the client connection action until that api returned. However the
client connection action may provoke callbacks like
CLIENT_CONNECTION_ERROR before then; if multiple client connections
are initiated, user code cannot tell which connection the callback
applies to. The wsi is provided in the callback, but the client
connect api has not yet returned to hand that wsi to the user code.
To solve that, a new member has been added to the client connect info
struct, pwsi, which lets you pass a pointer to a struct lws * in the
user code that will get filled in with the new wsi. That happens
before any callbacks can be provoked, and it is set back to NULL if the
connect action fails before returning from the client connect api.
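A sketch of using the new member (the rest of the connect info is
assumed to be filled in by the user as usual):

  struct lws_client_connect_info i;
  struct lws *client_wsi;

  memset(&i, 0, sizeof(i));
  /* ... context, address, port, path etc filled in as usual ... */
  i.pwsi = &client_wsi;  /* set before any callback can be provoked */

  if (!lws_client_connect_via_info(&i))
      /* failed before returning: client_wsi was set back to NULL */
      lwsl_notice("client connect failed\n");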
This makes it easy for user code to choose the size of the per-thread
buffer used by various things in lws, including file transfer chunking.
Previously it was 4096; if you leave info.pt_serv_buf_size as zero, that
is still the default.
With some caveats, you can increase transfer efficiency by increasing it
to, eg, 128KiB, if that makes sense for your memory situation.
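For example (a sketch, not part of the patch itself):

  info.pt_serv_buf_size = 128 * 1024;  /* per-thread service buffer */

Leaving it at zero keeps the old 4096-byte default.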
Signed-off-by: Andy Green <andy@warmcat.com>
Add a test html button that will send 9KB of junk to confirm it
https://github.com/warmcat/libwebsockets/issues/480
permessage-deflate now checks the protocol rx buffer size for being
>=128, if not, permessage-deflate is disabled on that connection.
If it is >=128 but less than the zlib decompress buffer size, the
zlib decompress buffer size for that connection is reduced to the
nearest power of two of the protocol rx buf size.
To test this, dumb_increment is left violating the >= 128 rx buffer
size, and permessage-deflate can be seen to be disabled on its
connections in the test html.
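A sketch of a protocols[] entry that keeps permessage-deflate enabled
(the names are illustrative only):

  { "my-protocol", my_callback, sizeof(struct my_per_session_data),
    4096 /* rx_buffer_size >= 128, so permessage-deflate stays enabled */ },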
Signed-off-by: Andy Green <andy@warmcat.com>
This patch splits out some lws_context members into a new lws_vhost struct.
- ssl state and options per vhost
- SSL_CTX for serving and client per vhost
- protocols[] per vhost
- extensions[] per vhost
lws_context maintains a linked list of lws_vhosts.
The same lws_context_creation_info struct is used to regulate both the
context creation and to create vhosts: for backward compatibility if you
didn't provide the new LWS_SERVER_OPTION_EXPLICIT_VHOSTS option, then
a default vhost is created at context creation time using the same info
data as the context itself.
If you will have multiple vhosts though, you should give the
LWS_SERVER_OPTION_EXPLICIT_VHOSTS option at context creation time,
create the context first and then the vhosts afterwards using
lws_create_vhost(contest, &info);
Although there is a lot of housekeeping to implement this change, there
is almost no additional overhead if you don't use multiple vhosts and
very little api impact (no changes to test apps).
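A minimal sketch of the explicit-vhost flow described above:

  info.options = LWS_SERVER_OPTION_EXPLICIT_VHOSTS;
  context = lws_create_context(&info);  /* no default vhost created */

  /* then fill in the per-vhost fields (port, protocols, ssl...) and: */
  if (!lws_create_vhost(context, &info))
      lwsl_err("vhost creation failed\n");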
Signed-off-by: Andy Green <andy@warmcat.com>
If you enable -DLWS_WITH_HTTP_PROXY=1 at cmake, the test server has a
new URI path, http://localhost:7681/proxytest
If you visit it, a client connection to http://example.com:80 is spawned,
and the results are piped on to your original connection.
Also with LWS_WITH_HTTP_PROXY enabled at cmake, lws wants to link to an
additional library, "libhubbub". This allows lws to do html rewriting on the
fly, adjusting proxied urls in a lightweight and fast way.
Server support for http[s] as well as ws[s] is implicit.
But until now the client only supported ws[s].
This allows the user code to pass an explicit http method
like "GET" in the connect_info, disabling the ws upgrade logic.
Then you can also use the lws client as an http client, not just for ws.
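A sketch of driving a plain http GET with the client api (connect info
members otherwise filled in as for a ws connection, and the
connect-via-info api from the same series is assumed):

  i.address = "example.com";
  i.port = 80;
  i.path = "/";
  i.method = "GET";  /* an explicit method disables the ws upgrade */

  lws_client_connect_via_info(&i);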
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds an info member that allows the user code to
set the library's network action timeout in seconds.
If left at the default 0, the build-time default
AWAITING_TIMEOUT continues to be used.
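For example (a sketch, assuming the info member is named timeout_secs):

  info.timeout_secs = 30;  /* network action timeout, in seconds */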
As suggested
https://github.com/warmcat/libwebsockets/issues/427
Signed-off-by: Andy Green <andy.green@linaro.org>
This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
Ahs now contain an rx buffer which is used during header
processing, and the ah may not be detached from the wsi
until the rx buffer is exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
Ahs which have pending rx force POLLIN service on the wsi
they are attached to automatically, so we can interleave
general service / connections with draining each ah rx
buffer.
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers come. After processing,
the ah is detached since no pending rx left. If more
headers come later, a fresh ah is acquired when available
and the rx flow control blocks the read until then.
2) more than one whole set of headers come and we remain in
http mode (no upgrade). The ah is left attached and
returns to the service loop after the first set of headers.
We will get forced service due to the ah having pending
content (respecting flowcontrol) and process the pending
rx in the ah. If we use it all up, we will detach the
ah.
3) one set of http headers come with ws traffic appended.
We service the headers, do the upgrade, and keep the ah
until the remaining ws content is used. When we have
exhausted the ws traffic in the ah rx buffer, we
detach the ah.
Since there can be any amount of http/1.1 pipelining on a
connection, and each may be expensive to service, it's now
enforced that there is a return to the service loop after each
header set is serviced on a connection.
When I added the forced service for ah with pending buffering,
I added support for it to the windows plat code. However this
is untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
This gets the libuv stuff plumbed in and working.
Currently it's only workable for a single service thread, and there
is an isolated valgrind problem left:
==28425== 128 bytes in 1 blocks are definitely lost in loss record 3 of 3
==28425== at 0x4C28C50: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x4C2AB1E: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x58BBB27: maybe_resize (core.c:748)
==28425== by 0x58BBB27: uv__io_start (core.c:787)
==28425== by 0x58C1B80: uv__signal_loop_once_init (signal.c:225)
==28425== by 0x58C1B80: uv_signal_init (signal.c:260)
==28425== by 0x58BF7A6: uv_loop_init (loop.c:66)
==28425== by 0x4157F5: lws_uv_initloop (libuv.c:89)
==28425== by 0x405536: main (test-server-libuv.c:284)
libuv wants to sign off on all libuv 'handles' that will close, and
to call back with the close confirmation asynchronously. The wsi close
function is adapted, when libuv is in use, to work with libuv accordingly
and to exit the uv loop when the number of remaining wsi reaches zero.
Signed-off-by: Andy Green <andy.green@linaro.org>
Connections must hold an ah for the whole time they are
processing one header set, even if eg, the headers are
fragmented and it involves network roundtrip times.
However on http1.1 / keepalive, it must drop the ah when
there are no more header sets to deal with, and reacquire
the ah later when more data appears. It's because the
time between header sets / http1.1 requests is unbounded
and the ah would be tied up forever.
But in the case that we got pipelined http1.1 requests,
even partial ones already buffered, we must keep the ah,
resetting it instead of dropping it. Because we store
the rx data conveniently in a per-tsi buffer since it only
does one thing at a time per thread, we cannot go back to
the event loop to await a new ah inside one service action.
But that's no problem: since we definitely already have an ah,
we just reuse it at http completion time if more rx is
already buffered.
NB: attack.sh makes requests with echo | nc; this
accidentally sends a trailing '\n' from the echo, exposing
this problem. With this patch attack.sh can complete cleanly.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request that the service part of the
context be split into n service domains, which are load-balanced so that
the most idle one gets the next listen socket accept.
There's a single listen socket on one port still.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
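A hedged sketch of the user-side pattern (the thread spawning and the
"interrupted" flag are the user's own; the creation-info member
count_threads and the per-thread service api lws_service_tsi() are
assumed here):

  /* at context creation time */
  info.count_threads = 8;  /* split service into 8 domains */
  context = lws_create_context(&info);

  /* each of the 8 user-spawned threads then runs its own loop */
  void *service_thread(void *arg)
  {
      int tsi = (int)(intptr_t)arg;  /* thread service index 0..7 */

      while (!interrupted)
          lws_service_tsi(context, 50, tsi);

      return NULL;
  }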
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds redirect support to the client side. Lws will follow
server redirects (301) up to three deep.
Signed-off-by: Andy Green <andy.green@linaro.org>
In most cases the close api will see that it should send the CCE
(CLIENT_CONNECTION_ERROR callback) because we are still in the
waiting-for-server-reply state until the end of the interpretation.
Only if we completed the interpretation and moved on to ESTABLISHED
do we need to handle sending it ourselves.
Signed-off-by: Andy Green <andy.green@linaro.org>
Server side has had immediate RX flow control for quite a while.
But the client side made do with RX continuing until whatever had already been received was exhausted.
For what Autobahn tests, that's not enough.
This patch gives client-side RX flow control the same immediate effect the server
side enjoys, re-using the same code.
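The user-facing api is unchanged; a sketch of the usual pattern, which
now also takes immediate effect on client connections:

  lws_rx_flow_control(wsi, 0);  /* stop delivering RX to the callback */
  ...
  lws_rx_flow_control(wsi, 1);  /* resume RX delivery */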
Signed-off-by: Andy Green <andy.green@linaro.org>
The only guy who cared about this for a long while
(since I eliminated the pre-standard protocol variants)
was sending a close frame.
- Set it to 0 so old code remains happy. It only affects
the user code buffer commit; if there's overcommit, no harm is
done, so there's no direct effect on user ABI.
- Remove all uses inside the library. The sample apps
don't have it any more and that's the recommendation now.
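A sketch of the buffer layout user code needs now (only the pre-padding
remains meaningful; the trailing padding can be dropped):

  unsigned char buf[LWS_SEND_BUFFER_PRE_PADDING + 128];

  /* place the payload at &buf[LWS_SEND_BUFFER_PRE_PADDING], then: */
  lws_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING], len, LWS_WRITE_TEXT);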
Signed-off-by: Andy Green <andy.green@linaro.org>
- Mainly symbol length reduction
- Whitespace clean
- Code refactor for linear flow
- Audit @Context for API docs vs changes
Signed-off-by: Andy Green <andy.green@linaro.org>
Since struct lws (wsi) now has its own context pointer,
we were able to remove the need for passing context
almost everywhere in the apis.
In turn, that means there's no real use for context being
passed to every callback; in the rare cases context is
needed user code can get it with lws_get_ctx(wsi)
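For example, a callback that really does need it can do (a sketch, using
the accessor name from this commit):

  struct lws_context *context = lws_get_ctx(wsi);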
Signed-off-by: Andy Green <andy.green@linaro.org>
Extend the cleanout caused by wsi having a context pointer
into the public api.
There's no point keeping the 1.5 compatibility work;
we have changed the api in several places, and a simple
rebuild stopped being enough a while ago.
Signed-off-by: Andy Green <andy.green@linaro.org>
Now that we have bitten the bullet and given each wsi an lws_context *,
many internal apis that took both a context and a wsi parameter only
need the wsi.
Also simplify the parser code by making a temp var for the
allocated_headers * instead of the long-winded
dereference chain everywhere.
Signed-off-by: Andy Green <andy.green@linaro.org>