There's no reason not to have the mounts linked-list init also in the
info struct, rather than providing it as a parameter to
lws_create_vhost(). Now is a good time to normalize that, since this
api only exists in master.
This also allows oldstyle "do everything at context creation time in
one vhost" guys to leverage mounts.
Also there's no reason for the mounts linked-list pointer and all its
uses in lws to be non-const, so make them all explicitly const *.
Update the info struct docs to clarify which members are used when creating
a vhost and which for context creation.
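For illustration, the one-step flow might now look like this (a
sketch; "mount" stands for any populated lws_http_mount):

info.mounts = &mount; /* mounts linked-list, now in the info struct */
context = lws_create_context(&info); /* default vhost picks it up */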
Signed-off-by: Andy Green <andy@warmcat.com>
This allows mounts to define the caching policy of the files inside them.
Support is added in lwsws for controlling it from the config files.
The api for serializing a mount struct opaquely is removed and the
lws_http_mount struct made public... it was getting out of control
trying to hide the options.
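For example, a mount's cache policy can now be given directly in the
public struct (a sketch; values illustrative, exact member names may
differ):

static const struct lws_http_mount mount = {
	.mountpoint = "/",
	.origin = "/var/www/html",
	.def = "index.html",
	.cache_max_age = 3600,		/* seconds */
	.cache_reusable = 1,
	.cache_revalidate = 1,
	.cache_intermediaries = 0,
	.origin_protocol = LWSMPRO_FILE,
	.mountpoint_len = 1,
};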
Signed-off-by: Andy Green <andy@warmcat.com>
This adds the ability to store apache-compatible logs to a file given at
vhost-creation time.
lwsws conf can set it per-vhost using "access-log": "<filepath>"
The feature defaults to disabled at cmake; it can be set
independently, but LWS_WITH_LWSWS sets it on.
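At the api level that corresponds to something like this at
vhost-creation time (a sketch; the info member name is an assumption
from the description):

info.log_filepath = "/var/log/lws/myvhost.access.log";
vhost = lws_create_vhost(context, &info);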
Signed-off-by: Andy Green <andy@warmcat.com>
This adds support for dynamically loaded plugins at runtime, which
can expose their own protocols or extensions transparently.
With these changes lwsws defaults to OFF in cmake, and if enabled it
automatically enables plugins and libuv support.
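For illustration, a plugin can export a protocol along these lines (a
sketch; the init_protocol_* / destroy_protocol_* naming and the
lws_plugin_capability members are assumptions):

#include <libwebsockets.h>

static int
callback_dumb(struct lws *wsi, enum lws_callback_reasons reason,
	      void *user, void *in, size_t len)
{
	return 0;	/* stub protocol callback */
}

static const struct lws_protocols protocols[] = {
	{ "dumb-example", callback_dumb, 0, 128 },
};

LWS_VISIBLE int
init_protocol_dumb_example(struct lws_context *context,
			   struct lws_plugin_capability *c)
{
	if (c->api_magic != LWS_PLUGIN_API_MAGIC)
		return 1;	/* built against a different lws */

	c->protocols = protocols;
	c->count_protocols = 1;
	c->extensions = NULL;
	c->count_extensions = 0;

	return 0;
}

LWS_VISIBLE int
destroy_protocol_dumb_example(struct lws_context *context)
{
	return 0;
}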
Signed-off-by: Andy Green <andy@warmcat.com>
This patch splits out some lws_context members into a new lws_vhost struct.
- ssl state and options per vhost
- SSL_CTX for serving and client per vhost
- protocols[] per vhost
- extensions[] per vhost
lws_context maintains a linked list of lws_vhosts.
The same lws_context_creation_info struct is used to regulate both the
context creation and to create vhosts: for backward compatibility if you
didn't provide the new LWS_SERVER_OPTION_EXPLICIT_VHOSTS option, then
a default vhost is created at context creation time using the same info
data as the context itself.
If you will have multiple vhosts though, you should give the
LWS_SERVER_OPTION_EXPLICIT_VHOSTS option at context creation time,
create the context first and then the vhosts afterwards using
lws_create_vhost(context, &info);
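A minimal sketch of the two-step flow (the protocol arrays are
hypothetical):

struct lws_context_creation_info info;
struct lws_context *context;

memset(&info, 0, sizeof info);
info.options = LWS_SERVER_OPTION_EXPLICIT_VHOSTS;
context = lws_create_context(&info); /* no default vhost is created */

info.port = 7681;
info.protocols = protocols_a;	/* hypothetical per-vhost protocols[] */
lws_create_vhost(context, &info);

info.port = 7682;
info.protocols = protocols_b;
lws_create_vhost(context, &info);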
Although there is a lot of housekeeping to implement this change, there
is almost no additional overhead if you don't use multiple vhosts and
very little api impact (no changes to test apps).
Signed-off-by: Andy Green <andy@warmcat.com>
If you enable -DLWS_WITH_HTTP_PROXY=1 at cmake, the test server has a
new URI path http://localhost:7681/proxytest. If you visit it, a
client connection to http://example.com:80 is spawned, and the
results are piped on to your original connection.
Also with LWS_WITH_HTTP_PROXY enabled at cmake, lws wants to link to an
additional library, "libhubbub". This allows lws to do html rewriting on the
fly, adjusting proxied urls in a lightweight and fast way.
Move the socket bind-to-interface code out of the server code into
libwebsockets.c and make a private api for it.
Signed-off-by: Andy Green <andy.green@linaro.org>
wsi can have a full tree relationship with each other using linked
lists. Closing the parent ensures the children are closed first.
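Conceptually the linkage looks like this (a sketch of the idea, not
the literal struct):

struct lws {
	struct lws *parent;		/* NULL for a root wsi */
	struct lws *child_list;		/* head of this wsi's children */
	struct lws *sibling_list;	/* next child of our parent */
	/* ... */
};

/* closing a wsi walks child_list first, so children always go
 * down before their parent */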
Convert cgi to use this instead of its cgi-specific sub-wsi
management.
Signed-off-by: Andy Green <andy.green@linaro.org>
Server support for http[s] as well as ws[s] is implicit.
But until now the client only supported ws[s].
This allows the user code to pass an explicit http method
like "GET" in the connect_info, disabling the ws upgrade logic.
Then you can also use lws client as http client, not just ws.
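For example (a sketch; assumes the member is named "method" and the
via-info client connect api):

struct lws_client_connect_info i;
struct lws *wsi;

memset(&i, 0, sizeof i);
i.context = context;
i.address = "example.com";
i.port = 80;
i.path = "/";
i.host = i.address;
i.origin = i.address;
i.method = "GET";	/* explicit method: plain http, no ws upgrade */

wsi = lws_client_connect_via_info(&i);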
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds an info member that allows the user code to
set the library's network action timeout in seconds.
If left at the default 0, the build-time default
AWAITING_TIMEOUT continues to be used.
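A sketch, assuming the member is named timeout_secs:

info.timeout_secs = 30;	/* 0 keeps the built-in AWAITING_TIMEOUT */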
As suggested in
https://github.com/warmcat/libwebsockets/issues/427
Signed-off-by: Andy Green <andy.green@linaro.org>
This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
An ah now contains an rx buffer which is used during header
processing, and the ah may not be detached from the wsi until that
rx buffer is exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
An ah with pending rx automatically forces POLLIN service on the wsi
it is attached to, so we can interleave general connection service
with draining each ah rx buffer.
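A hypothetical sketch of the rule (names invented for illustration,
not the actual lws internals):

struct allocated_headers {
	char rx[2048];	/* header-stage rx buffer */
	int rxpos;	/* how much of it we have consumed */
	int rxlen;	/* how much of it is filled */
};

/* before blocking in poll(), any wsi whose attached ah still holds
 * undrained rx (rxpos < rxlen) is serviced as if POLLIN had fired,
 * subject to rx flow control */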
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers comes. After processing, the ah
   is detached since no pending rx is left. If more headers come
   later, a fresh ah is acquired when available and the rx flow
   control blocks the read until then.
2) more than one whole set of headers comes and we remain in http
   mode (no upgrade). The ah is left attached and we return to the
   service loop after the first set of headers. We will get forced
   service due to the ah having pending content (respecting
   flowcontrol) and process the pending rx in the ah. If we use it
   all up, we detach the ah.
3) one set of http headers comes with ws traffic appended. We
   service the headers, do the upgrade, and keep the ah until the
   remaining ws content is used. When we have exhausted the ws
   traffic in the ah rx buffer, we detach the ah.
Since there can be any amount of http/1.1 pipelining on a connection,
and each request may be expensive to service, it's now enforced that
there is a return to the service loop after each header set is
serviced on a connection.
When I added the forced service for ahs with pending buffering, I
added support for it to the windows plat code. However, this is
untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
This gets the libuv stuff plumbed in and working.
Currently it's only workable for a single service thread, and there
is an isolated valgrind problem left:
==28425== 128 bytes in 1 blocks are definitely lost in loss record 3 of 3
==28425== at 0x4C28C50: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x4C2AB1E: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x58BBB27: maybe_resize (core.c:748)
==28425== by 0x58BBB27: uv__io_start (core.c:787)
==28425== by 0x58C1B80: uv__signal_loop_once_init (signal.c:225)
==28425== by 0x58C1B80: uv_signal_init (signal.c:260)
==28425== by 0x58BF7A6: uv_loop_init (loop.c:66)
==28425== by 0x4157F5: lws_uv_initloop (libuv.c:89)
==28425== by 0x405536: main (test-server-libuv.c:284)
libuv wants to sign off on all libuv 'handles' that will close, and
calls back to confirm each close asynchronously. The wsi close
function is adapted, when libuv is in use, to work with libuv
accordingly and to exit the uv loop when the number of remaining wsi
reaches zero.
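For illustration, the intended usage is along these lines (a sketch;
the exact lws_uv_initloop signature is an assumption):

uv_loop_t loop;

uv_loop_init(&loop);
lws_uv_initloop(context, &loop, 0); /* attach lws fds to the loop */
uv_run(&loop, UV_RUN_DEFAULT);	    /* exits when the last wsi closes */
uv_loop_close(&loop);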
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request that the service part of the
context be split into n service domains, which are load-balanced so
that the most idle one gets the next listen socket accept.
There's a single listen socket on one port still.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
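A sketch of the user side (assuming the info member count_threads and
the per-thread service api lws_service_tsi()):

#include <stdint.h>
#include <pthread.h>
#include <libwebsockets.h>

static struct lws_context *context;
static volatile int force_exit;

static void *
thread_service(void *arg)
{
	int tsi = (int)(intptr_t)arg; /* this thread's service domain */

	while (!force_exit)
		lws_service_tsi(context, 50, tsi);

	return NULL;
}

/* at init: set info.count_threads = n, create the context, then
 * spawn one pthread per tsi in 0..n-1 running thread_service() */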
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
Signed-off-by: Andy Green <andy.green@linaro.org>
In the case we have a lot of connections, checking them all for timeout state
once a second becomes burdensome. At the moment if you have 100K connections,
once a second they all get checked for timeout in a loop.
This patch adds a doubly-linked list, rooted in the context and
threaded through each wsi, and only wsi with pending timeouts appear
on it. At checking time we traverse the list, which costs nothing if
it is empty, because then nobody has a pending timeout.
Similarly, adding to and removing from the list costs almost nothing,
since no iteration is required no matter how big the list is.
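Conceptually (a sketch; the real member names may differ):

struct lws {
	struct lws *timeout_list;	/* next wsi with pending timeout */
	struct lws **timeout_list_prev; /* points at whatever points at
					 * us, so removal is O(1) */
	/* ... */
};

/* the once-a-second check walks only this list; if it is empty,
 * the check costs nothing regardless of connection count */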
The extra 8 or 16 bytes in the wsi are offset a little by demoting
.pps from int to char (saving 3 bytes), and by trimming max active
extensions to 2, since we only provide one; that alone saves 8 / 16
bytes if extensions are enabled.
Signed-off-by: Andy Green <andy.green@linaro.org>