1) There's now a .fops pointer that can be set in the context creation
info. If set, the array of fops it points to (terminated by an entry
with .open = NULL) is walked to find the best vfs filesystem path match
(comparing the vfs path to fops.path_prefix) to choose which fops to
use; see the sketch after this list.
If none is given (.fops is NULL in the info) then behaviour is as
before: only the platform-provided fops are used.
2) The built-in file serving now walks any array of fops looking for
the best fops match automatically.
3) lws_plat_file_... apis are renamed to lws_vfs_file_...
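For illustration, registration might look like this; the callback
signature and the members other than .open and .path_prefix are
assumptions here, check the lws headers for the real layout:

static lws_filefd_type my_zip_open(struct lws_plat_file_ops *fops,
                                   const char *filename,
                                   unsigned long *filelen, int flags);

static struct lws_plat_file_ops fops_table[] = {
        {
                .open        = my_zip_open,  /* hypothetical callback */
                /* ...remaining ops elided in this sketch... */
                .path_prefix = "/ziptest/",  /* best path match wins */
        },
        { .open = NULL }  /* terminator */
};

struct lws_context_creation_info info;
memset(&info, 0, sizeof info);
info.fops = fops_table;  /* leave NULL to keep platform fops only */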
1) This makes lwsws run a parent process with the original permissions.
But this process only responds to SIGHUP; it doesn't do anything else.
2) You can now send this parent process a SIGHUP to cause it to
- close listening sockets in existing lwsws processes
- mark those processes as due to exit when their number of active
connections falls to zero
- spawn a fresh child process from scratch, using the latest
configuration file content, latest plugins, etc. It can reopen
listening sockets if it chooses to, or open different listen ports, or
whatever (see the sketch below).
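For illustration, a tiny helper that triggers the reload (a sketch;
how you record the parent pid is deployment-specific, so it is taken
from argv here):

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* usage: reload <lwsws-parent-pid> */
int main(int argc, char **argv)
{
        if (argc < 2) {
                fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                return 1;
        }
        if (kill((pid_t)atoi(argv[1]), SIGHUP) < 0) {
                perror("kill");
                return 1;
        }

        return 0;
}

(Of course kill -HUP from the shell does the same job.)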
Notes:
1) lws_context_destroy() has been split into two pieces... the reason
for the split is that the first part closes the per-vhost protocols,
but since they may have created libuv objects in the per-vhost
protocol storage, these cannot be freed until after the libuv loop has
been run.
Freeing them is the purpose of the second part of the context
destruction, lws_context_destroy2().
For compatibility, if you are not using libuv, the first part calls the
second part. However if you are using libuv, you must now call the
second part from your own main.c after the first part.
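With libuv the teardown then looks something like this sketch,
assuming your application owns the uv loop:

/* part one: closes per-vhost protocols, which may queue libuv
 * handle closes onto the loop */
lws_context_destroy(context);

/* run the loop so libuv delivers the close callbacks and nothing
 * references the per-vhost protocol storage any more */
uv_run(&loop, UV_RUN_DEFAULT);

/* part two: now the remaining allocations can be freed */
lws_context_destroy2(context);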
Via Dosvald
lws_service_tsi(), which has been around a while, actually just
calls through to lws_plat_service_tsi(), meaning there is no
need to expose both apis.
Rename the internal lws_plat_service_tsi() to _lws_plat_service_tsi()
and replace the api export with a #define to lws_service_tsi for
compatibility's sake.
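The shim amounts to this (sketch; the argument list is assumed to
match lws_service_tsi()):

/* internal implementation, renamed with a leading underscore */
int
_lws_plat_service_tsi(struct lws_context *context, int timeout_ms,
                      int tsi);

/* old exported name kept working via a define */
#define lws_plat_service_tsi lws_service_tsi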
Some people are calling service with a zero timeout, taking care not
to busywait via some other external arrangement.
Adapt the forced service signalling to survive this.
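The zero-timeout pattern looks like this sketch, where
wait_for_events() stands in for whatever external arrangement the
application uses:

extern void wait_for_events(void);  /* hypothetical external wait */

void service_forever(struct lws_context *context)
{
        for (;;) {
                /* block somewhere else, eg, on our own poll() set */
                wait_for_events();

                /* zero timeout: service whatever is pending without
                 * ever blocking inside lws */
                lws_service(context, 0);
        }
}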
This patch splits out some lws_context members into a new lws_vhost struct.
- ssl state and options per vhost
- SSL_CTX for serving and client per vhost
- protocols[] per vhost
- extensions[] per vhost
lws_context maintains a linked list of lws_vhosts.
The same lws_context_creation_info struct is used both to regulate
context creation and to create vhosts: for backward compatibility, if
you didn't provide the new LWS_SERVER_OPTION_EXPLICIT_VHOSTS option,
then a default vhost is created at context creation time using the
same info data as the context itself.
If you will have multiple vhosts though, you should give the
LWS_SERVER_OPTION_EXPLICIT_VHOSTS option at context creation time,
create the context first and then the vhosts afterwards using
lws_create_vhost(context, &info);
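Putting it together (protocols_http / protocols_wss are illustrative
per-vhost protocols[] arrays):

struct lws_context_creation_info info;
struct lws_context *context;

memset(&info, 0, sizeof info);
info.options = LWS_SERVER_OPTION_EXPLICIT_VHOSTS; /* no default vhost */
context = lws_create_context(&info);

/* the same info struct is reused to describe each vhost */
info.port = 80;
info.protocols = protocols_http;
lws_create_vhost(context, &info);

info.port = 443;
info.protocols = protocols_wss;
lws_create_vhost(context, &info);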
Although there is a lot of housekeeping to implement this change, there
is almost no additional overhead if you don't use multiple vhosts and
very little api impact (no changes to test apps).
Signed-off-by: Andy Green <andy@warmcat.com>
This is intended to solve a longstanding problem with the
relationship between http/1.1 keep-alive and the service
loop.
Ahs now contain an rx buffer which is used during header processing,
and an ah may not be detached from the wsi until its rx buffer is
exhausted.
Having the rx buffer in the ah means we can delay using the
rx until a later service loop.
Ahs which have pending rx automatically force POLLIN service on the
wsi they are attached to, so we can interleave general service /
connections with draining each ah rx buffer.
The possible http/1.1 situations and their dispositions are:
1) exactly one set of http headers comes. After processing, the ah is
detached since no pending rx is left. If more headers come later, a
fresh ah is acquired when available and the rx flow control blocks
the read until then.
2) more than one whole set of headers comes and we remain in http
mode (no upgrade). The ah is left attached and we return to the
service loop after the first set of headers. We will get forced
service due to the ah having pending content (respecting flow
control) and process the pending rx in the ah. If we use it all up,
we will detach the ah.
3) one set of http headers comes with ws traffic appended. We service
the headers, do the upgrade, and keep the ah until the remaining ws
content is used. When we have exhausted the ws traffic in the ah rx
buffer, we detach the ah.
Since there can be any amount of http/1.1 pipelining on a connection,
and each set may be expensive to service, it is now enforced that we
return to the service loop after each header set is serviced on a
connection.
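In illustrative pseudocode (not the actual library internals):

service_one_header_set(wsi);

if (ah_has_pending_rx(wsi)) {
        /* leave the ah attached; it forces POLLIN on this wsi so
         * the next pipelined header set or leftover ws traffic is
         * drained on a later pass, respecting flow control */
        return; /* back to the service loop after each header set */
}

detach_ah(wsi); /* rx exhausted: ah freed for other connections */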
When I added the forced service for ah with pending buffering,
I added support for it to the windows plat code. However this
is untested.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request splitting the service part
of the context into n service domains, which are load-balanced so
that the most idle one gets the next listen socket accept.
There's a single listen socket on one port still.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
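User code along these lines exercises it (a sketch modelled on the
pthreads test server; info.count_threads and the lws_service_tsi()
arguments should be checked against the headers):

#include <pthread.h>

static struct lws_context *context;

static void *service_thread(void *arg)
{
        int tsi = (int)(long)arg; /* which service domain we poll */

        /* each thread runs its own service loop on its tsi */
        while (lws_service_tsi(context, 50, tsi) >= 0)
                ;

        return NULL;
}

/* after setting info.count_threads = n and creating the context */
void spawn_service_threads(int n)
{
        pthread_t tid;
        long i;

        for (i = 0; i < n; i++)
                pthread_create(&tid, NULL, service_thread, (void *)i);
}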
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
Signed-off-by: Andy Green <andy.green@linaro.org>
Further reduces struct lws size to 512 bytes on x86_64 "for free".
Both this and the last patch only rearrange private struct members.
Also converts a win32-specific member from BOOL to a :1 bitfield.
Signed-off-by: Andy Green <andy.green@linaro.org>
- Mainly symbol length reduction
- Whitespace clean
- Code refactor for linear flow
- Audit @Context for API docs vs changes
Signed-off-by: Andy Green <andy.green@linaro.org>
Since struct lws (wsi) now has its own context pointer, we were able
to remove the need for passing context almost everywhere in the apis.
In turn, that means there's no real use for context being passed to
every callback; in the rare cases context is needed, user code can
get it with lws_get_ctx(wsi).
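A callback then looks like this sketch:

static int my_callback(struct lws *wsi, enum lws_callback_reasons reason,
                       void *user, void *in, size_t len)
{
        /* context no longer arrives as a parameter; recover it from
         * the wsi in the rare cases it is needed */
        struct lws_context *context = lws_get_ctx(wsi);

        (void)context; /* ... use as required ... */

        return 0;
}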
Signed-off-by: Andy Green <andy.green@linaro.org>
Having the lws_context alone doesn't let us track state or act
differently per wsi, which is the most interesting use case. E.g. not
only simply tracking file position / decompression state per wsi, but
also acting differently according to wsi authentication state /
associated cookies.
Signed-off-by: Andy Green <andy.green@linaro.org>
This is a rewrite of the patch from Soapyman here
https://github.com/warmcat/libwebsockets/pull/363
The main changes compared to Soapyman's original patch are
- There's no new stuff in the info struct; user code does any
overrides it may want to do explicitly after lws_create_context()
returns
- User overrides for file ops can call through (subclass) to the original
platform implementation using lws_get_fops_plat()
- A typedef is provided for plat-specific fd type
- Public helpers are provided to allow user code to be platform-independent
about file access, using the lws platform file operations underneath:
static inline lws_filefd_type
lws_plat_file_open(struct lws_plat_file_ops *fops, const char *filename,
unsigned long *filelen, int flags)
static inline int
lws_plat_file_close(struct lws_plat_file_ops *fops, lws_filefd_type fd)
static inline unsigned long
lws_plat_file_seek_cur(struct lws_plat_file_ops *fops, lws_filefd_type fd,
long offset_from_cur_pos)
static inline int
lws_plat_file_read(struct lws_plat_file_ops *fops, lws_filefd_type fd,
unsigned long *amount, unsigned char *buf, unsigned long len)
static inline int
lws_plat_file_write(struct lws_plat_file_ops *fops, lws_filefd_type fd,
unsigned long *amount, unsigned char *buf, unsigned long len)
There's example documentation and implementation in the test server.
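For example, an override that subclasses the platform open, using the
helper prototypes above (the logging is illustrative, and it is
assumed the underlying ops share the helper signatures; see the test
server for the real version):

static lws_filefd_type
my_open(struct lws_plat_file_ops *fops, const char *filename,
        unsigned long *filelen, int flags)
{
        lwsl_notice("%s: %s\n", __func__, filename);

        /* call through (subclass) to the platform implementation */
        return lws_plat_file_open(lws_get_fops_plat(), filename,
                                  filelen, flags);
}

/* after lws_create_context() returns, install my_open over the
 * default .open, as done in the test server */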
Signed-off-by: Andy Green <andy.green@linaro.org>