This gets the libuv stuff plumbed in and working.
Currently it's only workable for a single service thread, and there
is one isolated valgrind problem left:
==28425== 128 bytes in 1 blocks are definitely lost in loss record 3 of 3
==28425== at 0x4C28C50: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x4C2AB1E: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==28425== by 0x58BBB27: maybe_resize (core.c:748)
==28425== by 0x58BBB27: uv__io_start (core.c:787)
==28425== by 0x58C1B80: uv__signal_loop_once_init (signal.c:225)
==28425== by 0x58C1B80: uv_signal_init (signal.c:260)
==28425== by 0x58BF7A6: uv_loop_init (loop.c:66)
==28425== by 0x4157F5: lws_uv_initloop (libuv.c:89)
==28425== by 0x405536: main (test-server-libuv.c:284)
libuv wants to sign off on every libuv 'handle' that will close, and
calls back to confirm the close asynchronously. The wsi close function
is adapted to work with libuv accordingly when libuv is in use, and it
exits the uv loop when the number of remaining wsi reaches zero.
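As a rough sketch of that close flow (the counter and callback names here
are illustrative, not the library internals): each uv handle is closed with
uv_close(), and only in the confirmation callback is it safe to free the
associated memory and, once no wsi remain, stop the loop.

#include <uv.h>
#include <stdlib.h>

static int remaining_wsi; /* illustrative count of live wsi */

static void
wsi_close_confirm_cb(uv_handle_t *handle)
{
	/* libuv has signed off on the close: now it is safe to free */
	free(handle->data);
	if (--remaining_wsi == 0)
		uv_stop(handle->loop); /* leave the uv loop when nothing is left */
}

static void
wsi_begin_close(uv_poll_t *watcher)
{
	/* ask libuv to close the handle; confirmation arrives asynchronously */
	uv_close((uv_handle_t *)watcher, wsi_close_confirm_cb);
}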
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds support for multithreaded service to lws without adding any
threading or locking code in the library.
At context creation time you can request that the service part of the
context be split into n service domains, which are load-balanced so that
the most idle one gets the next listen socket accept.
There's a single listen socket on one port still.
User code may then spawn n threads doing n service loops / poll()s
simultaneously. Locking is only required (I think) in the existing
FD lock callbacks already handled by the pthreads server example,
and that locking takes place in user code. So the library remains
completely agnostic about the threading / locking scheme.
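For illustration, a minimal sketch of that user-side locking along the
lines of the pthreads test server (the mutex and callback handler names
are examples, not prescribed by the library):

#include <libwebsockets.h>
#include <pthread.h>

static pthread_mutex_t lock_poll = PTHREAD_MUTEX_INITIALIZER;

static int
callback_http(struct lws *wsi, enum lws_callback_reasons reason,
	      void *user, void *in, size_t len)
{
	switch (reason) {
	case LWS_CALLBACK_LOCK_POLL:
		/* serialize internal poll fd table changes across threads */
		pthread_mutex_lock(&lock_poll);
		break;
	case LWS_CALLBACK_UNLOCK_POLL:
		pthread_mutex_unlock(&lock_poll);
		break;
	default:
		break;
	}

	return 0;
}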
And by default, it's completely compatible with one service thread
so no changes are required by people uninterested in multithreaded
service.
However, for people interested in extremely lightweight mass http[s]/
ws[s] service with minimum provisioning, the library can now do
everything out of the box.
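A minimal sketch of what that looks like from user code, assuming a
count_threads member in the context creation info and a per-thread
lws_service_tsi() call as used by the pthreads test server (treat the
details as illustrative):

#include <libwebsockets.h>
#include <pthread.h>
#include <string.h>

#define SERVICE_THREADS 8

struct thread_info {
	struct lws_context *context;
	int tsi; /* thread service index */
};

static void *
thread_service(void *arg)
{
	struct thread_info *ti = arg;

	/* each thread runs its own poll() loop over its share of the fds */
	while (lws_service_tsi(ti->context, 50, ti->tsi) >= 0)
		;

	return NULL;
}

int
main(void)
{
	struct lws_context_creation_info info;
	struct thread_info ti[SERVICE_THREADS];
	pthread_t tid[SERVICE_THREADS];
	struct lws_context *context;
	int n;

	memset(&info, 0, sizeof(info));
	info.port = 7681;
	info.count_threads = SERVICE_THREADS; /* split into n service domains */
	/* ... protocols, ssl settings etc as usual ... */

	context = lws_create_context(&info);

	for (n = 0; n < SERVICE_THREADS; n++) {
		ti[n].context = context;
		ti[n].tsi = n;
		pthread_create(&tid[n], NULL, thread_service, &ti[n]);
	}

	for (n = 0; n < SERVICE_THREADS; n++)
		pthread_join(tid[n], NULL);

	lws_context_destroy(context);

	return 0;
}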
To test it, just try
$ libwebsockets-test-server-pthreads -j 8
where -j controls the number of service threads
Signed-off-by: Andy Green <andy.green@linaro.org>
Extend the cleanout caused by wsi having a context pointer
into the public api.
There's no point keeping the 1.5 compatibility work: we have changed
the api in several places and a simple rebuild stopped being enough a
while ago.
Signed-off-by: Andy Green <andy.green@linaro.org>
This is a rewrite of the patch from Soapyman here
https://github.com/warmcat/libwebsockets/pull/363
The main changes compared to Soapyman's original patch are
- There's no new stuff in the info struct; user code does any overrides
it may want to do explicitly after lws_create_context() returns
- User overrides for file ops can call through (subclass) to the original
platform implementation using lws_get_fops_plat()
- A typedef is provided for plat-specific fd type
- Public helpers are provided to allow user code to be platform-independent
about file access, using the lws platform file operations underneath:
static inline lws_filefd_type
lws_plat_file_open(struct lws_plat_file_ops *fops, const char *filename,
unsigned long *filelen, int flags)
static inline int
lws_plat_file_close(struct lws_plat_file_ops *fops, lws_filefd_type fd)
static inline unsigned long
lws_plat_file_seek_cur(struct lws_plat_file_ops *fops, lws_filefd_type fd,
long offset_from_cur_pos)
static inline int
lws_plat_file_read(struct lws_plat_file_ops *fops, lws_filefd_type fd,
unsigned long *amount, unsigned char *buf, unsigned long len)
static inline int
lws_plat_file_write(struct lws_plat_file_ops *fops, lws_filefd_type fd,
unsigned long *amount, unsigned char *buf, unsigned long len)
There's example documentation and implementation in the test server.
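Roughly, the subclassing pattern looks like this (a sketch modeled on the
test server; lws_get_fops() as the accessor for the context's active fops
is an assumption here):

static struct lws_plat_file_ops fops_plat;

/* subclassed open: could remap the filename etc, then call through
 * to the stashed platform implementation */
static lws_filefd_type
test_server_fops_open(struct lws_plat_file_ops *fops, const char *filename,
		      unsigned long *filelen, int flags)
{
	return fops_plat.open(fops, filename, filelen, flags);
}

/* after lws_create_context() returns, stash the platform fops and
 * install the override */
void
override_file_ops(struct lws_context *context)
{
	fops_plat = *(lws_get_fops(context)); /* assumed accessor */
	lws_get_fops(context)->open = test_server_fops_open;
}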
Signed-off-by: Andy Green <andy.green@linaro.org>
This nukes all the old-style prefixes except in the compatibility code.
struct libwebsockets becomes struct lws too.
The api docs are updated accordingly, as are the READMEs that mention
those apis.
Signed-off-by: Andy Green <andy.green@linaro.org>