An lws context usually contains a process-wide fd -> wsi lookup table.
This allows any possible fd returned by a *nix type OS to be immediately
converted to a wsi just by indexing an array of struct lws * the size of
the highest possible fd, as found by ulimit -n or similar.
This works acceptably for Linux-type systems, where the default ulimit -n for
a process is 1024: it means a 4KB or 8KB lookup table for 32-bit or
64-bit systems respectively.
However in the case your lws usage is much simpler, like one outgoing
client connection and no serving, this becomes increasingly wasteful. It's
made much worse if the system has a much larger default ulimit -n, eg 1M:
then the table occupies 4MB or 8MB, of which you will only ever use a single entry.
Even so, because lws can't be sure the OS won't return a socket fd at any
number up to (ulimit -n - 1), it has to allocate the whole lookup table
at the moment.
This patch looks to see if the context creation info is setting
info->fd_limit_per_thread... if it leaves it at the default 0, then
everything is as it was before this patch. However, if it finds that
(info->fd_limit_per_thread * actual_number_of_service_threads) where
the default number of service threads is 1, is less than the fd limit
set by ulimit -n, lws switches to a slower lookup table scheme, which
only allocates the requested number of slots. Lookups happen then by
iterating the table and comparing rather than indexing the array
directly, which is obviously somewhat of a performance hit.
However in the case where you know lws will only have a very few wsi
maximum, this method can very usefully trade off speed to be able to
avoid the allocation sized by ulimit -n.
The minimal examples for clients that can make use of this are also modified
by this patch to use the smaller context allocations.
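As a hedged sketch (not part of the patch itself), a client-only context that
knows it will only ever hold a handful of wsi might request the smaller table
like this:

	#include <libwebsockets.h>
	#include <string.h>

	static struct lws_context *
	create_small_context(const struct lws_protocols *protocols)
	{
		struct lws_context_creation_info info;

		memset(&info, 0, sizeof info);
		info.port = CONTEXT_PORT_NO_LISTEN; /* client only, no listen socket */
		info.protocols = protocols;
		/*
		 * with the default single service thread, this caps the
		 * context at 3 wsi and selects the small, iterated lookup
		 * table; leaving it at 0 keeps the ulimit -n sized table
		 */
		info.fd_limit_per_thread = 3;

		return lws_create_context(&info);
	}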
If you're providing a unix socket service that will be proxied / served by another
process on the same machine, the permissions on the listening unix socket fd
have to be managed so that only something running under the server's credentials
can open the listening unix socket.
Up until now if you wanted to drop privs, a numeric uid and gid had to be
given in info to control post-init permissions... this adds info.username
and info.groupname where you can do the same using user and group names.
The internal plat helper lws_plat_drop_app_privileges() is updated to use the
context directly instead of info in both of the ways it can be called, and to be
able to return fatal errors.
All failures to look up names for non-0 / non -1 uids or gids, or to look up the
uid or gid for a given username or groupname, produce an error message and a fatal exit.
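As a hedged sketch, using the new name-based fields alongside (or instead of) the
numeric ones might look like this, with "www-data" just an example name:

	#include <libwebsockets.h>

	static void
	set_post_init_credentials(struct lws_context_creation_info *info)
	{
		/* names are resolved to a uid / gid during init; a failed
		 * lookup is now a fatal error as described above */
		info->username  = "www-data";
		info->groupname = "www-data";
	}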
Until now the JOSE pieces only had enough support for ACME.
This patch improves the JWK parsing to prepare for more
complete support and for adding JWE, genaes and genec in
later patches.
This provides a way to defer closing a wsi whose output buflist still has
unsent content until the buflist is drained.
It doesn't make any assumption about the content being related
to http, so you can use it on raw.
It follows the semantics of the http transaction completed, ie
	if (lws_raw_transaction_completed(wsi))
		return -1;

	return 0;
Normalize the vhost options around optionally handling noncompliant
traffic at the listening socket for both non-tls and tls cases.
By default everything is as before.
However it's now possible to tell the vhost to allow noncompliant
connects to fall back to a specific role and protocol, both set
by name in the vhost creation info struct.
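As a hedged sketch (the option flag name here is an assumption and may differ from
what this patch introduces), a vhost that lets noncompliant connects fall back to
a raw protocol might be created like this:

	#include <libwebsockets.h>
	#include <string.h>

	static struct lws_vhost *
	create_fallback_vhost(struct lws_context *cx,
			      const struct lws_protocols *protocols)
	{
		struct lws_context_creation_info info;

		memset(&info, 0, sizeof info);
		info.port = 7681;
		info.protocols = protocols;
		/* enable the fallback behaviour on this vhost's listen socket */
		info.options =
			LWS_SERVER_OPTION_FALLBACK_TO_APPLY_LISTEN_ACCEPT_CONFIG;
		info.listen_accept_role = "raw-skt";	/* role for noncompliant traffic */
		info.listen_accept_protocol = "my-raw-protocol"; /* example protocol name */

		return lws_create_vhost(cx, &info);
	}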
The original vhost flags allowing http redirect to https and
direct http serving from https server (which is a security
downgrade if enabled) are cleaned up and tested.
A minimal example minimal-raw-fallback-http-server is added with
switches to confirm operation of all the valid possibilities (see
the readme on that).
Until now basic auth only protected http actions in the protected
mount.
This extends the existing basic auth scheme to also be consulted for
ws upgrades if a "basic-auth" pvo exists on the selected protocol for
the vhost. The value of the pvo is the path to the usual basic auth credentials
file, the same as for the http case.
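As a hedged sketch, the pvo chain for a vhost protecting a ws protocol might look
like this, with the protocol name and credentials path being examples only:

	#include <libwebsockets.h>

	static const struct lws_protocol_vhost_options pvo_ba = {
		NULL,			  /* next option for this protocol */
		NULL,			  /* child options */
		"basic-auth",		  /* option consulted at ws upgrade */
		"/etc/lws/ba-credentials" /* same credentials file format as http */
	};

	static const struct lws_protocol_vhost_options pvo = {
		NULL,			  /* next protocol entry */
		&pvo_ba,		  /* options attached to this protocol */
		"lws-minimal",		  /* example protocol name on the vhost */
		""
	};

	/* ...then point info.pvo at &pvo when creating the vhost */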
This lets you build using the runtime Address Sanitizer in gcc.
LWS is routinely and heavily tested with valgrind during development, but ASAN
did find some theoretical-only issues with shifting: strictly, 1 << 31 shifts into
the sign bit of a signed int, so ~(1 << 31) should be written ~(1u << 31). Gcc
produces the same result for both, but it's good to have the ability to find these.
This adds support for integrating libdbus into the lws event loop.
Unlike the other roles, lws doesn't completely adopt the fd, since libdbus insists
on retaining control over the fd lifecycle. However libdbus provides apis for
foreign code (lws) to provide event loop services to libdbus for the fd.
Accordingly, unlike the other roles, rx and writeable are not subsumed into
lws callback messages; the events remain the property of libdbus.
A context struct wrapper is provided that is available in the libdbus
callbacks to bridge between the lws and dbus worlds, along with
a minimal example dbus client and server.
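As a hedged sketch of the bridging pattern only (the wrapper itself belongs to lws
and its exact layout isn't reproduced here), user code can embed the wrapper at the
start of its own per-connection object so the pointer seen in libdbus callbacks
reaches both worlds:

	#include <libwebsockets.h>
	#include <libwebsockets/lws-dbus.h>	/* header name assumed */

	struct my_dbus_conn {
		struct lws_dbus_ctx	ctx;	/* lws <-> libdbus bridge, must be first */

		int			requests_in_flight; /* example user state */
	};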
This allows the client stuff to understand that addresses beginning with '+'
represent unix sockets.
If the first character after the '+' is '@', it understands that the '@'
should be read as '\0', in order to use Linux "abstract namespace"
sockets.
Further the lws_parse_uri() helper is extended to understand the convention
that an address starting with + is a unix socket, and treats the socket
path as delimited by ':', eg
http://+/var/run/mysocket:/my/path
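As a hedged sketch, a client connection over such a unix socket might be set up
like this (the socket path and http path are examples):

	#include <libwebsockets.h>
	#include <string.h>

	static struct lws *
	connect_over_unix_socket(struct lws_context *cx)
	{
		struct lws_client_connect_info i;

		memset(&i, 0, sizeof i);
		i.context = cx;
		i.address = "+/var/run/mysocket"; /* '+' selects a unix socket */
		i.port	  = 80;			  /* nominal, the path is what matters */
		i.path	  = "/my/path";
		i.host	  = i.address;
		i.origin  = i.address;
		i.method  = "GET";

		return lws_client_connect_via_info(&i);
	}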
HTTP Proxy is updated to allow mounts to these unix socket paths.
Proxy connections go out on h1, but are dynamically translated to h1 or h2
on the incoming side.
Proxy usage of libhubbub is separated out... LWS_WITH_HTTP_PROXY is on by
default, and LWS_WITH_HUBBUB is off by default.
Various kinds of input stashing were replaced with a single buflist before
v3.0... this patch replaces the partial send arrangements with its own buflist
in the same way.
Buflists, as the name says, are growable lists of allocations held in a linked-list,
which take care of the book-keeping of what has been added and removed (even if what
is removed is less than the current buffer at the head of the list).
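As a hedged sketch of the pattern (return value conventions are simplified),
appending to the tail and draining from the head looks like this:

	#include <libwebsockets.h>

	static void
	buflist_demo(void)
	{
		struct lws_buflist *bl = NULL;
		uint8_t *p;
		size_t len;

		/* pile new output on the end of the list (data is copied) */
		lws_buflist_append_segment(&bl, (uint8_t *)"hello", 5);
		lws_buflist_append_segment(&bl, (uint8_t *)"world", 5);

		/* drain from the front; using less than a whole segment is fine */
		while ((len = lws_buflist_next_segment_len(&bl, &p))) {
			/* ...send up to len bytes from p... */
			lws_buflist_use_segment(&bl, len);
		}

		lws_buflist_destroy_all_segments(&bl);
	}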
The immediate result is that we no longer have to freak out if we had a partial
send buffered and new output is coming... we can just pile it on the end of the
buflist and keep draining the front of it.
Likewise we no longer need to be rabid about multiple attempts to send stuff
without going back to the event loop; although doing that introduces
inefficiencies, we don't have to term it "illegal" any more.
Since buflists have proven reliable on the input side, and the logic for dealing
with truncated "non-network events" was already there, this internal-only change
should be relatively self-contained.
1) This moves the service tid detection stuff from context to pt.
2) If LWS_MAX_SMP > 1, a default pthread tid detection callback is provided
on the dummy callback. Callback handlers that call through to the dummy
handler will inherit this. It provides an int truncation of the pthread
tid.
3) If there have been any service calls on the service threads, the pts now
know the low sizeof(int) bytes of their tid. When you ask for a client
connection to be created, it looks through the pts to see if the calling
thread is a pt service thread. If so, the new client is set to use the
same pt as the caller.
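As a hedged sketch, a user protocol callback that falls through to the dummy
handler inherits the default tid report, so client connections created from its
service thread land on the same pt:

	#include <libwebsockets.h>

	static int
	my_callback(struct lws *wsi, enum lws_callback_reasons reason,
		    void *user, void *in, size_t len)
	{
		switch (reason) {
		case LWS_CALLBACK_RECEIVE:
			/* protocol-specific handling goes here */
			break;
		default:
			break;
		}

		/* everything not handled above, including the default
		 * LWS_CALLBACK_GET_THREAD_ID report, goes to the dummy handler */
		return lws_callback_http_dummy(wsi, reason, user, in, len);
	}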
This adds a plugin that interfaces to libjsongit2
https://warmcat.com/git/libjsongit2
to provide a per-vhost service for presenting bare git repos in a
web interface.
This creates a "pthread mutex with a reference count"
using gcc / clang atomic intrinsics + pthreads.
Both pt and context locks are moved to use this,
pt already had reference counting but it's new for
context.
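As a hedged sketch of the idea only (not the lws implementation itself), a mutex
carrying an owner and a count lets the owning thread re-take it while other
threads block as usual:

	#include <pthread.h>

	struct mutex_refcount {
		pthread_mutex_t lock;
		pthread_t	owner;
		volatile int	count;	/* touched only with atomic intrinsics */
	};

	static void
	mutex_refcount_lock(struct mutex_refcount *mr)
	{
		/* already the owner?  just bump the count */
		if (__atomic_load_n(&mr->count, __ATOMIC_SEQ_CST) &&
		    pthread_equal(mr->owner, pthread_self())) {
			__atomic_add_fetch(&mr->count, 1, __ATOMIC_SEQ_CST);
			return;
		}

		pthread_mutex_lock(&mr->lock);
		mr->owner = pthread_self();
		__atomic_store_n(&mr->count, 1, __ATOMIC_SEQ_CST);
	}

	static void
	mutex_refcount_unlock(struct mutex_refcount *mr)
	{
		if (__atomic_sub_fetch(&mr->count, 1, __ATOMIC_SEQ_CST))
			return;		/* still held recursively */

		pthread_mutex_unlock(&mr->lock);
	}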
This changes the vhost destroy flow to only hand off the listen
socket if another vhost is sharing it, and to mark the vhost as
being_destroyed.
Each tsi calls lws_check_deferred_free() once a second, if it sees
any vhost being_destroyed there, it closes all wsi on its tsi on
the same vhost, one time.
As the wsi on the vhost complete close (ie, after libuv async close
if on libuv event loop), they decrement a reference count for all
wsi open on the vhost. The tsi that closes the last one then
completes the destroy flow for the vhost itself... it's random
which tsi completes the vhost destroy but since there are no
wsi left on the vhost, and it holds the context lock, nothing
can conflict.
The advantage of this is that owning tsi do the close for wsi
that are bound to the vhost under destruction, at a time when
they are guaranteed to be idle for service, and they do it with
both vhost and context locks owned, so no other service thread
can conflict for stuff protected by those either.
For the situation where the user code may have allocations attached to
the vhost, this adds args to lws_vhost_destroy() allowing the user
allocations to be destroyed just before the vhost is freed.
- split raw role into separate skt and file
- remove all special knowledge from the adoption
apis and migrate to core
- remove all special knowledge from client_connect
stuff, and have it discovered by iterating the
role callbacks to let those choose how to bind;
migrate to core
- retire the old deprecated client apis pre-
client_connect_info