This allows the client-side code to understand that addresses beginning
with '+' represent unix sockets.
If the first character after the '+' is '@', the '@' is read as '\0',
so that Linux "abstract namespace" sockets can be used.
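As a hedged sketch of what this enables on the client side (the field
values below are illustrative, not taken from the text above):

    #include <string.h>
    #include <libwebsockets.h>

    static struct lws *
    connect_over_unix_socket(struct lws_context *context)
    {
        struct lws_client_connect_info i;

        memset(&i, 0, sizeof(i));
        i.context = context;
        i.address = "+/var/run/mysocket"; /* filesystem unix socket */
        /* i.address = "+@mysocket";  abstract ns: '@' read as '\0' */
        i.path    = "/my/path";
        i.host    = "localhost";
        i.method  = "GET";

        return lws_client_connect_via_info(&i);
    }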
Further, the lws_parse_uri() helper is extended to understand the
convention that an address starting with '+' is a unix socket, treating
the socket path as delimited by ':', eg
http://+/var/run/mysocket:/my/path
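A hedged example of the parse (the helper mutates the string in place;
the split noted in the comment follows the ':' delimiter rule above):

    #include <stdio.h>
    #include <libwebsockets.h>

    int main(void)
    {
        char uri[] = "http://+/var/run/mysocket:/my/path";
        const char *prot, *ads, *path;
        int port;

        if (lws_parse_uri(uri, &prot, &ads, &port, &path))
            return 1;

        /* per the convention, ads holds the '+'-prefixed socket
         * path and path holds what followed the ':' delimiter */
        printf("prot %s, ads %s, path %s\n", prot, ads, path);

        return 0;
    }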
HTTP Proxy is updated to allow mounts to these unix socket paths.
Proxy connections go out on h1, but are dynamically translated to h1 or h2
on the incoming side.
Proxy usage of libhubbub is separated out... LWS_WITH_HTTP_PROXY is on by
default, and LWS_WITH_HUBBUB is off by default.
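As a hedged sketch, a mount proxying onto one of these unix socket
paths might look like this (field values illustrative, following the
'+' / ':' convention above):

    #include <libwebsockets.h>

    static const struct lws_http_mount mount = {
        .mountpoint      = "/proxy",  /* serve this URL space... */
        .mountpoint_len  = 6,
        /* ...by proxying to this unix socket + path */
        .origin          = "+/var/run/mysocket:/my/path",
        .origin_protocol = LWSMPRO_HTTP, /* onward leg goes out on h1 */
    };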
1) This moves the service tid detection from context to pt.
2) If LWS_MAX_SMP > 1, a default pthread tid detection callback is
provided via the dummy callback. Callback handlers that call through
to the dummy handler will inherit this. It provides an int truncation
of the pthread tid (see the sketch after this list).
3) If there have been any service calls on the service threads, the pts
now know the low sizeof(int) bytes of their tid. When you ask for a
client connection to be created, lws looks through the pts to see if
the calling thread is a pt service thread. If so, the new client is
set to use the same pt as the caller.
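A minimal sketch of the equivalent user-side handler (assuming Linux,
where pthread_t converts to an integer type); handlers that instead
call through to lws_callback_http_dummy() inherit this behaviour when
LWS_MAX_SMP > 1:

    #include <pthread.h>
    #include <libwebsockets.h>

    static int
    my_callback(struct lws *wsi, enum lws_callback_reasons reason,
                void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_GET_THREAD_ID:
            /* int truncation of the pthread tid, as described */
            return (int)(unsigned long)pthread_self();
        default:
            break;
        }

        return lws_callback_http_dummy(wsi, reason, user, in, len);
    }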
This creates a "pthread mutex with a reference count"
using gcc / clang atomic intrinsics + pthreads.
Both the pt and context locks are moved to use this;
pt already had reference counting, but it's new for
the context lock.
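A minimal sketch of the pattern (names illustrative, not the library
internals; assumes pthread_t fits in a uintptr_t, as on Linux):

    #include <pthread.h>
    #include <stdint.h>

    struct refcounted_mutex {
        pthread_mutex_t lock;
        uintptr_t       owner;      /* tid of holder, set atomically */
        int             lock_depth; /* times the owner has locked */
    };

    static void
    refcounted_mutex_lock(struct refcounted_mutex *m)
    {
        uintptr_t self = (uintptr_t)pthread_self();

        /* if we already own it, just deepen the reference count */
        if (__atomic_load_n(&m->owner, __ATOMIC_ACQUIRE) == self) {
            m->lock_depth++;
            return;
        }

        pthread_mutex_lock(&m->lock);
        __atomic_store_n(&m->owner, self, __ATOMIC_RELEASE);
        m->lock_depth = 1;
    }

    static void
    refcounted_mutex_unlock(struct refcounted_mutex *m)
    {
        if (--m->lock_depth)
            return; /* outer holders remain */

        __atomic_store_n(&m->owner, (uintptr_t)0, __ATOMIC_RELEASE);
        pthread_mutex_unlock(&m->lock);
    }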
This changes the vhost destroy flow to only hand off the listen
socket if another vhost is sharing it, and to mark the vhost as
being_destroyed.
Each tsi calls lws_check_deferred_free() once a second; if it sees
any vhost marked being_destroyed there, it closes all wsi on its tsi
on the same vhost, one time.
As the wsi on the vhost complete close (ie, after libuv async close
if on the libuv event loop), they decrement a reference count of
wsi still open on the vhost. The tsi that closes the last one then
completes the destroy flow for the vhost itself... it's random
which tsi completes the vhost destroy, but since there are no
wsi left on the vhost, and it holds the context lock, nothing
can conflict.
The advantage of this is that the owning tsi themselves do the close
for wsi bound to the vhost under destruction, at a time when they are
guaranteed to be idle for service, and they do it with both the vhost
and context locks held, so no other service thread can conflict over
anything protected by those locks either.
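A hedged sketch of the shape of that counting (all names illustrative,
not lws internals):

    #include <pthread.h>
    #include <stdlib.h>

    struct my_context { pthread_mutex_t lock; };

    struct my_vhost {
        struct my_context *context;
        int count_bound_wsi; /* wsi still open on this vhost */
        int being_destroyed; /* set when destroy was requested */
    };

    static void
    finalize_vhost_destroy(struct my_vhost *vh)
    {
        free(vh); /* stand-in for the real completion */
    }

    /* called on whichever tsi owns a wsi, as that wsi finishes close */
    static void
    vhost_wsi_closed(struct my_vhost *vh)
    {
        struct my_context *cx = vh->context;

        pthread_mutex_lock(&cx->lock);

        /*
         * the tsi closing the last wsi completes the vhost destroy;
         * which tsi that is is random, but with no wsi left and the
         * context lock held, nothing can conflict
         */
        if (!--vh->count_bound_wsi && vh->being_destroyed)
            finalize_vhost_destroy(vh); /* vh is gone after this */

        pthread_mutex_unlock(&cx->lock);
    }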
For the situation where user code has allocations attached to the
vhost, this adds args to lws_vhost_destroy() allowing the user
allocations to be destroyed just before the vhost is freed.
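The exact argument shape isn't spelled out here; purely as an
illustration, such a finalize hook could look like this (the
(vh, arg) signature is an assumption, not confirmed above):

    #include <stdlib.h>
    #include <libwebsockets.h>

    /* hypothetical: called just before the vhost is freed, so user
     * allocations attached to the vhost can be released */
    static void
    my_vhost_finalize(struct lws_vhost *vh, void *arg)
    {
        (void)vh;
        free(arg); /* the user allocation attached to the vhost */
    }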
- split the raw role into separate skt and file roles
- remove all special knowledge from the adoption
apis and migrate it to core
- remove all special knowledge from the client_connect
stuff, and have it discovered by iterating the
role callbacks to let those choose how to bind
(see the sketch after this list); migrate to core
- retire the old deprecated client apis that predate
client_connect_info
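A hedged sketch of the discovery pattern meant in the third item (all
names illustrative, not the core's actual role ops):

    #include <stddef.h>

    struct lws;                     /* opaque here */
    struct lws_client_connect_info; /* opaque here */

    static int
    h1_client_bind(struct lws *wsi,
                   const struct lws_client_connect_info *i)
    {
        (void)wsi; (void)i;
        return 1; /* illustrative: h1 claims the connection */
    }

    static int
    raw_skt_client_bind(struct lws *wsi,
                        const struct lws_client_connect_info *i)
    {
        (void)wsi; (void)i;
        return 0; /* illustrative: not ours */
    }

    /* each role decides for itself whether to bind a new client */
    struct role_ops {
        const char *name;
        /* return 0 = not ours, 1 = bound, < 0 = fatal error */
        int (*client_bind)(struct lws *wsi,
                           const struct lws_client_connect_info *i);
    };

    static const struct role_ops role_raw_skt = {
        "raw-skt", raw_skt_client_bind
    }, role_h1 = {
        "h1", h1_client_bind
    };

    static const struct role_ops *available_roles[] = {
        &role_raw_skt, &role_h1,
    };

    /* core keeps no role-specific knowledge: it iterates the roles
     * and lets each choose how (or whether) to bind the wsi */
    static int
    bind_client_to_role(struct lws *wsi,
                        const struct lws_client_connect_info *i)
    {
        size_t n;

        for (n = 0; n < sizeof(available_roles) /
                        sizeof(available_roles[0]); n++) {
            int m = available_roles[n]->client_bind(wsi, i);

            if (m) /* this role claimed (or errored on) the wsi */
                return m;
        }

        return 0; /* no role wanted it */
    }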