Until now the uv watcher has been composed inside the wsi.
This works fine except in the case of a client wsi that
meets a redirect when the event loop is libuv, with its
requirement that handles close asynchronously via the
event loop.
We want to reuse the wsi, since its originator holds a
copy of the wsi pointer, and we want to conceal the
redirect. Since the redirect is commonly to a different
IP, we want to keep the wsi alive while closing its
socket cleanly. That's not too difficult, unless you are
using uv.
With uv the composed watcher is a disaster, since after
the close is requested the wsi will start to reconnect.
We tried to deal with that by copying the uv handle and
freeing it when the handle close finalizes. But it turns
out uv keeps the handle in a linked-list scheme
internally, so it cannot safely be copied.
This patch hopefully finally solves it by giving the uv
handle its own allocation from the start. When we want
to close the socket and reuse the wsi, we simply take
responsibility for freeing the handle and set the wsi
watcher pointer to NULL.
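To illustrate the pattern (with hypothetical names, not the actual
lws internals): the uv handle is malloc'd separately from the wsi,
and when we need to close the socket under the wsi, we detach the
handle and let uv's close callback free it, eg

    #include <stdlib.h>
    #include <uv.h>

    struct my_watcher {
        uv_poll_t *uv_handle; /* separately allocated from the start */
    };

    static void handle_close_cb(uv_handle_t *h)
    {
        free(h); /* only safe once uv has finalized the close */
    }

    /* close the socket but keep the wsi alive across the redirect */
    static void detach_and_close(struct my_watcher *w)
    {
        uv_poll_t *h = w->uv_handle;

        w->uv_handle = NULL; /* the wsi no longer owns the handle */
        uv_close((uv_handle_t *)h, handle_close_cb);
    }

Since the handle stays valid inside uv's internal lists until the
close callback runs, nothing is corrupted, and the wsi is free to
start connecting to the redirect target immediately.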
This allows the client stuff to understand that addresses beginning with '+'
represent unix sockets.
If the first character after the '+' is '@', it understands that the '@'
should be read as '\0', in order to use Linux "abstract namespace"
sockets.
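For example, a client connection over such a socket might be set up
like this (a fragment of a sketch, assuming an existing struct
lws_context *context; error handling omitted):

    struct lws_client_connect_info i;

    memset(&i, 0, sizeof i);
    i.context = context;
    i.address = "+/var/run/mysocket"; /* or "+@mysocket" for Linux
                                       * abstract namespace */
    i.port = 80;        /* not meaningful for a unix socket path */
    i.path = "/my/path";
    i.host = i.address;
    i.method = "GET";

    if (!lws_client_connect_via_info(&i))
        /* the connection attempt failed early */ ;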
Further, the lws_parse_uri() helper is extended to understand the
convention that an address starting with '+' is a unix socket, and
to treat the socket path as delimited by ':', eg
http://+/var/run/mysocket:/my/path
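A sketch of using the extended helper (note lws_parse_uri() parses
the string in place, so it must be writable):

    char uri[] = "http://+/var/run/mysocket:/my/path";
    const char *prot, *ads, *path;
    int port;

    if (!lws_parse_uri(uri, &prot, &ads, &port, &path)) {
        /* ads now refers to the unix socket path
         * ("+/var/run/mysocket") and path to the part after
         * the ':' delimiter */
    }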
HTTP Proxy is updated to allow mounts to these unix socket paths.
Proxy connections go out on h1, but are dynamically translated to h1 or h2
on the incoming side.
Proxy usage of libhubbub is separated out... LWS_WITH_HTTP_PROXY is on by
default, and LWS_WITH_HUBBUB is off by default.
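A mount for proxying to such a socket might look like this (a
sketch, assuming the mount origin accepts the same '+' convention
described above, and a build with LWS_WITH_HTTP_PROXY enabled):

    static const struct lws_http_mount mount = {
        .mountpoint       = "/proxy",
        .mountpoint_len   = 6,
        .origin           = "+/var/run/mysocket:/my/path",
        .origin_protocol  = LWSMPRO_HTTP, /* outgoing side is h1 */
    };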
This changes the vhost destroy flow to only hand off the listen
socket if another vhost is sharing it, and to mark the vhost as
being_destroyed.
Each tsi calls lws_check_deferred_free() once a second; if it sees
any vhost marked being_destroyed there, it closes all wsi on its
tsi bound to that vhost, one time.
As the wsi on the vhost complete their close (ie, after the libuv
async close if on the libuv event loop), they decrement a reference
count of wsi open on the vhost. The tsi that closes the last one
then completes the destroy flow for the vhost itself... it's random
which tsi completes the vhost destroy, but since there are no
wsi left on the vhost, and it holds the context lock, nothing
can conflict.
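Conceptually the last-one-out logic looks like this (illustrative
names only, not the actual lws private internals):

    struct vhost {
        int count_bound_wsi; /* wsi currently bound to this vhost */
        int being_destroyed; /* set when lws_vhost_destroy() begins */
    };

    static void complete_vhost_destroy(struct vhost *vh); /* hypothetical */

    /* called on the owning tsi when a wsi's close has fully
     * finalized (ie, after any libuv async close) */
    static void on_wsi_close_finalized(struct vhost *vh)
    {
        if (!--vh->count_bound_wsi && vh->being_destroyed)
            /* whichever tsi closed the last wsi completes the
             * vhost destroy, under the context lock */
            complete_vhost_destroy(vh);
    }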
The advantage of this is that each owning tsi does the close for
the wsi bound to the vhost under destruction, at a time when it is
guaranteed to be idle for service, and it does so with both the
vhost and context locks held, so no other service thread can
conflict over anything protected by those either.
For the situation where user code may have allocations attached to
the vhost, this adds args to lws_vhost_destroy() to allow the user
allocations to be destroyed just before the vhost is freed.
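A minimal sketch of cleaning up such an allocation; the exact
signature here is an assumption based on the description above (a
finalize callback plus an opaque arg):

    /* called just before the vhost itself is freed */
    static void
    my_vhost_finalize(struct lws_vhost *vh, void *arg)
    {
        free(arg); /* the user allocation attached to the vhost */
    }

    /* at shutdown time... */
    lws_vhost_destroy(vh, my_vhost_finalize, my_user_allocation);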
- split the raw role into separate skt and file roles
- remove all special knowledge from the adoption
apis and migrate it to core
- remove all special knowledge from the client_connect
stuff, and have it discovered by iterating the
role callbacks to let those choose how to bind
(see the sketch after this list); migrate it to core
- retire the old deprecated client apis from before
client_connect_info
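A conceptual sketch of the role-binding iteration (the struct and
names here are illustrative, not lws's private role api; assumes
libwebsockets.h for struct lws and the connect info):

    struct role_ops {
        const char *name;
        /* returns nonzero if this role claims the wsi, based on
         * the client connect info */
        int (*client_bind)(struct lws *wsi,
                           const struct lws_client_connect_info *i);
    };

    static const struct role_ops *available_roles[] = {
        /* the roles enabled in this build, eg h1, h2, raw skt... */
        NULL
    };

    static int
    bind_wsi_to_role(struct lws *wsi,
                     const struct lws_client_connect_info *i)
    {
        const struct role_ops **ro;

        /* core has no role-specific knowledge: each role inspects
         * the connect info and takes the wsi if it applies */
        for (ro = available_roles; *ro; ro++)
            if ((*ro)->client_bind(wsi, i))
                return 0;

        return 1; /* no role wanted it */
    }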