This gets rid of all the platform-dependent #ifdef stuff and
migrates it into the new lws-plat-xxx.c files.
These are then #included via a one-time test in libwebsockets.c,
basically according to whether we are building for Windows or not.
The idea is from now on, all Windows-specific code should go in
lws-plat-win.c, where any kind of Windows perversion like DWORD
is fine.
Any new functions going in there should be named lws_plat_...
and be defined in all the lws-plat-xxx.c files (currently just the
win32 and unix platforms are supported).
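The selection is a one-shot include, along these lines (a sketch;
the exact guard macros may differ):

    /* in libwebsockets.c: pick the platform file once at build time */
    #if defined(WIN32) || defined(_WIN32)
    #include "lws-plat-win.c"
    #else
    #include "lws-plat-unix.c"
    #endif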
Signed-off-by: Andy Green <andy.green@linaro.org>
It's already the default, and there is no "SSL_set_mode" in CyaSSL.
Reported by Chris Conlon <chris@wolfssl.com>
Signed-off-by: Andy Green <andy.green@linaro.org>
This provides a single place for pollfd event changes, the external
locking around them, and extpoll management.
It saves about 85 lines of duplication and simplifies the callers.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adds two new, optional callbacks in protocols[0] that allow limited thread
access to libwebsockets: LWS_CALLBACK_LOCK_POLL and LWS_CALLBACK_UNLOCK_POLL.
If you implement them, they protect internal and external poll list changes; but if you
want to call libwebsocket_callback_on_writable() from another thread, you have to
implement your locking here even if you don't use external poll support.
If you will use another thread for this, take great care to manage your list of
live wsi from the ESTABLISHED and CLOSED callbacks (with your own locking).
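A minimal sketch of handling these in protocols[0], assuming pthreads (the
callback name and mutex here are illustrative, not part of the patch):

    #include <pthread.h>

    static pthread_mutex_t poll_lock = PTHREAD_MUTEX_INITIALIZER;

    static int callback_http(struct libwebsocket_context *context,
                             struct libwebsocket *wsi,
                             enum libwebsocket_callback_reasons reason,
                             void *user, void *in, size_t len)
    {
        switch (reason) {
        case LWS_CALLBACK_LOCK_POLL:
            pthread_mutex_lock(&poll_lock);   /* poll list about to change */
            break;
        case LWS_CALLBACK_UNLOCK_POLL:
            pthread_mutex_unlock(&poll_lock); /* change complete */
            break;
        default:
            break;
        }
        return 0;
    }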
Signed-off-by: Andy Green <andy.green@linaro.org>
We can run into a situation (at least on iOS) where, with an openssl
nonblocking BIO and an http proxy, ssl_connect does not complete straight away,
so we need to retry until ssl_connect finishes. If we don't, we will
fail in LWS_CONNMODE_WS_CLIENT_WAITING_PROXY_REPLY when testing for the
"HTTP/1.0 200" successful connection.
Signed-off-by: shys <shyswork@zoho.com>
As spotted by JM on Trac#40
http://libwebsockets.org/trac/libwebsockets/ticket/40
the client connect path did not do anything about being truly nonblocking.
This patch should hopefully solve that.
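The usual shape of a truly nonblocking connect looks like this (a sketch;
error handling trimmed):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    static int nonblocking_connect(int fd, const struct sockaddr *addr,
                                   socklen_t len)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

        if (connect(fd, addr, len) < 0 &&
            errno != EINPROGRESS && errno != EALREADY)
            return -1;  /* immediate failure */

        /* caller waits for POLLOUT, then checks SO_ERROR for the result */
        return 0;
    }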
Signed-off-by: Andy Green <andy.green@linaro.org>
''...Websocket servers return failure responses other than HTTP Status 101 in the case of
mismatches of WebSocket version or additional header data etc...
It seems that libwebsockets shows a "WARN: problems parsing header" error in such
cases, without calling any callbacks or returning an error code. Is this right?
I modified lib/client.c to handle this.''
Signed-off-by: Fujii Bunichiroh <Bunichiroh.Fujii@jp.sony.com>
This tells the OS to reserve a TX buffer at least the size of the biggest RX frame
expected, for both server and client connections.
In Linux, the OS reserves 2 x the requested amount.
This is aimed at reducing the partial-send problem for large atomic frames
to the point where it only occurs on frames so large the OS balks at
reserving buffer space for them.
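Mechanically this is just a setsockopt() on the connection socket (a sketch;
rx_buffer_size stands in for the biggest expected rx frame):

    #include <sys/socket.h>

    static int reserve_tx_buffer(int fd, int rx_buffer_size)
    {
        /* on Linux the kernel reserves 2 x the value requested here */
        return setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                          (const void *)&rx_buffer_size,
                          sizeof(rx_buffer_size));
    }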
If you have a lot of data to send, it is better to split it into multiple writes,
and use the FIN / CONTINUATION websocket stuff to manage it.
See the "fraggle" test app for example code of how to do that.
Signed-off-by: Andy Green <andy.green@linaro.org>
The function has a logical problem when the size of the requested
allocation is 0: it will return NULL, which is overloaded as
failure.
Actually the whole function is evil as an api; this patch moves
it out of the public API space and fixes it to return 0 for
success or 1 for fail. Private code does not need to return
wsi->user_space, and public code should only get that from the
callback, as discussed on trac recently.
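In public code that means reading per-session storage from the callback's
user argument (struct and callback names here are illustrative):

    struct per_session_data { int counter; };

    static int my_callback(struct libwebsocket_context *context,
                           struct libwebsocket *wsi,
                           enum libwebsocket_callback_reasons reason,
                           void *user, void *in, size_t len)
    {
        struct per_session_data *pss = user;

        if (reason == LWS_CALLBACK_ESTABLISHED)
            pss->counter = 0;   /* user space is already allocated here */

        return 0;
    }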
Thanks to Edwin for debugging the problem.
Reported-by: Edwin van den Oetelaar <oetelaar.automatisering@gmail.com>
Signed-off-by: Andy Green <andy.green@linaro.org>
The two cases where I introduced snprintf are either already
safe against buffer overflow or can be made so with one extra
statement, allowing plain sprintf.
Signed-off-by: Andy Green <andy.green@linaro.org>
This brings the library sources into compliance with checkpatch
style, except for three or four exceptions like WIN32-related stuff
and one long string constant I don't want to break into multiple
sprintf calls.
There should be no functional or compilation change from all
this (hopefully).
Signed-off-by: Andy Green <andy.green@linaro.org>
This removes all the direct wsi members specific to clients;
most of them move to being fake headers in the new 3-level
header scheme, and c_port becomes a member of the u.hdr
unionized struct.
It gets rid of a lot of fiddly mallocs and frees(); although it
adds a small internal API to create the fake headers, the patch
actually deletes more than it adds...
Signed-off-by: Andy Green <andy.green@linaro.org>
This big patch replaces the malloc / realloc per header
approach used until now with a single three-level struct
that gets malloc'd during the header union phase and freed
in one go when we transition to a different union phase.
It's more expensive in that we malloc a bit more than 4Kbytes up front,
but it's a lot cheaper in terms of mallocs, frees and heap fragmentation:
there are no reallocs and nothing to configure. It also moves from arrays
of pointers (8 bytes on x86_64) to unsigned short offsets into the
data array (2 bytes on all platforms).
The 3-level thing is all in one struct
- array indexed by the header enum, pointing to first "fragment" index
(ie, header type to fragment lookup, or 0 for none)
- array of fragments, enough for 2 x the number of known headers
(fragment array... note that fragments can point to a "next"
fragment if the same header is spread across multiple entries)
- linear char array where the known header payload gets written
(fragments point into null-terminated strings stored in here,
only the known header content is stored)
http headers can legally be split across multiple headers of the same
name, whose contents should be concatenated. This scheme does not
concatenate them linearly but links them via the "next" pointers in the
fragment structs. There are apis to get the total length and to copy out
a linear, concatenated version to a buffer.
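Roughly, the layout described above looks like this (a sketch with
illustrative names and sizes, not the literal structs from the patch):

    #define KNOWN_HEADERS  64                  /* header enum count (assumed) */
    #define MAX_FRAGS     (KNOWN_HEADERS * 2)
    #define DATA_POOL   4096

    struct hdr_frag {
        unsigned short offset;  /* start of payload in data[] */
        unsigned short len;     /* payload length */
        unsigned short next;    /* next frag of same header, 0 = none */
    };

    struct hdr_table {          /* one malloc, one free per union phase */
        /* level 1: header enum -> first fragment index, 0 = absent */
        unsigned short frag_index[KNOWN_HEADERS];
        /* level 2: fragment records, linked when a header repeats */
        struct hdr_frag frags[MAX_FRAGS];
        /* level 3: linear pool of null-terminated header payloads */
        char data[DATA_POOL];
    };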
Signed-off-by: Andy Green <andy.green@linaro.org>