Gregory Junker <ggjunker@gmail.com> noticed the binary flag is not
getting set correctly, or at all, on the client side. This should
improve matters.
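For illustration only (the sizing and payload here are made up, not
from the patch), client code sends a binary frame using the flag in
question like this:

#include <string.h>
#include <libwebsockets.h>

static void
send_binary(struct libwebsocket *wsi)
{
        unsigned char buf[LWS_SEND_BUFFER_PRE_PADDING + 128 +
                                          LWS_SEND_BUFFER_POST_PADDING];
        unsigned char *p = &buf[LWS_SEND_BUFFER_PRE_PADDING];

        memset(p, 0x5a, 128);   /* arbitrary binary payload */
        libwebsocket_write(wsi, p, 128, LWS_WRITE_BINARY);
}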
Signed-off-by: Andy Green <andy.green@linaro.org>
Move server-only code into its own files and make building it depend
on not passing --without-server to configure.
Make fragments in other places conditional as well.
Remove client-related members from struct libwebsocket when building
with LWS_NO_CLIENT.
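The struct trimming follows the usual conditional-compilation pattern;
a minimal sketch (these member names are illustrative, not the real
fields):

struct libwebsocket {
        /* ... members needed by both server and client ... */
#ifndef LWS_NO_CLIENT
        char *c_path;           /* client handshake URI path */
        char *c_host;           /* client handshake Host: header */
#endif
};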
Apps:
normal: build test server, client, fraggle, ping
--without-client: build test server
--without-server: build test client, ping
Signed-off-by: Andy Green <andy.green@linaro.org>
Fix some previously unchecked return codes in the new daemonization
file.
AG adapted a bit
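For example, the kind of fix involved (a hypothetical fragment, not
the exact code):

#include <stdio.h>
#include <unistd.h>

/* previously the result of write() was ignored; now it is checked */
static int
write_pid_file(int fd, const char *buf, size_t len)
{
        if (write(fd, buf, len) != (ssize_t)len) {
                perror("writing pid file");
                return 1;
        }

        return 0;
}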
Signed-off-by: Edwin van den Oetelaar <oetelaar.automatisering@gmail.com>
Signed-off-by: Andy Green <andy.green@linaro.org>
Profiling what happens during the ab test showed one of the hotspots
was strcasecmp, called in a loop to look for header name matches each
time.
This patch introduces a lexical parser that creates a state machine,
in 276 bytes, encoding all the known header names. The FSM is walked
bytewise as characters come in; most states need no recursion to match
or fail.
The state machine output is cut-and-pasted into parsers.c as an
unsigned char array.
The FSM generator is a bit rough and ready; it's included in the tree
but not built, since normal mortals won't need to touch it.
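The walking side is tiny; here is a sketch of the table-driven idea,
though the actual byte encoding in parsers.c differs:

/* hypothetical 4-byte states: match char, flags, 16-bit next index */
#define FLAG_MATCH 0x01         /* complete header name recognized */
#define FLAG_LAST  0x02         /* last candidate for this position */

static int
lextable_walk(const unsigned char *t, int state, unsigned char c)
{
        while (1) {
                const unsigned char *s = &t[state * 4];

                if (s[0] == c) {
                        if (s[1] & FLAG_MATCH)
                                return -1;              /* token matched */

                        return s[2] | (s[3] << 8);      /* next state */
                }
                if (s[1] & FLAG_LAST)
                        return -2;      /* no known header matches: fail */

                state++;                /* try the next sibling state */
        }
}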
Signed-off-by: Andy Green <andy.green@linaro.org>
Unfortunately this code is beginning to rot, due to lack of demand
for it and its being disabled by default.
If demand appears we can revert this and resume work on it; otherwise
let's bite the bullet for the moment.
Signed-off-by: Andy Green <andy.green@linaro.org>
Problems with the rx flow control implementation were the underlying
cause of the connection-stalling issue that was covered up by the
recently removed udelay() patch.
This gets rx flow control working properly and also corrects problems
with fifo management in the test server's mirror protocol code.
The rx flow control api has been changed to just set a flag, so it's
very cheap to call from user code. After any callback that might use
the rx flow control api, the flag is checked and any pending actions
are performed.
Rx flow control now stops any further rx packets immediately; with
compressed connections, "just what was left in the pipe" might be
hundreds of KBytes. To implement that, the rx processing code now
copies the current packet being decoded into a malloc'd buffer.
When rx flow is allowed again, the buffer is drained and freed before
any new packet content is accepted.
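From user code the api now looks like this sketch (fifo_space_low()
is a hypothetical app-side predicate, not part of the library):

#include <libwebsockets.h>

/* hypothetical: returns nonzero when the session fifo is nearly full */
static int fifo_space_low(void *user);

static int
callback_mirror(struct libwebsocket_context *context,
                struct libwebsocket *wsi,
                enum libwebsocket_callback_reasons reason,
                void *user, void *in, size_t len)
{
        switch (reason) {
        case LWS_CALLBACK_RECEIVE:
                /* queue in / len into the session fifo here ... */
                if (fifo_space_low(user))
                        /* just sets a flag; the library checks it and
                         * acts after the callback returns */
                        libwebsocket_rx_flow_control(wsi, 0);
                break;
        default:
                break;
        }

        return 0;
}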
Signed-off-by: Andy Green <andy.green@linaro.org>
This rips out the connection hashtable implementation along with
MAX_CLIENTS and replaces it with a dynamically allocated fds array
and lookup table along the same lines as the new extpoll implementation
from Edwin van den Oetelaar.
It detects the maximum number of file descriptors possible at context
init time and allocates accordingly; this can be controlled externally
with ulimit, and the server can be run as a specific user to make it
easy to target specific ulimit rules at it.
Many operations that translated between socket descriptors and struct
libwebsocket or pollfd objects no longer need to iterate, and will be
a lot faster under load.
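The sizing is along these lines (a sketch; the real code may use a
different probe):

#include <sys/resource.h>

/* detect how many fds this process may have, honouring ulimit -n */
static int
detect_max_fds(void)
{
        struct rlimit rl;

        if (!getrlimit(RLIMIT_NOFILE, &rl) && rl.rlim_cur != RLIM_INFINITY)
                return (int)rl.rlim_cur;

        return 1024;    /* conservative fallback */
}

The fds array and the fd-to-wsi lookup table are then both allocated
with that many entries at context creation.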
Signed-off-by: Andy Green <andy.green@linaro.org>
The hash scheme is overkill, since Edwin found a max connection limit
of 30000 on his box anyway. Just use a simple preallocated lookup
table and fds array.
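Resolving a descriptor back to its connection then becomes a direct
index with no hashing or iteration; an illustrative sketch (the field
name is assumed, not confirmed here):

/* O(1) fd -> wsi resolution via the preallocated lookup table */
static struct libwebsocket *
wsi_from_fd(struct libwebsocket_context *context, int fd)
{
        return context->lws_lookup[fd];
}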
AG Modified for style and removed debugging bits
Signed-off-by: Edwin van den Oetelaar <oetelaar.automatisering@gmail.com>
Signed-off-by: Andy Green <andy.green@linaro.org>
This leverages the refactor patches to introduce the ability to
disable building any client-side code in the library, or the
client-side test apps.
This will be a considerable size saving for the embedded server-only
case.
Signed-off-by: Andy Green <andy.green@linaro.org>
This adapts the approach from the single-packet-per-poll-loop improvement
to sending more packets while the socket can take them.
It still falls back to the multi-state scheme if the socket ever chokes,
which it certainly will on larger files, so it's safe while being highly
efficient at smaller file sizes.
Nor should it significantly add to latency for other sockets: it
simply stuffs the pipe asynchronously with as much as the pipe can
take.
We also increase the packet payload size from 512 to 1400 bytes at a
time.
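The send loop then looks something like this sketch (the chunk size is
per the patch; the function and error handling are illustrative):

#include <errno.h>
#include <sys/socket.h>

/* keep writing 1400-byte chunks until done or the socket chokes */
static int
stuff_pipe(int fd, const unsigned char *p, size_t rem)
{
        while (rem) {
                size_t chunk = rem > 1400 ? 1400 : rem;
                ssize_t n = send(fd, p, chunk, MSG_DONTWAIT);

                if (n < 0) {
                        if (errno == EAGAIN || errno == EWOULDBLOCK)
                                return 1;       /* choked: fall back to
                                                 * the multi-state scheme */
                        return -1;              /* fatal */
                }
                p += n;
                rem -= (size_t)n;
        }

        return 0;       /* everything sent in this poll loop */
}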
This reduces the time taken in the 300 connection / 5000 transfers ab test
from >8s to ~3.4s, transferring the same amount of data.
$ ab -t 100 -n 5000 -c 300 'http://127.0.0.1:7681/'
This is ApacheBench, Version 2.3 <$Revision: 1373084 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software: libwebsockets
Server Hostname: 127.0.0.1
Server Port: 7681
Document Path: /
Document Length: 8447 bytes
Concurrency Level: 300
Time taken for tests: 3.400 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 42680000 bytes
HTML transferred: 42235000 bytes
Requests per second: 1470.76 [#/sec] (mean)
Time per request: 203.976 [ms] (mean)
Time per request: 0.680 [ms] (mean, across all concurrent requests)
Transfer rate: 12260.17 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        7   24  15.6     20     125
Processing:    32  172  50.2    161     407
Waiting:       27  154  49.4    142     386
Total:         81  196  48.3    182     428

Percentage of the requests served within a certain time (ms)
  50%    182
  66%    185
  75%    188
  80%    194
  90%    304
  95%    316
  98%    322
  99%    328
 100%    428  (longest request)
Signed-off-by: Andy Green <andy.green@linaro.org>
From an idea by Edwin van den Oetelaar <oetelaar.automatisering@gmail.com>
When testing libwebsockets with ab, Edwin found an unexpected bump in
the distribution of latencies: some connections were held back for
almost the whole test duration.
http://ml.libwebsockets.org/pipermail/libwebsockets/2013-January/000006.html
Studying the problem revealed that when there is a mass of pending
connections amongst many active connections, we do not service the
listen socket often enough to clear the backlog; some connections seem
to go stale, violating FIFO ordering.
This patch introduces listen socket service "piggybacking", where every n
normal socket service actions we also check the listen socket and deal with
pending connections there.
Normally, it checks the listen socket gratuitously every 10 normal socket
services. However, if it finds something waiting, it forces a check on the
next normal socket service too by keeping stats on how often something was
waiting. If the probability of something waiting each time becomes high,
it will allow up to two waiting connections to be serviced for each normal
socket service.
In that way it has low burden in the normal case, but rapidly adapts by
detecting mass connection loads as found in ab.
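In sketch form, the heuristic is along these lines (the helper names
are hypothetical; the real bookkeeping lives in the service loop):

/* hypothetical helpers assumed from the surrounding server code */
int connection_pending(int listen_fd);
void accept_pending_connection(int listen_fd);

/* called once per normal socket service */
static void
piggyback_listen_service(int listen_fd)
{
        static unsigned int serviced, checked, hit;
        int budget = 1;

        /* gratuitous check every 10 services, or every time once the
         * observed probability of waiting connections gets high */
        if (serviced++ % 10 && hit * 2 <= checked)
                return;

        checked++;
        if (hit * 2 > checked)
                budget = 2;     /* mass connections: take two at a time */

        while (budget-- && connection_pending(listen_fd)) {
                hit++;
                accept_pending_connection(listen_fd);
        }
}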
Signed-off-by: Andy Green <andy.green@linaro.org>
The default remains SOMAXCONN; you can force it at configure time
along these lines:
./configure CFLAGS="-DLWS_SOMAXCONN=16384"
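The define simply ends up as the backlog passed to listen(); a sketch
of the pattern:

#include <sys/socket.h>

#ifndef LWS_SOMAXCONN
#define LWS_SOMAXCONN SOMAXCONN         /* the compiled-in default */
#endif

static int
start_listening(int sockfd)
{
        return listen(sockfd, LWS_SOMAXCONN);
}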
Signed-off-by: Andy Green <andy.green@linaro.org>
From an idea by Jack Mitchell <ml@communistcode.co.uk>
Use --without-testapps at configure time to suppress building the
test apps.
Signed-off-by: Andy Green <andy.green@linaro.org>