Hello,

I have tested an Apache Bloodhound installation on a reasonably powerful
setup (a VPS with 3 vCores of an Intel Xeon CPU E5-2650, 3 GB RAM, and an
SSD) using a combination of Nginx + uWSGI + PostgreSQL.

Using wrk (https://github.com/wg/wrk) as a load-testing tool, I managed
to serve a static HTML page at around 11000 requests/second (to get an
idea of Nginx performance on this hardware).
Serving a simple template with Django, I managed around 200 requests/second.

However, when serving the Apache Bloodhound startup page, I only manage
around 12 requests/second:
> ./wrk -t12 -c12 -d30s http://bloodhound.example.com
> Running 30s test @ http://bloodhound.example.com
>  12 threads and 12 connections
>  Thread Stats   Avg      Stdev     Max   +/- Stdev
>    Latency     1.07s   232.48ms   1.54s    73.89%
>    Req/Sec     0.71      0.63     2.00     51.94%
>
> 376 requests in 30.01s, 4.13MB read
> Requests/sec:     12.53
> Transfer/sec:    141.01KB

Can someone confirm that these numbers are roughly what is expected?
Or is there some way I can optimize to gain an order of magnitude?

If there is nothing I can do to improve these numbers, I will try to
put a caching tool such as Varnish in front of the application.
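In case caching turns out to be the way forward, a minimal Varnish setup might look like the sketch below. This assumes Varnish 4.x sitting in front of Nginx, with Nginx moved to port 8080; the host, port, and cookie name pattern are assumptions, not a tested configuration (though "trac_auth" is the usual Trac/Bloodhound session cookie):

```vcl
vcl 4.0;

# Backend is the existing Nginx instance, assumed moved to port 8080.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Only cache anonymous traffic; logged-in users carry a session
    # cookie (assumed to be "trac_auth") and must bypass the cache.
    if (req.http.Cookie ~ "trac_auth") {
        return (pass);
    }
}
```

Whether this helps depends on how much of the traffic is anonymous page views versus authenticated requests.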

To detail the software configuration:
1) I have used the Nginx+uWSGI+PostgreSQL setup described in ticket
#796 (https://issues.apache.org/bloodhound/ticket/796), with the
addition of serving static resources directly from Nginx (I will update
the patch to reflect this later)
2) I have tried several Nginx optimization tricks without any
significant changes (see for instance some Nginx tricks here:
https://github.com/perusio/piwik-nginx/blob/master/nginx.conf)
3) I have tried playing with the uWSGI process count and CPU affinity
parameters, again without any significant changes
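For reference, the static-file serving from point 1 amounts to a location block along these lines (the alias path is a placeholder and depends on where the Bloodhound htdocs were deployed; "/chrome/" is the usual Trac prefix for static assets):

```nginx
# Serve Trac/Bloodhound static assets directly, bypassing uWSGI.
# The alias path below is a placeholder for the deployed htdocs directory.
location /chrome/ {
    alias /var/www/bloodhound/htdocs/;
    expires 7d;
    access_log off;
}
```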
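And the uWSGI tuning from point 3 was along these lines. The values shown are examples sized for a 3-vCore VPS, and the socket path is an assumption, not a recommendation:

```ini
[uwsgi]
; Example tuning for a 3-vCore VPS; the socket path is a placeholder.
socket = /run/uwsgi/bloodhound.sock
master = true
processes = 3
threads = 2
; Pin each worker to one core.
cpu-affinity = 1
; Avoid thundering-herd wakeups across workers.
thunder-lock = true
```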

Thanks,
