>
> Maybe you want to disable it

Thanks for the reply! I have already tried that and it doesn't help.

> Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland.


During the test the CPU usage is pretty constant and the values are these:


%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
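As a side note, averaging those columns shows the headroom: roughly 27% idle overall, with the softirq time concentrated on Cpu1. A throwaway sketch to compute it (the cpu.txt file name and the awk field positions are my assumptions, based on the snapshot above):

```shell
# Save the four %Cpu lines from top, then average the "id" (idle)
# and "si" (softirq) columns. Splitting on ':' and ',' puts the
# idle figure in field 5 and softirq in field 8.
cat > cpu.txt <<'EOF'
%Cpu0  : 65.1 us,  5.0 sy,  0.0 ni, 29.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  : 49.0 us,  6.3 sy,  0.0 ni, 30.3 id,  0.0 wa,  0.0 hi, 14.3 si,  0.0 st
%Cpu2  : 67.7 us,  4.0 sy,  0.0 ni, 24.8 id,  0.0 wa,  0.0 hi,  3.6 si,  0.0 st
%Cpu3  : 72.1 us,  6.0 sy,  0.0 ni, 21.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
EOF
awk -F'[:,]' '{ id += $5; si += $8 } END { printf "avg id=%.1f%% avg si=%.1f%%\n", id/NR, si/NR }' cpu.txt
# -> avg id=26.7% avg si=4.5%
```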


> I saw you're doing http-server-close. Is there any good reason for that?


I need to handle different requests from different clients (I am not
interested in keep-alive, since clients usually make just 1 or 2 requests).
So I think that http-server-close doesn't matter, because it only affects
multiple requests *from the same client*.
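For completeness, one hedged alternative worth testing: even when each client sends only 1 or 2 requests, the *server-side* connections toward nginx can still be pooled and reused across different clients, which is the cost http-server-close forgoes. A sketch of what that could look like, assuming HAProxy 1.6+ (the section layout is illustrative, not the actual config):

```
# Sketch, not a drop-in: reuse idle server-side connections across
# clients so each new client does not pay a TCP handshake to nginx.
defaults
    mode http
    option http-keep-alive   # keep connections open on both sides
    http-reuse safe          # share idle server connections between clients
```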

> The maxconn on your frontend seem too low too compared to your target
> traffic (despite the 5000 will apply to each process).


It is 5,000 * 4 = 20,000, which should be enough for a test with 2,000
clients/s. In any case, I have also tried increasing it to 25,000 per
process and the performance is the same in the load tests.
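For context, a minimal sketch of how those numbers fit together (the names and values are illustrative, matching the figures discussed above; note that the global maxconn is also per process and must cover the frontend's):

```
global
    nbproc 4              # 4 processes, matching the 4 vCPUs
    maxconn 100000        # per-process global cap; must exceed frontend maxconn

frontend www
    bind :80
    maxconn 5000          # also per process, so 4 x 5,000 = 20,000 in total
```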

> Last, I would create 4 bind lines, one per process, like this in your
> frontend:
>   bind :80 process 1
>   bind :80 process 2
>

Do you mean bind-process? The HAProxy docs say that when bind-process is
not present, it is the same as bind-process all, so I think it is useless
to write it explicitly.
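One possible reading of the suggestion (hedged, since only Baptiste can confirm the intent): `bind :80 process 1` uses the per-bind `process` keyword, which is different from the `bind-process` directive. A sketch of the frontend it describes:

```
# bind-process restricts which processes run a whole section, while the
# "process" keyword on a bind line pins one listening socket to one
# process. With nbproc 4 this creates four sockets on port 80 (via
# SO_REUSEPORT on Linux), letting the kernel spread new connections
# evenly instead of all processes racing on a single socket.
frontend www
    bind :80 process 1
    bind :80 process 2
    bind :80 process 3
    bind :80 process 4
```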


On Fri, May 11, 2018 at 4:58 PM, Baptiste <[email protected]> wrote:

> Hi Marco,
>
> I see you enabled compression in your HAProxy configuration. Maybe you
> want to disable it and re-run a test just to see (though I don't expect any
> improvement since you seem to have some free CPU cycles on the machine).
> Maybe you can run a "top" showing each CPU usage, so we can see how much
> time is spent in SI and in userland.
> I saw you're doing http-server-close. Is there any good reason for that?
> The maxconn on your frontend seems too low compared to your target
> traffic (even though the 5000 will apply to each process).
> Last, I would create 4 bind lines, one per process, like this in your
> frontend:
>   bind :80 process 1
>   bind :80 process 2
>   ...
>
> Maybe one of your processes is being saturated and you don't see it. The
> configuration above will ensure an even distribution of the incoming
> connections across the HAProxy processes.
>
> Baptiste
>
>
> On Fri, May 11, 2018 at 4:29 PM, Marco Colli <[email protected]>
> wrote:
>
>> how many connections you have opened on the private side
>>
>>
>> Thanks for the reply! What should I do exactly? Can you see it from
>> HAProxy stats? I have taken two screenshots (see attachments) during the
>> load test (30s, 2,000 clients/s).
>>
>> they are not closing fast enough and you are reaching the limit.
>>
>>
>> What can I do to improve that?
>>
>>
>>
>>
>> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila <[email protected]> wrote:
>>
>>> Check how many connections you have opened on the private side (i.e.
>>> between haproxy and nginx); I'm thinking that they are not closing fast
>>> enough and you are reaching the limit.
>>>
>>> Best regards,
>>> Mihai
>>>
>>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>>
>>> Another note: each nginx server in the backend can handle 8,000 new
>>> clients/s: http://bit.ly/2Kh86j9 (tested with keep alive disabled and
>>> with the same http request)
>>>
>>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli <[email protected]>
>>> wrote:
>>>
>>>> Hello!
>>>>
>>>> Hope that this is the right place to ask.
>>>>
>>>> We have a website that uses HAProxy as a load balancer and nginx in the
>>>> backend. The website is hosted on DigitalOcean (AMS2).
>>>>
>>>> The problem is that - no matter the configuration or the server size -
>>>> we cannot achieve a connection rate higher than 1,000 new connections / s.
>>>> Indeed we are testing using loader.io and these are the results:
>>>> - for a session rate of 1,000 clients per second we get exactly 1,000
>>>> responses per second
>>>> - for session rates higher than that, we get long response times (e.g.
>>>> 3s) and only a few hundred responses per second (so there is a
>>>> bottleneck): https://ldr.io/2I5hry9
>>>>
>>>> Note that if we use a long http keep-alive in HAProxy and the same
>>>> browser makes multiple requests, we get much better results; however,
>>>> the problem is that in reality we need to handle many different clients
>>>> (which make 1 or 2 requests on average), not many requests from the
>>>> same client.
>>>>
>>>> Currently we have this configuration:
>>>> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the
>>>> result is the same)
>>>> - system / process limits and HAProxy configuration:
>>>> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
>>>> - 10x nginx backend servers with 2 vCPU each
>>>>
>>>> What can we improve in order to handle more than 1,000 different new
>>>> clients per second?
>>>>
>>>> Any suggestion would be extremely helpful.
>>>>
>>>> Have a nice day
>>>> Marco Colli
>>>>
>>>>
>>>
>>
>
