Thanks Baptiste.

The performance for SSL vs regular HTTP is very bad. Could someone help
with that? Below are the configuration, the test results, and the
monitoring tool output (the last part is the interesting one).

-------------     Configuration file   -----------------------------
global
    daemon
    maxconn  60000
    quiet
    nbproc 6
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
    cpu-map 5 4
    cpu-map 6 5
    tune.pipesize 524288
    user haproxy
    group haproxy
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m

defaults
    mode http
    option forwardfor
    retries 3
    option redispatch
    maxconn 60000
    option splice-auto
    option prefer-last-server
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend www-http
    bind <HAPROXY>:80
    default_backend www-backend

frontend www-https
    bind-process 1,2,3,4
    bind <HAPROXY>:443 ssl crt /etc/ssl/private/haproxy.pem
    default_backend www-backend

backend www-backend
    bind-process 1,2,3,4   # Have tested with this removed also, no difference
    mode http
    maxconn 60000
    stats enable
    stats uri /stats
    balance roundrobin
    option prefer-last-server
    option forwardfor
    option splice-auto
    cookie FKSID prefix indirect nocache
    server nginx-1 <NGINX-1>:80 maxconn 20000 check
    server nginx-2 <NGINX-2>:80 maxconn 20000 check
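
(For anyone reproducing this: the config can be sanity-checked and the
OpenSSL build confirmed with the commands below; the config path is an
assumption, adjust as needed.)

    # parse/validate the configuration without starting the daemon
    haproxy -c -f /etc/haproxy/haproxy.cfg
    # show build options, including the OpenSSL version haproxy is linked with
    haproxy -vv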

--------------   Test results   ------------------------------------
Test result without SSL:
    Requests per second:    181224.69 [#/sec] (mean)
    Transfer rate:          51854.33 [Kbytes/sec] received

Test result with SSL:
    SSL/TLS Protocol:       TLSv1/SSLv3,ECDHE-RSA-AES256-GCM-SHA384,1024,256
    Requests per second:    65313.33 [#/sec] (mean)
    Transfer rate:          18688.29 [Kbytes/sec] received
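
(For reference, the output above looks like ApacheBench; a run along
these lines produces the same "Requests per second" and "Transfer rate"
fields. The -c/-n values and the URL path below are placeholders, not
what was actually used.)

    # ApacheBench-style run; -k keeps connections alive,
    # -c/-n and the requested object (/64b here) are placeholders
    ab -k -c 250 -n 500000 http://<HAPROXY>:80/64b
    ab -k -c 250 -n 500000 https://<HAPROXY>:443/64b
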
--------------------   Monitoring tools results  -------------------
Pidstat/mpstat without SSL:
    pidstat shows each haproxy process using similar system resources:
    Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
    Average:      110      4730   19.60   29.20    0.00   48.80     -  haproxy
    (the remaining 5 processes are similar)
    mpstat shows the same picture:
    Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
    Average:       0   18.34    0.00   24.12    0.00    0.00    2.26    0.00    0.00    0.00   55.28

Pidstat/mpstat with SSL:
    pidstat shows the first haproxy (on cpu0) is the only active one; the
    rest are at 0% for all fields:
    Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
    Average:      110      4839   35.64   63.56    0.00   99.20     -  haproxy
    mpstat matches pidstat:
    Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
    Average:     all    0.75    0.00    0.82    0.01    0.00    0.52    0.00    0.00    0.00   97.90
    Average:       0   35.73    0.00   38.93    0.00    0.00   24.80    0.00    0.00    0.00    0.53
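
A quick way to cross-check which process is actually taking the traffic
is the admin socket declared in the global section (assuming socat is
installed); "show info" reports Nbproc and Process_num for whichever
process answers:

    # ask the running haproxy which process serves the admin socket
    # and how many connections it currently holds
    echo "show info" | socat stdio /var/run/haproxy.sock | \
        grep -E 'Nbproc|Process_num|CurrConns'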

Setting bind-process seems to reduce performance to about a third of the
original (even for plain http requests, if I add bind-process to the http
frontend section), because only the first cpu listed in bind-process is
actually busy. If I remove bind-process, https with 64-byte responses goes
up 2.5x to 140K rps (still lower than http, which is 180K), but of course
the 64K case then stops working.
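
A variant worth trying (a sketch only, not tested with this setup) is to
keep bind-process on the https frontend but pin one bind line per process,
so each of the four SSL processes gets its own :443 listener instead of
everything landing on the first one:

    frontend www-https
        bind-process 1,2,3,4
        # one listener per process, so all four processes accept SSL traffic
        bind <HAPROXY>:443 ssl crt /etc/ssl/private/haproxy.pem process 1
        bind <HAPROXY>:443 ssl crt /etc/ssl/private/haproxy.pem process 2
        bind <HAPROXY>:443 ssl crt /etc/ssl/private/haproxy.pem process 3
        bind <HAPROXY>:443 ssl crt /etc/ssl/private/haproxy.pem process 4
        default_backend www-backend

The backend would stay on the same 1,2,3,4 set, per the rule quoted below.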

Thanks again,
- Krishna Kumar


On Wed, May 13, 2015 at 7:05 PM, Baptiste <bed...@gmail.com> wrote:

> On Wed, May 13, 2015 at 2:16 PM, Krishna Kumar (Engineering)
> <krishna...@flipkart.com> wrote:
> > Hi Baptiste,
> >
> > Thank you very much for the tips. I have nbproc=8 in my configuration.
> > I made the following changes:
> >
> > Added both bind and tune.bufsize change         result -> works.
> > Removed the tune.bufsize                        result -> works.
> > Added bind-process for frontend and backend as:
> >         bind-process 1,2,3,4,5,6,7,8            result -> works.
> > Removed the bind-process                        result -> fails.
> >
> > (the bind-process change you suggested worked for 16K and also for 128K,
> > which
> > was what I was initially testing before going smaller to find that 16K
> > failed and 4K
> > worked)
> >
> > The performance for SSL is also much lower compared to regular traffic;
> > it may be related to configuration settings (about 2x to 3x worse):
> >
> > 128 bytes I/O:
> >         SSL:        BW: 22168.31 KB/s      RPS: 63408.79
> >         NO-SSL: BW: 61193.31 KB/s       RPS: 175033.38
> >
> > 64K bytes I/O:
> >         SSL:        BW: 506393.55 KB/s     RPS: 7884.49 rps
> >         NO-SSL: BW: 1101296.07 KB/s    RPS: 17147.05 rps
> >
> > I will send the configuration a little later, as it needs heavy cleaning
> > up; there are lots of things I want to clean up before sending it.
> >
> > Thanks,
> > - Krishna Kumar
> >
>
>
> Ok, so we spotted a bug there :)
> At least, HAProxy should warn you that your backend and frontend aren't
> on the same process.
> In my mind, HAProxy silently creates the backend on the frontend's
> process, even if it was not supposed to be there. But this behavior
> may have changed recently.
>
> No time to dig further into it, but I'll let Willy know so he can check
> on it.
>
> Simply bear this rule in mind: a frontend and a backend must be on the
> same process.
>
> Baptiste
>
