Getting Client IP to backend instance application

2017-01-17 Thread Jayalath, Viranga
Hi Haproxy team ,

I have a question. I have a backend instance attached to an HAProxy instance,
and I need to get the client IP into my Node.js application logs. However, I
am only getting the HAProxy IP. I have gone through the X-Forwarded-For
options that can be used to pass the client IP, but I still see the HAProxy
server's IP in my logs. Can you advise anything I can do? Below I have
included the configuration changes I used.

# add X-FORWARDED-FOR
option forwardfor
# add X-CLIENT-IP
http-request add-header X-CLIENT-IP %[src]
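
For reference, here is a minimal sketch of how a Node.js application could
read the forwarded address once "option forwardfor" is in place (the plain
http server and port below are placeholders, not from the original setup;
adapt to your app):

  const http = require('http');

  http.createServer((req, res) => {
    // "option forwardfor" makes HAProxy add an X-Forwarded-For header.
    // Node lowercases incoming header names; if several proxies are chained
    // the value is a comma-separated list with the original client first.
    const forwarded = req.headers['x-forwarded-for'];
    const clientIp = forwarded
      ? forwarded.split(',')[0].trim()
      : req.socket.remoteAddress; // falls back to the proxy's address
    console.log(clientIp + ' ' + req.method + ' ' + req.url);
    res.end('ok');
  }).listen(8080);

If the application happens to use Express, app.set('trust proxy', true) makes
req.ip return the forwarded client address instead of the proxy's.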


-- 
Best Regards,

Viranga Jayalath
DevOps and Application Engineering, Cloud Services Technology Operations

Pearson Lanka (Pvt) Ltd.
Technology Operations
Orion City, Alnitak Building
No. 752, Dr. Danister De Silva Mawatha
Sri Lanka


M  +94 (0) 714 672980

Learn more at pearson.com

ALWAYS LEARNING


Re: haproxy consuming 100% cpu - epoll loop

2017-01-17 Thread Willy Tarreau
Hi Patrick,

On Tue, Jan 17, 2017 at 02:33:44AM +, Patrick Hemmer wrote:
> So on one of my local development machines haproxy started pegging the
> CPU at 100%
> `strace -T` on the process just shows:
> 
> ...
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> epoll_wait(0, {}, 200, 0)   = 0 <0.03>
> ...

Hmm not good.

> Opening it up with gdb, the backtrace shows:
> 
> (gdb) bt
> #0  0x7f4d18ba82a3 in __epoll_wait_nocancel () from /lib64/libc.so.6
> #1  0x7f4d1a570ebc in _do_poll (p=<optimized out>, exp=-1440976915)
> at src/ev_epoll.c:125
> #2  0x7f4d1a4d3098 in run_poll_loop () at src/haproxy.c:1737
> #3  0x7f4d1a4cf2c0 in main (argc=<optimized out>, argv=<optimized
> out>) at src/haproxy.c:2097

Ok so an event is not being processed correctly.

> This is haproxy 1.7.0 on CentOS/7

Ah, that could be a clue. We've had 2 or 3 very ugly bugs in 1.7.0
and 1.7.1. One of them is responsible for the few outages on haproxy.org
(last one happened today, I left it running to get the core to confirm).
One of them is an issue with the condition to wake up an applet when it
failed to get a buffer first and it could be what you're seeing. The
other ones could possibly cause some memory corruption resulting in
anything.

Thus I'd strongly urge you to update this one to 1.7.2 (which I'm going
to do on haproxy.org now that I could get a core). Continue to monitor
it but I'd feel much safer after this update.

Thanks for your report!
Willy



Amazon Web Services

2017-01-17 Thread Julie Smith
Good Day,



Would you be interested in acquiring a list of companies or clients using
AWS?

We provide information across the globe - North America, EMEA, Asia Pacific
and LATAM.

Please review and let me know your thoughts; I will get back to you with
counts, pricing and more information in my next email.

Awaiting your response.



Best Regards,

Julie Smith

Demand Generation- Technology Database



ALL RIGHTS RESERVED. No part of this report may be reproduced or transmitted
in any form whatsoever, electronic or mechanical, including photocopying,
recording, or by any informational storage or retrieval system, without
express written, dated and signed permission from the author.


Re: Hitting rate limit?

2017-01-17 Thread Hubert Matthews
Are you using keepalives?  If not, you're mostly measuring the TCP/SSL
setup and teardown times.  Try ab -k.  I did some measurements recently
on a web system and got 7 kreq/s for a non-SSL site without keepalives
and 30 kreq/s with them.
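
For example (the URL, concurrency and request count below are only
placeholders, not taken from this thread):

  ab -k -c 200 -n 100000 http://your-haproxy-host/

The -k flag enables HTTP keep-alive, so each connection is reused for many
requests instead of paying the connection setup cost every time.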


--
Hubert Matthews



Re: Hitting rate limit?

2017-01-17 Thread Holger Just
Hi Atha,

Atha Kouroussis wrote:
> Output from ab against haproxy:
> Concurrency Level:  200
> Time per request:   49.986 [ms] (mean)

If you check these numbers, you'll notice that with a time of about 49 ms per
request and 200 concurrent requests, you'll end up at almost exactly 4000
requests / second:

(1000 ms/s / (49 ms/req)) * 200 concurrent requests ≈ 4081 req/s

Thus, in order to achieve a higher throughput, you have two options:

* You could try to reduce the time required per request, which probably
helps a certain amount,
* or you could increase the concurrency of your requests with ab (see the
sketch below). Since in the real world you'll probably get fewer requests
per source from way more sources, this would probably simulate your actual
production load even better.
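
As a rough sketch (the URL and counts are placeholders): at ~50 ms per
request, each connection completes about 20 requests per second, so reaching
e.g. ~10k req/s would need on the order of 500 concurrent connections:

  ab -c 500 -n 200000 http://your-haproxy-host/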

Best,
Holger



Re: Hitting rate limit?

2017-01-17 Thread Atha Kouroussis
Hi Aleks,

On Tue, Jan 17, 2017 at 8:55 AM, Aleksandar Lazic wrote:

> Hi.
>
> On 17-01-2017 05:46, Atha Kouroussis wrote:
>
>> Hi all,
>>
>> I seem to be hitting some kind of bottleneck at about 4k req/s and I'm not
>> able to find the cause.
>>
>> I have HAProxy 1.7.2 installed on Ubuntu 16.04.1, on a VM with 8 cores, 2GB
>> RAM and 1 Gbps networking. Testing with ab I cannot get past ~4K req/s, while
>> hitting the backend directly can yield 8-10K without issues. Requests are
>> POST, 1K data. Requests should be very short lived, 10-50ms on average, but
>> when going through HAProxy they seem to more than double to the 150ms range.
>> The round trip between haproxy and the backend is sub-1ms. Attaching haproxy
>> and OS config below.
>>
>> Any help/pointers on what might be wrong is greatly appreciated. Thanks
>> in advance!
>>
>
> What does the status page of haproxy show?
>

The status page of haproxy shows the session rate oscillating around
4500-5000 during the load test. The actual number of sessions remains very
low; it doesn't go beyond 1K.

>
> Do you run ab & haproxy on the same machine?


No, ab is run on dedicated machines; I generate load from 3 clients with ab.

>

> Can you post the output of ab for both the different backends (haproxy &
> backend)?

Output from ab against haproxy:
Document Path:          /
Document Length:        Variable

Concurrency Level:      200
Time taken for tests:   24.993 seconds
Complete requests:      10
Failed requests:        0
Non-2xx responses:      23
Total transferred:      16396187 bytes
Total body sent:        11300
HTML transferred:       2484 bytes
Requests per second:    4001.11 [#/sec] (mean)
Time per request:       49.986 [ms] (mean)
Time per request:       0.250 [ms] (mean, across all concurrent requests)
Transfer rate:          640.65 [Kbytes/sec] received
                        4415.28 kb/s sent
                        5055.94 kb/s total

Output from ab against backend:
Document Path:          /
Document Length:        Variable

Concurrency Level:      200
Time taken for tests:   16.734 seconds
Complete requests:      10
Failed requests:        68
   (Connect: 0, Receive: 34, Length: 0, Exceptions: 34)
Total transferred:      14495070 bytes
Total body sent:        11360
HTML transferred:       0 bytes
Requests per second:    5975.80 [#/sec] (mean)
Time per request:       33.468 [ms] (mean)
Time per request:       0.167 [ms] (mean, across all concurrent requests)
Transfer rate:          845.89 [Kbytes/sec] received
                        6629.40 kb/s sent
                        7475.29 kb/s total

That is with 1 ab client and haproxy with 1 backend.


> But overall, ~4k/s isn't that bad ;-)
>

What is driving me nuts is that it doesn't matter if I use 1 or 4 server
backends in haproxy, or if I generate the load with 1 or 3 clients: the
constant is the bottleneck in haproxy at about 4k/s. There are plenty of
resources in the VM; it should be able to go much higher.

Best,
Atha


> BR Aleks
>
>
> Best,
>> Atha
>>
>> ##
>> haproxy config
>> ##
>>
>> global
>> log /dev/log local0
>> log /dev/log local1 notice
>> chroot /var/lib/haproxy
>> stats socket /run/haproxy/admin.sock mode 660 level admin
>> stats timeout 30s
>> stats bind-process 1
>> user haproxy
>> group haproxy
>> daemon
>> # Default SSL material locations
>> ca-base /etc/ssl/certs
>> crt-base /etc/ssl/private
>> ssl-default-bind-ciphers
>> ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
>> ssl-default-bind-options no-sslv3
>>
>> maxconn 20
>> nbproc 8
>> cpu-map 1 0
>> cpu-map 2 1
>> cpu-map 3 2
>> cpu-map 4 3
>> cpu-map 5 4
>> cpu-map 6 5
>> cpu-map 7 6
>> cpu-map 8 7
>>
>> defaults
>> mode http
>> timeout connect 5s
>> timeout client 50s
>> timeout server 50s
>> errorfile 400 /etc/haproxy/errors/400.http
>> errorfile 403 /etc/haproxy/errors/403.http
>> errorfile 408 /etc/haproxy/errors/408.http
>> errorfile 500 /etc/haproxy/errors/500.http
>> errorfile 502 /etc/haproxy/errors/502.http
>> errorfile 503 /etc/haproxy/errors/503.http
>> errorfile 504 /etc/haproxy/errors/504.http
>>
>> listen bidders
>> bind *:80
>> maxconn 20
>> server srv1 xx.xx.xx.xx:yy check
>>
>> ##
>> sysctl settings
>> ##
>> net.ipv4.tcp_mem = 786432 1697152 1945728
>> net.ipv4.tcp_rmem = 4096 4096 16777216
>> net.ipv4.tcp_wmem = 4096 4096 16777216
>> net.ipv4.tcp_tw_reuse = 1
>> net.ipv4.ip_local_port_range = 1024 65023
>> net.ipv4.tcp_max_syn_backlog = 6
>> net.ipv4.tcp_fin_timeout = 30
>> net.ipv4.tcp_synack_retries = 3
>> net.ipv4.ip_nonlocal_bind = 1
>> net.core.somaxconn = 6
>> net.core.netdev_max_backlog = 1
>> fs.file-max = 1000
>> fs.nr_open = 1000
>>
>> ###
>> Security limits conf
>> ###
>> haproxy soft nofile 99
>> haproxy hard nofile 99
>>
>


Re: Hitting rate limit?

2017-01-17 Thread Aleksandar Lazic

Hi.

On 17-01-2017 05:46, Atha Kouroussis wrote:


Hi all,

I seem to be hitting some kind of bottleneck at about 4k req/s and I'm not
able to find the cause.


I have HAProxy 1.7.2 installed on Ubuntu 16.04.1, on a VM with 8 cores, 2GB
RAM and 1 Gbps networking. Testing with ab I cannot get past ~4K req/s, while
hitting the backend directly can yield 8-10K without issues. Requests are
POST, 1K data. Requests should be very short lived, 10-50ms on average, but
when going through HAProxy they seem to more than double to the 150ms range.
The round trip between haproxy and the backend is sub-1ms. Attaching haproxy
and OS config below.


Any help/pointers on what might be wrong is greatly appreciated. Thanks 
in advance!


What does the status page of haproxy show?

Do you run ab & haproxy on the same machine?
Can you post the output of ab for both the different backends (haproxy &
backend)?


But overall, ~4k/s isn't that bad ;-)

BR Aleks


Best,
Atha

##
haproxy config
##

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
stats bind-process 1
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

ssl-default-bind-options no-sslv3

maxconn 20
nbproc 8
cpu-map 1 0
cpu-map 2 1
cpu-map 3 2
cpu-map 4 3
cpu-map 5 4
cpu-map 6 5
cpu-map 7 6
cpu-map 8 7

defaults
mode http
timeout connect 5s
timeout client 50s
timeout server 50s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

listen bidders
bind *:80
maxconn 20
server srv1 xx.xx.xx.xx:yy check

##
sysctl settings
##
net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 6
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_synack_retries = 3
net.ipv4.ip_nonlocal_bind = 1
net.core.somaxconn = 6
net.core.netdev_max_backlog = 1
fs.file-max = 1000
fs.nr_open = 1000

###
Security limits conf
###
haproxy soft nofile 99
haproxy hard nofile 99