Re: maxsslconn vs maxsslrate

2018-06-07 Thread Mihir Shirali
Hi Alexander,

I have looked at the link. What I am looking for is an explanation of the
difference between maxsslconn and maxsslrate. The former does not result in
CPU savings while the latter does. Also, the former results in a large
number of TCP connection resets while the latter does not. What I'd like to
understand is why that is the case.
I am using nbproc set to 2.
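
For reference, a minimal global-section sketch contrasting the two settings,
with placeholder numbers (not tuned values) and based on my reading of the 1.8
docs; note that both limits are per process, so with nbproc 2 the effective
totals double:

    global
        nbproc 2
        # maxsslrate caps how many NEW SSL sessions per second each process
        # accepts; excess connections simply wait in the kernel backlog, so
        # the expensive handshakes are spread out over time and CPU stays
        # bounded.
        maxsslrate 500
        # maxsslconn caps how many SSL connections may exist CONCURRENTLY;
        # connections beyond the limit are rejected (clients see a reset),
        # but the handshake rate of the connections that are accepted is
        # not limited, so CPU can still peak.
        maxsslconn 40000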

On Thu, Jun 7, 2018 at 2:43 PM, Aleksandar Lazic  wrote:

> On 07/06/2018 14:30, Mihir Shirali wrote:
>
>> We have a large number of IP phones connecting to this port. There could
>> be as many as 80k. They request a file from a custom
>> application. haproxy front-ends the TLS connection and then forwards
>> the request to the application's HTTP port.
>>
>
> Have you taken a look at the link below for some tunings for the system
> and haproxy?
>
>> HA-Proxy version 1.8.8 2018/04/19
>> Copyright 2000-2018 Willy Tarreau 
>>
>
> [snipp]
>
> Any chance to update to 1.8.9?
>
> Thanks. Can you also send the "Anonymized haproxy conf"?
> The main question is: do you use threads and/or nbproc?
> This will be answered by the conf.
>
> Best regards
> aleks
>
>
>> On Thu, Jun 7, 2018 at 2:13 PM, Aleksandar Lazic  wrote:
>>
>>> Hi Mihir.
>>>
>>> On 07/06/2018 10:27, Mihir Shirali wrote:
>>>
>>>> Hi Team,
>>>>
>>>> We use haproxy to front TLS for a large number of endpoints; haproxy
>>>> processes the TLS session and then forwards the request to the backend
>>>> application.
>>>>
>>>> What we have noticed is that if there are a large number of connections
>>>> from different clients, the CPU usage goes up significantly. This is
>>>> primarily because haproxy is handling a lot of SSL connections. I came
>>>> across the 2 options above and tested them out.
>>>>
>>>>
>>> What do you mean by *large number*?
>>>
>>> https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to-achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27
>>>
>>>> With maxsslrate, CPU is better controlled, and if I combine this with a
>>>> 503 response in the front end I see great results. Is there a
>>>> possibility of connection timeout on the client here if there are a
>>>> very large number of requests?
>>>>
>>>> With maxsslconn, CPU is still pegged high - and clients receive a TCP
>>>> reset. This is also good, because there is no chance of a TCP timeout on
>>>> the client. Clients can retry after a bit and they are aware that the
>>>> connection is closed instead of waiting on a timeout. However, CPU still
>>>> seems pegged high. What is the reason for the high CPU on the server here -
>>>> is it because the SSL stack is still hit with this setting?
>>>>
>>>>
>>> SSL/TLS handling isn't that easy.
>>>
>>> Please can you share some more information, because the latest
>>> versions of haproxy introduce a lot of optimisations, also for TLS.
>>>
>>> haproxy -vv
>>>
>>> Anonymized haproxy conf.
>>>
>>>> --
>>>
>>>> Regards,
>>>> Mihir
>>>>
>>>>
>>> Best regards
>>> Aleks
>>>
>>
>> --
>> Regards,
>> Mihir
>>
>


-- 
Regards,
Mihir


Re: maxsslconn vs maxsslrate

2018-06-07 Thread Mihir Shirali
We have a large number of IP phones connecting to this port. There could be
as many as 80k. They request a file from a custom application. haproxy
front-ends the TLS connection and then forwards the request to the
application's HTTP port.

HA-Proxy version 1.8.8 2018/04/19
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Running on OpenSSL version : OpenSSL 1.0.2l.6.2.83
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built without PCRE or PCRE2 support (using libc's regex instead)
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace


On Thu, Jun 7, 2018 at 2:13 PM, Aleksandar Lazic  wrote:

> Hi Mihir.
>
> On 07/06/2018 10:27, Mihir Shirali wrote:
>
>> Hi Team,
>>
>> We use haproxy to front TLS for a large number of endpoints; haproxy
>> processes the TLS session and then forwards the request to the backend
>> application.
>>
>> What we have noticed is that if there are a large number of connections
>> from different clients, the CPU usage goes up significantly. This is
>> primarily because haproxy is handling a lot of SSL connections. I came
>> across the 2 options above and tested them out.
>>
>
> What do you mean by *large number*?
>
> https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to-achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27
>
>> With maxsslrate, CPU is better controlled, and if I combine this with a
>> 503 response in the front end I see great results. Is there a
>> possibility of connection timeout on the client here if there are a
>> very large number of requests?
>>
>> With maxsslconn, CPU is still pegged high - and clients receive a TCP
>> reset. This is also good, because there is no chance of a TCP timeout on
>> the client. Clients can retry after a bit and they are aware that the
>> connection is closed instead of waiting on a timeout. However, CPU still
>> seems pegged high. What is the reason for the high CPU on the server here -
>> is it because the SSL stack is still hit with this setting?
>>
>
> SSL/TLS handling isn't that easy.
>
> Please can you share some more information, because the latest
> versions of haproxy introduce a lot of optimisations, also for TLS.
>
> haproxy -vv
>
> Anonymized haproxy conf.
>
>> --
>> Regards,
>> Mihir
>>
>
> Best regards
> Aleks
>



-- 
Regards,
Mihir


maxsslconn vs maxsslrate

2018-06-06 Thread Mihir Shirali
Hi Team,

We use haproxy to front TLS for a large number of endpoints; haproxy
processes the TLS session and then forwards the request to the backend
application.
What we have noticed is that if there are a large number of connections
from different clients, the CPU usage goes up significantly. This is
primarily because haproxy is handling a lot of SSL connections. I came
across the 2 options above and tested them out.

With maxsslrate, CPU is better controlled, and if I combine this with a 503
response in the front end I see great results. Is there a possibility of a
connection timeout on the client here if there are a very large number of
requests?

With maxsslconn, CPU is still pegged high - and clients receive a TCP
reset. This is also good, because there is no chance of a TCP timeout on the
client. Clients can retry after a bit and they are aware that the
connection is closed instead of waiting on a timeout. However, CPU still
seems pegged high. What is the reason for the high CPU on the server here - is
it because the SSL stack is still hit with this setting?
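
For what it's worth, a rough sketch of the combination described above, in
case it helps the discussion; frontend/backend names, the certificate path and
the thresholds are placeholders (not values from this thread), and deny_status
could equally be 429 as Jarno suggests elsewhere in the thread:

    global
        maxsslrate 500          # cap new TLS handshakes per second per process

    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/site.pem
        # Shed load explicitly instead of letting clients time out: refuse
        # requests with a 503 once the frontend session rate gets too high.
        http-request deny deny_status 503 if { fe_sess_rate gt 400 }
        default_backend be_app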

-- 
Regards,
Mihir


Re: Haproxy support for handling concurrent requests from different clients

2018-05-16 Thread Mihir Shirali
Thanks Jarno!
This is just what we were looking for!



On Tue, May 15, 2018 at 10:17 PM, Jarno Huuskonen 
wrote:

> Hi,
>
> On Fri, May 11, Mihir Shirali wrote:
> > I did look up some examples for setting 503 - but all of them (as you've
> > indicated) seem based on src ip or src header. I'm guessing this is more
> > suitable for a DOS/DDOS attack? In our deployment, the likelihood of
> > getting one request each from many different clients is higher than getting
> > multiple requests from a single client.
>
> Can you explain how/when (on what condition) you'd like to limit the number
> of requests and have haproxy return a 503 status to clients (429 seems a more
> appropriate status code for this)?
>
> If you just want haproxy to return 503 for all new requests when
> there are X number of sessions/connections/session rate then
> take a look at fe_conn, fe_req_rate, fe_sess_rate, be_conn and
> be_sess_rate
> (https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.2-fe_conn)
> so for example something like
> http-request deny deny_status 503 if { fe_req_rate gt 50 }
>
> > As an update, the rate-limit directive has helped. However, the only problem
> > is that the client does not know that the server is busy and *could* time
> > out. It would be great if it were possible to somehow send a 503 out, so
> > the clients could retry after a random time.
>
> -Jarno
>
> --
> Jarno Huuskonen
>



-- 
Regards,
Mihir


Re: Haproxy support for handling concurrent requests from different clients

2018-05-11 Thread Mihir Shirali
Thanks Aleksandar for the help!
I did look up some examples for setting 503 - but all of them (as you've
indicated) seem based on src ip or src header. I'm guessing this is more
suitable for a DOS/DDOS attack? In our deployment, the likelihood of
getting one request each from many different clients is higher than getting
multiple requests from a single client.
As an update, the rate-limit directive has helped. However, the only problem
is that the client does not know that the server is busy and *could* time
out. It would be great if it were possible to somehow send a 503 out, so
the clients could retry after a random time.

With respect to the update - we are evaluating this and have run into some
issues since we need to host 2 different certificates on the port (served
based on the cipher). We should be able to fix this on our own though.

On Fri, May 11, 2018 at 11:41 AM, Aleksandar Lazic 
wrote:

> Hi Mihir.
>
> Am 11.05.2018 um 05:57 schrieb Mihir Shirali:
> > Hi Aleksandar,
> >
> > Why do you add http headers for a tftp service?
> > Do you really mean https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> > [Mihir]>>This TFTP is a custom application written by us. The http headers
> > also have custom attributes which are used by the backend application.
> >
> > haproxy version is
> > HA-Proxy version 1.5.11 2015/01/31
>
> Could you try to update at least to the latest 1.5 or better to 1.8?
> https://www.haproxy.org/bugs/bugs-1.5.11.html
>
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> >
> > [Mihir]>>I believe this only queues the packets, right? Is there a way we could
> > tell the client to back off and retry after a bit (like a 503)? This decision
> > would be based on the high number of requests.
>
> Yes, it's possible, but I haven't done it before.
> I would try this, but I hope that someone with more experience in this topic
> steps forward and shows us a working solution.
>
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-http-request
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_conn_rate
>
> tcp-request connection track-sc0 src
> http-request deny deny_status 503 if { src_conn_rate gt 10 }
>
> These lines are shamelessly copied from the examples in
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-tcp-request%20connection
>
> Regards
> Aleks
>
> > On Fri, May 11, 2018 at 1:58 AM, Aleksandar Lazic  wrote:
> >
> > Am 10.05.2018 um 18:27 schrieb Mihir Shirali:
> > > Hi Team,
> > >
> > > We have haproxy installed on a server which is being used primarily for
> > > front ending TLS. After session establishment it sets certain headers in
> > > the http request and forwards it to the application in the backend. The
> > > back end application is a tftp server and hence it can receive requests
> > > from a large number of clients.
> >
> > Why do you add http headers for a tftp service?
> > Do you really mean
> > https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
> >
> > > What we observe on our server is that when we have a large number of
> > > clients, haproxy gets quite busy and the CPU clocks pretty high. Since
> > > both haproxy and our backend application run on the same server, this
> > > combined CPU can get close to the limit.
> > > What we'd like to know is if there is a way to throttle the number of
> > > requests per second. All the searches so far seem to indicate that we
> > > could rate limit based on src ip or http header. However, since our
> > > client ips will be different in the real world we won't be able to use
> > > that (less recurrence).
> > > Could you please help? Is this possible?
> >
> > What's the output of haproxy -vv ?
> > There were some issues about high CPU usage, so maybe you will need to
> > update.
> >
> > Could this be an option?
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> > https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_updt_conn_cnt
> >
> > What's 'less recurrence', hours, days?
> >
> > Regards
> > Aleks
> >
> >
> >
> >
> > --
> > Regards,
> > Mihir
>
>
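
As a side note on the src_conn_rate example quoted above: that fetch counts
against a stick-table, so the frontend needs one declared. A minimal sketch of
the per-source variant (sizes, names and thresholds are made up, and as noted
above a per-source limit helps little when every client sends only one
request):

    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/site.pem
        # table filled by track-sc0, keyed on the client source IP
        stick-table type ip size 100k expire 30s store conn_rate(10s)
        tcp-request connection track-sc0 src
        http-request deny deny_status 503 if { src_conn_rate gt 10 }
        default_backend be_app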


-- 
Regards,
Mihir


Re: Haproxy support for handling concurrent requests from different clients

2018-05-10 Thread Mihir Shirali
Hi Aleksandar,

Why do you add http headers for a tftp service?
Do you really mean https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
[Mihir]>>This TFTP is a custom application written by us. The http headers
also have custom attributes which are used by the backend application.

haproxy version is
HA-Proxy version 1.5.11 2015/01/31

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
[Mihir]>>I believe this only queues the packets, right? Is there a way we
could tell the client to back off and retry after a bit (like a 503)? This
decision would be based on the high number of requests.


On Fri, May 11, 2018 at 1:58 AM, Aleksandar Lazic 
wrote:

> Am 10.05.2018 um 18:27 schrieb Mihir Shirali:
> > Hi Team,
> >
> > We have haproxy installed on a server which is being used primarily for
> > front ending TLS. After session establishment it sets certain headers in
> > the http request and forwards it to the application in the backend. The
> > back end application is a tftp server and hence it can receive requests
> > from a large number of clients.
>
> Why do you add http headers for a tftp service?
> Do you really mean https://de.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
>
> > What we observe on our server is that when we have a large number of
> > clients, haproxy gets quite busy and the CPU clocks pretty high. Since
> > both haproxy and our backend application run on the same server, this
> > combined CPU can get close to the limit.
> > What we'd like to know is if there is a way to throttle the number of
> > requests per second. All the searches so far seem to indicate that we
> > could rate limit based on src ip or http header. However, since our
> > client ips will be different in the real world we won't be able to use
> > that (less recurrence).
> > Could you please help? Is this possible?
>
> What's the output of haproxy -vv ?
> There were some issues about high CPU usage, so maybe you will need to
> update.
>
> Could this be an option?
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-rate-limit%20sessions
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.3-src_updt_conn_cnt
>
> What's 'less recurrence', hours, days?
>
> Regards
> Aleks
>



-- 
Regards,
Mihir


Haproxy support for handling concurrent requests from different clients

2018-05-10 Thread Mihir Shirali
Hi Team,

We have haproxy installed on a server which is being used primarily for
front ending TLS. After session establishment it sets certain headers in
the http request and forwards it to the application in the backend. The
back end application is a tftp server and hence it can receive requests
from a large number of clients.
What we observe on our server is that when we have a large number of clients,
haproxy gets quite busy and the CPU clocks pretty high. Since both haproxy
and our backend application run on the same server, this combined CPU can
get close to the limit.
What we'd like to know is if there is a way to throttle the number of
requests per second. All the searches so far seem to indicate that we
could rate limit based on src ip or http header. However, since our client
ips will be different in the real world we won't be able to use that (less
recurrence).
Could you please help? Is this possible?
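
For reference, the rate-limit sessions directive suggested in the replies is a
single frontend line; a minimal sketch with placeholder names and a made-up
rate:

    frontend fe_tls
        bind :443 ssl crt /etc/haproxy/site.pem
        # Accept at most 100 new sessions per second on this frontend;
        # connections above that rate wait in the kernel backlog rather than
        # being processed immediately, independent of the source IP.
        rate-limit sessions 100
        default_backend be_app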


Controlling list of "Acceptable CA names"

2017-01-07 Thread Mihir Shirali -X (mshirali - INFOSYS LIMITED at Cisco)
Hi All,

We have a scenario where haproxy might send a large list of "Acceptable client
certificate CA names" to the client as part of the "Certificate Request"
message. What we see on the client side is that it balks with the following
error:
>>> TLS 1.2 Alert [length 0002], fatal illegal_parameter
02 2f
139911422498632:error:1408E098:SSL routines:SSL3_GET_MESSAGE:excessive message 
size:s3_both.c:512:
---

Now, for the moment we worked around the problem by preventing the server from
sending down the client certificate request, but we're wondering:
1 - Is anyone aware of this issue, or is there a limitation to the number of
names that the server can send down?
2 - Is there a way to send the client certificate request, but avoid sending
the list of "acceptable client certificate CA names"?

Regards,
Mihir