Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Willy Tarreau
Hi,

On Wed, Jun 28, 2017 at 05:01:25PM +0800, Velmurugan Dhakshnamoorthy wrote:
> As I mentioned earlier, I was not aware that the people from discourse
> forum and this email d-list group are same. I am 100% new to HAProxy.

In general (and this is absolutely not specific to haproxy), when asking
questions in public places (forums, lists, etc) about topics already covered
somewhere else, it's appreciated to send pointers to the previous conversations
and to explain what additional information is needed, so that people willing to
help don't start again from points that were already answered but can instead
focus on your specific questions.

I've seen people do that a lot on Stack Overflow, for example; it's common
to read "In that question someone responded this or that, but I don't
understand what happens if I do that".

And it's not about whether you might find the same people; it's about the
fact that the people helping in these places are mostly doing it in their
spare time, and it's nice to help them optimize that time so they can serve
the maximum number of people with minimal effort.

Cheers,
Willy



Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Igor Cicimov
Hi all,

On Thu, Jun 29, 2017 at 11:23 AM, Velmurugan Dhakshnamoorthy <
dvel@gmail.com> wrote:

> Thanks Much Andrew,  I will definitely explore on this.
>
> Thanks again.
>
> On Jun 28, 2017 22:03, "Andrew Smalley"  wrote:
>
>> Hi Vel
>>
>> From what you describe, the example below using the tarpit feature may help
>> you; it is taken from
>> https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/
>>
>> frontend fe_api_ssl
>>   bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
>>   default_backend be_api
>>
>>   tcp-request inspect-delay 5s
>>
>>   acl document_request path_beg -i /v2/documents
>>   acl is_upload hdr_beg(Content-Type) -i multipart/form-data
>>   acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
>>   acl mark_seen sc0_inc_gpc0 gt 0
>>
>>   stick-table type string size 100k store gpc0_rate(60s)
>>
>>   tcp-request content track-sc0 hdr(Authorization) if METH_POST document_request is_upload
>>
>>   use_backend be_429_slow_down if mark_seen too_many_uploads_by_user
>>
>> backend be_429_slow_down
>>   timeout tarpit 2s
>>   errorfile 500 /etc/haproxy/errorfiles/429.http
>>   http-request tarpit
>>
>>
>>
>> Andrew Smalley
>>
>> Loadbalancer.org Ltd.
>> www.loadbalancer.org 
>>
>> +1 888 867 9504 / +44 (0)330 380 1064
>> asmal...@loadbalancer.org
>>
>> Leave a Review | Deployment Guides | Blog
>>
>> On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy 
>> wrote:
>>
>>> Hi Lukas,
>>> Thanks for your response in length. As I mentioned earlier, I was not
>>> aware that the people from discourse forum and this email d-list group are
>>> same. I am 100% new to HAProxy.
>>>
>>> Let me explain my current situation in-detail in this email thread,
>>> Kindly check if you or other people from the group can guide me.
>>>
>>> Our requirement to use HAProxy is NOT to load balance back-end (Weblogic
>>> 12c) servers, we have a single backend instance (ex: PIA1), our server
>>> capacity is not high to handle the heavy traffic during peak load, the peak
>>> load occurs only 2 times in a year, that's a reason we are not scaling up
>>> our server resources as they will be idle majority of the time.
>>>
>>> we would like to use HAProxy to throttle http/tcp connections during the
>>> peak load, so that the weblogic backend will not go to Out-Of-Memory
>>> state/PeopleSoft will not crash.
>>>
>>> To achieve http throttling,when setting maxconn to back end , HAProxy
>>> queue up further connections and releases once the active http connections
>>> become idle,however how weblogic works is, once the PeopleSoft URL is
>>> accessed and user is authenticated , cookie will be inserted to browser and
>>> cookie will be active by default 20 minutes, which mean even if user does
>>> not navigate and do anything inside the application, cookie session state
>>> will be retained in weblogic java heap. weblogic allocates small amount of
>>> memory in order to retain each active sessions (though memory allocation
>>> increase/decrease dynamically based on various business functionality i).
>>> as per current capacity , weblogic can retain only 100 session state ,
>>> which means, I don't want to forward any further connections to weblogic
>>> until some of the sessions from 100 are released (by default the session
>>> will be released when user clicks explicitly on signout button or
>>> inactivity timeout reaches 20 minutes).
>>>
>>> according to my understanding, maxconn in back-end throttles connections
>>> and releases to back-end as and when tcp connection status changed to idle,
>>> but though connections are idle, logout/signout not occurred from
>>> PeopleSoft, so that still session state are maintained in weblogic and not
>>> released and cannot handle further connections.
>>>
>>> that's reason, I am setting the maxconn in front end and keeping HTTP
>>> alive option ON, so that I can throttle connections at front end itself.
>>> According to my POC, setting maxconn in front-end behaves differently than
>>> setting in back-end, when it is on front-end, it hold further connections
>>> in kernel , once the existing http connections are closed, it allows
>>> further connections inside, in this I dont see any performance issue for
>>> existing connections.
>>>
>>> for your information HAProxy and Weblogic are residing in a same single
>>> VM.
>>>
>>> please let me know if my above understa

Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Velmurugan Dhakshnamoorthy
Thanks Much Andrew,  I will definitely explore on this.

Thanks again.

On Jun 28, 2017 22:03, "Andrew Smalley"  wrote:

> Hi Vel
>
> From what you describe, the example below using the tarpit feature may help
> you; it is taken from
> https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/
>
> frontend fe_api_ssl
>   bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
>   default_backend be_api
>
>   tcp-request inspect-delay 5s
>
>   acl document_request path_beg -i /v2/documents
>   acl is_upload hdr_beg(Content-Type) -i multipart/form-data
>   acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
>   acl mark_seen sc0_inc_gpc0 gt 0
>
>   stick-table type string size 100k store gpc0_rate(60s)
>
>   tcp-request content track-sc0 hdr(Authorization) if METH_POST document_request is_upload
>
>   use_backend be_429_slow_down if mark_seen too_many_uploads_by_user
>
> backend be_429_slow_down
>   timeout tarpit 2s
>   errorfile 500 /etc/haproxy/errorfiles/429.http
>   http-request tarpit
>
>
>
> Andrew Smalley
>
> Loadbalancer.org Ltd.
> www.loadbalancer.org 
>
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> Leave a Review | Deployment Guides | Blog
>
> On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy 
> wrote:
>
>> Hi Lukas,
>> Thanks for your response in length. As I mentioned earlier, I was not
>> aware that the people from discourse forum and this email d-list group are
>> same. I am 100% new to HAProxy.
>>
>> Let me explain my current situation in-detail in this email thread,
>> Kindly check if you or other people from the group can guide me.
>>
>> Our requirement to use HAProxy is NOT to load balance back-end (Weblogic
>> 12c) servers, we have a single backend instance (ex: PIA1), our server
>> capacity is not high to handle the heavy traffic during peak load, the peak
>> load occurs only 2 times in a year, that's a reason we are not scaling up
>> our server resources as they will be idle majority of the time.
>>
>> we would like to use HAProxy to throttle http/tcp connections during the
>> peak load, so that the weblogic backend will not go to Out-Of-Memory
>> state/PeopleSoft will not crash.
>>
>> To achieve http throttling,when setting maxconn to back end , HAProxy
>> queue up further connections and releases once the active http connections
>> become idle,however how weblogic works is, once the PeopleSoft URL is
>> accessed and user is authenticated , cookie will be inserted to browser and
>> cookie will be active by default 20 minutes, which mean even if user does
>> not navigate and do anything inside the application, cookie session state
>> will be retained in weblogic java heap. weblogic allocates small amount of
>> memory in order to retain each active sessions (though memory allocation
>> increase/decrease dynamically based on various business functionality i).
>> as per current capacity , weblogic can retain only 100 session state ,
>> which means, I don't want to forward any further connections to weblogic
>> until some of the sessions from 100 are released (by default the session
>> will be released when user clicks explicitly on signout button or
>> inactivity timeout reaches 20 minutes).
>>
>> according to my understanding, maxconn in back-end throttles connections
>> and releases to back-end as and when tcp connection status changed to idle,
>> but though connections are idle, logout/signout not occurred from
>> PeopleSoft, so that still session state are maintained in weblogic and not
>> released and cannot handle further connections.
>>
>> that's reason, I am setting the maxconn in front end and keeping HTTP
>> alive option ON, so that I can throttle connections at front end itself.
>> According to my POC, setting maxconn in front-end behaves differently than
>> setting in back-end, when it is on front-end, it hold further connections
>> in kernel , once the existing http connections are closed, it allows
>> further connections inside, in this I dont see any performance issue for
>> existing connections.
>>
>> for your information HAProxy and Weblogic are residing in a same single
>> VM.
>>
>> please let me know if my above understanding is correct about maxconn. Is
>> there any understanding gap ? is there any way to achieve my requirement
>> differently?
>>
>> when decided to use maxconn in front-end, the connection queuing for few
>> milli sec

Re: Looking for a way to limit simultaneous connections per IP

2017-06-28 Thread Patrick Hemmer


On 2017/6/28 17:40, Mark Staudinger wrote:
> Hi Patrick,
>
> Where are you using the stick table and lua script call?  Frontend or
> backend?
>
> Perhaps this would work:
>
> * In the frontend, check the connection count from the "real backend"
> stick table
> * if the count is > 6, set ACL for the source
> * Use this ACL to steer the connection to the "redirect backend" which
> will call the lua script to sleep/redirect
>
> In this way, redirected requests won't add to the backend count for
> the stick table counting such things, because they go to a different
> backend that doesn't actually talk to the resource you are protecting.
>
I think I found the solution. It's very similar to what you proposed.

frontend foofront
  http-request lua.delay_request if { src,table_conn_cur(fooback) ge 6 }
  http-request redirect prefix / code 302 if { src,table_conn_cur(fooback) ge 6 }

backend fooback
  stick-table type ip size 1 expire 10s peers cluster store conn_cur
  http-request track-sc1 src


I didn't see this solution at first as I didn't see the `table_conn_cur`
converter. I thought the only way to get a value from a stick table was
to track the connection.
The documentation is also a little confusing as it seems to imply it'll
use the string form of the IP address, when I expect the table stores
the binary form of the IP address. But it seems to work from my testing.


> Best,
> -Mark
>
> On Wed, 28 Jun 2017 16:56:03 -0400, Patrick Hemmer
>  wrote:
>
> So as the subject indicates, I'm looking to limit concurrent
> connections to a backend by the source IP. The behavior I'm trying
> for is that if the client has more than 6 connections, we sit on
> the request for a second, and then send back a 302 redirect to the
> same resource that was just requested.
>
> I was able to accomplish this using a stick table for tracking
> connection count, and a Lua script for doing the sleep ("sit on
> the request" part), but it has a significant flaw. Once the >6
> connection limit is hit, and we start redirecting with 302, the
> client can't leave this state. When they come back in after the
> redirect, they'll still have >6 connections, and will hit the rate
> limit rule again.
>
> We instead need a way to differentiate (count) connections held
> open and sitting in the Lua delay function, and connections being
> processed by a server.
>
> I'd be open to other ways of accomplishing the end goal as well.
> We want to use the 302 redirect so the rate limit is transparent
> to the client. And we want the delay so that the client just
> doesn't hammer haproxy with request after request, and the browser
> report it as a redirect loop (a brief delay will allow the
> existing connections to finish processing so that after the 302,
> it can be handled). And we're trying for a per-client limit (as
> opposed to a simple "maxconn" setting and a FIFO queue) to prevent
> a single client from monopolizing the backend resource.
>
> -Patrick
>
>
>
>



Re: Looking for a way to limit simultaneous connections per IP

2017-06-28 Thread Michael Ezzell
On Jun 28, 2017 16:58, "Patrick Hemmer"  wrote:


We instead need a way to differentiate (count) connections held open and
sitting in the Lua delay function, and connections being processed by a
server.


My wishlist would be per-client queueing, sort of a source IP or XFF-based
maxconn.  How sweet would that be?

Without addressing some potential issues with your solution, such as the
potential for requests being handled out of order (#7 processes before #6
if it arrives after two of #1-#5 finish while #6 waits), it seems like
maybe the http_first_req fetch would be useful?  It's not a given that the
redirect would reuse the same connection, but it might be worth a shot.

http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#7.3.6-http_first_req

Or, modify the query string for the redirect, to append another parameter
(or create a query string if there isn't one).  I have a setup that
rewrites a 404 over to a 302 and adds a query string parameter to tell
HAProxy to use a different backend for the subsequent request... I use this
as a hackaround for the fact that we don't have a way for HAProxy to retain
and resend idempotent [ Willy :) ] requests to a different backend (which
would be another wishlist item for me) on certain errors. Same idea --
behave differently after a redirect than before it, even though the
redirect sends the browser back to (essentially) the same URL.  I redirect
to the same base with ?action=refresh appended, which the proxy interprets
to mean the request should route to the alternate backend.
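
For illustration, here is a rough sketch of that query-string flag applied to
this thread's per-IP limit. All names are hypothetical, the one-second Lua
delay from the original post is omitted, and it assumes redirect rules accept
log-format expressions (as they do from 1.6 on), which the %[path] relies on:

frontend fe_app
  bind :80

  # a request coming back from the redirect carries the flag and skips the bounce
  acl retry_flagged urlp(action) -m str refresh
  acl over_limit    src,table_conn_cur(be_app) ge 6

  # keep the original path and append the flag on the way out
  # (note: this drops any query string the original request carried)
  http-request redirect location %[path]?action=refresh code 302 if over_limit !retry_flagged

  default_backend be_app

backend be_app
  stick-table type ip size 100k expire 10s store conn_cur
  http-request track-sc1 src
  server app1 10.0.0.10:8080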


Re: Looking for a way to limit simultaneous connections per IP

2017-06-28 Thread Mark Staudinger

Hi Patrick,

Where are you using the stick table and lua script call?  Frontend or  
backend?


Perhaps this would work:

* In the frontend, check the connection count from the "real backend"  
stick table

* if the count is > 6, set ACL for the source
* Use this ACL to steer the connection to the "redirect backend" which will
call the lua script to sleep/redirect


In this way, redirected requests won't add to the backend count for the  
stick table counting such things, because they go to a different backend  
that doesn't actually talk to the resource you are protecting.


Best,
-Mark

On Wed, 28 Jun 2017 16:56:03 -0400, Patrick Hemmer  
 wrote:


So as the subject indicates, I'm looking to limit concurrent connections
to a backend by the source IP. The behavior I'm trying for is that if
the client has more than 6 connections, we sit on the request for a
second, and then send back a 302 redirect to the same resource that was
just requested.


I was able to accomplish this using a stick table for tracking
connection count, and a Lua script for doing the sleep ("sit on the
request" part), but it has a significant flaw. Once the >6 connection
limit is hit, and we start redirecting with 302, the client can't leave
this state. When they come back in after the redirect, they'll still
have >6 connections, and will hit the rate limit rule again.


We instead need a way to differentiate (count) connections held open and  
sitting in the Lua delay function, and connections being processed by a  
server.


I'd be open to other ways of accomplishing the end goal as well. We want
to use the 302 redirect so the rate limit is transparent to the client.
And we want the delay so that the client just doesn't hammer haproxy
with request after request, and the browser report it as a redirect loop
(a brief delay will allow the existing connections to finish processing
so that after the 302, it can be handled). And we're trying for a
per-client limit (as opposed to a simple "maxconn" setting and a FIFO
queue) to prevent a single client from monopolizing the backend resource.


-Patrick

Replacing reqadd with http-request set-path

2017-06-28 Thread Norman Branitsky
The HAProxy 1.7 manual says:
"Using "reqadd"/"reqdel"/"reqrep" to manipulate request headers is discouraged 
in newer versions (>= 1.5)."

I've copied the "reqadd" statements from my HAProxy 1.5.18 configuration to 
Haproxy 1.7.7 and now want to update them:

acl path_licd  path_beg /licenseDetails

acl path_admin path_beg /admin /staff

acl path_data  path_beg /datamart

acl path_root  path /

reqrep ^([^\ \t]*)[\ \t]/(.*)\ (.*) \1\ /datamart/licenseDetails.do\ \3 if path_licd

reqrep ^([^\ \t]*)[\ \t]/(.*)\ (.*) \1\ /datamart/\2/languageChoice.do\ \3 if path_admin

reqrep ^([^\ \t]*)[\ \t]/\ (.*) \1\ /datamart/wiLogin.do\ \2  if path_data

redirect location /datamart/wiLogin.do if path_root

I assume the path_licd statement becomes:

http-request set-path /datamart/licenseDetails.do\ %[query] if path_licd

I assume the path_admin statement becomes:

http-request set-path /datamart/%[path]/languageChoice.do\ %[query] if path_admin

I assume the path_data statement becomes:

http-request set-path /datamart/wiLogin.do\ %[query] if path_data

I assume the redirect statement becomes:

http-request redirect /datamart/wiLogin.do if path_root

Are my "translations" correct?

Norman

Norman Branitsky
Cloud Architect
MicroPact
(o) 416.916.1752
(c) 416.843.0670
(t) 1-888-232-0224 x61752
www.micropact.com
Think it > Track it > Done



Looking for a way to limit simultaneous connections per IP

2017-06-28 Thread Patrick Hemmer
So as the subject indicates, I'm looking to limit concurrent connections
to a backend by the source IP. The behavior I'm trying for is that if
the client has more than 6 connections, we sit on the request for a
second, and then send back a 302 redirect to the same resource that was
just requested.

I was able to accomplish this using a stick table for tracking
connection count, and a Lua script for doing the sleep ("sit on the
request" part), but it has a significant flaw. Once the >6 connection
limit is hit, and we start redirecting with 302, the client can't leave
this state. When they come back in after the redirect, they'll still
have >6 connections, and will hit the rate limit rule again.

We instead need a way to differentiate (count) connections held open and
sitting in the Lua delay function, and connections being processed by a
server.

I'd be open to other ways of accomplishing the end goal as well. We want
to use the 302 redirect so the rate limit is transparent to the client.
And we want the delay so that the client just doesn't hammer haproxy
with request after request, and the browser report it as a redirect loop
(a brief delay will allow the existing connections to finish processing
so that after the 302, it can be handled). And we're trying for a
per-client limit (as opposed to a simple "maxconn" setting and a FIFO
queue) to prevent a single client from monopolizing the backend resource.

-Patrick


RE: Rewriting/redirecting part of URL

2017-06-28 Thread Mark Holmes
Great, I'll give that a go. Thanks Philipp! 

PS Don't feel sorry for me, I don't work for VWG group directly :)

-----Original Message-----
From: Philipp Buehler [mailto:e1c1bac6253dc54a1e89ddc046585...@posteo.net] 
Sent: 28 June 2017 18:37
To: Mark Holmes
Cc: 'haproxy@formilux.org'
Subject: Re: Rewriting/redirecting part of URL

Am 28.06.2017 19:20 schrieb Mark Holmes:
> Note that /audi/page/whatever will change all the time - essentially, 
> I want to preserve whatever comes after the first /, just rewriting 
> the domain part

I feel bad for "Audi" (shouts from an ex-Daimler one.. :D ) now.

With 1.6 you can just do that with 'http-request' and 'prefix':
acl oldthings hdr(host) -i old.com
http-request redirect prefix https://new.com if oldthings

HTH,
--
pb

This e-mail message is being sent solely for use by the intended recipient(s) 
and may contain confidential information.  Any unauthorized review, use, 
disclosure or distribution is prohibited.  If you are not the intended 
recipient, please contact the sender by phone or reply by e-mail, delete the 
original message and destroy all copies. Thank you.



Re: Rewriting/redirecting part of URL

2017-06-28 Thread Philipp Buehler

Am 28.06.2017 19:20 schrieb Mark Holmes:

Note that /audi/page/whatever will change all the time - essentially,
I want to preserve whatever comes after the first /, just rewriting
the domain part


I feel bad for "Audi" (shouts from an ex-Daimler one.. :D ) now.

With 1.6 you can just do that with 'http-request' and 'prefix':
acl oldthings hdr(host) -i old.com
http-request redirect prefix https://new.com if oldthings
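
Transposed onto the frontend from the original post, a sketch could look like
the following (hostnames adjusted to the www.old.com/www.new.com example,
ciphers line omitted, untested):

frontend www.new.com
    mode http
    bind 145.90.33.11:80
    bind 145.90.33.11:443 ssl crt /etc/haproxy/keys/www.new.com.pem no-sslv3

    # anything still arriving under the old host is bounced to the new one;
    # "prefix" keeps the original path and query intact
    acl oldhost hdr(host) -i www.old.com
    http-request redirect prefix https://www.new.com if oldhost

    redirect scheme https if !{ ssl_fc }
    option forwardfor
    default_backend www.old.com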

HTH,
--
pb



Rewriting/redirecting part of URL

2017-06-28 Thread Mark Holmes
Hi all,

I am trying to achieve the following in haproxy 1.6.6

We have a URL

www.old.com/audi/page/whatever

I'd like to redirect, maintaining everything after the first / ie 
/audi/page/whatever

For example

www.old.com/audi/page/whatever

redirects to

www.new.com/audi/page/whatever

Note that /audi/page/whatever will change all the time - essentially, I want to 
preserve whatever comes after the first /, just rewriting the domain part

I've tried a few things, the below seems to work if I hit the URL using HTTP 
but not if I use HTTPS (I need HTTPS)


frontend www.new.com
mode http
bind 145.90.33.11:80
bind 145.90.33.11:443 ssl crt /etc/haproxy/keys/www.new.com.pem no-sslv3 
ciphers 
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-G$



#acl h_static hdr(Host) -m beg www.new.com

#reqirep ^Host:\ www.new.com  Host:\ www.old.com if h_static


redirect scheme https if !{ ssl_fc }
option forwardfor


default_backend www.old.com

backend www.old.com
mode http
balance roundrobin
cookie SERVERID insert indirect nocache secure

server Node1 pp-websv08:1061 check cookie Node1
server Node2 pp-websv09:1061 check cookie Node2
server Sorry_Server 192.168.33.200:80 check backup



Grateful for any suggestions and thanks for reading!


Mark


This e-mail message is being sent solely for use by the intended recipient(s) 
and may contain confidential information.  Any unauthorized review, use, 
disclosure or distribution is prohibited.  If you are not the intended 
recipient, please contact the sender by phone or reply by e-mail, delete the 
original message and destroy all copies. Thank you.

Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-06-28 Thread Daren Sefcik
On Wed, Jun 28, 2017 at 8:12 AM, Olivier Doucet  wrote:

> Hi,
>
>
> 2017-06-28 16:47 GMT+02:00 Daren Sefcik :
>
>> Hi, I have searched for an answer to this and tried several things but
>> cannot seem to figure it out so am hoping someone can point me in the right
>> direction. I have different backend proxy servers (squid) setup to handle
>> specifically HTTP and HTTPS traffic but cannot figure out how to tell
>> haproxy to tell the difference and send appropriately.
>>
>> For example, I have
>>
>> frontend proxy_servers
>> backend http_proxies
>> backend https_proxies
>>
>> how can I tell frontend to send all http traffic to backend http_proxies
>> and all https traffic to https_backend? I have tried using dst_port 443 and
>> the acl https ssl_fc but nothing seems to distinguish https traffic.
>>
>
> Well, it should work. Send a copy of your config to see what's wrong in
> it.
>
> Olivier
>
>
>
>>
>> TIA...
>>
>
>
Here is an example; it continues to direct all https traffic to the web
proxy and not the streaming media one.

frontend HTPL_PROXY
bind 10.1.4.105:8181 name 10.1.4.105:8181
mode http
log global
option  http-server-close
option  forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
maxconn 9
timeout client  1
option tcp-smart-accept
acl is_youtube  hdr_sub(host) -i youtube.com
acl is_netflix  hdr_sub(host) -i netflix.com
acl is_nflixvideo   hdr_sub(host) -i nflxvideo.net
acl is_googlevideo  hdr_sub(host) -i googlevideo.com
acl is_google   hdr_sub(host) -i google.com
acl is_pandora  hdr_sub(host) -i pandora.com
acl is_https dst_port eq 443
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_youtube
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_netflix
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_nflixvideo
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_googlevideo
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_pandora
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_https
default_backend HTPL_WEB_PROXY_http_ipvANY
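
One guess, assuming the clients are configured to use HAProxy as an explicit
proxy on :8181 (the squid backends suggest so): in that mode HTTPS traffic
arrives as "CONNECT host:443" requests on port 8181, so neither dst_port 443
nor ssl_fc will ever match. A minimal sketch of matching on the method instead,
using the generic backend names from the earlier example:

frontend proxy_servers
    bind 10.1.4.105:8181
    mode http
    # browsers talking to an explicit proxy issue CONNECT for HTTPS sites,
    # so the request method is the usable discriminator here
    acl is_connect method CONNECT
    use_backend https_proxies if is_connect
    default_backend http_proxies

If that matches the setup, the same ACL could take the place of the is_https
rule in the config above.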


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-06-28 Thread Olivier Doucet
Hi,


2017-06-28 16:47 GMT+02:00 Daren Sefcik :

> Hi, I have searched for an answer to this and tried several things but
> cannot seem to figure it out so am hoping someone can point me in the right
> direction. I have different backend proxy servers (squid) setup to handle
> specifically HTTP and HTTPS traffic but cannot figure out how to tell
> haproxy to tell the difference and send appropriately.
>
> For example, I have
>
> frontend proxy_servers
> backend http_proxies
> backend https_proxies
>
> how can I tell frontend to send all http traffic to backend http_proxies
> and all https traffic to https_backend? I have tried using dst_port 443 and
> the acl https ssl_fc but nothing seems to distinguish https traffic.
>

Well, it should work. Send a copy of your config to see what's wrong in it.

Olivier



>
> TIA...
>


How to forward HTTP / HTTPS to different backend proxy servers

2017-06-28 Thread Daren Sefcik
Hi, I have searched for an answer to this and tried several things but
cannot seem to figure it out so am hoping someone can point me in the right
direction. I have different backend proxy servers (squid) setup to handle
specifically HTTP and HTTPS traffic but cannot figure out how to tell
haproxy to tell the difference and send appropriately.

For example, I have

frontend proxy_servers
backend http_proxies
backend https_proxies

how can I tell frontend to send all http traffic to backend http_proxies
and all https traffic to https_backend? I have tried using dst_port 443 and
the acl https ssl_fc but nothing seems to distinguish https traffic.

TIA...


Re: High Availability for haproxy itself

2017-06-28 Thread Daren Sefcik
We use pfSense with CARP & HAProxy; it works great.

On Fri, Jun 2, 2017 at 1:34 AM, Jiafan Zhou 
wrote:

> Hi,
>
> Haproxy ensures the HA for real servers such as httpd. However, in the
> case of haproxy itself, if it fails, then it requires another instance of
> haproxy to be ready. Is there any High Availability solution for haproxy
> itself?
>
> Regards,
> Jiafan
>
>
>


Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Andrew Smalley
Hi Vel

From what you describe, the example below using the tarpit feature may help
you; it is taken from
https://blog.codecentric.de/en/2014/12/haproxy-http-header-rate-limiting/

frontend fe_api_ssl
  bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
  default_backend be_api

  tcp-request inspect-delay 5s

  acl document_request path_beg -i /v2/documents
  acl is_upload hdr_beg(Content-Type) -i multipart/form-data
  acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
  acl mark_seen sc0_inc_gpc0 gt 0

  stick-table type string size 100k store gpc0_rate(60s)

  tcp-request content track-sc0 hdr(Authorization) if METH_POST document_request is_upload

  use_backend be_429_slow_down if mark_seen too_many_uploads_by_user

backend be_429_slow_down
  timeout tarpit 2s
  errorfile 500 /etc/haproxy/errorfiles/429.http
  http-request tarpit



Andrew Smalley

Loadbalancer.org Ltd.
www.loadbalancer.org 






+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog

On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy 
wrote:

> Hi Lukas,
> Thanks for your response in length. As I mentioned earlier, I was not
> aware that the people from discourse forum and this email d-list group are
> same. I am 100% new to HAProxy.
>
> Let me explain my current situation in-detail in this email thread, Kindly
> check if you or other people from the group can guide me.
>
> Our requirement to use HAProxy is NOT to load balance back-end (Weblogic
> 12c) servers, we have a single backend instance (ex: PIA1), our server
> capacity is not high to handle the heavy traffic during peak load, the peak
> load occurs only 2 times in a year, that's a reason we are not scaling up
> our server resources as they will be idle majority of the time.
>
> we would like to use HAProxy to throttle http/tcp connections during the
> peak load, so that the weblogic backend will not go to Out-Of-Memory
> state/PeopleSoft will not crash.
>
> To achieve http throttling,when setting maxconn to back end , HAProxy
> queue up further connections and releases once the active http connections
> become idle,however how weblogic works is, once the PeopleSoft URL is
> accessed and user is authenticated , cookie will be inserted to browser and
> cookie will be active by default 20 minutes, which mean even if user does
> not navigate and do anything inside the application, cookie session state
> will be retained in weblogic java heap. weblogic allocates small amount of
> memory in order to retain each active sessions (though memory allocation
> increase/decrease dynamically based on various business functionality i).
> as per current capacity , weblogic can retain only 100 session state ,
> which means, I don't want to forward any further connections to weblogic
> until some of the sessions from 100 are released (by default the session
> will be released when user clicks explicitly on signout button or
> inactivity timeout reaches 20 minutes).
>
> according to my understanding, maxconn in back-end throttles connections
> and releases to back-end as and when tcp connection status changed to idle,
> but though connections are idle, logout/signout not occurred from
> PeopleSoft, so that still session state are maintained in weblogic and not
> released and cannot handle further connections.
>
> that's reason, I am setting the maxconn in front end and keeping HTTP
> alive option ON, so that I can throttle connections at front end itself.
> According to my POC, setting maxconn in front-end behaves differently than
> setting in back-end, when it is on front-end, it hold further connections
> in kernel , once the existing http connections are closed, it allows
> further connections inside, in this I dont see any performance issue for
> existing connections.
>
> for your information HAProxy and Weblogic are residing in a same single VM.
>
> please let me know if my above understanding is correct about maxconn. Is
> there any understanding gap ? is there any way to achieve my requirement
> differently?
>
> when decided to use maxconn in front-end, the connection queuing for few
> milli seconds and seconds are OK, but when connections are queued in
> minutes, would like to emit some meaningful message to user, that's a
> reason asked if there is any way to display custom message when connections
> are queued in Linux kernel.
>
> to answer Luaks question, weblogic does not logout user when tcp
> connec

Subscribe

2017-06-28 Thread Mark Holmes


This e-mail message is being sent solely for use by the intended recipient(s) 
and may contain confidential information.  Any unauthorized review, use, 
disclosure or distribution is prohibited.  If you are not the intended 
recipient, please contact the sender by phone or reply by e-mail, delete the 
original message and destroy all copies. Thank you.

Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Velmurugan Dhakshnamoorthy
Hi Lukas,
Thanks for your detailed response. As I mentioned earlier, I was not aware
that the people on the Discourse forum and this mailing list are the same.
I am 100% new to HAProxy.

Let me explain my current situation in detail in this email thread; kindly
check whether you or other people from the group can guide me.

Our requirement for HAProxy is NOT to load balance back-end (WebLogic 12c)
servers; we have a single backend instance (ex: PIA1). Our server capacity is
not high enough to handle the heavy traffic during peak load, and the peak
load occurs only twice a year, which is why we are not scaling up our server
resources: they would be idle the majority of the time.

We would like to use HAProxy to throttle HTTP/TCP connections during the
peak load, so that the WebLogic backend will not go into an Out-Of-Memory
state and PeopleSoft will not crash.

To achieve HTTP throttling: when maxconn is set on the back end, HAProxy
queues further connections and releases them once the active HTTP connections
become idle. However, the way WebLogic works is that once the PeopleSoft URL
is accessed and the user is authenticated, a cookie is inserted into the
browser and stays active for 20 minutes by default. This means that even if
the user does not navigate or do anything inside the application, the session
state is retained in the WebLogic Java heap. WebLogic allocates a small amount
of memory to retain each active session (though the allocation grows and
shrinks dynamically based on various business functionality). As per current
capacity, WebLogic can retain only 100 session states, which means I don't
want to forward any further connections to WebLogic until some of those 100
sessions are released (by default a session is released when the user
explicitly clicks the sign-out button or the 20-minute inactivity timeout is
reached).

According to my understanding, maxconn on the back end throttles connections
and releases them to the back end as and when a TCP connection becomes idle.
But even though connections are idle, no logout/sign-out has occurred in
PeopleSoft, so the session state is still held in WebLogic, not released, and
WebLogic cannot handle further connections.

That's the reason I am setting maxconn on the front end and keeping the HTTP
keep-alive option ON, so that I can throttle connections at the front end
itself. According to my POC, setting maxconn on the front end behaves
differently than setting it on the back end: on the front end, further
connections are held in the kernel, and once existing HTTP connections are
closed, further connections are let in. With this I don't see any performance
issue for existing connections.
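
A minimal sketch of that front-end throttling (names, ports and addresses are
placeholders; maxconn 100 mirrors the roughly 100 sessions WebLogic can hold,
and connections beyond that simply wait in the kernel's accept backlog until
an accepted one closes):

frontend fe_peoplesoft
    bind :8000
    mode http
    option http-keep-alive
    # accept at most 100 concurrent client connections; the rest stay queued
    # in the listener backlog and are only accepted when a slot frees up
    maxconn 100
    default_backend be_weblogic

backend be_weblogic
    mode http
    # single WebLogic/PIA instance on the same VM (placeholder address/port)
    server PIA1 127.0.0.1:8080 check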

For your information, HAProxy and WebLogic reside on the same single VM.

Please let me know if my understanding of maxconn above is correct. Is there
any gap in my understanding? Is there any way to achieve my requirement
differently?

Having decided to use maxconn on the front end, connection queuing for a few
milliseconds or seconds is OK, but when connections are queued for minutes I
would like to emit some meaningful message to the user. That's the reason I
asked whether there is any way to display a custom message when connections
are queued in the Linux kernel.

To answer Lukas's question, WebLogic does not log the user out when the TCP
connection is closed. WebLogic creates new connections as and when required.



Best Wishes,
Vel

On Wed, Jun 28, 2017 at 9:47 AM, Lukas Tribus  wrote:

> Hello Andrew,
>
>
> Am 28.06.2017 um 02:06 schrieb Andrew Smalley:
> > Lukas
> >
> > Why is this triple posting? Surely he asked questions in a nice way in
> more than one location and deserves the right answer and not a flame down
> here.
> >
> > It is about helping people after all I hope!
>
> Questions have been answered in a lengthy thread some 10 days ago:
> http://discourse.haproxy.org/t/regarding-maxconn-parameter-in-backend-for-connection-queueing/1320/9
>
> No followup questions there.
>
>
> Then a new thread today, no specific question that hasn't already
> been answered in the previous thread, no followup responses (to my
> request to clarify the question) either:
> http://discourse.haproxy.org/t/custom-display-message-when-setting-maxconn-in-front-end-listen-block/1382/2
>
>
> Then he moves the discussion to the mailing list, not mentioning the
> conversations on discourse (which would have prevented people - in this
> case Jarno - from trying to explain the same thing all over again).
>
>
> Its about helping people out, but that doesn't work in the long term
> when we have people deliberately spread questions about the same topic
> across different channels (mailing list, discourse).
>
>
>
> Lukas Tribus:
> > Is there anything that has been answered 3 times already, or
> > do you just like to annoy other people?
>
> This should have been:
> Is there anything that has *not* been answered 3 times already?
>
>
>
> Velmurugan Dhakshnamoorthy:
> > Apologize,  my intent is not to annoy anyone
> > [...]
> > I am not aware this email group and discourse forum