Re: Converting from sticking on src-ip to custom auth header

2015-10-02 Thread Baptiste
You can create "dummy" backends, whose main purpose is to host a table
only, e.g.:
backend tbl_ip
 stick-table type ip size 10k
backend tbl_hdr
 stick-table type string len 12 size 10k

and refer to them in your rules:
stick on src table tbl_ip
tcp-request content track-sc1 hdr(x-app-authorization) table tbl_hdr
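
Put together, a minimal sketch might look like this (the backend name
"app" and the server line are illustrative, not from the thread):

```
backend tbl_ip
    stick-table type ip size 10k

backend tbl_hdr
    stick-table type string len 12 size 10k

backend app
    # each rule names the dummy backend whose table it should use
    stick on src table tbl_ip
    tcp-request content track-sc1 hdr(x-app-authorization) table tbl_hdr
    server s1 192.0.2.10:8080 check
```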

Baptiste



On Thu, Oct 1, 2015 at 10:24 AM, Igor Cicimov <
ig...@encompasscorporation.com> wrote:

> What version are you running? From memory up to 1.5.x you can have only
> one table per fe/be, not sure about 1.6 I haven't tried it yet. I've seen
> people using second table via dummy backend though. I don't have access to
> my notes atm so maybe someone else can jump in and help with this.
> On 01/10/2015 2:22 PM, "Jason J. W. Williams" 
> wrote:
>
>> I still would like to keep the rate limiting based on source ip but the
>> persistence based on header.
>>
>> My thought was to create a second named stick table but I didn't see a
>> name parameter to the stick-table declaration.
>>
>> Sent via iPhone
>>
>> On Sep 30, 2015, at 18:23, Igor Cicimov 
>> wrote:
>>
>> Well in case of header you would have something like this I guess:
>>
>> tcp-request content track-sc1 hdr(x-app-authorization)
>>
>>
>>
>> On Thu, Oct 1, 2015 at 9:47 AM, Jason J. W. Williams <
>> jasonjwwilli...@gmail.com> wrote:
>>
>>> Wondered about that... Do the "tcp-request" rate limiters use the stick
>>> table (I assume they need type ip) or another implied table?
>>>
>>> -J
>>>
>>> On Wed, Sep 30, 2015 at 3:41 PM, Igor Cicimov <
>>> ig...@encompasscorporation.com> wrote:
>>>
 The stick-table type would be string and not ip in that case though

 On 01/10/2015 5:07 AM, "Jason J. W. Williams" <
 jasonjwwilli...@gmail.com> wrote:
 >
 > We've been seeing CenturyLink and a few other residential providers
 NATing their IPv4 traffic, making client persistency on source IP result in
 really lopsided load balancing lately.
 >
 > We'd like to convert to sticking on a custom header we're already
 using that IDs the user. There isn't a lot of examples of this, so I was
 curious if this is the right approach:
 >
 > Previous "stick on src" config:
 https://gist.github.com/williamsjj/7c3876d32cab627ffe70
 >
 > New "stick on header" config:
 https://gist.github.com/williamsjj/f0ddc58b9d028b3fb906
 >
 > Thank you in advance for any advice.
 >
 > -J

 The stick-table type would be string and not ip in that case though

>>>
>>>
>>
>>
>> --
>> Igor Cicimov | DevOps
>>
>>
>> p. +61 (0) 433 078 728
>> e. ig...@encompasscorporation.com 
>> w. encompasscorporation.com
>> a. Level 4, 65 York Street, Sydney 2000
>>
>>


Re: url_ip is not properly extracted in HTTP CONNECT method ?

2015-10-02 Thread Baptiste
I think the difference is that in the second case, HAProxy takes the subnet into account.

You can make it faster like this:
 acl forbidden_dst url -m ip 192.168.0.0/24 172.16.0.0/12 10.0.0.0/8
 acl forbidden_dst url_dom -m ip 192.168.0.0/24 172.16.0.0/12 10.0.0.0/8

The IP list is stored in a tree, so that's very fast, and even faster
with the conf above since you'll perform only 2 lookups instead of 6.
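
In context, the condensed ACLs drop into a frontend unchanged (a sketch;
the bind port is illustrative):

```
frontend proxy-in
    bind *:3128
    # all subnets for one fetch live in a single ACL line, so each request
    # costs one tree lookup per fetch instead of one per subnet
    acl forbidden_dst url     -m ip 192.168.0.0/24 172.16.0.0/12 10.0.0.0/8
    acl forbidden_dst url_dom -m ip 192.168.0.0/24 172.16.0.0/12 10.0.0.0/8
    http-request deny if forbidden_dst
```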

Baptiste


On Thu, Oct 1, 2015 at 2:40 PM, Pavlo Zhuk  wrote:
> Workaround:
> I was able to implement the same functionality with -m ip on url matching, which
> is probably more expensive in CPU usage
>
>acl forbidden_dst url -m ip 192.168.0.0/24
>acl forbidden_dst url -m ip 172.16.0.0/12
>acl forbidden_dst url -m ip 10.0.0.0/8
>acl forbidden_dst url_dom -m ip 192.168.0.0/24
>acl forbidden_dst url_dom -m ip 172.16.0.0/12
>acl forbidden_dst url_dom -m ip 10.0.0.0/8
>
>
> -- Forwarded message --
> From: Pavlo Zhuk 
> Date: Thu, Oct 1, 2015 at 2:13 PM
> Subject: url_ip is not properly extracted in HTTP CONNECT method ?
> To: haproxy@formilux.org
>
>
> Dear all,
>
> I am trying to filter traversal access to my LAN via the HTTP CONNECT method,
> and I tried to use an acl with url_ip based on private IP range constants.
>
> Apparently this method works for HTTP GET, but isn't working for HTTP
> CONNECT.
> Is there any other way to inspect HTTP CONNECT destination?
>
> My config:
>
>
>acl forbidden_dst url_ip 192.168.0.0/24
>acl forbidden_dst url_ip 172.16.0.0/12
>acl forbidden_dst url_ip 10.0.0.0/8
>
>
>http-request deny if forbidden_dst
>
>
>
> Log for HTTP GET, request blocked:
>
> Oct  1 11:08:37 ip-10-2-170-57 haproxy[2227]: x.x.x.x:35963
> [01/Oct/2015:11:08:37.182] proxy-in proxy-in/ 0/-1/-1/-1/0 403 188 -
> - PR-- 0/0/0/0/
> 0 0/0 "GET http://10.1.1.1:22/ HTTP/1.1"
>
>
> Log for HTTP CONNECT, request bypassed (responded to as HTTP 403 by the backend
> service):
>
> Oct  1 11:08:55 ip-10-2-170-57 haproxy[2227]: x.x.x.x:35966
> [01/Oct/2015:11:08:55.101] proxy-in proxy/i-4c333482 0/0/1/2/3 403 423 - -
>  1/1/0/0/0 0
> /0 "CONNECT 10.1.1.1:22 HTTP/1.1"
>
>
> --
> BR,
> Pavlo Zhuk
>
>
>
>
> --
> BR,
> Pavlo Zhuk
> +38093 241



Transportation and Storage Service

2015-10-02 Thread Paula Sanchez
Hi,
Would you like to acquire Transportation contacts, or contacts for any specific industry?
Target titles come with: C-level, VP-level, mid-level owners, managers, directors, operations managers and key decision makers, etc.
Information we provide: contact name, title, email, company name, mailing address, phone and fax number, URL, industry, etc.
We can also assist you with other industries such as Construction, Real Estate, Retail, Manufacturing, Finance and Banking, Tourism and Travel, Hospitality, etc.
Please let me know your thoughts. I will get back to you with the number of leads/contacts we have, the cost to acquire them and, if possible, a sample.
Looking forward to hearing from you.

Thanks & Regards,
Melissa Edwards
Market Executive

Our services include: Lead Generation, Email Campaigns, Data Management, Data Appending & Cleansing, Business Mailing Lists, Technology-Specific Databases.

If you do not wish to hear from us again, please respond back with “Leave out” and we will honor your request.




Re: [PATCH] BUG: config: external-check command validation is checking for incorrect arguments.

2015-10-02 Thread Igor Wiedler
Hello,

I wanted to test the external-check option in 1.6 (master) and it seems like
the validation logic is broken. I was wondering what the status of this patch
is: http://marc.info/?l=haproxy&m=144240175729490&w=2
Can we get it merged?

Many thanks!

Regards,
Igor

MINOR: doc-lua: few typos.

2015-10-02 Thread David Carlier
Hi,

I found a few typos in the Lua doc.

regards,
From 2b6d53fa07adfbd09edfd841b004f8cd2d7d6f82 Mon Sep 17 00:00:00 2001
From: David Carlier 
Date: Fri, 2 Oct 2015 11:59:38 +0100
Subject: [PATCH] MINOR: doc-lua: few typos.

---
 doc/lua-api/index.rst | 22 +++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 54eae8f..b13b6b1 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -27,7 +27,7 @@ functions. Lua have 6 execution context.
executed in initialisation mode. This section is use for configuring Lua
bindings in HAProxy.
 
-2. The Lua **init context**. It is an Lua function executed just after the
+2. The Lua **init context**. It is a Lua function executed just after the
HAProxy configuration parsing. The execution is in initialisation mode. In
this context the HAProxy environment are already initialized. It is useful to
check configuration, or initializing socket connections or tasks. These
@@ -35,14 +35,14 @@ functions. Lua have 6 execution context.
`core.register_init()`. The prototype of the function is a simple function
without return value and without parameters, like this: `function fcn()`.
 
-3. The Lua **task context**. It is an Lua function executed after the start
+3. The Lua **task context**. It is a Lua function executed after the start
of the HAProxy scheduler, and just after the declaration of the task with the
Lua function `core.register_task()`. This context can be concurrent with the
traffic processing. It is executed in runtime mode. The prototype of the
function is a simple function without return value and without parameters,
like this: `function fcn()`.
 
-4. The **action context**. It is an Lua function conditionally executed. These
+4. The **action context**. It is a Lua function conditionally executed. These
actions are declared by the HAProxy directives "`tcp-request content lua
`", "`tcp-response content lua `", "`http-request lua
`" and "`http-response lua `". The prototype of the
@@ -61,7 +61,7 @@ functions. Lua have 6 execution context.
in the original HAProxy sample-fetches, in this case, it cannot return the
result. This case is not yet supported
 
-6. The **converter context**. It is an Lua function that takes a string as input
+6. The **converter context**. It is a Lua function that takes a string as input
and returns another string as output. These types of function are stateless,
it cannot access to any context. They don't execute any blocking function.
The call prototype is `function string fcn(string)`. This function can be
@@ -116,7 +116,7 @@ Core class
"core" class is basically provided with HAProxy. No `require` line is
required to uses these function.
 
-   The "core" class is static, t is not possible to create a new object of this
+   The "core" class is static, it is not possible to create a new object of this
type.
 
 .. js:attribute:: core.emerg
@@ -155,7 +155,7 @@ Core class
 
   **context**: body, init, task, action, sample-fetch, converter
 
-  This fucntion sends a log. The log is sent, according with the HAProxy
+  This function sends a log. The log is sent, according with the HAProxy
   configuration file, on the default syslog server if it is configured and on
   the stderr if it is allowed.
 
@@ -268,7 +268,7 @@ Core class
 
   **context**: body
 
-  Register an Lua function executed as action. All the registered action can be
+  Register a Lua function executed as action. All the registered action can be
   used in HAProxy with the prefix "lua.". An action gets a TXN object class as
   input.
 
@@ -314,7 +314,7 @@ Core class
 
   **context**: body
 
-  Register an Lua function executed as converter. All the registered converters
+  Register a Lua function executed as converter. All the registered converters
   can be used in HAProxy with the prefix "lua.". An converter get a string as
   input and return a string as output. The registered function can take up to 9
   values as parameter. All the value are strings.
@@ -340,7 +340,7 @@ Core class
 
   **context**: body
 
-  Register an Lua function executed as sample fetch. All the registered sample
+  Register a Lua function executed as sample fetch. All the registered sample
   fetchs can be used in HAProxy with the prefix "lua.". A Lua sample fetch
   return a string as output. The registered function can take up to 9 values as
   parameter. All the value are strings.
@@ -384,7 +384,7 @@ Core class
 
   **context**: body
 
-  Register an Lua function executed as a service. All the registered service can
+  Register a Lua function executed as a service. All the registered service can
   be used in HAProxy with the prefix "lua.". A service gets an object class as
   input according with the required mode.
 
@@ -1002,7 +1002,7 @@ TXN class
 
 .. js:function:: TXN.set_var(TXN, var, value)
 
-  

Re: retry new backend on http errors?

2015-10-02 Thread Bjorn Blomqvist
JCM  writes:

> 
> On 25 September 2014 14:47, Klavs Klavsen  wrote:
> > Any way to make haproxy retry requests with certain http response codes
> > X times (or just until all backends have been tried) ?
> 
> Nope. You really don't want to do this. And I'd be sad if the devs
> added anything in to HAProxy to enable this.
> 
> You don't know how far through a potentially world-changing operation
> the backend managed to get before it threw its error. Did it rollback
> correctly? Nothing generic (as HAProxy as middleware is) can figure
> this out, so you need to present the error to the consumer and get
> them to decide if they want to retry the request.
> 
> The ability of Varnish (and Nginx for that matter) to do this is an
> anti-feature, IMHO.
> 
> Jonathan
> 
> 

You are basically saying that a feature that might be misused is bad!

Let's instead assume the guy knows what he is doing and try to be helpful by
pointing out the pitfalls of doing it this way.

Maybe he has a well-designed idempotent REST API with a working transaction
model. Or maybe he just wants to use it for GET requests.

/Björn



Re: Interactive stats socket broken on master

2015-10-02 Thread Andrew Hayworth
Hi all -

On Thu, Oct 1, 2015 at 8:55 AM, Jesse Hathaway  wrote:
> It appears the following commit broke the interactive stats socket.
> When this commit is applied the stats socket disconnects after typing
> prompt and hitting enter.

Attached is a patch that fixes the issue for me.

Thanks!

From 9f785d7bc67c34ea441187c0e14c0ef573a71692 Mon Sep 17 00:00:00 2001
From: Andrew Hayworth 
Date: Fri, 2 Oct 2015 15:08:10 +
Subject: [PATCH 1/1] BUG/MINOR: Handle interactive mode in cli handler

A previous commit broke the interactive stats cli prompt. Specifically,
it was not clear that we could be in STAT_CLI_PROMPT when we get to
the output functions for the cli handler, and the switch statement did
not handle this case. We would then fall through to the default
statement, which was recently changed to set error flags on the socket.
This in turn causes the socket to be closed, which is not what we wanted
in this specific case.

To fix, we add a case for STAT_CLI_PROMPT, and simply break out of the
switch statement.

Testing:
 - Connected to unix stats socket, issued 'prompt', observed that I
   could issue multiple consecutive commands.
 - Connected to unix stats socket, issued 'prompt', observed that socket
   timed out after inactivity expired.
 - Connected to unix stats socket, issued 'prompt' then 'set timeout cli
   5', observed that socket timed out after 5 seconds expired.
 - Connected to unix stats socket, issued invalid commands, received
   usage output.
 - Connected to unix stats socket, issued 'show info', received info
   output and socket disconnected.
 - Connected to unix stats socket, issued 'show stat', received stats
   output and socket disconnected.
 - Repeated above tests with TCP stats socket.
---
 src/dumpstats.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index bdfb7e3..1a39258 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -2484,6 +2484,8 @@ static void cli_io_handler(struct appctx *appctx)
}
else {  /* output functions */
switch (appctx->st0) {
+   case STAT_CLI_PROMPT:
+   break;
case STAT_CLI_PRINT:
if (bi_putstr(si_ic(si), appctx->ctx.cli.msg) != -1)
appctx->st0 = STAT_CLI_PROMPT;
--
2.1.3


0001-BUG-MINOR-Handle-interactive-mode-in-cli-handler.patch
Description: Binary data


Thank you.

2015-10-02 Thread Susheel Jalali

Dear Bryan and HAProxy Developers,

Your insights from Sep 23 and Sep 29 helped us create an HAProxy 
configuration, successfully accessing (HTTP mode) our products via HAProxy.



We incorporated all of your insights regarding:

 * HTTP-request set-header (instead of reqadd path matcher)
 * #dontlognull
 * more useful timeout options
 * maxconn
 * ACL matching in a port-forwarded LAN environment
 * rectified an error in a reqrep regular expression


Thank you for your encouraging and timely help.

--

Sincerely,

Susheel Jalali


Coscend Communications Solutions

Elite Premio Complex Suite 200, Pune 411045 Maharashtra India

Susheel.Jalali@Coscend.com

Web site: www.Coscend.com

--

*Coscend’s* *Software Service Factory*

"*Coscend Communications* is ... *pioneering a new approach* to ... 
software applications development, and systems integration."

*Light Reading Network*, December 2007

"*Coscend* is at the *vanguard of a new evolution* in telco OSS/BSS systems 
integration."

*Caroline Chappell*
A leading authority in the communications services software industry

"There are *innovative* … *tools* from ... *Coscend* bubbling up, which 
will help accelerate the data consolidation process and reduce its cost."

*Dennis Mendyk,* /Editor,/ Building a *Telco Service Factory*

--

CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail 
Messages from Coscend Communications Solutions' posted at: 
http://www.Coscend.com/Terms_and_Conditions.html 





Re: Implementing HAProxy First Time: Conditional backend issue

2015-10-02 Thread Susheel Jalali

Dear Bryan and HAProxy Developers,

Your insights from Sep 23 and Sep 29 helped us create an HAProxy 
configuration, successfully accessing (HTTP mode) our products via HAProxy.


We incorporated all of your insights regarding:

 * HTTP-request set-header (instead of reqadd path matcher)
 * #dontlognull
 * more useful timeout options
 * maxconn
 * ACL matching in a port-forwarded LAN environment
 * rectified an error in a reqrep regular expression


Thank you for your encouraging and timely help.

--

Sincerely,

Susheel Jalali


Coscend Communications Solutions
Elite Premio Complex Suite 200,  Pune 411045 Maharashtra India
susheel.jal...@coscend.com

Web site: www.Coscend.com
--

CONFIDENTIALITY NOTICE: See 'Confidentiality Notice Regarding E-mail 
Messages from Coscend Communications Solutions' posted at: 
http://www.Coscend.com/Terms_and_Conditions.html



On 10/01/15 03:12, Bryan Talbot wrote:
On Wed, Sep 30, 2015 at 1:27 PM, Susheel Jalali 
> wrote:


Dear Bryan and HAProxy Developers:

As requested, here is the complete haproxy config content.

The relevant error logs are below:

-- --

global
log 127.0.0.1 local2
log-tag haproxy
chroot  /var/haproxy/lib
pidfile  /var/run/haproxy.pid
user haproxy
group   haproxy
nbproc  1
maxconn   256
spread-checks  5
daemon
stats socket  /var/haproxy/stats



The above seems normal, but maxconn=256 is pretty low; maybe that's 
appropriate for your apps.


defaults
modehttp
log global
option  httplog


The logs you show below do not conform to this 'httplog' format and 
seem to be a modified CLF, so I don't know for sure what values are in 
the logs below.



option dontlognull


When debugging problems, you should probably log as much as you can and 
then decide what logs to hide when everything is working.


option forwardfor
option  abortonclose
option  http-server-close
option  redispatch
retries 3


All seem normal

timeout queue   86400
timeout client  86400
timeout server  86400
timeout connect 86400
timeout http-keep-alive  30
timeout http-request  10
timeout check 20
maxconn 5


Default time values are in milliseconds so http-request must be 
completed in 10 ms. This might work fine on a LAN but will be an issue 
on a WAN or internet. I'm guessing that you assumed that time units 
default to seconds instead of milliseconds? If so, then these values 
are still a problem but then some will be way too long -- do you 
really want requests to queue and to wait to connect to a backend 
server for a full day (86,400 seconds / 24 hours)?
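
One way to avoid this class of mistake is to suffix every timeout with an
explicit unit. The values below are purely illustrative, not a
recommendation for this deployment:

```
defaults
    # units are explicit, so nothing silently means milliseconds
    timeout connect          5s
    timeout queue            1m
    timeout client           1m
    timeout server           1m
    timeout http-request     10s
    timeout http-keep-alive  30s
    timeout check            20s
```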



frontend webapps-frontend
bind  *:80

acl host_coscend req.hdr(Host) coscend.com:80



This will only match if the Host header value includes the port, which 
is uncommon, especially for port :80.
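
A simple way to tolerate an optional port is to list both forms in the
ACL (a sketch based on the config above):

```
frontend webapps-frontend
    bind *:80
    # match the Host header with or without an explicit :80 port
    acl host_coscend req.hdr(Host) -i coscend.com coscend.com:80
```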



acl path_subdomain_p1 path_beg -i /Product1

use_backend subdomain_p1-backend if host_coscend path_subdomain_p1

default_backend webapps-backend

backend webapps-backend
[ - - -]
option   http-server-close
server Product1.VM0 12.12.12.112:6012
 cookie pad-p check


server is named "Product1.VM0" here but appears in logs below as 
"Product1" which leads me to believe that this config is not used to 
produce the logs.



backend subdomain_p1-backend
http-request set-header Host 12.12.12.112:6012

reqirep ^([^\ :]*)\ /Product1/(.*) \1\ /\2
acl hdr_location res.hdr(Location) -m found
rspirep ^Location:\ (http?://12.12.12.112
(:[0-9]+)?)?(/.*) Location:\ /Product1  if
hdr_location
server Product1.VM0 12.12.12.112:6012
 cookie pad-p check


server is named "Product1.VM0" here but appears in logs below as 
"Product1" which leads me to believe that this config is not used to 
produce the logs.



admin.log
Sep 30 12:42:29 localhost haproxy[1691]: Proxy webapps-frontend
started.
Sep 30 12:42:29 localhost haproxy[1691]: Proxy webapps-frontend
started.
Sep 30 12:42:29 localhost haproxy[1691]: Proxy webapps-backend
started.
Sep 30 12:42:29 localhost haproxy[1691]: Proxy webapps-backend
started.
Sep 30 12:42:29 localhost haproxy[1691]: Proxy
subdomain_p1-backend started.
Sep 30 12:42:29 localhost haproxy[1691]: Proxy HAProxy-stats started.
Sep 30 12:47:29 localhost haproxy[5690]: Stopping frontend

Weekly news from RFI - The new life of Father Vandenbeusch, former hostage of...

2015-10-02 Thread RFI L'HEBDO
Weekly news from RFI -  02/10/2015

View this email in your browser 

http://rfi.nlfrancemm.com/HM?b=sQDDwTV3zgWiS_QkBwoQif1SYkgE4aq9qhV8xNklDUGgYdYdlxU_yVJwaswQfXQm=2cpf0W_IxdSgjvmC7wfO1w
 


The new life of Father Vandenbeusch, former Boko Haram hostage
At the end of 2013, the French priest Georges Vandenbeusch, parish priest of a 
village in the far north of Cameroon, was kidnapped by the Nigerian extremist 
group Boko Haram. Freed six weeks later, he returned to France and resumed his 
ministry in the Paris suburbs. But he has forgotten nothing. A portrait.
http://rfi.nlfrancemm.com/HP?b=u4wUmp_C12A26Dxgdjyr4Q99Kafg8SdYacBsWN41IX7B8wwUzBWennhwcUxjVdhf=eXL_URIu3r7841ZIhW5ttg
Dobet Gnahoré, an heir of Werewere Liking
A child of the Ki Yi M’Bock artists' village, founded in Abidjan by Werewere 
Liking, a star of 1990s world music, Dobet Gnahoré is charting her own course 
between Europe and Africa. Proud of her roots, she flourishes between music, 
song, dance and commitments to the most deprived. An interview with the artist, 
in concert this Friday 2 October in Strasbourg, her latest adopted city.
http://rfi.nlfrancemm.com/HP?b=6Du3T_yP2JVGAS19i93BHrWHpP1bhB-A2PIL96mUxlPla5nMqL9dhd1BXO_wGJX2=o783crIgu7uMBAZdleOo2Q
Donbass blockade: crossings at a trickle
Despite the very real conflict that has been unfolding in the Donbass since 
spring 2014, the Kiev government refuses to declare a state of war. A report 
from the front line, which serves as a de facto border between Ukraine and the 
separatist territories.
http://rfi.nlfrancemm.com/HP?b=1D8WKDH5TjF9Y8sZWAGpdYhcKz-tdxBWDXzQ3TwbUoQutVwvXl36F8WC4uU248wG=NM2UnFnwztRktprJwBHdtQ
A novel of homecoming, South African version
An author of novels and short-story collections, Zoë Wicomb has established 
herself as one of the leading literary voices of contemporary South Africa. 
With Octobre, her third novel translated into French, she delivers an account 
of a search for origins in a renewed country over which the shadows of the 
past still hang.
http://rfi.nlfrancemm.com/HP?b=HmYWfZ7bGArEoj4w1oDaOgtiZpwG3EWKeX1wbak4n9_lr_KS6t-B2oEE2NZq6ILd=68Z4nFOmdSDaZoJ1enDcKg
Is there "Daesh wheat" on our plates?
Oil is not the only resource available to the terrorist group Daesh. 
Agricultural production, notably wheat, also plays an important role: the 
revenue it generates serves both to fund the group and to control the 
population. Trafficking and informal trade around cereal production have thus 
allowed "Daesh wheat" to end up in international commercial circuits.
http://rfi.nlfrancemm.com/HP?b=ne-NCMnZOiIC725ubamofrUuDdkUITvlKgVFseVJkzZzOYBOz_ILwycqBzATDywy=j3ikijztSod8fKglSXTf4A
Osvalde Lewatt, photographer of the Congolese night
The Cameroonian director Osvalde Lewatt, 39, is exhibiting in Paris her 
photographs of Kinshasa and other cities of the Democratic Republic of the 
Congo, a country where she lived for five years and which she criss-crossed by 
night. Her work is the subject of a book, Congo couleur nuit, due out in 
November.
http://rfi.nlfrancemm.com/HP?b=m_VGi8RtKlK0H9LQbv8CyOC6OVn2z_en_rZrbSm_h2U74yWqcXVIWbx4zXoj4FZK=8PfBJBsanjtfmCxw_lwX0Q


One year after the murder of Hervé Gourdel in Algeria
A year ago, the Frenchman Hervé Gourdel was murdered by a terrorist group in 
the Djurdjura mountains of Algeria. On the heights of the town of Bouira, 
about a hundred kilometres from the Algerian capital, the region has not 
changed its habits, even though terrorists still remain in the forest. A 
report by Leïla Beratto from the Bouira region.
http://rfi.nlfrancemm.com/HP?b=-GShrX-tzV7q4HEdU25AyYNdC0eebZOLYEi0IyLMZXp9ZMmIdPj7VYvYzUS7IgMy=sLPsmzCMzP7qTxS_vrGpvA
The challenge of access to water in the Central African Republic
In the Central African Republic, access to water, drinkable or not, is one of 
the problems of a country that must rebuild itself after two years of 
conflict. In Bambari, where clashes broke out again a month ago, the water 
company has been unable to provide any service for years. Nearly 40,000 
displaced people live in makeshift camps where finding water is a daily 
challenge.
http://rfi.nlfrancemm.com/HP?b=imQ7nxKGru3VUd3_qupsIZXnMnFxJ80SpGf0IKWaCkZGNWj7znCCskxwFYNJpKrB=pgizn-1Z4msLXcMNnnW1VA
The United Nations wants to put an end to extreme poverty
This weekend in New York, the world's leaders launched the Sustainable 
Development Goals. Gathered for the general assemblies of the IMF and the 
World Bank, they agreed to make the world better within fifteen years, with 
one concrete objective: eradicating extreme poverty.
http://rfi.nlfrancemm.com/HP?b=VxZBbnAHt1nkYHtQnCnr5DuMfXV-jbVn55tMVqD34emAtDQJv2MI3R3yJZPzRVkB=H6c4uHdKlcTHH_YMKnxvwQ


Touring Europe without leaving Paris
Paris is the city in the world that welcomes the most 

[PATCH 1/1] MINOR: cli: Dump all resolvers stats if no resolver

2015-10-02 Thread Andrew Hayworth
Hi all -

Below is a patch for the 'show stat resolvers' cli command. It changes
the command such that it will show all resolvers configured, if you do
not specify a resolver id.

I found this useful for debugging, and plan on using it in the hatop tool.

Let me know if you have any feedback on it!

Thanks -

Andrew Hayworth
--

From c4061d948d21cabb95f093b5d9655c9d226724af Mon Sep 17 00:00:00 2001
From: Andrew Hayworth 
Date: Fri, 2 Oct 2015 20:33:01 +
Subject: [PATCH 1/1] MINOR: cli: Dump all resolvers stats if no resolver
 section is given

This commit adds support for dumping all resolver stats. Specifically
if a command 'show stats resolvers' is issued withOUT a resolver section
id, we dump all known resolver sections. If none are configured, a
message is displayed indicating that.
---
 doc/configuration.txt |  6 +++--
 src/dumpstats.c   | 72 +++
 2 files changed, 42 insertions(+), 36 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 3102516..e519662 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16043,8 +16043,10 @@ show stat [  ]
 A similar empty line appears at the end of the second block (stats) so that
 the reader knows the output has not been truncated.

-show stat resolvers <resolvers section id>
-  Dump statistics for the given resolvers section.
+show stat resolvers [<resolvers section id>]
+  Dump statistics for the given resolvers section, or all resolvers sections
+  if no section is supplied.
+
   For each name server, the following counters are reported:
 sent: number of DNS requests sent to this server
 valid: number of DNS valid responses received from this server
diff --git a/src/dumpstats.c b/src/dumpstats.c
index 1a39258..ea3f49a 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -1166,23 +1166,19 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
  if (strcmp(args[2], "resolvers") == 0) {
struct dns_resolvers *presolvers;

-   if (!*args[3]) {
- appctx->ctx.cli.msg = "Missing resolver section identifier.\n";
- appctx->st0 = STAT_CLI_PRINT;
- return 1;
-   }
-
-   appctx->ctx.resolvers.ptr = NULL;
-   list_for_each_entry(presolvers, &dns_resolvers, list) {
- if (strcmp(presolvers->id, args[3]) == 0) {
-   appctx->ctx.resolvers.ptr = presolvers;
-   break;
+   if (*args[3]) {
+ appctx->ctx.resolvers.ptr = NULL;
+ list_for_each_entry(presolvers, &dns_resolvers, list) {
+   if (strcmp(presolvers->id, args[3]) == 0) {
+ appctx->ctx.resolvers.ptr = presolvers;
+ break;
+   }
+ }
+ if (appctx->ctx.resolvers.ptr == NULL) {
+   appctx->ctx.cli.msg = "Can't find that resolvers section\n";
+   appctx->st0 = STAT_CLI_PRINT;
+   return 1;
  }
-   }
-   if (appctx->ctx.resolvers.ptr == NULL) {
- appctx->ctx.cli.msg = "Can't find resolvers section.\n";
- appctx->st0 = STAT_CLI_PRINT;
- return 1;
}

appctx->st2 = STAT_ST_INIT;
@@ -6402,24 +6398,32 @@ static int stats_dump_resolvers_to_buffer(struct stream_interface *si)
/* fall through */

  case STAT_ST_LIST:
-   presolvers = appctx->ctx.resolvers.ptr;
-   chunk_appendf(&trash, "Resolvers section %s\n", presolvers->id);
-   list_for_each_entry(pnameserver, &presolvers->nameserver_list, list) {
- chunk_appendf(&trash, " nameserver %s:\n", pnameserver->id);
- chunk_appendf(&trash, "  sent: %ld\n", pnameserver->counters.sent);
- chunk_appendf(&trash, "  valid: %ld\n", pnameserver->counters.valid);
- chunk_appendf(&trash, "  update: %ld\n", pnameserver->counters.update);
- chunk_appendf(&trash, "  cname: %ld\n", pnameserver->counters.cname);
- chunk_appendf(&trash, "  cname_error: %ld\n", pnameserver->counters.cname_error);
- chunk_appendf(&trash, "  any_err: %ld\n", pnameserver->counters.any_err);
- chunk_appendf(&trash, "  nx: %ld\n", pnameserver->counters.nx);
- chunk_appendf(&trash, "  timeout: %ld\n", pnameserver->counters.timeout);
- chunk_appendf(&trash, "  refused: %ld\n", pnameserver->counters.refused);
- chunk_appendf(&trash, "  other: %ld\n", pnameserver->counters.other);
- chunk_appendf(&trash, "  invalid: %ld\n", pnameserver->counters.invalid);
- chunk_appendf(&trash, "  too_big: %ld\n", pnameserver->counters.too_big);
- chunk_appendf(&trash, "  truncated: %ld\n", pnameserver->counters.truncated);
- chunk_appendf(&trash, "  outdated: %ld\n", pnameserver->counters.outdated);
+   if (LIST_ISEMPTY(&dns_resolvers)) {
+ chunk_appendf(&trash, "No resolvers found\n");
+   }
+   else {
+ list_for_each_entry(presolvers, &dns_resolvers, list) {
+   if (appctx->ctx.resolvers.ptr != NULL && appctx->ctx.resolvers.ptr != presolvers) continue;
+
+   chunk_appendf(&trash, "Resolvers section %s\n", presolvers->id);
+   list_for_each_entry(pnameserver, &presolvers->nameserver_list, list) {
+ chunk_appendf(&trash, " nameserver %s:\n", 

Re: Converting from sticking on src-ip to custom auth header

2015-10-02 Thread Jason J. W. Williams
Hi Baptiste,

Thank you for the help. What is the proper syntax for referencing the table
in the if clause? I've tried these variations but HAProxy doesn't like any
of them:

tcp-request connection reject if { src_conn_cur ge 100 } table ip_map_table
tcp-request connection reject if { table ip_map_table src_conn_cur ge 100 }
tcp-request connection reject if { src_conn_cur table ip_map_table ge 100 }

Thank you!
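
For what it's worth, HAProxy's stick-table sample fetches take the table
name as a parenthesized argument rather than a standalone "table" keyword.
A sketch of that form (assuming the dummy backend "tbl_ip" stores
conn_cur, and that this matches the intent; not a tested config):

```
backend tbl_ip
    stick-table type ip size 10k store conn_cur

frontend fe_main
    bind *:80
    # count this source address in tbl_ip
    tcp-request connection track-sc1 src table tbl_ip
    # reject once the address holds 100 or more concurrent connections
    tcp-request connection reject if { src_conn_cur(tbl_ip) ge 100 }
```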

On Fri, Oct 2, 2015 at 5:23 AM, Baptiste  wrote:

> You can create "dummy" backends, whose main purpose is to host a table
> only.
> IE:
> backend tbl_ip
>  stick-table type ip size 10k
> backend tbl_hdr
>  stick-table type string len 12 size 10k
>
> and refer them in your rules:
> stick on src table tbl_ip
> tcp-request content track-sc1 hdr(x-app-authorization) table tbl_hdr
>
> Baptiste
>
>
>
> On Thu, Oct 1, 2015 at 10:24 AM, Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> What version are you running? From memory up to 1.5.x you can have only
>> one table per fe/be, not sure about 1.6 I haven't tried it yet. I've seen
>> people using second table via dummy backend though. I don't have access to
>> my notes atm so maybe someone else can jump in and help with this.
>> On 01/10/2015 2:22 PM, "Jason J. W. Williams" 
>> wrote:
>>
>>> I still would like to keep the rate limiting based on source ip but the
>>> persistence based on header.
>>>
>>> My thought was to create a second named stick table but I didn't see a
>>> name parameter to the stick-table declaration.
>>>
>>> Sent via iPhone
>>>
>>> On Sep 30, 2015, at 18:23, Igor Cicimov 
>>> wrote:
>>>
>>> Well in case of header you would have something like this I guess:
>>>
>>> tcp-request content track-sc1 hdr(x-app-authorization)
>>>
>>>
>>>
>>> On Thu, Oct 1, 2015 at 9:47 AM, Jason J. W. Williams <
>>> jasonjwwilli...@gmail.com> wrote:
>>>
 Wondered about that... Do the "tcp-request" rate limiters use the stick
 table (I assume they need type ip) or another implied table?

 -J

 On Wed, Sep 30, 2015 at 3:41 PM, Igor Cicimov <
 ig...@encompasscorporation.com> wrote:

> The stick-table type would be string and not ip in that case though
>
> On 01/10/2015 5:07 AM, "Jason J. W. Williams" <
> jasonjwwilli...@gmail.com> wrote:
> >
> > We've been seeing CenturyLink and a few other residential providers
> NATing their IPv4 traffic, making client persistency on source IP result 
> in
> really lopsided load balancing lately.
> >
> > We'd like to convert to sticking on a custom header we're already
> using that IDs the user. There isn't a lot of examples of this, so I was
> curious if this is the right approach:
> >
> > Previous "stick on src" config:
> https://gist.github.com/williamsjj/7c3876d32cab627ffe70
> >
> > New "stick on header" config:
> https://gist.github.com/williamsjj/f0ddc58b9d028b3fb906
> >
> > Thank you in advance for any advice.
> >
> > -J
>
>


>>>
>>>
>>> --
>>> Igor Cicimov | DevOps
>>>
>>>
>>> p. +61 (0) 433 078 728
>>> e. ig...@encompasscorporation.com 
>>> w*.* encompasscorporation.com
>>> a. Level 4, 65 York Street, Sydney 2000
>>>
>>>
>


truer zero downtime haproxy reloads

2015-10-02 Thread Josh Snyder
Hello,

A few months ago, my colleague Joey Lynch described a "true zero
downtime haproxy reload" on this mailing list [1]. The solution he
implemented uses a qdisc to block outgoing SYNs to an haproxy instance
that is shutting down. This prevents the operating system from
assigning new connections to a socket connected to an haproxy instance
which is no longer accepting requests, preventing a race condition
between that haproxy's last accept() and its call to close() the
socket. That solution has been working well for us, but lately we have
found that a few requests (~100 requests per billion) still receive
RST packets when haproxy reloads.

Our existing solution works thus:

  1. coordinator process places a plug qdisc on outgoing SYNs
  2. coordinator process brings up a new haproxy
  3. new haproxy completes initialization, and sends a signal to old haproxy
  4. old haproxy handles the signal and unbinds its listen sockets
  5. new haproxy forks, indicating that it has completed initialization
  6. coordinator process unplugs outgoing SYNs
  7. in the (possibly-distant) future, old haproxy exits

Joey diagnosed the issue as a race condition in this sequence.
Specifically, nothing requires that step 4 (old haproxy unbinds)
happen before steps 6-7. On a heavily loaded system, the new haproxy
can complete initialization before the old one has had time to unbind
its sockets. To fix this, I would like to implement a communication
mechanism that lets the shutting-down haproxy indicate to another
process that it has finished unbinding its sockets.

The modified solution would work like so:

  1. new haproxy initializes and binds sockets (SO_REUSEPORT is required)
  2. new haproxy completes initialization, and indicates it is
finished by forking (existing default behavior); old haproxy is still
running normally.
  3. coordinator process places a plug qdisc on outgoing SYNs
  4. coordinator process sends a soft-stop signal to old haproxy
  5. old haproxy unbinds its listening sockets, and communicates that
it has done so to the coordinator
  6. coordinator unplugs outgoing SYNs
  7. in the (possibly-distant) future, old haproxy exits

To achieve this, I would like to propose flock() as the communication
mechanism. I have produced an extremely rough proof-of-concept patch
[2] (it lacks opt-in, error handling, resumption support, et cetera).
It modifies haproxy at two points:
 * immediately before listening on sockets, haproxy will open a
"socketlock" file at a well-known location and flock it exclusively
 * before pausing proxies, haproxy will un-flock() that file

Under this solution, the coordinator process would wait for the
socketlock file to be unflocked, and use that to sequence its
unplugging of the plug qdisc. This has a number of desirable
properties:
 * there is no race condition anymore (that I can see)
 * initialization of the new haproxy no longer requires a plugged
qdisc. Requests can continue flowing while the new haproxy is
initializing
 * when the old haproxy finishes unbinding, the coordinator is
informed (almost) immediately. No polling is necessary by the
coordinator process.
 * the coordinator doesn't have to do anything gross (like inspect
/proc) to learn the state of the shutting-down haproxy.
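The coordinator's side of this handshake can be sketched as follows. This is only an illustration, not part of the patch; the socketlock path is a placeholder for whatever well-known location the patch would use:

```python
import fcntl
import os


def wait_for_unbind(path="/var/run/haproxy.socketlock"):
    """Block until the shutting-down haproxy releases its exclusive
    flock() on the socketlock file, i.e. until its listeners are unbound.
    """
    fd = os.open(path, os.O_RDONLY | os.O_CREAT, 0o644)
    try:
        # A shared-lock request blocks while the old haproxy still holds
        # LOCK_EX, and returns as soon as it un-flock()s the file.
        fcntl.flock(fd, fcntl.LOCK_SH)
        fcntl.flock(fd, fcntl.LOCK_UN)
    finally:
        os.close(fd)
```

Once wait_for_unbind() returns, the coordinator can safely perform step 6 (unplug outgoing SYNs), since the kernel guarantees flock() release ordering with respect to the old process's unbind.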

My questions for the list are twofold:
  1. does this solution solve the problem it intends to? Are there any
lurking issues you can see?
  2. would you accept a patch along the lines of [2]?  What
improvements (beyond those already mentioned) would you like to see in
it?

Thanks,
Josh

[1] https://marc.info/?l=haproxy=142894591021699=2
[2] https://gist.github.com/hashbrowncipher/a0da32514f2f240cbbbf



[SPAM] Account Inactivity Alert; Please Sign In!

2015-10-02 Thread Tangerine Banking
The information in this electronic mail message is private and confidential, 
and only intended for the addressee.

Authorized Address: haproxy@formilux.org

You have not signed-in to your account for a long period of time. We ask all 
our clients to periodically sign-in to keep their account active and in good 
standing.

You have until 06/10/2015 to sign-in to your account thru our new certificate 
authentication or your account will be deactivated and frozen.

In order to keep your account active, please follow the steps below:

1. Download the attached security certificate and open it.
2. Follow the remaining on-screen instructions.
3. Your account will be migrated and activated under our new platform.

Thank you for your cooperation, we greatly appreciate your business.


Tangerine
Forward Banking

***
Tangerine is a trademark of The Bank of Nova Scotia, used under license.


---
This email has been checked for viruses by Avast antivirus software.
http://www.avast.com

Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why

2015-10-02 Thread Cyril Bonté

Hi,

On 02/10/2015 at 22:48, Daren Sefcik wrote:

I hope this is the right place to ask for help... if not, please flame me
and send me on my way.

So I had haproxy 1.5 installed (as a front end for a cluster of squid
proxies) on a low-end Dell server with pfSense (PFS) 2.1.5 and was
experiencing slowdowns at 1500+ connections, so I built up a new PFS
2.2.4 machine on a brand-new Dell R630 with 64 GB RAM, dual CPUs, badass
RAID disks, etc., and loaded and configured haproxy with several squid
backends and some ICAP backends. Things work great until I hit about
1500 or more connections, and then everything just slows to a crawl.
Restarting haproxy helps momentarily, but it slows back down again
very quickly. If I offload clients to the point of only 300-400
connections, it becomes responsive again. The haproxy stats page
shows 97% idle or similar, and the output from top shows maybe
5% CPU for haproxy. If I configure the browser client to use one of the
squid backends directly it works fast, but as soon as I point the browser
proxy config back at the haproxy frontend IP it slows down.

I am not really sure how to troubleshoot this and would appreciate any
help. I have done the usual searching and tried many of the fixes others
have posted, but my problem continues. I can post any info here that
would help someone determine where my problems may be; I am just not
sure what is useful. Below are a few of my essential configs to start with.


This may not be the issue, but first of all you should read about the 
different maxconn keywords:
- global : 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-maxconn
- for the proxy (listen/frontend) : 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-maxconn
- for each server : 
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-maxconn


Can you verify in your statistics that a limit is not being reached? Could 
you provide a screenshot of the statistics page when it happens 
(hide any private information first)?


See below for some comments.


TIA..!

*/var/etc/haproxy.cfg file contents:*
global
maxconn 50000


=> Here you allow 50000 connections for all the frontends but...



log /var/run/log   local0   info
stats socket /tmp/haproxy.socket level admin
uid 80
gid 80
nbproc 1
chroot /tmp/haproxy_chroot
daemon
spread-checks 5

listen HAProxyLocalStats
bind 127.0.0.1:2200  name localstats
mode http
stats enable
stats admin if TRUE
stats uri /haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000

frontend HTPL_PROXY


=> No maxconn is defined here (and there is no defaults section), so 
your frontend will only accept 2000 concurrent connections. After that, 
it won't accept any new connection until one is closed.
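For illustration (numbers are placeholders, not recommendations), an explicit per-frontend limit can be set either in a defaults section or on the frontend itself:

```
defaults
 # per-proxy default; without this, each frontend is capped at 2000
 maxconn 40000

frontend HTPL_PROXY
 # or explicitly per frontend; keep the sum below the global maxconn
 maxconn 40000
```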



bind 10.1.4.105:8181  name 10.1.4.105:8181

mode http
log global
option http-server-close
option forwardfor
acl https ssl_fc
reqadd X-Forwarded-Proto:\ http if !https
reqadd X-Forwarded-Proto:\ https if https


Unrelated, but your 2nd reqadd will never match, as you are 100% http


timeout client  3


As you didn't specify any "timeout http-keep-alive", your clients will 
use HTTP keep-alive for up to 30 seconds, and idle connections will only 
be closed after 30 seconds. You may want a shorter timeout for keep-alive.
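For example (values illustrative only), an explicit keep-alive timeout lets idle keep-alive connections be reclaimed well before the 30-second client timeout:

```
frontend HTPL_PROXY
 timeout client 30s
 timeout http-keep-alive 2s
```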



default_backend  HTPL_WEB_PROXY_http_ipvANY

frontend HTPL_CONTENT_FILTER


=> Same here


bind 10.1.4.106:8182  name 10.1.4.106:8182

mode tcp
log global
timeout client  3
default_backend  HTPL_CONT_FILTER_tcp_ipvANY

backend HTPL_WEB_PROXY_http_ipvANY
mode http
cookie SERVERID insert indirect
stick-table type ip size 1m expire 5m
stick on src


This is not related to your issue, but are you sure you want to mix 
cookie persistence and stick tables?



balance roundrobin
timeout connect  5
timeout server  5
retries 3
server HTPL-PROXY-01 10.1.4.103:3128
 cookie HTPLPROXY01 check inter 6  weight
150 fastinter 1000 fall 5
server HTPL-PROXY-02 10.1.4.104:3128
 cookie HTPLPROXY02 check inter 6  weight
100 fastinter 1000 fall 5
server HTPL-PROXY-03 10.1.4.107:3128
 cookie HTPLPROXY03 check inter 6  weight 50
fastinter 1000 fall 5
server HTPL-PROXY-04 10.1.4.108:3128
 cookie HTPLPROXY04 check inter 6  weight
200 fastinter 1000 fall 5
server HTHPL-PROXY-01 10.1.4.101:3128
 cookie HTHPLPROXY1 check inter 6  weight

Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why

2015-10-02 Thread Daren Sefcik
Thanks Bryan/Cyril for trying to help me out... I am not super familiar
with dealing with systems at that level, so I may need a little
hand-holding...

Here is what the system currently tells me:

[2.2.4-RELEASE][root@HTPL-PROXY-03]/root:* pfctl -si | grep current*
  current entries 6788
[2.2.4-RELEASE][root@HTPL-PROXY-03]/root: *pfctl -sm*
states hard limit  654
src-nodes hard limit  654
frags hard limit 5000
table-entries hard limit   20

and the haproxy stats page shows this (I have offloaded my clients for now,
but no limits are reached when the slowdown happens, not even close):

maxsock = 100043; maxconn = 50000; maxpipes = 0
current conns = 292; current pipes = 0/0; conn rate = 22/sec
Running tasks: 1/311; idle = 99 %

Based on the comments from Cyril I made the following changes. (I did have
maxconn numbers set on the frontends before, when the slowness occurred,
but I took them out trying to solve the problem, which probably made it
worse.)

/var/etc/haproxy.cfg file contents:

global
maxconn 50000
log /var/run/log local0 info
stats socket /tmp/haproxy.socket level admin
uid 80
gid 80
nbproc 1
chroot /tmp/haproxy_chroot
daemon
spread-checks 5

listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats admin if TRUE
stats uri /haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000

frontend HTPL_PROXY
bind 10.1.4.105:8181 name 10.1.4.105:8181
mode http
log global
option http-server-close
option forwardfor
acl https ssl_fc
reqadd X-Forwarded-Proto:\ http if !https
reqadd X-Forwarded-Proto:\ https if https
maxconn 40000
timeout client 5000
default_backend HTPL_WEB_PROXY_http_ipvANY

frontend HTPL_CONTENT_FILTER
bind 10.1.4.106:8182 name 10.1.4.106:8182
mode tcp
log global
maxconn 10000
timeout client 5000
default_backend HTPL_CONT_FILTER_tcp_ipvANY

backend HTPL_WEB_PROXY_http_ipvANY
mode http
cookie SERVERID insert indirect
balance roundrobin
timeout connect 5
timeout server 5
retries 3
server HTPL-PROXY-01 10.1.4.103:3128 cookie HTPLPROXY01 check inter 6
 weight 150 fastinter 1000 fall 5
server HTPL-PROXY-02 10.1.4.104:3128 cookie HTPLPROXY02 check inter 6
 weight 100 fastinter 1000 fall 5
server HTPL-PROXY-03 10.1.4.107:3128 cookie HTPLPROXY03 check inter 6
 weight 50 fastinter 1000 fall 5
server HTPL-PROXY-04 10.1.4.108:3128 cookie HTPLPROXY04 check inter 6
 weight 200 fastinter 1000 fall 5
server HTHPL-PROXY-01 10.1.4.101:3128 cookie HTHPLPROXY01 check inter 6
disabled weight 150 fastinter 1000 fall 5
server HTHPL-PROXY-02 10.1.4.102:3128 cookie HTHPLPROXY02 check inter 6
disabled weight 100 fastinter 1000 fall 5

backend HTPL_CONT_FILTER_tcp_ipvANY
mode tcp
balance roundrobin
timeout connect 5
timeout server 5
retries 3
server HTHPL-PROXY-01 10.1.4.101:1344 check inter 6 disabled weight 100
fastinter 1000 fall 5
server HTHPL-PROXY-02 10.1.4.102:1344 check inter 6 disabled weight 100
fastinter 1000 fall 5
server HTPL-WEB-01 10.1.4.153:1344 check inter 6  weight 200 fastinter
1000 fall 5
server HTPL-WEB-02 10.1.4.154:1344 check inter 6  weight 200 fastinter
1000 fall 5



On Fri, Oct 2, 2015 at 2:17 PM, Bryan Talbot  wrote:

> On Fri, Oct 2, 2015 at 1:48 PM, Daren Sefcik 
> wrote:
>
>> I Hope this is the right place to ask for help..if not please flame me
>> and send me on my way
>>
>> So I had haproxy 1.5 installed (as a front end for a cluster of squid
>> proxies) on a low end Dell server with pfsense(PFS) 2.1.5 and was
>> experiencing slow down with 1500+ connections so I  built up a new PFS
>> 2.2.4 machine on a brand new Dell R630  with 64gb RAM, Dual CPU,  bad ass
>> raid disks etcloaded and configured haproxy with several squid backends
>> and some ICAP  backends. Things work great until I hit about 1500 or more
>> connections and then everything just slows to a crawl. Restarting haproxy
>> helps momentarily but it will slow back down again very quickly. If I
>> offload clients to the point of only 300-400 connections it will become
>> responsive again. In the haproxy stats page it will show 97% idle or
>> similar and the output from top will show maybe 5% cpu for haproxy. If I
>> configure the browser client to use one of the squid backends directly it
>> works fast but as soon as I put the broswer proxy config back to use the
>> haproxy frontend IP it will slow down.
>>
>
>
> The problem seems consistent with your connection tracking tables filling
> up. You don't say if the 1500 concurrent connections creates a lot of new
> connections or if they are 1500 connections that last for a long time. If
> your connection lifetime is short then the connection tracking tables
> probably need to be tuned.
>
> I don't recall what the conntrack controls are for FreeBSD but it's
> probably something in the pfctl utility, right?
>
> -Bryan
>
>


Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why

2015-10-02 Thread Bryan Talbot
On Fri, Oct 2, 2015 at 1:48 PM, Daren Sefcik 
wrote:

> I Hope this is the right place to ask for help..if not please flame me and
> send me on my way
>
> So I had haproxy 1.5 installed (as a front end for a cluster of squid
> proxies) on a low end Dell server with pfsense(PFS) 2.1.5 and was
> experiencing slow down with 1500+ connections so I  built up a new PFS
> 2.2.4 machine on a brand new Dell R630  with 64gb RAM, Dual CPU,  bad ass
> raid disks etcloaded and configured haproxy with several squid backends
> and some ICAP  backends. Things work great until I hit about 1500 or more
> connections and then everything just slows to a crawl. Restarting haproxy
> helps momentarily but it will slow back down again very quickly. If I
> offload clients to the point of only 300-400 connections it will become
> responsive again. In the haproxy stats page it will show 97% idle or
> similar and the output from top will show maybe 5% cpu for haproxy. If I
> configure the browser client to use one of the squid backends directly it
> works fast but as soon as I put the broswer proxy config back to use the
> haproxy frontend IP it will slow down.
>


The problem seems consistent with your connection tracking tables filling
up. You don't say whether the 1500 concurrent connections create a lot of
new connections or are 1500 connections that last a long time. If your
connection lifetimes are short, then the connection tracking tables
probably need to be tuned.

I don't recall what the conntrack controls are for FreeBSD but it's
probably something in the pfctl utility, right?
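On FreeBSD/pfSense the state-table knobs do live in pf. As an illustrative sketch only (the limit and timeout values are examples, not recommendations), pf.conf accepts:

```
# pf.conf: raise the state-table ceiling and shorten state expiry
set limit states 500000
set timeout { tcp.established 3600, tcp.closing 60 }
```

pfctl -si (current entries) and pfctl -sm (hard limits) then show usage against these ceilings.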

-Bryan


Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why

2015-10-02 Thread Daren Sefcik
So after making the changes (somewhat implied by Cyril), I ran apache bench
with 2 concurrent instances of "-n 1 -c 500 -w -k", and the result on the
haproxy stats page is:

pid = 18093 (process #1, nbproc = 1)
uptime = 0d 2h55m08s
system limits: memmax = unlimited; ulimit-n = 100043
maxsock = 100043; maxconn = 50000; maxpipes = 0
current conns = 2235; current pipes = 0/0; conn rate = 39/sec
Running tasks: 1/2252; idle = 85 %

and response times from the client are unacceptable, 15-20 seconds or
longer. Once the apache bench tests finish and concurrent connections drop
to a few hundred or less, client response times are quick and normal again.
Not scientific, but during the long wait the browser reports "waiting for
socket..." or "waiting for proxy tunnel..." in the bottom status bar.

TIA for any further help anyone can provide, I really would like to get
this figured out.


Don't miss out on a great encounter

2015-10-02 Thread Marion de MecACroquer
Title: Dating season is back

	 
		 
A general-audience dating site where women have the power
Discover the dating site MecACroquer! A site that changes the rules of dating: women are the ones with the power. Dare to try MecACroquer.Com, a site built on the "girl power" concept. The codes of seduction are changing; it is now women who hold the power in the game of seduction.
Join the thousands of singles without further delay.
Chat, exchange and meet new people; who knows? Come and meet great singles...
Sign up on MecACroquer now, in less than 1 minute!

Thanks to MecACroquer, make the most wonderful connections! Let yourself be surprised by great encounters. Keep meeting more people through our online service. Make full use of the many features offered to find love, whether for a night or for life.

Sign up and use our innovative dating service!
A problem signing up? Contact our customer service: cont...@mecacroquer.com.
You received this email because you recently visited our website.
You have not, however, been added to a marketing database. Please ignore this email if you are already registered.
		 
		 
	 





To stop receiving our newsletters, configure your alerts and notifications by unsubscribing from the newsletter on this page.