Re: another round for configuration.txt => html

2011-11-03 Thread Aleksandar Lazic

Hi,

On 03.11.2011 06:40, Baptiste wrote:

because writing the tool to do it is more fun and easier to maintain
than re-parsing the whole doc after each patch.
:)


I'm not sure I understand you correctly.

What I understand is that you are writing a tool which performs an
ascii-to-markdown translation, right?

Afterwards it is possible to use Markdown to create other
output formats, e.g. PDF, HTML, ...

There is no change to the configuration.txt file for now, right?

What I don't understand, apart from the programming fun ;-), is why
reinvent the wheel instead of using the Markdown format directly in the
doc file?

BR
Aleks

On Thu, Nov 3, 2011 at 6:23 AM, carlo flores  
wrote:

Just curious: why not rewrite the docs in markdown?

Would a rewrite that formilux could just adopt be welcome?

On Wednesday, November 2, 2011, Baptiste  wrote:

Hi Aleks,

It's a good and interesting start.
I already talked to Willy about the doc format and, unfortunately for
you, the way you're doing it is not the one he wants.

As you have noticed, the doc format is quite "open": each documentation
contributor tries to maintain the format, but there is no strict
verification of the shape (only of the content).
What Willy wants is not a translation of the doc into a new format that
would force devs to follow strong recommendations, otherwise the
integrity of the whole doc would be broken.
He considers that if the documentation is readable to a human eye, it
should also be readable to an automatic tool, which could then translate
it into a nicer format.

The purpose is twofold:
1. don't bother the devs when they have to write documentation
2. have a nice, readable documentation

So basically, a lot of people are interested in a nicer version of the
doc. I have already started working on the subject and I might push
something to my GitHub very soon: a bash/sed/awk tool to translate the
HAProxy documentation into Markdown format (could be HTML as well).
Contributions will be welcome :)
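A minimal sketch of what such a sed-based translation could look like (the regexes, heading levels, and helper name are assumptions, not the actual tool):

```shell
# md: read the plain-text doc on stdin and emit Markdown on stdout.
# Only numbered section titles are handled here; the real HAProxy doc
# would need more rules (keyword blocks, tables, examples, ...).
md() {
  sed -E \
    -e 's/^([0-9]+)\. (.*)$/# \1. \2/' \
    -e 's/^([0-9]+\.[0-9]+)\. (.*)$/## \1. \2/'
}
# e.g.: md < configuration.txt > configuration.md
```

From Markdown, a converter such as pandoc can then produce HTML or PDF, which is the second step discussed above.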

cheers

On Thu, Nov 3, 2011 at 12:57 AM, Aleksandar Lazic  wrote:

Hi all,

I have now started to change configuration.txt so that
asciidoc can produce nice HTML output.

asciidoc -b html5 -o haproxy-conf.html configuration.txt

http://www.none.at/haproxy-conf.html

I have stopped at section 2.3 to get your feedback.

As you can see in the diff, there is not too much to change
yet.

http://www.none.at/haproxy-conf.diff

Thank you for your feedback

Aleks




Re: haproxy and multi location failover

2011-11-03 Thread joris dedieu
2011/11/1 Senthil Naidu :
> hi,
>
> we need to have a setup as follows
>
>
>
> site 1 site 2
>
>   LB  (ip 1)   LB (ip 2)
>    |   |
>    |   |
>  srv1  srv2  srv1 srv2
>
> site 1 is primary and site 2 is backup in case of site 1  LB's failure or
> failure of all the servers in site1 the website should work from backup
> location servers.

Unless you have your own routing, if you want no downtime for anybody
you have to imagine a more complex scenario. As said below, the only
way to switch from one datacenter to another is to use DNS.

So you have to find a solution to cover the time until DNS propagation
is complete.

I'd do something like:
1) if lb1 fails
- change DNS
- srv1-1 becomes an LB for itself and srv2-1

2) if srv1-1 and srv2-1 fail
- change DNS
- lb1 forwards requests to lb2 (maybe slow, but better than nothing)

and so on ...

Joris
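For the backend side, the in-config fallback Baptiste suggests further down in this thread can be sketched roughly like this (names and addresses are made up; the `nbsrv` ACL reports how many servers in a backend are still up):

```
frontend public
    bind :80
    # Route to the remote site only when every server in site1 is down.
    acl site1_down nbsrv(site1) lt 1
    use_backend site2 if site1_down
    default_backend site1

backend site1
    option httpchk
    server srv1 192.0.2.11:80 check
    server srv2 192.0.2.12:80 check

backend site2
    option httpchk
    server srv1 198.51.100.11:80 check
    server srv2 198.51.100.12:80 check
```

Note this only helps while the site1 load balancer itself is reachable; losing the LB still requires the DNS switch described above.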
>
> Regards
>
> On Tue, Nov 1, 2011 at 10:31 PM, Gene J  wrote:
>>
>> Please provide more detail about what you are hosting and what you want to
>> achieve with multiple sites.
>>
>> -Eugene
>>
>> On Nov 1, 2011, at 9:58, Senthil Naidu  wrote:
>>
>> Hi,
>>
>> thanks for the reply,  if the same needs to be done with dns do we need
>> any external dns services our we can use our own ns1 and ns2 for the same.
>>
>> Regards
>>
>>
>> On Tue, Nov 1, 2011 at 9:06 PM, Baptiste  wrote:
>>>
>>> Hi,
>>>
>>> Do you want to failover the Frontend or the Backend?
>>> If this is the frontend, you can do it through DNS or RHI (but you
>>> need your own AS).
>>> If this is the backend, you have nothing to do: adding your servers in
>>> the conf in a separated backend, using some ACL to take failover
>>> decision and you're done.
>>>
>>> cheers
>>>
>>>
>>> On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu 
>>> wrote:
>>> > Hi,
>>> >
>>> > Is it possible to use haproxy in a active/passive failover scenario
>>> > between
>>> > multiple datacenters.
>>> >
>>> > Regards
>>> >
>>> >
>>> >
>>> >
>>
>
>






Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
Hi Benoit,

On Thursday, November 3, 2011 at 14:46:10, Benoit GEORGELIN wrote:
> Hi!
> 
> My name is Benoît and I'm in an associative project that provides web hosting.
> We are using HAProxy and we have a lot of problems with 502 errors :(
> 
> 
> So, I would like to know how to really debug this and find solutions :)
> There are some cases in the mailing-list archives, but I would appreciate it
> if someone could walk me through a real case on our infrastructure.

My first observations, if it can help someone target the issue:
in your servers' responses there is no Content-Length header, and this
can cause some trouble.

502 errors occur when asking for compressed data:
- curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
HTTP/1.0 502 Bad Gateway
- curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length
header (and what happens with compression, which should be sent in chunks).
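One way to check what framing a backend actually sends is to save the response headers with curl and classify them; a minimal sketch (the helper name is made up):

```shell
# check_framing: read an HTTP response-header dump on stdin and report
# which message-framing header the server sent, if any.
# Usage sketch: curl -sD - -o /dev/null http://192.168.0.1/ | check_framing
check_framing() {
  awk '{ l = tolower($0)
         if (l ~ /^content-length:/) cl = 1
         if (l ~ /^transfer-encoding:.*chunked/) te = 1 }
       END { if (cl) print "content-length"
             else if (te) print "chunked"
             else print "none" }'
}
```

A response classified as "none" here is exactly the kind that ends up truncated, or turned into a 502, once a proxy sits in front of it.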

> Details:
> 
> 
> Haproxy Stable 1.4.18
> OS: Debian Lenny
> 
> Configuration File:
> 
> 
> ##
> 
> global
> 
> 
> log 127.0.0.1 local0 notice #debug
> maxconn 2 # count about 1 GB per 2 connections
> ulimit-n 40046
> 
> 
> tune.bufsize 65536 # Necessary for lot of CMS page like Prestashop :(
> tune.maxrewrite 1024
> 
> 
> #chroot /usr/share/haproxy
> user haproxy
> group haproxy
> daemon
> #nbproc 4
> #debug
> #quiet
> 
> 
> defaults
> log global
> mode http
> retries 3 # 2 -> 3 on 2011-10-06 #
> maxconn 19500 # Should be slightly smaller than global.maxconn.
> 
> 
>  OPTIONS ##
> option dontlognull
> option abortonclose
> #option redispatch # Disabled 2011-10-06 because balancing is by source, not RR
> # option tcpka
> #option log-separate-errors
> #option logasap
> 
> 
>  TIMEOUT ##
> timeout client 30s #1m 40s Client and server timeout must match the longest
> timeout server 30s #1m 40s time we may wait for a response from the server.
> timeout queue 30s #1m 40s Don't queue requests too long if saturated.
> timeout connect 5s #10s 5s There's no reason to change this one.
> timeout http-request 5s #10s 5s A complete request may never take that long
> timeout http-keep-alive 10s
> timeout check 10s #10s
> 
> ###
> # F R O N T E N D P U B L I C B E G I N
> #
> frontend public
> bind 123.456.789.123:80
> default_backend webserver
> 
> 
>  OPTIONS ##
> option dontlognull
> #option httpclose
> option httplog
> option http-server-close
> # option dontlog-normal
> 
> 
> # URL handling # All commented out on 2011-10-21
> # log the name of the virtual server
> capture request header Host len 60
> 
> 
> 
> 
> #
> # F R O N T E N D P U B L I C E N D
> ###
> 
> ###
> # B A C K E N D W E B S E R V E R B E G I N
> #
> backend webserver
> balance source # Re-enabled 2011-10-06 #
> #balance roundrobin # Disabled 2011-10-06 #
> 
> 
>  OPTIONS ##
> option httpchk
> option httplog
> option forwardfor
> #option httpclose # Disabled 2011-10-06 #
> option http-server-close
> option http-pretend-keepalive
> 
> 
> retries 5
> cookie SERVERID insert indirect
> 
> 
> # Detect an ApacheKiller-like Attack
> acl killerapache hdr_cnt(Range) gt 10
> # Clean up the request
> reqidel ^Range if killerapache
> 
> 
> 
> server http-A 192.168.0.1:80 cookie http-A check inter 5000
> server http-B 192.168.1.1:80 cookie http-B check inter 5000
> server http-C 192.168.2.1:80 cookie http-C check inter 5000
> server http-D 192.168.3.1:80 cookie http-D check inter 5000
> server http-E 192.168.4.1:80 cookie http-E check inter 5000
> 
> 
> # Every header should end with a colon followed by one space.
> reqideny ^[^:\ ]*[\ ]*$
> 
> 
> # block Apache chunk exploit
> reqideny ^Transfer-Encoding:[\ ]*chunked
> reqideny ^Host:\ apache-
> 
> 
> # block annoying worms that fill the logs...
> reqideny ^[^:\ ]*\ .*(\.|%2e)(\.|%2e)(%2f|%5c|/|  )
> reqideny ^[^:\ ]*\ ([^\ ]*\ [^\ ]*\ |.*%00)
> reqideny ^[^:\ ]*\ .*

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Thanks, Cyril, for these pointers.

Here the modules available on apache2:



actions alias auth_basic auth_mysql auth_pam authn_file authz_default 
authz_groupfile authz_host authz_user autoindex cache cgi deflate dir env 
expires headers include mime mod-evasive negotiation php5 python rewrite rpaf 
setenvif ssl status

Maybe one of them is causing trouble... I will look into the Content-Length header.

Regards,

Benoît Georgelin
Web 4 all (non-profit web hosting)
+33 977 218 005
+1 514 463 7255
benoit.georgelin@web 4 all.fr

To help protect the environment, please print this e-mail only if
necessary.

- Original Message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

[quoted message trimmed; see Cyril's message above]

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Hmm, very interesting: I disabled mod_deflate and now it's working like a charm :(
Do you know why?


Regards,

Benoît Georgelin

- Original Message -

From: "Cyril Bonté" 
To: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

[quoted message trimmed]

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
It's working better, but now I get some blank pages.


Regards,



- Original Message -

From: "Benoit GEORGELIN (web4all)" 
To: "Cyril Bonté" 
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011 10:47:57
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

[quoted messages trimmed]

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
On Thursday, November 3, 2011 at 15:53:50, Benoit GEORGELIN wrote:
> It's working better, but now i have some blanks pages.

Yes, responses are still truncated most of the time.

> [quoted messages trimmed]

RE: haproxy and multi location failover

2011-11-03 Thread David Prothero
We use www.dnsmadeeasy.com (unsolicited plug) to do the automatic DNS failover
that Joris describes. It works well for us.

My colleague and I theorized another option would be to run your HAProxy 
instances as Amazon EC2 instances (one each in different availability zones) 
with an elastic IP. That way you'd be taking advantage of Amazon's routing 
network without having to build your own. Like I said, that's only been 
theorized. I haven't actually done that.

---
David Prothero
I.T. Director
Pharmacist's Letter / Prescriber's Letter
Natural Medicines Comprehensive Database
Ident-A-Drug / www.therapeuticresearch.com

(209) 472-2240 x231
(209) 472-2249 (fax)


-Original Message-
From: joris dedieu [mailto:joris.ded...@gmail.com] 
Sent: Thursday, November 03, 2011 1:19 AM
To: haproxy@formilux.org
Subject: Re: haproxy and multi location failover

[quoted thread trimmed; see the messages above]






Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Benoit GEORGELIN (web4all)
Can you give me more details about your analysis? (examples)
I will try to understand better what's happening.


Is it the response that is incomplete, or only the header?


Thanks


Regards,

Benoît Georgelin



- Mail original -

De: "Cyril Bonté" 
À: "Benoit GEORGELIN (web4all)" 
Cc: haproxy@formilux.org
Envoyé: Jeudi 3 Novembre 2011 10:54:46
Objet: Re: Haproxy 502 errors, all the time on specific sites or backend

Le Jeudi 3 Novembre 2011 15:53:50 Benoit GEORGELIN a écrit :
> It's working better, but now i have some blanks pages.

Yes, responses are still truncated most of the time.

>
> Cordialement,
>
>
> Afin de contribuer au respect de l'environnement, merci de n'imprimer ce 
> mail qu'en cas de nécessité
>
> - Mail original -
>
> De: "Benoit GEORGELIN (web4all)" 
> À: "Cyril Bonté" 
> Cc: haproxy@formilux.org
> Envoyé: Jeudi 3 Novembre 2011 10:47:57
> Objet: Re: Haproxy 502 errors, all the time on specific sites or backend 
>
>
> Humm very interesting, a disabled mod_deflate on now it's working like a 
> charm :( Do you know why?
>
>
> Cordialement,
>
> Benoît Georgelin
>
> - Mail original -
>
> De: "Cyril Bonté" 
> À: "Benoit GEORGELIN (web4all)" 
> Cc: haproxy@formilux.org
> Envoyé: Jeudi 3 Novembre 2011 10:32:06
> Objet: Re: Haproxy 502 errors, all the time on specific sites or backend 
>
> Hi Benoit,
>
> Le Jeudi 3 Novembre 2011 14:46:10 Benoit GEORGELIN a écrit :
> > Hi !
> >
> > My name is Benoît and i'm in a associative project who provide web
> > hosting. We are using Haproxy and we have a lot of problems with 502
> > errors :(
> >
> >
> > So, i would like to know how to really debug this and find solutions :)
> > There is some cases on mailling list archives but i will appreciate if 
> > someone can drive me with a real case on our infrastructure.
>
> My first observations, if it can help someone to target the issue:
> in your servers' responses there is no Content-Length header, which can
> cause some trouble.
>
> 502 errors occur when asking for compressed data:
> - curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
> HTTP/1.0 502 Bad Gateway
> - curl -si http://sandka.org/portfolio/
> => results in a truncated page without a Content-Length header
>
> We'll have to find out why your backends don't provide a Content-Length
> header (and what happens with compression, which should be sent in chunks).
> > Details:
> >
> >
> > Haproxy Stable 1.4.18
> > OS: Debian Lenny
> >
> > Configuration File:
> >
> >
> > ## 
> >
> > global
> >
> >
> > log 127.0.0.1 local0 notice #debug
> > maxconn 2 # count about 1 GB per 2 connections
> > ulimit-n 40046
> >
> >
> > tune.bufsize 65536 # Necessary for lot of CMS page like Prestashop :(
> > tune.maxrewrite 1024
> >
> >
> > #chroot /usr/share/haproxy
> > user haproxy
> > group haproxy
> > daemon
> > #nbproc 4
> > #debug
> > #quiet
> >
> >
> > defaults
> > log global
> > mode http
> > retries 3 # 2 -> 3 on 06102011 #
> > maxconn 19500 # Should be slightly smaller than global.maxconn.
> >
> >
> >  OPTIONS ##
> > option dontlognull
> > option abortonclose
> > #option redispatch # Disabled on 06102011: balance is in source mode,
> > not round-robin # option tcpka
> > #option log-separate-errors
> > #option logasap
> >
> >
> >  TIMEOUT ##
> > timeout client 30s # Client and server timeout must match the longest
> > timeout server 30s # time we may wait for a response from the server.
> > timeout queue 30s # Don't queue requests too long if saturated.
> > timeout connect 5s # There's no reason to change this one.
> > timeout http-request 5s # A complete request may never take that long.
> > timeout http-keep-alive 10s
> > timeout check 10s
> >
> > ###
> > # F R O N T E N D P U B L I C B E G I N
> > #
> > frontend public
> > bind 123.456.789.123:80
> > default_backend webserver
> >
> >
> >  OPTIONS ##
> > option dontlognull
> > #option httpclose
> > option httplog
> > option http-server-close
> > # option dontlog-normal
> >
> >
> > # URL handling # All commented out on 21/10/2011
> > # log the name of the virtual server
> > capture request header Host len 60
> >
> >
> >
> >
> > #
> > # F R O N T E N D P U B L I C E N D
> > ###
> >
> > ###
> > # B A C K E N D W E B S E R V E R B E G I N
> > #
> > backend webserver
 
> > balance source # Re-enabled on 06102011 #
> > #balance roundrobin # Disabled on 06102011 #
> >
> >
> >  OPTIONS ##
> > option httpchk
> > option httplog
> > option forwardfor
> > #option httpclose # Disabled on 06102011 #
> > option http-server-clo

Re: Haproxy 502 errors, all the time on specific sites or backend

2011-11-03 Thread Cyril Bonté
On Thursday 3 November 2011 17:34:38, Benoit GEORGELIN wrote:
> Can you give me more details about your analysis? (examples)
> I will try to understand better what is happening.
> 
> 
> Is it the response body that is incomplete, or only the headers?

The body is not complete. I tried with the examples I provided in my first 
mail.

Examples:
curl -si "http://sandka.org/portfolio/" => HTTP/1.0 200 OK with html cut in 
the middle.
curl -si "http://sandka.org/portfolio/foobar" => HTTP/1.0 404 Not Found with 
html cut in the middle.

There's something bad in ZenPhoto: it forces the response to HTTP/1.0, which 
prevents chunked transfers. That can also explain why mod_deflate generated 502 
errors.

One thing you can try:
Edit the file index.php in ZenPhoto and replace the "HTTP/1.0" occurrences (one 
for the 200, one for the 404) with "HTTP/1.1". Hopefully this will allow 
apache+php to use chunked responses and solve the problem.
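The suggested edit can be scripted. The sketch below is illustrative only: the
sample file name and the exact header() lines are assumptions, not taken from
ZenPhoto's actual source, so inspect your real index.php before applying this.

```shell
# Sketch only: demonstrate the HTTP/1.0 -> HTTP/1.1 replacement on a sample
# file standing in for ZenPhoto's index.php (contents are assumed).
printf 'header("HTTP/1.0 200 OK");\nheader("HTTP/1.0 404 Not Found");\n' > index.php.sample

# Rewrite in place, keeping a .bak backup of the original.
sed -i.bak 's|HTTP/1\.0|HTTP/1.1|g' index.php.sample

# Show the rewritten status lines.
grep 'HTTP/1\.1' index.php.sample
```

On the real file you would run the same sed against index.php and then re-test
with the curl commands above to see whether the response is chunked.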

-- 
Cyril Bonté



cannot bind socket Multiple backends tcp mode

2011-11-03 Thread Saul
Hello List,

I hope someone can shed some light on the following situation:

Setup:
HAProxy frontend proxy and Apache SSL backends. I didn't want to use
haproxy+stunnel or Apache mod_ssl, so I use straight TCP mode and
redirects; it works fine with one backend. The only problem is that when
I try to add a second backend for a different farm of servers I get the
following:

"Starting frontend https-services-in: cannot bind socket"

My understanding was that multiple backends could use the same
interface. Perhaps I was wrong; if that is the case, any suggestions
on how to have multiple backends running in TCP mode on port
443, so I can match the URL and redirect to the appropriate backend
from my HAProxy?

Thank you very much in advance.

Relevant configuration:

##--
##  HTTP FRONTEND
## 
frontend www
mode http
bind :80

redirect prefix https://secure.mydomain.com if { hdr_dom(Host) -i
secure.mydomain.com }
redirect prefix https://services.mydomain.com if { hdr_dom(Host) -i
services.mydomain.com }

backend www
mode http
balance leastconn
stats enable
option httpclose
option forwardfor
option httpchk HEAD /ha.txt HTTP/1.0

server nginx_1 10.10.1.1:80 weight 100 check

##--
##  HTTPS FRONTEND
## 


frontend https-in
mode tcp
bind :443
default_backend https-secure-portal

##--
##  HEADER ACL'S
## 

acl secure1 hdr_dom(Host) -i secure.mydomain.com
use_backend https-secure-portal if secure1

backend https-secure-portal
mode tcp
balance leastconn
option ssl-hello-chk

server ssl_1 10.10.1.1:443 weight 100 check

##--
##  SERVICES FRONTEND
## 

frontend https-services-in
mode tcp
bind :443
default_backend https-services

acl services1 hdr_dom(Host) -i services.mydomain.com
use_backend https-services if services1

backend https-services
mode tcp
balance leastconn
option ssl-hello-chk
#option httpclose
#option forwardfor

server nginx2_ssl 10.10.1.110:443 weight 100 check



Re: cannot bind socket Multiple backends tcp mode

2011-11-03 Thread Graeme Donaldson
On 3 November 2011 21:34, Saul  wrote:
> My understanding was that multiple backends could use the same
> interface, perhaps I was wrong, if that is the case, any suggestions
> on how to be able to have multiple backends running tcp mode on port
> 443 so I can match the url and redirect to the appropriate backend
> from my HAproxy?

You can have multiple backends with a single frontend, and define
various criteria to decide which backend to use for each incoming
request.

Having said that, there are problems in your configuration. Firstly,
you are defining 2 frontends listening on the same port ("bind :443"
twice), which is causing the "cannot bind socket" message. Second, you
are attempting to use hdr_dom to match an HTTP Host: header, which you
cannot do when HAProxy is handling SSL traffic in TCP mode: HAProxy
cannot read the HTTP request, because it's encrypted. In order to use
hdr_dom and similar criteria, you must be using plain HTTP, which
requires you to use stunnel, nginx or something else in front of
HAProxy to handle the SSL and make a plain HTTP connection to HAProxy.

Hope this helps,
Graeme.



Re: cannot bind socket Multiple backends tcp mode

2011-11-03 Thread Baptiste
That's normal: port 443 is already bound by your first frontend, so when
HAProxy tries to bind it for the second frontend, it can't.

The only solution, in the current case, is to have one frontend per IP.
Furthermore, your ACL won't work, since you're in TCP mode and the
traffic is encrypted.

Cheers
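A minimal sketch of the one-frontend-per-IP layout described above (the
192.0.2.x addresses are placeholders; each hostname would have to resolve to
its own IP):

```
# Sketch only: two TCP frontends can coexist on port 443 if each binds
# its own IP address. Addresses below are placeholders.
frontend https-in
mode tcp
bind 192.0.2.10:443              # IP that secure.mydomain.com resolves to
default_backend https-secure-portal

frontend https-services-in
mode tcp
bind 192.0.2.11:443              # IP that services.mydomain.com resolves to
default_backend https-services
```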






Help with SSL

2011-11-03 Thread Christophe Rahier
Hello,

 My config of HAProxy is:

--> CUT <--
global
log 192.168.0.2 local0
log 127.0.0.1 local1 notice
maxconn 10240
defaults
log global
option dontlognull
retries 2
timeout client 35s
timeout server 90s
timeout connect 5s
timeout http-keep-alive 10s

listen WebPlayer-Farm 192.168.0.2:80
mode http
option httplog
balance source
#balance leastconn
option forwardfor
stats enable
option http-server-close
server Player4 192.168.0.13:80 check
server Player3 192.168.0.12:80 check
server Player1 192.168.0.10:80 check
server Player2 192.168.0.11:80 check
server Player5 192.168.0.14:80 check
option httpchk HEAD /checkCF.cfm HTTP/1.0

listen WebPlayer-Farm-SSL 192.168.0.2:443
mode tcp
option ssl-hello-chk
balance source
server Player4 192.168.0.13:443 check
server Player3 192.168.0.12:443 check
server Player1 192.168.0.10:443 check
server Player2 192.168.0.11:443 check
server Player5 192.168.0.14:443 check

listen Manager-Farm 192.168.0.2:81
mode http
option httplog
balance source
option forwardfor
stats enable
option http-server-close
server  Manager1 192.168.0.60:80 check
server  Manager2 192.168.0.61:80 check
server  Manager3 192.168.0.62:80 check
option httpchk HEAD /checkCF.cfm HTTP/1.0

listen Manager-Farm-SSL 192.168.0.2:444
mode tcp
option ssl-hello-chk
balance source
server Manager1 192.168.0.60:443 check
server Manager2 192.168.0.61:443 check
server Manager3 192.168.0.62:443 check

listen  info 192.168.0.2:90
mode http
balance source
stats uri /


--> CUT <--

 The problem with SSL is that the IP address the web server sees is the
IP address of the load balancer, not the client's original IP address.

 This is a big problem for me and it's essential that I get the
"right" IP address.

 What can I do? Is it possible? I've heard of stunnel but I don't
understand how to use it.

 Thank you in advance for your help,

 Christophe



Re: Help with SSL

2011-11-03 Thread Baptiste
Hi Christophe,

Use the HAProxy box in transparent mode: HAProxy will connect to
your application servers using the client's IP.
In your backend, just add the line:
source 0.0.0.0 usesrc clientip

Bear in mind that in such a configuration, the default gateway of your
servers must be the HAProxy box, or you have to configure PBR
(policy-based routing) on your network.

Stunnel can be used in front of HAProxy to decrypt the traffic.
But if your main issue is getting the client IP, it won't help you
unless you set up transparent mode as explained above.

cheers
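As a sketch of the transparent-mode backend described above (the backend and
server names are made up for illustration; this assumes an HAProxy build with
Linux TPROXY support, run with sufficient privileges, and servers whose
default gateway is the HAProxy box):

```
# Sketch only: transparent source binding. Names and IPs are examples.
backend webplayer-ssl
mode tcp
balance source
source 0.0.0.0 usesrc clientip   # servers see the client's IP, not HAProxy's
server Player1 192.168.0.10:443 check
```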

