Re: Truncated responses from upstream nginx server

2011-02-10 Thread Josh Slayton
Yes, I wrote too soon. You're right, it was an issue on app0, the other
server. Bad permissions for the nginx proxy_temp dir. Apologies for the
spam!
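
For anyone hitting the same symptom: nginx spools large proxied responses through its proxy_temp directory, and if the worker user cannot write there, responses come out truncated. A sketch of the relevant bits to check (paths and user are illustrative, not the actual setup):

```
# nginx.conf (illustrative paths/user, not the actual config)
user www-data;

http {
    # Large upstream responses are buffered to disk here; the worker
    # user must be able to write to this directory, otherwise the
    # response is cut short where buffering would begin.
    proxy_temp_path /var/lib/nginx/proxy_temp;
}
```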

On Thu, Feb 10, 2011 at 4:34 PM, Cyril Bonté  wrote:

> Hi Josh,
>
> On Friday, February 11, 2011, 00:55:29, Josh Slayton wrote:
> > Hi,
> >
> > Sorry to drop in on the mailing list with what may be a stupid question,
> > but I've been trying to figure it out myself for most of the day with no
> > luck.
> >
> > At AngelList (angel.co) we have haproxy forwarding upstream to two nginx
> > servers running passenger, serving a rails app. These servers are set to
> > deliver all content gzipped using Transfer-Encoding: chunked. The problem
> > is that we're getting some truncated responses on really large pages. It's
> > hit-or-miss as well. Try hitting http://angel.co/markets. About 25%-50% of
> > the time the response is truncated.
>
> Are you sure it's not an issue in one of your backends? I sent some loops
> of requests to your website and the issue seemed to occur every 2 requests
> (I guess I'm not alone, but that's what I observed).
>
> > If I access the webserver directly (app1.angel.co), I never get this issue.
>
> Maybe the other one always has the issue?
>
> > The only configuration option set besides timeouts and connections is
> > "option httpclose", which shouldn't matter since it seems nginx isn't using
> > keepalive connections anyway.
> >
> > Any thoughts? Much appreciated.
>
> While I was writing this mail, I've seen that every request is now OK. Have
> you fixed something?
>
> --
> Cyril Bonté
>


Re: Truncated responses from upstream nginx server

2011-02-10 Thread Cyril Bonté
Hi Josh,

On Friday, February 11, 2011, 00:55:29, Josh Slayton wrote:
> Hi,
> 
> Sorry to drop in on the mailing list with what may be a stupid question,
> but I've been trying to figure it out myself for most of the day with no
> luck.
> 
> At AngelList (angel.co) we have haproxy forwarding upstream to two nginx
> servers running passenger, serving a rails app. These servers are set to
> deliver all content gzipped using Transfer-Encoding: chunked. The problem
> is that we're getting some truncated responses on really large pages. It's
> hit-or-miss as well. Try hitting http://angel.co/markets. About 25%-50% of
> the time the response is truncated.

Are you sure it's not an issue in one of your backends? I sent some loops of
requests to your website and the issue seemed to occur every 2 requests (I
guess I'm not alone, but that's what I observed).

> If I access the webserver directly (app1.angel.co), I never get this issue.

Maybe the other one always has the issue?

> The only configuration option set besides timeouts and connections is
> "option httpclose", which shouldn't matter since it seems nginx isn't using
> keepalive connections anyway.
> 
> Any thoughts? Much appreciated.

While I was writing this mail, I've seen that every request is now OK. Have
you fixed something?

-- 
Cyril Bonté



Truncated responses from upstream nginx server

2011-02-10 Thread Josh Slayton
Hi,

Sorry to drop in on the mailing list with what may be a stupid question, but
I've been trying to figure it out myself for most of the day with no luck.

At AngelList (angel.co) we have haproxy forwarding upstream to two nginx
servers running passenger, serving a rails app. These servers are set to
deliver all content gzipped using Transfer-Encoding: chunked. The problem is
that we're getting some truncated responses on really large pages. It's
hit-or-miss as well. Try hitting http://angel.co/markets. About 25%-50% of
the time the response is truncated.
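
One way to confirm this kind of truncation from a client is to check that the chunked body actually ends with the terminating zero-size chunk. A rough sketch (not part of the original mail; simplified, ignoring trailer headers beyond the final CRLF):

```python
def chunked_body_complete(body: bytes) -> bool:
    """Return True if `body` is a complete HTTP/1.1 chunked-encoded body,
    i.e. every chunk is fully present and a zero-size last chunk ends it."""
    pos = 0
    while True:
        eol = body.find(b"\r\n", pos)          # end of the chunk-size line
        if eol == -1:
            return False                       # size line itself is truncated
        size_token = body[pos:eol].split(b";")[0].strip()  # drop chunk extensions
        try:
            size = int(size_token, 16)
        except ValueError:
            return False
        pos = eol + 2
        if size == 0:
            # last chunk: must still be followed by a final CRLF
            return body[pos:].endswith(b"\r\n")
        # chunk data plus its trailing CRLF must be fully present
        if len(body) < pos + size + 2 or body[pos + size:pos + size + 2] != b"\r\n":
            return False
        pos += size + 2
```

Fetching the page with a raw socket and running the body through a check like this would show whether the final `0\r\n\r\n` ever arrives.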

If I access the webserver directly (app1.angel.co), I never get this issue.

The only configuration option set besides timeouts and connections is
"option httpclose", which shouldn't matter since it seems nginx isn't using
keepalive connections anyway.
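
For reference, a minimal 1.4-style configuration matching the setup described above might look like this (hostnames, ports and timeouts are illustrative guesses, not the actual config):

```
defaults
    mode http
    option httpclose
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen rails 0.0.0.0:80
    balance roundrobin
    server app0 app0.angel.co:80 check
    server app1 app1.angel.co:80 check
```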

Any thoughts? Much appreciated.

- Joshua


[MINOR] stats: add support for several packets in stats admin

2011-02-10 Thread Cyril Bonté
Some browsers send POST requests in several packets, which was not supported
by the "stats admin" function.

This patch allows waiting for more data when it has not been fully received
(we are still limited to a certain size, defined by the buffer size minus its
reserved space).
It also adds support for the "Expect: 100-Continue" header.
---
 doc/configuration.txt |   11 ---
 src/proto_http.c  |   36 
 2 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 889f3b2..171a229 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4694,13 +4694,10 @@ stats admin { if | unless } 
  unless you know what you do : memory is not shared between the
  processes, which can result in random behaviours.
 
-  Currently, there are 2 known limitations :
-
-- The POST data are limited to one packet, which means that if the list of
-  servers is too long, the request won't be processed. It is recommended
-  to alter few servers at a time.
-
-- Expect: 100-continue is not supported.
+  Currently, the POST request is limited to the buffer size minus the reserved
+  buffer space, which means that if the list of servers is too long, the
+  request won't be processed. It is recommended to alter few servers at a
+  time.
 
   Example :
 # statistics admin level only for localhost
diff --git a/src/proto_http.c b/src/proto_http.c
index 2d3c711..108f55c 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -2831,8 +2831,7 @@ int http_wait_for_request(struct session *s, struct buffer *req, int an_bit)
 
 /* We reached the stats page through a POST request.
  * Parse the posted data and enable/disable servers if necessary.
- * Returns 0 if request was parsed.
- * Returns 1 if there was a problem parsing the posted data.
+ * Returns 1 if request was parsed or zero if it needs more data.
  */
 int http_process_req_stat_post(struct session *s, struct buffer *req)
 {
@@ -2850,15 +2849,15 @@ int http_process_req_stat_post(struct session *s, struct buffer *req)
 
cur_param = next_param = end_params;
 
-   if (end_params >= req->data + req->size) {
+   if (end_params >= req->data + req->size - global.tune.maxrewrite) {
/* Prevent buffer overflow */
s->data_ctx.stats.st_code = STAT_STATUS_EXCD;
return 1;
}
else if (end_params > req->data + req->l) {
-		/* This condition also rejects a request with Expect: 100-continue */
-   s->data_ctx.stats.st_code = STAT_STATUS_EXCD;
-   return 1;
+   /* we need more data */
+   s->data_ctx.stats.st_code = STAT_STATUS_NONE;
+   return 0;
}
 
*end_params = '\0';
@@ -2931,7 +2930,7 @@ int http_process_req_stat_post(struct session *s, struct buffer *req)
next_param = cur_param;
}
}
-   return 0;
+   return 1;
 }
 
 /* This stream analyser runs all HTTP request processing which is common to
@@ -3162,7 +3161,28 @@ int http_process_req_common(struct session *s, struct buffer *req, int an_bit, s
/* Was the status page requested with a POST ? */
if (txn->meth == HTTP_METH_POST) {
if (s->data_ctx.stats.flags & STAT_ADMIN) {
-   http_process_req_stat_post(s, req);
+			if (msg->msg_state < HTTP_MSG_100_SENT) {
+				/* If we have HTTP/1.1 and Expect: 100-continue, then we must
+				 * send an HTTP/1.1 100 Continue intermediate response.
+				 */
+				if (txn->flags & TX_REQ_VER_11) {
+					struct hdr_ctx ctx;
+					ctx.idx = 0;
+					/* Expect is allowed in 1.1, look for it */
+					if (http_find_header2("Expect", 6, msg->sol, &txn->hdr_idx, &ctx) &&
+					    unlikely(ctx.vlen == 12 && strncasecmp(ctx.line+ctx.val, "100-continue", 12) == 0)) {
+						buffer_write(s->rep, http_100_chunk.str, http_100_chunk.len);
+					}
+				}
+				msg->msg_state = HTTP_MSG_100_SENT;
+				s->logs.tv_request = now;  /* update the request timer to reflect full request */
+			}
+			if (!http_process_req_stat_post(s, req)) {
+				/* we need more data */
+				req->analysers |= an_bit;
+

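The analyser thus distinguishes three outcomes: give up when the body can never fit in the buffer minus the reserved space, wait when it is simply not all there yet, and parse once it is. A Python sketch of that contract (names and sizes are illustrative stand-ins, not haproxy's actual tunables):

```python
# Hypothetical sketch of the stats-admin POST handling contract; BUFSIZE and
# MAXREWRITE are illustrative stand-ins for haproxy's buffer tunables.
BUFSIZE = 16384
MAXREWRITE = 1024  # reserved space, like global.tune.maxrewrite

def process_post(received: bytes, content_length: int):
    """Return ('exceeded', None) if the body can never fit in the buffer
    minus its reserve, ('need_more', None) while packets are still missing,
    or ('parsed', params) once the whole body has arrived."""
    if content_length > BUFSIZE - MAXREWRITE:
        return ("exceeded", None)   # like STAT_STATUS_EXCD: give up immediately
    if len(received) < content_length:
        return ("need_more", None)  # keep the analyser subscribed and wait
    body = received[:content_length].decode()
    params = dict(p.split("=", 1) for p in body.split("&"))
    return ("parsed", params)
```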
Re: Stats page giving error sometimes, [X] Action not processed : the buffer couldn't store all the data.

2011-02-10 Thread Cyril Bonté
On Thursday, February 10, 2011, 11:09:58, Willy Tarreau wrote:
> Oh, I'm disappointed, I finally released 1.4.11 this morning with many
> fixes. I could have merged that too but it's too late now, it will be
> for next version :-/

Well, bad timing... I wasn't available yesterday to work on the patch.
Never mind, it's now ready ;-) I'll send the patch just after this mail.

-- 
Cyril Bonté



Re: Any Webminars? or working group sessions?

2011-02-10 Thread Amol
Thanks for the URL kris..

--- On Thu, 2/10/11, Cloud Central - Kris  wrote:

From: Cloud Central - Kris 
Subject: Re: Any Webminars? or working group sessions?
To: "Amol" 
Cc: haproxy@formilux.org
Date: Thursday, February 10, 2011, 5:35 AM

http://37signals.com/svn/posts/1073-nuts-bolts-haproxy


From: "Willy Tarreau" 
Sent: Thursday, February 10, 2011 9:34 PM
To: "Amol" 
Subject: Re: Any Webminars? or working group sessions?

Hi Amol,

On Tue, Feb 08, 2011 at 06:51:07AM -0800, Amol wrote:
> Hi, i have just recently got familiar with haproxy, i was wondering if there
> were any webminars? or videos discussing all the things that can be made
> possible with using haproxy as a load balancer?
> This would be really useful to some naive users like me trying to achieve a
> certain goal with haproxy but might end up doing a lot more and productive
> work..

There was a very nice presentation on 37signals quite some time ago, you
can easily find it on the net. I'm not aware of other ones though.

Regards,
Willy


Re: Stats page giving error sometimes, [X] Action not processed : the buffer couldn't store all the data.

2011-02-10 Thread Guillaume Bourque
Hello Willy!

You should not be disappointed at all, haproxy simply rocks. It's only a
minor issue, only seen on Google Chrome, and you already have a fix that
will probably solve it.

Again, thanks for this great software!


2011/2/10 Willy Tarreau 

> On Tue, Feb 08, 2011 at 05:08:48PM -0500, Guillaume Bourque wrote:
> > Cyril, you're a machine!
> >
> > I use firefox for now but it will be nice to use chrome too! And if it
> > makes things too complicated to support the new stats option in Google
> > Chrome, well, I will stick with Firefox, no problem at all ;-)
>
> Oh, I'm disappointed, I finally released 1.4.11 this morning with many
> fixes. I could have merged that too but it's too late now, it will be
> for next version :-/
>
> Regards,
> Willy
>
>


-- 
Guillaume Bourque, B.Sc.,
consultant, infrastructures technologiques libres
Logisoft Technologies inc.  http://www.logisoftech.com
514 576-7638,  http://ca.linkedin.com/in/GuillaumeBourque/fr


RE: Stopping HAproxy from OS detection

2011-02-10 Thread Lee Archer
Thanks, I will do some investigation.

Regards

Lee

-Original Message-
From: Graeme Donaldson [mailto:gra...@donaldson.za.net] 
Sent: 10 February 2011 12:39
To: Lee Archer
Cc: haproxy@formilux.org
Subject: Re: Stopping HAproxy from OS detection

On 10 February 2011 14:26, Lee Archer  wrote:
> Hi all, is it possible to stop HAproxy from identifying itself as 
> running on Linux?  I've run a few NMAP scans from external servers and 
> it always identifies itself as Linux.  I really need it to not do 
> this.  The website servers it is proxying for are running IIS so it isn't an 
> issue with these.
> Any ideas?

As far as I know, nmap uses TCP fingerprinting techniques
(http://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting) to guess the 
operating system. This has nothing to do with haproxy. The wikipedia article on
TCP fingerprinting mentions that there are tools for making fingerprinting
attempts less successful; that's probably where you should look if it's
something you want to do.

HTH,
Graeme.



Re: Stopping HAproxy from OS detection

2011-02-10 Thread Graeme Donaldson
On 10 February 2011 14:26, Lee Archer  wrote:
> Hi all, is it possible to stop HAproxy from identifying itself as running on
> Linux?  I’ve run a few NMAP scans from external servers and it always
> identifies itself as Linux.  I really need it to not do this.  The website
> servers it is proxying for are running IIS so it isn’t an issue with these.
> Any ideas?

As far as I know, nmap uses TCP fingerprinting techniques
(http://en.wikipedia.org/wiki/TCP/IP_stack_fingerprinting) to guess
the operating system. This has nothing to do with haproxy. The
wikipedia article on TCP fingerprinting mentions that there are tools
for making fingerprinting attempts less successful; that's probably
where you should look if it's something you want to do.

HTH,
Graeme.



Stopping HAproxy from OS detection

2011-02-10 Thread Lee Archer
Hi all, is it possible to stop HAproxy from identifying itself as
running on Linux?  I've run a few NMAP scans from external servers and
it always identifies itself as Linux.  I really need it to not do this.
The website servers it is proxying for are running IIS so it isn't an
issue with these.  Any ideas?

Thanks

Lee


Re: Email to the list not delivered -- anyone else?

2011-02-10 Thread Graeme Donaldson
On 10 February 2011 12:45, Willy Tarreau  wrote:
>
> We've already noticed that the ML archives don't show all the messages. I
> have never understood why. Since most posters are subscribed and other
> archives are up to date (gmane and marc.info), I never took much time
> investigating this issue which does not seem to be that big of a deal.
>

I didn't realise there were alternate locations for viewing the
archives. My message does indeed show up at gmane and marc.info.

Nothing to see here, move along :-)

Graeme.



Re: Email to the list not delivered -- anyone else?

2011-02-10 Thread Willy Tarreau
Hi Graeme,

On Thu, Feb 10, 2011 at 12:38:42PM +0200, Graeme Donaldson wrote:
> Hi all
> 
> Has anyone else noticed instances of messages sent to the list not being
> delivered?
> 
> I just realised that a reply I sent to Kyle's question 2 days ago never made
> it to the list. Notice that until now there are no messages from me to the
> list shown on http://www.formilux.org/archives/haproxy/1102/date.html, but I
> definitely sent a message 2 days ago: http://i.imgur.com/tCqRe.png

Yes in fact it was delivered, I noticed your reply just after I replied to
him, so he basically got the same response twice :-)

We've already noticed that the ML archives don't show all the messages. I
have never understood why. Since most posters are subscribed and other
archives are up to date (gmane and marc.info), I never took much time
investigating this issue which does not seem to be that big of a deal.

Thanks,
Willy




Re: balance (hdr) problem (maybe bug?)

2011-02-10 Thread Willy Tarreau
Hi Craig,

On Mon, Feb 07, 2011 at 09:24:24PM +0100, Craig wrote:
> Hi,
> 
> >> The X-Forwarded-For header is only added once at the end of all
> processing.
> >> Otherwise, having it in the defaults section would result in both your
> >> frontend and your backend adding it.
> Then the possibility to add it only to a frontend or a backend in the
> defaults section would be nice?

It is already the case. The fact is that we're telling haproxy that we
want an outgoing request to have the header. If you set the option in
the frontend, it will have it. If you set it in the backend, it will
have it. If you set it in both, it will only be added once. It's really
a flag: when the request passes through a frontend or backend which
has the option, then it will have the header appended.

> >> So in your case, what happens is that you delete it in the frontend
> (using
> >> reqidel) then you tag the session for adding a new one after all
> processing
> >> is done.
> >>
> >> When at the last point we have to establish a connection to the
> server, we
> >> check the header and balance based on it. I agree we should always
> have it
> >> filled with the same value, so there's a bug.
> So if I got it right, I cannot balance based on the new header because
> it was not added yet. That behaviour comes really unexpected because one
> usually would believe it was already added in the frontend.

It can seem unexpected when you reason with header addition, but it's sort
of an implicit header addition. The opposite would be much more unexpected:
you'd really not want the header to be added twice because it was enabled in
both sections. It's possible that the doc is not clear enough:

 "This option may be specified either in the frontend or in the backend. If at
  least one of them uses it, the header will be added. Note that the backend's
  setting of the header subargument takes precedence over the frontend's if
  both are defined."

Maybe we should insist on the fact that it's done only at the end.

We could try to add it in the frontend and tag the session to know it was
already performed. But this would slightly change the semantics to a new
one which might not necessarily be desirable. For instance, it's possible
in a backend to delete the header and set the option. That way you know
that your servers will receive exactly one occurrence of it. Many people
are doing that because their servers are having issues with this header
passed as a list. Changing the behaviour would result in the backend's
delete rule to suppress the header that was just added, and the new one
won't be added anymore since it already was.
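
That delete-then-add pattern is typically written like this in a backend (a sketch only; names and addresses are illustrative):

```
backend app
    # remove any client-supplied X-Forwarded-For, then let haproxy append
    # exactly one occurrence at the end of request processing
    reqidel ^X-Forwarded-For:.*
    option forwardfor
    server srv1 192.168.0.11:80
    server srv2 192.168.0.12:80
```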

> >> My guess is that you're running a version prior to 1.4.10 which has the
> >> header deletion bug : the header list can become corrupted when exactly
> >> two consecutive headers are removed from the request (eg: connection and
> >> x-forwarded-for). Then the newly added X-Forwarded-For could not be seen
> >> by the code responsible for hashing it.
> >>
> >> If so, please try to upgrade to the last bug fix (1.4.10) and see if the
> >> problem persists.
> I am already using 1.4.10 - sorry, it seems I somehow forgot to mention
> it! :/

OK so I'm interested in any reliable reproducer for this bug (e.g. config and/or
request exhibiting the issue). You can send me your config privately if you
don't want to post it to the list.

> That is a good hint, but I also have a frontend for SSL (with stunnel
> which adds the X-Forward-For header) that I'd want to use the same
> backend. I did not like defining backends twice as it introduces
> redundancy and might lead to inconsistency, it is a good workaround
> though. Note: my testing and the bug happened with the normal frontend.

OK I see. Be aware that this setup is not compatible with keep-alive though,
as stunnel will only add the header in the first request. An alternative is
to apply the patch for the proxy protocol to stunnel and use it with either
haproxy 1.5-dev, or use the 1.4 backports that were recently posted to the
list.

> Also, I could leave out the reqidel of the header, but then a malicious
> party could theoretically choose the server it accesses (by forging
> x-forwarded-for) and overload one after another; I prefer to take away
> this possibility (yea I am overdoing it, maybe). ;)

Targeting a server is a false problem. It can also be done by forcing
cookies when persistence is used. And even when persistence cookies are
encrypted and not predictable, it's enough to make one valid request and
replay very fast with the assigned cookie to always hit the same server ;-)

Your infrastructure and configuration should ensure that your servers don't
die from overloading. That will protect you from this specific issue.

Regards,
Willy




Email to the list not delivered -- anyone else?

2011-02-10 Thread Graeme Donaldson
Hi all

Has anyone else noticed instances of messages sent to the list not being
delivered?

I just realised that a reply I sent to Kyle's question 2 days ago never made
it to the list. Notice that until now there are no messages from me to the
list shown on http://www.formilux.org/archives/haproxy/1102/date.html, but I
definitely sent a message 2 days ago: http://i.imgur.com/tCqRe.png

*confused*

Graeme.


Re: ACLs with Overlapping Subnets and IPs

2011-02-10 Thread Willy Tarreau
Hi Kyle,

On Tue, Feb 08, 2011 at 07:48:19AM -0500, Kyle Brandt wrote:
> Hi All,
> 
> Can I have an ACL that doesn't perform an action on a specific IP but will
> perform the action on the subnet that the IP is part of?
> 
> For example:
> 
> acl bad_subnet src 10.0.0.0/8
> acl okay_ip src 10.0.1.5
> use_backend blocked if bad_subnet !okay_ip
> 
> So the target result would be to use the backend "blocked" if the IP is in
> the 10.0.0.0/8 subnet unless that IP is 10.0.1.5. If the IP is outside the
> 10.0.0.0/8 network no action would be take for this rule.
> 
> Is my example correct for this? If it isn't -- how can this be done?

Yes, your example is correct and it will work without any change at all :-)

Willy




Re: Stability of master branch

2011-02-10 Thread Willy Tarreau
Hi David,

On Tue, Feb 08, 2011 at 05:24:10PM +0900, David Cournapeau wrote:
> Hi,
> 
> I understand that the master branch on the official git repository is
> labeled as unstable, but I was wondering to what degree it is
> considered unstable by haproxy developers ? Is it unstable as "you
> would be crazy to use this in production", or are some people using it
> in production ?

It is the second one. A few sites are running it in production, but I know
them, have already reviewed their configs and can inform them when I fix
something that might affect them. These sites also cooperate by easily
reporting strange issues, providing logs or are willing to debug on live
traffic when needed. In fact they're working as beta-testers in a very
well defined perimeter. I'm sure there are other adventurers running it
without me being aware of it, but it's their problem if it breaks ;-)

> For example, when I read the changelog for 1.5dev3 on
> haproxy page, it seems that some people are using some features not in
> 1.4.9.

Yes indeed. Some are also using a few backported features, such as the support
for the PROXY protocol.

To be honest, I wouldn't recommend running 1.5-dev if you don't know what
to expect from it nor if you're willing to be very open about your config,
logs, or accept some emergency updates. But if you're OK with all of that
and want some 1.5 features, there is almost no risk. In fact, most of the
bugs I recently discovered in 1.5 were also in 1.4, so there are very few
regressions. Sometimes, the config can change a bit between two consecutive
development versions but that should not cause particular trouble.

> As a tangential question, is there a roadmap for 1.5 ?

Yes, it's in the ROADMAP file in the sources. It says that 1.5 was due for
the end of 2010. As you might have noticed, it's been delayed a lot, and
many important features have still not been implemented. I really want to
implement basic server-side keepalive before releasing it, as well as
POST parameter extraction, the "return" statement and the "use-server"
if possible. The other ones have less impact and can be brought into
maintenance releases. The server-side keep-alive is really the hard part
because it has revealed many remaining internal mis-designs that must be
addressed before releasing.

Regards,
Willy




Re: Stats page giving error sometimes, [X] Action not processed : the buffer couldn't store all the data.

2011-02-10 Thread Willy Tarreau
On Tue, Feb 08, 2011 at 05:08:48PM -0500, Guillaume Bourque wrote:
> Cyril, you're a machine!
> 
> I use firefox for now but it will be nice to use chrome too! And if it
> makes things too complicated to support the new stats option in Google
> Chrome, well, I will stick with Firefox, no problem at all ;-)

Oh, I'm disappointed, I finally released 1.4.11 this morning with many
fixes. I could have merged that too but it's too late now, it will be
for next version :-/

Regards,
Willy




Re: [ HAProxy v 1.4.8 ] Curious Behaviour

2011-02-10 Thread Willy Tarreau
Hi Vivien,

On Thu, Feb 10, 2011 at 11:01:14AM +0100, maynardkee...@free.fr wrote:
> Hello,
> 
> I'm trying to build a load balancer with this great haproxy tool.
> It works with lots of backends in a mock-up, except for one backend, and I
> don't know why.
> Have you got an idea about this?
> In order to simplify the problem, in the config file below, I have only one
> backend instead of two.
> 
> • To communicate with the backend, with a wget or a telnet, it works:
> 
> -bash-3.2$ telnet cppfrontendx01-i 80
> Trying 192.168.235.43...
> Connected to cppfrontendx01-i (192.168.235.43).
(...)
> -bash-3.2$ telnet 127.0.0.1 14001
> Trying 127.0.0.1...
> Connected to localhost (127.0.0.1).
(...)
> listen A233_ cppfrontendx01-i_14001 127.0.0.1:14001
>server cppfrontendx01-i cppfrontendx01-i cookie 
> c1a7e61bae8b014bd997915dea478161

Look above, your "server" line has no port, so it will use the port
it receives the connection on. In your case, it will be 14001. Your
test shows that your server listens on port 80, so you just have to
append ":80" after "cppfrontendx01-i" and it will work.
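
With the port appended, the section would read like this (a sketch based on the config quoted above):

```
listen A233_cppfrontendx01-i_14001 127.0.0.1:14001
   cookie SERVERID insert indirect
   server cppfrontendx01-i cppfrontendx01-i:80 cookie c1a7e61bae8b014bd997915dea478161
```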

Regards,
Willy




Re: Any Webminars? or working group sessions?

2011-02-10 Thread Willy Tarreau
Hi Amol,

On Tue, Feb 08, 2011 at 06:51:07AM -0800, Amol wrote:
> Hi, i have just recently got familiar with haproxy, i was wondering if there 
> were any webminars? or videos discussing all the things that can be made 
> possible with using haproxy as a load balancer? 
> This would be really useful to some naive users like me trying to achieve a 
> certain goal with haproxy but might end up doing a lot more and productive 
> work..
>  

There was a very nice presentation on 37signals quite some time ago, you
can easily find it on the net. I'm not aware of other ones though.

Regards,
Willy




[ HAProxy v 1.4.8 ] Curious Behaviour

2011-02-10 Thread maynardkeenan
Hello,

I'm trying to build a load balancer with this great haproxy tool.
It works with lots of backends in a mock-up, except for one backend, and I
don't know why.
Have you got an idea about this?
In order to simplify the problem, in the config file below, I have only one
backend instead of two.

•   To communicate with the backend, with a wget or a telnet, it works:

-bash-3.2$ telnet cppfrontendx01-i 80
Trying 192.168.235.43...
Connected to cppfrontendx01-i (192.168.235.43).
Escape character is '^]'.
GET /myfarm/home/awhomepage.aspx
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Fri, 04 Feb 2011 17:07:46 GMT
X-Powered-By: ASP.NET
X-AspNet-Version: 1.1.4322
Set-Cookie: ASP.NET_SessionId=v5mevtim4orlzz2f4btimf45; path=/
Cache-Control: no-store, no-cache, must-revalidate
Content-Type: text/html; charset=utf-8
Content-Length: 3689

My Farm Portal Error

[ etc ]

•   To communicate with the haproxy listener, I obtain a 503 error:

-bash-3.2$ telnet 127.0.0.1 14001
Trying 127.0.0.1...
Connected to localhost (127.0.0.1).
Escape character is '^]'.
GET /myfarm/home/awhomepage.aspx
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

503 Service Unavailable
No server is available to handle this request.

Connection closed by foreign host.

 Below, my global haproxy configuration file:

#Cat Haproxy.cfg

global
   log 127.0.0.1:1514   local4 debug
   maxconn 8192
   uid ta_h380_adm_i
   gid grp_h380_roa
   daemon

defaults
   log global
   mode    http
   option  httplog
   option  dontlognull
   option httpclose
   option forwardfor
   retries 3
   option redispatch
   maxconn 2000
   contimeout  5000
   clitimeout  5
   srvtimeout  5

listen stats 127.0.0.1:14000
stats enable
stats refresh 5s
balance roundrobin
mode http
stats uri   /


#cat haproxy.cfg ## include for my backend:

listen A233_ cppfrontendx01-i_14001 127.0.0.1:14001
#  balance roundrobin
#  option httpchk GET /server-status HTTP/1.0
   cookie SERVERID insert indirect
   server cppfrontendx01-i cppfrontendx01-i cookie
c1a7e61bae8b014bd997915dea478161

The backend and my server are in the same subnet.

Thank you for your help in advance,
Vivien