Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 6 April 2010 19:43, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Apr 06, 2010 at 11:42:53AM +0100, Matt wrote:
  Hi all,
 
  Using HA-Proxy version 1.3.19 2009/07/27.  Set-up is HA-Proxy balancing a
  pool of Jetty servers.
 
  We had a tomcat application using keep-alive that was having issues (it
  kept opening many connections), so to stop that and other clients hitting
  the same problem we used option httpclose, which fixed it.
 
  This, though, has added another issue when using digest authentication
  with curl.  When sending to the HA-Proxy IP:-
 
  **request**
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ...
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
 
  **response**
   HTTP/1.1 100 Continue
   Connection: close
  * Empty reply from server
  * Closing connection #0
  curl: (52) Empty reply from server
 
  It looks like HA-Proxy is sending the 100-continue rather than a 401, and
  adding the Connection: close header.  If I use curl with the --http1.0
  option, then it works as expected, but I guess this forces Jetty to work
  in HTTP 1.0 mode.

 This was fixed in 1.3.23 and 1.3.24. The issue is not quite what you
 describe above. What happens is that the client sends the Expect:
 100-continue header, which is forwarded to the server. The server then
 replies with HTTP/1.1 100 Continue, and haproxy adds the Connection: close
 header to that response. Strictly speaking, both curl and haproxy are
 incorrect here:
  - haproxy should not add any header to a 100-continue response
  - libcurl should ignore any header in a 100-continue response.

 But in reality, probably neither treats the 100-continue response as the
 special case it is.
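
 The special case can be sketched in a few lines of Python: an HTTP/1.1
 client must keep reading past any 1xx interim response, ignore its headers,
 and act only on the final status. Everything below (the address, the 401
 final reply) is a made-up stand-in, not the actual Jetty exchange:

```python
# Why 100 Continue is special: the client must read past the interim
# response (and ignore any headers on it) to reach the final status.
# The server below is an in-process stand-in, not a real backend.
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def backend():
    conn, _ = srv.accept()
    conn.recv(4096)  # read the request headers
    conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")  # interim response
    conn.sendall(b"HTTP/1.1 401 Unauthorized\r\n"   # final response
                 b"Content-Length: 0\r\n\r\n")
    conn.close()

t = threading.Thread(target=backend)
t.start()

c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"PUT / HTTP/1.1\r\nHost: example\r\n"
          b"Expect: 100-continue\r\nContent-Length: 0\r\n\r\n")
data = b""
while True:
    chunk = c.recv(4096)
    if not chunk:          # server closed: both responses received
        break
    data += chunk
c.close()
t.join()

statuses = [line.split()[1] for line in data.decode().splitlines()
            if line.startswith("HTTP/1.1")]
print(statuses)  # a correct client sees both, and acts only on the final one
```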

 There is nothing you can do in the configuration to fix this; you should
 really update your version (other annoying issues have also been fixed
 since 1.3.19). Either install 1.3.24 (or 1.3.23 if you can't find 1.3.24
 yet for your distro), or switch to 1.4.3.

 Well, maybe if you remove option httpclose and replace it with
 reqadd Connection:\ close, without the corresponding rspadd, it could work,
 provided nothing else touches the response (no cookie insertion, ...).
 This would rely on the server correctly closing the response, but it would
 be an awful hack.
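
 Spelled out, that hack would look something like this (frontend/backend
 names are hypothetical; note there is no option httpclose and no rspadd,
 so the response path is left untouched):

```
frontend www
    bind :80
    # close hint on the request side only; haproxy does not touch
    # the server's 100-continue response
    reqadd Connection:\ close
    default_backend jetty
```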

  When using apache in front of HA-Proxy with both force-proxy-request-1.0
 and
  proxy-nokeepalive the request is successful.

 This is because the Expect header appeared in HTTP/1.1, so the client
 cannot use it if you force the request to 1.0.

 Thanks, I'll test 1.3.23/24 in our lab.

Matt


Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 6 April 2010 19:43, Willy Tarreau w...@1wt.eu wrote:

 [earlier exchange quoted in full above, trimmed]

 On second thoughts I don't think this is going to work. If 1.3.24 is the
same as 1.4.3, I'm getting an error on the first request, not the
challenge, when using 1.4.3 with option httpclose or option
http-server-close.

When using curl :-
* Server auth using Digest with user 'su'
 PUT . HTTP/1.1
 User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
 Host: ..
 Accept: */*
 content-type:application/xml
 Content-Length: 0
 Expect: 100-continue

 HTTP/1.1 100 Continue
* HTTP 1.0, assume close after body
 HTTP/1.0 502 Bad Gateway
 Cache-Control: no-cache
 Connection: close
 Content-Type: text/html

<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
* Closing connection #0

The Jetty server throws an exception :-
HTTP/1.1 PUT
Request URL: http://..
User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
Host: 
Accept: */*
Content-Type: application/xml
Content-Length: 0
Expect: 100-continue
X-Forwarded-For: ...
Connection: close
Querystring: null
-ERROR Authenticator Authenticator caught IO Error when trying
to authenticate user!
org.mortbay.jetty.EofException
org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:565)
org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:904)
org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:633)
org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:586)
org.mortbay.jetty.security.DigestAuthenticator.authenticate(DigestAuthenticator.java:131)
...
Caused by: java.nio.channels.ClosedChannelException
...

HA Proxy debug:-
accept(0007)=0008 from [...:49194]
clireq[0008:]: PUT ... HTTP/1.1
clihdr[0008:]: User-Agent: curl/7.19.5 (i486-pc-linux-gnu)
libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
clihdr[0008:]: Host: 
clihdr[0008:]: Accept: */*
clihdr[0008:]: content-type:application/xml
clihdr[0008:]: Content-Length: 0
clihdr[0008:]: Expect: 100-continue
srvrep[0008:0009]: HTTP/1.1 100 Continue
srvcls[0008:0009]
clicls[0008:0009]
closed[0008:0009]

Making sure that both httpclose and http-server-close are absent causes the
requests to work.

Re: Source IP instead of Haproxy server IP

2010-04-07 Thread Joseph Hardeman
Willy,

Thank you for the response. It's interesting that I can't do this with
haproxy; I was successful in doing this with LVS before.

   Web Visitor
   ^  |
   |  |
   |  V
   |Haproxy
   |  /|\
   | / | \
Cluster of servers

I understand that haproxy is a layer 7 proxy and I am looking at using it as
a transparent forwarding load balancer, at least for this step.

Even with haproxy compiled with tproxy, you mentioned this won't work.

I want to stay with haproxy, but I need haproxy, at this first step, to
pass the visitor's IP as the source to the next set of systems instead of
swapping in the haproxy server's IP address.

Thanks again.

Joe


 From: Willy Tarreau w...@1wt.eu
 Date: Tue, 6 Apr 2010 07:10:04 +0200
 To: Joseph Hardeman jharde...@colocube.com
 Cc: haproxy@formilux.org
 Subject: Re: Source IP instead of Haproxy server IP
 
 On Tue, Apr 06, 2010 at 07:02:20AM +0200, Willy Tarreau wrote:
 They are wanting their systems to send the data back to the visitor instead
 of passing it back through haproxy.
 
 Oops, sorry, I did not notice the end of the question. It is not
 possible to send the data back to the client because it is not the
 same TCP connection, so it's not a matter of using one address or
 the other.
 
 There is one connection from the client to haproxy and another one
 from haproxy to the server. And even if you use the TPROXY feature,
 the return traffic must still pass through haproxy.
 
 This will be true for any layer7 load balancer BTW : the LB must
 first accept the client's connection to find the contents, and by
 doing so, it chooses TCP sequence numbers that will be different
 from those that the final server will choose (and a lot of other
 settings differ). So the server needs to pass through the LB for
 the return traffic so that the LB can respond to the client with
 its own settings.
 
 If your customer is worried about the bandwidth, you should build
 with LINUX_SPLICE and use a kernel >= 2.6.27.x, which adds support
 for TCP splicing. This is basically as fast as IP forwarding and
 can even be faster on large objects. With this I reached 10 Gbps
 in lab tests, but someone on the list has already reached 5 Gbps
 of production traffic and is permanently above 3 Gbps.
 
 So maybe this is what you're looking for. And yes, this is compatible
 with LINUX_TPROXY, though the few iptables rules may noticeably
 impact performance.
 
 Regards,
 Willy
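
 (For reference, the SPLICE/TPROXY build options mentioned above
 correspond, in the 1.3/1.4-era Makefile, to flags along these lines -
 a sketch, and the exact TARGET depends on your platform:)

```
make TARGET=linux26 USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1
```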
 




Changing HA Proxy return codes

2010-04-07 Thread Matt
I'm guessing the answer is no, as I'm unable to find anything in the
documentation that suggests otherwise, but...

If I wanted to change the error return code submitted by haproxy (not the
backend server), is this possible? I.e. change haproxy to return a 502 when
it would otherwise return a 504?

I know the current return codes are correct as per
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html - I could just do
with this short hack.

Thanks,

Matt


Fwd: Re: Source IP instead of Haproxy server IP

2010-04-07 Thread Chris Sarginson

Forwarded to the list for posterity:
---BeginMessage---

Hi Joe,

I'm pretty sure that if you are using LVS then you have an iptables
redirect rule set up that directs traffic back through the load balancer,
not directly back to the client - how could the client know that the TCP
session you are establishing from your server is related to another IP
address entirely?


Hope this helps point you in the right direction.

Chris

Joseph Hardeman wrote:

[Joseph's message and Willy's reply, quoted in full above, trimmed]
---End Message---


Re: Changing HA Proxy return codes

2010-04-07 Thread Holger Just
Hi Matt,

On 2010-04-07 14:34, Matt wrote:
 If I wanted to change the error return code submitted by haproxy (not
 the backend server) is this possible? i.e. change haproxy to return a
 502 when it's going to return a 504?

You could (ab)use the errorfile parameter and have haproxy send
arbitrary data. Thus you could do something like this:

errorfile 500 /etc/haproxy/errorfiles/503.http
errorfile 502 /etc/haproxy/errorfiles/503.http
errorfile 503 /etc/haproxy/errorfiles/503.http
errorfile 504 /etc/haproxy/errorfiles/503.http

and then have the file at /etc/haproxy/errorfiles/503.http contain
something like this:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 329

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Something is wrong</title>
</head>
<body><h1>Something went wrong</h1></body>
</html>

Note that you should correctly re-calculate the Content-Length header
(or leave it out) if you make any changes here.
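
To illustrate the Content-Length point, here is a small Python sketch (the
file content is a shortened stand-in, not the 329-byte example above): the
header must state the byte count of everything after the blank line.

```python
# Recompute Content-Length for an errorfile-style HTTP response.
# The body is everything after the first blank line (CRLF CRLF).
raw = (
    b"HTTP/1.0 503 Service Unavailable\r\n"
    b"Cache-Control: no-cache\r\n"
    b"Connection: close\r\n"
    b"Content-Type: text/html\r\n"
    b"\r\n"
    b"<html><body><h1>Something went wrong</h1></body></html>\n"
)
head, _, body = raw.partition(b"\r\n\r\n")
print("Content-Length:", len(body))  # prints: Content-Length: 56
```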

--Holger



Re: Source IP instead of Haproxy server IP

2010-04-07 Thread Willy Tarreau
On Wed, Apr 07, 2010 at 08:27:26AM -0400, Joseph Hardeman wrote:
 Willy,
 
 Thank you for the response, its interesting that I can't do this with
 haproxy, I was successful in doing this with LVS before.

This is because the two work very differently:

  - LVS does not inspect contents, it just forwards traffic to the server
  - LVS is already in the middle of the IP stack in the kernel, where it's
possible to mangle anything.

Web Visitor
^  |
|  |
|  V
|Haproxy
|  /|\
| / | \
 Cluster of servers
 
 I understand that haproxy is a layer 7 proxy and I am looking at using it as
 a transparent forwarding load balancer, at least for this step.

 Even with haproxy compiled with tproxy, you mentioned this wont work.

With tproxy you will be able to make haproxy connect to the servers from
the client's IP address. But this is just spoofing: there is one connection
from the client to haproxy, another one from haproxy to the server, and the
server believes the original client is connecting to it because it sees the
client's IP as the source of the connection. Your servers will still have
to route their return traffic through haproxy in order for each connection
to work. Also, keep in mind that your kernel will have to be patched to
support this, and that the required iptables rules may reduce performance.
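
For reference, the spoofing described above is configured with the source
keyword on a tproxy-enabled build; a minimal sketch (backend and server
names hypothetical, and it only works with a patched kernel and haproxy
built with tproxy support):

```
backend cluster
    # connect to the servers using the client's address as source
    source 0.0.0.0 usesrc clientip
    server web1 10.0.0.11:80
```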

 I want to stay with haproxy, but I need to have haproxy at this first step
 pass the visitors' ip as the source to the next set of systems instead of
 changing it out for haproxy server IP address.

Quite honestly, while I understand in which cases it can help (e.g. when
you host servers for multiple customers), I don't think there is any
long-term benefit in doing this. It eases deployment but is more
complicated to maintain in the long term (patched kernel, routing issues,
etc.). Having the web server use standard methods to get the client's IP
address is generally preferred.

Regards,
Willy




Re: Please contribute haproxy info and stat samples

2010-04-07 Thread John Feuerstein
CC'ing the list:

On 04/07/2010 07:13 AM, Willy Tarreau wrote:
 That's very nice. It is possible that your tool could become the default
 CLI interface in the end if it's lightweight and portable. At one moment
 I wondered if I should embed a CLI in the haproxy binary using readline
 or ncurses and use it as a client to connect to the previous process.

I'm glad to hear that. HATop has no external dependencies except for
Python itself, which comes with the socket and curses APIs. It should
run anywhere Python is available. (I wrote it for the current Python 2.6;
however, it runs fine with 2.5 and it should be quite easy to make it work
with others.)

To keep it lightweight and easy to distribute, it's written as one
single python executable: https://labs.feurix.org/admin/hatop/tree/hatop

It's interesting to hear that you've thought about integrating this
directly into the haproxy binary. I had that exact same thought and
mentally prepared myself to contribute a patch... when I suddenly found
the stats socket and its interactive mode. In the end, I think an
external app connecting to the haproxy socket is the better solution:

 - keep haproxy as small and stable as possible
 - there are tools to tunnel the socket connection

I've also thought about an additional tcp mode, but IMHO ssh is more
than sufficient, plus much more secure and convenient (pubkeys):

  $ ssh -t haproxy-host hatop -s /tmp/haproxy.sock

After all, the CLI is my preferred way to manage haproxy (and probably
others' too), and working with plain socat is painful :-)
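
(At the transport level, all hatop or socat does is write a command to the
UNIX socket and read the reply. A self-contained Python sketch, with a
throwaway in-process server standing in for haproxy - the real socket path
and reply content are deployment-specific:)

```python
# Minimal stats-socket exchange: send "show info", read the reply.
# The server side here is a fake stand-in for haproxy.
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "haproxy.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def fake_haproxy():
    conn, _ = srv.accept()
    if conn.recv(1024).strip() == b"show info":
        conn.sendall(b"Name: HAProxy\nVersion: 1.4.3\n")  # made-up reply
    conn.close()

t = threading.Thread(target=fake_haproxy)
t.start()

# Client side: this is the whole "protocol".
c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
c.connect(sock_path)
c.sendall(b"show info\n")
reply = b""
while True:
    chunk = c.recv(4096)
    if not chunk:      # server closed the connection
        break
    reply += chunk
c.close()
t.join()
print(reply.decode())
```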

 OK, so here are two different samples from two distinct machines. I'm
 hoping this helps.
 
 Also, if you find that some fields are missing from the output and
 could help your tool, please suggest, I'm open to discussions.

Thanks! I've received samples from 4 people so far, this will help me a
lot designing the screen layout.

I haven't read the pre-haproxy-1.4 CSV format documentation yet. I want
to support at least 1.3 too, so it's possible that I'm missing something
there (interactive socket mode, some status fields, ...).

Regards,
John




Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
Hi Willy,

That trace is from curl using --verbose; it looks like there is one empty
line after Expect: 100-continue.

Here, using --trace-ascii, it definitely looks like there is an empty line
after it:

00b7: content-type:application/xml
00d9: Content-Length: 0
00ec: Expect: 100-continue
0102:
== Info: HTTP 1.0, assume close after body
<= Recv header, 26 bytes (0x1a)
0000: HTTP/1.0 502 Bad Gateway
<= Recv header, 25 bytes (0x19)
0000: Cache-Control: no-cache
<= Recv header, 19 bytes (0x13)
0000: Connection: close
<= Recv header, 25 bytes (0x19)
0000: Content-Type: text/html
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 107 bytes (0x6b)
0000: <html><body><h1>502 Bad Gateway</h1>.The server returned an inva
0040: lid or incomplete response..</body></html>.
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
== Info: Closing connection #0

I'll try the latest snapshot now.

Thanks,

Matt

On 7 April 2010 13:44, Willy Tarreau w...@1wt.eu wrote:

 Hi Matt,

 On Wed, Apr 07, 2010 at 11:10:58AM +0100, Matt wrote:
  When using curl :-
  * Server auth using Digest with user 'su'
   PUT . HTTP/1.1
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ..
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
  
   HTTP/1.1 100 Continue
  * HTTP 1.0, assume close after body
   HTTP/1.0 502 Bad Gateway
   Cache-Control: no-cache
   Connection: close
   Content-Type: text/html
 (...)

 Where was this trace caught? Are you sure there was no empty line after
 the HTTP/1.1 100 Continue? That would be a protocol error, but maybe
 it's just an interpretation by the tool used to dump the headers.

  The Jetty server throws an exception :-
  HTTP/1.1 PUT
  Request URL: http://..
  User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
  Host: 
  Accept: */*
  Content-Type: application/xml
  Content-Length: 0
  Expect: 100-continue
  X-Forwarded-For: ...
  Connection: close
  Querystring: null
  -ERROR Authenticator Authenticator caught IO Error when
 trying
  to authenticate user!
  org.mortbay.jetty.EofException
  org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
 
 org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:565)
  org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:904)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:633)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:586)
 
 org.mortbay.jetty.security.DigestAuthenticator.authenticate(DigestAuthenticator.java:131)
  ...
  Caused by: java.nio.channels.ClosedChannelException
  ...
 
  HA Proxy debug:-
  accept(0007)=0008 from [...:49194]
  clireq[0008:]: PUT ... HTTP/1.1
  clihdr[0008:]: User-Agent: curl/7.19.5 (i486-pc-linux-gnu)
  libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
  clihdr[0008:]: Host: 
  clihdr[0008:]: Accept: */*
  clihdr[0008:]: content-type:application/xml
  clihdr[0008:]: Content-Length: 0
  clihdr[0008:]: Expect: 100-continue
  srvrep[0008:0009]: HTTP/1.1 100 Continue
  srvcls[0008:0009]
  clicls[0008:0009]
  closed[0008:0009]
 
  Making sure that both httpclose and http-server-close are absent causes
 the
  requests to work.

 This makes me think of another funny behaviour in the server, related
 to Connection: close. Could you try the latest 1.4 snapshot and
 add option http-pretend-keepalive? It is possible that the server
 disables handling of the 100-continue when it sees a close (which is
 not related at all, but since this is the only difference, we can
 suspect another home-made HTTP implementation).

 Regards,
 Willy




Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 7 April 2010 17:16, Matt mattmora...@gmail.com wrote:

 [previous message and Willy's reply, quoted in full above, trimmed]
Latest snapshot on 1.4.3

- option http-pretend-keepalive  - works
- option http-pretend-keepalive and httpclose - same behaviour as before,
errors
- option http-server-close - same behaviour as before, errors
- option http-server-close and http-pretend-keepalive - works

What exactly does pretend-keepalive do?

Thanks,

Matt


Timings and HTTP keep-alives

2010-04-07 Thread Joshua Levy
Hi,
I've been using the new http-server-close option for HTTP
keep-alives and it's been working well -- thanks for this feature.

I may be missing something, but I did not see much detail in the
docs on how the various timings in the logs, particularly Tq and Tt,
work when keep-alives are in use.  It seems these are measured
from the start of the connection, not the request?  Is there a
recommended way to get per-request timings in this situation?
Presumably Tt minus Tq gives response timing, though that seems
to mean request timing is not available.

Thanks for your help, and for a great piece of software.

Regards,
Josh