On Wed, Nov 11, 2015 at 06:55:11PM -0800, Bryan Talbot wrote:
> On Wed, Nov 11, 2015 at 6:47 AM, Holger Just <hapr...@meine-er.de> wrote:
> 
> >
> > As a loadbalancer however, HAProxy should always return a proper HTTP
> > error if the request was even partially forwarded to the server. It's
> > probably fine to just close the connection if the connect timeout struck
> > and the request was never actually handled anywhere, but it should
> > definitely return a real HTTP error if it's the server timeout and a
> > backend server started doing anything with a request.
> >
> >
> This would be my preferred behavior and actually what I thought haproxy was
> already doing.

Guys, please read the HTTP RFC: you *can't* do that by default. HTTP/1
doesn't warn before closing an idle keep-alive connection. So if you send
a request over an existing connection and the server closes at the same
time, you get exactly the situation above. And you clearly don't want to
send a 502 or 504 to a client because it will be displayed in the browser.
Remember the issues we had with Chrome's preconnect and 408? That would be
the same. We had to silence the 408 on keep-alive connections to the client
so that the browser could replay. Here it's the same: by silently closing,
we're telling the browser it should retry, and we're making sure we don't
interfere between the browser and the server regarding the connection's
behaviour.
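
To make the browser-side recovery concrete, here is a minimal sketch in
Python (plain sockets, with example.com as a placeholder host; this is not
haproxy code) of what a client can do when an idle keep-alive connection
turns out to be dead: it simply replays the idempotent request on a fresh
connection, which it could not safely do if we had answered 502 or 504.

  import socket

  HOST, PORT = "example.com", 80      # placeholder endpoint for illustration

  def send_get(sock, path="/"):
      # Send a simple GET and read whatever the server answers.
      req = ("GET %s HTTP/1.1\r\n"
             "Host: %s\r\n"
             "Connection: keep-alive\r\n"
             "\r\n" % (path, HOST))
      sock.sendall(req.encode())
      return sock.recv(65536)         # b"" means the peer closed silently

  def get_with_replay(idle_sock, path="/"):
      # Try the pooled keep-alive connection first.
      try:
          data = send_get(idle_sock, path)
      except (BrokenPipeError, ConnectionResetError):
          data = b""
      if data:
          return data                 # the reused connection was still alive
      # The idle connection was closed underneath us: GET is idempotent,
      # so it is safe to replay it on a brand new connection.
      with socket.create_connection((HOST, PORT)) as fresh:
          return send_get(fresh, path)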

And it's the browser's responsibility to retry only safe requests (those
called "idempotent"). Normally it does this by selecting which requests may
be sent over an existing connection, which ensures that a non-idempotent
request cannot hit a closed connection. In web-services environments, to
address this, you often see requests sent in two parts: first the POST
headers are emitted with an Expect: 100-continue, and only once the server
responds is the body emitted (which proves that the connection is still
alive). Note by the way that this requires two round trips, so there's
little benefit to keeping persistent connections in such a case.
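
As a rough illustration (plain sockets again; api.example.com, the path and
the JSON body are made up for the example), the two-phase exchange looks
like this: nothing non-idempotent leaves the client until the server has
shown, with the interim 100 response, that the connection is still alive.

  import socket

  HOST, PORT = "api.example.com", 80          # placeholder endpoint
  BODY = b'{"hello": "world"}'                # made-up request body

  sock = socket.create_connection((HOST, PORT))
  headers = ("POST /things HTTP/1.1\r\n"
             "Host: %s\r\n"
             "Content-Type: application/json\r\n"
             "Content-Length: %d\r\n"
             "Expect: 100-continue\r\n"
             "\r\n" % (HOST, len(BODY)))
  sock.sendall(headers.encode())              # first round trip: headers only

  interim = sock.recv(4096)                   # wait for the interim response
  if interim.startswith(b"HTTP/1.1 100"):
      sock.sendall(BODY)                      # connection alive, send the body
      final = sock.recv(65536)                # second round trip: real response
  else:
      # The connection died or the server sent a final status before the
      # body went out, so nothing non-idempotent reached it and the request
      # can safely be retried elsewhere.
      final = interim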

You have the exact same situation between the browser and haproxy, or
between the browser and the server. When haproxy or the server closes a
keep-alive connection, the browser doesn't know whether the request was
being processed or not, and that's the reason why it (normally) doesn't
send unsafe requests over existing connections.
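
The selection logic itself fits in a few lines; the names below are purely
illustrative and not taken from any particular browser or client library:

  # Methods that HTTP defines as idempotent and therefore safe to replay.
  IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

  def pick_connection(method, idle_pool, open_new_connection):
      # Reuse an idle keep-alive connection only when the request could be
      # replayed after a silent close; anything else (typically POST) gets
      # a fresh connection so it never races with the far end closing it.
      if method in IDEMPOTENT_METHODS and idle_pool:
          return idle_pool.pop()
      return open_new_connection()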

I'm not opposed to having an option to make these errors verbose for
web-services environments, but be prepared to hear end users complain if
you enable this on browser-facing web sites, because your users will
unexpectedly get some 502 or 504 errors.

Yes, the persistent-connection model in HTTP/1 is far from perfect, and
that's one of the reasons it was changed in HTTP/2.

Willy

