On 01/04/2015 07:47 PM, Maxim Dounin wrote:
> Hello!

Hi, thanks for taking the time to review the patch!

> On Fri, Jan 02, 2015 at 04:03:58PM -0800, Shawn J. Goff wrote:

>> This patch adds a new directive to the upstream module:
>> propagate_connection_close. It puts a "Connection: close" header in
>> the downstream response if the upstream server sent one. This is
>> useful when Nginx is used as an HTTP/HTTPS endpoint with load
>> balancers in front of it. It allows the upstream server to close
>> connections in order to shed load.
> You may try to better elaborate on the problem you are trying to
> solve and why existing mechanisms do not work for you.

We have HTTP servers that sit behind TCP load balancers. The servers currently have a protocol for making sure long-lived connections are balanced among them. This involves closing specific connections at appropriate times; this causes the client to open a new connection, which will most likely be handled by another host. We now want those hosts to accept HTTPS connections in a well-understood, reliable way that has acceptable performance characteristics. We are hoping to use Nginx. We found that it breaks our load balancing because connections are not being closed.
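For concreteness, a minimal sketch of the kind of deployment described above (hostnames, ports, and certificate paths are illustrative, not our actual configuration):

```nginx
# nginx terminates TLS on the same host as the HTTP server and
# proxies to it locally; the TCP load balancer sits in front.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # illustrative paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # local upstream HTTP server
    }
}
```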

> As of now, nginx can:
> - Disable keepalive completely ("keepalive_timeout 0") on a
>   per-location basis (http://nginx.org/r/keepalive_timeout).

This will turn off keep-alive for every connection at that location, which is not what we want - we want to close specific existing connections.
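For reference, this is the existing per-location form (a sketch; the location path is made up):

```nginx
location /api/ {
    # Disables keep-alive for every client connection served by this
    # location - not just the specific connections the backend wants
    # to shed.
    keepalive_timeout 0;
    proxy_pass http://127.0.0.1:8080;
}
```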

> - Disable keepalive based on a number of requests served in a
>   connection (http://nginx.org/r/keepalive_requests).

This is not enough; we already have a way of choosing connections to close, and it's not based only on the number of requests (or on idle time, as with keepalive_timeout).
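In configuration terms, this knob looks like the following (sketch):

```nginx
# Closes every client connection after it has served N requests,
# regardless of which connections the backend's balancing protocol
# has actually chosen to close.
keepalive_requests 100;
```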

> - Disable keepalive in some specific conditions
>   (http://nginx.org/r/keepalive_disable).

This disables keep-alive per browser, which is very much a different thing, and not our problem.
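For reference, that directive takes browser names (this is its real syntax):

```nginx
# Disables keep-alive for particular user agents - orthogonal to
# backend-driven connection shedding.
keepalive_disable msie6 safari;
```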

> - Disable chunked transfer encoding on a per-location basis
>   (http://nginx.org/r/chunked_transfer_encoding).

This is also not going to help us.

> I think this addresses most, if not all, problems in the links you
> provided.  In particular, low enough keepalive_requests can be
> used to distribute load if needed.

It can, at the expense of higher latency. We are extremely latency-sensitive, so we've put a lot of work into ensuring our p99.9+ latencies are consistent and as low as possible. To that end, we have our own method of distributing the load that should really not be part of Nginx, as it's highly specific to our service.


>> I can submit a documentation patch if this patch is accepted.
> The approach taken in the patch looks wrong, for multiple reasons,
> in particular:
>
> - The upstream module isn't expected to contain its own
>   directives, except ones used to define upstream{} blocks.
>   Instead, there should be directives in modules implementing
>   protocols, like "proxy_foo_bar...".

I had considered putting it in upstream{}, but thought that having it in location{} would give more flexibility. I'd be fine putting it in upstream{} instead.

As far as putting it in a proxy_foo_bar module, I took a look through the modules here: http://nginx.org/en/docs/http/ngx_http_proxy_module.html . The only one I see that might be appropriate is proxy_pass; are there any others you were referring to?

I chose to put this in the upstream module because that is what strips out the Connection header and sets the connection_close field in the headers_in struct that is specific to the upstream module.


> - The "Connection: close" header is a hop-by-hop HTTP header, and
>   "propagating" it looks like a bad idea.  It mixes control of the
>   nginx-to-backend connection with control of the client-to-nginx
>   connection.  Instead, there should be a way to control these
>   connections separately.  It may be an option to add an X-Accel-...
>   header instead, similar to X-Accel-Limit-Rate.  Though this
>   approach has its own problems too, see below.

It is hop-by-hop, but we're not really using Nginx as a separate hop; that is just a byproduct. Nginx is on the same host as the upstream server; it's just there to take care of TLS for us.


> - It is not possible to control connections that aren't proxied
>   to backends but are handled locally - e.g., when using embedded
>   perl or even just serving static files.
>
> If there is a need to allow dynamic control of keepalive, I think
> that the proper way would be to extend the "keepalive_disable"
> directive with variables support.


How would this work? Should I set a variable depending on whether some X-Accel- header is present, then set keepalive_disable per request depending on that variable?
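To make the question concrete, I imagine something roughly like this (hypothetical: keepalive_disable does not currently accept variables, and the X-Accel-Close header name and "all" value are made up for illustration):

```nginx
# Map a made-up upstream response header to a made-up value that
# would mean "disable keep-alive for this request".
map $upstream_http_x_accel_close $disable_ka {
    default "";
    on      all;
}

# Hypothetical variable support in keepalive_disable.
keepalive_disable $disable_ka;
```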

_______________________________________________
nginx-devel mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx-devel