> Index: modules/proxy/proxy_util.c
> +    /* Close a possible existing socket if we are told to do so */
> +    if (conn->close) {
> +        socket_cleanup(conn);
> +        conn->close = 0;
> +    }

Does this mean that sockets that should have been closed (via the
close flag) weren't actually getting closed before?  I.e. connections
in the pool still had an "open" socket as far as mod_proxy was
concerned even though we had set the close flag?
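
For the archives, here is how I'm reading the change -- a toy,
standalone model (nothing below is actual mod_proxy code; the struct,
the helper and the fd numbers are invented) of a pooled backend
connection being handed out while its close flag is set:

/* Toy model (not mod_proxy itself) of what I understand the hunk to add:
 * when a pooled backend connection is handed out with its close flag set,
 * the old socket has to be torn down before the connection is reused.
 * Names loosely mirror the patch; everything else is illustrative. */
#include <stdio.h>

typedef struct {
    int sock_fd;   /* -1 means "no socket open" */
    int close;     /* set when the socket must not be reused */
} proxy_conn;

/* stand-in for socket_cleanup(): drop the old socket */
static void socket_cleanup(proxy_conn *conn)
{
    printf("closing stale socket %d\n", conn->sock_fd);
    conn->sock_fd = -1;
}

/* stand-in for the connect path: reuse or (re)open the socket */
static void acquire_backend(proxy_conn *conn)
{
    /* the added hunk: honour the close flag before deciding to reuse */
    if (conn->close) {
        socket_cleanup(conn);
        conn->close = 0;
    }
    if (conn->sock_fd == -1) {
        conn->sock_fd = 42;        /* pretend we opened a new socket */
        printf("opened new socket %d\n", conn->sock_fd);
    }
    else {
        printf("reusing socket %d\n", conn->sock_fd);
    }
}

int main(void)
{
    proxy_conn conn = { 7, 1 };    /* pooled conn, marked for close */
    acquire_backend(&conn);
    return 0;
}

If that reading is right, then before the hunk the acquire path would
have fallen straight through to the reuse branch even with close set,
which is exactly what I'm asking about above.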

Why is the patch below necessary?  The comments indicate that if we
were to use an existing keepalive connection to the backend server and
it failed, then if the client connection was NOT keepalive we couldn't
report the failure to the client...but why?  If connecting to the
backend fails, whether over a keepalive or a non-keepalive backend
connection, why does it matter whether the client connection is
keepalive before we can send an error back?

> Index: modules/proxy/mod_proxy_http.c
> +    /*
> +     * In the case that we are handling a reverse proxy connection and this
> +     * is not a request that is coming over an already kept alive connection
> +     * with the client, do NOT reuse the connection to the backend, because
> +     * we cannot forward a failure to the client in this case as the client
> +     * does NOT expects this in this situation.
> +     * Yes, this creates a performance penalty.
> +     * XXX: Make this behaviour configurable: Either via an environment
> +     * variable or via a worker configuration directive.
> +     */
> +    if ((r->proxyreq == PROXYREQ_REVERSE) && (!c->keepalives)) {
> +        backend->close = 1;
> +    }
> +
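
To make sure I'm parsing the condition correctly (and to show the kind
of per-request override the XXX comment hints at), here is a standalone
toy model.  Apart from the PROXYREQ_REVERSE test taken from the hunk,
the structs and the "reuse_env" override are invented for illustration
and are not the real request_rec/conn_rec fields:

/* Toy model (again, not the real mod_proxy_http code) of the decision
 * the quoted hunk makes, plus a sketch of the per-request override the
 * XXX comment asks for. */
#include <stdio.h>
#include <string.h>

enum { PROXYREQ_NONE, PROXYREQ_PROXY, PROXYREQ_REVERSE };

typedef struct {
    int proxyreq;              /* kind of proxy request */
    const char *reuse_env;     /* hypothetical env override, may be NULL */
} request_rec_toy;

typedef struct {
    int keepalives;            /* >0 once the client conn is kept alive */
} conn_rec_toy;

/* Returns 1 if the backend connection should be closed, not reused. */
static int must_close_backend(const request_rec_toy *r, const conn_rec_toy *c)
{
    /* hypothetical opt-out: admin accepts the risk and keeps reusing */
    if (r->reuse_env && strcmp(r->reuse_env, "on") == 0) {
        return 0;
    }
    /* the hunk's rule: reverse proxy + client not keepalive => close */
    return (r->proxyreq == PROXYREQ_REVERSE) && !c->keepalives;
}

int main(void)
{
    request_rec_toy r = { PROXYREQ_REVERSE, NULL };
    conn_rec_toy first = { 0 }, later = { 1 };

    printf("first request on client conn: close backend = %d\n",
           must_close_backend(&r, &first));   /* 1: pay reconnect cost */
    printf("later keepalive request:      close backend = %d\n",
           must_close_backend(&r, &later));   /* 0: safe to reuse */
    return 0;
}

If I have that right, every first request on a client connection pays
the reconnect penalty while follow-up keepalive requests reuse the
backend connection as before -- which is why I'd like to understand the
failure-forwarding argument before that penalty goes in unconditionally.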
