Junio C Hamano <gits...@pobox.com> writes:

> And the unexpected discrepancy is reported by find_symref() as
> fatal.  The server side dies, and somehow that fact is lost between
> the upload-pack process and the client and somebody in the middle
> (e.g. fastcgi interface or nginx webserver on the server side, or
> the remote-curl helper on the client side) keeps the "git fetch"
> process waiting.
>
> So there seem to be two issues.  
>
>  - Because of the unlocked read, find_symref() can observe an
>    inconsistent state.  Perhaps it should be updated not to die but
>    to retry, expecting that transient inconsistency will go away.
>
>  - That a fatal error in upload-pack is not reported back to the
>    client to make it exit is an obvious one, and even if we find a
>    way to keep this particular fatal error in find_symref() from
>    triggering, fatal errors elsewhere in the code can trigger the
>    same symptom.
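
For the former, the retry could be as simple as looping a few times
before giving up; a rough sketch (the helper names here are
hypothetical, not existing code):

/*
 * Rough sketch only: retry a transiently inconsistent symref lookup a
 * few times before reporting failure, instead of die()ing on the first
 * inconsistent read.  lookup_symref_target() and sleep_millisec() are
 * stand-ins for whatever the real lookup and backoff would be.
 */
static const char *lookup_symref_with_retry(const char *refname)
{
	int attempt;

	for (attempt = 0; attempt < 3; attempt++) {
		const char *target = lookup_symref_target(refname);

		if (target)
			return target;		/* consistent view, done */
		sleep_millisec(1 << attempt);	/* brief backoff, then retry */
	}
	return NULL;	/* let the caller report an error instead of dying */
}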

I wonder if the latter is solved by recent patch 296b847c0d
("remote-curl: don't hang when a server dies before any output",
2016-11-18) on the client side.

-- >8 --
From: David Turner <dtur...@twosigma.com>
Date: Fri, 18 Nov 2016 15:30:49 -0500
Subject: [PATCH] remote-curl: don't hang when a server dies before any output

In the event that an HTTP server closes the connection after giving a
200 but before giving any packets, we don't want to hang forever
waiting for a response that will never come.  Instead, we should die
immediately.

One case where this happens is when attempting to fetch a dangling
object by its object name.  In this case, the server dies before
sending any data.  Prior to this patch, fetch-pack would wait for
data from the server, and remote-curl would wait for fetch-pack,
causing a deadlock.

Despite this patch, there is other possible malformed input that could
cause the same deadlock (e.g. a half-finished pktline, or a pktline but
no trailing flush).  There are a few possible solutions to this:

1. Allowing remote-curl to tell fetch-pack about the EOF (so that
fetch-pack could know that no more data is coming until it says
something else).  This is tricky because an out-of-band signal would
be required, or the http response would have to be re-framed inside
another layer of pkt-line or something.

2. Make remote-curl understand some of the protocol.  It turns out
that in addition to understanding pkt-line, it would need to watch for
ack/nak.  This is somewhat fragile, as information about the protocol
would end up in two places.  Also, pkt-lines which are already at the
length limit would need special handling.

Both of these solutions would require a fair amount of work, whereas
this hack is easy and solves at least some of the problem.
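
For context, "understanding pkt-line" roughly means parsing the
four-hex-digit length header that precedes every payload; a minimal
sketch (illustrative only, parse_pkt_len() is not remote-curl code):

#include <stdlib.h>
#include <string.h>

/*
 * Illustrative sketch: a pkt-line starts with four hex digits giving
 * the total length including the header itself; "0000" is a flush
 * packet, and Git caps pkt-lines at 65520 bytes (LARGE_PACKET_MAX).
 */
static int parse_pkt_len(const char *hdr)
{
	char hex[5];
	char *end;
	long len;

	memcpy(hex, hdr, 4);
	hex[4] = '\0';
	len = strtol(hex, &end, 16);
	if (*end || len < 0 || len > 65520)
		return -1;		/* malformed or oversized header */
	if (len == 0)
		return 0;		/* flush-pkt */
	if (len < 4)
		return -1;		/* too short to even hold the header */
	return (int)len - 4;		/* number of payload bytes that follow */
}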

Still to do: it would be good to give a better error message
than "fatal: The remote end hung up unexpectedly".

Signed-off-by: David Turner <dtur...@twosigma.com>
Signed-off-by: Junio C Hamano <gits...@pobox.com>
---
 remote-curl.c               |  8 ++++++++
 t/t5551-http-fetch-smart.sh | 30 ++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/remote-curl.c b/remote-curl.c
index f14c41f4c0..ee4423659f 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -400,6 +400,7 @@ struct rpc_state {
        size_t pos;
        int in;
        int out;
+       int any_written;
        struct strbuf result;
        unsigned gzip_request : 1;
        unsigned initial_buffer : 1;
@@ -456,6 +457,8 @@ static size_t rpc_in(char *ptr, size_t eltsize,
 {
        size_t size = eltsize * nmemb;
        struct rpc_state *rpc = buffer_;
+       if (size)
+               rpc->any_written = 1;
        write_or_die(rpc->in, ptr, size);
        return size;
 }
@@ -659,6 +662,8 @@ static int post_rpc(struct rpc_state *rpc)
        curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, rpc_in);
        curl_easy_setopt(slot->curl, CURLOPT_FILE, rpc);
 
+
+       rpc->any_written = 0;
        err = run_slot(slot, NULL);
        if (err == HTTP_REAUTH && !large_request) {
                credential_fill(&http_auth);
@@ -667,6 +672,9 @@ static int post_rpc(struct rpc_state *rpc)
        if (err != HTTP_OK)
                err = -1;
 
+       if (!rpc->any_written)
+               err = -1;
+
        curl_slist_free_all(headers);
        free(gzip_body);
        return err;
diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh
index 1ec5b2747a..43665ab4a8 100755
--- a/t/t5551-http-fetch-smart.sh
+++ b/t/t5551-http-fetch-smart.sh
@@ -276,6 +276,36 @@ test_expect_success 'large fetch-pack requests can be split across POSTs' '
        test_line_count = 2 posts
 '
 
+test_expect_success 'test allowreachablesha1inwant' '
+       test_when_finished "rm -rf test_reachable.git" &&
+       server="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+       master_sha=$(git -C "$server" rev-parse refs/heads/master) &&
+       git -C "$server" config uploadpack.allowreachablesha1inwant 1 &&
+
+       git init --bare test_reachable.git &&
+       git -C test_reachable.git remote add origin "$HTTPD_URL/smart/repo.git" &&
+       git -C test_reachable.git fetch origin "$master_sha"
+'
+
+test_expect_success 'test allowreachablesha1inwant with unreachable' '
+       test_when_finished "rm -rf test_reachable.git; git reset --hard $(git rev-parse HEAD)" &&
+
+       #create unreachable sha
+       echo content >file2 &&
+       git add file2 &&
+       git commit -m two &&
+       git push public HEAD:refs/heads/doomed &&
+       git push public :refs/heads/doomed &&
+
+       server="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
+       master_sha=$(git -C "$server" rev-parse refs/heads/master) &&
+       git -C "$server" config uploadpack.allowreachablesha1inwant 1 &&
+
+       git init --bare test_reachable.git &&
+       git -C test_reachable.git remote add origin "$HTTPD_URL/smart/repo.git" &&
+       test_must_fail git -C test_reachable.git fetch origin "$(git rev-parse HEAD)"
+'
+
 test_expect_success EXPENSIVE 'http can handle enormous ref negotiation' '
        (
                cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
-- 
2.11.0-442-g0c85c54a77