On Fri, Jun 12, 2015 at 02:20:45PM -0400, Jeff King wrote:

> > > Notice GitHub prints "remote: fatal: pack exceeds maximum allowed
> > > size". That interrupted my "Writing objects" progress meter, and then
> > > git push just kept going and wrote really really fast (170 MiB/s!)
> > > until the entire pack was sent.
> > 
> > Sounds like it's writing to a closed fd, then. Which makes sense; I
> > think we should hang up the socket after writing the "fatal" message
> > above.
> 
> For reference, here's the patch implementing the max-size check on the
> server. It's on my long list of patches to clean up and send to the
> list; I never did this one because of the unpack-objects caveat
> mentioned below.
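
To make the idea concrete, here is a rough stand-alone sketch of what
such a check amounts to (this is not the actual receive-pack patch, and
copy_pack_limited()/report_fatal() are made-up names): count the bytes
as the pack streams in, and once the configured limit is crossed,
report the error and hang up.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void report_fatal(int err_fd, const char *msg)
{
	/* the real thing would send this over the protocol's error channel */
	dprintf(err_fd, "fatal: %s\n", msg);
}

static int copy_pack_limited(int in_fd, int out_fd, int err_fd,
			     uint64_t max_size)
{
	char buf[8192];
	uint64_t total = 0;
	ssize_t n;

	while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
		total += n;
		if (max_size && total > max_size) {
			report_fatal(err_fd, "pack exceeds maximum allowed size");
			/*
			 * Hang up right away; if we kept reading, the
			 * client would never notice and would keep
			 * streaming the rest of the pack.
			 */
			close(in_fd);
			return -1;
		}
		if (write(out_fd, buf, n) != n)
			return -1;
	}
	return n < 0 ? -1 : 0;
}

int main(void)
{
	/* toy usage: copy stdin to stdout, refusing anything over 1 MiB */
	return copy_pack_limited(0, 1, 2, 1024 * 1024) ? 1 : 0;
}

The close() is the part being discussed above: if the server keeps the
connection open after the error, the client has no way of noticing and
just keeps sending.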

I did a little more digging on this.

With the max-size patch, we seem to reliably notice the problem and die
of EPIPE (for testing you can set receive.maxsize to something small
like 1m, so you do not have to push gigabytes to trigger it).
Pushing to GitHub[1], though, sometimes dies and sometimes ends up
pushing the whole pack over the ssh session. It seems racy.

I've confirmed in both cases that the receive-pack process dies on our
server. So presumably the problem is in between; it might be an ssh
weirdness, or it might be a problem with our proxy layer. I'll open an
issue internally to look at that.
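
For what it's worth, the client-side behaviour is just ordinary
pipe/socket semantics: once the reader has really gone away, the next
write fails with EPIPE (or kills the process with SIGPIPE). A tiny
stand-alone demo of that, nothing git-specific:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd[2];
	char buf[4096] = { 0 };

	signal(SIGPIPE, SIG_IGN);	/* get EPIPE instead of being killed */
	if (pipe(fd) < 0)
		return 1;
	close(fd[0]);			/* the "server" hangs up */

	if (write(fd[1], buf, sizeof(buf)) < 0)
		printf("write failed: %s\n", strerror(errno));	/* EPIPE */
	return 0;
}

But if something in the middle, like ssh or a proxy, keeps its end open
and quietly discards the data, the writer never gets that error, which
would fit the racy behaviour above.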

-Peff

[1] I can tweak the max-size on a per-repo basis, which is how I did my
    testing without waiting for 2G to transfer. If anybody is interested
    in diagnosing the client side of this, I am happy to drop the
    max-size on a test repo for you. But AFAICT it is not a client
    problem at all.