On Tue, Jun 09, 2015 at 08:46:24PM -0700, Shawn Pearce wrote:

> > This patch introduces a "quick" flag to has_sha1_file which
> > callers can use when they would prefer high performance at
> > the cost of false negatives during repacks. There may be
> > other code paths that can use this, but the index-pack one
> > is the most obviously critical, so we'll start with
> > switching that one.
> 
> Hilarious. We did this in JGit back in ... uhm before 2009. :)
> 
> But it's Java. So of course we had to do optimizations.

This is actually how Git did it up until v1.8.4.2, in 2013. I changed it
then because the old way was racy (and git could flakily report refs as
broken and skip them during repacks!).

If you are doing it the "quick" way everywhere in JGit, you may want to
reexamine the possibility of races. :)
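
For reference, the race the full path protects against looks roughly
like this (an illustrative interleaving, not actual code):

    indexing process                  concurrent repack
    ----------------                  -----------------
    find_pack_entry(sha1) -> miss
                                      write new pack containing sha1
                                      prune the loose copy of sha1
    has_loose_object(sha1) -> miss
    => false negative

The non-quick path recovers because reprepare_packed_git() re-reads
the pack directory, so the final find_pack_entry() sees the new pack.
The "quick" flag trades that guarantee for skipping the extra
re-scan.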

> > @@ -3169,6 +3169,8 @@ int has_sha1_file(const unsigned char *sha1)
> >                 return 1;
> >         if (has_loose_object(sha1))
> >                 return 1;
> > +       if (flags & HAS_SHA1_QUICK)
> > +               return 0;
> >         reprepare_packed_git();
> >         return find_pack_entry(sha1, &e);
> 
> Something else we do is readdir() over the loose objects and store
> them in a map in memory. That way we avoid stat() calls during that
> has_loose_object() path. This is apparently a win enough of the time
> that we always do that when receiving a pack over the wire (client or
> server).

Yeah, I thought about that while writing this. It would be a win as long
as you had a small number of loose objects and were going to make a
large number of requests (otherwise you pay for the full traversal even
though nobody is going to look most of it up). According to perf,
though, loose object lookups are not a large expenditure[1].
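
For illustration, the scan would look something like this (a sketch
only; loose_cache_add() is a hypothetical stand-in for inserting into
an in-memory set, not actual JGit or Git code):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical insert; a real version would parse the hex name back
 * into a binary sha1 and store it in a hash set for O(1) lookups. */
static void loose_cache_add(unsigned subdir, const char *rest)
{
        printf("cached %02x%s\n", subdir, rest);
}

/*
 * Walk objects/00 through objects/ff once, recording every loose
 * object, so that later existence checks hit the in-memory set
 * instead of issuing a stat() per lookup.
 */
static void scan_loose_objects(const char *objdir)
{
        unsigned subdir;

        for (subdir = 0; subdir < 256; subdir++) {
                char path[PATH_MAX];
                DIR *dir;
                struct dirent *de;

                snprintf(path, sizeof(path), "%s/%02x", objdir, subdir);
                dir = opendir(path);
                if (!dir)
                        continue;
                while ((de = readdir(dir)) != NULL) {
                        /* loose objects are named by the last 38 hex
                         * digits of their sha1 */
                        if (strlen(de->d_name) == 38)
                                loose_cache_add(subdir, de->d_name);
                }
                closedir(dir);
        }
}

Whether it wins depends on exactly the tradeoff above: 256 readdir()
calls up front versus one stat() saved per lookup.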

I'm also hesitant to go that route because it's basically caching, which
introduces new opportunities for race conditions when the cache is stale
(we do the same thing with loose refs, and we have indeed run into races
there).

-Peff

[1] As measured mostly by __d_lookup_rcu calls. Of course, my patch
    gives a 5% improvement over the original, and we were not spending
    5% of the time there originally, so the savings cannot come from
    the lookups alone. I suspect part of the problem is that we do the
    lookup under a lock, so the longer we spend there, the more
    contention we have between threads, and the less parallelism we
    get. Indeed, I just did a quick repeat of my tests with
    pack.threads=1, and the size of the improvement shrank.