On Wed, Jun 10, 2015 at 7:00 AM, Jeff King <p...@peff.net> wrote:
> On Tue, Jun 09, 2015 at 08:46:24PM -0700, Shawn Pearce wrote:
>
>> > This patch introduces a "quick" flag to has_sha1_file which
>> > callers can use when they would prefer high performance at
>> > the cost of false negatives during repacks. There may be
>> > other code paths that can use this, but the index-pack one
>> > is the most obviously critical, so we'll start with
>> > switching that one.
>>
>> Hilarious. We did this in JGit back in ... uhm before 2009. :)
>>
>> But it's Java. So of course we had to do optimizations.
>
> This is actually how Git did it up until v1.8.4.2, in 2013. I changed it
> then because the old way was racy (and git could flakily report refs as
> broken and skip them during repacks!).
>
> If you are doing it the "quick" way everywhere in JGit, you may want to
> reexamine the possibility for races. :)

Correct; fortunately we are not that naive.

JGit always does the reprepare_packed_git() + retry search on a miss.
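In Java terms that default path looks roughly like this (illustrative
names only, not JGit's actual API):

class ObjectDirectory {
	boolean has(String sha1) {
		if (searchPacks(sha1) || searchLoose(sha1))
			return true;
		rescanPackList();         // our equivalent of reprepare_packed_git()
		return searchPacks(sha1); // retry before reporting a miss
	}

	boolean searchPacks(String sha1) { /* scan known pack indexes */ return false; }
	boolean searchLoose(String sha1) { /* stat objects/xx/yyyy... */ return false; }
	void rescanPackList() { /* re-read the objects/pack listing */ }
}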

But we have a code path to bypass that inside critical loops like our
equivalent of index-pack pulling off the wire. We snapshot the object
tree at the start of the operation before we read in the pack header,
and then require that the incoming pack be completed with that
snapshot. Since the snapshot was taken after ref
negotiation/advertisement, we should be at least as current as the
refs that were exchanged on the wire.
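The receive side is shaped roughly like the sketch below. The class and
method names are made up for illustration; this shows the flow, not
JGit's actual API:

/*
 * Snapshot of the object database, captured after ref
 * advertisement/negotiation but before the pack header is read.
 */
interface PackSnapshot {
	boolean contains(String sha1); // searches only packs/loose seen at snapshot time
}

class PackReceiver {
	private final PackSnapshot snapshot;

	PackReceiver(PackSnapshot snapshot) {
		this.snapshot = snapshot;
	}

	/* Called while parsing the pack off the wire. */
	boolean alreadyHave(String sha1) {
		// A miss is final for this one receive operation; that is safe
		// because the snapshot is at least as new as the refs the other
		// side advertised.
		return snapshot.contains(sha1);
	}
}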

>> > @@ -3169,6 +3169,8 @@ int has_sha1_file(const unsigned char *sha1)
>> >                 return 1;
>> >         if (has_loose_object(sha1))
>> >                 return 1;
>> > +       if (flags & HAS_SHA1_QUICK)
>> > +               return 0;
>> >         reprepare_packed_git();
>> >         return find_pack_entry(sha1, &e);
>>
>> Something else we do is readdir() over the loose objects and store
>> them in a map in memory. That way we avoid stat() calls during that
>> has_loose_object() path. This is apparently a win enough of the time
>> that we always do that when receiving a pack over the wire (client or
>> server).
>
> Yeah, I thought about that while writing this. It would be a win as long
> as you have a small number of loose objects and were going to make a
> large number of requests (otherwise you are traversing even though
> nobody is going to look it up). According to perf, though, loose object
> lookups are not a large expenditure[1].

Interesting. We were getting hurt by this in JGit. For most
repositories it was cheaper to issue 256 readdir() calls and build a
set of the SHA-1s we found. I think we even just hit every fan-out
directory 00..ff directly; we don't bother with a readdir() on
$GIT_DIR/objects itself.
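To make that concrete, here is a rough, self-contained sketch of the
scan (plain Java, not JGit's actual code):

import java.io.File;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class LooseObjectScan {
	/* Visit every fan-out directory 00..ff and collect loose object names. */
	static Set<String> scanLoose(File objectsDir) {
		Set<String> found = new HashSet<>();
		for (int i = 0; i < 256; i++) {
			String fanout = String.format(Locale.ROOT, "%02x", i);
			String[] names = new File(objectsDir, fanout).list();
			if (names == null)
				continue; // missing or unreadable: treat as empty
			for (String rest : names)
				if (rest.length() == 38) // 2 + 38 = 40 hex digits of SHA-1
					found.add(fanout + rest);
		}
		return found;
	}

	public static void main(String[] args) {
		Set<String> loose = scanLoose(new File(args[0], "objects"));
		System.out.println(loose.size() + " loose objects");
	}
}

After that, the has_loose_object() equivalent is a plain set lookup for
the rest of the operation.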

> I'm also hesitant to go that route because it's basically caching, which
> introduces new opportunities for race conditions when the cache is stale
> (we do the same thing with loose refs, and we have indeed run into races
> there).

Yes. But see above: we do this only after we snapshot the packs, and
only after the ref negotiation, and only for the duration of parsing
the pack off the wire. So we should never have a data race.

Since JGit is multi-threaded, this cache is also effectively
thread-local. It's never shared across threads.
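So the scoping is roughly this (again only illustrative, reusing the
scanLoose() sketch from above):

class ReceiveOperation implements Runnable {
	private final java.io.File objectsDir;

	ReceiveOperation(java.io.File objectsDir) {
		this.objectsDir = objectsDir;
	}

	@Override
	public void run() {
		// Built once at the start of this operation, consulted while
		// parsing the incoming pack, discarded when we return. Nothing
		// is shared across threads, so nothing can go stale under us.
		java.util.Set<String> loose = LooseObjectScan.scanLoose(objectsDir);
		// ... parse the pack, calling loose.contains(sha1) as needed ...
	}
}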

> [1] As measured mostly by __d_lookup_rcu calls. Of course, my patch
>     gives a 5% improvement over the original, and we were not spending
>     5% of the time there originally. I suspect part of the problem is
>     that we do the lookup under a lock, so the longer we spend there,
>     the more contention we have between threads, and the less
>     parallelism. Indeed, I just did a quick repeat of my tests with
>     pack.threads=1, and the size of the improvement shrinks.

Interesting. Yeah, fine-grained locking can hurt parallel execution... :(