As I've mentioned before, I have some repositories with rather large
numbers of refs. The worst one has ~13 million refs, for a 1.6GB
packed-refs file. So I was saddened by this:

  $ time git.v2.0.0 rev-parse refs/heads/foo >/dev/null 2>&1
  real    0m6.840s
  user    0m6.404s
  sys     0m0.440s

  $ time git.v2.4.0-rc1 rev-parse refs/heads/foo >/dev/null 2>&1
  real    0m19.432s
  user    0m18.996s
  sys     0m0.456s

The command isn't important; what I'm really measuring is loading the
packed-refs file. And yes, of course this repository is absolutely
ridiculous. But the slowdowns here are linear with the number of refs.
So _every_ git command got a little bit slower, even in less crazy
repositories. We just didn't notice it as much.

Here are the numbers after this series:

  real    0m8.539s
  user    0m8.052s
  sys     0m0.496s

Much better, but I'm frustrated that they are still ~25% slower than the
original v2.0.0 numbers.

The main culprits seem to be d0f810f (which introduced some extra
expensive code for each ref) and my 10c497a, which switched from fgets()
to strbuf_getwholeline(). It turns out that strbuf_getwholeline() is
really slow, largely because it reads one character at a time via
fgetc(), taking the stdio lock and growing the buffer for every single
character.
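
For illustration, here is a rough standalone sketch of the direction the
first few patches take: lock the stream once per line and do the
per-character reads with getc_unlocked(). This is not git's actual
strbuf code; read_line() and its doubling buffer are made up for the
example.

  /*
   * Sketch only: take the stdio lock once per line and read characters
   * with getc_unlocked(), instead of paying an implicit lock/unlock
   * inside fgetc() for every single character.  Assumes POSIX
   * flockfile()/getc_unlocked().
   */
  #define _POSIX_C_SOURCE 200112L
  #include <stdio.h>
  #include <stdlib.h>

  /* Read one '\n'-terminated line into *buf; return -1 at EOF. */
  static long read_line(char **buf, size_t *alloc, FILE *fp)
  {
      size_t len = 0;
      int ch;

      flockfile(fp);
      while ((ch = getc_unlocked(fp)) != EOF) {
          if (len + 2 > *alloc) {
              *alloc = *alloc ? 2 * *alloc : 128;
              *buf = realloc(*buf, *alloc);
              if (!*buf)
                  exit(1);
          }
          (*buf)[len++] = ch;
          if (ch == '\n')
              break;
      }
      funlockfile(fp);

      if (!len && ch == EOF)
          return -1;
      (*buf)[len] = '\0';
      return (long)len;
  }

A caller would start with buf = NULL and alloc = 0, call this in a loop,
and free the buffer when done.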

There may be other problems lurking to account for the remaining ~25%.
Performance regressions are hard to pin down with a bisection when there
are several of them; if you stop at a random commit and it is 500ms
slower, it is hard to tell which problem is causing it.

Note that while these are regressions, they date back to v2.2.0 and
v2.2.2 respectively; they are not new in v2.4, so this can wait until
post-2.4.

  [1/6]: strbuf_getwholeline: use getc macro
  [2/6]: git-compat-util: add fallbacks for unlocked stdio
  [3/6]: strbuf_getwholeline: use getc_unlocked
  [4/6]: strbuf: add an optimized 1-character strbuf_grow
  [5/6]: t1430: add another refs-escape test
  [6/6]: refname_is_safe: avoid expensive normalize_path_copy call
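
The last patch is about making d0f810f's per-ref check cheap again. As a
hedged illustration of the general idea only (not the actual patch; the
function names below are invented for the example), one can reject
refnames that could escape the refs hierarchy by scanning the
"/"-separated components directly, with no allocation and no
normalize_path_copy() pass:

  /*
   * Sketch only: refuse empty, "." and ".." components (and trailing
   * slashes) in a single cheap scan instead of normalizing a copy of
   * the whole path.
   */
  #include <string.h>

  static int component_is_safe(const char *comp, size_t len)
  {
      if (!len)
          return 0;    /* empty component, e.g. "refs//x" */
      if (comp[0] == '.' &&
          (len == 1 || (len == 2 && comp[1] == '.')))
          return 0;    /* "." or ".." */
      return 1;
  }

  static int refname_looks_safe(const char *refname)
  {
      const char *p = refname;

      if (!*p)
          return 0;
      while (*p) {
          const char *slash = strchr(p, '/');
          size_t len = slash ? (size_t)(slash - p) : strlen(p);

          if (!component_is_safe(p, len))
              return 0;
          if (!slash)
              break;
          p = slash + 1;
          if (!*p)
              return 0;    /* trailing slash */
      }
      return 1;
  }

The actual patch keeps refname_is_safe()'s existing semantics; this is
just to show that the check can be done without copying and normalizing
the name.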

-Peff