On 05.04.2015 at 20:52, Jeff King wrote:
On Sun, Apr 05, 2015 at 03:41:39PM +0200, René Scharfe wrote:
I wonder if pluggable reference backends could help here. Storing refs
in a database table indexed by refname should simplify things.
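As a rough illustration of what a table keyed by refname buys you — a single indexed lookup instead of parsing the whole packed-refs file — here is a sketch in Python with SQLite. The schema and names are made up for the sketch, not a proposed design:

```python
import sqlite3

# Hypothetical schema: one row per ref, keyed by refname.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE refs (refname TEXT PRIMARY KEY, objectname TEXT NOT NULL)")

# Storing a ref is a single indexed insert...
db.execute("INSERT INTO refs VALUES (?, ?)",
           ("refs/heads/master", "0123456789abcdef0123456789abcdef01234567"))

# ...and a lookup hits the primary-key index instead of reading
# every ref into an in-memory cache first.
row = db.execute("SELECT objectname FROM refs WHERE refname = ?",
                 ("refs/heads/master",)).fetchone()
print(row[0])  # prints the stored object name
```

Transactions (BEGIN/COMMIT) would also give the atomicity discussed below for free, which is part of the appeal.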
On 05.04.2015 at 20:59, Jeff King wrote:
Still, the numbers are promising. Here are comparisons
against for-each-ref on torvalds/linux, which has a 218M
packed-refs file:
$ time git for-each-ref \
        --format='%(objectname) %(refname)' \
        refs/remotes/2325298/ |
  wc -c
On Mon, Apr 06, 2015 at 12:39:15AM +0200, René Scharfe wrote:
...this. I think that effort might be better spent on a ref storage
format that's more efficient, simpler (with respect to subtle races and
such), and could provide other features (e.g., transactional atomicity).
Such as a DBMS?
On 05.04.2015 at 03:06, Jeff King wrote:
As I've mentioned before, I have some repositories with rather large
numbers of refs. The worst one has ~13 million refs, for a 1.6GB
packed-refs file. So I was saddened by this:
$ time git.v2.0.0 rev-parse refs/heads/foo >/dev/null 2>&1
real
On Sun, Apr 05, 2015 at 02:52:59PM -0400, Jeff King wrote:
Right now we parse all of the packed-refs file into an in-memory cache,
and then do single lookups from that cache. Doing an mmap() and a binary
search is way faster (and costs less memory) for doing individual
lookups. It relies on
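A minimal sketch of that mmap-and-binary-search lookup, in Python rather than C. It assumes the packed-refs layout of sorted "objectname SP refname" lines; the `lookup` helper and the demo object names are mine, not git's:

```python
import mmap
import os
import tempfile

def lookup(m, refname):
    # Binary search over a sorted, newline-delimited
    # "objectname SP refname" file mapped into memory.
    want = refname.encode()
    lo, hi = 0, len(m)            # invariant: lo is the start of a line
    while lo < hi:
        mid = (lo + hi) // 2
        nl = m.rfind(b"\n", lo, mid)
        start = nl + 1 if nl != -1 else lo   # start of the line holding mid
        end = m.find(b"\n", start, hi)
        if end == -1:
            end = hi
        sha, _, name = m[start:end].partition(b" ")
        if name == want:
            return sha.decode()
        if name < want:
            lo = end + 1          # continue after this line
        else:
            hi = start
    return None

# Demo file in sorted packed-refs-like order (fake object names).
entries = [
    ("1" * 40, "refs/heads/a"),
    ("2" * 40, "refs/heads/master"),
    ("3" * 40, "refs/tags/v1.0"),
]
with tempfile.NamedTemporaryFile(delete=False) as f:
    for sha, name in entries:
        f.write(f"{sha} {name}\n".encode())
    path = f.name
with open(path, "rb") as fh:
    with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as m:
        print(lookup(m, "refs/heads/master"))   # the '2...2' object name
        print(lookup(m, "refs/heads/missing"))  # None
os.unlink(path)
```

Each probe touches only the one line around the midpoint, so a single lookup reads O(log n) pages of the file rather than all of it — which is where the speed and memory win over the full in-memory cache comes from.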
On Sun, Apr 05, 2015 at 03:41:39PM +0200, René Scharfe wrote:
The main culprits seem to be d0f810f (which introduced some extra
expensive code for each ref) and my 10c497a, which switched from fgets()
to strbuf_getwholeline. It turns out that strbuf_getwholeline is really
slow.
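The gist of why one line-reading primitive can be so much slower than another: fgets() fills a buffer in one library call, while a getc()-style loop pays per-call overhead for every single character. A rough sketch of the two shapes in Python rather than C (the function names are mine, purely illustrative):

```python
import io

def read_line_per_char(f):
    # One read() call per byte, like a getc() loop: correct, but the
    # per-call overhead adds up over millions of lines.
    buf = bytearray()
    while True:
        c = f.read(1)
        if not c:
            break
        buf += c
        if c == b"\n":
            break
    return bytes(buf)

def read_line_buffered(f):
    # One buffered call per line, like fgets()/getdelim().
    return f.readline()

data = b"1111 refs/heads/a\n2222 refs/heads/b\n"
assert read_line_per_char(io.BytesIO(data)) == read_line_buffered(io.BytesIO(data))
```

Both return exactly the same line; only the number of calls differs, which is why swapping the line reader shows up so clearly on a 1.6GB packed-refs file.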
As I've mentioned before, I have some repositories with rather large
numbers of refs. The worst one has ~13 million refs, for a 1.6GB
packed-refs file. So I was saddened by this:
$ time git.v2.0.0 rev-parse refs/heads/foo >/dev/null 2>&1
real    0m6.840s
user    0m6.404s
sys     0m0.440s