Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Tue, 19 Apr 2005, Chris Mason wrote:
 
 I'll finish off the patch once you ok the basics below.  My current code
 works like this:

Chris, before you do anything further, let me re-consider.

Assuming that the real cost of write-tree is the compression (and I think
it is), I really suspect that this ends up being the death-knell to my
"use the sha1 of the _compressed_ object" approach. I thought it was
clever, and I was ready to ignore the other arguments against it, but if
it turns out that we can speed up write-tree a lot by just doing the SHA1
on the uncompressed data, and noticing that we already have the tree
before we need to compress it and write it out, then that may be a good
enough reason for me to just admit that I was wrong about that decision.

So I'll see if I can turn the current fsck into a "convert into
uncompressed format", and do a nice clean format conversion.

Most of git is very format-agnostic, so that shouldn't be that painful. 
Knock wood.

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread H. Peter Anvin
Linus Torvalds wrote:
 So I'll see if I can turn the current fsck into a "convert into
 uncompressed format", and do a nice clean format conversion.

Just let me know what you want to do, and I can trivially change the 
conversion scripts I've already written to do what you want.

-hpa


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Ingo Molnar

* Linus Torvalds [EMAIL PROTECTED] wrote:

 So to convert your old git setup to a new git setup, do the following:
 [...]

did this for two repositories (git and kernel-git), it works as 
advertised.

Ingo


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Jon Seymour
On 4/20/05, Linus Torvalds [EMAIL PROTECTED] wrote:
 
 
 I converted my git archives (kernel and git itself) to do the SHA1 hash
 _before_ the compression phase.
 

Linus,
 
 Am I correct to understand that with this change, all the objects in
the database are still being compressed (so no net performance benefit
now), but by doing the SHA1 calculations before compression you are
keeping open the possibility that at some point in the future you may
use a different compression technique (including none at all) for some
or all of the objects?

jon.

[ reposted to the list, because the original post bounced due to rich text
formatting ]


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Martin Uecker
On Wed, Apr 20, 2005 at 10:11:10PM +1000, Jon Seymour wrote:
 On 4/20/05, Linus Torvalds [EMAIL PROTECTED] wrote:
  
  
  I converted my git archives (kernel and git itself) to do the SHA1 hash
  _before_ the compression phase.
  
 
 Linus,
  
  Am I correct to understand that with this change, all the objects in
 the database are still being compressed (so no net performance benefit
 now), but by doing the SHA1 calculations before compression you are
 keeping open the possibility that at some point in the future you may
 use a different compression technique (including none at all) for some
 or all of the objects?

The main point is not about trying different compression
techniques, but that you don't need to compress at all just
to calculate the hash of some data (for example, to know
whether it is unchanged).

There are still some other design decisions I am worried
about:

The database is stored as a collection of files in the
underlying file system. Because of the random nature of
the hashes, this leads to a horrible amount of seeking
for any operation that walks the logical structure of a
tree stored in the database.

Why not store all objects linearized in one or more
flat files?
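
For illustration, a linearized store along those lines could be little
more than an append-only data file plus an index mapping hashes to
offsets. The sketch below is purely hypothetical (all names invented);
it is not how git stores objects.

#include <stdio.h>

/* Hypothetical sketch of a linearized object store: objects are
 * appended to one flat data file, and a separate index records where
 * each hash starts.  Not git's on-disk format. */
struct flat_index_entry {
	unsigned char sha1[20];
	unsigned long offset;	/* byte offset of the object in the data file */
	unsigned long size;	/* length of the stored object */
};

static int append_object(FILE *data, const void *buf, unsigned long size,
                         struct flat_index_entry *ent)
{
	if (fseek(data, 0, SEEK_END) < 0)
		return -1;
	ent->offset = ftell(data);
	ent->size = size;
	if (fwrite(buf, 1, size, data) != size)
		return -1;
	return 0;
}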


The other thing I don't like is the use of a sha1
for a complete file. Switching to some kind of hash
tree would make it possible to introduce chunks later.
This has two advantages:

It would allow git to scale to repositories of large
binary files. And it would allow a very cool content
transport algorithm to be built for those repositories.
This algorithm could combine all the advantages of
bittorrent and rsync (without the cpu load).

And it would allow trivial merging of patches which
apply to different chunks of a file, in exactly the same
way as merging changesets which apply to different
files in a tree.
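
As a rough illustration of the hash-tree idea (not anything that exists
in git), the file's identifier could be a hash over per-chunk hashes
rather than over the raw bytes. The chunk size and function names below
are invented for the sketch; it assumes OpenSSL's SHA1 routines.

#include <openssl/sha.h>
#include <stddef.h>

#define CHUNK_SIZE (64 * 1024)	/* arbitrary chunk size for the sketch */

/* Two-level hash tree: hash each fixed-size chunk, then hash the
 * concatenated chunk hashes to get the file identifier. */
static void hash_tree_id(const unsigned char *data, size_t len,
                         unsigned char id[20])
{
	SHA_CTX root;
	unsigned char chunk_sha[20];
	size_t off;

	SHA1_Init(&root);
	for (off = 0; off < len; off += CHUNK_SIZE) {
		size_t n = len - off < CHUNK_SIZE ? len - off : CHUNK_SIZE;
		SHA1(data + off, n, chunk_sha);		/* leaf hash */
		SHA1_Update(&root, chunk_sha, 20);	/* feed into the root */
	}
	SHA1_Final(id, &root);
}

Two files that share chunks then share leaf hashes, which is what would
make chunk-level transport and deduplication possible.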


Martin

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.





Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Morten Welinder
On 4/20/05, Martin Uecker [EMAIL PROTECTED] wrote:

 The storage method of the database of a collection of
 files in the underlying file system. Because of the
 random nature of the hashes this leads to a horrible
 amount of seeking for all operations which walk the
 logical structure of some tree stored in the database.
 
 Why not store all objects linearized in one or more
 flat file?

I've been thinking along the same lines, and it doesn't look too hard
to factor out the back end, i.e., provide methods to
read/write/stat/remove/mmap/whatever objects.  (Note the mmap there.
Apart from that, the backend could be an http connection or worse.)
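
A hedged sketch of what such a pluggable back end could look like as a C
interface; the struct and hook names are invented for illustration and do
not exist in git.

/* Table of operations an object-store back end could provide, so the
 * rest of git never touches the filesystem directly.  A plain-directory
 * backend, a flat-file backend, or an HTTP backend would each fill in
 * these hooks. */
struct object_backend {
	int   (*has)(const unsigned char sha1[20]);
	void *(*map)(const unsigned char sha1[20], unsigned long *size);
	int   (*write)(const unsigned char sha1[20],
	               const void *buf, unsigned long size);
	int   (*remove)(const unsigned char sha1[20]);
};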

It will, however, seriously break rsync as a transport for people who
commit to their trees, so you would need a replacement in place before
you could present it as an alternative.

Morten


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Jon Seymour
 The main point is not about trying different compression
 techniques but that you don't need to compress at all just
 to calculate the hash of some data. (to know if it is
 unchanged for example)
 

Ah, ok, I didn't understand that extra compression passes were being
performed for that reason. Thanks for the explanation.

jon.


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread David Woodhouse
On Wed, 2005-04-20 at 02:08 -0700, Linus Torvalds wrote:
 I converted my git archives (kernel and git itself) to do the SHA1
 hash _before_ the compression phase.

I'm happy to see that -- because I'm going to be asking you to make
another change which will also require a simple repository conversion. 

We are working on getting the complete history since 2.4.0 into git
form. When it's done and checked (which should be RSN) I'd like you to
edit the first commit object in your tree -- the import of 2.6.12-rc2,
and give it a parent. That parent will be the sha1 hash of the
2.6.12-rc2 commit in the newly provided history; adding it will of
course change the sha1 hash of your first commit and of all subsequent
commits. We'll provide a tool to do that, of course.

The history itself will be absent from your tree. Obviously we'll need
to make sure that the tools can cope with an absent parent, probably
by just treating that case as if no parent exists. That won't be hard,
and it'll be useful in the general case too, for people who want to
prune their trees of unwanted older history. That history won't be lost
or undone -- it'll just be archived elsewhere.

The reason for doing this is that without it, we can't ever have a full
history actually connected to the current trees. There'd always be a
break at 2.6.12-rc2, at which point you'd have to switch to an entirely
different git repository.

-- 
dwmw2



Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, Jon Seymour wrote:
 
 Am I correct to understand that with this change, all the objects in the 
 database are still being compressed (so no net performance benefit), but by 
 doing the SHA1 calculations before compression you are keeping open the 
 possibility that at some point in the future you may use a different 
 compression technique (including none at all) for some or all of the 
 objects?

Correct. There is zero performance benefit to this right now, and the only 
reason for doing it is because it will allow other things to happen.

Note that the other things include:
 - change the compression format to make it cheaper
 - _keep_ the same compression format, but notice that we already have an 
   object by looking at the uncompressed one.

I'm actually leaning towards just #2 at this time. I like how things
compress, and it sure is simple. The fact that we use the equivalent of
-9 may be expensive, but the thing is, we don't actually write new files
that often, and it's just CPU time (no seeking on disk or anything like
that), which tends to get cheaper over time.

So I suspect that once I optimize the tree writing to notice "oh, I
already have this tree object", and thus build it up but never compress
it, write-tree performance will go up _hugely_ even without removing the
compression. Because most of the time, write-tree actually only needs to
create a couple of small new tree objects.
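
A minimal sketch of that check in C (function name and object-path layout
are assumptions for illustration, not the actual git code): hash the
uncompressed buffer first, and only compress and write if no object file
with that name exists yet.

#include <limits.h>
#include <openssl/sha.h>
#include <stdio.h>
#include <unistd.h>

/* Hash the *uncompressed* buffer, then check whether an object with
 * that name is already in the object store before paying for
 * compression.  The "objects/xx/<38 hex chars>" layout is assumed. */
static int object_already_present(const void *buf, unsigned long len,
                                  unsigned char sha1[20])
{
	char hex[41], path[PATH_MAX];
	int i;

	SHA1(buf, len, sha1);			/* hash of the raw data */
	for (i = 0; i < 20; i++)
		sprintf(hex + 2 * i, "%02x", sha1[i]);

	snprintf(path, sizeof(path), "objects/%.2s/%s", hex, hex + 2);
	return access(path, R_OK) == 0;		/* hit: skip deflate and write */
}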

Linus


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Martin Uecker
On Wed, Apr 20, 2005 at 10:30:15AM -0400, C. Scott Ananian wrote:

Hi,

Your code looks pretty cool, thank you!

 On Wed, 20 Apr 2005, Martin Uecker wrote:
 
 The other thing I don't like is the use of a sha1
 for a complete file. Switching to some kind of hash
 tree would allow to introduce chunks later. This has
 two advantages:
 
 You can (and my code demonstrates/will demonstrate) still use a whole-file 
 hash to use chunking.  With content prefixes, this takes O(N ln M) time 
 (where N is the file size and M is the number of chunks) to compute all 
 hashes; if subtrees can share the same prefix, then you can do this in 
 O(N) time (ie, as fast as possible, modulo a constant factor, which is 
 '2').  You don't *need* internal hashing functions.

I don't understand this paragraph. What is an internal
hash function? Your code seems to do exactly what I want.
The hashes are computed recursively as in a hash tree
with O(N ln N). The only difference between your design
and a design based on a conventional (binary) hash tree
seems to be that data is stored in the intermediate nodes
too.

 It would allow git to scale to repositories of large
 binary files. And it would allow to build a very cool
 content transport algorithm for those repositories.
 This algorithm could combine all the advantages of
 bittorrent and rsync (without the cpu load).
 
 Yes, the big benefit of internal hashing is that it lets you check 
 validity of a chunk w/o having the entire file available.  I'm not sure 
 that's terribly useful in this case.  [And, if it is, then it can 
 obviously be done w/ other means.]

If I don't miss anything essential, you can validate
each treap piece at the moment you get it from the
network with its SHA1 hash and then proceed with
downloading the prefix and suffix tree (in parallel
if you have more than one peer a la bittorrent).

 And it would allow trivial merging of patches which
 apply to different chunks of a file in exact the same
 way as merging changesets which apply to different
 files in a tree.
 
 I'm not sure anyone should be looking at chunks.  To me, at least, they 
 are an object-store-implementation detail only.  For merging, etc, we 
 should be looking at whole files, or (better) the whole repository.
 The chunking algorithm is guaranteed not to respect semantic boundaries 
 (for *some* semantics of *some* file).

You might be right. I just wanted to point out this
possibility because it would make it possible to avoid calling
external merge code for a lot of trivial merges.

bye,
Martin



-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.





Re: [PATCH] write-tree performance problems

2005-04-20 Thread Chris Mason
On Wednesday 20 April 2005 02:43, Linus Torvalds wrote:
 On Tue, 19 Apr 2005, Chris Mason wrote:
  I'll finish off the patch once you ok the basics below.  My current code
  works like this:

 Chris, before you do anything further, let me re-consider.

 Assuming that the real cost of write-tree is the compression (and I think
 it is), I really suspect that this ends up being the death-knell to my
 use the sha1 of the _compressed_ object approach. 

Thanks for looking at this.  Your new tree is faster: it gets the "commit
100 patches" time down from 1m5s to 50s.  I've attached my patch from last
night, which is mostly a rough guess at the changes we would need; I
haven't validated or cleaned things up.

With the basic changes I described before, the 100 patch time only goes
down to 40s, certainly not fast enough to justify the changes.  In this
case, the bulk of the extra time comes from write-tree writing the index
file, so I split write-tree.c up into libwrite-tree.c and created
update-cache --write-tree.

This gets our time back down to 21s.

The attached patch is not against your latest revs.  After updating I would 
need to sprinkle a few S_ISDIR checks into diff-cache.c and checkout-cache.c, 
but the changes should be small.

-chris
Index: Makefile
===
--- dbeacafeb442bcfd39dfdc90c360d47d4215c185/Makefile  (mode:100644 sha1:6a04941a337ec50da06cf4cf52aa58f3b1435776)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/Makefile  (mode:100644 sha1:2ba6d49196e8a2335cfcd77ec0dbe9cda3e402dd)
@@ -29,7 +29,7 @@
 
 VERSION= VERSION
 
-LIB_OBJS=read-cache.o sha1_file.o usage.o object.o commit.o tree.o blob.o
+LIB_OBJS=read-cache.o sha1_file.o usage.o object.o commit.o tree.o blob.o libwrite-tree.o
 LIB_FILE=libgit.a
 LIB_H=cache.h object.h
 
Index: cache.h
===
--- dbeacafeb442bcfd39dfdc90c360d47d4215c185/cache.h  (mode:100644 sha1:c182ea0c5c1def37d899f9a05f8884ebe17c9d92)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/cache.h  (mode:100644 sha1:0882b713222b71e67c9dab5d58ab6f15c3c49ed6)
@@ -74,7 +74,7 @@
 #define ce_stage(ce) ((CE_STAGEMASK & ntohs((ce)->ce_flags)) >> CE_STAGESHIFT)
 
 #define ce_permissions(mode) (((mode) & 0100) ? 0755 : 0644)
-#define create_ce_mode(mode) htonl(S_IFREG | ce_permissions(mode))
+#define create_ce_mode(mode) htonl((mode & (S_IFREG|S_IFDIR)) | ce_permissions(mode))
 
 #define cache_entry_size(len) ((offsetof(struct cache_entry,name) + (len) + 8) & ~7)
 
Index: libwrite-tree.c
===
--- /dev/null  (tree:dbeacafeb442bcfd39dfdc90c360d47d4215c185)
+++ 27e71cd40ff1dccfbbd996427833fd7bac714dde/libwrite-tree.c  (mode:100644 sha1:52202930d02b3721f5a388ae1178c5a4d99ec1b4)
@@ -0,0 +1,174 @@
+/*
+ * GIT - The information manager from hell
+ *
+ * Copyright (C) Linus Torvalds, 2005
+ */
+#include cache.h
+
+struct new_ce {
+	struct new_ce *next;
+	struct cache_entry ce;
+};
+
+static struct new_ce *add_list = NULL;
+
+static int check_valid_sha1(unsigned char *sha1)
+{
+	char *filename = sha1_file_name(sha1);
+	int ret;
+
+	/* If we were anal, we'd check that the sha1 of the contents actually matches */
+	ret = access(filename, R_OK);
+	if (ret)
+		perror(filename);
+	return ret;
+}
+
+static int prepend_integer(char *buffer, unsigned val, int i)
+{
+	buffer[--i] = '\0';
+	do {
+		buffer[--i] = '0' + (val % 10);
+		val /= 10;
+	} while (val);
+	return i;
+}
+
+#define ORIG_OFFSET (40)	/* Enough space to add the header of "tree <size>\0" */
+
+static int write_tree(struct cache_entry **cachep, int maxentries, const char *base, int baselen, unsigned char *returnsha1)
+{
+	unsigned char subdir_sha1[20];
+	unsigned long size, offset;
+	char *buffer;
+	int i, nr;
+
+	/* Guess at some random initial size */
+	size = 8192;
+	buffer = malloc(size);
+	offset = ORIG_OFFSET;
+
+	nr = 0;
+	do {
+		struct cache_entry *ce = cachep[nr];
+		const char *pathname = ce->name, *filename, *dirname;
+		int pathlen = ce_namelen(ce), entrylen;
+		unsigned char *sha1;
+		unsigned int mode;
+
+		/* Did we hit the end of the directory? Return how many we wrote */
+		if (baselen >= pathlen || memcmp(base, pathname, baselen))
+			break;
+
+		sha1 = ce->sha1;
+		mode = ntohl(ce->ce_mode);
+
+		/* Do we have _further_ subdirectories? */
+		filename = pathname + baselen;
+		dirname = strchr(filename, '/');
+		if (dirname) {
+			int subdir_written;
+			int len = dirname - pathname;
+			unsigned int size = cache_entry_size(len);
+			struct new_ce *new_ce = malloc(size + sizeof(struct new_ce *));
+			struct cache_entry *c = &new_ce->ce;
+			subdir_written = write_tree(cachep + nr, maxentries - nr, pathname, dirname-pathname+1, subdir_sha1);
+			nr += subdir_written - 1;
+
+			/* Now we need to write out the directory entry into this tree.. */
+			mode = S_IFDIR;
+			pathlen = dirname - pathname;
+
+			sha1 = subdir_sha1;
+
+			memset(c, 0, size);
+
+			/* 

Re: [PATCH] write-tree performance problems

2005-04-20 Thread C. Scott Ananian
On Wed, 20 Apr 2005, Chris Mason wrote:
 With the basic changes I described before, the 100 patch time only goes down
 to 40s.  Certainly not fast enough to justify the changes.  In this case, the
 bulk of the extra time comes from write-tree writing the index file, so I
 split write-tree.c up into libwrite-tree.c, and created update-cache
 --write-tree.
Hmm.  Are our index files too large, or is there some other factor?
I was considering using a chunked representation for *all* files (not just 
blobs), which would avoid the original 'trees must reference other trees 
or they become too large' issue -- and maybe the performance issue you're 
referring to, as well?
 --scott

Boston MI6 quiche LPMEDLEY BLUEBIRD PBSUCCESS jihad biowarfare non-violent protest 
Yakima NRA EZLN DES hack SARANAC KMPLEBE Echelon PBCABOOSE security
 ( http://cscott.net/ )


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, C. Scott Ananian wrote:
 
 Hmm.  Are our index files too large, or is there some other factor?

They _are_ pretty large, but they have to be.

For the kernel, the index file is about 1.6MB. That's 

 - 17,000+ files and filenames
 - stat information for all of them
 - the sha1 for them all

ie for the kernel it averages out to 93.5 bytes per file, which is
actually pretty dense (just the sha1 and stat information is about half of
it, and those are required).

 I was considering using a chunked representation for *all* files (not just 
 blobs), which would avoid the original 'trees must reference other trees 
 or they become too large' issue -- and maybe the performance issue you're 
 referring to, as well?

No. The most common index file operation is reading, and that's the one 
that has to be _fast_. And it is - it's a single mmap and some parsing.

In fact, writing it is pretty fast too, exactly because the index file is 
totally linear and isn't compressed or anything fancy like that. It's a 
_lot_ faster than the tree objects, exactly because it doesn't need to 
be as careful.

The main cost of the index file is probably the fact that I add a sha1 
signature of the file into itself to verify that it's ok. The advantage is 
that the signature means that the file is ok, and the parsing of it can be 
much more relaxed. You win some, you lose some.
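
The read path described above (one mmap plus a checksum pass) could be
sketched roughly as below. The exact header layout and where the
signature lives are assumptions for the sketch, not the real index
format; it simply assumes a trailing 20-byte SHA-1 over the rest of the
file.

#include <fcntl.h>
#include <openssl/sha.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map the index file and verify an assumed trailing SHA-1 signature
 * over everything before it.  Returns the mapping, or NULL on error. */
static void *map_index(const char *path, unsigned long *size)
{
	struct stat st;
	unsigned char sha1[20];
	void *map;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return NULL;
	if (fstat(fd, &st) < 0 || st.st_size < 20) {
		close(fd);
		return NULL;
	}
	map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	close(fd);
	if (map == MAP_FAILED)
		return NULL;

	/* checksum everything except the trailing signature */
	SHA1(map, st.st_size - 20, sha1);
	if (memcmp(sha1, (unsigned char *)map + st.st_size - 20, 20)) {
		munmap(map, st.st_size);
		return NULL;
	}
	*size = st.st_size;
	return map;
}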

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread C. Scott Ananian
On Wed, 20 Apr 2005, Linus Torvalds wrote:
  I was considering using a chunked representation for *all* files (not just
  blobs), which would avoid the original 'trees must reference other trees
  or they become too large' issue -- and maybe the performance issue you're
  referring to, as well?
 No. The most common index file operation is reading, and that's the one
 that has to be _fast_. And it is - it's a single mmap and some parsing.
OK, sure.  But how 'bout chunking trees?  Have you grown happy with the new
trees-reference-other-trees paradigm, or is there a deep longing in your
heart for the simplicity of 'trees-reference-blobs-period'?  I'm fairly
certain that chunking could get you the space savings you need without
multi-level trees, if the simplicity of that is still appealing.

Not necessarily for rev.1 of the chunking code, but I'm curious as to
whether it's still of interest at all.  I don't know how deeply ingrained
multi-level trees have become since they were adopted.
 --scott

Japan explosion BLUEBIRD Honduras jihad D5 SLBM Diplomat overthrow 
JMTIDE CABOUNCE AMTHUG ESODIC Kennedy AVBRANDY CLOWER mail drop PHOENIX
 ( http://cscott.net/ )


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread Martin Uecker
On Wed, Apr 20, 2005 at 11:28:20AM -0400, C. Scott Ananian wrote:

Hi,
 
 A merkle-tree (which I think you initially pointed me at) makes the hash 
 of the internal nodes be a hash of the chunk's hashes; ie not a straight 
 content hash.  This is roughly what my current implementation does, but
 I would like to identify each subtree with the hash of the 
 *(expanded) contents of that subtree* (ie no explicit reference to 
 subtree hashes).  This makes it interoperable with non-chunked or 
 differently-chunked representations, in that the top-level hash is *just 
 the hash of the complete content*, not some hash-of-subtree-hashes.  Does 
 that make more sense?

Yes, thank you. But I would like to argue against this:

You can make the representations interoperable
if you calculate the hash for the non-chunked
representation exactly as if the file were stored
chunked, but simply do not store it that way.

Of course this is not backward compatible with the
monolithic hash, and not compatible with a differently
chunked representation (but you could store subtrees
unchunked if you think your chunks are too small).

 The code I posted doesn't demonstrate this very well, but now that Linus 
 has abandoned the 'hash of compressed content' stuff, my next code posting 
 should show this more clearly.

I think the hash of the treap piece should be calculated
from the hashes of the prefix and suffix trees and the already
calculated hash of the uncompressed data. This makes hashing
nearly as cheap as in Linus' version, which is important
because checking whether a given file has identical
content to a stored version should be fast.

 If I don't miss anything essential, you can validate
 each treap piece at the moment you get it from the
 network with its SHA1 hash and then proceed with
 downloading the prefix and suffix tree (in parallel
 if you have more than one peer a la bittorrent).
 
 Yes, I guess this is the detail I was going to abandon. =)
 
 I viewed the fact that the top-level hash was dependent on the exact chunk 
 makeup a 'misfeature', because it doesn't allow easy interoperability with 
 existing non-chunked repos.

I thought of this as a misfeature too, before I realized how
many advantages it has.

Martin
 

-- 
One night, when little Giana from Milano was fast asleep,
she had a strange dream.





Re: [PATCH] write-tree performance problems

2005-04-20 Thread David Willmore
On 4/20/05, Linus Torvalds [EMAIL PROTECTED] wrote:
 It really _shouldn't_ be faster. It still does the compression, and throws
 the end result away.

Am I misunderstanding, or is the problem that doing:

file with unknown status -> compress -> sha1 -> compare with existing hash

is expensive?

What about doing:

file it's supposed to be equal to -> uncompress -> compare with
unknown-status file

It's more file I/O, but decompressing is much cheaper than compressing.
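
A sketch of that alternative (zlib's uncompress() is real; everything
else, including ignoring the object header git prepends, is simplified
for illustration): inflate the stored object and memcmp it against the
working-tree file, instead of deflating the working-tree file just to
compare hashes.

#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Expand the stored, compressed object and compare it byte-for-byte
 * with the candidate buffer.  Returns 1 if they match. */
static int matches_stored_object(const unsigned char *compressed,
                                 unsigned long clen,
                                 const unsigned char *candidate,
                                 unsigned long len)
{
	unsigned char *expanded = malloc(len ? len : 1);
	uLongf dlen = len;
	int same = 0;

	if (expanded &&
	    uncompress(expanded, &dlen, compressed, clen) == Z_OK &&
	    dlen == len &&
	    !memcmp(expanded, candidate, len))
		same = 1;

	free(expanded);
	return same;
}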

On a second issue, what's the format of the main 'index' file?  Is it:
pathspec sha1hash
pathspec sha1hash 
?
If so, that's not going to compress well.  A file like:
pathspec1
pathspec2

sha1hash1
sha1hash2

will compress better.

Stop me if I'm way off base--I'm just following the mailing list, I
haven't tried out the code.

Cheers,
David


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, C. Scott Ananian wrote:
 
 OK, sure.  But how 'bout chunking trees?  Are you grown happy with the new 
 trees-reference-other-trees paradigm, or is there a deep longing in your 
 heart for the simplicity of 'trees-reference-blobs-period'?

I'm pretty sure we do better chunking on a subdirectory basis, especially 
as it allows us to do various optimizations (avoid diffing common parts).

Yes, you could try to do the same optimizations with chunking, but then 
you'd need to make sure that the chunking was always on a full tree entry 
boundary etc - ie much harder than blob chunking. 

But hey, numbers talk, bullshit walks. 

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, Linus Torvalds wrote:
 
 To actually go faster, it _should_ need this patch. Untested. See if it 
 works..

NO! Don't see if this works. For the "sha1 file already exists" case, it
forgot to return the SHA1 value in "returnsha1", and would thus corrupt
the trees it wrote.

So don't apply, don't test. You won't corrupt your archive (you'll just
write bogus tree objects), but if you commit the bogus trees you're going
to be in a world of hurt and will have to undo everything you did.

It's a good test for fsck though. It core-dumps because it tries to add 
references to NULL objects.

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Chris Mason
On Wednesday 20 April 2005 11:40, Linus Torvalds wrote:
 On Wed, 20 Apr 2005, Chris Mason wrote:
  Thanks for looking at this.  Your new tree is faster, it gets the commit
  100 patches time down from 1m5s to 50s.

 It really _shouldn't_ be faster. It still does the compression, and throws
 the end result away.

Well, that's a little odd.  I had thought about making sure you did this 
change and forgotten.  1 minute benchmarks are a horrible idea since they run 
into noise with cache writebacks.  I should know better...

At any rate, the time for a single write-tree is pretty consistent.  Before it 
was around .5 seconds, and with this change it goes down to .128s.  My patch 
was .024.

The 100 patch time is down to 32s (3 run average).  This is close enough that 
I don't think my patch is worth it if no other part of git can benefit from 
having trees in the index.


 To actually go faster, it _should_ need this patch. Untested. See if it
 works..

Thanks. This one missed filling in the returnsha1.  New patch attached.

-chris
diff -u linus.back/sha1_file.c linus/sha1_file.c
--- linus.back/sha1_file.c	2005-04-20 12:31:00.240181016 -0400
+++ linus/sha1_file.c	2005-04-20 12:13:56.339837528 -0400
@@ -173,12 +173,27 @@
 	z_stream stream;
 	unsigned char sha1[20];
 	SHA_CTX c;
+	char *filename;
+	int fd;
 
 	/* Sha1.. */
 	SHA1_Init(&c);
 	SHA1_Update(&c, buf, len);
 	SHA1_Final(sha1, &c);
 
+	filename = sha1_file_name(sha1);
+	fd = open(filename, O_WRONLY | O_CREAT | O_EXCL, 0666);
+	if (fd < 0) {
+		if (errno != EEXIST)
+			return -1;
+
+		/*
+		 * We might do collision checking here, but we'd need to
+		 * uncompress the old file and check it. Later.
+		 */
+		goto out;
+	}
+
 	/* Set it up */
 	memset(&stream, 0, sizeof(stream));
 	deflateInit(&stream, Z_BEST_COMPRESSION);
@@ -195,8 +210,10 @@
 	deflateEnd(&stream);
 	size = stream.total_out;
 
-	if (write_sha1_buffer(sha1, compressed, size) < 0)
-		return -1;
+	if (write(fd, compressed, size) != size)
+		die("unable to write file");
+	close(fd);
+out:		
 	if (returnsha1)
 		memcpy(returnsha1, sha1, 20);
 	return 0;


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, Linus Torvalds wrote:
 
 NO! Don't see if this works. For the "sha1 file already exists" case, it
 forgot to return the SHA1 value in "returnsha1", and would thus corrupt
 the trees it wrote.

Proper version with fixes checked in. For me, it brings down the time to
write a kernel tree from 0.34s to 0.24s, so a third of the time was just
compressing objects that we ended up already having.

Two thirds to go ;)

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, Chris Mason wrote:
 
 At any rate, the time for a single write-tree is pretty consistent.  Before
 it was around .5 seconds, and with this change it goes down to .128s.

Oh, wow.

I bet your SHA1 implementation is done with hand-optimized and scheduled
x86 MMX code or something, while my poor G5 is probably using some slow
generic routine. As a result, it only improved by 33% for me since the
compression was just part of the picture, but with your cheap SHA1 the
compression costs really dominated, and so it's almost four times faster
for you.

Anyway, that's good. It definitely means that I consider tree writing to 
be fast enough. You can commit patches in a third of a second on your 
machine.

I'll consider the problem solved for now. Yeah, I realize that it still 
takes you half a minute to commit the 100 quilt patches, but I just can't 
bring myself to think it's a huge problem in the kind of usage patterns I 
think are realistic.

If somebody really wants to replace quilt with git, he'd need to spend
some effort on it. If you just want to work together reasonably well, I
think 3 patches per second is pretty much there.

Linus


Re: [PATCH] write-tree performance problems

2005-04-20 Thread David S. Miller
On Wed, 20 Apr 2005 10:06:15 -0700 (PDT)
Linus Torvalds [EMAIL PROTECTED] wrote:

 I bet your SHA1 implementation is done with hand-optimized and scheduled
 x86 MMX code or something, while my poor G5 is probably using some slow
 generic routine. As a result, it only improved by 33% for me since the
 compression was just part of the picture, but with your cheap SHA1 the
 compression costs really dominated, and so it's almost four times faster
 for you.

The openssl tree has an i586-optimized SHA1 implementation.
A quick scan of the 0.9.7e tree I happen to have lying around
shows there aren't optimized versions for other cpus in there, just i586.


Re: [PATCH] write-tree performance problems

2005-04-20 Thread Linus Torvalds


On Wed, 20 Apr 2005, Chris Mason wrote:
 
 Well, the difference there should be pretty hard to see with any benchmark.
 But I was being lazy...new patch attached.  This one gets the same perf 
 numbers, if this is still wrong then I really need some more coffee.

I did my preferred version. Makes a big difference here too.

It would be nicer for the cache to make the index file header be a 
footer, and write it out last - that way we'd be able to do the SHA1 as 
we write rather than doing a two-pass thing. That's for another time.

Linus


Re: WARNING! Object DB conversion (was Re: [PATCH] write-tree performance problems)

2005-04-20 Thread David Woodhouse
On Wed, 2005-04-20 at 07:59 -0700, Linus Torvalds wrote:
 external-parent <commit-hash> <external-parent-ID>
 <comment for this parent>
 
 and the nice thing about that is that now that information allows you to 
 add external parents at any point. 
 
 Why do it like this? First off, I think that the initial import ends up
 being just one special case of the much more _generic_ issue of having
 patches come in from other source control systems 

This isn't about patches coming in from other systems -- it's about
_history_, and the fact that it's imported from another system is just
an implementation detail. It's git history now, and what we have here is
just a special case of wanting to prune ancient git history to keep the
size of our working trees down. You refer to this yourself...

 Secondly, we do need something like this for pruning off history anyway,
 so that the tools have a better way of saying "history has been pruned
 off" than just hitting a missing commit.

Having a more explicit way of saying history is pruned than just a
reference to a missing commit is a reasonable request -- but I really
don't see how we can do that by changing the now-oldest commit object to
contain an 'external-parent' field. Doing that would change the sha1 of
the commit object in question, and then ripple through all the
subsequent commits.

Come this time next year, if I decide I want to prune anything older
than 2.6.40 from all the trees on my laptop, it has to happen _without_
changing the commit objects which occur after my arbitrarily-chosen
cutoff point.

If we want to have an explicit record of pruning rather than just
coping with a missing object, then I think we'd need to do it with an
external note to say "It's OK that commit XXX is missing".

 Thirdly, I don't actually want my new tree to depend on a conversion of
 the old BK tree.
 
 Two reasons: if it's a really full conversion, there are definitely going
 to be issues with BitMover. They do not want people to try to reverse
 engineer how they do namespace merges

Don't think of it as a conversion of the old BK tree. It's just an
import of Linux's development history. This isn't going to help
reverse-engineer how BK does merges; it's just our own revision history.
I'm not sure exactly how Thomas is extracting it, but AIUI it's all
obtainable from the SCCS files anyway without actually resorting to
using BK itself. 

There's nothing here for Larry to worry about. It's not as if we're
actually using BK to develop git by observing BK's behaviour w.r.t
merges and trying to emulate it. Besides -- if we wanted to do that,
we'd need to use the _BK_ version of the tree; the git version wouldn't
help us much anyway.

And given that BK's merges are based on individual files and we're not
going that route with git, it's not clear how much we could lift
directly from BK even if we _were_ going to try that.

 The other reason is just the really obvious one: in the last week, I've
 already changed the format _twice_ in ways that change the hash. As long
 as it's 119MB of data, it's not going to be too nasty to do again.

That's fine. But by the time we settle on a format and actually start
using it in anger, it'd be good to be sure that it _is_ possible to
track development from current trees all the way back -- be that with
explicit reference to pruned history as you suggest, or with absent
parents as I still prefer.

 it's not that it's necessarily the wrong thing to do, but I think it
 is the wrong thing to do _now_.

OK, time for us to keep arguing over the implementation details of how
we prune history then :)

-- 
dwmw2



Re: [PATCH] write-tree performance problems

2005-04-19 Thread Olivier Galibert
On Tue, Apr 19, 2005 at 10:36:06AM -0700, Linus Torvalds wrote:
 In fact, git has all the same issues that BK had, and for the same 
 fundamental reason: if you do distributed work, you have to always 
 append stuff, and that means that you can never re-order anything after 
 the fact.

You can: moving a patch around is just a chain of merges.

[Warning, ascii art ahead]

A merge is traditionally seen as:

1- Start with (A, B, C... are nodes/trees..., Pn are patches/changesets):

      /--P1-->B
     /
    A
     \
      \--P2-->C

2- End with:

      /--P1-->B
     /
    A---(P1+P2)-->D
     \
      \--P2-->C

   where D is the merge between B and C with A as common ancestor.

But you can also see the result as:

      /--P1-->B--P2--\
     /                \
    A                  D
     \                /
      \--P2-->C--P1--/

i.e. you have two patch chains, one being A-P1-B-P2-D and the other
A-P2-C-P1-D.  I.e. you have the two patches P1 and P2 in two
possible patching orders.  But you can do something even more amusing.  Start
with a patch chain:

   E--P3--F--P4--G

and merge E and G with F as common ancestor.  You'll then get H where
E--P4--H--P3--G.  I.e. you inverted two patches in your patch chain.
Or, if you keep H instead of G as your head, you removed P3 from your
patch chain.

Of course you can permute blocks of patches that way by having E, F and
G further away from each other.  You just increase the merge conflict
probability.

That is, I think, the way to do quilt/arch patch handling with safe
distribution and safe backtracing procedures.

  OG.



Re: [PATCH] write-tree performance problems

2005-04-19 Thread Linus Torvalds


On Tue, 19 Apr 2005, Chris Mason wrote:
 
 Very true, you can't replace quilt with git without ruining both of them.
 But it would be nice to take a quilt tree and turn it into a git tree for
 merging purposes, or to make use of whatever visualization tools might
 exist someday.

Fair enough. The thing is, going from quilt->git really is a pretty big
decision, since it's the decision that says "I will now really commit all
these quilt changes forever and ever".

Which is also why I think it's actually ok to take a minute to do 100
quilt patches. This is not something you do on a whim. It's something
you'd better think about. It's turning a very fluid environment into an
unchangeable, final thing.

That said, I agree that write-tree is expensive. It tends to be by far
the most expensive op you normally do. I'll make sure it goes faster.

 We already have a "trust me, it hasn't changed" via update-cache.

Heh. I see update-cache not as a "it hasn't changed", but as a "it _has_
changed, and now I want you to reflect that fact". In other words,
update-cache is an active statement: it says that you're ready to commit
your changes.

In contrast, to me your write-tree thing in many ways is the reverse of
that: it's saying "don't look here, there's nothing interesting there".

Which to me smells like trying to hide problems rather than being positive 
about them.

Which it is, of course. It's trying to hide the fact that writing a tree
is not instantaneous.

 With that said, I hate the patch too.  I didn't see how to compare against
 the old tree without reading each tree object from the old tree, and that
 should be slower than what write-tree does now.

Reading a tree is faster, simply because you uncompress instead of
compress. So I can read a tree in 0.28 seconds, but it takes me 0.34
seconds to write one. That said, reading the trees has disk seek issues if
it's not in the cache.

What I'd actually prefer to do is to just handle tree caching the same way
we handle file caching - in the index.

Ie we could have the index file track "what subtree is this directory
associated with", and have an "update-cache --refresh-dir" thing that
updates it (and any entry update in that directory obviously removes the
dir-cache entry).

Normally we'd not bother and it would never trigger, but it would be
useful for your scripted setup: it would end up caching all the tree
information in a very efficient manner. Totally transparently, apart from
the one --refresh-dir at the beginning. That one would be slightly
expensive (ie it would do all the stuff that write-tree does, but it would
be done just once).
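
As a rough sketch of what such a cached directory entry might carry
(purely hypothetical; nothing like this exists in the index format):

/* Hypothetical "this directory is unchanged" record that a
 * --refresh-dir pass could add to the index: the subtree's object name,
 * so write-tree can reuse it instead of recomputing and recompressing. */
struct dir_cache_entry {
	unsigned char tree_sha1[20];	/* tree object for this directory */
	unsigned short namelen;		/* length of the path below */
	char name[0];			/* directory path, relative to the top */
};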

(We could also just make write-tree do it _totally_ transparently, but
then we're back to having write-tree both read _and_ write the index file,
which is a situation that I've been trying to avoid. It's so much easier 
to verify the correctness of an operation if it is purely one-way).

I'll think about it. I'd love to speed up write-tree, and keeping track of 
it in the index is a nice little trick, but it's not quite high enough up 
on my worries for me to act on it right now.

But if you want to try to see how nasty it would be to add tree index
entries to the index file at write-tree time automatically, hey...

Linus


Re: [PATCH] write-tree performance problems

2005-04-19 Thread David Lang
On Tue, 19 Apr 2005, Linus Torvalds wrote:
 On Tue, 19 Apr 2005, Chris Mason wrote:
  Very true, you can't replace quilt with git without ruining both of them.
  But it would be nice to take a quilt tree and turn it into a git tree for
  merging purposes, or to make use of whatever visualization tools might
  exist someday.
 Fair enough. The thing is, going from quilt->git really is a pretty big
 decision, since it's the decision that says "I will now really commit all
 these quilt changes forever and ever".
 Which is also why I think it's actually ok to take a minute to do 100
 quilt patches. This is not something you do on a whim. It's something
 you'd better think about. It's turning a very fluid environment into an
 unchangeable, final thing.
What if you turned the forest of quilt patches into a forest of git trees
(essentially applying each patch against the baseline separately)?  Would
this make sense or be useful?

David Lang
--
There are two ways of constructing a software design. One way is to make it so 
simple that there are obviously no deficiencies. And the other way is to make 
it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare


Re: [PATCH] write-tree performance problems

2005-04-19 Thread C. Scott Ananian
On Tue, 19 Apr 2005, Linus Torvalds wrote:
 (*) Actually, I think it's the compression that ends up being the most
 expensive part.
You're also using the equivalent of '-9' -- and *that's slow*.
Changing to Z_DEFAULT_COMPRESSION would probably help a lot
(but would break all existing repositories, sigh).
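
For reference, the difference being discussed is just the level handed to
zlib's deflateInit(); both constants below are real zlib levels, while the
wrapper itself is only a sketch.

#include <string.h>
#include <zlib.h>

/* Initialize a deflate stream at either the cheap default level or the
 * maximum ("-9" equivalent) level discussed in the thread. */
static void init_deflate(z_stream *stream, int cheap)
{
	memset(stream, 0, sizeof(*stream));
	deflateInit(stream, cheap ? Z_DEFAULT_COMPRESSION
	                          : Z_BEST_COMPRESSION);
}
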
 --scott
DES WTO Indonesia NRA LCPANGS supercomputer plastique class struggle 
AEFOX Pakistan ODEARL Secretary KUGOWN Cheney ODIBEX SDI AP JMMADD
 ( http://cscott.net/ )


Re: [PATCH] write-tree performance problems

2005-04-19 Thread Linus Torvalds


On Tue, 19 Apr 2005, Chris Mason wrote:
 
 5) right before exiting, write-tree updates the index if it made any changes.

This part won't work. It needs to do the proper locking, which means that 
it needs to create index.lock _before_ it reads the index file, and 
write everything to that one and then do a rename.

If it doesn't need to do the write, it can just remove index.lock without 
writing to it, obviously.

 The downside to this setup is that I've got to change other index users to
 deal with directory entries that are there sometimes and missing other times.
 
 The nice part is that I don't have to invalidate the directory entry: if it
 is present, it is valid.

To me, the biggest downside is actually the complexity part, and worrying
about the directory index ever getting stale. How big do the changes end
up being?

Linus