On Wed, 2005-04-20 at 19:15 +0200, [EMAIL PROTECTED] wrote:
...
> As data, I used my /usr/src/linux which uses 301M and contains 20753 files and
> 1389 directories. To compute the key for a directory, I considered that its
> contents were a mapping from names to keys.

I suppose if you used the blob archive for storing many revisions, the number
of stored blobs would be much higher. However, even then we can estimate that
the maximum number of stored blobs will be on the order of millions.
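(As an aside, a minimal sketch of the "key of a directory = hash of its
name-to-key mapping" idea quoted above, in Python. The quoted experiment does
not say how the mapping was serialized, so the sorted "name TAB key" format
below is purely my assumption for illustration; git's own tree objects use a
different serialization.)

```python
# Sketch: key of a file is the SHA1 of its contents; key of a directory is
# the SHA1 of a canonical serialization of its name -> key mapping.
# The serialization format here is an illustrative assumption only.
import hashlib

def blob_key(data: bytes) -> str:
    """Key of a plain file: SHA1 of its contents."""
    return hashlib.sha1(data).hexdigest()

def dir_key(entries: dict) -> str:
    """Key of a directory: hash the sorted name -> key mapping, so that
    identical directory contents always produce identical keys."""
    payload = "".join(f"{name}\t{key}\n" for name, key in sorted(entries.items()))
    return hashlib.sha1(payload.encode()).hexdigest()

# Example: a directory containing two files.
k1 = blob_key(b"int main(void) { return 0; }\n")
k2 = blob_key(b"obj-y += main.o\n")
print(dir_key({"main.c": k1, "Makefile": k2}))
```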
> When constructing the indexed archive, I actually stored empty files
> instead of blobs because I am only interested in overhead.
>
> Using your suggested indexing method that uses [0:4] as the 1st level key and
                                                 [0:3]
> [4:8] as the 2nd level key, I obtain an indexed archive that occupies 159M,
> where the top level contains 18665 1st level keys, the largest first level dir
> contains 5 entries, and all 2nd level dirs contain exactly 1 entry.

Yes, it really doesn't make much sense to have such big keys for the
directories. If we assume that SHA1 is a really good hash function, so that
every hash value is equally probable, this scheme would allow storing
2^16 * 2^16 * 2^16 blobs with approximately the same directory usage.

> Using Linus suggested 1 level [0:2] indexing, I obtain an indexed archive that
                                [0:1] I suppose
> occupies 1.8M, where the top level contains 256 1st level keys, and where the
> largest 1st level dir contains 110 entries.

The question is how many entries per directory is the optimal compromise
between space and the speed of access to its files. If we suppose the maximum
number of stored blobs is on the order of millions, the optimal indexing would
probably be 1-level [0:2] indexing or 2-level [0:1] [2:3] indexing. However,
it would be necessary to do some benchmarking (a rough estimate is sketched
below) before setting this in stone.

-- 
Tomas Mraz <[EMAIL PROTECTED]>
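(A rough back-of-the-envelope sketch of the fan-out schemes discussed in this
thread. The slice boundaries use the quoted poster's Python-slice notation;
reading the "[0:1] [2:3]" suggestion as two hex digits per level is my own
interpretation, since the thread's notation is ambiguous. The blob counts are
the ~20k files of the quoted /usr/src/linux corpus and an assumed 2,000,000
blobs standing in for the "order of millions" upper bound.)

```python
# Compare directory fan-out schemes: path layout and expected number of
# entries per leaf directory, assuming SHA1 spreads keys uniformly.
import hashlib

def fanout_path(blob: bytes, levels: list) -> str:
    """Path for a blob: one directory per level, named by the next n hex
    digits of the SHA1, with the full hash as the file name."""
    h = hashlib.sha1(blob).hexdigest()
    parts, pos = [], 0
    for n in levels:
        parts.append(h[pos:pos + n])
        pos += n
    parts.append(h)
    return "/".join(parts)

def expected_entries(n_blobs: int, levels: list) -> float:
    """Average files per leaf directory, assuming a uniform hash over the
    16**digits buckets of each level."""
    buckets = 1
    for n in levels:
        buckets *= 16 ** n
    return n_blobs / buckets

print(fanout_path(b"hello\n", [2]))     # f5/f572d396fae9206628714fb2ce00f72e94f2258f
print(fanout_path(b"hello\n", [4, 4]))  # f572/d396/f572d396fae9206628714fb2ce00f72e94f2258f

for levels in ([2], [4, 4], [2, 2]):
    for n_blobs in (20753, 2_000_000):
        print(levels, n_blobs, round(expected_entries(n_blobs, levels), 1))
```

Under these assumptions the 1-level, 2-hex-digit scheme averages about 81
entries per directory for the ~20k-file corpus, consistent with the reported
largest directory of 110 entries; the 4+4 scheme leaves almost every
second-level directory with a single entry, matching the quoted observation;
and a 2+2 scheme would keep directories in the tens of entries even at a few
million blobs.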