Linus Torvalds wrote:

So why is "base64" worse than the stock one?

As mentioned, the "flat" version may be faster, but it really isn't an
option. 32000 objects is peanuts. Any respectable source tree may hit that
in a short time, and will break in horrible ways on many Linux
filesystems.


If it does, it's not because of n_link; see previous email. The link count
only limits the number of *subdirectories* (each child's ".." is a link
back to the parent); plain files don't bump it.

I used ext2 filesystems with hundreds of thousands of files per directory
back in 1996. It was slow, but it didn't break anything.

The only filesystem I know of with a 2^16 entry limit is FAT.

> So you need at least a single level of subdirectory.

> What I don't get is why the stock hex version would be better than base64.
> I like the result, I just don't _understand_ it.

The base64 version has 2^12 subdirectories instead of 2^8 (I used two
characters as the hash key, just like the hex version). So it exacerbates
the performance penalty of subdirectory hashing.
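
To make the fanout concrete, here's a quick sketch (illustrative only, not
the actual code from either version; the base64 alphabet below is just one
assumed filesystem-safe choice with '/' swapped out). A two-character key
gives 16^2 = 256 buckets in hex but 64^2 = 4096 in base64:

	#include <stdio.h>

	/* hex: 16 symbols, so a 2-char key gives 16*16 = 256 subdirs */
	static void hex_key(const unsigned char *sha1, char *out)
	{
		static const char hex[] = "0123456789abcdef";
		out[0] = hex[sha1[0] >> 4];   /* high nibble of byte 0 */
		out[1] = hex[sha1[0] & 0xf];  /* low nibble of byte 0 */
		out[2] = '\0';
	}

	/* base64: 64 symbols, so a 2-char key gives 64*64 = 4096 subdirs */
	static void base64_key(const unsigned char *sha1, char *out)
	{
		static const char b64[] =
			"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
			"abcdefghijklmnopqrstuvwxyz0123456789+-";
		/* first 12 bits of the hash pick the subdirectory */
		unsigned int v = (sha1[0] << 4) | (sha1[1] >> 4);
		out[0] = b64[(v >> 6) & 0x3f];
		out[1] = b64[v & 0x3f];
		out[2] = '\0';
	}

	int main(void)
	{
		unsigned char sha1[20] = { 0xc4, 0x1f }; /* rest elided */
		char key[3];

		hex_key(sha1, key);
		printf("hex:    objects/%s/... (256 dirs)\n", key);
		base64_key(sha1, key);
		printf("base64: objects/%s/... (4096 dirs)\n", key);
		return 0;
	}

At ~32000 objects that's roughly 8 entries per base64 directory versus 125
per hex directory: 16x as many directories to create, stat, and cache,
without making any individual directory meaningfully cheaper to search.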


        -hpa