"Alfred M. Szmidt" <[EMAIL PROTECTED]> writes:

> So if you have a tree that is 300 MiB, and you fix a couple of
> documents over 50 revisions, the total size of the patches might be
> as low as 300 KiB, but using the `number of patches' algorithm, you
> will end up with a new cachedrev that takes up 300.3 MiB. Not very
> smart.
That's even more complex than it seems, because (in my experience)
latency is usually more important than bandwidth. So it may be faster
to download one 5 MB cachedrev than 50 patches of 10 KB each.

Mercurial claims to have a cachedrev/patch compromise that allows O(1)
retrieval (hmm, I guess that's O(tree size), at least ;-). You may want
to have a look.

--
Matthieu

_______________________________________________
Gnu-arch-users mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/gnu-arch-users
GNU arch home page: http://savannah.gnu.org/projects/gnu-arch/
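For what it's worth, the compromise discussed above can be sketched as a
size-based snapshot policy: keep storing deltas until replaying them
would cost more than reading one full snapshot, then store a fresh
snapshot. This is a hypothetical illustration of the idea, not
Mercurial's actual code; the function name and numbers are made up
(the numbers just mirror the 300 MiB / 50-revision example quoted above).

```python
def should_snapshot(delta_sizes, full_size):
    """Return True when replaying the pending deltas would cost more
    than reading one full snapshot of the tree. (Hypothetical policy,
    not Mercurial's real implementation.)"""
    return sum(delta_sizes) > full_size

KiB = 1024
MiB = 1024 * 1024

# A 300 MiB tree with fifty ~6 KiB patches: deltas stay far cheaper
# than a snapshot, so no new cachedrev is needed.
print(should_snapshot([6 * KiB] * 50, 300 * MiB))   # -> False

# Two huge deltas of 200 MiB each: replaying them now costs more than
# one full read, so take a snapshot.
print(should_snapshot([200 * MiB] * 2, 300 * MiB))  # -> True
```

With a rule like this, retrieval is bounded by roughly two snapshots'
worth of data regardless of how many revisions exist, which is why the
cost is O(tree size) rather than O(number of patches).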
