No. Most filesystems have a maximum volume size of around 2TB - breaking compatibility to support 64TB datastores isn't worth it, and probably won't be for a decade. Personally, I think the obsession some people have with ridiculously large datastores is misplaced anyway.
Ian.

On 17 Feb 2006, at 17:28, Matthew Toseland wrote:

> Is it worth breaking backwards compatibility for the 0.7 datastore (with
> prior builds of 0.7) to fix an inherent 64TB limit?
>
> The code uses an int for offsets into files, which is easily fixed.
> However it also uses, on disk, an int for block numbers. This means
> datastores are limited to 2G * 32K = 64TB. Normally I wouldn't regard
> this as a big problem, but since we are in pre-alpha, and since there
> isn't that much content, I'm inclined to make the change...
> --
> Matthew J Toseland - toad at amphibian.dyndns.org
> Freenet Project Official Codemonkey - http://freenetproject.org/
> ICTHUS - Nothing is impossible. Our Boss says so.
> _______________________________________________
> Tech mailing list
> Tech at freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/tech
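For anyone following along, the 64TB figure in the quoted message falls straight out of the on-disk int. This is just a sketch of the arithmetic, not the actual datastore code; the 32KiB block size and the signed 32-bit block number are taken from the thread, everything else is assumed:

```java
public class DatastoreLimit {
    // Block size stated in the thread: 32 KiB per block (assumed exact).
    static final long BLOCK_SIZE = 32L * 1024;

    public static void main(String[] args) {
        // A 32-bit block number gives at most 2^31 distinct blocks ("2G").
        long maxBlocks = 1L << 31;
        long maxBytes = maxBlocks * BLOCK_SIZE;   // 2^31 * 2^15 = 2^46 bytes
        System.out.println(maxBytes >> 40 + " is wrong precedence; see below");
        System.out.println((maxBytes >> 40) + " TiB");  // prints "64 TiB"
    }
}
```

Widening the block number to a long (or even an unsigned 32-bit value read into a long) removes the ceiling entirely, which is presumably what the proposed format change amounts to.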
