Someone posted to the LBD list last December regarding some supposedly horrible bugs in large filesystems:

https://www.gelato.unsw.edu.au/archives/lbd/2004-December/000075.html
https://www.gelato.unsw.edu.au/archives/lbd/2004-December/000074.html
(I will admit that these were brought to my attention by that paragon of fact-checking, a slashdot comment...) I haven't found anything else online regarding these issues.

Our initial tests seem to indicate that it is possible to fill a 2.5TB ext3 filesystem without corrupting the data or metadata, and as far as my colleagues and I can tell from reading the code, it doesn't look too bad. But before I start loading all our production data onto it, I'd like to feel confident. Does this poster know what he's talking about? Is there, or was there, any real issue?

I'm running x86-32 with kernel 2.6.8 (from Debian sarge), although I can always roll my own if necessary. My preferred filesystem would be ext3, and I anticipate no need to grow beyond the initial 2.5TB.
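For the curious, our fill test was roughly along the lines of the sketch below (illustrative only; /mnt/big, the file names, and the sizes are stand-ins, not our actual setup): write a deterministic pattern until the filesystem fills up, then read everything back and compare.

/* Rough sketch of a fill-and-verify test (illustrative only).
 * Writes numbered files full of a deterministic pattern until the
 * filesystem is full, then reads each file back and checks the pattern.
 * Assumes the target filesystem is mounted at /mnt/big (hypothetical).
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define CHUNK   (1024 * 1024)   /* 1 MiB write unit */
#define FSIZE   1024            /* chunks per file: 1 GiB files */

static unsigned char buf[CHUNK], chk[CHUNK];

/* Pattern depends on file number, chunk number, and byte offset,
 * so misplaced or cross-linked blocks show up as mismatches. */
static void fill_pattern(long file, long chunk)
{
    long i;
    for (i = 0; i < CHUNK; i++)
        buf[i] = (unsigned char)(file ^ chunk ^ i);
}

int main(void)
{
    char path[256];
    long file, chunk, nfiles = 0;
    FILE *fp;

    /* Phase 1: fill until a write fails (ENOSPC expected). */
    for (file = 0; ; file++) {
        snprintf(path, sizeof(path), "/mnt/big/fill.%06ld", file);
        fp = fopen(path, "wb");
        if (!fp)
            break;
        for (chunk = 0; chunk < FSIZE; chunk++) {
            fill_pattern(file, chunk);
            if (fwrite(buf, 1, CHUNK, fp) != CHUNK)
                goto full;
        }
        fclose(fp);
        nfiles++;
    }
full:
    if (fp)
        fclose(fp);
    printf("wrote %ld complete files\n", nfiles);

    /* Phase 2: read every complete file back and compare. */
    for (file = 0; file < nfiles; file++) {
        snprintf(path, sizeof(path), "/mnt/big/fill.%06ld", file);
        fp = fopen(path, "rb");
        if (!fp) {
            fprintf(stderr, "%s: %s\n", path, strerror(errno));
            return 1;
        }
        for (chunk = 0; chunk < FSIZE; chunk++) {
            fill_pattern(file, chunk);
            if (fread(chk, 1, CHUNK, fp) != CHUNK ||
                memcmp(chk, buf, CHUNK)) {
                fprintf(stderr, "mismatch in %s chunk %ld\n",
                        path, chunk);
                return 1;
            }
        }
        fclose(fp);
    }
    printf("verify OK\n");
    return 0;
}

Files are kept at 1 GiB each so the test doesn't also depend on large-file (O_LARGEFILE) support in 32-bit userspace; the point is only to exercise writes past the 2TB device boundary. A clean "verify OK" after filling to ENOSPC is just a smoke test, of course, which is why I'm asking whether there's a known issue beyond what it would catch.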

