Your most reliable bet is going to be multiple 7.5TB filesystems
(because why push right up against the limit), tied together with
symlinks.  Ext3 is 'stuck' at 8TB because many of the userspace
tools can't handle anything larger.  Some tools can, but not the
normal ones, as you have discovered.
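
Roughly what I have in mind, as a sketch -- the device names, mount
points, and directory names here are made up, so substitute whatever
your controller actually presents:

  # two ~7.5TB filesystems instead of one 12TB monster
  mkfs.ext3 /dev/sdb1
  mkfs.ext3 /dev/sdc1

  mkdir -p /data /data1 /data2
  mount /dev/sdb1 /data1
  mount /dev/sdc1 /data2

  # stitch them into one visible tree with symlinks
  ln -s /data1/projects /data/projects
  ln -s /data2/scratch  /data/scratch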

Second choice would be XFS.  When it's *significantly more reliable*
than ext3, and has *absolutely no quirks*, then I'll consider it as a
first choice.
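
If you do want to try XFS anyway, on CentOS 5 the setup is roughly
this -- package names are from the extras repo as I remember them,
and the device name is the same made-up example as above:

  yum install kmod-xfs xfsprogs
  mkfs.xfs /dev/sdb1
  mount -t xfs /dev/sdb1 /data1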

There's a FUSE module called mhddfs that someone in our local LUG
uses to virtually merge multiple sub-8TB filesystems into one.  I
haven't tried it myself yet.  See http://svn.uvw.ru/mhddfs/trunk/README
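
Going by that README, the invocation is something like this (same
made-up mount points as above; allow_other is just the usual FUSE
option so non-root users can see the merged tree):

  # merge /data1 and /data2 into one virtual tree at /data
  mhddfs /data1,/data2 /data -o allow_other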

On Wed, Nov 19, 2008 at 16:07, Miles O'Neal <[EMAIL PROTECTED]> wrote:
> Our local vendor built us a Supermicro/Adaptec
> system with 16x1TB SATA drives.  We have a 12TB
> partition that they built as EXT2.  When I tried
> to add journaling, it took forever, and then the
> system locked up.  On reboot, the FS was still
> EXT2, and takes hours (even empty) to fsck.  Based
> on the messages flying by I am also not confident
> fsck really understands a filesystem this large.
>
> Is the XFS module stable on 5.1 and 5.2?  (The
> vendor installed 5.1 because that's what they
> have, but I ran "yum update").
>
> Anyone have experience with filesystems this large
> on a Linux system?  Will XFS work well for this?
>
> If any of you have successfully used EXT3 on a
> filesystem this large, are there any tuning tips
> you recommend?  I was thinking of turning on
> dir_index, but somewhere I saw a warning this
> might not work with other OSes.  Since we do have
> some Windows and Mac users accessing things via
> SMB, I wasn't sure that was safe, either.
>
> This is a 64bit system. 8^)
>
> Thanks,
> Miles
>

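P.S.  On the dir_index question: as far as I know dir_index only
changes ext3's on-disk directory hashing, so Windows and Mac clients
coming in over SMB never see it; the warnings you saw are about
mounting the disk directly from other OSes.  If you do stay with
ext3, turning it on looks roughly like this (device name made up):

  tune2fs -O dir_index /dev/sdb1
  e2fsck -fD /dev/sdb1    # -D rebuilds existing directories (run unmounted)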