Re: Large filesystem recommendation

2013-07-24 Thread Connie Sieh

On Wed, 24 Jul 2013, Ray Van Dolson wrote:


On Wed, Jul 24, 2013 at 01:59:03PM -0400, John Lauro wrote:

What is recommended for a large file system (40TB) under SL6?

In the past I have always had good luck with jfs.  It might not be the
fastest, but it's very stable.  It works well for repairing huge
filesystems in a reasonable amount of RAM, and handles large
directories and large files.  Unfortunately jfs doesn't appear to be
supported in 6?  (Or is there a repo I can add?)


Besides support for a 40+TB filesystem, I also need support for files
>4TB, and directories with hundreds of thousands of files.  What do
people recommend?


Echoing what others have said, sounds like XFS might be the best option
if you can find a repository with a quality version (EPEL perhaps?)


Do you have issues with the xfs that is provided in SL 6?

-Connie Sieh



Interesting on the Backblaze and ext4 thing.  While ext4 itself may
support this larger file system size, I'm not sure if the default
ext4 tools will.  Could be a risk to investigate if you go this route.

Other options I can think of:

- btrfs (not sure if something like EPEL provides a release with this)
- ZFS on Linux (for the adventurous only, but I believe they have a
  version that works well with RHEL).

Personally, I'd go XFS.

Ray
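For anyone following the XFS route suggested above, a minimal sketch of setting one up on EL6 (the device name, mount point, and RAID geometry here are hypothetical; adjust su/sw to match your array):

```shell
# Create an XFS filesystem on a (hypothetical) large RAID volume.
# su/sw should match your RAID controller's stripe unit and stripe width.
mkfs.xfs -f -d su=64k,sw=10 /dev/sdb1

# Mount with inode64 so inodes can be allocated anywhere on a >1TB
# filesystem (important for very large filesystems on EL6-era kernels).
mkdir -p /srv/data
mount -o inode64,noatime /dev/sdb1 /srv/data

# Persist in /etc/fstab:
# /dev/sdb1  /srv/data  xfs  inode64,noatime  0 0
```

These commands need root and a real block device, so treat them as a sketch rather than something to paste verbatim.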



RE: Large filesystem recommendation

2013-07-24 Thread Brown, Chris (GE Healthcare)
If you enable the sl-other repository, ZFS is now an option to try as well.

- Chris Brown
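A rough sketch of that route (the repo id, package name, pool name, and disk names below are assumptions; check the actual sl-other repo files on your system):

```shell
# Install ZFS on Linux from the sl-other repository
# (repo id and package name are assumptions -- verify locally).
yum --enablerepo=sl-other install zfs

# Create a double-parity pool across six hypothetical disks,
# then a filesystem on it with compression enabled.
zpool create tank raidz2 sdb sdc sdd sde sdf sdg
zfs create -o compression=on tank/data
zfs list
```

Requires root and real disks; whole-disk names (no partition suffix) are the usual ZFS convention.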

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Connie Sieh
Sent: Wednesday, July 24, 2013 3:25 PM
To: Ray Van Dolson
Cc: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
Subject: Re: Large filesystem recommendation




Re: Large filesystem recommendation

2013-07-24 Thread Paul Robert Marino
I use XFS on SL 6.x. It's great in my opinion: very stable and fast. Red Hat
themselves suggest it as the backend filesystem for Gluster storage nodes.
Early versions on SL6 had some problems, but they have all been worked out
now, and on other distros (SuSE and Gentoo) I've used it for a decade now
with no issues. In SL 6.4 I've even been using it on install via kickstart;
the anaconda installer supports it for every filesystem except / and /boot.

There are just three things to keep in mind with XFS:

1) If you rely on the undelete feature on ext3 and ext4, there is no such
feature on XFS.

2) XFS has its own diagnostic tools. fsck does nothing on XFS; even the
fsck.xfs command is just a dummy command to satisfy the startup scripts
wanting to check the filesystems every so many mounts.

3) If you want to do a full filesystem backup, it may behoove you to look
at xfsdump rather than traditional tools like tar, because the file
produced by xfsdump includes any SELinux contexts, POSIX ACLs, and
extended attributes set on the files. This comes in really handy for
things like the OpenStack Swift/Gluster integration, which stores all the
Swift ACLs as extended attributes on the backend filesystem.

-- Sent from my HP Pre3
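Points 2 and 3 above can be sketched as follows (device, mount point, and backup paths are hypothetical):

```shell
# Point 2: XFS repair uses xfs_repair, not fsck (fsck.xfs is a no-op stub).
# Run it only against an unmounted filesystem.
umount /srv/data
xfs_repair /dev/sdb1
mount /srv/data

# Point 3: a level-0 (full) dump with xfsdump preserves extended
# attributes, POSIX ACLs, and SELinux contexts; restore with xfsrestore.
xfsdump -l 0 -f /backup/data.xfsdump /srv/data
xfsrestore -f /backup/data.xfsdump /srv/data
```

Both tools ship in the xfsprogs/xfsdump packages and need root; again, a sketch rather than a paste-ready recipe.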
