Re: >16tb filesystems on linux

2010-08-27 Thread Andrew Robert Nicols
On 27 August 2010 10:21, Tim Small  wrote:

>  On 27/08/10 09:18, Andrew Robert Nicols wrote:
>
> As I say, we're primarily a Debian shop and Solaris used to feel like a
> bit of a thorn in the side, but things have improved.
>
> Did you consider/try ZFS on Debian-kFreeBSD instead of OpenSolaris to try
> and make things less painful?
>
> http://packages.debian.org/sid/zfsutils
>

At the time, it wasn't really an option. This still isn't available in Lenny
- only Squeeze and Sid.

Andrew

-- 
Systems Developer

e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147

Lancaster University Network Services is a limited company registered in
England and Wales. Registered number: 04311892. Registered office:
University House, Lancaster University, Lancaster, LA1 4YW
___
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq

Re: >16tb filesystems on linux

2010-08-27 Thread Tim Small

On 27/08/10 09:18, Andrew Robert Nicols wrote:
> As I say, we're primarily a Debian shop and Solaris used to feel
> like a bit of a thorn in the side, but things have improved.

Did you consider/try ZFS on Debian-kFreeBSD instead of OpenSolaris to 
try and make things less painful?


http://packages.debian.org/sid/zfsutils


Cheers,

Tim.

--
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309


Re: >16tb filesystems on linux

2010-08-27 Thread Andrew Robert Nicols
On 27 August 2010 09:18, Andrew Robert Nicols wrote:

> On 26 August 2010 18:26, Nick Stephens  wrote:
>
>> I am very interested in ZFS, but it seems like it will never make it (in
>> a stable fashion) into the linux world at this rate.
>>
>
> We're primarily a Debian shop but we've dabbled with ZFS. It's really
> pretty good and its fault tolerance is really reassuring.
> I'd avoid OpenSolaris though - since Oracle took over, active
> development on it has all but stopped. Solaris 10 is actually pretty
> reasonable these days. There should be a new release out in the next 2 weeks
> from what I recall.
>

Some other notes I meant to make in my original reply:

Our current architecture comprises an automated system that transfers
snapshots from our live server to a secondary server in a second data
centre. We perform our backups from the secondary server to reduce potential
load on the primary at peak times, so we need to be able to read from the
secondary.
We did try GFS and OCFS2 on top of DRBD on a RAID60 configuration on
these X4500s, but they weren't stable enough for our needs - DRBD couldn't
cope with the load.
EXT3/4 and XFS weren't suitable as they don't allow us to replicate the data
to another host (whilst still being able to read from the secondary for
backups).

Andrew


Re: >16tb filesystems on linux

2010-08-27 Thread Buchan Milne
On Thu, Aug 26, 2010 at 6:26 PM, Nick Stephens  wrote:

> Hi all,
>
> I recently purchased a PE610 with a PERC6 card attached to an MD1000
> with about 26TB of space.


How soon do you need to put this into production?


>  I know from my own research that ext4
> supports up to an exabyte, however it appears that the e2fs team has not
> yet created a mkfs.ext4 that supports anything bigger than 16TB.
>
> I have played with XFS in the past, and sadly it's performance is
> severely lacking for our environment, so it is not an option.
>
> I am very interested in ZFS, but it seems like it will never make it (in
> a stable fashion) into the linux world at this rate.
>
> Does anyone have any tips or tricks for this scenario?  I am utilizing
> RHEL5 based installations, btw.
>

If you can afford to either:
-Wait for RHEL6
-Run a RHEL6 beta

then you may also want to consider using btrfs (
http://en.wikipedia.org/wiki/Btrfs)

If you *really* need a single 26TB filesystem, btrfs offers significant
advantages over ext4, especially online fsck, subvolumes, subvolume
snapshots, etc.
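A minimal sketch of those features (the device name, mount point, and subvolume paths are hypothetical, and the unified `btrfs` tool syntax is assumed; needs btrfs-progs and root):

```shell
# No 16TB mkfs ceiling: btrfs uses a 64-bit on-disk format.
mkfs.btrfs /dev/sdb
mount /dev/sdb /mnt/big

# Subvolumes behave like independently snapshottable sub-filesystems.
btrfs subvolume create /mnt/big/images
btrfs subvolume snapshot /mnt/big/images /mnt/big/images-20100827
```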

Regards,
Buchan

Re: >16tb filesystems on linux

2010-08-27 Thread Andrew Robert Nicols
Hi Nick,

On 26 August 2010 18:26, Nick Stephens  wrote:

> I am very interested in ZFS, but it seems like it will never make it (in
> a stable fashion) into the linux world at this rate.
>

We're primarily a Debian shop but we've dabbled with ZFS. It's really pretty
good and its fault tolerance is really reassuring.
I'd avoid OpenSolaris though - since Oracle took over, active
development on it has all but stopped. Solaris 10 is actually pretty
reasonable these days. There should be a new release out in the next 2 weeks
from what I recall.

We run Solaris 10 and ZFS in a RAIDZ2 configuration on a SunFire X4500. The
X4500 has 48 x 500GB disks and, after losing 2 for mirrored boot disks,
we're left with 46 disks. We've calculated that the optimum number of disks
in a zpool is 11, so we have four zpools of 11 disks, and two hot spares
which will stand in for the first two disks to fail.
The X4500 also has six SATA controllers, and we've calculated a zpool
configuration such that we could potentially drop any one controller and
there would be no service interruption.
So of our 22TB of raw space (after dropping another 2 disks for hot spares),
we get about 16TB of usable space. You can run RAIDZ instead of RAIDZ2
and, in our configuration, you'd have 18TB of usable space, but you'd lose
the ability to drop a controller. Obviously, a lot of this is irrelevant for
an MD1000.
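As a rough check on those numbers (my arithmetic, counting parity loss only; the real usable figures come out a couple of TB lower once ZFS metadata and formatting overhead are subtracted, which matches the ~16TB reported):

```shell
# 44 pool disks of 0.5 TB each, arranged as 4 raidz vdevs of 11 disks.
# Count in whole disks (0.5 TB units) to stay in integer arithmetic.
vdevs=4; per_vdev=11
raw_disks=$(( vdevs * per_vdev ))             # 44 disks -> 22 TB raw
raidz2_disks=$(( vdevs * (per_vdev - 2) ))    # 2 parity disks per vdev
raidz1_disks=$(( vdevs * (per_vdev - 1) ))    # 1 parity disk per vdev
echo "raw=$(( raw_disks / 2 ))TB raidz2=$(( raidz2_disks / 2 ))TB raidz1=$(( raidz1_disks / 2 ))TB"
# -> raw=22TB raidz2=18TB raidz1=20TB (before ZFS overhead)
```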

Our disk usage is primarily lots of small files, with various really large
files (zip files for backup, database backups, etc) too.

The other *really* nice feature of ZFS is its snapshot capability. While LVM
snapshotting is possible, let's face it, it's pants. With zfs snapshot, you
can have a virtually unlimited number of snapshots, you can promote
snapshots and mount them elsewhere (read-only), delve into any snapshot
(handy for restores), and send snapshots over the wire to another server
(very useful for backups).
On our X4500, we've got 4313 snapshots at present. These are hourly
snapshots for the last three months or so, and then daily snapshots since
February. We recently purged snapshots from September 2007 because we just
didn't see the point in keeping them any longer.
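The send-over-the-wire workflow is roughly this (a sketch only: the pool/filesystem name "tank/data", the host "backuphost", and the snapshot names are hypothetical; requires appropriate privileges on both ends):

```shell
# Take today's snapshot, then ship only the delta since yesterday's
# snapshot to the secondary over ssh.
zfs snapshot tank/data@2010-08-27
zfs send -i tank/data@2010-08-26 tank/data@2010-08-27 \
    | ssh backuphost zfs receive tank/data

# Any snapshot can be browsed read-only for restores:
ls /tank/data/.zfs/snapshot/2010-08-26/
```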

As I say, we're primarily a Debian shop and Solaris used to feel like a
bit of a thorn in the side, but things have improved. The package management
in Solaris blows compared to Debian, and creating packages can be painful.
The community packaged-software effort, on the other hand, is really good.
There are two main projects - OpenCSW and Blastwave. They used to be the
same project but forked about 18 months ago. We use OpenCSW for various
reasons of preference. It's really worth using one or the other if you do go
down the Solaris route.

The zfs-discuss mailing list on OpenSolaris used to be really handy for
questions on ZFS, but I haven't taken part in it for some time.

Hope this is of some help,

Andrew


Re: >16tb filesystems on linux

2010-08-27 Thread Simon Waters
On Thursday 26 August 2010 18:26:19 Nick Stephens wrote:
> 
> I have played with XFS in the past, and sadly it's performance is
> severely lacking for our environment, so it is not an option.

If you tell us what your environment is we can answer the question.

One large file system sounds like a very bad idea to me, but without knowing 
what you are doing it is hard to answer the question in a sensible fashion.



Re: >16tb filesystems on linux

2010-08-27 Thread Stroller

On 27 Aug 2010, at 08:28, Davide Ferrari wrote:

> On Thu, 2010-08-26 at 14:09 -0400, Drew Weaver wrote:
>> I have a system running 10x2TB drives in RAID-0 in EXT4 and it  
>> appears
>> to work fine in a single partition.
>
> You *do* love taking risks, uh? :)

Squid proxy, CDN or some other kind of cache?

The question that intrigues me is whether it's as fast as one would  
imagine?


Stroller.



RE: >16tb filesystems on linux

2010-08-27 Thread Davide Ferrari
On Thu, 2010-08-26 at 14:09 -0400, Drew Weaver wrote:
> I have a system running 10x2TB drives in RAID-0 in EXT4 and it appears
> to work fine in a single partition.

You *do* love taking risks, uh? :)

-- 
Davide Ferrari
System Administrator
Atrapalo S.L.



Re: >16tb filesystems on linux

2010-08-26 Thread Jefferson Ogata
On 2010-08-26 18:30, Nick Stephens wrote:
> I actually gave that a shot myself but didn't think it was available yet 
> due to getting the same error message.  Now that I think about it 
> though, it could be a different issue I'm encountering. 
> 
> [r...@localhost ~]# mkfs.ext4dev -T news -m0 -L backup -E 
> stride=16,stripe-width=208 /dev/sda1
> mke2fs 1.41.12 (17-May-2010)
> mkfs.ext4dev: Size of device /dev/sda1 too big to be expressed in 32 
> bits
> using a blocksize of 4096.

Another reason to use LVM: you've put a partition table on your giant 
block device. Did you align the start of the first partition with your 
RAID stripe size? If not, then many of your filesystem blocks will span 
two disks, meaning reading one of those blocks requires two disks to seek 
instead of one. If you make the whole block device an LVM physical 
volume instead, you won't have to worry about that (unless you have a 
stripe size > 64 kB, in which case you can override the default PV 
metadata size to make it a multiple of your RAID stripe size).

See:

http://insights.oetiker.ch/linux/raidoptimization/
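To make the alignment arithmetic concrete (my numbers; the 64 KiB per-disk chunk size is an assumption, not stated in the thread): with 13 data disks a full stripe is 13 x 64 KiB = 832 KiB, so a partition should start on a multiple of that, i.e. every 1664 512-byte sectors. The old DOS default of sector 63 misses it:

```shell
chunk_kib=64; data_disks=13; sector_bytes=512
stripe_sectors=$(( chunk_kib * 1024 * data_disks / sector_bytes ))
start=63      # classic DOS partition-table default start sector
# Round the start sector up to the next full-stripe boundary.
aligned=$(( (start + stripe_sectors - 1) / stripe_sectors * stripe_sectors ))
echo "stripe=$stripe_sectors sectors; aligned start=$aligned"
# -> stripe=1664 sectors; aligned start=1664
```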

[snip]
> The MD1000 is populated with (15) 2TB 7200rpm SAS drives in a RAID-5 
> with 1 hotspare (leaving 13 data disks).  I know that conventional 
> wisdom says that raid5 is a poor choice when you are looking for 
> performance, but localized benchmarking has proven that in our scenario 
> the total-size gains acquired with the striping outweigh the redundancy 
> provided with RAID-10 (since we are unable to get significant 
> performance increases).

Consider creating two 7-disk RAID5s instead of a single 14-disk RAID5. 
This will double your redundancy, and you can still stripe over all 14 
disks using LVM.

In addition, if you use slots 0-6 for one RAID5 and 7-13 for the other, 
you can dual-connect the MD1000 and have one SAS channel dedicated to 
each RAID.

Or, as others have suggested, consider RAID6.
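The two-array-plus-LVM layout would look something like this (a sketch; the device names, stripe size, and LV size are hypothetical and need adapting to the actual PERC virtual disks):

```shell
# Two 7-disk RAID5 virtual disks exported by the controller as sdb/sdc.
pvcreate /dev/sdb /dev/sdc
vgcreate bigvg /dev/sdb /dev/sdc

# Stripe each logical volume across both arrays (-i 2), so reads and
# writes hit all 14 spindles.
lvcreate -i 2 -I 256 -L 10T -n data bigvg
mkfs.ext4 /dev/bigvg/data
```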



Re: >16tb filesystems on linux

2010-08-26 Thread Jefferson Ogata
On 2010-08-26 17:26, Nick Stephens wrote:
> Does anyone have any tips or tricks for this scenario?  I am utilizing 
> RHEL5 based installations, btw.

Don't create very large filesystems.
Use LVM.

- Very large filesystems take a long time to fsck. Using smaller 
filesystems with LVM snapshots lets you fsck periodically without even 
unmounting your filesystems.
- A serious error or inconsistency in a very large filesystem may blow 
away all of your data; smaller filesystems constrain the damage.
- The properties of one giant filesystem (e.g. striping, inode/block 
ratio) can't be tuned to the different needs of different types of files 
you might store. Your application might be more efficient if it put 
larger files on a different filesystem with a better large-file 
allocation strategy.
- Very large filesystems limit you to a small subset of possible 
filesystem types.
- Very large filesystems keep you from migrating your data to 
off-the-shelf hardware in an emergency.
- You're going to hit limits of some kind sooner or later, so your 
application should be designed to tolerate having your data on multiple 
filesystems anyway.
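The snapshot-fsck trick in the first point looks roughly like this (a sketch; the VG/LV names and snapshot size are hypothetical, and the VG needs free extents for the snapshot):

```shell
# Freeze a point-in-time copy of the live LV, check the copy read-only,
# then throw it away - the mounted filesystem is never touched.
lvcreate -s -L 10G -n datasnap /dev/vg0/data
fsck.ext3 -fn /dev/vg0/datasnap   # -n: report problems, change nothing
lvremove -f /dev/vg0/datasnap
```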



Re: >16tb filesystems on linux

2010-08-26 Thread Sabuj Pattanayek
> The MD1000 is populated with (15) 2TB 7200rpm SAS drives in a RAID-5
> with 1 hotspare (leaving 13 data disks).  I know that conventional
> wisdom says that raid5 is a poor choice when you are looking for
> performance, but localized benchmarking has proven that in our scenario

Since you've got such a large logical disk, I would do RAID6 +
hotspare for better data security in case of disk failures, but it'll
probably be a bit slower than RAID5.



Re: >16tb filesystems on linux

2010-08-26 Thread Sam Kuonen
Have you tried a larger block size?



Re: >16tb filesystems on linux

2010-08-26 Thread Paul M. Dyer
Hi,

You could try GFS2, if you are on RHEL 5.5 AP 64-bit. Here is a quote from 
the top of its documentation:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_File_System_2/index.html

Paul

GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 
8 EB file system.
However, the current supported maximum size of a GFS2 file system is 25 TB. If 
your system requires
GFS2 file systems larger than 25 TB, contact your Red Hat service 
representative.

- Original Message -
From: "Nick Stephens" 
To: linux-poweredge@dell.com
Sent: Thursday, August 26, 2010 12:26:19 PM
Subject: >16tb filesystems on linux

Hi all,

I recently purchased a PE610 with a PERC6 card attached to an MD1000
with about 26TB of space. I know from my own research that ext4
supports up to an exabyte, however it appears that the e2fs team has not
yet created a mkfs.ext4 that supports anything bigger than 16TB.

I have played with XFS in the past, and sadly it's performance is
severely lacking for our environment, so it is not an option.

I am very interested in ZFS, but it seems like it will never make it (in
a stable fashion) into the linux world at this rate.

Does anyone have any tips or tricks for this scenario? I am utilizing
RHEL5 based installations, btw.

Thanks!
Nick




Re: >16tb filesystems on linux

2010-08-26 Thread Jeff Layton
You can always build the latest e2fsprogs yourself. They have
the 16TB fixes in them but haven't gotten a lot of testing, so
be careful (test it out first). I've heard it's mostly the resizing
piece of ext4 that hasn't been exercised much, but you can
ask the ext4 mailing list.

Jeff

> Hi all,
>
> I recently purchased a PE610 with a PERC6 card attached to an MD1000
> with about 26TB of space.  I know from my own research that ext4
> supports up to an exabyte, however it appears that the e2fs team has not
> yet created a mkfs.ext4 that supports anything bigger than 16TB.
>
> I have played with XFS in the past, and sadly it's performance is
> severely lacking for our environment, so it is not an option.
>
> I am very interested in ZFS, but it seems like it will never make it (in
> a stable fashion) into the linux world at this rate.
>
> Does anyone have any tips or tricks for this scenario?  I am utilizing
> RHEL5 based installations, btw.
>
> Thanks!
> Nick
>



RE: >16tb filesystems on linux

2010-08-26 Thread Drew Weaver
I have a system running 10x2TB drives in RAID-0 in EXT4 and it appears to work 
fine in a single partition.

thanks,
-Drew


-Original Message-
From: linux-poweredge-boun...@dell.com 
[mailto:linux-poweredge-boun...@dell.com] On Behalf Of Nick Stephens
Sent: Thursday, August 26, 2010 1:26 PM
To: linux-poweredge@dell.com
Subject: >16tb filesystems on linux

Hi all,

I recently purchased a PE610 with a PERC6 card attached to an MD1000 
with about 26TB of space.  I know from my own research that ext4 
supports up to an exabyte, however it appears that the e2fs team has not 
yet created a mkfs.ext4 that supports anything bigger than 16TB.

I have played with XFS in the past, and sadly it's performance is 
severely lacking for our environment, so it is not an option.

I am very interested in ZFS, but it seems like it will never make it (in 
a stable fashion) into the linux world at this rate.

Does anyone have any tips or tricks for this scenario?  I am utilizing 
RHEL5 based installations, btw.

Thanks!
Nick



Re: >16tb filesystems on linux

2010-08-26 Thread Bond Masuda
In what scenarios did you experience poor performance with XFS? In our
environment, running a farm of massive file servers, XFS has always
outperformed ext3 by a large margin. Even the performance comparisons of
ext4 that I've seen mostly concluded it was at about the same level as XFS,
if not a little lacking in a few scenarios. We did a pretty extensive
benchmarking/performance test for our environment comparing ext3, jfs,
xfs, and reiserfs, and concluded that XFS was the best for our needs. JFS
was a close 2nd, but it didn't handle multiple parallel I/O streams very
well.

Your statement about XFS comes as a surprise to me...

On Thu, 2010-08-26 at 10:26 -0700, Nick Stephens wrote:
> Hi all,
> 
> I recently purchased a PE610 with a PERC6 card attached to an MD1000 
> with about 26TB of space.  I know from my own research that ext4 
> supports up to an exabyte, however it appears that the e2fs team has not 
> yet created a mkfs.ext4 that supports anything bigger than 16TB.
> 
> I have played with XFS in the past, and sadly it's performance is 
> severely lacking for our environment, so it is not an option.
> 
> I am very interested in ZFS, but it seems like it will never make it (in 
> a stable fashion) into the linux world at this rate.
> 
> Does anyone have any tips or tricks for this scenario?  I am utilizing 
> RHEL5 based installations, btw.
> 
> Thanks!
> Nick
> 




Re: >16tb filesystems on linux

2010-08-26 Thread Nick Stephens
I actually gave that a shot myself, but didn't think it was available yet 
because I got the same error message. Now that I think about it, though, 
it could be a different issue I'm encountering:

[r...@localhost ~]# mkfs.ext4dev -T news -m0 -L backup -E 
stride=16,stripe-width=208 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
mkfs.ext4dev: Size of device /dev/sda1 too big to be expressed in 32 
bits
using a blocksize of 4096.
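That error is the classic limit: ext2/3/4 address blocks with 32-bit block numbers, so with a 4096-byte block size the filesystem tops out at 2^32 blocks, i.e. 16 TiB (standard ext arithmetic, not something stated in the thread):

```shell
# Largest ext filesystem expressible with 32-bit block numbers:
block_size=4096                          # bytes, as in the mkfs output
max_bytes=$(( (1 << 32) * block_size ))  # 2^32 blocks * 4 KiB
echo "$(( max_bytes / 1024 / 1024 / 1024 / 1024 )) TiB"   # -> 16 TiB
```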


To explain:

In our environment we don't handle large files at all, but rather 
millions of JPEG images. As such, our file sizes range from around 4 KB 
to 1 MB. Working with such small files greatly limits the ability of 
the hardware to reach its maximum potential with sustained reads/writes.

Because of this, I typically create new filesystems using the -T news 
flag for the greatest number of inodes, as such:

[r...@localhost ~]# mkfs.ext4 -T news -m0 -L backup -E 
stride=16,stripe-width=208 /dev/sdb1
mke4fs 1.41.5 (23-Apr-2009)
Filesystem label=backup
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)


The MD1000 is populated with (15) 2TB 7200rpm SAS drives in a RAID-5 
with 1 hot spare (leaving 13 data disks).  I know that conventional 
wisdom says that RAID5 is a poor choice when you are looking for 
performance, but local benchmarking has shown that, in our scenario, 
the capacity gained by striping outweighs the extra redundancy 
RAID-10 would provide (since we are unable to get significant 
performance increases from it).
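The -E values in the mkfs commands above follow from that geometry (assuming a 64 KiB per-disk stripe element on the PERC, which is my guess, not something stated here):

```shell
# stride       = RAID chunk size per disk / filesystem block size
# stripe-width = stride * number of data disks
chunk_kib=64       # assumed PERC6 stripe element size
block_kib=4        # ext4 block size
data_disks=13      # 15 drives - 1 parity - 1 hot spare
stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_disks ))
echo "stride=$stride stripe-width=$stripe_width"
# -> stride=16 stripe-width=208, matching the -E flags above
```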

It's been a bit over a year since we did the XFS testing, but IIRC we 
ditched it due to poor delete performance and (I think) overall 
performance issues.  Again, it's been a while, but I do remember doing a 
lot of performance-tuning research that did not seem to help us.

Going with an OpenSolaris-type option COULD work and has been kicking 
around in the back of my mind for some time, but I'm hesitant to add a 
new OS to the environment if I don't need to. I'm trying to keep things as 
simple for the rest of the team as possible, within practicality.

Thanks all
Nick


Jeff Layton wrote:
>  You can always build the latest e2fsprogs yourself. They have
> the 16TB fixes in them but haven't gotten a lot of testing, so
> be careful (test it out first). I've heard it's mostly the resizing
> piece of ext4 that hasn't been exercised much, but you can
> ask the ext4 mailing list.
>
> Jeff
>
>> Hi all,
>>
>> I recently purchased a PE610 with a PERC6 card attached to an MD1000
>> with about 26TB of space.  I know from my own research that ext4
>> supports up to an exabyte, however it appears that the e2fs team has not
>> yet created a mkfs.ext4 that supports anything bigger than 16TB.
>>
>> I have played with XFS in the past, and sadly it's performance is
>> severely lacking for our environment, so it is not an option.
>>
>> I am very interested in ZFS, but it seems like it will never make it (in
>> a stable fashion) into the linux world at this rate.
>>
>> Does anyone have any tips or tricks for this scenario?  I am utilizing
>> RHEL5 based installations, btw.
>>
>> Thanks!
>> Nick



Re: >16tb filesystems on linux

2010-08-26 Thread Kevin Davidson


On 26 Aug 2010, at 18:26, Nick Stephens  wrote:

> I recently purchased a PE610 with a PERC6 card attached to an MD1000 
> with about 26TB of space.  I know from my own research that ext4 
> supports up to an exabyte, however it appears that the e2fs team has not 
> yet created a mkfs.ext4 that supports anything bigger than 16TB.

Exactly how many disks and what size? How much usable space do you expect to 
get? You should almost certainly be looking at RAID10 or similar, or you will 
get hit by 2nd/3rd disks failing during a rebuild. That's half already, so 
your problem is solved - your filesystem only needs to be 13TB. 

> 
> I have played with XFS in the past, and sadly it's performance is 
> severely lacking for our environment, so it is not an option.
> 
> I am very interested in ZFS, but it seems like it will never make it (in 
> a stable fashion) into the linux world at this rate.
> 
> Does anyone have any tips or tricks for this scenario?  I am utilizing 
> RHEL5 based installations, btw.

With that much data you really should be looking at ZFS. I haven't tried it, 
but could you consider running a Solaris variant as the host OS, effectively 
a NAS for a virtualised RHEL? Or installing RHEL on a second box and 
connecting via iSCSI?

You haven't told us anything about your application and its requirements so 
it's a bit difficult to advise. 


-- 
Kevin Davidson
Apple Certified System Administrator
Sent from my iPhone

indigospring :Making Sense of IT
w http://www.indigospring.co.uk/
t 0870 745 4001



Re: >16tb filesystems on linux

2010-08-26 Thread Sabuj Pattanayek
On Thu, Aug 26, 2010 at 12:38 PM, Nils Breunese (Lemonbit)
 wrote:
>> I have played with XFS in the past, and sadly it's performance is
>> severely lacking for our environment, so it is not an option.

Really? I guess it depends on what you're trying to do, as always. One
thing I love about XFS/JFS and other non-ext filesystems is that they don't
take years to mkfs.

This is how I create my xfs :

mkfs.xfs -l size=64m

And these are the options I use to mount xfs parts:

noatime,nodiratime,logbufs=8

What about btrfs? I haven't used that yet; does it allow >16TB filesystems?

HTH,
Sabuj Pattanayek



Re: >16tb filesystems on linux

2010-08-26 Thread Nick Stephens

Nils Breunese (Lemonbit) wrote:

> Nick Stephens wrote:
>
>> I recently purchased a PE610 with a PERC6 card attached to an MD1000
>> with about 26TB of space.  I know from my own research that ext4
>> supports up to an exabyte, however it appears that the e2fs team has not
>> yet created a mkfs.ext4 that supports anything bigger than 16TB.
>>
>> I have played with XFS in the past, and sadly its performance is
>> severely lacking for our environment, so it is not an option.
>>
>> I am very interested in ZFS, but it seems like it will never make it (in
>> a stable fashion) into the linux world at this rate.
>>
>> Does anyone have any tips or tricks for this scenario?  I am utilizing
>> RHEL5 based installations, btw.
>
> Dividing your 26 TB into multiple partitions is not an option?
>
> Nils.


It is certainly an option (and was my initial consideration), but it would 
add some difficulties to how we typically handle our disk balancing.  
It's not out of the question and is my current backup plan, but I wanted 
to make sure I wasn't missing something that could be better.


Nick

Re: >16tb filesystems on linux

2010-08-26 Thread Nils Breunese (Lemonbit)
Nick Stephens wrote:

> I recently purchased a PE610 with a PERC6 card attached to an MD1000 
> with about 26TB of space.  I know from my own research that ext4 
> supports up to an exabyte, however it appears that the e2fs team has not 
> yet created a mkfs.ext4 that supports anything bigger than 16TB.
> 
> I have played with XFS in the past, and sadly it's performance is 
> severely lacking for our environment, so it is not an option.
> 
> I am very interested in ZFS, but it seems like it will never make it (in 
> a stable fashion) into the linux world at this rate.
> 
> Does anyone have any tips or tricks for this scenario?  I am utilizing 
> RHEL5 based installations, btw.

Dividing your 26 TB into multiple partitions is not an option?

Nils.



>16tb filesystems on linux

2010-08-26 Thread Nick Stephens
Hi all,

I recently purchased a PE610 with a PERC6 card attached to an MD1000 
with about 26TB of space.  I know from my own research that ext4 
supports up to an exabyte, however it appears that the e2fs team has not 
yet created a mkfs.ext4 that supports anything bigger than 16TB.

I have played with XFS in the past, and sadly its performance is 
severely lacking for our environment, so it is not an option.

I am very interested in ZFS, but it seems like it will never make it (in 
a stable fashion) into the linux world at this rate.

Does anyone have any tips or tricks for this scenario?  I am utilizing 
RHEL5 based installations, btw.

Thanks!
Nick
