> > I have two 500 GB SATA II drives.  My programs do lots of file I/O and can
> > generate files as large as 15-20 GB in some of my scientific applications.
> 
> You could also consider xfs; it's very good for large files. Even better,
> test your application with both types of filesystem, and then decide.

Testing is definitely the best way to reach a decision. About a year 
ago, I tested ext3, xfs, and reiserfs for our systems. We do molecular
dynamics simulations on clusters, which means several clients
appending to large files (>10 GB) simultaneously.
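
If you want to mimic that kind of workload, here is a rough sketch in
Python (directory, sizes, and writer count are placeholders, and a
real test should of course run from several NFS clients rather than
from one machine):

#!/usr/bin/env python
# Very rough sketch of our workload: several writers appending to
# large files at the same time. All the parameters below are made up.
import os
import time
from multiprocessing import Process

TARGET_DIR = "/mnt/fstest"   # mount point of the fs under test
N_WRITERS = 4                # simulated clients
CHUNK = b"x" * (1 << 20)     # 1 MiB per write
TOTAL_MB = 1024              # per writer; raise this to get >10 GB files

def writer(idx):
    path = os.path.join(TARGET_DIR, "append-%d.dat" % idx)
    f = open(path, "ab")
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())     # make sure the data really hits the disk
    f.close()

if __name__ == "__main__":
    start = time.time()
    procs = [Process(target=writer, args=(i,)) for i in range(N_WRITERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.time() - start
    mb = N_WRITERS * TOTAL_MB
    print("%d MB in %.1f s -> %.1f MB/s aggregate" % (mb, elapsed, mb / elapsed))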

I found that xfs came out fastest (and put a relatively low load on
the NFS server), ext3 was second, and reiserfs was clearly the
weakest. However, when I repeated a few of the tests on a local file
system, the picture changed drastically: reiserfs was more or less on
par with xfs, with ext3 way behind.
So, it definitely depends on the details of your application.

As for stability, the data I have is much "softer" (read: anecdotal).
We've had a few crashes with reiserfs, including some data loss. At
one point, a reiserfsck --rebuild-tree failed and left a disk with no
file names and only a partial directory hierarchy. xfs seems to be
more stable, but it has shown some problems as well (no data loss so
far). In my experience, ext3 has been rock solid.

However, since fs crashes and data loss are such rare events (at
least, they should be), it is quite hard to gather reliable data, and
the signal-to-noise ratio tends to be low.


Besides, reiserfs takes a long time to mount, while mkfs.xfs is
incredibly fast. Of course, creating file systems (and even mounting
them) is a rare occurrence, so these points should not influence your
decision unless everything else is equal.
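
(You can time those operations directly if you care; device and mount
point below are placeholders, and mkfs will of course destroy
everything on the device:)

import subprocess
import time

def timed(cmd):
    # run a command and report its wall-clock time
    t0 = time.time()
    subprocess.check_call(cmd)
    print("%-40s %6.1f s" % (" ".join(cmd), time.time() - t0))

# /dev/sdXN and /mnt/test are placeholders; mkfs wipes the device!
timed(["mkfs.xfs", "-f", "/dev/sdXN"])
timed(["mount", "/dev/sdXN", "/mnt/test"])
timed(["umount", "/mnt/test"])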




A.

-- 
Ansgar Esztermann
Researcher & Sysadmin
http://www.mpibpc.mpg.de/groups/grubmueller/start/people/aeszter/index.shtml
