My experience with Linux software RAID up to now has consisted of
combining a number of disks to one very large ext2 volume. However,
now I'm curious about partitioning and the performance
characteristics of multiple partitions on a RAID system. Since I
don't have a suitable configuration to run tests on, I'll just have
to ask you guys for help..

Suppose I had a system equipped with, say, 4x 9GB SCSI disks that I'd
like to set up as a fairly generic server system. Conventional server
setup wisdom says to partition /, /usr, /var, /home and perhaps
/var/spool separately. However, how does the RAID-5 subsystem perform
when multiple md devices are configured to span the same physical
disks? Obviously, all volumes benefit from redundancy, but let's think 
about the performance side only for now (for that reason, perhaps it
would actually make more sense to compare striped md's to dedicated
disks..) My immediate thought is that the number of seeks may
increase, resulting in more time lost to seek latency. On the other
hand, sustained bandwidth is greater for all volumes. Has anyone run
benchmarks to find out which factor dominates?
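To make the scenario concrete, here's the sort of raidtab I have in
mind -- two RAID-5 md devices striped over partitions of the same four
disks (the device names, partition numbers and chunk size are just
illustrative, not a recommendation):

```
# /etc/raidtab sketch -- two RAID-5 arrays sharing the same four disks
raiddev /dev/md0                  # e.g. for /usr
    raid-level              5
    nr-raid-disks           4
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
    device                  /dev/sdc1
    raid-disk               2
    device                  /dev/sdd1
    raid-disk               3

raiddev /dev/md1                  # e.g. for /var
    raid-level              5
    nr-raid-disks           4
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda2
    raid-disk               0
    device                  /dev/sdb2
    raid-disk               1
    device                  /dev/sdc2
    raid-disk               2
    device                  /dev/sdd2
    raid-disk               3
```

Every seek on /dev/md1 moves the same four heads that /dev/md0 is
using, which is exactly the contention I'm wondering about.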

In particular, how does fsck deal with md devices? It parallelizes
itself across multiple disks, but if the volumes are all actually
striped over the same disks, fsck will perform better run serially. On
the other hand, if I were to have two separate physical drive clusters
with independent striped md's on them, fsck should parallelize across
both.. But how can it figure out which volumes are on the same
devices?
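From what I understand of the fsck man page, the only knob it has is
the pass-number field in /etc/fstab: filesystems with the same pass
number get checked in parallel, and it has no idea which block devices
share spindles. So for md volumes on the same disks one could at least
serialize the checks by hand with distinct pass numbers, something like
(mount points just as examples):

```
# /etc/fstab sketch -- distinct pass numbers (last field) make fsck
# check these one at a time, since both md's share the same disks
/dev/md0    /usr    ext2    defaults    1 2
/dev/md1    /var    ext2    defaults    1 3
```

Crude, but it avoids two fsck instances seeking against each other on
the same spindles.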

-- 
The less you bother me, the sooner you'll get results.
Osma Ahvenlampi <[EMAIL PROTECTED]>
