Hello,

I have three RAID-related questions stemming from a recent change in
workstations.  I recently started using a Dell PowerEdge 1300 server as my
workstation.  The machine has a Dell PERC 2/SC card in it and three 9GB
drives set up as a RAID 0 array.  The first disk has now failed three
times: once when NT was installed and twice after Linux was installed.  In
addition, I haven't been happy with the RAID controller's performance at
all, judging from my machine, from another server we have, and from other
users' experience with this card.  So I'm looking to do several things,
hence my questions:

1)  Is it possible to easily convert from hardware RAID to software RAID
using existing disks?
2)  Is it unreasonable to create /dev/md[0-9] devices to mimic the 9
partitions I now have on /dev/sda?
3)  Any ideas on a path to take to replace the failed disk in the RAID 0
setup, without losing data?

Some more details:

Regarding 1), I have read posts where users have gone from hardware RAID
to software RAID under Linux without a problem, since the underlying
RAID formatting is apparently the same.  Is that feasible with this card
and setup?  What I'm planning on doing is replacing the PERC 2 card with
a plain old Adaptec U2W SCSI card and running software RAID over that.
The drives are Quantum Atlas IV, and they are U160.
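
One way to test the "same underlying format" claim before committing, once the drives are on the plain SCSI card: look for a Linux md persistent superblock on each drive.  This is only a sketch, assuming the 0.90 superblock layout (magic number 0xa92b4efc, stored in the last 64KB-aligned 64KB block of the device); the device name is an illustration, and `blockdev --getsize64` comes from a newer util-linux than the 2.2.x-era tools, so substitute your own size lookup if it's missing:

```shell
# Hypothetical sketch: check one raw drive for an md persistent
# superblock after moving it off the PERC and onto the plain card.
# The 0.90 superblock occupies the final 64KB-aligned 64KB block of
# the device and begins with the magic 0xa92b4efc.
DEV=/dev/sda                                   # assumption: adjust to your drive
SIZE_KB=$(( $(blockdev --getsize64 "$DEV") / 1024 ))
OFFSET_KB=$(( (SIZE_KB & ~63) - 64 ))          # 0.90 superblock offset, in KB
# Dump the start of the superblock area; an md member shows the bytes
# fc 4e 2b a9 (0xa92b4efc stored little-endian on i386).
dd if="$DEV" bs=1k skip="$OFFSET_KB" count=1 2>/dev/null | od -A x -t x1 | head -1
```

If the PERC's own metadata format is not md-compatible, no magic will show up, and the safe path is a full backup, re-creation of the arrays with raidtools, and a restore.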

Regarding 2), the hardware RAID controller currently takes the three
drives and makes them appear as /dev/sda, which I then partition
accordingly.  Under Linux software RAID, it would seem that I have to
create a partition on each drive, then create 9 entries in my /etc/raidtab
to mimic my original partitions, something like this:

Hardware RAID           Software RAID
/dev/sda1               /dev/md1 (which is /dev/sda1,/dev/sdb1,/dev/sdc1)
/dev/sda2               /dev/md2 (which is /dev/sda2,/dev/sdb2,/dev/sdc2)
/dev/sda3               /dev/md3 (which is /dev/sda3,/dev/sdb3,/dev/sdc3)
etc...

Is it reasonable to do this, or is it silly to spread 9 partitions across
three drives like this (i.e. will I run into performance issues)?
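
For what it's worth, one entry in that scheme might look like the following.  This is a sketch only, assuming the 2.2.x raidtools /etc/raidtab syntax; the 32KB chunk size is an arbitrary placeholder, and each of the nine arrays would get its own raiddev stanza in the same shape:

```
# Hypothetical /etc/raidtab stanza for one of the nine arrays
raiddev /dev/md1
    raid-level              0
    nr-raid-disks           3
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
    device                  /dev/sdc1
    raid-disk               2
```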

And regarding 3), I use AMANDA for backups.  I'm toying with building a
static version of the amrestore client and then putting together some sort
of mini-system that I can boot from (a floppy with a ramdisk, perhaps) and
then restore on top of my newly created software RAID.  I don't see any
other way of doing this, since not only am I replacing a physical device
with a new physical device (the failed disk replaced by the new disk), I'm
also replacing a logical device with another logical device (the hardware
RAID 0 with the software RAID 0).  If I can get a static AMANDA restore
client, I may also end up switching from RAID 0 to RAID 5, since I've
found that whenever I don't have redundancy, I end up wanting it anyway,
despite the reduction in disk space :)

If anyone has any thoughts on any of this, I'd appreciate hearing them.
I've used software RAID with excellent success, in terms of both stability
and speed, with the 2.2.x kernels at another job, so I'm confident in its
suitability for my task.

Thanks,
Kevin
-- 
Kevin M. Myer
Systems Administrator
Lancaster-Lebanon Intermediate Unit 13
(717)-560-6140



