On 27/07/2010 6:54 AM, John Almberg wrote:
John Almberg wrote:
If you have a hardware controller with RAID capabilities, using native RAID is better; otherwise look towards gvinum or maybe ccd; see also:
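(For the software route, a minimal gvinum RAID-5 sketch might look like the following, assuming three spare disks that show up as da1..da3 -- the device names and stripe size are illustrative, not from the original post:

  # /tmp/raid5.conf - describe a three-disk RAID-5 volume to gvinum
  drive d1 device /dev/da1
  drive d2 device /dev/da2
  drive d3 device /dev/da3
  volume videos
    plex org raid5 512k
      sd length 0 drive d1
      sd length 0 drive d2
      sd length 0 drive d3

  # Create the volume, then put a filesystem on it
  gvinum create /tmp/raid5.conf
  newfs /dev/gvinum/videos
)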
I've just been reading up on RAID in my Absolute FreeBSD book, and it occurs to me that my client has a SCSI RAID drive chassis that he is using stupidly...

It's a 14-bay chassis, and he's currently got seven 32G drives stuck in it, configured as RAID-0. This is the original 200G volume I was talking about. It's a few years old.

Over the next few years, this guy is going to need lots of storage for his videos.

After a bit of reading, I'm wondering if the best idea might be to toss out those 32G drives and replace them with 3 big (say, 300G) drives configured with RAID-5. It sounds to me like a RAID-5 array can be expanded by adding new drives.

QUESTION: is expansion normally a matter of just plugging in a new drive? Is the new drive automatically grafted onto the old ones? Or do you have to go through a process like backing up the data, plugging in the new drive, reformatting the expanded array, and restoring the data?

I don't know the brand/model of the RAID drive chassis, but the client thinks it can be switched to RAID 5. I'm waiting for the technical details, but I'm assuming it can handle RAID-5 for now.
Answering my own question...

So it's an HP 6402/128 RAID controller. From a quick skim of the manual, it looks like the controller has to go through an 'expansion' process when adding a new drive. This sounds time-consuming, but more or less automatic -- i.e., handled by the controller.

Sounds like this might be the best way to go.
It's been a while since I dealt with HP SCSI RAID, but ISTR that you'd need to install and configure the 3 disks as a RAID 5 set, copy the data from the 7x36GB array to the new array (generally using a temporary mount point and dump | restore), switch the mount points across so that the /videos tree is the new copy, then remove the RAID 0 set from the controller.
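Roughly, assuming the new RAID 5 set appears as da1 and the videos live under /videos (the device name is illustrative):

  # New filesystem on the new array (destroys anything already on da1)
  newfs /dev/da1

  # Mount it at a temporary mount point and copy the tree across
  mkdir -p /mnt/new
  mount /dev/da1 /mnt/new
  cd /mnt/new && dump -0 -L -a -f - /videos | restore -r -f -

  # Then edit /etc/fstab so /videos points at the new device,
  # unmount both, and remount the new copy at /videos.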

You may or may not find that the RAID controller changes LUN IDs after a cold start, too, so LUN 1 (the new RAID 5) suddenly becomes LUN 0 on the first cold start after the old RAID set is decommissioned and pulled. This is often accompanied by a heart attack on the part of the person restarting the server.
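One way to sidestep that (my suggestion, nothing the controller does for you) is to give the filesystem a label and mount by label, so /etc/fstab doesn't care about the device number:

  # Give the (unmounted) UFS filesystem a volume label
  tunefs -L videos /dev/da1

  # Mount via the label, which survives LUN/device renumbering;
  # in /etc/fstab:
  # /dev/ufs/videos  /videos  ufs  rw  2  2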

After that, though, expansion is a cinch -- but it will be quite slow, since it needs to read and write the entire contents of all the disks. I'd therefore go with as many spindles as you can - 3, 5 and 9 disks are what I recall as being optimal group sizes for RAID 5.
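Bear in mind the OS side too: once the controller finishes expanding, the UFS filesystem is still its old size. A rough sketch of growing it, assuming the filesystem sits directly on da1 with no partition table (otherwise resize the partition first, e.g. with gpart resize):

  # Unmount first; growfs traditionally wants the filesystem offline
  umount /videos

  # Grow the filesystem to fill the enlarged volume, then remount
  growfs /dev/da1
  mount /videos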

Also consider that you can supplement the RAID sets with the BSD tools previously mentioned. Today it's 3 x 300GB. Tomorrow add another 3 x 300 (assuming IOPS is OK) and concatenate the two arrays into a 1.2TB "disk" - 2D+P + 2D+P.
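A rough sketch of that with gconcat, assuming the two RAID 5 volumes show up as da1 and da2 (creating and newfs'ing the concatenated device destroys any existing data, so plan the copy accordingly):

  # Write concat metadata so the device reassembles automatically
  # (needs geom_concat_load="YES" in /boot/loader.conf at boot)
  gconcat label -v videos da1 da2

  # The combined device appears as /dev/concat/videos
  newfs /dev/concat/videos
  mount /dev/concat/videos /videos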

Dave.

--
David Rawling
PD Consulting And Security
Mob: +61 412 135 513
Email: d...@pdconsec.net
