At Sun, 30 Jan 2011 09:01:56 -0600 CentOS mailing list <centos@centos.org> 
wrote:

> 
> 
> On Jan 30, 2011, at 7:36 AM, Robert Heller wrote:
> 
> > At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list <centos@centos.org> 
> > wrote:
> 
> 
> > Many of the SATA (so-called) hardware raid controllers are not really
> > hardware raid controllers; they are 'fakeraid' and require lots of
> > software RAID logic.  You are generally *better off* to *disable* the
> > motherboard RAID controller and use native Linux software RAID.
> 
> The only caveat I can think of is if you wanted to BOOT off of the
> raid configuration.  The BIOS wouldn't understand the Linux RAID
> implementation.

Not really a problem: make /boot its own RAID 1 set.  The BIOS will boot
off /dev/sda and Grub will read /dev/sda1 (typically) to load the kernel
and init ramdisk.  The Linux RAID1 superblock is at the *end* of the
disk -- the ext2/3 superblock is in its normal place, where grub will
see it.  /dev/sda1 and /dev/sdb1 will be kept identical by the Linux
RAID logic, so if /dev/sda dies, it can be pulled and /dev/sdb will
become /dev/sda.  You'll want to replicate the boot loader install on
/dev/sdb (eg grub-install ... /dev/sdb).
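Something along these lines (device names are the usual /dev/sda and
/dev/sdb from above; adjust to your hardware):

```shell
# Install the boot loader on BOTH members of the /boot RAID 1 set, so
# that whichever drive survives is still bootable.
grub-install /dev/sda
grub-install /dev/sdb
```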

> 
> But for RAID 1, especially, you probably want a minimum of 3 drives. 
> A boot drive with Linux, and the other 2 RAIDed together for speed. 
> That way, the logic to handle the failure of one of the drives isn't on
> the drive that may have failed.

No, two drives are just fine.  Even if one drive fails, you can still
boot the RAID set in 'degraded' mode, and then add the replacement disk
to the running system.  Make two partitions on each drive, a small one
for /boot and the rest for everything else; make this second RAID set a
LVM volume group and carve out swap, root (/), /home, etc. as LVM
volumes.
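Roughly like this (device names, volume group name, and sizes are
illustrative assumptions, not taken from my actual setup):

```shell
# Two partitions per drive: sdX1 small (for /boot), sdX2 the rest.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put LVM on the big set and carve out the logical volumes:
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 2G  -n swap vg0
lvcreate -L 4G  -n root vg0
lvcreate -L 10G -n home vg0

# If a drive dies, the arrays keep running degraded.  After swapping in
# the replacement and re-partitioning it, re-add its partitions:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
```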

That is what I have:

sauron.deepsoft.com% cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdb1[1] sda1[0]
      1003904 blocks [2/2] [UU]
      
md1 : active raid1 sdb2[1] sda2[0]
      155284224 blocks [2/2] [UU]
      
unused devices: <none>

sauron.deepsoft.com% df -h /boot/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              965M  171M  746M  19% /boot

sauron.deepsoft.com% sudo /usr/sbin/pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               sauron
  PV Size               148.09 GB / not usable 768.00 KB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              37911
  Free PE               23
  Allocated PE          37888
  PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee

sauron.deepsoft.com% df -h / /usr /var /home
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/sauron-c5root
                      2.0G  905M  1.1G  47% /
/dev/mapper/sauron-c5usr
                      9.9G  4.9G  4.5G  53% /usr
/dev/mapper/sauron-c5var
                      4.0G  1.4G  2.5G  36% /var
/dev/mapper/sauron-home
                      9.9G  8.7G  759M  93% /home

(I have a pile of other file systems.)

> 
> Of course, if it is the Linux drive that failed, you replace that
> (from backup?) and your data should all still be available.
> 
> 
> 
> _______________________________________________
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
> 

-- 
Robert Heller             -- 978-544-6933 / hel...@deepsoft.com
Deepwoods Software        -- http://www.deepsoft.com/
()  ascii ribbon campaign -- against html e-mail
/\  www.asciiribbon.org   -- against proprietary attachments
