On Fri, Apr 25, 2014 at 8:49 PM, Gora Mohanty <[email protected]> wrote:
> Hi,
>
> Need some advice. I am being told by a hardware person that
> a software RAID (Linux mdadm) array should use disks instead
> of partitions for much improved efficiency.
I have struggled with this question in the past and took the advice of a
colleague who is a FOSS enthusiast and works for a storage vendor in his
day job. He advised that software RAID (mdadm) performance is comparable
to true hardware RAID for RAID1 or RAID10. Beyond that, use a true
hardware RAID controller, not the 'fake' RAID (dmraid module) offered by
many motherboards. You can find plenty of 4/8 port SATA/SAS RAID
controllers on ebay.com for $125 or less, Indian customs duty extra. I
built a 3TB RAID5 array with a used Adaptec 3405 that I picked up for
$45 on eBay. I am very happy with it.

> For the life of me, I
> cannot see why this would be the case, and searching Google
> turns up no such evidence. On the other hand, the difference in
> wasted space would be huge, as we are talking about 3TB disks
> to be used in a RAID-6 array.
>
> Could someone more familiar with software RAID offer advice?

I am not sure I understand your comment about 'wasted' space. RAID5 and
RAID6 use one and two disks' worth of capacity for parity respectively,
and some cards also allow 'hot' spares. You cannot avoid the usable
capacity being N-1 (RAID5) or N-2 (RAID6) disks. The only loss of
storage I see would be the few sectors holding the partition table (plus
any alignment gap) per disk, between using the entire disk versus having
one partition spanning the entire disk capacity -- negligible on a 3TB
drive either way.

Anyway, for RAID6 I suggest going with hardware RAID (Areca, 3ware, LSI,
Adaptec). Most of the hardware RAID OEMs have embraced Linux; drivers
are in the kernel, and they provide utilities for the major distros like
RHEL (CentOS) / Novell SuSE (openSUSE).

HTH,
-- Arun Khan
_______________________________________________
Ilugd mailing list
[email protected]
https://lists.hserus.net/mailman/listinfo/ilugd
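To make the N-1 / N-2 capacity arithmetic concrete, here is a small sketch
(the function name and the six-disk example are mine, not from the thread;
it ignores mdadm/controller metadata overhead):

```python
def usable_capacity(num_disks, disk_size, level):
    """Usable capacity of a RAID array, ignoring metadata overhead.

    RAID5 loses one disk's worth of capacity to parity, RAID6 loses two.
    """
    parity = {"raid5": 1, "raid6": 2}[level]
    if num_disks <= parity:
        raise ValueError("not enough disks for this RAID level")
    return (num_disks - parity) * disk_size

# Six 3 TB disks in RAID6: (6 - 2) * 3 = 12 TB usable
print(usable_capacity(6, 3, "raid6"))
```

So with 3TB disks the only variable is how many disks you buy; whole-disk
versus single-partition members does not change this figure measurably.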
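If you do stay with mdadm for the RAID6, a rough sketch of the setup
(device names /dev/sd[b-g] are hypothetical; run as root, and the same
commands work with partitions such as /dev/sdb1 in place of whole disks):

```shell
# Create a 6-disk RAID6 array from whole disks (hypothetical device names).
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Watch the initial resync progress.
cat /proc/mdstat

# Record the array so it assembles at boot (path is
# /etc/mdadm/mdadm.conf on Debian-family distros).
mdadm --detail --scan >> /etc/mdadm.conf
```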
