On 2013-04-11, David C. Miller <mille...@fusion.gat.com> wrote:
>
> Just for reference, I have a 24 x 2TB SATA III array using CentOS 6.4 Linux MD
> RAID6, with two of those 24 disks as hot spares. The drives are in a Supermicro
> external SAS/SATA box connected to another Supermicro 1U computer with an
> i3-2125 CPU @ 3.30GHz and 16GB RAM. The connection is via a 6Gbit mini SAS
> cable to an LSI 9200 HBA. Before I deployed it into production, I tested how
> long it would take to rebuild the RAID from one of the hot spares, and it took
> a little over 9 hours.

I did a similar test on a 3ware controller.  Apparently those cards have
a feature that tracks which sectors the controller has actually written,
so that a rebuild only has to reconstruct those sectors.  This greatly
reduces rebuild time on a mostly empty array, but it also means that a
meaningful test should nearly fill the array before attempting a rebuild.
I definitely saw rebuild times grow as I filled the array.  (In 3ware/LSI
world this is sometimes called "rapid RAID recovery".)

In checking my archives, it looks like a rebuild on an almost full 50TB
array (24 disks) took about 16 hours.  That's still pretty respectable.
I didn't repeat the experiment, unfortunately.
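
Purely as a sanity check (and assuming ~2TB members in both cases, which
is roughly what each setup describes), the implied sustained rebuild
rate is easy to estimate:

# Implied sustained write rate to the rebuilt disk, assuming ~2 TB members.
DISK_BYTES = 2 * 10**12

for label, hours in [("MD RAID6, mostly empty, ~9 h", 9),
                     ("3ware, nearly full, ~16 h", 16)]:
    rate_mb_s = DISK_BYTES / (hours * 3600) / 10**6
    print(f"{label}: ~{rate_mb_s:.0f} MB/s")
# Works out to roughly 62 MB/s and 35 MB/s, which seems plausible for
# arrays that are also serving reads from the surviving members.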

I don't know if your LSI controller has a similar feature, but it's
worth investigating.

--keith


-- 
kkel...@wombat.san-francisco.ca.us

