On Mon, Nov 23, 2009 at 1:09 PM, Tom Buskey <[email protected]> wrote:

>> http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance:
>> "... known as the write hole ..."
>
> Thanks for clarifying that for me.
For the record, there doesn't seem to be any universal agreement as to
what "RAID 5 write hole" actually means.

> I'm corrected.  RAID 5 will always have to do parity and RAID 0 does not.

I find it's a good idea to keep the definitions in mind when talking
about this stuff.  The numbered levels tend to confuse things.  In
discussions, I usually re-state the definitions regularly for this
reason.

  RAID 0  = striping without redundancy
  RAID 1  = mirroring
  RAID 5  = striping with distributed parity
  RAID 6  = striping with double distributed parity
  RAID 10 = striped mirrors

I've left out the RAID levels nobody uses, like 3 and 0+1, and the
higher-numbered levels, which are ill-defined.

> I once replaced my 120 GB drives with 500 GB drives to increase the pool.
> It didn't seem slow to me, but..

A big part of rebuild performance is the load on the system.  If the
system has a light I/O load, there's enough spare capacity that the
rebuild overhead won't be much noticed.

The same goes for the performance of software vs. hardware RAID.  Most
people run their I/O benchmarks with zero load, because they want
controlled conditions.  However, software RAID robs resources from the
host, so load matters.  Many systems have resources to spare, so this
is okay.  But on a heavily loaded system, it may matter.  Say we're
talking about members of a database server cluster backing a busy web
site.  CPU, RAM, and I/O will all be in demand.  You may not want to
give up resources to that.  OTOH, it may be cheaper to buy another
cluster member than to upgrade your storage controller.  But OTOOH,
before adding another server, don't forget to consider the impact on
your rack space budget, power budget, heat budget, and the
environment.  "You can't do just one thing."

-- Ben

_______________________________________________
gnhlug-discuss mailing list
[email protected]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
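P.S. Since the definitions came up: the "distributed parity" in RAID 5 is
just a per-byte XOR across the data chunks in a stripe, which is why losing
any single disk is recoverable.  Here's a minimal toy sketch of the idea in
Python (hypothetical helper name; real implementations work on fixed-size
blocks and rotate the parity position from stripe to stripe, which is
glossed over here):

```python
def xor_chunks(chunks):
    """XOR a list of equal-length byte strings together, byte by byte."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# A stripe across three data disks, plus its parity chunk:
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_chunks(data)

# "Lose" disk 1 and rebuild its chunk from the survivors plus parity:
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This is also why a rebuild is expensive: reconstructing one disk means
reading every surviving disk for every stripe, which is exactly the extra
I/O that competes with production load.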
