RAID10 far (f2) read throughput on random and sequential / read-ahead

2008-02-21 Thread Nat Makarevitch
'md' performs wonderfully. Thanks to every contributor! I pitted it against a 3ware 9650 and 'md' won on nearly every count (albeit for sequential I/O on RAID5 the 3ware wins by a wide margin): http://www.makarevitch.org/rant/raid/#3wmd On RAID10 f2 a small read-ahead reduces the throughput on sequential reads ...
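A minimal sketch of how one might reproduce the effect, assuming an md RAID10 f2 array at /dev/md0 (the device name and read-ahead values are illustrative; blockdev counts read-ahead in 512-byte sectors):

    # Check the current read-ahead of the array (in 512-byte sectors)
    blockdev --getra /dev/md0

    # Small read-ahead: drop caches, then time a streaming read
    blockdev --setra 256 /dev/md0
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/md0 of=/dev/null bs=1M count=4096

    # Large read-ahead: repeat the same measurement and compare MB/s
    blockdev --setra 8192 /dev/md0
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/md0 of=/dev/null bs=1M count=4096

Note the buffered reads: O_DIRECT would bypass the kernel's read-ahead entirely, which is why the caches are dropped instead.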

Re: How many drives are bad?

2008-02-21 Thread Peter Rabbitson
Peter Grandi wrote: > In general, I'd use RAID10 (http://WWW.BAARF.com/), RAID5 in ... Interesting movement. What do you think is their stance on Raid Fix? :)

Re: How many drives are bad?

2008-02-21 Thread pg_mh
>>> On Thu, 21 Feb 2008 13:12:30 -0500, Norman Elton <[EMAIL PROTECTED]> said: [ ... ] normelton> Assuming we go with Guy's layout of 8 arrays of 6 drives (picking one from each controller) ... Guy Watkins proposed another one too: «Assuming the 6 controllers are equal, I would m...
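For reference, a sketch of that one-drive-per-controller layout, assuming the sda-sdh-per-controller naming used in this thread; the RAID level and array name are illustrative, and only the first of the eight arrays is shown:

    # Array 0 takes the first disk of each of the 6 controllers, so a
    # failed controller removes only one disk from each array.
    mdadm --create /dev/md0 --level=5 --raid-devices=6 \
          /dev/sda /dev/sdi /dev/sdq /dev/sdy /dev/sdag /dev/sdao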

Re: How many drives are bad?

2008-02-21 Thread Norman Elton
Pure genius! I wonder how many Thumpers have been configured in this well-thought-out way :-). I'm sorry I missed your contributions to the discussion a few weeks ago. As I said up front, this is a test system. We're still trying a number of different configurations, and are learning how best ...

Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-21 Thread Peter Grandi
>> This might be related to raid chunk positioning with respect to LVM chunk positioning. If they interfere there indeed may be some performance drop. Best to make sure that those chunks are aligned. > Interesting. I'm seeing a 20% performance drop too, with default RAID and LVM ...
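One way to check and enforce that alignment, as a sketch: assume a RAID5 of 5 disks with 64 KiB chunks, i.e. a 256 KiB full stripe (all values illustrative):

    # Show where LVM's data area starts on the PV; for aligned I/O it
    # should be a multiple of the full stripe width (here 256 KiB)
    pvs -o +pe_start /dev/md0

    # The default metadata sizing may leave pe_start at 192 KiB; asking
    # for 250 KiB gets rounded up so data starts on a 256 KiB boundary
    pvcreate --metadatasize 250k /dev/md0

    # Keep the extent size a multiple of the stripe width as well
    vgcreate -s 4m vg0 /dev/md0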

Re: How many drives are bad?

2008-02-21 Thread Peter Grandi
>>> On Tue, 19 Feb 2008 14:25:28 -0500, "Norman Elton" <[EMAIL PROTECTED]> said: [ ... ] normelton> The box presents 48 drives, split across 6 SATA controllers. So disks sda-sdh are on one controller, etc. In our configuration, I run a RAID5 MD array for each controller ...
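As a sketch, that per-controller configuration would look something like this (device names per the mapping above; the array name is illustrative):

    # All 8 disks of the first controller form one RAID5 array, so a
    # controller failure takes down the entire array at once.
    mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]

That single point of failure is what the one-drive-per-controller layouts discussed in this thread avoid.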

Re: suns raid-z / zfs

2008-02-21 Thread Mario 'BitKoenig' Holbe
Keld Jørn Simonsen <[EMAIL PROTECTED]> wrote: > On Mon, Feb 18, 2008 at 09:51:15PM +1100, Neil Brown wrote: >> Recovery after a failed drive would not be an easy operation, and I cannot imagine it being even close to the raw speed of the device. > I thought this was a problem with most raid types ...