Sounds to me like something is wrong. On my 20-disk backup machine, with
20 x 1TB disks in a single raidz2 vdev, I get the following with dd on
sequential reads/writes:

writes:

r...@opensolaris: 11:36 AM :/data# dd bs=1M count=100000 if=/dev/zero
of=./100gb.bin
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 233.257 s, 450 MB/s

reads:

r...@opensolaris: 11:44 AM :/data# dd bs=1M if=./100gb.bin of=/dev/null
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 131.051 s, 800 MB/s

zpool iostat <pool> 10

gives me about the same values that dd reports. Maybe you have a bad
drive somewhere? Which Areca controller are you using? Some of them let
you pull the SMART info off the drives from a Linux boot CD, which would
be a quick way to spot a failing disk.
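
If your card is one of the models smartmontools knows about, something
like this from a Linux boot CD should work. This is only a sketch -- the
/dev/sg0 device path and the port numbers after "areca," are assumptions
you would need to adjust for your controller and drive slots:

# smartctl -a -d areca,1 /dev/sg0    (drive in Areca port 1)
# smartctl -a -d areca,2 /dev/sg0    (drive in Areca port 2, and so on)

That dumps the SMART attributes and error counters for each drive behind
the controller, which should make a failing disk stand out.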

On 06/18/2010 02:33 AM, Curtis E. Combs Jr. wrote:
> Yea. I did bs sizes from 8 to 512k with counts from 256 on up. I just
> added zeros to the count, to try to test performance for larger files.
> I didn't notice any difference at all, either with the dtrace script
> or zpool iostat. Thanks for your help, btw.
>
> On Fri, Jun 18, 2010 at 5:30 AM, Pasi Kärkkäinen <pa...@iki.fi> wrote:
>   
>> On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
>>     
>>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
>>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up 
>>> to 40 very rarely.
>>>
>>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>>> zpool iostat 2 in another ssh session while making files of various sizes.
>>>
>>>       
>> Yep, dd will generate sequential IO.
>> Did you specify blocksize for dd? (bs=1024k for example).
>>
>> By default dd does tiny 512-byte IOs, which won't be very fast.
>>
>> -- Pasi
>>
>>     
>>> This is a test system. I'm wondering, now, if I should just reconfigure 
>>> with maybe 7 disks and add another spare. Seems to be the general consensus 
>>> that bigger raid pools = worse performance. I thought the opposite was 
>>> true...
>>> --
>>> This message posted from opensolaris.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
