Curtis E. Combs Jr. wrote:
> Sure. And hey, maybe I just need some context to know what's "normal"
> IO for the zpool. It just...feels...slow, sometimes. It's hard to
> explain. I attached a log of iostat -xn 1 while doing mkfile 10g
> testfile on the zpool, as well as your dd with the bs set really high.
> When I Ctrl-C'ed the dd it said 460M/sec... like I said, maybe I just
> need some context...
> 

These iostats don't match the creation of any large files. What are
you doing there? It looks more like 512-byte random writes... Are you
generating the load locally or remotely?
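
For a clean baseline, something like the following helps (a sketch; the
/tank mount point is an assumption, substitute your pool's path). One
session runs a single sequential writer while a second logs the disks:

  # session 1: one sequential writer, 1MB blocks, 10GB total
  dd if=/dev/zero of=/tank/testfile bs=1048576 count=10240

  # session 2: per-device stats, 1-second intervals, 60 samples
  iostat -xn 1 60 > /var/tmp/iostat.log

If the log still shows mostly tiny writes at the device level while
only the dd is running, the load has to be coming from somewhere else.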

> 
> On Fri, Jun 18, 2010 at 5:36 AM, Arne Jansen <sensi...@gmx.net> wrote:
>> artiepen wrote:
>>> 40MB/sec is the best it gets. Really, the average is 5. I see 4, 5, 2, 
>>> and 6 almost 10x as often as I see 40MB/sec. It only bumps up to 40 
>>> very rarely.
>>>
>>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>>> zpool iostat 2 in another ssh session while making files of various sizes.
>>>
>>> This is a test system. I'm wondering, now, if I should just reconfigure 
>>> with maybe 7 disks and add another spare. The general consensus seems to 
>>> be that bigger RAID pools = worse performance. I thought the opposite was 
>>> true...
>> A quick test on a system with 21 1TB SATA drives in a single
>> RAIDZ2 group shows a performance of about 400MB/s with a
>> single dd, blocksize=1048576. Creating a 10GB file with mkfile
>> also takes about 25 seconds (the commands are sketched below).
>> So I'd say basically there is nothing wrong with the zpool
>> configuration. Can you paste some "iostat -xn 1" output while
>> your test is running?
>>
>> --Arne
>>
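
For reference, the quick test above was essentially the following, with
the paths from my test box (ptime is only there to report the elapsed
time):

  # single dd, 1MB blocks, 10GB total: ~400MB/s here
  ptime dd if=/dev/zero of=/testpool/dd.out bs=1048576 count=10240

  # same size via mkfile: about 25 seconds on the 21-disk RAIDZ2
  ptime mkfile 10g /testpool/mkfile.out

A single local writer like this should land in the same ballpark on a
healthy pool of comparable size.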