Sure. And hey, maybe I just need some context to know what's "normal" I/O for the zpool. It just...feels...slow, sometimes. It's hard to explain. I attached a log of iostat -xn 1 while doing mkfile 10g testfile on the zpool, as well as your dd with the bs set really high. When I Ctrl-C'd the dd it said 460M/sec... Like I said, maybe I just need some context...
On Fri, Jun 18, 2010 at 5:36 AM, Arne Jansen <sensi...@gmx.net> wrote:
> artiepen wrote:
>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
>> 40 very rarely.
>>
>> As far as random vs. sequential: correct me if I'm wrong, but if I used dd
>> to make files from /dev/zero, wouldn't that be sequential? I measure with
>> zpool iostat 2 in another ssh session while making files of various sizes.
>>
>> This is a test system. I'm wondering, now, if I should just reconfigure with
>> maybe 7 disks and add another spare. The general consensus seems to be that
>> bigger raid pools = worse performance. I thought the opposite was true...
>
> A quick test on a system with 21 1TB SATA drives in a single
> RAIDZ2 group shows a performance of about 400MB/s with a
> single dd, blocksize=1048576. Creating a 10G file with mkfile
> takes 25 seconds as well.
> So I'd say there is basically nothing wrong with the zpool
> configuration. Can you paste some "iostat -xn 1" output while
> your test is running?
>
> --Arne

--
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
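For context, the sequential-write test described above can be reproduced roughly as follows. This is a hedged sketch, not the exact commands from the thread: the target path /tank/ddtest.bin is an assumed zpool mountpoint (substitute your own), and the count here is kept small so the sketch runs anywhere. Note that without a sync, ZFS may absorb a short write entirely in the ARC, so small runs can report unrealistically high throughput; the thread's 10G runs are large enough to avoid most of that.

```shell
# Sequential write benchmark, as in Arne's test: dd from /dev/zero with
# blocksize=1048576 (1 MiB). TARGET is an assumption -- point it at a file
# on your zpool. dd prints the records copied (and, with GNU dd, the
# throughput) on stderr when it finishes or is interrupted.
TARGET=${TARGET:-/tmp/ddtest.bin}
dd if=/dev/zero of="$TARGET" bs=1048576 count=64 2>&1
rm -f "$TARGET"
```

While that runs, a second session would watch per-device latency and utilization with "iostat -xn 1" (per-disk asvc_t and %b are the usual tells for a slow member dragging a raidz group down), or aggregate pool throughput with "zpool iostat 2".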
Attachment: tests.gz (GNU Zip compressed data)
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss