On 12/12/10 04:48 AM, Stephan Budach wrote:
Hi,
on Friday I received two of my new FC RAIDs, which I intend to use
as my new zpool devices. The units are from CiDesign, model
iR16FC4ER. They are FC RAIDs that also allow JBOD operation, which
is what I chose, so I configured 16 RAID groups on each unit and
attached them to their FC channel one by one.
On my Sol11Expr host I created a zpool of mirror vdevs, picking one
disk from each RAID so that every mirror spans both units. This way
I got a zpool that looks like this:
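In sketch form, the pool would have been built along these lines
(the device names are placeholders and the pool name tank is my own;
each mirror pairs one disk from each chassis):

    zpool create tank \
        mirror c1t0d0 c2t0d0 \
        mirror c1t1d0 c2t1d0 \
        mirror c1t2d0 c2t2d0
    # ...and so on through the remaining 13 mirror pairs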
At first I disabled all write-cache and read-ahead options for each
RAID group, since I wanted to give ZFS as much control over the
drives as possible, but performance was quite poor. I am running
this zpool on a Sun Fire X4170 M2 with 32 GB of RAM, so I ran
bonnie++ with -s 63356 -n 128 and got these results:
Sequential Output:
    char: 51819 K/s
    block: 50602 K/s
    rewrite: 28090 K/s
Sequential Input:
    char: 62562 K/s
    block: 60979 K/s
Random seeks: 510 /s <- this seems really low to me, doesn't it?
Sequential Create:
    create: 27529 files/s
    read: 172287 files/s
    delete: 30522 files/s
Random Create:
    create: 25531 files/s
    read: 244977 files/s
    delete: 29423 files/s
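For reference, the full invocation would have looked something like
this (the benchmark directory is my assumption; the flags are as
quoted above):

    # 63356 MB test file, roughly twice the 32 GB of RAM so the
    # page cache cannot hold it; -n 128 means 128*1024 small files
    bonnie++ -d /tank/bench -s 63356 -n 128
    # add -u <user> when running as root; bonnie++ refuses to run
    # as root without it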
The closest I have by way of comparison is an old Thumper with a
stripe of 9 mirrors:
Sequential Output:
    char: 206479 K/s
    block: 601102 K/s
    rewrite: 218089 K/s
Sequential Input:
    char: 138945 K/s
    block: 702598 K/s
Random seeks: 1970 /s
That's getting on for an order of magnitude better on I/O.
Is there anything else I can check?
Running iostat during the tests is recommended elsewhere.
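As a sketch (the pool name tank is my assumption), watching both the
per-device and pool-level views in a second terminal while bonnie++
runs should show whether the load is spread evenly across both
chassis:

    # extended per-device statistics, 5-second samples
    iostat -xn 5
    # per-vdev view of the same traffic
    zpool iostat -v tank 5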
--
Ian.