Hi,

I'm facing a problem similar to Ross's, but still haven't found a solution. I have a raidz pool of 9 SATA disks connected to the internal and 2 external SATA controllers. Bonnie++ gives me the following results:

nexenta,8G,104393,43,159637,30,57855,13,77677,38,56296,7,281.8,1,16,26450,99,+++++,+++,29909,93,24232,99,+++++,+++,13912,99

while running on a single disk it gives me:

nexenta,8G,54382,23,49141,8,25955,5,58696,27,60815,5,270.8,1,16,19793,76,+++++,+++,32637,99,22958,99,+++++,+++,10490,99

The performance difference between the two seems too small for a 9-disk raidz. I checked zpool iostat -v during the bonnie++ intelligent-write phase, and every time it looks more or less like this:
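For reference, the runs look essentially like this (the dataset path and user below are placeholders, not my exact setup):

  # sequential write/read benchmark; -s 8g = 2x my 4G of RAM,
  # so the ARC cannot cache the whole working set
  bonnie++ -d /iTank/bench -s 8g -u nobody

  # meanwhile, in another terminal, sample per-vdev activity
  # every 5 seconds instead of the averages since import
  zpool iostat -v iTank 5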
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
iTank       7.20G  2.60T     12     13  1.52M  1.58M
  raidz1    7.20G  2.60T     12     13  1.52M  1.58M
    c8d0        -      -      1      1   172K   203K
    c7d1        -      -      1      1   170K   203K
    c6t0d0      -      -      1      1   172K   203K
    c8d1        -      -      1      1   173K   203K
    c9d0        -      -      1      1   174K   203K
    c10d0       -      -      1      1   174K   203K
    c6t1d0      -      -      1      1   175K   203K
    c5t0d0s0    -      -      1      1   176K   203K
    c5t1d0s0    -      -      1      1   176K   203K

As far as I understand it, each vdev executes only 1 I/O at a time. On a single device, however, zpool iostat -v gives me the following:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       5.47G   181G      3      3   441K   434K
  c7d0s0    5.47G   181G      3      3   441K   434K
----------  -----  -----  -----  -----  -----  -----

In this case the device performs 3 I/Os at a time, which gives it much higher bandwidth per device. Is there any way to increase the I/O count for my iTank zpool?

I'm running OS-11.2008 on an MSI P45 Diamond with 4G of memory.

Best Regards,
Dmitry
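The only related knob I've found so far is zfs_vdev_max_pending, which I've seen described as the per-vdev queue-depth tunable. A sketch of inspecting and changing it, assuming an OpenSolaris-era kernel; the value 35 is only an example, not a recommendation for my hardware:

  # show the current per-vdev queue depth (decimal)
  echo "zfs_vdev_max_pending/D" | mdb -k

  # change it on the running kernel; 0t marks a decimal value
  echo "zfs_vdev_max_pending/W0t35" | mdb -kw

  # or make it persistent across reboots via /etc/system:
  #   set zfs:zfs_vdev_max_pending = 35

My understanding is that this mainly matters for disks and controllers that can actually queue multiple commands (NCQ/TCQ), so I'm not sure it applies to my setup.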