Bob Friesenhahn writes:

 > On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
 > >>> What was the interlace on the LUN?
 > >
 > > The question was about LUN interlace, not interface.
 > > 128K to 1M works better.
 > 
 > The "segment size" is set to 128K.  The max the 2540 allows is 512K. 
 > Unfortunately, the StorageTek 2540 and CAM documentation does not 
 > really define what "segment size" means.
 > 
 > > Any compression ?
 > 
 > Compression is disabled.
 > 
 > > Does turning off checksums help the numbers (that would point to
 > > CPU-limited throughput)?
 > 
 > I have not tried that, but this system is loafing during the benchmark.
 > It has four 3GHz Opteron cores.
 > 
 > Does this output from 'iostat -xnz 20' help in understanding the issues?
 > 
 >                      extended device statistics
 >      r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 >      3.0    0.7   26.4    3.5  0.0  0.0    0.0    4.2   0   2 c1t1d0
 >      0.0  154.2    0.0 19680.3  0.0 20.7    0.0  134.2   0  59 c4t600A0B80003A8A0B0000096147B451BEd0
 >      0.0  211.5    0.0 26940.5  1.1 33.9    5.0  160.5  99 100 c4t600A0B800039C9B500000A9C47B4522Dd0
 >      0.0  211.5    0.0 26940.6  1.1 33.9    5.0  160.4  99 100 c4t600A0B800039C9B500000AA047B4529Bd0
 >      0.0  154.0    0.0 19654.7  0.0 20.7    0.0  134.2   0  59 c4t600A0B80003A8A0B0000096647B453CEd0
 >      0.0  211.3    0.0 26915.0  1.1 33.9    5.0  160.5  99 100 c4t600A0B800039C9B500000AA447B4544Fd0
 >      0.0  152.4    0.0 19447.0  0.0 20.5    0.0  134.5   0  59 c4t600A0B80003A8A0B0000096A47B4559Ed0
 >      0.0  213.2    0.0 27183.8  0.9 34.1    4.2  159.9  90 100 c4t600A0B800039C9B500000AA847B45605d0
 >      0.0  152.5    0.0 19453.4  0.0 20.5    0.0  134.5   0  59 c4t600A0B80003A8A0B0000096E47B456DAd0
 >      0.0  213.2    0.0 27177.4  0.9 34.1    4.2  159.9  90 100 c4t600A0B800039C9B500000AAC47B45739d0
 >      0.0  213.2    0.0 27195.3  0.9 34.1    4.2  159.9  90 100 c4t600A0B800039C9B500000AB047B457ADd0
 >      0.0  154.4    0.0 19711.8  0.0 20.7    0.0  134.0   0  59 c4t600A0B80003A8A0B0000097347B457D4d0
 >      0.0  211.3    0.0 26958.6  1.1 33.9    5.0  160.6  99 100 c4t600A0B800039C9B500000AB447B4595Fd0
 > 

Interesting that a subset of 5 disks is responding faster
(which also leads to smaller actv queues and therefore lower
service times) than the other 7.
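
As a sanity check, those iostat columns are self-consistent under
Little's law (outstanding I/Os = IOPS x time spent at the device).
A quick sketch in Python, using the figures above, purely for
illustration:

    # Little's law: actv ~= IOPS * (asvc_t / 1000), so asvc_t can be
    # recovered from the queue depth and the write rate.
    def asvc_ms(iops, actv):
        """Average time an I/O spends at the device, in milliseconds."""
        return actv / iops * 1000.0

    print(asvc_ms(154.2, 20.7))  # ~134 ms: the five "fast" LUNs (asvc_t 134.2)
    print(asvc_ms(211.5, 33.9))  # ~160 ms: the seven "slow" LUNs (asvc_t 160.5)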

....

and the slow ones are subject to more writes... haha.

If the sizes of the LUNs are different (or they have different
amounts of free space), then maybe ZFS is now trying to rebalance
free space by targeting a subset of the disks with more
new data.  Pool throughput will be impacted by this.
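
For what it's worth, a toy model of that effect (a minimal sketch
only, not the real metaslab allocator, and the free-space figures
below are made up): if each new block lands on a device with
probability proportional to its free space, the emptier devices soak
up proportionally more of the new writes until the pool levels out.

    import random

    # Hypothetical free space (GB) for a 12-LUN pool; the 7 LUNs with
    # more free space stand in for the busier devices seen in iostat.
    free_gb = {f"lun{i}": (500 if i < 7 else 300) for i in range(12)}

    def pick_lun():
        """Pick a LUN for the next block, weighted by free space."""
        luns = list(free_gb)
        return random.choices(luns, weights=[free_gb[l] for l in luns])[0]

    writes = {lun: 0 for lun in free_gb}
    for _ in range(100_000):
        writes[pick_lun()] += 1

    # LUNs with more free space take proportionally more of the new
    # writes -- the same kind of skew visible in the w/s column above.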


-r

 > Bob
 > ======================================
 > Bob Friesenhahn
 > [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 > GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
 > 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
