On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote:
> Hi.
> 
>     snv_44, v440
> 
> filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
> What is surprising is that the results in both cases are almost the same!
> 
> 6 disks:
> 
>    IO Summary: 566997 ops 9373.6 ops/s, (1442/1442 r/w) 45.7mb/s, 299us cpu/op, 5.1ms latency
>    IO Summary: 542398 ops 8971.4 ops/s, (1380/1380 r/w) 43.9mb/s, 300us cpu/op, 5.4ms latency
> 
> 
> 32 disks:
>    IO Summary: 572429 ops 9469.7 ops/s, (1457/1457 r/w) 46.2mb/s, 301us cpu/op, 5.1ms latency
>    IO Summary: 560491 ops 9270.6 ops/s, (1426/1427 r/w) 45.4mb/s, 300us cpu/op, 5.2ms latency
> 
> Using iostat I can see that with 6 disks in the pool I get about
> 100-200 IO/s per disk, but with the 32-disk pool I get only 30-70
> IO/s per disk.  Each CPU is at about 25% in SYS (there are 4 CPUs).
> 
> Something is wrong here.

It's possible that you are CPU limited.  I'm guessing that your test
uses only one thread; a single thread can only keep a few I/Os
outstanding at a time, so adding spindles does nothing for it, which
would explain why 6 and 32 disks score almost the same.
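
A quick way to rule that out is to raise the thread count and rerun.
A minimal sketch, assuming the stock varmail personality (which
exposes an $nthreads tunable; 16 is just an example value):

    # in the varmail workload file: raise the thread count to test
    # whether a single thread was the bottleneck (16 is arbitrary)
    set $nthreads=16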

We can get a quick idea of where that CPU time is going if you run
'lockstat -kgIW sleep 60' while your test is running and send us the
first 100 lines of output.  The output of 'iostat -xnpc 3' during the
run would be useful as well.
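
For example, something like this while filebench is running (a
minimal sketch; the /tmp paths and the 20-sample count are just
illustrative choices):

    # 60-second kernel CPU profile (-I: interrupt-based profiling;
    # -k/-g/-W aggregate the samples by function and caller)
    lockstat -kgIW sleep 60 > /tmp/lockstat.out
    head -100 /tmp/lockstat.out

    # extended per-device statistics plus CPU usage, every 3 seconds
    iostat -xnpc 3 20 > /tmp/iostat.out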

--matt