> iostat -cxnz 1 201 gives the following indication:
> 
> cpu
> us sy wt id
> 22 78 0 0
> extended device statistics
> r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
> 0.0 43.0 0.0 159.5 0.0 0.1 0.1 2.3 0 10 c1d0
> 0.0 43.0 0.0 159.5 0.0 0.1 0.1 2.0 0 8 c2d0

What is your exact system setup, and what test are you running?

> So, I guess that the high cpu usage combined with low
> disk utilization (%b) could indicate something,
> right? Am I looking at kernel driver issues or what?
> I'm using an Asus M2NPV-VM motherboard with an Nforce
> chipset.


Well, I have the same Asus M2NPV-VM mainboard, and
a zfs mirrored pool with two S-ATA drives:

% zpool status files
  pool: files
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        files       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4d0s6  ONLINE       0     0     0
            c5d0s6  ONLINE       0     0     0

errors: No known data errors


Creating a 4.8GB file on that zpool with...

% time mkfile 4800m foobar
0.01u 2.51s 0:04.46 56.5%


... shows iostat rates that are similar to the numbers you've got:

     cpu
 us sy wt id
  1 54  0 44
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  103.1    0.0  365.8  0.1  0.1    0.8    1.2   1  11 c4d0
    0.0  100.1    0.0  365.8  0.1  0.2    1.0    1.5   2  13 c5d0
     cpu
 us sy wt id
  0 64  0 35
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   99.1    0.0  378.2  0.1  0.1    0.6    1.2   1  11 c4d0
    0.0   98.1    0.0  377.2  0.1  0.1    0.6    1.3   1  12 c5d0
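
You can also watch the pool-level write rate during the mkfile run
with zpool iostat, e.g. (using the pool name "files" from above):

% zpool iostat files 1

which should show the same small trickle of data actually reaching
the pool.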


The "mistake" with this "benchmark" is that zfs compresses away
all the null bytes; not that the file consumes just a single disk block,
although it has a file size of 4.8gbytes:

% du -h foobar 
   0K   foobar
% du  foobar
1       foobar
% ls -l foobar 
-rw-------   1 jk       usr      5033164800 Apr  3 11:16 foobar

All that zfs has to write are a few file metadata updates (modification
time updates, file size updates), and that results in only a few hundred
kilobytes written to disk per second.
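
You can check whether compression is enabled on the dataset with
zfs get (again using the dataset name "files" from above):

% zfs get compression files

With compression turned off, mkfile's zero-filled blocks would
actually be written out, and the disk utilization would look quite
different.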


Note also that when you compute the number of bytes written
per second by mkfile, you get a transfer rate at the filesystem
level of > 1 Gbyte/sec!

% bc
scale=3        
5033164800/4.46
1128512286.995


If you want to measure / verify SATA performance, run the
"iostat -cxnz 1 201" command in one window, and at the same
time read from the raw disk device, using something like

    dd if=/dev/rdsk/c1d0p0 of=/dev/null bs=32k
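
(Adjust the device name to match the disk you want to test; on x86
the p0 device covers the whole disk.) If you also want a write test
through zfs that isn't distorted by compression, write non-compressible
data instead of null bytes, for example (just a sketch; the path and
sizes are placeholders, and /dev/urandom itself can become the
bottleneck):

    dd if=/dev/urandom of=/files/random.dat bs=128k count=8192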


When I run such a test on the Asus M2NPV-VM, I see 

     cpu
 us sy wt id
  1 12  0 87
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 2407.0    0.0 77024.6    0.0  0.0  0.8    0.0    0.3   4  84 c4d0
     cpu
 us sy wt id
  0 11  0 88
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 2427.0    0.0 77664.3    0.0  0.0  0.8    0.0    0.3   4  85 c4d0
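
The kr/s column (Kbytes read per second) translates to roughly
75 Mbytes/sec of sustained sequential reads from the raw device:

% bc
scale=3
77024.6/1024
75.219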
 
 