A question (well, let's make it three really): First, is vdbench a useful tool 
for testing the performance of a ZFS file system? Second, is ZFS write 
performance really much worse than UFS or VxFS? And third, what is a good 
benchmarking tool for comparing ZFS vs UFS vs VxFS?

The reason I ask is this: I am running the following very simple, write-only 
test scenario with vdbench -
 
sd=ZFS,lun=/pool/TESTFILE,size=10g,threads=8
wd=ETL,sd=ZFS,rdpct=0,seekpct=80
rd=ETL,wd=ETL,iorate=max,elapsed=1800,interval=5,forxfersize=(1k,4k,8k,32k)

For those not familiar with vdbench - this scenario reads as follows:

SD = Storage Definition: a 10 GB file on my ZFS file system, accessed with 8 
concurrent threads, which sets the maximum number of concurrent I/Os for this 
storage definition.

WD = Workload Definition: use the ZFS storage definition above; rdpct=0 means 
100% writes, and seekpct=80 means that 80% of the I/O operations will start at 
a new random seek address.

RD = Run Definition: this is the actual run. We want to run four tests with 
1k, 4k, 8k and 32k transfer sizes, report back every 5 seconds on what's 
happening, and iorate=max says 'have at 'er' - run this thing into the ground 
if possible - since I am not simulating a specific workload per se, but trying 
to see the maximum performance I might get.
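
For the eventual UFS and VxFS comparisons, the intent is to reuse the same 
workload and run definitions and simply point the storage definition at a file 
on the other file system, roughly like this (the /ufs mount point below is 
just a placeholder for wherever that file system ends up mounted):

sd=UFS,lun=/ufs/TESTFILE,size=10g,threads=8
wd=ETL_UFS,sd=UFS,rdpct=0,seekpct=80
rd=ETL_UFS,wd=ETL_UFS,iorate=max,elapsed=1800,interval=5,forxfersize=(1k,4k,8k,32k)

That way the only variable between runs should be the file system underneath 
the test file.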
 
So - I am seeing disheartening results.

My test configuration is as follows: a T2000 with 32 GB of memory, connected 
via 2 x 2 Gb fibre channel links to a 3510 array that has 5 disks assigned to 
the host and is acting only as a JBOD. This configuration (including the 3510) 
is dedicated to testing the performance of ZFS vs SVM with UFS vs VxVM with 
VxFS. Note that at this point the Veritas packages have not been installed 
yet; we do not want any perception that another package might affect the 
performance of any of the tests. This is a Solaris 10 6/06 install with all 
the latest patches and no out-of-the-box optimizations performed; the first 
suite of tests is only intended to show differences in out-of-the-box 
performance and manageability.
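
As a rough sanity check on what this hardware could ever deliver (the 
per-spindle number is only an assumption, not something I have measured on 
these disks):

5 disks x ~50-70 MB/s sequential each   -> roughly 250-350 MB/s aggregate spindle bandwidth
2 x 2 Gb/s FC links                     -> roughly 400 MB/s of combined link bandwidth

so bursts in the 250-300 MB/s range, like the ones in example 1 below, are 
probably close to the practical ceiling for this setup.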

The results I am seeing are this: read performance is acceptable and on par 
with UFS, but when it comes to writes, the system does not appear to be able 
to keep up. I see long stretches where vdbench reports no activity at all, yet 
if I watch zpool iostat during those stretches I do see consistent writing; 
also, if I terminate the test, ZFS will continue writing for about 2-3 minutes 
after all other activity has stopped.
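
For reference, the pool-side view I am watching is just the standard pool 
iostat, with the pool name taken from the SD path above and a 5-second 
interval to match the vdbench reporting interval:

zpool iostat pool 5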

The following is an example of what I see. Example 1 is from the initial file 
create, example 2 is from the actual test run. What is most important to note 
are the lines with big gaping 0's running through them, and the associated 
large response times surrounding them.

If anyone has any information on ZFS performance, it would be appreciated.

 

12:40:59.815 Starting RD=New_file_format_for_sd=ZFS; I/O rate: 5000; Elapsed: 360000 seconds. For loops: xfersize=65536 threads=8

              interval        i/o   MB/sec   bytes   read     resp     resp     resp    cpu%  cpu%
                             rate  1024**2     i/o    pct     time      max   stddev sys+usr   sys
12:41:03.362         1    4858.99   303.69   65536   0.00    1.411   59.843    2.468    21.4  16.7
12:41:04.101         2     878.85    54.93   65536   0.00    2.219   98.016    8.795    13.1   7.5
12:41:05.069         3     147.65     9.23   65536   0.00   74.181 1022.374  232.319     9.7   4.0
12:41:06.056         4     160.97    10.06   65536   0.00   52.750  425.264  105.263     8.0   3.8
12:41:07.085         5    1025.10    64.07   65536   0.00    8.832  492.696   50.995    10.2   5.6
12:41:08.077         6     664.67    41.54   65536   0.00    1.550    2.963    0.511     6.3   5.6
12:41:09.113         7    1740.72   108.79   65536   0.00    6.627 1139.357   75.573    10.9   9.9
12:41:10.160         8     111.72     6.98   65536   0.00   98.972 1426.363  361.047     6.0   3.5
12:41:11.029         9    3888.81   243.05   65536   0.00    1.653    7.255    0.687    20.3  18.0
12:41:12.071        10       0.00     0.00       0   0.00    0.000    0.000    0.000     3.5   3.2
12:41:13.045        11       0.00     0.00       0   0.00    0.000    0.000    0.000     3.4   3.2
12:41:14.075        12       0.00     0.00       0   0.00    0.000    0.000    0.000     3.3   2.8
12:41:15.034        13    4183.62   261.48   65536   0.00    8.080 3230.801  143.611    19.6  19.1
12:41:16.071        14    1238.02    77.38   65536   0.00    6.348  136.478   10.506    12.1   9.9
12:41:17.057        15    1136.13    71.01   65536   0.00    6.214  102.408   11.094    10.1   9.5
12:41:18.053        16     805.20    50.32   65536   0.00    8.028  134.882   16.480     7.9   7.6
12:41:19.040        17       0.00     0.00       0   0.00    0.000    0.000    0.000     3.3   3.2
12:41:20.039        18       0.00     0.00       0   0.00    0.000    0.000    0.000     3.6   3.3
12:41:21.038        19       0.00     0.00       0   0.00    0.000    0.000    0.000     3.3   3.2
12:41:22.035        20       0.00     0.00       0   0.00    0.000    0.000    0.000     2.5   2.3
12:41:23.035        21    3281.30   205.08   65536   0.00   12.456 4476.151  220.323    15.1  14.7
12:41:24.033        22    1030.57    64.41   65536   0.00    3.100   24.685    2.928     6.9   6.3
12:41:25.031        23       0.00     0.00       0   0.00    0.000    0.000    0.000     0.4   0.1
12:41:26.032        24       0.00     0.00       0   0.00    0.000    0.000    0.000     0.1   0.0
12:41:27.032        25       0.00     0.00       0   0.00    0.000    0.000    0.000     0.2   0.1
12:41:28.032        26       0.00     0.00       0   0.00    0.000    0.000    0.000     0.1   0.0
12:41:29.032        27       0.00     0.00       0   0.00    0.000    0.000    0.000     0.1   0.0

 

 

Example 2 is from a test run:

 

20:49:53.300 Starting RD=R2-ETL; I/O rate: Uncontrolled MAX; Elapsed: 1800 seconds. For loops: xfersize=1024

              interval        i/o   MB/sec   bytes   read     resp     resp     resp    cpu%  cpu%
                             rate  1024**2     i/o    pct     time      max   stddev sys+usr   sys
20:50:24.010         1   29696.60    29.00    1024   0.00    0.071    0.846    0.076    13.2  11.5
20:50:54.009         2   32894.71    32.12    1024   0.00    0.389 53786.416  131.545    12.9  11.0
20:51:24.019         3       0.00     0.00       0   0.00    0.000    0.000    0.000     5.5   5.5
20:51:54.009         4       0.00     0.00       0   0.00    0.000    0.000    0.000     5.3   5.3
20:52:24.009         5       0.00     0.00       0   0.00    0.000    0.000    0.000     4.5   4.5
20:52:54.012         6   73450.21    71.73    1024   0.00    0.431 100559.489  191.587    25.1  20.0
20:53:24.029         7   91490.35    89.35    1024   0.00    0.073 1155.887    3.896    29.7  23.1
20:53:54.029         8   36467.79    35.61    1024   0.00    0.114 2475.837    8.169    16.1  13.5
20:54:24.009         9       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.4
20:54:54.010        10       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.3
20:55:24.012        11     168.99     0.17    1024   0.00  161.833 102381.535 4063.826     6.4   6.4
20:55:54.010        12  107970.26   105.44    1024   0.00    0.063    1.328    0.046    33.0  25.5
20:56:24.054        13   99816.33    97.48    1024   0.00    0.063  184.888    0.320    30.6  23.7
20:56:54.009        14       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.4
20:57:24.009        15       0.00     0.00       0   0.00    0.000    0.000    0.000     5.3   5.3
20:57:54.009        16       0.00     0.00       0   0.00    0.000    0.000    0.000     4.4   4.4
20:58:24.011        17   66220.65    64.67    1024   0.00    0.479 102375.134  205.419    22.9  18.3
20:58:54.011        18  116584.14   113.85    1024   0.00    0.057    1.022    0.011    33.1  25.0
20:59:24.009        19   24817.44    24.24    1024   0.00    0.080   62.738    0.224    12.7  11.0
20:59:54.019        20       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.4
21:00:24.019        21       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.4
21:00:54.012        22   24137.65    23.57    1024   0.00    1.221 102831.215  341.815    14.1  12.4
21:01:24.010        23  117132.15   114.39    1024   0.00    0.057    0.636    0.010    33.3  25.3
21:01:54.009        24   65551.35    64.01    1024   0.00    0.065   46.923    0.111    22.2  17.7
21:02:24.009        25       0.00     0.00       0   0.00    0.000    0.000    0.000     5.4   5.4
21:02:54.019        26       0.00     0.00       0   0.00    0.000    0.000    0.000     5.3   5.3
21:03:24.010        27       0.00     0.00       0   0.00    0.000    0.000    0.000     4.8   4.8
21:03:54.009        28  100439.41    98.09    1024   0.00    0.336 102721.762  167.369    33.0  26.1
21:04:24.019        29  106561.14   104.06    1024   0.00    0.062   55.353    0.099    31.9  24.6
21:04:54.009        30       0.00     0.00       0   0.00    0.000    0.000    0.000     5.5   5.5

 

If anyone knows whether this is a vdbench issue (not properly reporting the 
information) or a ZFS issue, I'd appreciate some further insight.

 

Thanks
-Tony
 
 