Hi, 

I did a quick test (because I was curious, too). The hardware was a 3-disk SATA 
RAID-Z1.

What I did: 

1) Created a pool with NexentaStor 3.0.4 (pool version 26, RAID-Z1 with 3 disks)
2) Disabled all caching (primarycache=none, secondarycache=none) to force media 
access
3) Copied and extracted a recent Linux kernel tree to generate a 
metadata-intensive workload (lots of small files)
4) Copied the extracted kernel tree 10 times (command sketch below)
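
For reference, here is roughly what steps 1-4 look like as commands. Treat it 
as a sketch: the device names, the /volumes/mypool mountpoint, and the kernel 
version are placeholders, not the exact ones I used.

  # 1) create the 3-disk RAID-Z1 pool (NexentaStor 3.0.4, pool version 26)
  zpool create mypool raidz1 c0t0d0 c0t1d0 c0t2d0

  # 2) disable ARC and L2ARC caching so every read hits the media
  zfs set primarycache=none mypool
  zfs set secondarycache=none mypool

  # 3) extract a recent kernel tree onto the pool (lots of small files)
  tar xjf linux-2.6.38.tar.bz2 -C /volumes/mypool

  # 4) copy the extracted tree 10 times
  for i in 1 2 3 4 5 6 7 8 9 10; do
    cp -r /volumes/mypool/linux-2.6.38 /volumes/mypool/linux-copy-$i
  done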

Then I booted into Solaris 11 and did: 

5) Ran "time du -sh ." on the dataset three times and averaged the results
6) Upgraded the pool to version 31
7) Rewrote the data (repeated steps 3 and 4)
8) Measured the time again (average of three runs, as in step 5; sketch below)
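
On the Solaris 11 side, again as a sketch (same placeholder names as above):

  # 5) walk all the metadata; run three times, average the "real" times
  cd /volumes/mypool
  time du -sh .

  # 6) upgrade the pool to the newest version the OS supports (31 here)
  zpool upgrade mypool
  zpool get version mypool

  # 7) delete the copies and repeat steps 3 and 4, so data and metadata
  #    get rewritten under pool version 31
  rm -rf /volumes/mypool/linux-*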

I did see a ~13% improvement. 

Here are the numbers: 

Pool Version 26: 
-------------------

r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m51.509s
user    0m1.178s
sys     0m27.115s
r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m55.953s
user    0m1.128s
sys     0m25.510s
r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m48.442s
user    0m1.096s
sys     0m24.405s

= ~112 sec average

Pool Version 31: 
----------------

r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m30.376s
user    0m1.049s
sys     0m21.775s

r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m45.745s
user    0m1.105s
sys     0m24.739s

r...@solaris11:/volumes/mypool# time du -sh .
3.3G    .

real    1m38.199s
user    0m1.093s
sys     0m24.096s

= ~98 sec average

This means about 14 seconds faster, which works out to a ~12.5% improvement 
over the version 26 average:
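
Working from the rounded "real" times above:

  v26: (111.5 + 116.0 + 108.4) / 3 ≈ 112.0 sec
  v31: ( 90.4 + 105.7 +  98.2) / 3 ≈  98.1 sec

  112.0 - 98.1 ≈ 14 sec, and 14 / 112 ≈ 12.5%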

I would expect even larger gains on wider RAID-Z and RAID-Z2 arrays.

Regards, 
Robert