Hello,
I am running the SPEC SFS benchmark [1] on a dual Xeon 2.80GHz box with 4GB of memory.
More details:
snv_56, zil_disable=1, zfs_arc_max = 0x80000000 #2GB
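For completeness, the ZFS tunables above were set through /etc/system entries roughly like these (a sketch of the usual mechanism; the values are the ones listed above):

  * /etc/system (sketch)
  set zfs:zil_disable = 1
  set zfs:zfs_arc_max = 0x80000000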
Configurations that were tested: 
160 dirs/1 zfs/1 zpool/4 SAN LUNs 
160 zfs'es/1 zpool/4 SAN LUNs
40 zfs'es/4 zpools/4 SAN LUNs
In the single-pool configurations, one zpool was created across the 4 SAN LUNs. The SAN storage array used does not honor cache flush commands.
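For the single-pool configurations, the pool and filesystems were created roughly as follows (the pool name and device names are placeholders, not the actual ones; the loop corresponds to the 160-filesystem variant):

  # one zpool striped across the 4 SAN LUNs (placeholder device names)
  zpool create sfspool c2t0d0 c2t1d0 c2t2d0 c2t3d0
  # 160 ZFS filesystems in that pool
  i=0
  while [ $i -lt 160 ]; do
      zfs create sfspool/fs$i
      i=`expr $i + 1`
  done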
NFSD_SERVERS=1024; NFSv3 over UDP was used.
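On the server side the NFS setup was roughly as below (a sketch; NFSD_SERVERS lives in /etc/default/nfs, sharenfs is inherited by the child filesystems, and the v3/UDP choice is made by the SFS load generators when they mount):

  # /etc/default/nfs
  NFSD_SERVERS=1024
  # export everything in the pool over NFS (pool name as in the sketch above)
  zfs set sharenfs=on sfspool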
Maximum number of SPEC NFS IOPS obtained: 5K.
Maximum number of SPEC NFS IOPS obtained earlier on an SVM/VxFS configuration: 24K [2].
So there is almost a five-fold difference. Can this be improved? How can we speed up this NFS/ZFS setup?
Two serious problems were observed:
1. Degradation of results across repeated runs of the same setup: the first run gave 4030 IOPS, the second run only 2037 IOPS.
2. When 4 zpools were used instead of 1, the result degraded by roughly a factor of 4.

The benchmark report shows an abnormally high share of [b]readdirplus[/b] operations, which reached 50% of the test time, while their share of the SFS operation mix is only 9%. Does this point to a known problem? Increasing the DNLC size does not help in the ZFS case; I checked this.
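For what it's worth, the readdirplus share is visible in the server-side op counters (nfsstat -s), and the DNLC increase was an /etc/system change roughly along these lines (the ncsize value shown is only an illustrative example):

  * /etc/system, larger DNLC (example value)
  set ncsize = 262144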
I would appreciate your help very much. This testing is part of the preparation for a production deployment. I will provide any additional information that may be needed.

Thank you,
[i]-- leon[/i]

[1] http://www.spec.org/osg/sfs/
[2] http://napobo3.blogspot.com/2006/08/spec-sfs-bencmark-of-zfsufsvxfs.html
[3] http://www.opensolaris.org/jive/thread.jspa?threadID=23263