What I'm saying is that ZFS doesn't play nice with NFS in any of the scenarios I 
could think of:

- A single second disk in a V210 (Sun 72GB), with its write cache on or off, 
gives ~1/3 the performance of UFS when writing files using dd over an NFS mount 
of the same disk.

- Two RAID 5 volumes of 6 spindles each take ~53 seconds to write 1 GB over an 
NFS-mounted ZFS stripe, raidz, or mirror on a StorEdge 6120 array with 
battery-backed cache, zil_disable set, and the write cache off or on. In some 
tests dd would even seem to 'hang'. When any volslice is formatted UFS and 
mounted from the same NFS client, it takes ~17 seconds!
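For reference, the timed writes above were just dd runs against the NFS mount. A 
minimal sketch (the mount point and sizes here are placeholders, not my exact 
commands; it defaults to a temp directory so it runs anywhere):

```shell
# Sketch of the timed-write test described above.
# MNT would normally be an NFS mount of the ZFS (or UFS) filesystem;
# it defaults to a local temp dir here so the sketch runs anywhere.
MNT=${MNT:-$(mktemp -d)}
# COUNT=1024 with bs=1024k reproduces the 1 GB write; the small default
# keeps a dry run cheap.
COUNT=${COUNT:-16}
time dd if=/dev/zero of="$MNT/ddtest" bs=1024k count="$COUNT"
```

Comparing the elapsed time with the same dd against a UFS-formatted slice on the 
same client is how the ~53s vs. ~17s numbers come about.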

We are likely going to just try iSCSI instead, where this behavior doesn't 
occur. At some point, though, we would like to use ZFS-based NFS mounts for 
things; the current difference in performance just scares us!

-Andy


-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Roch - PAE
Sent: Mon 4/23/2007 5:32 AM
To: Leon Koll
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)
 
Leon Koll writes:
 > Welcome to the club, Andy...
 > 
 > I tried several times to attract the attention of the community to the 
 > dramatic performance degradation (about 3 times) of the NFS/ZFS vs. NFS/UFS 
 > combination - without any result: <a 
 > href="http://www.opensolaris.org/jive/thread.jspa?messageID=98592">[1]</a>, 
 > <a href="http://www.opensolaris.org/jive/thread.jspa?threadID=24015">[2]</a>.
 > 
 > Just look at the two graphs in my <a 
 > href="http://napobo3.blogspot.com/2006/08/spec-sfs-bencmark-of-zfsufsvxfs.html">posting 
 > dated August 2006</a> to see how bad the situation was; unfortunately, the 
 > situation hasn't changed much since: 
 > http://photos1.blogger.com/blogger/7591/428/1600/sfs.1.png
 > 
 > I don't think the storage array is a source of the problems you reported. 
 > It's somewhere else...
 > 

Why do you say this?

My reading is that almost all NFS/ZFS complaints are either
complaining about NFS performance vs. direct attach,
comparing UFS vs. ZFS on disk with the write cache enabled, or
complaining about ZFS running on storage with NVRAM. Your
complaint is the one exception: SFS being worse with a ZFS
backend than with, say, UFS or VxFS.

My points being:

 NFS cannot match direct attach for some loads.
 It's a fact that we can't get around.

 Enabling the write cache is not a valid way to
 run NFS over UFS.

 For ZFS on NVRAM storage, we need to make sure the storage
 does not flush its cache in response to ZFS requests.
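 (For the archives: the tunables usually mentioned in this context are the
 /etc/system settings below. This is a sketch, not a recommendation; my
 assumption is the then-current Solaris Nevada names, and zfs_nocacheflush
 only exists in later builds.)

```
* /etc/system fragment -- a sketch, not a recommendation.
* zil_disable skips the ZFS intent log entirely, which breaks NFS
* stable-write semantics on a crash.
* zfs_nocacheflush (later Nevada builds) stops ZFS from issuing
* SYNCHRONIZE CACHE to the array; safe only with NVRAM-backed cache.
set zfs:zil_disable = 1
set zfs:zfs_nocacheflush = 1
```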


 Meanwhile, SFS over ZFS is being investigated by others within
 Sun. I believe we have stuff in the pipe to make ZFS match
 or exceed UFS on small server-level loads. So I think your
 complaint is being heard.
 
 I personally find it incredibly hard to do performance
 engineering around SFS, so my perspective is that improving
 the SFS numbers will more likely come from finding ZFS/NFS
 performance deficiencies on simpler benchmarks.


-r

 > [i]-- leon[/i]
 >  
 >  
 > This message posted from opensolaris.org
 > _______________________________________________
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
