cedric briner writes:
 > >> You might set zil_disable to 1 (_then_ mount the fs to be
 > >> shared). But you're still exposed to OS crashes; those would
 > >> still corrupt your nfs clients.
 > Just to make sure I understand (I know I'm quite slow :( ):
 > when you say _nfs clients_, are you specifically talking about:
 > - the nfs client programs themselves
 >     (lockd, statd), meaning you could get a stale nfs handle or
 > other such problems?
 > - the host acting as an nfs client,
 >     meaning the nfs client service keeps working, but the data that
 > software uses on the nfs-mounted disk could be corrupted?
 > 

It's rather the applications running on the client.
Basically, from the perspective of an application running on the
client, we would have data loss without any sign of error. It's a
bit like having a disk that drops a write request without
signalling an error.
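To make the "disk that drops a write" analogy concrete, here is a toy model (purely illustrative; the class and method names are mine, not real NFS or ZFS code) of a server that, like one with zil_disable=1, ACKs the client's COMMIT before the data actually reaches stable storage:

```python
# Toy model of the failure mode: a server whose stable-storage
# guarantee is disabled ACKs a COMMIT before the data is on disk.
# Illustrative only; not real NFS or ZFS code.

class Server:
    def __init__(self, honor_commit):
        self.honor_commit = honor_commit  # True ~ ZIL enabled
        self.cache = {}                   # volatile, lost on crash
        self.disk = {}                    # stable storage

    def write(self, name, data):
        self.cache[name] = data

    def commit(self, name):
        # With the ZIL, COMMIT returns only once the data is stable.
        if self.honor_commit:
            self.disk[name] = self.cache[name]
        return "OK"   # with zil_disable, we ACK without storing

    def crash_and_reboot(self):
        self.cache = {}   # volatile contents are gone

# The client's view is identical in both cases: every call succeeds.
for honor in (True, False):
    srv = Server(honor_commit=honor)
    srv.write("a.c", "int main(){}")
    ack = srv.commit("a.c")          # client sees "OK" either way
    srv.crash_and_reboot()
    print(honor, ack, srv.disk.get("a.c"))
    # True  OK int main(){}
    # False OK None   <- data silently lost, no error was ever raised
```

The point is the last line: the client got "OK" for the commit, yet after the reboot the file is gone, exactly the silent data loss described above.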

 > 
 > If I keep digging and digging into this ZIL and NFS/UFS
 > write-cache question, it's because I do not understand what kind
 > of problems can occur. What I generally read are statements about
 > _corruption_ from the client's point of view... but what does
 > that mean?
 > 
 > Is the scheme of what can happen:
 > - the application on the nfs client side writes data to the nfs server
 > - meanwhile the nfs server crashes, so:
 >   - the data is not stored
 >   - the application on the nfs client thinks the data is stored ! :(
 > - when the server is up again
 > - the nfs client sees the data again
 > - the application on the nfs client side finds its data in the
 > state preceding its last writes.
 > 
 > Am I right ?

The scenario I see would be, on the client,
downloading some software (a tar file):

        tar x
        make

The tar succeeds with no errors at all. Behind our back,
during the tar x, the server rebooted. No big deal
normally. But with zil_disable on the server, the make
fails, because some files from the original tar, or parts
of files, are missing.

 > 
 > So with the ZIL:
 >   - The application has the ability to do things the right way.
 > So even after an nfs-server crash, the application on the
 > nfs-client side can rely on its own data.
 > 
 > So without the ZIL:
 >   - The application does not have the ability to do things the
 > right way, and we can have data corruption. But that doesn't mean
 > corruption of the FS; it means that the data was only partially
 > written and some of it is missing.

Sounds right.
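The mechanism behind the guarantee can be sketched the same way (again purely illustrative, with names of my own choosing): with the ZIL, a synchronous write is also appended to an intent log on stable storage before it is acknowledged, and that log is replayed when the pool comes back after a crash.

```python
# Rough sketch of why the ZIL restores the guarantee: sync writes
# go to a stable intent log before the ACK, and the log is replayed
# on import. Illustrative only; not real ZFS code.

class Pool:
    def __init__(self, zil_enabled):
        self.zil_enabled = zil_enabled
        self.memory = {}   # dirty data, lost on crash
        self.log = []      # intent log on stable storage
        self.disk = {}

    def sync_write(self, name, data):
        self.memory[name] = data
        if self.zil_enabled:
            self.log.append((name, data))  # stable before we ACK

    def crash_and_import(self):
        self.memory = {}
        for name, data in self.log:        # "ZIL replay" on import
            self.disk[name] = data
        self.log = []

pool = Pool(zil_enabled=True)
pool.sync_write("Makefile", "all:\n")
pool.crash_and_import()
print(pool.disk)   # {'Makefile': 'all:\n'}  <- replayed from the log

pool = Pool(zil_enabled=False)
pool.sync_write("Makefile", "all:\n")
pool.crash_and_import()
print(pool.disk)   # {}  <- the acknowledged write never survived
```

Note the on-disk pool itself is consistent in both cases; what zil_disable discards is only the durability promise made to the NFS client.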

 > 
 > > For the love of God do NOT do stuff like that.
 > > 
 > > Just create ZFS on a pile of disks the way that we should, with the
 > > write cache disabled on all the disks and with redundancy in the ZPool
 > > config .. nothing special :

 > Wooooh !! no.. this is really special to me !!
 > I've read and re-read many times:
 >   - NFS and ZFS, a fine combination
 >   - ZFS Best Practices Guide
 > and other blogs without noticing such an idea!
 > 
 > I even noticed the opposite recommendation
 > from:
 > - ZFS Best Practices Guide >> ZFS Storage Pools Recommendations
 > - http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Storage_Pools_Recommendations
 > where I read:
 >   - For production systems, consider using whole disks for storage
 > pools rather than slices for the following reasons:
 >    + Allow ZFS to enable the disk's write cache, for those disks
 > that have write caches
 >
 > and from:
 > - NFS and ZFS, a fine combination >> Comparison with UFS
 > - http://blogs.sun.com/roch/#zfs_to_ufs_performance_comparison
 > where I read:
 >   Semantically correct NFS service :
 > 
 >      nfs/ufs : 17     sec (write cache disabled)
 >      nfs/zfs : 12     sec (write cache disabled, zil_disable=0)
 >      nfs/zfs :  7     sec (write cache enabled,  zil_disable=0)
 > from which I can say:
 >   that nfs/zfs with the write cache enabled and the ZIL enabled is
 > --in that case-- the fastest
 > 
 > So why are you recommending that I disable the write cache?
 >

For ZFS, it can work either way. Maybe the above was a typo.
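As an aside, if you want to check which way a given disk is currently set, on Solaris the per-disk write cache can usually be inspected (and toggled) from format(1M) in expert mode. The exact menu entries vary by disk type, so treat the following as a from-memory sketch rather than an exact transcript:

```
# format -e           <- expert mode
#   > (select the disk)
#   > cache
#   > write_cache
#   > display         <- show current state; 'enable'/'disable' to change
```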

 > -- 
 > 
 > Cedric BRINER
 > Geneva - Switzerland
 > 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
