Thanks for the reply, I will play around with the larger rsize and wsize
with sync enabled to see if I can bump the performance.
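
For reference, this is roughly what I plan to test (emc-filer:/export and
/mnt/emc are placeholders for our actual export and mount point):

    umount /mnt/emc
    mount -t nfs4 -o sync,rsize=131072,wsize=131072 emc-filer:/export /mnt/emc

    # then re-run the same dd as before to compare throughput
    dd if=/dev/zero of=/mnt/emc/testfile bs=1M count=1024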

There is no specific reason for nfs4 that I am aware of other than that it
is the default on the EMC devices that we are using.

Regards

On 27 August 2012 17:41, Sabuj Pattanayek <sab...@gmail.com> wrote:

> We were getting less perf with nfsv4 so we stuck with v3, but perhaps
> you have other reasons for using v4...
>
> On Mon, Aug 27, 2012 at 11:39 AM, Sabuj Pattanayek <sab...@gmail.com>
> wrote:
> > Yes, that's what'll happen if you use sync. One way around using sync
> > and getting decent write performance is to use rsize and wsize >=
> > 128kb, but then you'll lose small file performance since you're moving
> > larger blocks over the wire. The problem I've seen with all the EMC
> > solutions I've tested is that NFSv3 client (read) data cache usage is
> > broken unless you use sync or you set the EMC server to force NFS
> > unstable writes to filesync or datasync rather than the default of an
> > unstable NFS write (async).
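> >
> > For example, something like this (server:/export and /mnt/test are
> > placeholders, and 131072 is just one value >= 128kb to try):
> >
> >     mount -t nfs -o vers=3,sync,rsize=131072,wsize=131072 \
> >         server:/export /mnt/test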
> >
> > What I mean by broken client (read) cache usage is that if you write a
> > file out to the server which fits in memory on the client (since the
> > client buffers the write to local memory before sending it to the
> > server), then if you try to read the file back and it's still in local
> > client memory, the EMC server will do something with the client that
> > causes it to read the entire file back again over the wire, rather
> > than just getting it from local memory (really fast). Between Linux
> > NFS servers and clients or other high end NAS solutions I've tested
> > the client will read the file out of cache if it exists and if it's
> > unchanged, which is the correct behavior.
> >
> > dd is actually a pretty good way of determining whether several NFS
> > stacks are "broken" in this regard; I've written a nice wrapper around
> > it: https://code.google.com/p/nfsspeedtest/
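> >
> > A rough sketch of the kind of check it automates (not the wrapper's
> > actual invocation; the path is a placeholder):
> >
> >     dd if=/dev/zero of=/mnt/test/f bs=1M count=512   # fits in client RAM
> >     dd if=/mnt/test/f of=/dev/null bs=1M             # first read back
> >     dd if=/mnt/test/f of=/dev/null bs=1M             # should be served
> >                                                      # from local cache
> >
> > Watching the second read with nfsstat -c (or the network interface
> > counters) shows whether the data went over the wire again.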
> >
> > On Mon, Aug 27, 2012 at 9:19 AM, Gerhardus Geldenhuis
> > <gerhardus.geldenh...@gmail.com> wrote:
> >> Hi
> >> I am debugging NFS performance problems at the moment and going through
> >> a steep learning curve with regards to all its inner workings.
> >>
> >> In short we are mounting EMC provided NFS storage and seeing a massive
> >> difference when mounting with and without the sync option.
> >>
> >> Without the sync option I can achieve 90MB/s without any tuning; with
> >> the sync option turned on that falls to 4MB/s. That seems too slow...
> >> any thoughts and experience on this would be appreciated.
> >> These values were obtained by doing a dd if=/dev/zero of=<file on nfs
> >> share> bs=1M count=1024. I am aware that I need to optimize for the
> >> specific use case with regards to concurrency, file size, writes,
> >> rewrites, reads, rereads etc. However the performance penalty for sync
> >> seems excessive and I wanted to know if that is a shared experience.
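> >>
> >> (As an aside, I gather dd can also force the data out itself,
> >> independent of the mount options, which should separate client caching
> >> from real wire throughput, e.g.:
> >>
> >>     dd if=/dev/zero of=<file on nfs share> bs=1M count=1024 conv=fsync
> >> )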
> >>
> >> My understanding of sync is basically that in the case of NFS, it will
> >> wait for a confirmation from the NFS server that the data has been
> >> written to disk before acknowledging the write locally, so being "very
> >> certain". My understanding could be flawed and I would appreciate
> >> anyone correcting me on that and/or pointing me to more reading, which
> >> I would happily do.
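> >>
> >> Presumably the per-op RPC counts on the client would also change with
> >> and without sync, which might confirm this:
> >>
> >>     nfsstat -c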
> >>
> >> I am a bit confused however about what the man pages are saying:
> >> man mount says:
> >> the sync option today has effect only for ext2, ext3, fat, vfat and ufs
> >> but when I change it I can see a big difference in performance, so the
> >> man page is out of date... or is there something I am misreading?
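> >>
> >> At least I can confirm which options actually took effect on the
> >> client:
> >>
> >>     grep nfs /proc/mounts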
> >>
> >> man 5 nfs, does not make mention of sync.
> >>
> >> OS: RHEL 5.8
> >> Kernel: 2.6.18-308.4.1.0.1.el5
> >> NFS:
> >> nfs-utils-lib-1.0.8-7.9.el5
> >> nfs-utils-1.0.9-60.0.1.el5
> >> mount parameters:
> >> nfs4
> >> rw,bg,hard,nointr,rsize={16384,32768,65536},wsize={16384,32768,65536},suid,timeo=600,_netdev
> >> 0 0
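> >>
> >> For instance, one fully expanded line from that set would be (server
> >> and mount point are placeholders):
> >>
> >> emc:/export /mnt/emc nfs4 rw,bg,hard,nointr,rsize=32768,wsize=32768,suid,timeo=600,_netdev 0 0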
> >>
> >> I am currently running iozone and changing various mount parameters to
> >> see how they affect performance. iozone is arguably a better test of
> >> performance than dd.
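> >>
> >> For example, something along these lines (the flag set is not final,
> >> and the path is a placeholder):
> >>
> >>     iozone -a -i 0 -i 1 -g 4g -e -c -f /mnt/emc/iozone.tmp
> >>
> >> where -e and -c include fsync/close in the timings so client caching
> >> doesn't inflate the numbers.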
> >>
> >> Regards
> >>
> >> --
> >> Gerhardus Geldenhuis
> >>



-- 
Gerhardus Geldenhuis