On Wed, Feb 14, 2018 at 08:32:19AM -0500, Daniel Gryniewicz wrote:
> How many clients are you using?  Each client op can only (currently)
> be handled in a single thread, and clients won't send more ops
> until the current one is ack'd,

What version of NFS was the test run over?

I don't see how the server can limit the number of outstanding requests
for NFS versions less than 4.1.

So if the client's doing NFSv3 or v4.0, then there should still always
be a queue of requests in the server's receive buffer, so it shouldn't
have to wait for a round trip back to the client to process the next
request.
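
In other words, nothing in the protocol stops a single v3 client from having
lots of reads outstanding at once.  A minimal sketch of generating that kind
of concurrency from one client (the mount point and file names below are
made-up placeholders, not anything from the test setup):

    # Keep several reads in flight from a single NFS client so requests
    # queue up at the server rather than arriving one at a time.
    # Mount point and file names are hypothetical placeholders.
    import os
    from concurrent.futures import ThreadPoolExecutor

    MOUNT = "/mnt/test"                      # hypothetical client-side mount
    FILES = [os.path.join(MOUNT, "f%d" % i) for i in range(8)]

    def read_all(path):
        """Read one file in 1 MiB chunks; each read blocks only this thread."""
        total = 0
        with open(path, "rb", buffering=0) as f:
            while True:
                chunk = f.read(1 << 20)
                if not chunk:
                    return total
                total += len(chunk)

    with ThreadPoolExecutor(max_workers=8) as pool:
        print(sum(pool.map(read_all, FILES)))  # up to 8 files read concurrently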

But the reads have to be synchronous assuming the working set isn't in
cache, so you're completely bound by the SSD's read latency there.  If
the writes are asynchronous (so, sent without the "stable" bit set),
there might be a little opportunity for parallelism writing to the drive
as well.
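
As a rough sanity check on the latency-bound theory, the figures quoted
below can be pushed through Little's law (average ops in flight = IOPS *
latency), plus a ceiling estimate for 128 fully blocking workers.  This is
throwaway arithmetic on the reported numbers, nothing measured here:

    # Throwaway arithmetic relating the figures quoted later in this thread.
    # Little's law: average ops in flight = IOPS * latency.
    # Worker ceiling: max IOPS if every op blocks one worker for its full latency.

    def in_flight(iops, latency_s):
        return iops * latency_s

    def worker_ceiling(workers, latency_s):
        return workers / latency_s

    print(in_flight(18_000, 0.008))     # ganesha: ~144 ops in flight on average
    print(in_flight(75_000, 0.0008))    # knfsd:   ~60 ops in flight on average
    print(worker_ceiling(128, 0.008))   # 128 blocking workers at 8 ms/op -> ~16K IOPS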

So it might be interesting to know exactly what model of SSD was used.

> so Ganesha can basically only
> parallelize on a per-client basis at the moment.

Still something worth fixing, of course.
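
In the meantime, spreading the load across several client machines is
probably the easiest way to get more parallelism out of Ganesha.  A rough
sketch (hostnames, mount path, and fio parameters are all placeholders) of
kicking off the same fio job from a few clients at once:

    # Launch the same fio job on several client machines over ssh, so the
    # server has multiple clients to parallelize across.  Hostnames, mount
    # path and fio parameters are placeholders.
    import subprocess

    CLIENTS = ["client1", "client2", "client3", "client4"]
    FIO = ("fio --name=mix --directory=/mnt/test --rw=randrw --rwmixread=70 "
           "--bs=4k --direct=1 --ioengine=libaio --iodepth=16 --numjobs=4 "
           "--size=1g --runtime=60 --time_based --group_reporting")

    procs = [subprocess.Popen(["ssh", host, FIO]) for host in CLIENTS]
    for p in procs:
        p.wait()   # each client prints its own fio summary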

--b.

> 
> I'm sure there are locking issues; so far we've mostly worked on
> correctness rather than performance.  2.6 has changed the threading
> model a fair amount, and 2.7 will have more improvements, but it's a
> slow process.
> 
> Daniel
> 
> On 02/13/2018 06:38 PM, Deepak Jagtap wrote:
> >Thanks Daniel!
> >
> >Yeah, user-kernel context switching is definitely adding latency, but I
> >wonder if RPC or some locking overhead is also in the picture.
> >
> >With the 70% read 30% random workload, nfs ganesha CPU usage was close
> >to 170% while the remaining 2 cores were pretty much unused (~18K IOPS,
> >latency ~8ms).
> >
> >With the 100% read 30% random workload, nfs ganesha CPU usage was ~250%
> >(~50K IOPS, latency ~2ms).
> >
> >
> >-Deepak
> >
> >------------------------------------------------------------------------
> >*From:* Daniel Gryniewicz <d...@redhat.com>
> >*Sent:* Tuesday, February 13, 2018 6:15:47 AM
> >*To:* nfs-ganesha-devel@lists.sourceforge.net
> >*Subject:* Re: [Nfs-ganesha-devel] nfs ganesha vs nfs kernel performance
> >Also keep in mind that FSAL VFS can never, by its very nature, beat
> >knfsd, since it has to do everything knfsd does, but also has userspace
> ><-> kernelspace transitions.  Ganesha's strength is exporting
> >userspace-based cluster filesystems.
> >
> >That said, we're always working to make Ganesha faster, and I'm sure
> >there are gains to be made, even in these circumstances.
> >
> >Daniel
> >
> >On 02/12/2018 07:01 PM, Deepak Jagtap wrote:
> >>Hey Guys,
> >>
> >>
> >>I ran a few performance tests to compare nfs ganesha and the nfs kernel
> >>server and noticed a significant difference.
> >>
> >>
> >>Please find my test result:
> >>
> >>
> >>SSD formatted with EXT3, exported using nfs ganesha:    ~18K IOPS    Avg latency: ~8ms      Throughput: ~60MBPS
> >>
> >>Same directory exported using the nfs kernel server:    ~75K IOPS    Avg latency: ~0.8ms    Throughput: ~300MBPS
> >>
> >>
> >>Both the nfs kernel server and nfs ganesha are configured with 128
> >>worker threads. nfs ganesha is configured with the VFS FSAL.
> >>
> >>
> >>Am I missing something major in the nfs ganesha config, or is this
> >>expected behavior?
> >>
> >>Appreciate any inputs on how the performance can be improved for
> >>nfs ganesha.
> >>
> >>
> >>
> >>Please find the following ganesha config file that I am using:
> >>
> >>
> >>NFS_Core_Param
> >>{
> >>    Nb_Worker = 128;
> >>}
> >>
> >>EXPORT
> >>{
> >>    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
> >>    Export_Id = 77;
> >>    # Exported path (mandatory)
> >>    Path = /host/test;
> >>    Protocols = 3;
> >>    # Pseudo Path (required for NFS v4)
> >>    Pseudo = /host/test;
> >>    # Required for access (default is None)
> >>    # Could use CLIENT blocks instead
> >>    Access_Type = RW;
> >>    # Exporting FSAL
> >>    FSAL {
> >>        Name = VFS;
> >>    }
> >>    CLIENT
> >>    {
> >>        Clients = *;
> >>        Squash = None;
> >>        Access_Type = RW;
> >>    }
> >>}
> >>
> >>
> >>
> >>Thanks & Regards,
> >>
> >>Deepak
> >>
> >>
> >>
