On 2/14/18 8:32 AM, Daniel Gryniewicz wrote:
How many clients are you using?  Each client op can only (currently) be handled 
in a single thread, and clients won't send more ops until the current one is 
ack'd, so Ganesha can basically only parallelize on a per-client basis at the 
moment.

Actually, 2.6 should handle as many concurrent client requests as you like.
(Up to 250 of them.)  That's one of its features.
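To make that concrete, here is a minimal sketch of what a bounded in-flight cap
like that looks like.  This is illustrative only, not Ganesha's actual
dispatcher: the names (MAX_INFLIGHT_REQS, dispatch_request) and the plain POSIX
counting semaphore are assumptions for the example, not code from the tree.

/*
 * Illustrative sketch only, not Ganesha's dispatcher.  A counting
 * semaphore caps how many client requests are serviced concurrently;
 * request number 251 simply waits for a slot to free up.
 */
#include <semaphore.h>

#define MAX_INFLIGHT_REQS 250           /* hypothetical cap matching the figure above */

static sem_t inflight_slots;

void dispatcher_init(void)
{
        sem_init(&inflight_slots, 0, MAX_INFLIGHT_REQS);
}

/* Called once per decoded RPC, typically from a worker thread. */
void dispatch_request(void (*handler)(void *), void *req)
{
        sem_wait(&inflight_slots);      /* take an in-flight slot */
        handler(req);                   /* service the request */
        sem_post(&inflight_slots);      /* release the slot */
}

The point is just that 250 is a cap; whether you ever get near it depends on
how many requests the client actually keeps outstanding at once.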

The client is not sending concurrent requests.


I'm sure there are locking issues; so far we've mostly worked on correctness 
rather than performance.  2.6 has changed the threading model a fair amount, 
and 2.7 will have more improvements, but it's a slow process.

But the planned 2.7 improvements are mostly throughput-related, not IOPS-related.


On 02/13/2018 06:38 PM, Deepak Jagtap wrote:
Yeah, user-kernel context switching is definitely adding latency, but I 
wonder if RPC or some locking overhead is also in the picture.

RPC overhead?


With a 70% read / 30% random workload, nfs-ganesha CPU usage was close to 170% 
while the remaining 2 cores were pretty much unused (~18K IOPS, latency ~8 ms).

With a 100% read / 30% random workload, nfs-ganesha CPU usage was ~250% 
(~50K IOPS, latency ~2 ms).

Those latency numbers seem suspect to me.  The dominant latency should come
from the file system.  The system calls shouldn't add more than microseconds.

If Ganesha is adding 6 ms to every read operation, we have a serious
problem and need to profile immediately!
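
As a quick sanity check on the numbers quoted above, here is a small standalone
program that just does the arithmetic (all inputs are the figures from this
thread: 170%/250% CPU, ~18K/~50K IOPS, ~8 ms/~2 ms; nothing is measured here).
It computes the CPU time burned per op and the average number of ops in flight
implied by Little's Law.

/* Back-of-the-envelope check on the figures quoted in this thread. */
#include <stdio.h>

static void check(const char *label, double cores, double iops, double latency_s)
{
        double cpu_us_per_op = cores * 1e6 / iops;   /* CPU microseconds spent per op */
        double outstanding   = iops * latency_s;     /* average ops in flight (Little's Law) */

        printf("%s: ~%.0f us of CPU per op, ~%.0f ops in flight\n",
               label, cpu_us_per_op, outstanding);
}

int main(void)
{
        check("70/30 mix", 1.70, 18000.0, 0.008);    /* 170% CPU, ~18K IOPS, ~8 ms */
        check("100% read", 2.50, 50000.0, 0.002);    /* 250% CPU, ~50K IOPS, ~2 ms */
        return 0;
}

That works out to roughly 50-95 us of CPU per op, consistent with the
"microseconds" expectation above; and if those latencies are client-observed,
Little's Law implies something like 100-144 ops in flight on average, i.e. most
of the 2-8 ms would be wait time somewhere rather than CPU in Ganesha's path.
Either way, profiling (e.g. perf record -g on the ganesha.nfsd process) is the
way to find out where the time actually goes.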
