Re: nfs-rdma performance

2014-06-13 Thread Shirley Ma

On 06/12/2014 04:06 PM, Mark Lehrer wrote:
 I am using ConnectX-3 HCAs and Dell R720 servers.
 
 On Thu, Jun 12, 2014 at 2:00 PM, Steve Wise sw...@opengridcomputing.com wrote:
 On 6/12/2014 2:54 PM, Mark Lehrer wrote:

 Awesome work on nfs-rdma in recent kernels!  I had been having
 panic problems for a while, and now things appear to be quite reliable.

 Now that things are more reliable, I would like to help work on speed
 issues.  On this same hardware with SMB Direct and the standard
 StorageReview 8k 70/30 test, I get combined read & write performance
 of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
 This is simply an unacceptable difference.

I was able to get close to 2.5GB/s with ConnectX-2 for direct I/O. What's
your test case and wsize/rsize? Did you collect /proc/interrupts, CPU
usage, and profiling data?
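
For example, one way to capture that data around a run (assuming the
workload is driven with fio; the job below only approximates the 8k
70/30 mix, and the mount point is a placeholder):

    # Snapshot interrupt counters before and after the run to see how
    # the HCA's MSI-X vectors are spread across the cores
    cat /proc/interrupts > irq.before
    fio --name=8k7030 --filename=/mnt/nfs/testfile --size=10g \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=8k --iodepth=32 --runtime=60 --time_based
    cat /proc/interrupts > irq.after

    # Per-CPU utilization and a system-wide profile during the run
    mpstat -P ALL 1 60 > cpu.log &
    perf record -a -g -- sleep 60
    perf report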


 I'm using the standard settings -- connected mode, 65520-byte MTU,
 nfs-server-side async, lots of nfsd's, and nfsvers=3 with large
 buffers.  Does anyone have any tuning suggestions and/or places to
 start looking for bottlenecks?


 What RDMA device?

 Steve.
 


nfs-rdma performance

2014-06-12 Thread Mark Lehrer
Awesome work on nfs-rdma in recent kernels!  I had been having
panic problems for a while, and now things appear to be quite reliable.

Now that things are more reliable, I would like to help work on speed
issues.  On this same hardware with SMB Direct and the standard
StorageReview 8k 70/30 test, I get combined read & write performance
of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
This is simply an unacceptable difference.

I'm using the standard settings -- connected mode, 65520-byte MTU,
nfs-server-side async, lots of nfsd's, and nfsvers=3 with large
buffers.  Does anyone have any tuning suggestions and/or places to
start looking for bottlenecks?
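
(For concreteness, that setup looks roughly like the following; the
interface name, export line, and thread count are placeholders rather
than the exact configuration:)

    # Client: IPoIB connected mode with the 64K MTU
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

    # Client: NFSv3 over RDMA on the standard port, with large transfers
    mount -t nfs -o vers=3,proto=rdma,port=20049,rsize=262144,wsize=262144 \
        server:/export /mnt/nfs

    # Server: async export plus extra nfsd threads
    # /etc/exports:  /export  *(rw,async,no_root_squash)
    echo 64 > /proc/fs/nfsd/threads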

Thanks,
Mark


Re: nfs-rdma performance

2014-06-12 Thread Steve Wise

On 6/12/2014 2:54 PM, Mark Lehrer wrote:

Awesome work on nfs-rdma in recent kernels!  I had been having
panic problems for a while, and now things appear to be quite reliable.

Now that things are more reliable, I would like to help work on speed
issues.  On this same hardware with SMB Direct and the standard
StorageReview 8k 70/30 test, I get combined read & write performance
of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
This is simply an unacceptable difference.

I'm using the standard settings -- connected mode, 65520-byte MTU,
nfs-server-side async, lots of nfsd's, and nfsvers=3 with large
buffers.  Does anyone have any tuning suggestions and/or places to
start looking for bottlenecks?


What RDMA device?

Steve.


Re: nfs-rdma performance

2014-06-12 Thread Wendy Cheng
On Thu, Jun 12, 2014 at 12:54 PM, Mark Lehrer leh...@gmail.com wrote:

 Awesome work on nfs-rdma in recent kernels!  I had been having
 panic problems for a while, and now things appear to be quite reliable.

 Now that things are more reliable, I would like to help work on speed
 issues.  On this same hardware with SMB Direct and the standard
 StorageReview 8k 70/30 test, I get combined read & write performance
 of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
 This is simply an unacceptable difference.

 I'm using the standard settings -- connected mode, 65520-byte MTU,
 nfs-server-side async, lots of nfsd's, and nfsvers=3 with large
 buffers.  Does anyone have any tuning suggestions and/or places to
 start looking for bottlenecks?


There is a tunable called xprt_rdma_slot_table_entries; increasing it
seemed to help a lot for me last year. Be aware that this tunable is
enclosed inside #ifdef RPC_DEBUG, so you might need to tweak the source
and rebuild the kmod.
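
On kernels built with RPC_DEBUG, the tunable is registered in the sunrpc
sysctl tree, so the change is just a sysctl (the value below is only an
example, and it has to be set before the mount to take effect):

    # Check the current slot count, then raise it
    sysctl sunrpc.rdma_slot_table_entries
    sysctl -w sunrpc.rdma_slot_table_entries=128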


-- Wendy


Re: nfs-rdma performance

2014-06-12 Thread Mark Lehrer
I am using ConnectX-3 HCAs and Dell R720 servers.

On Thu, Jun 12, 2014 at 2:00 PM, Steve Wise sw...@opengridcomputing.com wrote:
 On 6/12/2014 2:54 PM, Mark Lehrer wrote:

 Awesome work on nfs-rdma in recent kernels!  I had been having
 panic problems for a while, and now things appear to be quite reliable.

 Now that things are more reliable, I would like to help work on speed
 issues.  On this same hardware with SMB Direct and the standard
 StorageReview 8k 70/30 test, I get combined read & write performance
 of around 2.5GB/sec.  With nfs-rdma it is pushing about 850MB/sec.
 This is simply an unacceptable difference.

 I'm using the standard settings -- connected mode, 65520-byte MTU,
 nfs-server-side async, lots of nfsd's, and nfsvers=3 with large
 buffers.  Does anyone have any tuning suggestions and/or places to
 start looking for bottlenecks?


 What RDMA device?

 Steve.