Re: SoftiWARP: new patchset

2012-09-05 Thread Stefan Metzmacher
Hi Bernard, many thanks for commenting on the software iWARP RDMA driver code I sent about 5 weeks ago. I hope I have now incorporated all recent suggestions and fixes. These are the main changes: o changing siw device attachment to be dynamic, based on netlink events o enabling inline

Re: [patch v2 00/37] add rxe (soft RoCE)

2012-09-05 Thread Stefan (metze) Metzmacher
Hi Bob, On 24.07.2011 21:43, rpearson-klaocwyjdxkshymvu7je4pqqe7ycj...@public.gmane.org wrote: Changes in v2 include: - Updated to Roland's tree as of 7/24/2011 - Moved the crc32 algorithm into a patch (slice-by-8-for_crc32.c.diff) that goes into the mainline kernel.

[PATCH V2 2/2] cxgb4: Remove duplicate register definitions

2012-09-05 Thread Vipul Pandya
Removed duplicate definitions for the SGE_PF_KDOORBELL, SGE_INT_ENABLE3, and PCIE_MEM_ACCESS_OFFSET registers. Moved the register field definitions next to the corresponding register definitions. Signed-off-by: Santosh Rastapur sant...@chelsio.com Signed-off-by: Vipul Pandya vi...@chelsio.com Reviewed-by: Sivakumar

Re: [PATCH] opensm: improve search common pkeys.

2012-09-05 Thread Alex Netes
Hi Daniel, On 17:07 Wed 18 Jul , Daniel Klein wrote: Improving runtime of the common pkey search code to O(n). Signed-off-by: Daniel Klein dani...@mellanox.com --- Applied after removing unused variables. Thanks. -- To unsubscribe from this list: send the line unsubscribe linux-rdma in the

Re: [PATCH][MINOR] opensm/osm_vendor_ibumad.c: Add management class to error log message

2012-09-05 Thread Alex Netes
Hi Hal, On 02:27 Thu 09 Aug , Hal Rosenstock wrote: Signed-off-by: Hal Rosenstock h...@mellanox.com --- Applied, thanks. -- To unsubscribe from this list: send the line unsubscribe linux-rdma in the body of a message to majord...@vger.kernel.org More majordomo info at

Re: [PATCH] opensm/osm_sw_info_rcv.c: Fixed locking issue on osm_get_node_by_guid error

2012-09-05 Thread Alex Netes
Hi Hal, On 14:10 Tue 28 Aug , Hal Rosenstock wrote: Signed-off-by: Hal Rosenstock h...@mellanox.com --- Applied, thanks. -- To unsubscribe from this list: send the line unsubscribe linux-rdma in the body of a message to majord...@vger.kernel.org More majordomo info at

Re: [PATCH] OpenSM: Add new Mellanox OUI

2012-09-05 Thread Alex Netes
Hi Hal, On 08:18 Tue 04 Sep , Hal Rosenstock wrote: Signed-off-by: Hal Rosenstock h...@mellanox.com --- Applied, thanks. -- To unsubscribe from this list: send the line unsubscribe linux-rdma in the body of a message to majord...@vger.kernel.org More majordomo info at

Re: [PATCH for-next V2 01/22] IB/core: Reserve bits in enum ib_qp_create_flags for low-level driver use

2012-09-05 Thread Doug Ledford
On 8/3/2012 4:40 AM, Jack Morgenstein wrote: Reserve bits 26-31 for internal use by low-level drivers. Two such bits are used in the mlx4 driver SRIOV IB implementation. These enum additions guarantee that the core layer will never use these bits, so that low level drivers may safely make

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 29 Aug 2012, Atchley, Scott wrote: I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I am using the tuning tips in Documentation/infiniband/ipoib.txt. The machines have Mellanox QDR cards
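For reference, the IPoIB tuning referred to above (Documentation/infiniband/ipoib.txt) amounts to switching the interface into connected mode and raising the MTU; a minimal sketch, assuming the interface is named ib0:

```shell
# Switch IPoIB from datagram to connected mode (per Documentation/infiniband/ipoib.txt)
echo connected > /sys/class/net/ib0/mode
# Connected mode lifts the MTU cap from the IB link MTU (~2044) to 65520 bytes
ip link set ib0 mtu 65520
```

The larger MTU is what makes connected mode attractive for bulk TCP transfers: far fewer packets per byte have to traverse the (unoffloaded) IPoIB stack.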

[PATCH] IB: new module params. cm_response_timeout, max_cm_retries

2012-09-05 Thread Dongsu Park
Create two kernel module parameters in order to make these variables configurable: cma_cm_response_timeout for the CM response timeout, and cma_max_cm_retries for the number of retries. They can now be configured on the module command line. For example: # modprobe ib_srp
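Note that the CM response timeout is not a value in seconds but an exponent: the InfiniBand CM encodes the timeout as 4.096 µs × 2^t (the rdma_cm default exponent is 20, roughly 4.3 seconds). A quick sketch of the conversion, with an illustrative helper name:

```python
def cm_response_timeout_ms(exponent: int) -> float:
    """Convert a CM response timeout exponent to milliseconds.

    InfiniBand encodes CM timeouts as 4.096 us * 2**exponent.
    """
    return 4.096e-3 * (2 ** exponent)

# The rdma_cm default exponent is 20:
print(cm_response_timeout_ms(20))  # ~4294.97 ms, i.e. about 4.3 s
```

This is why tuning the parameter down by even a few steps matters: each decrement halves the timeout.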

[PATCH] osmtest/osmt_multicast.c: Fix 02BF error

2012-09-05 Thread Hal Rosenstock
when running osmtest -f m -M 2 Reported-by: Daniel Klein dani...@mellanox.com Sep 04 20:27:28 920578 [D2499700] 0x02 - osmt_run_mcast_flow: Checking partial JoinState delete request - removing NonMember (o15.0.1.14)... Sep 04 20:27:28 920863 [D2499700] 0x02 - osmt_run_mcast_flow: Validating

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 11:51 AM, Christoph Lameter wrote: On Wed, 29 Aug 2012, Atchley, Scott wrote: I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I am using the tuning tips in

Re: IPoIB performance

2012-09-05 Thread Reeted
On 08/29/12 21:35, Atchley, Scott wrote: Hi all, I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than

Re: IPoIB performance

2012-09-05 Thread Reeted
On 09/05/12 17:51, Christoph Lameter wrote: PCI-E on PCI 2.0 should give you up to about 2.3 Gbytes/sec with these NICs. So there is likely something that the network layer does to you that limits the bandwidth. I think those are 8 lane PCI-e 2.0, so that would be 500MB/sec x 8, that's 4
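The arithmetic behind those per-lane numbers: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so only 8 of every 10 bits carry data, giving 500 MB/s of usable bandwidth per lane and 4 GB/s raw for a x8 link (before PCIe packet overhead, which is part of why measured figures come in lower):

```python
# PCIe 2.0: 5 GT/s per lane, 8b/10b line coding -> 4 Gbit/s = 500 MB/s per lane
lanes = 8
per_lane_mb_s = 5e9 * (8 / 10) / 8 / 1e6   # bits/s * coding efficiency -> MB/s
raw_mb_s = lanes * per_lane_mb_s           # raw bandwidth of a x8 link
print(per_lane_mb_s, raw_mb_s)             # 500.0 MB/s/lane, 4000.0 MB/s total
```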

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 1:50 PM, Reeted wrote: On 08/29/12 21:35, Atchley, Scott wrote: Hi all, I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I have read that with newer cards the datagram

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: off
There is no

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 2:20 PM, Christoph Lameter wrote: On Wed, 5 Sep 2012, Atchley, Scott wrote: # ethtool -k ib0 Offload parameters for ib0: rx-checksumming: off tx-checksumming: off scatter-gather: off tcp segmentation offload: off udp fragmentation offload: off generic segmentation

Re: IPoIB performance

2012-09-05 Thread Reeted
On 09/05/12 19:59, Atchley, Scott wrote: On Sep 5, 2012, at 1:50 PM, Reeted wrote: I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode. Do you want to check? I have read that the latency is lower (better) but the bandwidth is lower. Using

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote: AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play. Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any 40G Ethernet NICs, but I hope that they will get close to line rate.
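For a concrete sense of what "line rate" means for TCP on 10GbE: with a 1500-byte MTU, each frame carries 1460 bytes of TCP payload (assuming IPv4 and no TCP options) inside 1538 bytes on the wire (preamble+SFD 8, Ethernet header 14, FCS 4, inter-frame gap 12), capping goodput at about 9.49 Gbit/s:

```python
# Peak TCP goodput on 10GbE at 1500-byte MTU (IPv4, no TCP options assumed)
mtu = 1500
payload = mtu - 20 - 20                  # minus IPv4 and TCP headers -> 1460
wire_bytes = 8 + 14 + mtu + 4 + 12       # preamble+SFD, Eth hdr, FCS, IFG -> 1538
goodput_gbps = 10.0 * payload / wire_bytes
print(round(goodput_gbps, 2))            # ~9.49 Gbit/s
```

So a NIC "reaching line rate" in a TCP benchmark means sustaining roughly 9.49 Gbit/s of payload, not 10.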

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote: These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of ibv_devinfo is in my original post. Hmmm... You are running an old kernel. What version of OFED do you use? -- To unsubscribe from this list: send the line unsubscribe

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:04 PM, Reeted wrote: On 09/05/12 19:59, Atchley, Scott wrote: On Sep 5, 2012, at 1:50 PM, Reeted wrote: I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode. Do you want to check? I have read that the latency is

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote: On Wed, 5 Sep 2012, Atchley, Scott wrote: AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play. Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any 40G Ethernet

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:13 PM, Christoph Lameter wrote: On Wed, 5 Sep 2012, Atchley, Scott wrote: These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of ibv_devinfo is in my original post. Hmmm... You are running an old kernel. What version of OFED do you use? Hah, if

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote: With Myricom 10G NICs, for example, you just need one core and it can do line rate with 1500 byte MTU. Do you count the stateless offloads as band-aids? Or something else? The stateless aids also have certain limitations. It's a grey zone if you want

Re: IPoIB performance

2012-09-05 Thread Ezra Kissel
On 9/5/2012 3:48 PM, Atchley, Scott wrote: On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote: On Wed, 5 Sep 2012, Atchley, Scott wrote: AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play. Hmm, many 10G Ethernet NICs can reach line rate. I

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote: Hmmm... You are running an old kernel. What version of OFED do you use? Hah, if you think my kernel is old, you should see my userland (RHEL5.5). ;-) My condolences. Does the version of OFED impact the kernel modules? I am using the modules

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 4:12 PM, Ezra Kissel wrote: On 9/5/2012 3:48 PM, Atchley, Scott wrote: On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote: On Wed, 5 Sep 2012, Atchley, Scott wrote: AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play.

RE: rsocket library and dup2()

2012-09-05 Thread Hefty, Sean
I found the following code in dup2(): oldfdi = idm_lookup(&idm, oldfd); if (oldfdi && oldfdi->type == fd_fork) fork_passive(oldfd); In that code the file descriptor type (type) is compared with a fork state enum value (fd_fork). Is that on purpose?? On purpose?