Hi Bernard,
many thanks for commenting on the software iWARP RDMA
driver code I sent about 5 weeks ago. I hope I have now
incorporated all the recent suggestions and fixes.
These are the main changes:
o changing siw device attachment to be dynamic, based on
netlink events (see the sketch below)
o enabling inline
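
One common way to implement this kind of dynamic attachment is the kernel's
netdevice notifier chain. A minimal sketch, not the actual siw code (the
RDMA device create/destroy steps are left as placeholder comments):

#include <linux/module.h>
#include <linux/netdevice.h>

static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
			    void *ptr)
{
	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);

	switch (event) {
	case NETDEV_REGISTER:
		pr_info("siw: would attach to %s\n", netdev->name);
		/* placeholder: create/register an RDMA device for netdev */
		break;
	case NETDEV_UNREGISTER:
		pr_info("siw: would detach from %s\n", netdev->name);
		/* placeholder: tear the corresponding RDMA device down */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block siw_netdev_nb = {
	.notifier_call	= siw_netdev_event,
};

static int __init siw_init_module(void)
{
	return register_netdevice_notifier(&siw_netdev_nb);
}

static void __exit siw_exit_module(void)
{
	unregister_netdevice_notifier(&siw_netdev_nb);
}

module_init(siw_init_module);
module_exit(siw_exit_module);
MODULE_LICENSE("Dual BSD/GPL");

A convenient property of this approach is that register_netdevice_notifier()
replays NETDEV_REGISTER (and NETDEV_UP) events for interfaces that already
exist, so devices present before module load are attached as well.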
Hi Bob,
On 24.07.2011 21:43, rpearson-klaocwyjdxkshymvu7je4pqqe7ycj...@public.gmane.org wrote:
Changes in v2 include:
- Updated to Roland's tree as of 7/24/2011
- Moved the crc32 algorithm into a patch (slice-by-8-for_crc32.c.diff)
that goes into the mainline kernel.
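
The slice-by-8 patch itself is not quoted here; for readers unfamiliar with
the technique, the idea is to consume eight input bytes per loop iteration
through eight derived lookup tables rather than one byte at a time. A rough
user-space sketch using the standard reflected CRC-32 polynomial (this is
not the submitted kernel code):

#include <stdint.h>
#include <stddef.h>

#define CRC32_POLY_LE 0xedb88320u	/* reflected CRC-32 polynomial */

static uint32_t crc32_tab[8][256];

/* Build the eight lookup tables used by slicing-by-8. Table 0 is the
 * classic byte-at-a-time table; table t gives the effect of a byte
 * followed by t zero bytes. */
static void crc32_init_tables(void)
{
	int i, t;

	for (i = 0; i < 256; i++) {
		uint32_t crc = i;
		int j;

		for (j = 0; j < 8; j++)
			crc = (crc >> 1) ^ ((crc & 1) ? CRC32_POLY_LE : 0);
		crc32_tab[0][i] = crc;
	}
	for (i = 0; i < 256; i++)
		for (t = 1; t < 8; t++)
			crc32_tab[t][i] = (crc32_tab[t - 1][i] >> 8) ^
					  crc32_tab[0][crc32_tab[t - 1][i] & 0xff];
}

/* Consume 8 bytes per iteration; finish the tail one byte at a time. */
static uint32_t crc32_slice_by_8(uint32_t crc, const unsigned char *p, size_t len)
{
	while (len >= 8) {
		crc ^= (uint32_t)p[0] | (uint32_t)p[1] << 8 |
		       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
		crc = crc32_tab[7][crc & 0xff] ^
		      crc32_tab[6][(crc >> 8) & 0xff] ^
		      crc32_tab[5][(crc >> 16) & 0xff] ^
		      crc32_tab[4][crc >> 24] ^
		      crc32_tab[3][p[4]] ^
		      crc32_tab[2][p[5]] ^
		      crc32_tab[1][p[6]] ^
		      crc32_tab[0][p[7]];
		p += 8;
		len -= 8;
	}
	while (len--)
		crc = crc32_tab[0][(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}

For the usual zlib-style CRC-32 the caller seeds with ~0u and inverts the
result, i.e. crc = ~crc32_slice_by_8(~0u, buf, len), after calling
crc32_init_tables() once.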
Removed duplicate definitions of the SGE_PF_KDOORBELL, SGE_INT_ENABLE3 and
PCIE_MEM_ACCESS_OFFSET registers.
Moved the register field definitions next to the corresponding register
definitions (see the illustrative layout below).
Signed-off-by: Santosh Rastapur sant...@chelsio.com
Signed-off-by: Vipul Pandya vi...@chelsio.com
Reviewed-by: Sivakumar
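
Purely as an illustration of the layout change being described (the offsets
and masks below are placeholders, not values from the driver), keeping each
register's field macros directly under its offset makes duplicates like the
ones removed above easy to spot:

/* illustrative only -- offsets and masks are placeholders */
#define SGE_PF_KDOORBELL		0x0000
#define  QID_MASK			0xffff8000U
#define  QID_SHIFT			15
#define  PIDX_MASK			0x00003fffU

#define SGE_INT_ENABLE3			0x0004
#define  ERR_CPL_EXCEED_IQE_SIZE	0x00000001U
#define  ERR_INVALID_CIDX_INC		0x00000002U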
Hi Daniel,
On 17:07 Wed 18 Jul , Daniel Klein wrote:
Improving runtime of the search-common-pkeys code to O(n).
Signed-off-by: Daniel Klein dani...@mellanox.com
---
Applied after removing unused variables. Thanks.
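
The applied patch is not quoted above; purely as an illustration of one way
to get a common-pkey search down to O(n), a single bitmap pass over both
tables works (the helper name and signature here are hypothetical, not the
Mellanox code):

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

#define PKEY_BITS	(1 << 15)	/* 15 significant bits per pkey */

/* Hypothetical helper: collect pkeys that appear in both tables. */
static int find_common_pkeys(const u16 *a, int a_len,
			     const u16 *b, int b_len,
			     u16 *common, int max_common)
{
	unsigned long *seen;
	int i, n = 0;

	seen = bitmap_zalloc(PKEY_BITS, GFP_KERNEL);
	if (!seen)
		return -ENOMEM;

	/* Pass 1: mark every pkey from table A (membership bit ignored). */
	for (i = 0; i < a_len; i++)
		set_bit(a[i] & 0x7fff, seen);

	/* Pass 2: report each pkey from table B that table A also had. */
	for (i = 0; i < b_len && n < max_common; i++)
		if (test_bit(b[i] & 0x7fff, seen))
			common[n++] = b[i];

	bitmap_free(seen);
	return n;
}

Each table is walked once, so the cost is O(n + m) plus a fixed 4 KB scratch
bitmap, instead of a nested O(n * m) comparison.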
Hi Hal,
On 02:27 Thu 09 Aug , Hal Rosenstock wrote:
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
Applied, thanks.
Hi Hal,
On 14:10 Tue 28 Aug , Hal Rosenstock wrote:
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
Applied, thanks.
Hi Hal,
On 08:18 Tue 04 Sep , Hal Rosenstock wrote:
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
Applied, thanks.
On 8/3/2012 4:40 AM, Jack Morgenstein wrote:
Reserve bits 26-31 for internal use by low-level drivers. Two
such bits are used in the mlx4 driver SRIOV IB implementation.
These enum additions guarantee that the core layer will never use
these bits, so that low level drivers may safely make
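
From the description, the additions presumably look roughly like the
following in the QP creation flags of include/rdma/ib_verbs.h (reconstructed
from the text above, not copied from the patch):

enum ib_qp_create_flags {
	IB_QP_CREATE_IPOIB_UD_LSO		= 1 << 0,
	IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK	= 1 << 1,
	/* reserve bits 26-31 for low-level drivers' internal use */
	IB_QP_CREATE_RESERVED_START		= 1 << 26,
	IB_QP_CREATE_RESERVED_END		= 1 << 31,
};

With the reserved range fenced off in the core enum, a low-level driver such
as mlx4 can use bits 26-31 internally without ever colliding with flags
handed out by the core layer.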
On Wed, 29 Aug 2012, Atchley, Scott wrote:
I am benchmarking a sockets based application and I want a sanity check
on IPoIB performance expectations when using connected mode (65520 MTU).
I am using the tuning tips in Documentation/infiniband/ipoib.txt. The
machines have Mellanox QDR cards
Create two kernel module parameters in order to make these variables
configurable, i.e. cma_cm_response_timeout for the CM response timeout,
and cma_max_cm_retries for the number of CM retries.
They can now be configured on the kernel module command line.
For example:
# modprobe ib_srp
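
Declaring such parameters is only a couple of lines per variable; a minimal
sketch of what the cma.c side might look like (the defaults below are
placeholders, not the submitted values):

#include <linux/module.h>
#include <linux/moduleparam.h>

static int cma_cm_response_timeout = 20;	/* placeholder default */
module_param(cma_cm_response_timeout, int, 0644);
MODULE_PARM_DESC(cma_cm_response_timeout,
		 "CM response timeout used for rdma_cm connections");

static int cma_max_cm_retries = 15;		/* placeholder default */
module_param(cma_max_cm_retries, int, 0644);
MODULE_PARM_DESC(cma_max_cm_retries,
		 "Number of times a CM request is retried");

Once loaded, they should also appear under /sys/module/rdma_cm/parameters/
for runtime inspection.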
when running osmtest -f m -M 2
Reported-by: Daniel Klein dani...@mellanox.com
Sep 04 20:27:28 920578 [D2499700] 0x02 - osmt_run_mcast_flow: Checking partial
JoinState delete request - removing NonMember (o15.0.1.14)...
Sep 04 20:27:28 920863 [D2499700] 0x02 - osmt_run_mcast_flow: Validating
On Sep 5, 2012, at 11:51 AM, Christoph Lameter wrote:
On Wed, 29 Aug 2012, Atchley, Scott wrote:
I am benchmarking a sockets based application and I want a sanity check
on IPoIB performance expectations when using connected mode (65520 MTU).
I am using the tuning tips in
On 08/29/12 21:35, Atchley, Scott wrote:
Hi all,
I am benchmarking a sockets based application and I want a sanity check on
IPoIB performance expectations when using connected mode (65520 MTU).
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than
On 09/05/12 17:51, Christoph Lameter wrote:
PCIe 2.0 should give you up to about 2.3 Gbytes/sec with these
NICs. So there is likely something the network layer does to you that
limits the bandwidth.
I think those are 8 lane PCI-e 2.0 so that would be 500MB/sec x 8 that's
4
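
For the raw arithmetic: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b
encoding, i.e. roughly 500 MB/s of payload per lane per direction, so an x8
slot gives 8 x 500 MB/s = 4 GB/s raw; after TLP and protocol overhead
something on the order of 3.2 GB/s is usually the practical ceiling, and the
adapter itself may cap throughput below that.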
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
On 08/29/12 21:35, Atchley, Scott wrote:
Hi all,
I am benchmarking a sockets based application and I want a sanity check on
IPoIB performance expectations when using connected mode (65520 MTU).
I have read that with newer cards the datagram
On Wed, 5 Sep 2012, Atchley, Scott wrote:
# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: off
There is no
On Sep 5, 2012, at 2:20 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation
On 09/05/12 19:59, Atchley, Scott wrote:
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
I have read that the latency is lower (better) but the bandwidth is lower.
Using
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any
40G Ethernet NICs, but I hope that they will get close to line rate.
On Wed, 5 Sep 2012, Atchley, Scott wrote:
These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of
ibv_devinfo is in my original post.
Hmmm... You are running an old kernel. What version of OFED do you use?
On Sep 5, 2012, at 3:04 PM, Reeted wrote:
On 09/05/12 19:59, Atchley, Scott wrote:
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
I have read that the latency is
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any
40G Ethernet
On Sep 5, 2012, at 3:13 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of
ibv_devinfo is in my original post.
Hmmm... You are running an old kernel. What version of OFED do you use?
Hah, if
On Wed, 5 Sep 2012, Atchley, Scott wrote:
With Myricom 10G NICs, for example, you just need one core and it can do
line rate with 1500 byte MTU. Do you count the stateless offloads as
band-aids? Or something else?
The stateless aids also have certain limitations. It's a grey zone if you
want
On 9/5/2012 3:48 PM, Atchley, Scott wrote:
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I
On Wed, 5 Sep 2012, Atchley, Scott wrote:
Hmmm... You are running an old kernel. What version of OFED do you
use?
Hah, if you think my kernel is old, you should see my userland
(RHEL5.5). ;-)
My condolences.
Does the version of OFED impact the kernel modules? I am using the
modules
On Sep 5, 2012, at 4:12 PM, Ezra Kissel wrote:
On 9/5/2012 3:48 PM, Atchley, Scott wrote:
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
I found the following code in dup2():
	oldfdi = idm_lookup(idm, oldfd);
	if (oldfdi && oldfdi->type == fd_fork)
		fork_passive(oldfd);
In that code the file descriptor type (type) is compared with a fork
state enum value (fd_fork). Is that on purpose?
On purpose?