Andrew,
I am resubmitting this patch because I believe that the discussion
has shown this to be an acceptable solution. I have fixed the 32-bit
build errors, but other than that change, the code is the same as
Roland's V3 patch.
From: Roland Dreier rola...@cisco.com
As discussed in
Good morning,
I'm testing some NetEffect cards (Intel code E10G81GP - NetEffect
NE020.LP.1.SSR).
The PC runs Linux kernel 2.6.18-164.15.1.el5 (x86_64 GNU/Linux).
In this phase, I measure the bandwidth with the netserver/netperf
(version netperf-2.4.5) ad hoc tests.
They work
Vladimir Sokolovsky wrote:
Roland Dreier wrote:
Hence, I think it would be cleaner if a new capability,
masked_atomic_cap, were introduced, using the original definitions
(NONE, HCA, GLOB).
Vlad, what do you think about that? The more I think about it, the
cleaner this seems to me. And
On Apr 12, 2010 10:14 AM, Andrea Gozzelino
andrea.gozzel...@lnl.infn.it wrote:
Good morning,
I'm testing some NetEffect cards (Intel code E10G81GP - NetEffect
NE020.LP.1.SSR).
The PC runs Linux kernel 2.6.18-164.15.1.el5 (x86_64 GNU/Linux).
In this phase, I measure
On Fri, 2010-04-09 at 17:27 -0700, Jason Gunthorpe wrote:
On Fri, Apr 09, 2010 at 05:13:24PM -0700, Ralph Campbell wrote:
For the QSFP data, I hope I can leave it as is since it is
related to the link state that the other files contain.
It is a read-only file so no issue with trying to
Hi,
I'm trying to do some performance benchmarking of IPoIB on a DDR IB
cluster, and I am having a hard time understanding what I am seeing.
When I do a simple netperf, I get results like these:
[r...@gateway3 ~]# netperf -H 192.168.23.252
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0
Dave,
Thanks for the pointer. I thought it was running in connected mode, and
looking at that variable that you mentioned confirms it:
[r...@gateway3 ~]# cat /sys/class/net/ib0/mode
connected
And the IP MTU shows up as:
[r...@gateway3 ~]# ifconfig ib0
ib0 Link encap:InfiniBand HWaddr
Add checking for pipe FD's during destroy and clean them up with close.
Signed-off-by: Arlin Davis arlin.r.da...@intel.com
---
dapl/openib_cma/dapl_ib_cq.c | 8 +++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/dapl/openib_cma/dapl_ib_cq.c b/dapl/openib_cma/dapl_ib_cq.c
check/cleanup CQ and completion channels during dat_ia_close
Signed-off-by: Arlin Davis arlin.r.da...@intel.com
---
dapl/openib_cma/dapl_ib_util.c | 22 ++++++++++++++++------
1 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/dapl/openib_cma/dapl_ib_util.c
On Mon, 12 Apr 2010, Tom Ammon wrote:
| Thanks for the pointer. I thought it was running in connected mode, and
| looking at that variable that you mentioned confirms it:
| [r...@gateway3 ~]# ifconfig ib0
| ib0 Link encap:InfiniBand HWaddr
|
On Mon, 12 Apr 2010 07:22:17 +0100
Eric B Munson ebmun...@us.ibm.com wrote:
Andrew,
I am resubmitting this patch because I believe that the discussion
has shown this to be an acceptable solution.
To whom? Some acked-by's would clarify.
I have fixed the 32-bit
build errors, but other