On 3/1/2012 8:31 AM, Doug Ledford wrote:
I would say we are simply getting to the point where we *know* we need
opensm to handle more than one fabric from a single instance ;-)
Why does a single OpenSM need to handle multiple subnets/fabrics ?
What's the issue with running multiple OpenSMs with
An iSER target may send iSCSI NO-OP PDUs as soon as it marks the iSER
iSCSI session as fully operative. This means there is a window in time
during which no receive buffers are posted on the initiator side, so it
is possible for the iSER RC connection to break due to RNR NAK / retry errors.
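The fix implied above is to pre-post receive buffers on the initiator before the session is marked operative, so the target's first NO-OP PDUs land in posted buffers instead of drawing RNR NAKs. A minimal sketch using the libibverbs API (illustrative only: `pre_post_recvs`, the flat buffer layout, and the error handling are assumptions, not the actual iser initiator code; requires a connected QP and registered MR, so it is not runnable stand-alone):

```c
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Pre-post 'n' receive WRs of 'chunk' bytes each, carved out of a
 * single registered buffer 'buf'. Called before the session is
 * marked fully operative. */
static int pre_post_recvs(struct ibv_qp *qp, struct ibv_mr *mr,
                          char *buf, size_t chunk, int n)
{
    for (int i = 0; i < n; i++) {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)(buf + (size_t)i * chunk),
            .length = chunk,
            .lkey   = mr->lkey,
        };
        struct ibv_recv_wr wr = {
            .wr_id   = i,
            .sg_list = &sge,
            .num_sge = 1,
        };
        struct ibv_recv_wr *bad_wr;

        /* Each failure leaves earlier WRs posted; caller decides
         * whether to tear the connection down. */
        if (ibv_post_recv(qp, &wr, &bad_wr))
            return -1;
    }
    return 0;
}
```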
Alex - is this patch acceptable? My connectivity has been knocked for a loop by
the transition to Intel and I'm not sure if you received this or not.
-Original Message-
From: linux-rdma-ow...@vger.kernel.org
[mailto:linux-rdma-ow...@vger.kernel.org] On Behalf Of Mike Heinz
Sent:
From: Roland Dreier rol...@purestorage.com
The current driver defaults to 1M MTT segments, where each segment holds
8 MTT entries. This limits the total memory registered to 8M * PAGE_SIZE
which is 32GB with 4K pages. Since systems that have much more memory
are pretty common now (at least
Signed-off-by: Ira Weiny wei...@llnl.gov
---
src/saquery.c | 12 +++-
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/saquery.c b/src/saquery.c
index 097d9dd..e5fdb25 100644
--- a/src/saquery.c
+++ b/src/saquery.c
@@ -798,7 +798,7 @@ static int
Signed-off-by: Albert Chu ch...@llnl.gov
---
include/opensm/osm_subnet.h | 10 +-
1 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/opensm/osm_subnet.h b/include/opensm/osm_subnet.h
index e444ee5..219dfb3 100644
--- a/include/opensm/osm_subnet.h
+++
Hi,
It has been a long time since I looked at this, but I was looking at
ibv_modify_qp on an mlx4 system, and I noticed the following, which
seems incorrect to me.
From the RTS state, issuing a modify_qp with qp_attr_mask set only to
the state flag, requesting the SQD state,
returns qp_attr with qp_state = SQD, but if you
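The sequence being described might be sketched as follows (a hedged sketch, not runnable stand-alone: it assumes `qp` is a valid RC QP currently in RTS on an mlx4 device, and `drain_and_check` is a hypothetical helper name):

```c
#include <string.h>
#include <infiniband/verbs.h>

/* Request RTS -> SQD with qp_attr_mask set only to IBV_QP_STATE,
 * then read the state back with ibv_query_qp. */
static int drain_and_check(struct ibv_qp *qp)
{
    struct ibv_qp_attr attr;
    struct ibv_qp_init_attr init_attr;

    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_SQD;

    if (ibv_modify_qp(qp, &attr, IBV_QP_STATE))
        return -1;

    if (ibv_query_qp(qp, &attr, IBV_QP_STATE, &init_attr))
        return -1;

    /* Per the report, attr.qp_state comes back as IBV_QPS_SQD
     * here, which is the behavior that looked suspect on mlx4. */
    return attr.qp_state;
}
```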