Hi Sean,
I will re-check until the end of the week; there is
some test scheduling issue with our test system, which
affects my access times.
Thanks
Andreas
On Mon, 19 Aug 2013 17:10:11 +
Hefty, Sean sean.he...@intel.com wrote:
Can you see if the patch below fixes the hang?
Hi,
I have added the patch and re-tested: I still encounter
hangs of my application. I am not quite sure whether
I hit the same error on shutdown, because now I don't hit
the error every time, but only every now and then.
When adding the patch to my code base (git tag v1.0.17) I notice
an
The purpose of this InfiniBand SRP initiator patch series is as follows:
- Make the SRP initiator driver better suited for use in an H.A. setup.
- Add fast_io_fail_tmo, dev_loss_tmo and reconnect_delay parameters.
  These can be used either to speed up failover or to avoid device
  removal when
- Keep the rport data structure around after srp_remove_host() has
  finished, until cleanup of the IB transport layer has finished
  completely. This is necessary because later patches use the rport
  pointer inside the queuecommand callback. Without this patch,
  accessing the rport from inside a
- Add the necessary functions in the SRP transport module to allow
  an SRP initiator driver to implement transport-layer error handling
  similar to the functionality already provided by the FC transport
  layer. This includes:
  - Support for implementing fast_io_fail_tmo, the time that should
    elapse
- Finish all outstanding I/O requests after fast_io_fail_tmo has
  expired, which speeds up failover in a multipath setup. This patch
  is a reworked version of a patch from Sebastian Riemer.
Reported-by: Sebastian Riemer sebastian.rie...@profitbricks.com
Signed-off-by: Bart Van Assche bvanass...@acm.org
Enable reconnect_delay, fast_io_fail_tmo and dev_loss_tmo
functionality for the IB SRP initiator. Add kernel module
parameters that allow default values for these three
parameters to be specified.
Signed-off-by: Bart Van Assche bvanass...@acm.org
Acked-by: David Dillow dillo...@ornl.gov
Cc: Roland
Start the reconnect timer, fast_io_fail timer and dev_loss timers
if a transport layer error occurs.
Signed-off-by: Bart Van Assche bvanass...@acm.org
Acked-by: David Dillow dillo...@ornl.gov
Cc: Roland Dreier rol...@kernel.org
Cc: Vu Pham v...@mellanox.com
Cc: Sebastian Riemer
This patch does not change any functionality.
Signed-off-by: Bart Van Assche bvanass...@acm.org
Cc: Roland Dreier rol...@purestorage.com
Cc: David Dillow dillo...@ornl.gov
Cc: Vu Pham v...@mellanox.com
Cc: Sebastian Riemer sebastian.rie...@profitbricks.com
---
drivers/infiniband/ulp/srp/ib_srp.c
Certain storage configurations, e.g. a sufficiently large array of
hard disks in a RAID configuration, need a queue depth above 64 to
achieve optimal performance. Hence make the queue depth configurable.
Signed-off-by: Bart Van Assche bvanass...@acm.org
Cc: Roland Dreier rol...@purestorage.com
On 8/19/2013 6:46 AM, Line Holen wrote:
On 08/16/13 15:47, Hal Rosenstock wrote:
On 8/14/2013 6:26 AM, Line Holen wrote:
Signed-off-by: Line Holen line.ho...@oracle.com
---
diff --git a/opensm/osm_port_info_rcv.c b/opensm/osm_port_info_rcv.c
index 7dcd15e..961b376 100644
---
From: Vladimir Koushnir vladim...@mellanox.com
A double strdup for p_opt->dump_files_dir is causing a memory leak.
Signed-off-by: Vladimir Koushnir vladim...@mellanox.com
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
diff --git a/opensm/osm_subnet.c b/opensm/osm_subnet.c
index 7ab1671..4b5ef38
From: Vladimir Koushnir vladim...@mellanox.com
A double strdup for p_opt->dump_files_dir is causing a memory leak.
Approach from Bart Van Assche bvanass...@acm.org
Signed-off-by: Vladimir Koushnir vladim...@mellanox.com
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
Change since v1:
Eliminate
I guess this is at least some progress... :/
Where is the documentation for this? Multiple people have referred to it, but
I don't see any mention of it in libibverbs.git.
This is an unmerged patch set that has not yet been accepted. The
extensions were added as part of adding support for XRC.
Yishai Hadas posted v9 of the series on 8/1 - Add
Signed-off-by: Ira Weiny ira.we...@intel.com
---
README       | 1 +
configure.ac | 2 ++
2 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/README b/README
index d19e3e9..10a11e2 100644
--- a/README
+++ b/README
@@ -14,6 +14,7 @@ Dependencies:
2) libibumad >= 1.3.7
Commit 36a8f01c (IB/qib: Add congestion control agent implementation) caused
statements to leak past the header guard.
This update corrects that.
Reviewed-by: Marciniszyn, Mike mike.marcinis...@intel.com
Signed-off-by: Ira Weiny ira.we...@intel.com
---
drivers/infiniband/hw/qib/qib_mad.h |
On 8/20/2013 3:50 PM, Bart Van Assche wrote:
Certain storage configurations, e.g. a sufficiently large array of
hard disks in a RAID configuration, need a queue depth above 64 to
achieve optimal performance. Hence make the queue depth configurable.
Signed-off-by: Bart Van Assche
From: Alex Netes ale...@mellanox.com
The key should also be freed.
Signed-off-by: Alex Netes ale...@mellanox.com
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
opensm/osm_db_files.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/opensm/osm_db_files.c
On 08/20/13 17:34, Sagi Grimberg wrote:
On 8/20/2013 3:50 PM, Bart Van Assche wrote:
Certain storage configurations, e.g. a sufficiently large array of
hard disks in a RAID configuration, need a queue depth above 64 to
achieve optimal performance. Hence make the queue depth configurable.
[ ...
From: Dan Ben Yosef da...@dev.mellanox.co.il
This leaks the storage that p_accum_val and p_key point to.
Signed-off-by: Dan Ben Yosef da...@dev.mellanox.co.il
Reviewed-by: Vladimir Koushnir vladim...@mellanox.com
Signed-off-by: Hal Rosenstock h...@mellanox.com
---
diff --git a/opensm/osm_db_files.c
On Tue, 2013-08-20 at 17:55 +0200, Bart Van Assche wrote:
On 08/20/13 17:34, Sagi Grimberg wrote:
Question:
If SRP will now allow larger queues while still using a single global
FMR pool of size 1024, isn't it more likely that in a stressed
environment SRP will run out of FMRs to handle I/O