With platform contrib/platform/lanl/tlss/debug-panasus I get an error:
make[2]: Entering directory
`/panfs/scratch/vol7/hjelmn/turing/ompi-trunk-git/ompi/tools/ompi_info'
CCLD ompi_info
../../../ompi/.libs/libmpi.so: undefined reference to `NBC_Operation'
Brian, can you take a look?
-Nathan
Nice! Are we moving this to 1.7 as well?
-Nathan
On Mon, Jul 02, 2012 at 11:20:12AM -0400, svn-commit-mai...@open-mpi.org wrote:
> Author: pasha (Pavel Shamis)
> Date: 2012-07-02 11:20:12 EDT (Mon, 02 Jul 2012)
> New Revision: 26707
> URL: https://svn.open-mpi.org/trac/ompi/changeset/26707
>
> L
Keep in mind that this is currently not used for the openib BTL. It is only
used in the upcoming OpenFabrics-based collectives component.
The iWARP-required connector-must-send-first logic is not yet included in this
code, as I understand it. That must be added before it can be used with the
Hello,
I'm debugging an issue with openmpi-1.4.5 and the openib BTL over
Chelsio iWARP devices. I am the iWARP driver developer for this
device. I have debug code that detects CQ overflows, and I'm seeing RCQ
overflows during finalize for certain IMB runs with ompi. So as the
recv wrs a
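For anyone following along, here is a minimal standalone sketch of the verbs
pattern in question (assumed names, not the actual ompi or driver code): the
CQ is created with a fixed number of entries, and if more receive WRs can
complete than the CQ holds before someone polls it, the provider sees an
overrun.

    /* Sketch with assumed names -- not the ompi code. */
    #include <infiniband/verbs.h>

    struct ibv_cq *make_recv_cq(struct ibv_context *ctx, int outstanding_wrs)
    {
        /* cqe must cover every WR that can complete before we poll;
         * this is the value btl_openib_cq_size overrides in ompi. */
        return ibv_create_cq(ctx, outstanding_wrs,
                             NULL /* cq_context */,
                             NULL /* comp channel */, 0 /* comp_vector */);
    }

    static void drain(struct ibv_cq *cq)
    {
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) > 0) {
            /* inspect wc.status / wc.opcode here */
        }
    }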
So is ofacm another replacement for ibcm and rdmacm?
--td
On 7/2/2012 11:20 AM, Nathan Hjelm wrote:
> Nice! Are we moving this to 1.7 as well?
> -Nathan
> On Mon, Jul 02, 2012 at 11:20:12AM -0400, svn-commit-mai...@open-mpi.org wrote:
>> Author: pasha (Pavel Shamis)
>> Date: 2012-07-02 11:20:12 EDT (Mon, 02 Jul 2012)
On 7/2/12 11:00 AM, "Nathan Hjelm" wrote:
>With platform contrib/platform/lanl/tlss/debug-panasus I get an error:
>make[2]: Entering directory
>`/panfs/scratch/vol7/hjelmn/turing/ompi-trunk-git/ompi/tools/ompi_info'
> CCLD ompi_info
>../../../ompi/.libs/libmpi.so: undefined reference to `NBC_Operation'
> Keep in mind that this is currently not used for the openib BTL. It is only
> used in the upcoming OpenFabrics-based collectives component.
>
> The iWARP-required connector-must-send-first logic is not yet included in
> this code, as I understand it. That must be added before it can be used
So is ofacm another replacement for ibcm and rdmacm?
Essentially, it is an extraction of the connection manager functionality
(minus rdmacm) from the OpenIB BTL. The idea is to allow other communication
components, like collectives and BTLs, to access this functionality.
OFACM supports
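To make the idea concrete, here is a guess at the shape of such a shared
interface -- purely illustrative, with assumed names, not the actual ofacm
API:

    /* Hypothetical sketch, not the real ofacm interface. */
    typedef int (*cm_start_connect_fn)(void *endpoint);
    typedef int (*cm_finalize_fn)(void);

    struct cm_module {
        const char          *name;          /* e.g. "oob" or "ud" */
        cm_start_connect_fn  start_connect; /* wire up a QP on demand */
        cm_finalize_fn       finalize;
    };
    /* A BTL or collective component would pick a module and call
     * start_connect() instead of carrying its own CPC code. */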
Yeah, it is going to 1.7
Do you want to move your UD connection manager code there :)
Pavel (Pasha) Shamis
---
Application Performance Tools Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Jul 2, 2012, at 11:20 AM, Nathan Hjelm wrote:
> Nice! Are we moving this to 1.7 as well?
Hello everyone -
This morning the branch for the 1.7 release series was created (and all
the other ancillary stuff that goes along with a branch also occurred).
The nightly tarballs are available at the expected location
(http://www.open-mpi.org/nightly/v1.7/), with the first generated this
morning.
You know, I have the following in a few of my MTT configurations:
-
# See if this makes the CQ overrun errors go away
cq_depth = " --mca btl_openib_cq_size 65536 "
-
And then I use that variable as an mpirun CLI option in a few places. It looks
like someth
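For what it's worth, whatever value that flag requests still has to fit under
the per-device limit the driver reports; a small sketch (assumed names) of
the clamp:

    /* Sketch with assumed names: clamp a requested CQ depth to the
     * device's max_cqe as reported by ibv_query_device(). */
    #include <infiniband/verbs.h>

    int clamp_cq_size(struct ibv_context *ctx, int requested)
    {
        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) != 0)
            return -1;
        return requested < attr.max_cqe ? requested : attr.max_cqe;
    }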
Steve --
Can you extend this new stuff to support RDMACM, including the iWARP-needed
connector-sends-first stuff? It would be *very* nice to ditch the openib CPC
stuff and only have the new ofacm stuff.
I'm asking Steve because he's effectively the only iWARP vendor left around
(and iWARP *req
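For reference, the iWARP rule is that the active side -- the one that called
rdma_connect() -- must own the first send on the new connection. A sketch
(assumed names, illustrative only) of what the connector-side CPC step would
look like:

    /* Illustrative sketch, assumed names. Run on the connecting side
     * after RDMA_CM_EVENT_ESTABLISHED, before any other send; the
     * acceptor must have pre-posted a matching recv. */
    #include <stdint.h>
    #include <rdma/rdma_cma.h>

    static int connector_sends_first(struct rdma_cm_id *id,
                                     struct ibv_mr *mr, uint32_t len)
    {
        struct ibv_sge sge = {
            .addr = (uintptr_t) mr->addr, .length = len, .lkey = mr->lkey
        };
        struct ibv_send_wr wr = {
            .sg_list = &sge, .num_sge = 1, .opcode = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED
        }, *bad;
        return ibv_post_send(id->qp, &wr, &bad);
    }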
Yevgeny,
The ROCEE transport relies on RDMACM as well. I believe Mellanox should be
interested in supporting it.
Pavel (Pasha) Shamis
---
Application Performance Tools Group
Computer Science and Math Division
Oak Ridge National Laboratory
On Jul 2, 2012, at 5:14 PM, Jeff Squyres wrote:
> Steve
Yup, I will work on it after I figure out why it is causing MTT failures.
-Nathan
On Mon, Jul 02, 2012 at 02:06:30PM -0400, Shamis, Pavel wrote:
> Yeah, it is going to 1.7
>
> Do you want to move your UD connection manager code there :)
>
> Pavel (Pasha) Shamis
> ---
> Application Performance Tools Group
If I use --mca btl_openib_cq_size and override the computed CQ depth,
then I can indeed avoid the CQ overflows.
On 7/2/2012 4:12 PM, Jeff Squyres wrote:
> You know, I have the following in a few of my MTT configurations:
> -
> # See if this makes the CQ overrun errors go away
> cq_depth = " --mca btl_openib_cq_size 65536 "
On 7/2/2012 4:14 PM, Jeff Squyres wrote:
> Steve --
> Can you extend this new stuff to support RDMACM, including the iWARP-needed
> connector-sends-first stuff?
I have no time right now. I could perhaps test something if someone can
do the initial pull of the rdmacm CPC code into the ofacm...
I
Please put on the agenda:
- openib registered memory exhaustion problem still isn't fixed
Should we just use a heuristic to limit free list sizes (etc.)?
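One possible shape for such a heuristic, purely as a sketch with assumed
names:

    /* Sketch (assumed names): cap each free list so that all lists
     * together cannot exceed some registered-memory budget. */
    #include <stddef.h>

    size_t cap_free_list_len(size_t requested, size_t elem_size,
                             size_t reg_mem_budget, size_t num_lists)
    {
        size_t max_elems = (reg_mem_budget / num_lists) / elem_size;
        return requested < max_elems ? requested : max_elems;
    }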
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
>>> Terry -- please add this to the agenda (but at the lowest priority).
It's been a long-standing dream of mine to launch MPI processes in individual
text screens. The venerable screen(1) app makes this somewhat difficult, but I
was pleased to discover recently that tmux(1) -- basically, a 2nd