Hi Jeff,
Sorry, we can't use IPv6 right now, but we may be able to in the future.
When you're talking to someone behind a NAT (or any type of firewall), how do
you know to whom you're actually talking?
If machine A can talk to machine C in front of the NAT, and that machine can
relay the data packet to the machin
1.4.2rc2 is now out there - please give it a whirl.
On Apr 27, 2010, at 3:56 PM, Jeff Squyres wrote:
> I'm still evaluating my Cisco MTT failures -- at least one of them is
> repeatable, so I'm trying to track down whether it's a local error or a real
> bug.
>
> My $0.02 is that the pending CM
I'm still evaluating my Cisco MTT failures -- at least one of them is
repeatable, so I'm trying to track down whether it's a local error or a real
bug.
My $0.02 is that the pending CMRs should be applied and we cut rc2 tonight
anyway.
--
Jeff Squyres
jsquy...@cisco.com
The following code was recently modified in btl_openib_endpoint.h (trunk):
-
#if OPAL_ENABLE_DEBUG
    do {
        ftr->seq = ep->eager_rdma_remote.seq;
    } while (!OPAL_ATOMIC_CMPSET_32(&ep->eager_rdma_remote.seq, ftr->seq,
                                    ftr->seq + 1));
#endif
-
This line produces the foll
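For anyone who wants to poke at the pattern in isolation, here is a minimal
sketch of the same lock-free sequence increment, with GCC's
__sync_bool_compare_and_swap standing in for OPAL_ATOMIC_CMPSET_32 and a plain
static counter standing in for ep->eager_rdma_remote.seq (both substitutions
are mine, not the Open MPI internals):
-
/* Illustrative stand-in for the CAS loop above; remote_seq and next_seq
 * are made-up names for this sketch. */
#include <stdint.h>
#include <stdio.h>

static int32_t remote_seq = 0;   /* stands in for ep->eager_rdma_remote.seq */

static int32_t next_seq(void)
{
    int32_t seen;
    do {
        seen = remote_seq;                 /* snapshot the current value */
    } while (!__sync_bool_compare_and_swap(&remote_seq, seen, seen + 1));
    return seen;                           /* the value this caller claimed */
}

int main(void)
{
    int32_t claimed = next_seq();
    printf("claimed %d, counter now %d\n", claimed, remote_seq);
    return 0;
}
-
The loop retries whenever another thread bumps the counter between the
snapshot and the swap, which is why the CMPSET can legitimately fail and must
be re-run.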
Hi,
With Jeff and Ralph's help, I have completed a System V shared memory
component for Open MPI. I have conducted some preliminary tests on
our systems, but would like to get test results from a broader audience.
As it stands, mmap is the default, but System V shared memory can be
activa
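For testers who haven't touched the System V API before, here is a minimal,
hand-rolled round trip through the raw primitives such a component sits on
(shmget/shmat/shmctl); this is just the kernel interface, not the new Open MPI
component itself:
-
/* Raw System V shared memory: create a private segment, attach, write,
 * detach, and mark it for removal. Size and mode are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id < 0) { perror("shmget"); return 1; }

    char *seg = shmat(id, NULL, 0);
    if (seg == (char *) -1) { perror("shmat"); return 1; }

    strcpy(seg, "hello from sysv shm");
    printf("%s\n", seg);

    shmdt(seg);
    shmctl(id, IPC_RMID, NULL);   /* destroyed once the last process detaches */
    return 0;
}
-
One attraction of System V segments is that an IPC_RMID-marked segment goes
away when the last attached process exits, even after a crash.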
On Apr 27, 2010, at 1:43 PM, Jeff Squyres wrote:
> Brad -- I assume that this is one of the locking patches that has not yet
> made it to v1.5...? (the routine in question is surrounded with locking
> comments, but it does not exist in v1.5)
Gah -- I totally lied (forgot to svn up my v1.5 tree
On Apr 24, 2010, at 12:31 PM, Ralph Castain wrote:
> The first two are trivial and I will simply fix them myself. The last one is
> less obvious to me - can someone with knowledge of that code give us a patch?
Done -- patch in https://svn.open-mpi.org/trac/ompi/ticket/2391. It's not an
issue o
I didn't run that specific test, but I did run a test that calls MPI_Abort. I
found a bug this morning, though (reported by Sam), that was causing the state
of remote procs to be incorrectly reported.
Try with r23048 or higher.
On Apr 27, 2010, at 9:15 AM, Rolf vandeVaart wrote:
> Ralph, did y
On Apr 27, 2010, at 10:20 , Sylvain Jeaugey wrote:
> Hi list,
>
> I'm currently working on IB bandwidth improvements, and maybe some of you can
> help me understand a few things. I'm trying to align every IB RDMA
> operation to 64 bytes, because having it unaligned can hurt your performance
>
Ralph, did you get a chance to run the ibm/final test to see if these
changes fixed the problem? I just rebuilt the trunk and tried it and I
still get an exit status of 0 back. I will run it again to make sure I
have not made a mistake.
Rolf
On 04/26/10 23:43, Ralph Castain wrote:
Okay, thi
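For a quick check outside the test suite, a tiny reproducer along these lines
(a made-up stand-in, not ibm/final itself) is enough to see whether mpirun
propagates a nonzero status:
-
/* Hypothetical reproducer: rank 0 aborts with a nonzero error code, so
 * mpirun's own exit status should be nonzero, not 0. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        MPI_Abort(MPI_COMM_WORLD, 42);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
-
Run it under mpirun and check the shell's $? afterward; the bug under
discussion shows up as a 0 there.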
Hi list,
I'm currently working on IB bandwidth improvements, and maybe some of you
can help me understand a few things. I'm trying to align every IB RDMA
operation to 64 bytes, because having it unaligned can hurt your
performance anywhere from slightly to very badly, depending on your architecture.
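To make the alignment constraint concrete, here is a small sketch of the two
usual tricks: allocating a 64-byte-aligned buffer outright, and rounding an
arbitrary pointer up to the next 64-byte boundary (just the arithmetic, no
verbs calls):
-
/* 64-byte alignment two ways; the buffer size and offset are arbitrary. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *buf = NULL;
    /* posix_memalign guarantees a 64-byte-aligned base address. */
    if (posix_memalign(&buf, 64, 8192) != 0) return 1;

    /* For a pointer into an existing registered region, round up instead. */
    uintptr_t p = (uintptr_t) buf + 10;              /* some unaligned spot */
    uintptr_t aligned = (p + 63) & ~(uintptr_t) 63;
    printf("base %p, rounded up %p\n", buf, (void *) aligned);

    free(buf);
    return 0;
}
-
The round-up is the piece that matters when the payload has to start at an
aligned offset inside an already-registered region.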
On Apr 27, 2010, at 10:06 AM, Leo P. wrote:
> Ralph has talked about the other parts already; so I'll ask about the BTL:
> what type of network are you looking to route via the BTL?
>
> I am talking about two different networks using private IPs, with all the
> communication being routed through
Hi Jeff,
> The reason why I am using MPI_Comm_spawn and singletons is that I am going to
> route the MPI communication (BTL and OOB) through another computer before it
> reaches its intended destination. :)
Ralph has talked about the other parts already; so I'll ask about the BTL: what
type of networ
On Apr 26, 2010, at 11:05 PM, Leo P. wrote:
> The reason why I am using MPI_Comm_spawn and singletons is that I am going to
> route the MPI communication (BTL and OOB) through another computer before it
> reaches its intended destination. :)
Ralph has talked about the other parts already; so I'll ask a
I tested this on my machines and it worked, so hopefully it will meet your
needs. You only need to run one "ompi-server", period, so long as you locate it
where all of the processes can find the contact file and can open a TCP socket
to the daemon. There is a way to knit multiple ompi-servers int
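For reference, the MPI side of that rendezvous looks roughly like the sketch
below: a server opens a port and publishes it under a name (the service name
"my-service" is made up here); clients pointed at the same ompi-server can
then look it up and connect:
-
/* Sketch of the server side of MPI-2 name publishing; with Open MPI, the
 * shared ompi-server daemon is what makes the published name visible. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my-service", MPI_INFO_NULL, port);

    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    printf("client connected\n");

    MPI_Comm_disconnect(&client);
    MPI_Unpublish_name("my-service", MPI_INFO_NULL, port);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}
-
A client then calls MPI_Lookup_name("my-service", ...) followed by
MPI_Comm_connect on the returned port.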