Hmmm... well, according to this, it looks like the process ranks are being
incorrectly assigned. That shouldn't have anything to do with what environment
we are in (SLURM, rsh, etc.).
I'll look into it - thanks!
On Mon, May 17, 2010 at 4:25 PM, Christopher Maestas wrote:
> OK. The -np only run:
> ---
> s
OK. The -np only run:
---
sh-3.1$ mpirun -np 2 --display-allocation --display-devel-map mpi_hello
== ALLOCATED NODES ==
Data for node: Name: cut1n7  Launch id: -1  Arch: ffc91200
State: 2
Num boards: 1  Num sockets/board: 2  Num
That's a pretty old version of SLURM - I don't have access to anything that
old to test against. You could try running it with --display-allocation
--display-devel-map to see what ORTE thinks the allocation is and how it
mapped the procs. It sounds like something may be having a problem there...
Dear Jeff,
Thank you for your reply. In this case, I believe that I only need to set up
IPsec tunneling using IPsec-tools and racoon.
2010/5/17 Jeff Squyres
> There is currently no provision for IPsec inside Open MPI right now (i.e.,
> Open MPI doesn't use any IPsec encryption itself).
>
> Howev
Open MPI's OpenFabrics support will spawn up to two additional blocking threads
(they wait for asynchronous verbs events of various flavors). They consume a
few resources, but typically are not used much. They don't cause any
noticeable change in performance.
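Since these service threads spend their lives parked in a blocking wait, one way to confirm they aren't consuming CPU is to list each thread of the process with its state and accumulated CPU time (a sketch using Linux procps `ps`; `$$`, the current shell's PID, stands in for an actual MPI task's PID):

```shell
# List each thread (-L) of a process with its state and CPU time.
# Threads parked in a blocking wait show STAT 'S' (sleeping)
# and accumulate ~0:00 of TIME.
# $$ (this shell) is just a runnable stand-in for an MPI task's PID.
ps -L -o pid=,lwp=,stat=,time= -p $$
```

Replacing `$$` with the PID of one of the `mpi_hello` processes should show the extra threads sleeping with essentially no CPU time charged to them.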
On May 17, 2010, at 12:38 PM, Pi
There is currently no provision for IPsec inside Open MPI (i.e., Open MPI
doesn't use any IPsec encryption itself).
However, I don't see any problems with running Open MPI over transparent IPsec
tunneling.
On May 15, 2010, at 4:26 PM, awwase wrote:
> Dear all,
>
> I would like to
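Since the reply above notes that Open MPI should run fine over transparent IPsec, a minimal IPsec-tools policy sketch for two nodes might look like the following (the addresses 10.0.0.1 and 10.0.0.2 are hypothetical; racoon would then negotiate the actual security associations):

```conf
#!/usr/sbin/setkey -f
# Hypothetical sketch: require ESP in transport mode for all traffic
# between two cluster nodes (10.0.0.1 and 10.0.0.2 are placeholders).
flush;
spdflush;

spdadd 10.0.0.1 10.0.0.2 any -P out ipsec esp/transport//require;
spdadd 10.0.0.2 10.0.0.1 any -P in  ipsec esp/transport//require;
```

Because this happens entirely at the IP layer, Open MPI's TCP BTL would be unaware of the encryption.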
Hello,
I've been having some trouble with Open MPI 1.4.x and SLURM recently. I
seem to be able to run jobs this way OK:
---
sh-3.1$ mpirun -np 2 mpi_hello
Hello, I am node cut1n7 with rank 0
Hello, I am node cut1n8 with rank 1
---
However, if I try to use the -npernode option I get:
---
sh-3.1$ m
Hello,
I found that when running an MPI program linked against the Open MPI library,
Open MPI will spawn three threads for each MPI task, as the sample shown
below:
$ ps axms
...
13536 3565 ---
-pts/14 0:00 mpirun -n 2 ./a
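A quicker way to see how many threads each process in the job has is to ask `ps` for the thread count directly (Linux procps; `nlwp` is the "number of lightweight processes", i.e. threads):

```shell
# Report the thread count for a process via the nlwp field.
# Substitute the PID of an MPI task (e.g. from 'pgrep mpi_hello');
# the current shell's PID ($$) is used here only as a runnable stand-in.
ps -o nlwp= -p $$
```

For the case above, this would print the per-task thread count (main thread plus the additional service threads) without having to read the full `ps axms` listing.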
On Monday, 17 May 2010, Scott Atchley wrote:
> On May 16, 2010, at 1:32 PM, Lydia Heck wrote:
> > When running over gigabit using -mca btl tcp,self,sm the code runs
> > alright, which is good as the largest part of our cluster is over
> > gigabit, and as Gadget-3 scales rather well, the penalty
I don't know if it's the same problem or not (and we haven't tested on
Myrinet), but we have one code that frequently hangs on smallish (64-node)
runs. Unfortunately I haven't been able to dig deep into the problem, but the
hang is in a bcast call, where peers are doing sendrecv calls. All b