I installed the latest version of Open MPI, and now I have this error:
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was
I'm not a systems guy, but I'll pitch in anyway. On our cluster,
all the compute nodes are completely diskless. The root file system,
including /tmp, resides in memory (ramdisk). OpenMPI puts these
session directories therein. All our jobs run through a batch
system (torque). At the
Thanks for the help. A couple of follow-up questions; maybe this starts to go
outside OpenMPI:
What's wrong with using /dev/shm? I think you said earlier in this thread that
this was not a safe place.
If the NFS-mount point is moved from /tmp to /work, would a /tmp magically
appear in the
Yes, I have an old version. I will install 1.4.4 and see.
Thanks
2011/11/3 Jeff Squyres
> It sounds like you have an old version of Open MPI that is not ignoring
> your unconfigured OpenFabrics devices in your Linux install. This is a
> guess because you didn't provide any
The sm btl is definitely more performant than loopback on other devices.
On Nov 3, 2011, at 4:55 PM, Blosch, Edwin L wrote:
> I might be missing something here. Is there a side-effect or performance loss
> if you don't use the sm btl? Why would it exist if there is a wholly
> equivalent
It sounds like you have an old version of Open MPI that is not ignoring your
unconfigured OpenFabrics devices in your Linux install. This is a guess
because you didn't provide any information about your Open MPI installation.
:-)
Try upgrading to a newer version of Open MPI.
On Nov 3,
On Nov 3, 2011, at 2:55 PM, Blosch, Edwin L wrote:
> I might be missing something here. Is there a side-effect or performance loss
> if you don't use the sm btl? Why would it exist if there is a wholly
> equivalent alternative? What happens to traffic that is intended for another
> process
I might be missing something here. Is there a side-effect or performance loss
if you don't use the sm btl? Why would it exist if there is a wholly
equivalent alternative? What happens to traffic that is intended for another
process on the same node?
Thanks
-Original Message-
From:
Right. Actually "--mca btl ^sm". (Was missing "btl".)
On 11/3/2011 11:19 AM, Blosch, Edwin L wrote:
I don't tell OpenMPI what BTLs to use. The default uses sm and puts a session
file on /tmp, which is NFS-mounted and thus not a good choice.
Are you suggesting something like --mca ^sm?
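The corrected syntax (`--mca btl ^sm`, not `--mca ^sm`) can be sketched as follows; `./my_app` is a placeholder binary, and the `^` prefix excludes the named component from the btl framework:

```shell
# Command-line form (sketch):   mpirun --mca btl ^sm -np 4 ./my_app
# Equivalent environment form, which mpirun also reads (OMPI_MCA_<param>):
export OMPI_MCA_btl='^sm'
echo "$OMPI_MCA_btl"
```

With the sm component excluded, same-node traffic falls back to whichever remaining BTLs are available.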
On Nov 3, 2011, at 1:36 PM, Blosch, Edwin L wrote:
> Yes it sucks, so that's what led me to post my original question: If /dev/shm
> isn't the right place to put the session file, and /tmp is NFS-mounted, then
> what IS the "right" way to set up a diskless cluster? I don't think the idea
> of
You are right, Ralph. There is no surprise behavior. I had forgotten that I
had been testing --mca orte_tmpdir_base /dev/shm to see if it worked (and
obviously it doesn't). Before that, without any MCA options, OpenMPI had tried
/tmp, and gave me the warning about /tmp being NFS mounted, and
I've not been following closely. Why must one use shared-memory
communications? How about using other BTLs in a "loopback" fashion?
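One way to try the "loopback" idea is to restrict the btl list to tcp plus the mandatory self component. A sketch under those assumptions, with `./my_app` as a placeholder binary:

```shell
# Command-line form (sketch):   mpirun --mca btl tcp,self -np 4 ./my_app
# Same selection expressed via the environment:
export OMPI_MCA_btl='tcp,self'
echo "$OMPI_MCA_btl"
```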
Cross-thread response here, as this is related to the shared-memory thread:
Yes it sucks, so that's what led me to post my original question: If /dev/shm
isn't the right place to put the session file, and /tmp is NFS-mounted, then
what IS the "right" way to set up a diskless cluster? I don't
In /tmp.
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Durga Choudhury
Sent: Thursday, November 03, 2011 11:04 AM
To: Open MPI Users
Subject: EXTERNAL: Re: [OMPI users] Shared-memory problems
Since /tmp is mounted across a network
I use Open MPI on my computer.
2011/11/3 Ralph Castain
> Couple of things:
>
> 1. Check the configure cmd line you gave - OMPI thinks your local computer
> should have an openib support that isn't correct.
>
> 2. did you recompile your app on your local computer, using the
Couple of things:
1. Check the configure cmd line you gave - OMPI thinks your local computer
should have an openib support that isn't correct.
2. did you recompile your app on your local computer, using the version of OMPI
built/installed there?
On Nov 3, 2011, at 10:10 AM, amine mrabet
On Nov 1, 2011, at 7:31 PM, Blosch, Edwin L wrote:
> I’m getting this message below which is observing correctly that /tmp is
> NFS-mounted. But there is no other directory which has user or group write
> permissions. So I think I’m kind of stuck, and it sounds like a serious
> issue.
That
I'm afraid this isn't correct. You definitely don't want the session directory
in /dev/shm as this will almost always cause problems.
We look thru a progression of envars to find where to put the session directory:
1. the MCA param orte_tmpdir_base
2. the envar OMPI_PREFIX_ENV
3. the envar
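Following the progression above, the base can be overridden explicitly via the first item, the MCA param. A minimal sketch, assuming `/scratch/local` is a node-local filesystem (placeholder path):

```shell
# Command-line form (sketch):
#   mpirun --mca orte_tmpdir_base /scratch/local -np 4 ./my_app
# The same MCA parameter as an environment variable that mpirun reads:
export OMPI_MCA_orte_tmpdir_base=/scratch/local
echo "$OMPI_MCA_orte_tmpdir_base"
```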
Hey,
I use mpirun to run a program that uses MPI. This program worked well on the
university computer, but on mine I have this error.
I run with:
amine@dellam:~/Bureau$ mpirun -np 2 pl
and I have this error:
libibverbs: Fatal: couldn't read uverbs ABI version.
Since /tmp is mounted across a network and /dev/shm is (always) local,
/dev/shm seems to be the right place for shared memory transactions.
If you create temporary files using mktemp is it being created in
/dev/shm or /tmp?
On Thu, Nov 3, 2011 at 11:50 AM, Bogdan Costescu
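To answer the mktemp question empirically: mktemp honours TMPDIR and falls back to /tmp otherwise. A small self-contained check, using a scratch directory to stand in for /dev/shm:

```shell
dir=$(mktemp -d)               # scratch dir standing in for /dev/shm
f=$(TMPDIR="$dir" mktemp)      # with TMPDIR set, the file lands under $dir
case "$f" in
  "$dir"/*) echo "created under TMPDIR" ;;
  *)        echo "created elsewhere (likely /tmp)" ;;
esac
rm -f "$f"; rmdir "$dir"
```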
On Thu, Nov 3, 2011 at 15:54, Blosch, Edwin L wrote:
> - /dev/shm is 12 GB and has 755 permissions
> ...
> % ls -l output:
>
> drwxr-xr-x 2 root root 40 Oct 28 09:14 shm
This is your problem: it should be something like drwxrwxrwt. It might
depend on the
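For comparison, the expected mode on a world-writable temp directory carries the sticky bit. A small sketch on a scratch directory (not on /dev/shm itself, which needs root):

```shell
# 1777 = world-writable with the sticky bit, i.e. drwxrwxrwt like a proper /tmp.
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d" | cut -c1-10   # prints drwxrwxrwt
rm -rf "$d"
```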
On Nov 3, 2011, at 8:54 AM, Blosch, Edwin L wrote:
> Can anyone guess what the problem is here? I was under the impression that
> OpenMPI (1.4.4) would look for /tmp and would create its shared-memory
> backing file there, i.e. if you don’t set orte_tmpdir_base to anything.
That is correct
Can anyone guess what the problem is here? I was under the impression that
OpenMPI (1.4.4) would look for /tmp and would create its shared-memory backing
file there, i.e. if you don't set orte_tmpdir_base to anything.
Well, there IS a /tmp and yet it appears that OpenMPI has chosen to use
Hi,
We are done with testing the OFED stack using Open MPI, but we still want to
check the OFED stack with Graph 500. I have heard about the Graph 500 benchmark
but can't find enough information on the net regarding how to use it. I posted
here in the hope that the MPI people know about it.
Please can you