Hi all,
I encountered a problem with mpirun and SSH when using OMPI 1.10.0 compiled
with gcc, running on CentOS 7.2.
When I execute mpirun on my 2-node cluster, I get the following error, pasted
below.
[douraku@master home]$ mpirun -np 12 a.out
Permission denied
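A likely cause (an assumption, since the resolution is not shown in this
excerpt) is that mpirun cannot ssh to the second node without a password. A
rough sketch of setting that up with OpenSSH, using node2 as a placeholder
hostname:
  ssh-keygen -t rsa              # accept the defaults, leave the passphrase empty
  ssh-copy-id douraku@node2      # install the public key on the other node
  ssh douraku@node2 hostname     # should print the hostname without any prompt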
>>> Karos - Were you able to make and run the Java examples in the
>>> MPI_ROOT/examples directory?
>>>
>>> I started with those after similar hiccups trying to get things up and
>>> running.
>>>
>>> Chuck Mosher
>>> JavaSeis.org
From: Ralph Castain <r...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Sent: Thursday, January 17, 2013 2:27 PM
Subject: Re: [OMPI users] Problem with mpirun for java codes
Just as an FYI: we have removed the Java bindings from the 1.7.0 release due to
all the reported errors - looks like that code just isn't ready yet for
release. It remains available on the nightly snapshots of the developer's trunk
while we continue to debug it.
With that said, I tried your
Hi,
The version that I am using is
1.7rc6 (pre-release)
Regards,
Karos
Which version of OMPI are you using?
On Jan 16, 2013, at 11:43 AM, Karos Lotfifar wrote:
> Hi,
>
> I am still struggling with the installation problems! I get very strange
> errors. Everything is fine when I run OpenMPI for C codes, but when I try to
> run a simple Java
We really need more information in order to help you. Please see:
http://www.open-mpi.org/community/help/
On Nov 3, 2011, at 7:37 PM, amine mrabet wrote:
I installed the latest version of openmpi and now I have this error:
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was
Yes, I have an old version; I will install 1.4.4 and see.
Thanks
It sounds like you have an old version of Open MPI that is not ignoring your
unconfigured OpenFabrics devices in your Linux install. This is a guess
because you didn't provide any information about your Open MPI installation.
:-)
Try upgrading to a newer version of Open MPI.
On Nov 3,
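For reference, a quick way to check which Open MPI version is actually being
picked up on a node (ompi_info ships with Open MPI and prints the version at
the top of its output):
  which mpirun
  ompi_info | head -n 3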
I use openmpi on my own computer.
A couple of things:
1. Check the configure cmd line you gave - OMPI thinks your local computer
should have openib support, which isn't correct.
2. Did you recompile your app on your local computer, using the version of OMPI
built/installed there?
On Nov 3, 2011, at 10:10 AM, amine mrabet
Hey,
I use mpirun to run a program that uses MPI. The program worked well on a
university computer, but on mine I get this error.
I run with
amine@dellam:~/Bureau$ mpirun -np 2 pl
and I get this error:
libibverbs: Fatal: couldn't read uverbs ABI version.
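One commonly suggested workaround for this warning on machines that have the
OpenFabrics libraries installed but no working InfiniBand hardware (offered
here as a general note, in addition to Jeff's suggestion above to upgrade) is
to exclude the openib BTL so Open MPI falls back to TCP and shared memory:
  mpirun --mca btl ^openib -np 2 pl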
Hi Jeff,
Thanks to you, I figured out the problem. As you suspected, it was iptables
which was acting as a firewall on some machines. So, after I stopped
iptables, the MPI communication is going fine. I even tried with 5 machines
together and the communication is going all right.
Thanks again,
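For reference, a rough sketch of checking and temporarily stopping iptables on
a RHEL/CentOS-style system (assuming the classic iptables service; systems
using firewalld need different commands):
  sudo iptables -L -n          # list the active firewall rules
  sudo service iptables stop   # stop the firewall temporarily, for testing only
A safer long-term fix is to open the TCP ports the MPI jobs need rather than
disabling the firewall entirely.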
ssh may be allowed but other random TCP ports may not.
iptables is the typical firewall software that most Linux installations use; it
may have been enabled by default.
I'm a little doubtful that this is your problem, though, because you're
apparently able to *launch* your application, which
Are you running any firewall software?
Sent from my phone. No type good.
Hi,
I am having a problem running mpirun over multiple nodes.
To run a job over two 8-core processors, I generated a hostfile as follows:
yethiraj30 slots=8 max_slots=8
yethiraj31 slots=8 max_slots=8
These two machines are interconnected and I have installed openmpi 1.3.3.
Then if I try
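For reference, a minimal sketch of launching across both nodes with that
hostfile (assuming it is saved as myhostfile and the executable exists at the
same path on both machines):
  mpirun --hostfile myhostfile -np 16 ./a.out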
and the openmpi-1.2.7-pgi??
Hi there,
The thing is I did not write the code myself and am just trying to get it to
work. So, would it help if I change the version of the compiler, or does this
happen with every PGI compiler suite?
On Sun, Nov 28, 2010 at 11:45 PM, Simon Hammond wrote:
Hi,
This isn't usually an error - you get this by using conventional
Fortran exit methods. The Fortran stop means the program hit the exit
statements in the code. I have only had this with PGI.
--
Si Hammond
Research
Hi there,
I have posted before about the problems that I am facing with mpirun. I have
gotten some help, but right now I am stuck with an error message: FORTRAN
STOP when I invoke mpirun. Can someone help, PLEASE!!
I'm using openmpi-1.2.7-pgi and the pgi-7.2 compiler.
On Fri, Jun 11, 2010 at 11:03:03AM +0200, asmae.elbahlo...@mpsa.com wrote:
> Sender: users-boun...@open-mpi.org
>
> Hello,
>
> I'm doing a tutorial on OpenFoam, but when I run in parallel by typing
> "mpirun -np 30 foamProMesh -parallel | tee 2>&1 log/FPM.log"
>
> [1]
I'm afraid I don't know anything about OpenFoam, but it looks like it
deliberately chose to abort due to some error (i.e., it then called MPI_ABORT
to abort).
I don't know what those stack traces mean; you will likely have better luck
asking your question on the OpenFoam support list.
Good
From: Jeff Squyres <jsquy...@cisco.com>
Subject: Re: [OMPI users] Problem running mpirun with ssh on remote nodes
-Daemon did not report back when launched problem
To: "Open MPI Users" <us...@open-mpi.org>
Hello,
I am trying to run a simple hello world program before actually launching some
very heavy load testing over the Xen SMP setup that I have.
I am trying to run this command over four different hosts, Dom0 being the host
where I am launching mpirun and the other three being Xen guests
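For reference, a hedged sketch of what such a launch typically looks like
(dom0, guest1, guest2, and guest3 are placeholder hostnames; the actual
command was not included in the excerpt):
  mpirun -np 4 --host dom0,guest1,guest2,guest3 ./hello_world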
I verified that the preload functionality works on the trunk. It seems
to be broken on the v1.3/v1.4 branches. The version of this code has
changed significantly between the v1.3/v1.4 and the trunk/v1.5
versions. I filed a bug about this so it does not get lost:
Now that I have passwordless ssh set up in both directions, and verified it
working - I still have the same problem.
I'm able to run ssh/scp on both master and client nodes - (at this
point, they are pretty much the same), without being asked for password.
And mpirun works fine if I have the
Though the --preload-binary option was created while building the
checkpoint/restart functionality, it does not depend on checkpoint/restart
functionality in any way (that is just a side effect of the initial
development). The problem you are seeing is a result of the computing
environment setup of
I'm no expert on the preload-binary option - but I would suspect that
is the case given your observations.
That option was created to support checkpoint/restart, not for what
you are attempting to do. Like I said, you -should- be able to use it
for that purpose, but I expect you may hit a
Thank you very much for your help! I believe I do have password-less ssh
set up, at least from master node to client node (desktop -> laptop in
my case). If I type >ssh node1 on my desktop terminal, I am able to get
to the laptop node without being asked for password. And as I mentioned,
if I
It -should- work, but you need password-less ssh setup. See our FAQ
for how to do that, if you are unfamiliar with it.
I'm having a problem getting the mpirun "preload-binary" option to work.
I'm using Ubuntu 8.10 with openmpi 1.3.3, nodes connected with an Ethernet cable.
If I copy the executable to client nodes using scp, then do mpirun,
everything works.
But I really want to avoid the copying, so I tried the
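For reference, a rough sketch of how the option is normally invoked (hedged:
as noted above in the thread, its behavior differs between the v1.3/v1.4
branches and the trunk):
  mpirun -np 4 --hostfile myhostfile --preload-binary ./a.out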
Please see my earlier response. This proposed solution will work, but may be
unstable as it (a) removes all of OMPI's internal variables, some of which
are required; and (b) also removes all the variables that might be needed by
your system. For example, envars directing the use of specific
Could your problem be related to the MCA parameter "contamination" problem,
where the child MPI process inherits MCA environment variables from the parent
process? That problem still exists.
Back in 2007 I was implementing a program that solves two large interrelated
systems of equations (+200.000.000
Thanks,
That's what I wanted to know. And thanks for all the help!
Luke
I see. No, we don't copy your envars and ship them to remote nodes. Simple
reason is that we don't know which ones we can safely move, and which would
cause problems.
However, we do provide a mechanism for you to tell us which envars to move.
Just add:
-x LD_LIBRARY_PATH
to your mpirun cmd line
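For example (a sketch; OMP_NUM_THREADS is only an illustration of a second
variable, and -x can be repeated once per variable to forward):
  mpirun -np 8 -x LD_LIBRARY_PATH -x OMP_NUM_THREADS ./a.out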
My apologies for not being clear. These variables are set in my
environment, they just are not published to the other nodes in the
cluster when the jobs are run through the scheduler. At the moment,
even though I can use mpirun to run jobs locally on the head node
without touching my
Normally, one simply sets the ld_library_path in your environment to
point to the right thing. Alternatively, you could configure OMPI with
--enable-mpirun-prefix-by-default
This tells OMPI to automatically add the prefix you configured the system
with to your ld_library_path and path
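For reference, a minimal sketch of building with that option (the prefix
/opt/openmpi is only an example):
  ./configure --prefix=/opt/openmpi --enable-mpirun-prefix-by-default
  make all install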
Thanks for the quick reply. This leads me to another issue I have
been having with openmpi as it relates to sge. The "tight
integration" works where I do not have to give mpirun a hostfile when
I use the scheduler, but it does not seem to be passing on my
environment variables. Specifically
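As a side note (an assumption on the editor's part, not something stated in
this thread), with SGE one rough workaround is to export the submitting
environment to the job and forward specific variables through mpirun:
  qsub -V myjob.sh
  # inside myjob.sh, forward what you need explicitly:
  mpirun -x LD_LIBRARY_PATH ./a.out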
I'm afraid we have never really supported this kind of nested invocation of
mpirun. If it works with any version of OMPI, it is totally a fluke - it
might work one time, and then fail the next.
The problem is that we pass envars to the launched processes to control
their behavior, and these