Hi,
Thanks for this, guys. I think I might have two MPI implementations
installed, because 'locate mpirun' gives (relevant lines marked with *):
-
/etc/alternatives/mpirun
/etc/alternatives/mpirun.1.gz
*/home/djordje/Build_WRF/LIBRARIES/mpich/bin/mpirun*
/home/djordje/Star
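With two installs present, which `mpirun` actually runs is decided purely by PATH order. A self-contained illustration of the principle (the stub install trees below are invented for the demo; in the real case they would be, e.g., the MPICH tree above and a system-wide Open MPI):

```shell
# Sketch only: the shell runs whichever mpirun appears first on PATH.
demo=$(mktemp -d)
mkdir -p "$demo/mpich/bin" "$demo/openmpi/bin"
printf '#!/bin/sh\necho "this is the MPICH stub"\n' > "$demo/mpich/bin/mpirun"
printf '#!/bin/sh\necho "this is the Open MPI stub"\n' > "$demo/openmpi/bin/mpirun"
chmod +x "$demo"/*/bin/mpirun

PATH="$demo/mpich/bin:$demo/openmpi/bin:$PATH"
command -v mpirun   # resolves to $demo/mpich/bin/mpirun: first match on PATH
mpirun              # prints: this is the MPICH stub

rm -rf "$demo"
```

On Debian-style systems the /etc/alternatives/mpirun entry in the listing above adds one more indirection; `readlink -f /etc/alternatives/mpirun` shows which implementation it currently points at.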
Apologies for stirring up even more confusion by misspelling
"Open MPI" as "OpenMPI".
"OMPI" doesn't help either, because all OpenMP environment
variables and directives start with "OMP".
Maybe associating the names with
"message passing" vs. "threads" would help?
Djordje:
'which mpif90' etc show
Maybe we should rename OpenMP to be something less confusing -- perhaps
something totally unrelated, perhaps even non-sensical. That'll end lots of
confusion!
My vote: OpenMP --> SharkBook
It's got a ring to it, doesn't it? And it sounds fearsome!
On Apr 14, 2014, at 5:04 PM, "Elken, Tom"
That’s OK. Many of us make that mistake, though often as a typo.
One thing that helps is that the correct spelling of Open MPI has a space in
it, but OpenMP does not.
If not aware what OpenMP is, here is a link: http://openmp.org/wp/
What makes it more confusing is that more and more apps offer
Yes, this is a bug. Doh!
Looks like we fixed it for one case, but missed another case. :-(
I've filed https://svn.open-mpi.org/trac/ompi/ticket/4519, and will fix this
shortly.
On Apr 14, 2014, at 4:11 AM, Luis Kornblueh
wrote:
> Dear all,
>
> the attached mympi_test.f90 does not compile
OK guys... Thanks for all this info. Frankly, I didn't know these
differences between OpenMP and OpenMPI. The commands:
which mpirun
which mpif90
which mpicc
give,
/usr/bin/mpirun
/usr/bin/mpif90
/usr/bin/mpicc
respectively.
A tutorial on how to compile WRF (
http://www.mmm.ucar.edu/wrf/OnLineTutor
On 04/08/2014 05:49 PM, Daniel Milroy wrote:
Hello,
The file system in question is indeed Lustre, and mounting with flock
isn’t possible in our environment. I recommended the following changes
to the users’ code:
Hi. I'm the ROMIO guy, though I do rely on the community to help me
keep the
Djordje
Your WRF configure file seems to use mpif90 and mpicc (line 115 &
following).
In addition, it also seems to have DISABLED OpenMP (NO TRAILING "I")
(lines 109-111, where OpenMP stuff is commented out).
So, it looks like to me your intent was to compile with MPI.
Whether it is THIS MPI (
On 04/14/2014 03:02 PM, Jeff Squyres (jsquyres) wrote:
If you didn't use Open MPI, then this is the wrong mailing list for you. :-)
(this is the Open MPI users' support mailing list)
On Apr 14, 2014, at 2:58 PM, Djordje Romanic wrote:
I didn't use OpenMPI.
On Mon, Apr 14, 2014 at 2:37 PM
On 04/14/2014 01:15 PM, Djordje Romanic wrote:
Hi,
I am trying to run WRF-ARW in parallel. This is the configuration of my system:
-
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s):
to get help :)
On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic wrote:
> Yes, but I was hoping to get help. :)
>
>
> On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> If you didn't use Open MPI, then this is the wrong mailing list for you.
>> :-)
>>
>> (t
Yes, but I was hoping to get help. :)
On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres) wrote:
> If you didn't use Open MPI, then this is the wrong mailing list for you.
> :-)
>
> (this is the Open MPI users' support mailing list)
>
>
> On Apr 14, 2014, at 2:58 PM, Djordje Romanic wrote:
>
If you didn't use Open MPI, then this is the wrong mailing list for you. :-)
(this is the Open MPI users' support mailing list)
On Apr 14, 2014, at 2:58 PM, Djordje Romanic wrote:
> I didn't use OpenMPI.
>
>
> On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres)
> wrote:
> This can a
I didn't use OpenMPI.
On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres) wrote:
> This can also happen when you compile your application with one MPI
> implementation (e.g., Open MPI), but then mistakenly use the "mpirun" (or
> "mpiexec") from a different MPI implementation (e.g., MPICH).
This can also happen when you compile your application with one MPI
implementation (e.g., Open MPI), but then mistakenly use the "mpirun" (or
"mpiexec") from a different MPI implementation (e.g., MPICH).
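One way to catch this mix-up early is to check that the compiler wrapper and the launcher resolve into the same install prefix. A minimal sketch (assumes the standard wrapper names; adapt to your setup):

```shell
# Sketch: if mpicc and mpirun live in different directories, you are very
# likely mixing two MPI implementations.
wrap_dir=$(dirname "$(command -v mpicc)")
run_dir=$(dirname "$(command -v mpirun)")
if [ "$wrap_dir" = "$run_dir" ]; then
    echo "OK: mpicc and mpirun both come from $wrap_dir"
else
    echo "MISMATCH: mpicc in $wrap_dir but mpirun in $run_dir" >&2
fi
```

`mpirun --version` and `mpicc --version` also identify the implementation directly, as do the wrapper introspection flags (`mpicc -show` in MPICH, `mpicc --showme` in Open MPI).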
On Apr 14, 2014, at 2:32 PM, Djordje Romanic wrote:
> I compiled it with: x86_64 Linux, g
I compiled it with: x86_64 Linux, gfortran compiler with gcc (dmpar).
dmpar - distributed memory option.
Attached is the self-generated configuration file. The architecture
specification settings start at line 107. I didn't use Open MPI (shared
memory option).
On Mon, Apr 14, 2014 at 1:23 PM,
On Apr 14, 2014, at 12:15 PM, Djordje Romanic wrote:
> When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
> -
> starting wrf task 0 of 1
> starting wrf task 0 of 1
> starting wrf task
Hi,
I am trying to run WRF-ARW in parallel. This is the configuration of my system:
-
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) p
I'm still poking around, but would appreciate a little more info to ensure I'm
looking in the right places. How many nodes are you running your application
across for your verification suite? I suspect it isn't just one :-)
On Apr 10, 2014, at 9:19 PM, Ralph Castain wrote:
> I shaved about 30
On Apr 13, 2014, at 11:42 AM, Allan Wu wrote:
> Thanks, Ralph!
>
> Adding the MCA parameter 'plm_rsh_no_tree_spawn' solves the problem.
>
> If I understand correctly, the first layer of daemons spans three nodes, and
> when there are more than three nodes a second layer of daemons is spawned.
>
I'm confused - how are you building OMPI?? You normally have to do:
1. ./configure --prefix=...   (this is where you would add --enable-debug)
2. make clean all install
You then run your mpirun command as you've done.
On Apr 14, 2014, at 12:52 AM, Lubrano Francesco
wrote:
> I can't set --en
On 13.04.2014 at 09:58, Kamal wrote:
> I have a code which uses both mpicc and mpif90.
>
> The code is meant to read a file from the directory. It works properly on my
> desktop (Ubuntu), but when I run the same code on my MacBook I get fopen
> failure errno: 2 (file does not exist)
Without more
Dear all,
the attached mympi_test.f90 does not compile with the Intel compiler and Open MPI
Version 1.7.4; apparently it also does not compile with 1.8.0.
The Intel compiler version is 14.0.2.
tmp/ifortjKG1cP.o: In function `MAIN__':
mympi_test.f90:(.text+0x90): undefined reference to `mpi_sizeof0di4_'
Th
Hi,
I have a code which uses both mpicc and mpif90.
The code is meant to read a file from the directory. It works properly on my
desktop (Ubuntu), but when I run the same code on my MacBook I get fopen
failure errno: 2 (file does not exist)
Could someone please tell me what might be the problem.
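An errno of 2 (ENOENT) with a relative path usually means the process's current working directory is not where the file lives, which commonly differs between running a program by hand and launching it under an MPI starter. A tiny non-MPI sketch of the principle (directories are temporary, invented for the demo):

```shell
# Sketch: relative paths are resolved against the current working directory.
data=$(mktemp -d)
elsewhere=$(mktemp -d)
echo hello > "$data/input.dat"

# Fails: cwd is a different directory, so the relative name is not found.
( cd "$elsewhere" && cat input.dat 2>/dev/null ) \
    || echo "relative path fails when cwd is not $data"

cat "$data/input.dat"   # absolute path works from any cwd

rm -rf "$data" "$elsewhere"
```

With MPI, either pass an absolute path to the program or set the working directory at launch; both Open MPI's mpirun and MPICH's mpiexec accept a -wdir option for this.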
I can't set --enable-debug (command not found; I see only --enable-recovery in
the help output), but the other commands work properly. The output is:
francesco@linux-hldu:~> mpirun -mca plm_base_verbose 10 --debug-daemons --host
Frank@158.110.39.110 hostname
[linux-hldu.site:02234] mca: base: com
Hello Jeff,
I will pass your recommendation to the users and apprise you when I receive a
response.
Thank you,
Dan Milroy
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
(jsquyres)
Sent: Friday, April 11, 2014 6:45 AM
To: Open MPI Users
Su