I tried the 1.8 code base, and I'm afraid it doesn't work there either.
After digging into the code, I can see why - I'm afraid that joining
multiple singletons isn't going to work until you get to the 1.9 series.
I'll have to check to see if our current master can support it (I believe
it
I confess I am sorely puzzled. I replaced the Info key with MPI_INFO_NULL,
but still had to pass a bogus argument to master since you still have the
Info_set code in there - otherwise, info_set segfaults due to a NULL
argv[1]. Doing that (and replacing "hostname" with an MPI example code)
makes
Yes, I did. I replaced the info argument of MPI_Comm_spawn with
MPI_INFO_NULL.
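(For context, a spawn call of that form - MPI_INFO_NULL and no placement keys -
looks roughly like the minimal sketch below; the worker binary name and process
count are only illustrative, not taken from Evan's code.)

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);

        /* No Info keys: the runtime decides where the 4 children are placed. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

        /* ... communicate with the children over intercomm ... */

        MPI_Finalize();
        return 0;
    }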
On Tue, Feb 3, 2015 at 5:54 PM, Ralph Castain wrote:
> When running your comm_spawn code, did you remove the Info key code? You
> wouldn't need to provide a hostfile or hosts any more, which is
When running your comm_spawn code, did you remove the Info key code? You
wouldn't need to provide a hostfile or hosts any more, which is why it
should resolve that problem.
I agree that providing either hostfile or host as an Info key will cause
the program to segfault - I'm working on that issue.
Appreciate your patience! I'm somewhat limited this week by being on travel
to our HQ, so I don't have access to my usual test cluster. I'll be better
situated to complete the implementation once I get home.
For now, some quick thoughts:
1. stdout/stderr: yes, I just need to "register"
Hmmm, no - I wasn't seeing those warnings/errors, but I only ran one
submit job. I'll investigate.
On Tue, Feb 3, 2015 at 11:38 AM, Mark Santcroos
wrote:
> Hi Ralph,
>
> > On 03 Feb 2015, at 16:28 , Ralph Castain wrote:
> > I think I fixed some
Hi Ralph,
Besides the items in the other mail, I have three more items that would need
resolving at some point.
1. STDOUT/STDERR currently go to the orte-dvm console.
I'm sure this is not a fundamental limitation.
Even if getting the information to the orte-submit instance would be
Setting these environment variables did indeed change the way mpirun maps
things, and I didn't have to specify a hostfile. However, setting these
for my MPI_Comm_spawn code still resulted in the same segmentation fault.
Evan
On Tue, Feb 3, 2015 at 10:09 AM, Ralph Castain
Yes, you still need to tell the configure script about all your MPI env
variables (I guess you haven't set them all).
MPICC=mpiicc MPIFC=mpifort
or the equivalent. Probably some more are needed.
Please ask on the abinit forum about how to set the MPI executables
correctly. But the
Jeff and Nick,
Thanks very much for your help. In fact, the first thing I did was to join the
Abinit mailing list and post in the community; however, I only got one reply and
nothing since yesterday morning. I know it is a short period of time and people
are busy doing their work, but I don't
Hi Ralph,
> On 03 Feb 2015, at 16:28 , Ralph Castain wrote:
> I think I fixed some of the handshake issues - please give it another try.
> You should see orte-submit properly shutdown upon completion,
Indeed, it works on my laptop now! Great!
It feels quite fast too, for sort
I'll hit you off list with my Abinit OpenMPI build notes,
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985
> On Feb 3, 2015, at 2:26 PM, Nick Papior Andersen wrote:
>
> I also concur with Jeff about asking
I also concur with Jeff about asking software specific questions at the
software-site, abinit already has a pretty active forum:
http://forum.abinit.org/
So any questions can also be directed there.
2015-02-03 19:20 GMT+00:00 Nick Papior Andersen :
>
>
> 2015-02-03 19:12
2015-02-03 19:12 GMT+00:00 Elio Physics :
> Hello,
>
> thanks for your help. I have tried:
>
> ./configure --with-mpi-prefix=/usr FC=ifort CC=icc
>
> But I still get the same error. Mind you, if I compile it serially, that
> is ./configure FC=ifort CC=icc
>
> it works
Without knowing anything about the application that you are trying to build,
it's really hard to say. You should probably be asking the support mailing
lists for that specific application -- they would be better able to support you.
This list is for Open MPI, which is likely one of the MPI
T "https://bugs.launchpad.net/abinit/;
| #define PACKAGE "abinit"
| #define VERSION "7.10.2"
| #define ABINIT_VERSION "7.10.2"
| #define ABINIT_VERSION_MAJOR "7"
| #define ABINIT_VERSION_MINOR "10"
| #define ABINIT_VERSION_MICRO "2"
| #define
If you add the following to your environment, you should run on multiple
nodes:
OMPI_MCA_rmaps_base_mapping_policy=node
OMPI_MCA_orte_default_hostfile=
The first tells OMPI to map-by node. The second passes in your default
hostfile so you don't need to specify it as an Info key.
HTH
Ralph
On
Hi Ralph,
Good to know you've reproduced it. I was experiencing this using both the
hostfile and host key. A simple comm_spawn was working for me as well, but
it was only launching locally, and I'm pretty sure each node only has 4
slots given past behavior (the mpirun -np 8 example I gave in my
First, try to correct your compilation by using the Intel C compiler AND
the Intel Fortran compiler. You should not mix compilers.
CC=icc FC=ifort
Else the config.log is going to be necessary to debug it further.
PS: You could also try and convince your cluster administrator to provide a
more recent
Classification: UNCLASSIFIED
Caveats: NONE
Make sure that you are also using icc and icpc in addition to ifort. GCC-built
code is not necessarily compatible with Intel-built code, as GCC uses some
custom symbols here and there.
Andrew Burns
Lockheed Martin
Software Engineer
410-306-0409
ARL DSRC
Dear all,
I am trying to configure a code with MPI (for parallel processing) to do
some calculations, so basically I type:
./configure
and I get:
configure: error: Fortran compiler does not provide iso_c_binding module. Use a
more recent version or a different compiler
which means that my
I think I fixed some of the handshake issues - please give it another try.
You should see orte-submit properly shutdown upon completion, and orte-dvm
properly shutdown when sent the terminate cmd. I was able to cleanly run
MPI jobs on my laptop.
On Mon, Feb 2, 2015 at 10:44 PM, Mark Santcroos
Classification: UNCLASSIFIED
Caveats: NONE
If I could venture a guess, it sounds like you are trying to merge two separate
programs into one executable and run them in parallel
via MPI.
The problem sounds like an issue where your program starts in parallel but then
changes back to serial
I'm afraid I don't quite understand what you are saying, so let's see if I
can clarify. You have two fortran MPI programs. You start one using
"mpiexec". You then start the other one as a singleton - i.e., you just run
"myapp" without using mpiexec. The two apps are attempting to execute an
Dear All,
Take my greetings. I am new to MPI usage. I have problems with a parallel run
when two Fortran MPI programs are merged into one executable. If these two
are separate, then they run in parallel.
One program has used SPMD and the other one has used the MPICH header directly.
The other issue is
On 03 Feb 2015, at 0:20 , Ralph Castain wrote:
> Okay, thanks - I'll get on it tonight. Looks like a fairly simple bug, so
> hopefully I'll have it ironed out tonight.
Sorry, I was not completely accurate. Let me be more specific:
* The orte-submit does not return though, so
That's pretty ancient - could you try the nightly 1.8 tarball?
On Mon, Feb 2, 2015 at 5:58 PM, haozi wrote:
> mpiexec (OpenRTE) 1.4.3
>
> At 2015-02-02 12:54:11, "Ralph Castain" wrote:
>
> Which OMPI version?
>
> On Jan 25, 2015, at 5:41 AM,
BTW: I've confirmed this only happens if you provide the hostfile info key.
A simple comm_spawn without the hostfile key works just fine.
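(For anyone trying to reproduce: the failing variant differs from the working
one only in the Info argument, roughly as below; the hostfile name and worker
binary are placeholders.)

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        /* Supplying a "hostfile" (or "host") key is what triggers the crash
           being discussed; the same call with MPI_INFO_NULL works fine. */
        MPI_Info_set(info, "hostfile", "my_hostfile");

        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info,
                       0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }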
On Sun, Feb 1, 2015 at 8:53 PM, Ralph Castain wrote:
> Well, I can reproduce it - but I won’t have time to address it until I
> return