Gus is correct: the -host option needs to be in the appfile.
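
For example, something like this should work (untested here, just
applying Gus's suggestion to the hosts and path from your post):

head-node % cat appfile
-host node1 -np 1 /home/user461/OPENMPI/mpiinit
-host node1 -np 1 /home/user461/OPENMPI/mpiinit
-host node2 -np 1 /home/user461/OPENMPI/mpiinit
-host node2 -np 1 /home/user461/OPENMPI/mpiinit

head-node % /usr/lib64/openmpi/1.4-gcc/bin/mpirun --app appfile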

On Feb 9, 2011, at 3:32 PM, Gus Correa wrote:

> Sindhi, Waris PW wrote:
>> Hi,
>>    I am having trouble using the --app option with Open MPI's mpirun
>> command. The MPI processes launched with the --app option all get
>> launched on the Linux node that the mpirun command is executed on.
>> The same MPI executable works when it is specified on the command line
>> with the -np <num-procs> option.
>> Please let me know what I am doing wrong?
>> Bad launch:
>> head-node % /usr/lib64/openmpi/1.4-gcc/bin/mpirun --host node1,node1,node2,node2 --app appfile
>> head-node:Hello world from 0
>> head-node:Hello world from 3
>> head-node:Hello world from 1
>> head-node:Hello world from 2
>> Good launch:
>> head-node % /usr/lib64/openmpi/1.4-gcc/bin/mpirun --host node1,node1,node2,node2 -np 4 mpiinit
>> node1:Hello world from 0
>> node2:Hello world from 2
>> node2:Hello world from 3
>> node1:Hello world from 1
>> head-node % cat appfile
>> -np 1 /home/user461/OPENMPI/mpiinit
>> -np 1 /home/user461/OPENMPI/mpiinit
>> -np 1 /home/user461/OPENMPI/mpiinit
>> -np 1 /home/user461/OPENMPI/mpiinit
>> head-node % cat mpiinit.c
>> #include <mpi.h>
>> #include <stdio.h>
>>
>> int main(int argc, char** argv)
>> {
>>     int rc, me, plen;
>>     char pname[MPI_MAX_PROCESSOR_NAME];
>>
>>     MPI_Init(&argc, &argv);
>>     rc = MPI_Comm_rank(MPI_COMM_WORLD, &me);
>>     if (rc != MPI_SUCCESS)
>>         return rc;
>>     MPI_Get_processor_name(pname, &plen);
>>     printf("%s:Hello world from %d\n", pname, me);
>>     MPI_Finalize();
>>     return 0;
>> }
>> head-node % /usr/lib64/openmpi/1.4-gcc/bin/ompi_info
>>                 Package: Open MPI mockbu...@x86-004.build.bos.redhat.com Distribution
>>                Open MPI: 1.4
>>   Open MPI SVN revision: r22285
>>   Open MPI release date: Dec 08, 2009
>>                Open RTE: 1.4
>>   Open RTE SVN revision: r22285
>>   Open RTE release date: Dec 08, 2009
>>                    OPAL: 1.4
>>       OPAL SVN revision: r22285
>>       OPAL release date: Dec 08, 2009
>>            Ident string: 1.4
>>                  Prefix: /usr/lib64/openmpi/1.4-gcc
>> Configured architecture: x86_64-unknown-linux-gnu
>>          Configure host: x86-004.build.bos.redhat.com
>>           Configured by: mockbuild
>>           Configured on: Tue Feb 23 12:39:24 EST 2010
>>          Configure host: x86-004.build.bos.redhat.com
>>                Built by: mockbuild
>>                Built on: Tue Feb 23 12:41:54 EST 2010
>>              Built host: x86-004.build.bos.redhat.com
>>              C bindings: yes
>>            C++ bindings: yes
>>      Fortran77 bindings: yes (all)
>>      Fortran90 bindings: yes
>> Fortran90 bindings size: small
>>              C compiler: gcc
>>     C compiler absolute: /usr/bin/gcc
>>            C++ compiler: g++
>>   C++ compiler absolute: /usr/bin/g++
>>      Fortran77 compiler: gfortran
>>  Fortran77 compiler abs: /usr/bin/gfortran
>>      Fortran90 compiler: gfortran
>>  Fortran90 compiler abs: /usr/bin/gfortran
>>             C profiling: yes
>>           C++ profiling: yes
>>     Fortran77 profiling: yes
>>     Fortran90 profiling: yes
>>          C++ exceptions: no
>>          Thread support: posix (mpi: no, progress: no)
>>           Sparse Groups: no
>>  Internal debug support: no
>>     MPI parameter check: runtime
>> Memory profiling support: no
>> Memory debugging support: no
>>         libltdl support: yes
>>   Heterogeneous support: no
>> mpirun default --prefix: yes
>>         MPI I/O support: yes
>>       MPI_WTIME support: gettimeofday
>> Symbol visibility support: yes
>>   FT Checkpoint support: no  (checkpoint thread: no)
>>           MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4)
>>              MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4)
>>           MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4)
>>               MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4)
>>               MCA carto: file (MCA v2.0, API v2.0, Component v1.4)
>>           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4)
>>           MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4)
>>               MCA timer: linux (MCA v2.0, API v2.0, Component v1.4)
>>         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4)
>>         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4)
>>              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4)
>>           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4)
>>           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: basic (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: inter (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: self (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: sm (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: sync (MCA v2.0, API v2.0, Component v1.4)
>>                MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4)
>>                  MCA io: romio (MCA v2.0, API v2.0, Component v1.4)
>>               MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4)
>>               MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4)
>>               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA pml: cm (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA pml: csum (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA pml: v (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4)
>>              MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA btl: self (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4)
>>                MCA topo: unity (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4)
>>                MCA odls: default (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4)
>>               MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4)
>>               MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4)
>>               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4)
>>               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.4)
>>              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4)
>>              MCA routed: direct (MCA v2.0, API v2.0, Component v1.4)
>>              MCA routed: linear (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4)
>>               MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4)
>>              MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ess: env (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4)
>>                 MCA ess: tool (MCA v2.0, API v2.0, Component v1.4)
>>             MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4)
>>             MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4)
>> Sincerely,
>> Waris Sindhi
>> High Performance Computing, TechApps
>> Pratt & Whitney, UTC
>> (860)-565-8486
> 
> Hi Waris
> 
> I think each line of the appfile needs to include the host part (and
> anything else you would otherwise pass to mpiexec):
> 
> -host node1 -np 1 /home/user461/OPENMPI/mpiinit
> -host node2 -np 1 /home/user461/OPENMPI/mpiinit
> ...
> 
> Then the mpiexec command just names the appfile:
> 
> mpiexec --app appfile
> 
> It works for me here (with the caveat that I am
> running under Torque/PBS).
> 
> 
> Also, 'man mpiexec' says:
> 
>       --app <appfile>
>              Provide an appfile, ignoring all other command line options.
> 
> So I suppose this means that all the information passed to mpiexec
> must be inside the appfile; anything else will be ignored.
> This may explain why your 'bad launch' ran on the headnode,
> which is probably the default machine.
> (It would be great if the OpenMPI folks added a few examples to the
> man page, especially for people who run MIMD programs.  :) )
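> 
> For instance, a MIMD appfile might look something like this
> (hypothetical executable names, just to illustrate):
> 
> -host node1 -np 1 /home/user/bin/master
> -host node2 -np 2 /home/user/bin/worker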
> 
> But, you know, these are only my guesses, guesses, guesses ...
> 
> I hope this helps,
> Gus Correa
