Just to make sure I understand -- you're running the hello world app you pasted 
in an earlier email with just 1 MPI process on the local machine, and you're 
seeing hangs.  Is that right?

(there was a reference in a prior email to 2 different architectures -- that's 
why I'm clarifying)
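
Also, since there was a suspicion earlier in the thread that your PATH / LD_LIBRARY_PATH might be mixing two Open MPI installs, one quick sanity check is a tiny program that reports what it was compiled and linked against. This is just a sketch from me, not part of your reproducer: it assumes the mpi.h that mpicc finds is Open MPI's (which defines the OMPI_MAJOR_VERSION / OMPI_MINOR_VERSION / OMPI_RELEASE_VERSION macros), and it uses the standard MPI_Get_version() call, which reports the MPI standard level supported by the library you actually load at run time (not the Open MPI release number).

#include <stdio.h>
#include <mpi.h>

int
main(int argc, char *argv[])
{
  int version, subversion;

  MPI_Init(&argc, &argv);

  /* Reported by the MPI library that a.out actually loads at run time:
     the level of the MPI standard it supports (e.g. 2.1) */
  MPI_Get_version(&version, &subversion);
  printf("MPI standard supported: %d.%d\n", version, subversion);

#if defined(OMPI_MAJOR_VERSION)
  /* These macros come from the mpi.h that mpicc found at compile time,
     so they tell you which Open MPI headers a.out was built against */
  printf("Compiled against Open MPI %d.%d.%d\n",
         OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#endif

  MPI_Finalize();
  return 0;
}

If the compile-time version that prints doesn't match what your mpiexec reports, then mpicc and mpiexec are coming from different installs, which would be consistent with the hang.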


On May 24, 2010, at 2:53 AM, Yves Caniou wrote:

> I rechecked, but didn't see anything wrong.
> Here is how I set my environment. Tkx.
> 
> $>mpicc --v
> Using built-in specs.
> COLLECT_GCC=//home/p10015/gcc/bin/x86_64-unknown-linux-gnu-gcc-4.5.0
> COLLECT_LTO_WRAPPER=/hsfs/home4/p10015/gcc/bin/../libexec/gcc/x86_64-unknown-linux-gnu/4.5.0/lto-wrapper
> Target: x86_64-unknown-linux-gnu
> Configured with: ../gcc-4.5.0/configure --prefix=/home/p10015/gcc --with-gmp=/home/p10015/gmp --with-mpfr=/home/p10015/mpfr --with-mpc=/home/p10015/mpc --enable-lto --with-ppl=/home/p10015/ppl --with-libelf=/home/p10015/libelf --with-cloog=/home/p10015/cloog-ppl --enable-languages=c,c++,lto --disable-libada --enable-stage1-languages=c,c++,lto
> Thread model: posix
> gcc version 4.5.0 (GCC)
> 
> $>mpiexec
> mpiexec (OpenRTE) 1.4.2
> [cut]
> 
> $>echo $LD_LIBRARY_PATH
> /home/p10015/gcc/lib64/:/home/p10015/openmpi/lib/:/home/p10015/omniORB/lib/:/home/p10015/omniORB/lib64/:/home/p10015/lib/:/home/p10015/lib64/::/usr/lib/:/usr/lib/xen/:/lib/:
> 
> $>echo $PATH
> .:/home/p10015/gcc/bin/:/home/p10015/openmpi/bin/:/home/p10015/omniORB/bin/:/home/p10015/git/bin/:/home/p10015/Bin/:/home/p10015/bin/:..:/usr/local/bin/:/opt/ofort90/bin:/opt/optc/bin:/opt/optscxx/bin:/opt/hitachi/nqs/bin:/opt/torque/bin:/opt/mpich-mx/bin:/usr/java/default/bin:/bin:/usr/bin:/sbin/:/usr/sbin/
> 
> $>echo $CPLUS_INCLUDE_PATH
> /home/p10015/gcc/include/c++/4.5.0/:/home/p10015/openmpi/include/:/home/p10015/omniORB/include/:
> 
> $>echo $C_INCLUDE_PATH
> /home/p10015/gcc/lib/gcc/x86_64-unknown-linux-gnu/4.5.0/include-fixed/:/home/p10015/gcc/lib/gcc/x86_64-unknown-linux-gnu/4.5.0/include/:/home/p10015/openmpi/include/:/home/p10015/omniORB/include/:
> 
> 
> On Monday 24 May 2010 at 08:35:17, Ralph Castain wrote:
> > It looks to me like you are getting version confusion: your PATH and
> > LD_LIBRARY_PATH aren't pointing to the place where you installed 1.4.1,
> > so you are either picking up someone else's mpiexec or a 1.2.x install.
> > It could also be that mpicc isn't the one from 1.4.1.
> >
> > Check to ensure that the mpiexec and mpicc you are using are from 1.4.1,
> > and that your environment is pointing to the right place.
> >
> > On May 24, 2010, at 12:15 AM, Yves Caniou wrote:
> > > Dear All,
> > > (follows a previous mail)
> > >
> > > I don't understand the strange behavior of this small code: sometimes it
> > > ends, sometimes it doesn't. The flag returned by MPI_Finalized is 1 (for
> > > each process if n > 1), but the program doesn't exit and I am forced to
> > > use Ctrl-C.
> > >
> > > I compiled it with "mpicc --std=c99" (gcc 4.5, on a Quad-Core AMD
> > > Opteron(tm) Processor 8356) and ran it with "mpiexec -n 1 a.out" or
> > > "mpiexec -n 2 a.out".
> > > "ps aux" shows that the program is in the Sl+ state.
> > >
> > > Sometimes I can also see a line like this:
> > > p10015    6892  0.1  0.0  43376  1828 ?        Ssl  14:50   0:00 orted
> > > --hnp --set-sid --report-uri 8 --singleton-died-pipe 9
> > >
> > > Is this a bug? Am I doing something wrong?
> > > If you have any tips...
> > > Thank you.
> > >
> > > ---------
> > > #include <stdio.h>
> > > #include <mpi.h>
> > >
> > > int
> > > main(int argc, char *argv[])
> > > {
> > >  int my_num;
> > >  int flag;
> > >
> > >  MPI_Init(&argc, &argv);
> > >
> > >  MPI_Comm_rank(MPI_COMM_WORLD, &my_num);
> > >  printf("%d calls MPI_Finalize()\n\n\n", my_num);
> > >
> > >  MPI_Finalize();
> > >
> > >  /* flag should be 1 here, since MPI_Finalize() has already returned */
> > >  MPI_Finalized(&flag);
> > >  printf("MPI finalized: %d\n", flag);
> > >  return 0;
> > > }
> > > -------
> > >
> 
> 
> 
> --
> Yves Caniou
> Associate Professor at Université Lyon 1,
> Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
> Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
>   * in Information Technology Center, The University of Tokyo,
>     2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8658, Japan
>     tel: +81-3-5841-0540
>   * in National Institute of Informatics
>     2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
>     tel: +81-3-4212-2412
> http://graal.ens-lyon.fr/~ycaniou/
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

