Michael,
The OSCAR version is 4.2. The OS on the client nodes is Red Hat, installed from the client image file generated during the OSCAR installation. The cluster testing step at the end of the installation passed, except for the Ganglia part. Thank you very much for your help. Michelle
Here is the code for Hello.cc.
*******************************************************************************************
#include <iostream>
// modified to reference the master mpi.h file, to meet the MPI standard spec.
#include "mpi.h"

int
main(int argc, char *argv[])
{
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();
    std::cout << "Hello World! I am " << rank << " of " << size << std::endl;
    MPI::Finalize();
    return 0;
}
*****************************************************************************************
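A common cause of every rank printing "0 of 1" is that the binary was compiled against one MPI implementation but launched with a different implementation's mpirun; each process then initializes as a standalone singleton instead of joining the job. A minimal sketch of ruling that out (the /opt/lam-7.0.6 path is taken from the `which mpicc` output quoted below; the recompile/run lines are left commented since they need the cluster):

```shell
# Put LAM's own bin directory first so its wrappers and mpirun are the
# ones found on the PATH (path assumed from 'which mpicc' in the thread).
LAM_BIN=/opt/lam-7.0.6/bin
export PATH="$LAM_BIN:$PATH"
echo "using mpiCC from: $LAM_BIN/mpiCC"

# Then recompile and relaunch with the same LAM installation:
# $LAM_BIN/mpiCC Hello.cc -o hello++   # LAM's C++ compiler wrapper
# $LAM_BIN/mpirun N ./hello++          # N = one copy per lambooted node
```

If `which mpiCC` and `which mpirun` resolve to different trees (e.g. a system MPICH and LAM), that mismatch alone would reproduce the symptom shown below.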
On 4/3/06, Michael Edwards <[EMAIL PROTECTED]> wrote:
What version of OSCAR are you using, and on what platform?
Also, could you send us a copy of hello++.cpp? It looks like there are
some errors there. And did all the OSCAR tests pass?
LAM appears to be working correctly, on the surface anyway.
On 4/3/06, Michelle Chu <[EMAIL PROTECTED]> wrote:
> Hello, there,
>
> When I mpirun a simple hello MPI program on all eight of my nodes as shown
> below, I get a sequence of "Hello World! I am 0 of 1" instead of "0 of 8,
> 1 of 8, 2 of 8, ...". There is also a problem with MPI_INIT. Thank you for your help.
>
> which mpicc
> /opt/lam-7.0.6/bin/mpicc
> lamboot -v my_hostfile
> my_hostfile is:
> **************************************
> athena.cs.xxx.edu
> oscarnode1.cs.xxx.edu
> oscarnode2.cs.xxx.edu
> oscarnode3.cs.xxx.edu
> oscarnode4.cs.xxx.edu
> oscarnode5.cs.xxx.edu
> oscarnode6.cs.xxx.edu
> oscarnode7.cs.xxx.edu
> oscarnode8.cs.xxx.edu
>
> *****************************************************************************
> LAM 7.0.6/MPI 2 C++/ROMIO - Indiana University
>
> n-1<16365> ssi:boot:base:linear: booting n0 (athena.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n1 (oscarnode1.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n2 (oscarnode2.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n3 (oscarnode3.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n4 (oscarnode4.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n5 (oscarnode5.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n6 (oscarnode6.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n7 (oscarnode7.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: booting n8 (oscarnode8.cs.xxx.edu)
> n-1<16365> ssi:boot:base:linear: finished
>
> mpirun N hello++
> *****************************************************************
> Hello World! I am 0 of 1
> Hello World! I am 0 of 1
> Hello World! I am 0 of 1
> Hello World! I am 0 of 1
> Hello World! I am 0 of 1
> Hello World! I am 0 of 1
> -----------------------------------------------------------------------------
> It seems that [at least] one of the processes that was started with
> mpirun did not invoke MPI_INIT before quitting (it is possible that
> more than one process did not invoke MPI_INIT -- mpirun was only
> notified of the first one, which was on node n0).
>
> mpirun can *only* be used with MPI programs (i.e., programs that
> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> to run non-MPI programs over the lambooted nodes.
> -----------------------------------------------------------------------------
> Hello World! I am 0 of 1
> ****************************************************************************************************
>
