Hi Ankush
Ankush Kaul wrote:
I am not able to check if the NFS export/mount of /tmp is working.
When I give the command *ssh 192.168.45.65 192.168.67.18* I get the
error: bash: 192.168.67.18: command not found
The ssh command syntax above is wrong.
Use only one IP address, which should be your remote machine's IP.
Assuming you are logged in to 192.168.67.18 (is this the master?),
and want to ssh to 192.168.45.65 (is this the slave?),
and run the command 'my_command' there, do:
ssh 192.168.45.65 'my_command'
If you already set up the passwordless ssh connection,
this should work.
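To check it, you can run a simple command on the slave, for instance:

ssh 192.168.45.65 'hostname'

It should print the slave node's hostname, without asking for a password.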
Let me explain what I understood using an example.
First, I make a folder '/work directory' on my master node.
Yes ...
... but don't use spaces in Linux/Unix names! Never!
It is either "/work"
or "/work_directory".
Using "/work directory" with a blank space in-between
is to ask for real trouble!
This is OK in Windows, but raises the hell on Linux/Unix.
In Linux/Unix blank space is a separator for everything,
so it will interpret only the first chunk of your directory name,
and think that what comes after the blank is another directory name,
or a command option, or whatever else.
You can also create subdirectories there to put your own
programs in.
Or maybe make one subdirectory
for each user, and change the ownership of each subdirectory
to the corresponding user.
As root, on the master node, do:
cd /work
whoami (this will give you your own user-name)
mkdir user-name
chown user-name:user-name user-name (pay attention to the : and blanks!)
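For example, if whoami prints 'ankush' (just an illustration,
use whatever user name whoami actually gives you), you would do:

cd /work
mkdir ankush
chown ankush:ankush ankush

After that the user 'ankush' owns /work/ankush and can write
there without being root.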
Then I mount this directory on a folder named '/work directory/mnt' on
the slave node.
Is this correct?
No.
The easy thing to do is to use the same name for the mountpoint
as the original directory: say /work, if you called
it /work on the master node.
Again, don't use white space on Linux/Unix names!
Create a mountpoint directory called /work on the slave node:
mkdir /work
Don't populate the slave node /work directory,
as it is just a mountpoint.
Leave it empty.
Then use it to mount the actual /work directory that you
want to export from the master node.
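In case it helps, here is a minimal sketch of the NFS steps,
assuming (as in your earlier message) that the master is
192.168.67.18, the slave is 192.168.45.65, and the NFS services
are already running; the export options below are just a common
starting point, not the only possible choice.

On the master node, add this line to /etc/exports:

/work 192.168.45.65(rw,sync,no_subtree_check)

then re-export:

exportfs -ra

On the slave node, create the mountpoint and mount it:

mkdir /work
mount -t nfs 192.168.67.18:/work /work

Afterwards, 'ls /work' on the slave should show the same files
as /work on the master.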
Also, how and where (is it on the master node?) do I give the list of
hosts?
On the master node, in the mpirun command line.
As I said, do "/full/path/to/openmpi/bin/mpirun --help" to get
a lot of information about the mpirun command options.
And by hosts you mean the compute nodes?
By hosts I mean whatever computers you want to run your MPI program on.
It can be the master only, the slave only, or both.
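Just as an illustration, assuming your nodes resolve by the
hostnames 'master' and 'slave' (hypothetical names; IP addresses
work too) and your program is /work/pi:

mpirun -host master,slave -np 2 /work/pi

or, with a hostfile, say /work/myhosts, containing:

master slots=1
slave slots=1

run:

mpirun -hostfile /work/myhosts -np 2 /work/pi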
The (excellent) OpenMPI FAQ may also help you:
http://www.open-mpi.org/faq/
Many of your questions may have been answered there already.
I encourage you to read them, particularly the General Information,
Building, and Running Jobs ones.
Please bear with me, as this is the first time I am doing a project on Linux
clustering.
Welcome, and good luck!
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
On Mon, Apr 6, 2009 at 9:27 PM, Gus Correa
<g...@ldeo.columbia.edu> wrote:
Hi Ankush
If I remember right,
mpirun will put you in your home directory, not in /tmp,
when it starts your ssh session.
To run in /tmp (or in /mnt/nfs)
you may need to use the "-path" option.
Likewise, you may want to give mpirun a list of hosts (-host option)
or a hostfile (-hostfile option), to specify where you want the
program to run.
Do
"/full/path/to/openmpi/bin/mpirun -help"
for details.
Make sure your NFS export/mount of /tmp is working,
say, by doing:
ssh slave_node 'hostname; ls /tmp; ls /mnt/nfs'
or similar, and see if your program "pi" is really there (and where).
Actually, it may be confusing to export /tmp, as it is part
of the basic Linux directory tree,
which is the reason why you mounted it on /mnt/nfs.
You may want to choose to export/mount
a directory that is not so generic as /tmp,
so that you can use a consistent name on both computers.
For instance, you can create a /my_export or /work directory
(or whatever name you prefer) on the master node,
export it to the slave node, mount it on the slave node
with the same name/mountpoint, and use it for your MPI work.
I hope this helps.
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
Ankush Kaul wrote:
Thank you sir,
One more thing I am confused about: suppose I have to run a 'pi'
program using Open MPI, where do I place the program?
Currently I have placed it in the /tmp folder on the master node.
This /tmp folder is mounted on /mnt/nfs of the compute node.
I run the program from the /tmp folder on the master node; is this
correct?
I am a newbie and really need some help. Thanks in advance.
On Mon, Apr 6, 2009 at 8:43 PM, John Hearns
<hear...@googlemail.com> wrote:
2009/4/6 Ankush Kaul <ankush.rk...@gmail.com>:
> Also how do I come to know that the program is using resources
> of both the nodes?
Log into the second node before you start the program.
Run 'top'
Seriously - top is a very, very useful utility.
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users