Re: [gmx-users] very low speed in simulation !( need for help)

2010-05-18 Thread Carsten Kutzner
Hi,

On May 18, 2010, at 9:35 AM, delara aghaie wrote:

> Dear gmx-users,
> At our university we have a cluster of 20 nodes, each with 4 processors, 
> which I want to use for a simulation project.
> As a test I submitted a run that I had already tested at Imperial 
> College (London).
> Here the cluster is set up so that I must specify on which nodes the 
> simulation should run.
>  
> For this we have a folder (gromacs launcher) containing some files.
> In the file lamhosts.txt, the node numbers I have access to have been 
> specified by the cluster administrator.
> In the file hosts.txt I can choose the nodes on which I want to simulate 
> my system (I am restricted to the nodes listed in lamhosts.txt).
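For readers unfamiliar with these files: a LAM boot schema such as lamhosts.txt is typically just one machine per line, optionally with a CPU count. The hostnames below are hypothetical placeholders, not the poster's actual cluster:

```text
# lamhosts.txt -- hypothetical example
# one host per line; "cpu=N" tells LAM how many processes the node can take
node01.cluster.local cpu=4
node02.cluster.local cpu=4
node03.cluster.local cpu=4
```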
>  
> When I run the grompp command, it asks me to specify the -np option 
> (number of processors). For example, I have access to 12 processors 
> (3 nodes).
> I write these commands:
> 
> /usr/local/gromacs/bin/grompp -c ~.gro -f ~.mdp -p ~.top -n ~.ndx -o 
> topol.tpr -np 12
> mpiexec /usr/local/gromacs/bin/mdrun -v -s topol.tpr -np 12
I think the -np 12 should go directly after the mpiexec.
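As a sketch of what that looks like (keeping the placeholder file names from the original commands; note that in GROMACS 3.x mdrun may additionally expect its own -np matching the value given to grompp):

```shell
# grompp prepares the run input for 12 ranks (GROMACS 3.x syntax)
/usr/local/gromacs/bin/grompp -c ~.gro -f ~.mdp -p ~.top -n ~.ndx \
    -o topol.tpr -np 12
# -np is an option of the MPI launcher, so it goes right after mpiexec,
# before the program name; mdrun's own -np (if required) comes after it
mpiexec -np 12 /usr/local/gromacs/bin/mdrun -v -s topol.tpr -np 12
```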
>  
> I receive the error that I should include the server name in the list of 
> nodes. I did this for both the lamhosts.txt and hosts.txt files.
>  
If you are using LAM MPI, you have to set up the parallel environment with 
the command "lamboot" before you start any parallel job with mpirun or 
mpiexec. LAM requires that the node you run "lamboot" on is in the list of 
hosts, which is what the error message says. You can boot a parallel 
environment with more nodes than you actually use for your parallel job, 
and there is probably a way to tell LAM which of the lambooted nodes it 
should then use for the run (i.e. all but the server in your case).
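A sketch of that workflow, assuming LAM/MPI's standard tools and its nN node-selection syntax (check the LAM mpirun man page for the exact form), and assuming n0 is the server the schema was booted from:

```shell
# Boot the LAM run-time on every host in the schema (the machine you
# run this on must itself appear in lamhosts.txt)
lamboot -v lamhosts.txt
# List the booted nodes; LAM names them n0, n1, n2, ...
lamnodes
# Start the job on nodes n1 through n3 only, skipping n0 (the server)
mpirun -np 12 n1-3 /usr/local/gromacs/bin/mdrun -v -s topol.tpr
# Shut the LAM daemons down when the run is finished
lamhalt
```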

The other solution would be to log in to one of the nodes which is in the hosts 
file and issue
the lamboot and the mpirun/mpiexec commands there.

For better scalability you might also want to upgrade to Gromacs 4.

Carsten


> 
> Then I have to specify 13 processors instead of 12.
> With this setup the simulation runs very, very slowly; in fact the run 
> is faster on a single processor!
>  
> I think this low speed comes from including the server in the processor 
> list: because the server is always busy with other jobs, the speed drops.
> Now:
> 1) Is including the server in the list the reason for the low speed?
> 2) Is there a way to avoid including the server in lamhosts.txt and 
> hosts.txt?
> I would greatly appreciate your guidance, as I am completely confused!
>  
> thanks in advance.
> D. M
>  
> 
> -- 
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne





