Re: [gmx-users] mpi run in Gromacs

2011-03-02 Thread Mark Abraham

On 2/03/2011 11:37 PM, Erik Marklund wrote:

Selina Nawaz wrote 2011-03-02 11.54:

Hi, I am a PhD student studying polymer membranes.

I have installed Gromacs version 4.5.2 on our department cluster and 
attempted to run a simulation on a DPPC bilayer membrane taken from 
the Tieleman website.
I have set the system up in an NPT ensemble using semi-isotropic 
pressure coupling.
The simulation seems to run perfectly in serial; however, I am having 
problems running it in parallel.


You're going to have to be much more specific if you want useful help.

I understand that this version of Gromacs uses threading to 
parallelise the system. I wanted to know whether it is possible to 
run this using OpenMPI and, if not, how to use the thread-based 
parallelisation.

Many Thanks
Selina

Hi,

It's slightly different from what you think. Gromacs uses a thread-based 
MPI implementation for communication between cores on the same 
machine. If you compile without threads it will use e.g. OpenMPI 
instead of threads, as far as I know. Note that it will still use 
OpenMPI (or equivalent) for inter-node communication.


Not in 4.5.x. You can configure with MPI, or with threads (the default), or 
neither; the two cannot co-exist in one binary. Threading is only useful when a 
network is not involved, and OpenMPI's shared-memory transport will give you 
approximately the same performance as threading in the cases where threading 
is useful.


The OP should consult the GROMACS webpage for MPI installation instructions.
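To make the distinction concrete, here is a sketch of the two launch modes in 4.5.x; binary names, core counts, and file names here are illustrative and depend on how your copy was built:

```shell
# Threaded (default) build: parallelism within one machine, no mpirun.
mdrun -nt 8 -deffnm dppc

# MPI build (configured with --enable-mpi): required across a network,
# launched through your MPI library's mpirun.
mpirun -np 8 mdrun_mpi -deffnm dppc
```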

Mark
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] mpi run in Gromacs

2011-03-02 Thread Erik Marklund

Selina Nawaz wrote 2011-03-02 11.54:

Hi, I am a PhD student studying polymer membranes.

I have installed Gromacs version 4.5.2 on our department cluster and 
attempted to run a simulation on a DPPC bilayer membrane taken from 
the Tieleman website.
I have set the system up in an NPT ensemble using semi-isotropic 
pressure coupling.
The simulation seems to run perfectly in serial; however, I am having 
problems running it in parallel.
I understand that this version of Gromacs uses threading to 
parallelise the system. I wanted to know whether it is possible to run 
this using OpenMPI and, if not, how to use the thread-based 
parallelisation.

Many Thanks
Selina

Hi,

It's slightly different from what you think. Gromacs uses a thread-based MPI 
implementation for communication between cores on the same machine. If 
you compile without threads it will use e.g. OpenMPI instead of threads, 
as far as I know. Note that it will still use OpenMPI (or equivalent) 
for inter-node communication.


Cheers,

--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/


[gmx-users] mpi run in Gromacs

2011-03-02 Thread Selina Nawaz
Hi, I am a PhD student studying polymer membranes.

I have installed Gromacs version 4.5.2 on our department cluster and attempted 
to run a simulation on a DPPC bilayer membrane taken from the Tieleman 
website.
I have set the system up in an NPT ensemble using semi-isotropic pressure 
coupling.
The simulation seems to run perfectly in serial; however, I am having problems 
running it in parallel.
I understand that this version of Gromacs uses threading to parallelise the 
system. I wanted to know whether it is possible to run this using OpenMPI and, 
if not, how to use the thread-based parallelisation.

Many Thanks

Selina

RE: [gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Te, Jerez A., Ph.D.
Thank you, Justin and Mark. I'll recompile Gromacs; in my current version -nt 
is not available (according to mdrun -h).
Thanks again,
JT 


-Original Message-
From: gmx-users-boun...@gromacs.org on behalf of Justin A. Lemkul
Sent: Mon 12/13/2010 5:49 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3
 


Te, Jerez A., Ph.D. wrote:
> Hi Mark,
> 
> Thank you for your reply. Just to confirm, mdrun_mpi is still being used in
> Gromacs 4.5.3? The gromacs manual (Appendix A.5- Running Gromacs in parallel)
> suggested using "mdrun -np 8 -s topol -v -N 8" in running a single machine
> with multiple (8) processors and "mpirun -p goofus,doofus,fred 10 mdrun -s
> topol -v -N 30" to run three machines with ten processors each. You probably
> already know this but I am confused as to whether these commands are correct

Those commands are outdated.  I am updating the manual to fix this.

> in running parallel simulations (thus the assumption that mdrun_mpi is no
> longer applicable in gromacs 4.5.3). So far I have not been successful in
> running parallel simulations (as I mentioned before the two options above
> would only start X identical serial processes). My bottom line is I want to
> run Gromacs simulation in parallel (regardless of whether it is running on
> one silicon with a number of processors or on different machines or nodes
> with a specified number of processors).
> 

The exact implementation depends on the nature of your cluster setup.  For 
instance, our aging supercomputer does not support threading, so we have to 
compile with --enable-mpi to produce an mdrun_mpi binary that is called via 
mpirun.  My laptop, however, supports threading, so I can initiate a process 
over both cores with mdrun -nt, no external MPI support necessary.

> I assume that in order to get mdrun_mpi, Gromacs has to be recompiled using
> the --enable-mpi option because currently our gromacs executable bin does not
> have mdrun_mpi. Our system administrator said that all mpi-related options
> have been enabled in compiling gromacs and still the mdrun_mpi is not found
> as one of the exe files. Please help.
> 

Just because your /bin subdirectory does not have mdrun_mpi does not necessarily 
mean the binary was not compiled with MPI support.  Based on what you have 
reported, it sounds like you indeed only have a serial mdrun, but it is possible 
to suppress the default suffix.  If you have threading support, the -nt option 
will be printed if you issue mdrun -h.

-Justin

> Thanks, JT
> 
> 
> -Original Message- From: gmx-users-boun...@gromacs.org on behalf of
> Mark Abraham Sent: Mon 12/13/2010 4:38 PM To: Discussion list for GROMACS
> users Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3
> 
> On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:
>> Hi, I have been trying to run Gromacs 4.5.3 parallel simulations using 
>> openmpi 1.4.2. From my understanding, mdrun_mpi is not used in this version
>> of Gromacs.
>> 
> 
> I don't understand what (you think) you mean. You can use thread-based 
> parallelism for processors that share common silicon, or MPI-based 
> parallelism if a network connection is involved, but not both. The latter is
> named mdrun_mpi by default.
> 
>> Our system administrator told me that all mpi related options have been
>> turned on while installing Gromacs. With either commands: mdrun -np X
>> -deffnm topol -N X (run in an 8-cpu node)
>> 
> 
> This won't run in parallel at all. mdrun ignores -np and -N
> 
>> or mpirun -np X mdrun -deffnm topol -N X (submitted in a number of nodes 
>> depending on availability)
>> 
> 
> This will get you the symptoms below, but -N is still ignored.
> 
>> I get X identical simulations instead of a parallel run. If X=4, I get 4
>> identical simulations (the same simulation ran 4 times) instead of 1 
>> parallel simulation in 4 processors. The performance difference between a
>> single-processor run and the X=4 run are also similar (no marked difference
>> in the time it takes to finish the simulation). Has anyone encountered this
>> problem?
>> 
> 
> You're using a serial mdrun. Use a parallel mdrun.
> 
> Mark
> 
> 

-- 


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



Re: [gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Mark Abraham

On 14/12/2010 11:14 AM, Te, Jerez A., Ph.D. wrote:

Hi Mark,

Thank you for your reply. Just to confirm, mdrun_mpi is still being used in Gromacs 4.5.3? The 
gromacs manual (Appendix A.5- Running Gromacs in parallel) suggested using "mdrun -np 8 -s 
topol -v -N 8" in running a single machine with multiple (8) processors and "mpirun -p 
goofus,doofus,fred 10 mdrun -s topol -v -N 30" to run three machines with ten processors each. 
You probably already know this but I am confused as to whether these commands are correct in 
running parallel simulations (thus the assumption that mdrun_mpi is no longer applicable in gromacs 
4.5.3). So far I have not been successful in running parallel simulations (as I mentioned before 
the two options above would only start X identical serial processes). My bottom line is I want to 
run Gromacs simulation in parallel (regardless of whether it is running on one silicon with a 
number of processors or on different machines or nodes with a specified number of processors).

I assume that in order to get mdrun_mpi, Gromacs has to be recompiled using the 
--enable-mpi option because currently our gromacs executable bin does not have 
mdrun_mpi. Our system administrator said that all mpi-related options have been 
enabled in compiling gromacs and still the mdrun_mpi is not found as one of the 
exe files. Please help.

I echo Justin's comments.

"All mpi-related options" seems suspicious. There's only one, 
--enable-mpi. Obviously you need a correctly configured MPI compiler and 
environment for it to work.
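Before rebuilding, it may be worth confirming that the MPI environment itself works; a rough check, assuming an OpenMPI- or MPICH-style installation, is:

```shell
# Is an MPI compiler wrapper on the PATH, and what does it wrap?
which mpicc && mpicc -show

# Can MPI actually launch processes on this machine?
mpirun -np 2 hostname
```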


Mark

Thanks,
JT


-Original Message-
From: gmx-users-boun...@gromacs.org on behalf of Mark Abraham
Sent: Mon 12/13/2010 4:38 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3

On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:

Hi,
I have been trying to run Gromacs 4.5.3 parallel simulations using
openmpi 1.4.2. From my understanding, mdrun_mpi is not used in this
version of Gromacs.


I don't understand what (you think) you mean. You can use thread-based
parallelism for processors that share common silicon, or MPI-based
parallelism if a network connection is involved, but not both. The
latter is named mdrun_mpi by default.


Our system administrator told me that all mpi related options have
been turned on while installing Gromacs. With either commands:
mdrun -np X -deffnm topol -N X (run in an 8-cpu node)


This won't run in parallel at all. mdrun ignores -np and -N


or
mpirun -np X mdrun -deffnm topol -N X (submitted in a number of nodes
depending on availability)


This will get you the symptoms below, but -N is still ignored.


I get X identical simulations instead of a parallel run. If X=4, I get
4 identical simulations (the same simulation ran 4 times) instead of 1
parallel simulation in 4 processors. The performance difference
between a single-processor run and the X=4 run are also similar (no
marked difference in the time it takes to finish the simulation).
Has anyone encountered this problem?


You're using a serial mdrun. Use a parallel mdrun.

Mark






Re: [gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Justin A. Lemkul



Te, Jerez A., Ph.D. wrote:

Hi Mark,

Thank you for your reply. Just to confirm, mdrun_mpi is still being used in
Gromacs 4.5.3? The gromacs manual (Appendix A.5- Running Gromacs in parallel)
suggested using "mdrun -np 8 -s topol -v -N 8" in running a single machine
with multiple (8) processors and "mpirun -p goofus,doofus,fred 10 mdrun -s
topol -v -N 30" to run three machines with ten processors each. You probably
already know this but I am confused as to whether these commands are correct


Those commands are outdated.  I am updating the manual to fix this.


in running parallel simulations (thus the assumption that mdrun_mpi is no
longer applicable in gromacs 4.5.3). So far I have not been successful in
running parallel simulations (as I mentioned before the two options above
would only start X identical serial processes). My bottom line is I want to
run Gromacs simulation in parallel (regardless of whether it is running on
one silicon with a number of processors or on different machines or nodes
with a specified number of processors).



The exact implementation depends on the nature of your cluster setup.  For 
instance, our aging supercomputer does not support threading, so we have to 
compile with --enable-mpi to produce an mdrun_mpi binary that is called via 
mpirun.  My laptop, however, supports threading, so I can initiate a process 
over both cores with mdrun -nt, no external MPI support necessary.



I assume that in order to get mdrun_mpi, Gromacs has to be recompiled using
the --enable-mpi option because currently our gromacs executable bin does not
have mdrun_mpi. Our system administrator said that all mpi-related options
have been enabled in compiling gromacs and still the mdrun_mpi is not found
as one of the exe files. Please help.



Just because your /bin subdirectory does not have mdrun_mpi does not necessarily 
mean the binary was not compiled with MPI support.  Based on what you have 
reported, it sounds like you indeed only have a serial mdrun, but it is possible 
to suppress the default suffix.  If you have threading support, the -nt option 
will be printed if you issue mdrun -h.
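A quick way to check what a given binary was built with (the exact output varies between versions and installs, so treat this as a sketch; the config.log path is illustrative):

```shell
# A threaded build lists -nt in its help text:
mdrun -h 2>&1 | grep -e '-nt'

# If nothing turns up, inspect how the build was configured, e.g.:
grep -i 'mpi\|threads' /path/to/build/config.log | head
```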


-Justin


Thanks, JT


-Original Message- From: gmx-users-boun...@gromacs.org on behalf of
Mark Abraham Sent: Mon 12/13/2010 4:38 PM To: Discussion list for GROMACS
users Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3

On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:
Hi, I have been trying to run Gromacs 4.5.3 parallel simulations using 
openmpi 1.4.2. From my understanding, mdrun_mpi is not used in this version

of Gromacs.



I don't understand what (you think) you mean. You can use thread-based 
parallelism for processors that share common silicon, or MPI-based 
parallelism if a network connection is involved, but not both. The latter is

named mdrun_mpi by default.


Our system administrator told me that all mpi related options have been
turned on while installing Gromacs. With either commands: mdrun -np X
-deffnm topol -N X (run in an 8-cpu node)



This won't run in parallel at all. mdrun ignores -np and -N

or mpirun -np X mdrun -deffnm topol -N X (submitted in a number of nodes 
depending on availability)




This will get you the symptoms below, but -N is still ignored.


I get X identical simulations instead of a parallel run. If X=4, I get 4
identical simulations (the same simulation ran 4 times) instead of 1 
parallel simulation in 4 processors. The performance difference between a

single-processor run and the X=4 run are also similar (no marked difference
in the time it takes to finish the simulation). Has anyone encountered this
problem?



You're using a serial mdrun. Use a parallel mdrun.

Mark




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




RE: [gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Te, Jerez A., Ph.D.
Hi Mark,

Thank you for your reply. Just to confirm: is mdrun_mpi still used in 
Gromacs 4.5.3? The gromacs manual (Appendix A.5, Running Gromacs in parallel) 
suggested using "mdrun -np 8 -s topol -v -N 8" to run on a single machine with 
multiple (8) processors and "mpirun -p goofus,doofus,fred 10 mdrun -s topol -v 
-N 30" to run on three machines with ten processors each. You probably already 
know this, but I am confused as to whether these commands are correct for running 
parallel simulations (hence my assumption that mdrun_mpi is no longer 
applicable in gromacs 4.5.3). So far I have not been successful in running 
parallel simulations (as I mentioned before, the two options above would only 
start X identical serial processes). My bottom line is that I want to run Gromacs 
simulations in parallel, regardless of whether that is on one piece of silicon 
with a number of processors or on different machines or nodes with a specified 
number of processors.

I assume that in order to get mdrun_mpi, Gromacs has to be recompiled using the 
--enable-mpi option, because currently our gromacs executable bin does not have 
mdrun_mpi. Our system administrator said that all mpi-related options were 
enabled when compiling gromacs, and still mdrun_mpi is not found among the 
executables. Please help.

Thanks,
JT 


-Original Message-
From: gmx-users-boun...@gromacs.org on behalf of Mark Abraham
Sent: Mon 12/13/2010 4:38 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3
 
On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:
>
> Hi,
> I have been trying to run Gromacs 4.5.3 parallel simulations using 
> openmpi 1.4.2. From my understanding, mdrun_mpi is not used in this 
> version of Gromacs.
>

I don't understand what (you think) you mean. You can use thread-based 
parallelism for processors that share common silicon, or MPI-based 
parallelism if a network connection is involved, but not both. The 
latter is named mdrun_mpi by default.

> Our system administrator told me that all mpi related options have 
> been turned on while installing Gromacs. With either commands:
> mdrun -np X -deffnm topol -N X (run in an 8-cpu node)
>

This won't run in parallel at all. mdrun ignores -np and -N

> or
> mpirun -np X mdrun -deffnm topol -N X (submitted in a number of nodes 
> depending on availability)
>

This will get you the symptoms below, but -N is still ignored.

> I get X identical simulations instead of a parallel run. If X=4, I get 
> 4 identical simulations (the same simulation ran 4 times) instead of 1 
> parallel simulation in 4 processors. The performance difference 
> between a single-processor run and the X=4 run are also similar (no 
> marked difference in the time it takes to finish the simulation).
> Has anyone encountered this problem?
>

You're using a serial mdrun. Use a parallel mdrun.

Mark


Re: [gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Mark Abraham

On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:


Hi,
I have been trying to run Gromacs 4.5.3 parallel simulations using 
openmpi 1.4.2. From my understanding, mdrun_mpi is not used in this 
version of Gromacs.




I don't understand what (you think) you mean. You can use thread-based 
parallelism for processors that share common silicon, or MPI-based 
parallelism if a network connection is involved, but not both. The 
latter is named mdrun_mpi by default.


Our system administrator told me that all mpi related options have 
been turned on while installing Gromacs. With either commands:

mdrun -np X -deffnm topol -N X (run in an 8-cpu node)



This won't run in parallel at all. mdrun ignores -np and -N.


or
mpirun -np X mdrun -deffnm topol -N X (submitted in a number of nodes 
depending on availability)




This will get you the symptoms below, but -N is still ignored.

I get X identical simulations instead of a parallel run. If X=4, I get 
4 identical simulations (the same simulation ran 4 times) instead of 1 
parallel simulation in 4 processors. The performance difference 
between a single-processor run and the X=4 run are also similar (no 
marked difference in the time it takes to finish the simulation).

Has anyone encountered this problem?



You're using a serial mdrun. Use a parallel mdrun.
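Spelled out against the commands quoted above (binary names and counts are illustrative):

```shell
# These start serial runs; mdrun in 4.5.x ignores -np and -N:
#   mdrun -np 4 -deffnm topol -N 4
#   mpirun -np 4 mdrun -deffnm topol -N 4   # 4 identical serial copies

# Working 4.5.x equivalents:
mdrun -nt 4 -deffnm topol              # threaded binary, single node
mpirun -np 4 mdrun_mpi -deffnm topol   # MPI binary; rank count goes to mpirun
```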

Mark

[gmx-users] mpi run in Gromacs 4.5.3

2010-12-13 Thread Te, Jerez A., Ph.D.
Hi,
I have been trying to run Gromacs 4.5.3 parallel simulations using openmpi 
1.4.2. From my understanding, mdrun_mpi is not used in this version of Gromacs. 
Our system administrator told me that all mpi-related options were turned 
on when installing Gromacs. With either command:
mdrun -np X -deffnm topol -N X (run on an 8-cpu node)
or
mpirun -np X mdrun -deffnm topol -N X (submitted to a number of nodes depending 
on availability)
I get X identical simulations instead of a parallel run. If X=4, I get 4 
identical simulations (the same simulation run 4 times) instead of 1 parallel 
simulation on 4 processors. The single-processor run and the X=4 run also take 
about the same time to finish (no marked difference).
Has anyone encountered this problem?
I could provide more details if needed.
Thanks,
JT 

Re: RE: [gmx-users] mpi run

2010-07-08 Thread Mark Abraham


- Original Message -
From: #ZHAO LINA# 
Date: Thursday, July 8, 2010 18:53
Subject: RE: [gmx-users] mpi run
To: Discussion list for GROMACS users 


> During your installation, if you use --program-suffix=_mpi, so the mdrun_mpi
> will work.

--program-suffix merely appends that suffix. --enable-mpi is necessary to 
actually create an MPI-enabled mdrun. It is normal to use both of them together.

Mark
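The difference shows up at configure time; a sketch for the autoconf-based builds of that era (options otherwise default):

```shell
# Renames the binary but still builds a serial/threaded mdrun:
./configure --program-suffix=_mpi

# Actually builds MPI support; it is conventional to add the suffix too:
./configure --enable-mpi --program-suffix=_mpi
make && make install
```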

> I suggest you check the nodes of the cluster (probably try pbsnodes); on a
> master node no job may be run.
> Note that I do not know much about your setup, so this (guessed)
> suggestion may not fit your problem; I was just once confused myself about
> why my 4 cpus were barely used.
>
> Best,
>
> lina
>
> From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on
> behalf of Carsten Kutzner [ckut...@gwdg.de]
> Sent: Thursday, July 08, 2010 4:04 PM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] mpi run
>
> Hi Mahmoud,
> for anyone to be able to help you, you need to provide a lot more
> information, at least:
> - which mpi library are you using?
> - how did you compile and/or install Gromacs?
> - what commands do you use to run mdrun and what was the output of it?
>
> Best,
>   Carsten
>
> On Jul 8, 2010, at 9:41 AM, nanogroup wrote:
>> Dear GMX Users,
>>
>> I have a PC with 4 CPUs, but Gromacs only uses one CPU.
>>
>> The command mpirun works on linux; however, the command mdrun_mpi does
>> not work.
>>
>> Would you please help me set up mdrun_mpi in Gromacs 4.0.4?
>>
>> Many thanks,
>> Mahmoud
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne



RE: [gmx-users] mpi run

2010-07-08 Thread #ZHAO LINA#
During your installation, if you used --program-suffix=_mpi, then mdrun_mpi 
will work.
I suggest you check the nodes of the cluster (probably try pbsnodes); on a 
master node, jobs may not be allowed to run.
Note that I do not know much about your setup, so this (guessed) 
suggestion may not fit your problem; I was just once confused myself about 
why my 4 cpus were barely used.

Best,

lina

From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on behalf 
of Carsten Kutzner [ckut...@gwdg.de]
Sent: Thursday, July 08, 2010 4:04 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mpi run

Hi Mahmoud,

for anyone to be able to help you, you need to provide
a lot more information, at least:
- which mpi library are you using?
- how did you compile and/or install Gromacs?
- what commands do you use to run mdrun and what was
the output of it?

Best,
  Carsten


On Jul 8, 2010, at 9:41 AM, nanogroup wrote:

Dear GMX Users,

I have a PC with 4 CPUs, but Gromacs only uses one CPU.

The command mpirun works on linux; however, the command mdrun_mpi does not 
work.

Would you please help me set up mdrun_mpi in Gromacs 4.0.4?

Many thanks,
Mahmoud





--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne





Re: [gmx-users] mpi run

2010-07-08 Thread Carsten Kutzner
Hi Mahmoud,

for anyone to be able to help you, you need to provide
a lot more information, at least:
- which mpi library are you using?
- how did you compile and/or install Gromacs?
- what commands do you use to run mdrun and what was
the output of it?

Best,
  Carsten


On Jul 8, 2010, at 9:41 AM, nanogroup wrote:

> Dear GMX Users,
> 
> I have a PC with 4 CPUs, but Gromacs only uses one CPU.
> 
> The command mpirun works on linux; however, the command mdrun_mpi does 
> not work.
> 
> Would you please help me set up mdrun_mpi in Gromacs 4.0.4?
> 
> Many thanks,
> Mahmoud
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne





[gmx-users] mpi run

2010-07-08 Thread nanogroup
Dear GMX Users,

I have a PC with 4 CPUs, but Gromacs only uses one CPU.

The command mpirun works on linux; however, the command mdrun_mpi does not 
work.

Would you please help me set up mdrun_mpi in Gromacs 4.0.4?

Many thanks,
Mahmoud





Re: [gmx-users] mpi-run

2010-05-30 Thread Justin A. Lemkul



nanogroup wrote:

Dear GMX Users,

I want to run Gromacs on a multiprocessor PC.

The MPI files are correctly installed and gromacs is also configured.

However, at the end of the configuration step, an error appears saying that 
FFTW cannot be found!

Indeed, FFTW is already installed, but configure cannot locate it!

Would you please help me get configure to locate fftw?


Please follow the installation instructions, most notably the information in the 
"Configuration" section:


http://www.gromacs.org/index.php?title=Download_%26_Installation/Installation_Instructions#Configuration

If you're still having difficulty, please provide more detail, like the actual 
commands you're issuing, where FFTW is installed, and what the actual error 
messages are.
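For the autoconf-based builds of that era, configure finds FFTW through the preprocessor and linker search paths, so a non-standard install is usually handled along these lines (the prefix /opt/fftw-3 is illustrative):

```shell
# Point configure at the FFTW headers and libraries before running it:
export CPPFLAGS=-I/opt/fftw-3/include
export LDFLAGS=-L/opt/fftw-3/lib
./configure
```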


-Justin



Many thanks
Mahmoud




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] mpi-run

2010-05-30 Thread nanogroup
Dear GMX Users,

I want to run Gromacs on a multiprocessor PC.

The MPI files are correctly installed and gromacs is also configured.

However, at the end of the configuration step, an error appears saying that FFTW 
cannot be found!

Indeed, FFTW is already installed, but configure cannot locate it!

Would you please help me get configure to locate fftw?

Many thanks
Mahmoud



