Re: [gmx-users] why it is so slow

2012-11-02 Thread Albert

On 11/01/2012 06:15 PM, Justin Lemkul wrote:



On 11/1/12 12:55 PM, Albert wrote:

hello:

  I am running a 40 ns REMD simulation in NPT with the GBSA solvent 
model. It exchanges between 16 different temperatures with an exchange 
interval of 300 steps.



Based on your .mdp file, you're not doing NPT (pcoupl = no). 


Hello Justin:

thanks a lot for your kind reply.

If I turn this option on, for example:

 pcouple=Isotropic

it says:


ERROR 1 [file eq.mdp, line 78]:
  Pressure coupling not enough values (I need 1)


WARNING 1 [file eq.mdp]:
  Turning off pressure coupling for vacuum system

Setting the LD random seed to 14221350

---
Program grompp, VERSION 4.5.5
Source code file: gmxcpp.c, line: 248

Fatal error:
Topology include file ligand.itp not found
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Cowardly refusing to create an empty archive (GNU tar)
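
For context on the messages above: in GROMACS the pressure-coupling scheme 
keyword is `pcoupl` (e.g. berendsen or Parrinello-Rahman) and the scaling 
geometry is `pcoupltype`; isotropic coupling then needs one value each for 
ref_p and compressibility. A minimal sketch of a valid block (illustrative 
values only, and note the warning above: with pbc = no, grompp turns 
pressure coupling off regardless):

```
; Pressure coupling -- a sketch, not a recommendation for this system
pcoupl          = berendsen    ; coupling scheme
pcoupltype      = isotropic    ; scaling geometry
tau_p           = 2.0          ; time constant, ps
ref_p           = 1.0          ; reference pressure, bar (one value)
compressibility = 4.5e-5       ; bar^-1 (one value)
```

The separate fatal error simply means grompp could not find ligand.itp in 
the working directory or include path.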

--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why it is so slow

2012-11-02 Thread Justin Lemkul



On 11/2/12 10:56 AM, Albert wrote:

On 11/01/2012 06:15 PM, Justin Lemkul wrote:



On 11/1/12 12:55 PM, Albert wrote:

hello:

  I am running a 40 ns REMD simulation in NPT with the GBSA solvent model. It
exchanges between 16 different temperatures with an exchange interval of 300 steps.



Based on your .mdp file, you're not doing NPT (pcoupl = no).


Hello Justin:

thanks a lot for your kind reply.

If I turn this option on, for example:

  pcouple=Isotropic

it says:


ERROR 1 [file eq.mdp, line 78]:
   Pressure coupling not enough values (I need 1)


WARNING 1 [file eq.mdp]:
   Turning off pressure coupling for vacuum system

Setting the LD random seed to 14221350

---
Program grompp, VERSION 4.5.5
Source code file: gmxcpp.c, line: 248

Fatal error:
Topology include file ligand.itp not found
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Cowardly refusing to create an empty archive (GNU tar)



I wasn't suggesting that you use NPT; I was merely pointing out that you made a 
statement that wasn't true and thought I would mention it.


It looks like you have other issues to deal with.

-Justin


--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] why it is so slow

2012-11-02 Thread Albert

On 11/02/2012 04:02 PM, Justin Lemkul wrote:


I wasn't suggesting that you use NPT; I was merely pointing out that 
you made a statement that wasn't true and thought I would mention it.


It looks like you have other issues to deal with.

-Justin



Hello Justin:

 thanks a lot for such a kind reply.
 What are the remaining problems? I just remember that in an old 
gromacs mailing-list thread someone mentioned that NPT would be better 
for REMD. However, I am using the GBSA solvent model; do you think NPT 
is still better than NVT?


I have never done REMD before.

Thank you very much
best
Albert



Re: [gmx-users] why it is so slow

2012-11-02 Thread Justin Lemkul



On 11/2/12 11:08 AM, Albert wrote:

On 11/02/2012 04:02 PM, Justin Lemkul wrote:


I wasn't suggesting that you use NPT; I was merely pointing out that you made
a statement that wasn't true and thought I would mention it.

It looks like you have other issues to deal with.

-Justin



Hello Justin:

  thanks a lot for such a kind reply.
  What are the remaining problems? I just remember that in an old gromacs


The problem I was referring to was the fatal error in your last mail.


mailing-list thread someone mentioned that NPT would be better for REMD. However, I


Quite the opposite.  Depending on the range of temperatures, you can get very 
different densities, which can cause various algorithms to fail.



am using the GBSA solvent model; do you think NPT is still better than NVT?



No.  In fact, using GBSA, you should be using a non-periodic cell with infinite 
cutoffs.
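
In GROMACS 4.5 terms, that advice looks roughly like the following (a 
sketch; the .mdp quoted earlier in the thread in fact already sets most 
of these):

```
; Non-periodic, infinite-cutoff setup for implicit (GBSA) solvent -- a sketch
pbc       = no       ; non-periodic cell
rlist     = 0        ; 0 = no cutoff
rcoulomb  = 0
rvdw      = 0
rgbradii  = 0
ns_type   = simple   ; grid search requires a periodic box
comm_mode = angular  ; remove overall rotation too, since there is no box
```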



I have never done REMD before.



It would be wise, then, to spend a significant amount of time reading before 
attempting anything.  A few hours spent in the literature will potentially save 
you weeks of wasted CPU time if you realize you're not doing something right.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] why it is so slow

2012-11-01 Thread Albert

hello:

 I am running a 40 ns REMD simulation in NPT with the GBSA solvent 
model. It exchanges between 16 different temperatures with an exchange 
interval of 300 steps.


mpiexec -n 384 /opt/gromacs/4.5.5/bin/mdrun -nosum -dlb yes -v -s 
remd_.tpr -multi 16 -replex 300



I found that it will require about one month to finish, which is a really 
long time.


I am just wondering whether there is anything wrong in my .mdp that makes 
it so slow. Here is my .mdp file.


thank you very much





title = Protein-ligand complex NPT equilibration
; Run parameters
integrator = sd ;
nsteps = 2000 ; 2 * 5 = 100 ps
dt = 0.002 ; 2 fs
nstxout = 0 ; save coordinates every 0.2 ps
nstvout = 0 ; save velocities every 0.2 ps
nstfout = 0

nstxtcout = 500
nstenergy = 100 ; save energies every 0.2 ps
nstlog = 1000 ; update log file every 0.2 ps
energygrps = Protein_LIG
; Bond parameters
continuation = yes ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = simple ; search neighboring grid cells
nstlist = 0 ; 10 fs
rlist = 0 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0 ; short-range electrostatic cutoff (in nm)
rvdw = 0 ; short-range van der Waals cutoff (in nm)

; Electrostatics
coulombtype = cutoff ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.15 ; grid spacing for FFT

; Temperature coupling
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein_LIG  ; two coupling groups - more accurate
tau_t = 0.1 ; time constant, in ps
ref_t = 310 ; reference temperature, one for each group, in K

; Pressure coupling
pcoupl = no; pressure coupling is on for NPT
; Periodic boundary conditions
tau_p   = 2.0   ; time constant, in ps
ref_p   = 1.0   ; reference pressure, in bar
pbc=no
; Dispersion correction
DispCorr = no ; account for cut-off vdW scheme
pcoupltype  = isotropic ; uniform scaling of box vectors
; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = 310 ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed
ld_seed=-1

; IMPLICIT SOLVENT ALGORITHM
implicit_solvent = GBSA
comm_mode = ANGULAR

; GENERALIZED BORN ELECTROSTATICS
; Algorithm for calculating Born radii
gb_algorithm = OBC
; Frequency of calculating the Born radii inside rlist
nstgbradii = 1
; Cutoff for Born radii calculation; the contribution from atoms
; between rlist and rgbradii is updated every nstlist steps
rgbradii = 0
; Dielectric coefficient of the implicit solvent
gb_epsilon_solvent = 80
; Salt concentration in M for Generalized Born models
gb_saltconc = 0
; Scaling factors used in the OBC GB model. Default values are OBC(II)
gb_obc_alpha = 1
gb_obc_beta = 0.8
gb_obc_gamma = 4.85
gb_dielectric_offset = 0.009
sa_algorithm = Ace-approximation
; Surface tension (kJ/mol/nm^2) for the SA (nonpolar surface) part of GBSA
; The value -1 will set default value for Still/HCT/OBC GB-models.
sa_surface_tension = 2.25936




Re: [gmx-users] why it is so slow

2012-11-01 Thread Justin Lemkul



On 11/1/12 12:55 PM, Albert wrote:

hello:

  I am running a 40 ns REMD simulation in NPT with the GBSA solvent model. It
exchanges between 16 different temperatures with an exchange interval of 300 steps.



Based on your .mdp file, you're not doing NPT (pcoupl = no).


mpiexec -n 384 /opt/gromacs/4.5.5/bin/mdrun -nosum -dlb yes -v -s remd_.tpr
-multi 16 -replex 300


I found that it will require about one month to finish, which is a really long 
time.

I am just wondering whether there is anything wrong in my .mdp that makes it so
slow. Here is my .mdp file.



One month is not very long, especially if you are running on CPU and not GPU. 
How many atoms are in your system?  How did you decide that 24 CPUs per replica 
was appropriate?  How did you decide on your exchange frequency?  Exchanging 
every 0.6 ps sounds awfully frequent, but I'm no REMD expert so I'll leave that 
for others to comment on.
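
The numbers in this reply follow directly from the quoted command line and 
.mdp; a quick sanity check (values copied from the thread):

```python
# Quick arithmetic behind the reply (all values copied from the thread).
dt_fs = 2              # time step from the .mdp (dt = 0.002 ps = 2 fs)
replex_steps = 300     # mdrun -replex: exchange attempt interval in steps
n_ranks = 384          # mpiexec -n 384
n_replicas = 16        # mdrun -multi 16

exchange_interval_fs = dt_fs * replex_steps     # 600 fs = 0.6 ps per attempt
cores_per_replica = n_ranks // n_replicas       # 24 cores per replica
print(exchange_interval_fs, cores_per_replica)  # → 600 24
```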


-Justin


thank you very much





title = Protein-ligand complex NPT equilibration
; Run parameters
integrator = sd ;
nsteps = 2000 ; 2 * 5 = 100 ps
dt = 0.002 ; 2 fs
nstxout = 0 ; save coordinates every 0.2 ps
nstvout = 0 ; save velocities every 0.2 ps
nstfout = 0

nstxtcout = 500
nstenergy = 100 ; save energies every 0.2 ps
nstlog = 1000 ; update log file every 0.2 ps
energygrps = Protein_LIG
; Bond parameters
continuation = yes ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = simple ; search neighboring grid cells
nstlist = 0 ; 10 fs
rlist = 0 ; short-range neighborlist cutoff (in nm)
rcoulomb = 0 ; short-range electrostatic cutoff (in nm)
rvdw = 0 ; short-range van der Waals cutoff (in nm)

; Electrostatics
coulombtype = cutoff ; Particle Mesh Ewald for long-range electrostatics
pme_order = 4 ; cubic interpolation
fourierspacing = 0.15 ; grid spacing for FFT

; Temperature coupling
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein_LIG  ; two coupling groups - more accurate
tau_t = 0.1 ; time constant, in ps
ref_t = 310 ; reference temperature, one for each group, in K

; Pressure coupling
pcoupl = no; pressure coupling is on for NPT
; Periodic boundary conditions
tau_p   = 2.0   ; time constant, in ps
ref_p   = 1.0   ; reference pressure, in bar
pbc=no
; Dispersion correction
DispCorr = no ; account for cut-off vdW scheme
pcoupltype  = isotropic ; uniform scaling of box vectors
; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = 310 ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed
ld_seed=-1

; IMPLICIT SOLVENT ALGORITHM
implicit_solvent = GBSA
comm_mode = ANGULAR

; GENERALIZED BORN ELECTROSTATICS
; Algorithm for calculating Born radii
gb_algorithm = OBC
; Frequency of calculating the Born radii inside rlist
nstgbradii = 1
; Cutoff for Born radii calculation; the contribution from atoms
; between rlist and rgbradii is updated every nstlist steps
rgbradii = 0
; Dielectric coefficient of the implicit solvent
gb_epsilon_solvent = 80
; Salt concentration in M for Generalized Born models
gb_saltconc = 0
; Scaling factors used in the OBC GB model. Default values are OBC(II)
gb_obc_alpha = 1
gb_obc_beta = 0.8
gb_obc_gamma = 4.85
gb_dielectric_offset = 0.009
sa_algorithm = Ace-approximation
; Surface tension (kJ/mol/nm^2) for the SA (nonpolar surface) part of GBSA
; The value -1 will set default value for Still/HCT/OBC GB-models.
sa_surface_tension = 2.25936




--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] why it is so slow in Blue gene?

2012-04-25 Thread Mark Abraham

On 25/04/2012 3:24 PM, Albert wrote:

hello:

  it is a Blue Gene/P, and the GROMACS build on the cluster is single 
precision. The administrator told me that I have to use multiples of 32 
for the bg_size parameter; the number specified in -np should be 4 times 
bg_size.


Yes, but your system is too small to make use of 128 processors. Also, 
get rid of -launch and -nt from your command line, since they do nothing.



  It is even slower than my own workstation with 16 cores.




here is the log file I get:


No, that's the stdout file. Look at the end of the .log file.



-log
Reading file npt_01.tpr, VERSION 4.5.5 (single precision)
Loaded with Money

Will use 112 particle-particle and 16 PME only nodes


This is guaranteed to lead to woeful performance with your .mdp 
settings, but you will have to look towards the beginning of the .log 
file to find out why mdrun selected this. Odds are good that your system 
size is so small that the minimum particle-particle cell size 
(constrained by rcoulomb) doesn't give mdrun any good options that use 
all the processors. You'd likely get better raw performance with twice 
the number of atoms or half the number of processors.
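
The cell-size constraint Mark describes can be sketched numerically: each 
domain-decomposition cell must be at least as wide as the cutoff, so the 
cutoff caps how many cells fit along each box edge. The box length below is 
hypothetical, since it is not given anywhere in this thread:

```python
# Sketch of the domain-decomposition constraint: cells cannot be narrower
# than the cutoff, so the cell grid (and hence rank count) is capped.
rcoulomb_nm = 1.2    # cutoff from the quoted .mdp
box_edge_nm = 8.0    # assumed box edge, for illustration only
max_cells_per_edge = int(box_edge_nm / rcoulomb_nm)
print(max_cells_per_edge)  # → 6 cells at most along this edge
```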


Mark


This is a guess, check the performance at the end of the log file
Making 3D domain decomposition 4 x 4 x 7
starting mdrun 'GRowing Old MAkes el Chrono Sweat'
50 steps, 500.0 ps.
step 0
vol 0.64! imb F 16% pme/F 0.22 step 100, will finish Wed Apr 25 
18:28:06 2012
vol 0.65! imb F 17% pme/F 0.21 step 200, will finish Wed Apr 25 
18:09:54 2012
vol 0.67! imb F 18% pme/F 0.21 step 300, will finish Wed Apr 25 
18:03:12 2012
vol 0.69! imb F 18% pme/F 0.21 step 400, will finish Wed Apr 25 
17:58:25 2012
vol 0.67! imb F 19% pme/F 0.21 step 500, will finish Wed Apr 25 
17:55:26 2012
vol 0.68! imb F 19% pme/F 0.22 step 600, will finish Wed Apr 25 
17:53:31 2012
vol 0.68! imb F 19% pme/F 0.22 step 700, will finish Wed Apr 25 
17:51:57 2012
vol 0.68! imb F 19% pme/F 0.22 step 800, will finish Wed Apr 25 
17:50:32 2012
vol 0.68! imb F 20% pme/F 0.22 step 900, will finish Wed Apr 25 
17:49:14 2012
vol 0.67! imb F 21% pme/F 0.22 step 1000, will finish Wed Apr 25 
17:48:13 2012
vol 0.68! imb F 20% pme/F 0.22 step 1100, will finish Wed Apr 25 
17:47:28 2012
vol 0.67! imb F 21% pme/F 0.22 step 1200, will finish Wed Apr 25 
17:46:50 2012
vol 0.67! imb F 21% pme/F 0.22 step 1300, will finish Wed Apr 25 
17:46:15 2012




On 04/24/2012 06:01 PM, Hannes Loeffler wrote:

On Tue, 24 Apr 2012 15:42:15 +0200
Albert mailmd2...@gmail.com  wrote:


hello:

I am running a 60,000 atom system with 128 core in a blue gene
cluster. and it is only 1ns/day here is the script I used for

You don't give any information what exact system that is (L/P/Q?), if
you run single or double precision and what force field you are using.
But for a similar sized system using a united atom force field in
single precision we find about 4 ns/day on a BlueGene/P (see our
benchmarking reports on
http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx).  I would
expect a run with the CHARMM 27 force field in double precision to be
roughly 3 times slower.  We found scaling to 128 cores to be
reasonably good. Also, check our report for problems when compiling
with higher optimisation.

Hannes.






[gmx-users] why it is so slow in Blue gene?

2012-04-24 Thread Albert

hello:

  I am running a 60,000 atom system on 128 cores in a Blue Gene 
cluster, and it is only 1 ns/day. Here is the script I used for 
submitting jobs:


# @ job_name = gmx_test
# @ class = kdm-large
# @ error = gmx_test.err
# @ output = gmx_test.out
# @ wall_clock_limit = 00:20:00
# @ job_type = bluegene
# @ bg_size = 32
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args -nosum -dlb yes 
-v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log -launch 
-nt -mode VN -np 128




here is my npt.mdp

title= NPT-01
cpp  = /usr/bin/cpp
include  =
define = -DPOSRES  -DPOSRES_POPE_HEAD
integrator   = md
dt   = 0.001
nsteps   = 500
nstxout  = 10
nstvout  = 10
nstlog   = 10
nstenergy= 5
nstxtcout= 5
xtc_grps =
energygrps = Protein SOL ION
nstcalcenergy= 10
nstlist  = 10
nstcomm  = 10
comm_mode= Linear
comm-grps= Protein_POPE Water_and_ions
ns_type  = grid
rlist= 1.2
rlistlong = 1.4
vdwtype = Switch
rvdw = 1.2
rvdw_switch = 0.8
coulombtype  = pme
rcoulomb = 1.2
rcoulomb_switch = 0.0
fourierspacing = 0.15
pme_order = 6
DispCorr = no
tcoupl   = V-rescale ;nose-hoover
nhchainlength= 1
tc-grps  = Protein_POPE Water_and_ions
tau_t= 0.1   0.1
ref_t= 310 310
Pcoupl   = berendsen;parrinello-rahman
Pcoupltype   = semiisotropic
tau_p= 1.0
compressibility  = 4.5e-5   4.5e-5
ref_p= 1.0  1.0
pbc = xyz
refcoord_scaling = com
gen_vel  = no
optimize_fft = no
constraints  = hbonds
constraint_algorithm = Lincs


Does anybody have any advice?

thank you very much


Re: [gmx-users] why it is so slow in Blue gene?

2012-04-24 Thread Justin A. Lemkul



On 4/24/12 9:42 AM, Albert wrote:

hello:

I am running a 60,000 atom system on 128 cores in a Blue Gene cluster, and it
is only 1 ns/day. Here is the script I used for submitting jobs:

# @ job_name = gmx_test
# @ class = kdm-large
# @ error = gmx_test.err
# @ output = gmx_test.out
# @ wall_clock_limit = 00:20:00
# @ job_type = bluegene
# @ bg_size = 32
# @ queue
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args -nosum -dlb yes -v -s 
npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log -launch -nt -mode VN 
-np 128



here is my npt.mdp

title = NPT-01
cpp = /usr/bin/cpp
include =
define = -DPOSRES -DPOSRES_POPE_HEAD
integrator = md
dt = 0.001
nsteps = 500
nstxout = 10
nstvout = 10
nstlog = 10
nstenergy = 5
nstxtcout = 5
xtc_grps =
energygrps = Protein SOL ION
nstcalcenergy = 10
nstlist = 10
nstcomm = 10
comm_mode = Linear
comm-grps = Protein_POPE Water_and_ions
ns_type = grid
rlist = 1.2
rlistlong = 1.4
vdwtype = Switch
rvdw = 1.2
rvdw_switch = 0.8
coulombtype = pme
rcoulomb = 1.2
rcoulomb_switch = 0.0
fourierspacing = 0.15
pme_order = 6
DispCorr = no
tcoupl = V-rescale ;nose-hoover
nhchainlength = 1
tc-grps = Protein_POPE Water_and_ions
tau_t = 0.1 0.1
ref_t = 310 310
Pcoupl = berendsen ;parrinello-rahman
Pcoupltype = semiisotropic
tau_p = 1.0
compressibility = 4.5e-5 4.5e-5
ref_p = 1.0 1.0
pbc = xyz
refcoord_scaling = com
gen_vel = no
optimize_fft = no
constraints = hbonds
constraint_algorithm = Lincs


Does anybody have any advice?



The end of the log file will print information about where performance may have 
been lost.  For 60,000 atoms I would think that 128 cores is too many; you're 
sacrificing performance to communication overhead.  A good ballpark is 1000 
atoms/core.  A few quick benchmark calculations should give you a better idea of 
the setup for optimal performance.
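
Applying the ~1000 atoms/core rule of thumb from this reply to the system in 
question:

```python
# Justin's ballpark of ~1000 atoms/core applied to the 60,000-atom system.
n_atoms = 60000
atoms_per_core = 1000          # rule of thumb from the reply, not a hard rule
suggested_cores = n_atoms // atoms_per_core
print(suggested_cores)  # → 60, i.e. roughly half of the 128 cores used
```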


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] why it is so slow in Blue gene?

2012-04-24 Thread Hannes Loeffler
On Tue, 24 Apr 2012 15:42:15 +0200
Albert mailmd2...@gmail.com wrote:

 hello:
 
 I am running a 60,000 atom system on 128 cores in a Blue Gene 
cluster, and it is only 1 ns/day. Here is the script I used for 

You don't give any information what exact system that is (L/P/Q?), if
you run single or double precision and what force field you are using.
But for a similar sized system using a united atom force field in
single precision we find about 4 ns/day on a BlueGene/P (see our
benchmarking reports on
http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx).  I would
expect a run with the CHARMM 27 force field in double precision to be
roughly 3 times slower.  We found scaling to 128 cores to be
reasonably good. Also, check our report for problems when compiling
with higher optimisation.

Hannes.