>If the simulation is even stable, it will be horribly inaccurate. 12-nm
>cutoffs are unheard of, and 2-nm grid spacing is about 20 times too large.
>
>Without seeing the original .mdp file that gave the high PME load, and
>without a further description of how large the system is (number of atoms),
>it is hard to say what you should do. Some systems do not parallelize well,
>but I imagine you should be able to get better performance.
>
>-Justin

Thank you for your reply.

The original .mdp file is below, and the number of atoms is 20171.

The only differences are in rlist, rcoulomb, rvdw, and fourierspacing.
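
(As a rough check of the numbers quoted above: the commonly used PME grid
spacing is on the order of 0.1-0.12 nm, so a 2-nm fourierspacing would be
about 2.0 / 0.1 ≈ 20 times coarser than usual, which is presumably where the
"20 times too large" figure comes from. The file below instead uses 1.0-nm
cutoffs and fourierspacing = 0.2.)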

title = ttt
cpp = /lib/cpp
constraints = hbonds
;define = -DFLEX_SPC
integrator = md
emtol = 100.0
emstep = 0.005
dt = 0.002 ; ps !
nsteps = 25000000 ; total 50 ns
nstcomm = 5000
nstxout = 5000
nstvout = 5000
nstfout = 5000
nstlog = 5000
nstenergy = 5000
nstlist = 10
ns_type = grid

rlist = 1
rcoulomb = 1
rvdw = 1
coulombtype = PME
fourierspacing = 0.2
pme_order = 6
optimize_fft = yes
Tcoupl = v-rescale
tc-grps = Protein Non-Protein
;tau_t = 0.1 0.1
tau_t = 0.2 0.2
ref_t = 300 300
energygrps = A-chain B-chain SOL NA
Pcoupl = berendsen
Pcoupltype = isotropic
;tau_p = 0.1
tau_p = 0.25
compressibility = 5.4e-5
ref_p = 1.0
gen_vel = yes
gen_temp = 300
gen_seed = 173529
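
(For comparison only, not as a statement of what this system needs: a fairly
typical PME setup with 1.0-nm cutoffs, using the usual GROMACS defaults for
the grid spacing and interpolation order, would look something like

rlist = 1.0
rcoulomb = 1.0
rvdw = 1.0
coulombtype = PME
fourierspacing = 0.12
pme_order = 4

i.e. a finer Fourier grid with 4th-order interpolation instead of
fourierspacing = 0.2 with pme_order = 6; the two choices trade grid
resolution against interpolation order. If the relative PME load reported by
grompp/mdrun stays high when running in parallel, the number of separate PME
nodes can also be set by hand with mdrun -npme, or, in versions that ship it,
tuned with g_tune_pme.)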

Hsin-Lin

 