[gmx-users] can't generate pdb file in g_cluster

2015-01-01 Thread Nizar Masbukhin
hi gromacs users,

While I was trying to cluster structures using:

 g_cluster_mpi -f traj -s chainB.pdb -method gromos -cl out.pdb -n
index_20-29_ca.ndx

this error message appeared:

[nizars-mbp:01757] *** Process received signal ***

[nizars-mbp:01757] Signal: Segmentation fault: 11 (11)

[nizars-mbp:01757] Signal code: Address not mapped (1)

[nizars-mbp:01757] Failing at address: 0x0

[nizars-mbp:01757] [ 0] 0   libsystem_platform.dylib
0x7fff924485aa _sigtramp + 26

[nizars-mbp:01757] [ 1] 0   ???
0x2060 0x0 + 35596688949248

[nizars-mbp:01757] *** End of error message ***

This doesn't happen when I don't use the -cl flag.

What is libsystem_platform.dylib? Am I missing some library?
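A minimal way to narrow this down might be to rerun with the serial analysis
tool on the same inputs (the file extensions, the cutoff value, and the
availability of a serial build are assumptions, not taken from the post;
analysis tools do not need the MPI build):

# hypothetical retry with the serial tool; check that the index group, the
# trajectory, and the -s structure all describe the same set of atoms
g_cluster -f traj.xtc -s chainB.pdb -n index_20-29_ca.ndx -method gromos \
          -cutoff 0.1 -cl out.pdb -g cluster.log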
-- 
Thanks
My Best Regards, Nizar
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] error while executing do_dssp program

2014-12-31 Thread Nizar Masbukhin
hi gromacs user,

When I try to compute secondary structure using the do_dssp program, this
error message appears:

Program do_dssp_mpi, VERSION 5.0.2
Source code file: /home/user/gromacs-5.0.2/src/gromacs/fileio/futil.cpp,
line: 863

Fatal error:
Permission denied for opening ddhCTh1r
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors;

I have installed the dssp program, and I have set the DSSP environment
variable to the path where I installed DSSP.

Could anyone please explain to me what's wrong?
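For reference, a hedged sketch of a setup that usually works (the install
path and file names are assumptions): DSSP should point at the executable
itself, and the working directory must be writable, because do_dssp creates
temporary files there.

export DSSP=/usr/local/bin/dssp      # the executable itself, not just its directory
cd ~/analysis                        # any directory you can write to
do_dssp_mpi -f traj.xtc -s topol.tpr -o ss.xpm -sc scount.xvg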

-- 
Thanks
My Best Regards, Nizar


Re: [gmx-users] error while executing do_dssp program

2014-12-31 Thread Nizar Masbukhin
The directory where I execute the command is on the Desktop; I thought that
folder wouldn't need special read/write permission. I have even used sudo -s,
but the problem still exists.
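A quick, hedged way to verify whether the working directory really is
writable (run from the directory where do_dssp is executed):

ls -ld .                            # permissions on the current directory
touch .writetest && rm .writetest   # fails if temporary files cannot be created here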
On Dec 31, 2014 11:39 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 12/31/14 10:53 AM, Nizar Masbukhin wrote:

 hi gromacs user,

 when i try to compute secondary structure using do_dssp program, the error
 message appear:

 Program do_dssp_mpi, VERSION 5.0.2
 Source code file: /home/user/gromacs-5.0.2/src/gromacs/fileio/futil.cpp,
 line: 863

 Fatal error:
 Permission denied for opening ddhCTh1r
 For more information and tips for troubleshooting, please check the
 GROMACS
 website at http://www.gromacs.org/Documentation/Errors;

 i have installed dssp program, and i have set the environment DSSP to the
 path where i install DSSP.

 could anyone please explain to me whats wrong?


 You don't have write permission in the directory where you are executing
 your command.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==
 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Continuing interrupted REMD simulation

2014-12-03 Thread Nizar Masbukhin
So, in the REMD case, can we not continue a simulation if it was interrupted?

And how do I check whether exchanges occurred or not?

On Tue, Dec 2, 2014 at 7:31 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 Hi,

 You'll certainly lose the per-run statistics, because nobody's cared enough
 to write the code to accumulate it over interrupted runs.

 Showing no exchanges at all is another problem, but you should look at the
 actual exchange reports to see whether the problem is that no exchanges
 occurred, or that the books weren't being kept properly.
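For illustration, one hedged way to inspect those exchange reports (the
directory and file names are assumptions; every attempted exchange is
recorded in each replica's md.log):

grep "Repl ex" md0/md.log | head               # individual exchange attempts and acceptances
grep -A2 "average probabilities" md0/md.log    # the end-of-run summary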

 Mark

 On Tue, Dec 2, 2014 at 6:59 AM, Nizar Masbukhin nizar.fku...@gmail.com
 wrote:

  Dear gromacs users,
  I have finished an REMD simulation, tough it had interupted (by
 electrical
  instabilty that lead computer shutdown) several times.  I continue the
  interupted simulation using this command:
  mpirun -np 8 mdrun_mpi -v -multidir md[01234567] -replex 2000 -cpt 3 -cpi
  state.cpt
 
  however, the remd statistics show unexpected exchange probability.
  Replica exchange statistics
  Repl  3365 attempts, 1683 odd, 1682 even
  Repl  average probabilities:
  Repl 01234567
  Repl
  Repl  number of exchanges:
  Repl 01234567
  Repl0000000
  Repl  average number of exchanges:
  Repl 01234567
  Repl  .00  .00  .00  .00  .00  .00  .00
 
 
  ReplEmpirical Transition Matrix
  Repl   1   2   3   4   5   6   7   8
  Repl  1.  0.  0.  0.  0.  0.  0.  0.  0
  Repl  0.  1.  0.  0.  0.  0.  0.  0.  1
  Repl  0.  0.  1.  0.  0.  0.  0.  0.  2
  Repl  0.  0.  0.  1.  0.  0.  0.  0.  3
  Repl  0.  0.  0.  0.  1.  0.  0.  0.  4
  Repl  0.  0.  0.  0.  0.  1.  0.  0.  5
  Repl  0.  0.  0.  0.  0.  0.  1.  0.  6
  Repl  0.  0.  0.  0.  0.  0.  0.  1.  7
 
  That statistics was very different when i try remd simulation with short
  timesteps, and uninterupted one. the exchange probabilty was 0.3 as i
  expected.
  Could anyone tell me whats wrong?




-- 
Thanks


[gmx-users] Continuing interrupted REMD simulation

2014-12-01 Thread Nizar Masbukhin
Dear gromacs users,
I have finished an REMD simulation, though it was interrupted several times
(by electrical instability that led to computer shutdowns). I continued the
interrupted simulation using this command:
mpirun -np 8 mdrun_mpi -v -multidir md[01234567] -replex 2000 -cpt 3 -cpi
state.cpt

However, the REMD statistics show an unexpected exchange probability.
Replica exchange statistics
Repl  3365 attempts, 1683 odd, 1682 even
Repl  average probabilities:
Repl     0    1    2    3    4    5    6    7
Repl
Repl  number of exchanges:
Repl     0    1    2    3    4    5    6    7
Repl        0    0    0    0    0    0    0
Repl  average number of exchanges:
Repl     0    1    2    3    4    5    6    7
Repl      .00  .00  .00  .00  .00  .00  .00


Repl  Empirical Transition Matrix
Repl     1    2    3    4    5    6    7    8
Repl  1.  0.  0.  0.  0.  0.  0.  0.  0
Repl  0.  1.  0.  0.  0.  0.  0.  0.  1
Repl  0.  0.  1.  0.  0.  0.  0.  0.  2
Repl  0.  0.  0.  1.  0.  0.  0.  0.  3
Repl  0.  0.  0.  0.  1.  0.  0.  0.  4
Repl  0.  0.  0.  0.  0.  1.  0.  0.  5
Repl  0.  0.  0.  0.  0.  0.  1.  0.  6
Repl  0.  0.  0.  0.  0.  0.  0.  1.  7

Those statistics were very different when I tried an REMD simulation with
short timesteps and no interruption: the exchange probability was 0.3, as I
expected.
Could anyone tell me what's wrong?
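For the record, a hedged sketch of how such a checkpointed REMD continuation
is usually launched (directory and file names are assumptions; -cpi picks up
state.cpt in each replica directory, and -append, the default, keeps writing
to the existing output files):

mpirun -np 8 mdrun_mpi -v -multidir md[01234567] -replex 2000 -cpt 3 \
       -cpi state.cpt -append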


[gmx-users] Regarding simulated tempering

2014-11-08 Thread Nizar Masbukhin
Dear gromacs users,
Simulated tempering, one of the multicanonical Monte Carlo methods, can be
performed in GROMACS. As far as I know, adaptive tempering is also a
multicanonical-ensemble method. Can adaptive tempering be performed in
GROMACS, or are the two the same thing?

-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University, Malang, Indonesia


Re: [gmx-users] md.log interpretation

2014-11-06 Thread Nizar Masbukhin
I have tried running the simulation with zero cutoffs, zero nstlist, and
simple neighbour searching. That was very, very slow, however.

Anyway, thanks for the reply.
 On Nov 5, 2014 9:38 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 Hi,

 I am not familiar with the implementation details, but I would assume the
 cost of the Born force chain rule depends on rcoulomb. You can check this by
 trying different values. It might be faster to use infinite cutoffs (see the
 manual) so that the algorithm does not need to search.
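A hedged sketch of what such an infinite-cutoff (all-vs-all) setup can look
like for implicit solvent with the group scheme; the parameter choices and
file names below are assumptions and should be checked against the manual:

# all-vs-all kernels are selected when neighbour searching is disabled and
# all cutoffs are set to zero (file names are placeholders)
#   pbc            = no
#   cutoff-scheme  = group
#   nstlist        = 0
#   ns_type        = simple
#   rlist          = 0
#   rvdw           = 0
#   rcoulomb       = 0
#   rgbradii       = 0
grompp -f allvsall.mdp -c equil.gro -p topol.top -o allvsall.tpr
mdrun -deffnm allvsall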

 Mark

 On Tue, Nov 4, 2014 at 1:34 AM, Nizar Masbukhin nizar.fku...@gmail.com
 wrote:

  Dear gromacs users,
  I've just finished my NVT equilibration in implicit solvent. From
 md.log, i
  see that most of time were used to calculate Forces (of Born force chain
  rule). Could someone explain to me why calculating it takes so long?
 
  mdp setting:
 
  integrator=md
  dt=0.004
  nstep=250
  bd-frict=50
  rlist=5
  rvdw=5
  rcoulomb=5
  nstlist=40
  implicit-solvent=GBSA
  gb-algorithm=still
  nstgbradii=40
  gbradii=5
  gb-epsilon-solvent=80
  gb-salt-conc=0.2
  sa-algorithm=ace-approximation
  sa-surface-tension=2.05
  constraints=all-bonds
  constraints-algorithm=LINCS
  lincs-order=12
 
 
  md.log
 
   M E G A - F L O P S   A C C O U N T I N G

    NB=Group-cutoff nonbonded kernels    NxN=N-by-N cluster Verlet kernels
    RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
    W3=SPC/TIP3p  W4=TIP4p (single or pairs)
    VF=Potential and force  V=Potential only  F=Force only

    Computing:                         M-Number         M-Flops  % Flops
  -----------------------------------------------------------------------
    NB VdW [VF]                    32677.013279       32677.013      0.0
    NB VdW [F]                   1740991.575264     1740991.575      0.3
    NB VdW & Elec. [VF]          1192677.312698     1192677.313      0.2
    NB VdW & Elec. [F]         106409048.599622   106409048.600     17.9
    1,4 nonbonded interactions     20035.628466     1803206.562      0.3
    Born radii (Still)           1999216.232742    93963162.939     15.8
    Born force chain rule       25548182.811729   383222742.176     64.4
    NS-Pairs                       64906.572700     1363038.027      0.2
    CG-CoM                           205.506576         616.520      0.0
    Propers                        15168.313980     3473543.901      0.6
    Impropers                        637.980588      132699.962      0.0
    Pos. Restr.                     3830.053530      191502.676      0.0
    Virial                            82.695471        1488.518      0.0
    Stop-CM                           82.214752         822.148      0.0
    Calc-Ekin                        822.011152       22194.301      0.0
    Lincs                           4696.969329      281818.160      0.0
    Lincs-Mat                     229529.791548      918119.166      0.2
    Constraint-V                    9393.938658       75151.509      0.0
    Constraint-Vir                    46.973979        1127.375      0.0
    Virtual Site 3                   615.868824       22787.146      0.0
    Virtual Site 3fd                 791.205144       75164.489      0.0
    Virtual Site 3fad                168.761208       29701.973      0.0
    Virtual Site 3out               1893.632256      164746.006      0.0
    Virtual Site 4fdn                576.418152      146410.211      0.0
    (null)                           476.315439           0.000      0.0
  -----------------------------------------------------------------------
    Total                                         595265438.267    100.0
  -----------------------------------------------------------------------


    R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

   On 1 MPI rank

    Computing:          Num   Num      Call    Wall time   Giga-Cycles
                        Ranks Threads  Count      (s)       total sum    %
  -----------------------------------------------------------------------
    Vsite constr.          1    1    1085001       96.421      385.432   0.2
    Neighbor search        1    1      27126      915.964     3661.462   2.1
    Force                  1    1    1085001    39317.742   157168.118  91.3
    Vsite spread           1    1    1095852      142.563      569.880   0.3
    Write traj.            1    1      11030      425.764     1701.942   1.0
    Update                 1    1    1085001       96.742      386.714   0.2
    Constraints            1    1    1085001     1311.720     5243.449   3.0
    Rest                                           766.404     3063.612   1.8
  -----------------------------------------------------------------------
    Total                                        43073.320   172180.608 100.0

[gmx-users] md.log interpretation

2014-11-03 Thread Nizar Masbukhin
Dear gromacs users,
I've just finished my NVT equilibration in implicit solvent. From md.log, I
see that most of the time was spent calculating forces (the Born force chain
rule). Could someone explain to me why calculating it takes so long?

mdp setting:

integrator=md
dt=0.004
nstep=250
bd-frict=50
rlist=5
rvdw=5
rcoulomb=5
nstlist=40
implicit-solvent=GBSA
gb-algorithm=still
nstgbradii=40
gbradii=5
gb-epsilon-solvent=80
gb-salt-conc=0.2
sa-algorithm=ace-approximation
sa-surface-tension=2.05
constraints=all-bonds
constraints-algorithm=LINCS
lincs-order=12


md.log

M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernels    NxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 VF=Potential and force  V=Potential only  F=Force only

 Computing:                         M-Number         M-Flops  % Flops
-----------------------------------------------------------------------
 NB VdW [VF]                    32677.013279       32677.013      0.0
 NB VdW [F]                   1740991.575264     1740991.575      0.3
 NB VdW & Elec. [VF]          1192677.312698     1192677.313      0.2
 NB VdW & Elec. [F]         106409048.599622   106409048.600     17.9
 1,4 nonbonded interactions     20035.628466     1803206.562      0.3
 Born radii (Still)           1999216.232742    93963162.939     15.8
 Born force chain rule       25548182.811729   383222742.176     64.4
 NS-Pairs                       64906.572700     1363038.027      0.2
 CG-CoM                           205.506576         616.520      0.0
 Propers                        15168.313980     3473543.901      0.6
 Impropers                        637.980588      132699.962      0.0
 Pos. Restr.                     3830.053530      191502.676      0.0
 Virial                            82.695471        1488.518      0.0
 Stop-CM                           82.214752         822.148      0.0
 Calc-Ekin                        822.011152       22194.301      0.0
 Lincs                           4696.969329      281818.160      0.0
 Lincs-Mat                     229529.791548      918119.166      0.2
 Constraint-V                    9393.938658       75151.509      0.0
 Constraint-Vir                    46.973979        1127.375      0.0
 Virtual Site 3                   615.868824       22787.146      0.0
 Virtual Site 3fd                 791.205144       75164.489      0.0
 Virtual Site 3fad                168.761208       29701.973      0.0
 Virtual Site 3out               1893.632256      164746.006      0.0
 Virtual Site 4fdn                576.418152      146410.211      0.0
 (null)                           476.315439           0.000      0.0
-----------------------------------------------------------------------
 Total                                         595265438.267    100.0
-----------------------------------------------------------------------


 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank

 Computing:          Num   Num      Call    Wall time   Giga-Cycles
                     Ranks Threads  Count      (s)       total sum    %
-----------------------------------------------------------------------
 Vsite constr.          1    1    1085001       96.421      385.432   0.2
 Neighbor search        1    1      27126      915.964     3661.462   2.1
 Force                  1    1    1085001    39317.742   157168.118  91.3
 Vsite spread           1    1    1095852      142.563      569.880   0.3
 Write traj.            1    1      11030      425.764     1701.942   1.0
 Update                 1    1    1085001       96.742      386.714   0.2
 Constraints            1    1    1085001     1311.720     5243.449   3.0
 Rest                                           766.404     3063.612   1.8
-----------------------------------------------------------------------
 Total                                        43073.320   172180.608 100.0
-----------------------------------------------------------------------

               Core t (s)   Wall t (s)        (%)
       Time:    41985.828    43073.320       97.5
                             11h57:53
               (ns/day)    (hour/ns)
Performance:        8.706        2.757
Finished mdrun on rank 0 Tue Nov  4 03:27:26 2014


I know this question may seem unimportant to some, but performance matters
to me.

-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University

Re: [gmx-users] Free energy in implicit solvent

2014-10-30 Thread Nizar Masbukhin
Even using the PLUMED plugin?

On Wed, Oct 29, 2014 at 1:59 PM, David van der Spoel sp...@xray.bmc.uu.se
wrote:

 On 2014-10-29 01:05, Nizar Masbukhin wrote:

 Dear gromacs users,
 Is it possible to calculate free energy in implicit solvent in gromacs?

  It is not possible at all.

 --
 David van der Spoel, Ph.D., Professor of Biology
 Dept. of Cell  Molec. Biol., Uppsala University.
 Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
 sp...@xray.bmc.uu.sehttp://folding.bmc.uu.se
 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.




-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-28 Thread Nizar Masbukhin
Thank you very much Justin and Mark.

On Tue, Oct 28, 2014 at 2:31 AM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 On Mon, Oct 27, 2014 at 6:05 PM, Nizar Masbukhin nizar.fku...@gmail.com
 wrote:

  i dont really understand the point. could you please what do you mean in
  the last reply?
  what command should i use?
 
  if, say i have 72 cores in 9 nodes, and 16 replicas to simulate in
 implicit
  solvent.


 Hi,

 You can only use two MPI ranks per replica if there's a limit of two ranks
 per simulation. So that's 32 ranks total. So something like

 mpirun -np 32 mdrun_mpi -multidir your-16-directories -repl_ex whatever

 after setting up the MPI environment to fill four nodes.
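One hedged illustration of setting that up with Open MPI (the host names,
slot counts, and replica directory names below are assumptions):

# hypothetical host file using 4 of the 9 nodes, 8 slots each
cat > hosts4.txt << 'EOF'
node01 slots=8
node02 slots=8
node03 slots=8
node04 slots=8
EOF
mpirun -np 32 --hostfile hosts4.txt mdrun_mpi -multidir sim{00..15} -replex 2000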

 Mark
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.




-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-28 Thread Nizar Masbukhin
Now, the only thing worrying me is the warning "Turning off pressure coupling
for vacuum system" during NPT equilibration. Can I just ignore this warning,
or should I do something? I didn't intend my system to be in vacuum.
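For context, a hedged note: with pbc = no and implicit solvent there is no
simulation box, so pressure coupling has nothing to act on and mdrun disables
it. Making that explicit in the .mdp avoids the warning (file names assumed):

# with pbc = no there is no box or pressure, so an "NPT" stage reduces to NVT
#   pcoupl = no
grompp -f equil.mdp -c nvt.gro -p topol.top -o equil.tpr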


[gmx-users] Free energy in implicit solvent

2014-10-28 Thread Nizar Masbukhin
Dear gromacs users,
Is it possible to calculate free energy in implicit solvent in gromacs?


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
And how do I use those 2 cores? I think that would double performance, since
I am now running 1 core per replica.

On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/26/14 9:55 AM, Nizar Masbukhin wrote:

 regarding gaining speed in implicit solvent simulation, i have tried to
 parallelize using -ntmpi flag. However gromacs doesn't allow as i use
 group
 cutoff-scheme. Any recommendation how to parallelise implicit solvent
 simulation? I do need parallelise my simulation. I have found the same
 question in this mail list, one suggest use all-vs-all kernel which uses
 zero cut-off.
 This is my test run actually. I intend to run my simulation in cluster
 computer.


 Unless the restriction was lifted at some point, implicit simulations
 won't run on more than 2 cores.  There were issues with constraints that
 led to the limitation.

 -Justin


  On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

  Thanks Justin.
 I have increased the cutoff, and yeah thats work. There were no error
 message anymore. The first 6 nanoseconds, i felt the simulation run
 slower.
 Felt so curious that  simulation run very fast the rest of time.


  Longer cutoffs mean there are more interactions to calculate, but the
 cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
 critical element of the force field itself, though in implicit solvent,
 it
 is common to increase (and never decrease) the cutoff values used in
 explicit solvent.  Physical validity should trump speed any day.

 -Justin


   On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul jalem...@vt.edu
 wrote:




 On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

   Thanks for yor reply, Mark.



 At first i was sure that the problem was table-exension because when I
 enlarge table-extension value, warning message didn't  appear anymore.
 Besides, i have successfully minimized and equilibrated the system
 (indicated by Fmax  emtol reached; and no error messages during
 NVTNPT
 equilibration, except a warning that the Pcouple is turned off in
 vacuum
 system).

 However, the error message appeared without table-extension warning
 makes
 me doubt also about my system stability. Here is my mdp setting.
 Please
 tell me if there are any 'weird' setting, and also kindly
 suggest/recommend
 a better setting.


 *mdp file for Minimisation*


 integrator = steep

 nsteps = 5000

 emtol = 200

 emstep = 0.01

 niter = 20

 nstlog = 1

 nstenergy = 1

 cutoff-scheme = group

 nstlist = 1

 ns_type = simple

 pbc = no

 rlist = 0.5

 coulombtype = cut-off

 rcoulomb = 0.5

 vdw-type = cut-off

 rvdw-switch = 0.8

 rvdw = 0.5

 DispCorr = no

 fourierspacing = 0.12

 pme_order = 6

 ewald_rtol = 1e-06

 epsilon_surface = 0

 optimize_fft = no

 tcoupl = no

 pcoupl = no

 free_energy = yes

 init_lambda = 0.0

 delta_lambda = 0

 foreign_lambda = 0.05

 sc-alpha = 0.5

 sc-power = 1.0

 sc-sigma  = 0.3

 couple-lambda0 = vdw

 couple-lambda1 = none

 couple-intramol = no

 nstdhdl = 10

 gen_vel = no

 constraints = none

 constraint-algorithm = lincs

 continuation = no

 lincs-order  = 12

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05


 *mdp file for NVT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 25

 init-step = 0

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 5

 nstfout = 5

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme= group

 nstlist  = 1

 ns-type = simple

 pbc= no

 rlist= 0.5

 coulombtype = cut-off

 rcoulomb= 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch= 0.8

 rvdw = 0.5

 table-extension = 500

 fourierspacing = 0.12

 fourier-nx  = 0

 fourier-ny = 0

 fourier-nz = 0

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05

 tcoupl = v-rescale

 nsttcouple = -1

 nh-chain-length = 10

 print-nose-hoover-chain-variables = no

 tc-grps = system

 tau-t = 0.1

 ref-t = 298.00

 pcoupl = No

 pcoupltype = Isotropic

 nstpcouple = -1

 tau-p = 1

 refcoord-scaling = No

 gen-vel = yes

 gen-temp = 298.00

 gen-seed  = -1

 constraints= all-bonds

 constraint-algorithm = Lincs

 continuation = no

 Shake-SOR = no

 shake-tol = 0.0001

 lincs-order = 4

 lincs-iter = 1

 lincs-warnangle = 30


 *mdp file for NPT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 50

 init-step = 0

 simulation-part = 1

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 50

 nstfout = 50

 nstlog

Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-27 Thread Nizar Masbukhin
I don't really understand the point. Could you please explain what you mean
in the last reply? What command should I use?

Say I have 72 cores in 9 nodes, and 16 replicas to simulate in implicit
solvent.


On 10/27/14 5:59 AM, Nizar Masbukhin wrote:

 and how to use that 2 cores? i think that would increase performace twice
 as now i am running 1 core per replica.


In the context of REMD, mdrun should figure this out if you issue the
command over 2N processors, where N is the number of replicas.

-Justin
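A hedged illustration for the 8-replica case mentioned earlier in this thread
(directory names assumed): 2N = 16 MPI ranks, i.e. two per replica:

mpirun -np 16 mdrun_mpi -v -multidir md[01234567] -replex 2000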

 On Mon, Oct 27, 2014 at 7:15 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/26/14 9:55 AM, Nizar Masbukhin wrote:

  regarding gaining speed in implicit solvent simulation, i have tried to
 parallelize using -ntmpi flag. However gromacs doesn't allow as i use
 group
 cutoff-scheme. Any recommendation how to parallelise implicit solvent
 simulation? I do need parallelise my simulation. I have found the same
 question in this mail list, one suggest use all-vs-all kernel which uses
 zero cut-off.
 This is my test run actually. I intend to run my simulation in cluster
 computer.


  Unless the restriction was lifted at some point, implicit simulations
 won't run on more than 2 cores.  There were issues with constraints that
 led to the limitation.

 -Justin


   On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul jalem...@vt.edu wrote:




 On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

   Thanks Justin.

 I have increased the cutoff, and yeah thats work. There were no error
 message anymore. The first 6 nanoseconds, i felt the simulation run
 slower.
 Felt so curious that  simulation run very fast the rest of time.


   Longer cutoffs mean there are more interactions to calculate, but the

 cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
 critical element of the force field itself, though in implicit solvent,
 it
 is common to increase (and never decrease) the cutoff values used in
 explicit solvent.  Physical validity should trump speed any day.

 -Justin


On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul jalem...@vt.edu
 wrote:




  On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

Thanks for yor reply, Mark.



 At first i was sure that the problem was table-exension because when
 I
 enlarge table-extension value, warning message didn't  appear
 anymore.
 Besides, i have successfully minimized and equilibrated the system
 (indicated by Fmax  emtol reached; and no error messages during
 NVTNPT
 equilibration, except a warning that the Pcouple is turned off in
 vacuum
 system).

 However, the error message appeared without table-extension warning
 makes
 me doubt also about my system stability. Here is my mdp setting.
 Please
 tell me if there are any 'weird' setting, and also kindly
 suggest/recommend
 a better setting.


 *mdp file for Minimisation*


 integrator = steep

 nsteps = 5000

 emtol = 200

 emstep = 0.01

 niter = 20

 nstlog = 1

 nstenergy = 1

 cutoff-scheme = group

 nstlist = 1

 ns_type = simple

 pbc = no

 rlist = 0.5

 coulombtype = cut-off

 rcoulomb = 0.5

 vdw-type = cut-off

 rvdw-switch = 0.8

 rvdw = 0.5

 DispCorr = no

 fourierspacing = 0.12

 pme_order = 6

 ewald_rtol = 1e-06

 epsilon_surface = 0

 optimize_fft = no

 tcoupl = no

 pcoupl = no

 free_energy = yes

 init_lambda = 0.0

 delta_lambda = 0

 foreign_lambda = 0.05

 sc-alpha = 0.5

 sc-power = 1.0

 sc-sigma  = 0.3

 couple-lambda0 = vdw

 couple-lambda1 = none

 couple-intramol = no

 nstdhdl = 10

 gen_vel = no

 constraints = none

 constraint-algorithm = lincs

 continuation = no

 lincs-order  = 12

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05


 *mdp file for NVT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 25

 init-step = 0

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 5

 nstfout = 5

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme= group

 nstlist  = 1

 ns-type = simple

 pbc= no

 rlist= 0.5

 coulombtype = cut-off

 rcoulomb= 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch= 0.8

 rvdw = 0.5

 table-extension = 500

 fourierspacing = 0.12

 fourier-nx  = 0

 fourier-ny = 0

 fourier-nz = 0

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05

 tcoupl = v-rescale

 nsttcouple = -1

 nh-chain-length = 10

 print-nose-hoover-chain-variables = no

 tc-grps = system

 tau-t = 0.1

 ref-t = 298.00

 pcoupl = No

 pcoupltype = Isotropic

 nstpcouple = -1

 tau-p = 1

 refcoord-scaling = No

 gen-vel = yes

 gen-temp = 298.00

 gen-seed  = -1

 constraints= all-bonds

 constraint-algorithm

Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Nizar Masbukhin
Thanks Justin.
I have increased the cutoff, and yes, that works. There were no error
messages anymore. For the first 6 nanoseconds the simulation seemed to run
slower; curiously, it ran very fast for the rest of the time.

On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

 Thanks for yor reply, Mark.


 At first i was sure that the problem was table-exension because when I
 enlarge table-extension value, warning message didn't  appear anymore.
 Besides, i have successfully minimized and equilibrated the system
 (indicated by Fmax  emtol reached; and no error messages during NVTNPT
 equilibration, except a warning that the Pcouple is turned off in vacuum
 system).

 However, the error message appeared without table-extension warning makes
 me doubt also about my system stability. Here is my mdp setting. Please
 tell me if there are any 'weird' setting, and also kindly
 suggest/recommend
 a better setting.


 *mdp file for Minimisation*


 integrator = steep

 nsteps = 5000

 emtol = 200

 emstep = 0.01

 niter = 20

 nstlog = 1

 nstenergy = 1

 cutoff-scheme = group

 nstlist = 1

 ns_type = simple

 pbc = no

 rlist = 0.5

 coulombtype = cut-off

 rcoulomb = 0.5

 vdw-type = cut-off

 rvdw-switch = 0.8

 rvdw = 0.5

 DispCorr = no

 fourierspacing = 0.12

 pme_order = 6

 ewald_rtol = 1e-06

 epsilon_surface = 0

 optimize_fft = no

 tcoupl = no

 pcoupl = no

 free_energy = yes

 init_lambda = 0.0

 delta_lambda = 0

 foreign_lambda = 0.05

 sc-alpha = 0.5

 sc-power = 1.0

 sc-sigma  = 0.3

 couple-lambda0 = vdw

 couple-lambda1 = none

 couple-intramol = no

 nstdhdl = 10

 gen_vel = no

 constraints = none

 constraint-algorithm = lincs

 continuation = no

 lincs-order  = 12

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05


 *mdp file for NVT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 25

 init-step = 0

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 5

 nstfout = 5

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme= group

 nstlist  = 1

 ns-type = simple

 pbc= no

 rlist= 0.5

 coulombtype = cut-off

 rcoulomb= 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch= 0.8

 rvdw = 0.5

 table-extension = 500

 fourierspacing = 0.12

 fourier-nx  = 0

 fourier-ny = 0

 fourier-nz = 0

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05

 tcoupl = v-rescale

 nsttcouple = -1

 nh-chain-length = 10

 print-nose-hoover-chain-variables = no

 tc-grps = system

 tau-t = 0.1

 ref-t = 298.00

 pcoupl = No

 pcoupltype = Isotropic

 nstpcouple = -1

 tau-p = 1

 refcoord-scaling = No

 gen-vel = yes

 gen-temp = 298.00

 gen-seed  = -1

 constraints= all-bonds

 constraint-algorithm = Lincs

 continuation = no

 Shake-SOR = no

 shake-tol = 0.0001

 lincs-order = 4

 lincs-iter = 1

 lincs-warnangle = 30


 *mdp file for NPT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 50

 init-step = 0

 simulation-part = 1

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 50

 nstfout = 50

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme = group

 nstlist = 1

 ns-type = simple

 pbc = no

 rlist  = 0.5

 coulombtype= cut-off

 rcoulomb = 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch = 0.8

 rvdw= 0.5

 table-extension = 1

 fourierspacing = 0.12

 fourier-nx= 0

 fourier-ny = 0

 fourier-nz = 0

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05

 tcoupl  = Nose-Hoover

 tc-grps = system

 tau-t  = 0.1

 ref-t = 298.00

 pcoupl = parrinello-rahman

 pcoupltype = Isotropic

 tau-p   = 1.0

 compressibility   = 4.5e-5

 ref-p   = 1.0

 refcoord-scaling = No

 gen-vel   = no

 gen-temp = 298.00

 gen-seed   = -1

 constraints  = all-bonds

 constraint-algorithm   = Lincs

 continuation  = yes

 Shake-SOR  = no

 shake-tol  = 0.0001

 lincs-order = 4

 lincs-iter   = 1

 lincs-warnangle  = 30


 *mdp file for MD*


 integrator  = md

 tinit = 0

 dt  = 0.001

 nsteps = 5 ; 1 us

 init-step = 0

 simulation-part= 1

 comm-mode  = Angular

 nstcomm = 100

 comm-grps = system

 bd-fric  = 0

 ld-seed = -1

 nstxout  = 1

 nstvout  = 0

 nstfout   = 0

 nstlog  = 1

 nstcalcenergy = 1

 nstenergy

Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-26 Thread Nizar Masbukhin
Regarding gaining speed in implicit-solvent simulation, I have tried to
parallelize using the -ntmpi flag. However, GROMACS doesn't allow it because
I use the group cutoff-scheme. Any recommendation on how to parallelise an
implicit-solvent simulation? I do need to parallelise my simulation. I have
found the same question in this mailing list; one suggestion was to use the
all-vs-all kernel, which uses zero cut-offs.
This is actually a test run. I intend to run my simulation on a cluster
computer.

On Sun, Oct 26, 2014 at 8:23 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/26/14 9:17 AM, Nizar Masbukhin wrote:

 Thanks Justin.
 I have increased the cutoff, and yeah thats work. There were no error
 message anymore. The first 6 nanoseconds, i felt the simulation run
 slower.
 Felt so curious that  simulation run very fast the rest of time.


 Longer cutoffs mean there are more interactions to calculate, but the
 cutoffs aren't to be toyed with arbitrarily to gain speed.  They are a
 critical element of the force field itself, though in implicit solvent, it
 is common to increase (and never decrease) the cutoff values used in
 explicit solvent.  Physical validity should trump speed any day.

 -Justin


  On Fri, Oct 24, 2014 at 7:37 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/24/14 8:31 AM, Nizar Masbukhin wrote:

  Thanks for yor reply, Mark.


 At first i was sure that the problem was table-exension because when I
 enlarge table-extension value, warning message didn't  appear anymore.
 Besides, i have successfully minimized and equilibrated the system
 (indicated by Fmax  emtol reached; and no error messages during NVTNPT
 equilibration, except a warning that the Pcouple is turned off in vacuum
 system).

 However, the error message appeared without table-extension warning
 makes
 me doubt also about my system stability. Here is my mdp setting. Please
 tell me if there are any 'weird' setting, and also kindly
 suggest/recommend
 a better setting.


 *mdp file for Minimisation*


 integrator = steep

 nsteps = 5000

 emtol = 200

 emstep = 0.01

 niter = 20

 nstlog = 1

 nstenergy = 1

 cutoff-scheme = group

 nstlist = 1

 ns_type = simple

 pbc = no

 rlist = 0.5

 coulombtype = cut-off

 rcoulomb = 0.5

 vdw-type = cut-off

 rvdw-switch = 0.8

 rvdw = 0.5

 DispCorr = no

 fourierspacing = 0.12

 pme_order = 6

 ewald_rtol = 1e-06

 epsilon_surface = 0

 optimize_fft = no

 tcoupl = no

 pcoupl = no

 free_energy = yes

 init_lambda = 0.0

 delta_lambda = 0

 foreign_lambda = 0.05

 sc-alpha = 0.5

 sc-power = 1.0

 sc-sigma  = 0.3

 couple-lambda0 = vdw

 couple-lambda1 = none

 couple-intramol = no

 nstdhdl = 10

 gen_vel = no

 constraints = none

 constraint-algorithm = lincs

 continuation = no

 lincs-order  = 12

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05


 *mdp file for NVT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 25

 init-step = 0

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 5

 nstfout = 5

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme= group

 nstlist  = 1

 ns-type = simple

 pbc= no

 rlist= 0.5

 coulombtype = cut-off

 rcoulomb= 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch= 0.8

 rvdw = 0.5

 table-extension = 500

 fourierspacing = 0.12

 fourier-nx  = 0

 fourier-ny = 0

 fourier-nz = 0

 implicit-solvent = GBSA

 gb-algorithm = still

 nstgbradii = 1

 rgbradii = 0.5

 gb-epsilon-solvent = 80

 sa-algorithm = Ace-approximation

 sa-surface-tension = 2.05

 tcoupl = v-rescale

 nsttcouple = -1

 nh-chain-length = 10

 print-nose-hoover-chain-variables = no

 tc-grps = system

 tau-t = 0.1

 ref-t = 298.00

 pcoupl = No

 pcoupltype = Isotropic

 nstpcouple = -1

 tau-p = 1

 refcoord-scaling = No

 gen-vel = yes

 gen-temp = 298.00

 gen-seed  = -1

 constraints= all-bonds

 constraint-algorithm = Lincs

 continuation = no

 Shake-SOR = no

 shake-tol = 0.0001

 lincs-order = 4

 lincs-iter = 1

 lincs-warnangle = 30


 *mdp file for NPT equilibration*


 define = -DPOSRES

 integrator = md

 tinit = 0

 dt = 0.002

 nsteps = 50

 init-step = 0

 simulation-part = 1

 comm-mode = angular

 nstcomm = 100

 bd-fric = 0

 ld-seed = -1

 nstxout = 1000

 nstvout = 50

 nstfout = 50

 nstlog = 100

 nstcalcenergy = 100

 nstenergy = 1000

 nstxtcout = 100

 xtc-precision = 1000

 xtc-grps = system

 energygrps = system

 cutoff-scheme = group

 nstlist = 1

 ns-type = simple

 pbc = no

 rlist  = 0.5

 coulombtype= cut-off

 rcoulomb = 0.5

 vdw-type = Cut-off

 vdw-modifier = Potential-shift-Verlet

 rvdw-switch = 0.8

 rvdw= 0.5

 table-extension = 1

 fourierspacing = 0.12

 fourier-nx= 0

 fourier-ny = 0

 fourier-nz

Re: [gmx-users] installing on macbook

2014-10-24 Thread Nizar Masbukhin
Hi Suraya. I have successfully installed GROMACS 4.5, 4.6, and 5.0 on my
MacBook Pro running OS X Mavericks, even with CUDA support.
I am not using the Apple compiler, because the Apple compiler in Mavericks
(clang 3.4) doesn't support OpenMP, which causes many errors when compiling
GROMACS (my experience), so I use the GCC 4.7 compiler instead. Here are my
steps to install GROMACS on my MacBook Pro (mid 2009).

FIRST:
- Install GCC/G++ 4.7. It is very easy to install gcc/g++ 4.7 via MacPorts
(www.macports.org), because it is almost automatic and all dependencies will
be installed.
SECOND:
- Download GROMACS 5.0/5.0.1/5.0.2.
- Extract the downloaded file.
- Open the extracted folder in Terminal. If you are not familiar with how to
open a folder in Terminal, just right-click - Services - New Terminal at
Folder (if this option doesn't exist, go to System Preferences - Keyboard -
Shortcuts - Services - New Terminal at Folder).
- Create a build directory inside it and change into it (mkdir build && cd
build).
- Then type: cmake .. -DCMAKE_C_COMPILER=/opt/local/bin/gcc
-DCMAKE_CXX_COMPILER=/opt/local/bin/g++ -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_BUILD_OWN_FFTW=ON and press
return/enter.
- Everything should be set up, and FFTW is also automatically downloaded and
compiled.
- Then type make -j 2 (if your MacBook has 2 cores).
- Then make install.
- Run make check if you compile 5.0.2.
- Then type export PATH=$PATH:/usr/local/gromacs/bin/ every time you want to
start GROMACS. (A consolidated sketch of these commands follows below.)

that's it. good luck
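A consolidated, hedged sketch of the same build (paths, the CUDA location,
and the install prefix are assumptions; adjust them to your machine):

cd gromacs-5.0.2            # the extracted source directory
mkdir build && cd build     # out-of-source build directory
cmake .. -DCMAKE_C_COMPILER=/opt/local/bin/gcc \
         -DCMAKE_CXX_COMPILER=/opt/local/bin/g++ \
         -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
         -DGMX_BUILD_OWN_FFTW=ON
make -j 2
make check                  # optional self-tests
sudo make install           # installs to /usr/local/gromacs by default
source /usr/local/gromacs/bin/GMXRC   # puts the GROMACS tools on PATH for this shell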



On Fri, Oct 24, 2014 at 6:03 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/24/14 6:32 AM, Suraya Abdul Sani wrote:

 hi.. i have problem in installing gromacs in my macbook pro ( personal pc)
 .. can anyone help me with this??


 Not unless you tell us exactly what you're doing - exact cmake command,
 exact error messages, etc.

 In general, follow http://www.gromacs.org/Documentation/Installation_
 Instructions

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==

 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.




-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University


Re: [gmx-users] error in the middle of running mdrun_mpi

2014-10-24 Thread Nizar Masbukhin
Thanks for your reply, Mark.


At first I was sure that the problem was table-extension, because when I
enlarged the table-extension value, the warning message didn't appear anymore.
Besides, I had successfully minimized and equilibrated the system (indicated
by Fmax < emtol being reached, and no error messages during NVT and NPT
equilibration, except a warning that pressure coupling is turned off in a
vacuum system).

However, the error message appearing without the table-extension warning
makes me doubt my system's stability as well. Here are my mdp settings.
Please tell me if there is any 'weird' setting, and also kindly
suggest/recommend better settings.


*mdp file for Minimisation*

integrator = steep

nsteps = 5000

emtol = 200

emstep = 0.01

niter = 20

nstlog = 1

nstenergy = 1

cutoff-scheme = group

nstlist = 1

ns_type = simple

pbc = no

rlist = 0.5

coulombtype = cut-off

rcoulomb = 0.5

vdw-type = cut-off

rvdw-switch = 0.8

rvdw = 0.5

DispCorr = no

fourierspacing = 0.12

pme_order = 6

ewald_rtol = 1e-06

epsilon_surface = 0

optimize_fft = no

tcoupl = no

pcoupl = no

free_energy = yes

init_lambda = 0.0

delta_lambda = 0

foreign_lambda = 0.05

sc-alpha = 0.5

sc-power = 1.0

sc-sigma  = 0.3

couple-lambda0 = vdw

couple-lambda1 = none

couple-intramol = no

nstdhdl = 10

gen_vel = no

constraints = none

constraint-algorithm = lincs

continuation = no

lincs-order  = 12

implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05


*mdp file for NVT equilibration*

define = -DPOSRES

integrator = md

tinit = 0

dt = 0.002

nsteps = 25

init-step = 0

comm-mode = angular

nstcomm = 100

bd-fric = 0

ld-seed = -1

nstxout = 1000

nstvout = 5

nstfout = 5

nstlog = 100

nstcalcenergy = 100

nstenergy = 1000

nstxtcout = 100

xtc-precision = 1000

xtc-grps = system

energygrps = system

cutoff-scheme= group

nstlist  = 1

ns-type = simple

pbc= no

rlist= 0.5

coulombtype = cut-off

rcoulomb= 0.5

vdw-type = Cut-off

vdw-modifier = Potential-shift-Verlet

rvdw-switch= 0.8

rvdw = 0.5

table-extension = 500

fourierspacing = 0.12

fourier-nx  = 0

fourier-ny = 0

fourier-nz = 0

implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05

tcoupl = v-rescale

nsttcouple = -1

nh-chain-length = 10

print-nose-hoover-chain-variables = no

tc-grps = system

tau-t = 0.1

ref-t = 298.00

pcoupl = No

pcoupltype = Isotropic

nstpcouple = -1

tau-p = 1

refcoord-scaling = No

gen-vel = yes

gen-temp = 298.00

gen-seed  = -1

constraints= all-bonds

constraint-algorithm = Lincs

continuation = no

Shake-SOR = no

shake-tol = 0.0001

lincs-order = 4

lincs-iter = 1

lincs-warnangle = 30


*mdp file for NPT equilibration*

define = -DPOSRES

integrator = md

tinit = 0

dt = 0.002

nsteps = 50

init-step = 0

simulation-part = 1

comm-mode = angular

nstcomm = 100

bd-fric = 0

ld-seed = -1

nstxout = 1000

nstvout = 50

nstfout = 50

nstlog = 100

nstcalcenergy = 100

nstenergy = 1000

nstxtcout = 100

xtc-precision = 1000

xtc-grps = system

energygrps = system

cutoff-scheme = group

nstlist = 1

ns-type = simple

pbc = no

rlist  = 0.5

coulombtype= cut-off

rcoulomb = 0.5

vdw-type = Cut-off

vdw-modifier = Potential-shift-Verlet

rvdw-switch = 0.8

rvdw= 0.5

table-extension = 1

fourierspacing = 0.12

fourier-nx= 0

fourier-ny = 0

fourier-nz = 0

implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05

tcoupl  = Nose-Hoover

tc-grps = system

tau-t  = 0.1

ref-t = 298.00

pcoupl = parrinello-rahman

pcoupltype = Isotropic

tau-p   = 1.0

compressibility   = 4.5e-5

ref-p   = 1.0

refcoord-scaling = No

gen-vel   = no

gen-temp = 298.00

gen-seed   = -1

constraints  = all-bonds

constraint-algorithm   = Lincs

continuation  = yes

Shake-SOR  = no

shake-tol  = 0.0001

lincs-order = 4

lincs-iter   = 1

lincs-warnangle  = 30


*mdp file for MD*

integrator  = md

tinit = 0

dt  = 0.001

nsteps = 5 ; 1 us

init-step = 0

simulation-part= 1

comm-mode  = Angular

nstcomm = 100

comm-grps = system

bd-fric  = 0

ld-seed = -1

nstxout  = 1

nstvout  = 0

nstfout   = 0

nstlog  = 1

nstcalcenergy = 1

nstenergy = 1

nstxtcout  = 0

xtc-precision  = 1000

xtc-grps  = system

energygrps  = system

cutoff-scheme  = group

nstlist  = 10

ns-type  = simple

pbc   = no

rlist= 0.5

coulombtype= cut-off

rcoulomb  = 0.5

vdw-type  = Cut-off

vdw-modifier  = Potential-shift-Verlet

rvdw-switch = 0.8

rvdw  = 0.5

DispCorr = No

table-extension  = 500

fourierspacing = 0.12

fourier-nx  = 0

fourier-ny = 0

fourier-nz = 0

implicit-solvent = GBSA

;implicit-solvent = GBSA

gb-algorithm = still

nstgbradii = 1

rgbradii = 0.5

gb-epsilon-solvent = 80


[gmx-users] error in the middle of running mdrun_mpi

2014-10-23 Thread Nizar Masbukhin
Dear gromacs users,

I am trying to simulate protein folding using the REMD sampling method in
implicit solvent. I run my simulation with MPI-compiled GROMACS 5.0.2 on a
single node. I have successfully minimized and equilibrated (NVT-constrained
and NPT-constrained) my system. However, in the middle of the mdrun_mpi
process, the following warning messages appeared.

starting mdrun 'Protein'
5 steps, 50.0 ps.
[the same two lines are printed once per replica]

step 2873100, will finish Sat Nov  1 10:03:07 2014

WARNING: Listed nonbonded interaction between particles 192 and 197
at distance 16.773 which is larger than the table limit 10.500 nm.
This is likely either a 1,4 interaction, or a listed interaction inside
a smaller molecule you are decoupling during a free energy calculation.
Since interactions at distances beyond the table cannot be computed,
they are skipped until they are inside the table limit again. You will
only see this message once, even if it occurs for several interactions.

IMPORTANT: This should not happen in a stable simulation, so there is
probably something wrong with your system. Only change the table-extension
distance in the mdp file if you are really sure that is the reason.

[nizarPC:07548] *** Process received signal ***
[nizarPC:07548] Signal: Segmentation fault (11)
[nizarPC:07548] Signal code: Address not mapped (1)
[nizarPC:07548] Failing at address: 0x1ef8d90
[nizarPC:07548] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36c30) [0x7f610bc9fc30]
[nizarPC:07548] [ 1] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(nb_kernel_ElecGB_VdwLJ_GeomP1P1_F_avx_256_single+0x836) [0x7f610d3a2466]
[nizarPC:07548] [ 2] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_nonbonded+0x240) [0x7f610d235a30]
[nizarPC:07548] [ 3] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_lowlevel+0x1d3e) [0x7f610d97bebe]
[nizarPC:07548] [ 4] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_cutsGROUP+0x1510) [0x7f610d91bbe0]
[nizarPC:07548] [ 5] mdrun_mpi(do_md+0x57c1) [0x42e5e1]
[nizarPC:07548] [ 6] mdrun_mpi(mdrunner+0x12a1) [0x413af1]
[nizarPC:07548] [ 7] mdrun_mpi(_Z9gmx_mdruniPPc+0x18e5) [0x4337b5]
[nizarPC:07548] [ 8] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(_ZN3gmx24CommandLineModuleManager3runEiPPc+0x92) [0x7f610ce15a42]
[nizarPC:07548] [ 9] mdrun_mpi(main+0x7c) [0x40cb8c]
[nizarPC:07548] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f610bc8aec5]
[nizarPC:07548] [11] mdrun_mpi() [0x40ccce]
[nizarPC:07548] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 5 with PID 7548 on node nizarPC exited on
signal 11 (Segmentation fault).

I have increased the table-extension to 500.00 (how large should this value
be?) and ran grompp and mdrun again. There was no warning message about
table-extension anymore; however, this error message appeared:

starting mdrun 'Protein'
5 steps, 50.0 ps.
[the same two lines are printed once per replica]

step 4142800, will finish Sat Nov  1 10:35:55 2014

[nizarPC:09984] *** Process received signal ***
[nizarPC:09984] Signal: Segmentation fault (11)
[nizarPC:09984] Signal code: Address not mapped (1)
[nizarPC:09984] Failing at address: 0x1464040
[nizarPC:09984] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36c30) [0x7fa764b65c30]
[nizarPC:09984] [ 1] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(nb_kernel_ElecGB_VdwLJ_GeomP1P1_F_avx_256_single+0x85f) [0x7fa76626848f]
[nizarPC:09984] [ 2] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_nonbonded+0x240) [0x7fa7660fba30]
[nizarPC:09984] [ 3] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_lowlevel+0x1d3e) [0x7fa766841ebe]
[nizarPC:09984] [ 4] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(do_force_cutsGROUP+0x1510) [0x7fa7667e1be0]
[nizarPC:09984] [ 5] mdrun_mpi(do_md+0x57c1) [0x42e5e1]
[nizarPC:09984] [ 6] mdrun_mpi(mdrunner+0x12a1) [0x413af1]
[nizarPC:09984] [ 7] mdrun_mpi(_Z9gmx_mdruniPPc+0x18e5) [0x4337b5]
[nizarPC:09984] [ 8] /usr/local/gromacs/bin/../lib/libgromacs_mpi.so.0(_ZN3gmx24CommandLineModuleManager3runEiPPc+0x92) [0x7fa765cdba42]
[nizarPC:09984] [ 9] mdrun_mpi(main+0x7c) [0x40cb8c]
[nizarPC:09984] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fa764b50ec5]
[nizarPC:09984] [11] mdrun_mpi()

[gmx-users] Add missing residue

2014-09-28 Thread Nizar Masbukhin
Dear users,

I'm going to simulate a protein. The PDB file says that this protein has two
chains, and has missing residues and atoms. When I try pdb2gmx, error
messages occur; they seem to be due to the missing residues.
My question is: how can I add the missing residues and atoms to the PDB file
so that I can use it?

Thanks


Re: [gmx-users] Add missing residue

2014-09-28 Thread Nizar Masbukhin
Thanks Mark,
I will try Swiss-PdbViewer.

On Sun, Sep 28, 2014 at 9:45 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 Hi,

 There are some suggestions at
 http://www.gromacs.org/Documentation/File_Formats/Coordinate_File

 Mark

 On Sun, Sep 28, 2014 at 4:35 PM, Nizar Masbukhin nizar.fku...@gmail.com
 wrote:

  Dear users,
 
  I'm going to simulate a protein. On the pdb file contents, it's said that
  this protein has two chains, and missing residues and atoms. when I try
 to
  pdb2gmx, the error messages occured. It seemed due tue the missing
 residue
  so that i got the message error.
  My question is: How can I add missing resiudes and atoms to the pdb files
  so that i can use it?
 
  Thanks
  --
  Gromacs Users mailing list
 
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
  posting!
 
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
  * For (un)subscribe requests visit
  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
  send a mail to gmx-users-requ...@gromacs.org.
 
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.




-- 
Thanks
My Best Regards, Nizar
Medical Faculty of Brawijaya University


[gmx-users] overclocked cpu on gromacs

2014-09-05 Thread Nizar Masbukhin
Hello all, I am wondering whether using an overclocked CPU with GROMACS is
safe, in terms of result reliability.

Also, is there any benefit from a Xeon processor compared to a non-Xeon one?
And is there any benefit from ECC memory for GROMACS?

Thanks


[gmx-users] error message in mdrun: Check for bad contacts and/or reduce the timestep if appropriate.

2014-05-04 Thread Nizar Masbukhin
Hi friends,
I am calculating the free energy of a protein. I got this message when I ran
mdrun. My command was: mdrun -deffnm npt -nt 1

Step 2, time 0.004 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.22, max 0.000500 (between atoms 2240 and 2243)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   2844   2845   39.80.1105   0.1090  0.1090

step 2: Water molecule starting at atom 41058 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

I have reduced nsteps from 5000 to 1, but the same error message appeared
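For what it's worth, a hedged sketch of one remedy the error message itself
suggests: reduce the timestep rather than the number of steps. nsteps only
controls how long the run is, while dt is the integration timestep (file
names and values below are illustrative assumptions):

# in the .mdp, lower the timestep, e.g.
#   dt = 0.001    ; instead of 0.002
# then rebuild the run input and rerun
grompp -f npt.mdp -c em.gro -p topol.top -o npt.tpr
mdrun -deffnm npt -nt 1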
=
My Best Regards, Nizar
Medical Faculty of Brawijaya University, Malang, Indonesia



[gmx-users] problem with pdb2gmx

2014-05-01 Thread Nizar Masbukhin
Hi, friends. I'm new to GROMACS. I'm trying to learn it from the tutorial at
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/index.html.
Whenever I run the pdb2gmx command, I always get this message:

Using the Oplsaa force field in directory oplsaa.ff

Opening force field file 
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.r2b
Reading 1AKI.pdb...

---
Program pdb2gmx, VERSION 4.5.5
Source code file: futil.c, line: 491

File input/output error:
1AKI.pdb
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
--- 

What is wrong? Can anyone explain?

Thanks for your help


[gmx-users] problem with pdb2gmx

2014-05-01 Thread Nizar Masbukhin
Hi, friends. I'm new to GROMACS. I'm trying to learn it from the tutorial at
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/index.html.
Whenever I run the pdb2gmx command, I always get this message:

Using the Oplsaa force field in directory oplsaa.ff

Opening force field file 
/usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.r2b
Reading 1AKI.pdb...

---
Program pdb2gmx, VERSION 4.5.5
Source code file: futil.c, line: 491

File input/output error:
1AKI.pdb
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
--- 

What is wrong? Can anyone explain, and what is the solution?

Thanks for your help
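A hedged check, since this particular error usually means pdb2gmx cannot find
or open the named file in the current working directory (the download URL
below is an assumption):

ls -l 1AKI.pdb        # confirm the file is present and readable here
# if it is missing, fetch it again, e.g. from the RCSB
wget https://files.rcsb.org/download/1AKI.pdb
pdb2gmx -f 1AKI.pdb -o 1AKI_processed.gro -water spce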


Re: [gmx-users] problem with pdb2gmx

2014-05-01 Thread Nizar Masbukhin
I wrote this:

pdb2gmx -f 1AKI.pdb -o 1AKI_processed.gro -water spce

On May 1, 2014, at 5:28 PM, Chandan Choudhury iitd...@gmail.com wrote:

 On Thu, May 1, 2014 at 3:32 PM, Nizar Masbukhin nizar.fku...@gmail.comwrote:
 
 Hi, friends. I’m new to gromacs. I’m trying to learn it based on
 tutorial
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/index.html
 .
 Whenever i input pdb2gmx line, i always get this message:
 
 Using the Oplsaa force field in directory oplsaa.ff
 
 Opening force field file /usr/local/gromacs/share/gromacs/top/oplsaa.ff/
 aminoacids.r2b
 Reading 1AKI.pdb...
 
 ---
 Program pdb2gmx, VERSION 4.5.5
 Source code file: futil.c, line: 491
 
 File input/output error:
 1AKI.pdb
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---
 
 What is wrong? can anyone explain? and what is the solution?
 
 
 Please copy and paste your command.
 
 It seems that there is some problem in the pdb file (1AKI). You can check
 its contents.
 
 Chandan
 
 
 Thank for your help
 --
 Gromacs Users mailing list
 
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!
 
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.
 
 
 
 
 -- 
 
 --
 Chandan Kumar Choudhury
 National Chemical Laboratory, Pune
 India
 -- 
 Gromacs Users mailing list
 
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
 
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
 mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.