[gmx-users] g_mindist for hydrophobic interactions

2014-09-08 Thread Ca C .
Dear All,
I need to verify whether some hydrophobic residues conserve their interactions 
during the simulation and cross-talk for the receptor's transactivation.
Is g_mindist a good tool for this purpose? Do you have other suggestions?
Also, should I use trjconv for PBC treatment before running g_mindist?

Thank you in advance
  


[gmx-users] (no subject)

2014-09-08 Thread Somayeh Alimohammadi
Dear gmx users
I am performing a simulation with GROMACS. While building the ligand .itp file
with PRODRG, I get an error that the boron atom is not supported by this program.
Do you have any suggestion for solving this problem?
regards
-- 
 Somayeh Alimohammadi
Ph.D Student of Medical Nanotechnology
 Shahid Beheshti University of Medical Sciences
Tehran-Iran


Re: [gmx-users] g_mindist for hydrophobic interactions

2014-09-08 Thread Erik Marklund
Another option is g_hbond -contact.
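
As a rough sketch of such an analysis (file names, the index group holding the
hydrophobic residues, and the 0.45 nm cutoff are only placeholders, not values
from this thread), one could first remove PBC jumps and then count contacts:

trjconv -s md.tpr -f md.xtc -o md_noPBC.xtc -pbc mol -center
g_mindist -s md.tpr -f md_noPBC.xtc -n index.ndx -od mindist.xvg -on numcont.xvg -d 0.45
g_hbond -s md.tpr -f md_noPBC.xtc -n index.ndx -contact -r 0.45 -num contacts.xvg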

On 8 Sep 2014, at 09:25, Ca C. devi...@hotmail.com wrote:

 Dear All,
 I have to verify if some hydrophobic residues, during the simulation, 
 conserve their interactions and make a cross talk for receptor's 
 transactivation.
 Is g_mindist a good tool for this purpose? Do you have more suggestions?
 Moreover, should I use trjconv for pbc treatment before running g_mindist?
 
 Thank you in advance
 


[gmx-users] GPU job failed

2014-09-08 Thread Albert

Hello:

I am trying to use the following command in Gromacs-5.0.1:

mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g 
npt2.log -gpu_id 01 -ntomp 10



but it always failed with messages:


2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible


2 GPUs user-selected for this run.
Mapping of GPUs to the 1 PP rank in this node: #0, #1


---
Program mdrun_mpi, VERSION 5.0.1
Source code file: 
/soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, line: 
359


Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes 
and GPUs per node.
mdrun_mpi was started with 1 PP MPI process per node, but you provided 2 
GPUs.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



However, this command works fine in Gromacs-4.6.5, and I don't know why 
it failed in 5.0.1. Does anybody have any idea?


thx a lot

Albert


[gmx-users] g-select failed

2014-09-08 Thread Albert

Hello:

I am trying to make two groups for my lipid system with g_select, using the 
command line:


g_select_mpi -sf select.dat -f em.gro

here is the content of select.dat:

up = z > 80;
down = z < 80;

but it failed with messages:


Program: gmx select, VERSION 5.0.1
Source file: src/gromacs/commandline/cmdlineparser.cpp (line 232)
Function:void gmx::CommandLineParser::parse(int*, char**)
Error in user input:
Invalid command-line options
  In command-line option -sf
Error in adding selections from file 'select.dat'
  Too few selections provided

thank you very much.

Albert




[gmx-users] Status of MD simulation

2014-09-08 Thread ankit agrawal
hi
I am running a 5 ns simulation using the mdrun command, so it will take about a
day to complete. I want to know how to check the status of the simulation
during the run, to see whether it is going in the right direction or not.

thanks

regards
ankit


Re: [gmx-users] g-select failed

2014-09-08 Thread Teemu Murtola
Hi,

On Mon, Sep 8, 2014 at 4:47 PM, Albert mailmd2...@gmail.com wrote:

 I am trying to make two groups for my lipids sytem by g_select with
 command line:

 g_select_mpi -sf select.dat -f em.gro

 here is the content of select.dat:

 up = z > 80;
 down = z < 80;

 but it failed with messages: ...


Your selection file only declares two selection variables (up and
down), but no actual selections. Since g_select expects to get some
selections, it gives the error message. You can specify your selections
like this if you want to give them names:

up z > 80;
down z < 80;

Best regards,
Teemu
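
To then turn those named selections into index groups, a minimal sketch (file
names are only examples) would be:

g_select_mpi -sf select.dat -f em.gro -on index.ndx

which writes the groups "up" and "down" to index.ndx for use with other tools.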


[gmx-users] Integrator problem

2014-09-08 Thread Lovika Moudgil
Hi everyone. I want to ask one question. In my .mdp file, if I use the md
integrator for energy minimisation, then the system is fine, but if I use the
steep integrator, my system gets an error about too large a force on one atom.
I am not clear why this is happening. Can anybody guide me, please?

Regards
Lovika


Re: [gmx-users] Status of MD simulation

2014-09-08 Thread Mark Abraham
Make a temporary copy of the files (generally not necessary, but might
help) and observe whatever suits you.

Mark
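
As a minimal sketch of that (file names are only examples), working on copies
so the running job is untouched:

mkdir check && cp md.log ener.edr check/ && cd check
tail -n 50 md.log
g_energy -f ener.edr -o energy.xvg    # select e.g. Potential, Temperature, Pressure

The log shows the current step and performance, and g_energy lets you plot
whether the basic observables look reasonable so far.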
On Sep 8, 2014 3:50 PM, ankit agrawal aka...@gmail.com wrote:

 hi
 I am running a 5ns simulation using mdrun command. So this will take a day
 to complete. So I want to know that how to check the status of simulation
 in between the run whether it is going in right direction or not?

 thanks

 regards
 ankit


Re: [gmx-users] Status of MD simulation

2014-09-08 Thread Lovika Moudgil
Hi... I think the best way is to check the log file. If I'm wrong, please do
correct me!

Regards
Lovika
On 8 Sep 2014 19:21, ankit agrawal aka...@gmail.com wrote:

 hi
 I am running a 5ns simulation using mdrun command. So this will take a day
 to complete. So I want to know that how to check the status of simulation
 in between the run whether it is going in right direction or not?

 thanks

 regards
 ankit


Re: [gmx-users] Integrator problem

2014-09-08 Thread Mark Abraham
The md integrator does MD, not EM...

Mark
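
For reference, a minimal steepest-descent EM block in the .mdp would look
roughly like this (the values are common choices, not specific to this system):

integrator  = steep     ; steepest-descent energy minimisation
emtol       = 1000.0    ; stop when the maximum force drops below this (kJ mol^-1 nm^-1)
emstep      = 0.01      ; initial step size (nm)
nsteps      = 50000     ; maximum number of minimisation steps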
On Sep 8, 2014 4:11 PM, Lovika Moudgil lovikamoud...@gmail.com wrote:

 Hi everyoneI want to ask one question...In my .mdp file if I use md
 intergrator for energy minimisation .then system is fine...but if I use
 steep integrator...my system got error of more force on one atomI  not
 clear why this is happeingcan any body guide me please..

 Regards
 Lovika


Re: [gmx-users] g-select failed

2014-09-08 Thread Albert

Hello Teemu:

thanks a lot for the helpful advice.

It works now. If I would like to select protein and z > 80, I use the 
following select.dat file:


up protein and z > 80;
down protein and z < 80;

but it failed with messages:

  In command-line option -sf
Error in parsing selections from file 'select.dat'
  syntax error
  invalid selection 'up protein and z > 80'

do you have any idea how to add additional options?

thx again

Albert
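
One possible variant to try (untested here; the selection syntax help describes
naming a selection by putting a quoted string in front of it) would be:

"up"   protein and z > 80;
"down" protein and z < 80;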


On 09/08/2014 03:59 PM, Teemu Murtola wrote:

Your selection file only declares two selection variables (up and
down), but no actual selections. Since g_select expects to get some
selections, it gives the error message. You can specify your selections
like this if you want to give them names:

up z > 80;
down z < 80;

Best regards,
Teemu




Re: [gmx-users] GPU and MPI

2014-09-08 Thread Da-Wei Li
I have found my mistake, and hopefully this information is useful.

This is caused by pinning of OpenMP threads by MPI. By default on our cluster,
all OpenMP threads belonging to an MPI rank run on a single core.
I didn't realize this partly because GROMACS's thread-MPI (which is
used when I run GROMACS on one node only) doesn't have this problem.

best,

dawei
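
As a sketch of the kind of launch line this implies (the binding flags below
are Open MPI 1.8-style and site-dependent, so check your own MPI's
documentation):

mpirun -np 4 --map-by ppr:2:node --bind-to none mdrun_mpi -ntomp 6 -pin on

Here the MPI library is told not to confine each rank to a single core, and
mdrun's own -pin option takes care of thread placement instead.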

On Tue, Sep 2, 2014 at 9:23 AM, Da-Wei Li lida...@gmail.com wrote:

 I did a few more tests. It was unexpected that mixed use of MPI and
 OpenMP on 2 nodes causes a dramatic efficiency loss. That is, my previous
 slowdown was not caused by the GPU.

 BTW, each node on our cluster has two X5650 XEON cpu (6 cores each) and
 two Nvidia M2070 GPU (not K40 as I thought before).


 Test 1: 12 cores on one node, 12 MPI ranks, 50 ns/day
 Test 2: 12 cores on one node, 2 MPI ranks, 6 OpenMP threads per rank, 41 ns/day
 Test 3: 24 cores on two nodes, 24 MPI ranks, 80 ns/day
 Test 4: 24 cores on two nodes, 4 MPI ranks, 6 OpenMP threads per rank, 15 ns/day



 dawei


 On Tue, Sep 2, 2014 at 6:20 AM, Szilárd Páll pall.szil...@gmail.com
 wrote:

 You may want to try other settings between 4x6 and 24x1 too, e.g. 12x2
 or 6x4 - especially if you have a dual-socket 6-core machine with
 HyperThreading. In my experience, using as many ranks as hardware
 threads with HT in GPU runs results in big slowdown compared to either
 not using HT (i.e. 12x1) or using 2 threads/rank (12x2).

 Cheers,
 --
 Szilárd


 On Mon, Sep 1, 2014 at 5:13 PM, Carsten Kutzner ckut...@gwdg.de wrote:
 
  On 01 Sep 2014, at 15:58, Da-Wei Li lida...@gmail.com wrote:
 
  No. With GPU, both domain and PME domain are decomposited by 4X1X1,
 because
  I use 4 MPI ranks, in line with 4 GPUs. W/o GPU, domain decomposition
 is
  20X1X1 and PME is 4X1X1.
  So the difference in performance could be due to the different DD and
  PME/PP settings. I would try to use exactly the same settings with and
  without GPU. With GPUs, you then would need to specify something like
 
  mpirun -n 24 mdrun -dd 20 1 1 -npme 4 -gpu_id 01
 
  So you get 10 DD plus 2 PME ranks per node and map the first 5 DD ranks
  to GPU id 0, and the last 5 DD ranks to GPU id 1.
 
  Carsten
 
 
 
  dawei
 
 
  On Mon, Sep 1, 2014 at 8:39 AM, Carsten Kutzner ckut...@gwdg.de
 wrote:
 
  Hi Dawei,
 
  on two nodes, regarding the cases with and without GPUs,
  do you use the same domain decomposition in both cases?
 
  Carsten
 
 
  On 01 Sep 2014, at 14:30, Da-Wei Li lida...@gmail.com wrote:
 
  I have added  -resethway but still the same result. Use two GPU
 and 12
  cores distributed in 2 nodes will result 33 ns/day, that is, it is
 about
  3
  time slower than MD run on one node (2GPU+12core).
 
  I have no idea what is wrong.
 
 
  dawei
 
 
  On Mon, Sep 1, 2014 at 5:34 AM, Carsten Kutzner ckut...@gwdg.de
 wrote:
 
  Hi,
 
  take a look at mdrun’s hidden but sometimes useful options:
 
  mdrun -h -hiddden
 
  Carsten
 
 
  On 01 Sep 2014, at 11:07, Oliver Schillinger 
  o.schillin...@fz-juelich.de
  wrote:
 
  Hi,
  I did not know about the -resethway command line switch to mdrun.
  Why is it nowhere documented?
  Or am I blind/stupid?
  Cheers,
  Oliver
 
  On 08/29/2014 05:15 PM, Carsten Kutzner wrote:
  Hi Dawei,
 
  On 29 Aug 2014, at 16:52, Da-Wei Li lida...@gmail.com wrote:
 
  Dear Carsten
 
  Thanks for the clarification. Here it is my benchmark for a small
  protein
  system (18k atoms).
 
  (1) 1 node (12 cores/node, no GPU):   50 ns/day
  (2) 2 nodes (12 cores/node, no GPU): 80 ns/day
  (3) 1 node (12 cores/node, 2 K40 GPUs/node): 100 ns/day
  (4) 2 nodes (12 cores/node, 2 K40 GPUs/node): 40 ns/day
 
 
  I send out this question because the benchmark 4 above is very
  suspicious.
  Indeed, if you get 80 ns/day without GPUs, then it should not be
 less
  with GPUs. For how many time steps do you run each of the
  benchmarks? Do you use the -resethway command line switch to mdrun
  to disregard the first half of the run (where initialization and
  balancing is done, you don’t want to count that in a benchmark)?
 
  Carsten
 
  But I agree size of my system may play a role.
 
  best,
 
  dawei
 
 
  On Fri, Aug 29, 2014 at 10:36 AM, Carsten Kutzner 
 ckut...@gwdg.de
  wrote:
 
  Hi Dawei,
 
  the mapping of GPUs to PP ranks is printed for the Master node
 only,
  but if this node reports two GPUs, then all other PP ranks will
 also
  use two GPUs (or an error is reported).
 
  The scaling will depend also on your system size, if this is too
  small,
  then you might be better off by using a single node.
 
  Carsten
 
 
  On 29 Aug 2014, at 16:24, Da-Wei Li lida...@gmail.com wrote:
 
  Dear users,
 
  I recently try to run Gromacs on two nodes, each of them has 12
  cores
  and 2
  GPUs. The nodes are connected with infiniband and scaling is
 pretty
  good
  when no GPU is evolved.
 
  My command is like this:
 
  mpiexec  -npernode 2 -np 4 mdrun_mpi -ntomp 6
 
 
  However, it looks like Gromacs only 

Re: [gmx-users] Integrator problem

2014-09-08 Thread Lovika Moudgil
Oh, thanks for the guidance, Mark!

Regards
Lovika
On 8 Sep 2014 19:45, Mark Abraham mark.j.abra...@gmail.com wrote:

 The md integrator does MD, not EM...

 Mark
 On Sep 8, 2014 4:11 PM, Lovika Moudgil lovikamoud...@gmail.com wrote:

  Hi everyoneI want to ask one question...In my .mdp file if I use md
  intergrator for energy minimisation .then system is fine...but if I
 use
  steep integrator...my system got error of more force on one atomI
 not
  clear why this is happeingcan any body guide me please..
 
  Regards
  Lovika


Re: [gmx-users] GPU and MPI

2014-09-08 Thread Albert
Now the question is: how can we solve the problem on a GPU workstation and 
make two GPUs work together on one task?


thx

Albert


On 09/08/2014 04:18 PM, Da-Wei Li wrote:

I have found my mistake and hopefully this information is useful.

This is caused by pinning of OPENMP threads by MPI. By default, all OPENMP
threads belongs to each MPI rank will run on one core only in our cluster.
I didn't realize this partially because Gromacs's thread MPI (this is
employed when I run Gromacs on one node only) doesn't have this problem.

best,

dawei




[gmx-users] Performing MD simulations in presence of trivalent cations.

2014-09-08 Thread soumadwip ghosh
Hello gmx users,
I am currently working on ion-dependent persistence length calculations of RNA
strands. I want to calculate it in the presence of multivalent cations such as
Al3+ and Co3+. I guess that in order to do that we have to include the
specifications of these ions (either one) in the ions.itp file in the force
field directory. My question is whether specifying the charge +3, a name (say
Al), and the mass of the ion would be sufficient, or whether other protocols
are required. Also, is there any force field that is suitable for such
cations? I am not a professional in this field, and any help will be highly
appreciated.

Regards,
Soumadwip


[gmx-users] protein-ligand complex by gromacs

2014-09-08 Thread Mahboobeh Eslami
hi GMX users


I have simulated a protein-ligand complex with GROMACS. I have repeated the 
simulation twice but got very different results: in one of the simulations the 
ligand separated from the protein and stayed in the center of the box.
I have checked all of the input files and the steps, but I do not understand 
why this happened.
Please help me .
Thank you for your kindness


Re: [gmx-users] Performing MD simulations in presence of trivalent cations.

2014-09-08 Thread David van der Spoel

On 2014-09-08 18:28, soumadwip ghosh wrote:

Hello gmx users,
   I am currently working on the ion dependent
persistence length calculations of RNA strands.I want to calculate it in
presence of multivalent cations like Al3+ and Co3+. I guess in order to do
that we have to include the specifications of these ions (either one) in
the ions.itp file mentioned in the force field directory. My question is
whether mentioning the charge +3 and giving a name (say Al) and mass of the
ion would be sufficient enough for it or some other protocols are
required.Also, is there any force field which will be suitable for such
cations? I am not a professional in this field and any sort of help will be
highly appreciated.
That's going to be really difficult. Present force fields have a lot of 
problems even with Mg2+ and Ca2+. In any case, you need to search the 
literature for van der Waals (Lennard-Jones) parameters for such ions.
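
Structurally, an ion entry itself is simple; a hypothetical sketch (the atom
type Al3+ and its Lennard-Jones parameters would have to be added to the force
field's nonbonded file from literature values, as noted above) would be:

[ moleculetype ]
; molname   nrexcl
AL3         1

[ atoms ]
; nr  type   resnr  residue  atom  cgnr  charge   mass
  1   Al3+   1      AL3      AL    1     3.00000  26.98154

Getting the charge and mass right is the easy part; whether any set of +3
Lennard-Jones parameters gives sensible RNA-ion behaviour is the real question.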


Regards,
Soumadwip




--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


Re: [gmx-users] no data read from file pdo.gz

2014-09-08 Thread Justin Lemkul



On 9/8/14 12:36 AM, Lyna Luo wrote:

Hi Justin,

The blank between lines are just from email format. I used only one window to 
see if g_wham can readin my data, but I actually have 64 window. Please see the 
error message below. Thanks again! -Lyna


GROMACS:  gmx wham, VERSION 5.0

Executable:   /usr/local/gromacs/bin/gmx

Library dir:  /usr/local/gromacs/share/gromacs/top

Command line:

   g_wham -ip pdo.dat -bins 400 -temp 300 -tol 0.1 -auto


Found 64 pdo files in pdo.dat

Automatic determination of boundaries from 64 pdo files...

Using gunzig executable /bin/gunzip

Opening disa_job13_pdo/job.job13.63.sort.colvars.traj,temp.bb.pdo.gz ... [100%]


Determined boundaries to 10002004087734272.00 and 
-10002004087734272.00


Opening disa_job13_pdo/job.job13.0.sort.colvars.traj,temp.bb.pdo.gz ... [ 2%]

WARNING, no data points read from file 
disa_job13_pdo/job.job13.0.sort.colvars.traj,temp.bb.pdo.gz (check -b option)

Opening disa_job13_pdo/job.job13.1.sort.colvars.traj,temp.bb.pdo.gz ... [ 3%]

WARNING, no data points read from file 
disa_job13_pdo/job.job13.1.sort.colvars.traj,temp.bb.pdo.gz (check -b option)

Opening disa_job13_pdo/job.job13.2.sort.colvars.traj,temp.bb.pdo.gz ... [ 5%]

WARNING, no data points read from file 
disa_job13_pdo/job.job13.2.sort.colvars.traj,temp.bb.pdo.gz (check -b option)


My pdo file looks like below. Is there something wrong with component selection or 
nskip or the two column data format?

# UMBRELLA 3.0

# Component selection: 0 0 1

# nSkip 1

# Ref. Group TestAtom

# Nr. of pull groups 1

# Group 1 GR1 Umb. Pos. 57.6 Umb. Cons. 2.0

#

5054910 5.76202567601903e+01

5054920 5.76153109776819e+01

5054930 5.76057280906502e+01

5054940 5.76000707360825e+01

5054950 5.75956747536078e+01

5054960 5.75863624004517e+01

5054970 5.75807826101227e+01

5054980 5.75752353578918e+01

5054990 5.75700050629966e+01

5055000 5.75644906348954e+01

5055010 5.75599694771144e+01

5055020 5.75577873701180e+01

5055030 5.75528304157033e+01



Any number of things could be wrong here.  Realize that .pdo files have been 
obsolete since about 2008, so the code referring to them is not routinely used 
or maintained.  So something could be going on there, and I suspect that is at 
least part of the problem, given the fact that the maximum and minimum can't be 
calculated properly.  Another possibility is that the contents of the files are 
misinterpreted; a quick look at the code suggests that the first field should be 
float (time in ps, not timestep) so something may be off.
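
If the first column really does contain step numbers, a quick, hypothetical
fix-up (assuming a 2 fs time step, i.e. 0.002 ps per step, and a generic file
name) would be to rescale it before feeding the files to g_wham:

zcat job.pdo.gz | awk '$1 ~ /^#/ {print; next} {printf "%.4f  %s\n", $1*0.002, $2}' | gzip > job.fixed.pdo.gz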


You may want to consider using Alan Grossfield's WHAM program so you don't have 
to deal with all this .pdo-specific stuff; most people use that program for 
analyzing NAMD output.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] protein-ligand complex by gromacs

2014-09-08 Thread Justin Lemkul



On 9/8/14 12:30 PM, Mahboobeh Eslami wrote:

hi GMX users


i have simulated the protein-ligand complex by gromacs. I've repeated the 
simulation twice but i have get very different results. in one of the 
simulations ligand separated from protein and stayed in the center of box.
I've checked all of the input files and the steps , but I did not understand 
why this happened.


This sounds like a simple periodicity issue.  Check your use of trjconv.
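
A typical workflow (file names are examples; the centering and output groups
would come from an index file containing protein plus ligand) is something
like:

trjconv -s md.tpr -f md.xtc -o md_nojump.xtc -pbc nojump
trjconv -s md.tpr -f md_nojump.xtc -n index.ndx -o md_vis.xtc -pbc mol -center -ur compact

After that, check visually whether the ligand really left the binding site or
only appeared to do so because of the periodic images.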

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Limitation on the maximum number of OpenMPI threads

2014-09-08 Thread Abhi Acharya
Hello,
I was trying to run a simulation on Gromacs-4.6.3 which has been compiled
without thread MPI on a BlueGene/Q system. The configurations per node are
as follows:

 PowerPC A2, 64-bit, 1.6 GHz, 16 cores SMP, 4 threads per core

For running on 8 nodes I tried:

srun mdrun_mpi -ntomp 64

But, this gave me an error:

Program mdrun_mpi, VERSION 4.6.3
Source code file:
/home/staff/sheed/apps/gromacs-4.6.3/src/mdlib/nbnxn_search.c, line: 2520

Fatal error:
64 OpenMP threads were requested. Since the non-bonded force buffer
reduction is prohibitively slow with more than 32 threads, we do not allow
this. Use 32 or less OpenMP threads.

So, I tried using 32 and it works fine. The problem is that the performance
seems to be too low; for a 1 ns run it shows an estimated time of more than a
day. The same run on a workstation with 6 cores and 2 GPUs gives a performance
of 17 ns/day.

I am now at loss. Any ideas what is happening ??

Regards,
Abhishek Acharya


Re: [gmx-users] GPU and MPI

2014-09-08 Thread Albert

Helo Yunlong:

thx a lot for the reply.

It works in Gromacs-4.6.5, but it does NOT in Gromacs-5.0.1. I used the 
following command:


mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g 
npt2.log -gpu_id 01 -ntomp 10



but it always failed with messages:


2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat: 
compatible


2 GPUs user-selected for this run.
Mapping of GPUs to the 1 PP rank in this node: #0, #1


---
Program mdrun_mpi, VERSION 5.0.1
Source code file: 
/soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c, line: 
359


Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes 
and GPUs per node.
mdrun_mpi was started with 1 PP MPI process per node, but you provided 2 
GPUs.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



However, this command works fine in Gromacs-4.6.5, and I don't know why 
it failed in 5.0.1.


best

Albert


On 09/08/2014 05:33 PM, Yunlong Liu wrote:

For using two GPU, just add option -gpu_id 01 to specify each gpu for each MPI 
process.


Yunlong




[gmx-users] electric field

2014-09-08 Thread Albert

Hello:

I am simulating a protein in a lipid bilayer and I am going to apply a 50 mV 
voltage across the bilayer. I noticed this paper:


http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0056342

The authors did this in GROMACS. I noticed that there is an electric field 
option (the Electric fields section) for the mdp file, so I am just wondering 
whether the following setting would be enough for my purpose:


E_z= 1  0.08  1

All the other settings would be the same as for a normal membrane protein 
simulation, as indicated in the GROMACS tutorial.
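
As a rough check of the numbers (the box height is not given here, so 8 nm is
only an assumed value): with an applied field the potential difference across
the periodic box is approximately delta_V = E_z * L_z, so a target of 50 mV
over L_z = 8 nm corresponds to E_z = 0.05 V / 8 nm, i.e. about 0.006 V/nm:

E_z= 1  0.006  1

rather than 0.08 V/nm, which would correspond to roughly 0.64 V over the same
box.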


Thank you very much.

Albert


Re: [gmx-users] GPU and MPI

2014-09-08 Thread Da-Wei Li
Hi, Albert

I think the error message is very clear. You have one MPI rank per node,
but provide 2 GPUs per node. The -gpu_id argument is applied on each
node.

dawei

On Mon, Sep 8, 2014 at 2:38 PM, Albert mailmd2...@gmail.com wrote:

 Helo Yunlong:

 thx a lot for the reply.

 It works in Gromacs-4.6.5, but it does NOT in Gromacs-5.0.1. I used the
 following command:

 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10


 but it always failed with messages:


 2 GPUs detected on host cudaB:
   #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
   #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible

 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1


 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file: 
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359

 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors



 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1.

 best

 Albert


 On 09/08/2014 05:33 PM, Yunlong Liu wrote:

 For using two GPU, just add option -gpu_id 01 to specify each gpu for
 each MPI process.


 Yunlong




Re: [gmx-users] GPU and MPI

2014-09-08 Thread Albert

HI Dawei:

Yes, it is.

I am running it on a workstation which has 1 CPU (20 cores) plus 2 GPUs. 
It is not a server. That's why I use the additional option:


-ntomp 10

so that each MPI rank can use 10 CPU cores.

This works fine in Gromacs-4.6.5, but it doesn't work in 5.0.1.


thx

Albertt


On 09/08/2014 09:08 PM, Da-Wei Li wrote:

Hi, Albert

I think the error message is very clear. You have one MPI rank per node,
but provide 2 GPUs per node. The gpuid argument is applied on each of the
node.

dawei




Re: [gmx-users] GPU and MPI

2014-09-08 Thread Da-Wei Li
Hi, Albertt

It is quite strange. Your log file should report how many MPI ranks and
how many OpenMP threads per rank are used. Can you check that part to find out
how many MPI ranks there are?

best,
dawei

On Mon, Sep 8, 2014 at 3:18 PM, Albert mailmd2...@gmail.com wrote:

 HI Dawei:

 Yes, it is.

 I am running it in a workstation which have 1 CPU (20 cores) plus 2 GPU.
 It is not a server. That's why I use additional option:

 -ntomp 10

 So that each MPI rank can use 10 core CPU.

 This works fine in Gromacs-4.6.5, but it doesn't work in 5.0.1


 thx

 Albertt


 On 09/08/2014 09:08 PM, Da-Wei Li wrote:

 Hi, Albert

 I think the error message is very clear. You have one MPI rank per node,
 but provide 2 GPUs per node. The gpuid argument is applied on each of
 the
 node.

 dawei




[gmx-users] PME

2014-09-08 Thread kiana moghaddam
Dear GMX Users

I have a question about the PME load when executing mdrun. 
All my MD simulations (a DNA-ligand interaction in a triclinic box) are run on 
an in-house Linux 64-bit Intel Core i7 machine. 
According to the GROMACS tutorial on Justin's web site 
(http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html),
for a cubic box the optimal setup will have a PME load of 0.25, and for a 
dodecahedral box the optimal PME load is 0.33. 
Should this result be obtained with my computer (with np = 8), or will these 
PME loads only be obtained with np > 8?

Best Regards


[gmx-users] [ANN] MDTraj 1.0: Trajectory Analysis in Python

2014-09-08 Thread Robert McGibbon
Hello,

We are happy to announce the 1.0 release of MDTraj.

MDTraj is a modern, lightweight and efficient software package for
analyzing molecular dynamics trajectories.
It reads and writes trajectory data from a wide variety of formats,
including those used by AMBER, GROMACS,
CHARMM, NAMD and TINKER. The package has a strong focus on interoperability
with the wider scientific
Python ecosystem.

The 1.0 release indicates substantial stabilization of the package, and a
strong commitment to backward compatibility.
New features since the 0.9 release include and interactive WebGL-based
protein visualization in IPython notebook
and a full implementation of DSSP secondary structure assignment.

More information, detailed release notes, downloads and a large number of
example analysis notebooks
can be found at http://mdtraj.org.

Cheers,
Robert T. McGibbon and the MDTraj Development Team


Re: [gmx-users] GPU job failed

2014-09-08 Thread Szilárd Páll
Hi,

It looks like you're starting two ranks and passing two GPU IDs, so it
should work. The only thing I can think of is that you are either
getting the two MPI ranks placed on different nodes, or that for some
reason mpirun -np 2 is only starting one rank (MPI installation
broken?).

Does the same setup work with thread-MPI?
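
For that test, a sketch using the same files with a thread-MPI build (no
mpirun) would be:

mdrun -ntmpi 2 -ntomp 10 -gpu_id 01 -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log

If that maps one GPU per thread-MPI rank as expected, the problem is on the MPI
launcher side.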

Cheers,
--
Szilárd


On Mon, Sep 8, 2014 at 2:50 PM, Albert mailmd2...@gmail.com wrote:
 Hello:

 I am trying to use the following command in Gromacs-5.0.1:

 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10


 but it always failed with messages:


 2 GPUs detected on host cudaB:
   #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
   #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible

 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1


 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359

 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors



 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1. Does anybody have any idea?

 thx a lot

 Albert


Re: [gmx-users] PME

2014-09-08 Thread Szilárd Páll
Hi,

By default, no separate PME ranks are used with fewer than (AFAIR) 12
ranks (i.e. the default with a small number of ranks is -npme 0). Without
separate PME ranks (and without GPUs) there is no PP-PME load balance to
tweak, so the PME load is not very relevant from a performance optimization
point of view.
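
In other words, on a single desktop like this a plain run such as (file names
are examples)

mdrun -deffnm md -nt 8 -npme 0

already reflects the default behaviour, and the 0.25/0.33 figures from the
tutorial only become a tuning target once separate PME ranks are actually in
use.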

Cheers,
--
Szilárd


On Mon, Sep 8, 2014 at 9:06 PM, kiana moghaddam ki_moghad...@yahoo.com wrote:
 Dear GMX Users

 I have a question about PME loading When executing mdrun.
 All my MD simulations (DNA-ligand interaction in triclinic box) are computed 
 on in-house Linux 64-bit Intel Core-i7.
 According to gromacs tutorial in Justin web site 
 (http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html),
  For a cubic box, the optimal setup will have a PME load of 0.25 and for a 
 dodecahedral box, the optimal PME load is 0.33.
 Is this result should be obtained with my computer (with np=8)? or these PME 
 load will be obtained only with np8?

 Best Regards


Re: [gmx-users] GPU job failed

2014-09-08 Thread Yunlong Liu
Same idea as Szilard.

How many nodes are you using?
On one node, how many MPI ranks do you have? The error is complaining that 
you assigned two GPUs to only one MPI process on one node. If you spread your 
two MPI ranks over two nodes, that means you only have one at each, and then 
you can't assign two GPUs to a single MPI rank.

How many GPUs do you have on one node? If there are two, you can launch 
two PP MPI processes on that node and assign the two GPUs to them. If you only 
want to launch one MPI rank on each node, you can assign only one GPU for each 
node (by -gpu_id 0).

Yunlong


 On Sep 8, 2014, at 5:35 PM, Szilárd Páll pall.szil...@gmail.com wrote:
 
 Hi,
 
 It looks like you're starting two ranks and passing two GPU IDs so it
 should work. The only think I can think of is that you are either
 getting the two MPI ranks placed on different nodes or that for some
 reason mpirun -np 2 is only starting one rank (MPI installation
 broken?).
 
 Does the same setup work with thread-MPI?
 
 Cheers,
 --
 Szilárd
 
 
 On Mon, Sep 8, 2014 at 2:50 PM, Albert mailmd2...@gmail.com wrote:
 Hello:
 
 I am trying to use the following command in Gromacs-5.0.1:
 
 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10
 
 
 but it always failed with messages:
 
 
 2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
 
 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1
 
 
 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359
 
 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 
 
 
 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1. Does anybody have any idea?
 
 thx a lot
 
 Albert


[gmx-users] [CfP] 5th International Workshop on Model-driven Approaches for Simulation Engineering (Mod4Sim)

2014-09-08 Thread Daniele Gianni
#
  5th International Workshop on
 Model-driven Approaches for Simulation Engineering

  part of the Symposium on Theory of Modeling and Simulation
(SCS SpringSim 2015)

  CALL FOR PAPERS
#

April 12-15, 2015, Alexandria, VA (USA)
http://www.sel.uniroma2.it/Mod4Sim15

#
# Papers Due: *** November 10, 2014 *** Accepted papers will be
# published in the conference proceedings and archived in the ACM
# Digital Library.
#

The workshop aims to bring together experts in model-based,
model-driven and software engineering with experts in simulation
methods and simulation practitioners, with the objective to
advance the state of the art in model-driven simulation
engineering.

Model-driven engineering approaches provide considerable
advantages to software systems engineering activities through the
provision of consistent and coherent models at different
abstraction levels. As these models are in a machine readable
form, model-driven engineering approaches can also support the
exploitation of computing capabilities for model reuse,
programming code generation, and model checking, for example.

The definition of a simulation model, its software implementation
and its execution platform form what is known as simulation
engineering. As simulation systems are mainly based on software,
these systems can similarly benefit from model-driven approaches
to support automatic software generation, enhance software
quality, and reduce costs, development effort and time-to-market.

Similarly to systems and software engineering, simulation
engineering can exploit the capabilities of model-driven
approaches by increasing the abstraction level in simulation
model specifications and by automating the derivation of
simulator code. Further advantages can be gained by using
modeling languages, such as UML and SysML, but not exclusively
those. For example, modeling languages can be used for
descriptive modeling (to describe the system to be simulated),
for analytical modeling (to specify analytically the simulation
of the same system) and for implementation modeling (to define
the respective simulator).

A partial list of topics of interest includes:

* model-driven simulation engineering processes
* requirements modeling for simulation
* domain specific languages for modeling and simulation
* model transformations for simulation model building
* model transformations for simulation model implementation
* model-driven engineering of distributed simulation systems
* relationship between metamodeling standards (e.g., MOF, Ecore)
  and distributed simulation standards (e.g., HLA, DIS)
* metamodels for simulation reuse and interoperability
* model-driven technologies for different simulation paradigms
  (discrete event simulation, multi-agent simulation,
  sketch-based simulation, etc.)
* model-driven methods and tools for performance engineering of
  simulation systems
* simulation tools for model-driven software performance
  engineering
* model-driven technologies for simulation verification and
  validation
* model-driven technologies for data collection and analysis
* model-driven technologies for simulation visualization
* executable UML
* executable architectures
* SysML/Modelica integration
* simulation model portability and reuse
* model-based systems verification and validation
* simulation for model-based systems engineering

To stimulate creativity, however, the workshop maintains a wider
scope and welcomes contributions offering original perspectives
on model-driven engineering of simulation systems.

+++ Important Dates +++

 * Abstract Submission Deadline (optional): September 12, 2014
 * Paper Submission Deadline: November 10, 2014
 * Decision to paper authors: January 9, 2015
 * Camera ready due: February 10, 2015
 * Conference dates: April 12-15, 2015

 Organizing Committee 

* Andrea D'Ambrogio - University of Rome Tor Vergata, Italy
* Paolo Bocciarelli - University of Rome Tor Vergata, Italy


Re: [gmx-users] Limitation on the maximum number of OpenMPI threads

2014-09-08 Thread Mark Abraham
Hi,

Generally speaking, in the absence of accelerators, OpenMP as used in
GROMACS 4.6/5.0 is only useful as you get down to around a few hundred
atoms per core (details vary, but since you often can't get fewer than 512
cores of BG/Q the point is often moot there), and only at fairly low OpenMP
thread counts (hence the error). BG/Q has 4 hardware threads per core, but
because of the way the processor issues instructions to them, you will
observe benefit with mdrun only if you use either 2 or 3 of them. You
should start by setting an MPI rank per core, and vary -ntomp to observe
that (probably) 2 is best. You can then increase the number of nodes=ranks
until mdrun starts complaining that it can't find a suitable domain
decomposition (because they're getting too small). Then you can get some
value from splitting an MPI rank over two cores (and thus doubling -ntomp
to try 4 or 6), etc. Under such conditions, and with a good PP-PME load
balance, I have seen mdrun (admittedly NVE, and not writing a trajectory)
continue getting faster until about 30 atoms/core, but that's an
unrealistic scenario. Your mileage will be worse in the real world.

Mark
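
As a concrete starting point for the layout described above (the launcher flags
are SLURM-style and site-dependent; on BG/Q the actual job launcher and its
options may differ):

srun -N 8 --ntasks-per-node=16 mdrun_mpi -ntomp 2

i.e. one MPI rank per core and two hardware threads per rank, which can then be
varied as described.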

On Mon, Sep 8, 2014 at 8:15 PM, Abhi Acharya abhi117acha...@gmail.com
wrote:

 Hello,
 I was trying to run a simulation on Gromacs-4.6.3 which has been compiled
 without thread MPI on a BlueGene/Q system. The configurations per node are
 as follows:

  PowerPC A2, 64-bit, 1.6 GHz, 16 cores SMP, 4 threads per core

 For running on 8 nodes I tried:

 srun mdrun_mpi -ntomp 64

 But, this gave me an error:

 Program mdrun_mpi, VERSION 4.6.3
 Source code file:
 /home/staff/sheed/apps/gromacs-4.6.3/src/mdlib/nbnxn_search.c, line: 2520

 Fatal error:
 64 OpenMP threads were requested. Since the non-bonded force buffer
 reduction is prohibitively slow with more than 32 threads, we do not allow
 this. Use 32 or less OpenMP threads.

 So, I tried using 32 and it works fine. The problem is the performance
 seems to be too low; for 1 ns run it shows an estimated time of more than a
 day. The same run on a

 workstation with 6 cores and 2 GPU gives a performance of 17 ns/day.

 I am now at loss. Any ideas what is happening ??

 Regards,
 Abhishek Acharya


Re: [gmx-users] GPU job failed

2014-09-08 Thread Albert


thanks a lot for reply both Yunlong and Szilard.

I haven't set up a PBS system or nodes on the workstation. The GPU 
workstation contains 1 CPU with 20 cores and two GPUs, so it is 
similar to 1 node with 2 GPUs.


But I don't know why 4.6.5 works, but 5.0.1 doesn't ...




Thx again for reply.

Albert

On 09/08/2014 11:59 PM, Yunlong Liu wrote:

Same idea with Szilard.

How many nodes are you using?
On one nodes, how many MPI ranks do you have? The error is complaining about 
you assigned two GPUs to only one MPI process on one node. If you spread your 
two MPI ranks on two nodes, that means you only have one at each. Then you 
can't assign two GPU for only one MPI rank.

How many GPU do you have on one node? If there are two, you can either launch 
two PPMPI processes on one node and assign two GPU for them. If you only want 
to launch one MPI rank on each node, you can assign only one GPU for each node 
( by -gpu_id 0 )

Yunlong




Re: [gmx-users] Query regarding the addition of solvent molecule

2014-09-08 Thread Christina Florina
Hi,
  I have included a link to my Dropbox where I have attached my
GROMACS topology files. Though I have included the cyclohexane .itp file in
the .top file, I still get the same error, NO SUCH MOLECULETYPE CHX, so I
kindly need help in this regard.
  Thank you in advance.

https://www.dropbox.com/sh/vkvsr3u2hmh2ft9/AABxNv6VxA1gbSs7h2gkaIfxa?dl=0
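
For reference, the layout Justin describes below would look roughly like this
at the end of protein.top (the molecule names and the count of 500 are only
placeholders; the CHX count must match what solvate actually inserted):

#include "chx.itp"

[ system ]
Protein in cyclohexane

[ molecules ]
; Compound         #mols
Protein_chain_A    1
CHX                500

The #include for chx.itp must appear after the force-field #include lines and
before the [ system ] and [ molecules ] sections that reference CHX.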

On Fri, Sep 5, 2014 at 5:25 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 9/5/14, 7:10 AM, Christina Florina wrote:

 Hi,
I have included the chx.itp file in the protein.top file already.
 Checked with the molecule name (CHX) in the .itp file and also in the
 topology file variable name which matches CHX. But still I am getting the
 same error.
Kindly need help to resolve it.


 Something doesn't add up.  You will need to provide all of your files for
 download via a file-sharing service to diagnose.  A simple #include
 statement and correct updating of [molecules] is all that is needed.

 -Justin



 On Fri, Sep 5, 2014 at 3:37 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 9/5/14, 2:50 AM, Christina Florina wrote:

  Hi,
   I have just started my work in MD and using Gromacs 5.0. I
 need
 to use cyclohexane as my solvent instead of water. I generated the
 topology
 file, .itp and .gro using PRODRG. I have successfully incorporated the
 .gro
 file using solvate command and generated the solvent box. But I am
 facing
 problem with grompp (ions.mdp) step before the addition of ions.

 gmx grompp -f ions.mdp -c protein_solv.gro -p protein.top -o ions.tpr

   I am getting the FATAL ERROR: NO SUCH MOLECULETYPE CHX
 though
 I have checked with the molecule name (CHX) in the cyclohexane .itp
 file.
 I
 have tired changing the name of the molecule also.

  Do I need to add this cyclohexane solvent molecule in the
 forcefield file or .atp file, .rtp file? I tried adding them but still
 not
 able to run this. I might be incorrect while adding it in the .rtp file.

   So, I kindly need help regarding the addition of new
 solvent
 molecule in gromacs since I have other organic solvents also for my md
 work
 and having the same problem.


  You need to #include the CHX .itp file in the system .top, and update
 [molecules] accordingly.  Please note as well that PRODRG topologies are
 of
 low quality in my experience and need to be corrected.  CHX is simple, a
 ring of CH2 atom types, all with zero charge.  Other results are less
 trivial to fix.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==


 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.