Re: [gmx-users] TFE-water simulation

2013-11-08 Thread João Henriques
Hello again,

That depends on the peptide. There is no general answer. I am starting with
linear conformations, but that's because I'm working with intrinsically
disordered proteins. That's as far as I can go in telling you about what I'm
doing. I'm not at liberty to discuss these things; it's not public yet, sorry.

Best regards,
João


On Fri, Nov 8, 2013 at 11:32 AM, Archana Sonawani-Jagtap 
ask.arch...@gmail.com wrote:

 Should I start with helical peptides and see if they maintain their helicity,
 or can I start with a random coil?

 Do random-coil peptides take a long simulation time to form helices?

 Any help on this will be appreciated.


 On Tue, Nov 5, 2013 at 12:25 AM, Archana Sonawani-Jagtap 
 ask.arch...@gmail.com wrote:

  Thanks Joao Henriques for helping me with the steps.
  On Nov 4, 2013 3:18 PM, João Henriques joao.henriques.32...@gmail.com
 
  wrote:
 
  Hello Archana,
 
  I'm also toying with a TFE-water system, therefore I am also a newbie.
  This
  is what I am doing, I hope it helps:
 
  1) Since I'm using G54A7 I created a TFE.itp using GROMOS parameters (I
  don't use PRODRG, see why in DOI: 10.1021/ci100335w).
  2) Do the math and check how many molecules of TFE you're going to need
  for
  a given v/v TFE-water ratio and a given simulation box volume.
  3) Build box with the correct size.
  4) Randomly insert correct number of TFE molecules.
  5) Solvate.
  6) Insert protein.
 
  Hopefully, the numbers of TFE and water molecules that will be deleted when
  inserting the protein in the final step will be proportional, given that
  the TFE molecules are well distributed.
 
  I've tried many different ways of doing this and it's always impossible
 to
  maintain a perfect TFE-water ratio, no matter the order and manner of
  insertion of each system component. I've also never been able to insert
  the
  correct number of waters after the TFE. My calculations predict a higher
  number, but the solvation algorithm can't find enough space for them.
 
  In sum, either you place each molecule by hand and spend a lifetime
  building the system, or you just make a few compromises and deal with it. I
  ended up going with the former as I have a limited amount of time on my
  hands and I am aware of the approximations I am making.
 
  Best regards,
 
  João Henriques
  
  PhD student
  Division of Theoretical Chemistry
  Lund University
  Lund, Sweden
  
  joao.henriq...@teokem.lu.se
  http://www.teokem.lu.se/~joaoh/
 
 
  On Thu, Oct 24, 2013 at 7:15 PM, Justin Lemkul jalem...@vt.edu wrote:
 
  
  
   On 10/24/13 1:13 PM, Archana Sonawani-Jagtap wrote:
  
   Dear Justin,
  
    I have not constructed the system but I have downloaded it from the ATB
    website. To maintain the number of TFE and water molecules (1:1 v/v) in the
    system (I don't want to add extra water molecules) I tried many options in
    genbox, but it still adds 678 water molecules. Can you provide me some
    hint?
  
  
   Not without seeing your actual command(s).
  
  
    Is there a need to remove the periodicity of this pre-equilibrated system,
    as in the case of lipids?
  
  
   No idea.  Are the molecules broken in the initial configuration?
  
   -Justin
  
   --
   ==
  
  
   Justin A. Lemkul, Ph.D.
   Postdoctoral Fellow
  
   Department of Pharmaceutical Sciences
   School of Pharmacy
   Health Sciences Facility II, Room 601
   University of Maryland, Baltimore
   20 Penn St.
   Baltimore, MD 21201
  
   jalem...@outerbanks.umaryland.edu | (410) 706-7441
  
   ==
  
 
 


 --
 Archana Sonawani-Jagtap
 Senior Research Fellow,
 Biomedical Informatics Centre,
 NIRRH (ICMR), Parel
 Mumbai, India.
 9960791339

Re: [gmx-users] TFE-water simulation

2013-11-04 Thread João Henriques
Hello Archana,

I'm also toying with a TFE-water system, therefore I am also a newbie. This
is what I am doing, I hope it helps:

1) Since I'm using G54A7 I created a TFE.itp using GROMOS parameters (I
don't use PRODRG, see why in DOI: 10.1021/ci100335w).
2) Do the math and check how many molecules of TFE you're going to need for
a given v/v TFE-water ratio and a given simulation box volume.
3) Build box with the correct size.
4) Randomly insert correct number of TFE molecules.
5) Solvate.
6) Insert protein.
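
For anyone following the same recipe, here is a minimal command sketch of steps
2-5 using the GROMACS 4.x tools of this era (genbox). The 30% v/v ratio, the
5 nm box, the TFE density/molar mass and all file names below are illustrative
assumptions, not values from this work:

# Step 2 (arithmetic): n_TFE = f_TFE * V_box * rho_TFE / M_TFE * N_A
#   e.g. 0.30 * 125 nm^3 * 1e-21 cm^3/nm^3 * ~1.39 g/cm^3 / 100.04 g/mol * 6.022e23 /mol
#   gives ~314 TFE molecules for a 5 x 5 x 5 nm box at 30% v/v TFE.
# Steps 3-4: create the box and randomly insert that many TFE molecules.
genbox -ci tfe.gro -nmol 314 -box 5 5 5 -o tfe_box.gro
# Step 5: fill the remaining free volume with water and update the SOL count in the topology.
# (genbox's -maxsol flag can cap the number of waters if you need an exact count.)
genbox -cp tfe_box.gro -cs spc216.gro -o tfe_water.gro -p topol.top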

Hopefully, the numbers of TFE and water molecules that will be deleted when
inserting the protein in the final step will be proportional, given that
the TFE molecules are well distributed.

I've tried many different ways of doing this and it's always impossible to
maintain a perfect TFE-water ratio, no matter the order and manner of
insertion of each system component. I've also never been able to insert the
correct number of waters after the TFE. My calculations predict a higher
number, but the solvation algorithm can't find enough space for them.

In sum, either you place each molecule by hand and spend a lifetime
building the system, or you just make a few compromises and deal with it. I
ended up going with the former as I have a limited amount of time on my
hands and I am aware of the approximations I am making.

Best regards,

João Henriques

PhD student
Division of Theoretical Chemistry
Lund University
Lund, Sweden

joao.henriq...@teokem.lu.se
http://www.teokem.lu.se/~joaoh/


On Thu, Oct 24, 2013 at 7:15 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/24/13 1:13 PM, Archana Sonawani-Jagtap wrote:

 Dear Justin,

 I have not constructed the system but I have downloaded it from the ATB
 website. To maintain the number of TFE and water molecules (1:1 v/v) in the
 system (I don't want to add extra water molecules) I tried many options in
 genbox, but it still adds 678 water molecules. Can you provide me some
 hint?


 Not without seeing your actual command(s).


  Is there a need to remove the periodicity of this pre-equilibrated system,
 as in the case of lipids?


 No idea.  Are the molecules broken in the initial configuration?

 -Justin

 --
 ==


 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==



Re: [gmx-users] TFE-water simulation

2013-11-04 Thread João Henriques
Erratum:

Where I wrote I ended up going with the former it should be I ended up
going with the latter.

/J


On Mon, Nov 4, 2013 at 10:47 AM, João Henriques 
joao.henriques.32...@gmail.com wrote:

 Hello Archana,

 I'm also toying with a TFE-water system, therefore I am also a newbie.
 This is what I am doing, I hope it helps:

 1) Since I'm using G54A7 I created a TFE.itp using GROMOS parameters (I
 don't use PRODRG, see why in DOI: 10.1021/ci100335w).
 2) Do the math and check how many molecules of TFE you're going to need
 for a given v/v TFE-water ratio and a given simulation box volume.
 3) Build box with the correct size.
 4) Randomly insert correct number of TFE molecules.
 5) Solvate.
 6) Insert protein.

 Hopefully, the numbers of TFE and water molecules that will be deleted when
 inserting the protein in the final step will be proportional, given that
 the TFE molecules are well distributed.

 I've tried many different ways of doing this and it's always impossible to
 maintain a perfect TFE-water ratio, no matter the order and manner of
 insertion of each system component. I've also never been able to insert the
 correct number of waters after the TFE. My calculations predict a higher
 number, but the solvation algorithm can't find enough space for them.

 In sum, either you place each molecule by hand and spend a lifetime
 building the system, or you just make a few compromises and deal with it. I
 ended up going with the former as I have a limited amount of time on my
 hands and I am aware of the approximations I am making.

 Best regards,

 João Henriques
 
 PhD student
 Division of Theoretical Chemistry
 Lund University
 Lund, Sweden
 
 joao.henriq...@teokem.lu.se
 http://www.teokem.lu.se/~joaoh/


 On Thu, Oct 24, 2013 at 7:15 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/24/13 1:13 PM, Archana Sonawani-Jagtap wrote:

 Dear Justin,

 I have not constructed the system but I have downloaded it from the ATB
 website. To maintain the number of TFE and water molecules (1:1 v/v) in the
 system (I don't want to add extra water molecules) I tried many options in
 genbox, but it still adds 678 water molecules. Can you provide me some
 hint?


 Not without seeing your actual command(s).


  Is there a need to remove the periodicity of this pre-equilibrated system,
 as in the case of lipids?


 No idea.  Are the molecules broken in the initial configuration?

 -Justin

 --
 ==


 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==



Re: [gmx-users] Conversion of nm to Å

2013-08-21 Thread João Henriques
OP, I give you 10/10. This is the first time I've laughed out loud reading the
GMX mailing list.

/J
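
(For the record, the conversion asked about below is just a factor of 10, since
1 nm = 10 Å. A minimal sketch for rescaling the value column of a GROMACS .xvg
output, with a hypothetical file name:

awk '/^[@#]/ {print; next} {printf "%g %g\n", $1, $2*10}' rmsd.xvg > rmsd_angstrom.xvg

The y-axis label in the .xvg header will still say (nm), so edit it if the
legend matters to you.)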


On Wed, Aug 21, 2013 at 5:05 AM, Arunima Shilpi writetoas...@gmail.comwrote:

 Dear Sir
  I have a query as to how we convert parameters from nm to Å. For example,
 the RMSD and RMSF calculations give results in nm, and I want to convert them
 to Å. I request you to kindly guide me through the process.

 Thanking You with Regards.

 Arunima Shilpi

 Ph. D Research Scholar(Cancer  Epigenetics)
 Department of Life Science
 National Institute of Technology
 Rourkela
 Odisha




-- 
João Henriques


Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-05 Thread João Henriques
Thank you very much for both contributions. I will conduct some tests to
assess which approach works best for my system.

Much appreciated,
Best regards,
João Henriques


On Tue, Jun 4, 2013 at 6:30 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:

 mdrun is not blind; it's just that the current design does not report the
 hardware of all compute nodes used. Whatever CPU/GPU hardware mdrun reports in
 the log/std output is *only* what rank 0, i.e. the first MPI process,
 detects. If you have a heterogeneous hardware configuration, in most
 cases you should be able to run just fine, but you'll still get only
 the hardware the first rank sits on reported.

 Hence, if you want to run on 5 of the nodes you mention, you just do:
 mpirun -np 10 mdrun_mpi [-gpu_id 01]

 You may want to try both -ntomp 8 and -ntomp 16 (using HyperThreading
 does not always help).

 Also note that if you use GPU sharing among ranks (in order to use 8
 threads/rank), (for some technical reasons) disabling dynamic load
 balancing may help - especially if you have a homogeneous simulation
 system (and hardware setup).


 Cheers,
 --
 Szilárd


 On Tue, Jun 4, 2013 at 3:31 PM, João Henriques
 joao.henriques.32...@gmail.com wrote:
  Dear all,
 
  Since gmx-4.6 came out, I've been particularly interested in taking
  advantage of the native GPU acceleration for my simulations. Luckily, I
  have access to a cluster with the following specs PER NODE:
 
  CPU
  2 E5-2650 (2.0 GHz, 8-core)
 
  GPU
  2 Nvidia K20
 
  I've become quite familiar with the heterogeneous parallelization and
  multiple MPI ranks per GPU schemes on a SINGLE NODE. Everything works
  fine, no problems at all.
 
  Currently, I'm working with a nasty system comprising 608159 TIP3P water
  molecules, and it would really help to speed things up a bit.
  Therefore, I would really like to try to parallelize my system over
  multiple nodes and keep the GPU acceleration.
 
  I've tried many different command combinations, but mdrun seems to be
 blind
  towards the GPUs existing on other nodes. It always finds GPUs #0 and #1
 on
  the first node and tries to fit everything into these, completely
  disregarding the existence of the other GPUs on the remaining requested
  nodes.
 
  Once again, note that all nodes have exactly the same specs.
 
  Literature on the official gmx website is not, well... you know... in-depth,
  and I would really appreciate it if someone could shed some light on this
  subject.
 
  Thank you,
  Best regards,
 
  --
  João Henriques




-- 
João Henriques


Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-05 Thread João Henriques
Sorry to keep bugging you guys, but even after considering all you
suggested and reading the bugzilla thread Mark pointed out, I'm still
unable to make the simulation run over multiple nodes.
*Here is a template of a simple submission over 2 nodes:*

--- START ---
#!/bin/sh
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
# Job name
#SBATCH -J md
#
# No. of nodes and no. of processors per node
#SBATCH -N 2
#SBATCH --exclusive
#
# Time needed to complete the job
#SBATCH -t 48:00:00
#
# Add modules
module load gcc/4.6.3
module load openmpi/1.6.3/gcc/4.6.3
module load cuda/5.0
module load gromacs/4.6
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
grompp -f md.mdp -c npt.gro -t npt.cpt -p topol -o md.tpr
mpirun -np 4 mdrun_mpi -gpu_id 01 -deffnm md -v
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
--- END ---

*Here is an extract of the md.log:*

--- START ---
Using 4 MPI processes
Using 4 OpenMP threads per MPI process

Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
Family:  6  Model: 45  Stepping:  7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
tdt x2apic
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256


2 GPUs detected on host en001:
  #0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
  #1: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible


---
Program mdrun_mpi, VERSION 4.6
Source code file:
/lunarc/sw/erik/src/gromacs/gromacs-4.6/src/gmxlib/gmx_detect_hardware.c,
line: 322

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs per node.
mdrun_mpi was started with 4 PP MPI processes per node, but you provided 2
GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
--- END ---

As you can see, gmx is having trouble understanding that there's a second
node available. Note that since I did not specify -ntomp, it assigned 4
threads to each of the 4 mpi processes (filling the entire avail. 16 CPUs *on
one node*).
For the same exact submission, if I do set -ntomp 8 (since I have 4 MPI
procs * 8 OpenMP threads = 32 CPUs total on the 2 nodes) I get a warning
telling me that I'm hyperthreading, which can only mean that *gmx is
assigning all processes to the first node once again.*
Am I doing something wrong or is there some problem with gmx-4.6? I guess
it can only be my fault, since I've never seen anyone else complaining
about the same issue here.

*Here are the cluster spec details:*

http://www.lunarc.lu.se/Systems/ErikDetails

Thank you for your patience and expertise,
Best regards,
João Henriques



On Tue, Jun 4, 2013 at 6:30 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:

 mdrun is not blind; it's just that the current design does not report the
 hardware of all compute nodes used. Whatever CPU/GPU hardware mdrun reports in
 the log/std output is *only* what rank 0, i.e. the first MPI process,
 detects. If you have a heterogeneous hardware configuration, in most
 cases you should be able to run just fine, but you'll still get only
 the hardware the first rank sits on reported.

 Hence, if you want to run on 5 of the nodes you mention, you just do:
 mpirun -np 10 mdrun_mpi [-gpu_id 01]

 You may want to try both -ntomp 8 and -ntomp 16 (using HyperThreading
 does not always help).

 Also note that if you use GPU sharing among ranks (in order to use 8
 threads/rank), (for some technical reasons) disabling dynamic load
 balancing may help - especially if you have a homogeneous simulation
 system (and hardware setup).


 Cheers,
 --
 Szilárd


 On Tue, Jun 4, 2013 at 3:31 PM, João Henriques
 joao.henriques.32...@gmail.com wrote:
  Dear all,
 
  Since gmx-4.6 came out, I've been particularly interested in taking
  advantage of the native GPU acceleration for my simulations. Luckily, I
  have access to a cluster with the following specs PER NODE:
 
  CPU
   2 E5-2650 (2.0 GHz, 8-core)
 
  GPU
  2 Nvidia K20
 
   I've become quite familiar with the heterogeneous parallelization and
  multiple MPI ranks per GPU schemes on a SINGLE NODE. Everything works
  fine, no problems at all.
 
   Currently, I'm working with a nasty system comprising 608159 TIP3P water
   molecules, and it would really help to speed things up a bit.
  Therefore, I would really like to try to parallelize my system over
  multiple nodes and keep the GPU acceleration.
 
  I've tried many different command combinations, but mdrun seems to be
 blind
  towards the GPUs existing on other nodes. It always finds GPUs #0 and #1
 on
  the first node and tries to fit

Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-05 Thread João Henriques
Ok, thanks once again. I will do my best to overcome this issue.

Best regards,
João Henriques


On Wed, Jun 5, 2013 at 3:33 PM, Mark Abraham mark.j.abra...@gmail.comwrote:

 On Wed, Jun 5, 2013 at 2:53 PM, João Henriques 
 joao.henriques.32...@gmail.com wrote:

  Sorry to keep bugging you guys, but even after considering all you
  suggested and reading the bugzilla thread Mark pointed out, I'm still
  unable to make the simulation run over multiple nodes.
  *Here is a template of a simple submission over 2 nodes:*
 
  --- START ---
  #!/bin/sh
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  #
  # Job name
  #SBATCH -J md
  #
  # No. of nodes and no. of processors per node
  #SBATCH -N 2
  #SBATCH --exclusive
  #
  # Time needed to complete the job
  #SBATCH -t 48:00:00
  #
  # Add modules
  module load gcc/4.6.3
  module load openmpi/1.6.3/gcc/4.6.3
  module load cuda/5.0
  module load gromacs/4.6
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  #
  grompp -f md.mdp -c npt.gro -t npt.cpt -p topol -o md.tpr
  mpirun -np 4 mdrun_mpi -gpu_id 01 -deffnm md -v
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  --- END ---
 
  *Here is an extract of the md.log:*
 
  --- START ---
  Using 4 MPI processes
  Using 4 OpenMP threads per MPI process
 
  Detecting CPU-specific acceleration.
  Present hardware specification:
  Vendor: GenuineIntel
  Brand:  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  Family:  6  Model: 45  Stepping:  7
  Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
 nonstop_tsc
  pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2
 ssse3
  tdt x2apic
  Acceleration most likely to fit this hardware: AVX_256
  Acceleration selected at GROMACS compile time: AVX_256
 
 
  2 GPUs detected on host en001:
#0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
#1: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
 
 
  ---
  Program mdrun_mpi, VERSION 4.6
  Source code file:
  /lunarc/sw/erik/src/gromacs/gromacs-4.6/src/gmxlib/gmx_detect_hardware.c,
  line: 322
 
  Fatal error:
  Incorrect launch configuration: mismatching number of PP MPI processes
 and
  GPUs per node.
 

 per node is critical here.


  mdrun_mpi was started with 4 PP MPI processes per node, but you provided
 2
  GPUs.
 

 ...and here. As far as mdrun_mpi knows from the MPI system there's only MPI
 ranks on this one node.

 For more information and tips for troubleshooting, please check the GROMACS
  website at http://www.gromacs.org/Documentation/Errors
  ---
  --- END ---
 
  As you can see, gmx is having trouble understanding that there's a second
  node available. Note that since I did not specify -ntomp, it assigned 4
  threads to each of the 4 mpi processes (filling the entire avail. 16 CPUs
  *on
  one node*).
  For the same exact submission, if I do set -ntomp 8 (since I have 4 MPI
  procs * 8 OpenMP threads = 32 CPUs total on the 2 nodes) I get a warning
  telling me that I'm hyperthreading, which can only mean that *gmx is
  assigning all processes to the first node once again.*
  Am I doing something wrong or is there some problem with gmx-4.6? I guess
  it can only be my fault, since I've never seen anyone else complaining
  about the same issue here.
 

 Assigning MPI processes to nodes is a matter configuring your MPI. GROMACS
 just follows the MPI system information it gets from MPI - hence the
 oversubscription. If you assign two MPI processes to each node, then things
 should work.

 Mark




-- 
João Henriques


Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-05 Thread João Henriques
Just to wrap up this thread, it does work when the mpirun is properly
configured. I knew it had to be my fault :)

Something like this works like a charm:
mpirun -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm md -v
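
For completeness, here is a rough sketch of the earlier submission template with
this working launch line dropped in (2 nodes, 2 MPI ranks per node, 8 OpenMP
threads per rank, both GPUs on each node; modules and file names are the ones
from the original script):

--- START ---
#!/bin/sh
#SBATCH -J md
#SBATCH -N 2
#SBATCH --exclusive
#SBATCH -t 48:00:00
module load gcc/4.6.3
module load openmpi/1.6.3/gcc/4.6.3
module load cuda/5.0
module load gromacs/4.6
grompp -f md.mdp -c npt.gro -t npt.cpt -p topol -o md.tpr
# 2 ranks per node, each pair of ranks driving the node's two GPUs
mpirun -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm md -v
--- END ---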

Thank you Mark and Szilárd for your invaluable expertise.

Best regards,
João Henriques


On Wed, Jun 5, 2013 at 4:21 PM, João Henriques 
joao.henriques.32...@gmail.com wrote:

 Ok, thanks once again. I will do my best to overcome this issue.

 Best regards,
 João Henriques


 On Wed, Jun 5, 2013 at 3:33 PM, Mark Abraham mark.j.abra...@gmail.comwrote:

 On Wed, Jun 5, 2013 at 2:53 PM, João Henriques 
 joao.henriques.32...@gmail.com wrote:

  Sorry to keep bugging you guys, but even after considering all you
  suggested and reading the bugzilla thread Mark pointed out, I'm still
  unable to make the simulation run over multiple nodes.
  *Here is a template of a simple submission over 2 nodes:*
 
  --- START ---
  #!/bin/sh
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  #
  # Job name
  #SBATCH -J md
  #
  # No. of nodes and no. of processors per node
  #SBATCH -N 2
  #SBATCH --exclusive
  #
  # Time needed to complete the job
  #SBATCH -t 48:00:00
  #
  # Add modules
  module load gcc/4.6.3
  module load openmpi/1.6.3/gcc/4.6.3
  module load cuda/5.0
  module load gromacs/4.6
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  #
  grompp -f md.mdp -c npt.gro -t npt.cpt -p topol -o md.tpr
  mpirun -np 4 mdrun_mpi -gpu_id 01 -deffnm md -v
  #
  # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  --- END ---
 
  *Here is an extract of the md.log:*
 
  --- START ---
  Using 4 MPI processes
  Using 4 OpenMP threads per MPI process
 
  Detecting CPU-specific acceleration.
  Present hardware specification:
  Vendor: GenuineIntel
  Brand:  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  Family:  6  Model: 45  Stepping:  7
  Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
 nonstop_tsc
  pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2
 ssse3
  tdt x2apic
  Acceleration most likely to fit this hardware: AVX_256
  Acceleration selected at GROMACS compile time: AVX_256
 
 
  2 GPUs detected on host en001:
#0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
#1: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
 
 
  ---
  Program mdrun_mpi, VERSION 4.6
  Source code file:
 
 /lunarc/sw/erik/src/gromacs/gromacs-4.6/src/gmxlib/gmx_detect_hardware.c,
  line: 322
 
  Fatal error:
  Incorrect launch configuration: mismatching number of PP MPI processes
 and
  GPUs per node.
 

 per node is critical here.


  mdrun_mpi was started with 4 PP MPI processes per node, but you
 provided 2
  GPUs.
 

 ...and here. As far as mdrun_mpi knows from the MPI system there's only
 MPI
 ranks on this one node.

 For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
  ---
  --- END ---
 
  As you can see, gmx is having trouble understanding that there's a
 second
  node available. Note that since I did not specify -ntomp, it assigned 4
  threads to each of the 4 mpi processes (filling the entire avail. 16
 CPUs
  *on
  one node*).
  For the same exact submission, if I do set -ntomp 8 (since I have 4
 MPI
  procs * 8 OpenMP threads = 32 CPUs total on the 2 nodes) I get a warning
  telling me that I'm hyperthreading, which can only mean that *gmx is
  assigning all processes to the first node once again.*
  Am I doing something wrong or is there some problem with gmx-4.6? I
 guess
  it can only be my fault, since I've never seen anyone else complaining
  about the same issue here.
 

 Assigning MPI processes to nodes is a matter configuring your MPI. GROMACS
 just follows the MPI system information it gets from MPI - hence the
 oversubscription. If you assign two MPI processes to each node, then
 things
 should work.

 Mark




 --
 João Henriques




-- 
João Henriques


[gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-04 Thread João Henriques
Dear all,

Since gmx-4.6 came out, I've been particularly interested in taking
advantage of the native GPU acceleration for my simulations. Luckily, I
have access to a cluster with the following specs PER NODE:

CPU
2 E5-2650 (2.0 GHz, 8-core)

GPU
2 Nvidia K20

I've become quite familiar with the heterogeneous parallelization and
multiple MPI ranks per GPU schemes on a SINGLE NODE. Everything works
fine, no problems at all.

Currently, I'm working with a nasty system comprising 608159 TIP3P water
molecules, and it would really help to speed things up a bit.
Therefore, I would really like to try to parallelize my system over
multiple nodes and keep the GPU acceleration.

I've tried many different command combinations, but mdrun seems to be blind
towards the GPUs existing on other nodes. It always finds GPUs #0 and #1 on
the first node and tries to fit everything into these, completely
disregarding the existence of the other GPUs on the remaining requested
nodes.

Once again, note that all nodes have exactly the same specs.

Literature on the official gmx website is not, well... you know... in-depth,
and I would really appreciate it if someone could shed some light on this
subject.

Thank you,
Best regards,

-- 
João Henriques


[gmx-users] Restarting a REMD simulation (error)

2013-04-08 Thread João Henriques
Dear all,

Due to cluster wall-time limitations, I was forced to restart two REMD
simulations. They ran absolutely fine until hitting the wall-time. To restart,
I used the following command:

mpirun -np 64 -output-filename MPIoutput $GromDir/mdrun_mpi -s H5_.tpr
-multi 64 -replex 1000 -deffnm H5_ -cpi -noappend

(I'm using GMX-4.0.7 and yes I know it's old but I have my own reasons for
using it.)

Here is a random replica (#1) MPI output:

##START###
NNODES=64, MYRANK=1, HOSTNAME=an091
NODEID=1 argc=11
Checkpoint file is from part 1, new output files will be suffixed part0002.
Reading file H5_1.tpr, VERSION 4.0.7 (single precision)

Reading checkpoint file H5_1.cpt generated: Wed Apr  3 17:13:14 2013

---
Program mdrun_mpi, VERSION 4.0.7
Source code file: main.c, line: 116

Fatal error:
The 64 subsystems are not compatible

---

Error on node 1, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 1 out of 64
##END###

It's reading from the correct cpt and tpr files, so it must be something
else.

Here is a tail of the respective log file:

##START###
Initializing Replica Exchange
Repl  There are 64 replicas:
Multi-checking the number of atoms ... OK
Multi-checking the integrator ... OK
Multi-checking init_step+nsteps ... OK
Multi-checking first exchange step: init_step/-replex ...
first exchange step: init_step/-replex is not equal for all subsystems
  subsystem 0: 3062
  subsystem 1: 3062
  subsystem 2: 3062
  subsystem 3: 3062
  subsystem 4: 3062
  subsystem 5: 3062
  subsystem 6: 3062
  subsystem 7: 3062
  subsystem 8: 3062
  subsystem 9: 3062
  subsystem 10: 3062
  subsystem 11: 3062
  subsystem 12: 3062
  subsystem 13: 3062
  subsystem 14: 3062
  subsystem 15: 3062
  subsystem 16: 3062
  subsystem 17: 3062
  subsystem 18: 3062
  subsystem 19: 3062
  subsystem 20: 3062
  subsystem 21: 3062
  subsystem 22: 3062
  subsystem 23: 3062
  subsystem 24: 3062
  subsystem 25: 3062
  subsystem 26: 3062
  subsystem 27: 3062
  subsystem 28: 3062
  subsystem 29: 3062
  subsystem 30: 3062
  subsystem 31: 3062
  subsystem 32: 3062
  subsystem 33: 3062
  subsystem 34: 3062
  subsystem 35: 3062
  subsystem 36: 3062
  subsystem 37: 3062
  subsystem 38: 3062
  subsystem 39: 3066
  subsystem 40: 3062
  subsystem 41: 3062
  subsystem 42: 3062
  subsystem 43: 3062
  subsystem 44: 3062
  subsystem 45: 3062
  subsystem 46: 3062
  subsystem 47: 3062
  subsystem 48: 3062
  subsystem 49: 3062
  subsystem 50: 3062
  subsystem 51: 3062
  subsystem 52: 3062
  subsystem 53: 3062
  subsystem 54: 3062
  subsystem 55: 3062
  subsystem 56: 3062
  subsystem 57: 3062
  subsystem 58: 3062
  subsystem 59: 3062
  subsystem 60: 3062
  subsystem 61: 3062
  subsystem 62: 3062
  subsystem 63: 3062

---
Program mdrun_mpi, VERSION 4.0.7
Source code file: main.c, line: 116

Fatal error:
The 64 subsystems are not compatible

---
##END###

It's clear that "init_step/-replex is not equal for all subsystems" is the
problem, but does anyone know why this is happening and how to solve it?

Thank you for your patience,
Best regards,

João Henriques


Re: [gmx-users] Restarting a REMD simulation (error)

2013-04-08 Thread João Henriques
Thank you very much. I didn't notice it until now considering all those
numbers look so similar. Great eye for detail!

João
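
For anyone who hits the same error, here is a rough sketch of the repair Mark
describes below. The checkpoint names (H5_39.cpt / H5_39_prev.cpt) are an
assumption based on -deffnm H5_ and -multi 64, so check what your run actually
wrote first:

# back up before touching anything
cp -a H5_39.cpt H5_39.cpt.orig
# see which step each checkpoint holds (Mark's gmxcheck suggestion)
gmxcheck -f H5_39.cpt
gmxcheck -f H5_39_prev.cpt
# if the _prev checkpoint matches the step the other 63 replicas are at,
# roll replica 39 back to it so all subsystems restart from the same point
cp H5_39_prev.cpt H5_39.cpt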


On Mon, Apr 8, 2013 at 3:17 PM, Mark Abraham mark.j.abra...@gmail.comwrote:

 On Apr 8, 2013 8:53 AM, João Henriques joao.henriques.32...@gmail.com
 wrote:
 
  Dear all,
 
  Due to cluster wall-time limitations, I was forced to restart two REMD
  simulations. It ran absolutely fine until hitting the wall-time. To
 restart
  I used the following command:
 
  mpirun -np 64 -output-filename MPIoutput $GromDir/mdrun_mpi -s H5_.tpr
  -multi 64 -replex 1000 -deffnm H5_ -cpi -noappend
 
  (I'm using GMX-4.0.7 and yes I know it's old but I have my own reasons
 for
  using it.)
 
  Here is a random replica (#1) MPI output:
 
  ##START###
  NNODES=64, MYRANK=1, HOSTNAME=an091
  NODEID=1 argc=11
  Checkpoint file is from part 1, new output files will be suffixed
 part0002.
  Reading file H5_1.tpr, VERSION 4.0.7 (single precision)
 
  Reading checkpoint file H5_1.cpt generated: Wed Apr  3 17:13:14 2013
 
  ---
  Program mdrun_mpi, VERSION 4.0.7
  Source code file: main.c, line: 116
 
  Fatal error:
  The 64 subsystems are not compatible
 
  ---
 
  Error on node 1, will try to stop all the nodes
  Halting parallel program mdrun_mpi on CPU 1 out of 64
  ##END###
 
  It's reading from the correct cpt and tpr files, so it must be something
  else.
 
  Here is a tail of the respective log file:
 
  ##START###
  Initializing Replica Exchange
  Repl  There are 64 replicas:
  Multi-checking the number of atoms ... OK
  Multi-checking the integrator ... OK
  Multi-checking init_step+nsteps ... OK
  Multi-checking first exchange step: init_step/-replex ...
  first exchange step: init_step/-replex is not equal for all subsystems
subsystem 0: 3062
subsystem 1: 3062
subsystem 2: 3062
subsystem 3: 3062
subsystem 4: 3062
subsystem 5: 3062
subsystem 6: 3062
subsystem 7: 3062
subsystem 8: 3062
subsystem 9: 3062
subsystem 10: 3062
subsystem 11: 3062
subsystem 12: 3062
subsystem 13: 3062
subsystem 14: 3062
subsystem 15: 3062
subsystem 16: 3062
subsystem 17: 3062
subsystem 18: 3062
subsystem 19: 3062
subsystem 20: 3062
subsystem 21: 3062
subsystem 22: 3062
subsystem 23: 3062
subsystem 24: 3062
subsystem 25: 3062
subsystem 26: 3062
subsystem 27: 3062
subsystem 28: 3062
subsystem 29: 3062
subsystem 30: 3062
subsystem 31: 3062
subsystem 32: 3062
subsystem 33: 3062
subsystem 34: 3062
subsystem 35: 3062
subsystem 36: 3062
subsystem 37: 3062
subsystem 38: 3062
subsystem 39: 3066

 Seems system 39 got its IO done faster. Its state_prev.cpt will be 3062.
 Back up your files. Use gmxcheck to see what's in files. Rename as suitable
 so your set of files is consistent.

 Mark

subsystem 40: 3062
subsystem 41: 3062
subsystem 42: 3062
subsystem 43: 3062
subsystem 44: 3062
subsystem 45: 3062
subsystem 46: 3062
subsystem 47: 3062
subsystem 48: 3062
subsystem 49: 3062
subsystem 50: 3062
subsystem 51: 3062
subsystem 52: 3062
subsystem 53: 3062
subsystem 54: 3062
subsystem 55: 3062
subsystem 56: 3062
subsystem 57: 3062
subsystem 58: 3062
subsystem 59: 3062
subsystem 60: 3062
subsystem 61: 3062
subsystem 62: 3062
subsystem 63: 3062
 
  ---
  Program mdrun_mpi, VERSION 4.0.7
  Source code file: main.c, line: 116
 
  Fatal error:
  The 64 subsystems are not compatible
 
  ---
  ##END###
 
  It's clear that "init_step/-replex is not equal for all subsystems" is the
  problem, but does anyone know why this is happening and how to solve it?
 
  Thank you for your patience,
  Best regards,
 
  João Henriques




-- 
João Henriques

Re: [gmx-users] replica exchange data in cpt file

2013-04-05 Thread João Henriques
Thank you Mark and Francesco. I will be pleased to contribute with a fix in
case I come up with a fairly general solution for this issue.

Best regards,
João Henriques
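
In case it helps someone else, a minimal sketch of the post-processing Mark
suggests below, assuming -noappend restarts and hypothetical file names; whether
demux.pl digests the concatenated log as-is is exactly the part that may need a
small fix:

# restart with: mdrun ... -cpi -noappend   (so the new log goes to md0.part0002.log, etc.)
cat md0.log md0.part0002.log > md0_all.log   # stitch replica 0's log parts back together
demux.pl md0_all.log                         # should yield replica_index.xvg / replica_temp.xvg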


On Thu, Apr 4, 2013 at 7:45 PM, Mark Abraham mark.j.abra...@gmail.comwrote:

 Demux.pl pre-dates .cpt and -append. The only solution is to preserve your
 .log files (e.g. with -noappend) and post-process. If you do that by
 modifying demux.pl, please consider contributing your fix back.

 Mark
 On Apr 4, 2013 1:42 PM, francesco oteri francesco.ot...@gmail.com
 wrote:

  If your -append option is activated (the default is yes),
  maybe Demux.pl reads the exchanges from the .log, taking into account the
  time in the log, and so you don't need to do anything.
  But I don't know how Demux.pl works :(
 
  Francesco
 
  2013/4/4 João Henriques joao.henriques.32...@gmail.com
 
   That's terrible! I was just about to restart 2 hefty REMD
 simulations...
   Maybe I can move the original log files somewhere and combine them with
  the
   restart ones afterwards by using a script. It's just an idea, because I
   need to run Demux.pl on the final concatenated log file.
  
   Any other issues I should be aware of?
  
   Best regards,
   João Henriques
  
  
   On Thu, Apr 4, 2013 at 12:18 PM, francesco oteri
   francesco.ot...@gmail.comwrote:
  
This is what I meant,
in particular it is a problem when I want to analyze
the data regarding the exchange probability.
   
Francesco
   
2013/4/4 João Henriques joao.henriques.32...@gmail.com
   
 So let me see if I understood what Francesco said correctly.
   Restarting a
 REMD job after hitting the cluster wall-time limit resets the
   information
 stored in the log files? Can someone shed some light on this
 subject?

 Best regards,
 João Henriques

   
   
   
--
Cordiali saluti, Dr.Oteri Francesco
   
  
  
  
   --
   João Henriques
  
 
 
 
  --
  Cordiali saluti, Dr.Oteri Francesco




-- 
João Henriques


Re: [gmx-users] replica exchange data in cpt file

2013-04-04 Thread João Henriques
So let me see if I understood what Francesco said correctly. Restarting a
REMD job after hitting the cluster wall-time limit resets the information
stored in the log files? Can someone shed some light on this subject?

Best regards,
João Henriques


Re: [gmx-users] replica exchange data in cpt file

2013-04-04 Thread João Henriques
That's terrible! I was just about to restart 2 hefty REMD simulations...
Maybe I can move the original log files somewhere and combine them with the
restart ones afterwards by using a script. It's just an idea, because I
need to run Demux.pl on the final concatenated log file.

Any other issues I should be aware of?

Best regards,
João Henriques


On Thu, Apr 4, 2013 at 12:18 PM, francesco oteri
francesco.ot...@gmail.comwrote:

 This is what I meant,
 in particular it is a problem when I want to analyze
 the data regarding the exchange probability.

 Francesco

 2013/4/4 João Henriques joao.henriques.32...@gmail.com

  So let me see if I understood what Francesco said correctly. Restarting a
  REMD job after hitting the cluster wall-time limit resets the information
  stored in the log files? Can someone shed some light on this subject?
 
  Best regards,
  João Henriques
 



 --
 Cordiali saluti, Dr.Oteri Francesco




-- 
João Henriques


Re: [gmx-users] do_dssp Segmentation fault

2012-11-12 Thread João Henriques
Hello,

do_dssp (4.5.5) is broken. There are two possible answers you're gonna get
here:

1) Use old dssp, which you are using.
2) You're an idiot, which you are not.

What I did to solve the problem was to download gmx from git and substitute
the /src/tools/do_dssp.c of gmx 4.5.5 with the one from the git version.
Re-compile it and voila! This do_dssp version accepts both the old and the new
dssp, and you specify which version with the -ver flag, if I remember correctly.

This worked perfectly for me. I hope it helps you as well.
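
In case the above is too terse, a rough sketch of the same recipe (the git URL
and the -ver syntax are from memory, so double-check both):

# grab the development sources and drop the newer do_dssp.c into the 4.5.5 tree
git clone git://git.gromacs.org/gromacs.git gromacs-git
cp gromacs-git/src/tools/do_dssp.c gromacs-4.5.5/src/tools/do_dssp.c
# re-compile and install 4.5.5 as usual, then point GROMACS at your dssp binary
export DSSP=/usr/local/bin/dssp
do_dssp -f md.xtc -s md.tpr -o dssp.xpm -ver 1   # 1 = old dssp, 2 = new dssp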

All the best,
João Henriques

On Mon, Nov 12, 2012 at 8:38 AM, mshappy1986 mshappy1...@126.com wrote:

 Hi all,
    I am getting the following error in Gromacs 4.5.5 with do_dssp.
    Here is the command:
    do_dssp -f md.xtc -s md.tpr -o dssp.xpm
    which gives me the following error:
    segmentation fault
    I have downloaded the executable DSSP from
 http://swift.cmbi.ru.nl/gv/dssp/ and set the environment variable, but
  do_dssp did not work.
   How can I fix it?
   Thanks a lot














-- 
João Henriques


Re: [gmx-users] dssp and gromacs version

2012-03-02 Thread João Henriques
Gromacs 4.5.5's do_dssp is broken. There is a patch for it in git. I haven't
tried it out, so I can't say for sure it works. This is not a DSSP problem,
it's a Gromacs problem. Just to reinforce the idea, the new DSSP should not
be used, no matter which Gromacs version you're using.

Best regards,
Joao Henriques


On Fri, Mar 2, 2012 at 9:10 AM, lina lina.lastn...@gmail.com wrote:

 On Fri, Mar 2, 2012 at 3:55 PM, Mark Abraham mark.abra...@anu.edu.au
 wrote:
  On 2/03/2012 6:52 PM, lina wrote:
 
  Hi,
 
  is the old dssp not compatible with the gromacs 4.5.5 ?
 
  I am confused,
 
  Thanks,
 
  Best regards,
 
 
  The new DSSP is not compatible with any GROMACS

 Ha ... dssp complains with a segmentation fault in 4.5.5, but runs
 smoothly in 4.5.3.

 IIRC, seems there were some threads talked about that before.

 
  Mark




-- 
João Henriques

Re: Antw: Re: Re: [gmx-users] constant PH simulations

2011-09-07 Thread João Henriques
There is no problem in being wrong. The problem is that he wants to be
wrong. At least 4 different researchers gave constructive input and
this subject keeps hitting the same key. I've always been told that
worse than not knowing, is not wanting to know.

Still, I apologize for my outburst.

Best regards,

On Wed, Sep 7, 2011 at 1:54 AM, Mark Abraham mark.abra...@anu.edu.au wrote:
 On 7/09/2011 3:53 AM, João Henriques wrote:

 I guess someone has been living in a cave for the past decade or so...

 Please keep contributions to the mailing list constructive :-) Everyone's
 been wrong before!

 Mark




-- 
João Henriques


Re: Re: Antw: Re: Re: [gmx-users] constant PH simulations

2011-09-07 Thread João Henriques
Why don't you read the papers associated with the link everyone keeps
sending you!!!
Stop it with the autistic behavior.

http://www.gromacs.org/Documentation/How-tos/Constant_pH_Simulation

Here is the main paper regarding the CpH-MD method I'm currently using:

http://jcp.aip.org/resource/1/jcpsa6/v117/i9/p4184_s1

Here's a tip: searching google scholar a little before emailing
everyone in the list should prove useful.

Cheers,

On Wed, Sep 7, 2011 at 11:01 AM, Emanuel Peter
emanuel.pe...@chemie.uni-regensburg.de wrote:
 At first I would like to say that I deeply apologize for
 the cave-like things I have said. I say again that this
 is not a field I am deeply involved in.
 From Gerrit I got a banana.
 For this guy, I am a cave-man.

 Thanks for being so ready for open discussion.

 I did not say that I do not want to be wrong.
 Questions, which are included in my doubts:

 Generalized forces and averages for H+ interchange ?
 Comparison with titration experiments ?
 Is there any experimental evidence for the rates of
 interchange ?
 Are simulation-times or the periods of interchange at
 any time realistic?
 Are equilibria sampled well, with such interchanges,
 or are there jumps in free energy by this interchange ?
 Why is there no free H+ ?

 Thanks for your kind and very constructive criticisms.

 I would appreciate it if this so-called discussion would
 find an end.
 I am deeply depressed about such comments and
 I will not take part in any users discussion in the future.
 It makes no sense, because talking like this expresses
 the way science is done today: repelling the
 person who does not walk the common way.
 And:

 Maybe I have lived in a cave, but someone like you, who answers in
 such a way, IS A CAVEMAN !

 João Henriquesjoao.henriques.32...@gmail.com 07.09.11 11.30 Uhr 
 There is no problem in being wrong. The problem is that he wants to be
 wrong. At least 4 different researchers gave constructive input and
 this subject keeps hitting the same key. I've always been told that
 worse than not knowing, is not wanting to know.

 Still, I apologize for my outburst.

 Best regards,

 On Wed, Sep 7, 2011 at 1:54 AM, Mark Abraham mark.abra...@anu.edu.au
 wrote:
 On 7/09/2011 3:53 AM, João Henriques wrote:

 I guess someone has been living in a cave for the past decade or so...

 Please keep contributions to the mailing list constructive :-) Everyone's
 been wrong before!

 Mark




 --
 João Henriques





-- 
João Henriques


Re: Antw: Re: Re: [gmx-users] constant PH simulations

2011-09-06 Thread João Henriques
No force field on earth is able to reproduce pH realistically through
explicit H+.
You can only impose the pH of your system through the protonation states
of its constituent parts.

I guess someone has been living in a cave for the past decade or so...

Cheers,
-- 
João Henriques


Re: [gmx-users] Changing the pH

2011-05-10 Thread João Henriques
Hi Tanos,

Perhaps you should try one of the available constant-pH MD methods. Here is
a quick explanation of what they do and who is involved in their development
right now:

http://www.gromacs.org/Documentation/How-tos/Constant_pH_Simulation

My opinion on this subject may be a little biased, since I've worked with
one of the most active researchers in this area, but I would suggest you
read some papers by Antonio Baptista and/or Miguel Machuqueiro, from
Baptista's initial 2002 "Constant-pH molecular dynamics using stochastic
titration" paper onwards.
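
For orientation only: the stochastic-titration scheme in that paper alternates,
in each cycle, (i) a Poisson-Boltzmann/Monte Carlo sampling of protonation
states for the current conformation at the target pH, (ii) a short
solvent-relaxation MD with the solute restrained, and (iii) a normal MD
segment. A very rough shell sketch of that cycle is below; the
protonation-state step is a hypothetical placeholder script (the PB/MC
titration code is not part of GROMACS), and all file names and .mdp inputs
are illustrative.

  # Rough structural sketch of a stochastic-titration constant-pH run.
  # "update_protonation.sh" is a hypothetical placeholder for an external
  # PB/MC titration step; it is NOT a GROMACS tool.
  PH=6.5
  cp start.gro current.gro
  for cycle in $(seq 1 100); do
      echo "== constant-pH cycle $cycle =="
      # (i) sample new protonation states for the current conformation
      #     at pH $PH and update the topology accordingly (placeholder)
      ./update_protonation.sh current.gro topol.top $PH
      # (ii) brief solvent relaxation with the solute restrained, so the
      #      water adjusts to the new charge distribution
      grompp -f relax.mdp -c current.gro -p topol.top -o relax.tpr
      mdrun -deffnm relax
      # (iii) unrestrained MD segment; its final frame seeds the next cycle
      grompp -f segment.mdp -c relax.gro -p topol.top -o segment.tpr
      mdrun -deffnm segment
      cp segment.gro current.gro
  done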

Best regards,

On Tue, May 10, 2011 at 7:22 AM, Justin A. Lemkul jalem...@vt.edu wrote:



 Tanos Celmar Costa Franca wrote:


 Dear GROMACS users,
 Does someone know how to proceed to change the pH of an MD simulation from
 the physiological one (7.4) to 6.5?


 Classical MD simulations do not currently allow for dynamic proton
 exchange. The best you can do at the moment is use pdb2gmx to set the
 protonation state of titratable residues.  At pH 6.5, histidines will be
 quite annoying to deal with properly.
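
For reference, the protonation states are chosen when the topology is
generated. With GROMACS 4.x-era command names (the file names are
placeholders, and the flags simply switch on interactive selection for each
titratable residue type), that might look like:

  # prompt for the protonation state of every Lys, Asp, Glu and His while
  # building the topology; choose states appropriate for the target pH
  pdb2gmx -f protein.pdb -o processed.gro -p topol.top -lys -asp -glu -his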

 -Justin



 Tanos Celmar Costa Franca - D.Sc.
 Coordinator of the Graduate Program in Chemistry
 Chemical Engineering Section - SE/5
 Instituto Militar de Engenharia - IME
 Rio de Janeiro - RJ
 Brazil





 


 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 





-- 
João Henriques, MSc in Biochemistry
Faculty of Science of the University of Lisbon
Department of Chemistry and Biochemistry
C8 Building, Room 8.5.47
Campo Grande, 1749-016 Lisbon, Portugal
E-mail: joao.henriques.32...@gmail.com / jmhenriq...@fc.ul.pt
http://intheochem.fc.ul.pt/members/joaoh.html

Re: [gmx-users] Adding water to protein to start the simulation process

2011-04-15 Thread João Henriques
With all due respect, this is clearly a RT*M moment.

* = F

Joao Henriques

On Fri, Apr 15, 2011 at 1:03 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Monisha Hajra wrote:

 Hi Justin,

 I am trying to follow the protocol. I find the
 http://nmr.chem.uu.nl/~tsjerk/course/molmod/md.html#production link
 more useful than the GROMACS website itself.


 Clearly.  This is one of many tutorials linked from the site I posted
 before.

 However, I am stuck at one step, which is described here:
 http://nmr.chem.uu.nl/~tsjerk/course/molmod/analysis.html

 I am not able to understand how to create the traj.trr, traj.xtc and
 ener.edr files. Everything else is self-explanatory in the previous link.


 These files are output by mdrun, i.e. actually running a simulation.
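
For example, with GROMACS 4.x-era commands (file names are placeholders;
traj.xtc is only written if nstxtcout is non-zero in the .mdp file), a run
that produces those files might look like:

  # assemble the run input and run it; by default mdrun writes traj.trr,
  # traj.xtc, ener.edr, md.log and confout.gro
  grompp -f md.mdp -c solvated.gro -p topol.top -o topol.tpr
  mdrun -s topol.tpr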

 -Justin

 Really appreciate any help.

 Regards
 Monisha


 On Fri, Apr 15, 2011 at 8:06 PM, Justin A. Lemkul jalem...@vt.edu wrote:



Monisha Hajra wrote:

Hi User,

I have a protein which I have modeled by homology modelling. The
modeled protein has no water molecules in its surrounding
environment.

How should I add water molecules so that I can start the
simulation process?


Please refer to the abundant tutorial material on the website:

http://www.gromacs.org/Documentation/Tutorials#General_GROMACS_Use

 http://www.gromacs.org/Documentation/How-tos/Steps_to_Perform_a_Simulation
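
As a rough illustration of what those tutorials cover, a minimal solvation
workflow with GROMACS 4.x-era tools might look like this (file names, box
size and water model are placeholders, not recommendations):

  # build a topology, put the protein in a box, and fill the box with water
  pdb2gmx -f model.pdb -o processed.gro -p topol.top -water spc
  editconf -f processed.gro -o boxed.gro -c -d 1.0 -bt cubic
  genbox -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top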

-Justin

Regards
Monisha


-- 

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin





 --
 

 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 
