[gmx-users] about command do_dssp

2013-04-28 Thread aixintiankong
Dear prof.
 I use GROMACS 4.6.1 on my CentOS 6.4 system. After the MD run finished, I ran
do_dssp -f md.xtc -s md.tpr -o secondary-structure.xpm -sc
secondary-structure.xvg to analyze the secondary structure of the protein.
When I run do_dssp and select MainChain, the following fatal error appears:

Fatal error:
DSSP executable (usr/local/bin/dssp) does not exist (use setenv DSSP)

Does this mean that I haven't installed do_dssp? However, when I run which
do_dssp in the terminal, it prints /usr/local/gromacs-4.6.1/bin/do_dssp. I
don't know what is wrong. Should I reset the environment for do_dssp?
   Please help me!


Re: [gmx-users] Including polarizability by a Drude oscillator or the shell model

2013-04-28 Thread David van der Spoel

On 2013-04-27 19:18, Andrew DeYoung wrote:

Hi,

I am interested in including polarizability using a Drude harmonic
oscillator (charge on a spring).  In section 3.5 of the version 4.5.4
manual, Shell molecular dynamics is described briefly.  It seems that the
shell model is quite similar, if not identical, to the Drude oscillator.
However, I do not see in the manual where the use of the shell model in
Gromacs simulations is described.  Do you know if some sort of tutorial
exists about the use of the shell model (or the Drude oscillator) in
Gromacs?

Thank you so much for your time!

Andrew DeYoung
Carnegie Mellon University


Check here for some examples
http://virtualchemistry.org/pol.php
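
For a first look at what this involves in a topology, here is a purely
illustrative fragment (the atom numbers and the polarizability value are
invented for the example, so do not copy it as-is); GROMACS couples a
shell/Drude particle to its parent atom through the [ polarization ]
directive:

[ polarization ]
;  ai    aj   funct   alpha (nm^3)
    1     2     1      0.001
; atom 2 would be given particle type S (shell) in its [ atomtypes ] entry
; and would carry the shell charge

The examples linked above show the full format in working topologies.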

--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


[gmx-users] can we use large timestep for membrane GPU simulation?

2013-04-28 Thread Albert

Hello:

 I am watching Eric's online Gromacs GPU webinar these days. I noticed that
he talked about introducing a large timestep (5 fs) for GPU simulations of a
water system. I am just wondering: can we also use such a big time step for a
membrane system if we are going to run the job in Gromacs?


What's more, Eric also showed a GLIC ion channel simulation with 150,000
atoms; an E5-2690 + GTX Titan can get up to 38 ns/day. But he didn't talk
about the timestep and cutoff that were used.


Could anybody comment on this?

thank you very much
best
Albert


Re: [gmx-users] about command do_dssp

2013-04-28 Thread Justin Lemkul



On 4/28/13 4:07 AM, aixintiankong wrote:

Dear prof.
  I use GROMACS 4.6.1 on my CentOS 6.4 system. After the MD run finished, I
ran do_dssp -f md.xtc -s md.tpr -o secondary-structure.xpm -sc
secondary-structure.xvg to analyze the secondary structure of the protein.
When I run do_dssp and select MainChain, the following fatal error appears:

Fatal error:
DSSP executable (usr/local/bin/dssp) does not exist (use setenv DSSP)

Does this mean that I haven't installed do_dssp? However, when I run which
do_dssp in the terminal, it prints /usr/local/gromacs-4.6.1/bin/do_dssp. I
don't know what is wrong. Should I reset the environment for do_dssp?


do_dssp and dssp are different executables.  Please read the following:

http://www.gromacs.org/Documentation/Gromacs_Utilities/do_dssp
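
As a concrete illustration (the path is only an example; adjust it to wherever
the external dssp binary is actually installed on your machine):

export DSSP=/usr/local/bin/dssp     # bash/zsh; in csh/tcsh: setenv DSSP /usr/local/bin/dssp
do_dssp -f md.xtc -s md.tpr -o secondary-structure.xpm -sc secondary-structure.xvg

Note that do_dssp only calls the external dssp program, so dssp itself has to
be installed first.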

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] why files are so large?

2013-04-28 Thread Albert

Hello:

  I am using the following settings for the output files:

dt          = 0.002     ; Time-step (ps)
nsteps      = 25        ; Number of steps to run (0.002 * 50 = 1 ns)

; Parameters controlling output writing
nstxout     = 500       ; Write coordinates to output .trr file every 2 ps
nstvout     = 500       ; Write velocities to output .trr file every 2 ps
nstfout     = 0
nstxtcout   = 5
nstenergy   = 50        ; Write energies to output .edr file every 2 ps
nstlog      = 50        ; Write output to .log file every 2 ps


and I obtained the following note from grompp:

NOTE 2 [file md.mdp]:
  This run will generate roughly 2791985478365075968 Mb of data


However, when I set
nstxout = 0
nstvout = 0
nstoout = 0

I obtained the following information:

This run will generate roughly -9066 Mb of data

Why is the file size negative? Moreover, my nstxout is quite large, so I
don't know why the output is estimated to be so big, and no matter how I
change nstxout and nstvout, the estimated size doesn't change at all. It
always claims:


  This run will generate roughly 2791985478365075968 Mb of data

thank you very much
Albert



Re: [gmx-users] can we use large timestep for membrane GPU simulation?

2013-04-28 Thread Justin Lemkul



On 4/28/13 6:15 AM, Albert wrote:

Hello:

  I am watching Eric's online Gromacs GPU webinar these days. I noticed that he
talked about introducing a large timestep (5 fs) for GPU simulations of a water
system. I am just wondering: can we also use such a big time step for a
membrane system if we are going to run the job in Gromacs?



The key to using a 5-fs timestep is constraining all bonds and using virtual 
sites.  If you do this, you can use such a timestep with just about any system.
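
In practice that usually looks something like the sketch below (the input file
name is a placeholder and the exact settings depend on your force field, so
treat this as an outline rather than a recipe). Virtual-site hydrogens are
generated when building the topology:

pdb2gmx -f conf.pdb -vsite hydrogens

and the run input then uses the longer step with all bonds constrained:

dt          = 0.005     ; 5 fs
constraints = all-bonds
constraint-algorithm = lincs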



What's more, Eric also showed a GLIC ion channel simulation with 150,000 atoms;
an E5-2690 + GTX Titan can get up to 38 ns/day. But he didn't talk about the
timestep and cutoff that were used.



Can't comment on this part.

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] why files are so large?

2013-04-28 Thread Justin Lemkul



On 4/28/13 8:04 AM, Albert wrote:

Hello:

   I am using the following settings for the output files:

dt          = 0.002     ; Time-step (ps)
nsteps      = 25        ; Number of steps to run (0.002 * 50 = 1 ns)

; Parameters controlling output writing
nstxout     = 500       ; Write coordinates to output .trr file every 2 ps
nstvout     = 500       ; Write velocities to output .trr file every 2 ps
nstfout     = 0
nstxtcout   = 5
nstenergy   = 50        ; Write energies to output .edr file every 2 ps
nstlog      = 50        ; Write output to .log file every 2 ps


and I obtained the following note from grompp:

NOTE 2 [file md.mdp]:
   This run will generate roughly 2791985478365075968 Mb of data


However, when I set
nstxout = 0
nstvout = 0
nstoout = 0

I obtained the following information:

This run will generate roughly -9066 Mb of data

Why is the file size negative? Moreover, my nstxout is quite large, so I don't
know why the output is estimated to be so big, and no matter how I change
nstxout and nstvout, the estimated size doesn't change at all. It always claims:


   This run will generate roughly 2791985478365075968 Mb of data



This looks like a pretty clear bug to me, especially the negative file size, 
which cannot possibly make sense.  What version of Gromacs is this?
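
For comparison, a back-of-the-envelope estimate of what a sensible number
should look like (the atom count and step numbers below are assumed
placeholders, since they are not given in full here): in a single-precision
build a .trr frame stores 3 floats of 4 bytes per atom for coordinates, and
the same again when velocities are written, so roughly

awk 'BEGIN { natoms=100000; nsteps=500000; nstout=500;
             frames = nsteps/nstout;
             printf "~%.0f MB\n", frames*natoms*3*4*2/1e6 }'

which prints about 2400 MB, i.e. a few GB, nothing remotely like
2791985478365075968 Mb, so an overflow in the estimate seems plausible.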


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] why files are so large?

2013-04-28 Thread Albert

On 04/28/2013 02:08 PM, Justin Lemkul wrote:
This looks like a pretty clear bug to me, especially the negative file 
size, which cannot possibly make sense.  What version of Gromacs is this?


-Justin


hi Justin:

 thanks a lot for the kind comments. I am using the latest 4.6.1 with GPU
support.


best
Albert


Re: [gmx-users] why files are so large?

2013-04-28 Thread Justin Lemkul



On 4/28/13 8:09 AM, Albert wrote:

On 04/28/2013 02:08 PM, Justin Lemkul wrote:

This looks like a pretty clear bug to me, especially the negative file size,
which cannot possibly make sense.  What version of Gromacs is this?

-Justin


hi Justin:

  thanks a lot for the kind comments. I am using the latest 4.6.1 with GPU
support.



Please file a bug report on redmine.gromacs.org, including the exact screen 
output and all files necessary to reproduce the problem.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] GPU job often stopped

2013-04-28 Thread Albert

Dear:

  I am running MD jobs on a workstation with 4 K20 GPUs, and I find that the
jobs fail with the following messages from time to time:



[tesla:03432] *** Process received signal ***
[tesla:03432] Signal: Segmentation fault (11)
[tesla:03432] Signal code: Address not mapped (1)
[tesla:03432] Failing at address: 0xfffe02de67e0
[tesla:03432] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) 
[0x7f4666da1cb0]

[tesla:03432] [ 1] mdrun_mpi() [0x47dd61]
[tesla:03432] [ 2] mdrun_mpi() [0x47d8ae]
[tesla:03432] [ 3] 
/opt/intel/lib/intel64/libiomp5.so(__kmp_invoke_microtask+0x93) 
[0x7f46667904f3]

[tesla:03432] *** End of error message ***
--
mpirun noticed that process rank 0 with PID 3432 on node tesla exited on 
signal 11 (Segmentation fault).

--


I can continue the jobs with the mdrun options -append and -cpi, but they
still stop from time to time. I am just wondering what the problem is.


thank you very much
Albert


Re: [gmx-users] GPU job often stopped

2013-04-28 Thread Justin Lemkul



On 4/28/13 11:27 AM, Albert wrote:

Dear:

   I am running MD jobs on a workstation with 4 K20 GPUs, and I find that the
jobs fail with the following messages from time to time:


[tesla:03432] *** Process received signal ***
[tesla:03432] Signal: Segmentation fault (11)
[tesla:03432] Signal code: Address not mapped (1)
[tesla:03432] Failing at address: 0xfffe02de67e0
[tesla:03432] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) 
[0x7f4666da1cb0]
[tesla:03432] [ 1] mdrun_mpi() [0x47dd61]
[tesla:03432] [ 2] mdrun_mpi() [0x47d8ae]
[tesla:03432] [ 3]
/opt/intel/lib/intel64/libiomp5.so(__kmp_invoke_microtask+0x93) [0x7f46667904f3]
[tesla:03432] *** End of error message ***
--
mpirun noticed that process rank 0 with PID 3432 on node tesla exited on signal
11 (Segmentation fault).
--


I can continue the jobs with the mdrun options -append and -cpi, but they still
stop from time to time. I am just wondering what the problem is.



Frequent failures suggest instability in the simulated system.  Check your .log 
file or stderr for informative Gromacs diagnostic information.
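
For example (the log file name is just a placeholder), constraint trouble that
typically precedes this kind of crash can be spotted with:

grep -i -B1 -A3 "LINCS WARNING" md.log
tail -n 100 md.log

Repeated LINCS warnings or rapidly growing energies right before the segfault
would point to the system rather than the GPU code.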


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

