[gmx-users] how to calculate kinetic constant?

2013-10-04 Thread Albert

Hello:

 I've submitted a simulation in GROMACS, and I am just wondering how we 
can calculate the kinetic constants for the ligand binding/unbinding process?


thanks a lot
Albert


Re: [gmx-users] g_select problem

2013-09-28 Thread Albert

On 09/28/2013 09:42 AM, rajat desikan wrote:

Hi,
I am not sure, but try resname SOL or something similar. Also, your first
line has waterO and second line has water0.


Hi Rajat:

thanks a lot for such kind and helpful comments. It works now. I ran it 
with this command:


 g_select_mpi -f ../md_pbc_center.xtc -s ../md.tpr -on density.ndx -sf 
select.dat -os size.xvg -oc cfrac.xvg -oi index.dat
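
For reference, a corrected select.dat along the lines of Rajat's fix might 
look like this (a sketch only: the residue name SOL and the quoted group 
name are assumptions, and note that selection coordinates are in nm, so 
the 30/70 bounds should be double-checked):

waterO = resname SOL and name OW and z > 30 and z < 70;
close = waterO and within 0.4 of group "Protein";
close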


I am trying to compute the density of the water molecules inside my 
protein with g_density:


g_density_mpi -f ../md_pbc_center.xtc -s ../md.tpr -ei density.ndx -o 
density.xvg -n density.ndx


but it asks me which group of water molecules I would like to analyze. Do you 
have any idea how to do this correctly?
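
For reference, a density profile along one axis can typically be requested 
like this (a sketch: -d, -sl and -dens are standard g_density options in 
4.6, but the output name and the choice of group are assumptions):

g_density_mpi -f ../md_pbc_center.xtc -s ../md.tpr -n density.ndx 
-o water_z.xvg -d Z -sl 100 -dens number

At the interactive prompt, the group written by g_select into density.ndx 
would be the one to pick.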


thank you very much.

best
Albert



[gmx-users] g_select problem

2013-09-28 Thread Albert
Hello:

 I am trying to analyze the water density along the Z direction of my protein.
Here is my g_select command:

g_select_mpi -f ../md_pbc_center.xtc -s ../md.tpr -on density.ndx -sf
select.dat

and here is my select.dat:

waterO = water and name OW and z>30 and z<70;
close = water0 and within 0.4 of group Protein;
close

it failed with messages:

Reading file ../md.tpr, VERSION 4.6.3 (single precision)
Reading file ../md.tpr, VERSION 4.6.3 (single precision)
selection parser: syntax error
selection parser: invalid selection 'waterO = water and name OW and z>30
and z<70'

I am just wondering, is there anything wrong with my syntax?

thank you very much.

best
Albert


[gmx-users] identical results or exactly the same results?

2013-09-15 Thread Albert

Hello:

 I've got an md.tpr file generated by grompp, and I am just wondering: 
will the results be identical (exactly the same) if I run it on a 
different machine?


thank you very much.

best
Albert


Re: [gmx-users] is there any tool for flexibility?

2013-09-01 Thread Albert

On 09/01/2013 06:03 PM, Justin Lemkul wrote:
Maybe g_msd or g_rmsf, but I don't think there's anything specifically 
designed to address water molecules, and the term "flexibility" can 
have far too many meanings ;)


-Justin



thanks a lot for kind messages.

but there are a lot of water molecules in that region. Moreover, those 
water molecules can exchange with the bulk. In this case, it would be very 
difficult to use g_msd or g_rmsf for this purpose.


I don't know which tool or method would be good for describing this 
observation...
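
One possible proxy (a sketch using the dynamic selections discussed 
elsewhere in this archive; the region bounds, the water atom name OW and 
the 0.5 nm cutoff are assumptions): track the per-frame occupancy of the 
region with g_select, whose fluctuations reflect how quickly waters exchange.

region.dat:
name OW and z > 3.0 and z < 7.0 and within 0.5 of group "Protein"

g_select -f md.xtc -s md.tpr -sf region.dat -os occupancy.xvg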



[gmx-users] is there any tool for flexibility?

2013-09-01 Thread Albert

Hello:

 I've finished an MD run with GROMACS and found that the water molecules 
inside a certain region of my protein are very flexible (they move very 
fast), while in another region they are very stable. I am just wondering: 
is there any tool in GROMACS that can differentiate the water molecules' 
flexibility?


thank you very much
best
Albert


Re: [gmx-users] efficiency in HPC

2013-08-29 Thread Albert

On 08/29/2013 10:31 PM, Prentice Bisbal wrote:

Albert,

It looks like you are trying to compile for an IBM Power 7 system. 
What is the 'other machine' you are comparing it to, and did you use 
the same number of cores in each case?


Prentice


yes, it is. Here is the configuration:

 * 4 IBM POWER7 processors; detailed configuration:
 o architecture ppc64 (specification Power ISA v.2.06)
 o 64 bit
 o 8 cores
 o core clock rate 3.83 GHz
 o up to 4 SMT threads per core
 o 64 kB (32 kB instructions + 32 kB data) cache L1 per core
 o 256 kB cache L2 per core
 o 4 MB cache L3 per core
 * SMP
 * 32 cores
 * 128 GB RAM
 * theoretical peak performance 980 GFlop/s
 * dedicated operating system AIX 7.1


I compared it with the following machine; obviously its hardware is weaker 
than the one above, yet its efficiency is much better. There must be 
something wrong with the compilation.


AMD Quad-Core Opteron 835X nodes
x86_64 architecture, 432 nodes
16 cores each
16/32 GB memory per node

THX

Albert




[gmx-users] efficiency in HPC

2013-08-29 Thread Albert

Hello:

 I compiled 4.6.3 on an HPC system with the following steps:


setenv OBJECT_MODE 64
/opt/cmake/2.8.9_new/bin/cmake .. \
 -DCMAKE_INSTALL_PREFIX=/opt/gromacs/4.6.3 -DBUILD_SHARED_LIBS=OFF \
 -DCMAKE_BUILD_TYPE=Release -DGMX_SOFTWARE_INVSQRT=OFF -DGMX_GPU=OFF \
 -DGMX_MPI=ON -DGMX_THREAD_MPI=OFF -DGMX_X11=OFF \
 -DGMX_FLOAT_FORMAT_IEEE754=1 -DGMX_IEEE754_BIG_ENDIAN_BYTE_ORDER=1 \
 -DGMX_IEEE754_BIG_ENDIAN_WORD_ORDER=1 \
 -DCMAKE_C_COMPILER="/usr/bin/mpcc_r" -DCMAKE_CXX_COMPILER="/usr/bin/mpCC_r" \
 -DCMAKE_C_FLAGS_RELEASE="-O3 -qstrict -qsimd=auto -qmaxmem=-1 -qarch=pwr7 -qtune=pwr7" \
 -DCMAKE_CXX_FLAGS_RELEASE="-O3 -qstrict -qsimd=auto -qmaxmem=-1 -qarch=pwr7 -qtune=pwr7" \
 -DFFTWF_LIBRARY="/opt/fftw/3.3.2/lib/libfftw3f.a" \
 -DFFTWF_INCLUDE_DIR="/opt/fftw/3.3.2/include" \
 -DCMAKE_LINKER="/usr/bin/mpcc_r"

gmake -j 16
gmake install

When I ran a test, I found that the performance is really bad: a job would 
take more than a year here, while it takes only about two months on 
another machine. Does anybody have any idea where the problem is?


thanks a lot

Albert


Re: [gmx-users] problem of submitting job in HPC

2013-08-28 Thread Albert
hello Mark:

 thanks a lot for the kind advice. Here is the log output of mdrun 
-version; there is always duplicated information, and duplicate files as well:



Program: mdrun_mpi
Program: mdrun_mpi
Program: mdrun_mpi
...
Gromacs version:VERSION 4.6.3
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled
GPU support:disabled
invsqrt routine:(1.0/__sqrt(x))
CPU acceleration:   NONE
FFT library:fftw-3.3.2-fma
Large file support: enabled
RDTSCP usage:   disabled
Built on:   Thu Aug 29 02:03:12 CEST 2013
Built by:   sheed@c2n25-hf0 [CMAKE]
Build OS/arch:  AIX 1 00CCE0564C00
Build CPU vendor:   Unknown
Build CPU brand:Unknown CPU brand
Build CPU family:   0   Model: 0   Stepping: 0
Build CPU features: CannotDetect
C compiler: /usr/bin/mpcc_r XL mpcc_r
C compiler flags:  -qlanglvl=extc99 -qarch=auto -qtune=auto -qthreaded -qalias=noansi -qhalt=e -O3 -qstrict -qsimd=auto -qmaxmem=-1 -qarch=pwr7 -qtune=pwr7
.
.
.





2013/8/28 Mark Abraham 

> On Wed, Aug 28, 2013 at 7:06 PM, Justin Lemkul  wrote:
> >
> >
> > On 8/28/13 12:39 PM, Albert wrote:
> >>
> >> Hello:
> >>
> >>   I am trying to use following command to run 4.6.3 in a HPC cluster:
> >>
> >> mpiexec -n 32 /opt/gromacs/4.6.3/bin/mdrun_mpi  -dlb yes -v -s md.tpr -x
> >> md.xtc
> >> -o md.trr -g md.log -e md.edr  >& md.info
> >>
> >> the 4.5.5 works fine in this machine with command:
> >>
> >> mpiexec -n 32 mdrun -nosum -dlb yes -v -s md.tpr -x md.xtc -o md.trr -g
> >> md.log
> >> -e md.edr  >& md.info
> >>
> >> the difference is that the option "-nosum" is not available in 4.6.3
> >>
> >> but 4.6.3 always failed. It generate a lot of similar files and log
> >> 

Re: [gmx-users] work in 4.5.5 but failed in 4.6.1

2013-08-28 Thread Albert

On 08/28/2013 07:38 PM, Justin Lemkul wrote:


That's not the problem.  It's complaining about whatever is on line 1 
(not clear from the previous message if the comment line is #1 or a 
blank line), so assuming that the #ifdef is in the right place 
(probably is, or the error would be different), it's possible that 
there's some weird hidden character that is causing the error.


-Justin



I see; the problem was solved when I ran dos2unix.
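
For reference, a quick way to expose such hidden characters (a sketch; the 
file name is an example):

cat -A helix.itp | head    # DOS line endings show up as ^M$
dos2unix helix.itp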

thank you very much.

Albert


Re: [gmx-users] work in 4.5.5 but failed in 4.6.1

2013-08-28 Thread Albert

On 08/28/2013 07:25 PM, Justin Lemkul wrote:
Looks normal, so without context of how it is #included, there's not 
much to diagnose here. 



here is my #include in topol.top file:

 ; Include Position restraint file
#ifdef POSRES
#include "restrain.itp"
#endif


I first generated restraints for all the protein CA atoms with genrestr, 
and after that I deleted the atoms I don't want to restrain from the 
output restrain.itp. Could that be the problem? Maybe I should set the 
unwanted atoms' force constants to 0 instead of deleting them from the 
restrain.itp file?


thx a lot

Albert


Re: [gmx-users] work in 4.5.5 but failed in 4.6.1

2013-08-28 Thread Albert

On 08/28/2013 07:07 PM, Justin Lemkul wrote:

WARNING 2 [file helix.itp, line 1]:
   Too few parameters on line (source file toppush.c, line 1501)


Looks concerning - what's line 1?

here are the initial lines:

; position restraints for part of C-alpha of Protein

[ position_restraints ]
;  i    funct    fcx    fcy    fcz
   5        1    300    300    300
  24        1    300    300    300






WARNING 3 [file md.mdp]:
   The sum of the two largest charge group radii (13.715767) is larger than
   rlist (1.00)



How big is your box?  This may very well be a simple periodicity issue.

http://www.gromacs.org/Documentation/Errors#The_sum_of_the_two_largest_charge_group_radii_(X)_is_larger_than.c2.a0rlist_-_rvdw.2frcoulomb 



-Justin 


I've seen this information on the gromacs website. My box is:

   6.96418   6.96418   9.77176

in the last line of my input .gro file.

I don't understand why this warning does not appear in 4.6.3 but makes 
grompp fail in 4.5.5.


thank you very much

Albert



[gmx-users] problem of submitting job in HPC

2013-08-28 Thread Albert

Hello:

 I am trying to use following command to run 4.6.3 in a HPC cluster:

mpiexec -n 32 /opt/gromacs/4.6.3/bin/mdrun_mpi  -dlb yes -v -s md.tpr -x 
md.xtc -o md.trr -g md.log -e md.edr  >& md.info


the 4.5.5 works fine in this machine with command:

mpiexec -n 32 mdrun -nosum -dlb yes -v -s md.tpr -x md.xtc -o md.trr -g 
md.log -e md.edr  >& md.info


the difference is that the option "-nosum" is not available in 4.6.3

but 4.6.3 always fails. It generates a lot of similar files and duplicated 
log information. It looks like mpiexec is invoking a serial (non-MPI) mdrun.
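
A quick way to test that hypothesis (a sketch, assuming ldd is available 
on the system): check that mdrun_mpi is linked against the same MPI 
library that provides mpiexec.

ldd /opt/gromacs/4.6.3/bin/mdrun_mpi | grep -i mpi
which mpiexec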


Does anybody have any idea?

thank you very much.

best
Albert


[gmx-users] work in 4.5.5 but failed in 4.6.1

2013-08-28 Thread Albert

Hello:

I am restraining one part of the protein and trying to generate md.tpr 
with the command:


grompp -f md.mdp -c npt4.gro -n -o md.tpr

it works fine in 4.6.3, but it fails in 4.5.5 with the following warning 
messages:



WARNING 1 [file md.mdp, line 65]:
  Unknown left-hand 'cutoff-scheme' in parameter file
WARNING 2 [file helix.itp, line 1]:
  Too few parameters on line (source file toppush.c, line 1501)
WARNING 3 [file md.mdp]:
  The sum of the two largest charge group radii (13.715767) is larger than
  rlist (1.00)

There were 3 notes
There were 3 warnings

---
Program grompp, VERSION 4.5.5
Source code file: grompp.c, line: 1584

Does anybody have any idea?

thx

Albert



Re: [gmx-users] g_select problem

2013-08-21 Thread Albert

On 08/21/2013 05:06 PM, Justin Lemkul wrote:


Your selection, as Teemu says, is calling for *atoms* named T3P that 
are also *atoms* named O, which is clearly impossible.  The *residue* 
name is T3P.


-Justin 


I see; thanks a lot for pointing that out so clearly. I changed it to

resname T3P

and it works very well now.

thanks all you guys.

best
Albert


Re: [gmx-users] g_select problem

2013-08-21 Thread Albert
but actually the water residue name in my system IS "T3P", and the atom 
name in the water model is "O". I've double-checked this.


thx

Albert

On 08/21/2013 04:59 PM, Teemu Murtola wrote:

If that is really your selection, you are trying to select atoms that have
both name T3P and O, which is clearly impossible, so nothing gets selected.

Best regards,
Teemu




Re: [gmx-users] g_select problem

2013-08-21 Thread Albert

Hello Mark:

 thanks for the kind advice.

 I checked my residue numbers carefully and didn't find anything wrong. 
However, I am not sure whether the syntax of my selection.dat is right, 
since the documentation for this option is not very clear.


thank you very much

Albert


On 08/21/2013 04:35 PM, Mark Abraham wrote:

Simplify your condition gradually and find out which bit is wrong!

Mark




[gmx-users] g_select problem

2013-08-21 Thread Albert

Hello:

 I am trying to calculate the number of water molecules within 3 Å of 
residue 51, and here is my command:


g_select -f md.xtc -s input.pdb -os water.xvg -sf selection.dat

here is my selection.dat:

waterO = name "T3P" and name O;
close = waterO and within 0.3 of resnr 51;
close

the command runs without errors, but I noticed that the water counts in 
the output file are all 0. When I visualize each frame, I see a lot of 
water molecules within the defined region.


Does anybody have any idea what's the problem?

thx

Albert


[gmx-users] itp problem

2013-08-17 Thread Albert

Hello:

 I am trying to restrain part of my protein. First I generate an index 
file with make_ndx, then I use genrestr for this purpose, like:


genrestr -f npt.gro -o helix -n -fc 400

after that I change the following in the topol.top file, like:


; Include Position restraint file
#ifdef POSRES
#include "helix.itp"
#endif

when I run grompp, it gives the following warning:

WARNING 1 [file helix.itp, line 1]:
  Too few parameters on line (source file
  /home/albert/install/source/gromacs-4.6.3/src/kernel/toppush.c, line 
1756)


here is my helix.itp:
; position restraints for System of

[ position_restraints ]
;  i    funct    fcx    fcy    fcz
   5        1    400    400    400
  24        1    400    400    400
  41        1    400    400    400
  48        1    400    400    400
  59        1    400    400    400
  70        1    400    400    400
  86        1    400    400    400
 107        1    400    400    400
 126        1    400    400    400
 140        1    400    400    400
 156        1    400    400    400
 171        1    400    400    400
 190        1    400    400    400
 200        1    400    400    400
 219        1    400    400    400
 229        1    400    400    400
 245        1    400    400    400
 264        1    400    400    400
 274        1    400    400    400


thank you very much.
Albert




Re: [gmx-users] GPU metadynamics

2013-08-15 Thread Albert

On 08/15/2013 11:21 AM, Jacopo Sgrignani wrote:

Dear Albert
to run parallel jobs on multiple GPUs you should use something like this:

mpirun -np (number of parallel sessions on CPU) mdrun_mpi .. -gpu_id 


so you will have 4 calculations, one per GPU.


Jacopo


thanks a lot for the reply, but there is a problem with the following 
command:

mpirun -np 4 mdrun_mpi -s md.tpr -v -g md.log -o md.trr -x md.xtc 
-plumed plumed2.dat -e md.edr -gpu_id 0123


--log---

4 GPUs detected on host node3:
  #0: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #1: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #2: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #3: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible


---
Program mdrun_mpi, VERSION 4.6.3
Source code file: 
/home/albert/install/source/gromacs-4.6.3/src/gmxlib/gmx_detect_hardware.c, 
line: 349


Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes 
and GPUs per node.
mdrun_mpi was started with 1 PP MPI process per node, but you provided 4 
GPUs.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



[gmx-users] GPU metadynamics

2013-08-15 Thread Albert

Hello:

 I've got two GTX 690 GPUs in a workstation (each GTX 690 is a dual-GPU 
card, so four devices are detected), and I compiled gromacs-4.6.3 with 
plumed and MPI support. I am trying to run some metadynamics with mdrun 
using the command:


 mdrun_mpi -s md.tpr -v -g md.log -o md.trr -x md.xtc -plumed 
plumed2.dat -e md.edr


but mdrun can only use 1 GPU as indicated in the log file:



4 GPUs detected on host node3:
  #0: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #1: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #2: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #3: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible


NOTE: potentially sub-optimal launch configuration, mdrun_mpi started 
with less

  PP MPI process per node than GPUs available.
  Each PP MPI process can use only one GPU, 1 GPUs per node will be 
used.


1 GPU auto-selected for this run: #0



I am just wondering how we can use multiple GPUs for this kind of job?

THX
Albert


[gmx-users] how can we control output ?

2013-08-14 Thread Albert

Hello:

 I am running an MD production with the command:

mpirun -np 24 mdrun_mpi -plumed plumed.dat -s md.tpr -v -g md.log -x 
md.xtc -o md.trr -e md.edr


I noticed that it generates two additional files, HILLS and COLVAR, and 
the output in these two files is very frequent, e.g.:

#! FIELDS time cv1 cv2 vbias
0.0000 -1.372322679  0.467140973  0.0
0.0900 -1.225425363  0.486899137  0.02999
0.1800 -1.420091748  0.546218574  0.031921782
0.2700 -1.430565715  0.584925413  0.057605527
0.3600 -1.193568826  0.524875879  0.056731977
0.4500 -1.215992570  0.439122081  0.074018419
0.5400 -1.320514202  0.395002633  0.059704699
0.6300 -1.247566938  0.493606031  0.119831786
0.7200 -1.258976460  0.403684765  0.120678633

AND:
 0.090 -1.225425363  0.486899137  0.08697  0.08697  0.061249998  1.960
 0.180 -1.420091748  0.546218574  0.08697  0.08697  0.061201861  1.960
 0.270 -1.430565715  0.584925413  0.08697  0.08697  0.060562217  1.960
 0.360 -1.193568826  0.524875879  0.08697  0.08697  0.060583858  1.960
 0.450 -1.215992570  0.439122081  0.08697  0.08697  0.060157006  1.960
 0.540 -1.320514202  0.395002633  0.08697  0.08697  0.060510237  1.960
 0.630 -1.247566938  0.493606031  0.08697  0.08697  0.059040397  1.960
 0.720 -1.258976460  0.403684765  0.08697  0.08697  0.059019958  1.960
 0.810 -1.331293344  0.531644762  0.08697  0.08697  0.059053400  1.960
 0.900 -1.477816820  0.541284800  0.08697  0.08697  0.059861982  1.960



I am performing 200 ns MD simulations, so these files will probably become 
too big. I am just wondering how we can set the output frequency for the 
HILLS and COLVAR files?
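
If I remember the PLUMED 1.x input syntax correctly (please verify against 
the PLUMED 1.3 manual), the COLVAR output frequency is set with a PRINT 
directive, while the HILLS file grows by one line per deposited hill (the 
W_STRIDE on the HILLS line). A sketch:

PRINT W_STRIDE 5000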


THX

Albert


Re: [gmx-users] Re: Unkwown Keyword HILLS

2013-08-14 Thread Albert

On 08/14/2013 12:26 PM, Michael Shirts wrote:

This is a plumed error, not a gromacs error. Gromacs never handles those 
keywords.

Sent from my iPhone


I see; thanks, it works now. I found some "strange characters" in the 
plumed.dat file; after deleting them, it works.



[gmx-users] Re: Unkwown Keyword HILLS

2013-08-13 Thread Albert

Does anybody have any idea what the problem is?

I am using the tutorial example and I don't know why it doesn't work.

THX


On 08/13/2013 07:19 PM, Albert wrote:

Dear:

 I am trying to run plumed through the gromacs plugin. Here is my 
plumed.dat file, in which I defined two dihedral angles as CVs:


HILLS HEIGHT 0.3 W_STRIDE 450
WELLTEMPERED SIMTEMP 310 BIASFACTOR 1.96
TORSION LIST 1 4 65 344 SIGMA 0.12
TORSION LIST 2 46 80 656 SIGMA 0.12

ENDMETA

I am using plumed-1.3+gromacs-4.6.2 with command:


mpirun -np 24 mdrun_mpi -s md.tpr -plumed plumed.dat -g md.log -v -x 
md.xtc -o md.trr -e md.edr



but it failed with messages:


starting mdrun 'protein'
1 steps, 20.0 ps.
! PLUMED ERROR: Line 1 Unkwown Keyword HILLS

! ABORTING RUN
! PLUMED ERROR: Line 1 Unkwown Keyword HILLS

! ABORTING RUN
--

thank you very much

Albert 




[gmx-users] Unkwown Keyword HILLS

2013-08-13 Thread Albert

Dear:

 I am trying to run plumed through the gromacs plugin. Here is my 
plumed.dat file, in which I defined two dihedral angles as CVs:


HILLS HEIGHT 0.3 W_STRIDE 450
WELLTEMPERED SIMTEMP 310 BIASFACTOR 1.96
TORSION LIST 1 4 65 344 SIGMA 0.12
TORSION LIST 2 46 80 656 SIGMA 0.12

ENDMETA

I am using plumed-1.3+gromacs-4.6.2 with command:


mpirun -np 24 mdrun_mpi -s md.tpr -plumed plumed.dat -g md.log -v -x 
md.xtc -o md.trr -e md.edr



but it failed with messages:


starting mdrun 'protein'
1 steps, 20.0 ps.
! PLUMED ERROR: Line 1 Unkwown Keyword HILLS

! ABORTING RUN
! PLUMED ERROR: Line 1 Unkwown Keyword HILLS

! ABORTING RUN
--

thank you very much

Albert


Re: [gmx-users] Re: problem in g_membed

2013-07-29 Thread Albert

Superimpose your receptor PDB with the related structure from the OPM 
database, and center your lipids at the origin with the editconf command 
in gromacs; your GPCR will then be in the center of the lipids.

PS: 340 lipids is too many for a single GPCR; 140-160 would be enough 
before g_membed. You'd better check the literature to see how many lipids 
other people used.
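
A minimal sketch of the centering step (file names are examples):

editconf -f bilayer.gro -o bilayer_centered.gro -center 0 0 0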



On 07/29/2013 10:31 AM, pavithrakb wrote:

Thank you both of you (Justin and Albert) sir.
Initially I was using dppc128 and now I changed to POPE 340 and still my
protein (its a GPCR protein) protrude out of the membrane (the same region;
two amino acids).
since you (Justin) have mentioned that the protein must be completely
inside the membrane to avoid any instability, what must I do to avoid this
problem?
is something wrong with my protein?
The region protruding out is a loop. is it still a problem?
the protein's box size is 4.49101 4.74797 7.29103 (from protein.gro).
Kindly help.




[gmx-users] binding energy for membrane system

2013-07-24 Thread Albert

Hello:

 I notice that the manuals and tutorials on the Gromacs website for 
binding-energy evaluation (FEP, umbrella sampling, TI and so on) are all 
for a protein in water. I am just wondering how we can accurately evaluate 
the protein/ligand binding affinity for a membrane system? Probably the 
most difficult part is evaluating the protein/lipid and protein/water 
terms, since the system is heterogeneous rather than homogeneous.


thank you very much.
Albert


Re: [gmx-users] How to calculate enthalpy

2013-07-15 Thread Albert

On 07/15/2013 10:20 AM, Dr. Vitaly Chaban wrote:

Sure, you can.



Dr. Vitaly V. Chaban


I've got a question about it: why is the calculated entropy negative?


[gmx-users] a question on energygrps

2013-07-14 Thread Albert

Hello:

 I've got a question about the energygrps option in the .mdp file. This 
option is usually used to define the groups for energy output. I am going 
to study the entropy changes of protein-ligand-SOL in a membrane system; 
shall I specify each component in the energygrps option, like:


energygrps  =  Protein LIG SOL POPC

I am also going to use g_lie for interaction analysis; will the above 
options be sufficient?


If I forgot to set the energygrps option in my md.mdp file during the MD 
simulation, is there any fast way to calculate these energies afterwards, 
instead of rerunning the simulation from the beginning?
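
One commonly used approach (a sketch; file names are examples): recompute 
the energies from the existing trajectory with mdrun -rerun, using a .tpr 
whose .mdp defines the desired energygrps.

grompp -f md_groups.mdp -c npt4.gro -p topol.top -o rerun.tpr
mdrun -rerun md.xtc -s rerun.tpr -e rerun.edr -g rerun.log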


I notice that there is a tool called g_lie that can be used to evaluate 
protein/ligand binding; is there any paper on this tool? I can't find much 
information in the GROMACS manual.


thank you very much
Albert






[gmx-users] why DGbind=0 ?

2013-07-09 Thread Albert

Hello:

 I am using g_lie to evaluate my ligand binding affinity with the command:

g_lie_d -f md.edr -o lie.xvg -ligand LIG

but I obtained the following results:

Opened md.edr as single precision energy file
Using the following energy terms:
LJ:
Coul:
DGbind = 0.000 (0.000)

@title "LIE free energy estimate"
@xaxis  label "Time (ps)"
@yaxis  label "DGbind (kJ/mol)"
@TYPE xy
     0   0
  1000   0
  2000   0
  3000   0
  4000   0
  5000   0
  6000   0
  7000   0
  8000   0
  9000   0
 10000   0
 11000   0
 12000   0
 13000   0
 14000   0
 15000   0
 16000   0
 17000   0
 18000   0
 19000   0
 20000   0
 21000   0
 22000   0
 23000   0
 24000   0


I am just wondering why the result is zero?
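
The empty "LJ:" and "Coul:" lines above suggest (a guess from this output) 
that the .edr contains no ligand energy groups for g_lie to use. They have 
to be recorded during the run (or recovered with the rerun trick sketched 
in the previous thread) via the .mdp, e.g.:

energygrps = LIG SOL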

thank you very much

Albert



Re: [gmx-users] cuda problem

2013-07-09 Thread Albert

On 07/09/2013 11:15 AM, Szilárd Páll wrote:

Tesla C1060 is not compatible - which should be shown in the log and
standard output.

Cheers,
--
Szilárd


THX for the kind comments.

Do you mean the C1060 is not compatible with the CUDA 5.0 toolkit, or that 
it is not compatible with Gromacs-4.6.3? I only obtained the following 
information in the log file:



2 GPUs detected on host c0107:
  #0: NVIDIA Tesla C2075, compute cap.: 2.0, ECC: yes, stat: compatible
  #1: NVIDIA Tesla C2075, compute cap.: 2.0, ECC: yes, stat: compatible

2 GPUs auto-selected for this run: #0, #1


NOTE: Using a GPU with ECC enabled and CUDA driver API version <5.0, 
known to
  cause performance loss. Switching to the alternative polling GPU 
wait.
  If you encounter issues, switch back to standard GPU waiting by 
setting

  the GMX_CUDA_STREAMSYNC environment variable.


Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity



best
Albert


[gmx-users] cuda problem

2013-07-09 Thread Albert

Dear:

 I've installed gromacs-4.6.3 on a GPU cluster, and I obtained the 
following message when testing:


NOTE: Using a GPU with ECC enabled and CUDA driver API version <5.0, 
known to
  cause performance loss. Switching to the alternative polling GPU 
wait.
  If you encounter issues, switch back to standard GPU waiting by 
setting

  the GMX_CUDA_STREAMSYNC environment variable.

The cuda version in the GPU cluster is 4.2 and the GPU is: Tesla C1060

I notice that the performance is really slow. I am just wondering how we 
can solve this problem, and how GMX_CUDA_STREAMSYNC should be set?
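
GMX_CUDA_STREAMSYNC is an environment variable rather than a directory; 
per the note quoted above, setting it switches back to standard GPU 
waiting. A sketch:

export GMX_CUDA_STREAMSYNC=1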


THX

Albert


Re: [gmx-users] Re: problem in g_membed

2013-07-08 Thread Albert
Just a piece of advice: consider equilibrating a lipid system that is 
already large enough for your protein. This will save you a huge amount of 
time on tricks for adding water later or enlarging the membrane.

The system MUST BE inside the PBC box in the g_membed input coordinates, 
otherwise the job will fail in the following step. You can visualize your 
system in VMD to double-check; if it is not in the PBC box, you can use 
editconf -box to fix the issue.
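
A minimal sketch of that fix (file names and box vectors are examples):

editconf -f membed_input.gro -o membed_boxed.gro -box 7.0 7.0 10.0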


good luck.

Albert


On 07/08/2013 11:40 AM, Justin Lemkul wrote:



On 7/7/13 10:52 PM, pavithrakb wrote:

Dear Sir,
Thanks so much..
now what's the solution? Should I increase the box size?


Yes.  If the protein protrudes "out" of the central image, then you 
will have an unstable system due to atomic overlap as well as 
violations of the minimum image convention.



Already I have centered the protein and fixed the POPE membrane size.
Can you tell me how to increase the box size? or is there any other
solution?


If you need more space in the x-y plane, use genconf to replicate the 
membrane patch appropriately.  If x-y is sufficient and only z is the 
problem, use editconf to adjust the box size and (if desired) the 
location of the membrane-protein complex  and then solvate with genbox 
to fill the newly introduced void.


-Justin
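
For the x-y replication Justin mentions, a genconf sketch (file names and 
replication counts are examples):

genconf -f bilayer.gro -o bilayer_2x2.gro -nbox 2 2 1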




Re: [gmx-users] why TIPS3P, why not TIP3P?

2013-07-08 Thread Albert

On 07/08/2013 10:47 AM, Javier Cerezo wrote:
In a recent benchmark by Piggot, Piñeiro and Khalid 
( http://pubs.acs.org/doi/abs/10.1021/ct3003157 ), they showed that the 
TIP3P flavour may affect some properties (area per lipid) in simulations 
with CHARMM36, concluding that CHARMM-TIP3P is recommended, at least for DPPC.


I've also experienced similar issues with DMPC.

Javier



which CHARMM36 FF do you use? CHARMM36 is being updated from time to 
time.


It may have some influence on the head-group properties of the lipids, but 
how much that matters for the whole protein/membrane system is still 
unclear. For long-timescale MD simulations, many people would hesitate to 
introduce CHARMM-TIP3P, which sacrifices too much speed.


I've also seen claims in several papers, similar to yours, that Na+ 
influences POPC properties, but most people still prefer POPC + 0.15 M 
NaCl, which is the system closest to the physiological environment.


Albert


[gmx-users] why TIPS3P, why not TIP3P?

2013-07-08 Thread Albert

Hello:

 I noticed that the CHARMM36 FF recommends the "CHARMM TIP 3-point with LJ 
on H's" water model for lipids every time we run pdb2gmx. I am just 
wondering why it is recommended for lipids; is there any special reason to 
use it? As far as I can tell from searching both the Gromacs mailing list 
and the CHARMM forum, most people conclude that there is no big difference 
between the CHARMM TIP3 model and the original TIP3P model. Here is what I 
found:



http://www.charmm.org/ubbthreads/ubbthreads.php?ubb=showflat&Number=23727

http://www.charmm.org/ubbthreads/ubbthreads.php?ubb=showflat&Number=23422#Post23422

http://pubs.acs.org/doi/full/10.1021/ct900549r

http://lists.gromacs.org/pipermail/gmx-users/2010-September/053966.html

Actually, in a recent D. E. Shaw Cell paper 
(http://www.sciencedirect.com/science/article/pii/S0092867412015528), they 
also used the normal TIP3P water model with the CHARMM36 FF. In that work 
they performed 100+ us of long-timescale MD simulation on an extremely 
large membrane protein.


could anybody comment on this issue?

THX a lot.

Albert


Re: [gmx-users] 11-cis retinal topology problem

2013-07-07 Thread Albert

On 07/07/2013 10:23 AM, Mark Abraham wrote:

The error happens on the last line it reached. Take a look at it:-)

Mark



Hi Mark:

 thanks again for all the kind help these days. My problem is solved now: 
my original charmm file was mixed with old parameters from earlier work. 
After starting from a fresh file, everything works.


best regards

Albert


Re: [gmx-users] 11-cis retinal topology problem

2013-07-07 Thread Albert

On 07/07/2013 10:23 AM, Mark Abraham wrote:

> Program grompp, VERSION 4.6.3
> Source code file: kernel/topdirs.c, line: 106



Hello Mark:

 thanks a lot for the kind reply. Do you mean the information above? I 
opened that file, and here is line 106:


gmx_fatal(FARGS, "Invalid angle type %d", type);


Here is the angletypes section of the ffbonded.itp file generated by the 
python script; to be honest, I don't see anything bad in it:


thank you very much.

Albert



[ angletypes ]
;   i      j      k    func    th0      cth     ub0   cub
CT2    CT2    NCH1    5    111.20   669.44    0.0   0.0
HCR    NCH1   CT2     5    118.40   292.88    0.0   0.0
CT2    NCH1   CR15    5    123.20   418.4     0.0   0.0
CR15   NCH1   HCR     5    120.00   292.88    0.0   0.0
HA     CT2    NCH1    5    109.50   418.4     0.0   0.0
NCH1   CR15   HPL1    5    119.10   292.88    0.0   0.0
NCH1   CR15   CR14    5    122.90   794.96    0.0   0.0
CR15   CR14   HPL     5    119.70   292.88    0.0   0.0
CR14   CR15   HPL1    5    119.70   292.88    0.0   0.0
CR15   CR14   CR13    5    122.90   794.96    0.0   0.0
CR14   CR13   CT3     5    119.70   585.76    0.0   0.0
CR13   CR14   HPL     5    119.70   292.88    0.0   0.0
CR13   CT3    HA      5    109.50   418.4     0.0   0.0
CR14   CR13   CR12    5    122.90   794.96    0.0   0.0
CR13   CR12   HPL     5    119.70   292.88    0.0   0.0
CR12   CR13   CT3     5    119.70   585.76    0.0   0.0
CR13   CR12   CR11    5    122.90   794.96    0.0   0.0
CR12   CR11   HPL     5    119.70   292.88    0.0   0.0
CR11   CR12   HPL     5    119.70   292.88    0.0   0.0
CR12   CR11   CR10    5    122.90   794.96    0.0   0.0
CR11   CR10   HPL     5    119.70   292.88    0.0   0.0
CR10   CR11   HPL     5    119.70   292.88    0.0   0.0
CR11   CR10   CR9     5    122.90   794.96    0.0   0.0
CR10   CR9    CT3     5    119.70   585.76    0.0   0.0
CR9    CT3    HA      5    109.50   418.4     0.0   0.0
CR9    CR10   HPL     5    119.70   292.88    0.0   0.0
CR10   CR9    CR8     5    122.90   794.96    0.0   0.0
CR9    CR8    HPL     5    119.70   292.88    0.0   0.0
CR8    CR9    CT3     5    119.70   585.76    0.0   0.0
CR9    CR8    CR7     5    122.90   794.96    0.0   0.0
CR8    CR7    HPL     5    119.70   292.88    0.0   0.0
CR7    CR8    HPL     5    119.70   292.88    0.0   0.0
CR8    CR7    CR6     5    122.90   794.96    0.0   0.0
CR7    CR6    CT3     5    119.70   585.76    0.0   0.0
CR6    CR7    HPL     5    119.70   292.88    0.0   0.0
CR7    CR6    CR5     5    122.90   794.96    0.0   0.0
CR6    CR5    CT3     5    119.70   585.76    0.0   0.0
CR6    CR5    CT2     5    119.70   585.76    0.0   0.0
CT3    CR5    CT2     5    119.70   585.76    0.0   0.0
CR5    CT2    HA      5    109.50   418.4     0.0   0.0
CR5    CT3    HA      5    109.50   418.4     0.0   0.0
CR6    CT3    CT2     5    111.10   527.184   0.0   0.0
CR6    CT3    CT3     5    111.10   527.184   0.0   0.0
CR5    CT2    CT2     5    111.10   527.184   0.0   0.0
CR5    CR6    CT3     5    119.70   585.76    0.0   0.0
CT3    CT3    CT3     5    109.50   527.184   0.0   0.0
CT2    CT3    CT3     5    109.50   527.184   0.0   0.0




Re: [gmx-users] 11-cis retinal topology problem

2013-07-07 Thread Albert

On 07/06/2013 10:17 PM, Mark Abraham wrote:

That's a completely different output. I think both your executables
are broken, somehow. Clean the build and install trees and try again
:-)

Mark



Hello Mark:

thanks a lot for the kind messages. The previous problem was solved by 
Justin's suggestions. Here are the errors from the debug build:


Ignoring obsolete mdp entry 'title'
Ignoring obsolete mdp entry 'cpp'

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.3#

NOTE 1 [file topol.top, line 1]:
  Debugging
NOTE 2 [file topol.top, line 2]:
  Debugging
NOTE 3 [file topol.top, line 3]:
  Debugging
..
NOTE 18 [file topol.top, line 18]:
  DebugginG
NOTE 19 [file forcefield.itp, line 1]:
  Debugging
NOTE 20 [file forcefield.itp, line 2]:
  Debugging

NOTE 3474 [file ffbonded.itp, line 820]:
  Debugging
NOTE 3475 [file ffbonded.itp, line 821]:
  Debugging
---
Program grompp, VERSION 4.6.3
Source code file:
kernel/topdirs.c, line: 106

Fatal error:
Invalid angle type 0
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

I still don't know where the problem is.

thank you very much
Albert



Re: [gmx-users] 11-cis retinal topology problem

2013-07-07 Thread Albert

On 07/07/2013 03:18 AM, Justin Lemkul wrote:
This problem can be solved by remembering to complete step 5 of 
http://www.gromacs.org/Documentation/How-tos/Adding_a_Residue_to_a_Force_Field. 



As Mark notes, though, this issue is entirely separate from your 
initial problem.


-Justin


Hello Justin:

 thanks a lot for the kind advice. That problem is solved now; however, 
grompp still has some problems:


 grompp_mpi -f em.mdp -c gmx.pdb -o em.tpr

source code gromacs-4.6.2/src/kernel/toppush.c, line: 726
Fatal error:
Unknown bond_atomtype NCH1

It is pretty strange; I already added the NCH1 atom type to ffbonded.itp 
as follows:


[ bondtypes ]
;   i       j      func     b0        kb
NCH1     CT2      1      0.149    284512.0
NCH1     HCR      1      0.1      359824.0
NCH1     CR15     1      0.132    410032.0

In atomtypes.atp, I also have this entry:

NCH1    14.00700  ; LYR


Why does it still complain?

thank you very much

Albert




Re: [gmx-users] 11-cis retinal topology problem

2013-07-06 Thread Albert
 C12   C11   C13   H12
 C13   C14   C12   C20
 C14   C13   C15   H14
 C15   N16   C14   H15
 N16   CE    C15   H16

Then I added this information to the aminoacids.rtp file. I don't know why 
it failed; it is expected to be treated as a regular amino acid 
residue.


thank you very much.

Albert


Re: [gmx-users] 11-cis retinal topology problem

2013-07-06 Thread Albert

On 07/06/2013 11:10 AM, Mark Abraham wrote:

 set_warning_line(wi, cpp_cur_file(&handle),
cpp_cur_linenr(&handle));
 warning_note(wi, "Debugging)";

Mark



Hello Mark:

 thanks a lot for the kind messages.

 I added the lines above in topio.c:

set_warning_line(wi, cpp_cur_file(&handle), 
cpp_cur_linenr(&handle));

warning_note(wi, "Debugging)";


but the compilation failed with these messages:


gromacs-4.6.3/src/kernel/topio.c: In function ‘read_topol’:
gromacs-4.6.3/src/kernel/topio.c:648:35: error: expected ‘)’ before ‘;’ 
token
gromacs-4.6.3/src/kernel/topio.c:1014:9: error: expected ‘;’ before ‘}’ 
token

make[2]: *** [src/kernel/CMakeFiles/gmxpreprocess.dir/topio.c.o] Error 1
make[2]: *** Waiting for unfinished jobs


thank you very much

Albert


[gmx-users] 11-cis retinal topology problem

2013-07-05 Thread Albert

Hello guys:

  I am building an 11-cis-retinal topology these days. Here is what I did:

First of all, I built a small peptide-like compound containing 
11-cis-retinal connected to the protonated side chain of a LYS, with ACE 
and NME capping the N-terminus and C-terminus of the LYS, respectively. I 
uploaded this compound to ParamChem and obtained a ligand.str file.

Second, I merged the information (including BONDS, ANGLES and DIHEDRALS) 
from the ligand.str file into the related sections of par_all36_cgenff.prm. 
Then I ran charmm2gromacs-pvm.py with the command:


python charmm2gromacs-pvm.py par_all36_cgenff.prm top_all36_cgenff.rtf

it generates a folder called cgenff-2b7.ff which includes the following 
files:


aminoacids.rtp  ffbonded.itp forcefield.doc
atomtypes.atp   ffnonbonded.itp  forcefield.itp

I merged the content of the above files into the gromacs CHARMM36.ff and 
built a topology for 11-cis-retinal in aminoacids.rtp, as follows. This 
topology only contains the information for 11-cis-retinal and the 
protonated LYS:


-
[ RETK ]
 [ atoms ]
; name   type     charge   cg
N      NH1     -0.47     0
HN     H        0.31     1
CA     CT1      0.07     2
HA     HB       0.09     3
CB     CT2     -0.18     4
HB1    HA       0.09     5
HB2    HA       0.09     6
CG     CT2     -0.18     7
HG1    HA       0.09     8
HG2    HA       0.09     9
CD     CT2     -0.18    10
HD1    HA       0.09    11
HD2    HA       0.09    12
CE     CT2      0.21    13
HE1    HA       0.05    14
HE2    HA       0.05    15
NZ     NH3     -0.832   16
HZ1    HC       0.42    17
HZ2    HC       0.42    18
C      C        0.51    20
O      O       -0.51    21
C1     CG301    0.000   22
C2     CG321   -0.182   23
C3     CG321   -0.177   24
C4     CG321   -0.183   25
C5     CG2DC1  -0.001   26
C6     CG2DC1  -0.001   27
C7     CG2DC2  -0.149   28
C8     CG2DC2  -0.150   29
C9     CG2DC1  -0.003   30
C10    CG2DC1  -0.134   31
C11    CG321   -0.190   32
C12    CG321   -0.187   33
C13    CG2D1   -0.005   34
C14    CG2D1   -0.043   35
C15    CG324    0.299   36
C16    CG331   -0.269   37
C17    CG331   -0.269   38
C18    CG331   -0.268   39
C19    CG331   -0.269   40
C20    CG331   -0.267   41
H7     HGA4     0.150   42
H8     HGA4     0.150   43
H10    HGA4     0.150   44
H14    HGA4     0.150   45
H21    HGA2     0.090   46
H22    HGA2     0.090   47
H31    HGA2     0.090   48
H32    HGA2     0.090   49
H41    HGA2     0.090   50
H42    HGA2     0.090   51
H111   HGA2     0.090   52
H112   HGA2     0.090   53
H121   HGA2     0.090   54
H122   HGA2     0.090   55
H151   HGA2     0.090   56
H152   HGA2     0.090   57
H161   HGA3     0.090   58
H162   HGA3     0.090   59
H163   HGA3     0.090   60
H171   HGA3     0.090   61
H172   HGA3     0.090   62
H173   HGA3     0.090   63
H181   HGA3     0.090   64
H182   HGA3     0.090   65
H183   HGA3     0.090   66
H191   HGA3     0.090   67
H192   HGA3     0.090   68
H193   HGA3     0.090   69
H201   HGA3     0.090   70
H202   HGA3     0.090   71
H203   HGA3     0.090   72
 [ bonds ]
CB    CA
CG    CB
CD    CG
CE    CD
NZ    CE
N     HN
N     CA
C     CA
C     +N
CA    HA
CB    HB1
CB    HB2
CG    HG1
CG    HG2
CD    HD1
CD    HD2
CE    HE1
CE    HE2
O     C
NZ    HZ1
NZ    HZ2
NZ    C15
C1    C2
C1    C6
C1    C16
C1    C17
C2    C3
C2    H21
C2    H22
C3    C4
C3    H31
C3    H32
C4    C5
C4    H41
C4    H42
C5    C6
C5    C18
C6    C7
C7    C8
C7    H7
C8    C9
C8    H8
C9    C10
C9    C19
C10   C11
C10   H10
C11   C12
C11   H111
C11   H112
C12   C13
C12   H121
C12   H122
C13   C14
C13   C20
C14   C15
C14   H14
C15   H151
C15   H152
C16   H161
C16   H162
C16   H163
C17   H171
C17   H172
C17   H173
C18   H181
C18   H182
C18   H183
C19   H191
C19   H192
C19   H193
C20   H201
C20   H202
C20   H203
 [ impropers ]
N     -C    CA    HN
C     CA    +N    O
 [ cmap ]
-C    N     CA    C     +N
-


with the new force field, I ran pdb2gmx:

-
pdb2gmx -f input.pdb -o gmx.pdb
-


it finished without any warnings or errors at this step. However, when I 
try to run grompp with the command:


-
grompp_mpi -f em.mdp -c gmx.pdb -p topol.top
-

it failed with messages:

-
Program grompp_mpi, VERSION 4.6.2
Source code file: 
/home/albert/Desktop/gromacs-4.6.2/src/kernel/topdirs.c, line: 106

Fatal error:
Invalid angle type 0
For

[gmx-users] charmm2gromacs-pvm.py error

2013-07-05 Thread Albert

Hello:

 I am trying to run charmm2gromacs-pvm.py with the command:

charmm2gromacs-pvm.py par_all36_cgenff.prm top_all36_cgenff.rtf

but it failed with messages:


Creating cgenff-2b7.ff files...
Traceback (most recent call last):
  File "charmm2gromacs-pvm.py", line 283, in 
name = segments[1]
IndexError: list index out of range

Although it generates a cgenff-2b7 folder, all the files in it are 
empty.
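
The traceback suggests the script hit a line with fewer fields than it 
expects (a guess from the IndexError). A hypothetical guard around line 
283, assuming the statement sits inside the script's per-line parsing loop:

# charmm2gromacs-pvm.py, near line 283 (hypothetical patch)
if len(segments) > 1:
    name = segments[1]
else:
    continue  # skip lines that lack the expected field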


thank you very much
Albert


[gmx-users] GPU cannot be detected

2013-07-04 Thread Albert

Hello:

 I've installed Gromacs-4.6.2 on a GPU cluster with the following configuration:

CC=icc FC=ifort F77=ifort CXX=icpc 
CMAKE_PREFIX_PATH=/export/intel/cmkl/include/fftw:/export/mpi/mvapich2-1.8-rhes6 
cmake .. -DGMX_MPI=ON 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs -DGMX_GPU=ON 
-DBUILD_SHARED_LIBS=OFF -DCUDA_TOOLKIT_ROOT_DIR=/export/cuda5.0 
-DFFTWF_LIBRARY=/home/albert/install/fftw-3.3.3/lib/libfftw3f.so



However, when I submit a job with qsub, Gromacs fails with the following 
messages:




No GPUs detected on host c0204

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should 
support

setting affinities, contact the GROMACS developers.
--
mpirun noticed that process rank 2 with PID 6 on node c0205 exited 
on signal 4 (Illegal instruction).

--

I am just wondering how we can solve this problem?
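
(A quick sanity check, assuming nvidia-smi is installed on the nodes: confirm the GPUs are actually visible from the compute node the scheduler picked, before suspecting the build:)

ssh c0204 nvidia-smi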

thank you very much

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fftw compile error for 4.6.2

2013-07-04 Thread Albert

On 07/04/2013 07:59 PM, Mark Abraham wrote:

I plan to release 4.6.3 tomorrow, once I've gotten some more urgent
stuff off my plate:-).

Mark


thanks a lot for kind messages, Mark.

It seems that Gromacs updates more and more frequently...


Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] a question concerning on entropy

2013-07-04 Thread Albert

Hello :

 I've got a question about entropy. As we all know, the md.edr file 
gives us the entropy value of the system along the simulation.


However, my system is a protein/membrane system, and I would only 
like to compute statistics for the protein/water-related entropy. I am 
just wondering, is it possible to do this?


thank you very much

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fftw compile error for 4.6.2

2013-07-04 Thread Albert

On 07/04/2013 05:52 PM, Szilárd Páll wrote:

FYI: 4.6.2 contains a bug related to thread affinity setting which
will lead to a considerable performance loss (I've seen 35%) as well
as often inconsistent performance - especially with GPUs (case in
which one would run many OpenMP threads/rank). My advice is that you
either use the code from git or wait for 4.6.3.



Oh, my Lady GaGa...

Do you have any idea when this new version would be released?

THX

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] another error compiling 4.6.2 in GPU cluster

2013-07-04 Thread Albert

Hello:

 I am using the following configuration to compile gromacs-4.6.2 in a 
GPU cluster:




CC=icc FC=ifort F77=ifort CXX=icpc 
CMAKE_PREFIX_PATH=/export/intel/cmkl/include/fftw:/export/mpi/mvapich2-1.8-rhes6 
cmake .. -DGMX_MPI=ON 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs -DGMX_GPU=ON 
-DBUILD_SHARED_LIBS=OFF -DCUDA_TOOLKIT_ROOT_DIR=/export/cuda5.0/cuda 
-DFFTWF_LIBRARY=/home/albert/install/fftw-3.3.3/lib/libfftw3f.so


It finished without any errors, but when I type

make -j4

it stopped at 66%:


clared but never referenced
  static const double* sy_const[] = {

/home/albert/install/source/gromacs-4.6.2/include/maths.h(189): remark 
#177: function "gmx_numzero" was declared but never referenced

  gmx_numzero(double a)
  ^
/home/albert/install/source/gromacs-4.6.2/include/maths.h(196): remark 
#177: function "gmx_log2" was declared but never referenced

  gmx_log2(real x)
  ^
/home/albert/install/source/gromacs-4.6.2/include/vec.h(849): remark 
#177: function "calc_lll" was declared but never referenced

  static void calc_lll(rvec box, rvec lll)
  ^
/home/albert/install/source/gromacs-4.6.2/include/vec.h(880): remark 
#177: function "m_rveccopy" was declared but never referenced

  static void m_rveccopy(int dim, rvec *a, rvec *b)
  ^
/home/albert/install/source/gromacs-4.6.2/include/vec.h(892): remark 
#177: function "matrix_convert" was declared but never referenced

  static void matrix_convert(matrix box, rvec vec, rvec angle)
  ^
/home/albert/install/source/gromacs-4.6.2/include/grompp.h(155): remark 
#177: variable "ds" was declared but never referenced

  static const char *ds[d_maxdir+1] = {

Linking CXX static library libmd_mpi.a
[ 66%] Built target md
make: *** [all] Error 2

thank you very much

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fftw compile error for 4.6.2

2013-07-04 Thread Albert

On 07/04/2013 11:18 AM, Mark Abraham wrote:

No idea. I do not think there is any need for you to use
BUILD_SHARED_LIBS=OFF, and it could well be the problem.

Mark


thank you all the same.

I found the problem: there is a subdirectory in cuda5.0:

/export/cuda5.0/cuda

The problem was solved after I pointed -DCUDA_TOOLKIT_ROOT_DIR at that 
subdirectory.

best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fftw compile error for 4.6.2

2013-07-04 Thread Albert

On 07/04/2013 10:43 AM, Oliver Schillinger wrote:
It seems that GROMACS is looking for a shared library, but you 
compiled FFTW statically (--enable-static). Either recompile FFTW 
--enable-shared or link GROMACS statically by passing 
-DBUILD_SHARED_LIBS=OFF to cmake. 



Hello guys:

 thanks a lot for the kind advice. The fftw error went away after I 
took the above suggestions, but now there is something wrong with CUDA. 
It is very strange to me, because I've already specified the cuda path.


CC=icc FC=ifort F77=ifort CXX=icpc 
CMAKE_PREFIX_PATH=/export/intel/cmkl/include/fftw:/export/mpi/mvapich2.amd/1.4 
cmake .. -DGMX_MPI=ON 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs -DGMX_GPU=ON 
-DBUILD_SHARED_LIBS=OFF -DCUDA_TOOLKIT_ROOT_DIR=/export/cuda5.0 
-DFFTWF_LIBRARY=/home/albert/install/fftw-3.3.3/lib/libfftw3f.so 
-DBUILD_SHARED_LIBS=OFF



CMake Error: The following variables are used in this project, but they 
are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the 
CMake files:

CUDA_CUDART_LIBRARY (ADVANCED)
linked by target "cuda_tools" in directory 
/home/albert/install/source/gromacs-4.6.2/src/gmxlib/cuda_tools
linked by target "gpu_utils" in directory 
/home/albert/install/source/gromacs-4.6.2/src/gmxlib/gpu_utils
linked by target "nbnxn_cuda" in directory 
/home/albert/install/source/gromacs-4.6.2/src/mdlib/nbnxn_cuda

CUDA_TOOLKIT_INCLUDE (ADVANCED)
   used as include directory in directory 
/home/albert/install/source/gromacs-4.6.2/src/gmxlib/cuda_tools
   used as include directory in directory 
/home/albert/install/source/gromacs-4.6.2/src/gmxlib/gpu_utils
   used as include directory in directory 
/home/albert/install/source/gromacs-4.6.2/src/gmxlib/gpu_utils
   used as include directory in directory 
/home/albert/install/source/gromacs-4.6.2/src/mdlib/nbnxn_cuda

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] fftw compile error for 4.6.2

2013-07-04 Thread Albert

Hello:

 I am trying to compile Gromacs-4.6.2 for a GPU cluster with following 
command:



CC=icc FC=ifort F77=ifort CXX=icpc 
CMAKE_PREFIX_PATH=/export/intel/cmkl/include/fftw:/export/mpi/mvapich2.amd/1.4 
cmake .. -DGMX_MPI=ON 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs -DGMX_GPU=ON 
-DBUILD_SHARED_LIBS=OFF -DCUDA_TOOLKIT_ROOT_DIR=/export/cuda5.0 
-DFFTWF_LIBRARY= /export/fftw-3.3.3


but it always claimed errors for fftw:



-- Found PkgConfig: /usr/bin/pkg-config (found version "0.21")
-- checking for module 'fftw3f'
--   package 'fftw3f' not found
-- pkg-config could not detect fftw3f, trying generic detection
-- Looking for fftwf_plan_r2r_1d in /export/fftw-3.3.3/lib
WARNING: Target "cmTryCompileExec1612346837" requests linking to 
directory "/export/fftw-3.3.3/lib".  Targets may link only to 
libraries.  CMake is dropping the item.

-- Looking for fftwf_plan_r2r_1d in /export/fftw-3.3.3/lib - not found
CMake Error at cmake/FindFFTW.cmake:97 (message):
  Could not find fftwf_plan_r2r_1d in /export/fftw-3.3.3/lib, take a 
look at

  the error message in
  /home/albert/install/gromacs-4.6.2/build/CMakeFiles/CMakeError.log to 
find
  out what went wrong.  If you are using a static lib (.a) make sure 
you have

  specified all dependencies of fftw3f in FFTWF_LIBRARY by hand (e.g.
  -DFFTWF_LIBRARY='/path/to/libfftw3f.so;/path/to/libm.so') !
Call Stack (most recent call first):
  CMakeLists.txt:943 (find_package)


-- Configuring incomplete, errors occurred!


I compiled fftw with options:

./configure CC=icc CXX=icpc F77=ifort FC=ifort 
--prefix=/home/albert/install/fftw-3.3.3 --enable-float --with-pic 
--enable-single --enable-static --enable-mpi


and I cannot find the so-called "libfftw3f.so" in the fftw installation 
directory.


Does anybody have any advice?
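
(In short, as the reply further up this digest notes: --enable-static alone produces no .so. A rebuild sketch that does, with the one flag swapped relative to the configure line above:)

./configure CC=icc CXX=icpc F77=ifort FC=ifort \
  --prefix=/home/albert/install/fftw-3.3.3 \
  --enable-float --with-pic --enable-shared
make && make install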

THX

Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] multiple chain restrain problem

2013-06-27 Thread Albert

On 06/27/2013 01:50 PM, Justin Lemkul wrote:


Then select by residue number.  Note that genrestr will only work for 
the first molecule, since position restraint numbering is based on the 
[moleculetype] numbering, not the coordinate file numbering.


-Justin



thank you very much for kind advices.

I solved this by extracting each chain into an individual coordinate 
file and generating the restraints from each one.
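
(For the archive, a sketch of that workaround — the index-group and output file names here are made up:)

make_ndx -f gmx.pdb -o chains.ndx                 # define one group per chain, e.g. by residue ranges
editconf -f gmx.pdb -n chains.ndx -o chainA.pdb   # write out only the chain A group
genrestr -f chainA.pdb -o porschainA_bb.itp       # restraint numbering now matches that chain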


best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] multiple chain restrain problem

2013-06-27 Thread Albert

Hello:

 I've got two protein chains in my system. I generated gmx.pdb with 
pdb2gmx, and GROMACS generated a topology and restraint file for each chain:


gmx.pdb
 topol_chain_A.itp
topol_chain_B.itp
porschain_A.itp
porschain_B.itp

I noticed that the gmx.pdb doesn't contain any chain information. I am 
going to equilibrate the system by first restraining the heavy atoms, 
which can be specified by porschain_A.itp and porschain_B.itp. However, 
in the next step I am going to restrain only the backbone of chain A; 
how can we do this? The chain information is lost in the newly generated 
gmx.pdb. When I use the command:


genrestr -f gmx.pdb -p porsB

it doesn't offer a selection for chains.

thank you very much

Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] problem with charmm2gromacs-pvm.py

2013-06-24 Thread Albert

Hello:

 I am trying to use charmm2gromacs-pvm.py to generate cgenff with the command:

python charmm2gromacs-pvm.py toppar/par_all36_cgenff.prm 
toppar/top_all36_cgenff.rtf


but it always failed with messages:


Creating cgenff-2b7.ff files...
Traceback (most recent call last):
  File "charmm2gromacs-pvm.py", line 283, in 
name = segments[1]
IndexError: list index out of range


thank you very much
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] shall we use double precision?

2013-06-11 Thread Albert

HI Mark:

 thank you for kind reply. I found the following in the documentations:

Installations:

Example.
This is the procedure for compiling the serial version of GROMACS, using 
the GNU compilers.

tar zxf gromacs-4.0.5.tar.gz
cd gromacs-4.0.5
export plumedir="PLUMED root"
cp $plumedir/patches/plumedpatch_gromacs_4.0.4.sh .
CC=gcc CXX=g++ ./configure
./plumedpatch_gromacs_4.0.4.sh -patch
make
make install


However, in the test section, it said the following:

For GROMACS users only. Please note that:
- The tests for GROMACS are designed for and should be executed with
the double-precision version of the code;
- Biasxmd, ptmetad, dd and pd are designed for the parallel version of
GROMACS. The user should specify in the test script the location of
the parallel executable and the version of GROMACS used. These tests
will fail if the parallel version of GROMACS has not been compiled.
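
(A note in passing: double precision is just a build-time switch — --enable-double with the autotools build quoted above, or -DGMX_DOUBLE=ON in the cmake builds used elsewhere in this digest:)

CC=gcc CXX=g++ ./configure --enable-double
# or, for the cmake-based 4.6.x builds:
cmake .. -DGMX_DOUBLE=ON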


thank you very much

Albert



On 06/11/2013 08:55 AM, Mark Abraham wrote:

Probably not. What does PLUMED documentation recommend?

Mark


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] shall we use double precision?

2013-06-10 Thread Albert

Hello :

 I am going to use Gromacs with the PLUMED plugin to perform metadynamics. 
Since this method involves free energy calculations, I am just 
wondering whether it is necessary to use double-precision Gromacs?


thank you very much

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] GPU ECC question

2013-06-08 Thread Albert

Hello:

 Recently I found a strange issue with Gromacs-4.6.2 on GPU 
workstations. On my GTX690 machine, when I run an MD production I found 
that ECC is on. However, on another GTX590 machine, I found that ECC 
was off:


4 GPUs detected:
  #0: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
  #1: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
  #2: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
  #3: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible

Moreover, there are only two GTX590 cards in that machine; I don't know 
why Gromacs claimed 4 GPUs detected. However, on another Linux machine 
which also has two GTX590 cards, Gromacs-4.6.2 only finds 2 GPUs, and 
ECC is still off.


I am just wondering:

(1) Why can ECC be on for the GTX690 while it is off on my GTX590? I 
compiled Gromacs with the same options and the same version of the Intel 
compiler.


(2) Why, on two machines that each physically contain two GTX590 cards, 
was one detected as having 4 GPUs while the other was claimed to contain 
two GPUs?
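
(The ECC state can also be checked outside of Gromacs — assuming the nvidia-smi tool that ships with the driver is available:)

nvidia-smi -q | grep -i ecc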


thank you very much

best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why mass and charge is zero?

2013-06-08 Thread Albert

On 06/08/2013 06:56 PM, David van der Spoel wrote:
the ones in the atoms section are the ones that are used UNLESS they 
are not given, in which case the defaults are used. 


IC.

thank you very much.

Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] why mass and charge is zero?

2013-06-08 Thread Albert

Hello:

 I generated a ligand topology with ACPYPE using Amber GAFF. However, I 
found that in the ligandGMX.itp file, in the atomtypes section, the mass 
and charge are all zero, like:


[ atomtypes ]
;name   bond_type mass charge   ptype   sigma epsilon   Amb
 NT   NT  0.0  0.0   A 3.25000e-01 7.11280e-01 ; 1.82  0.1700



However, in the atoms section, I found:

[ atoms ]
;   nr  type  resi  res  atom  cgnr  charge  mass  ; qtot  bond_type

26   NT 1   UKA   N10   26-0.719301 14.01000 ; qtot -7.758

I am a little bit confused by this. Does anybody have any idea about it?

thank you very much

best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU problem

2013-06-04 Thread Albert

On 06/04/2013 11:22 AM, Chandan Choudhury wrote:

Hi Albert,

I think using -nt flag (-nt=16) with mdrun would solve your problem.

Chandan



thank you so much.

it works well now.

ALBERT
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] GPU problem

2013-06-04 Thread Albert

Dear:

 I've got four GPUs in one workstation. I am trying to run two GPU jobs 
with the commands:


mdrun -s md.tpr -gpu_id 01
mdrun -s md.tpr -gpu_id 23

There are 32 CPU cores in this workstation. I found that each job tries 
to use all of them, so there are 64 threads when the two GPU mdrun jobs 
are submitted. Moreover, one of the jobs stopped after running for a 
short while, probably because of this CPU contention.


I am just wondering, how can we distribute the CPU cores when we run two 
GPU jobs on a single workstation?
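
(For the record, a sketch of the -nt split suggested in the reply above, on a 32-core box with the -gpu_id pairs from this post — the .tpr names are placeholders:)

mdrun -s md1.tpr -nt 16 -pin on -pinoffset 0  -gpu_id 01
mdrun -s md2.tpr -nt 16 -pin on -pinoffset 16 -gpu_id 23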


thank you very much

best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS 4.6.2 released

2013-05-30 Thread Albert

it seems that Gromacs updates quite frequently these days..




On 05/30/2013 05:42 PM, Mark Abraham wrote:

Hi GROMACS users,


GROMACS 4.6.2 is officially released. It contains numerous bug fixes, some
simulation performance enhancements and some documentation updates. We
encourage all users to upgrade their installations from 4.6 and 4.6.1.


You can find the code, manual, release notes, installation instructions and
test
suite at the links below.

ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.2.tar.gz
ftp://ftp.gromacs.org/pub/manual/manual-4.6.2.pdf
http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.2.x
http://www.gromacs.org/Documentation/Installation_Instructions
http://gromacs.googlecode.com/files/regressiontests-4.6.2.tar.gz

Happy simulating!


The GROMACS development team


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] how to distribute CPU in GPU workstation?

2013-05-30 Thread Albert

Dear:

 I've got 4 GPUs in one GPU workstation. I've submitted one of my jobs 
with the command:


mdrun -s md.tpr -gpu_id 01 -n -x md.xtc

I found that all 24 CPU cores were occupied by this job. However, I would 
like to submit another job with -gpu_id 23; how should I specify the CPUs 
for each job?


thank you very much
best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] g_select question

2013-05-08 Thread Albert

Dear:

 I am trying to run g_select with command:

g_select -f md.xtc -s md.pdb -os water.xvg -sf selection.dat

in the selection.dat I defined the following:


watero= name 0 and resname T3P;
close = water0 and within 0.6 of resid 50;
close;

My residue 50 is in a deep pocket of the protein, and there are at most 12 
waters within 6 Å of it. However, I found that in almost all my 
trajectories there are at least 16 waters in the g_select output. I am 
just wondering, is there anything wrong with my definition in selection.dat?


thank you very much

best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU job often stopped

2013-05-02 Thread Albert

the problem is still there...

:-(



On 04/29/2013 06:06 PM, Szilárd Páll wrote:

On Mon, Apr 29, 2013 at 3:51 PM, Albert  wrote:

>On 04/29/2013 03:47 PM, Szilárd Páll wrote:

>>
>>In that case, while it isn't very likely, the issue could be caused by
>>some implementation detail which aims to avoid performance loss caused
>>by an issue in the NVIDIA drivers.
>>
>>Try running with the GMX_CUDA_STREAMSYNC environment variable set.
>>
>>Btw, were there any other processes using the GPU while mdrun was running?
>>
>>Cheers,
>>--
>>Szilárd

>
>
>thanks for kind reply.
>There is no any other process when I am running Gromacs.
>
>do you mean I should set GMX_CUDA_STREAMSYNC in the job script like:
>
>export GMX_CUDA_STREAMSYNC=/opt/cuda-5.0

Sort of, but the value does not matter. So if your shell is bash, the
above as well as simply "export GMX_CUDA_STREAMSYNC=" will work fine.

Let us know if this avoided the crash - when you have simulated long
enough to be able to judge.

Cheers,
--
Szilárd



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] where can be obtain circled lipids bilayer?

2013-05-02 Thread Albert

Hello:

 I've got a question about where can be obtain circled lipids bilayer?


like shown here:

http://wwwuser.gwdg.de/~ggroenh/membed/vesicle.png

thank you very much
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Albert

On 04/29/2013 03:47 PM, Szilárd Páll wrote:

In that case, while it isn't very likely, the issue could be caused by
some implementation detail which aims to avoid performance loss caused
by an issue in the NVIDIA drivers.

Try running with the GMX_CUDA_STREAMSYNC environment variable set.

Btw, were there any other processes using the GPU while mdrun was running?

Cheers,
--
Szilárd


thanks for kind reply.
There is no any other process when I am running Gromacs.

do you mean I should set GMX_CUDA_STREAMSYNC in the job script like:

export GMX_CUDA_STREAMSYNC=/opt/cuda-5.0

?

THX
Albert



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Albert

On 04/29/2013 03:31 PM, Szilárd Páll wrote:

The segv indicates that mdrun crashed and not that the machine was
restarted. The GPU detection output (both on stderr and log) should
show whether ECC is "on" (and so does the nvidia-smi tool).

Cheers,
--
Szilárd


yes it was on:


Reading file heavy.tpr, VERSION 4.6.1 (single precision)
Using 4 MPI threads
Using 8 OpenMP threads per tMPI thread

5 GPUs detected:
  #0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
  #1: NVIDIA GeForce GTX 650, compute cap.: 3.0, ECC:  no, stat: compatible
  #2: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
  #3: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
  #4: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible

4 GPUs user-selected for this run: #0, #2, #3, #4

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Albert

On 04/28/2013 05:45 PM, Justin Lemkul wrote:


Frequent failures suggest instability in the simulated system. Check 
your .log file or stderr for informative Gromacs diagnostic information.


-Justin 



My log file didn't have any errors; the end of the stopped log file looks 
something like:


DD  step 2259  vol min/aver 0.967  load imb.: force  0.8%

   Step   Time Lambda
   226045200.00.0

   Energies (kJ/mol)
  AngleU-BProper Dih.  Improper Dih.  LJ-14
9.86437e+034.02406e+043.52809e+046.13542e+02 8.61815e+03
 Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
1.25055e+043.05477e+04   -9.05956e+03   -6.02400e+05 1.58357e+03
 Position Rest.  PotentialKinetic En.   Total Energy Temperature
1.39149e+02   -4.72066e+051.37165e+05   -3.34901e+05 3.11958e+02
 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -2.94092e+02   -7.91535e+011.79812e-05


Also, in the info file, I only obtained:


step 13300, will finish Tue Apr 30 14:41
NOTE: Turning on dynamic load balancing


Probably the machine was restarted from time to time?

best
Albert


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GPU job often stopped

2013-04-29 Thread Albert

Hello:

 Yes, I tried the CPU-only version; it runs well and didn't stop. I am 
not sure whether I have ECC on or not. There are 4 Tesla K20 cards and one 
GTX650 in the workstation; after compilation, I simply submit the jobs 
with the command:



mdrun -s md.tpr -gpu_id 0234

I submitted the same system on another GTX690 machine, and it also runs 
well. I compiled Gromacs with the same options on that machine.


thank you very much
best
Albert



On 04/29/2013 01:19 PM, Szilárd Páll wrote:

Have you tried running on CPUs only just to see if the issue persists?
Unless the issue does not occur with the same binary on the same
hardware running on CPUs only, I doubt it's a problem in the code.

Do you have ECC on?
--
Szilárd


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] GPU job often stopped

2013-04-28 Thread Albert

Dear:

  I am running MD jobs on a workstation with 4 K20 GPUs, and I found that 
the jobs always fail from time to time with the following messages:



[tesla:03432] *** Process received signal ***
[tesla:03432] Signal: Segmentation fault (11)
[tesla:03432] Signal code: Address not mapped (1)
[tesla:03432] Failing at address: 0xfffe02de67e0
[tesla:03432] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) 
[0x7f4666da1cb0]

[tesla:03432] [ 1] mdrun_mpi() [0x47dd61]
[tesla:03432] [ 2] mdrun_mpi() [0x47d8ae]
[tesla:03432] [ 3] 
/opt/intel/lib/intel64/libiomp5.so(__kmp_invoke_microtask+0x93) 
[0x7f46667904f3]

[tesla:03432] *** End of error message ***
--
mpirun noticed that process rank 0 with PID 3432 on node tesla exited on 
signal 11 (Segmentation fault).

--


I can continue the jobs with the mdrun options "-append -cpi", but they 
still stop from time to time. I am just wondering what the problem is?


thank you very much
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why files are so large?

2013-04-28 Thread Albert

On 04/28/2013 02:08 PM, Justin Lemkul wrote:
This looks like a pretty clear bug to me, especially the negative file 
size, which cannot possibly make sense.  What version of Gromacs is this?


-Justin


hi Justin:

 thanks a lot for kind comments. I am using the latest 4.6.1 with GPU 
production.


best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] why files are so large?

2013-04-28 Thread Albert

Hello:

  I am using the following settings for the output files:

dt          = 0.002   ; Time-step (ps)
nsteps      = 25      ; Number of steps to run (0.002 * 50 = 1 ns)

; Parameters controlling output writing
nstxout     = 500     ; Write coordinates to output .trr file every 2 ps
nstvout     = 500     ; Write velocities to output .trr file every 2 ps
nstfout     = 0
nstxtcout   = 5
nstenergy   = 50      ; Write energies to output .edr file every 2 ps
nstlog      = 50      ; Write output to .log file every 2 ps


and I obtained the following note from grompp:

NOTE 2 [file md.mdp]:
  This run will generate roughly 2791985478365075968 Mb of data


however, when I set
nstxout = 0
nstvout = 0
nstfout = 0

I obtained the following information:

This run will generate roughly -9066 Mb of data

Why is the file size negative? Moreover, my nstxout is quite large, so I 
don't know why the estimate is so big; and no matter how I change 
nstxout and nstvout, the estimated size doesn't change at all. It always claims:



  This run will generate roughly 2791985478365075968 Mb of data

thank you very much
Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] can we use large timestep for membrane GPU simulation?

2013-04-28 Thread Albert

Hello:

 I have been watching Eric's online Gromacs GPU webinar these days. I 
noticed that he talked about introducing a large timestep (5 fs) for GPU 
simulations of a water system. I am just wondering, can we also use such 
a big time step for a membrane system if we are going to run the job in Gromacs?


What's more, Eric also showed a GLIC ion channel simulation with 
150,000 atoms; an E5-2690 plus a GTX Titan can get up to 38 ns/day. But 
he didn't talk about the timestep and cutoff used.


could anybody comment on this?

thank you very much
best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] _gpu_id failed

2013-04-26 Thread Albert

Hello:

 I am going to run gromacs with command:

mpirun -np 4 mdrun_mpi -s em.tpr -c em.gro -v -g em.log -gpu_id #0, #2, 
#3, #4



but it failed with messages:

Program mdrun_mpi, VERSION 4.6.1
Source code file: 
/home/albert/install/source/gromacs-4.6.1/src/gmxlib/statutil.c, line: 364


Fatal error:
Expected a string argument for option -gpu_id

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Jede der Scherben spiegelt das Licht" (Wir sind Helden)

Error on node 3, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 3 out of 4

gcq#330: "Jede der Scherben spiegelt das Licht" (Wir sind Helden)


gcq#330: "Jede der Scherben spiegelt das Licht" (Wir sind Helden)


gcq#330: "Jede der Scherben spiegelt das Licht" (Wir sind Helden)

--
mpirun has exited due to process rank 2 with PID 10440 on
node tesla exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).


thank you very much
Albert
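
(For the record: the shell splits that comma-separated list into separate arguments, so -gpu_id never receives a string. The form mdrun expects, matching the usage elsewhere in this digest, is:)

mpirun -np 4 mdrun_mpi -s em.tpr -c em.gro -v -g em.log -gpu_id 0234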
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] GPU efficiency question

2013-04-26 Thread Albert

Dear:

 I've got two GTX690 cards in a workstation, and I found that when I run 
the MD production with either of the following two commands:


mpirun -np 4 mdrun_mpi

or

mpirun -np 2 mdrun_mpi

the efficiency is the same. I notice that Gromacs can detect 4 GPUs 
(probably because the GTX690 has two GPU dies):


4 GPUs detected on host node4:
  #0: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #1: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #2: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible
  #3: NVIDIA GeForce GTX 690, compute cap.: 3.0, ECC:  no, stat: compatible


why the "-np 2" and "-np 4" are the same efficiency? shouldn't it be 
faster for "-np 4" ?


thank you very much

Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] compile error

2013-04-26 Thread Albert

IC.

it works very well now.

thanks a lot
Albert


On 04/26/2013 08:01 PM, Szilárd Páll wrote:

You got a warning at configure time that the nvcc host compiler can't
be set because the MPI compiler wrappers are used. Because of this,
nvcc is using gcc to compile CPU code, which chokes on the icc flags.
You can:
- set CUDA_HOST_COMPILER to the mpicc backend, i.e. icc, or
- let cmake detect MPI and simply use CC=icc CXX=icpc cmake
-DGMX_MPI=ON (in this case the normal compilers are used and *if* cmake
can detect the MPI libs it will not need the wrappers).

Cheers,
--
Szilárd
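
(A sketch of the first option — CUDA_HOST_COMPILER is the standard FindCUDA cache variable; the icc path here is a guess, adjust to your installation:)

cmake .. -DGMX_MPI=ON -DGMX_GPU=ON \
  -DCUDA_HOST_COMPILER=/opt/intel/bin/icc \
  -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpiCC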


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] compile error

2013-04-26 Thread Albert

Dear:

 I've installed gromacs-4.6.1 in a GPU workstation with two GTX690. 
Here is my step:




./configure --prefix=/home/albert/install/openmpi-1.4.5 CC=icc CXX=icpc 
F77=ifort FC=ifort

make
make install


cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DBUILD_SHARED_LIBS=OFF 
-DCUDA_TOOLKIT_ROOT_DIR=/opt/common/cuda-5.0 
-DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs-4.6.1 
-DCMAKE_PREFIX_PATH=/opt/intel/mkl/include/fftw 
-DCMAKE_CXX_COMPILER=/home/albert/install/openmpi-1.4.5/bin/mpiCC 
-DCMAKE_C_COMPILER=/home/albert/install/openmpi-1.4.5/bin/mpicc



however, it failed with messages:

[  1%] Building NVCC (Device) object 
src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o

cc1plus: error: unrecognized command line option ‘-ip’
CMake Error at gpu_utils_generated_gpu_utils.cu.o.cmake:198 (message):
  Error generating
/home/albert/install/source/gromacs-4.6.1/build/src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o


make[2]: *** 
[src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/./gpu_utils_generated_gpu_utils.cu.o] 
Error 1

make[1]: *** [src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/all] Error 2
make: *** [all] Error 2


when I try to run make

thank you very much
best
Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] using CHARMM force field for organic molecule

2013-04-22 Thread Albert

On 04/22/2013 01:43 PM, Justin Lemkul wrote:

There are several options, all external to Gromacs:

https://www.paramchem.org/
http://www.swissparam.ch/

-Justin 



Does paramchem support Gromacs? As far as I know it only exports in CHARMM 
format.


Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS 4.6 with GPU acceleration (double

2013-04-21 Thread Albert

On 04/22/2013 08:40 AM, Mikhail Stukan wrote:

Could you explain which hardware do you mean? As far as I know, K20X supports 
double precision, so I would assume that double precision GROMACS should be 
realizable on it.


Really? But many people have said that the GPU version ONLY supports 
single precision..





Thanks and regards,
Mikhail


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] how to direct log file correctly?

2013-04-20 Thread Albert

Hi Justin:

 thanks for reply.

  I redirect it by  command:

mpirun -np 2 mdrun_mpi -v -s md.tpr -c md.gro -x md.xtc -g md.log > md.info

However, the information still appears on my terminal screen instead of 
in the md.info file. An md.info file is generated, but it is empty.


I also tried the command:

mpirun -np 2 mdrun_mpi -v -s md.tpr -c md.gro -x md.xtc -g md.log & > 
md.info


but it said:

Invalid null command.


thanks a lot
Albert



On 04/17/2013 05:33 PM, Justin Lemkul wrote:



On 4/17/13 11:30 AM, Albert wrote:

Hello:

  I found that each time I submit gromacs job in GPU workstation, the 
log file

always in my terminal screen, like:

imb F  1% step 13700, will finish Wed Apr 17 17:57:20 2013
imb F  1% step 13800, will finish Wed Apr 17 17:57:20 2013
imb F  0% step 13900, will finish Wed Apr 17 17:57:20 2013
imb F  0% step 14000, will finish Wed Apr 17 17:57:20 2013
imb F  1% step 14100, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14200, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14300, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14400, will finish Wed Apr 17 17:57:19 2013


I am using command:

mpirun -np 2 mdrun_mpi -v -s nvt.tpr -c nvt.gro -g nvt.log -x nvt.xtc
grompp_mpi -f npt1.mdp  -c nvt.gro -p topol.top -o heavy.tpr -n
mpirun -np 2 mdrun_mpi -v -s heavy.tpr -c heavy.gro -x heavy.xtc -g 
1000.log




I am just wondering how can we direct the log file correctly? 
Meanwhile tho job
won't have any problem going to next step (I am running all the steps 
in one

script).



You're only getting that output because you're enabling verbose mode 
with -v. If you don't want that information, don't use -v. If you do 
want it, but just want it written to a file, redirect it using 
standard command line redirection (Google is your friend).


-Justin
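
(A sketch for the archive — mdrun's -v progress lines go to standard error, so a plain '>' leaves them on the screen; redirect both streams in bash, or use '>&' in csh/tcsh:)

mpirun -np 2 mdrun_mpi -v -s md.tpr -c md.gro -x md.xtc -g md.log > md.info 2>&1   # bash
mpirun -np 2 mdrun_mpi -v -s md.tpr -c md.gro -x md.xtc -g md.log >& md.info       # csh/tcsh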



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Announcement: new version of GridMAT-MD available

2013-04-18 Thread Albert
It seems to be good. However, I always have problems compiling matpack, 
which is needed by xmatrix... I sent the author an email, but never got a 
reply..


here is the log file:


creating libpng12.la
(cd .libs && rm -f libpng12.la && ln -s ../libpng12.la libpng12.la)
make[4]: Leaving directory 
`/home/albert/Desktop/matpack/source/3rdparty/libpng'
make[3]: Leaving directory 
`/home/albert/Desktop/matpack/source/3rdparty/libpng'

(cd Matutil ; make )
make[3]: Entering directory `/home/albert/Desktop/matpack/source/Matutil'
g++ -ansi -std=c++0x -m64 -DXPM_INCLUDE="" -c -s -Wall -O3 
-fforce-addr -funroll-loops -felide-constructors -I ../../include 
mpromannumber.cpp
g++ -ansi -std=c++0x -m64 -DXPM_INCLUDE="" -c -s -Wall -O3 
-fforce-addr -funroll-loops -felide-constructors -I ../../include 
mpcontextsave.cpp
g++ -ansi -std=c++0x -m64 -DXPM_INCLUDE="" -c -s -Wall -O3 
-fforce-addr -funroll-loops -felide-constructors -I ../../include 
mpparsetool.cpp
g++ -ansi -std=c++0x -m64 -DXPM_INCLUDE="" -c -s -Wall -O3 
-fforce-addr -funroll-loops -felide-constructors -I ../../include 
mpgetopt.cpp
mpgetopt.cpp: In member function ‘bool 
MATPACK::MpGetopt::assign(MATPACK::MpGetopt::OptNode*, const char**, 
std::string&)’:

mpgetopt.cpp:314: error: ‘strtol’ was not declared in this scope
mpgetopt.cpp:330: error: ‘strtoul’ was not declared in this scope
mpgetopt.cpp:345: error: ‘strtod’ was not declared in this scope
g++ -ansi -std=c++0x -m64 -DXPM_INCLUDE="" -c -s -Wall -O3 
-fforce-addr -funroll-loops -felide-constructors -I ../../include 
mptimerqueue.cpp

make[3]: *** [mpgetopt.o] Error 1
make[3]: *** Waiting for unfinished jobs
make[3]: Leaving directory `/home/albert/Desktop/matpack/source/Matutil'
make[2]: *** [static_lib] Error 2
make[2]: Leaving directory `/home/albert/Desktop/matpack/source'
make[1]: *** [lib] Error 2
make[1]: Leaving directory `/home/albert/Desktop/matpack'
make: *** [allstatic] Error 2
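
(All three errors point at functions declared in <cstdlib>, which newer g++ versions no longer include transitively — an untested guess at a fix, run from the matpack source tree:)

sed -i '1i #include <cstdlib>' Matutil/mpgetopt.cpp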


thank you very much

Albert


On 04/18/2013 11:19 PM, Justin Lemkul wrote:


Greetings all,

I wanted to announce that we have released version 2.0 of GridMAT-MD.  
Many of you use this program for membrane analysis and I am pleased to 
note that we have introduced many new features based on your feedback, 
including trajectory support (multi-frame .pdb and .gro files).  
Please visit our webpage for details: 
http://www.bevanlab.biochem.vt.edu/GridMAT-MD/


We hope you continue to find our software useful.  Please contact me 
if any issues arise.


-Justin



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] how to direct log file correctly?

2013-04-17 Thread Albert

Hello:

 I found that each time I submit a Gromacs job on the GPU workstation, 
the log output always ends up on my terminal screen, like:


imb F  1% step 13700, will finish Wed Apr 17 17:57:20 2013
imb F  1% step 13800, will finish Wed Apr 17 17:57:20 2013
imb F  0% step 13900, will finish Wed Apr 17 17:57:20 2013
imb F  0% step 14000, will finish Wed Apr 17 17:57:20 2013
imb F  1% step 14100, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14200, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14300, will finish Wed Apr 17 17:57:19 2013
imb F  0% step 14400, will finish Wed Apr 17 17:57:19 2013


I am using command:

mpirun -np 2 mdrun_mpi -v -s nvt.tpr -c nvt.gro -g nvt.log -x nvt.xtc
grompp_mpi -f npt1.mdp  -c nvt.gro -p topol.top -o heavy.tpr -n
mpirun -np 2 mdrun_mpi -v -s heavy.tpr -c heavy.gro -x heavy.xtc -g 1000.log



I am just wondering how we can redirect the log output correctly, while 
the job still proceeds to the next step without any problem (I am running 
all the steps in one script).


thank you very much
best
Albert
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: why minimization stop so fast

2013-04-16 Thread Albert

Hello Brad:

 thanks for the advice.
 I've also solved the problem, after running the 6.1 minimization step in 
NAMD. After that, I reimported the lipid system into Gromacs, and it no 
longer complains about those issues.


best
Albert




On 04/16/2013 09:59 PM, Brad Van Oosten wrote:

Hello, I have had the same problem with CHARMM-GUI, however I have found a
decent work-around procedure:
1. Go to your input .gro file and locate the atom with infinite force
   (atom 23533 in this case)
2. Change one of the x,y,z positions of that atom by about +/- 0.5
3. Rerun grompp with the new .gro file
4. Rerun minimization
5. Repeat. This may happen with several atoms that are overlapped, but with
   the little shove you give it, it may be able to correct itself


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] new CHARMM-GUI output not supported?

2013-04-16 Thread Albert

Hello Mark and Justin:

 thanks a lot for kind comments.

 I changed the atom order in the force field .rtp file so that it matches 
the output from CHARMM-GUI, and it works fine now.


best
Albert



On 04/17/2013 12:50 AM, Mark Abraham wrote:

"Support" is not really the right word:-). That force field port and that
builder are things provided by (different?) third parties in the hope that
they are useful. You can't really expect stuff written by three groups of
people at different times to inter-operate seamlessly.

IIRC nbfix has been around for a while,  but I've no idea what it is...

Mark


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why minimization stop so fast

2013-04-16 Thread Albert

On 04/16/2013 07:28 PM, Justin Lemkul wrote:

Create a more physically reasonable starting structure.

-Justin



the protein and ligand are already minimized, but CHARMM-GUI creates the 
mixed lipids automatically and I cannot change them.


ALBERT
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why minimization stop so fast

2013-04-16 Thread Albert

Hi Justin:

 thanks for kind reply.
 Yes, there are many atom clashes in the CHARMM-GUI system, so I put very 
strong restraints on the protein and ligand and tried to minimize the rest 
of the system.


Do you have any idea how I can solve the issue and make it work?

thanks a lot
Albert


On 04/16/2013 07:09 PM, Justin Lemkul wrote:
Your system has severe atomic overlap.  You have infinite force on 
atom 23533.


-Justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] why minimization stop so fast

2013-04-16 Thread Albert

Hello:

 I've built a system with CHARMM-GUI, and I am trying to minimize it with 
the following em.mdp file:



title       = steepest descent energy minimization
define      = -DPOSRES -DPOSRES_LIG
cpp         = /usr/bin/cpp
include     =
integrator  = steep
nsteps      = 5
emtol       = 0.01
nstcgsteep  = 1000      ; steps between cg corrections; the larger, the more accurate
nstxout     =
nstvout     =
nstlog      = 100
nstenergy   = 10
nstxtcout   = 100
nstlist     = 10
ns_type     = grid
rcoulomb    = 1.4
coulombtype = pme
fourierspacing = 0.12
pme_order   = 4
rvdw        = 1.4
rlist       = 1.4
DispCorr    = enerpres
constraints = none      ; none, hbonds, all-bonds, h-angles, all-angles
cutoff-scheme = Verlet  ; GPU running
constraint_algorithm = Lincs





but it stopped after 15 steps.







Steepest Descents:
   Tolerance (Fmax)   =  1.0e-02
   Number of steps=5

Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 0.01 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)

writing lowest energy coordinates.

Back Off! I just backed up em.gro to ./#em.gro.1#

Steepest Descents converged to machine precision in 15 steps,
but did not reach the requested Fmax < 0.01.
Potential Energy  =  3.4943549e+18
Maximum force =inf on atom 23533
Norm of force =inf

NOTE: 22 % of the run time was spent in domain decomposition,
  6 % of the run time was spent in pair search,
  you might want to increase nstlist (this has no effect on accuracy)


gcq#154: "Rub It Right Accross Your Eyes" (F. Zappa)




Thank you very much
Albert

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] new CHARMM-GUI output not supported?

2013-04-16 Thread Albert



Hello:


 I obtained a POPC lipid system from CHARMM-GUI and found that the initial 
12 lines have the following atom name order:

ATOM   6315  N   POPC   22   3.580 -22.614  19.970 1.00-19.29  MEMB
ATOM   6316  C12 POPC   22   4.563 -22.414  18.821 1.00-17.85  MEMB
ATOM   6317 H12A POPC   22   4.337 -21.455  18.379  1.00 0.00  MEMB
ATOM   6318 H12B POPC   22   5.583 -22.427  19.173  1.00 0.00  MEMB
ATOM   6319  C13 POPC   22   3.979 -23.730  20.844 1.00-17.96  MEMB
ATOM   6320 H13A POPC   22   3.969 -24.679  20.329  1.00 0.00  MEMB
ATOM   6321 H13B POPC   22   4.977 -23.582  21.229  1.00 0.00  MEMB
ATOM   6322 H13C POPC   22   3.420 -23.829  21.763  1.00 0.00  MEMB
ATOM   6323  C14 POPC   22   3.424 -21.320  20.778 1.00-20.44  MEMB
ATOM   6324 H14A POPC   22   2.552 -21.497  21.391  1.00 0.00  MEMB
ATOM   6325 H14B POPC   22   4.281 -21.118  21.404  1.00 0.00  MEMB
ATOM   6326 H14C POPC   22   3.249 -20.474  20.130  1.00 0.00  MEMB


And here are the first 12 atom lines of the Slipids popc.itp:

; residue   1 POPC rtp POPC q  0.0
     1    NTL      1   POPC      N      1      -0.6     14.007   ; qtot -0.6
     2   CTL2      1   POPC    C12      2      -0.1     12.011   ; qtot -0.7
     3   CTL5      1   POPC    C13      3     -0.35     12.011   ; qtot -1.05
     4   CTL5      1   POPC    C14      4     -0.35     12.011   ; qtot -1.4
     5   CTL5      1   POPC    C15      5     -0.35     12.011   ; qtot -1.75
     6     HL      1   POPC   H12A      6      0.25      1.008   ; qtot -1.5
     7     HL      1   POPC   H12B      7      0.25      1.008   ; qtot -1.25
     8     HL      1   POPC   H13A      8      0.25      1.008   ; qtot -1
     9     HL      1   POPC   H13B      9      0.25      1.008   ; qtot -0.75
    10     HL      1   POPC   H13C     10      0.25      1.008   ; qtot -0.5
    11     HL      1   POPC   H14A     11      0.25      1.008   ; qtot -0.25
    12     HL      1   POPC   H14B     12      0.25      1.008   ; qtot 0


And here are the first 12 atom lines of the CHARMM36.ff/lipids.rtp file:

[ POPC ]
 [ atoms ]
        N    NTL   -0.60    0
      C12   CTL2   -0.10    1
      C13   CTL5   -0.35    2
      C14   CTL5   -0.35    3
      C15   CTL5   -0.35    4
     H12A     HL    0.25    5
     H12B     HL    0.25    6
     H13A     HL    0.25    7
     H13B     HL    0.25    8
     H13C     HL    0.25    9
     H14A     HL    0.25   10
     H14B     HL    0.25   11


As we can see, the Slipids popc.itp agrees with CHARMM36.ff/lipids.rtp, but
both differ from the current CHARMM-GUI output in atom-name order. Probably
CHARMM-GUI changed its format, so the Slipids FF and the CHARMM36 FF in
Gromacs cannot handle output from the current CHARMM-GUI version?

Moreover, the new CHARMM36 FF (for protein, lipids and ions) introduced
residue-pair-specific (native contact) non-bonded parameters, the NBFIX
term; probably that is also not supported by the current CHARMM36 FF in
Gromacs?
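
A quick way to put the two atom-name orders side by side, as a sketch (the
pdb file name is made up; the fields rely on the whitespace-separated
columns shown above):

awk '$4 == "POPC" && $5 == 22 { print $3 }' charmm-gui.pdb | head -12

This lists the atom names of POPC residue 22 in file order, which can then
be compared directly against the [ atoms ] order in the .itp and .rtp
entries.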

 thank you very much

best
Albert





--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] why no. of atoms doesn't match?

2013-04-15 Thread Albert
  -5.982  15.011  -0.495  1.00 0.00  MEMB
ATOM   6371 H15S POPA   22  -7.788  14.771  -0.220  1.00 0.00  MEMB
ATOM   6372 C216 POPA   22  -6.719  13.003  -0.168  1.00 1.38  MEMB
ATOM   6373 H16R POPA   22  -7.194  12.488   0.703  1.00 0.00  MEMB
ATOM   6374 H16S POPA   22  -5.668  12.647  -0.108  1.00 0.00  MEMB
ATOM   6375 C217 POPA   22  -7.629  12.493  -1.353  1.00 2.26  MEMB
ATOM   6376 H17R POPA   22  -7.103  12.927  -2.231  1.00 0.00  MEMB
ATOM   6377 H17S POPA   22  -8.714  12.714  -1.259  1.00 0.00  MEMB
ATOM   6378 C218 POPA   22  -7.509  11.008  -1.453  1.00 3.09  MEMB
ATOM   6379 H18R POPA   22  -7.847  10.511  -0.519  1.00 0.00  MEMB
ATOM   6380 H18S POPA   22  -6.520  10.572  -1.713  1.00 0.00  MEMB
ATOM   6381 H18T POPA   22  -8.258  10.612  -2.171  1.00 0.00  MEMB
ATOM   6382  C33 POPA   22 -16.464  17.039  11.820 1.00-12.04  MEMB
ATOM   6383  H3X POPA   22 -17.560  16.953  11.693  1.00 0.00  MEMB
ATOM   6384  H3Y POPA   22 -16.019  16.368  11.052  1.00 0.00  MEMB
ATOM   6385  C34 POPA   22 -16.090  18.484  11.481 1.00-10.72  MEMB
ATOM   6386  H4X POPA   22 -15.125  18.749  11.972  1.00 0.00  MEMB
ATOM   6387  H4Y POPA   22 -16.873  19.152  11.903  1.00 0.00  MEMB
ATOM   6388  C35 POPA   22 -15.928  18.728   9.974  1.00 -9.86  MEMB
ATOM   6389  H5X POPA   22 -14.961  18.301   9.648  1.00 0.00  MEMB
ATOM   6390  H5Y POPA   22 -15.835  19.806   9.775  1.00 0.00  MEMB
ATOM   6391  C36 POPA   22 -17.057  18.200   9.080  1.00 -8.54  MEMB
ATOM   6392  H6X POPA   22 -18.041  18.549   9.465  1.00 0.00  MEMB
ATOM   6393  H6Y POPA   22 -17.055  17.088   9.085  1.00 0.00  MEMB
ATOM   6394  C37 POPA   22 -16.865  18.708   7.645  1.00 -7.43  MEMB
ATOM   6395  H7X POPA   22 -15.803  18.583   7.354  1.00 0.00  MEMB
ATOM   6396  H7Y POPA   22 -17.018  19.795   7.633  1.00 0.00  MEMB
ATOM   6397  C38 POPA   22 -17.785  18.057   6.608  1.00 -6.21  MEMB
ATOM   6398  H8X POPA   22 -18.835  18.366   6.809  1.00 0.00  MEMB
ATOM   6399  H8Y POPA   22 -17.730  16.950   6.711  1.00 0.00  MEMB
ATOM   6400  C39 POPA   22 -17.404  18.434   5.167  1.00 -5.05  MEMB
ATOM   6401  H9X POPA   22 -16.388  18.042   4.949  1.00 0.00  MEMB
ATOM   6402  H9Y POPA   22 -17.367  19.541   5.069  1.00 0.00  MEMB
ATOM   6403 C310 POPA   22 -18.395  17.870   4.142  1.00 -3.85  MEMB
ATOM   6404 H10X POPA   22 -19.360  18.409   4.265  1.00 0.00  MEMB
ATOM   6405 H10Y POPA   22 -18.570  16.797   4.381  1.00 0.00  MEMB
ATOM   6406 C311 POPA   22 -17.940  17.953   2.674  1.00 -2.66  MEMB
ATOM   6407 H11X POPA   22 -18.798  17.664   2.025  1.00 0.00  MEMB
ATOM   6408 H11Y POPA   22 -17.133  17.205   2.507  1.00 0.00  MEMB
ATOM   6409 C312 POPA   22 -17.419  19.330   2.231  1.00 -1.30  MEMB
ATOM   6410 H12X POPA   22 -16.444  19.506   2.741  1.00 0.00  MEMB
ATOM   6411 H12Y POPA   22 -18.108  20.133   2.554  1.00 0.00  MEMB
ATOM   6412 C313 POPA   22 -17.202  19.416   0.713  1.00 -0.21  MEMB
ATOM   6413 H13X POPA   22 -16.950  18.396   0.342  1.00 0.00  MEMB
ATOM   6414 H13Y POPA   22 -16.321  20.046   0.491  1.00 0.00  MEMB
ATOM   6415 C314 POPA   22 -18.396  19.961  -0.111  1.00 1.01  MEMB
ATOM   6416 H14X POPA   22 -18.686  20.951   0.204  1.00 0.00  MEMB
ATOM   6417 H14Y POPA   22 -19.197  19.234   0.080  1.00 0.00  MEMB
ATOM   6418 C315 POPA   22 -18.024  19.833  -1.597  1.00 1.92  MEMB
ATOM   6419 H15X POPA   22 -17.663  18.829  -1.906  1.00 0.00  MEMB
ATOM   6420 H15Y POPA   22 -17.169  20.513  -1.796  1.00 0.00  MEMB
ATOM   6421 C316 POPA   22 -19.239  20.200  -2.506  1.00 3.28  MEMB
ATOM   6422 H16X POPA   22 -19.011  19.979  -3.570  1.00 0.00  MEMB
ATOM   6423 H16Y POPA   22 -19.546  21.266  -2.437  1.00 0.00  MEMB
ATOM   6424 H16Z POPA   22 -20.104  19.539  -2.286  1.00 0.00  MEMB


thank you very much
best
Albert



On 04/15/2013 08:03 PM, Albert wrote:

Hello Justin and bv08ay:

  thanks a lot for the kind reply. I counted each component by isolating 
them one by one:


1. protein generated by pdb2gmx command: 4739 atoms in all
2. ligand: 79 atoms
3. cholesterol: 74x42 = 3108 atoms
4. popc: 116x98 = 11368 atoms
5. solvent: 10079x3= 30237 atoms

total: 49531


I double-checked the number of each component and the total line (which 
gives the atom count) with gedit; the total is 49621 atoms. The topology 
lists the correct number of each component, and the total over those 
components should be 49621, not the 51295 claimed by Gromacs 
("does not match topology (topol.top, 51295)").


This matches the topology I defined at the end of the topol.top file:

[ molecules ]
; Compound        #mo

Re: [gmx-users] why no. of atoms doesn't match?

2013-04-15 Thread Albert

Hello Justin and bv08ay:

  thanks a lot for the kind reply. I counted each component by isolating them 
one by one:


1. protein generated by pdb2gmx command: 4739 atoms in all
2. ligand: 79 atoms
3. cholesterol: 74x42 = 3108 atoms
4. popc: 116x98 = 11368 atoms
5. solvent: 10079x3= 30237 atoms

total: 49531


I double-checked the number of each component and the total line (which 
gives the atom count) with gedit; the total is 49621 atoms. The topology 
lists the correct number of each component, and the total over those 
components should be 49621, not the 51295 claimed by Gromacs 
("does not match topology (topol.top, 51295)").


This matches the topology I defined at the end of the topol.top file:

[ molecules ]
; Compound        #mols
Protein           1
LIG               1
CHL1              42
POPC              98
SOL               10079

thank you very much
best
Albert


On 04/15/2013 07:45 PM, Justin Lemkul wrote:
Seriously though, the answer to this question is always the same: you're 
not counting something right, and the solution is something only you can 
determine. grep -c is your friend here. You're off by 1764 atoms, which 
may be useful information if 1764 divides evenly by the atom count of any 
of your molecules. Water does, but perhaps other molecules do as well, 
depending on their representation.


-Justin 
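
To make the grep -c suggestion concrete, a sketch (the patterns assume the
residue names from the [ molecules ] section also appear in the coordinate
file):

grep -c 'POPC' complex.pdb    # atoms belonging to POPC residues
grep -c 'CHL1' complex.pdb    # cholesterol atoms
grep -c 'SOL'  complex.pdb    # water atoms (may be named TIP3 in a CHARMM-GUI pdb)

Since 51295 - 49531 = 1764 and 1764 / 3 = 588, one candidate is 588 water
molecules counted in the topology but missing from the coordinate file.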


--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] why no. of atoms doesn't match?

2013-04-15 Thread Albert

Hello:

 I've built a protein/membrane system with CHARMM-GUI, and I am going 
to use it for a Gromacs MD simulation with the Slipids FF.


First I extracted the protein and generated its topology with the command:

pdb2gmx -f protein.pdb -o gmx.pdb -ignh -ter

The protein was assigned the Amber FF, including TIP3P for the solvent.

After that, I added the following at the top of the topol.top file:

; Include forcefield parameters
#include "slipids.ff/forcefield.itp"
#include "ligand.itp"
#include "popc.itp"
#include "chol.itp"

and the following at the bottom of the topol.top file:

[ molecules ]
; Compound#mols
Protein_chain_A 1
LIG 1
CHL1 42
POPC 98
SOL 10079

Then the CHARMM-GUI protein coordinates were replaced with the new gmx.pdb 
coordinates, all cholesterol molecules were grouped together, and all POPC 
were grouped together as well. The order in complex.pdb is: protein, 
ligand, cholesterol, POPC and water. The system contains 1 protein, 1 
ligand, 42 cholesterol, 98 POPC and 10079 water molecules. Then I try to 
minimize the system with the command:


grompp_mpi -f em.mdp -c complex.pdb -o em.tpr

However, it always fails with the message:


Program grompp_mpi, VERSION 4.6.1
Source code file: 
/home/albert/Desktop/gromacs-4.6.1/src/kernel/grompp.c, line: 563

Fatal error:
number of coordinates in coordinate file (complex.pdb, 49531)
 does not match topology (topol.top, 51295)


I don't understand why Gromacs reports this error, since to me everything 
looks right in my pdb file and topology...


thank you very much
Albert




--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] shall we add ions in FEP?

2013-04-11 Thread Albert

Hello:

 I found that in the free energy calculation tutorial, the ligand is 
placed in "pure" solvent without any ions. I am just wondering: would it 
be better to add 0.15 M NaCl to the system, especially when we want to 
calculate the protein/ligand binding energy through FEP?
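
For reference, adding roughly 0.15 M NaCl beforehand would look something
like the sketch below (the .tpr name is arbitrary, the ion names depend on
the force field, and genion will ask which solvent group to replace):

genion -s ions.tpr -o system_ions.gro -p topol.top -pname NA -nname CL -conc 0.15 -neutral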


THX
Albert
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] how to extract last frame?

2013-04-09 Thread Albert

Hello:

 I am trying to extract the last frame of my MD simulation with the command:

trjconv_mpi -f s.xtc -s P-in.gro -dump -1 -o p-out.pdb

but it said:

WARNING no output, last frame read at t=751.4
gcq#286: "Oh, There Goes Gravity" (Eminem)
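
Note that -dump takes a time in ps, not a frame index, so -1 asks for a
time before the first frame and nothing gets written. Since the warning
reports the last frame at t=751.4, something like this sketch (reusing the
file names above) should work; any time at or beyond the final frame
selects the last frame:

trjconv_mpi -f s.xtc -s P-in.gro -dump 751.4 -o p-out.pdb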

thank you very much
best
Albert

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] can we use GPU from another machine for calculation?

2013-04-08 Thread Albert

Hello:

   I've got two GPU workstations, each with two GTX690 cards. Now I am 
planning to run the Gromacs GPU version, and I am just wondering: can we 
submit a single job on one machine and have it use the GPU resources of 
both machines?
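
In 4.6 a single run can only span both machines through MPI. A minimal
sketch, assuming an MPI-enabled build (mdrun_mpi), a hostfile named hosts
listing both workstations, and that each GTX690 shows up as two CUDA
devices (four per node):

mpirun -np 8 -hostfile hosts mdrun_mpi -gpu_id 0123 -deffnm md

Here -np 8 places four PP ranks on each node and -gpu_id 0123 maps them to
the four local GPUs; whether this beats a single node depends heavily on
the interconnect.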


thank you very  much
best
Albert
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fail to pull

2013-04-07 Thread Albert

IC.

thanks a lot for explanations.

Albert


On 04/07/2013 06:08 PM, Justin Lemkul wrote:

I'm assuming you're getting that line from my tutorial. You pass the .cpt
file to grompp to preserve velocities from the previous equilibration
phase. If you don't, what was the point of equilibrating? Coordinates,
topology, and .mdp parameters are all that are strictly required to produce
a .tpr file, but other files may be necessary to produce a sensible .tpr
file based on previous steps.

-Justin


--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fail to pull

2013-04-07 Thread Albert

hello Justin:

 thanks a lot for such kind comments.

 I may have found the problem: the whole protein is probably not a 
suitable COM reference group, since g_dist shows that the distance stays 
between 0.9-1.0 nm throughout the whole pulling process.
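
The distance check can be reproduced with something like the following
sketch (file names are assumptions; g_dist asks interactively for the two
groups to measure between):

g_dist -f pull.xtc -s pull.tpr -n index.ndx -o dist.xvg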


 BTW, I notice that we are using command:

grompp -f md_pull.mdp -c npt.gro -p topol.top -n index.ndx -t npt.cpt -o 
pull.tpr


to generate the pulling .tpr. May I ask why we should use the option

"-t npt.cpt"

in this step? Usually we only need to specify the .mdp, .gro and .top 
files to generate a .tpr file.



thank you very much
best
Albert

On 04/07/2013 05:31 PM, Justin Lemkul wrote:

Let me clear one thing up first. 1 ns of pulling with a 0.001 nm/ps pull
rate will not necessarily cause the ligand to be displaced by 1 nm. The
particle pulling the virtual spring will be displaced by 1 nm, but the
ligand will only move as a function of this applied force and the restoring
forces (i.e. interactions between the ligand and protein).

Choosing a more suitable reference group and running the simulation for
longer will produce the desired result.

-Justin


--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] fail to pull

2013-04-07 Thread Albert

On 04/06/2013 08:52 PM, Justin Lemkul wrote:

Hard to tell. Does your ligand have a suitable exit pathway exactly aligned
along the x-axis? Have you tried increasing the pull rate? How long is the
simulation? I don't even see nsteps in the above .mdp file. How about
increasing the force constant? Is the vector connecting the COM of the
entire protein and the COM of the ligand suitable for describing the exit
pathway?

-Justin


Hello Justin:

 thanks a lot for kind rely.
 Yes, I adjusted the conformation of the whole protein/ligand complex so 
that the ligand can exit along the x-axis. I only showed part of the .mdp 
file, so some settings were not visible.



; Run parameters
integrator  = md
dt  = 0.002
tinit   = 0
nsteps  = 50; 500 ps
nstcomm = 10
; Output parameters
nstxout = 5000  ; every 10 ps
nstvout = 5000
nstfout = 1000
nstxtcout   = 1000  ; every 1 ps
nstenergy   = 1000


Probably I should consider using part of the protein, such as the residues 
around the binding pocket, as the COM reference instead of the whole 
protein? I pulled for 1 ns at pull_rate1 = 0.001, so by the end of the 
pull the distance between the protein COM and the ligand COM should have 
grown by 1 nm (10 A). Probably this is too short with the whole protein 
as the reference?
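
A pocket-based reference group can be built with make_ndx beforehand; a
sketch, with purely hypothetical residue numbers for the pocket:

make_ndx -f npt.gro -o index.ndx
  > ri 100-120        (select the pocket residues; the range is made up)
  > name 19 Pocket    (rename the new group; 19 is just an example id)
  > q

The resulting group would then replace Protein as pull_group0.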


thank you very much
best
Albert



--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] fail to pull

2013-04-06 Thread Albert

Dear:

 I am trying to pull my ligand out of the binding pocket with the 
following configuration:


title   = Umbrella pulling simulation
define  = -DPOSRES
; Pull code
pull= umbrella
pull_geometry   = distance  ; simple distance increase
pull_dim= Y N N
pull_start  = yes   ; define initial COM distance > 0
pull_ngroups= 1
pull_group0 = Protein
pull_group1 = LIG
pull_rate1  = 0.001  ; 0.001 nm per ps = 1 nm per ns
pull_k1 = 1000  ; kJ mol^-1 nm^-2

Tcoupl  = v-rescale
tc_grps = Protein_LIG   Water_and_ions
tau_t   = 0.5   0.5
ref_t   = 310   310
; Pressure coupling is on
Pcoupl  = Parrinello-Rahman
pcoupltype  = isotropic
tau_p   = 1.0   1.0
compressibility = 4.5e-5
ref_p   = 1.0 1.0
refcoord_scaling = com
; Generate velocities is off
gen_vel = no
; Periodic boundary conditions are on in all directions
pbc = xyz
; Long-range dispersion correction
DispCorr= EnerPres



It is quite strange: at the end of the simulation the ligand is still in 
place, not outside the pocket. I am just wondering where the problem is?


thank you very much
best
Albert

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

