[gmx-users] How to simulate a pure methane system?

2013-03-27 Thread Liu Chanjuan
Hello all,
I want to simulate a water/methane system, but when I wrote a methane.itp and the 
force-field file, the run did not behave correctly. I used TIP4P water and OPLS-AA 
methane. In NPT simulations, not only the pure methane system but also the 
water/methane system keeps expanding: the box becomes larger and larger, and I 
cannot find the problem. Could you tell me what is wrong?
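
Not from this thread, but for reference, a minimal OPLS-AA methane topology might 
look like the sketch below. It assumes the opls_138 (CH4 carbon) and opls_140 
(alkane hydrogen) atom types shipped with the stock oplsaa.ff files; it is only an 
illustration, not a validated topology, so check the types, charges, and bonded 
look-ups against your own force-field files.

; methane.itp -- illustrative OPLS-AA methane
[ moleculetype ]
; name   nrexcl
CH4      3

[ atoms ]
;  nr  type      resnr  res  atom  cgnr  charge   mass
    1  opls_138  1      CH4  C     1    -0.240   12.011
    2  opls_140  1      CH4  H1    1     0.060    1.008
    3  opls_140  1      CH4  H2    1     0.060    1.008
    4  opls_140  1      CH4  H3    1     0.060    1.008
    5  opls_140  1      CH4  H4    1     0.060    1.008

[ bonds ]
;  ai  aj  funct
    1   2  1
    1   3  1
    1   4  1
    1   5  1

[ angles ]
;  ai  aj  ak  funct
    2   1   3  1
    2   1   4  1
    2   1   5  1
    3   1   4  1
    3   1   5  1
    4   1   5  1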
Looking forward to your reply, and thanks a lot.
 
Liu Chanjuan
Institute of Geology and Geophysics, Chinese Academy of Sciences
TEL:010-82998424
E-mail:chanjuan0...@126.com


[gmx-users] Re: Atoms are fused after inserting in membrane

2013-03-27 Thread sdshine
Dear all,

Does that mean the atoms will shrink and adjust during the InflateGRO step, and
that the closely packed lipids will move apart? As you said, the energy will of
course be high on the selected atoms during the first EM step.
Can I proceed with my system as it is, since there is no problem in the grompp
and EM steps (apart from the high energy on some atoms)?

Or do I need to manually remove the overlapping lipids and water molecules?
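
Not from this thread, but for reference, a typical steepest-descent EM fragment
for this kind of post-InflateGRO minimization might look like the sketch below.
The values are generic placeholders, not recommendations for any particular
force field or membrane.

; Illustrative steepest-descent EM settings (adjust to your force field/system)
integrator   = steep
emtol        = 1000.0    ; stop when Fmax < 1000 kJ mol^-1 nm^-1
emstep       = 0.01      ; initial step size, nm
nsteps       = 50000     ; upper bound on EM steps
nstlist      = 10
ns_type      = grid
rlist        = 1.0
coulombtype  = PME
rcoulomb     = 1.0
rvdw         = 1.0
pbc          = xyz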

Thanks in advance 






[gmx-users] no CUDA-capable device is detected

2013-03-27 Thread Chandan Choudhury
Dear GMX Users,

I am trying to run GROMACS 4.6.1 on one of our GPU servers:
OS: openSUSE 12.3 x86_64, kernel 3.7.10-1.1-desktop
gcc: 4.7.2

CUDA Library paths
#CUDA-5.0
export CUDA_HOME=/usr/local/cuda-5.0
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:/lib:$LD_LIBRARY_PATH

GROMACS was compiled with:

CMAKE_PREFIX_PATH=/opt/apps/fftw-3.3.3/single:/usr/local/cuda-5.0 cmake ..
-DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/opt/apps/gromacs/461/single
-DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_461 -DGMX_LIBS_SUFFIX=_461
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda

Error on executing mdrun:

NOTE: Error occurred during GPU detection:
  no CUDA-capable device is detected
  Can not use GPU acceleration, will fall back to CPU kernels.

Will use 24 particle-particle and 8 PME only nodes
This is a guess, check the performance at the end of the log file
Using 32 MPI threads

No GPUs detected

I checked my CUDA installation; I am able to compile and run the CUDA sample
programs, e.g. deviceQuery.

I also ran nvidia-smi:
NVIDIA-SMI 4.310.40, Driver Version: 310.40
GPU 0: NVS 300     (Bus-Id 03:00.0)  49 C   16 MB /  511 MB used   ECC: N/A
GPU 1: Tesla K20c  (Bus-Id 04:00.0)  38 C   13 MB / 5119 MB used   ECC: Off
Compute processes: none

What am I missing, given that GROMACS is not detecting the GPUs?
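
Not from this thread, but a few quick checks that often explain "no CUDA-capable
device is detected" (run them as the same user and in the same shell/session that
launches mdrun; the deviceQuery path below is only an example). Note also that
GROMACS 4.6 requires compute capability >= 2.0, so the NVS 300 cannot be used in
any case, but the Tesla K20c should still be detected.

# driver and devices visible to this user/session?
nvidia-smi
# device nodes must be readable/writable by the user running mdrun
ls -l /dev/nvidia*
# if set, this can hide devices from CUDA applications
echo $CUDA_VISIBLE_DEVICES
# run a CUDA sample as the same user, in the same session, that runs mdrun
/usr/local/cuda-5.0/samples/bin/x86_64/linux/release/deviceQuery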

Chandan

--
Chandan kumar Choudhury
NCL, Pune
INDIA
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Simulating membrane proteins in the amber99 force field.

2013-03-27 Thread Christopher Neale
Dear James:

As always, check the primary literature. The Amber99SB force field was introduced with 
an 8 Å cutoff and PME: http://www.ncbi.nlm.nih.gov/pubmed/16981200

Other cutoffs are at your discretion. I am, for instance, using this ff with a 
1.0 nm cutoff and PME because I am using it with the Stockholm Lipids.
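
For illustration only, the corresponding non-bonded block of an .mdp might look
like the sketch below. The values follow the 1.0 nm choice mentioned above; they
are not a recommendation, and the dispersion-correction line is just one common
choice to verify against the lipid force-field papers.

; Illustrative non-bonded settings for Amber99SB with PME and 1.0 nm cut-offs
coulombtype  = PME
rcoulomb     = 1.0
vdwtype      = cut-off
rvdw         = 1.0
rlist        = 1.0
nstlist      = 10
DispCorr     = EnerPres   ; one common choice; check the lipid parameter papers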

Chris.

-- original message --

I'd like to perform MD simulation of the membrane protein parametrized
in Amber99sb force field. Could you tell me what cut-off patterns
should I use for such simulation ?


James



[gmx-users] TOP file question

2013-03-27 Thread Peter Eastman
I'm implementing a TOP file reader, and I have a question about an ambiguity in 
the format.  The [ pairs ] block lists atom pairs that should be handled 
specially (exclusions and 1-4 interactions).  In addition, the gen-pairs flag 
can indicate that pairs are generated automatically.  But all the files I've 
looked at include BOTH of these things, and I'm not sure how to interpret that.

Can I assume that all pairs generated by gen-pairs are already included in the 
list, or might I have to generate additional ones?

Can I assume that all pairs listed in the file were generated by gen-pairs, or 
might there be additional pairs that came from somewhere else?

What if the two definitions disagree with each other, and the [ pairs ] block 
lists different parameters for a pair than would be generated automatically?  
Which should I use?
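
Not an authoritative answer, but for orientation, a sketch of how the pieces
usually fit together in a .top is shown below: with gen-pairs = yes, a [ pairs ]
line without parameters takes its 1-4 parameters from [ pairtypes ] if present,
or is generated from the combination rule scaled by fudgeLJ; explicit parameters
given on the pair line itself override the type-based values. (The numbers here
are placeholders.)

[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        0.5      0.8333

[ pairs ]
;  ai   aj  funct               ; no parameters: looked up / generated as above
    1    4  1
;  ai   aj  funct  sigma  eps   ; explicit parameters: these take precedence
    2    5  1      0.30   0.50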

Thanks.

Peter


[gmx-users] Simulating interfaces and load imbalance

2013-03-27 Thread Gabriele Lanaro
Hi! Recently I've been simulating a system composed of water and LiCl ions in
different arrangements. Depending on the arrangement I get different performance.
The box is tetragonal (though I have heard from other people in my lab that they
also see a performance hit with cubic boxes; I am not sure whether it is the same
issue as mine).

This is running on 64 processors and I have 51840 atoms. The Li/Cl/H2O are
in equal proportions (10368 each).

Test1: Water and ions well mixed.
Box dimension 30.23817   5.54007   4.84122

A set of DD entries from the log files:

DD  step 1350  vol min/aver 0.714  load imb.: force  4.3%  pme
mesh/force 3.802
DD  step 1351  vol min/aver 0.794  load imb.: force  4.0%  pme
mesh/force 1.701
DD  step 1352  vol min/aver 0.833  load imb.: force  6.6%  pme
mesh/force 1.652
DD  step 1353  vol min/aver 0.701  load imb.: force 11.7%  pme
mesh/force 2.062
DD  step 1354  vol min/aver 0.686  load imb.: force  3.8%  pme
mesh/force 1.934
DD  step 1355  vol min/aver 0.768  load imb.: force  1.9%  pme
mesh/force 1.639
DD  step 1356  vol min/aver 0.919  load imb.: force  2.1%  pme
mesh/force 1.369

Test2: All ions on one side (a cubic crystal) all the water on the other
(they are divided in the x direction).
Box dimensions 23.03340   6.15600   6.15600

DD  load balancing is limited by minimum cell size in dimension X
DD  step 756  vol min/aver 0.868! load imb.: force 39.9%  pme
mesh/force 1.036
DD  load balancing is limited by minimum cell size in dimension X
DD  step 757  vol min/aver 0.869! load imb.: force 43.1%  pme
mesh/force 1.030
DD  load balancing is limited by minimum cell size in dimension X
DD  step 758  vol min/aver 0.868! load imb.: force 41.8%  pme
mesh/force 1.035
DD  load balancing is limited by minimum cell size in dimension X
DD  step 759  vol min/aver 0.869! load imb.: force 41.1%  pme
mesh/force 1.044
DD  load balancing is limited by minimum cell size in dimension X
DD  step 760  vol min/aver 0.868! load imb.: force 40.3%  pme
mesh/force 1.030
DD  load balancing is limited by minimum cell size in dimension X
DD  step 761  vol min/aver 0.869! load imb.: force 43.8%  pme
mesh/force 1.031

Basically, Test2 is very slow, and I would like to understand how to interpret
those messages and try to fix the issue (please tell me if there is something I
should read).
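
Not from this thread, but a sketch of mdrun options that are commonly tried for
inhomogeneous (interface) systems like Test2. Whether they help depends on the
system; the grid and rank counts below are only examples for 64 MPI ranks.

# force dynamic load balancing on and watch the reported imbalance
mdrun -deffnm test2 -dlb yes

# set the domain grid and PME rank count explicitly (7*4*2 = 56 DD + 8 PME = 64)
mdrun -deffnm test2 -dd 7 4 2 -npme 8

# give load balancing more room by lowering the minimum cell-size fraction (default 0.8)
mdrun -deffnm test2 -dds 0.6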

Thank you,

Gabriele


[gmx-users] diffusion constant level off

2013-03-27 Thread Ahmet yıldırım
Dear users,

I used the commands below to obtain diffusion constants (one per 10 ns window)
from a 100 ns simulation. The number of frames is 5001 (the .xtc trajectory was
written every 20 ps). I have looked at the RMSD versus the average structure, the
RMSD versus the starting structure, the radius of gyration, and the RMSD matrix;
the simulation has converged over the last 50 ns.
g_msd -f traj.xtc -s topol.tpr -o msd_1.xvg -b 0 -e 1
g_msd -f traj.xtc -s topol.tpr -o msd_2.xvg -b 1 -e 2
g_msd -f traj.xtc -s topol.tpr -o msd_3.xvg -b 2 -e 3
...
g_msd -f traj.xtc -s topol.tpr -o msd_10.xvg -b 9 -e 10

1) I used the above commands without the -type, -lateral, and -ten flags. Which
diffusion coefficient do these commands give? Is it the bulk (3-D) diffusion?
From the GROMACS manual:
-type: Compute diffusion coefficient in one direction: no, x, y or z
-lateral: Calculate the lateral diffusion in a plane perpendicular to: no, x, y or z
-ten: Calculate the full tensor
2) I plotted the diffusion coefficients (10 values) as a function of time; they do
not converge. Did I make a mistake in any of the steps?
3) From the manual: "The diffusion constant is calculated by least squares fitting
a straight line (D*t + c)..." What is (D*t + c)? What do D and c mean?
4) What should "Time between restarting points in trajectory" be set to?
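
Not from the thread, but for reference: without -type/-lateral/-ten, g_msd fits
the three-dimensional Einstein relation MSD(t) ~ 6*D*t + c, so D is the bulk
diffusion coefficient and c is an offset coming from the short-time
(non-diffusive) regime. A sketch of a call that also sets the restart interval
and the fit window is below; the times (in ps) are only examples.

# -trestart: time between restarting points used to average the MSD
# -beginfit/-endfit: window for the straight-line fit D*t + c
g_msd -f traj.xtc -s topol.tpr -o msd_1.xvg -b 0 -e 10000 \
      -trestart 100 -beginfit 1000 -endfit 9000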

Thanks in advance
-- 
Ahmet Yıldırım


[gmx-users] SMD : pulling both primes of DNA

2013-03-27 Thread raghav singh
Hello All,

My simulation system contains DNA, and I want to pull both termini (the 5' and 3'
ends) toward each other at the same time. I have tried every set of pull-code
parameters I could think of, but I am not able to pull them together.

The DNA molecule is aligned along the y axis, so pulling both ends at the same
time means pulling in the -y and +y directions simultaneously.

If anyone has successfully set up such a simulation, please give me some idea of
how to approach this problem.
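
Not a tested setup, but one way this is often done with the GROMACS 4.x pull code
is to make one terminus the reference group and pull the other terminus toward it
with a negative pull rate, rather than defining two opposing pulls. The index
group names below are placeholders you would create with make_ndx, and all
numbers are examples.

; Illustrative 4.x pull-code fragment: shrink the 5'-3' end-to-end distance
pull            = umbrella
pull_geometry   = distance
pull_dim        = N Y N        ; act along y, where the DNA is aligned
pull_start      = yes          ; add the initial distance to pull_init1
pull_ngroups    = 1            ; one pulled group relative to the reference
pull_group0     = DNA_5prime   ; reference group (placeholder index group)
pull_group1     = DNA_3prime   ; pulled group (placeholder index group)
pull_init1      = 0
pull_rate1      = -0.005       ; nm/ps; a negative rate pulls the groups together
pull_k1         = 1000         ; kJ mol^-1 nm^-2
pull_nstxout    = 100
pull_nstfout    = 100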

Thank you
Raghav


[gmx-users] Re: compilation of gromacs-4.5.4 with fftw-3.3 for double precision version

2013-03-27 Thread Christoph Junghans
> Date: Wed, 27 Mar 2013 15:14:19 +0100
> From: Mark Abraham 
> Subject: Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3
> for double precision version
> To: Discussion list for GROMACS users 
>
> Yes, --enable-long-double is useless for FFTW+GROMACS.
Actually, long-double FFTW is, like single precision, a separate library (libfftw3l).
The same is true for FFTW's quad precision (libfftw3q).
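
For reference, what the thread converges on: FFTW's default precision is double
(libfftw3), --enable-float builds libfftw3f, and --enable-long-double builds
libfftw3l, which GROMACS cannot use. A minimal double-precision build consistent
with the commands reported to work later in the thread (the prefix is the
poster's; --enable-shared, or --with-pic for a static library, avoids the -fPIC
relocation error):

cd fftw-3.3
./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 \
            --enable-threads --enable-shared --enable-mpi CC=gcc
make && make install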

>
> Mark
>
> On Wed, Mar 27, 2013 at 3:03 PM, Qinghua Liao wrote:
>
>> Hi all,
>>
>> Finally I compiled it successfully when I used the following commands:
>>
>> 1077  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3
>> --enable-threads --enable-shared --enable-mpi CC=gcc
>>  1078  make
>>  1079  make install
>>  1080  history | grep export
>>  1081  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
>>  1082  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib
>>  1083  cd ../gromacs454/
>>  1084  history | grep configure
>>  1085  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
>> --enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
>> --program-suffix=_d --enable-shared
>>  1086  make
>>  1087  make install
>>
>> The only difference is that I delete the option of --enable-long-double for
>> compiling fftw and add the option of --enable-shared for compiling gromacs.
>>
>> Thanks all for the suggestions.
>>
>> All the best.
>> Qinghua Liao
>>
>>
>> On Wed, Mar 27, 2013 at 2:04 PM, Ahmet yıldırım 
>> wrote:
>>
>> > cd
>> > mkdir fftw3.3
>> > cd Desktop
>> > wget http://www.fftw.org/fftw-3.3.tar.gz
>> > tar xzvf fftw-3.3.tar.gz
>> > cd fftw-3.3
>> > ./configure --prefix=/home/manchu/fftw3.3 --enable-threads --enable-sse2
>> > --enable-shared
>> > make
>> > make install
>> >
>> > cd
>> > mkdir gromacs_install
>> > cd Desktop
>> > wget ftp://ftp.gromacs.org/pub/gromacs/gromac­s-4.5.5.tar.gz
>> > tar xzvf gromacs-4.5.5.tar.gz
>> > cd gromacs-4.5.5
>> > ./configure --prefix=/home/manchu/gromacs_install
>> > LDFLAGS=-L/home/manchu/fftw3.3/lib
>> CPPFLAGS=-I/home/manchu/fftw3.3/include
>> > --disable-float
>> > make
>> > make install
>> >
>> > Executables are in /home/manchu/gromacs/bin
>> > Reference:http://www.youtube.com/watch?v=bxWjWmdf6xw
>> >
>> > 2013/3/27 Qinghua Liao 
>> >
>> > > Dear Justin,
>> > >
>> > > Thanks very much for your reply! Yeah, I did not add the option of
>> > > --enable-shared for compilation of gromacs 4.5.4, but it still failed
>> > after
>> > > I added this option for the compilation.
>> > > For the compilations I posted in the last e-mail, I do add the option
>> of
>> > > --enable-shared in compilation of fftw 3.3, but not for compilation of
>> > > gromacs 4.5.4. Problem remains unsolved.
>> > >
>> > > I choose this  old version is to keep the simulations consistent with
>> > > previous simulations. Thanks for the suggestion!
>> > >
>> > > All the best,
>> > > Qinghua Liao
>> > >
>> > >
>> > > On Wed, Mar 27, 2013 at 12:59 PM, Justin Lemkul 
>> wrote:
>> > >
>> > > >
>> > > >
>> > > > On 3/27/13 6:57 AM, Qinghua Liao wrote:
>> > > >
>> > > >> Dear gmx users,
>> > > >>
>> > > >> I tried to compile gromacs 4.5.4 with double precision, but it
>> failed.
>> > > The
>> > > >> reason was a little wired.
>> > > >>
>> > > >> Firstly, I used the following commands to compile gromacs 4.5.4
>> > together
>> > > >> with fftw 3.3 for serial and parallel version with single precision,
>> > > and I
>> > > >> made it successfully.
>> > > >>
>> > > >>
>> > > >>   1014  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
>> > > >> --enable-sse
>> > > >> --enable-threads --enable-float --enable-shared CC=gcc
>> > > >>   1015  make
>> > > >>   1016  make install
>> > > >>   1017  make distclean
>> > > >>   1018  export CPPFLAGS=-I/usr/users/iff_th2/**liao/fftw-3.3/include
>> > > >>   1019  export LDFLAGS=-L/usr/users/iff_th2/**liao/fftw-3.3/lib
>> > > >>
>> > > >>   1022  mv gromacs-4.5.4 gromacs454
>> > > >>   1023  cd gromacs454/
>> > > >>
>> > > >>   1026  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
>> > > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
>> > > >>   1027  make
>> > > >>   1028  make install
>> > > >>
>> > > >>   1031  make distclean
>> > > >>   1032  cd ../fftw-3.3
>> > > >>   1034  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
>> > > >> --enable-sse
>> > > >> --enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
>> > > >>   1035  make
>> > > >>   1036  make install
>> > > >>   1037  make distclean
>> > > >>   1038  cd ../gromacs454/
>> > > >>   1039  ls
>> > > >>   1040  make distclean
>> > > >>   1041  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
>> > > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
>> > > --enable-mpi
>> > > >> --program-suffix=_mpi
>> > > >>   1042  make
>> > > >>   1043  make install
>> > > >>   1044  make distclean
>> > > >>
>> > > >> But when I used these similar commands

Re:Re: Re: [gmx-users] Implicit solvent MD is not fast and not accurate.

2013-03-27 Thread xiao
Dear Justin,
Thank you very much for your suggestions.
BW
Fugui


At 2013-03-27 22:15:23,"Justin Lemkul"  wrote:
>On Wed, Mar 27, 2013 at 10:11 AM, xiao  wrote:
>
>> Dear Justin,
>> Thank you very much for your reply.
>> I found that the speed of implict MD is slower that explict MD. For
>> examplex, the speed of an explict MD for a protein of 300 amino acids is
>> about 3ns per day, however, the implicit solvent is about 1.5ns per day.
>>
>
>I know. This is what I mean about the need for improving the capabilities
>of the code. The main point of the implicit solvent approach is that it
>should be faster.
>
>
>> With respect to the accuracy of implicit solvent, the result shows bad
>> result. There are two carbon atom types which are not in the gbsa.itp, and
>> i just copied some carbon atom type in gbsa.itp because i find that there
>> is no big difference between the carbon parameters. I do not know whether
>> this is the
>
>reason.
>>
>
>You need to establish whether what you did was appropriate or not before
>you begin accusing Gromacs of being inaccurate.  If you observe
>inconsistencies, errors, poor energy conservation, etc. in the context of
>some known system, then that might be worth investigating.  Problems with a
>custom topology are more likely to be due to the topology than to the
>software using it.  GB atom types are one possible source of error, but in
>parametrization, there are plenty of things that can go wrong before you
>even get to that point.
>
>-Justin
>
>-- 
>
>
>
>Justin A. Lemkul, Ph.D.
>Research Scientist
>Department of Biochemistry
>Virginia Tech
>Blacksburg, VA
>jalemkul[at]vt.edu | (540) 231-9080
>http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
>


Re: Re: [gmx-users] Implicit solvent MD is not fast and not accurate.

2013-03-27 Thread Justin Lemkul
On Wed, Mar 27, 2013 at 10:11 AM, xiao  wrote:

> Dear Justin,
> Thank you very much for your reply.
> I found that the speed of implict MD is slower that explict MD. For
> examplex, the speed of an explict MD for a protein of 300 amino acids is
> about 3ns per day, however, the implicit solvent is about 1.5ns per day.
>

I know. This is what I mean about the need for improving the capabilities
of the code. The main point of the implicit solvent approach is that it
should be faster.


> With respect to the accuracy of implicit solvent, the result shows bad
> result. There are two carbon atom types which are not in the gbsa.itp, and
> i just copied some carbon atom type in gbsa.itp because i find that there
> is no big difference between the carbon parameters. I do not know whether
> this is the

reason.
>

You need to establish whether what you did was appropriate or not before
you begin accusing Gromacs of being inaccurate.  If you observe
inconsistencies, errors, poor energy conservation, etc. in the context of
some known system, then that might be worth investigating.  Problems with a
custom topology are more likely to be due to the topology than to the
software using it.  GB atom types are one possible source of error, but in
parametrization, there are plenty of things that can go wrong before you
even get to that point.

-Justin

-- 



Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision version

2013-03-27 Thread Mark Abraham
Yes, --enable-long-double is useless for FFTW+GROMACS.

Mark

On Wed, Mar 27, 2013 at 3:03 PM, Qinghua Liao wrote:

> Hi all,
>
> Finally I compiled it successfully when I used the following commands:
>
> 1077  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3
> --enable-threads --enable-shared --enable-mpi CC=gcc
>  1078  make
>  1079  make install
>  1080  history | grep export
>  1081  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
>  1082  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib
>  1083  cd ../gromacs454/
>  1084  history | grep configure
>  1085  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
> --enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
> --program-suffix=_d --enable-shared
>  1086  make
>  1087  make install
>
> The only difference is that I delete the option of --enable-long-double for
> compiling fftw and add the option of --enable-shared for compiling gromacs.
>
> Thanks all for the suggestions.
>
> All the best.
> Qinghua Liao
>
>
> On Wed, Mar 27, 2013 at 2:04 PM, Ahmet yıldırım 
> wrote:
>
> > cd
> > mkdir fftw3.3
> > cd Desktop
> > wget http://www.fftw.org/fftw-3.3.tar.gz
> > tar xzvf fftw-3.3.tar.gz
> > cd fftw-3.3
> > ./configure --prefix=/home/manchu/fftw3.3 --enable-threads --enable-sse2
> > --enable-shared
> > make
> > make install
> >
> > cd
> > mkdir gromacs_install
> > cd Desktop
> > wget ftp://ftp.gromacs.org/pub/gromacs/gromac­s-4.5.5.tar.gz
> > tar xzvf gromacs-4.5.5.tar.gz
> > cd gromacs-4.5.5
> > ./configure --prefix=/home/manchu/gromacs_install
> > LDFLAGS=-L/home/manchu/fftw3.3/lib
> CPPFLAGS=-I/home/manchu/fftw3.3/include
> > --disable-float
> > make
> > make install
> >
> > Executables are in /home/manchu/gromacs/bin
> > Reference:http://www.youtube.com/watch?v=bxWjWmdf6xw
> >
> > 2013/3/27 Qinghua Liao 
> >
> > > Dear Justin,
> > >
> > > Thanks very much for your reply! Yeah, I did not add the option of
> > > --enable-shared for compilation of gromacs 4.5.4, but it still failed
> > after
> > > I added this option for the compilation.
> > > For the compilations I posted in the last e-mail, I do add the option
> of
> > > --enable-shared in compilation of fftw 3.3, but not for compilation of
> > > gromacs 4.5.4. Problem remains unsolved.
> > >
> > > I choose this  old version is to keep the simulations consistent with
> > > previous simulations. Thanks for the suggestion!
> > >
> > > All the best,
> > > Qinghua Liao
> > >
> > >
> > > On Wed, Mar 27, 2013 at 12:59 PM, Justin Lemkul 
> wrote:
> > >
> > > >
> > > >
> > > > On 3/27/13 6:57 AM, Qinghua Liao wrote:
> > > >
> > > >> Dear gmx users,
> > > >>
> > > >> I tried to compile gromacs 4.5.4 with double precision, but it
> failed.
> > > The
> > > >> reason was a little wired.
> > > >>
> > > >> Firstly, I used the following commands to compile gromacs 4.5.4
> > together
> > > >> with fftw 3.3 for serial and parallel version with single precision,
> > > and I
> > > >> made it successfully.
> > > >>
> > > >>
> > > >>   1014  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > > >> --enable-sse
> > > >> --enable-threads --enable-float --enable-shared CC=gcc
> > > >>   1015  make
> > > >>   1016  make install
> > > >>   1017  make distclean
> > > >>   1018  export CPPFLAGS=-I/usr/users/iff_th2/**liao/fftw-3.3/include
> > > >>   1019  export LDFLAGS=-L/usr/users/iff_th2/**liao/fftw-3.3/lib
> > > >>
> > > >>   1022  mv gromacs-4.5.4 gromacs454
> > > >>   1023  cd gromacs454/
> > > >>
> > > >>   1026  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> > > >>   1027  make
> > > >>   1028  make install
> > > >>
> > > >>   1031  make distclean
> > > >>   1032  cd ../fftw-3.3
> > > >>   1034  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > > >> --enable-sse
> > > >> --enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
> > > >>   1035  make
> > > >>   1036  make install
> > > >>   1037  make distclean
> > > >>   1038  cd ../gromacs454/
> > > >>   1039  ls
> > > >>   1040  make distclean
> > > >>   1041  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> > > --enable-mpi
> > > >> --program-suffix=_mpi
> > > >>   1042  make
> > > >>   1043  make install
> > > >>   1044  make distclean
> > > >>
> > > >> But when I used these similar commands to compile for the double
> > > >> precision,
> > > >> it failed.
> > > >>
> > > >>
> > > >>   1049  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > > >> --enable-long-double --enable-threads --enable-shared --enable-mpi
> > > CC=gcc
> > > >>   1050  make
> > > >>   1051  make install
> > > >>   1052  make distclean
> > > >>   1053  cd ../gromacs454/
> > > >>   1054  make distclean
> > > >>   1055  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > > >> --enable-double --enable-threads CC=gcc --disable-gcc41-check
> > > -

Re:Re: [gmx-users] Implicit solvent MD is not fast and not accurate.

2013-03-27 Thread xiao
Dear Justin,
Thank you very much for your reply.
I found that implicit-solvent MD is slower than explicit-solvent MD. For example,
the speed of an explicit MD run for a protein of 300 amino acids is about 3 ns per
day, whereas the implicit-solvent run gives about 1.5 ns per day.
With respect to the accuracy of the implicit solvent, the results look bad. There
are two carbon atom types that are not in gbsa.itp, and I just copied some carbon
atom type in gbsa.itp because I found there is no big difference between the
carbon parameters. I do not know whether this is the reason.
BW
Fugui




At 2013-03-27 21:45:42,"Justin Lemkul"  wrote:
>On Wed, Mar 27, 2013 at 9:27 AM, xiao  wrote:
>
>> Dear Gromacs users:
>> I did a protein MD using implicit solvent and Amber 99SB force filed.
>> However, i found that the implicit solvent is not faster than explicit
>> solvent, and what is worse is that it is also not accurate.
>> The system is a protein-ligand complex. Firstly, i run a minimization, and
>> then i did a production MD. The explicit solvent MD can give nearly same
>> strucuture as the crystal structure after 10ns MD. However, there is a
>> significant change in the ligand after 1ns MD in implicit solvent.
>> My .mdp file is as follows:
>> title  = OPLS Lysozyme MD
>> ; Run parameters
>> integrator = md  ; leap-frog integrator
>> nsteps  = 1000 ; 2 * 50 = 1000 ps, 1 ns
>> dt  = 0.002  ; 2 fs
>> ; Output control
>> nstxout  = 1000  ; save coordinates every 2 ps
>> nstvout  = 1000  ; save velocities every 2 ps
>> nstxtcout = 1000  ; xtc compressed trajectory output every 2 ps
>> nstenergy = 1000  ; save energies every 2 ps
>> nstlog  = 1000  ; update log file every 2 ps
>> ; Bond parameters
>> continuation = yes  ; Restarting after NPT
>> constraint_algorithm = lincs ; holonomic constraints
>> constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
>> lincs_iter = 1  ; accuracy of LINCS
>> lincs_order = 4  ; also related to accuracy
>> ; Neighborsearching
>> ns_type  = grid  ; search neighboring grid cells
>> nstlist  = 5  ; 10 fs
>> rlist  = 0  ; short-range neighborlist cutoff (in nm)
>> rcoulomb = 0  ; short-range electrostatic cutoff (in nm)
>> rvdw  = 0  ; short-range van der Waals cutoff (in nm)
>> ; Electrostatics
>> coulombtype = cut-off ; Particle Mesh Ewald for long-range
>> vdwtype = cut-off
>> pme_order = 4  ; cubic interpolation
>> fourierspacing = 0.16  ; grid spacing for FFT
>> ; Temperature coupling is on
>> tcoupl  = V-rescale ; modified Berendsen thermostat
>> tc-grps  = system  ; two coupling groups - more accurate
>> tau_t  = 0.1; time constant, in ps
>> ref_t  = 300   ; reference temperature, one for each group, in K
>> ;
>> ;
>> comm-mode   =  angular
>> comm-grps   =  system
>> ;
>> ;
>> pcoupl  = no ; Pressure coupling on in NPT
>> pbc  = no  ; 3-D PBC
>> gen_vel =  yes
>> gen_temp=  300
>> gen_seed=  -1
>> ;
>> ;
>> implicit_solvent=  GBSA
>> gb_algorithm=  OBC ; HCT ; OBC
>> nstgbradii  =  1
>> rgbradii=  0   ; [nm] Cut-off for the calculation of the
>> Born radii. Currently must be equal to rlist
>> gb_epsilon_solvent  =  80; Dielectric constant for the implicit solvent
>> ; gb_saltconc   =  0 ; Salt concentration for implicit solvent
>> models, currently not used
>> sa_algorithm=  Ace-approximation
>> sa_surface_tension  = -1
>>
>> Can anyone give me some suggestions?
>>
>
>Performance issues are known. There are plans to implement the implicit
>solvent code for GPU and perhaps allow for better parallelization, but I
>don't know what the status of all that is.  As it stands (and as I have
>said before on this list and to the developers privately), the implicit
>code is largely unproductive because the performance is terrible.
>
>As for the accuracy assessment, I think you need to provide better evidence
>of what you mean. A single simulation is not definitive of anything, and
>moreover, some differences between explicit and implicit are likely given
>the lack of solvent collisions. The implicit trajectory will probably
>sample states that are inaccessible (or at least very rare) in the explicit
>trajectory.
>
>-Justin
>
>-- 
>
>
>
>Justin A. Lemkul, Ph.D.
>Research Scientist
>Department of Biochemistry
>Virginia Tech
>Blacksburg, VA
>jalemkul[at]vt.edu | (540) 231-9080
>http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
>

Re: [gmx-users] LJ values conversion

2013-03-27 Thread Justin Lemkul
On Wed, Mar 27, 2013 at 9:57 AM, 라지브간디  wrote:

> Thanks for the mail justin.
>
>
> In charmm27.ff, the value for bonded are in b0 and ko format, whereas the
> gromos uses them in different way? If so, how do i convert between them?
>
>
Please consult the manual, Chapters 4 and 5 for all the relevant equations
and implementation.


>
> The value i incorporated for specific atoms are from published journals.
> Thanks in advance.
>
>
I think you're missing my point though. Just because something is
published, doesn't mean you can use it for whatever purpose you like.
 People spend years working out properly balanced force fields, and if you
try to put in some new atom type, it upsets the balance of every
interaction.  You may be OK within the context of CHARMM (depending on what
the paper is, and what parameter set they modified), but I can guarantee
you that if you're trying to incorporate CHARMM parameters into GROMOS, all
you're doing is producing a pile of trash.  You can probably "make it work"
from a topology standpoint, but you shouldn't be doing it.  I hope my
perspective is clear; I'm trying to save you from wasted effort.

-Justin


>
>
On 3/27/13 2:15 AM, 라지브간디 wrote:
> > Hello gmx,
> >
> >
> > I have LJ parameter value of C (epsilon = 0.0262 kcal/mol, sigma = 3.83)
> O (epsilon = 0.1591. sigma = 3.12) in charmm format and wants to use them
> in gromos43a1 or charmm27 force field in gromacs.
> >
> >
> > Could you tell me how do i convert them to gromacs format? Any examples
> plz.
> >
> >
>
> Equation 5.1 in the manual, or apply g_sigeps. Note that picking values of
> atoms randomly and inserting them into an existing force field is a great
> way to
> completely invalidate the force field. Atom types are balanced against one
> another. Making ad hoc changes means you're using an unvalidated parameter
> set,
> and any good reviewer is going to have serious issues with whatever data
> you
> produce.
>
> -Justin
>



-- 



Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision version

2013-03-27 Thread Qinghua Liao
Hi all,

Finally I compiled it successfully when I used the following commands:

1077  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3
--enable-threads --enable-shared --enable-mpi CC=gcc
 1078  make
 1079  make install
 1080  history | grep export
 1081  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
 1082  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib
 1083  cd ../gromacs454/
 1084  history | grep configure
 1085  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
--program-suffix=_d --enable-shared
 1086  make
 1087  make install

The only differences are that I dropped the --enable-long-double option when
compiling FFTW and added --enable-shared when compiling GROMACS.

Thanks all for the suggestions.

All the best.
Qinghua Liao


On Wed, Mar 27, 2013 at 2:04 PM, Ahmet yıldırım  wrote:

> cd
> mkdir fftw3.3
> cd Desktop
> wget http://www.fftw.org/fftw-3.3.tar.gz
> tar xzvf fftw-3.3.tar.gz
> cd fftw-3.3
> ./configure --prefix=/home/manchu/fftw3.3 --enable-threads --enable-sse2
> --enable-shared
> make
> make install
>
> cd
> mkdir gromacs_install
> cd Desktop
> wget ftp://ftp.gromacs.org/pub/gromacs/gromac­s-4.5.5.tar.gz
> tar xzvf gromacs-4.5.5.tar.gz
> cd gromacs-4.5.5
> ./configure --prefix=/home/manchu/gromacs_install
> LDFLAGS=-L/home/manchu/fftw3.3/lib CPPFLAGS=-I/home/manchu/fftw3.3/include
> --disable-float
> make
> make install
>
> Executables are in /home/manchu/gromacs/bin
> Reference:http://www.youtube.com/watch?v=bxWjWmdf6xw
>
> 2013/3/27 Qinghua Liao 
>
> > Dear Justin,
> >
> > Thanks very much for your reply! Yeah, I did not add the option of
> > --enable-shared for compilation of gromacs 4.5.4, but it still failed
> after
> > I added this option for the compilation.
> > For the compilations I posted in the last e-mail, I do add the option of
> > --enable-shared in compilation of fftw 3.3, but not for compilation of
> > gromacs 4.5.4. Problem remains unsolved.
> >
> > I choose this  old version is to keep the simulations consistent with
> > previous simulations. Thanks for the suggestion!
> >
> > All the best,
> > Qinghua Liao
> >
> >
> > On Wed, Mar 27, 2013 at 12:59 PM, Justin Lemkul  wrote:
> >
> > >
> > >
> > > On 3/27/13 6:57 AM, Qinghua Liao wrote:
> > >
> > >> Dear gmx users,
> > >>
> > >> I tried to compile gromacs 4.5.4 with double precision, but it failed.
> > The
> > >> reason was a little wired.
> > >>
> > >> Firstly, I used the following commands to compile gromacs 4.5.4
> together
> > >> with fftw 3.3 for serial and parallel version with single precision,
> > and I
> > >> made it successfully.
> > >>
> > >>
> > >>   1014  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > >> --enable-sse
> > >> --enable-threads --enable-float --enable-shared CC=gcc
> > >>   1015  make
> > >>   1016  make install
> > >>   1017  make distclean
> > >>   1018  export CPPFLAGS=-I/usr/users/iff_th2/**liao/fftw-3.3/include
> > >>   1019  export LDFLAGS=-L/usr/users/iff_th2/**liao/fftw-3.3/lib
> > >>
> > >>   1022  mv gromacs-4.5.4 gromacs454
> > >>   1023  cd gromacs454/
> > >>
> > >>   1026  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> > >>   1027  make
> > >>   1028  make install
> > >>
> > >>   1031  make distclean
> > >>   1032  cd ../fftw-3.3
> > >>   1034  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > >> --enable-sse
> > >> --enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
> > >>   1035  make
> > >>   1036  make install
> > >>   1037  make distclean
> > >>   1038  cd ../gromacs454/
> > >>   1039  ls
> > >>   1040  make distclean
> > >>   1041  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> > --enable-mpi
> > >> --program-suffix=_mpi
> > >>   1042  make
> > >>   1043  make install
> > >>   1044  make distclean
> > >>
> > >> But when I used these similar commands to compile for the double
> > >> precision,
> > >> it failed.
> > >>
> > >>
> > >>   1049  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> > >> --enable-long-double --enable-threads --enable-shared --enable-mpi
> > CC=gcc
> > >>   1050  make
> > >>   1051  make install
> > >>   1052  make distclean
> > >>   1053  cd ../gromacs454/
> > >>   1054  make distclean
> > >>   1055  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> > >> --enable-double --enable-threads CC=gcc --disable-gcc41-check
> > --enable-mpi
> > >> --program-suffix=_d
> > >>   1056  make
> > >>
> > >> The error showed to me was:
> > >>
> > >> /usr/bin/ld: /usr/local/lib/libfftw3.a(**plan-dft-c2r-2d.o):
> relocation
> > >> R_X86_64_32 against `a local symbol' can not be used when making a
> > shared
> > >> object; recompile with -fPIC /usr/local/lib/libfftw3.a: could not read
> > >> symbols: Bad value
> > >>
> > >> I added the option

[gmx-users] LJ values conversion

2013-03-27 Thread 라지브간디
Thanks for the mail, Justin.


In charmm27.ff the bonded values are given as b0 and kb, whereas GROMOS expresses
them in a different way. If so, how do I convert between them?


The values I incorporated for the specific atoms are from published journal
articles. Thanks in advance.


 
On 3/27/13 2:15 AM, 라지브간디 wrote:
> Hello gmx, 
> 
> 
> I have LJ parameter value of C (epsilon = 0.0262 kcal/mol, sigma = 3.83) O 
> (epsilon = 0.1591. sigma = 3.12) in charmm format and wants to use them in 
> gromos43a1 or charmm27 force field in gromacs. 
> 
> 
> Could you tell me how do i convert them to gromacs format? Any examples plz. 
> 
> 

Equation 5.1 in the manual, or apply g_sigeps. Note that picking values of 
atoms randomly and inserting them into an existing force field is a great way 
to 
completely invalidate the force field. Atom types are balanced against one 
another. Making ad hoc changes means you're using an unvalidated parameter set, 
and any good reviewer is going to have serious issues with whatever data you 
produce. 

-Justin 
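
For reference, the unit and convention changes involved here are standard
(nothing specific to this thread): CHARMM parameter files tabulate epsilon in
kcal/mol and R_min/2 in Angstrom (1 Angstrom = 0.1 nm), while GROMACS expects
epsilon in kJ/mol and sigma in nm,

\sigma = 2^{-1/6} R_{\min} = \frac{2\,(R_{\min}/2)}{2^{1/6}}, \qquad
\epsilon\,[\mathrm{kJ\,mol^{-1}}] = 4.184\;\epsilon\,[\mathrm{kcal\,mol^{-1}}],

and, if the C6/C12 form is needed, C_6 = 4\epsilon\sigma^{6} and
C_{12} = 4\epsilon\sigma^{12}. Whether the values quoted above are sigma or
R_min/2 has to be checked in the source publication.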

Re: [gmx-users] Implicit solvent MD is not fast and not accurate.

2013-03-27 Thread Justin Lemkul
On Wed, Mar 27, 2013 at 9:27 AM, xiao  wrote:

> Dear Gromacs users:
> I did a protein MD using implicit solvent and Amber 99SB force filed.
> However, i found that the implicit solvent is not faster than explicit
> solvent, and what is worse is that it is also not accurate.
> The system is a protein-ligand complex. Firstly, i run a minimization, and
> then i did a production MD. The explicit solvent MD can give nearly same
> strucuture as the crystal structure after 10ns MD. However, there is a
> significant change in the ligand after 1ns MD in implicit solvent.
> My .mdp file is as follows:
> title  = OPLS Lysozyme MD
> ; Run parameters
> integrator = md  ; leap-frog integrator
> nsteps  = 1000 ; 2 * 50 = 1000 ps, 1 ns
> dt  = 0.002  ; 2 fs
> ; Output control
> nstxout  = 1000  ; save coordinates every 2 ps
> nstvout  = 1000  ; save velocities every 2 ps
> nstxtcout = 1000  ; xtc compressed trajectory output every 2 ps
> nstenergy = 1000  ; save energies every 2 ps
> nstlog  = 1000  ; update log file every 2 ps
> ; Bond parameters
> continuation = yes  ; Restarting after NPT
> constraint_algorithm = lincs ; holonomic constraints
> constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
> lincs_iter = 1  ; accuracy of LINCS
> lincs_order = 4  ; also related to accuracy
> ; Neighborsearching
> ns_type  = grid  ; search neighboring grid cells
> nstlist  = 5  ; 10 fs
> rlist  = 0  ; short-range neighborlist cutoff (in nm)
> rcoulomb = 0  ; short-range electrostatic cutoff (in nm)
> rvdw  = 0  ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype = cut-off ; Particle Mesh Ewald for long-range
> vdwtype = cut-off
> pme_order = 4  ; cubic interpolation
> fourierspacing = 0.16  ; grid spacing for FFT
> ; Temperature coupling is on
> tcoupl  = V-rescale ; modified Berendsen thermostat
> tc-grps  = system  ; two coupling groups - more accurate
> tau_t  = 0.1; time constant, in ps
> ref_t  = 300   ; reference temperature, one for each group, in K
> ;
> ;
> comm-mode   =  angular
> comm-grps   =  system
> ;
> ;
> pcoupl  = no ; Pressure coupling on in NPT
> pbc  = no  ; 3-D PBC
> gen_vel =  yes
> gen_temp=  300
> gen_seed=  -1
> ;
> ;
> implicit_solvent=  GBSA
> gb_algorithm=  OBC ; HCT ; OBC
> nstgbradii  =  1
> rgbradii=  0   ; [nm] Cut-off for the calculation of the
> Born radii. Currently must be equal to rlist
> gb_epsilon_solvent  =  80; Dielectric constant for the implicit solvent
> ; gb_saltconc   =  0 ; Salt concentration for implicit solvent
> models, currently not used
> sa_algorithm=  Ace-approximation
> sa_surface_tension  = -1
>
> Can anyone give me some suggestions?
>

Performance issues are known. There are plans to implement the implicit
solvent code for GPU and perhaps allow for better parallelization, but I
don't know what the status of all that is.  As it stands (and as I have
said before on this list and to the developers privately), the implicit
code is largely unproductive because the performance is terrible.

As for the accuracy assessment, I think you need to provide better evidence
of what you mean. A single simulation is not definitive of anything, and
moreover, some differences between explicit and implicit are likely given
the lack of solvent collisions. The implicit trajectory will probably
sample states that are inaccessible (or at least very rare) in the explicit
trajectory.

-Justin

-- 



Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Implicit solvent MD is not fast and not accurate.

2013-03-27 Thread xiao
Dear Gromacs users:
I ran a protein MD simulation using implicit solvent and the Amber99SB force field.
However, I found that the implicit solvent is not faster than explicit solvent,
and, what is worse, it is also not accurate.
The system is a protein-ligand complex. First I ran a minimization and then a
production MD. The explicit-solvent MD gives nearly the same structure as the
crystal structure after 10 ns of MD; however, there is a significant change in the
ligand after 1 ns of MD in implicit solvent.
My .mdp file is as follows:
title  = OPLS Lysozyme MD
; Run parameters
integrator = md  ; leap-frog integrator
nsteps  = 1000 ; 2 * 50 = 1000 ps, 1 ns
dt  = 0.002  ; 2 fs
; Output control
nstxout  = 1000  ; save coordinates every 2 ps
nstvout  = 1000  ; save velocities every 2 ps
nstxtcout = 1000  ; xtc compressed trajectory output every 2 ps
nstenergy = 1000  ; save energies every 2 ps
nstlog  = 1000  ; update log file every 2 ps
; Bond parameters
continuation = yes  ; Restarting after NPT
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1  ; accuracy of LINCS
lincs_order = 4  ; also related to accuracy
; Neighborsearching
ns_type  = grid  ; search neighboring grid cells
nstlist  = 5  ; 10 fs
rlist  = 0  ; short-range neighborlist cutoff (in nm)
rcoulomb = 0  ; short-range electrostatic cutoff (in nm)
rvdw  = 0  ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = cut-off ; Particle Mesh Ewald for long-range
vdwtype = cut-off
pme_order = 4  ; cubic interpolation
fourierspacing = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps  = system  ; two coupling groups - more accurate
tau_t  = 0.1; time constant, in ps
ref_t  = 300   ; reference temperature, one for each group, in K
;
;
comm-mode   =  angular
comm-grps   =  system
;
;
pcoupl  = no ; Pressure coupling on in NPT
pbc  = no  ; 3-D PBC
gen_vel =  yes
gen_temp=  300
gen_seed=  -1
;
;
implicit_solvent=  GBSA
gb_algorithm=  OBC ; HCT ; OBC
nstgbradii  =  1
rgbradii=  0   ; [nm] Cut-off for the calculation of the
Born radii. Currently must be equal to rlist
gb_epsilon_solvent  =  80; Dielectric constant for the implicit solvent
; gb_saltconc   =  0 ; Salt concentration for implicit solvent
models, currently not used
sa_algorithm=  Ace-approximation
sa_surface_tension  = -1
 
Can anyone give me some suggestions?
BW
Fugui
 


Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision version

2013-03-27 Thread Ahmet yıldırım
cd
mkdir fftw3.3
cd Desktop
wget http://www.fftw.org/fftw-3.3.tar.gz
tar xzvf fftw-3.3.tar.gz
cd fftw-3.3
./configure --prefix=/home/manchu/fftw3.3 --enable-threads --enable-sse2
--enable-shared
make
make install

cd
mkdir gromacs_install
cd Desktop
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.5.tar.gz
tar xzvf gromacs-4.5.5.tar.gz
cd gromacs-4.5.5
./configure --prefix=/home/manchu/gromacs_install
LDFLAGS=-L/home/manchu/fftw3.3/lib CPPFLAGS=-I/home/manchu/fftw3.3/include
--disable-float
make
make install

Executables are in /home/manchu/gromacs/bin
Reference: http://www.youtube.com/watch?v=bxWjWmdf6xw

2013/3/27 Qinghua Liao 

> Dear Justin,
>
> Thanks very much for your reply! Yeah, I did not add the option of
> --enable-shared for compilation of gromacs 4.5.4, but it still failed after
> I added this option for the compilation.
> For the compilations I posted in the last e-mail, I do add the option of
> --enable-shared in compilation of fftw 3.3, but not for compilation of
> gromacs 4.5.4. Problem remains unsolved.
>
> I choose this  old version is to keep the simulations consistent with
> previous simulations. Thanks for the suggestion!
>
> All the best,
> Qinghua Liao
>
>
> On Wed, Mar 27, 2013 at 12:59 PM, Justin Lemkul  wrote:
>
> >
> >
> > On 3/27/13 6:57 AM, Qinghua Liao wrote:
> >
> >> Dear gmx users,
> >>
> >> I tried to compile gromacs 4.5.4 with double precision, but it failed.
> The
> >> reason was a little wired.
> >>
> >> Firstly, I used the following commands to compile gromacs 4.5.4 together
> >> with fftw 3.3 for serial and parallel version with single precision,
> and I
> >> made it successfully.
> >>
> >>
> >>   1014  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> >> --enable-sse
> >> --enable-threads --enable-float --enable-shared CC=gcc
> >>   1015  make
> >>   1016  make install
> >>   1017  make distclean
> >>   1018  export CPPFLAGS=-I/usr/users/iff_th2/**liao/fftw-3.3/include
> >>   1019  export LDFLAGS=-L/usr/users/iff_th2/**liao/fftw-3.3/lib
> >>
> >>   1022  mv gromacs-4.5.4 gromacs454
> >>   1023  cd gromacs454/
> >>
> >>   1026  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> >>   1027  make
> >>   1028  make install
> >>
> >>   1031  make distclean
> >>   1032  cd ../fftw-3.3
> >>   1034  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> >> --enable-sse
> >> --enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
> >>   1035  make
> >>   1036  make install
> >>   1037  make distclean
> >>   1038  cd ../gromacs454/
> >>   1039  ls
> >>   1040  make distclean
> >>   1041  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> >> --enable-float --enable-threads CC=gcc --disable-gcc41-check
> --enable-mpi
> >> --program-suffix=_mpi
> >>   1042  make
> >>   1043  make install
> >>   1044  make distclean
> >>
> >> But when I used these similar commands to compile for the double
> >> precision,
> >> it failed.
> >>
> >>
> >>   1049  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
> >> --enable-long-double --enable-threads --enable-shared --enable-mpi
> CC=gcc
> >>   1050  make
> >>   1051  make install
> >>   1052  make distclean
> >>   1053  cd ../gromacs454/
> >>   1054  make distclean
> >>   1055  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
> >> --enable-double --enable-threads CC=gcc --disable-gcc41-check
> --enable-mpi
> >> --program-suffix=_d
> >>   1056  make
> >>
> >> The error showed to me was:
> >>
> >> /usr/bin/ld: /usr/local/lib/libfftw3.a(**plan-dft-c2r-2d.o): relocation
> >> R_X86_64_32 against `a local symbol' can not be used when making a
> shared
> >> object; recompile with -fPIC /usr/local/lib/libfftw3.a: could not read
> >> symbols: Bad value
> >>
> >> I added the option of --with-fPIC, but it was not recognized, and then I
> >> changed it to --with-pic, but the error was still the same.
> >>
> >> I don't know why gromacs can recognize the fftw library when doing the
> >> single float compilation, but not for the double float compilation, I
> >> already used the shared option. Could someone give me some suggestions
> to
> >> help me this out? Any reply will be appreciated.
> >>
> >>
> > In your last step, you're not using --enable-shared like you did in every
> > preceding step.  Adding that flag should fix it.
> >
> > http://www.gromacs.org/**Documentation/Installation_**
> > Instructions_4.5#Details_for_**building_the_FFTW_prerequisite<
> http://www.gromacs.org/Documentation/Installation_Instructions_4.5#Details_for_building_the_FFTW_prerequisite
> >
> >
> > Gromacs 4.5.4 is pretty old; is there any reason you're not using a new
> > version?  You'll get much better performance from 4.6.1.
> >
> > -Justin
> >
> > --
> > ==**==
> >
> > Justin A. Lemkul, Ph.D.
> > Research Scientist
> > Department of Biochemistry
> > Virginia Tech
> > Blacksburg, VA
> > jalemkul[at]vt.edu | (540) 231-90

Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision version

2013-03-27 Thread Qinghua Liao
Dear Justin,

Thanks very much for your reply! I had not added the --enable-shared option when
compiling GROMACS 4.5.4, but it still failed after I added it.
For the compilation I posted in the last e-mail, I did add --enable-shared when
compiling FFTW 3.3, but not when compiling GROMACS 4.5.4. The problem remains
unsolved.

I chose this old version to keep the simulations consistent with previous
simulations. Thanks for the suggestion!

All the best,
Qinghua Liao


On Wed, Mar 27, 2013 at 12:59 PM, Justin Lemkul  wrote:

>
>
> On 3/27/13 6:57 AM, Qinghua Liao wrote:
>
>> Dear gmx users,
>>
>> I tried to compile gromacs 4.5.4 with double precision, but it failed. The
>> reason was a little wired.
>>
>> Firstly, I used the following commands to compile gromacs 4.5.4 together
>> with fftw 3.3 for serial and parallel version with single precision, and I
>> made it successfully.
>>
>>
>>   1014  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
>> --enable-sse
>> --enable-threads --enable-float --enable-shared CC=gcc
>>   1015  make
>>   1016  make install
>>   1017  make distclean
>>   1018  export CPPFLAGS=-I/usr/users/iff_th2/**liao/fftw-3.3/include
>>   1019  export LDFLAGS=-L/usr/users/iff_th2/**liao/fftw-3.3/lib
>>
>>   1022  mv gromacs-4.5.4 gromacs454
>>   1023  cd gromacs454/
>>
>>   1026  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
>> --enable-float --enable-threads CC=gcc --disable-gcc41-check
>>   1027  make
>>   1028  make install
>>
>>   1031  make distclean
>>   1032  cd ../fftw-3.3
>>   1034  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
>> --enable-sse
>> --enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
>>   1035  make
>>   1036  make install
>>   1037  make distclean
>>   1038  cd ../gromacs454/
>>   1039  ls
>>   1040  make distclean
>>   1041  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
>> --enable-float --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
>> --program-suffix=_mpi
>>   1042  make
>>   1043  make install
>>   1044  make distclean
>>
>> But when I used these similar commands to compile for the double
>> precision,
>> it failed.
>>
>>
>>   1049  ./configure --prefix=/usr/users/iff_th2/**liao/fftw-3.3
>> --enable-long-double --enable-threads --enable-shared --enable-mpi CC=gcc
>>   1050  make
>>   1051  make install
>>   1052  make distclean
>>   1053  cd ../gromacs454/
>>   1054  make distclean
>>   1055  ./configure --prefix=/usr/users/iff_th2/**liao/gromacs454/
>> --enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
>> --program-suffix=_d
>>   1056  make
>>
>> The error showed to me was:
>>
>> /usr/bin/ld: /usr/local/lib/libfftw3.a(**plan-dft-c2r-2d.o): relocation
>> R_X86_64_32 against `a local symbol' can not be used when making a shared
>> object; recompile with -fPIC /usr/local/lib/libfftw3.a: could not read
>> symbols: Bad value
>>
>> I added the option of --with-fPIC, but it was not recognized, and then I
>> changed it to --with-pic, but the error was still the same.
>>
>> I don't know why gromacs can recognize the fftw library when doing the
>> single float compilation, but not for the double float compilation, I
>> already used the shared option. Could someone give me some suggestions to
>> help me this out? Any reply will be appreciated.
>>
>>
> In your last step, you're not using --enable-shared like you did in every
> preceding step.  Adding that flag should fix it.
>
> http://www.gromacs.org/Documentation/Installation_Instructions_4.5#Details_for_building_the_FFTW_prerequisite
>
> Gromacs 4.5.4 is pretty old; is there any reason you're not using a new
> version?  You'll get much better performance from 4.6.1.
>
> -Justin
>
> --
> ========================================
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> ========================================
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read 
> http://www.gromacs.org/Support/Mailing_Lists
>



-- 
Best Regards,

Qinghua
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the a

RE: [gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-27 Thread Berk Hess

Hi,

Gromacs calls fsync for every checkpoint file written:

   fsync() transfers ("flushes") all modified in-core data of (i.e., modified
   buffer cache pages for) the file referred to by the file descriptor fd to
   the disk device (or other permanent storage device) so that all changed
   information can be retrieved even after the system crashed or was rebooted.
   This includes writing through or flushing a disk cache if present. The call
   blocks until the device reports that the transfer has completed. It also
   flushes metadata information associated with the file (see stat(2)).

If fsync fails, mdrun exits with a fatal error.
We have experience with unreliable AFS file systems, where mdrun's fsync call
could wait for hours and then fail, which is why we added an environment
variable.
So either fsync is not supported on your system (highly unlikely), or your
file system returns 0, indicating the file was synced, when it actually
didn't fully sync.

Note that we first write a new, numbered checkpoint file, fsync that, then
move the current checkpoint to _prev (thereby losing the old _prev), and then
move the numbered file to the current name.
So you should never end up with only corrupted files, unless fsync doesn't do
what it's supposed to do.
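
As an illustration only (a shell sketch of that update order, not the actual
GROMACS C code; the write_checkpoint command and the file names are invented
for the example):

    write_checkpoint md3_tmp.cpt    # hypothetical: write the new state to a temporary, numbered file
    sync                            # flush buffers to disk (mdrun itself calls fsync() on the file)
    mv md3.cpt md3_prev.cpt         # current checkpoint becomes _prev (the old _prev is lost)
    mv md3_tmp.cpt md3.cpt          # only now does the new file become the current checkpoint

If the job dies at any point in that sequence, at least one of md3.cpt and
md3_prev.cpt should still be a complete file, which is why fully truncated
checkpoints point at the file system rather than at the write order.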

Cheers,

Berk



> From: chris.ne...@mail.utoronto.ca
> To: gmx-users@gromacs.org
> Date: Wed, 27 Mar 2013 03:13:57 +
> Subject: [gmx-users] chiller failure leads to truncated .cpt and _prev.cpt 
> files using gromacs 4.6.1
>
> Dear Matthew:
>
> Thank you for noticing the file size. This is a very good lead.
> I had not noticed that this was special. Indeed, here is the complete listing 
> for truncated/corrupt .cpt files:
>
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:53 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
> -rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
> -rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
> -rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
> -rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
> -rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
>
> I will contact my sysadmins and let them know about your suggestions.
>
> Nevertheless, I respectfully reject the idea that there is really nothing
> that can be done about this inside
> gromacs. About 6 years ago, I worked on a cluster with massive sporadic NFS
> delays. The only solution to
> automate runs on that machine was to, for example, use sed to create a .mdp
> from a template .mdp file which had ;;;EOF as the last line, and then to poll
> the created .mdp file for ;;;EOF until it existed prior to running
> grompp (at the time I was using mdrun -sort and desorting with an in-house
> script prior to domain
> decomposition, so I had to stop/start gromacs every couple of hours). This is
> not to say that such things are
> ideal, but I think gromacs would be all the better if it were able to avoid
> problems like this regardless of
> the cluster setup.
>
> Please note that, over the years, I have seen this on 4 different clusters
> (albeit with different versions of
> gromacs), which is to say that it's not just one setup that is to blame.
>
> Matthew, please don't take my comments the wrong way. I deeply appreciate 
> your help. I just want to put it
> out there that I believe that gromacs would be better if it didn't overwrite 
> good .cpt files with truncated/corrupt
> .cpt files ever, even if the cluster catches on fire or the earth's magnetic 
> field reverses, etc.
> Also, I suspect that sysadmins don't have a lot of time to test their 
> clusters for graceful exit upon chiller failure
> conditions, so a super-careful regime of .cpt update will always be useful.
>
> Thank you again for your help, I'll take it to my sysadmins, who are very 
> good and may be able to remedy
> this on their cluster, but who knows what cluster I will be using in 5 years.
>
> Again, thank you for your assistance, it is very useful,
> Chris.
>
> -- original message --
>
>
> Dear Chris,
>
> While it's always possible that GROMACS can be improved (or debugged), this
> smells more like a system-level problem. The corrupt checkpoint files are
> precisely 1MiB or 2MiB, which suggests strongly either 1) GROMACS was in
> the middle of a buffer flush when it was killed (but the filesystem did
> everything right; it was just sent incomplete data), or 2) the filesystem
> itself wrote a truncated file (but GROMACS wrote it successfully, the data
> was buffered, and GROMACS went on its merry way).
>
> #1 could happen, for example,

Re: [gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision versition

2013-03-27 Thread Justin Lemkul



On 3/27/13 6:57 AM, Qinghua Liao wrote:

Dear gmx users,

I tried to compile gromacs 4.5.4 with double precision, but it failed. The
reason was a little weird.

Firstly, I used the following commands to compile gromacs 4.5.4 together
with fftw 3.3 for serial and parallel version with single precision, and I
made it successfully.


  1014  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 --enable-sse
--enable-threads --enable-float --enable-shared CC=gcc
  1015  make
  1016  make install
  1017  make distclean
  1018  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
  1019  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib

  1022  mv gromacs-4.5.4 gromacs454
  1023  cd gromacs454/

  1026  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-float --enable-threads CC=gcc --disable-gcc41-check
  1027  make
  1028  make install

  1031  make distclean
  1032  cd ../fftw-3.3
  1034  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 --enable-sse
--enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
  1035  make
  1036  make install
  1037  make distclean
  1038  cd ../gromacs454/
  1039  ls
  1040  make distclean
  1041  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-float --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
--program-suffix=_mpi
  1042  make
  1043  make install
  1044  make distclean

But when I used these similar commands to compile for the double precision,
it failed.


  1049  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3
--enable-long-double --enable-threads --enable-shared --enable-mpi CC=gcc
  1050  make
  1051  make install
  1052  make distclean
  1053  cd ../gromacs454/
  1054  make distclean
  1055  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
--program-suffix=_d
  1056  make

The error showed to me was:

/usr/bin/ld: /usr/local/lib/libfftw3.a(plan-dft-c2r-2d.o): relocation
R_X86_64_32 against `a local symbol' can not be used when making a shared
object; recompile with -fPIC /usr/local/lib/libfftw3.a: could not read
symbols: Bad value

I added the option of --with-fPIC, but it was not recognized, and then I
changed it to --with-pic, but the error was still the same.

I don't know why gromacs can recognize the fftw library when doing the
single float compilation, but not for the double float compilation, I
already used the shared option. Could someone give me some suggestions to
help me this out? Any reply will be appreciated.



In your last step, you're not using --enable-shared like you did in every 
preceding step.  Adding that flag should fix it.


http://www.gromacs.org/Documentation/Installation_Instructions_4.5#Details_for_building_the_FFTW_prerequisite
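
For reference, a minimal sketch of what the double-precision pair of builds
could look like with --enable-shared carried through both configures, reusing
the paths quoted above. This assumes the default (double) FFTW build is what
is wanted: GROMACS double precision links against plain double-precision FFTW,
so --enable-float and --enable-long-double are left out, and the SSE flag for
a double build is --enable-sse2. Adjust paths and flags to your setup.

  # FFTW 3.3, double precision (the default), shared:
  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 --enable-sse2 \
              --enable-threads --enable-shared --enable-mpi CC=gcc
  make && make install

  # GROMACS 4.5.4, double precision, against the FFTW above:
  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib
  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/ --enable-double \
              --enable-threads --enable-shared --enable-mpi \
              --program-suffix=_d CC=gcc --disable-gcc41-check
  make && make install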

Gromacs 4.5.4 is pretty old; is there any reason you're not using a new version? 
 You'll get much better performance from 4.6.1.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] chiller failure leads to truncated .cpt and _prev.cpt files using gromacs 4.6.1

2013-03-27 Thread Justin Lemkul



On 3/26/13 11:13 PM, Christopher Neale wrote:

Dear Matthew:

Thank you for noticing the file size. This is a very good lead.
I had not noticed that this was special. Indeed, here is the complete listing 
for truncated/corrupt .cpt files:

-rw-r- 1 cneale cneale 1048576 Mar 26 18:53 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:54 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:50 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
-rw-r- 1 cneale cneale 1048576 Mar 26 18:51 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt
-rw-r- 1 cneale cneale 2097152 Mar 26 18:52 md3.cpt

I will contact my sysadmins and let them know about your suggestions.

Nevertheless, I respectfully reject the idea that there is really nothing that
can be done about this inside
gromacs. About 6 years ago, I worked on a cluster with massive sporadic NFS
delays. The only solution to
automate runs on that machine was to, for example, use sed to create a .mdp
from a template .mdp file which had ;;;EOF as the last line, and then to poll
the created .mdp file for ;;;EOF until it existed prior to running
grompp (at the time I was using mdrun -sort and desorting with an in-house
script prior to domain
decomposition, so I had to stop/start gromacs every couple of hours). This is
not to say that such things are
ideal, but I think gromacs would be all the better if it were able to avoid
problems like this regardless of
the cluster setup.
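
A rough shell sketch of that polling idea, with invented file names and an
invented @NSTEPS@ placeholder (illustration only, not the script actually used):

    sed 's/@NSTEPS@/500000/' template.mdp > run.mdp
    until tail -n 1 run.mdp 2>/dev/null | grep -q '^;;;EOF$'; do
        sleep 5    # wait out file-system propagation delays before trusting the file
    done
    grompp -f run.mdp -c conf.gro -p topol.top -o run.tpr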

Please note that, over the years, I have seen this on 4 different clusters
(albeit with different versions of
gromacs), which is to say that it's not just one setup that is to blame.

Matthew, please don't take my comments the wrong way. I deeply appreciate your 
help. I just want to put it
out there that I believe that gromacs would be better if it didn't overwrite 
good .cpt files with truncated/corrupt
.cpt files ever, even if the cluster catches on fire or the earth's magnetic 
field reverses, etc.
Also, I suspect that sysadmins don't have a lot of time to test their clusters 
for graceful exit upon chiller failure
conditions, so a super-careful regime of .cpt update will always be useful.

Thank you again for your help, I'll take it to my sysadmins, who are very good 
and may be able to remedy
this on their cluster, but who knows what cluster I will be using in 5 years.



Perhaps this is a case where the -cpnum option would be useful?  That may cause
a lot of checkpoint files to accumulate, depending on the length of the run, but
a scripted cleanup routine could preserve some subset of backups.
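
A rough sketch of that combination, assuming the numbered checkpoints share the
md3 prefix (check the exact file names that mdrun -cpnum produces on your
system) and an arbitrary retention count of five:

  mdrun -deffnm md3 -cpnum -cpt 15              # write a numbered checkpoint every 15 minutes and keep them all
  ls -t md3*.cpt | tail -n +6 | xargs -r rm --  # run periodically: delete all but the five newest checkpoints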


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_cluster stopped - malloc failed

2013-03-27 Thread Justin Lemkul



On 3/27/13 5:40 AM, Zalikha Ibrahim wrote:

Good day to all GMX users,

I have two trajectories, 0-10 ns and 10-20 ns. I merged both .trr files using
trjcat with the -overwrite option.


trjcat -f npt.trr npt2.trr -o npt_cat.trr -overwrite


I converted the result into .xtc and then tried the g_cluster tool to
cluster the trajectory.


g_cluster -f npt_cat.xtc -s npt.gro -method gromos -cl cluster.pdb -g

cluster.log -cutoff 0.15 -b 4000

I chose C-alpha for the RMSD calculation
and protein for the output.
The following line appeared at frame 5000:
..
Reading frame5000 time 14000.000   malloc failed

I would be grateful if someone could tell me why it failed, and why the
time didn't appear as it should (e.g. reading frame 5000 time 10000.00,
since frames are written every 2 ps).



The frame and time values are only updated periodically in the screen output.
What you've posted above does not indicate that frame 5000 corresponds to a
time of 14000 ps, just that g_cluster was busy processing and hadn't yet updated
the printed output.  The "malloc failed" error means you have run out of memory.
Either reduce the number of frames or reduce the size of the problem;
g_cluster requires a lot of memory.
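
One way to reduce the number of frames is to thin the trajectory with trjconv
before clustering. A sketch reusing the input names from the post above; the
output name npt_thin.xtc and the 20 ps spacing are arbitrary:

  trjconv -f npt_cat.xtc -s npt.gro -o npt_thin.xtc -dt 20    # keep one frame every 20 ps
  g_cluster -f npt_thin.xtc -s npt.gro -method gromos -cutoff 0.15 \
            -cl cluster.pdb -g cluster.log -b 4000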


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] calculating LJ

2013-03-27 Thread Justin Lemkul



On 3/27/13 2:15 AM, 라지브간디 wrote:

Hello gmx,


I have LJ parameter values for C (epsilon = 0.0262 kcal/mol, sigma = 3.83) and O
(epsilon = 0.1591, sigma = 3.12) in CHARMM format and want to use them in the
gromos43a1 or charmm27 force field in GROMACS.


Could you tell me how I can convert them to GROMACS format? Any examples, please.




Equation 5.1 in the manual, or apply g_sigeps.  Note that picking values of 
atoms randomly and inserting them into an existing force field is a great way to 
completely invalidate the force field.  Atom types are balanced against one 
another.  Making ad hoc changes means you're using an unvalidated parameter set, 
and any good reviewer is going to have serious issues with whatever data you 
produce.
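
As an illustration of the unit handling only, and assuming the numbers quoted
above really are sigma in Angstrom and epsilon in kcal/mol (CHARMM parameter
files often list Rmin/2 rather than sigma, so check that first), the conversion
to GROMACS units and to the c6/c12 form can be sketched as:

  awk 'BEGIN {
      sigma_A = 3.83; eps_kcal = 0.0262      # the carbon values quoted above
      sigma = 0.1   * sigma_A                # Angstrom -> nm
      eps   = 4.184 * eps_kcal               # kcal/mol -> kJ/mol
      c6  = 4 * eps * sigma^6                # kJ mol^-1 nm^6
      c12 = 4 * eps * sigma^12               # kJ mol^-1 nm^12
      printf "sigma = %.4f nm  eps = %.5f kJ/mol  c6 = %.4e  c12 = %.4e\n", sigma, eps, c6, c12
  }'

In GROMACS, charmm27-style nonbonded entries take sigma/epsilon directly
(in nm and kJ/mol), while gromos-style entries take c6/c12.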


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] compilation of gromacs-4.5.4 with fftw-3.3 for double precision versition

2013-03-27 Thread Qinghua Liao
Dear gmx users,

I tried to compile gromacs 4.5.4 with double precision, but it failed. The
reason was a little weird.

Firstly, I used the following commands to compile gromacs 4.5.4 together
with fftw 3.3 for serial and parallel version with single precision, and I
made it successfully.


 1014  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 --enable-sse
--enable-threads --enable-float --enable-shared CC=gcc
 1015  make
 1016  make install
 1017  make distclean
 1018  export CPPFLAGS=-I/usr/users/iff_th2/liao/fftw-3.3/include
 1019  export LDFLAGS=-L/usr/users/iff_th2/liao/fftw-3.3/lib

 1022  mv gromacs-4.5.4 gromacs454
 1023  cd gromacs454/

 1026  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-float --enable-threads CC=gcc --disable-gcc41-check
 1027  make
 1028  make install

 1031  make distclean
 1032  cd ../fftw-3.3
 1034  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3 --enable-sse
--enable-threads --enable-float --enable-shared --enable-mpi CC=gcc
 1035  make
 1036  make install
 1037  make distclean
 1038  cd ../gromacs454/
 1039  ls
 1040  make distclean
 1041  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-float --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
--program-suffix=_mpi
 1042  make
 1043  make install
 1044  make distclean

But when I used these similar commands to compile for the double precision,
it failed.


 1049  ./configure --prefix=/usr/users/iff_th2/liao/fftw-3.3
--enable-long-double --enable-threads --enable-shared --enable-mpi CC=gcc
 1050  make
 1051  make install
 1052  make distclean
 1053  cd ../gromacs454/
 1054  make distclean
 1055  ./configure --prefix=/usr/users/iff_th2/liao/gromacs454/
--enable-double --enable-threads CC=gcc --disable-gcc41-check --enable-mpi
--program-suffix=_d
 1056  make

The error showed to me was:

/usr/bin/ld: /usr/local/lib/libfftw3.a(plan-dft-c2r-2d.o): relocation
R_X86_64_32 against `a local symbol' can not be used when making a shared
object; recompile with -fPIC /usr/local/lib/libfftw3.a: could not read
symbols: Bad value

I added the option of --with-fPIC, but it was not recognized, and then I
changed it to --with-pic, but the error was still the same.

I don't know why gromacs can recognize the fftw library when doing the
single float compilation, but not for the double float compilation, I
already used the shared option. Could someone give me some suggestions to
help me this out? Any reply will be appreciated.

All the best,
Qinghua Liao




-- 
Best Regards,

Qinghua
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] g_cluster stopped - malloc failed

2013-03-27 Thread Zalikha Ibrahim
Good day to all GMX users,

I have two trajectories, 0-10 ns and 10-20 ns. I merged both .trr files using
trjcat with the -overwrite option.

>trjcat -f npt.trr npt2.trr -o npt_cat.trr -overwrite

I converted the result into .xtc and then tried the g_cluster tool to
cluster the trajectory.

> g_cluster -f npt_cat.xtc -s npt.gro -method gromos -cl cluster.pdb -g
cluster.log -cutoff 0.15 -b 4000

I chose C-alpha for the RMSD calculation
and protein for the output.
The following line appeared at frame 5000:
..
Reading frame5000 time 14000.000   malloc failed

I would be grateful if someone could tell me why it failed, and why the
time didn't appear as it should (e.g. reading frame 5000 time 10000.00,
since frames are written every 2 ps).

Thanks in advance. Appreciate your help.

Zalikha Ibrahim
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists