Re: [gmx-users] building OPLSAA force field

2013-06-09 Thread Souilem Safa
Dear Mark,
thank you for your answer, I'm doing it right now.
Cheers,
Safa


On 10 June 2013 14:02, Mark Abraham  wrote:

> Hi,
>
> Please do search for GROMACS tutorial material, which will help you get
> started with the basic concepts.
>
> Cheers,
>
> Mark
>
>
> On Mon, Jun 10, 2013 at 6:34 AM, Souilem Safa wrote:
>
> > Dear Gromacs users,
> > I'm still relatively new to molecular modelling.
> > I want to build an OPLS-AA topology file of my molecule. I have only the
> > pdb file of my molecule.
> > Can anyone tell me what detailed steps I should follow to get that
> > topology file.
> > Many thanks


Re: [gmx-users] building OPLSAA force field

2013-06-09 Thread Mark Abraham
Hi,

Please do search for GROMACS tutorial material, which will help you get
started with the basic concepts.

Cheers,

Mark
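
For what it's worth, a minimal sketch of the usual starting point (file names
are placeholders, and it assumes the molecule consists of residues the OPLS-AA
.rtp entries already cover - a non-standard ligand needs a custom topology
instead):

pdb2gmx -f molecule.pdb -ff oplsaa -water tip4p -o conf.gro -p topol.top

pdb2gmx matches the PDB residues against the force-field building blocks and
writes the coordinate file together with an OPLS-AA topology.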


On Mon, Jun 10, 2013 at 6:34 AM, Souilem Safa wrote:

> Dear Gromacs users,
> I'm still relatively new to molecular modelling.
> I want to build an OPLS-AA topology file of my molecule. I have only the
> pdb file of my molecule.
> Can anyone tell me what detailed steps I should follow to get that
> topology file.
> Many thanks


Re: [gmx-users] fftw without SIMD

2013-06-09 Thread Mark Abraham
You may have compiled one like that (AFAIK those CFLAGS won't help), but
CMake *finding* that one once installed can be another matter (e.g. use
CMAKE_PREFIX_PATH per install instructions). You can see which fftw was
found with "grep FFTWF_LIBRARY CMakeCache.txt" from the build directory -
or pay close attention to the output from the initial cmake.
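
For example, a sketch with placeholder paths (assuming single-precision
GROMACS and an FFTW 3.3.x source build):

./configure --prefix=$HOME/fftw-single --enable-single --enable-shared --enable-sse2
make -j4 && make install
cd /path/to/gromacs-4.6.1/build
cmake .. -DCMAKE_PREFIX_PATH=$HOME/fftw-single
grep FFTWF_LIBRARY CMakeCache.txt

The grep should then point at libfftw3f under $HOME/fftw-single/lib rather than
at a system copy built without SIMD.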

Mark

On Mon, Jun 10, 2013 at 1:34 AM, francesco oteri
wrote:

> Dear gromacs users,
> I am compiling gromacs 4.6.1, but when I configure the package ccmake
> says:
> The fftw library found is compiled without SIMD support, which makes it
>
> although I compiled fftw with:
>
> ./configure --enable-single --enable-shared --enable-sse2 CC=icc F77=ifort
> CFLAGS=-msse4.1
>
>
> Is this a known issue, or did I do something wrong?
>
> Francesco


[gmx-users] building OPLSAA force field

2013-06-09 Thread Souilem Safa
Dear Gromacs users,
I'm still relatively new to molecular modelling.
I want to build an OPLS-AA topology file of my molecule. I have only the
pdb file of my molecule.
Can anyone tell me what detailed steps I should follow to get that
topology file.
Many thanks


[gmx-users] ask for help

2013-06-09 Thread mxy1989
Dear sir,
I'm a GROMACS user from China. I have just registered for your mailing
list ("gmx-users mailing list membership configuration for mxy1989 at
mail.ustc.edu.cn, bravema"), but I don't know how to use it. I also have a
very important question: in GROMACS version 4.5, pulling is not supported on
GPU, but in the recent version 4.6, can I do a pulling simulation on GPU?
THANK YOU!
BEST WISHES!

FROM:
KEVIN MA 






[gmx-users] fftw without SIMD

2013-06-09 Thread francesco oteri
Dear gromacs users,
I am compiling gromacs 4.6.1, but when I configure the package ccmake
says:
The fftw library found is compiled without SIMD support, which makes it

although I compiled fftw with:

./configure --enable-single --enable-shared --enable-sse2 CC=icc F77=ifort
CFLAGS=-msse4.1


Is this a known issue, or did I do something wrong?

Francesco


Re: [gmx-users] Re: mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-09 Thread Roland Schulz
Hi,

Based on Mark's idea, I would have thought that the CPU detection would
already have failed during cmake, but it seems it detected SSE4.1
correctly.
Could you post the stack trace for the crash? (See the previous mail for
instructions.)
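
Something along these lines should work (a sketch; it assumes a build with
debug symbols, e.g. -DCMAKE_BUILD_TYPE=RelWithDebInfo, and your own .tpr):

gdb mdrun
(gdb) run -s topol.tpr
... wait for the segfault ...
(gdb) bt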

Roland

On Sun, Jun 9, 2013 at 4:42 PM, Amil Anderson  wrote:
> Roland,
>
> I have posted the cmake output (cmake-4.6.1) and the file CMakeError.log at
> the usual
>
> https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP
>
> I see there are some errors but don't know what to make of them.
>
> Thanks,
> Amil
>
>
>
> --
> View this message in context: 
> http://gromacs.5086.x6.nabble.com/mdrun-segmentation-fault-for-new-build-of-gromacs-4-6-1-tp5008873p5008944.html
> Sent from the GROMACS Users Forum mailing list archive at Nabble.com.



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


[gmx-users] Re: mdrun segmentation fault for new build of gromacs 4.6.1

2013-06-09 Thread Amil Anderson
Roland,

I have posted the cmake output (cmake-4.6.1) and the file CMakeError.log at
the usual

https://www.dropbox.com/sh/h6867f7ivl5pcl9/j9gt9CsVdP

I see there are some errors but don't know what to make of them.

Thanks,
Amil



--
View this message in context: 
http://gromacs.5086.x6.nabble.com/mdrun-segmentation-fault-for-new-build-of-gromacs-4-6-1-tp5008873p5008944.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Running gmx-4.6.x over multiple homogeneous nodes with GPU acceleration

2013-06-09 Thread Szilárd Páll
On Wed, Jun 5, 2013 at 4:35 PM, João Henriques wrote:
> Just to wrap up this thread, it does work when the mpirun is properly
> configured. I knew it had to be my fault :)
>
> Something like this works like a charm:
> mpirun -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm md -v

That is indeed the correct way to launch the simulation: this way
you'll have two ranks per node, each using a different GPU. However,
coming back to your initial (non-working) launch config, if you want
to run 4 ranks x 4 threads per rank, you'll have to assign two ranks to
each GPU:
mpirun -np 4 mdrun_mpi -gpu_id 0011 -deffnm md -v

If multi-threading scaling is limiting performance, the above will
help compared to 2 ranks x 8 threads per rank - which is often the
case on AMD, and I've seen cases where it already helped on a single
Intel node.

I'd like to point out one more thing which is important when you run
on more than just a node or two. GPU-accelerated runs don't switch to
using separate PME ranks automatically - mostly because it's very hard
to pick good settings for distributing cores between PP and PME ranks.
However, from around two to three nodes onward, you will get better
performance by using separate PME ranks.

You should experiment with using part of the cores for PME (usually
half is a decent choice), e.g. by running 2 PP + 1 PME or 2 PP + 2 PME
ranks per node.
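
As a rough sketch with hypothetical numbers (2 nodes, each with 16 cores and
2 GPUs, running 2 PP + 2 PME ranks per node with 4 threads each):

mpirun -npernode 4 mdrun_mpi -npme 4 -ntomp 4 -gpu_id 01 -deffnm md -v

Here -npme is the total number of PME-only ranks over the whole run, and the
-gpu_id string maps only the PP ranks of each node onto the two GPUs.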

> Thank you Mark and Szilárd for your invaluable expertise.

Welcome!

--
Szilárd

>
> Best regards,
> João Henriques
>
>
> On Wed, Jun 5, 2013 at 4:21 PM, João Henriques <
> joao.henriques.32...@gmail.com> wrote:
>
>> Ok, thanks once again. I will do my best to overcome this issue.
>>
>> Best regards,
>> João Henriques
>>
>>
>> On Wed, Jun 5, 2013 at 3:33 PM, Mark Abraham wrote:
>>
>>> On Wed, Jun 5, 2013 at 2:53 PM, João Henriques <
>>> joao.henriques.32...@gmail.com> wrote:
>>>
>>> > Sorry to keep bugging you guys, but even after considering all you
>>> > suggested and reading the bugzilla thread Mark pointed out, I'm still
>>> > unable to make the simulation run over multiple nodes.
>>> > *Here is a template of a simple submission over 2 nodes:*
>>> >
>>> > --- START ---
>>> > #!/bin/sh
>>> > #
>>> > # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>>> > #
>>> > # Job name
>>> > #SBATCH -J md
>>> > #
>>> > # No. of nodes and no. of processors per node
>>> > #SBATCH -N 2
>>> > #SBATCH --exclusive
>>> > #
>>> > # Time needed to complete the job
>>> > #SBATCH -t 48:00:00
>>> > #
>>> > # Add modules
>>> > module load gcc/4.6.3
>>> > module load openmpi/1.6.3/gcc/4.6.3
>>> > module load cuda/5.0
>>> > module load gromacs/4.6
>>> > #
>>> > # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>>> > #
>>> > grompp -f md.mdp -c npt.gro -t npt.cpt -p topol -o md.tpr
>>> > mpirun -np 4 mdrun_mpi -gpu_id 01 -deffnm md -v
>>> > #
>>> > # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>>> > --- END ---
>>> >
>>> > *Here is an extract of the md.log:*
>>> >
>>> > --- START ---
>>> > Using 4 MPI processes
>>> > Using 4 OpenMP threads per MPI process
>>> >
>>> > Detecting CPU-specific acceleration.
>>> > Present hardware specification:
>>> > Vendor: GenuineIntel
>>> > Brand:  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
>>> > Family:  6  Model: 45  Stepping:  7
>>> > Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
>>> nonstop_tsc
>>> > pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2
>>> ssse3
>>> > tdt x2apic
>>> > Acceleration most likely to fit this hardware: AVX_256
>>> > Acceleration selected at GROMACS compile time: AVX_256
>>> >
>>> >
>>> > 2 GPUs detected on host en001:
>>> >   #0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
>>> >   #1: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
>>> >
>>> >
>>> > ---
>>> > Program mdrun_mpi, VERSION 4.6
>>> > Source code file:
>>> >
>>> /lunarc/sw/erik/src/gromacs/gromacs-4.6/src/gmxlib/gmx_detect_hardware.c,
>>> > line: 322
>>> >
>>> > Fatal error:
>>> > Incorrect launch configuration: mismatching number of PP MPI processes
>>> and
>>> > GPUs per node.
>>> >
>>>
>>> "per node" is critical here.
>>>
>>>
>>> > mdrun_mpi was started with 4 PP MPI processes per node, but you
>>> provided 2
>>> > GPUs.
>>> >
>>>
>>> ...and here. As far as mdrun_mpi knows from the MPI system there's only
>>> MPI
>>> ranks on this one node.
>>>
>>> For more information and tips for troubleshooting, please check the
>>> GROMACS
>>> > website at http://www.gromacs.org/Documentation/Errors
>>> > ---
>>> > --- END ---
>>> >
>>> > As you can see, gmx is having trouble understanding that there's a
>>> second
>>> > node available. Note that since I did not specify -ntomp, it assigned 4
>>> > threads to each of the 4 mpi processes (filling the entire avail. 16
>>> CPUs
>>> > *on
>>> > one node*).
>>> > For the same exact submission, if I do set "-nto

Re: [gmx-users] GPU ECC question

2013-06-09 Thread Szilárd Páll
On Sat, Jun 8, 2013 at 9:21 PM, Albert  wrote:
> Hello:
>
>  Recently I found a strange issue with Gromacs-4.6.2 on a GPU workstation.
> On my GTX690 machine, when I run an MD production I found that ECC is on.
> However, on another GTX590 machine, I found that ECC was off:
>
> 4 GPUs detected:
>   #0: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
>   #1: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
>   #2: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
>   #3: NVIDIA GeForce GTX 590, compute cap.: 2.0, ECC:  no, stat: compatible
>
> Moreover, there are only two GTX590s in the machine; I don't know why Gromacs
> claimed 4 GPUs were detected. However, on my other Linux machine, which also
> has two GTX590s, Gromacs-4.6.2 only finds 2 GPUs, and ECC is still off.
>
> I am just wondering:
>
> (1) Why can ECC be on for the GTX690 while it is off on my GTX590? I compiled
> Gromacs with the same options and the same version of the Intel compiler.

Unless your 690 is in fact a Tesla K10, it surely does not support ECC!
Note that ECC is not something I personally think you really need.

>
> (2) Why, on two machines that both have two GTX590 cards physically
> installed, was one detected with 4 GPUs while the other was claimed to
> contain only two GPUs?
>

Both GTX 590 and 690 are dual-chip boards, which means two independent
processing units with their own memory mounted on the same card and
connected by a PCI switch (NVIDIA NF200). Hence, the two GPUs on these
dual-chip boards will be enumerated as separate devices. You can
double-check this with nvidia-smi, which should report the same devices
as mdrun does. I suspect that the machine which is shown to have only
two GPUs suffers from some hardware or software issue.
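
For instance (a sketch; the exact output format depends on the driver version):

nvidia-smi -L                # lists every enumerated GPU - two per GTX 590/690 board
nvidia-smi -q | grep -i ecc  # shows the current and pending ECC mode per device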


Regards,
Szilard

> thank you very much
>
> best
> Albert


[gmx-users] g_density -center issues (GMX version 4.6)

2013-06-09 Thread Reid Van Lehn
Hi users/developers,

I identified 2 issues in the center_coords function of g_density that make
the flag -center incorrectly center the system. These are in addition to
the issues previously raised by Chris Neale (reported as issue 1168 on
redmine: http://redmine.gromacs.org/issues/1168). I have added these two
issues to the same redmine bug report as they are related to the same
analysis tool/similar issues; following from Chris' previous advice, I also
suggest that users not use the -center flag and instead center their
trajectories prior to running g_density using trjconv. The issues described
below are present in g_density for GMX 4.6 and presumably for earlier/later
versions as well; I apologize if these have been updated in 4.6.1 or 4.6.2.
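
A minimal sketch of that workaround (placeholder file names; trjconv asks
interactively for the centering and output groups):

trjconv -f traj.xtc -s topol.tpr -center -o centered.xtc
g_density -f centered.xtc -s topol.tpr -dens mass -o density.xvg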

The function center_coords is intended to shift the COM of the system to
bX/2, bY/2, 0 as described in the help documentation for bCenter. This is
accomplished by calculating a shift vector on line 153 (line numbers may be
off, sorry):

rvec_sub(box_center, com, shift);

and then subtracting the shift from all coordinates in a loop on line 162:

for (i = 0; (i < atoms->nr); i++)
   rvec_dec(x0[i], shift);

This is incorrect; the shift vector should be added to all coordinates, not
subtracted, for the COM to be centered since rvec_sub sets shift to
box_center - com. This can be fixed by changing rvec_dec to rvec_inc, which
I have verified returns the correctly centered coordinates.

The other issue is that the COM is computed by weighting by the mass of
each atom obtained from the topology, as shown in the command:

mm = atoms->atom[i].m;

This is fine if the -dens flag is set to mass or electron; but for number
or charge the atom[i].m variable in the topology is reset in the function
g_density to either 1 (for -dens number) or the atom's charge (for -dens
charge). The system is thus centered not by the center of mass, but rather
by the geometric center/center of charge respectively. This could be fine
but may not be obvious to the user.

Again I added a report of both issues to the redmine report and wanted to
inform users of their existence in case this affects the use of the
g_density tool.

Best,
Reid


Re: [gmx-users] Eigenvector and eigenvalues

2013-06-09 Thread Ankita naithani
Hi Tsjerk,

Thank you so much for the helpful insight. It will surely help in clearing
up some basic concepts about eigenvectors and the projections.
Also, since at the end you mentioned that my matrix is derived only from
C-alpha atoms, which is a limitation, would you reckon a backbone analysis
would be better suited to gaining insight into protein motions (allosteric
transitions, effector signalling)? It would, however, be intensely
complicated to derive for the whole protein, considering mine is a tetramer
and a huge system.


Kind regards,

Ankita
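
For reference, a hypothetical set of g_anaeig calls along the lines Tsjerk
describes below (file names are placeholders; eigenvec.trr comes from g_covar):

g_anaeig -v eigenvec.trr -f traj.xtc -s topol.tpr -proj proj.xvg -first 1 -last 10
g_anaeig -v eigenvec.trr -f traj.xtc -s topol.tpr -2d proj-1-2.xvg -first 1 -last 2
g_anaeig -v eigenvec.trr -f traj.xtc -s topol.tpr -extr extreme1.pdb -first 1 -last 1 -nframes 30
g_anaeig -v eigenvec.trr -f traj.xtc -s topol.tpr -filt filtered.xtc -first 1 -last 5

The -proj/-2d calls give the scores over time or against each other, -extr
writes structures at the extremes of a single eigenvector, and -filt writes a
trajectory filtered on the selected set of eigenvectors.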


On Sat, Jun 8, 2013 at 9:18 PM, Tsjerk Wassenaar  wrote:

> Hi Ankita,
>
> It is important to use the correct names for things. The projection is the
> (scalar) inner product of the configuration (coordinates) with the
> eigenvector. This is also named the score of the configuration on the
> selected eigenvector. With g_anaeig -proj you get the projection. Each
> configuration of a trajectory has a projection, and you can plot them over
> time, or plot the projection on one eigenvector against another (-2d). You
> can also make a 3D plot of three selected eigenvectors (-3d), which is
> written out as a PDB file. The projection is not a structure, and you can't
> have a movie of structures. What you can do is extract the structures
> corresponding to the extreme projections (-extr). Those are always written
> per eigenvector. Since the eigenvectors are uncorrelated, the extremes are
> not defined for combinations of them. It is also possible to filter a
> trajectory on a set of eigenvectors. That will give a trajectory of
> (unphysical) structures, in which the projections on the non-selected
> eigenvectors are set to zero. This filtering is a matrix operation, using
> the matrix of selected eigenvectors and the corresponding atoms of the
> trajectory. Because the matrix of eigenvectors is derived from C-alpha
> atoms only in your case, it is not possible to get anything else out of it.
>
> Hope it helps,
>
> Tsjerk
>
>
>
> On Sat, Jun 8, 2013 at 1:06 PM, Ankita naithani wrote:
>
> > Hi Tsjerk,
> >
> > [I am reposting this since my previous attachment did not go through, I
> > guess]
> >
> > Thank you for the reply. I am sorry, but I don't have the snapshots with
> > me, as my system was formatted and I lost that information. However, I
> > performed PCA again on the C-alpha atoms, and this time my matrix is
> > 5976x5976, where 1992 is the total number of C-alpha atoms. So yes, indeed
> > I did something foolish in the previous case to have got that result.
> > However, I will try to look at it and find my mistake so as not to repeat
> > it again.
> >
> > I would be really grateful if you could help me with another naive query.
> > I wanted to see a movie of the first few eigenvectors, both individually
> > and together. I know I can extract that with the help of g_anaeig, but as
> > I did PCA on only the C-alpha atoms, all I get in the projection along the
> > first eigenvector is the C-alpha dots. Is there any way I can visualise
> > the projection of the whole protein along that particular eigenvector?
> >
> > Secondly, if I want to see the projection along the first 10 eigenvectors
> > and the last 10 eigenvectors (slow modes and fast modes), how should I
> > proceed? When I choose -first 1 -last 10, it gives me individual pdb files
> > corresponding to the eigenvectors.
> >
> > Third, as can be judged from my absolute naivety, I did a 2D projection on
> > PC 1 and PC 2. I can visualise that plot in Grace as a 2D plot. An example
> > file is appended herein. I know each dot might correspond to a particular
> > conformation in the trajectory, but I really have no idea how to interpret
> > the results and draw conclusions from these 2D plots and also the 3D pdb
> > projections. Could you please guide me in the right direction to
> > understand these 2D plots and infer the significant results?
> >
> >
> > Kind regards,
> >
> > Ankita
> >
> >
> > On Thu, Jun 6, 2013 at 2:49 PM, Tsjerk Wassenaar 
> > wrote:
> > >
> > >> Hi Ankita,
> > >>
> > >> Please provide the commands you've run and the screen output from
> > g_covar.
> > >>
> > >> Cheers,
> > >>
> > >> Tsjerk
> > >>
> > >>
> > >>
> > >> On Thu, Jun 6, 2013 at 3:44 PM, Ankita naithani <ankitanaith...@gmail.com> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I wanted to know about the eigenvectors and eigenvalues. I recently
> > >> > performed principal component analysis (taking only the backbone into
> > >> > consideration) on a trajectory of 2000 residues. I obtained 15641
> > >> > eigenvectors and 17928 eigenvalues. There is a difference in the
> > >> > number, which I am not quite sure of (perhaps that has to do with an
> > >> > eigenvalue for each eigenvector, and the eigenvector having 3
> > >> > coordinates x, y, z. I know I may be completely wrong, but since there
> > >> > are 15641 eigenvectors, shouldn't there be only 15641 eigenvalues for
> > >> > those eigenvectors?)
> > >> >
> > >> >

Re: [gmx-users] distance_restraints

2013-06-09 Thread Justin Lemkul



On 6/9/13 5:15 AM, maggin wrote:

Hi, all

I use GMX 4.5.5 with the GROMOS96 53a6 force field to simulate 1dx0.pdb. I use
two-step energy minimization (steep and cg) in vacuum, as follows:

1. pdb2gmx -f 1dx0.pdb -o xxx.gro -ignh -ter -water  spce  -ss  -p xxx.top

2. editconf -f xxx.gro -o xxx.pdb -c -d 0.9 -bt cubic

3.grompp -f em.mdp -c xxx.pdb -p xxx.top -o xxx.tpr

4.mdrun -v -s xxx.tpr -nt 2 -deffnm em_xxx

5.grompp_d -f cg.mdp -c em_xxx.gro -p xxx.top -o xxx.tpr

6.mdrun_d -v -s xxx.tpr -nt 2 -deffnm cg_xxx

Steep is no problem, while cg gives some warnings:

Step 143, Epot=-1.190781e+04, Fnorm=3.680e+01, Fmax=7.027e+02 (atom 217)
Step 144, Epot=-1.190936e+04, Fnorm=2.620e+01, Fmax=2.688e+02 (atom 217)
Step 145, Epot=-1.191034e+04, Fnorm=2.618e+01, Fmax=2.276e+02 (atom 277)
Step 146, Epot=-1.191235e+04, Fnorm=3.672e+01, Fmax=4.235e+02 (atom 277)
Step 147, Epot=-1.192020e+04, Fnorm=6.482e+01, Fmax=8.288e+02 (atom 219)
Step 148, Epot=-1.196346e+04, Fnorm=1.315e+02, Fmax=1.971e+03 (atom 378)

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000694, max 0.004189 (between atoms 296 and 297)
bonds that rotated more than 30 degrees:
  atom 1 atom 2  angle  previous, current, constraint length

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.009008, max 0.056534 (between atoms 1 and 4)
bonds that rotated more than 30 degrees:
  atom 1 atom 2  angle  previous, current, constraint length
 181182   31.30.1000   0.1000  0.1000
 685688   37.80.1000   0.1051  0.1000
 262263   41.00.1000   0.1000  0.1000
   1  4   39.20.1000   0.1057  0.1000
Step 149, Epot=-1.197948e+04, Fnorm=1.069e+02, Fmax=1.271e+03 (atom 218)
Step 150, Epot=-1.198617e+04, Fnorm=6.868e+01, Fmax=9.492e+02 (atom 218)
Step 151, Epot=-1.199312e+04, Fnorm=6.069e+01, Fmax=8.678e+02 (atom 378)
Step 152, Epot=-1.199645e+04, Fnorm=2.884e+01, Fmax=3.323e+02 (atom 378)
Step 153, Epot=-1.199771e+04, Fnorm=2.482e+01, Fmax=2.873e+02 (atom 378)
Step 154, Epot=-1.199942e+04, Fnorm=3.681e+01, Fmax=6.652e+02 (atom 378)
Step 155, Epot=-1.200439e+04, Fnorm=7.122e+01, Fmax=1.091e+03 (atom 378)
.
.
.
.
.
Step 208, Epot=-1.231153e+04, Fnorm=2.409e+01, Fmax=2.882e+02 (atom 262)
Step 209, Epot=-1.231272e+04, Fnorm=2.635e+01, Fmax=3.012e+02 (atom 261)
Step 210, Epot=-1.231523e+04, Fnorm=3.957e+01, Fmax=7.031e+02 (atom 378)
Step 211, Epot=-1.232519e+04, Fnorm=6.187e+01, Fmax=1.273e+03 (atom 378)

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.01, max 0.05 (between atoms 775 and 776)
bonds that rotated more than 30 degrees:
  atom 1 atom 2  angle  previous, current, constraint length
 262263   30.30.1000   0.1000  0.1000
Step 212, Epot=-1.233691e+04, Fnorm=4.926e+01, Fmax=5.858e+02 (atom 378)
Step 213, Epot=-1.234682e+04, Fnorm=8.846e+01, Fmax=1.130e+03 (atom 261)
Step 214, Epot=-1.235149e+04, Fnorm=8.846e+01, Fmax=1.372e+03 (atom 262)



What does a visual inspection of your structures (initial and final) and the 
trajectory tell you about these problematic areas?



writing lowest energy coordinates.

Back Off! I just backed up em_cg_constrain_vacuum.gro to
./#em_cg_constrain_vacuum.gro.1#

Polak-Ribiere Conjugate Gradients converged to Fmax < 100 in 275 steps
Potential Energy  = -1.25808627767813e+04
Maximum force =  9.11707505025239e+01 on atom 757
Norm of force =  1.65933958355887e+01



The outcome appears quite good overall, but one should never ignore LINCS 
warnings, so you do need to carefully evaluate what happened visually.



I constraint h-bond in cg.mdp:
title   =  bovin
cpp =  /usr/bin/cpp
define  =  -DFLEXIBLE
constraints =  h-bonds
integrator  =  cg
dt  =  0.002; ps !
nsteps  =  2000
nstlist =  10
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.4
fourierspacing=  0.12
fourier_nx=  0
fourier_ny=  0
fourier_nz=  0
pme_order=  4
ewald_rtol=  1e-5
optimize_fft=  yes
emtol   =  100
emstep  =  0.001
; Highest order in the expansion of the constraint coupling matrix
lincs_order  = 8

; GENERATE VELOCITIES FOR STARTUP RUN
gen-vel  = no
gen-temp = 293
gen-seed = 173529

In order to fix the warnings, I use distance_restraints as follows:

genrestr  -f em_xxx.gro  -o posre.itp -disre_dist 0.1 -disre_up2 1

According to the posre.itp, I revise the xxx.top file:

; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
[ distance_restraints ]
; ai aj type index type' low up1 up2 fac
1 4 1 0 2 0.0 0.19 1.19 1.0
181 182 1 0 2 0.0 0.19 1.19 1.0
262 

[gmx-users] distance_restraints

2013-06-09 Thread maggin
Hi, all

I use GMX 4.5.5 with the GROMOS96 53a6 force field to simulate 1dx0.pdb. I use
two-step energy minimization (steep and cg) in vacuum, as follows:

1. pdb2gmx -f 1dx0.pdb -o xxx.gro -ignh -ter -water  spce  -ss  -p xxx.top

2. editconf -f xxx.gro -o xxx.pdb -c -d 0.9 -bt cubic 

3.grompp -f em.mdp -c xxx.pdb -p xxx.top -o xxx.tpr

4.mdrun -v -s xxx.tpr -nt 2 -deffnm em_xxx

5.grompp_d -f cg.mdp -c em_xxx.gro -p xxx.top -o xxx.tpr

6.mdrun_d -v -s xxx.tpr -nt 2 -deffnm cg_xxx

Steep is no problem, while cg gives some warnings:

Step 143, Epot=-1.190781e+04, Fnorm=3.680e+01, Fmax=7.027e+02 (atom 217)
Step 144, Epot=-1.190936e+04, Fnorm=2.620e+01, Fmax=2.688e+02 (atom 217)
Step 145, Epot=-1.191034e+04, Fnorm=2.618e+01, Fmax=2.276e+02 (atom 277)
Step 146, Epot=-1.191235e+04, Fnorm=3.672e+01, Fmax=4.235e+02 (atom 277)
Step 147, Epot=-1.192020e+04, Fnorm=6.482e+01, Fmax=8.288e+02 (atom 219)
Step 148, Epot=-1.196346e+04, Fnorm=1.315e+02, Fmax=1.971e+03 (atom 378)

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000694, max 0.004189 (between atoms 296 and 297)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.009008, max 0.056534 (between atoms 1 and 4)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
181182   31.30.1000   0.1000  0.1000
685688   37.80.1000   0.1051  0.1000
262263   41.00.1000   0.1000  0.1000
  1  4   39.20.1000   0.1057  0.1000
Step 149, Epot=-1.197948e+04, Fnorm=1.069e+02, Fmax=1.271e+03 (atom 218)
Step 150, Epot=-1.198617e+04, Fnorm=6.868e+01, Fmax=9.492e+02 (atom 218)
Step 151, Epot=-1.199312e+04, Fnorm=6.069e+01, Fmax=8.678e+02 (atom 378)
Step 152, Epot=-1.199645e+04, Fnorm=2.884e+01, Fmax=3.323e+02 (atom 378)
Step 153, Epot=-1.199771e+04, Fnorm=2.482e+01, Fmax=2.873e+02 (atom 378)
Step 154, Epot=-1.199942e+04, Fnorm=3.681e+01, Fmax=6.652e+02 (atom 378)
Step 155, Epot=-1.200439e+04, Fnorm=7.122e+01, Fmax=1.091e+03 (atom 378)
.
.
.
.
.
Step 208, Epot=-1.231153e+04, Fnorm=2.409e+01, Fmax=2.882e+02 (atom 262)
Step 209, Epot=-1.231272e+04, Fnorm=2.635e+01, Fmax=3.012e+02 (atom 261)
Step 210, Epot=-1.231523e+04, Fnorm=3.957e+01, Fmax=7.031e+02 (atom 378)
Step 211, Epot=-1.232519e+04, Fnorm=6.187e+01, Fmax=1.273e+03 (atom 378)

Step -1, time -0.002 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.01, max 0.05 (between atoms 775 and 776)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
262263   30.30.1000   0.1000  0.1000
Step 212, Epot=-1.233691e+04, Fnorm=4.926e+01, Fmax=5.858e+02 (atom 378)
Step 213, Epot=-1.234682e+04, Fnorm=8.846e+01, Fmax=1.130e+03 (atom 261)
Step 214, Epot=-1.235149e+04, Fnorm=8.846e+01, Fmax=1.372e+03 (atom 262)

writing lowest energy coordinates.

Back Off! I just backed up em_cg_constrain_vacuum.gro to
./#em_cg_constrain_vacuum.gro.1#

Polak-Ribiere Conjugate Gradients converged to Fmax < 100 in 275 steps
Potential Energy  = -1.25808627767813e+04
Maximum force =  9.11707505025239e+01 on atom 757
Norm of force =  1.65933958355887e+01

I constraint h-bond in cg.mdp:
title   =  bovin
cpp =  /usr/bin/cpp
define  =  -DFLEXIBLE
constraints =  h-bonds
integrator  =  cg
dt  =  0.002; ps !
nsteps  =  2000
nstlist =  10 
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.4
fourierspacing=  0.12
fourier_nx=  0
fourier_ny=  0
fourier_nz=  0
pme_order=  4
ewald_rtol=  1e-5
optimize_fft=  yes
emtol   =  100
emstep  =  0.001
; Highest order in the expansion of the constraint coupling matrix
lincs_order  = 8

; GENERATE VELOCITIES FOR STARTUP RUN
gen-vel  = no
gen-temp = 293
gen-seed = 173529

In order to fix the warnings, I use distance_restraints as follows:

genrestr  -f em_xxx.gro  -o posre.itp -disre_dist 0.1 -disre_up2 1

According to the posre.itp, I revise the xxx.top file:

; Include Position restraint file
#ifdef POSRES
#include "posre.itp"
[ distance_restraints ]
; ai aj type index type' low up1 up2 fac
1 4 1 0 2 0.0 0.19 1.19 1.0
181 182 1 0 2 0.0 0.19 1.19 1.0
262 263 1 0 2 0.0 0.19 1.19 1.0
296 297 1 0 2 0.0 0.19 1.19 1.0
353 354 1 0 2 0.0 0.23 1.23 1.0
512 516 1 1 2 0.0 0.24 1.24 1.0
685 688 1 0 2 0.0 0.19 1.19 1.0
775 776 1 0 2 0.0 0.20 1.20 1.0
994 995 1 2 2 0.0 0.24 1.24 1.0
#endif

But things have not changed; I get the same warnings.

What should I do to fix it?

Thank you very much!

maggin
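
For what it's worth, a sketch of the mdp side that [ distance_restraints ]
entries generally need (these lines are an assumption about the intended
setup, not taken from the files above): the #ifdef POSRES block is only read
if POSRES is actually defined, and distance restraints are only applied when
disre is switched on.

define  =  -DPOSRES -DFLEXIBLE
disre   =  simple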




-