Re: [gmx-users] Re: gmx-users Digest, Vol 76, Issue 53

2010-08-12 Thread Mark Abraham
Please file a Bugzilla, being sure to attach your .tpr files and be explicit 
about the GROMACS version.

Mark

- Original Message -
From: Changwon Yang 
Date: Friday, August 13, 2010 14:15
Subject: [gmx-users] Re: gmx-users Digest, Vol 76, Issue 53
To: gmx-users@gromacs.org

> I tried OpenMPI v1.3.3,
> but I got the same error.
> mdrun_mpi -multi works fine; REMD has the problem.
> 
> ## error message ##
> 
> step 500, will finish Fri Aug 13 16:43:25 2010
> [localhost:20171] *** Process received signal ***
> [localhost:20172] *** Process received signal ***
> [localhost:20172] Signal: Segmentation fault (11)
> [localhost:20172] Signal code: Address not mapped (1)
> [localhost:20172] Failing at address: (nil)
> [localhost:20171] Signal: Segmentation fault (11)
> [localhost:20171] Signal code: Address not mapped (1)
> [localhost:20171] Failing at address: (nil)
> [localhost:20172] [ 0] /lib64/libpthread.so.0 [0x36e9e0eb10]
> [localhost:20172] [ 1] mdrun_mpi_d(replica_exchange+0x1136) [0x42a446]
> [localhost:20172] [ 2] mdrun_mpi_d(do_md+0x48a8) [0x433ca8]
> [localhost:20172] [ 3] mdrun_mpi_d(mdrunner+0x11f1) [0x42f181]
> [localhost:20172] [ 4] mdrun_mpi_d(main+0x9f1) [0x438101]
> [localhost:20172] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x36e921d994]
> [localhost:20172] [ 6] mdrun_mpi_d [0x420359]
> [localhost:20172] *** End of error message ***
> [localhost:20171] [ 0] /lib64/libpthread.so.0 [0x36e9e0eb10]
> [localhost:20171] [ 1] mdrun_mpi_d(replica_exchange+0x1136) [0x42a446]
> [localhost:20171] [ 2] mdrun_mpi_d(do_md+0x48a8) [0x433ca8]
> [localhost:20171] [ 3] mdrun_mpi_d(mdrunner+0x11f1) [0x42f181]
> [localhost:20171] [ 4] mdrun_mpi_d(main+0x9f1) [0x438101]
> [localhost:20171] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x36e921d994]
> [localhost:20171] [ 6] mdrun_mpi_d [0x420359]
> [localhost:20171] *** End of error message ***
> --
> mpiexec noticed that process rank 2 with PID 20172 on node 
> localhost.localdomain exited on signal 11 (Segmentation fault).
> --
> 
> 
> 
> --
> From: 
> Sent: Wednesday, August 11, 2010 7:00 PM
> To: 
> Subject: gmx-users Digest, Vol 76, Issue 53
> 
> > Send gmx-users mailing list submissions to
> > gmx-users@gromacs.org
> > 
> > To subscribe or unsubscribe via the World Wide Web, visit
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > or, via email, send a message with subject or body 'help' to
> > gmx-users-requ...@gromacs.org
> > 
> > You can reach the person managing the list at
> > gmx-users-ow...@gromacs.org
> > 
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of gmx-users digest..."
> > 
> > 
> > Today's Topics:
> > 
> >   1. RE: Replica Exchange problem in gmx-4.5 beta3 
> (Berk Hess)
> > 
> > 
> > --
> > 
> > Message: 1
> > Date: Wed, 11 Aug 2010 11:23:54 +0200
> > From: Berk Hess 
> > Subject: RE: [gmx-users] Replica Exchange problem in gmx-4.5 beta3
> > To: Discussion list for GROMACS users 
> > Message-ID: 
> > Content-Type: text/plain; charset="iso-8859-1"
> > 
> > 
> > Hi,
> > 
> > This could be due to problems with MPICH.
> > Could you please try openmpi and report back?
> > 
> > Thanks,
> > 
> > Berk
> > 
> > From: sht_yc...@hotmail.com
> > To: gmx-users@gromacs.org
> > Date: Wed, 11 Aug 2010 18:09:00 +0900
> > Subject: [gmx-users] Replica Exchange problem in gmx-4.5 beta3 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > Hello!
> > I'm doing a simple REMD test with 4 
> > replicas.
> > Time step : 2 fs
> > Exchange : every 500fs
> > 
> > md_0.tpr md_1.tpr md_2.tpr md_3.tpr 
> > 
> > mpiexec(or mpirun) -np 4 mdrun_mpi_d -deffnm md_ 
> > -multi 4 -replex 200
> > I got an error message.
> > 
> > ##error##
> > 100 steps,   2000.0 ps.
> > step 600 rank 3 in job 10  localhost.localdomain_50305   caused collective 
> > abort of all ranks
> >  exit status of rank 3: killed by signal 11 
> > rank 2 in job 10  localhost.localdomain_50305   caused collective 
> > abort of all ranks
> >  exit status of rank 2: killed by signal 11 
> > 
> > 
> > With gmx-4.0.7 it works fine.
> > Is this a bug in the gmx-4.5 beta?
> > No error messages were found in the log files.
> > gmx-4.5 beta3 was compiled with icc 11.0
> > and mpich2-1.2.1p1.
> > 
> > 
> > 
> > -- 
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at http://www.gromacs.org/search 
> before posting!
> > Please don't post (un)subscribe requests to the list. Use the 
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read 
> http://www.gromacs.org/mailing_lists/users.php  
> > -- next part --
> > An HTML attachment was scrubbed...

Re: [gmx-users] Restarting the job

2010-08-12 Thread sonali dhindwal
Thanks Mark.

--
Sonali Dhindwal

--- On Fri, 13/8/10, Mark Abraham  wrote:

From: Mark Abraham 
Subject: Re: [gmx-users] Restarting the job
To: "Discussion list for GROMACS users" 
Date: Friday, 13 August, 2010, 8:46 AM

- Original Message -
From: sonali dhindwal 
Date: Friday, August 13, 2010 0:59
Subject: [gmx-users] Restarting the job
To: Discussion list for GROMACS users 

> Hello All,
> 
> I have a query regarding restarting jobs after a crash.
> I want to simulate a protein for 2 ns, but in between, due to a system shutdown, 
> it stopped, and I made a restart using this command:
> mdrun -s topol.tpr -cpi state.cpt -append
> Now I checked the RMSD in between by producing a .xtc file of the job which ran 
> till now and then checked g_rms of the simulation; it is showing a graph like 
> this (I have attached it in the mail).

See http://www.gromacs.org/Documentation/How-tos/Graphing_Data for a couple of 
gnuplot tips. I suspect the weirdness is gnuplot interpreting something as data 
that it should not interpret as data, and that the contents of the .xvg are 
actually the second half of normal RMS variation.
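A quick way to rule out the header-parsing problem described above is to strip the xmgrace directives before plotting. This is only an illustrative sketch (the helper name and example file layout are not from the thread):

```python
def read_xvg(path):
    """Return numeric rows from a GROMACS .xvg file, skipping the
    xmgrace header lines that start with '@' or '#'. Illustrative
    helper only -- not part of GROMACS itself."""
    rows = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("@", "#")):
                continue  # xmgrace directives/comments, not data
            rows.append(tuple(float(f) for f in line.split()))
    return rows
```

Feeding the cleaned rows to gnuplot or matplotlib avoids the plotting tool misreading directives as data.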
 
> This is showing the RMSD after the point where the job was restarted, with some 
> error at the beginning.
> I want to know if there will be an error at the end of the job too in the output 
> file, .gro?

The final .gro will have the final coordinates, as normal.

Mark

-Inline Attachment Follows-

-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] Re: gmx-users Digest, Vol 76, Issue 53

2010-08-12 Thread Changwon Yang
I tried OpenMPI v1.3.3,
but I got the same error.
mdrun_mpi -multi works fine; REMD has the problem.

## error message ##

step 500, will finish Fri Aug 13 16:43:25 2010
[localhost:20171] *** Process received signal ***
[localhost:20172] *** Process received signal ***
[localhost:20172] Signal: Segmentation fault (11)
[localhost:20172] Signal code: Address not mapped (1)
[localhost:20172] Failing at address: (nil)
[localhost:20171] Signal: Segmentation fault (11)
[localhost:20171] Signal code: Address not mapped (1)
[localhost:20171] Failing at address: (nil)
[localhost:20172] [ 0] /lib64/libpthread.so.0 [0x36e9e0eb10]
[localhost:20172] [ 1] mdrun_mpi_d(replica_exchange+0x1136) [0x42a446]
[localhost:20172] [ 2] mdrun_mpi_d(do_md+0x48a8) [0x433ca8]
[localhost:20172] [ 3] mdrun_mpi_d(mdrunner+0x11f1) [0x42f181]
[localhost:20172] [ 4] mdrun_mpi_d(main+0x9f1) [0x438101]
[localhost:20172] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x36e921d994]
[localhost:20172] [ 6] mdrun_mpi_d [0x420359]
[localhost:20172] *** End of error message ***
[localhost:20171] [ 0] /lib64/libpthread.so.0 [0x36e9e0eb10]
[localhost:20171] [ 1] mdrun_mpi_d(replica_exchange+0x1136) [0x42a446]
[localhost:20171] [ 2] mdrun_mpi_d(do_md+0x48a8) [0x433ca8]
[localhost:20171] [ 3] mdrun_mpi_d(mdrunner+0x11f1) [0x42f181]
[localhost:20171] [ 4] mdrun_mpi_d(main+0x9f1) [0x438101]
[localhost:20171] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x36e921d994]
[localhost:20171] [ 6] mdrun_mpi_d [0x420359]
[localhost:20171] *** End of error message ***
--
mpiexec noticed that process rank 2 with PID 20172 on node 
localhost.localdomain exited on signal 11 (Segmentation fault).
--



--
From: 
Sent: Wednesday, August 11, 2010 7:00 PM
To: 
Subject: gmx-users Digest, Vol 76, Issue 53

> Send gmx-users mailing list submissions to
> gmx-users@gromacs.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> or, via email, send a message with subject or body 'help' to
> gmx-users-requ...@gromacs.org
> 
> You can reach the person managing the list at
> gmx-users-ow...@gromacs.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gmx-users digest..."
> 
> 
> Today's Topics:
> 
>   1. RE: Replica Exchange problem in gmx-4.5 beta3 (Berk Hess)
> 
> 
> --
> 
> Message: 1
> Date: Wed, 11 Aug 2010 11:23:54 +0200
> From: Berk Hess 
> Subject: RE: [gmx-users] Replica Exchange problem in gmx-4.5 beta3
> To: Discussion list for GROMACS users 
> Message-ID: 
> Content-Type: text/plain; charset="iso-8859-1"
> 
> 
> Hi,
> 
> This could be due to problems with MPICH.
> Could you please try openmpi and report back?
> 
> Thanks,
> 
> Berk
> 
> From: sht_yc...@hotmail.com
> To: gmx-users@gromacs.org
> Date: Wed, 11 Aug 2010 18:09:00 +0900
> Subject: [gmx-users] Replica Exchange problem in gmx-4.5 beta3 
> 
> 
> 
> 
> 
> 
> 
> 
> Hello!
> I'm doing a simple REMD test with 4 
> replicas.
> Time step : 2 fs
> Exchange : every 500fs
> 
> md_0.tpr md_1.tpr md_2.tpr md_3.tpr 
> 
> mpiexec(or mpirun) -np 4 mdrun_mpi_d -deffnm md_ 
> -multi 4 -replex 200
> I got an error message.
> 
> ##error##
> 100 steps,   2000.0 ps.
> step 600 rank 
> 3 in job 10  localhost.localdomain_50305   caused collective 
> abort of all ranks
>  exit status of rank 3: killed by signal 11 
> rank 
> 2 in job 10  localhost.localdomain_50305   caused collective 
> abort of all ranks
>  exit status of rank 2: killed by signal 11 
> 
> 
> With gmx-4.0.7 it works fine.
> Is this a bug in the gmx-4.5 beta?
> No error messages were found in the log files.
> gmx-4.5 beta3 was compiled with icc 11.0
> and mpich2-1.2.1p1.
> 
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php  
> -- next part --
> An HTML attachment was scrubbed...
> URL: 
> http://lists.gromacs.org/pipermail/gmx-users/attachments/20100811/b4d58a9c/attachment-0001.html
> 
> --
> 
> -- 
> gmx-users mailing list
> gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> 
> End of gmx-users Digest, Vol 76, Issue 53
> *
> -- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!

Re: [gmx-users] Restarting the job

2010-08-12 Thread Mark Abraham
- Original Message -
From: sonali dhindwal 
Date: Friday, August 13, 2010 0:59
Subject: [gmx-users] Restarting the job
To: Discussion list for GROMACS users 

> Hello All,
> 
> I have a query regarding restarting jobs after a crash.
> I want to simulate a protein for 2 ns, but in between, due to a system shutdown, 
> it stopped, and I made a restart using this command:
> mdrun -s topol.tpr -cpi state.cpt -append
> Now I checked the RMSD in between by producing a .xtc file of the job which ran 
> till now and then checked g_rms of the simulation; it is showing a graph like 
> this (I have attached it in the mail).

See http://www.gromacs.org/Documentation/How-tos/Graphing_Data for a couple of 
gnuplot tips. I suspect the weirdness is gnuplot interpreting something as data 
that it should not interpret as data, and that the contents of the .xvg are 
actually the second half of normal RMS variation.
 
> This is showing the RMSD after the point where the job was restarted, with some 
> error at the beginning.
> I want to know if there will be an error at the end of the job too in the output 
> file, .gro?

The final .gro will have the final coordinates, as normal.

Mark


[gmx-users] Units of k1 in the pulling code

2010-08-12 Thread chris . neale

Dear Xueming:

the word "mol" is short for "mole":

http://en.wikipedia.org/wiki/Mole_%28unit%29

In the pull code context, it refers to moles of the pulled group.

The force is not "applied" to the COM of a cluster. The magnitude of  
the force is determined based on the COM distance, and then the force  
is applied to each atom in the pull groups.
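As a rough sketch of that statement (assuming, as is usual for COM pulling, that the COM force is shared mass-weighted among the atoms; the function name and signature are illustrative, not GROMACS code):

```python
import numpy as np

def umbrella_com_forces(positions, masses, ref_com, k, d0):
    """Harmonic COM pull sketch: the force magnitude is set by the
    COM-to-reference distance, then shared mass-weighted among the
    atoms. Illustrative only; not the GROMACS implementation."""
    m = np.asarray(masses, dtype=float)
    x = np.asarray(positions, dtype=float)
    com = (m[:, None] * x).sum(axis=0) / m.sum()
    dvec = com - np.asarray(ref_com, dtype=float)
    d = np.linalg.norm(dvec)
    f_com = -k * (d - d0) * dvec / d          # total force on the COM
    return (m / m.sum())[:, None] * f_com     # per-atom shares sum to f_com
```

The mass-weighted split leaves the group's internal motion untouched while the net force on the COM equals the harmonic restoring force.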


Chris.

-- original message --

Hi there

The units for pull_k1 are kJ/mol/nm. If this force is applied to a cluster,
does the "/mol" in the units mean per atom in the cluster, or per single
molecule composed of several atoms? Sorry, I don't know the default meaning of
"mol" in GROMACS; does it mean per molecule? Besides, the force is applied
to the COM of the cluster, but in the real pulling process, is the force applied
to each molecule in the cluster, or to each atom in the cluster?

Thanks in advance!

Best!
Xueming




[gmx-users] umbrella histograms

2010-08-12 Thread chris . neale

Dear Gavin:

plot the position vs. time. Probably you will see it sample around one  
value, then discover a new state, and make a sharp transition to  
another value. If the new state is much more favourable than the  
initial state, then you can discard the initial state as  
equilibration. It might not be though...
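The "sharp transition" described above can be located crudely by scanning for the largest jump between adjacent window means. A minimal sketch (the window size and the method itself are arbitrary illustrative choices, not from the thread):

```python
import numpy as np

def transition_index(series, w=50):
    """Return the index where the mean of the next w samples differs
    most from the mean of the previous w samples -- a crude marker of
    a sharp transition in a pull-coordinate time series."""
    x = np.asarray(series, dtype=float)
    best_i, best_jump = 0, -1.0
    for i in range(w, len(x) - w):
        jump = abs(x[i:i + w].mean() - x[i - w:i].mean())
        if jump > best_jump:
            best_i, best_jump = i, jump
    return best_i
```

Everything before the detected index can then be inspected and, if the later state is clearly the equilibrated one, discarded as equilibration.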


Also, could you have a PBC problem? I suggest that you read up on  
pull_pbcmol and try some tests with smart values of pull_pbcmol.


Chris.

-- original message --

Hi all

 I am generating a potential of mean force curve for the interaction
between two cage molecules using umbrella sampling. The system is
composed of only the two molecules in question, using no PBC and no
cut-offs. Umbrella sampling is performed at 25 different distances
(between COMs) between 0.75 and 2.75 nm. The histograms generated from
g_wham show very good overlap between 0.75 nm and 2 nm; the shapes of
the histograms and the widths of the distributions are all good in this
region. Above 2 nm the histograms show two peaks, and therefore the
shape and distribution at these distances are poor. Has anyone ever come
across this sort of behaviour before? Also, can you use different force
constants for the harmonic potential at different distances?

Cheers

Gavin

P.S. Here is a copy of the .mdp file that I am using:

title   = Pull test
cpp =
include =
define  =
integrator  = md
nsteps  = 5000
dt  = 0.002
nstxout = 25
nstvout = 25
nstlog  = 25
nstenergy   = 5000
nstfout = 25
pbc = no
nstlist = 10
ns_type = simple
vdwtype = cut-off
rlist   = 0
rvdw_switch = 0
rvdw= 0
coulombtype = cut-off
rcoulomb= 0
tcoupl  = nose-hoover
tc_grps = system
tau_t   = 0.1
ref_t   = 600
gen_vel = no
gen_temp=
constraints = none
comm_mode   = angular
pull= umbrella
pull_geometry = distance
pull_dim = Y Y Y
pull_start = no
pull_ngroups = 1
pull_group0 = cage_1
pull_group1 = cage_2
pull_init1 = 2.59
pull_rate1 = 0.0
pull_k1 = 1000
pull_nstxout = 1000
pull_nstfout = 1000





[gmx-users] gmx-developers mailing list search

2010-08-12 Thread Rossen Apostolov

 Hi,

I just updated the website and now you can search also the developers 
mailing list at 
http://www.gromacs.org/Support/Mailing_Lists/Search_gmx-developers_mailing_list 
.


The quick link on the front page still takes you to the gmx-users list, 
so after that you have to click again in the menu on the left.


Rossen


[gmx-users] Re: coordination number

2010-08-12 Thread Vitaly Chaban
> Hello,
> I am trying to calculate the number of solvent molecules present in the
> first solvation shell.
> How can I calculate the coordination number of a solute in the first
> solvation shell?
> Nilesh


1. Run g_rdf -cn.
2. Plot RDF.XVG and RCN.XVG.
3. Find the first minimum of the RDF.
4. Look at the RCN value at r = the first minimum of the RDF.
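The four steps above can be sketched in a few lines; here r, g and cn stand for the columns of the two .xvg files (the variable names and the peak-finding heuristic are illustrative):

```python
import numpy as np

def coordination_number(r, g, cn):
    """Walk downhill from the first (highest) RDF peak to its first
    minimum, then read the cumulative number cn at that radius.
    Assumes r, g, cn are parallel arrays, e.g. from g_rdf -cn output."""
    g = np.asarray(g, dtype=float)
    i = int(np.argmax(g))                 # first-shell peak (heuristic)
    while i + 1 < len(g) and g[i + 1] <= g[i]:
        i += 1                            # descend to the first minimum
    return r[i], cn[i]
```

The returned cn value is the average number of solvent molecules inside the first shell.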

Good luck!

--
Dr. Vitaly Chaban
Associate Researcher
Department of Chemistry
University of Rochester, NY, USA


Re: [gmx-users] Segmentation fault at g_traj

2010-08-12 Thread Justin A. Lemkul



Jorge Alberto Jover Galtier wrote:

Dear friends:
Running several simulations, we have found a problem with the 'g_traj' 
utility. It doesn't properly finish the files it generates, and gives a 
segmentation fault. This is what we have done:


We are working with double precision. With the file .mdp that is at the 
end of the mail, we have used 'grompp_d' to generate the .tpr file:


grompp_d -maxwarn 1000 -f temp_prueba_mail-list_001_mdpmdp -c 
../data/in/pep_ini.gro -r ../data/in/pep_ini.gro -p ../data/in/pep.top 
-o temp_prueba_mail-list_001_tpr.tpr


After that, we ran the simulation with 'mdrun_d':

mdrun_d -s temp_prueba_mail-list_001_tpr.tpr -o 
temp_prueba_mail-list_001_trr.trr -c 
../data/out/gro_unconstr_1.00_01.gro -g 
../data/out/log_unconstr_1.00_01.log -e 
temp_prueba_mail-list_001_edr.edr


Then we tried to get the coordinates of the atoms with 'g_traj_d':

g_traj_d -f temp_prueba_mail-list_001_trr.trr -s 
temp_prueba_mail-list_001_tpr.tpr -ox temp_prueba_mail-list_001_ox.xvg


At the terminal, we tell the program to get the coordinates from the 
group 0 (system), although the error appears also for other groups.


Here is where the problem appears. When the program is about to finish, 
it hits a segmentation fault and ends abruptly. The .xvg file has only 
some of the last lines missing, but those are the lines we are 
interested in. We have tried different things: we have used different 
numbers of steps, and we have output velocities and forces instead of 
coordinates... and always the same problem appears.


We would be very thankful if someone could tell us what is going wrong.



You're probably running out of memory.  Your .mdp file indicates that you save 
full-precision coordinates every step (yikes!) over 100,000 steps.  If you're 
trying to print the coordinate of every atom at every time, then the file that 
g_traj is trying to produce will be enormous, and you'll potentially use up all 
the memory your machine has.
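A back-of-the-envelope estimate illustrates the point (all numbers here are made up for illustration; they are not from the original message):

```python
def xvg_size_bytes(n_atoms, n_frames, chars_per_value=12):
    """Rough size of a g_traj -ox output file: one row per frame, with a
    time column plus x, y, z for every atom. Purely illustrative."""
    values_per_row = 1 + 3 * n_atoms
    return n_frames * values_per_row * chars_per_value

# A hypothetical 10,000-atom system saved every step for 100,000 steps
# would give on the order of tens of gigabytes of text output.
```

Even a modest system quickly reaches sizes that exhaust memory if the tool buffers the output.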


Other diagnostic information that would be useful would be the number of atoms 
in the system (to see if I'm on to something or completely guessing).  Does 
g_traj work if you just try to output a single frame, or just a few using -b and -e?


-Justin


Best wishes,
Jorge Alberto Jover Galtier
Universidad de Zaragoza, Spain

---

; VARIOUS PREPROCESSING OPTIONS
title= Yo
cpp  = /usr/bin/cpp
include  =
define   =

; RUN CONTROL PARAMETERS
integrator   = md
; Start time and timestep in ps
tinit= 0
dt = 0.001000
nsteps   = 10
; For exact run continuation or redoing part of a run
init_step= 0
; mode for center of mass motion removal
comm-mode= none
; number of steps for center of mass motion removal
nstcomm  = 1
; group(s) for center of mass motion removal
comm-grps=

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout  = 1
nstvout  = 1
nstfout  = 1
; Checkpointing helps you continue after crashes
nstcheckpoint= 1000
; Output frequency for energies to log file and energy file
nstlog   = 1000
nstenergy= 1
nstcalcenergy = 1
; Output frequency and precision for xtc file
nstxtcout= 50
xtc-precision= 1000
; This selects the subset of atoms for the xtc file. You can
; select multiple groups. By default all atoms will be written.
xtc-grps =
; Selection of energy groups
energygrps   =

; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist  = -1
; ns algorithm (simple or grid)
ns_type  = grid
; Periodic boundary conditions: xyz (default), no (vacuum)
; or full (infinite systems only)
pbc  = no
; nblist cut-off   
rlist= 20

domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype  = Reaction-Field-zero
rcoulomb-switch  = 0
rcoulomb = 4
; Dielectric constant (DC) for cut-off or DC of reaction field
epsilon-r= 1
epsilon-rf = 0
; Method for doing Van der Waals
vdw-type = Shift
; cut-off lengths  
rvdw-switch  = 0

rvdw = 4
; Apply long range dispersion corrections for Energy and Pressure
DispCorr = no
; Extension of the potential lookup tables beyond the cut-off
table-extension  = 1

; IMPLICIT SOLVENT (for use with Generalized Born electrostatics)
implicit_solvent = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling 
Tcoupl   = no

; Groups to couple separately
tc-grps  = System
; Time constant (ps) and reference temperature (K)
tau_t

Re: [gmx-users] coordination number

2010-08-12 Thread Justin A. Lemkul



Nilesh Dhumal wrote:

Hello,
I am trying to calculate the number of solvent molecules present in the
first solvation shell.
How can I calculate the coordination number of a solute in the first
solvation shell?


Search the list archive.  This is just one of a number of useful results for 
"coordination number":


http://lists.gromacs.org/pipermail/gmx-users/2009-December/047175.html

-Justin


Nilesh




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault at g_traj

2010-08-12 Thread Jorge Alberto Jover Galtier
Dear friends:

Running several simulations, we have found a problem with the 'g_traj' utility. It doesn't properly finish the files it generates, and gives a segmentation fault. This is what we have done:

We are working with double precision. With the .mdp file that is at the end of the mail, we have used 'grompp_d' to generate the .tpr file:

grompp_d -maxwarn 1000 -f temp_prueba_mail-list_001_mdpmdp -c ../data/in/pep_ini.gro -r ../data/in/pep_ini.gro -p ../data/in/pep.top -o temp_prueba_mail-list_001_tpr.tpr

After that, we ran the simulation with 'mdrun_d':

mdrun_d -s temp_prueba_mail-list_001_tpr.tpr -o temp_prueba_mail-list_001_trr.trr -c ../data/out/gro_unconstr_1.00_01.gro -g ../data/out/log_unconstr_1.00_01.log -e temp_prueba_mail-list_001_edr.edr

Then we tried to get the coordinates of the atoms with 'g_traj_d':

g_traj_d -f temp_prueba_mail-list_001_trr.trr -s temp_prueba_mail-list_001_tpr.tpr -ox temp_prueba_mail-list_001_ox.xvg

At the terminal, we tell the program to get the coordinates from group 0 (system), although the error appears also for other groups.

Here is where the problem appears. When the program is about to finish, it hits a segmentation fault and ends abruptly. The .xvg file has only some of the last lines missing, but those are the lines we are interested in. We have tried different things: we have used different numbers of steps, and we have output velocities and forces instead of coordinates... and always the same problem appears.

We would be very thankful if someone could tell us what is going wrong.

Best wishes,
Jorge Alberto Jover Galtier
Universidad de Zaragoza, Spain

---

; VARIOUS PREPROCESSING OPTIONS
title                = Yo
cpp                  = /usr/bin/cpp
include              =
define               =

; RUN CONTROL PARAMETERS
integrator           = md
; Start time and timestep in ps
tinit                = 0
dt                   = 0.001000
nsteps               = 10
; For exact run continuation or redoing part of a run
init_step            = 0
; mode for center of mass motion removal
comm-mode            = none
; number of steps for center of mass motion removal
nstcomm              = 1
; group(s) for center of mass motion removal
comm-grps            =

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout              = 1
nstvout              = 1
nstfout              = 1
; Checkpointing helps you continue after crashes
nstcheckpoint        = 1000
; Output frequency for energies to log file and energy file
nstlog               = 1000
nstenergy            = 1
nstcalcenergy        = 1
; Output frequency and precision for xtc file
nstxtcout            = 50
xtc-precision        = 1000
; This selects the subset of atoms for the xtc file. You can
; select multiple groups. By default all atoms will be written.
xtc-grps             =
; Selection of energy groups
energygrps           =

; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist              = -1
; ns algorithm (simple or grid)
ns_type              = grid
; Periodic boundary conditions: xyz (default), no (vacuum)
; or full (infinite systems only)
pbc                  = no
; nblist cut-off
rlist                = 20

domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype          = Reaction-Field-zero
rcoulomb-switch      = 0
rcoulomb             = 4
; Dielectric constant (DC) for cut-off or DC of reaction field
epsilon-r            = 1
epsilon-rf           = 0
; Method for doing Van der Waals
vdw-type             = Shift
; cut-off lengths
rvdw-switch          = 0
rvdw                 = 4
; Apply long range dispersion corrections for Energy and Pressure
DispCorr             = no
; Extension of the potential lookup tables beyond the cut-off
table-extension      = 1

; IMPLICIT SOLVENT (for use with Generalized Born electrostatics)
implicit_solvent     = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling
Tcoupl               = no
; Groups to couple separately
tc-grps              = System
; Time constant (ps) and reference temperature (K)
tau_t                = 0.1
ref_t                = 300
; Pressure coupling
Pcoupl               = no
Pcoupltype           = isotropic
; Time constant (ps), compressibility (1/bar) and reference P (bar)
tau_p                = 1.0
compressibility      = 4.5e-5
ref_p                = 1.0
; Random seed for Andersen thermostat
andersen_seed        = 815131

; GENERATE VELOCITIES FOR STARTUP RUN
gen_vel              = yes
gen_temp             = 300
gen_seed             = 556380

; OPTIONS FOR BONDS
constraints          = none
; Type of constraint algorithm
constraint-algorithm = Shake
; Do not constrain the start configuration
unconstrained-start  = yes
; Use successive overrelaxation to reduce the number of shake iterations
Shake-SOR            = no
; Relative tolerance of sha

[gmx-users] coordination number

2010-08-12 Thread Nilesh Dhumal
Hello,
I am trying to calculate the number of solvent molecules present in the
first solvation shell.
How can I calculate the coordination number of a solute in the first
solvation shell?
Nilesh




[gmx-users] homology modelling workshop?

2010-08-12 Thread kulleperuma . kulleperuma

Dear All,

This question is somewhat off-topic for GROMACS, but I hope  
some of you will be able to help me.
I would like to know whether any homology modelling workshop is being  
organized anywhere during the rest of this year. It would be greatly  
appreciated if any of you could give me an update.

Thanking you in advance

kulleperuma



[gmx-users] Restarting the job

2010-08-12 Thread sonali dhindwal
Hello All,

I have a query regarding restarting jobs after a crash.
I want to simulate a protein for 2 ns, but in between, due to a system shutdown, it 
stopped, and I made a restart using this command:
mdrun -s topol.tpr -cpi state.cpt -append
Now I checked the RMSD in between by producing a .xtc file of the job which ran 
till now and then checked g_rms of the simulation; it is showing a graph like 
this (I have attached it in the mail).

This is showing the RMSD after the point where the job was restarted, with some 
error at the beginning.
I want to know if there will be an error at the end of the job too in the output 
file, .gro?
Thanks in advance
Regards

--
Sonali Dhindwal


[gmx-users] Problem with removing COM translation

2010-08-12 Thread Alexandre Suman de Araujo

Hi Gmxers

I'm simulating a system composed of a protein centered in a sphere of 
water in vacuum.


The water molecules are kept within a virtual sphere by position 
restraints between each oxygen atom and a dummy atom fixed at the center of 
the sphere. The protein can move without any restriction.


To prevent separation between the protein and the water globule, I 
defined "comm_grps = Protein Non-Protein" in my .mdp file (I've also 
used the same groups in temperature coupling, as suggested in the GMX 
manual). However, when I run the simulation the protein COM moves away 
from the center of the water sphere (where it is at the beginning of the 
simulation). The movement of the COM of the water sphere is small (less 
than 1 angstrom). For simulations of 5 ns this translation is about 2 
angstroms, and for a 14 ns simulation it is more than 10 angstroms.


Could anyone help me with this issue?

As far as I know, the removal of COM motion is done by subtracting the
COM velocity from the velocities of the atoms within the groups defined
in comm_grps. Is it possible to truly "freeze" the COM of selected
groups in GROMACS, i.e. to achieve an absolutely static COM?
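As a rough illustration of the mechanism described above (a minimal sketch, not the actual GROMACS implementation; the masses and velocities are invented):

```python
def remove_com_velocity(masses, velocities):
    """Subtract the group's centre-of-mass velocity from every atom.

    masses: list of atomic masses; velocities: list of (vx, vy, vz) tuples.
    """
    total_mass = sum(masses)
    # Mass-weighted mean velocity = the group's COM velocity.
    v_com = tuple(
        sum(m * v[k] for m, v in zip(masses, velocities)) / total_mass
        for k in range(3)
    )
    return [tuple(v[k] - v_com[k] for k in range(3)) for v in velocities]

# A tiny made-up group of three atoms.
masses = [16.0, 1.0, 1.0]
vels = [(0.1, 0.0, 0.0), (0.5, 0.2, 0.0), (-0.3, 0.1, 0.0)]
new_vels = remove_com_velocity(masses, vels)

# The corrected total momentum is (numerically) zero; note that only the
# velocities are corrected, so slow relative drift between two groups'
# COM positions is not, by itself, excluded.
momentum = [sum(m * v[k] for m, v in zip(masses, new_vels)) for k in range(3)]
```

This is why zeroing the COM velocity of each comm group does not guarantee an absolutely static COM position over a long run.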


Cheers

--
**
Alexandre Suman de Araujo*
Faculdade de Ciências Farmacêuticas de Ribeirão Preto*
Universidade de São Paulo*
Dep. de Física e Química *
Grupo de Física Biológica * e-mail: asara...@fcfrp.usp.br*
Av. do Café, s/n° * e-mail: ale.su...@gmail.com  *
CEP: 14040-903* Phone: +55 (16) 3602-4172*
Ribeirão Preto, SP, Brasil* Phone: +55 (16) 3602-4222*
**



[gmx-users] pdb2gmx error

2010-08-12 Thread Alpay Temiz
Hello everyone

I am trying to set up a nucleic-acid-only simulation using gromacs 4.5.2.

pdb2gmx is only giving me options to cap protein termini, and when I
choose none it gives the error:

"There is a dangling bond at at least one of the terminal ends. Select a
proper terminal entry."

and exits.

Below is the program output.

Alpay


452pdb2gmx -f chr13_115016196_gaa_3loop_h.pdb -o conf.pdb -p -inter
 :-)  G  R  O  M  A  C  S  (-:

   Good gRace! Old Maple Actually Chews Slate

  :-)  VERSION 4.5-beta2  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

:-)  452pdb2gmx (double precision)  (-:

Option Filename  Type Description

  -f chr13_115016196_gaa_3loop_h.pdb  InputStructure file: gro g96
   pdb tpr etc.
  -o   conf.pdb  Output   Structure file: gro g96 pdb etc.
  -p  topol.top  Output   Topology file
  -i  posre.itp  Output   Include file for topology
  -n  clean.ndx  Output, Opt. Index file
  -q  clean.pdb  Output, Opt. Structure file: gro g96 pdb etc.

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-niceint0   Set the nicelevel
-[no]cwd bool   no  Also read force field files from the current
working directory
-[no]rtpobool   no  Allow an entry in a local rtp file to override a
library rtp entry
-chainsepenum   id_or_ter  Condition in PDB files when a new chain
should
be started: id_or_ter, id_and_ter, ter, id or
interactive
-ff  string select  Force field, interactive by default. Use -h for
information.
-water   enum   select  Water model to use: select, none, spc, spce,
tip3p, tip4p or tip5p
-[no]inter   bool   yes Set the next 8 options to interactive
-[no]ss  bool   no  Interactive SS bridge selection
-[no]ter bool   no  Interactive termini selection, iso charged
-[no]lys bool   no  Interactive Lysine selection, iso charged
-[no]arg bool   no  Interactive Arganine selection, iso charged
-[no]asp bool   no  Interactive Aspartic Acid selection, iso charged
-[no]glu bool   no  Interactive Glutamic Acid selection, iso charged
-[no]gln bool   no  Interactive Glutamine selection, iso neutral
-[no]his bool   no  Interactive Histidine selection, iso checking
H-bonds
-angle   real   135 Minimum hydrogen-donor-acceptor angle for a
H-bond (degrees)
-distreal   0.3 Maximum donor-acceptor distance for a H-bond
(nm)
-[no]una bool   no  Select aromatic rings with united CH atoms on
Phenylalanine, Tryptophane and Tyrosine
-[no]ignhbool   no  Ignore hydrogen atoms that are in the pdb file
-[no]missing bool   no  Continue when atoms are missing, dangerous
-[no]v   bool   no  Be slightly more verbose in messages
-posrefc real   1000Force constant for position restraints
-vsite   enum   noneConvert atoms to virtual sites: none, hydrogens
or aromatics
-[no]heavyh  bool   no  Make hydrogen atoms heavy
-[no]deuterate bool no  Change the mass of hydrogens to 2 amu
-[no]chargegrp bool yes Use charge groups in the rtp file
-[no]cmapbool   yes Use cmap torsions (if enabled in the rtp file)
-[no]renum   bool   no  Renumber the residues consecutively in the
output
-[no]rtpres  bool   no  Use rtp entry names as residue names


Select the Force Field:
 1: AMBER03_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 2: AMBER94_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 3: AMBER96_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 4: AMBER99_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 5: AMBER99SB-ILDN_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 6: AMBER99SB_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 7: AMBERGS_TEST_ONLY_DO_NOT_USE_FOR_PRODUCTION
 8: CHARMM27 all-atom force field (with CMAP) - version 2.0beta
 9: GROMOS96 43a1 force field
10: GROMOS96 43a2 force field (improved alkane dihedrals)
11: GROMOS96 4

Re: [gmx-users] Question regarding tpr files and rmsd

2010-08-12 Thread Mark Abraham


- Original Message -
From: Bernhard Knapp 
Date: Thursday, August 12, 2010 23:28
Subject: [gmx-users] Question regarding tpr files and rmsd
To: gmx-users@gromacs.org

> Dear users
> 
> Due to a hard disk crash we lost several md simulations.
> Fortunately we have backup copies of the trajectory files
> (xtc format) and structure files of the first frame of the
> simulation (created via trjconv -b 0 -e 0 -f myName.md.trr
> -o myName.md.firstframe.pdb -s myName.md.tpr). We do
> not have the tpr files which we usually used, for example, for
> the rmsd calculations. We found that the -s option of g_rms
> takes not only tpr files but also pdb files; however,
> when I compare the resulting xvg files the values are
> slightly different, and I get the warning "Warning: if there
> are broken molecules in the trajectory file, they can not be
> made whole without a run input file".

Sure. GROMACS-written PDB files contain no connectivity data. .tpr files do.

> The average difference (over 10 ns) between the xvg file based on
> the tpr and on the pdb is 0.01447971.

Probably quite a reasonable difference in the average RMSD (in nm), given
that the pdb approximates the floating-point numbers in the .tpr with
only 3 decimal places (in Angstrom).
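A tiny numerical sketch of this rounding effect (all coordinates below are invented for illustration, not taken from the thread):

```python
import math

def rmsd(a, b):
    """Plain RMSD between two equal-length coordinate lists (no fitting)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# "Full-precision" reference coordinates in nm, as stored in a .tpr.
ref = [1.234567, 0.765432, 2.345678, 1.111111]
# The same reference after a round-trip through PDB: 3 decimal places in
# Angstrom, i.e. 4 decimal places in nm.
ref_pdb = [round(x * 10, 3) / 10 for x in ref]

# A hypothetical trajectory frame.
frame = [1.244567, 0.775432, 2.335678, 1.121111]

# The two references give slightly different RMSD values, on the order of
# the rounding error -- the same kind of discrepancy seen in the thread.
diff = abs(rmsd(frame, ref) - rmsd(frame, ref_pdb))
```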

> Example of the two files:
>
> [bkn...@quovadis02 test]$ sdiff rmsd.pdb.xvg rmsd.tpr.xvg | less
> # This file was created Thu Aug 12 11:23:09 2010              | # This file was created Thu Aug 12 11:22:06 2010
> # by the following command:                                     # by the following command:
> # g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.firstframe.pdb -o r | # g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.tpr -o rmsd.tpr.xvg
> #                                                               #
> # g_rms is part of G R O M A C S:                               # g_rms is part of G R O M A C S:
> #                                                               #
> # GROtesk MACabre and Sinister                                | # S  C  A  M  O  R  G
> #                                                               #
> @title "RMSD"                                                   @title "RMSD"
> @xaxis  label "Time (ps)"                                       @xaxis  label "Time (ps)"
> @yaxis  label "RMSD (nm)"                                       @yaxis  label "RMSD (nm)"
> @TYPE xy                                                        @TYPE xy
> @ subtitle "Protein after lsq fit to Protein"                   @ subtitle "Protein after lsq fit to Protein"
>    0.000    0.0005041                                         |    0.000    0.0005046
>    3.000    0.1072081                                         |    3.000    0.0981387
>    6.000    0.1281023                                         |    6.000    0.1207779
>    9.000    0.1452615                                         |    9.000    0.1351306
> ...
> 
> 
> My questions are now:
> 
> - Why are the xvg files different if they are based on the tpr 
> and on the pdb file?

PDB has more limited precision than the .tpr, and if you're fitting to
the lower-precision structure, you'll get slightly different results.

> - What is the more appropriate way to calculate the rmsd?

Depends what you're trying to measure - but I'd argue there shouldn't be a 
significant difference between these two.
 
> - Only if the tpr file is the more appropriate way: is it valid
> to recreate the tpr file via grompp solely from the firstframe.pdb
> and the xtc of the trajectory? e.g. via

It's about the best you can do. IIRC, the firstframe.pdb will have higher 
precision than the first frame of the .xtc, typically, and so produce a .tpr 
closer to the original.

The procedure below is unnecessarily regenerating water. Use pdb2gmx to 
generate the .top, and then simply use grompp to combine .mdp, .top and 
whichever source of the first frame you choose.

> pdb2gmx -f 1mi5_A6D.md.firstframe.pdb -o 
> 1mi5_A6D.md.firstframe.pdb.gro -p 1mi5_A6D.md.top
> editconf -f 1mi5_A6D.md.firstframe.pdb.gro -o 
> 1mi5_A6D.firstframe.cube.pdb -bt cubic -d 2.0
> genbox -cp 1mi5_A6D.firstframe.cube.pdb -cs spc216.gro -o 
> 1mi5_A6D.firstframe.water.pdb -p 1mi5_A6D.md.top
> grompp -f md.mdp -c 1mi5_A6D.firstframe.water.pdb -p 
> 1mi5_A6D.md.top -o 1mi5_A6D.md.RECREATED.tpr
> then the average difference between the xvg file based on the
> tpr and the recreated tpr is 1.4865E-05 (which is much closer,
> though still not identical)

You fitted and analyzed with respect to Protein, so the re-generation of the 
water should have no effect here. The remaining difference should be 
attributable to the difference between whatever coordinate changes pdb2gmx is 
making (I can't tell what, if any) based on the two slightly different starting 
points - but subjecting both to rounding to .gro precision. This last point 
explains why they're so similar, I expect.

Mark
  
> example:
> [bkn...@q

[gmx-users] Question regarding tpr files and rmsd

2010-08-12 Thread Bernhard Knapp

Dear users

Due to a hard disk crash we lost several md simulations. Fortunately we
have backup copies of the trajectory files (xtc format) and structure
files of the first frame of the simulation (created via trjconv -b 0 -e
0 -f myName.md.trr -o myName.md.firstframe.pdb -s myName.md.tpr). We
do not have the tpr files which we usually used, for example, for the
rmsd calculations. We found that the -s option of g_rms takes not only
tpr files but also pdb files; however, when I compare the resulting
xvg files the values are slightly different, and I get the warning
"Warning: if there are broken molecules in the trajectory file, they
can not be made whole without a run input file". The average difference
(over 10 ns) between the xvg file based on the tpr and on the pdb is
0.01447971.

Example of the two files:

[bkn...@quovadis02 test]$ sdiff rmsd.pdb.xvg rmsd.tpr.xvg | less
# This file was created Thu Aug 12 11:23:09 2010              | # This file was created Thu Aug 12 11:22:06 2010
# by the following command:                                     # by the following command:
# g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.firstframe.pdb -o r | # g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.tpr -o rmsd.tpr.xvg
#                                                               #
# g_rms is part of G R O M A C S:                               # g_rms is part of G R O M A C S:
#                                                               #
# GROtesk MACabre and Sinister                                | # S  C  A  M  O  R  G
#                                                               #
@title "RMSD"                                                   @title "RMSD"
@xaxis  label "Time (ps)"                                       @xaxis  label "Time (ps)"
@yaxis  label "RMSD (nm)"                                       @yaxis  label "RMSD (nm)"
@TYPE xy                                                        @TYPE xy
@ subtitle "Protein after lsq fit to Protein"                   @ subtitle "Protein after lsq fit to Protein"
   0.000    0.0005041                                         |    0.000    0.0005046
   3.000    0.1072081                                         |    3.000    0.0981387
   6.000    0.1281023                                         |    6.000    0.1207779
   9.000    0.1452615                                         |    9.000    0.1351306

...


My questions are now:

- Why are the xvg files different if they are based on the tpr and on 
the pdb file?


- What is the more appropriate way to calculate the rmsd?

- Only if the tpr file is the more appropriate way: is it valid to
recreate the tpr file via grompp solely from the firstframe.pdb and the
xtc of the trajectory? e.g. via
pdb2gmx -f 1mi5_A6D.md.firstframe.pdb -o 1mi5_A6D.md.firstframe.pdb.gro 
-p 1mi5_A6D.md.top
editconf -f 1mi5_A6D.md.firstframe.pdb.gro -o 
1mi5_A6D.firstframe.cube.pdb -bt cubic -d 2.0
genbox -cp 1mi5_A6D.firstframe.cube.pdb -cs spc216.gro -o 
1mi5_A6D.firstframe.water.pdb -p 1mi5_A6D.md.top
grompp -f md.mdp -c 1mi5_A6D.firstframe.water.pdb -p 1mi5_A6D.md.top -o 
1mi5_A6D.md.RECREATED.tpr
then the average difference between the xvg file based on the tpr and
the recreated tpr is 1.4865E-05 (which is much closer, though still
not identical)


example:
[bkn...@quovadis02 test]$ sdiff rmsd.tprRECREATED.xvg rmsd.tpr.xvg | less
# This file was created Thu Aug 12 11:59:23 2010              | # This file was created Thu Aug 12 11:22:06 2010
# by the following command:                                     # by the following command:
# g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.RECREATED.tpr -o rm | # g_rms -f 1mi5_A6D.md.xtc -s 1mi5_A6D.md.tpr -o rmsd.tpr.xvg
#                                                               #
# g_rms is part of G R O M A C S:                               # g_rms is part of G R O M A C S:
#                                                               #
# GROup of MAchos and Cynical Suckers                         | # S  C  A  M  O  R  G
#                                                               #
@title "RMSD"                                                   @title "RMSD"
@xaxis  label "Time (ps)"                                       @xaxis  label "Time (ps)"
@yaxis  label "RMSD (nm)"                                       @yaxis  label "RMSD (nm)"
@TYPE xy                                                        @TYPE xy
@ subtitle "Protein after lsq fit to Protein"                   @ subtitle "Protein after lsq fit to Protein"
   0.000    0.0022253                                         |    0.000    0.0005046
   3.000    0.0981661                                         |    3.000    0.0981387
   6.000    0.1207914                                         |    6.000    0.1207779
   9.000    0.1351781                                         |    9.000    0.1351306

...


cheers
Bernhard




Re: [gmx-users] Re: gromacs from git source and openmm failed

2010-08-12 Thread Rossen Apostolov

 Hi,

On 8/12/10 2:43 PM, Alan wrote:

Thanks Rossen,

> Try it again with the latest release-4-5-branch, Erik added a lot of 
fixes.


Indeed, when I made my report it *was* with Erik's mods, which, I am
afraid, is what broke the compilation.


I am using 'git log':

Author: Rossen Apostolov mailto:ros...@cbr.su.se>>
Date:   Thu Aug 12 11:20:18 2010 +0200

Fixed a reverted version string in configure.ac <http://configure.ac>


You have compiled the master branch.

During the current release stage all development work and fixes first go 
in the release-4-5-patches branch, as I wrote. Later those fixes will be 
merged in the master.


Since you have already cloned the repository, you should do:

$ git checkout -t origin/release-4-5-patches

This branch is under heavy development, so it may not even compile
sometimes. Do a regular $ git pull to stay updated.

> BTW, is there a simpler way to get the "revision number" in git,
> like 'svn info'?

If you run

$ git log -n 1
commit 617c955b9154e2361b3240ede285f3094e5dc621
Author: Rossen Apostolov 
Date:   Thu Aug 12 14:22:45 2010 +0200

OpenMM: added support for AmberFF proper/improper torsion potentials.

"commit" is the identifier that would roughly correspond to a "revision" 
in SVN, but the two systems have conceptual differences and the terms 
are not equivalent.


Have a look at some tutorials on the web for a detailed description.

Also, mdrun prints out information about the commit it was compiled from:

$ mdrun
...
:-)  mdrun-gpu  (-:

Back Off! I just backed up md.log to ./#md.log.1#
Getting Loaded...
Reading file topol.tpr, VERSION 4.5-beta3-dev-20100812-617c9 (single 
precision)


"617c9" is the first few characters of the commit hash (compare with the
git log command above).


Rossen

[gmx-users] Re: gromacs from git source and openmm failed

2010-08-12 Thread Alan
Thanks Rossen,

> Try it again with the latest release-4-5-branch, Erik added a lot of
fixes.

Indeed, when I made my report it *was* with Erik's mods, which, I am
afraid, is what broke the compilation.

I am using 'git log':

Author: Rossen Apostolov 
Date:   Thu Aug 12 11:20:18 2010 +0200

Fixed a reverted version string in configure.ac

BTW, is there a simpler way to get the "revision number" in git, like
'svn info'?

Thanks

Alan

On 12 August 2010 12:11, Alan  wrote:

> Hi there,
>
> I am trying now to compile mdrun-openmm, by doing:
>
> cmake -DGMX_OPENMM=ON ..
>
> make mdrun
>
> -- Threads not compatible with OpenMM build, disabled
> CMake Warning at CMakeLists.txt:127 (message):
>   The OpenMM build does not support other acceleration modes!
>
> -- Using internal FFT library - fftpack
> -- Loaded CMakeASM-ATTInformation - ASM-ATT support is still experimental,
> please report issues
> -- Configuring done
> -- Generating done
> -- Build files have been written to: /Users/alan/Programmes/gromacs/build
> dhcp-128-232-144-215[2416]:~/Programmes/gromacs/build% make mdrun
> [  0%] Building NVCC (Device) object
> src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
> [  1%] Building NVCC (Device) object
> src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_gmx_gpu_utils.cu.o
>
> ...
>
> And then I got this error:
>
> ...
> [ 56%] Building C object
> src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o
> /Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:
> In function ‘nb_kernel400nf_x86_64_sse’:
> /Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
> error: ‘gmx_invsqrt_exptab’ undeclared (first use in this function)
> /Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
> error: (Each undeclared identifier is reported only once
> /Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
> error: for each function it appears in.)
> /Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
> error: ‘gmx_invsqrt_fracttab’ undeclared (first use in this function)
> make[3]: ***
> [src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o]
> Error 1
> make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
> make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
> make: *** [mdrun] Error 2
>
> Thanks,
>
> Alan
>
> --
> Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> Department of Biochemistry, University of Cambridge.
> 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> >>http://www.bio.cam.ac.uk/~awd28<<
>



-- 
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28<<

Re: [gmx-users] Gromacs 4.5-beta2 forcefield troubles with the GPU version

2010-08-12 Thread Rossen Apostolov

 Hi,

I committed a fix, and now all Amber force fields are supported with the
GPU version. The other force fields are not supported at the moment.


Rossen

On 8/10/10 12:19 PM, Karel Berka wrote:

Hi all,

I am trying to get 4.5-beta2 running on a graphics card, but mdrun-gpu
(gcc 4.1.3) is still complaining about the force fields:


OPLS - The combination rules of the used force-field do not match the 
one supported by OpenMM:  sigma_ij = (sigma_i + sigma_j)/2, eps_ij = 
sqrt(eps_i * eps_j). Switch to a force-field that uses these rules in 
order to simulate this system using OpenMM.
Amber03 - OpenMM does not support (some) of the provided interaction 
type(s) (Improper Dih.)
Gromos96 53a6 - OpenMM does not support (some) of the provided 
interaction type(s) (G96 bonds)
Charmm27 - OpenMM does not support (some) of the provided interaction 
type(s) (Improper Dih.)


Is there any force field that can be used?

--
Zdraví skoro zdravý
Karel "Krápník" Berka


RNDr. Karel Berka, Ph.D.
Palacký University in Olomouc
Faculty of Science
Department of Physical Chemistry
tř. 17. listopadu 1192/12
771 46 Olomouc
tel: +420-585634769
fax: +420-585634769
e-mail: karel.be...@upol.cz 





Re: [gmx-users] gromacs from git source and openmm failed

2010-08-12 Thread Rossen Apostolov

 Hi,

Try it again with the latest release-4-5-branch, Erik added a lot of fixes.

Rossen

On 8/12/10 1:11 PM, Alan wrote:

Hi there,

I am trying now to compile mdrun-openmm, by doing:

cmake -DGMX_OPENMM=ON ..

make mdrun

-- Threads not compatible with OpenMM build, disabled
CMake Warning at CMakeLists.txt:127 (message):
  The OpenMM build does not support other acceleration modes!

-- Using internal FFT library - fftpack
-- Loaded CMakeASM-ATTInformation - ASM-ATT support is still 
experimental, please report issues

-- Configuring done
-- Generating done
-- Build files have been written to: /Users/alan/Programmes/gromacs/build
dhcp-128-232-144-215[2416]:~/Programmes/gromacs/build% make mdrun
[  0%] Building NVCC (Device) object 
src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
[  1%] Building NVCC (Device) object 
src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_gmx_gpu_utils.cu.o


...

And then I got this error:

...
[ 56%] Building C object 
src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c: 
In function ‘nb_kernel400nf_x86_64_sse’:
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630: 
error: ‘gmx_invsqrt_exptab’ undeclared (first use in this function)
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630: 
error: (Each undeclared identifier is reported only once
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630: 
error: for each function it appears in.)
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630: 
error: ‘gmx_invsqrt_fracttab’ undeclared (first use in this function)
make[3]: *** 
[src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o] 
Error 1

make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2

Thanks,

Alan

--
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28 <<




Re: [gmx-users] RE: log files

2010-08-12 Thread Gaurav Goel
On Wed, Aug 11, 2010 at 1:10 PM, Nimesh Jain <
nimeshjain2...@u.northwestern.edu> wrote:

> Well, no. I got a 20GB log file when nstlog was 1000. When I changed it to
> 1, the log file was about 1 MB after a few minutes of simulation which
> means that it will be in GBs in a few days.
>
Did you re-run grompp after changing nstlog? Otherwise md.tpr (the
output of grompp) still has nstlog=1000.

-Gaurav
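The scaling is easy to see with a back-of-the-envelope estimate (the step count and bytes-per-entry below are assumptions for illustration, not values from this thread):

```python
def approx_log_size_mb(nsteps, nstlog, bytes_per_entry=2000):
    """Rough md.log size: one entry every nstlog steps, ~2 kB per entry."""
    entries = nsteps // nstlog
    return entries * bytes_per_entry / 1e6

# The log size scales inversely with nstlog, so an old nstlog baked into
# the .tpr (because grompp was not re-run) keeps the log file huge.
size_every_step = approx_log_size_mb(nsteps=10_000_000, nstlog=1)     # MB
size_every_1000 = approx_log_size_mb(nsteps=10_000_000, nstlog=1000)  # MB
```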

>
> On Wed, Aug 11, 2010 at 10:49 AM, Gaurav Goel wrote:
>
>> You've set the frequency of writing to log file as
>> 'nstlog   = 10'.
>> Given that 'nsteps   = 1', you're writing to the
>> log file only 1000 times. Do you get a 20GB md.log file with these settings?
>>
>> -Gaurav
>>
>> On Wed, Aug 11, 2010 at 10:52 AM, Nimesh Jain <
>> nimeshjain2...@u.northwestern.edu> wrote:
>>
>>> Hi,
>>>
>>> I am having a problem with the log files in my simulations: the file
>>> sizes are enormous; after 3 days of simulation I had a 20 GB md.log
>>> file. One of my grompp input files looks like this:
>>> [tau_t is very low because I am using bd and it doesn't work otherwise].
>>> [I am doing a replica exchange with 10,000 as exchange frequency]
>>>
>>>
>>> include  =
>>> define   =
>>> integrator   = bd
>>> tinit= 0
>>> dt   = 0.001
>>> nsteps   = 1 ;10
>>> simulation_part  = 1
>>> init_step= 0
>>> comm-mode= Angular
>>> nstcomm  = 1
>>> comm-grps=
>>>
>>>
>>> emtol= 0.01
>>> emstep   = 1.5
>>>
>>> nstxout  = 1
>>> nstvout  = 1
>>> nstfout  = 1
>>>
>>> nstlog   = 10
>>> nstenergy= 1000
>>>
>>> nstxtcout= 1000
>>> xtc-precision= 1000
>>>
>>> xtc-grps =
>>> energygrps   =
>>>
>>> ns_type  = grid
>>> pbc  = xyz
>>> periodic_molecules   = no
>>>
>>> rlist= 8.95
>>>
>>> coulombtype  = user
>>> rcoulomb-switch  = 0
>>> rcoulomb = 8.95
>>>
>>> epsilon-r= 1
>>>
>>> vdw-type = user  ;cutoff
>>> rvdw-switch  = 0
>>> rvdw = 8.95
>>> DispCorr = No
>>> table-extension  = 1
>>> ; Separate tables between energy group pairs
>>> energygrps   = A T G C P260 SA SB
>>>
>>> energygrp_table  = A A  A T  A G  A C  A P260  A SA  A SB  T T  T
>>> G  T C  T P260  T S
>>> A  T SB  G G  G C  G P260  G SA  G SB  C C  C P260  C SA  C SB  P260
>>> P260  P260 SA  P260 SB
>>> SA SA  SA SB  SB SB
>>>
>>> ; Spacing for the PME/PPPM FFT grid
>>> fourierspacing   = 0.10
>>>
>>> Tcoupl   = Nose-Hoover
>>> tc-grps  = System
>>> tau_t= 0.0001
>>> ref_t= 260.00
>>>
>>> Pcoupl   = No
>>>
>>> andersen_seed= 815131
>>>
>>> gen_vel  = yes
>>> gen_temp = 260.
>>> gen_seed = 1993
>>>
>>> ; ENERGY GROUP EXCLUSIONS
>>> ; Pairs of energy groups for which all non-bonded interactions are
>>> excluded
>>> energygrp_excl   =
>>>
>>>
>>>
>>> Please let me know if anyone knows what the problem is.
>>>
>>> Thanks,
>>> Nimesh
>>>
>>
>>
>
>
>
> --
> Nimesh Jain
> Graduate Student
> Biomedical Engineering
> Northwestern University
>
>

[gmx-users] force constant

2010-08-12 Thread Gavin Melaugh
Hi all

I am running umbrella sampling to generate a potential of mean force
curve. Most of the histograms generated by g_wham are fine, but some
are not. I have focused my attention on one of these histograms and
played about with a few parameters to get it to the correct shape.
However, using a force constant of 5000 gives me a much sharper
distribution than using a force constant of 1 (which is broader).
Why is this so? The input files are exactly the same apart from the
pull_k1 value; the reference distance is also exactly the same, 2.59 nm.
I double-checked the fluctuations of the distances between the centres
of mass of the two molecules using g_dist, and again the higher force
constant gives greater fluctuations. How can this be?
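For reference, the expected behaviour can be sketched numerically: in equilibrium, a harmonic bias U(x) = k(x - x0)^2 / 2 at temperature T gives a Gaussian distribution of x with width sqrt(kB*T / k), so the stiffer spring should produce the sharper histogram. The sampling below is a toy illustration with assumed values, not a simulation:

```python
import math
import random

random.seed(1)
kBT = 2.49  # roughly kB*T in kJ/mol at ~300 K

def umbrella_width(k, x0=2.59, n=20000):
    """Sample x from the Boltzmann distribution of a harmonic bias with
    force constant k and return the standard deviation of the samples."""
    sigma = math.sqrt(kBT / k)
    xs = [random.gauss(x0, sigma) for _ in range(n)]
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

std_soft = umbrella_width(k=1.0)      # broad histogram
std_stiff = umbrella_width(k=5000.0)  # much sharper histogram
```

Greater fluctuations at the higher force constant would be the opposite of this relation, so it may be worth checking that the intended pull_k1 actually made it into the .tpr for those runs.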


Many thanks in advance

Gavin


[gmx-users] gromacs from git source and openmm failed

2010-08-12 Thread Alan
Hi there,

I am trying now to compile mdrun-openmm, by doing:

cmake -DGMX_OPENMM=ON ..

make mdrun

-- Threads not compatible with OpenMM build, disabled
CMake Warning at CMakeLists.txt:127 (message):
  The OpenMM build does not support other acceleration modes!

-- Using internal FFT library - fftpack
-- Loaded CMakeASM-ATTInformation - ASM-ATT support is still experimental,
please report issues
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/alan/Programmes/gromacs/build
dhcp-128-232-144-215[2416]:~/Programmes/gromacs/build% make mdrun
[  0%] Building NVCC (Device) object
src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_memtestG80_core.cu.o
[  1%] Building NVCC (Device) object
src/kernel/gmx_gpu_utils/./gmx_gpu_utils_generated_gmx_gpu_utils.cu.o

...

And then I got this error:

...
[ 56%] Building C object
src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:
In function ‘nb_kernel400nf_x86_64_sse’:
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
error: ‘gmx_invsqrt_exptab’ undeclared (first use in this function)
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
error: (Each undeclared identifier is reported only once
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
error: for each function it appears in.)
/Users/alan/Programmes/gromacs/src/gmxlib/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c:630:
error: ‘gmx_invsqrt_fracttab’ undeclared (first use in this function)
make[3]: ***
[src/gmxlib/CMakeFiles/gmx.dir/nonbonded/nb_kernel_x86_64_sse/nb_kernel400_x86_64_sse.c.o]
Error 1
make[2]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make[1]: *** [src/kernel/CMakeFiles/mdrun.dir/rule] Error 2
make: *** [mdrun] Error 2

Thanks,

Alan

-- 
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28<<

[gmx-users] Re: gromacs from git failed with cmake on Mac SL and fftw from Fink

2010-08-12 Thread Alan
Thanks,

I saw your commits in the gromacs git repository, and cmake now works fine.

Alan

On 11 August 2010 17:34, Alan  wrote:

> Hi there,
>
> I am using gromacs from git source with cmake on Mac SL with Fink.
>
> ~/Programmes/gromacs% git show
> commit 86226a1a075a071920b0413aa7030545f8e6e282
> Merge: b8f35b9 c903375
> Author: Berk Hess 
> Date:   Wed Aug 11 12:57:53 2010 +0200
>
> Merge remote branch 'origin/release-4-5-patches'
>
>
> If using the old way (after bootstrapping), everything goes fine with:
>
> ./configure CPPFLAGS=-I/sw/include LDFLAGS=-L/sw/lib --with-gsl --with-x
>
> With cmake (cmake -D BUILD_SHARED_LIBS=ON or OFF), although CMakeCache.txt
> seems to be correct, for example, I see:
>
> //Path to a file.
> FFTW3F_INCLUDE_DIR:PATH=/sw/include
>
> //Path to a library.
> FFTW3F_LIBRARIES:FILEPATH=/sw/lib/libfftw3f.dylib
>
> (But I have no idea whether it is using the GSL libs.)
>
> I got this error:
>
> [ skip ]
> Scanning dependencies of target grompp
> [ 77%] Building C object src/kernel/CMakeFiles/grompp.dir/grompp.c.o
> Linking C executable grompp
> [ 77%] Building C object src/tools/CMakeFiles/gmxana.dir/gmx_lie.c.o
> Undefined symbols:
>   "_fftwf_plan_many_dft_r2c", referenced from:
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   "_fftwf_plan_dft_r2c_2d", referenced from:
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   "_fftwf_plan_dft_r2c_3d", referenced from:
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   "_fftwf_malloc", referenced from:
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d in libmd.a(gmx_fft_fftw3.c.o)
>   "_fftwf_execute_dft_c2r", referenced from:
>   _gmx_fft_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   "_fftwf_free", referenced from:
>   _gmx_fft_destroy in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_3d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_2d in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_fftw3.c.o)
>   _gmx_fft_init_many_1d_real in libmd.a(gmx_fft_

[gmx-users] Re: broken links

2010-08-12 Thread Alan
Thanks, it's working now.

Alan

On 11 August 2010 17:59, Alan  wrote:

> Hi there,
>
> I cannot download any file from
> http://www.gromacs.org/Downloads/Installation_Instructions/compiling_QMMM
> and the link to Gamess-UK seems to be broken as well.
>
> Alan
>
> --
> Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> Department of Biochemistry, University of Cambridge.
> 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> >>http://www.bio.cam.ac.uk/~awd28<<
>



-- 
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28<<

Re: [gmx-users] trying to install gromacs on linux single processor

2010-08-12 Thread Jussi Lehtola
On Thu, 12 Aug 2010 15:01:31 +0530
Anamika Awasthi  wrote:

> hello all,
> 
>  I am trying to install new version  of gromacs on linux single
> processor, getting this error
>  ./configure --enable-threads --enable-float
> checking build system type... x86_64-unknown-linux-gnu
> checking host system type... x86_64-unknown-linux-gnu
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... configure: error: newly
> created file is older than distributed files!
> Check your system clock

So do what it says: your system clock is set incorrectly.

To fix this, run

# date MMDDhhmm[[CC]YY][.ss]

or just fetch the time from an NTP server with

# ntpdate 0.pool.ntp.org
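For context, the failing check can be sketched like this. This is a simplified stand-in for autoconf's "build environment is sane" test, not the actual configure code; the 5-second tolerance is an arbitrary choice for illustration:

```python
import os
import tempfile
import time

# Simplified stand-in for configure's sanity check (not the real autoconf
# code): create a fresh file and verify the system clock agrees it was
# just created. If the clock lags, a newly created file can look *older*
# than the files shipped in the tarball, which is exactly the error
# configure reported.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    created = os.path.getmtime(path)
    skew = time.time() - created  # seconds between clock and file timestamp
finally:
    os.unlink(path)

if abs(skew) > 5:  # tolerance chosen arbitrarily for this sketch
    raise SystemExit("system clock looks wrong: fix it with date or ntpdate")
print("clock sanity check passed")
```

Once the clock is corrected, rerunning ./configure should get past the "build environment is sane" step.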
-- 
--
Jussi Lehtola, FM, Tohtorikoulutettava
Fysiikan laitos, Helsingin Yliopisto
jussi.leht...@helsinki.fi, p. 191 50632
--
Mr. Jussi Lehtola, M. Sc., Doctoral Student
Department of Physics, University of Helsinki, Finland
jussi.leht...@helsinki.fi
--


[gmx-users] trying to install gromacs on linux single processor

2010-08-12 Thread Anamika Awasthi
hello all,

 I am trying to install new version  of gromacs on linux single processor,
getting this error
 ./configure --enable-threads --enable-float
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... configure: error: newly
created file is older than distributed files!
Check your system clock

Re: [gmx-users] g_rms question

2010-08-12 Thread Tsjerk Wassenaar
Hi Udi,

Square the numbers... It's Root Mean Square Deviation, right? But roots
don't add up like that.

Cheers,

Tsjerk
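The point above can be sketched numerically. The per-domain RMSDs and atom counts below are invented for illustration, not taken from the thread; the relation is that the whole-backbone RMSD is the atom-count-weighted quadratic mean of the per-domain RMSDs (assuming all groups use the same fit), so domain values combine through their squares rather than adding up:

```python
import math

# Hypothetical per-domain backbone RMSDs (nm) and backbone atom counts
# for one frame. These numbers are invented for illustration only.
domain_rmsd  = [0.12, 0.08, 0.25, 0.10, 0.15]
domain_atoms = [120, 80, 150, 100, 90]

# Whole-backbone RMSD = atom-count-weighted quadratic mean of the domain
# RMSDs (valid when every group was fitted the same way):
n_total = sum(domain_atoms)
total_rmsd = math.sqrt(
    sum(n * r**2 for n, r in zip(domain_atoms, domain_rmsd)) / n_total
)

print("naive sum of domain RMSDs: %.3f nm" % sum(domain_rmsd))  # 0.700 nm
print("whole-backbone RMSD:       %.3f nm" % total_rmsd)        # 0.165 nm
```

In other words, simply summing the five domain RMSDs overshoots badly; summing the atom-weighted squared deviations and then taking the root recovers the whole-backbone value.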

On Aug 12, 2010 12:02 AM, "udi"  wrote:

 Hi gromacs users,

I’m simulating a protein that consists of 5 domains. I have calculated the
whole protein’s backbone RMSD by entering ‘4’ twice.

Now, I would like to calculate the contribution of every domain, i.e. if the
whole protein’s RMSD in the first frame is 1 nm, then how is this 1 nm
distributed among the 5 domains?

I have created 5 groups in the index file of the backbone of every domain
and calculated the RMSD by first entering ‘4’ in order to fit the whole
backbone and entered the domains backbone groups in the second entry. (5
different calculations). The problem is that the values I get for the
domains do not add up to the whole-backbone RMSD values! What am I doing
wrong?



Thanks in advance

Cheers



Udi
