Re: [gmx-users] Fwd: KALP-15 in DPPC Tutorial Step 0 Segmentation Fault

2011-03-08 Thread Justin A. Lemkul



Steve Vivian wrote:


New to Gromacs.

Worked my way through the tutorial with relatively few issues until the 
Equilibration stage.  My system blows up!!


Returned to the Topology stage and rebuilt my system, ensuring that I 
followed the InflateGRO procedure correctly.  It appears to be correct: 
reasonable area per lipid, no water inside my bilayer, and VMD shows a 
structure which appears normal (although I am new to this).  
There are voids between the bilayer and the water molecules, but this is to be 
expected, correct?


Energy Minimization repeatedly produces results within the expected range.

Again the system blows up at equilibration with a step 0 segmentation fault, 
regardless of whether I attempt the NVT or Anneal_NPT stage (using the 
provided mdp files, including the updates for restraints on the protein 
and the lipid molecules).


I have attempted many variations of the nvt.mdp and anneal_npt.mdp files 
hoping to resolve my issue, but with no success.  Below is the log output 
from the run using the nvt.mdp file included in the tutorial.


Started mdrun on node 0 Tue Mar  8 15:42:35 2011

           Step           Time         Lambda
              0        0.00000        0.00000

   Grid: 9 x 9 x 9 cells
   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    8.52380e+01    6.88116e+01    2.23939e+01   -3.03546e+01    2.71260e+03
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip. Position Rest.
    1.49883e+04   -1.42684e+03   -2.78329e+05   -1.58540e+05    2.57100e+00
      Potential    Kinetic En.   Total Energy  Conserved En.    Temperature
   -4.20446e+05   *1.41436e+14    1.41436e+14    1.41436e+14    1.23343e+12*

 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -1.56331e+02    5.05645e+12    1.18070e+01


As you can see, the Potential Energy is reasonable, but the Kinetic 
Energy and Temperature are clearly unphysical.


I am hoping that this is enough information for a more experienced 
Gromacs user to provide guidance. 
Note that I have tried all of the suggestions that I read on the 
mailing list and in the "blowing up" section of the manual, specifically:

- reduced the time step in the equilibration stages
- reduced Fmax during the EM stage (down as low as 100 kJ, which did not help)
- modified the neighbour list parameters
 
Any help is appreciated. 
I can attach and forward any further information as required; please let 
me know.




Which GROMACS version are you using?  It looks like you're running in serial; is 
that correct?  If not, please provide your mdrun command line.  If you're 
using version 4.5.3 in serial, I have identified a very problematic bug that 
seems to affect a wide variety of systems and could be related:


http://redmine.gromacs.org/issues/715

I have seen even the most robust tutorial systems fail in this way; some new lab 
members experienced the same problem.  The workaround is to run in parallel.
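As a sketch of that workaround, assuming an MPI-enabled mdrun_mpi build and the 
tutorial's file names (adjust names and the process count to your setup):

grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr
mpirun -np 2 mdrun_mpi -deffnm nvt

Since the bug only shows up in serial runs, even two MPI processes should be 
enough to avoid it.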


-Justin



Regards,
Steve Vivian.
sviv...@uwo.ca

 



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Fwd: KALP-15 in DPPC Tutorial Step 0 Segmentation Fault

2011-03-08 Thread Steve Vivian


New to Gromacs.

Worked my way through the tutorial with relatively few issues until the 
Equilibration stage.  My system blows up!!


Returned to the Topology stage and rebuilt my system, ensuring that I 
followed the InflateGRO procedure correctly.  It appears to be correct: 
reasonable area per lipid, no water inside my bilayer, and VMD shows a 
structure which appears normal (although I am new to this).  
There are voids between the bilayer and the water molecules, but this is to be 
expected, correct?


Energy Minimization repeatedly produces results within the expected range.

Again the system blows up at equilibration with a step 0 segmentation fault, 
regardless of whether I attempt the NVT or Anneal_NPT stage (using the 
provided mdp files, including the updates for restraints on the protein 
and the lipid molecules).


I have attempted many variations of the nvt.mdp and anneal_npt.mdp files 
hoping to resolve my issue, but with no success.  Below is the log output 
from the run using the nvt.mdp file included in the tutorial.


Started mdrun on node 0 Tue Mar  8 15:42:35 2011

           Step           Time         Lambda
              0        0.00000        0.00000

   Grid: 9 x 9 x 9 cells
   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    8.52380e+01    6.88116e+01    2.23939e+01   -3.03546e+01    2.71260e+03
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip. Position Rest.
    1.49883e+04   -1.42684e+03   -2.78329e+05   -1.58540e+05    2.57100e+00
      Potential    Kinetic En.   Total Energy  Conserved En.    Temperature
   -4.20446e+05   *1.41436e+14    1.41436e+14    1.41436e+14    1.23343e+12*

 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -1.56331e+02    5.05645e+12    1.18070e+01


As you can see, the Potential Energy is reasonable, but the Kinetic 
Energy and Temperature are clearly unphysical.


I am hoping that this is enough information for a more experienced 
Gromacs user to provide guidance.
Note that I have tried all of the suggestions that I read on the 
mailing list and in the "blowing up" section of the manual, specifically:

- reduced the time step in the equilibration stages
- reduced Fmax during the EM stage (down as low as 100 kJ, which did not help)
- modified the neighbour list parameters

Any help is appreciated.
I can attach and forward any further information as required; please let 
me know.



Regards,
Steve Vivian.
sviv...@uwo.ca




RE: [gmx-users] Membrane Protein Tutorial

2011-03-08 Thread Dallas Warren
Below a certain temperature, lipids form a gel or crystalline phase.  If
you run below that temperature, the bilayer will behave nothing like a
biological lipid membrane (unless, of course, that is what you want).
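For example, DPPC's main gel-to-fluid transition is around 314 K, which is why
equilibration in that tutorial is run above it, at 323 K.  A minimal mdp
fragment along those lines, assuming a V-rescale thermostat and tutorial-style
coupling groups (adjust the group names to your own index groups):

tcoupl   = V-rescale
tc-grps  = Protein DPPC SOL_CL   ; coupling groups; adjust to your system
tau_t    = 0.1     0.1  0.1      ; time constants, in ps
ref_t    = 323     323  323      ; above the DPPC phase transition (~314 K)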

 

Catch ya,

Dr. Dallas Warren

Medicinal Chemistry and Drug Action

Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3010
dallas.war...@monash.edu

+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble
a nail. 

 

From: gmx-users-boun...@gromacs.org
[mailto:gmx-users-boun...@gromacs.org] On Behalf Of mohsen ramezanpour
Sent: Wednesday, 9 March 2011 7:01 AM
To: Discussion list for GROMACS users
Subject: [gmx-users] Membrane Protein Tutorial

 

Dear All

While doing Dr. Justin's membrane protein tutorial I have a few questions;
please let me know the answers.

1. Why do I need to use a temperature above the lipid phase transition
temperature for the equilibration stage?

2. What exactly does the lipid phase transition mean?



Thanks in advance for your idea
Mohsen


Re: [gmx-users] Tweeking MDP file options to run on GPU

2011-03-08 Thread Szilárd Páll
Hi,

What is the error you are getting? What is unfortunate about
temperature coupling?

Have you checked out the part of the documentation, especially the
supported features on GPUs part
(http://www.gromacs.org/gpu#Supported_features)?
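For reference, once the .mdp only uses supported features, the GPU binary is
invoked much like the CPU one; a minimal sketch, assuming a 4.5.x build with
mdrun-gpu on the PATH and the file names from your current setup:

grompp -f nvt.mdp -c input.gro -p topol.top -o nvt.tpr
mdrun-gpu -v -deffnm nvt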

--
Szilárd



On Mon, Mar 7, 2011 at 3:43 PM, kala  wrote:
> Dear friends
> I am trying to run a ternary complex simulation using
> GROMACS. So far the simulation is slow on my dual-core machine
> (36 hrs/ns). Fortunately or unfortunately, I have a Fermi graphics card on which
> I could run the simulation much faster. Now the unfortunate thing is the
> temperature coupling. I am new to GROMACS and tweaking the mdp files is beyond
> me. I am seeking advice on tweaking this mdp file (which I now use for
> CPU calculations) for fast MD using mdrun-gpu.
> My system:
> 725 amino acids
> 2 ligand molecules
> 1 coordinated Zn ion
>
> my MDP-file
>
> title   = Protein-ligand complex NVT equilibration
> ; Run parameters
> integrator  = md; leap-frog integrator
> nsteps  = 500000    ; 2 * 500000 = 1000 ps (1 ns)
> dt  = 0.002 ; 2 fs
> ; Output control
> nstxout = 0 ; suppress .trr output
> nstvout = 0 ; suppress .trr output
> nstenergy   = 1000  ; save energies every 2 ps
> nstlog  = 1000  ; update log file every 2 ps
> nstxtcout   = 1000  ; write .xtc trajectory every 2 ps
> energygrps  = Protein JZ4
> ; Bond parameters
> continuation= yes   ; first dynamics run
> constraint_algorithm = lincs; holonomic constraints
> constraints = all-bonds ; all bonds (even heavy atom-H bonds)
> constrained
> lincs_iter  = 1 ; accuracy of LINCS
> lincs_order = 4 ; also related to accuracy
> ; Neighborsearching
> ns_type = grid  ; search neighboring grid cells
> nstlist = 5 ; 10 fs
> rlist   = 0.9   ; short-range neighborlist cutoff (in nm)
> rcoulomb= 0.9   ; short-range electrostatic cutoff (in nm)
> rvdw= 1.4   ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype = PME   ; Particle Mesh Ewald for long-range
> electrostatics
> pme_order   = 4 ; cubic interpolation
> fourierspacing  = 0.16  ; grid spacing for FFT
> ; Temperature coupling is on
> tcoupl  = V-rescale ; modified Berendsen thermostat
> tc-grps = Protein_JZ4 Water_and_ions; two coupling groups - more
> accurate
> tau_t   = 0.1   0.1 ; time constant, in ps
> ref_t   = 300   300 ; reference temperature, one for
> each group, in K
> ; Pressure coupling is off
> pcoupl  = Parrinello-Rahman ; pressure coupling is on for
> NPT
> pcoupltype  = isotropic ; uniform scaling of box vectors
> tau_p   = 2.0   ; time constant, in ps
> ref_p   = 1.0   ; reference pressure, in bar
> compressibility = 4.5e-5; isothermal compressibility of
> water, bar^-1
> ; Periodic boundary conditions
> pbc = xyz   ; 3-D PBC
> ; Dispersion correction
> DispCorr= EnerPres  ; account for cut-off vdW scheme
> ; Velocity generation
> gen_vel = yes   ; assign velocities from Maxwell distribution
> gen_temp= 300   ; temperature for Maxwell distribution
> gen_seed= -1; generate a random seed
>
> thanks and regards
>
> bharath
>


[gmx-users] membrane-protein

2011-03-08 Thread mohsen ramezanpour
Dear All

While doing Dr. Justin's membrane protein tutorial I have a few questions:

1. Why do I need to know whether my box vectors have stabilized?

2. How can I check this (and with which program)?

3. What can I do if they are not stable?

Thanks in advance
Mohsen

[gmx-users] Membrane Protein Tutorial

2011-03-08 Thread mohsen ramezanpour
Dear All

While doing Dr. Justin's membrane protein tutorial I have a few questions;
please let me know the answers.

1. Why do I need to use a temperature above the lipid phase transition
temperature for the equilibration stage?

2. What exactly does the lipid phase transition mean?



Thanks in advance for your idea
Mohsen

Re: [gmx-users] parallel running

2011-03-08 Thread Mark Abraham

On 9/03/2011 1:51 AM, mohsen ramezanpour wrote:

Dear Dr. Justin,
Thank you for your notice.
I don't know whether the problem is related to my command or to the cluster 
configuration.
Please look at my commands and let me know the answer to this 
question; just this.


The command you posted said that your compute environment could not find 
the GROMACS MPI-compiled executable on your cluster. Nobody here has any 
real ability to say why that is happening. You need to find out where 
that executable is, and how to access it. All of that information is 
local to you, and absent from us. Thus Justin said:




The administrators of your cluster should be able to tell you how
your system runs, what is installed, how things are configured,
etc.  Anything that anyone on this list would provide would be
generic advice, much of which is probably idle speculation that
may end up wasting your time.



Mark

Re: [gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
Dear Dr. Justin,
Thank you for your notice.
I don't know whether the problem is related to my command or to the cluster
configuration.
Please look at my commands and let me know the answer to this question; just
this.
Thanks in advance for your reply
Mohsen

On Tue, Mar 8, 2011 at 6:07 PM, Justin A. Lemkul  wrote:

>
>
> mohsen ramezanpour wrote:
>
>
>>
>>
>> On Tue, Mar 8, 2011 at 2:49 PM, Esztermann, Ansgar <
>> ansgar.eszterm...@mpi-bpc.mpg.de >
>> wrote:
>>
>>
>> >> You don't use qsub or bsub?
>> >
>> > No,What is these?How can I prepare and use them?
>>
>>They are commands to submit jobs to batch systems.
>>
>> thank you.Please let me know more if it is possible
>>
>
> The administrators of your cluster should be able to tell you how your
> system runs, what is installed, how things are configured, etc.  Anything
> that anyone on this list would provide would be generic advice, much of
> which is probably idle speculation that may end up wasting your time.
>
> -Justin
>
>  Thanks in advance
>>
>>
>>A.
>>--
>>Ansgar Esztermann
>>DV-Systemadministration
>>Max-Planck-Institut für biophysikalische Chemie, Abteilung 105
>>
>>
>>
>>
> --
> 
>
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> MILES-IGERT Trainee
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 

Re: [gmx-users] parallel running

2011-03-08 Thread Justin A. Lemkul



mohsen ramezanpour wrote:



On Tue, Mar 8, 2011 at 2:49 PM, Esztermann, Ansgar 
> wrote:



 >> You don't use qsub or bsub?
 >
 > No,What is these?How can I prepare and use them?

They are commands to submit jobs to batch systems.

thank you.Please let me know more if it is possible


The administrators of your cluster should be able to tell you how your system 
runs, what is installed, how things are configured, etc.  Anything that anyone 
on this list would provide would be generic advice, much of which is probably 
idle speculation that may end up wasting your time.


-Justin


Thanks in advance


A.
--
Ansgar Esztermann
DV-Systemadministration
Max-Planck-Institut für biophysikalische Chemie, Abteilung 105





--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
On Tue, Mar 8, 2011 at 2:49 PM, Esztermann, Ansgar <
ansgar.eszterm...@mpi-bpc.mpg.de> wrote:

>
> >> You don't use qsub or bsub?
> >
> > No,What is these?How can I prepare and use them?
>
> They are commands to submit jobs to batch systems.
>
> thank you.Please let me know more if it is possible
Thanks in advance

>
> A.
> --
> Ansgar Esztermann
> DV-Systemadministration
> Max-Planck-Institut für biophysikalische Chemie, Abteilung 105
>

Re: [gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
On Tue, Mar 8, 2011 at 2:53 PM, Esztermann, Ansgar <
ansgar.eszterm...@mpi-bpc.mpg.de> wrote:

>
> On Mar 8, 2011, at 12:00 , mohsen ramezanpour wrote:
> >
> >> > Besides when I used the following command I get an executeable Error:
> >> > mpirun   -np   8   mdrun_mpi -deffnm   output  &
> >>
> >> What is the error message?
> >
> > the Error is:
> > Failed to find the following executable:
> >
> > Host:   compute-0-4.local
> > Executable: mdrun_mpi
> >
> > Cannot continue.
>
> Is mdrun_mpi available on compute-0-4? If so, it's just a matter of using
> the right path: your shell knows where to look for the executable, but
> mpirun does not. Try
>
> Sorry, you are right. There is no mdrun_mpi on the nodes.
Thank you.
Besides, none of the following are present on the nodes:
mdrun_mpi      mdrun_mpi_d.openmpi    mdrun_d
mdrun_mpi_d    mdrun_mpi.openmpi      mpiexec.openmpi
mpirun.openmpi
Do I need to install some of these on the cluster to be able to run mdrun on all
nodes?
Thanks in advance for your guidance


> mpirun -np 8 `which mdrun_mpi` -deffnm output &
>
> instead. Note the "backticks" (`).
>
> A.
> --
> Ansgar Esztermann
> DV-Systemadministration
> Max-Planck-Institut für biophysikalische Chemie, Abteilung 105
>

Re: [gmx-users] RMSD truncation Restart simulation problems

2011-03-08 Thread Mark Abraham

On 8/03/2011 9:41 PM, Henri Mone wrote:

Hi All, hi Mark,
Here are some more details. The outputs and error messages are
attached at the end of the e-mail. After truncation I get the error
message [1a]: GROMACS has problems with the checksum of the trr files.
After truncation the trajectories (xtc, trr) all have the same length of
27752 frames [1b]. All the edr files end at the same time, 277518 ps [1b].
The cpt files used after truncation have step = 138762700 and
t = 277525.40 [1c].
Before truncation I got the error message [2]: GROMACS complains that
the 32 subsystems are not compatible.
Does anyone have an idea what is going wrong?

Thanks,
Henri



1a: AFTER TRUNCATION: ERROR MESSAGE
Reading checkpoint file state1.cpt generated: Thu Jan 27 02:19:50 2011
   #PME-nodes mismatch,
 current program: -1
 checkpoint file: 0
Reading checkpoint file state2.cpt generated: Thu Jan 27 02:19:50 2011
   #PME-nodes mismatch,
 current program: -1
 checkpoint file: 0
Gromacs binary or parallel settings not identical to previous run.
Continuation is exact, but is not guaranteed to be binary identical.
...
---
Program mdrun_mpi, VERSION 4.5.3
Source code file: checkpoint.c, line: 1767
Fatal error:
Can't read 1048576 bytes of 'traj1.trr' to compute checksum. The file
has been replaced or its contents has been modified.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
---
Program mdrun_mpi, VERSION 4.5.3
Source code file: checkpoint.c, line: 1767
Fatal error:
Can't read 1048576 bytes of 'traj2.trr' to compute checksum. The file
has been replaced or its contents has been modified.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
Error on node 1, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 1 out of 32
gcq#307: "Good Music Saves your Soul" (Lemmy)
[n030212:18418] MPI_ABORT invoked on rank 1 in communicator
MPI_COMM_WORLD with errorcode -1


Ah yes, I remember now. mdrun tries to be smart and check that all the 
files match the state they were in before the crash by computing 
checksums when writing and again when reading.





1b: AFTER TRUNCATION: XTC TRR
$ gmxcheck -f traj0.xtc
Checking file traj0.xtc
Reading frame       0 time    0.000
# Atoms  224
Precision 0.001 (nm)
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda            0
Coords        27752    10
Velocities        0
Forces            0
Box           27752    10
...
$ gmxcheck -f traj31.xtc
Checking file traj31.xtc
Reading frame       0 time    0.000
# Atoms  224
Precision 0.001 (nm)
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda            0
Coords        27752    10
Velocities        0
Forces            0
Box           27752    10

$ gmxcheck -f traj0.trr
Checking file traj0.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10
$ gmxcheck -f traj1.trr
Checking file traj1.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10
...
$ gmxcheck -f traj31.trr
Checking file traj31.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10

$ eneconv -f ener0.edr
Reading energy frame      0 time    0.000
Continue writing frames from t=0, step=0
Last energy frame read 138759 time 277518.000
Last step written from ener0.edr: t 277518, step 138759000
Last frame written was at step 138759000, time 277518.00
Wrote 138760 frames
...
$ eneconv -f ener31.edr
Reading energy frame      0 time    0.000
Continue writing frames from t=0, step=0
Last energy frame read 138759 time 277518.000
Last step written from ener31.edr: t 277518, step 138759000
Last frame written was at step 138759000, time 277518.00
Wrote 138760 frames

Re: [gmx-users] g_energy inconsistent results

2011-03-08 Thread Mark Abraham

On 8/03/2011 9:44 PM, Ehud Schreiber wrote:


Dear Gromacs users,

I am working with version 4.5.3, using the opls-aa forcefield in an 
implicit solvent, all-vs-all setting:


pdb2gmx -ter -ff oplsaa -water none -f file.pdb

I am energy-minimizing structures in 3 stages (steep, cg and l-bfgs). 
The last stage is the following:


grompp -f em3.mdp -p topol.top -c em2.gro -t em2.trr -o em3.tpr -po 
em3.mdout.mdp


mdrun -nice 0 -v -pd -deffnm em3

g_energy -s em3.tpr -f em3.edr -o em3.potential_energy.xvg

where the mdp file is:

;;; em3.mdp ;;;

integrator   = l-bfgs

nsteps   = 5

implicit_solvent = GBSA

gb_algorithm = Still

sa_algorithm = Ace-approximation

pbc  = no

rgbradii = 0

ns_type  = simple

nstlist  = 0

rlist= 0

coulombtype  = cut-off

rcoulomb = 0

vdwtype  = cut-off

rvdw = 0

nstcalcenergy= 1

nstenergy= 1000

emtol= 0

;;;

The last line in the em3.potential_energy.xvg file should give the 
(potential) energy of the minimized structure em3.gro .


I wish also to compute the potential energy of .gro files in general, 
not necessarily obtained from a simulation. For that, I prepared a 
.mdp file for a degenerate energy minimization, having 0 steps, 
designed just to give the status of the file:




Zero-step EM does not calculate the initial energy because it is not 
useful for gradient-based energy minimization. I don't recall the 
details, but perhaps the first EM step is reported as step zero.


Instead, you should use zero-step MD (with unconstrained_start = yes), 
or (for multiple single points) mdrun -rerun.


You will not necessarily reproduce the g_energy energies with anything, 
because the energy is dependent on the state of the neighbour lists. If 
nstenergy is a multiple of nstlist, then those energies should be fairly 
reproducible.
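A minimal sketch of the -rerun route for a single configuration, assuming the
same topol.top and a status.mdp with integrator = md and nsteps = 0:

grompp -f status.mdp -p topol.top -c em3.gro -o status.tpr
mdrun -nice 0 -rerun em3.gro -s status.tpr -deffnm status
g_energy -s status.tpr -f status.edr -o status.potential_energy.xvg

-rerun recomputes energies and forces for the frames it is given (here just the
single .gro) without integrating, so the reported potential corresponds to that
exact configuration.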


I have updated the grompp source to provide a note to the user to warn 
against zero-step EM.


Mark


;;; status.mdp ;;;

integrator   = l-bfgs

nsteps   = 0

implicit_solvent = GBSA

gb_algorithm = Still

sa_algorithm = Ace-approximation

pbc  = no

rgbradii = 0

ns_type  = simple

nstlist  = 0

rlist= 0

coulombtype  = cut-off

rcoulomb = 0

vdwtype  = cut-off

rvdw = 0

nstcalcenergy= 1

nstenergy= 1

emtol= 0

;;;

The only changes from the former .mdp file are in nsteps and nstenergy.

However, when I run this potential energy status run on em3.gro itself,

grompp -f status.mdp -p topol.top -c em3.gro -o status.tpr -po 
status.mdout.mdp


mdrun -nice 0 -v -pd -deffnm status

g_energy -s status.tpr -f status.edr -o status.potential_energy.xvg

and look at the (single) energy line in status.potential_energy.xvg I 
find that the energy does not agree with the one obtained during 
minimization (it's higher by some tens of kJ/mol).


What am I doing wrong? How should one reliably find the energy of a 
given .gro file?


Moreover, when changing the integrator in status.mdp to steep, the 
results also change dramatically -- why should the algorithm matter if 
no steps are performed and only the initial structure is evaluated?


Thanks,

Ehud.




Re: [gmx-users] QMMM

2011-03-08 Thread Jack Shultz
Good luck. I followed the instructions and was not successful.

On Tue, Mar 8, 2011 at 12:48 AM, Haresh  wrote:

> Hello everyone,
>
> I want install gromacs with mopac7 for qmmm.
>
> Can you guide me for installation procedure
>
> Thank you.
>
>

Re: [gmx-users] Instantaneous Square Displacement

2011-03-08 Thread Justin A. Lemkul



Mark Abraham wrote:

On 8/03/2011 3:01 AM, Jennifer Williams wrote:


Hi,

I am writing a paper where I describe that gas molecules move inside a 
pore and then stick for long periods of time in occlusions in the pore 
wall.


A reviewer has mentioned that I could illustrate this effect by using 
"instantaneous square-displacement".


I have already produced MSD vs time plots and used them to obtain the 
self diffusion coefficient. Can someone shed some light on how I can 
obtain the instantaneous square displacement in gromacs?


I have no idea what "ISD" means, and Google doesn't know either :) 
Perhaps they want to see the diffusion of a single molecule?




Searching for "instantaneous square displacement" turns up very little (3 
results), but the last seems to be what you need, as long as this person is correct:


http://smartech.gatech.edu/bitstream/handle/1853/13994/bai_xianming_200612_phd.pdf?sequence=1

Section 2.3.3.
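If it does simply mean the squared displacement of individual molecules as a
function of time (rather than the time-origin-averaged MSD), one possible
starting point, assuming an index group for each molecule of interest, is to
dump the coordinates and compute |r(t) - r(0)|^2 from them in post-processing:

g_traj -f traj.xtc -s topol.tpr -n index.ndx -ox coord.xvg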

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] parallel running

2011-03-08 Thread Esztermann, Ansgar

On Mar 8, 2011, at 12:00 , mohsen ramezanpour wrote:
> 
>> > Besides when I used the following command I get an executeable Error:
>> > mpirun   -np   8   mdrun_mpi -deffnm   output  &
>> 
>> What is the error message?
> 
> the Error is: 
> Failed to find the following executable:
> 
> Host:   compute-0-4.local
> Executable: mdrun_mpi
> 
> Cannot continue.

Is mdrun_mpi available on compute-0-4? If so, it's just a matter of using the 
right path: your shell knows where to look for the executable, but mpirun does 
not. Try

mpirun -np 8 `which mdrun_mpi` -deffnm output &

instead. Note the "backticks" (`).
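To check whether the binary is actually present on a compute node (the hostname
here is just an example):

ssh compute-0-4 'which mdrun_mpi'

If that prints nothing, an MPI-enabled GROMACS build has to be installed in a
location visible to all nodes (for 4.5.x, e.g. a build configured with MPI
support such as cmake -DGMX_MPI=ON, or whatever your administrators prefer)
before mpirun can start it there.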

A.
-- 
Ansgar Esztermann
DV-Systemadministration
Max-Planck-Institut für biophysikalische Chemie, Abteilung 105



Re: [gmx-users] parallel running

2011-03-08 Thread Esztermann, Ansgar

>> You don't use qsub or bsub? 
> 
> No,What is these?How can I prepare and use them?

They are commands to submit jobs to batch systems.


A.
-- 
Ansgar Esztermann
DV-Systemadministration
Max-Planck-Institut für biophysikalische Chemie, Abteilung 105



Re: [gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
On Tue, Mar 8, 2011 at 1:40 PM, Esztermann, Ansgar <
ansgar.eszterm...@mpi-bpc.mpg.de> wrote:

>
> On Mar 8, 2011, at 10:26 , mohsen ramezanpour wrote:
>
> > 4- nohup   mpirun  -np  8   mdrun  -deffnm  output   &
> >
> > The result is running mdrun on one node(compute-0-1) (on its 4 CPUs)
>
> That's just as it is supposed to be.
>
> > Besides when I used the following command I get an executeable Error:
> > mpirun   -np   8   mdrun_mpi -deffnm   output  &
>
> What is the error message?
>
> the Error is:

> Failed to find the following executable:
>
> Host:   compute-0-4.local
> Executable: mdrun_mpi
>
> Cannot continue.
>
Please let me know how I can solve this problem.
Thanks in advance



> A.
>
> --
> Ansgar Esztermann
> DV-Systemadministration
> Max-Planck-Institut für biophysikalische Chemie, Abteilung 105
>

[gmx-users] g_energy inconsistent results

2011-03-08 Thread Ehud Schreiber
Dear Gromacs users,

 

I am working with version 4.5.3, using the opls-aa forcefield in an
implicit solvent, all-vs-all setting:

 

pdb2gmx -ter -ff oplsaa -water none -f file.pdb

 

I am energy-minimizing structures in 3 stages (steep, cg and l-bfgs).
The last stage is the following:

 

grompp -f em3.mdp -p topol.top -c em2.gro -t em2.trr -o em3.tpr -po
em3.mdout.mdp

mdrun -nice 0 -v -pd -deffnm em3

g_energy -s em3.tpr -f em3.edr -o em3.potential_energy.xvg 

 

where the mdp file is:

 

;;; em3.mdp ;;;

integrator   = l-bfgs

nsteps   = 5

implicit_solvent = GBSA

gb_algorithm = Still 

sa_algorithm = Ace-approximation 

pbc  = no

rgbradii = 0 

ns_type  = simple

nstlist  = 0

rlist= 0

coulombtype  = cut-off

rcoulomb = 0

vdwtype  = cut-off

rvdw = 0

nstcalcenergy= 1

nstenergy= 1000

emtol= 0 

;;;

 

The last line in the em3.potential_energy.xvg file should give the
(potential) energy of the minimized structure em3.gro .

 

I wish also to compute the potential energy of .gro files in general,
not necessarily obtained from a simulation. For that, I prepared a .mdp
file for a degenerate energy minimization, having 0 steps, designed just
to give the status of the file:

 

;;; status.mdp ;;;

integrator   = l-bfgs

nsteps   = 0

implicit_solvent = GBSA

gb_algorithm = Still

sa_algorithm = Ace-approximation

pbc  = no

rgbradii = 0

ns_type  = simple

nstlist  = 0

rlist= 0

coulombtype  = cut-off

rcoulomb = 0

vdwtype  = cut-off

rvdw = 0

nstcalcenergy= 1

nstenergy= 1

emtol= 0

;;;

 

The only changes from the former .mdp file are in nsteps and nstenergy.

 

However, when I run this potential energy status run on em3.gro itself,

 

grompp -f status.mdp -p topol.top -c em3.gro -o status.tpr -po
status.mdout.mdp

mdrun -nice 0 -v -pd -deffnm status

g_energy -s status.tpr -f status.edr -o status.potential_energy.xvg

 

and look at the (single) energy line in status.potential_energy.xvg I
find that the energy does not agree with the one obtained during
minimization (it's higher by some tens of kJ/mol).

 

What am I doing wrong? How should one reliably find the energy of a
given .gro file?

 

Moreover, when changing the integrator in status.mdp to steep, the results
also change dramatically - why should the algorithm matter if no steps
are performed and only the initial structure is evaluated?

 

Thanks,

Ehud.

 


Re: [gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
On Tue, Mar 8, 2011 at 1:36 PM, Jianguo Li  wrote:

> You don't use qsub or bsub?
>

No. What are these? How can I prepare and use them?
Thanks in advance


> usually you should submit a script file containing the gromacs command,
> then bsub/qsub will allocate the required resource to your job.
> Jianguo
> --
> *From:* mohsen ramezanpour 
> *To:* Discussion list for GROMACS users 
> *Sent:* Tuesday, 8 March 2011 17:26:19
> *Subject:* [gmx-users] parallel running
>
> Dear All
>
> I want to run gromacs in parallel on cluster.for this I follow below steps:
> 1-I connect to a node with ssh comand,fro example: ssh compute-o-1
> 2-cd scratch
> 3-grompp -f   md.mdp-c   input.gro-o   output.tpr   -p
> topol.top -n  index.ndx
> 4- nohup   mpirun  -np  8   mdrun  -deffnm  output   &
>
> The result is running mdrun on one node(compute-0-1) (on its 4 CPUs)
> Besides when I used the following command I get an executeable Error:
> mpirun   -np   8   mdrun_mpi -deffnm   output  &
>
> The Error is related to mdrun_mpi
> I think is related to my cluster,because both of above commands work in my
> laptop
>
> Please let me know how can I run mdrun on all of CPUs of my cluster.
> Thanks in advance
> Mohsen
>
>

Re: [gmx-users] RMSD truncation Restart simulation problems

2011-03-08 Thread Henri Mone
Hi All, hi Mark,
Here are some more details. The outputs and error messages are
attached at the end of the e-mail. After truncation I get the error
message [1a]: GROMACS has problems with the checksum of the trr files.
After truncation the trajectories (xtc, trr) all have the same length of
27752 frames [1b]. All the edr files end at the same time, 277518 ps [1b].
The cpt files used after truncation have step = 138762700 and
t = 277525.40 [1c].
Before truncation I got the error message [2]: GROMACS complains that
the 32 subsystems are not compatible.
Does anyone have an idea what is going wrong?

Thanks,
Henri



1a: AFTER TRUNCATION: ERROR MESSAGE
Reading checkpoint file state1.cpt generated: Thu Jan 27 02:19:50 2011
  #PME-nodes mismatch,
current program: -1
checkpoint file: 0
Reading checkpoint file state2.cpt generated: Thu Jan 27 02:19:50 2011
  #PME-nodes mismatch,
current program: -1
checkpoint file: 0
Gromacs binary or parallel settings not identical to previous run.
Continuation is exact, but is not guaranteed to be binary identical.
...
---
Program mdrun_mpi, VERSION 4.5.3
Source code file: checkpoint.c, line: 1767
Fatal error:
Can't read 1048576 bytes of 'traj1.trr' to compute checksum. The file
has been replaced or its contents has been modified.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
---
Program mdrun_mpi, VERSION 4.5.3
Source code file: checkpoint.c, line: 1767
Fatal error:
Can't read 1048576 bytes of 'traj2.trr' to compute checksum. The file
has been replaced or its contents has been modified.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
Error on node 1, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 1 out of 32
gcq#307: "Good Music Saves your Soul" (Lemmy)
[n030212:18418] MPI_ABORT invoked on rank 1 in communicator
MPI_COMM_WORLD with errorcode -1



1b: AFTER TRUNCATION: XTC TRR
$ gmxcheck -f traj0.xtc
Checking file traj0.xtc
Reading frame       0 time    0.000
# Atoms  224
Precision 0.001 (nm)
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda            0
Coords        27752    10
Velocities        0
Forces            0
Box           27752    10
...
$ gmxcheck -f traj31.xtc
Checking file traj31.xtc
Reading frame       0 time    0.000
# Atoms  224
Precision 0.001 (nm)
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda            0
Coords        27752    10
Velocities        0
Forces            0
Box           27752    10

$ gmxcheck -f traj0.trr
Checking file traj0.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10
$ gmxcheck -f traj1.trr
Checking file traj1.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10
...
$ gmxcheck -f traj31.trr
Checking file traj31.trr
trn version: GMX_trn_file (single precision)
Reading frame       0 time    0.000
# Atoms  6647
Reading frame   27000 time 270000.000
Item        #frames Timestep (ps)
Step          27752    10
Time          27752    10
Lambda        27752    10
Coords        27752    10
Velocities    27752    10
Forces            0
Box           27752    10

$ eneconv -f ener0.edr
Reading energy frame      0 time    0.000
Continue writing frames from t=0, step=0
Last energy frame read 138759 time 277518.000
Last step written from ener0.edr: t 277518, step 138759000
Last frame written was at step 138759000, time 277518.00
Wrote 138760 frames
...
$ eneconv -f ener31.edr
Reading energy frame      0 time    0.000
Continue writing frames from t=0, step=0
Last energy frame read 138759 time 277518.000
Last step written from ener31.edr: t 277518, step 138759000
Last frame written was at step 138759000, time 277518.00
Wrote 138760 frames





1c: AFTER TRUNCATION: CPT
state0.cpt:
generation time = Thu Jan 27 02:19:50 2011
step = 138762700
t = 277525.40
...
state31.cpt:
generation time = Thu Jan 27 02:19:50 2011
step = 138762700
t = 277525.40

Re: [gmx-users] LINCS WARNING after good minimization and equilibration (NPT and NVT)

2011-03-08 Thread Yulian Gavrilov
Thank you! I will try to change something and write to you about the result.


-- 

Sincerely,

Yulian Gavrilov

[gmx-users] OPLS force field for RNA nucleotides for protein RNA simulation

2011-03-08 Thread maria goranovic
Hello

I am running a protein-RNA simulation and was unable to find OPLS-AA
topologies for RNA nucleotides. I am aware that AMBER or CHARMM are the best
force fields for nucleotides, but my protein-only simulations were done in
OPLS. Can I get any help with OPLS-AA topologies for, say, GMP, compatible
with GROMACS v4.5.3? If these are not available, I will try to make them,
but where can I find a starting topology in the first place?

Maria


Maria G.
Technical University of Denmark
Copenhagen

Re: [gmx-users] parallel running

2011-03-08 Thread Esztermann, Ansgar

On Mar 8, 2011, at 10:26 , mohsen ramezanpour wrote:

> 4- nohup   mpirun  -np  8   mdrun  -deffnm  output   &
> 
> The result is running mdrun on one node(compute-0-1) (on its 4 CPUs)

That's just as it is supposed to be.

> Besides when I used the following command I get an executeable Error:
> mpirun   -np   8   mdrun_mpi -deffnm   output  &

What is the error message?


A.

-- 
Ansgar Esztermann
DV-Systemadministration
Max-Planck-Institut für biophysikalische Chemie, Abteilung 105



Re: [gmx-users] parallel running

2011-03-08 Thread Jianguo Li
You don't use qsub or bsub? 
Usually you should submit a script file containing the GROMACS command; then 
bsub/qsub will allocate the required resources to your job. 
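A minimal sketch of such a script for a PBS/Torque-style queue (the job name,
resource line, and core count are only placeholders; the exact directives
depend on your cluster):

#!/bin/bash
#PBS -N gromacs_md
#PBS -l nodes=2:ppn=4
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
mpirun -np 8 mdrun_mpi -deffnm output

submitted with something like: qsub job.sh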

Jianguo



From: mohsen ramezanpour 
To: Discussion list for GROMACS users 
Sent: Tuesday, 8 March 2011 17:26:19
Subject: [gmx-users] parallel running

Dear All

I want to run GROMACS in parallel on a cluster. For this I follow the steps below:
1. I connect to a node with the ssh command, for example: ssh compute-0-1
2. cd scratch
3. grompp -f md.mdp -c input.gro -o output.tpr -p topol.top -n index.ndx
4. nohup mpirun -np 8 mdrun -deffnm output &

The result is that mdrun runs on one node (compute-0-1), on its 4 CPUs.
Besides, when I use the following command I get an executable error:
mpirun -np 8 mdrun_mpi -deffnm output &

The error is related to mdrun_mpi.
I think it is related to my cluster, because both of the above commands work on 
my laptop.

Please let me know how I can run mdrun on all of the CPUs of my cluster.
Thanks in advance
Mohsen



[gmx-users] parallel running

2011-03-08 Thread mohsen ramezanpour
Dear All

I want to run GROMACS in parallel on a cluster. For this I follow the steps below:
1. I connect to a node with the ssh command, for example: ssh compute-0-1
2. cd scratch
3. grompp -f md.mdp -c input.gro -o output.tpr -p topol.top -n index.ndx
4. nohup mpirun -np 8 mdrun -deffnm output &

The result is that mdrun runs on one node (compute-0-1), on its 4 CPUs.
Besides, when I use the following command I get an executable error:
mpirun -np 8 mdrun_mpi -deffnm output &

The error is related to mdrun_mpi.
I think it is related to my cluster, because both of the above commands work on
my laptop.

Please let me know how I can run mdrun on all of the CPUs of my cluster.
Thanks in advance
Mohsen