Re: [gmx-users] Re: Build on OSX with 4.6beta1

2012-12-04 Thread Szilárd Páll
Hi Carlo,

I think I know now what the problem is. The CUDA compiler, nvcc, uses the
host compiler to compile CPU code, which is generated as C++, and therefore
a C++ compiler is needed. However, up until CUDA 5.0 nvcc did not recognize
the Intel C++ compiler (icpc) and would only accept icc as the host
compiler. As all host compilers supported by nvcc happily compile C++ code
even when invoked as the C compiler (gcc, icc), I decided to pass the C
compiler as the host compiler to nvcc. Although this has worked for a
bunch of gcc versions as well as icc 12/13 on Linux, for some strange
reason, in your case (and probably on Mac OS in general) this did not
result in correct linking.

I still can't tell whether Roland's fix is what is needed or whether we could
do it some other way.

Thanks for the information; a fix will be implemented in an upcoming beta.

Cheers,

--
Szilárd


On Tue, Dec 4, 2012 at 8:02 PM, Carlo Camilloni
wrote:

> Dear Szilárd,
>
> My cmake version is 2.8.9. The error is:
>
> Linking C shared library libgmx.dylib
> Undefined symbols for architecture x86_64:
>  "std::terminate()", referenced from:
>  do_memtest(unsigned int, int, int) in
> libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "typeinfo for int", referenced from:
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "___cxa_allocate_exception", referenced from:
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "___cxa_begin_catch", referenced from:
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "___cxa_end_catch", referenced from:
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "___cxa_throw", referenced from:
>  memtestState::allocate(unsigned int) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
>  "___gxx_personality_v0", referenced from:
>  Dwarf Exception Unwind Info (__eh_frame) in
> libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
>  Dwarf Exception Unwind Info (__eh_frame) in
> libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
> ld: symbol(s) not found for architecture x86_64
> clang: error: linker command failed with exit code 1 (use -v to see
> invocation)
> make[2]: *** [src/gmxlib/libgmx.6.dylib] Error 1
> make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
> make: *** [all] Error 2
>
> and the error is the same whether I compile with clang/clang++,
> with gcc-mp-4.7/g++-mp-4.7 (in order to get the OpenMP parallelisation that
> is not supported by clang), or with gcc/g++ (compiling in the simplest
> fashion, just with cmake ../ -DGMX_CPU_ACCELERATION=SSE4.1).
>
> The code compiled with all of the above compilers, using the standard gcc
> for the nvcc part, works and gives reproducible results on simple systems.
>
> Again, in all of the above cases it is enough to change the C compiler to a
> C++ compiler in src/gmxlib/CMakeFiles/gmx.dir/link.txt.
>
> Best,
> Carlo
>
>
>
> >
> > Message: 1
> > Date: Mon, 3 Dec 2012 16:10:11 +0100
> > From: Szilárd Páll 
> > Subject: Re: [gmx-users] Build on OSX with 4.6beta1
> > To: Discussion list for GROMACS users 
> > Message-ID:
> >
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > Hi,
> >
> > I think this happens either because you have cmake 2.8.10 and the
> > host-compiler gets double-set or because something gets messed up when you
> > use clang/clang++ with gcc as the CUDA host-compiler. Could you provide
> > the exact error output you are getting as well as cmake invocation? As I
> > don't have access to such a machine myself, I'll need some help with
> > figuring out what exactly is causing the trouble.
> >
> > Btw, the CUDA Mac OS X user guide states that you need to use "The gcc
> > compiler and toolchain installed using Xcode" (
> >
> http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/index.html
> > ).
> >
> > Note that on Intel CPUs, when running on a single node, you'll get much
> > better performance (up to +30%!) if you use only OpenMP-based
> > parallelization - which is the default with up to 16 threads (on Intel).
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> > On Fri, Nov 30, 2012 at 11:01 AM, Carlo Camilloni <carlo.camill...@gmail.com> wrote:
> >
> >> Dear All,
> >>
> >> I have successfully compiled the beta1 of gromacs 4.6 on my macbook pro
> >> with mountain lion.
> >> I used the latest cuda and the clang/clang++ compilers in order to have
> >> access to the AVX instructions.
> >> mdrun works with great performances!! great job!
> >>
> >> two things:
> >>
> >> 1. the compilation was easy but not straightforward:
> >> cmake ../ -DGMX_GPU=ON
> >> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
> >> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMA

Re: [gmx-users] results of md

2012-12-04 Thread Justin Lemkul



On 12/4/12 4:15 PM, mohammad agha wrote:

Dear Justin,

Thank you very much for your response.
So it means that this problem is not important. Should I use the -reprod
option when running md.mdp? I want to obtain monodisperse micelles with an
identical Nagg (aggregation number) in one simulation, but the distribution of
sizes is broad, which is the opposite of the experimental results.


The -reprod option only turns off various optimizations to help diagnose 
potential bugs.  It is not relevant here.  MD is chaotic; even the exact same 
.tpr file on the same hardware will not necessarily produce binary identical 
results.  Hence the sampling issue.



Can you please help me with this problem?



I have no expertise in micelle simulations.  In general, either longer 
simulations or several more simulations of intermediate length will give you 
better sampling from which you can gather more statistically reliable estimates 
of experimental observables.


Note that the quality of the topology will also dictate whether or not the 
results will reflect reality, so proper force field derivation and validation is 
necessary before any results are trustworthy.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] results of md

2012-12-04 Thread mohammad agha
Dear Justin,

Thank you very much for your response.
So it means that this problem is not important. Should I use the -reprod
option when running md.mdp? I want to obtain monodisperse micelles with an
identical Nagg (aggregation number) in one simulation, but the distribution of
sizes is broad, which is the opposite of the experimental results.
Can you please help me with this problem?

Best Regards
Sara

- Original Message -
From: Justin Lemkul 
To: mohammad agha ; Discussion list for GROMACS users 

Cc: 
Sent: Wednesday, December 5, 2012 12:31 AM
Subject: Re: [gmx-users] results of md



On 12/4/12 3:57 PM, mohammad agha wrote:
> Dear Gromacs Specialists,
> 
> I ran one system (to create micelles) two times. The time of micelle
> creation was different in each run; for example, even though 5 micelles were
> created (at the end of the simulation) in both runs, the number of molecules
> present in the micelles (the aggregation number) is different.
> 
> My question is: why aren't the results of the two runs identical?
> May I ask you to answer my question, please?
> 

Simulations are rarely identical, which is why extensive sampling is 
important.  Think about micelle formation under experimental conditions.  How 
long does it take?  Seconds?  Minutes?  I am willing to bet your simulations 
are orders of magnitude shorter, by necessity.  Simulation time and number of 
simulations (replicates) are important in deriving proper statistics, but there 
is certainly no guarantee that two simulations will ever be exactly the same. 
Reproducibility is discussed very nicely on the Gromacs website.

-Justin

-- 

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] results of md

2012-12-04 Thread Justin Lemkul



On 12/4/12 3:57 PM, mohammad agha wrote:

Dear Gromacs Specialists,

I ran one system (to create micelles) two times. The time of micelle creation
was different in each run; for example, even though 5 micelles were created (at
the end of the simulation) in both runs, the number of molecules present in the
micelles (the aggregation number) is different.

My question is: why aren't the results of the two runs identical?
May I ask you to answer my question, please?



Simulations are rarely identical, which is why extensive sampling is important.
 Think about micelle formation under experimental conditions.  How long does it 
take?  Seconds?  Minutes?  I am willing to bet your simulations are orders of 
magnitude shorter, by necessity.  Simulation time and number of simulations 
(replicates) are important in deriving proper statistics, but there is certainly 
no guarantee that two simulations will ever be exactly the same. 
Reproducibility is discussed very nicely on the Gromacs website.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] results of md

2012-12-04 Thread mohammad agha
Dear Gromacs Specialists,

I ran one system (to create micelles) two times. The time of micelle creation
was different in each run; for example, even though 5 micelles were created (at
the end of the simulation) in both runs, the number of molecules present in the
micelles (the aggregation number) is different.

My question is: why aren't the results of the two runs identical?
May I ask you to answer my question, please?

Best Regards
Sara

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] system is not well equilibrated -Reg

2012-12-04 Thread Justin Lemkul



On 12/4/12 1:20 PM, venkatesh s wrote:

Respected Gromacs people,
  I got the following error at the mdrun NVT step:

1 particles communicated to PME node 1 are more than 2/3
times the cut-off out of the domain decomposition cell of their charge
group in dimension x.
This usually means that your system is not well equilibrated.

I searched the gmx-users archive regarding this error but could not find a
solution. I understand the blowing-up concept (
http://www.gromacs.org/Documentation/Terminology/Blowing_Up), but the error
arises already during the NVT/NPT stage itself. What should I do to
rectify the problem?

(I followed the protein-ligand tutorial
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/complex/)


SYSTEM INFORMATION

ff: GROMOS96 43a1 force field
ligand --> PRODRG, Antechamber (charges)
FOUR PROTEINS (USING CHAINSEP -TER) AND ONE LIGAND

TOTAL NUMBER OF ATOMS & RESIDUES
chain  #res  #atoms

   1 'A'    661   5352
   2 'B'    686   5550
   3 'Z'      4      4
   4 'C'    148   1236
   5 'D'    225   1706
   6 'E'    236   1817
   7 ligand       45



With this level of complexity, you're searching for a needle in a haystack. 
Separate each component and attempt to simulate it, starting with the ligand to 
check the integrity of its topology.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Coordinates problem

2012-12-04 Thread Justin Lemkul



On 12/4/12 1:37 PM, Marcelo Depolo wrote:

Hi everyone!


I'm running a protein-ligand simulation and I'm having a little problem
with the ligand atom coordinates. After I equilibrated the simulation box
with counter-ions, I checked my ligand with VMD and it was OK. After a
minimization step, a hydrogen atom and an oxygen atom occupy the same
xyz coordinates. This can be seen in the high Fmax value obtained in the
minimization (2.16717723680607e+06 on atom 3881).

Just for the record, I had already included the ligand coordinates in the .gro
file and its topology (.itp) in the .top file before I even filled the box with
water.

Can someone help me?



Check the topology - does the ligand successfully minimize by itself in vacuo? 
If it does, then the topology is sound and there's something wrong with how the 
ligand is placed within the protein.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: Build on OSX with 4.6beta1

2012-12-04 Thread Carlo Camilloni
Dear Szilárd,

My cmake version is 2.8.9. The error is:

Linking C shared library libgmx.dylib
Undefined symbols for architecture x86_64:
 "std::terminate()", referenced from:
 do_memtest(unsigned int, int, int) in 
libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "typeinfo for int", referenced from:
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "___cxa_allocate_exception", referenced from:
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "___cxa_begin_catch", referenced from:
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "___cxa_end_catch", referenced from:
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "___cxa_throw", referenced from:
 memtestState::allocate(unsigned int) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
 "___gxx_personality_v0", referenced from:
 Dwarf Exception Unwind Info (__eh_frame) in 
libgpu_utils.a(gpu_utils_generated_gpu_utils.cu.o)
 Dwarf Exception Unwind Info (__eh_frame) in 
libgpu_utils.a(gpu_utils_generated_memtestG80_core.cu.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [src/gmxlib/libgmx.6.dylib] Error 1
make[1]: *** [src/gmxlib/CMakeFiles/gmx.dir/all] Error 2
make: *** [all] Error 2

and the error is the same whether I compile with clang/clang++,
with gcc-mp-4.7/g++-mp-4.7 (in order to get the OpenMP parallelisation that is
not supported by clang), or with gcc/g++ (compiling in the simplest fashion,
just with cmake ../ -DGMX_CPU_ACCELERATION=SSE4.1).

The code compiled with all of the above compilers, using the standard gcc for
the nvcc part, works and gives reproducible results on simple systems.

Again, in all of the above cases it is enough to change the C compiler to a C++
compiler in src/gmxlib/CMakeFiles/gmx.dir/link.txt.

Best,
Carlo



> 
> Message: 1
> Date: Mon, 3 Dec 2012 16:10:11 +0100
> From: Szilárd Páll 
> Subject: Re: [gmx-users] Build on OSX with 4.6beta1
> To: Discussion list for GROMACS users 
> Message-ID:
>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Hi,
> 
> I think this happens either because you have cmake 2.8.10 and the
> host-compiler gets double-set or because something gets messed up when you
> use clang/clang++ with gcc as the CUDA host-compiler. Could you provide the
> exact error output you are getting as well as cmake invocation? As I don't
> have access to such a machine myself, I'll need some help with figuring out
> what exactly is causing the trouble.
> 
> Btw, the CUDA Mac OS X user guide states that you need to use "The gcc
> compiler and toolchain installed using Xcode" (
> http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/index.html
> ).
> 
> Note that on Intel CPUs, when running on a single node, you'll get much
> better performance (up to +30%!) if you use only OpenMP-based
> parallelization - which is the default with up to 16 threads (on Intel).
> 
> Cheers,
> --
> Szilárd
> 
> 
> On Fri, Nov 30, 2012 at 11:01 AM, Carlo Camilloni wrote:
> 
>> Dear All,
>> 
>> I have successfully compiled the beta1 of gromacs 4.6 on my macbook pro
>> with mountain lion.
>> I used the latest cuda and the clang/clang++ compilers in order to have
>> access to the AVX instructions.
>> mdrun works with great performances!! great job!
>> 
>> two things:
>> 
>> 1. the compilation was easy but not straightforward:
>> cmake ../ -DGMX_GPU=ON
>> -DCMAKE_INSTALL_PREFIX=/Users/carlo/Codes/gromacs-4.6/build-gpu
>> -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
>> -DCUDA_NVCC_HOST_COMPILER=/usr/bin/gcc -DCUDA_PROPAGATE_HOST_FLAGS=OFF
>> 
>> and then I had to manually edit
>> src/gmxlib/CMakeFiles/gmx.dir/link.txt
>> 
>> and change clang to clang++
>> (I noted that in many other places it was correctly set, and without this
>> change I got an error on some c++ related stuff)
>> 
>> 2. is there any way to have openmp parallelisation on osx?
>> 
>> Best,
>> Carlo
>> 
>> 
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> * Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ..

Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Szilárd Páll
On Tue, Dec 4, 2012 at 5:09 PM, Mark Abraham wrote:

> 2fs is normally considered too large a time step for stable integration
> with only bonds to hydrogen constrained, so your observation of
> non-reproducible LINCS warnings is not indicative of some other problem.
>
> Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK this
> works fine, but is inefficient.
>

With the Verlet scheme, when running on GPUs and/or at high parallelization,
nstlist = 25, or even up to 50, can give higher performance (and this is safe
to do because the buffer is automatically adjusted).

--
Szilárd


>
> Mark
>
> On Tue, Dec 4, 2012 at 3:41 PM, sebastian <
> sebastian.wa...@physik.uni-freiburg.de> wrote:
>
> > On 11/23/2012 08:29 PM, Szilárd Páll wrote:
> >
> >> Hi,
> >>
> >> On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
> >> sebastian.wa...@physik.uni-**freiburg.de<
> sebastian.wa...@physik.uni-freiburg.de>>
> >>  wrote:
> >>
> >>
> >>
> >>> Dear GROMCS user,
> >>>
> >>> I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on
> my
> >>> local desktop
> >>>
> >>>
> >>
> >> Watch out, the dirty version suffix means you have changed something in
> >> the
> >> source.
> >>
> >>
> >>
> >>
> >>> (2*GTX 670 + i7) and everything works as smooth as possible. The
> outcomes
> >>> are very reasonable and match the outcome of the 4.5.5 version without
> >>> GPU
> >>> acceleration. On our
> >>>
> >>>
> >>
> >> What does "outcome" mean? If that means performance, then something is
> >> wrong, you should see a considerable performance increase (PME,
> >> non-bonded,
> >> bondeds have all gotten a lot faster).
> >>
> >>
> >>
> >
> > With outcome I mean the trajectory not the performance.
> >
> >
> >
> >>
> >>> cluster (M2090+2*Xeon X5650) I installed the  VERSION
> >>> 4.6-dev-20121120-0290409. Using the same .tpr file used for runs with
> my
> >>> desktop I get lincs warnings that the watermolecules can't be settled.
> >>>
> >>>
> >>>
> >> The group kernels have not "stabilized" yet and there have been some
> fixes
> >> lately. Could you please try the latest version and check again.
> >>
> >>
> >
> > I installed the beta1 release and still the water can not be settled.
> >
> >  Additionally, you could try running our regression tests suite (
> >> git.gromacs.org/**regressiontests<
> http://git.gromacs.org/regressiontests>)
> >> to see if at least the tests pass with the
> >> binaries you compiled
> >> Cheers,
> >> --
> >> Szilárd
> >>
> >>
> >>
> >>
> >
> > Cheers,
> >
> > Sebastian
> >
> >
> >  My .mdp file looks like:
> >>>
> >>>   ;
> >>> title= ttt
> >>> cpp =  /lib/cpp
> >>> include = -I../top
> >>> constraints =  hbonds
> >>> integrator  =  md
> >>> cutoff-scheme   =  verlet
> >>>
> >>> ;define  =  -DPOSRES; for possition restraints
> >>>
> >>> dt  =  0.002; ps !
> >>> nsteps  =  1  \
> >>> nstcomm =  25; frequency for center of mass
> >>> motion
> >>> removal
> >>> nstcalcenergy   =  25
> >>> nstxout =  10; frequency for writting the
> >>> trajectory
> >>> nstvout =  10; frequency for writting the
> >>> velocity
> >>> nstfout =  10; frequency to write forces to
> >>> output trajectory
> >>> nstlog  =  1; frequency to write the log
> file
> >>> nstenergy   =  1; frequency to write energies
> to
> >>> energy file
> >>> nstxtcout   =  1
> >>>
> >>> xtc_grps=  System
> >>>
> >>> nstlist =  25; Frequency to update the neighbor
> >>> list
> >>> ns_type =  grid; Make a grid in the box and
> only
> >>> check atoms in neighboring grid cells when constructing a new neighbor
> >>> rlist   =  1.4; cut-off distance for the
> >>> short-range neighbor list
> >>>
> >>> coulombtype =  PME; Fast Particle-Mesh Ewald
> >>> electrostatics
> >>> rcoulomb=  1.4; cut-off distance for the
> coulomb
> >>> field
> >>> vdwtype =  cut-off
> >>> rvdw=  1.4; cut-off distance for the vdw
> >>> field
> >>> fourierspacing  =  0.12; The maximum grid spacing for
> the
> >>> FFT grid
> >>> pme_order   =  6; Interpolation order for PME
> >>> optimize_fft=  yes
> >>> pbc=  xyz
> >>> Tcoupl  =  v-rescale
> >>> tc-grps =  System
> >>> tau_t   =  0.1
> >>> ref_t   =  300
> >>>
> >>> energygrps  =  Protein Non-Protein
> >>>
> >>> Pcoupl  =  no;berendsen
> >>> tau_p   =  0.1
> >>> compressibility =  4.5e-5
> >>> ref_p   =  1.0
> >>> nstpcouple=  5
> >>> refcoord_scaling=  all
> >>> Pcoupltype  =  isotropic
> >>

Re: [gmx-users] Energy estimations of the protein-ligand complexes

2012-12-04 Thread James Starlight
Mark,

I was talking about estimating the energy of the ligand compounds
(small hormone-like compounds) embedded in the receptor's binding
pocket.

In other words, I want to simulate the ligands in water as well as in
the receptor complex and compare their potential energy values in both
cases. From such an experiment I want to check the hypothesis that in the
receptor interior the ligands are always in more strained conformations
(with a higher potential energy) than in the unbound state.

James

2012/12/4, Mark Abraham :
> What components will that potential energy have?
>
> Mark
>
> On Mon, Dec 3, 2012 at 6:54 PM, James Starlight
> wrote:
>
>> Dear Gromacs Users!
>>
>> I'm simulating different complexes of the receptors with different
>> ligands.
>>
>> For each complex I want to determine the potential energy (not the binding
>> energy) of the ligand molecule. In other words, I want to check in which
>> complex the ligand is in a more or less strained conformation due to the
>> influence of the binding pocket. How could I do such estimations?
>>
>> Thanks for help
>>
>>
>> James
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> * Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Thomas Piggot

Hi Mark,

I am not so sure regarding your point about a 2 fs timestep and
constraints = hbonds. Typical bond vibrations between heavy atoms have
roughly a 20 fs period (see Table I of the Feenstra et al. paper you
mentioned previously), so a timestep of 2 fs should be fine with
constraints = hbonds. Whether this is more appropriate than constraining
all bonds is a different matter, but I do not believe the simulations
should have problems because of this setting.


Cheers

Tom

Mark Abraham wrote:

I can't know why you observe things working, in part because you haven't
said what's in this system :-) Pure inflexible water would be fine because
the constraints setting doesn't matter to water.

Moreover, whether integration issues cause things to crash can depend on
non-reproducible numerical effects, and the phase of the moon when you
compiled can determine that. You may well have a real problem with GROMACS
on GPUs that might be a GROMACS bug, but at the moment we can't be sure of
that because of this other issue.

You claim your settings are "rather typical." They are, except that
constraints = hbonds and a 2fs time step are inconsistent because of the
typical time scale of the vibration of the bonds between heavy atoms. This
has been a frequent topic of conversation on this list, despite being
knowledge at least a decade old - e.g. ref 118 of GROMACS manual, Feenstra
et al.

Mark

On Tue, Dec 4, 2012 at 5:30 PM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:


On 12/04/2012 05:09 PM, Mark Abraham wrote:


2fs is normally considered too large a time step for stable integration
with only bonds to hydrogen constrained, so your observation of
non-reproducible LINCS warnings is not indicative of some other problem.



Sorry, but why is this whole setup running on my local desktop with GPUs?
As far as I know this is a rather typical set of parameters.
The only difference what I can think of is that gromacs was compiled with
intel and mkl libs on the cluster there as it was compiled with gcc and
fftw3 libs on the local desktop.

Sebastian

 Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK

this
works fine, but is inefficient.

Mark

On Tue, Dec 4, 2012 at 3:41 PM, sebastian<
sebastian.wa...@physik.uni-**freiburg.de>
 wrote:




On 11/23/2012 08:29 PM, Szilárd Páll wrote:




Hi,

On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
sebastian.wa...@physik.uni-**f**reiburg.de <
sebastian.waltz@**physik.uni-freiburg.de
  wrote:






Dear GROMCS user,

I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on
my
local desktop





Watch out, the dirty version suffix means you have changed something in
the
source.







(2*GTX 670 + i7) and everything works as smooth as possible. The
outcomes
are very reasonable and match the outcome of the 4.5.5 version without
GPU
acceleration. On our





What does "outcome" mean? If that means performance, then something is
wrong, you should see a considerable performance increase (PME,
non-bonded,
bondeds have all gotten a lot faster).






With outcome I mean the trajectory not the performance.








cluster (M2090+2*Xeon X5650) I installed the  VERSION
4.6-dev-20121120-0290409. Using the same .tpr file used for runs with
my
desktop I get lincs warnings that the watermolecules can't be settled.






The group kernels have not "stabilized" yet and there have been some
fixes
lately. Could you please try the latest version and check again.





I installed the beta1 release and still the water can not be settled.

  Additionally, you could try running our regression tests suite (



git.gromacs.org/regressiontests


)

to see if at least the tests pass with the
binaries you compiled
Cheers,
--
Szilárd







Cheers,

Sebastian


  My .mdp file looks like:



   ;

title= ttt
cpp =  /lib/cpp
include = -I../top
constraints =  hbonds
integrator  =  md
cutoff-scheme   =  verlet

;define  =  -DPOSRES; for possition restraints

dt  =  0.002; ps !
nsteps  =  1  \
nstcomm =  25; frequency for center of mass
motion
removal
nstcalcenergy   =  25
nstxout =  10; frequency for writting the
trajectory
nstvout =  10; frequency for writting the
velocity
nstfout =  10; frequency to write forces to
output trajectory
nstlog  =  1; frequency to write the log
file
nstenergy   =  1; frequency to write energies
to
energy file
nstxtcout   =  1

xtc_grps=  System

nstlist =  25; Frequency to update the neighbor
list
ns_type =  grid; Make a gr

[gmx-users] 9-oder cosine series of dihedral angle

2012-12-04 Thread Tom
Dear Gmx Developer and Users,

*Is it possible for gromacs to use an 8th-order or 9th-order cosine series for
dihedral angle computations?*

From the manual (page 125, the table of Intramolecular Interaction
Definitions), there is one entry:
--
interaction type   directive   # at.   f. tp
proper dih. multi  dihedrals     4       9


There is no detailed explanation about the format of these assignments in
ffbonded.itp of oplsaa.ff.
*Can anyone explain the format for entering these parameters for a 9th-order
cosine series dihedral angle?*

Thanks,

Tom
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Roland Schulz
On Tue, Dec 4, 2012 at 11:30 AM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:

> On 12/04/2012 05:09 PM, Mark Abraham wrote:
> > 2fs is normally considered too large a time step for stable integration
> > with only bonds to hydrogen constrained, so your observation of
> > non-reproducible LINCS warnings is not indicative of some other problem.
> >
>
> Sorry, but why is this whole setup running on my local desktop with
> GPUs? As far as I know this is a rather typical set of parameters.
>
I agree with you that normally a well-equilibrated system should not crash
because of that. But the integration error is relatively large with that
setting, so you might want to change it for accuracy. Also, you don't want to
use a system like that for bug hunting because, as Mark said, the stability can
depend on slight numerical differences. For bug hunting you want a system which
only crashes if something is wrong, and not also sometimes when everything is
OK.


> The only difference what I can think of is that gromacs was compiled
> with intel and mkl libs on the cluster there as it was compiled with gcc
> and fftw3 libs on the local desktop.
>
Did you run the regression tests as suggested by Szilard? Can you narrow it
down to one component? E.g. by compiling a version on the cluster with
gcc+fftw (to make sure it is not any other cluster component, e.g. the MPI
library) and a version with icc+fftw (to see whether it correlates with
either icc or mkl).

Roland


>
> Sebastian
>
> > Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK
> this
> > works fine, but is inefficient.
> >
> > Mark
> >
> > On Tue, Dec 4, 2012 at 3:41 PM, sebastian<
> > sebastian.wa...@physik.uni-freiburg.de>  wrote:
> >
> >
> >> On 11/23/2012 08:29 PM, Szilárd Páll wrote:
> >>
> >>
> >>> Hi,
> >>>
> >>> On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
> >>> sebastian.wa...@physik.uni-**freiburg.de<
> sebastian.wa...@physik.uni-freiburg.de>>
> >>>   wrote:
> >>>
> >>>
> >>>
> >>>
>  Dear GROMCS user,
> 
>  I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on
> my
>  local desktop
> 
> 
> 
> >>> Watch out, the dirty version suffix means you have changed something in
> >>> the
> >>> source.
> >>>
> >>>
> >>>
> >>>
> >>>
>  (2*GTX 670 + i7) and everything works as smooth as possible. The
> outcomes
>  are very reasonable and match the outcome of the 4.5.5 version without
>  GPU
>  acceleration. On our
> 
> 
> 
> >>> What does "outcome" mean? If that means performance, then something is
> >>> wrong, you should see a considerable performance increase (PME,
> >>> non-bonded,
> >>> bondeds have all gotten a lot faster).
> >>>
> >>>
> >>>
> >>>
> >> With outcome I mean the trajectory not the performance.
> >>
> >>
> >>
> >>
> >>>
>  cluster (M2090+2*Xeon X5650) I installed the  VERSION
>  4.6-dev-20121120-0290409. Using the same .tpr file used for runs with
> my
>  desktop I get lincs warnings that the watermolecules can't be settled.
> 
> 
> 
> 
> >>> The group kernels have not "stabilized" yet and there have been some
> fixes
> >>> lately. Could you please try the latest version and check again.
> >>>
> >>>
> >>>
> >> I installed the beta1 release and still the water can not be settled.
> >>
> >>   Additionally, you could try running our regression tests suite (
> >>
> >>> git.gromacs.org/**regressiontests<
> http://git.gromacs.org/regressiontests>)
> >>> to see if at least the tests pass with the
> >>> binaries you compiled
> >>> Cheers,
> >>> --
> >>> Szilárd
> >>>
> >>>
> >>>
> >>>
> >>>
> >> Cheers,
> >>
> >> Sebastian
> >>
> >>
> >>   My .mdp file looks like:
> >>
> ;
>  title= ttt
>  cpp =  /lib/cpp
>  include = -I../top
>  constraints =  hbonds
>  integrator  =  md
>  cutoff-scheme   =  verlet
> 
>  ;define  =  -DPOSRES; for possition restraints
> 
>  dt  =  0.002; ps !
>  nsteps  =  1  \
>  nstcomm =  25; frequency for center of mass
>  motion
>  removal
>  nstcalcenergy   =  25
>  nstxout =  10; frequency for writting the
>  trajectory
>  nstvout =  10; frequency for writting the
>  velocity
>  nstfout =  10; frequency to write forces
> to
>  output trajectory
>  nstlog  =  1; frequency to write the log
> file
>  nstenergy   =  1; frequency to write energies
> to
>  energy file
>  nstxtcout   =  1
> 
>  xtc_grps=  System
> 
>  nstlist =  25; Frequency to update the
> neighbor
>  list
>  ns_type =  grid; Make a grid in the box and
> on

Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Mark Abraham
I can't know why you observe things working, in part because you haven't
said what's in this system :-) Pure inflexible water would be fine because
the constraints setting doesn't matter to water.

Moreover, whether integration issues cause things to crash can depend on
non-reproducible numerical effects, and the phase of the moon when you
compiled can determine that. You may well have a real problem with GROMACS
on GPUs that might be a GROMACS bug, but at the moment we can't be sure of
that because of this other issue.

You claim your settings are "rather typical." They are, except that
constraints = hbonds and a 2fs time step are inconsistent because of the
typical time scale of the vibration of the bonds between heavy atoms. This
has been a frequent topic of conversation on this list, despite being
knowledge at least a decade old - e.g. ref 118 of GROMACS manual, Feenstra
et al.

Mark

On Tue, Dec 4, 2012 at 5:30 PM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:

> On 12/04/2012 05:09 PM, Mark Abraham wrote:
>
>> 2fs is normally considered too large a time step for stable integration
>> with only bonds to hydrogen constrained, so your observation of
>> non-reproducible LINCS warnings is not indicative of some other problem.
>>
>>
>
> Sorry, but why is this whole setup running on my local desktop with GPUs?
> As far as I know this is a rather typical set of parameters.
> The only difference what I can think of is that gromacs was compiled with
> intel and mkl libs on the cluster there as it was compiled with gcc and
> fftw3 libs on the local desktop.
>
> Sebastian
>
>  Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK
>> this
>> works fine, but is inefficient.
>>
>> Mark
>>
>> On Tue, Dec 4, 2012 at 3:41 PM, sebastian<
>> sebastian.wa...@physik.uni-**freiburg.de>
>>  wrote:
>>
>>
>>
>>> On 11/23/2012 08:29 PM, Szilárd Páll wrote:
>>>
>>>
>>>
 Hi,

 On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
 sebastian.wa...@physik.uni-**f**reiburg.de <
 sebastian.waltz@**physik.uni-freiburg.de
 >>

   wrote:





> Dear GROMCS user,
>
> I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on
> my
> local desktop
>
>
>
>
 Watch out, the dirty version suffix means you have changed something in
 the
 source.






> (2*GTX 670 + i7) and everything works as smooth as possible. The
> outcomes
> are very reasonable and match the outcome of the 4.5.5 version without
> GPU
> acceleration. On our
>
>
>
>
What does "outcome" mean? If that means performance, then something is
 wrong, you should see a considerable performance increase (PME,
 non-bonded,
 bondeds have all gotten a lot faster).





>>> With outcome I mean the trajectory not the performance.
>>>
>>>
>>>
>>>
>>>


> cluster (M2090+2*Xeon X5650) I installed the  VERSION
> 4.6-dev-20121120-0290409. Using the same .tpr file used for runs with
> my
> desktop I get lincs warnings that the watermolecules can't be settled.
>
>
>
>
>
 The group kernels have not "stabilized" yet and there have been some
 fixes
lately. Could you please try the latest version and check again.




>>> I installed the beta1 release and still the water can not be settled.
>>>
>>>   Additionally, you could try running our regression tests suite (
>>>
>>>
 git.gromacs.org/regressiontests
 
 >)

 to see if at least the tests pass with the
 binaries you compiled
 Cheers,
 --
 Szilárd






>>> Cheers,
>>>
>>> Sebastian
>>>
>>>
>>>   My .mdp file looks like:
>>>
>>>
;
> title= ttt
> cpp =  /lib/cpp
> include = -I../top
> constraints =  hbonds
> integrator  =  md
> cutoff-scheme   =  verlet
>
> ;define  =  -DPOSRES; for possition restraints
>
> dt  =  0.002; ps !
> nsteps  =  1  \
> nstcomm =  25; frequency for center of mass
> motion
> removal
> nstcalcenergy   =  25
> nstxout =  10; frequency for writting the
> trajectory
> nstvout =  10; frequency for writting the
> velocity
> nstfout =  10; frequency to write forces to
> output trajectory
> nstlog  =  1; frequency to write the log
> file
> nstenergy   =  1; frequency to write energies
> to
> energy file
> nstxtcout   = 

[gmx-users] Dihedral angle restraints

2012-12-04 Thread Antonia Mey
Dear Gromacs users,

I have a question regarding restraints on dihedral angles.
I would like to restrain the phi and psi dihedral angles and, on top of that,
increase the potential barrier which needs to be overcome in order to go from
one conformation to the next. (I.e. I want to add an additional potential term
for all common dihedral minima that can be identified in the Ramachandran
diagram, say for \beta, \alpha', \alpha_R and PII.)

What I understand so far:
I can add a restraining potential of the form given by equation 4.76 in the 
user manual. 
In the topology file I specify the atoms and angle I want like this:
[ dihedral_restraints ]
; ai   ajakal  type  label  phi  dphi  kfac  power
;phi
  59 78 1  1  -70 10 1 2   
;psi  
  9171516 1  1  150 10 1 2

and in the mdp file I have added this line:

;dihedral restraints
dihre   =  yes
dihre_fc=  100  ;(adjustable accordingly)


Ideally I want multiple restraints for these angles such that for phi I  may 
have a restraint at -170 and -80 and for psi around 20 and 160 or something 
along those lines. Do I then just use the function type of 9 in order to do 
that?

As mentioned above, on top of that I want the minima of these angles to be 
embedded in a further potential, so that on top of the restraints I have an 
expression like this for the dihedral angles:

U(\phi, \psi) = 0.5*k(d\phi-\phi)^2 + \sum_{i=1}^n A_{\psi i} exp(-(\psi-\psi_i)^2 / 2\sigma^2_{\psi,i})
as was defined in this reference:
http://www.pnas.org/content/102/39/13749


Is it possible to achieve this without tampering with the force field? I know
exactly where I want to place the minima of the potential and how strong I want
them to be. Is the tabulated method for dihedrals the correct approach?

I would simply evaluate my U(\phi,\psi) at intervals of 10 degrees between -180
and 180 degrees and evaluate the derivative of that function. Then I would save
them in a file table_d0.xvg in the following format:
angle U(\phi,\psi) U'(\phi,\psi)
:
:
and so on.

I would do this for, say, my 4 restraints corresponding to structures like
\beta-sheet, \alpha', \alpha_R and PII (so I end up with 4 files named
table_d0, table_d1, table_d2 and table_d3).
Before I dive into doing something that isn't correct I wanted to clarify this.

Any help would be greatly appreciated!

Best,
Antonia
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread sebastian

On 12/04/2012 05:09 PM, Mark Abraham wrote:

2fs is normally considered too large a time step for stable integration
with only bonds to hydrogen constrained, so your observation of
non-reproducible LINCS warnings is not indicative of some other problem.
   


Sorry, but why is this whole setup running fine on my local desktop with
GPUs? As far as I know this is a rather typical set of parameters.
The only difference I can think of is that gromacs was compiled with the
Intel compiler and MKL libs on the cluster, whereas it was compiled with gcc
and fftw3 libs on the local desktop.


Sebastian


Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK this
works fine, but is inefficient.

Mark

On Tue, Dec 4, 2012 at 3:41 PM, sebastian<
sebastian.wa...@physik.uni-freiburg.de>  wrote:

   

On 11/23/2012 08:29 PM, Szilárd Páll wrote:

 

Hi,

On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
sebastian.wa...@physik.uni-**freiburg.de>
  wrote:



   

Dear GROMCS user,

I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on my
local desktop


 

Watch out, the dirty version suffix means you have changed something in
the
source.




   

(2*GTX 670 + i7) and everything works as smooth as possible. The outcomes
are very reasonable and match the outcome of the 4.5.5 version without
GPU
acceleration. On our


 

What does "outcome" mean? If that means performance, then something is
wrong, you should see a considerable performance increase (PME,
non-bonded,
bondeds have all gotten a lot faster).



   

With outcome I mean the trajectory not the performance.



 
   

cluster (M2090+2*Xeon X5650) I installed the  VERSION
4.6-dev-20121120-0290409. Using the same .tpr file used for runs with my
desktop I get lincs warnings that the watermolecules can't be settled.



 

The group kernels have not "stabilized" yet and there have been some fixes
lately. Could you please try the latest version and check again.


   

I installed the beta1 release and still the water can not be settled.

  Additionally, you could try running our regression tests suite (
 

git.gromacs.org/**regressiontests)
to see if at least the tests pass with the
binaries you compiled
Cheers,
--
Szilárd




   

Cheers,

Sebastian


  My .mdp file looks like:
 

   ;
title= ttt
cpp =  /lib/cpp
include = -I../top
constraints =  hbonds
integrator  =  md
cutoff-scheme   =  verlet

;define  =  -DPOSRES; for possition restraints

dt  =  0.002; ps !
nsteps  =  1  \
nstcomm =  25; frequency for center of mass
motion
removal
nstcalcenergy   =  25
nstxout =  10; frequency for writting the
trajectory
nstvout =  10; frequency for writting the
velocity
nstfout =  10; frequency to write forces to
output trajectory
nstlog  =  1; frequency to write the log file
nstenergy   =  1; frequency to write energies to
energy file
nstxtcout   =  1

xtc_grps=  System

nstlist =  25; Frequency to update the neighbor
list
ns_type =  grid; Make a grid in the box and only
check atoms in neighboring grid cells when constructing a new neighbor
rlist   =  1.4; cut-off distance for the
short-range neighbor list

coulombtype =  PME; Fast Particle-Mesh Ewald
electrostatics
rcoulomb=  1.4; cut-off distance for the coulomb
field
vdwtype =  cut-off
rvdw=  1.4; cut-off distance for the vdw
field
fourierspacing  =  0.12; The maximum grid spacing for the
FFT grid
pme_order   =  6; Interpolation order for PME
optimize_fft=  yes
pbc=  xyz
Tcoupl  =  v-rescale
tc-grps =  System
tau_t   =  0.1
ref_t   =  300

energygrps  =  Protein Non-Protein

Pcoupl  =  no;berendsen
tau_p   =  0.1
compressibility =  4.5e-5
ref_p   =  1.0
nstpcouple=  5
refcoord_scaling=  all
Pcoupltype  =  isotropic
gen_vel =  no;yes
gen_temp=  300
gen_seed=  -1


Thanks a lot

Sebastian
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
http://lists.gromacs.org/mailman/listinfo/gmx-users>
 
   

* Please search the archive at http://www.gromacs.org/**
Support/Mailing_Lists/Searchhttp://www.gromacs.org/Support/Mailing_Lists/Search>>before
posting!
* Please don't post (un)subscribe requests to the list. Use the www

Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread Mark Abraham
2fs is normally considered too large a time step for stable integration
with only bonds to hydrogen constrained, so your observation of
non-reproducible LINCS warnings is not indicative of some other problem.

Also, if you ran your CPU-only calculations with nstlist=25 then AFAIK this
works fine, but is inefficient.

Mark

On Tue, Dec 4, 2012 at 3:41 PM, sebastian <
sebastian.wa...@physik.uni-freiburg.de> wrote:

> On 11/23/2012 08:29 PM, Szilárd Páll wrote:
>
>> Hi,
>>
>> On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
>> sebastian.wa...@physik.uni-**freiburg.de>
>>  wrote:
>>
>>
>>
>>> Dear GROMCS user,
>>>
>>> I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on my
>>> local desktop
>>>
>>>
>>
>> Watch out, the dirty version suffix means you have changed something in
>> the
>> source.
>>
>>
>>
>>
>>> (2*GTX 670 + i7) and everything works as smooth as possible. The outcomes
>>> are very reasonable and match the outcome of the 4.5.5 version without
>>> GPU
>>> acceleration. On our
>>>
>>>
>>
>> What does "outcome" mean? If that means performance, then something is
>> wrong, you should see a considerable performance increase (PME,
>> non-bonded,
>> bondeds have all gotten a lot faster).
>>
>>
>>
>
> With outcome I mean the trajectory not the performance.
>
>
>
>>
>>> cluster (M2090+2*Xeon X5650) I installed the  VERSION
>>> 4.6-dev-20121120-0290409. Using the same .tpr file used for runs with my
>>> desktop I get lincs warnings that the watermolecules can't be settled.
>>>
>>>
>>>
>> The group kernels have not "stabilized" yet and there have been some fixes
>> lately. Could you please try the latest version and check again.
>>
>>
>
> I installed the beta1 release and still the water can not be settled.
>
>  Additionally, you could try running our regression tests suite (
>> git.gromacs.org/**regressiontests)
>> to see if at least the tests pass with the
>> binaries you compiled
>> Cheers,
>> --
>> Szilárd
>>
>>
>>
>>
>
> Cheers,
>
> Sebastian
>
>
>  My .mdp file looks like:
>>>
>>>   ;
>>> title= ttt
>>> cpp =  /lib/cpp
>>> include = -I../top
>>> constraints =  hbonds
>>> integrator  =  md
>>> cutoff-scheme   =  verlet
>>>
>>> ;define  =  -DPOSRES; for possition restraints
>>>
>>> dt  =  0.002; ps !
>>> nsteps  =  1  \
>>> nstcomm =  25; frequency for center of mass
>>> motion
>>> removal
>>> nstcalcenergy   =  25
>>> nstxout =  10; frequency for writting the
>>> trajectory
>>> nstvout =  10; frequency for writting the
>>> velocity
>>> nstfout =  10; frequency to write forces to
>>> output trajectory
>>> nstlog  =  1; frequency to write the log file
>>> nstenergy   =  1; frequency to write energies to
>>> energy file
>>> nstxtcout   =  1
>>>
>>> xtc_grps=  System
>>>
>>> nstlist =  25; Frequency to update the neighbor
>>> list
>>> ns_type =  grid; Make a grid in the box and only
>>> check atoms in neighboring grid cells when constructing a new neighbor
>>> rlist   =  1.4; cut-off distance for the
>>> short-range neighbor list
>>>
>>> coulombtype =  PME; Fast Particle-Mesh Ewald
>>> electrostatics
>>> rcoulomb=  1.4; cut-off distance for the coulomb
>>> field
>>> vdwtype =  cut-off
>>> rvdw=  1.4; cut-off distance for the vdw
>>> field
>>> fourierspacing  =  0.12; The maximum grid spacing for the
>>> FFT grid
>>> pme_order   =  6; Interpolation order for PME
>>> optimize_fft=  yes
>>> pbc=  xyz
>>> Tcoupl  =  v-rescale
>>> tc-grps =  System
>>> tau_t   =  0.1
>>> ref_t   =  300
>>>
>>> energygrps  =  Protein Non-Protein
>>>
>>> Pcoupl  =  no;berendsen
>>> tau_p   =  0.1
>>> compressibility =  4.5e-5
>>> ref_p   =  1.0
>>> nstpcouple=  5
>>> refcoord_scaling=  all
>>> Pcoupltype  =  isotropic
>>> gen_vel =  no;yes
>>> gen_temp=  300
>>> gen_seed=  -1
>>>
>>>
>>> Thanks a lot
>>>
>>> Sebastian
>>> --
>>> gmx-users mailing listgmx-users@gromacs.org
>>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>>> http://lists.gromacs.org/mailman/listinfo/gmx-users>
>>> >
>>> * Please search the archive at http://www.gromacs.org/**
>>> Support/Mailing_Lists/Search>> Mailing_Lists/Search>before
>>> posting!
>>> * Please don't post (un)subscribe requests to the list. Use the www

[gmx-users] About construction of Cyclic peptide

2012-12-04 Thread vidhya sankar


Dear Gromacs users,

I would like to construct an assembly of cyclic peptides. Is there any online
server or tool available for this, or any other package?


It would be helpful if somebody could assist me.


Thanks in advance
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] strange lincs warning with version 4.6

2012-12-04 Thread sebastian

On 11/23/2012 08:29 PM, Szilárd Páll wrote:

Hi,

On Fri, Nov 23, 2012 at 9:40 AM, sebastian<
sebastian.wa...@physik.uni-freiburg.de>  wrote:

   

Dear GROMCS user,

I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on my
local desktop
 


Watch out, the dirty version suffix means you have changed something in the
source.


   

(2*GTX 670 + i7) and everything works as smooth as possible. The outcomes
are very reasonable and match the outcome of the 4.5.5 version without GPU
acceleration. On our
 


What does "outcome" mean? If that means performance, then something is
wrong, you should see a considerable performance increase (PME, non-bonded,
bondeds have all gotten a lot faster).

   


With outcome I mean the trajectory not the performance.

   

cluster (M2090+2*Xeon X5650) I installed the  VERSION
4.6-dev-20121120-0290409. Using the same .tpr file used for runs with my
desktop I get lincs warnings that the watermolecules can't be settled.

 

The group kernels have not "stabilized" yet and there have been some fixes
lately. Could you please try the latest version and check again.
   


I installed the beta1 release and still the water can not be settled.


Additionally, you could try running our regression tests suite (
git.gromacs.org/regressiontests) to see if at least the tests pass with the
binaries you compiled
Cheers,
--
Szilárd


   


Cheers,

Sebastian


My .mdp file looks like:

  ;
title= ttt
cpp =  /lib/cpp
include = -I../top
constraints =  hbonds
integrator  =  md
cutoff-scheme   =  verlet

;define  =  -DPOSRES; for possition restraints

dt  =  0.002; ps !
nsteps  =  1  \
nstcomm =  25; frequency for center of mass motion
removal
nstcalcenergy   =  25
nstxout =  10; frequency for writting the
trajectory
nstvout =  10; frequency for writting the
velocity
nstfout =  10; frequency to write forces to
output trajectory
nstlog  =  1; frequency to write the log file
nstenergy   =  1; frequency to write energies to
energy file
nstxtcout   =  1

xtc_grps=  System

nstlist =  25; Frequency to update the neighbor
list
ns_type =  grid; Make a grid in the box and only
check atoms in neighboring grid cells when constructing a new neighbor
rlist   =  1.4; cut-off distance for the
short-range neighbor list

coulombtype =  PME; Fast Particle-Mesh Ewald
electrostatics
rcoulomb=  1.4; cut-off distance for the coulomb
field
vdwtype =  cut-off
rvdw=  1.4; cut-off distance for the vdw field
fourierspacing  =  0.12; The maximum grid spacing for the
FFT grid
pme_order   =  6; Interpolation order for PME
optimize_fft=  yes
pbc=  xyz
Tcoupl  =  v-rescale
tc-grps =  System
tau_t   =  0.1
ref_t   =  300

energygrps  =  Protein Non-Protein

Pcoupl  =  no;berendsen
tau_p   =  0.1
compressibility =  4.5e-5
ref_p   =  1.0
nstpcouple=  5
refcoord_scaling=  all
Pcoupltype  =  isotropic
gen_vel =  no;yes
gen_temp=  300
gen_seed=  -1


Thanks a lot

Sebastian


Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
On Tue, Dec 4, 2012 at 7:18 PM, Carsten Kutzner  wrote:

>
> On Dec 4, 2012, at 2:45 PM, Chandan Choudhury  wrote:
>
> > Hi Carsten,
> >
> > Thanks for the reply.
> >
> > If PME nodes for the g_tune is half of np, then if it exceeds the ppn of
> of
> > a node, how would g_tune perform. What I mean if $NPROCS=36, the its half
> > is 18 ppn, but 18 ppns are not present in a single node  (max. ppn = 12
> per
> > node). How would g_tune function in such scenario?
> Typically mdrun allocates the PME and PP nodes in an interleaved way,
> meaning
> you would end up with 9 PME nodes on each of your two nodes.
>
> Check the -ddorder of mdrun.
>
> Interleaving is normally fastest unless you could have all PME processes
> exclusively
> on a single node.
>

Thanks Carsten for the explanation.

Chandan

>
> Carsten
>
> >
> > Chandan
> >
> >
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> >
> >
> > On Tue, Dec 4, 2012 at 6:39 PM, Carsten Kutzner  wrote:
> >
> >> Hi Chandan,
> >>
> >> the number of separate PME nodes in Gromacs must be larger than two and
> >> smaller or equal to half the number of MPI processes (=np). Thus,
> >> g_tune_pme
> >> checks only up to npme = np/2 PME nodes.
> >>
> >> Best,
> >>  Carsten
> >>
> >>
> >> On Dec 4, 2012, at 1:54 PM, Chandan Choudhury 
> wrote:
> >>
> >>> Dear Carsten and Florian,
> >>>
> >>> Thanks for you useful suggestions. It did work. I still have a doubt
> >>> regarding the execution :
> >>>
> >>> export MPIRUN=`which mpirun`
> >>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> >>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >>> tune.edr -g tune.log
> >>>
> >>> I am suppling $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme
> tunes
> >>> the no. of pme nodes. As I am executing it on a single node, mdrun
> never
> >>> checks pme for greater than 12 ppn. So, how do I understand that the
> pme
> >> is
> >>> tuned for 24 ppn spanning across the two nodes.
> >>>
> >>> Chandan
> >>>
> >>>
> >>> --
> >>> Chandan kumar Choudhury
> >>> NCL, Pune
> >>> INDIA
> >>>
> >>>
> >>> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner 
> >> wrote:
> >>>
>  Hi Chandan,
> 
>  On Nov 29, 2012, at 3:30 PM, Chandan Choudhury 
> >> wrote:
> 
> > Hi Carsten,
> >
> > Thanks for your suggestion.
> >
> > I did try to pass to total number of cores with the np flag to the
> > g_tune_pme, but it didnot help. Hopefully I am doing something
> silliy.
> >> I
> > have pasted the snippet of the PBS script.
> >
> > #!/bin/csh
> > #PBS -l nodes=2:ppn=12:twelve
> > #PBS -N bilayer_tune
> > 
> > 
> > 
> >
> > cd $PBS_O_WORKDIR
> > export
> MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>  from here on you job file should read:
> 
>  export MPIRUN=`which mpirun`
>  g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>  tune.edr -g tune.log
> 
> > mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c
> tune.pdb
> >> -x
> > tune.xtc -e tune.edr -g tune.log -nice 0
>  this way you will get $NPROCS g_tune_pme instances, each trying to run
> >> an
>  mdrun job on 24 cores,
>  which is not what you want. g_tune_pme itself is a serial program, it
> >> just
>  spawns the mdrun's.
> 
>  Carsten
> >
> >
> > Then I submit the script using qsub.
> > When I login to the compute nodes there I donot find and mdrun
> >> executable
> > running.
> >
> > I also tried using nodes=1 and np 12. It didnot work through qsub.
> >
> > Then I logged in to the compute nodes and executed g_tune_pme_4.5.5
> -np
>  12
> > -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice
> 0
> >
> > It worked.
> >
> > Also, if I just use
> > $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>  tune.edr
> > -g tune.log -nice 0
> > g_tune_pme executes on the head node and writes various files.
> >
> > Kindly let me know what am I missing when I submit through qsub.
> >
> > Thanks
> >
> > Chandan
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> >
> >
> > On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner 
> >> wrote:
> >
> >> Hi Chandan,
> >>
> >> g_tune_pme also finds the optimal number of PME cores if the cores
> >> are distributed on multiple nodes. Simply pass the total number of
> >> cores to the -np option. Depending on the MPI and queue environment
> >> that you use, the distribution of the cores over the nodes may have
> >> to be specified in a hostfile / machinefile. Check g_tune_pme -h
> >> on how to set that.
> >>
> >> Best,
> >> Carsten
> >>
> >>
> >> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
>  wrote:
> >>
> >>> Dear gmx users,
> >>>
> 

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner

On Dec 4, 2012, at 2:45 PM, Chandan Choudhury  wrote:

> Hi Carsten,
> 
> Thanks for the reply.
> 
> If PME nodes for the g_tune is half of np, then if it exceeds the ppn of of
> a node, how would g_tune perform. What I mean if $NPROCS=36, the its half
> is 18 ppn, but 18 ppns are not present in a single node  (max. ppn = 12 per
> node). How would g_tune function in such scenario?
Typically mdrun allocates the PME and PP nodes in an interleaved way, meaning
you would end up with 9 PME nodes on each of your two nodes.

Check the -ddorder of mdrun.

Interleaving is normally fastest unless you could have all PME processes 
exclusively
on a single node.

Carsten
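
As an illustration of the -ddorder values mentioned above, with the binary name
and core counts from this thread used purely as placeholders (check mdrun -h for
the exact option names in your version):

  mpirun -np 24 mdrun_mpi -s md0-200.tpr -npme 6 -ddorder interleave   # default: PME ranks spread over both nodes
  mpirun -np 24 mdrun_mpi -s md0-200.tpr -npme 12 -ddorder pp_pme      # pack all PME ranks together, e.g. onto one node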

> 
> Chandan
> 
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Tue, Dec 4, 2012 at 6:39 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> the number of separate PME nodes in Gromacs must be larger than two and
>> smaller or equal to half the number of MPI processes (=np). Thus,
>> g_tune_pme
>> checks only up to npme = np/2 PME nodes.
>> 
>> Best,
>>  Carsten
>> 
>> 
>> On Dec 4, 2012, at 1:54 PM, Chandan Choudhury  wrote:
>> 
>>> Dear Carsten and Florian,
>>> 
>>> Thanks for you useful suggestions. It did work. I still have a doubt
>>> regarding the execution :
>>> 
>>> export MPIRUN=`which mpirun`
>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>>> tune.edr -g tune.log
>>> 
>>> I am suppling $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme tunes
>>> the no. of pme nodes. As I am executing it on a single node, mdrun never
>>> checks pme for greater than 12 ppn. So, how do I understand that the pme
>> is
>>> tuned for 24 ppn spanning across the two nodes.
>>> 
>>> Chandan
>>> 
>>> 
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA
>>> 
>>> 
>>> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner 
>> wrote:
>>> 
 Hi Chandan,
 
 On Nov 29, 2012, at 3:30 PM, Chandan Choudhury 
>> wrote:
 
> Hi Carsten,
> 
> Thanks for your suggestion.
> 
> I did try to pass to total number of cores with the np flag to the
> g_tune_pme, but it didnot help. Hopefully I am doing something silliy.
>> I
> have pasted the snippet of the PBS script.
> 
> #!/bin/csh
> #PBS -l nodes=2:ppn=12:twelve
> #PBS -N bilayer_tune
> 
> 
> 
> 
> cd $PBS_O_WORKDIR
> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
 from here on you job file should read:
 
 export MPIRUN=`which mpirun`
 g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
 tune.edr -g tune.log
 
> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb
>> -x
> tune.xtc -e tune.edr -g tune.log -nice 0
 this way you will get $NPROCS g_tune_pme instances, each trying to run
>> an
 mdrun job on 24 cores,
 which is not what you want. g_tune_pme itself is a serial program, it
>> just
 spawns the mdrun's.
 
 Carsten
> 
> 
> Then I submit the script using qsub.
> When I login to the compute nodes there I donot find and mdrun
>> executable
> running.
> 
> I also tried using nodes=1 and np 12. It didnot work through qsub.
> 
> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
 12
> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> 
> It worked.
> 
> Also, if I just use
> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
 tune.edr
> -g tune.log -nice 0
> g_tune_pme executes on the head node and writes various files.
> 
> Kindly let me know what am I missing when I submit through qsub.
> 
> Thanks
> 
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner 
>> wrote:
> 
>> Hi Chandan,
>> 
>> g_tune_pme also finds the optimal number of PME cores if the cores
>> are distributed on multiple nodes. Simply pass the total number of
>> cores to the -np option. Depending on the MPI and queue environment
>> that you use, the distribution of the cores over the nodes may have
>> to be specified in a hostfile / machinefile. Check g_tune_pme -h
>> on how to set that.
>> 
>> Best,
>> Carsten
>> 
>> 
>> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
 wrote:
>> 
>>> Dear gmx users,
>>> 
>>> I am using 4.5.5 of gromacs.
>>> 
>>> I was trying to use g_tune_pme for a simulation. I intend to execute
>>> mdrun at multiple nodes with 12 cores each. Therefore, I would like
>> to
>>> optimize the number of pme nodes. I could execute g_tune_pme -np 12
>>> md.tpr. But this will only find the optimal PME nodes for single
>> nodes
>>> run. How do I find the optimal PME nodes for 

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Hi Carsten,

Thanks for the reply.

If the number of PME nodes for g_tune_pme is up to half of np, what happens if
that exceeds the ppn of a node? What I mean is: if $NPROCS=36, then half of it
is 18 ppn, but 18 ppn are not available on a single node (max. ppn = 12 per
node). How would g_tune_pme function in such a scenario?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA


On Tue, Dec 4, 2012 at 6:39 PM, Carsten Kutzner  wrote:

> Hi Chandan,
>
> the number of separate PME nodes in Gromacs must be larger than two and
> smaller or equal to half the number of MPI processes (=np). Thus,
> g_tune_pme
> checks only up to npme = np/2 PME nodes.
>
> Best,
>   Carsten
>
>
> On Dec 4, 2012, at 1:54 PM, Chandan Choudhury  wrote:
>
> > Dear Carsten and Florian,
> >
> > Thanks for you useful suggestions. It did work. I still have a doubt
> > regarding the execution :
> >
> > export MPIRUN=`which mpirun`
> > export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> > g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> > tune.edr -g tune.log
> >
> > I am suppling $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme tunes
> > the no. of pme nodes. As I am executing it on a single node, mdrun never
> > checks pme for greater than 12 ppn. So, how do I understand that the pme
> is
> > tuned for 24 ppn spanning across the two nodes.
> >
> > Chandan
> >
> >
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> >
> >
> > On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner 
> wrote:
> >
> >> Hi Chandan,
> >>
> >> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury 
> wrote:
> >>
> >>> Hi Carsten,
> >>>
> >>> Thanks for your suggestion.
> >>>
> >>> I did try to pass to total number of cores with the np flag to the
> >>> g_tune_pme, but it didnot help. Hopefully I am doing something silliy.
> I
> >>> have pasted the snippet of the PBS script.
> >>>
> >>> #!/bin/csh
> >>> #PBS -l nodes=2:ppn=12:twelve
> >>> #PBS -N bilayer_tune
> >>> 
> >>> 
> >>> 
> >>>
> >>> cd $PBS_O_WORKDIR
> >>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> >> from here on you job file should read:
> >>
> >> export MPIRUN=`which mpirun`
> >> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >> tune.edr -g tune.log
> >>
> >>> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb
> -x
> >>> tune.xtc -e tune.edr -g tune.log -nice 0
> >> this way you will get $NPROCS g_tune_pme instances, each trying to run
> an
> >> mdrun job on 24 cores,
> >> which is not what you want. g_tune_pme itself is a serial program, it
> just
> >> spawns the mdrun's.
> >>
> >> Carsten
> >>>
> >>>
> >>> Then I submit the script using qsub.
> >>> When I login to the compute nodes there I donot find and mdrun
> executable
> >>> running.
> >>>
> >>> I also tried using nodes=1 and np 12. It didnot work through qsub.
> >>>
> >>> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
> >> 12
> >>> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> >>>
> >>> It worked.
> >>>
> >>> Also, if I just use
> >>> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >> tune.edr
> >>> -g tune.log -nice 0
> >>> g_tune_pme executes on the head node and writes various files.
> >>>
> >>> Kindly let me know what am I missing when I submit through qsub.
> >>>
> >>> Thanks
> >>>
> >>> Chandan
> >>> --
> >>> Chandan kumar Choudhury
> >>> NCL, Pune
> >>> INDIA
> >>>
> >>>
> >>> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner 
> wrote:
> >>>
>  Hi Chandan,
> 
>  g_tune_pme also finds the optimal number of PME cores if the cores
>  are distributed on multiple nodes. Simply pass the total number of
>  cores to the -np option. Depending on the MPI and queue environment
>  that you use, the distribution of the cores over the nodes may have
>  to be specified in a hostfile / machinefile. Check g_tune_pme -h
>  on how to set that.
> 
>  Best,
>  Carsten
> 
> 
>  On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
> >> wrote:
> 
> > Dear gmx users,
> >
> > I am using 4.5.5 of gromacs.
> >
> > I was trying to use g_tune_pme for a simulation. I intend to execute
> > mdrun at multiple nodes with 12 cores each. Therefore, I would like
> to
> > optimize the number of pme nodes. I could execute g_tune_pme -np 12
> > md.tpr. But this will only find the optimal PME nodes for single
> nodes
> > run. How do I find the optimal PME nodes for multiple nodes.
> >
> > Any suggestion would be helpful.
> >
> > Chandan
> >
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
>  http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > * Please don't post (un)

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner
Hi Chandan,

the number of separate PME nodes in Gromacs must be larger than two and
smaller than or equal to half the number of MPI processes (=np). Thus, g_tune_pme
checks only up to npme = np/2 PME nodes.

Best,
  Carsten
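
Putting the corrected invocation from this thread into one complete job file, a
sketch of the PBS script might look as follows; the $PBS_NODEFILE line and the
csh setenv syntax are assumptions, so adjust paths and names to your site:

  #!/bin/csh
  #PBS -l nodes=2:ppn=12:twelve
  #PBS -N bilayer_tune

  cd $PBS_O_WORKDIR
  set NPROCS = `wc -l < $PBS_NODEFILE`     # 2 nodes x 12 ppn = 24
  setenv MPIRUN `which mpirun`
  setenv MDRUN /cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5
  g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log

g_tune_pme itself runs serially and launches the parallel mdrun benchmarks
through $MPIRUN, so it must not be wrapped in mpirun.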


On Dec 4, 2012, at 1:54 PM, Chandan Choudhury  wrote:

> Dear Carsten and Florian,
> 
> Thanks for you useful suggestions. It did work. I still have a doubt
> regarding the execution :
> 
> export MPIRUN=`which mpirun`
> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> tune.edr -g tune.log
> 
> I am suppling $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme tunes
> the no. of pme nodes. As I am executing it on a single node, mdrun never
> checks pme for greater than 12 ppn. So, how do I understand that the pme is
> tuned for 24 ppn spanning across the two nodes.
> 
> Chandan
> 
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury  wrote:
>> 
>>> Hi Carsten,
>>> 
>>> Thanks for your suggestion.
>>> 
>>> I did try to pass to total number of cores with the np flag to the
>>> g_tune_pme, but it didnot help. Hopefully I am doing something silliy. I
>>> have pasted the snippet of the PBS script.
>>> 
>>> #!/bin/csh
>>> #PBS -l nodes=2:ppn=12:twelve
>>> #PBS -N bilayer_tune
>>> 
>>> 
>>> 
>>> 
>>> cd $PBS_O_WORKDIR
>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>> from here on you job file should read:
>> 
>> export MPIRUN=`which mpirun`
>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>> tune.edr -g tune.log
>> 
>>> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
>>> tune.xtc -e tune.edr -g tune.log -nice 0
>> this way you will get $NPROCS g_tune_pme instances, each trying to run an
>> mdrun job on 24 cores,
>> which is not what you want. g_tune_pme itself is a serial program, it just
>> spawns the mdrun's.
>> 
>> Carsten
>>> 
>>> 
>>> Then I submit the script using qsub.
>>> When I login to the compute nodes there I donot find and mdrun executable
>>> running.
>>> 
>>> I also tried using nodes=1 and np 12. It didnot work through qsub.
>>> 
>>> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
>> 12
>>> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
>>> 
>>> It worked.
>>> 
>>> Also, if I just use
>>> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>> tune.edr
>>> -g tune.log -nice 0
>>> g_tune_pme executes on the head node and writes various files.
>>> 
>>> Kindly let me know what am I missing when I submit through qsub.
>>> 
>>> Thanks
>>> 
>>> Chandan
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA
>>> 
>>> 
>>> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
>>> 
 Hi Chandan,
 
 g_tune_pme also finds the optimal number of PME cores if the cores
 are distributed on multiple nodes. Simply pass the total number of
 cores to the -np option. Depending on the MPI and queue environment
 that you use, the distribution of the cores over the nodes may have
 to be specified in a hostfile / machinefile. Check g_tune_pme -h
 on how to set that.
 
 Best,
 Carsten
 
 
 On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
>> wrote:
 
> Dear gmx users,
> 
> I am using 4.5.5 of gromacs.
> 
> I was trying to use g_tune_pme for a simulation. I intend to execute
> mdrun at multiple nodes with 12 cores each. Therefore, I would like to
> optimize the number of pme nodes. I could execute g_tune_pme -np 12
> md.tpr. But this will only find the optimal PME nodes for single nodes
> run. How do I find the optimal PME nodes for multiple nodes.
> 
> Any suggestion would be helpful.
> 
> Chandan
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 
 --
 Dr. Carsten Kutzner
 Max Planck Institute for Biophysical Chemistry
 Theoretical and Computational Biophysics
 Am Fassberg 11, 37077 Goettingen, Germany
 Tel. +49-551-2012313, Fax: +49-551-2012302
 http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
 
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Chandan Choudhury
Dear Carsten and Florian,

Thanks for your useful suggestions. It did work. I still have a doubt
regarding the execution:

export MPIRUN=`which mpirun`
export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
tune.edr -g tune.log

I am supplying $NPROCS as 24 [2 (nodes) * 12 (ppn)] so that g_tune_pme tunes
the number of PME nodes. As I am executing it on a single node, mdrun never
checks PME for more than 12 ppn. So how do I know that the PME setting is
tuned for 24 processes spanning the two nodes?

Chandan


--
Chandan kumar Choudhury
NCL, Pune
INDIA


On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner  wrote:

> Hi Chandan,
>
> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury  wrote:
>
> > Hi Carsten,
> >
> > Thanks for your suggestion.
> >
> > I did try to pass to total number of cores with the np flag to the
> > g_tune_pme, but it didnot help. Hopefully I am doing something silliy. I
> > have pasted the snippet of the PBS script.
> >
> > #!/bin/csh
> > #PBS -l nodes=2:ppn=12:twelve
> > #PBS -N bilayer_tune
> > 
> > 
> > 
> >
> > cd $PBS_O_WORKDIR
> > export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> from here on you job file should read:
>
> export MPIRUN=`which mpirun`
> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> tune.edr -g tune.log
>
> > mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
> > tune.xtc -e tune.edr -g tune.log -nice 0
> this way you will get $NPROCS g_tune_pme instances, each trying to run an
> mdrun job on 24 cores,
> which is not what you want. g_tune_pme itself is a serial program, it just
> spawns the mdrun's.
>
> Carsten
> >
> >
> > Then I submit the script using qsub.
> > When I login to the compute nodes there I donot find and mdrun executable
> > running.
> >
> > I also tried using nodes=1 and np 12. It didnot work through qsub.
> >
> > Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
> 12
> > -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> >
> > It worked.
> >
> > Also, if I just use
> > $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> tune.edr
> > -g tune.log -nice 0
> > g_tune_pme executes on the head node and writes various files.
> >
> > Kindly let me know what am I missing when I submit through qsub.
> >
> > Thanks
> >
> > Chandan
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> >
> >
> > On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
> >
> >> Hi Chandan,
> >>
> >> g_tune_pme also finds the optimal number of PME cores if the cores
> >> are distributed on multiple nodes. Simply pass the total number of
> >> cores to the -np option. Depending on the MPI and queue environment
> >> that you use, the distribution of the cores over the nodes may have
> >> to be specified in a hostfile / machinefile. Check g_tune_pme -h
> >> on how to set that.
> >>
> >> Best,
> >>  Carsten
> >>
> >>
> >> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
> wrote:
> >>
> >>> Dear gmx users,
> >>>
> >>> I am using 4.5.5 of gromacs.
> >>>
> >>> I was trying to use g_tune_pme for a simulation. I intend to execute
> >>> mdrun at multiple nodes with 12 cores each. Therefore, I would like to
> >>> optimize the number of pme nodes. I could execute g_tune_pme -np 12
> >>> md.tpr. But this will only find the optimal PME nodes for single nodes
> >>> run. How do I find the optimal PME nodes for multiple nodes.
> >>>
> >>> Any suggestion would be helpful.
> >>>
> >>> Chandan
> >>>
> >>> --
> >>> Chandan kumar Choudhury
> >>> NCL, Pune
> >>> INDIA
> >>> --
> >>> gmx-users mailing listgmx-users@gromacs.org
> >>> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >>> * Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >>> * Please don't post (un)subscribe requests to the list. Use the
> >>> www interface or send it to gmx-users-requ...@gromacs.org.
> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >>
> >> --
> >> Dr. Carsten Kutzner
> >> Max Planck Institute for Biophysical Chemistry
> >> Theoretical and Computational Biophysics
> >> Am Fassberg 11, 37077 Goettingen, Germany
> >> Tel. +49-551-2012313, Fax: +49-551-2012302
> >> http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
> >>
> >> --
> >> gmx-users mailing listgmx-users@gromacs.org
> >> http://lists.gromacs.org/mailman/listinfo/gmx-users
> >> * Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> >> * Please don't post (un)subscribe requests to the list. Use the
> >> www interface or send it to gmx-users-requ...@gromacs.org.
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > * Please search the archive at
> http://www.gr

Re: [gmx-users] g_rms alignment question

2012-12-04 Thread Justin Lemkul



On 12/4/12 3:10 AM, Tsjerk Wassenaar wrote:

Hi Jia,

You can use trjconv for custom fitting, and then feed the fitted trajectory
to g_rms, not using fitting there. Either that's using -nofit or -fit none.



The same can be done in one step by using an index file with g_rms and choosing 
that group for fitting.


-Justin
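
A sketch of that one-step route, with hypothetical file names and a group
created for residues 1-400 (the exact group names depend on your index file):

  make_ndx -f ref.gro -o fit.ndx          # e.g. enter "r 1-400" to create the group r_1-400
  g_rms -s ref.gro -f traj.xtc -n fit.ndx
  # first prompt:  group for least-squares fit  -> r_1-400
  # second prompt: group for RMSD calculation   -> System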


Cheers,

Tsjerk


On Tue, Dec 4, 2012 at 6:06 AM, Jia Xu  wrote:


Dear gromacs users,
 I have a trajectory of 500-atom system and would like to obtain RMSD of
all atoms but only aligned to residue 1-400 of a reference structure. Is
there any way to do this?
 Thank you so much!
Regards,
Jia
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists







--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] metal_ligand complex

2012-12-04 Thread Justin Lemkul



On 12/3/12 11:55 PM, tarak karmakar wrote:

Thanks Justin

Yeah! We can run steered MD or umbrella pulling (sampling) for this
purpose. But one thing I am wondering is how to move the entire
complex. In my model there are no covalent bonds (bonding information)
between the metal and the ligands. So how do I select a particular point in
the complex to tie a spring between it and an atom in the ligand binding site?



No idea, but the metal itself is the logical starting place.  Just make sure you 
have some kind of restraints or pseudo-bonds between the metal and the ligands, 
otherwise the ligands will just float away (and they might do so even if the 
pulling force isn't applied).


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
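
For what it's worth, a sketch of 4.5-style pull-code settings for pulling the
ligand relative to the metal, as discussed above; the group names are
hypothetical index-file groups, and these keys were renamed in later GROMACS
versions, so check the manual for your release:

  pull            = umbrella
  pull_geometry   = distance
  pull_ngroups    = 1
  pull_group0     = Metal        ; reference group centered on the metal ion
  pull_group1     = Ligand       ; group the harmonic spring is attached to
  pull_rate1      = 0.005        ; nm/ps
  pull_k1         = 1000         ; kJ mol^-1 nm^-2

Harmonic restraints between the metal and the coordinating atoms (for example
type 6 bonds or [ distance_restraints ] in the topology) can then keep the rest
of the complex together, along the lines Justin suggests.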




Re: [gmx-users] Energy estimations of the protein-ligand complexes

2012-12-04 Thread Mark Abraham
What components will that potential energy have?

Mark

On Mon, Dec 3, 2012 at 6:54 PM, James Starlight wrote:

> Dear Gromacs Users!
>
> I'm simulating different complexes of the receptors with different ligands.
>
> For each complex I want to determine potential energy (not the binding
> energy) of the ligand molecule. In other words, I want to check in which
> complex the ligand was in a more or less strained conformation due to the
> influence of the binding pocket. How could I do such estimations?
>
> Thanks for help
>
>
> James
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>


Re: [gmx-users] g_rms alignment question

2012-12-04 Thread Tsjerk Wassenaar
Hi Jia,

You can use trjconv for custom fitting, and then feed the fitted trajectory
to g_rms, not using fitting there. Either that's using -nofit or -fit none.

Cheers,

Tsjerk
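
A sketch of this two-step route, using the same hypothetical file and group
names as in the g_rms example earlier (Tsjerk mentions -nofit or -fit none; in
the 4.x tools the option I have seen is -fit none, but check g_rms -h):

  trjconv -s ref.gro -f traj.xtc -n fit.ndx -fit rot+trans -o fitted.xtc   # fit group: r_1-400, output group: System
  g_rms -s ref.gro -f fitted.xtc -n fit.ndx -fit none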


On Tue, Dec 4, 2012 at 6:06 AM, Jia Xu  wrote:

> Dear gromacs users,
> I have a trajectory of 500-atom system and would like to obtain RMSD of
> all atoms but only aligned to residue 1-400 of a reference structure. Is
> there any way to do this?
> Thank you so much!
> Regards,
> Jia
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>



-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Biocomputing Group
Department of Biological Sciences
2500 University Drive NW
Calgary, AB T2N 1N4
Canada