Re: [gmx-users] AVX2 SIMD intrinsics speed boost

2013-07-12 Thread Erik Lindahl
Hi,

We will have AVX2 acceleration ready for general usage before the end of July 
(together with some other goodies), and it will be markedly faster, but until 
it's ready and tested to give correct results we can't say anything about the 
final performance.

However, in general AVX2 will have the largest effects on the same parts of the 
code that run on the GPU when one is present, which means it might not provide 
a huge speedup when used in combination with accelerators.

Cheers,

E.

On Jul 11, 2013, at 6:01 PM, Bin Liu  wrote:

> Hi all,
> 
> If my understanding is correct, GROMACS parallelization and acceleration
> page indicates AVX2 SIMD intrinsics can offer a speed boost on a Haswell
> CPU. I was wondering how much performance gain we can expect from it. In
> other words, what's the approximate speed increase if we run a simulation
> with AVX2 SIMD intrinsics on a Haswell CPU (say i7 4770K) versus on an Ivy
> Bridge CPU of the same clock (say i7 3770K) with the current AVX SIMD
> intrinsics? And is there a timeline for the release of AVX2 SIMD intrinsics?
> 
> This information is crucial if we want to assemble a machine with balanced
> CPU and GPU performance.  My current machine has i7 3770K (3.5GHz, stock
> frequency) and Geforce 650 Ti (768 CUDA cores, 1032MHz). When I ran
> simulations with   rcoulomb=1.0 and rvdw=1.0, I got this at the end of the
> log file:
> 
> Force evaluation time GPU/CPU: 1.762 ms/1.150 ms = 1.531
> It seems I need a GPU with 50% more CUDA cores. In the best scenario, if
> AVX2 can give a 30% speed boost, and I can successfully overclock the 4770K to
> 4.5GHz, I need about 1965 CUDA cores (130% * (4.5GHz/3.5GHz) * 1.531 * 768
> cores) at the same frequency to get balanced CPU and GPU performance. Then I
> will need a GeForce GTX 780 (2304 CUDA cores at 863MHz, equivalent to 1925
> CUDA cores at 1032MHz). Since GROMACS is highly insensitive to memory clock
> and latency, I hope this naive arithmetic can give a good estimate of which
> graphics card I should purchase.
> 
> Best
> 
> Bin
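
A quick check of that arithmetic, as a sketch only (the 30% AVX2 gain is the
hypothetical from this thread; the 1.531 force-time ratio and the core counts
are the numbers quoted above):

awk 'BEGIN {
    cpu_speedup = 1.30 * (4.5 / 3.5)      # assumed AVX2 gain x clock ratio
    needed = cpu_speedup * 1.531 * 768    # CUDA cores needed at 1032 MHz
    gtx780 = 2304 * 863 / 1032            # GTX 780 cores rescaled to 1032 MHz
    printf "needed ~%.0f cores, GTX 780 ~%.0f cores\n", needed, gtx780
}'

This prints roughly 1965 and 1927, so under these assumptions the GTX 780
comes out slightly under, but in the right ballpark.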



Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread Mark Abraham
What does --loadbalance do? What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?

Mark

On Fri, Jul 12, 2013 at 3:46 AM, gigo  wrote:
> Dear GMXers,
> With Gromacs 4.6.2 I was running REMD with 144 replicas. Replicas were
> separate MPI jobs of course (OpenMPI 1.6.4). Each replica I run on 4 cores
> with OpenMP. Torque is installed on the cluster, which is built of 12-core
> nodes, so I used the following script:
>
> #!/bin/tcsh -f
> #PBS -S /bin/tcsh
> #PBS -N test
> #PBS -l nodes=48:ppn=12
> #PBS -l walltime=300:00:00
> #PBS -l mem=288Gb
> #PBS -r n
> cd $PBS_O_WORKDIR
> mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4
> -replex 2000
>
> It was working just great with 4.6.2. It does not work with 4.6.3. The new
> version was compiled with the same options in the same environment. Mpiexec
> spreads the replicas evenly over the cluster. Each replica forks 4 threads,
> but only one of them uses any cpu. Logs end at the citations. Some empty
> energy and trajectory files are created, nothing is written to them.
> Please let me know if you have any immediate suggestion on how to make it
> work (maybe based on some differences between versions), or if I should file
> a bug report with all the technical details.
> Best Regards,
>
> Grzegorz Wieczorek
>


Re: [gmx-users] RE: Is non-linear data output/storage possible?

2013-07-12 Thread Mark Abraham
On Thu, Jul 11, 2013 at 11:41 PM, Neha  wrote:
>
>
> The .cpt file stores information related to output frequency.  The existing
> .cpt
> file designates output every X steps, while the new .tpr file specifies
> output
> every Y steps, and X != Y, so mdrun complains.  I'm assuming mdrun aborts at
> that point?  Have you tried with -noappend?
>
> Hi,
>
> So this is really interesting, because I have done the same thing earlier and
> it worked just fine. I was trying to do similar non-linear storage and
> it was working fine until eventually it stopped working. mdrun aborts and
> tells me the input and state checkpoints are not identical. I wish that the
> error were more reproducible or happened consistently, because that has not
> been my experience so far.

So far we haven't even seen your error message, so it is hard to help.
Please find a reproducible case and show your command lines and the
error output.

> The only reason I don't want to do -noappend is that I will end up with a
> bunch of separate trajectory files and it would be a pain to sort through
> them. It seems like that might eventually be my only choice.

Using trjcat is not a big deal!
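
For the archive, a minimal sketch of that route (the file names are
placeholders; mdrun -noappend writes numbered .part files, which trjcat can
stitch back together):

trjcat -f md.part0001.xtc md.part0002.xtc md.part0003.xtc -o md_whole.xtc

eneconv does the same job for the .edr energy files.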

Mark



Re: [gmx-users] Umbrella Sampling settings

2013-07-12 Thread Shima Arasteh
All right.
As I said earlier, my system is a lipid bilayer. A channel is inserted in it
and I want to run US on this system.
An ion is placed in the center of each window, the reaction coordinate is
set to z, so the group which is pulled is an ion, and my reference group would
be the COM of the protein. But I don't know exactly what statements I am
supposed to write in the mdp settings:
; Pull code
pull    = umbrella
pull_geometry   = position
pull_dim    = N N Y
pull_start  = yes 
pull_ngroups    = 1
pull_group0 = COM of protein
pull_group1 = ion
pull_init1  = 0
pull_rate1  = 0.0
pull_k1 = 4000  ; kJ mol^-1 nm^-2
pull_nstxout    = 1000  ; every 2 ps
pull_nstfout    = 1000  ; every 2 ps


In fact, to implement such settings, how do I make the US setup understand that
the COM of the protein is the reference group and the proposed ion is the pulled group?

Would you please give me any suggestions?

Thanks for all your time and consideration.

Sincerely,
Shima


- Original Message -
From: Justin Lemkul 
To: Discussion list for GROMACS users 
Cc: 
Sent: Friday, July 12, 2013 1:41 AM
Subject: Re: [gmx-users] Umbrella Sampling settings



On 7/11/13 5:10 PM, Shima Arasteh wrote:
> Thanks for your reply.
>
> But I don't understand why these extra lines need to be set when they are
> not practically advantageous! :-(
>

There's nothing "extra."  Everything here has a functional purpose.

-Justin

>
> Sincerely,
> Shima
>
>
> - Original Message -
> From: Justin Lemkul 
> To: Shima Arasteh ; Discussion list for GROMACS 
> users 
> Cc:
> Sent: Friday, July 12, 2013 1:37 AM
> Subject: Re: [gmx-users] Umbrella Sampling settings
>
>
>
> On 7/11/13 4:21 PM, Shima Arasteh wrote:
>> Hi,
>>
>> I want to run Umbrella Sampling on my system. In initial configurations, an 
>> ion is located in center of the window.
>> Some mdp file settings for running US, as I found in US tutorial are :
>> ; Pull code
>> pull            = umbrella
>> pull_geometry   = distance
>> pull_dim        = N N Y
>> pull_start      = yes
>> pull_ngroups    = 1
>> pull_group0     = Chain_B
>> pull_group1     = Chain_A
>> pull_init1      = 0
>> pull_rate1      = 0.0
>> pull_k1         = 4000      ; kJ mol^-1 nm^-2
>> pull_nstxout    = 1000      ; every 2 ps
>> pull_nstfout    = 1000      ; every 2 ps
>>
>>
>> But I'd like to know which lines are specifically for US? Because in this 
>> step, no group is supposed to be pulled but there are some lines written 
>> here related to pulling!
>>
>
> All of them are related to umbrella sampling.  Pulling (steered MD) and 
> umbrella
> sampling simply use common parts of the "pull code" in Gromacs because US
> requires a restraint potential.  Whether or not that restraint potential 
> induces
> net displacement (steering, i.e. non-zero pull_rate) or not (zero pull rate,
> restrain to a given set of conditions) is the only difference.  Both processes
> require reference and "pull" groups, geometry information, etc.
>
> -Justin
>

-- 
==

Justin A. Lemkul, Ph.D.
Postdoctoral Associate

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: Umbrella Sampling settings

2013-07-12 Thread Thomas Schlesier
But you need each of these lines for both cases (SMD and US). Probably 
one could skip two lines and use the default values, but it's better to 
set them manually. See below for comments (comments are under the 
related entry):




Thanks for your reply. But I don't understand why these extra lines need to
be set when they are not practically advantageous! :-(
Sincerely,
Shima

- Original Message -
From: Justin Lemkul
To: Shima Arasteh; Discussion list for GROMACS users
Cc:
Sent: Friday, July 12, 2013 1:37 AM
Subject: Re: [gmx-users] Umbrella Sampling settings

On 7/11/13 4:21 PM, Shima Arasteh wrote:

>Hi,
>
>I want to run Umbrella Sampling on my system. In initial configurations, an 
ion is located in center of the window.
>Some mdp file settings for running US, as I found in US tutorial are :
>; Pull code
>pull            = umbrella

how do you want to pull (umbrella / constant force / ...)

>pull_geometry   = distance

second part of 'how to pull'; see the manual

>pull_dim        = N N Y

in which direction(s) do you want to pull

>pull_start      = yes

should the initial distance of 'pull_group0' and 'pull_group1' be added
to 'pull_init1'

>pull_ngroups    = 1

number of pulled groups

>pull_group0     = Chain_B

reference group

>pull_group1     = Chain_A

pulled group

>pull_init1      = 0

place where the origin of the potential is placed

>pull_rate1      = 0.0

pulling velocity (the default value may be 0, but if you want to pull with
zero velocity it is safer to set this value manually)

>pull_k1         = 4000      ; kJ mol^-1 nm^-2

force constant for umbrella / force for constant force

>pull_nstxout    = 1000      ; every 2 ps

output coordinates

>pull_nstfout    = 1000      ; every 2 ps

output forces

>
>


So, you see each line is sensible, there are no 'extra' lines.

Greetings
Thomas



>But I'd like to know which lines are specifically for US? Because in this 
step, no group is supposed to be pulled but there are some lines written here 
related to pulling!
>

All of them are related to umbrella sampling.  Pulling (steered MD) and umbrella
sampling simply use common parts of the "pull code" in Gromacs because US
requires a restraint potential.  Whether or not that restraint potential induces
net displacement (steering, i.e. non-zero pull_rate) or not (zero pull rate,
restrain to a given set of conditions) is the only difference.  Both processes
require reference and "pull" groups, geometry information, etc.

-Justin




[gmx-users] Gromacs installation problem

2013-07-12 Thread Douglas Houston

Hi,

I am having trouble installing Gromacs 4.6.3.

In bash I am using the following sequence of commands:

cd gromacs-4.6.3
mkdir build
cd build
CC=/usr/people/douglas/programs/gcc-4.7.3/installation/bin/gcc  
~/programs/cmake-2.8.7/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON

make
sudo make install

Everything seems to go OK until the 'sudo make install' stage when I get:

"make: Warning: File `Makefile' has modification time 1.2e+04 s in the future
CMake Error at cmake/gmxDetectClang30.cmake:36 (try_compile):
  Failed to open

 
/usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/CMakeTmp/CMakeLists.txt


  Permission denied
Call Stack (most recent call first):
  CMakeLists.txt:301 (gmx_detect_clang_3_0)


CMake Error at CMakeLists.txt:946 (add_subdirectory):
  add_subdirectory given source "src/contrib/fftw" which is not an existing
  directory.


CMake Error at CMakeLists.txt:956 (MESSAGE):
  Cannot find FFTW 3 (with correct precision - libfftw3f for single-precision
  GROMACS or libfftw3 for double-precision GROMACS).  Either choose the right
  precision, choose another FFT(W) library, enable the advanced option to let
  GROMACS build FFTW 3 for you, or use the really slow GROMACS built-in
  fftpack library.


-- Configuring incomplete, errors occurred!
CMake Error: Unable to open check cache file for write.  
/usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/cmake.check_cache

make: *** [cmake_check_build_system] Error 1"


I don't understand the 'permission denied' error on opening  
CMakeLists.txt - when I 'ls' for this file it's not there at all. The  
"let gromacs build FFTW 3 for you" I also don't understand as I'm  
already using the -DGMX_BUILD_OWN_FFTW=ON option.


Any help would be most appreciated.

cheers,

Doug


_
Dr. Douglas R. Houston
Lecturer
Room 3.23
Institute of Structural and Molecular Biology
Michael Swann Building
King's Buildings
University of Edinburgh
Edinburgh, EH9 3JR, UK
Tel. 0131 650 7358



[gmx-users] convert gromacs formats and xyz

2013-07-12 Thread maggin
Hi,

Does anybody know how to convert between GROMACS formats and xyz?

Can VMD or catdcd do it?

Thank you very much!

maggin







Re: [gmx-users] convert gromacs formats and xyz

2013-07-12 Thread Mark Abraham
Yes, Google knows quite a lot about this question! Please search before asking.
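
For the archive, one common route is VMD in text mode, which reads GROMACS
.gro/.xtc files through its molfile plugins and can write xyz. A minimal
sketch, assuming VMD is installed; the file names are placeholders:

vmd -dispdev text conf.gro << 'EOF'
animate write xyz conf.xyz
quit
EOF

trjconv interconverts the GROMACS-native formats (.gro, .g96, .pdb, .xtc,
.trr), but it does not write xyz itself, hence the detour through VMD.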

Mark

On Fri, Jul 12, 2013 at 11:43 AM, maggin  wrote:
> Hi,
>
> Does anybody know how to convert between GROMACS formats and xyz?
>
> Can VMD or catdcd do it?
>
> Thank you very much!
>
> maggin
>


Re: [gmx-users] Gromacs installation problem

2013-07-12 Thread Mark Abraham
On Fri, Jul 12, 2013 at 11:18 AM, Douglas Houston
 wrote:
> Hi,
>
> I am having trouble installing Gromacs 4.6.3.
>
> In bash I am using the following sequence of commands:
>
> cd gromacs-4.6.3
> mkdir build
> cd build
> CC=/usr/people/douglas/programs/gcc-4.7.3/installation/bin/gcc
> ~/programs/cmake-2.8.7/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> make
> sudo make install
>
> Everything seems to go OK until the 'sudo make install' stage when I get:
>
> "make: Warning: File `Makefile' has modification time 1.2e+04 s in the
> future
> CMake Error at cmake/gmxDetectClang30.cmake:36 (try_compile):
>   Failed to open
>
>
> /usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/CMakeTmp/CMakeLists.txt
>
>   Permission denied
> Call Stack (most recent call first):
>   CMakeLists.txt:301 (gmx_detect_clang_3_0)
>
>
> CMake Error at CMakeLists.txt:946 (add_subdirectory):
>   add_subdirectory given source "src/contrib/fftw" which is not an existing
>   directory.
>
>
> CMake Error at CMakeLists.txt:956 (MESSAGE):
>   Cannot find FFTW 3 (with correct precision - libfftw3f for
> single-precision
>   GROMACS or libfftw3 for double-precision GROMACS).  Either choose the
> right
>   precision, choose another FFT(W) library, enable the advanced option to
> let
>   GROMACS build FFTW 3 for you, or use the really slow GROMACS built-in
>   fftpack library.
>
>
> -- Configuring incomplete, errors occurred!
> CMake Error: Unable to open check cache file for write.
> /usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/cmake.check_cache
> make: *** [cmake_check_build_system] Error 1"
>
>
> I don't understand the 'permission denied' error on opening CMakeLists.txt -
> when I 'ls' for this file it's not there at all. The "let gromacs build FFTW
> 3 for you" I also don't understand as I'm already using the
> -DGMX_BUILD_OWN_FFTW=ON option.
>
> Any help would be most appreciated.

Your system (or file server) seems a bit broken, if the time seen by
root is very different from that of a file just made by a user. Also,
you seem to have your build tree's file permissions set so that root
can't read the files. I would suggest you plan to install to user file
space, as you have done for gcc and cmake, via cmake ..
-DCMAKE_INSTALL_PREFIX=/your/full/path/here. Now you side-step the
access permissions issue. The subsequent problems are all caused by
that.
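
Concretely, that amounts to something like the following (a sketch reusing
the paths from your message; the install prefix is just an example):

cd ~/programs/gromacs-4.6.3/build
CC=/usr/people/douglas/programs/gcc-4.7.3/installation/bin/gcc \
  ~/programs/cmake-2.8.7/bin/cmake .. \
  -DGMX_BUILD_OWN_FFTW=ON \
  -DCMAKE_INSTALL_PREFIX=$HOME/programs/gromacs-4.6.3/install
make
make install   # plain make install; no sudo needed for a user-space prefix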

Mark

> cheers,
>
> Doug
>
>
> _
> Dr. Douglas R. Houston
> Lecturer
> Room 3.23
> Institute of Structural and Molecular Biology
> Michael Swann Building
> King's Buildings
> University of Edinburgh
> Edinburgh, EH9 3JR, UK
> Tel. 0131 650 7358


[gmx-users] gpu cluster explanation

2013-07-12 Thread Francesco
Hi all,
I'm working with a 200K-atom system (protein + explicit water) and
after a while using a CPU cluster I had to switch to a GPU cluster.
I read both the Acceleration and parallelization and the GROMACS-GPU
documentation pages
(http://www.gromacs.org/Documentation/Acceleration_and_parallelization
and
http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM)
but it's a bit confusing and I need help to check whether I have really
understood correctly. :)
I have 2 types of nodes:
3 GPUs (NVIDIA Tesla M2090) and 2 CPUs with 6 cores each (Intel Xeon E5649 @
2.53GHz)
8 GPUs and 2 CPUs (6 cores each)

1) I can only have 1 MPI rank per GPU, meaning that with 3 GPUs I can have 3
MPI ranks max.
2) Because I have 12 cores I can run 4 OpenMP threads per MPI rank, because
4x3 = 12.

Now if I have a node with 8 GPUs, I can use 4 GPUs:
4 MPI ranks and 3 OpenMP threads each.
Is that right?
Is it possible to use only 8 GPUs and 8 cores?
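
As a sketch, the 3-GPU layout above would look like this in mdrun 4.6 terms,
with one thread-MPI rank per GPU and the GPU ids made explicit (input names
as in my command below):

mdrun -ntmpi 3 -ntomp 4 -gpu_id 012 -dlb yes -s input_50.tpr -deffnm 306s_50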

Using gromacs 4.6.2 and 144 cpu cores I reach 35 ns/day, while with 3
GPUs and 12 cores I get 9-11 ns/day.

the command that I use is:
mdrun -dlb yes -s input_50.tpr -deffnm 306s_50 -v
with the number of GPUs set via the batch script:
#BSUB -n 3

I also tried to set -npme / -nt / -ntmpi / -ntomp, but nothing changes.

The mdp file and some statistics are following:

 START MDP 

title = G6PD wt molecular dynamics (2bhl.pdb) - NPT MD

; Run parameters
integrator  = md; Algorithm options
nsteps  = 25000000  ; maximum number of steps to perform [50 ns]
dt  = 0.002 ; 2 fs = 0.002 ps

; Output control
nstxout= 1 ; [steps] freq to write coordinates to
trajectory, the last coordinates are always written
nstvout= 1 ; [steps] freq to write velocities to
trajectory, the last velocities are always written
nstlog  = 1 ; [steps] freq to write energies to log
file, the last energies are always written
nstenergy = 1  ; [steps] write energies to disk
every nstenergy steps
nstxtcout  = 1 ; [steps] freq to write coordinates to
xtc trajectory
xtc_precision   = 1000  ; precision to write to xtc trajectory
(1000 = default)
xtc_grps= system; which coordinate
group(s) to write to disk 
energygrps  = system; or System / which energy
group(s) to write

; Bond parameters
continuation= yes   ; restarting from npt
constraints = all-bonds ; Bond types to replace by constraints
constraint_algorithm= lincs ; holonomic constraints
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to
accuracy
lincs_warnangle  = 30; [degrees] maximum angle that a bond can
rotate before LINCS will complain

; Neighborsearching
ns_type = grid  ; method of updating neighbor list
cutoff-scheme = Verlet
nstlist = 10; [steps] frequency to update
neighbor list (10)
rlist = 1.0   ; [nm] cut-off distance for the
short-range neighbor list  (1 default)
rcoulomb  = 1.0   ; [nm] long range electrostatic cut-off
rvdw  = 1.0   ; [nm]  long range Van der Waals cut-off

; Electrostatics
coulombtype= PME  ; treatment of long range electrostatic
interactions  
vdwtype = cut-off   ; treatment of Van der Waals
interactions

; Periodic boundary conditions
pbc = xyz  

; Dispersion correction
DispCorr= EnerPres  ; applying long
range dispersion corrections

; Ewald
fourierspacing= 0.12; grid spacing for FFT -
controls the highest magnitude of wave vectors (0.12)
pme_order = 4 ; interpolation order for PME, 4 = cubic
ewald_rtol= 1e-5  ; relative strength of Ewald-shifted
potential at rcoulomb

; Temperature coupling
tcoupl  = nose-hoover   ; temperature
coupling with Nose-Hoover ensemble
tc_grps = Protein Non-Protein
tau_t   = 0.4  0.4      ; [ps] time constant
ref_t   = 310  310      ; [K] reference temperature for coupling (310 K is about 37 °C)

; Pressure coupling
pcoupl  = parrinello-rahman 
pcoupltype      = isotropic ; uniform scaling of box vectors
tau_p           = 2.0       ; [ps] time constant
ref_p           = 1.0       ; [bar] reference pressure for coupling
compressibility = 4.5e-5    ; [bar^-1] isothermal compressibility of water
refcoord_scaling= com       ; see the GROMACS documentation

; Velocity generation
gen_vel = no 

[gmx-users] DCD can not open file 'md_0_1.xtc' for reading

2013-07-12 Thread maggin
On Linux (Ubuntu):

use:
catdcd -o  md_0_1.xyz  md_0_1.xtc

error:
CatDCD 4.0
dcdplugin) unrecognized DCD header:
dcdplugin)   [0]:   1995  [1]: 1059782656
dcdplugin)   [0]: 0x07cb  [1]: 0x3f2b
dcdplugin) read_dcdheader: corruption or unrecognized file structure
Error: could not open file 'md_0_1.xtc' for reading.

How to fix it ?

Thank you very much!

maggin





[gmx-users] Re: DCD can not open file 'md_0_1.xtc' for reading

2013-07-12 Thread maggin
On Red Hat (LINUXAMD64):

use:
catdcd -o  md_0_1.xyz  md_0_1.xtc

error:
CatDCD 4.0
dcdplugin) unrecognized DCD header:
dcdplugin)   [0]:   1995  [1]: 1059782656
dcdplugin)   [0]: 0x07cb  [1]: 0x3f2b
dcdplugin) read_dcdheader: corruption or unrecognized file structure
Error: could not open file 'md_0_1.xtc' for reading.

Something is wrong somewhere; where is it?

Thank you very much!

maggin

By the way:
1. Download catdcd-4.0b.tar.gz
2. tar -zxvf catdcd-4.0b.tar.gz
3. Add path LINUXAMD64/bin/catdcd4.0 to .bash_profile
4. source .bash_profile
5. which catdcd
   ~/softwares/LINUXAMD64/bin/catdcd4.0/catdcd
6. cd LINUXAMD64/bin/catdcd4.0
7. ls
   catdcd  md_0_1.xtc
8. catdcd -o md_0_1.xyz md_0_1.xtc
error:
CatDCD 4.0
dcdplugin) unrecognized DCD header:
dcdplugin)   [0]:   1995  [1]: 1059782656
dcdplugin)   [0]: 0x07cb  [1]: 0x3f2b
dcdplugin) read_dcdheader: corruption or unrecognized file structure
Error: could not open file 'md_0_1.xtc' for reading.









[gmx-users] Re: DCD can not open file 'md_0_1.xtc' for reading

2013-07-12 Thread maggin
I used a pdb to do a test:

 catdcd -o  1dx0.gro  1dx0.pdb


error:

CatDCD 4.0
dcdplugin) unrecognized DCD header:
dcdplugin)   [0]: 1380273473  [1]:  538987346
dcdplugin)   [0]: 0x52454d41  [1]: 0x20204b52
dcdplugin) read_dcdheader: corruption or unrecognized file structure
Error: could not open file '1dx0.pdb' for reading.


It seems catdcd can not work!

maggin





Re: [gmx-users] Re: DCD can not open file 'md_0_1.xtc' for reading

2013-07-12 Thread Justin Lemkul



On 7/12/13 9:34 AM, maggin wrote:

I use pdb to do test :

  catdcd -o  1dx0.gro  1dx0.pdb


error:

CatDCD 4.0
dcdplugin) unrecognized DCD header:
dcdplugin)   [0]: 1380273473  [1]:  538987346
dcdplugin)   [0]: 0x52454d41  [1]: 0x20204b52
dcdplugin) read_dcdheader: corruption or unrecognized file structure
Error: could not open file '1dx0.pdb' for reading.


It seems catdcd can not work!



You should be asking all of these questions on the VMD mailing list, as 
indicated on the catdcd homepage.
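
For the archive: the '1995' in that header dump is the XTC magic number, so
catdcd is simply parsing the .xtc as if it were a DCD. If your catdcd build
supports input/output type flags (run catdcd with no arguments to see its
usage line), something along these lines may work; the flags here are an
assumption about that build, not a tested recipe:

catdcd -o md_0_1.xyz -otype xyz -xtc md_0_1.xtc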


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Associate

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Gromacs installation problem

2013-07-12 Thread Douglas Houston
Thanks a lot Mark, that worked (after I did "setenv LD_LIBRARY_PATH  
${LD_LIBRARY_PATH}:/usr/people/douglas/programs/gromacs-4.6.3/install/lib").
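
For the archive: a GROMACS installation also ships a GMXRC script that sets
these environment variables for you, so sourcing it is an alternative to
editing LD_LIBRARY_PATH by hand (the path below is Doug's install prefix):

source /usr/people/douglas/programs/gromacs-4.6.3/install/bin/GMXRC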



Quoting Mark Abraham  on Fri, 12 Jul 2013  
13:22:01 +0100:



On Fri, Jul 12, 2013 at 11:18 AM, Douglas Houston
 wrote:

Hi,

I am having trouble installing Gromacs 4.6.3.

In bash I am using the following sequence of commands:

cd gromacs-4.6.3
mkdir build
cd build
CC=/usr/people/douglas/programs/gcc-4.7.3/installation/bin/gcc
~/programs/cmake-2.8.7/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
make
sudo make install

Everything seems to go OK until the 'sudo make install' stage when I get:

"make: Warning: File `Makefile' has modification time 1.2e+04 s in the
future
CMake Error at cmake/gmxDetectClang30.cmake:36 (try_compile):
  Failed to open


/usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/CMakeTmp/CMakeLists.txt

  Permission denied
Call Stack (most recent call first):
  CMakeLists.txt:301 (gmx_detect_clang_3_0)


CMake Error at CMakeLists.txt:946 (add_subdirectory):
  add_subdirectory given source "src/contrib/fftw" which is not an existing
  directory.


CMake Error at CMakeLists.txt:956 (MESSAGE):
  Cannot find FFTW 3 (with correct precision - libfftw3f for
single-precision
  GROMACS or libfftw3 for double-precision GROMACS).  Either choose the
right
  precision, choose another FFT(W) library, enable the advanced option to
let
  GROMACS build FFTW 3 for you, or use the really slow GROMACS built-in
  fftpack library.


-- Configuring incomplete, errors occurred!
CMake Error: Unable to open check cache file for write.
/usr/people/douglas/programs/gromacs-4.6.3/build/CMakeFiles/cmake.check_cache
make: *** [cmake_check_build_system] Error 1"


I don't understand the 'permission denied' error on opening CMakeLists.txt -
when I 'ls' for this file it's not there at all. The "let gromacs build FFTW
3 for you" I also don't understand as I'm already using the
-DGMX_BUILD_OWN_FFTW=ON option.

Any help would be most appreciated.


Your system (or file server) seems a bit broken, if the time seen by
root is very different from that of a file just made by a user. Also,
you seem to have your build tree's file permissions set so that root
can't read the files. I would suggest you plan to install to user file
space, as you have done for gcc and cmake, via cmake ..
-DCMAKE_INSTALL_PREFIX=/your/full/path/here. Now you side-step the
access permissions issue. The subsequent problems are all caused by
that.

Mark


cheers,

Doug


_
Dr. Douglas R. Houston
Lecturer
Room 3.23
Institute of Structural and Molecular Biology
Michael Swann Building
King's Buildings
University of Edinburgh
Edinburgh, EH9 3JR, UK
Tel. 0131 650 7358








_
Dr. Douglas R. Houston
Lecturer
Room 3.23
Institute of Structural and Molecular Biology
Michael Swann Building
King's Buildings
University of Edinburgh
Edinburgh, EH9 3JR, UK
Tel. 0131 650 7358



Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread gigo

Hi!

On 2013-07-12 11:15, Mark Abraham wrote:

What does --loadbalance do?


It balances the total number of processes across all allocated nodes. 
The thing is that mpiexec does not know that I want each replica to fork 
to 4 OpenMP threads. Thus, without this option and without affinities 
(in a sec about it) mpiexec starts too many replicas on some nodes - 
gromacs complains about the overload then - while some cores on other 
nodes are not used. It is possible to run my simulation like that:


mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without 
--loadbalance for mpiexec and without -ntomp for mdrun)


Then each replica runs on 4 MPI processes (I allocate 4 times more cores
than replicas and mdrun sees it). The problem is that it is much
slower than using OpenMP for each replica. I did not find any other way
than --loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to
use MPI and OpenMP at the same time on the torque-controlled cluster.



What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?


Each replica logs:
"Using 1 MPI process
Using 4 OpenMP threads",
That is correct. As I said, the threads are forked, but 3 out of 4
don't do anything, and the simulation does not go at all.


About affinities Gromacs says:
"Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers."

Well, the "current platform" is a normal x86_64 cluster, but the whole
information about resources is passed by Torque to the OpenMPI-linked
Gromacs. Can it be that mdrun sees the resources allocated by Torque as
a big pool of cpus and misses the information about node topology?


If you have any suggestions on how to debug or trace this issue, I would
be glad to participate.

Best,
G








Mark

On Fri, Jul 12, 2013 at 3:46 AM, gigo  wrote:

Dear GMXers,
With Gromacs 4.6.2 I was running REMD with 144 replicas. Replicas were
separate MPI jobs of course (OpenMPI 1.6.4). Each replica I run on 4 cores
with OpenMP. Torque is installed on the cluster, which is built of 12-core
nodes, so I used the following script:

#!/bin/tcsh -f
#PBS -S /bin/tcsh
#PBS -N test
#PBS -l nodes=48:ppn=12
#PBS -l walltime=300:00:00
#PBS -l mem=288Gb
#PBS -r n
cd $PBS_O_WORKDIR
mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4
-replex 2000

It was working just great with 4.6.2. It does not work with 4.6.3. The new
version was compiled with the same options in the same environment. Mpiexec
spreads the replicas evenly over the cluster. Each replica forks 4 threads,
but only one of them uses any cpu. Logs end at the citations. Some empty
energy and trajectory files are created, nothing is written to them.
Please let me know if you have any immediate suggestion on how to make it
work (maybe based on some differences between versions), or if I should file
a bug report with all the technical details.
Best Regards,

Grzegorz Wieczorek





Re: [gmx-users] gpu cluster explanation

2013-07-12 Thread Richard Broadbent



On 12/07/13 13:26, Francesco wrote:

Hi all,
I'm working with a 200K-atom system (protein + explicit water) and
after a while using a CPU cluster I had to switch to a GPU cluster.
I read both the Acceleration and parallelization and the GROMACS-GPU
documentation pages
(http://www.gromacs.org/Documentation/Acceleration_and_parallelization
and
http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM)
but it's a bit confusing and I need help to check whether I have really
understood correctly. :)
I have 2 types of nodes:
3 GPUs (NVIDIA Tesla M2090) and 2 CPUs with 6 cores each (Intel Xeon E5649 @
2.53GHz)
8 GPUs and 2 CPUs (6 cores each)

1) I can only have 1 MPI rank per GPU, meaning that with 3 GPUs I can have 3
MPI ranks max.
2) Because I have 12 cores I can run 4 OpenMP threads per MPI rank, because
4x3 = 12.

Now if I have a node with 8 GPUs, I can use 4 GPUs:
4 MPI ranks and 3 OpenMP threads each.
Is that right?
Is it possible to use only 8 GPUs and 8 cores?


You could set -ntomp and set up MPI/thread-MPI to use just 8 cores. However,
a system that unbalanced (a huge amount of GPU power with comparatively
little CPU power) is unlikely to get great performance.
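
As a sketch, such an 8-GPU layout would be (one thread-MPI rank per GPU, one
core per rank; input names taken from the command quoted below, and, as said,
an illustration rather than a recommendation):

mdrun -ntmpi 8 -ntomp 1 -gpu_id 01234567 -dlb yes -s input_50.tpr -deffnm 306s_50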


Using gromacs 4.6.2 and 144 cpu cores I reach 35 ns/day, while with 3
GPUs and 12 cores I get 9-11 ns/day.

That slowdown is in line with what I got when I tried a similar CPU-GPU
setup. That said, others might have some advice that will improve your
performance.



the command that I use is:
mdrun -dlb yes -s input_50.tpr -deffnm 306s_50 -v
with the number of GPUs set via the batch script:
#BSUB -n 3

I also tried to set -npme / -nt / -ntmpi / -ntomp, but nothing changes.

The mdp file and some statistics are following:

 START MDP 

title = G6PD wt molecular dynamics (2bhl.pdb) - NPT MD

; Run parameters
integrator  = md; Algorithm options
nsteps  = 25000000  ; maximum number of steps to perform [50 ns]
dt  = 0.002 ; 2 fs = 0.002 ps

; Output control
nstxout= 1 ; [steps] freq to write coordinates to
trajectory, the last coordinates are always written
nstvout= 1 ; [steps] freq to write velocities to
trajectory, the last velocities are always written
nstlog  = 1 ; [steps] freq to write energies to log
file, the last energies are always written
nstenergy = 1  ; [steps] write energies to disk
every nstenergy steps
nstxtcout  = 1 ; [steps] freq to write coordinates to
xtc trajectory
xtc_precision   = 1000  ; precision to write to xtc trajectory
(1000 = default)
xtc_grps= system; which coordinate
group(s) to write to disk
energygrps  = system; or System / which energy
group(s) to write

; Bond parameters
continuation= yes   ; restarting from npt
constraints = all-bonds ; Bond types to replace by constraints
constraint_algorithm= lincs ; holonomic constraints
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to
accuracy
lincs_warnangle  = 30; [degrees] maximum angle that a bond can
rotate before LINCS will complain



That seems a little loose for constraints, but setting that up and
checking that it conserves energy and preserves bond lengths is something
you'll have to do yourself.


Richard

; Neighborsearching
ns_type = grid  ; method of updating neighbor list
cutoff-scheme = Verlet
nstlist = 10; [steps] frequency to update
neighbor list (10)
rlist = 1.0   ; [nm] cut-off distance for the
short-range neighbor list  (1 default)
rcoulomb  = 1.0   ; [nm] long range electrostatic cut-off
rvdw  = 1.0   ; [nm]  long range Van der Waals cut-off

; Electrostatics
coulombtype= PME  ; treatment of long range electrostatic
interactions
vdwtype = cut-off   ; treatment of Van der Waals
interactions

; Periodic boundary conditions
pbc = xyz

; Dispersion correction
DispCorr= EnerPres  ; applying long
range dispersion corrections

; Ewald
fourierspacing= 0.12; grid spacing for FFT -
controls the highest magnitude of wave vectors (0.12)
pme_order = 4 ; interpolation order for PME, 4 = cubic
ewald_rtol= 1e-5  ; relative strength of Ewald-shifted
potential at rcoulomb

; Temperature coupling
tcoupl  = nose-hoover   ; temperature
coupling with Nose-Hoover ensemble
tc_grps = Protein Non-Protein
tau_t   = 0.4  0.4      ; [ps] time constant
ref_t   = 310  310      ; [K] reference temperature for coupling (310 K is about 37 °C)

; Pressure coupling
pcoupl  = parrinello-rahman
pcoupltype= isotro

Re: [gmx-users] Umbrella Sampling settings

2013-07-12 Thread Justin Lemkul



On 7/12/13 11:32 AM, Shima Arasteh wrote:




All right.
As I said earlier, my system is a lipid bilayer. A channel is inserted in it
and I want to run US on this system.
An ion is placed in the center of each window, the reaction coordinate is
set to z, so the group which is pulled is an ion, and my reference group would
be the COM of the protein. But I don't know exactly what statements I am
supposed to write in the mdp settings:
; Pull code
pull            = umbrella
pull_geometry   = position
pull_dim        = N N Y
pull_start      = yes
pull_ngroups    = 1
pull_group0     = COM of protein
pull_group1     = ion
pull_init1      = 0
pull_rate1      = 0.0
pull_k1         = 4000  ; kJ mol^-1 nm^-2
pull_nstxout    = 1000  ; every 2 ps
pull_nstfout    = 1000  ; every 2 ps


In fact, to implement such settings, how do I make the US setup understand that
the COM of the protein is the reference group and the proposed ion is the pulled group?

Would you please give me any suggestions?



You got a very thorough response already today:

http://lists.gromacs.org/pipermail/gmx-users/2013-July/082855.html

I see that your settings are now different, using "position" geometry instead of 
"distance," which is good because that's a better approach for your system. 
What you haven't specified is pull_vec1, which is necessary when using 
"position" geometry.


All of these details are discussed to some extent in my umbrella sampling 
tutorial; it should certainly serve as a basic guide.  What you're trying to do 
is ultimately going to require a slightly different approach, but the general 
principles and explanations of .mdp terms are the same.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Associate

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] remd

2013-07-12 Thread gigo

Hi!

On 2013-07-12 07:58, Shine A wrote:

Hi Sir,

Is it possible to run an REMD simulation having 16 replicas on a
cluster (group of CPUs) having 8 nodes? Here each node has 8 processors.


It is possible. If you have Gromacs (version >= 4.6) compiled with MPI,
you specify the number of replicas (-multi 16) in the mdrun command,
and 64 processors are allocated by mpirun, then mdrun should start 4 MPI
processes per replica. It worked for me, at least. With OpenMP
parallelization it would run faster, though I have some problems with it.
Read the latest posts in "Problems with REMD in Gromacs 4.6.3".
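
A minimal sketch of that setup; with -s remd.tpr, -multi expects the
numbered inputs remd0.tpr ... remd15.tpr, and the -replex value here is just
a placeholder:

#PBS -l nodes=8:ppn=8
mpirun -np 64 mdrun_mpi -multi 16 -s remd.tpr -replex 1000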

Best,
G


Re: [gmx-users] Umbrella Sampling settings

2013-07-12 Thread Shima Arasteh
Yes, I got Thomas' response and I am so grateful for it. :-)

Also many many thanks for your response Justin.

Although I don't know the definition of pull_vec yet and need to study it,
would you please let me know whether grompp understands what I wrote as
the COM of the protein or not? And whether it recognizes which ion I mean to
be pulled among the many ions in the whole system? How does that work?


 
Sincerely,
Shima


- Original Message -
From: Justin Lemkul 
To: Shima Arasteh ; Discussion list for GROMACS 
users 
Cc: 
Sent: Friday, July 12, 2013 8:16 PM
Subject: Re: [gmx-users] Umbrella Sampling settings



On 7/12/13 11:32 AM, Shima Arasteh wrote:
>
>
>
> All right.
> As I said earlier, my system is a lipid bilayer. A channel is inserted in it
> and I want to run US on this system.
> An ion is placed in the center of each window, the reaction coordinate is
> set to z, so the group which is pulled is an ion, and my reference group
> would be the COM of the protein. But I don't know exactly what statements I
> am supposed to write in the mdp settings:
> ; Pull code
> pull            = umbrella
> pull_geometry   = position
> pull_dim        = N N Y
> pull_start      = yes
> pull_ngroups    = 1
> pull_group0     = COM of protein
> pull_group1     = ion
> pull_init1      = 0
> pull_rate1      = 0.0
> pull_k1         = 4000      ; kJ mol^-1 nm^-2
> pull_nstxout    = 1000      ; every 2 ps
> pull_nstfout    = 1000      ; every 2 ps
>
>
> In fact, to implement such settings, how do I make the US setup understand
> that the COM of the protein is the reference group and the proposed ion is
> the pulled group?
>
> Would you please give me any suggestions?
>

You got a very thorough response already today:

http://lists.gromacs.org/pipermail/gmx-users/2013-July/082855.html

I see that your settings are now different, using "position" geometry instead 
of 
"distance," which is good because that's a better approach for your system. 
What you haven't specified is pull_vec1, which is necessary when using 
"position" geometry.

All of these details are discussed to some extent in my umbrella sampling 
tutorial; it should certainly serve as a basic guide.  What you're trying to do 
is ultimately going to require a slightly different approach, but the general 
principles and explanations of .mdp terms are the same.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Postdoctoral Associate

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==



[gmx-users] Re: Umbrella Sampling settings

2013-07-12 Thread Thomas Schlesier

In GROMACS, groups are called via the *.ndx file (default: index.ndx).
Be aware that 'pull_dim' determines in which directions (x,y,z) the
umbrella potential acts. So use N N Y if you want the ion to be able to
move freely (considering the pull) in the xy-plane, and Y Y Y if you want
to also restrict the movement in the xy-plane.
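
A sketch of building such a group with make_ndx; the residue number 423 is a
made-up placeholder for the chosen ion, and the group number printed by
make_ndx (19 here) will differ per system:

make_ndx -f conf.gro -o index.ndx << 'EOF'
r 423
name 19 pulled_ion
q
EOF

Passing -n index.ndx to grompp then makes 'pulled_ion' (and the default
'Protein' group) available to pull_group0/pull_group1.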



On 12.07.2013 17:32, gmx-users-requ...@gromacs.org wrote:

All right.
As I said earlier, my system is a lipid bilayer. A channel is inserted in it
and I want to run US on this system.
An ion is placed in the center of each window, the reaction coordinate is
set to z, so the group which is pulled is an ion, and my reference group would
be the COM of the protein. But I don't know exactly what statements I am
supposed to write in the mdp settings:
; Pull code
pull            = umbrella
pull_geometry   = position
pull_dim        = N N Y
pull_start      = yes
pull_ngroups    = 1
pull_group0     = COM of protein
pull_group1     = ion
pull_init1      = 0
pull_rate1      = 0.0
pull_k1         = 4000      ; kJ mol^-1 nm^-2
pull_nstxout    = 1000      ; every 2 ps
pull_nstfout    = 1000      ; every 2 ps


In fact, to implement such settings, how do I make the US setup understand that
the COM of the protein is the reference group and the proposed ion is the pulled group?

Would you please give me any suggestions?

Thanks for all your time and consideration.

Sincerely,
Shima


- Original Message -
From: Justin Lemkul
To: Discussion list for GROMACS users
Cc:
Sent: Friday, July 12, 2013 1:41 AM
Subject: Re: [gmx-users] Umbrella Sampling settings



On 7/11/13 5:10 PM, Shima Arasteh wrote:

>Thanks for your reply.
>
>But I don't understand why these extra lines need to be set when they are not
practically advantageous! :-(
>

There's nothing "extra."  Everything here has a functional purpose.

-Justin


>
>Sincerely,
>Shima
>
>
>- Original Message -
>From: Justin Lemkul
>To: Shima Arasteh; Discussion list for GROMACS 
users
>Cc:
>Sent: Friday, July 12, 2013 1:37 AM
>Subject: Re: [gmx-users] Umbrella Sampling settings
>
>
>
>On 7/11/13 4:21 PM, Shima Arasteh wrote:

>>Hi,
>>
>>I want to run Umbrella Sampling on my system. In the initial configurations, 
>>an ion is located in the center of the window.
>>Some mdp file settings for running US, as I found in the US tutorial, are:
>>; Pull code
>>pull            = umbrella
>>pull_geometry   = distance
>>pull_dim        = N N Y
>>pull_start      = yes
>>pull_ngroups    = 1
>>pull_group0     = Chain_B
>>pull_group1     = Chain_A
>>pull_init1      = 0
>>pull_rate1      = 0.0
>>pull_k1         = 4000      ; kJ mol^-1 nm^-2
>>pull_nstxout    = 1000      ; every 2 ps
>>pull_nstfout    = 1000      ; every 2 ps
>>
>>
>>But I'd like to know which lines are specifically for US? Because in this 
step, no group is supposed to be pulled but there are some lines written here related 
to pulling!
>>

>
>All of them are related to umbrella sampling. Pulling (steered MD) and umbrella
>sampling simply use common parts of the "pull code" in Gromacs because US
>requires a restraint potential. Whether or not that restraint potential induces
>net displacement (steering, i.e. non-zero pull_rate) or not (zero pull rate,
>restrain to a given set of conditions) is the only difference. Both processes
>require reference and "pull" groups, geometry information, etc.
>
>-Justin
>




Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread Mark Abraham
On Fri, Jul 12, 2013 at 4:27 PM, gigo  wrote:
> Hi!
>
> On 2013-07-12 11:15, Mark Abraham wrote:
>>
>> What does --loadbalance do?
>
>
> It balances the total number of processes across all allocated nodes.

OK, but using it means you are hostage to its assumptions about balance.

> The
> thing is that mpiexec does not know that I want each replica to fork to 4
> OpenMP threads. Thus, without this option and without affinities (in a sec
> about it) mpiexec starts too many replicas on some nodes - gromacs complains
> about the overload then - while some cores on other nodes are not used. It
> is possible to run my simulation like that:
>
> mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without
> --loadbalance for mpiexec and without -ntomp for mdrun)
>
> Then each replica runs on 4 MPI processes (I allocate 4 times more cores
> than replicas and mdrun sees it). The problem is that it is much slower than
> using OpenMP for each replica. I did not find any other way than
> --loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to use MPI
> and OpenMP at the same time on the torque-controlled cluster.

That seems highly surprising. I have not yet encountered a job
scheduler that was completely lacking a "do what I tell you" layout
scheme. More importantly, why are you using #PBS -l nodes=48:ppn=12?
Surely you want 3 MPI processes per 12-core node?
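
(For what it's worth, OpenMPI's mpiexec has explicit layout flags for this; a 
sketch, with option spellings to be checked against your 1.6.4 man page:

mpiexec -np 144 -npernode 3 mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 -replex 2000

which requests exactly 3 MPI processes per node, leaving 4 cores per process 
for OpenMP threads.)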

>> What do the .log files say about
>> OMP_NUM_THREADS, thread affinities, pinning, etc?
>
>
> Each replica logs:
> "Using 1 MPI process
> Using 4 OpenMP threads",
> That is correct. As I said, the threads are forked, but 3 out of 4 don't
> do anything, and the simulation does not go at all.
>
> About affinities Gromacs says:
> "Can not set thread affinities on the current platform. On NUMA systems this
> can cause performance degradation. If you think your platform should support
> setting affinities, contact the GROMACS developers."
>
> Well, the "current platform" is a normal x86_64 cluster, but the whole
> information about resources is passed by Torque to OpenMPI-linked Gromacs.
> Can it be that mdrun sees the resources allocated by torque as a big pool of
> cpus and misses the information about the nodes' topology?

mdrun gets its processor topology from the MPI layer, so that is where
you need to focus. The error message confirms that GROMACS sees things
that seem wrong.

Mark

>
> If you have any suggestions how to debug or trace this issue, I would be
> glad to participate.
> Best,
>
> G
>
>
>
>
>
>
>>
>> Mark
>>
>> On Fri, Jul 12, 2013 at 3:46 AM, gigo  wrote:
>>>
>>> Dear GMXers,
>>> With Gromacs 4.6.2 I was running REMD with 144 replicas. Replicas were
>>> separate MPI jobs of course (OpenMPI 1.6.4). Each replica I run on 4
>>> cores
>>> with OpenMP. There is Torque installed on the cluster built of 12-core
>>> nodes, so I used the following script:
>>>
>>> #!/bin/tcsh -f
>>> #PBS -S /bin/tcsh
>>> #PBS -N test
>>> #PBS -l nodes=48:ppn=12
>>> #PBS -l walltime=300:00:00
>>> #PBS -l mem=288Gb
>>> #PBS -r n
>>> cd $PBS_O_WORKDIR
>>> mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4
>>> -replex 2000
>>>
>>> It was working just great with 4.6.2. It does not work with 4.6.3. The
>>> new
>>> version was compiled with the same options in the same environment.
>>> Mpiexec
>>> spreads the replicas evenly over the cluster. Each replica forks 4
>>> threads,
>>> but only one of them uses any cpu. Logs end at the citations. Some empty
>>> energy and trajectory files are created, nothing is written to them.
>>> Please let me know if you have any immediate suggestion on how to make it
>>> work (maybe based on some differences between versions), or if I should
>>> file
>>> the bug report with all the technical details.
>>> Best Regards,
>>>
>>> Grzegorz Wieczorek
>>>

[gmx-users] request for a/v material for promotional video

2013-07-12 Thread Mark Abraham
Hi,

To support an upcoming promotional video being prepared by the CRESTA
project (http://cresta-project.eu) which provides some of the funding
support for GROMACS development, it would be nice to have some
real-life examples of work being done with GROMACS as the visual
element while a voice-over talks very generally about the kinds of
things that GROMACS can do. In particular, if anybody can contribute a
still (or short trajectory) of

1) an enzyme system (e.g. ligand being pulled away from a reaction site), or

2) a system with all three of protein + lipid + nucleic acid

then that would be really great! We won't be talking about the detail
of the systems at all (the voice recordings are already made!), and
the science can be complete junk if that's all you have, but some
visuals of wiggling atoms will make a nice break from my talking head!
:-)

Do drop me an email if you think you have something you're happy for
us to use for this. Unfortunately, we won't be able to acknowledge you
in such a short video.

Thanks in advance!

Mark


Re: [gmx-users] Re: Umbrella Sampling settings

2013-07-12 Thread Shima Arasteh
Thanks for your replies. :-)
So, if I want the ion to move only in the z-direction, I need to set 'pull_dim' 
to Y Y N? Correct?

But in the tutorial Justin writes: pull_dim = N N Y: we are pulling only in the 
z-dimension. Thus, x and y are set to "no" (N) and z is set to "yes" (Y).

So what should I do?! 


 
Sincerely,
Shima


- Original Message -
From: Thomas Schlesier 
To: gmx-users@gromacs.org
Cc: 
Sent: Friday, July 12, 2013 9:04 PM
Subject: [gmx-users] Re: Umbrella Sampling settings

In GROMACS, groups are called via the *.ndx file (default: index.ndx).
Be aware that 'pull_dim' determines in which directions (x,y,z) the 
umbrella potential acts. So use N N Y if you want the ion to move freely 
(apart from the pull) in the xy-plane, and Y Y Y if you want to also 
restrict the movement in the xy-plane.


On 12.07.2013 17:32, gmx-users-requ...@gromacs.org wrote:
> Alright.
> As I said earlier, my system is a lipid bilayer. A channel is inserted in it 
> and I want to run US on this system.
> An ion is considered in the center of each window, the reaction coordinate is 
> set to z, so the group which is pulled is an ion, and my ref group would be 
> the COM of the protein. But I don't know what exactly I am supposed to write 
> in the mdp settings:
> ; Pull code
> pull            = umbrella
> pull_geometry   = position
> pull_dim        = N N Y
> pull_start      = yes
> pull_ngroups    = 1
> pull_group0     = COM of protein
> pull_group1     = ion
> pull_init1      = 0
> pull_rate1      = 0.0
> pull_k1         = 4000      ; kJ mol^-1 nm^-2
> pull_nstxout    = 1000      ; every 2 ps
> pull_nstfout    = 1000      ; every 2 ps
>
>
> In fact, to implement such settings, how do I make the US setup understand 
> that the COM of the protein is the reference group and the proposed ion is 
> the pulled group?
>
> Would you please give me any suggestions?
>
> Thanks for all your time and consideration.
>
> Sincerely,
> Shima
>
>
> - Original Message -
> From: Justin Lemkul
> To: Discussion list for GROMACS users
> Cc:
> Sent: Friday, July 12, 2013 1:41 AM
> Subject: Re: [gmx-users] Umbrella Sampling settings
>
>
>
> On 7/11/13 5:10 PM, Shima Arasteh wrote:
>> >Thanks for your reply.
>> >
>> >But I don't understand why these extra lines need to be set when they 
>> >are not practically advantageous! :-(
>> >
> There's nothing "extra." Everything here has a functional purpose.
>
> -Justin
>
>> >
>> >Sincerely,
>> >Shima
>> >
>> >
>> >- Original Message -
>> >From: Justin Lemkul
>> >To: Shima Arasteh; Discussion list for GROMACS 
>> >users
>> >Cc:
>> >Sent: Friday, July 12, 2013 1:37 AM
>> >Subject: Re: [gmx-users] Umbrella Sampling settings
>> >
>> >
>> >
>> >On 7/11/13 4:21 PM, Shima Arasteh wrote:
>>> >>Hi,
>>> >>
>>> >>I want to run Umbrella Sampling on my system. In the initial 
>>> >>configurations, an ion is located in the center of the window.
>>> >>Some mdp file settings for running US, as I found in the US tutorial, are:
>>> >>; Pull code
>>> >>pull            = umbrella
>>> >>pull_geometry   = distance
>>> >>pull_dim        = N N Y
>>> >>pull_start      = yes
>>> >>pull_ngroups    = 1
>>> >>pull_group0     = Chain_B
>>> >>pull_group1     = Chain_A
>>> >>pull_init1      = 0
>>> >>pull_rate1      = 0.0
>>> >>pull_k1         = 4000      ; kJ mol^-1 nm^-2
>>> >>pull_nstxout    = 1000      ; every 2 ps
>>> >>pull_nstfout    = 1000      ; every 2 ps
>>> >>
>>> >>
>>> >>But I'd like to know which lines are specifically for US? Because in this 
>>> >>step, no group is supposed to be pulled but there are some lines written 
>>> >>here related to pulling!
>>> >>
>> >
>> >All of them are related to umbrella sampling. Pulling (steered MD) and umbrella
>> >sampling simply use common parts of the "pull code" in Gromacs because US
>> >requires a restraint potential. Whether or not that restraint potential induces
>> >net displacement (steering, i.e. non-zero pull_rate) or not (zero pull rate,
>> >restrain to a given set of conditions) is the only difference. Both processes
>> >require reference and "pull" groups, geometry information, etc.
>> >
>> >-Justin
>> >



Re: [gmx-users] Re: Umbrella Sampling settings

2013-07-12 Thread Justin Lemkul



On 7/12/13 2:28 PM, Shima Arasteh wrote:

Thanks for your replies. :-)
So, if I want the ion to move only in the z-direction, I need to set 'pull_dim' 
to Y Y N? Correct?

But in the tutorial Justin writes: pull_dim = N N Y: we are pulling only in the 
z-dimension. Thus, x and y are set to "no" (N) and z is set to "yes" (Y).

So what should I do?!



The pull_dim setting is irrelevant when using position geometry.  Only pull_vec1 
matters here.
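
For a restraint acting along z, that would be something like (a sketch):

pull_vec1 = 0 0 1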


-Justin




Sincerely,
Shima


- Original Message -
From: Thomas Schlesier 
To: gmx-users@gromacs.org
Cc:
Sent: Friday, July 12, 2013 9:04 PM
Subject: [gmx-users] Re: Umbrella Sampling settings

In GROMACS, groups are called via the *.ndx file (default: index.ndx).
Be aware that 'pull_dim' determines in which directions (x,y,z) the
umbrella potential acts. So use N N Y if you want the ion to move freely
(apart from the pull) in the xy-plane, and Y Y Y if you want to also
restrict the movement in the xy-plane.


On 12.07.2013 17:32, gmx-users-requ...@gromacs.org wrote:

Alright.
As I said earlier, my system is a lipid bilayer. A channel is inserted in it 
and I want to run US on this system.
An ion is considered in the center of each window, the reaction coordinate is 
set to z, so the group which is pulled is an ion, and my ref group would be the 
COM of the protein. But I don't know what exactly I am supposed to write in the 
mdp settings:
; Pull code
pull            = umbrella
pull_geometry   = position
pull_dim        = N N Y
pull_start      = yes
pull_ngroups    = 1
pull_group0     = COM of protein
pull_group1     = ion
pull_init1      = 0
pull_rate1      = 0.0
pull_k1         = 4000      ; kJ mol^-1 nm^-2
pull_nstxout    = 1000      ; every 2 ps
pull_nstfout    = 1000      ; every 2 ps


In fact, to implement such settings, how do I make the US setup understand that 
the COM of the protein is the reference group and the proposed ion is the 
pulled group?

Would you please give me any suggestions?

Thanks for all your time and consideration.

Sincerely,
Shima


- Original Message -
From: Justin Lemkul
To: Discussion list for GROMACS users
Cc:
Sent: Friday, July 12, 2013 1:41 AM
Subject: Re: [gmx-users] Umbrella Sampling settings



On 7/11/13 5:10 PM, Shima Arasteh wrote:

Thanks for your reply.

But I don't understand why these extra lines need to be set when they are 
not practically advantageous! :-(


There's nothing "extra." Everything here has a functional purpose.

-Justin



Sincerely,
Shima


- Original Message -
From: Justin Lemkul
To: Shima Arasteh; Discussion list for GROMACS 
users
Cc:
Sent: Friday, July 12, 2013 1:37 AM
Subject: Re: [gmx-users] Umbrella Sampling settings



On 7/11/13 4:21 PM, Shima Arasteh wrote:

Hi,

I want to run Umbrella Sampling on my system. In the initial configurations, an 
ion is located in the center of the window.
Some mdp file settings for running US, as I found in the US tutorial, are:
; Pull code
pull            = umbrella
pull_geometry   = distance
pull_dim        = N N Y
pull_start      = yes
pull_ngroups    = 1
pull_group0     = Chain_B
pull_group1     = Chain_A
pull_init1      = 0
pull_rate1      = 0.0
pull_k1         = 4000      ; kJ mol^-1 nm^-2
pull_nstxout    = 1000      ; every 2 ps
pull_nstfout    = 1000      ; every 2 ps


But I'd like to know which lines are specifically for US? Because in this step, 
no group is supposed to be pulled but there are some lines written here related 
to pulling!



All of them are related to umbrella sampling. Pulling (steered MD) and umbrella
sampling simply use common parts of the "pull code" in Gromacs because US
requires a restraint potential. Whether or not that restraint potential induces
net displacement (steering, i.e. non-zero pull_rate) or not (zero pull rate,
restrain to a given set of conditions) is the only difference. Both processes
require reference and "pull" groups, geometry information, etc.

-Justin







[gmx-users] qm-mm calculation

2013-07-12 Thread Nilesh Dhumal
Hello,
I am trying to run the qm-mm gas phase calculations for my system.

 I am using the following in the md.mdp file.

title   =  cpeptide MD
cpp =  /usr/bin/cpp
integrator  =  md
dt  =  0.001; ps !
nsteps  =  500 ; total 0.5 ps.
nstcomm =  1
nstxout =  1
nstvout =  1
nstfout =  1
nstlist =  1
ns_type = simple
rlist   =  0.0
rcoulomb=  0.0
rvdw=  0.0
coulombtype = cut-off
vdwtype = cut-off
pbc = no
fourierspacing  = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes
; Nose-Hoover temperature coupling is on
Tcoupl = nose-hoover
tau_t = 0.1
tc-grps  =system
ref_t =   350
; Pressure coupling is off
Pcoupl  = no ;Parrinello-Rahman
pcoupltype  = isotropic
tau_p   =  2.0
compressibility =  4.5e-5
ref_p   =  1.0
; Generate velocities is on at 350 K.
gen_vel =  yes
gen_temp=  350.0
gen_seed=  173529
QMMM = yes
QMMM-grps= System
QMmethod = RHF
QMbasis  = 3-21G
QMcharge = 0
QMmult   = 1


I could run grompp, but for mdrun I am getting the following error.

Back Off! I just backed up md.log to ./#md.log.2#
Reading file 1.tpr, VERSION 4.0.7 (single precision)
QM/MM calculation requested.
there we go!
Layer 0
nr of QM atoms 32
QMlevel: RHF/3-21G

number of CPUs for gaussian = 1
memory for gaussian = 5000
accuracy in l510 = 8
NOT using cp-mcscf in l1003
Level of SA at start = 0
[c63:25888] *** Process received signal ***
[c63:25888] Signal: Segmentation fault (11)
[c63:25888] Signal code: Address not mapped (1)
[c63:25888] Failing at address: (nil)
[c63:25888] [ 0] /lib64/libpthread.so.0 [0x2b2d89ac1a90]
[c63:25888] [ 1] /lib64/libc.so.6(strlen+0x40) [0x2b2d89d4b590]
[c63:25888] [ 2] /lib64/libc.so.6(fputs+0x1e) [0x2b2d89d33dde]
[c63:25888] [ 3] mdrun(init_gaussian+0x4c3) [0x51e993]
[c63:25888] [ 4] mdrun(init_QMMMrec+0xef8) [0x517578]
[c63:25888] [ 5] mdrun(mdrunner+0x1019) [0x42e109]
[c63:25888] [ 6] mdrun(main+0x3c5) [0x434b95]
[c63:25888] [ 7] /lib64/libc.so.6(__libc_start_main+0xe6) [0x2b2d89ced586]
[c63:25888] [ 8] mdrun [0x415cb9]
[c63:25888] *** End of error message ***
Segmentation fault

Could you tell me what the problem is?

Nilesh




[gmx-users] Maxwell-Stefan diffusion coefficient

2013-07-12 Thread Rasoul Nasiri
Hello all,

Is it possible to calculate the molecular diffusion of multi-component
systems in the gas phase with GROMACS?

This quantity is very important in the evaporation of fluids, when the liquid
and vapour phases are in a quasi-equilibrium state.

Any help would be highly appreciated.

Best
Rasoul


[gmx-users] editconf: Invalid command line argument: –f

2013-07-12 Thread Jonathan Saboury
I am following "Tutorial 1" from
https://extras.csc.fi/chem/courses/gmx2007/tutorial1/index.html

I try the command "editconf –f conf.gro –bt dodecahedron –d 0.5 –o box.gro"
but I get the error:

"Program editconf, VERSION 4.5.5
Source code file: /build/buildd/gromacs-4.5.5/src/gmxlib/statutil.c, line:
819

Invalid command line argument:
–f
"

Here are the files I am currently using:
http://www.sendspace.com/file/a2twvx


What is the problem? Thanks!

-Jonathan


Re: [gmx-users] Problems with REMD in Gromacs 4.6.3

2013-07-12 Thread gigo

On 2013-07-12 20:00, Mark Abraham wrote:

On Fri, Jul 12, 2013 at 4:27 PM, gigo  wrote:

Hi!

On 2013-07-12 11:15, Mark Abraham wrote:


What does --loadbalance do?



It balances the total number of processes across all allocated nodes.


OK, but using it means you are hostage to its assumptions about 
balance.


That's true, but as long as I do not try to use more resources than 
torque gives me, everything is OK. The question is: what is the proper way 
of running multiple simulations in parallel with MPI, each further 
parallelized with OpenMP, when pinning fails? I could not find any 
other way.





The
thing is that mpiexec does not know that I want each replica to fork to 4
OpenMP threads. Thus, without this option and without affinities (in a sec
about it) mpiexec starts too many replicas on some nodes - gromacs complains
about the overload then - while some cores on other nodes are not used. It
is possible to run my simulation like that:

mpiexec mdrun_mpi -v -cpt 20 -multi 144 -replex 2000 -cpi (without
--loadbalance for mpiexec and without -ntomp for mdrun)

Then each replica runs on 4 MPI processes (I allocate 4 times more cores
than replicas and mdrun sees it). The problem is that it is much slower than
using OpenMP for each replica. I did not find any other way than
--loadbalance in mpiexec and then -multi 144 -ntomp 4 in mdrun to use MPI
and OpenMP at the same time on the torque-controlled cluster.


That seems highly surprising. I have not yet encountered a job
scheduler that was completely lacking a "do what I tell you" layout
scheme. More importantly, why are you using #PBS -l nodes=48:ppn=12?


I think that torque is very similar to all PBS-like resource managers 
in this regard. It actually does what I tell it to do. There are 12-core 
nodes, I ask for 48 of them - I get them (a simple #PBS -l ncpus=576 does 
not work), end of story. Now, the program that I run is responsible for 
populating the resources that I got.



Surely you want 3 MPI processes per 12-core node?


Yes - I want each node to run 3 MPI processes. Preferably, I would like 
to run each MPI process on a separate node (spread over 12 cores with 
OpenMP), but I will not get that many resources. But again, without the 
--loadbalance hack I would not be able to populate the nodes properly...
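
(One alternative I can think of, sketched under the assumption that our OpenMPI 
accepts a -hostfile: build a machinefile with 3 slots per node from Torque's 
node list,

sort -u $PBS_NODEFILE | awk '{for (i = 0; i < 3; i++) print}' > hosts3
mpiexec -np 144 -hostfile hosts3 mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 -replex 2000

but I have not verified this on our cluster.)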





What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?



Each replica logs:
"Using 1 MPI process
Using 4 OpenMP threads",
That is correct. As I said, the threads are forked, but 3 out of 4 don't
do anything, and the simulation does not go at all.

About affinities Gromacs says:
"Can not set thread affinities on the current platform. On NUMA 
systems this
can cause performance degradation. If you think your platform should 
support

setting affinities, contact the GROMACS developers."

Well, the "current platform" is a normal x86_64 cluster, but the whole
information about resources is passed by Torque to OpenMPI-linked Gromacs.
Can it be that mdrun sees the resources allocated by torque as a big pool of
cpus and misses the information about the nodes' topology?


mdrun gets its processor topology from the MPI layer, so that is where
you need to focus. The error message confirms that GROMACS sees things
that seem wrong.


Thank you, I will take a look. But the first thing I want to do is 
to find the reason why Gromacs 4.6.3 is not able to run on my (slightly 
weird, I admit) setup, while 4.6.2 does it very well.

Best,

Grzegorz


[gmx-users] About Solvation dynamics

2013-07-12 Thread Hari Pandey
Hi GROMACS users,

Could somebody please tell me how to solvate a reverse micelle with a fixed 
number (200) of water molecules kept in a fixed annular region around it? That 
is, how do I solvate a spherical micelle with water so that the width of the 
spherical shell around it is fixed?
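
I have seen that genbox has -shell and -maxsol options that may target exactly 
this, e.g. (the file names and the 1.0 nm shell thickness are just guesses):

genbox -cp micelle.gro -cs spc216.gro -shell 1.0 -maxsol 200 -o solvated.gro

but I am not sure this is the right approach.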
Lots of thanks for the help.

Hari



[gmx-users] How do I make an AOT reverse micell, which package I should use

2013-07-12 Thread Hari Pandey
Hi all GROMACS users,

I need to make a pdb file of an AOT reverse micelle. Please can somebody tell 
me how to build it and which package would be best for this work. Right now I 
am using PACKMOL, but it seems to do just a geometrical, mathematical 
manipulation. I want to arrange the charges, LJ parameters, hydrogen bond 
lengths, protonation state of the water molecules, and the proper orientations 
(angles) as well. I don't know how to use all these parameters in PACKMOL, so 
please advise me which package could be good for this purpose.
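
For the geometry part alone, I can write a minimal PACKMOL input like the 
sketch below (the radii, counts and file names are guesses; as far as I 
understand, charges and LJ parameters would come later from the force field 
topology, not from PACKMOL):

tolerance 2.0
filetype pdb
output aot_reverse_micelle.pdb

structure water.pdb
  number 200
  inside sphere 0. 0. 0. 13.
end structure

structure aot.pdb
  number 60
  inside sphere 0. 0. 0. 25.
end structure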
I appreciate your help. 

Thank you very much
Hari



Re: [gmx-users] editconf: Invalid command line argument: –f

2013-07-12 Thread Tsjerk Wassenaar
Hi Jonathan,

I suspect the dash is not of the right kind. Did you by chance copy/paste
the command? Did you try typing it?
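
If so, retyping it with plain ASCII hyphens should fix it:

editconf -f conf.gro -bt dodecahedron -d 0.5 -o box.gro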

Cheers,

Tsjerk



On Sat, Jul 13, 2013 at 12:03 AM, Jonathan Saboury wrote:

> I am following "Tutorial 1" from
> https://extras.csc.fi/chem/courses/gmx2007/tutorial1/index.html
>
> I try the command "editconf –f conf.gro –bt dodecahedron –d 0.5 –o box.gro"
> but I get the error:
>
> "Program editconf, VERSION 4.5.5
> Source code file: /build/buildd/gromacs-4.5.5/src/gmxlib/statutil.c, line:
> 819
>
> Invalid command line argument:
> –f
> "
>
> Here are the files I am currently using:
> http://www.sendspace.com/file/a2twvx
>
>
> What is the problem? Thanks!
>
> -Jonathan



-- 
Tsjerk A. Wassenaar, Ph.D.