Re: [gmx-users] GPU-gromacs

2013-10-25 Thread Carsten Kutzner
On Oct 25, 2013, at 4:07 PM, aixintiankong  wrote:

> Dear Prof.,
> I want to install Gromacs on a multi-core workstation with a GPU (Tesla C2075).
> Should I install OpenMPI or MPICH2?
If you want to run Gromacs on just one workstation with a single GPU, you do
not need to install an MPI library at all!
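For a single workstation the default build already uses the built-in thread-MPI, so a configure line along the following lines is usually all that is needed (paths and install prefix are only placeholders, not from the original mail):

  cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
        -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6
  make && make install

mdrun then parallelizes over the cores with thread-MPI/OpenMP and offloads the
nonbonded interactions to the GPU.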

Carsten



--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Output pinning for mdrun

2013-10-24 Thread Carsten Kutzner
On Oct 24, 2013, at 4:25 PM, Mark Abraham  wrote:

> Hi,
> 
> No. mdrun reports the stride with which it moves over the logical cores
> reported by the OS, setting the affinity of GROMACS threads to logical
> cores, and warnings are written for various wrong-looking cases, but we
> haven't taken the time to write a sane report of how GROMACS logical
> threads and ranks are actually mapped to CPU cores. Where supported by the
> processor, the CPUID information is available and used in
> gmx_thread_affinity.c. It's just not much fun to try to report that in a
> way that will make sense on all possible hardware that supports CPUID - and
> then people will ask why it doesn't map to what their mpirun reports, get
> confused by hyper-threading, etc.
Yes, I see.
> 
> What question were you seeking to answer?
Well, I just wanted to check whether my process placement is correct and that
I am not getting decreased performance due to a suboptimal placement. In
many cases the performance is really bad (like 50% of the expected values) 
if the pinning is wrong or does not work, but you never know.

On some clusters there are of course tools that check and output the process
placement for a dummy parallel job, or environment variables like MP_INFOLEVEL
for LoadLeveler.
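On a plain Linux box one can also read the affinity of the running mdrun
processes directly, for example (a quick sketch, PIDs found via pgrep):

  for pid in $(pgrep mdrun); do taskset -cp $pid; done

which prints the list of logical cores each process is allowed to run on.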

Thanks!
  Carsten


> Mark
> 
> 
> 
> On Thu, Oct 24, 2013 at 11:44 AM, Carsten Kutzner  wrote:
> 
>> Hi,
>> 
>> can one output how mdrun threads are pinned to CPU cores?
>> 
>> Thanks,
>>  Carsten



[gmx-users] Output pinning for mdrun

2013-10-24 Thread Carsten Kutzner
Hi,

can one output how mdrun threads are pinned to CPU cores?

Thanks,
  Carsten


Re: [gmx-users] parallelization

2013-10-17 Thread Carsten Kutzner
Hi,

On Oct 17, 2013, at 2:25 PM, pratibha kapoor  wrote:

> Dear gromacs users
> 
> I would like to run my simulations on all nodes (8) with full utilisation of
> all cores (2 each). I have compiled gromacs version 4.6.3 using both thread-MPI
> and OpenMPI. I am using the following command:
> mpirun -np 8 mdrun_mpi -v -s -nt 2 -s *.tpr -c *.gro
> But I am getting the following error:
> Setting the total number of threads is only supported with thread-MPI and
> Gromacs was compiled without thread-MPI.
> Although during compilation I have used:
> cmake .. -DGMX_MPI=ON -DGMX_THREAD_MPI=ON
You can use either MPI or thread-MPI, not both at the same time. You can,
however, combine MPI with OpenMP by configuring with
-DGMX_MPI=ON -DGMX_OPENMP=ON

> If I don't use the -nt option, I can see that all the processors (8) are
> utilised, but I am not sure whether all cores are being utilised. For
You can run with

mpirun -np 16 mdrun_mpi -v -s *.tpr -c *.gro

to use all 16 available cores.
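Alternatively, with a -DGMX_MPI=ON -DGMX_OPENMP=ON build you can start one MPI
rank per node and let each rank use two OpenMP threads, for example (a sketch
for 8 nodes with 2 cores each; OpenMP parallelization of the nonbonded kernels
requires the Verlet cut-off scheme):

  mpirun -np 8 mdrun_mpi -ntomp 2 -v -s topol.tpr -c confout.gro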

> version 4.6.3 without MPI, I know by default gromacs uses all the threads,
> but I am not sure if the MPI version uses all threads or not.
Take a look at the md.log output file; there it is written what Gromacs
actually used.

Best,
  Carsten

> Any help is appreciated.


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Carsten Kutzner
Hi Jianqing,

On Sep 19, 2013, at 2:48 PM, "Xu, Jianqing"  wrote:
> Say I have a local desktop having 16 cores. If I just want to run jobs on one 
> computer or a single node (but multiple cores), I understand that I don't 
> have to install and use OpenMPI, as Gromacs has its own thread-MPI included 
> already and it should be good enough to run jobs on one machine. However, for 
> some reasons, OpenMPI has already been installed on my machine, and I 
> compiled Gromacs with it by using the flag: "-DGMX_MPI=ON". My questions are:
> 
> 
> 1.   Can I still use this executable (mdrun_mpi, built with the OpenMPI
> library) to run multi-core jobs on my local desktop? Or is the default
> thread-MPI actually a better option for a single computer or single node
> (but multiple cores) for whatever reason?
You can use either OpenMPI or Gromacs' built-in thread-MPI library. If you only
want to run on a single machine, I would recommend recompiling with thread-MPI,
because this is in many cases a bit faster.
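Rebuilding for thread-MPI is just the default configuration, i.e. something
like (install prefix is a placeholder):

  cmake .. -DGMX_MPI=OFF -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6-tmpi
  make && make install

and then run the plain mdrun binary, e.g. mdrun -nt 8 -deffnm md.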

> 2.   Assuming I can still use this executable, let's say I want to use 
> half of the cores (8 cores) on my machine to run a job,
> 
> mpirun -np 8 mdrun_mpi -v -deffnm md
> 
> a). Since I am not using all the cores, do I still need to "lock" the
> physical cores to use for better performance? Something like "-nt" for
> thread-MPI? Or is that not necessary?
That depends on whether you get good scaling or not. Compared to a run on 1 core,
for large systems the 4- or 8-core parallel runs should be (nearly) 4 or 8 times
as fast. If that is the case, you do not need to worry about pinning.

> 
> b). For running jobs on a local desktop, or a single node having ... say 16
> cores, or even 64 cores, should I turn off the "separate PME nodes" (-npme 0)?
> Or is it better to leave it as is?
You may want to check with g_tune_pme. Note that the optimal number of PME nodes
depends on your MD system, so you should determine it for each system.
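For example (a sketch; the .tpr name is a placeholder):

  g_tune_pme -np 8 -s md.tpr

will benchmark several -npme settings and write the optimum it finds to perf.out.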

> 
> 3.   If I want to run two different projects on my local desktop, say one 
> project takes 8 cores, the other takes 4 cores (assuming I have enough 
> memory), I just submit the jobs twice on my desktop:
> 
> nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 >& log1&
> 
> nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 >& log2 &
> 
> Will this be acceptable? Will the two jobs compete for resources and
> eventually affect the performance?
Make some quick test runs (over a couple of minutes). Then you can check 
the performance of your 8 core run with and without another simulation running.
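If the two runs do get into each other's way, pinning them to disjoint sets of
cores may help, e.g. with a thread-MPI mdrun (an illustration, not from the
original mail):

  mdrun -nt 8 -pin on -pinoffset 0 -deffnm md1
  mdrun -nt 4 -pin on -pinoffset 8 -deffnm md2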

Best,
  Carsten

> 
> Sorry for so many detailed questions, but your help on this will be highly 
> appreciated!
> 
> Thanks a lot,
> 
> Jianqing
> 
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] performance issue with the parallel implementation of gromacs

2013-09-19 Thread Carsten Kutzner
> Line tpr PME nodes  Gcycles Av.  Std.dev.  ns/day   PME/f   DD grid
> ...                                         0.980    1.005    4 11  1
>   9   1     3         190.704    22.673    0.914    0.931    3  5  3
>  10   1     0         293.676     5.460    0.589      -      8  6  1
>  11   1   -1(  8)     188.978     3.686    0.915    1.266    8  5  1
>  12   2     8         210.631    17.457    0.824    1.176    8  5  1
>  13   2     6         171.926    10.462    1.008    1.186    6  7  1
>  14   2     4         200.015     6.696    0.865    0.839    4 11  1
>  15   2     3         215.013     5.881    0.804    0.863    3  5  3
>  16   2     0         298.363     7.187    0.580      -      8  6  1
>  17   2   -1(  8)     208.821    34.409    0.840    1.088    8  5  1
> 
> 
> Best performance was achieved with 6 PME nodes (see line 7)
> Optimized PME settings:
>   New Coulomb radius: 1.10 nm (was 1.00 nm)
>   New Van der Waals radius: 1.10 nm (was 1.00 nm)
>   New Fourier grid xyz: 80 80 80 (was 96 96 96)
> Please use this command line to launch the simulation:
> 
> mpirun -np 48 mdrun_mpi -npme 6 -s tuned.tpr -pin on
> 
> 
> Summary of successful runs:
> Line tpr PME nodes  Gcycles Av.  Std.dev.  ns/day   PME/f   DD grid
>   0   0    25         283.628     2.191    0.610    1.749    5  9  3
>   1   0    20         240.888     9.132    0.719    1.618    5  4  7
>   2   0    16         166.570     0.394    1.038    1.239    8  6  3
>   3   0     0         435.389     3.399    0.397      -     10  8  2
>   4   0   -1( 20)     237.623     6.298    0.729    1.406    5  4  7
>   5   1    25         286.990     1.662    0.603    1.813    5  9  3
>   6   1    20         235.818     0.754    0.734    1.495    5  4  7
>   7   1    16         167.888     3.028    1.030    1.256    8  6  3
>   8   1     0         284.264     3.775    0.609      -      8  5  4
>   9   1   -1( 16)     167.858     1.924    1.030    1.303    8  6  3
>  10   2    25         298.637     1.660    0.579    1.696    5  9  3
>  11   2    20         281.647     1.074    0.614    1.296    5  4  7
>  12   2    16         184.012     4.022    0.941    1.244    8  6  3
>  13   2     0         304.658     0.793    0.568      -      8  5  4
>  14   2   -1( 16)     183.084     2.203    0.945    1.188    8  6  3
> 
> 
> Best performance was achieved with 16 PME nodes (see line 2)
> and original PME settings.
> Please use this command line to launch the simulation:
> 
> mpirun -np 160 /data1/shashi/localbin/gromacs/bin/mdrun_mpi -npme 16 -s
> 4icl.tpr -pin on
> 
> 
> Both of these outcomes (1.110 ns/day and 1.038 ns/day) are lower than what I
> get on my workstation with a Xeon W3550 3.07 GHz using 8 threads (1.431 ns/day)
> for a similar system.
> The bench.log file generated by g_tune_pme shows very high load imbalance
> (>60%-100%). I have tried several combinations of np and npme, but the
> performance is always in this range only.
> Can someone please tell me what I am doing wrong or how I can decrease the
> simulation time.
> -- 
> Regards
> Ashutosh Srivastava


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] question about installation parameters

2013-09-16 Thread Carsten Kutzner
Hi,

On Sep 16, 2013, at 11:23 AM, mjyang  wrote:

> Dear GMX users,
> 
> 
> I have a question about the combination of the installation parameters. I 
> compiled the fftw lib with --enable-sse2 and configured the gromacs with 
> "cmake .. -DGMX_CPU_ACCELERATION=SSE4.1". I'd like to know if it is ok to use 
> such a
> combination?
Yes, for Gromacs the FFTW should always be compiled with SSE2. You can combine
that with any -DGMX_CPU_ACCELERATION setting you want, typically the best that
is supported on your platform.
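For example, a typical combination would be (install paths are just placeholders):

  ./configure --enable-sse2 --enable-float --prefix=$HOME/fftw-3.3.3
  make && make install
  cmake .. -DGMX_CPU_ACCELERATION=SSE4.1 -DCMAKE_PREFIX_PATH=$HOME/fftw-3.3.3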

Best,
  Carsten


> 
> Many thanks.
> 
> Mingjun



Re: [gmx-users] Umbrella sampling simulations using make_edi with -restrain and -harmonic

2013-09-10 Thread Carsten Kutzner
Hi Po-chia,

On Sep 10, 2013, at 9:53 AM, "Chen, Po-chia"  wrote:

> Hi all,
> I can't seem to find the correct combination of EDI parameters to impose a
> harmonically-constrained simulation along an eigenvector in GROMACS 4.6.2,
> and I'd like to confirm that the inputs I have are actually correct. Here is
> the make_edi command I used to generate the .edi file fed to mdrun:
> 
> echo "C-alpha System" | make_edi -restrain -harmonic \
> -f ../ca-evec.trr \
> -eig ../ca-eval.xvg \
> -s ../analysis.tpr \
> -ori ./init.gro \
> -outfrq 2500 -deltaF0 150 -Eflnull 100 -flood 1 \
> -o constrain.edi
I think you should add -tau 0 when using flooding as a harmonic restraint
(since you do not want the flooding potential to change over time).
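That is, the command above with -tau 0 added, e.g.:

  echo "C-alpha System" | make_edi -restrain -harmonic -tau 0 \
      -f ../ca-evec.trr -eig ../ca-eval.xvg -s ../analysis.tpr \
      -ori ./init.gro -outfrq 2500 -deltaF0 150 -Eflnull 100 -flood 1 \
      -o constrain.edi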

> Where the eigenvectors and eigenvalues are previously derived by g_covar for 
> a set of unrestrained trajectories. The origin file init.gro is the same as 
> the starting coordinates of this constrained run, extracted from a previous 
> EDI run to start at an appropriate location on the eigenvector space.
> 
> The 1st eigenvalue is ~ 66, and the eigenvector looked fine in VMD when 
> plotted by g_anaeig -extr.
> 
> So mdrun -ei constrain.edi runs normally, but the flooding potential drops to 
> zero so the protein diffuses freely along the 1st eigenvector. e.g. the 
> first/last line in edsam.xvg looks like:
> # time   RMSD   EV1proj   FLOOD     EV1-Ef1   EV1-Vf1   EV1-deltaF
>    0.0    0.43    9.99    -9.4e+1   9.5e-8    3.8e-9    -5.3e-4
>   10.0    0.40    9.49     0.0     -0.0       1.6e-44   -0.0
> ...
>    1.00   0.94   -2.2      0.0     -0.0       1.6e-44   -0.0
> 
> Whereas c(0) is supposed to be 9.99, since I set init.gro to be the -ori
> structure. Do I need to change -alpha as well? What parameters am I missing or
> have I added by mistake? The manual gives no indication as to which ones to use.
Can you check whether make_edi -ori wrote the correct position on the 1st 
eigenvector in your
.edi file? Scroll down to a line called

# NUMBER OF EIGENVECTORS + COMPONENTS GROUP 7

(Group 7 means the flooding vectors). If you have a single flooding vector, the
next line should read "1", and the following line contains three entries for
that vector, the last of which is the reference projection. I guess in your
case it reads something like

 1   ...   9.99

so the reference projection should be 9.99, as calculated from your -ori
structure. If there is no 3rd entry, you can simply put it there manually
(and leave out the -ori option to make_edi).

Best,
  Carsten
 

> 
> = = =
> P.S. the relevant constrain.edi lines contain:
> ...
> #DELTA_F0
> 150.00
> #INIT_DELTA_F
> 0.0
> #TAU
> 0.10
> #EFL_NULL
> -100.00
> #ALPHA2
> -1.0
> #KT
> 2.50
> #HARMONIC
> 1
> #CONST_FORCE_FLOODING
> 0
> ...


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Rotation Constraints - PMF - external potential

2013-07-26 Thread Carsten Kutzner
>>> (... This group of atoms is used to determine the rotation R of the system
>>> with respect to the reference orientation. The reference orientation is the
>>> starting conformation of the first subsystem. For a protein, backbone is a
>>> reasonable choice)
>>> 
>>> How does one have to give the group? Using an index file or defining the
>>> group in the topology?
>> This is the "orire-fitgrp = A_B" mdp file setting that you made.
>> 
>> Best,
>> Carsten
>>> 
>>> 
>>>> Original message
>>>> From: ckut...@gwdg.de
>>>> Date: 23/07/2013 13.09
>>>> To: "battis...@libero.it", "Discussion list for GROMACS users"
>>>> Subject: Re: [gmx-users] Rotation Constraints - PMF
>>>> 
>>>> Hi Anna,
>>>> 
>>>> please have a look at the Enforced Rotation section in the Gromacs 4.6 manual.
>>>> You can restrain the angle of rotation about an axis by setting the rotation
>>>> rate to zero. There is also a 4.5 add-on available with rotational restraints
>>>> in the Gromacs git repository (branch "rotation"). For more info you may want
>>>> to look at this page:
>>>> 
>>>> http://www.mpibpc.mpg.de/grubmueller/rotation
>>>> 
>>>> Best,
>>>> Carsten
>>>> 
>>>> 
>>>> On Jul 23, 2013, at 12:18 PM, battis...@libero.it wrote:
>>>> 
>>>>> Dear user and expert,
>>>>> I'd like ask you a suggestion about a problem that I will try present 
> you 
>>> schematically.
>>>>> I have got a structure "s" and I have generated the topolgy file itp for 
> it.
>>> A number of separate "s" in turn generate a complex structure A, that is 
>>> characterized by a cylindrical shape.
>>>>> Now, I constructed a system with two cylindrical structures, A and B (in 
>>> total made by 64 "s" structures), and I'd like make an Umbrella Sampling 
>>> calculation in order to study the PMF varying the distance between A and B.
>>>>> 
>>>>> My problem is that I'd like fix the orientation of the axis of each 
>>> structure A and B long the z axis, during the dynamics.
>>>>> So I need to put a force into the system or a constrain, such that when 
> the 
>>> axis of A or B rotates respect to z axis, the force puts back the axis of 
> the 
>>> structure in the z direction.
>>>>> 
>>>>> It this possible?  If it is so, could you tell me how to do that?
>>>>> Than you very much,
>>>>> Anna
>>>> 
>>>> 
>>>> --
>>>> Dr. Carsten Kutzner
>>>> Max Planck Institute for Biophysical Chemistry
>>>> Theoretical and Computational Biophysics
>>>> Am Fassberg 11, 37077 Goettingen, Germany
>>>> Tel. +49-551-2012313, Fax: +49-551-2012302
>>>> http://www.mpibpc.mpg.de/grubmueller/kutzner
>>>> http://www.mpibpc.mpg.de/grubmueller/sppexa
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
>> --
>> Dr. Carsten Kutzner
>> Max Planck Institute for Biophysical Chemistry
>> Theoretical and Computational Biophysics
>> Am Fassberg 11, 37077 Goettingen, Germany
>> Tel. +49-551-2012313, Fax: +49-551-2012302
>> http://www.mpibpc.mpg.de/grubmueller/kutzner
>> http://www.mpibpc.mpg.de/grubmueller/sppexa
>> 
>> 
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Rotation Constraints - PMF - external potential

2013-07-25 Thread Carsten Kutzner
On Jul 24, 2013, at 5:53 PM, battis...@libero.it wrote:

> Dear Carsten
> 
> could you give me more information about your suggestions?
> I tried but probably I did not understand well what you meant.
Hi Anna,

I suggested to use the enforced rotation module of Gromacs 4.6
to restrain the orientation of your molecule(s). If you want to
use the orientation restraints module instead, I am afraid I
can not help you much with that, maybe someone else on this list? 

> In order to avoid the rotation of structure A and structure B, I have
> defined in the index file a group A_B that contains A+B, and I have set
> the following parameters in the mdp file:
> 
> ; Orientation restraints: No or Yes
> orire= yes
> ; Orientation restraints force constant and tau for time averaging
> orire-fc = 500
> orire-tau= 100
> orire-fitgrp = A_B
> ; Output frequency for trace(SD) and S to energy file
> nstorireout  = 100
> 
> As I briefly described in the first post, the structures A and B
> (characterized by a cylindrical shape) are each built from 32 unit
> structures that I call s.
> 
> The itp file defines the topology for the s structure, so in order to apply
> orientation restraints between atoms that are not included in the same itp
> file, I cannot put a section like the one described in the manual 4.6.2,
> page 92, namely [ orientation_restraints ], into the topology, can I?
> 
> Could you tell me how I can fix the orientation of the systems A and B?
Using the enforced rotation module you would choose an index group and a
rotation axis for each group whose orientation you want to fix, set the
rotation rate to zero, and choose an appropriate force constant. Appropriate
potential functions would be the pivot-free ones, if I understand your setup
correctly.
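A minimal sketch of the corresponding .mdp settings could look like the
following (group names, axis, and force constant are placeholders; please check
the exact option names and potential types in the enforced rotation section of
the 4.6 manual):

  rotation     = yes
  rot-ngroups  = 2
  rot-group0   = A
  rot-type0    = rm-pf     ; a pivot-free variant
  rot-vec0     = 0 0 1     ; keep the group oriented along z
  rot-rate0    = 0         ; do not rotate, only restrain
  rot-k0       = 500       ; force constant
  rot-group1   = B
  rot-type1    = rm-pf
  rot-vec1     = 0 0 1
  rot-rate1    = 0
  rot-k1       = 500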
> 
> I don't understand the manual's explanation of orire-fitgrp:
> (fit group for orientation restraining. This group of atoms is used to 
> determine the rotation
> R of the system with respect to the reference orientation. The reference 
> orientation is the
> starting conformation of the first subsystem. For a protein, backbone is a 
> reasonable choice)
> 
> How does one have to give the group? Using an index file or defining the
> group in the topology?
This is the "orire-fitgrp = A_B" mdp file setting that you made.

Best,
  Carsten
> 
> 
>> Original message
>> From: ckut...@gwdg.de
>> Date: 23/07/2013 13.09
>> To: "battis...@libero.it", "Discussion list for GROMACS users"
>> Subject: Re: [gmx-users] Rotation Constraints - PMF
>> 
>> Hi Anna,
>> 
>> please have a look at the Enforced Rotation section in the Gromacs 4.6 manual.
>> You can restrain the angle of rotation about an axis by setting the rotation
>> rate to zero. There is also a 4.5 add-on available with rotational restraints in
>> the Gromacs git repository (branch "rotation"). For more info you may want to
>> look at this page:
>> 
>> http://www.mpibpc.mpg.de/grubmueller/rotation
>> 
>> Best,
>> Carsten
>> 
>> 
>> On Jul 23, 2013, at 12:18 PM, battis...@libero.it wrote:
>> 
>>> Dear user and expert,
>>> I'd like ask you a suggestion about a problem that I will try present you 
> schematically.
>>> I have got a structure "s" and I have generated the topolgy file itp for it.
> A number of separate "s" in turn generate a complex structure A, that is 
> characterized by a cylindrical shape.
>>> Now, I constructed a system with two cylindrical structures, A and B (in 
> total made by 64 "s" structures), and I'd like make an Umbrella Sampling 
> calculation in order to study the PMF varying the distance between A and B.
>>> 
>>> My problem is that I'd like fix the orientation of the axis of each 
> structure A and B long the z axis, during the dynamics.
>>> So I need to put a force into the system or a constrain, such that when the 
> axis of A or B rotates respect to z axis, the force puts back the axis of the 
> structure in the z direction.
>>> 
>>> It this possible?  If it is so, could you tell me how to do that?
>>> Than you very much,
>>> Anna

Re: [gmx-users] Rotation Constraints - PMF + rerun

2013-07-24 Thread Carsten Kutzner
On Jul 24, 2013, at 12:30 PM, battis...@libero.it wrote:

> Dear Carsten,
> 
> Thank you very much for your very useful help!
> I'm making some tests with the orire options, which will probably solve my
> problem.
> 
> In order not to waste resources, I thought that with the rerun option of mdrun
> I could reuse the trajectories generated before, where my mistake was to allow
> the rotation of my structure.
> So I generated a new topol.tpr file, changing the orire options in the mdp,
> and ran:
> 
> 1. mdrun -rerun ../traj.xtc -s topol.tpr -o trj.trr
> 2. trjcat -f traj.trr -o trajout.xtc
> 
> but in trajout.xtc there is only one frame, as I can check for example
Hm, I am not sure, maybe you need to use -x trj.xtc instead of -o trj.trr
to trigger output of all .xtc frames. How many frames are in ../traj.xtc?
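That is, something like (same files as in your step 1):

  mdrun -rerun ../traj.xtc -s topol.tpr -x trj.xtc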

Carsten

> with: 
> 3. g_gyrate -f trajout.xtc -s topol.tpr -n index.ndx
> 
> 
> Could you confirm that it is not possible to follow this idea?
> In fact I suppose that this method is not applicable, and that it is necessary
> to generate a new trajectory, because the angular restraints completely modify
> the trajectory.
> 
> Or, just to be sure, did I not do things the right way?
> 
> Thank you very much!
> 
> Anna
> 
> 
>> Original message
>> From: ckut...@gwdg.de
>> Date: 23/07/2013 13.09
>> To: "battis...@libero.it", "Discussion list for GROMACS users"
>> Subject: Re: [gmx-users] Rotation Constraints - PMF
>> 
>> Hi Anna,
>> 
>> please have a look at the Enforced Rotation section in the Gromacs 4.6 manual.
>> You can restrain the angle of rotation about an axis by setting the rotation
>> rate to zero. There is also a 4.5 add-on available with rotational restraints in
>> the Gromacs git repository (branch "rotation"). For more info you may want to
>> look at this page:
>> 
>> http://www.mpibpc.mpg.de/grubmueller/rotation
>> 
>> Best,
>> Carsten
>> 
>> 
>> On Jul 23, 2013, at 12:18 PM, battis...@libero.it wrote:
>> 
>>> Dear user and expert,
>>> I'd like ask you a suggestion about a problem that I will try present you 
> schematically.
>>> I have got a structure "s" and I have generated the topolgy file itp for it.
> A number of separate "s" in turn generate a complex structure A, that is 
> characterized by a cylindrical shape.
>>> Now, I constructed a system with two cylindrical structures, A and B (in 
> total made by 64 "s" structures), and I'd like make an Umbrella Sampling 
> calculation in order to study the PMF varying the distance between A and B.
>>> 
>>> My problem is that I'd like fix the orientation of the axis of each 
> structure A and B long the z axis, during the dynamics.
>>> So I need to put a force into the system or a constrain, such that when the 
> axis of A or B rotates respect to z axis, the force puts back the axis of the 
> structure in the z direction.
>>> 
>>> It this possible?  If it is so, could you tell me how to do that?
>>> Than you very much,
>>> Anna
>>> 
>> 
>> 
>> --
>> Dr. Carsten Kutzner
>> Max Planck Institute for Biophysical Chemistry
>> Theoretical and Computational Biophysics
>> Am Fassberg 11, 37077 Goettingen, Germany
>> Tel. +49-551-2012313, Fax: +49-551-2012302
>> http://www.mpibpc.mpg.de/grubmueller/kutzner
>> http://www.mpibpc.mpg.de/grubmueller/sppexa
>> 
>> 
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Rotation Constraints - PMF

2013-07-23 Thread Carsten Kutzner
Hi Anna,

please have a look at the Enforced Rotation Section in the Gromacs 4.6 manual.
You can restrain the angle of rotation about an axis by setting the rotation
rate to zero. There is also a 4.5 add-on available with rotational restraints in
the Gromacs git repository (branch "rotation"). For more info you may want to
look at this page:

http://www.mpibpc.mpg.de/grubmueller/rotation

Best,
  Carsten


On Jul 23, 2013, at 12:18 PM, battis...@libero.it wrote:

> Dear user and expert,
> I'd like to ask you for a suggestion about a problem that I will try to
> present schematically.
> I have a structure "s" and I have generated the topology file (itp) for it.
> A number of separate "s" units in turn generate a complex structure A, which
> is characterized by a cylindrical shape.
> Now, I constructed a system with two cylindrical structures, A and B (in
> total made of 64 "s" structures), and I'd like to make an umbrella sampling
> calculation in order to study the PMF while varying the distance between A
> and B.
> 
> My problem is that I'd like to fix the orientation of the axis of each
> structure, A and B, along the z axis during the dynamics.
> So I need to put a force or a constraint into the system, such that when the
> axis of A or B rotates with respect to the z axis, the force puts the axis of
> the structure back in the z direction.
> 
> Is this possible? If so, could you tell me how to do that?
> Thank you very much,
> Anna
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Intel compiling failed

2013-04-05 Thread Carsten Kutzner

On Apr 5, 2013, at 4:21 PM, Albert  wrote:

> On 04/05/2013 12:38 PM, Carsten Kutzner wrote:
>> Hi Albert,
>> 
>> one reason for the error you see could be that you are using a non-Intel
>> MPI compiler wrapper. I think you need to specify MPICC=mpiicc as well.
>> 
>> Carsten
> 
> 
> thanks a lot both Carsten and Justin.
> 
> I've compiled both fftw and openmpi with Intel icc and ifort without problems.
> However, when I try to compile gromacs, it fails:
Hm, this is another issue now. What version of the Intel compiler are you using?
With icc 13.0 it works for me, but I remember having problems with older 
versions as well.

Carsten

> 
> 
> cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DBUILD_SHARED_LIBS=OFF 
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda 
> -DCMAKE_INSTALL_PREFIX=/home/albert/install/gromacs-4.6.1 
> -DCMAKE_PREFIX_PATH=/home/albert/install/fftw-3.3.3 
> -DCMAKE_CXX_COMPILER=/home/albert/install/openmpi-1.6.4/bin/mpiCC 
> -DCMAKE_C_COMPILER=/home/albert/install/openmpi-1.6.4/bin/mpicc
> 
> 
> 
> [  0%] [  0%] [  0%] [  0%] Building NVCC (Device) object 
> src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> Building NVCC (Device) object 
> src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_cudautils.cu.o
> Building NVCC (Device) object 
> src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_copyrite_gpu.cu.o
> Building NVCC (Device) object 
> src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o
> cc1plus: error: unrecognized command line option '-ip'
> cc1plus: error: unrecognized command line option '-ip'
> CMake Error at gpu_utils_generated_gpu_utils.cu.o.cmake:198 (message):
>  Error generating
> /home/albert/install/00-source/gromacs-4.6.1/build/src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o
> 
> 
> cc1plus: error: unrecognized command line option '-ip'make[2]: *** 
> [src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/./gpu_utils_generated_gpu_utils.cu.o]
>  Error 1
> cc1plus: error: unrecognized command line option '-ip'
> make[1]: *** [src/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/all] Error 2
> make[1]: *** Waiting for unfinished jobs
> 
> CMake Error at cuda_tools_generated_copyrite_gpu.cu.o.cmake:198 (message):
>  Error generating
> /home/albert/install/00-source/gromacs-4.6.1/build/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_copyrite_gpu.cu.o
> 
> 
> make[2]: *** 
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_copyrite_gpu.cu.o]
>  Error 1
> make[2]: *** Waiting for unfinished jobs
> CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:198 (message):
>  Error generating
> /home/albert/install/00-source/gromacs-4.6.1/build/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> 
> 
> CMake Error at cuda_tools_generated_cudautils.cu.o.cmake:198 (message):
>  Error generating
> /home/albert/install/00-source/gromacs-4.6.1/build/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_cudautils.cu.o
> 
> 
> make[2]: *** 
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
>  Error 1
> make[2]: *** 
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_cudautils.cu.o]
>  Error 1
> make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all] Error 2
> make: *** [all] Error 2
> 
> 
> best
> Albert


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Intel compiling failed

2013-04-05 Thread Carsten Kutzner

On Apr 5, 2013, at 12:52 PM, Justin Lemkul  wrote:

> 
> 
> On 4/5/13 6:38 AM, Carsten Kutzner wrote:
>> Hi Albert,
>> 
>> one reason for the error you see could be that you are using a non-Intel
>> MPI compiler wrapper. I think you need to specify MPICC=mpiicc as well.
>> 
> 
> Is there any point in compiling FFTW in parallel?  I have never once done it 
> nor found it necessary.
Hi,

you are absolutely right, sorry I did not express that clearly. The thing is,
if you compile the FFTW using the Intel compiler and then compile Gromacs using
Intel MPI's mpicc or mpigcc compiler wrapper, you will get these undefined
references to _intel* at link time. To make it work, you need to use the mpiicc
(two i's here) compiler wrapper for Gromacs.
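In practice that would look something like this (flags taken from the configure
line quoted below; paths are placeholders):

  ./configure --enable-sse --enable-float --enable-mpi CC=icc F77=ifort MPICC=mpiicc \
      --prefix=$HOME/fftw-3.3.3
  CC=mpiicc CXX=mpiicpc cmake .. -DGMX_MPI=ON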

Carsten

> 
> -Justin
> 
>> On Apr 5, 2013, at 12:25 PM, Albert  wrote:
>> 
>>> Hello:
>>> 
>>> I am trying to compile gromacs with intel compiler. However, it failed when 
>>> I compile FFTW3 with command:
>>> 
>>> 
>>> ./configure --enable-sse --enable-float --with-pic --enable-single 
>>> --enable-static --enable-mpi --prefix=/home/albert/install/fftw-3.3.3 
>>> CC=icc CXX=icc F77=ifort
>>> 
>>> here is the log file:
>>> 
>>> mp.c:(.text+0x3148): undefined reference to `_intel_fast_memset'
>>> ../libbench2/libbench2.a(mp.o):mp.c:(.text+0x3488): more undefined 
>>> references to `_intel_fast_memset' follow
>>> collect2: ld returned 1 exit status
>>> make[3]: *** [mpi-bench] Error 1
>>> make[3]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3/mpi'
>>> make[2]: *** [all] Error 2
>>> make[2]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3/mpi'
>>> make[1]: *** [all-recursive] Error 1
>>> make[1]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3'
>>> 
>>> 
>>> make: *** [all] Error 2
>>> 
>>> thank you very much
>>> best
>>> Albert
>> 
>> 
>> --
>> Dr. Carsten Kutzner
>> Max Planck Institute for Biophysical Chemistry
>> Theoretical and Computational Biophysics
>> Am Fassberg 11, 37077 Goettingen, Germany
>> Tel. +49-551-2012313, Fax: +49-551-2012302
>> http://www.mpibpc.mpg.de/grubmueller/kutzner
>> http://www.mpibpc.mpg.de/grubmueller/sppexa
>> 
> 
> -- 
> 
> 
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Intel compiling failed

2013-04-05 Thread Carsten Kutzner
Hi Albert,

one reason for the error you see could be that you are using a non-Intel
MPI compiler wrapper. I think you need to specify MPICC=mpiicc as well.

Carsten


On Apr 5, 2013, at 12:25 PM, Albert  wrote:

> Hello:
> 
> I am trying to compile gromacs with the Intel compiler. However, it fails when
> I compile FFTW3 with the command:
> 
> 
> ./configure --enable-sse --enable-float --with-pic --enable-single 
> --enable-static --enable-mpi --prefix=/home/albert/install/fftw-3.3.3 CC=icc 
> CXX=icc F77=ifort
> 
> here is the log file:
> 
> mp.c:(.text+0x3148): undefined reference to `_intel_fast_memset'
> ../libbench2/libbench2.a(mp.o):mp.c:(.text+0x3488): more undefined references 
> to `_intel_fast_memset' follow
> collect2: ld returned 1 exit status
> make[3]: *** [mpi-bench] Error 1
> make[3]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3/mpi'
> make[2]: *** [all] Error 2
> make[2]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3/mpi'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/home/albert/install/00-source/fftw-3.3.3'
> 
> 
> make: *** [all] Error 2
> 
> thank you very much
> best
> Albert


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] 4.6.1 support double precision GPU now?

2013-04-02 Thread Carsten Kutzner

On Apr 2, 2013, at 5:47 PM, Albert  wrote:

> Hello:
> 
> I am wondering whether double precision is supported in the current 4.6.1 GPU
> version? Otherwise it would be very slow to use the CPU version for running
> free energy calculations…
Hi Albert,

no, GPU calculations can be done only in single precision.

Best,
  Carsten


> 
> thank you very much
> best
> Albert



Re: [gmx-users] g_tune_pme can't be executed

2013-03-21 Thread Carsten Kutzner
Hi Daniel,

are you using the newest version of 4.6? There was an issue with g_tune_pme,
which I already fixed. I guess it could be responsible for the error that 
you see.
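If updating does not help, you can also point g_tune_pme explicitly to the
MPI-enabled binaries; as far as I remember it honors the MDRUN and MPIRUN
environment variables, e.g. (names illustrative):

  export MDRUN=`which mdrun_mpi`
  export MPIRUN=`which mpiexec`
  g_tune_pme_mpi -np 32 -s yy.tpr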

Best,
  Carsten


On Mar 21, 2013, at 2:26 PM, Daniel Wang  wrote:

> Hi everyone~
> 
> When I run g_tune_pme_mpi, it prompts:
> 
> Fatal error:
> Need an MPI-enabled version of mdrun. This one
> (mdrun_mpi)
> seems to have been compiled without MPI support.
> 
> I'm sure my gromacs is compiled WITH MPI support and "mpiexec -n xx
> mdrun_mpi -s yy.tpr" works normally.
> How to fix it? I'm using gromacs4.6 and Intel MPI 4.1.0.
> Thanks.
> 
> -- 
> Daniel Wang / npbool
> Computer Science & Technology, Tsinghua University


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Mismatching number of PP MPI processes and GPUs per node

2013-03-11 Thread Carsten Kutzner
Hi,

On Mar 11, 2013, at 10:50 AM, George Patargias  wrote:

> Hello
> 
> Sorry for posting this again.
> 
> I am trying to run GROMACS 4.6 compiled with MPI and GPU acceleration
> (CUDA 5.0 lib) using the following SGE batch script.
> 
> #!/bin/sh
> #$ -V
> #$ -S /bin/sh
> #$ -N test-gpus
> #$ -l h="xgrid-node02"
> #$ -pe mpi_fill_up 12
> #$ -cwd
> 
> source /opt/NetUsers/pgkeka/gromacs-4.6_gpu_mpi/bin/GMXRC
> export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-5.0/lib:$DYLD_LIBRARY_PATH
> 
> mpirun -np 12 mdrun_mpi -s test.tpr -deffnm test_out -nb gpu
> 
> After detection of the installed GPU card
> 
> 1 GPU detected on host xgrid-node02.xgrid:
>  #0: NVIDIA Quadro 4000, compute cap.: 2.0, ECC:  no, stat: compatible
> 
> GROMACS issues the following error
> 
> Incorrect launch configuration: mismatching number of PP MPI processes and
> GPUs per node. mdrun_mpi was started with 12 PP MPI processes per node,
> but only 1 GPU were detected.
> 
> It can't be that we need to run GROMACS only on a single core so that it
> matches the single GPU card.
Have you compiled mdrun_mpi with OpenMP threads support? Then, if you
do 

mpirun -np 1 mdrun_mpi …

it should start one MPI process with 12 OpenMP threads, which should give
you what you want. You can also manually specify the number of OpenMP threads
by adding 

-ntomp 12
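In the job script above, the launch line would then become something like:

  mpirun -np 1 mdrun_mpi -ntomp 12 -s test.tpr -deffnm test_out -nb gpu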

Carsten

> 

> 
> Do you have any idea what has to be done?
> 
> Many thanks.
> 
> Dr. George Patargias
> Postdoctoral Researcher
> Biomedical Research Foundation
> Academy of Athens
> 4, Soranou Ephessiou
> 115 27
> Athens
> Greece
> 
> Office: +302106597568
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Problem with OpenMP+MPI

2013-02-27 Thread Carsten Kutzner
Hi,

On Feb 27, 2013, at 6:55 AM, jesmin jahan  wrote:

> Dear Gromacs Users,
> 
> I am trying to run the following command on gromacs 4.6
> 
> mdrun -ntmpi 2 -ntomp 6 -s imd.tpr
> 
> But I am getting the following error
> 
> OpenMP threads have been requested with cut-off scheme Group, but
> these are only supported with cut-off scheme Verlet
> 
> Does any one know a solution to the problem?
> 
> I am using the following .mdp file
> 
> constraints =  none
> integrator  =  md
> ;cutoff-scheme   = Verlet
Yes, as the error message says, use the Verlet cutoff scheme by deleting the ";",
so that the line reads "cutoff-scheme = Verlet".

Carsten

> pbc =  no
> dt  =  0.001
> nsteps  =  0
> rcoulomb= 300
> rvdw= 300
> rlist   = 300
> nstgbradii  = 300
> rgbradii= 300
> implicit_solvent=  GBSA
> gb_algorithm=  HCT ;
> sa_algorithm=  None
> gb_dielectric_offset= 0.02
> ;optimize_fft = yes
> energygrps   = protein
> 
> Please let me know what to change so that it runs perfectly!
> 
> Thanks,
> Jesmin
> --
> Jesmin Jahan Tithi
> PhD Student, CS
> Stony Brook University, NY-11790.


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] compiling on different architecture than the compute nodes architecture

2013-02-06 Thread Carsten Kutzner
Hi,

On Feb 6, 2013, at 6:03 PM, Richard Broadbent 
 wrote:

> Dear All,
> 
> I would like to compile gromacs 4.6 to run with the correct acceleration on
> the compute nodes of our local cluster. Some of the nodes have Intel
> Sandy Bridge, whilst others only have SSE4.1, and some (including the login and
> single-core job nodes) are still stuck on SSSE3 (gmx would use SSE2
> acceleration here).
> 
> Installing several versions is not a problem; however, I'm not sure how to
> make cmake build a version of the code that does not use the acceleration of
> the system on which the code is being compiled. Restrictions on job sizes
> make running the compilation on the Sandy Bridge nodes almost impossible.
> Can anyone let me know which flags cmake needs to enable AVX-256 acceleration?

Use -DGMX_CPU_ACCELERATION=AVX_256
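For the Sandy Bridge build that would mean, based on your cmake line below,
something along the lines of:

  CC=mpiicc CXX=mpiicpc cmake -DGMX_CPU_ACCELERATION=AVX_256 -DGMX_OPENMP=ON -DGMX_MPI=ON ... ../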

Try ccmake to have a look at the variables you can define and what values they 
can be set to.

Carsten 
> 
> my standard cmake line is:
> 
> $ CC=mpiicc CXX=mpiicpc ; cmake -DGMX_OPENMP=ON  -DGMX_MPI=ON -DGMX_DOUBLE=ON 
> -DGMX_GPU=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_FFT_LIBRARY=mkl 
> -DMKL_INCLUDE_DIR=$MKLROOT/include 
> -DMKL_LIBRARIES="$MKLROOT/lib/intel64/libmkl_core.so;$MKLROOT/lib/intel64/libmkl_intel_lp64.so;$MKLROOT/lib/intel64/libmkl_sequential.so"
>  -DCMAKE_INSTALL_PREFIX=$HOME/libs/gromacs  ../
> 
> 
> 
> Thanks,
> 
> Richard
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] voltage for membrane?

2012-12-24 Thread Carsten Kutzner
On Dec 23, 2012, at 11:23 PM, Martin Hoefling  wrote:
> You can have a look at http://www.ncbi.nlm.nih.gov/pubmed/21843471 ,
> maybe that does what you want.

On
http://www.mpibpc.mpg.de/grubmueller/compel

you will find installation instructions for the special gromacs version that 
support
the above mentioned protocol.

Best,
  Carsten

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Essential dynamics (ED) sampling using make_edi

2012-12-12 Thread Carsten Kutzner
Hi Bipin Singh,

the parameters -deltaF0, -deltaF, -tau, -alpha, and -T are used only
for flooding and have no effect in pure essential dynamics. Which coordinates
appear in the output trajectory (*.trr, *.xtc) is exclusively controlled
by .mdp options (i.e. the group you select there), not by the content of 
the .edi file. 
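For example, to write only the C-alpha atoms to the compressed trajectory, the relevant .mdp line would look something like this (a sketch; "C-alpha" must exist as a group in your index file):

  xtc_grps    = C-alpha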

Best,
  Carsten


On Dec 11, 2012, at 6:27 PM, bipin singh  wrote:

> Hello All,
> 
> I want to use the essential dynamics (ED) sampling  method to simulate the
> unfolding to folding process using make_edi option of GROMACS. For this
> task I am using -radcon option (acceptance radius contraction along the
> first two eigenvectors towards the folded structure (b4md.gro)) of make_edi
> as below:
> 
> *make_edi -f eigenvec.trr -eig eigenval.xvg -s topol.tpr -tar b4md.gro
> -radcon 1-2 -o sam.edi
> *
> *b4md.gro:* folded structure (C-alpha only)
> *topol.tpr: *all atom *
> eigenvec.trr*:from g_covar (C-alpha only)
> 
> Is this is the correct way of doing the ED sampling...
> 
> 
> Also I am not sure about the following:
> 
> *1)* How to judge the correct/appropriate value for the:
> 
>  -maxedsteps
> 
> *2)* How to judge the appropriate values for the following parameters for
> an Essential dynamics sampling input *(or it is neglected for ED sampling
> and used only for flooding input ) *
> 
> -deltaF0
> -deltaF
> -tau
> -alpha
> -T
> 
> *3) *Will the output trajectory (produced using mdrun -ei sam.edi ) contain
> all atoms or only the C-alpha atoms (using the above make_edi command).
> 
> -- 
> *---
> Thanks and Regards,
> Bipin Singh*
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner

On Dec 4, 2012, at 2:45 PM, Chandan Choudhury  wrote:

> Hi Carsten,
> 
> Thanks for the reply.
> 
> If the number of PME nodes for g_tune_pme is half of np, and it exceeds the ppn of
> a node, how would g_tune_pme perform? What I mean is: if $NPROCS=36, then its half
> is 18 ppn, but 18 ppns are not present in a single node (max. ppn = 12 per
> node). How would g_tune_pme function in such a scenario?
Typically mdrun allocates the PME and PP nodes in an interleaved way, meaning
you would end up with 9 PME nodes on each of your two nodes.

Check the -ddorder of mdrun.

Interleaving is normally fastest unless you could have all PME processes 
exclusively
on a single node.
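As an illustration, the placement can be chosen on the mdrun command line like this (a sketch; -npme 12 is just a placeholder value, and how ranks map to nodes depends on your MPI setup):

  mpirun -np $NPROCS mdrun_mpi -s topol.tpr -npme 12 -ddorder interleave   # PME ranks interleaved with PP ranks (default)
  mpirun -np $NPROCS mdrun_mpi -s topol.tpr -npme 12 -ddorder pp_pme       # all PP ranks first, PME ranks last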

Carsten

> 
> Chandan
> 
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Tue, Dec 4, 2012 at 6:39 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> the number of separate PME nodes in Gromacs must be larger than two and
>> smaller or equal to half the number of MPI processes (=np). Thus,
>> g_tune_pme
>> checks only up to npme = np/2 PME nodes.
>> 
>> Best,
>>  Carsten
>> 
>> 
>> On Dec 4, 2012, at 1:54 PM, Chandan Choudhury  wrote:
>> 
>>> Dear Carsten and Florian,
>>> 
>>> Thanks for you useful suggestions. It did work. I still have a doubt
>>> regarding the execution :
>>> 
>>> export MPIRUN=`which mpirun`
>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>>> tune.edr -g tune.log
>>> 
>>> I am suppling $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme tunes
>>> the no. of pme nodes. As I am executing it on a single node, mdrun never
>>> checks pme for greater than 12 ppn. So, how do I understand that the pme
>> is
>>> tuned for 24 ppn spanning across the two nodes.
>>> 
>>> Chandan
>>> 
>>> 
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA
>>> 
>>> 
>>> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner 
>> wrote:
>>> 
>>>> Hi Chandan,
>>>> 
>>>> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury 
>> wrote:
>>>> 
>>>>> Hi Carsten,
>>>>> 
>>>>> Thanks for your suggestion.
>>>>> 
>>>>> I did try to pass to total number of cores with the np flag to the
>>>>> g_tune_pme, but it didnot help. Hopefully I am doing something silliy.
>> I
>>>>> have pasted the snippet of the PBS script.
>>>>> 
>>>>> #!/bin/csh
>>>>> #PBS -l nodes=2:ppn=12:twelve
>>>>> #PBS -N bilayer_tune
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> cd $PBS_O_WORKDIR
>>>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>>>> from here on your job file should read:
>>>> 
>>>> export MPIRUN=`which mpirun`
>>>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>>>> tune.edr -g tune.log
>>>> 
>>>>> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb
>> -x
>>>>> tune.xtc -e tune.edr -g tune.log -nice 0
>>>> this way you will get $NPROCS g_tune_pme instances, each trying to run
>> an
>>>> mdrun job on 24 cores,
>>>> which is not what you want. g_tune_pme itself is a serial program, it
>> just
>>>> spawns the mdrun's.
>>>> 
>>>> Carsten
>>>>> 
>>>>> 
>>>>> Then I submit the script using qsub.
>>>>> When I login to the compute nodes there I donot find and mdrun
>> executable
>>>>> running.
>>>>> 
>>>>> I also tried using nodes=1 and np 12. It didnot work through qsub.
>>>>> 
>>>>> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
>>>> 12
>>>>> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
>>>>> 
>>>>> It worked.
>>>>> 
>>>>> Also, if I just use
>>>>> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>>>> tune.edr
>>>>> -g tune.log -nice 0
>>>>> g_tune_pme executes on the head node and writes various files.
>>>>> 
>>>>> Kindly let me know what am 

Re: [gmx-users] g_tune_pme for multiple nodes

2012-12-04 Thread Carsten Kutzner
Hi Chandan,

the number of separate PME nodes in Gromacs must be larger than two and
smaller or equal to half the number of MPI processes (=np). Thus, g_tune_pme
checks only up to npme = np/2 PME nodes. 

Best,
  Carsten


On Dec 4, 2012, at 1:54 PM, Chandan Choudhury  wrote:

> Dear Carsten and Florian,
> 
> Thanks for your useful suggestions. It did work. I still have a doubt
> regarding the execution :
> 
> export MPIRUN=`which mpirun`
> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> tune.edr -g tune.log
> 
> I am supplying $NPROCS as 24 [2 (nodes)*12(ppn)], so that g_tune_pme tunes
> the no. of pme nodes. As I am executing it on a single node, mdrun never
> checks pme for greater than 12 ppn. So, how do I understand that the pme is
> tuned for 24 ppn spanning across the two nodes.
> 
> Chandan
> 
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury  wrote:
>> 
>>> Hi Carsten,
>>> 
>>> Thanks for your suggestion.
>>> 
>>> I did try to pass to total number of cores with the np flag to the
>>> g_tune_pme, but it didnot help. Hopefully I am doing something silliy. I
>>> have pasted the snippet of the PBS script.
>>> 
>>> #!/bin/csh
>>> #PBS -l nodes=2:ppn=12:twelve
>>> #PBS -N bilayer_tune
>>> 
>>> 
>>> 
>>> 
>>> cd $PBS_O_WORKDIR
>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
>> from here on your job file should read:
>> 
>> export MPIRUN=`which mpirun`
>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>> tune.edr -g tune.log
>> 
>>> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
>>> tune.xtc -e tune.edr -g tune.log -nice 0
>> this way you will get $NPROCS g_tune_pme instances, each trying to run an
>> mdrun job on 24 cores,
>> which is not what you want. g_tune_pme itself is a serial program, it just
>> spawns the mdrun's.
>> 
>> Carsten
>>> 
>>> 
>>> Then I submit the script using qsub.
>>> When I login to the compute nodes there I donot find and mdrun executable
>>> running.
>>> 
>>> I also tried using nodes=1 and np 12. It didnot work through qsub.
>>> 
>>> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np
>> 12
>>> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
>>> 
>>> It worked.
>>> 
>>> Also, if I just use
>>> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
>> tune.edr
>>> -g tune.log -nice 0
>>> g_tune_pme executes on the head node and writes various files.
>>> 
>>> Kindly let me know what am I missing when I submit through qsub.
>>> 
>>> Thanks
>>> 
>>> Chandan
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA
>>> 
>>> 
>>> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
>>> 
>>>> Hi Chandan,
>>>> 
>>>> g_tune_pme also finds the optimal number of PME cores if the cores
>>>> are distributed on multiple nodes. Simply pass the total number of
>>>> cores to the -np option. Depending on the MPI and queue environment
>>>> that you use, the distribution of the cores over the nodes may have
>>>> to be specified in a hostfile / machinefile. Check g_tune_pme -h
>>>> on how to set that.
>>>> 
>>>> Best,
>>>> Carsten
>>>> 
>>>> 
>>>> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury 
>> wrote:
>>>> 
>>>>> Dear gmx users,
>>>>> 
>>>>> I am using 4.5.5 of gromacs.
>>>>> 
>>>>> I was trying to use g_tune_pme for a simulation. I intend to execute
>>>>> mdrun at multiple nodes with 12 cores each. Therefore, I would like to
>>>>> optimize the number of pme nodes. I could execute g_tune_pme -np 12
>>>>> md.tpr. But this will only find the optimal PME nodes for single nodes
>>>>> run. How do I find the optimal PME nodes for multiple nodes.
>>>>> 
>>>>> Any suggestion would be helpful.
>>>>> 
>>>

Re: [gmx-users] g_tune_pme for multiple nodes

2012-11-29 Thread Carsten Kutzner
Hi Chandan,

On Nov 29, 2012, at 3:30 PM, Chandan Choudhury  wrote:

> Hi Carsten,
> 
> Thanks for your suggestion.
> 
> I did try to pass the total number of cores with the -np flag to
> g_tune_pme, but it did not help. Hopefully I am doing something silly. I
> have pasted the snippet of the PBS script.
> 
> #!/bin/csh
> #PBS -l nodes=2:ppn=12:twelve
> #PBS -N bilayer_tune
> 
> 
> 
> 
> cd $PBS_O_WORKDIR
> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
from here on your job file should read:

export MPIRUN=`which mpirun`
g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr 
-g tune.log

> mpirun -np $NPROCS  g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x
> tune.xtc -e tune.edr -g tune.log -nice 0
this way you will get $NPROCS g_tune_pme instances, each trying to run an mdrun
job on 24 cores,
which is not what you want. g_tune_pme itself is a serial program; it just
spawns the mdrun processes.

Carsten
> 
> 
> Then I submit the script using qsub.
> When I log in to the compute nodes, I do not find an mdrun executable
> running.
> 
> I also tried using nodes=1 and np 12. It did not work through qsub.
> 
> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5 -np 12
> -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0
> 
> It worked.
> 
> Also, if I just use
> $g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr
> -g tune.log -nice 0
> g_tune_pme executes on the head node and writes various files.
> 
> Kindly let me know what am I missing when I submit through qsub.
> 
> Thanks
> 
> Chandan
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> 
> 
> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner  wrote:
> 
>> Hi Chandan,
>> 
>> g_tune_pme also finds the optimal number of PME cores if the cores
>> are distributed on multiple nodes. Simply pass the total number of
>> cores to the -np option. Depending on the MPI and queue environment
>> that you use, the distribution of the cores over the nodes may have
>> to be specified in a hostfile / machinefile. Check g_tune_pme -h
>> on how to set that.
>> 
>> Best,
>>  Carsten
>> 
>> 
>> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury  wrote:
>> 
>>> Dear gmx users,
>>> 
>>> I am using 4.5.5 of gromacs.
>>> 
>>> I was trying to use g_tune_pme for a simulation. I intend to execute
>>> mdrun at multiple nodes with 12 cores each. Therefore, I would like to
>>> optimize the number of pme nodes. I could execute g_tune_pme -np 12
>>> md.tpr. But this will only find the optimal PME nodes for single nodes
>>> run. How do I find the optimal PME nodes for multiple nodes.
>>> 
>>> Any suggestion would be helpful.
>>> 
>>> Chandan
>>> 
>>> --
>>> Chandan kumar Choudhury
>>> NCL, Pune
>>> INDIA
>>> --
>>> gmx-users mailing listgmx-users@gromacs.org
>>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>>> * Please don't post (un)subscribe requests to the list. Use the
>>> www interface or send it to gmx-users-requ...@gromacs.org.
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> 
>> --
>> Dr. Carsten Kutzner
>> Max Planck Institute for Biophysical Chemistry
>> Theoretical and Computational Biophysics
>> Am Fassberg 11, 37077 Goettingen, Germany
>> Tel. +49-551-2012313, Fax: +49-551-2012302
>> http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
>> 
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> * Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Question about scaling

2012-11-13 Thread Carsten Kutzner
   49.7    44.0    37.6    down
>  Constraints        79.3    70.4    60.0    down
>  Comm. energies             3.2     5.3     up
>  Rest               38.3    27.1    25.4    down
> 
>  Total              3780.5  3254.6  2877.5  down
> 
> 
>  PME redist. X/F            133.0   120.5   down
>  PME spread/gather  511.3   465.7   396.8   down
>  PME 3D-FFT         59.4    88.9    102.2   up
>  PME solve          25.2    22.2    18.9    down
> 
> 
> The two calculations-parts for which the most time is saved for going
> parallel are:
> 1) Forces
> 2) Neighbor search (ok, going from 2 cores to 4 cores does not make a big
> difference, but going from 1 core to 2 or 4 saves much time)
> 
> For GMX 4.0.7 it looks similar, although the difference between 2 and 4 cores
> is not as high as for GMX 4.5.5.
> 
> Is there any good explanation for this time saving?
> I would have thought that the system has a set number of interactions and
> one has to calculate all of them. If I divide the set into 2 or
> 4 smaller sets, the number of interactions shouldn't change, and so the
> calculation time shouldn't change either?
> 
> Or is there something fancy in the algorithm which reduces the time spent
> accessing the arrays when the calculation is done for a smaller set of
> interactions?
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Question about scaling

2012-11-12 Thread Carsten Kutzner
Hi Thomas,

On Nov 12, 2012, at 5:18 PM, Thomas Schlesier  wrote:

> Dear all,
> i did some scaling tests for a cluster and i'm a little bit clueless about 
> the results.
> So first the setup:
> 
> Cluster:
> Saxonid 6100, Opteron 6272 16C 2.100GHz, Infiniband QDR
> GROMACS version: 4.0.7 and 4.5.5
> Compiler: GCC 4.7.0
> MPI: Intel MPI 4.0.3.008
> FFT-library: ACML 5.1.0 fma4
> 
> System:
> 895 spce water molecules
this is a somewhat small system I would say.

> Simulation time: 750 ps (0.002 fs timestep)
> Cut-off: 1.0 nm
> but with long-range correction ( DispCorr = EnerPres ; PME (standard 
> settings) - but in each case no extra CPU solely for PME)
> V-rescale thermostat and Parrinello-Rahman barostat
> 
> I get the following timings (seconds), where each is calculated as the time
> which would be needed for 1 CPU (so if a job on 2 CPUs took X s, the time
> would be 2 * X s).
> These timings were taken from the *.log file, at the end of the
> 'real cycle and time accounting' - section.
> 
> Timings:
> gmx-version   1cpu2cpu4cpu
> 4.0.7 422333843540
> 4.5.5 378032552878
Do you mean CPUs or CPU cores? Are you using the IB network or are you running 
single-node?

> 
> I'm a little bit clueless about the results. I always thought, that if i have 
> a non-interacting system and double the amount of CPUs, i
You do use PME, which means a global interaction of all charges.

> would get a simulation which takes only half the time (so the times as 
> defined above would be equal). If the system does have interactions, i would 
> lose some performance due to communication. Due to node imbalance there could 
> be a further loss of performance.
> 
> Keeping this in mind, i can only explain the timings for version 4.0.7 2cpu 
> -> 4cpu (2cpu a little bit faster, since going to 4cpu leads to more 
> communication -> loss of performance).
> 
> All the other timings, especially that 1cpu takes in each case longer than 
> the other cases, i do not understand.
> Probably the system is too small and/or the simulation time is too short
> for a scaling test. But i would assume that the amount of time to setup the 
> simulation would be equal for all three cases of one GROMACS-version.
> Only other explaination, which comes to my mind, would be that something went 
> wrong during the installation of the programs…
You might want to take a closer look at the timings in the md.log output files;
this will give you a clue where the bottleneck is, and also tell you about the
communication-to-computation ratio.
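For example, the relevant section can be pulled out of each log with something like (a sketch):

  grep -A 30 "A C C O U N T I N G" md.log

and then compared between the 1-, 2-, and 4-core runs.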

Best,
  Carsten


> 
> Please, can somebody enlighten me?
> 
> Greetings
> Thomas
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS with different gcc and FFT versions but one unique *tpr file

2012-11-08 Thread Carsten Kutzner
Hi Thomas,

the .tpr files you prepare should be identical if you prepare them with the same
Gromacs version - regardless of the compiler. You can check that with gmxdump 
and
a diff if you like.
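For example (a sketch; file names are placeholders):

  gmxdump -s local.tpr   > local.txt
  gmxdump -s cluster.tpr > cluster.txt
  diff local.txt cluster.txt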

If you run these .tpr files using different machines or different compilers they
will not be numerically identical. Even if you run them twice on the same 
machine
but with dynamic load balancing on, they will not be numerically identical any
more. 

Carsten


On Nov 8, 2012, at 3:43 PM, Thomas Schlesier  wrote:

> Dear all,
> i have access to a cluster on which GROMACS is compiled with a different 
> version of GCC and a different FFT libary (compared to the local machine).
> Will this affect simulationns if i prepare the *.tpr on the local machine and 
> run the simulation on the cluster and the local machine?
> 
> Sorry if this is a dumb question. I could imagine that the two simulations 
> will be not numerical identical due to the different FFT libaries, but how 
> strong this effect is and what else could happen i have no idea...
> 
> Greetings
> Thomas
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Regarding g_tune_pme optimization

2012-11-01 Thread Carsten Kutzner
Hi,

On Nov 1, 2012, at 4:13 PM, Venkat Reddy  wrote:

> Dear all Gromacs users,
> 
> I have *two *questions:
> 
> 1) I have been doing my simulation on a computer having 24
> processors. I issued *g_tune_pme -s *.tpr  -launch *command to
> directly launch my *mdrun *with the optimized settings. At the end of
> optimization, g_tune_pme has given -npme as *'0'*. My doubt is, how could
> it be possible to get best performance without dedicated PME nodes?
This is normal; it just means that having all 24 ranks do PME (in addition to the
particle-particle work) is optimal, and thus no *separate* PME-only nodes are needed.

Carsten

> 2) What could be the optimum value for *-rcom *to get the best performance
> on a super cluster (*i.e., 256 nodes*)?
> 
> Thanks in advance
> 
> 
> With Best Wishes
> Venkat Reddy Chirasani
> PhD student
> Laboratory of Computational Biophysics
> Department of Biotechnology
> IIT Madras
> Chennai
> INDIA-600036
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] too much warnings and notes

2012-10-29 Thread Carsten Kutzner
Hi,

On Oct 29, 2012, at 3:57 PM, Albert  wrote:

> On 10/29/2012 03:56 PM, Carsten Kutzner wrote:
>> Hi,
>> 
>> find the reason for the warnings in your mdp file settings
>> and adjust them accordingly.
>> 
>> You can also override the warnings with the -maxwarn 
>> option in grompp.
>> 
>> Carsten
> Hello Carsten:
> 
> thanks for kind reply.
> 
> the only thing I confused is the last one:
> 
> NOTE 4 [file md.mdp]:
>  The sum of the two largest charge group radii (0.597592) is larger than
>  rlist (1.00) - rvdw (1.20)
this is explained here:

http://www.gromacs.org/Documentation/Errors
http://www.gromacs.org/Documentation/Errors#The_sum_of_the_two_largest_charge_group_radii_(X)_is_larger_than.c2.a0rlist_-_rvdw.2frcoulomb

Carsten

> 
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] too much warnings and notes

2012-10-29 Thread Carsten Kutzner
Hi,

find the reason for the warnings in your mdp file settings
and adjust them accordingly.

You can also override the warnings with the -maxwarn 
option in grompp.
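For example (a sketch; md.mdp and complex.top are taken from your grompp output, the other file names are placeholders):

  grompp -f md.mdp -p complex.top -c conf.gro -o md.tpr -maxwarn 1

Note that -maxwarn only overrides warnings (not the notes) and should only be used once you are sure they are harmless.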

Carsten



On Oct 29, 2012, at 3:52 PM, Albert  wrote:

> hello:
> 
> I am generating a .tpr file for proten/ligand system, but it has so much 
> warnings:
> 
> 
> NOTE 1 [file md.mdp]:
>  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
>  nstcomm to nstcalcenergy
> 
> 
> NOTE 2 [file md.mdp]:
>  leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1
> 
> Generated 23436 of the 23436 non-bonded parameter combinations
> Generating 1-4 interactions: fudge = 1
> Generated 20254 of the 23436 1-4 parameter combinations
> Excluding 3 bonded neighbours molecule type 'Protein'
> turning all bonds into constraints...
> Excluding 3 bonded neighbours molecule type 'LIG'
> turning all bonds into constraints...
> Excluding 3 bonded neighbours molecule type 'POPC'
> turning all bonds into constraints...
> Excluding 2 bonded neighbours molecule type 'SOL'
> turning all bonds into constraints...
> Excluding 1 bonded neighbours molecule type 'NA'
> turning all bonds into constraints...
> Excluding 1 bonded neighbours molecule type 'CL'
> turning all bonds into constraints...
> Setting gen_seed to 6947582
> Velocities were taken from a Maxwell distribution at 300 K
> 
> NOTE 3 [file complex.top]:
>  The largest charge group contains 12 atoms.
>  Since atoms only see each other when the centers of geometry of the charge
>  groups they belong to are within the cut-off distance, too large charge
>  groups can lead to serious cut-off artifacts.
>  For efficiency and accuracy, charge group should consist of a few atoms.
>  For all-atom force fields use: CH3, CH2, CH, NH2, NH, OH, CO2, CO, etc.
> 
> Number of degrees of freedom in T-Coupling group Protein_LIG is 9475.34
> Number of degrees of freedom in T-Coupling group POPC is 33353.66
> Number of degrees of freedom in T-Coupling group Water_and_ions is 53973.00
> Largest charge group radii for Van der Waals: 0.299, 0.299 nm
> Largest charge group radii for Coulomb:   0.299, 0.299 nm
> 
> NOTE 4 [file md.mdp]:
>  The sum of the two largest charge group radii (0.597592) is larger than
>  rlist (1.00) - rvdw (1.20)
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Ion conduction through a protein-membrane system

2012-10-02 Thread Carsten Kutzner
Hi Shima,

there is also a patch for Gromacs available to study ion conduction through
membrane channels that you might find useful. Please take a look at this page:

http://www.mpibpc.mpg.de/grubmueller/compel

Best,
  Carsten



On Oct 2, 2012, at 8:16 AM, Shima Arasteh  wrote:

> 
> 
>  Dear users,
> 
> I want to study ion conduction through a protein-memrane system. 
> First of all, I tried to simulate a usual protein-membrane system. I'd like 
> to know if it is possible to add asymmetrical number of ions to leaflets of 
> membrane?
> Secondly, is it possible to  apply an external electrical field to study ion 
> conduction in a system?
> 
> Thanks in advance.
> 
> 
> Sincerely,
> Shima
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] how to optimize performance of IBM Power 775?

2012-09-03 Thread Carsten Kutzner
Hi Albert,

On Aug 25, 2012, at 7:37 AM, Albert  wrote:

> Dear:
> 
>  Our institute got an IBM Power 775 cluster and it is claimed to be very good.
> However, it doesn't support g_tune_pme.
Are you sure that it is not supported? Maybe you just need the right syntax.

> I use the following script for job submission:
> 
> 
> 
> #@ job_name = gromacs_job
> #@ output = gromacs.out
> #@ error = gromacs.err
> #@ class = kdm
> #@ node = 4
> #@ tasks_per_node = 32
> #@ wall_clock_limit = 01:00:00
> #@ network.MPI = sn_all,not_shared,US,HIGH
> #@ notification = never
> #@ environment = COPY_ALL
> #@ job_type = parallel
> #@ queue
> mpiexec -n 128 /opt/gromacs/4.5.5/bin/mdrun -nosum -dlb yes -v -s md.tpr
> 
> it is only 7 ns/day.
> 
> However, in another cluster with the same system, Core number and parameters, 
> I can get up to 30 ns/day.
> 
> Does anybody have any advices for this issue?
On a Power6 machine, I have successfully used the following job file:

# @ shell=/bin/ksh
#
# Sample script for LoadLeveler
#
# @ error   = run_1.err.$(jobid)
# @ output  = run_1.out.$(jobid)
# @ job_type = parallel
# @ environment= COPY_ALL
# @ node_usage= not_shared
# @ node = 1
# @ tasks_per_node = 4
# @ resources = ConsumableCpus(1)
# @ network.MPI = sn_all,not_shared,us
# @ wall_clock_limit = 0:05:00
# @ notification = complete
# @ queue

#
# run the program
#
export MDRUN=/path/to/gromacs-4.5.1/bin/mdrun_mpi
export MPIRUN=poe

#no poe here!
/path/to/g_tune_pme -np 4 \
-npstring none -s ./ap.tpr -resetstep 1 -steps 10

Hope that helps,
  Carsten



> 
> thank you very much
> Albert
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Only plain text messages are allowed!
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme for multiple nodes

2012-09-03 Thread Carsten Kutzner
Hi Chandan,

g_tune_pme also finds the optimal number of PME cores if the cores
are distributed on multiple nodes. Simply pass the total number of
cores to the -np option. Depending on the MPI and queue environment
that you use, the distribution of the cores over the nodes may have
to be specified in a hostfile / machinefile. Check g_tune_pme -h
on how to set that.
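A minimal sketch (paths, file names and the core count are placeholders; the hostfile format depends on your MPI library):

  export MPIRUN="mpirun -machinefile hosts"
  export MDRUN=/path/to/mdrun_mpi
  g_tune_pme -np 24 -s topol.tpr -launch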

Best,
  Carsten


On Aug 28, 2012, at 8:33 PM, Chandan Choudhury  wrote:

> Dear gmx users,
> 
> I am using 4.5.5 of gromacs.
> 
> I was trying to use g_tune_pme for a simulation. I intend to execute
> mdrun at multiple nodes with 12 cores each. Therefore, I would like to
> optimize the number of pme nodes. I could execute g_tune_pme -np 12
> md.tpr. But this will only find the optimal PME nodes for single nodes
> run. How do I find the optimal PME nodes for multiple nodes.
> 
> Any suggestion would be helpful.
> 
> Chandan
> 
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme cannot be executed

2012-09-03 Thread Carsten Kutzner
Hi Zifeng,

have you tried to use 

g_tune_pme -npstring none …

Carsten


On Aug 20, 2012, at 5:07 PM, zifeng li  wrote:

> Dear Gromacs users,
> 
> Morning!
> I am using Gromacs version 4.5.4 and am trying to use the magic power of
> g_tune_pme. However, it cannot be executed with the error in
> benchtest.log file:
> 
> "mpirun error: do not specify a -np argument.  it is set for you."
> 
> The cluster I use needs to submit mpirun job though PBS script, which
> looks like following:
> 
> #PBS -l nodes=8
> #PBS -l walltime=2:00:00
> #PBS -l pmem=2gb
> cd $PBS_O_WORKDIR
> #
> echo " "
> echo " "
> echo "Job started on `hostname` at `date`"
> g_tune_pme -s npt
> echo " "
> echo "Job Ended at `date`"
> echo " "
> ~
> I can run the command "mpirun mdrun_mpi  -deffnm npt " using this PBS
> script before and as you notice, -np for g_tune_mpe is not used.  Any
> suggestions about this issue?
> 
> What I have tried for your reference:
> 1. to delete the first line. well...it won't help.
> 2. to set the environmental variable as Manual suggests curiously:
> export MPIRUN="/usr/local/mpirun -machinefile hosts" (I used my account
> name as the "hosts" here.)
> 
> 
> Thanks in advance!
> 
> Good day :)
> 
> -Zifeng
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Only plain text messages are allowed!
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www3.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] how to run g_tune_pme in cluster?

2012-04-26 Thread Carsten Kutzner
On Apr 26, 2012, at 11:37 AM, Albert wrote:

> hello:
>  it can find mdrun correctly, and it only gives me the log file as I
> mentioned in the previous thread.
What files are produced by g_tune_pme?
Is there a benchtest.log? Can you cat its contents?

Carsten
> 
> thank you very much
> 
> On 04/26/2012 09:53 AM, Carsten Kutzner wrote:
>> Hi,
>> 
>> what output does g_tune_pme provide? What is in "log" and in
>> "perf.out"?
>> Can it find the correct mdrun / mpirun executables?
>> 
>> Carsten
>> 
>> 
>> On Apr 26, 2012, at 9:28 AM, Albert wrote:
>> 
>>> Hello:
>>>  Does anybody have any idea how to run g_tune_pme in a cluster? I tried 
>>> many times with following command:
>>> 
>>> g_tune_pme_d -v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log 
>>> -launch -nt 24>  log&
>>> 
>>> but it always failed.
>>> 
>>> 
>>> Option   Type   Value   Description
>>> --
>>> -[no]h   bool   no  Print help info and quit
>>> -[[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting 
>>> to be sent to a process whose contact information is unknown in file 
>>> rml_oob_send.c at line 105
>>> [CUDANodeA:03384] [[60523,1],22] could not get route to [[INVALID],INVALID]
>>> [CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to 
>>> be sent to a process whose contact information is unknown in file 
>>> base/plm_base_proxy.c at line 86
>>> 
>>> -- 
>>> gmx-users mailing listgmx-users@gromacs.org
>>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>>> Please search the archive at 
>>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>>> Please don't post (un)subscribe requests to the list. Use the www interface 
>>> or send it to gmx-users-requ...@gromacs.org.
>>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] how to run g_tune_pme in cluster?

2012-04-26 Thread Carsten Kutzner
Hi,

what output does g_tune_pme provide? What is in "log" and in
"perf.out"?
Can it find the correct mdrun / mpirun executables?

Carsten


On Apr 26, 2012, at 9:28 AM, Albert wrote:

> Hello:
>  Does anybody have any idea how to run g_tune_pme in a cluster? I tried many 
> times with following command:
> 
> g_tune_pme_d -v -s npt_01.tpr -o npt_01.trr -cpo npt_01.cpt -g npt_01.log 
> -launch -nt 24 > log &
> 
> but it always failed.
> 
> 
> Option   Type   Value   Description
> --
> -[no]h   bool   no  Print help info and quit
> -[[CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to 
> be sent to a process whose contact information is unknown in file 
> rml_oob_send.c at line 105
> [CUDANodeA:03384] [[60523,1],22] could not get route to [[INVALID],INVALID]
> [CUDANodeA:03384] [[60523,1],22] ORTE_ERROR_LOG: A message is attempting to 
> be sent to a process whose contact information is unknown in file 
> base/plm_base_proxy.c at line 86
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Error: 4095 characters, fgets2 has size 4095

2012-04-10 Thread Carsten Kutzner
Hi Steven,

you might have to remove files with weird names (such as ._name) in the 
directory where you run
grompp or in your forcefield directory. 
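For example, such files can be listed with something like (a sketch; adjust the paths as needed):

  find . -name '._*'
  find $GMXLIB -name '._*'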

Carsten


On Apr 10, 2012, at 11:52 AM, Steven Neumann wrote:

> Dear Gmx Users,
>  
> It is the first time I have come across such a problem. While preparing my NPT simulation
> before umbrella samping:
>  
> grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr
>  
> An input file contains a line longer than 4095 characters, while the buffer 
> passed to fgets2 has size 4095. The line starts with: '20s'
>  
> It's not about the files being in a bad format, as I have never had this problem - I
> am using Gromacs 4.5.4 installed on the cluster, with the PuTTY shell. I
> always use dos2gmx before processing.
>  
> Can you advise?
>  
> Steven
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] another g_tune_pme problem

2012-04-02 Thread Carsten Kutzner

On Apr 1, 2012, at 8:01 PM, Albert wrote:

> Hello:
>   I am trying to test g_tune_pme in workstation by command:
> 
> g_tune_pme_d -v -s md.tpr -o bm.trr -cpi md.cpt -cpo bm.cpt -g bm.log -launch 
> -nt 16 &
> 
> but it stopped immediately with following logs. I complied gromacs with a -d 
> in each module such as mdrun_d and I aliased mdrun_d to mdrun in the shell. 
> However, my g_tune_pme still claimed that it cannot execute md_run..
Hi,

so what does benchtest.log say?

Carsten

> 
> thank you very much
> 
> 
> --log--
> back Off! I just backed up perf.out to ./#perf.out.5#
> Will test 3 tpr files.
> Will try runs with 4 - 8 PME-only nodes.
>   Note that the automatic number of PME-only nodes and no separate PME nodes 
> are always tested.
> 
> Back Off! I just backed up benchtest.log to ./#benchtest.log.5#
> 
> ---
> Program g_tune_pme_d, VERSION 4.5.5
> Source code file: gmx_tune_pme.c, line: 631
> 
> Fatal error:
> Cannot execute mdrun. Please check benchtest.log for problems!
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
> 
> "Once Again Let Me Do This" (Urban Dance Squad)
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Scaling/performance on Gromacs 4

2012-02-20 Thread Carsten Kutzner
Hi Sara,

my guess is that 1500 steps are not at all sufficient for a benchmark on 64 
cores. 
The dynamic load balancing will need more time to adapt the domain sizes
for optimal balance. 
It is also important that you reset the timers when the load is balanced (to get
clean performance numbers); you might want to use the -resethway switch for 
that. 
g_tune_pme will help you find the performance optimum on any number of nodes;
from version 4.5 on it is included in Gromacs.
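With Gromacs 4.5 or later, a benchmark call could look like this (a sketch; file name, step count and reset step are placeholders):

  export MPIRUN=`which mpirun`
  export MDRUN=/path/to/mdrun_mpi
  g_tune_pme -np 64 -s md.tpr -steps 10000 -resetstep 5000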

Carsten


On Feb 20, 2012, at 5:12 PM, Sara Campos wrote:

> Dear GROMACS users
> 
> My group has had access to a quad processor, 64 core machine (4 x Opteron 
> 6274 @ 2.2 GHz with 16 cores)
> and I made some performance tests, using the following specifications:
> 
> System size: 299787 atoms
> Number of MD steps: 1500
> Electrostatics treatment: PME
> Gromacs version: 4.0.4
> MPI: LAM
> Command ran: mpirun -ssi rpi tcp C mdrun_mpi ...
> 
> #CPUS  Time (s)   Steps/s
> 64 195.000 7.69
> 32 192.000 7.81
> 16 275.000 5.45
> 8  381.000 3.94
> 4  751.000 2.00
> 2 1001.000 1.50
> 1 2352.000 0.64
> 
> The scaling is not good. But the weirdest is the 64 processors performing
> the same as 32. I see the plots from Dr. Hess on the GROMACS 4 paper on JCTC
> and I do not understand why this is happening. Can anyone help?
> 
> Thanks in advance,
> Sara
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Is there a way to omit particles with, q=0, from Coulomb-/PME-calculations?

2012-01-17 Thread Carsten Kutzner
Hi Thomas,

On Jan 17, 2012, at 10:29 AM, Thomas Schlesier wrote:

> But would there be a way to optimize it further?
> In my real simulation i would have a charged solute and the uncharged solvent 
> (both have nearly the same number of particles). If i could omit the 
> uncharged solvent from the long-ranged coulomb-calculation (PME) it would 
> save much time.
> Or is there a reason that some of the PME stuff is also calculated for 
> uncharged particles?

For PME you need the Fourier-transformed charge grid, and you get back the potential
grid from which you interpolate the forces on the charged atoms. Each charge is spread
onto typically 4x4x4 (= PME order) grid points, and only charged atoms take part in this
spreading. So the spreading part (and also the force interpolation part) will become
faster with fewer charges. However, the rest of PME (the Fourier transforms and the
calculations in reciprocal space) is unaffected by the number of charges; for this part
only the size of the whole PME grid matters. You could try to lower the number of PME
grid points (enlarge fourierspacing) and at the same time increase the PME order
(to 6, for example) to keep a comparable force accuracy. You could also try to shift
more load to real space, which will also lower the number of PME grid points (g_tune_pme
can do that for you). But I am not sure that you can get large performance benefits
from that.
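A sketch of the corresponding .mdp changes (the numbers are only illustrative; check that the force accuracy is still acceptable for your system):

  fourierspacing  = 0.16    ; coarser PME grid than the 0.12 nm default
  pme_order       = 6       ; higher interpolation order to compensate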

Best,
   Carsten


> (Ok, i know that this is a rather specical system, in so far that in most 
> md-simulations the number of uncharged particles is negligible.)
> Would it be probably better to move the question to the developer-list?
> 
> Greetings
> Thomas
> 
> 
>> On 17/01/2012 7:32 PM, Thomas Schlesier wrote:
>>> On 17/01/2012 4:55 AM, Thomas Schlesier wrote:
> Dear all,
> Is there a way to omit particles with zero charge from calculations
> for Coulomb-interactions or PME?
> In my calculations i want to coarse-grain my solvent, but the solute
> should be still represented by atoms. In doing so the
> solvent-molecules have a zero charge. I noticed that for a simulation
> with only the CG-solvent significant time was spent for the PME-part
> of the simulation.
> If i would simulate the complete system (atomic solute +
> coarse-grained solvent), i would save only time for the reduced
>>> number
> of particles (compared to atomistic solvent). But if i could omit the
> zero-charge solvent from the Coulomb-/PME-part, it would save much
> additional time.
> 
> Is there an easy way for the omission, or would one have to hack the
> code? If the latter is true, how hard would it be and where do i have
> to look?
> (First idea would be to create an index-file group with all
> non-zero-charged particles and then run in the loops needed for
> Coulomb/PME only over this subset of particles.)
> I have only experience with Fortran and not with C++.
> 
> Only other solution which comes to my mind would be to use plain
> cut-offs for the Coulomb-part. This would save time required for
>>> doing
> PME but will in turn cost time for the calculations of zeros
> (Coulomb-interaction for the CG-solvent). But more importantly would
> introduce artifacts from the plain cut-off :(
>>> 
 Particles with zero charge are not included in neighbour lists used
 for calculating Coulomb interactions. The statistics in the "M E G A
>>> - F L O P S   A C C O U N T I N G" section of the .log file will show
 that there is significant use of loops that do not have "Coul"
 component. So already these have no effect on half of the PME
 calculation. I don't know whether the grid part is similarly
 optimized, but you can test this yourself by comparing timing of runs
 with and without charged solvent.
 
 Mark
>>> 
>>> Ok, i will test this.
>>> But here is the data i obtained for two simulations, one with plain
>>> cut-off and the other with PME. As one sees the simulation with plain
>>> cut-offs is much faster (by a factor of 6).
>> 
>> Yes. I think I have seen this before for PME when (some grid cells) are
>> lacking (many) charged particles.
>> 
>> You will see that the nonbonded loops are always "VdW(T)" for tabulated
>> VdW - you have no charges at all in this system and GROMACS has already
>> optimized its choice of nonbonded loops accordingly. You would see
>> "Coul(T) + VdW(T)" if your solvent had charge.
>> 
>> It's not a meaningful test of the performance of PME vs cut-off, either,
>> because there are no charges.
>> 
>> Mark
>> 
>>> 
>>> 
>>> ---
>>> 
>>> With PME:
>>> 
>>> M E G A - F L O P S   A C C O U N T I N G
>>> 
>>>RF=Reaction-Field  FE=Free Energy  SCFE=Soft-Core/Free Energy
>>>T=TabulatedW3=SPC/TIP3pW4=TIP4p (single or pairs)
>>>NF=No Forces
>>> 
>>>  Computing:   

Re: [gmx-users] modify the gromacs4.5.5 code: using cout

2011-11-30 Thread Carsten Kutzner
Use fprintf(stdout, "…");
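For example, something along these lines works in md.c (the variable names are just
placeholders; stdio.h is usually already pulled in by the gromacs headers, so the
include may not even be needed):

  #include <stdio.h>

  fprintf(stdout, "step %d, my value = %g\n", step, my_value);
  fflush(stdout);   /* optional: flush so the output shows up immediately */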

Carsten


On Nov 30, 2011, at 12:27 PM, 杜波 wrote:

> dear teacher,
> 
> i want to modify the gromacs4.5.5 code, can i use the function "cout" which is 
> introduced in c++.
> 
> i add the code 
>   #include 
>   #include 
> at the head of the md.c 
> 
> but when i make , there is a error "
> md.c:103:20: error: iostream: No such file or directory"
> 
> thanks 
> regards,
> Bo Du
> Department of Polymer Science and Engineering,
> School of Chemical Engineering and technology,
> Tianjin University, Weijin Road 92, Nankai District 300072,
> Tianjin City P. R. China
> Tel/Fax: +86-22-27404303 ; +8613820062885
> E-mail: 2008d...@gmail.com ; dubo2...@tju.edu.cn
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] do_dssp segmentation fault

2011-11-23 Thread Carsten Kutzner
Hi,

On Nov 23, 2011, at 8:24 AM, Alex Jemulin wrote:

> Thanks for your reply
> Could you tell me the name of the file to download and how to install it?
Please follow the instructions at 
http://www.gromacs.org/Developer_Zone/Git/Basic_Git_Usage and
http://www.gromacs.org/Developer_Zone/Git/Git_Tutorial.

In short:
git clone git://git.gromacs.org/gromacs.git
git checkout --track -b release-4-5-patches origin/release-4-5-patches

If you are using CMake, you can then just install this like the normal
.tar.gz distributions from the gromacs home page. If you are using autotools,
do a ./bootstrap before the configure step.
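For example (the install prefix is just a placeholder):

  # CMake route, out-of-source build
  mkdir build && cd build
  cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.5
  make && make install

  # autotools route
  ./bootstrap
  ./configure --prefix=$HOME/gromacs-4.5
  make && make install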

Best,
  Carsten


>  
> Bests
> 
> Da: Carsten Kutzner 
> A: Alex Jemulin ; Discussion list for GROMACS users 
>  
> Inviato: Martedì 22 Novembre 2011 13:25
> Oggetto: Re: [gmx-users] do_dssp segmentation fault
> 
> Dear Alex,
> 
> On Nov 22, 2011, at 9:28 AM, Alex Jemulin wrote:
> 
> > Dear all
> > I'm experiencing the following error in Gromacs 4.5 with do_dssp
> >  
> > Here is the command
> > do_dssp -f md.xtc -s md.tpr -o secondary-structure.xpm -sc 
> > secondary-structure.xvg -dt 10
> >  
> > give me the following error
> > segmentation fault
> >  
> > How can I fix it?
> I removed a segmentation fault in do_dssp a couple of weeks ago, but that fix came
> after version 4.5.5. So you need to check out the current release-4-5-patches branch
> from the git server. I believe this will fix your problem.
> 
> Carsten
> 
> >  
> > Thank in ad
> > -- 
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at 
> > http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> > Please don't post (un)subscribe requests to the list. Use the 
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> 
> 

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] do_dssp segmentation fault

2011-11-22 Thread Carsten Kutzner
Dear Alex,

On Nov 22, 2011, at 9:28 AM, Alex Jemulin wrote:

> Dear all
> I'm experiencing the following error in Gromacs 4.5 with do_dssp
>  
> Here is the command
> do_dssp -f md.xtc -s md.tpr -o secondary-structure.xpm -sc 
> secondary-structure.xvg -dt 10
>  
> give me the following error
> segmentation fault
>  
> How can I fix it?
I removed a segmentation fault in do_dssp a couple of weeks ago, but that fix came
after version 4.5.5. So you need to check out the current release-4-5-patches branch
from the git server. I believe this will fix your problem.

Carsten
 
>  
> Thank in ad
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] suggestion that mdrun should ensure npme < the numberof processes

2011-08-17 Thread Carsten Kutzner
Hi,

On Aug 17, 2011, at 1:24 AM,  
 wrote:

> Currently, gromacs4.5.4 gives a segfault if one runs mpirun -np 8 mdrun_mpi 
> -npme 120 with no warning of the source of the problem.
> 
> Obviously npme>nnodes is a bad setup, but a check would be nice.
cr->npmenodes is set in mdrun.c right after the command line args are
passed, and in the code there is also a comment that npme>nnodes should not
cause a problem at that point.

However, if npme>nnodes, in init_domain_decomposition / dd_choose_grid /
optimize_ncells the number of PP nodes = nnodes-npme turns out to be negative,
such that in factorize the memory allocation does not work.
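A minimal guard of the kind suggested here could look roughly like this (only a
sketch; the exact place in the 4.5 code and the wording of the error are my own
assumptions):

  if (cr->npmenodes >= cr->nnodes)
  {
      gmx_fatal(FARGS, "The number of PME-only nodes (%d) must be smaller than "
                "the total number of nodes (%d)", cr->npmenodes, cr->nnodes);
  }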

I would have filed a bug report, however the web page seems to be down at the 
moment.

Best,
  Carsten


> 
> Thank you,
> Chris.
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use thewww interface or 
> send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme

2011-07-28 Thread Carsten Kutzner
Hi Carla,

On Jul 28, 2011, at 9:38 AM, Carla Jamous wrote:

> Hi everyone, please I was running simulations with gromacs version 4.0.3 ,but 
> I got the following error:
> Average load imbalance: 12.1 %
>  Part of the total run time spent waiting due to load imbalance: 6.9 %
>  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % 
> Y 9 %
>  Average PME mesh/force load: 0.807
>  Part of the total run time spent waiting due to PP/PME imbalance: 5.3 %
> 
This is not an error but just a hint at how you could optimize your performance.

> NOTE: 6.9 % performance was lost due to load imbalance
>   in the domain decomposition.
> 
> NOTE: 5.3 % performance was lost because the PME nodes
>   had less work to do than the PP nodes.
>   You might want to decrease the number of PME nodes
>   or decrease the cut-off and the grid spacing.
> 
> After searching the archive mailing list and reading the manual , I decided 
> to use g_tune_pme so I switched to gromacs 4.5.4. Here's my script:
Note that there is also a g_tune_pme version for 4.0.7: 
http://www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/index.html

As another possibility, you can use the tpr file you created with 4.0.x as input
for Gromacs 4.5.x (also for g_tune_pme); this is probably the easiest solution.

> 
> #PBS -S /bin/bash
> #PBS -N job_md6ns
> #PBS -e job_md6ns.err
> #PBS -o job_md6ns.log
> #PBS -m ae -M carlajam...@gmail.com
> #PBS -l select=2:ncpus=8:mpiprocs=8
> #PBS -l walltime=024:00:00
> cd $PBS_O_WORKDIR
> export GMXLIB=$GMXLIB:/scratch/carla/top:.
> module load gromacs
> chem="/opt/software/SGI/gromacs/4.5.4/bin/"
> mdrunmpi="mpiexec /opt/software/SGI/gromacs/4.5.4/bin/"
> ${chem}grompp -v -f md6ns.mdp -c 1rlu_apo_mdeq.gro -o 1rlu_apo_md6ns.tpr -p 
> 1rlu_apo.top
> ${mdrunmpi}g_tune_pme -v -s 1rlu_apo_md6ns.tpr -o 1rlu_apo_md6ns.trr -cpo 
> state_6ns.cpt -c 1rlu_apo_md6ns.gro -x 1rlu_apo_md6ns.xtc -e md6ns.edr -g 
> md6ns.log -np 4 -ntpr 1 -launch
> 
> But now, I have the following error message: 
> 
> Fatal error:
> Library file residuetypes.dat not found in current dir nor in your GMXLIB 
> path.
Why don't you build your tpr file on your workstation and then switch over
to the cluster? I guess this will make life easier for you.

Also note that you must not call g_tune_pme in parallel (which you do by
${mdrunmpi}g_tune_pme). g_tune_pme will spawn its own MPI processes with the
help of the MPIRUN and MDRUN environment variables. See g_tune_pme -h;
probably you need to set
export MDRUN=/opt/software/SGI/gromacs/4.5.4/bin/mdrun
export MPIRUN=`which mpiexec`
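and then launch the tuner itself serially, e.g. reusing the settings from your
script (a sketch; adjust -np to the number of MPI processes you want mdrun to use):

  ${chem}g_tune_pme -v -s 1rlu_apo_md6ns.tpr -np 4 -ntpr 1 -launch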

Hope that helps,
  Carsten

> 
> Except that I'm using amber94 force-field and that my topology files are in a 
> special directory called top where I modified certain things. With gromacs 
> 4.0.3, it always worked so I don't know what is happening here.
> 
> Please does anyone have an idea of what it might be?
> 
> Do I have to run pdb2gmx, editconf, etc... with the gromacs 4.5.4 for it to 
> work?
> 
> Thank you,
> 
> Carla
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS 4.5.1 mdrun re-compile for MPI

2011-03-24 Thread Carsten Kutzner
Hi,

you could try a make clean, and then configure again with --enable-threads.
It seems that for some reason you built only the serial mdrun version.
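A sketch of the rebuild (the install prefix is just a placeholder):

  make clean
  ./configure --enable-threads --prefix=/usr/local/gromacs
  make mdrun && make install-mdrun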

Carsten


On Mar 24, 2011, at 1:51 PM, Adam Herbst wrote:

> Dear GROMACS users,
> I successfully installed GROMACS 4.5.1 several months ago on a Mac Pro with 
> 12 CPUs, and the "mdrun" command (not "mpirun mdrun_mpi") allows parallel 
> simulations--it automatically uses multiple processors, while the number of 
> processors can be manually specified as N with the flag "mdrun -nt N".  I 
> understand that this is a feature of GROMACS 4 and later.  Now I am making 
> minor changes to the mdrun source code, and I want to recompile such that the 
> parallel version of mdrun is updated with my changes.  But when I run:
> 
>   make mdrun (or just make)
>   make install-mdrun (or just make install)
> 
> from the top-level source directory, the only executables that are updated 
> are the ones with the _mpi suffix, such as mdrun_mpi.  The version of mdrun 
> in src/kernel/ is updated, but this one has no -nt flag and cannot seem to 
> run on multiple processors.  And when I run
> 
>   mpirun -np N mdrun_mpi [options],
> 
> the same simulation is started separately on each processor, leading to a 
> crash.  If I use
> 
>   mpirun -np 1 -cpus-per-proc N mdrun_mpi [options],
> 
> I get an error message that this is not supported on my computer ("An attempt 
> to set processor affinity has failed").
> 
> I can't configure the input .tpr file for parallel because grompp doesn't 
> have the -np flag in GROMACS 4.
> 
> How can I update the parallel-capable "mdrun" executable with my changes?
> Thanks in advance,
> 
> Adam
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] New maintenance release: gromacs-4.5.4

2011-03-22 Thread Carsten Kutzner
Hi,

try to add the --disable-shared flag to your invocation of ./configure.
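With the configure line from your mail that would be (sketch):

  ./configure --prefix=/apps/gromacs4.5 --with-fft=fftw3 --with-x \
              --with-qmmm-gaussian --disable-shared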

Carsten


On Mar 22, 2011, at 3:26 PM, Ye MEI wrote:

> Thank you for the new version of gromacs.
> But the compilation of gromacs failed on my computer. The commands are as 
> follows:
> make distclean
> export CC=icc
> export F77=ifort
> export CXX=icc
> export CFLAGS="-xS -I/apps/fftw3/include"
> export FFLAGS="-xS -I/apps/fftw3/include"
> export CXXFLAGS="-I/apps/fftw3/include"
> export LDFLAGS="-L/apps/fftw3/lib -lfftw3f"
> ./configure --prefix=/apps/gromacs4.5 --with-fft=fftw3 --with-x 
> --with-qmmm-gaussian
> make
> 
> and the error message is
> icc  -shared  .libs/calcmu.o .libs/calcvir.o .libs/constr.o .libs/coupling.o 
> .libs/domdec.o .libs/domdec_box.o .libs/domdec_con.o .libs/domdec_network.o 
> .libs/domdec_setup.o .libs/domdec_top.o .libs/ebin.o .libs/edsam.o 
> .libs/ewald.o .libs/force.o .libs/forcerec.o .libs/ghat.o .libs/init.o 
> .libs/mdatom.o .libs/mdebin.o .libs/minimize.o .libs/mvxvf.o .libs/ns.o 
> .libs/nsgrid.o .libs/perf_est.o .libs/genborn.o .libs/genborn_sse2_single.o 
> .libs/genborn_sse2_double.o .libs/genborn_allvsall.o 
> .libs/genborn_allvsall_sse2_single.o .libs/genborn_allvsall_sse2_double.o 
> .libs/gmx_qhop_parm.o .libs/gmx_qhop_xml.o .libs/groupcoord.o .libs/pme.o 
> .libs/pme_pp.o .libs/pppm.o .libs/partdec.o .libs/pull.o .libs/pullutil.o 
> .libs/rf_util.o .libs/shakef.o .libs/sim_util.o .libs/shellfc.o .libs/stat.o 
> .libs/tables.o .libs/tgroup.o .libs/tpi.o .libs/update.o .libs/vcm.o 
> .libs/vsite.o .libs/wall.o .libs/wnblist.o .libs/csettle.o .libs/clincs.o 
> .libs/qmmm.o .libs/gmx_fft.o .libs/gmx_parallel_3dfft.o .libs/fft5d.o 
> .libs/gmx_wallcycle.o .libs/qm_gaussian.o .libs/qm_mopac.o .libs/qm_gamess.o 
> .libs/gmx_fft_fftw2.o .libs/gmx_fft_fftw3.o .libs/gmx_fft_fftpack.o 
> .libs/gmx_fft_mkl.o .libs/qm_orca.o .libs/mdebin_bar.o  -Wl,--rpath 
> -Wl,/home/ymei/gromacs-4.5.4/src/gmxlib/.libs -Wl,--rpath 
> -Wl,/apps/gromacs4.5/lib -lxml2 -L/apps/fftw3/lib /apps/fftw3/lib/libfftw3f.a 
> ../gmxlib/.libs/libgmx.so -lnsl  -pthread -Wl,-soname -Wl,libmd.so.6 -o 
> .libs/libmd.so.6.0.0
> ld: /apps/fftw3/lib/libfftw3f.a(problem.o): relocation R_X86_64_32 against `a 
> local symbol' can not be used when making a shared object; recompile with 
> -fPIC
> /apps/fftw3/lib/libfftw3f.a: could not read symbols: Bad value
> 
> However, it works fine for gromacs 4.5.3. Can anyone help?
> 
> Ye MEI
> 
> 2011-03-22 
> 
> 
> 
> From: Rossen Apostolov 
> Date: 2011-03-22  03:24:55 
> To: Discussion list for GROMACS development; Discussion list for GROMACS 
> users; gmx-announce 
> CC: 
> Subject: [gmx-users] New maintenance release: gromacs-4.5.4 
> 
> Dear Gromacs community,
> A new maintenance release of Gromacs is available for download at 
> ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.5.4.tar.gz.
> Some notable updates in this release:
> * Fixed pdb2gmx picking up force field from local instead of library 
> directory
> * Made pdb2gmx vsite generation work again for certain His namings.
> * Fixed incorrect virial and pressure averages with certain nst... 
> values (instantaneous values correct)
> * Fixed incorrect cosine viscosity output
> * New -multidir alternative for mdrun -multi option
> * Several minor fixes in analysis tools
> * Several updates to the program documentation
> Big thanks to all developers and users!
> Happy simulating!
> Rossen
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Performance in ia64 and x86_64

2011-02-25 Thread Carsten Kutzner
Hello Ignacio,

On Feb 25, 2011, at 10:25 AM, Ignacio Fernández Galván wrote:
> Well, I've compiled mdrun with MPI (with fortran kernels in the ia64), and 
> run 
> my test system in both machines, with a single processor. The results are 
> still 
> worrying (to me). This is a 50 time step (0.5 ns) simulation with 1500 
> water 
> molecules, not a big system, but it still takes some hours:
> 
> x86_64: 3.147 ns/day
> ia64: 0.507 ns/day
> 
> 
> Is this difference normal? Am I doing anything wrong? what further data 
> should I 


Some time ago I compared Itanium and x86 performances, see the fifth slide of 
this PDF: 

http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne/Talks/PDFs/kutzner07talk-optimizing.pdf

With Fortran kernels I got a performance of 0.31 ns/day for an 80,000 atom 
system
(with PME) on an Altix 4700, so your 0.5 ns/day for 1,500 waters seems too slow 
to me. 
What processor is this? Are you sure you are using the Fortran and not the C kernels?

Carsten
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] GROMACS installation query

2011-02-23 Thread Carsten Kutzner

On Feb 23, 2011, at 6:16 AM, Tom Dupree wrote:

> Greetings all,
>  
> I am new to Linux and wish to confirm  some facts before I press on with the 
> installation.
>  
> In the installation guide, 
> http://www.gromacs.org/Downloads/Installation_Instructions
> There is a line saying  “...Where assembly loops are in use, GROMACS 
> performance is largely independent of the compiler used. However the GCC 
> 4.1.x series of compilers are broken for GROMACS, and these are provided with 
> some commodity Linux clusters. Do not use these compilers!...”
>  
> Firstly I assume this still applies to GROMACS version 4.5 and not just to 
> earlier ones. (Confirm/deny?)
To my knowledge there are no workarounds for gcc 4.1.x compiler bugs in the 
newer Gromacs
versions. 

> Secondly I read this as GCC 4.2.x and greater should be fine. (confirm/deny?)
Yes. You could also use the Intel compiler which will typically give you one or 
two percent
extra performance. But do not expect too much.

Carsten
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] configure: error: cannot compute sizeof (off_t)

2011-02-21 Thread Carsten Kutzner

On Feb 20, 2011, at 9:30 PM, Justin Kat wrote:

> Dear experts,
> 
> I am still unable to overcome this error during the configuration:
> 
> configure: error: cannot compute sizeof (off_t)
> See `config.log' for more details.
So what does config.log say about "cannot compute sizeof (off_t)"?
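One way to pull the relevant part out of config.log (assuming a standard grep):

  grep -n -B 2 -A 20 'off_t' config.log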

Carsten

> 
> I came across this thread with the exact same setup as I have: 
> 
> http://lists.gromacs.org/pipermail/gmx-users/2011-February/058369.html
> 
> I have tried uninstalling openmpi 1.4.4 and installing the more stable 
> openmpi1.4.3 but I am still experiencing the same error.
> 
> ./configure --enable-mpi --program-suffix=_mpi MPICC=/usr/local/bin/mpicc 
> --with-fft=fftw3
> 
> I have also tried to explicitly provide the path to mpicc as above but it 
> still gives me the same error.
> 
> This may or may not be relevant but at the end of the config.log there is 
> also this line:
> 
> configure: exit 77
> 
> Does that mean anything?
> 
> Any help at all is appreciated!
> 
> Thanks,
> Justin--
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Performance in ia64 and x86_64

2011-02-11 Thread Carsten Kutzner
Hi Ignacio,

On Feb 11, 2011, at 1:33 PM, Ignacio Fernández Galván wrote:

> Hi all,
> 
> 
> I'm compiling and testing gromacs 4.5.3 in different machines, and I'm 
> wondering 
> if it's normal that the ia64 is much slower than the x86_64
> 
> I don't know full details of the machines, because I'm not the administrator 
> or 
> owner, but /proc/cpuinfo says:
> 
> ia64 (128 cores): Dual-Core Intel(R) Itanium(R) Processor 9140N
> 
> x86_64 (16 cores): Intel(R) Xeon(R) CPU   E5540  @ 2.53GHz
> 
> Just looking at the GHz, one is 2.5 and the other is 1.4, so I'd expect some 
> difference, but not a tenfold one: with 8 threads (mdrun -nt 8) I get 0.727 
> hours/ns on the x86_64, but 7.607 hours/ns on the ia64. (With 4 threads, it's 
> 1.3 and 13.7).
> 
> I compiled both cases with gcc, although different versions, and default 
> options. I had read assembly or fortran kernels could help with ia64, but 
> fortran is apparently incompatible with threads, and when I tried with 
> assembly 
> the mdrun seemed stuck (no timestep output was written). Is this normal? Is 
Yes, there is a problem with the ia64 assembly loops and this is exactly
how it manifests. I did run into that problem several times. What you can
do is to use the fortran kernels and compile with MPI. The performance
of the threaded and MPI versions should be the same, and the fortran 
kernels are nearly as fast as the ia64 assembly. Probably you can speed
things up a few percent by using the Intel compiler.
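A configure sketch for that route (assuming the autotools flags of the 4.5.x
series; please double-check with ./configure --help on your machine):

  ./configure --enable-mpi --enable-fortran --program-suffix=_mpi
  make && make install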

Cheers,
  Carsten



> there something else I'm missing?
> 
> Also, in the x86_64 system I get much lower performance with 12 or 16 
> threads, I 
> guess that could be because of the cores/processors, but I don't know what's 
> the 
> exact configuration of the machine. Again: is this normal?
> 
> Thanks,
> Ignacio
> 
> 
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re:General MD question

2011-02-02 Thread Carsten Kutzner
On Feb 2, 2011, at 11:48 AM, lloyd riggs wrote:

> Dear Carsten Kutzner,
> 
> First off, thanks.  I did not specify it in the input md.mdp file, but when I 
> looked at the generated out.mdp it had a Linear center of mass removal for 
> the groups [system].
> 
> When I added the pull vector it works, until the two subunits crash (move 
> past a realistic distance towards each other and the force == too much), and 
> then it crashes after atoms fly off, but I get a change in dG up until 
> this point from solution.
> 
> I will try and play around today, but wondered if anyone could spot check my 
> final .mdp input, as I lack in reviewers (for this portion of
I or somebody else can take a quick look for any obvious issues, but
nobody can guarantee the correctness of what you get, of course.
This is entirely your responsibility.

Carsten

> my work, ie people using gromacs).  As at some distant point in time I will 
> try and publish dG/ dH and possibly dS, and relate these to affinities (Ka, 
> KD and kD), I would like to make sure I did it correctly before the hellish 
> number crunching(ie most time consuming) part.  I did look over the tutorial 
> already...
> 
> Thanks
> 
> Stephan Watkins
> 
> Message: 3
> Date: Tue, 1 Feb 2011 09:58:07 +0100
> From: Carsten Kutzner 
> Subject: Re: [gmx-users] General MD question
> To: Discussion list for GROMACS users 
> Message-ID: 
> Content-Type: text/plain; charset=iso-8859-1
> 
> Hi Stephan,
> 
> On Jan 31, 2011, at 5:18 PM, lloyd riggs wrote:
> 
>> Dear All,
>> 
>> A quick question as I have not really delved into code for gromacs ever, nor 
>> know anyone close whom has worked on it.
>> 
>> If I set up an MD simulation using a 4 protein complex, and 1 small peptide, 
>> plus waters, etc...and run the whole thing the proteins never move, only the 
>> amino acids within(constant temp RT and pressure 1 bar).
>> 
>> Two domains make one complex, and another two the other.  Basically, if I 
>> seperate the domains say 5, 10, 15 angstrom, etc...the amino acids will 
>> drift (the chains) towards each other, but the two large (global) protein 
>> units never move their center (I know I can make it work with Pull vectors, 
>> but why not in the simple system with a generated initial randomized 
>> velocities), I woundered why they are fixed in a normal run with minimal 
>> parameters?  Is there a reason (specific to developers), historical reason, 
>> or other?  As waters move around fine, and anything else added (salt, small 
>> molecules of 20-30 atoms, water) except the central molecule(s) of interest.
> In a 'normal' run they should not be fixed. Could it be that you did 
> accidentally
> fix them by specifying center of mass removal (comm-grps in .mdp)?
> 
> Carsten
> 
> -- 
> GMX DSL Doppel-Flat ab 19,99 Euro/mtl.! Jetzt mit 
> gratis Handy-Flat! http://portal.gmx.net/de/go/dsl
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Carsten Kutzner
Hi Vigneshwar, 

the problem is fixed now in the release-4-0-patches branch. 

Carsten


On Feb 1, 2011, at 2:00 PM, Carsten Kutzner wrote:

> Hi,
> 
> apparently this bug fix made it to 4.5, but not to 4.0.
> I will apply the fix also there.
> 
> Carsten
> 
> On Feb 1, 2011, at 1:58 PM, Justin A. Lemkul wrote:
> 
>> 
>> 
>> Vigneshwar Ramakrishnan wrote:
>>> Dear All,
>>> I am using the gromacs 4.0.7 version and I was trying to calculate the 
>>> momentum autocorrelation function by using the -m flag. However, I get a 
>>> segmentation fault as follows:
>>> trn version: GMX_trn_file (double precision)
>>> Reading frame   0 time0.000   Segmentation fault
>>> When I don't use the -m option, I have no problem.
>>> Upon searching the userslist, I found this thread: 
>>> http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and a 
>>> patch, but I don't find any related bugs reported elsewhere. So, I am just 
>>> wondering if I sould go ahead and use the patch or if there could be 
>>> something else that is wrong. Will appreciate any kind of pointers. 
>> 
>> Either apply the patch or upgrade to a newer version of Gromacs that 
>> contains this bug fix.
>> 
>> -Justin
>> 
>>> Sincerely, Vignesh
>>> -- 
>>> R.Vigneshwar
>>> Graduate Student,
>>> Dept. of Chemical & Biomolecular Engg,
>>> National University of Singapore,
>>> Singapore
>>> "Strive for Excellence, Never be satisfied with the second Best!!"
>>> I arise in the morning torn between a desire to improve the world and a 
>>> desire to enjoy the world. This makes it hard to plan the day. (E.B. White)
>> 
>> -- 
>> 
>> 
>> Justin A. Lemkul
>> Ph.D. Candidate
>> ICTAS Doctoral Scholar
>> MILES-IGERT Trainee
>> Department of Biochemistry
>> Virginia Tech
>> Blacksburg, VA
>> jalemkul[at]vt.edu | (540) 231-9080
>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>> 
>> 
>> -- 
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> Please search the archive at 
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> Please don't post (un)subscribe requests to the list. Use the www interface 
>> or send it to gmx-users-requ...@gromacs.org.
>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> 
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
> 
> 
> 
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Carsten Kutzner
Hi,

apparently this bug fix made it to 4.5, but not to 4.0.
I will apply the fix also there.

Carsten

On Feb 1, 2011, at 1:58 PM, Justin A. Lemkul wrote:

> 
> 
> Vigneshwar Ramakrishnan wrote:
>> Dear All,
>> I am using the gromacs 4.0.7 version and I was trying to calculate the 
>> momentum autocorrelation function by using the -m flag. However, I get a 
>> segmentation fault as follows:
>> trn version: GMX_trn_file (double precision)
>> Reading frame   0 time0.000   Segmentation fault
>> When I don't use the -m option, I have no problem.
>> Upon searching the userslist, I found this thread: 
>> http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and a 
>> patch, but I don't find any related bugs reported elsewhere. So, I am just 
>> wondering if I sould go ahead and use the patch or if there could be 
>> something else that is wrong. Will appreciate any kind of pointers. 
> 
> Either apply the patch or upgrade to a newer version of Gromacs that contains 
> this bug fix.
> 
> -Justin
> 
>> Sincerely, Vignesh
>> -- 
>> R.Vigneshwar
>> Graduate Student,
>> Dept. of Chemical & Biomolecular Engg,
>> National University of Singapore,
>> Singapore
>> "Strive for Excellence, Never be satisfied with the second Best!!"
>> I arise in the morning torn between a desire to improve the world and a 
>> desire to enjoy the world. This makes it hard to plan the day. (E.B. White)
> 
> -- 
> 
> 
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> MILES-IGERT Trainee
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] General MD question

2011-02-01 Thread Carsten Kutzner
Hi Stephan,

On Jan 31, 2011, at 5:18 PM, lloyd riggs wrote:

> Dear All,
> 
> A quick question as I have not really delved into code for gromacs ever, nor 
> know anyone close whom has worked on it.
> 
> If I set up an MD simulation using a 4 protein complex, and 1 small peptide, 
> plus waters, etc...and run the whole thing the proteins never move, only the 
> amino acids within(constant temp RT and pressure 1 bar).
> 
> Two domains make one complex, and another two the other.  Basically, if I 
> seperate the domains say 5, 10, 15 angstrom, etc...the amino acids will drift 
> (the chains) towards each other, but the two large (global) protein units 
> never move their center (I know I can make it work with Pull vectors, but why 
> not in the simple system with a generated initial randomized velocities), I 
> woundered why they are fixed in a normal run with minimal parameters?  Is 
> there a reason (specific to developers), historical reason, or other?  As 
> waters move around fine, and anything else added (salt, small molecules of 
> 20-30 atoms, water) except the central molecule(s) of interest.
In a 'normal' run they should not be fixed. Could it be that you did 
accidentally
fix them by specifying center of mass removal (comm-grps in .mdp)?
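For reference, the relevant .mdp lines to check would be something like this
(group names are just examples):

  comm-mode = Linear
  comm-grps = Protein SOL   ; per-group COM removal like this pins each group's center
  ; versus
  comm-grps = System        ; removes only the overall drift of the whole system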

Carsten

> 
> Grüsse
> 
> Stephan Watkins
> -- 
> Neu: GMX De-Mail - Einfach wie E-Mail, sicher wie ein Brief!  
> Jetzt De-Mail-Adresse reservieren: http://portal.gmx.net/de/go/demail
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Gromacs + GPU: Problems running dppc example in ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz

2011-01-27 Thread Carsten Kutzner
Hi Camilo,

On Jan 27, 2011, at 7:19 AM, Camilo Andrés Jimenez Cruz wrote:

> Sorry, abrupt sending,
> 
> the coulombtype is the same
> 
> coulombtype =  cut-off
Is your cut-off actually 0.0 then?

Carsten

> 
> and constraints =  all-bonds is the same. Any idea?
> 
> 2011/1/27 Camilo Andrés Jimenez Cruz 
> Hi all!
> 
> I am trying to run the dppc example located in 
> ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz, with the gpu 
> version of gromacs, when  I run it I get 
> 
> WARNING: OpenMM does not support leap-frog, will use velocity-verlet 
> integrator.
> 
> 
> ---
> Program mdrun_sg, VERSION 4.5.3
> Source code file: 
> /usr/src/redhat/BUILD/gromacs-4.5.3/src/kernel/openmm_wrapper.cpp, line: 555
> 
> Fatal error:
> OpenMM supports only the following methods for electrostatics: NoCutoff (i.e. 
> rcoulomb = rvdw = 0 ),Reaction-Field, Ewald or PME.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
> 
> 
> but when I compare the mdp file with the examples in 
> http://www.gromacs.org/Downloads/Installation_Instructions/Gromacs_on_GPUs 
> (impl_1nm, for example), the integrator is the same
> 
> integrator  =  md
> 
> -- 
> Camilo Andrés Jiménez Cruz
> 
> 
> 
> 
> -- 
> Camilo Andrés Jiménez Cruz
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] 4 x Opteron 12-core or 4 x Xeon 8-core ?

2011-01-20 Thread Carsten Kutzner
Hi David,

On Jan 20, 2011, at 1:21 PM, David McGiven wrote:

> Dear Gromacs Users,
> 
> We're going to buy a new server for HPC. It is going to run mainly Gromacs 
> calculations.
> 
> Regarding Gromacs performance, I'm wondering which one, you Gromacs users and 
> developers, think will be faster.
> 
> AMD Server :   4 x AMD Opteron 6176 12-core 2.3 Ghz + 96GB Memory (2GB / core)
> Intel Server : 4 x Intel Xeon 8-core 2.66 Ghz + 64 GB RAM (2GB / core)
> 
> We normally run ~100k atom systems with PME and explicit water.
> 
> Which one would you recommend ?
> 
> Also, of course, AMD Server is cheaper. But we are mainly interested on 
> performance.

If you have a fixed amount of money, you will get the most ns/day 
if you buy the AMD Magny Cours machines. Each one will be slower compared to the
Intel server but you will get more servers altogether, thus more total
performance. If you can only buy a single server and you do not care about
what it costs, the Intel will be faster for sure.
Note that you do not need 2 GB/core for 100k atoms MD systems if you run
Gromacs. Half of it will be more than enough.

Carsten


> Thanks.
> 
> Best Regards,
> David
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use thewww interface or 
> send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] gromacs 4.5.3 with threads instead of MPI

2011-01-17 Thread Carsten Kutzner
Hi,

On Jan 17, 2011, at 4:11 PM, Arnau Cordomi wrote:

> Dear Gromacs Users,
> 
> We normally run gromacs 4.0.x mdrun with OpenMPI in a 24 core
> shared-memory server (SunFire X4450).
> i. e. the command we use for a 12 core run is : mpirun -np 12 mdrun -v
> -c output_md.gro
> This is working great so far.
> 
> Now we are trying to use  gromacs 4.5.x and I found this on the
> release notes 
> (http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.5.x)
> :
> 
> “Running on a multi-core node now uses thread-based parallelization to
> automatically spawn the optimum number of threads in the default
> build. MPI is now only required for parallelization over the network.”
> 
> So, I guess now instead of “mpirun -np 12 mdrun -v -c output_md.gro”
> we should use “mdrun -nt 12 -v -c output_md.gro” and expect the same
> performance. Am I right?
Right.

> 
> Also, is this “automatically spawn the optimum number of threads”
> reliable ? Does that mean that if the recommended number is 4 cores
> (threads) there’s no way to make it run faster even if specifying -nt
> 12 or 24 ?
You should use the number of logical cores you have. This is normally
the number of physical cores, unless you use some kind of hyperthreading
or simultaneous multithreading, in which case -nt is twice the number
of physical cores. mdrun detects the number of available cores on your system
via sysconf().
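On the 24-core X4450 from your mail that would simply be (sketch):

  mdrun -nt 24 -v -c output_md.gro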

Carsten

> 
> Any advice will be welcome.
> Thanks in advance.
> 
> Best Regards,
> Arnau
> 
> Cordomí
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_tune_pme big standard deviation in perf.out output

2011-01-01 Thread Carsten Kutzner
Dear Yanbin,

On Dec 30, 2010, at 9:20 PM, WU Yanbin wrote:
> I'm simulating a SPC/E water box with the size of 4nm by 4nm by 4nm. The 
> command "g_tune_pme" was used to find the optimal PME node numbers, Coulomb 
> cutoff radius and grid spacing size. 
> 
> The following command is used:
> g_tune_pme -np 24 -steps 5000 -resetstep 500 ...
> rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12
> 
> The simulation is done with no error. Below is the output:
> ---
> Line tpr PME nodes  Gcycles Av. Std.dev.   ns/dayPME/fDD 
> grid
>0   0   12  2813.762  187.1159.6040.3614   
> 3   1
>1   0   11  2969.826  251.2109.1120.510   13   
> 1   1
>2   0   10  2373.469  154.005   11.3850.4452   
> 7   1
>3   09  2129.519   58.132   12.6650.6015   
> 3   1
>4   08  2411.653  265.233   11.2480.5704   
> 4   1
>5   07  2062.770  514.023   13.4900.616   17   
> 1   1
>6   06  1539.237   89.189   17.5470.7486   
> 3   1
>7   00  1633.318  113.037   16.548  -  6   
> 4   1
>8   0   -1(  4) 1330.146   32.362   20.2761.0504   
> 5   1
> ---
> 
> The optimal -npme is 4.
> 
> It seems to me that the "Std. dev" is too huge.
This is the standard deviation resulting from multiple runs with the
same settings. If you do not specify "-r" for the number of repeats 
explicitly to g_tune_pme, it will do two tests for each setting. For
the optimum of 4 PME nodes the standard deviation is 2.4 percent of the 
mean, thus not large at all.

> Can anyone tell me the meaning of "Gcycles Av." and "Std. dev" and their 
> relations to the accuracy of "ns/day"?
Both the number of CPU cycles and the ns/day values are determined from
the md.log output file of the respective runs. g_tune_pme does the averaging
for you, but you can also look at the individual results, these log files
are still there after the tuning run. The standard deviation is printed
only for the Gcycles - maybe it is a good idea to also print the standard
deviation for the ns/day values. If the standard dev is X percent of the
mean for the cycles, then it is also X percent of the mean ns/day.
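As a quick example from your table: for the optimum in line 8, 32.362 / 1330.146
is about 2.4 %, which translates into an uncertainty of roughly 0.024 * 20.276,
i.e. about 0.5 ns/day.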

> 
> Another question:
> I tried 
> g_tune_pme -np 24 -steps 1000 -resetstep 100 ... (the default value of 
> g_tune_pme)
> rcoul=1.5nm, rvdw=1.5nm, fourierspacing=0.12
> 
> The optimal -npme is 6, different from "-npme=4" as obtained with big 
> "-nsteps".
> Should I increase "-nsteps" even more to get better estimate, or what else 
> parameters should I try?
> 
In principle the results will become more exact, the longer the test runs
are. For your system it seems that the load between the processes is not yet
optimally balanced after the default 100 steps so that -resetstep 500 gives
you a more accurate value. I think the -steps 5000 value is large enough, 
but another test with a higher resetstep value would answer your question.
Since you already know that 7-12 PME nodes will not perform well, I would
try

g_tune_pme -np 24 -steps 5000 -resetstep 5000 -min 0.16 -max 0.25 ...

Regards,
  Carsten

> Do let me know if the questions are not made clear.
> Thank you.
> 
> Best,
> Yanbin
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] stopping mdrun without error massage

2010-12-22 Thread Carsten Kutzner
Dear Karim,

a (small) load imbalance is perfectly normal for a parallel
simulation and there is no need to switch over to particle decomposition
(both domain and particle decomposition should however work).
Are you sure you get no error message? People will need some
more information here to be able to help. It is usually
a good idea to include which MPI library you are using
and the exact command line with which you invoked mdrun. You could
anyway try to run on a single processor only and see whether
this scenario also "stops" or whether you get a proper
error message or core file.
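A single-core test run could be as simple as (sketch; the -deffnm name is just an
example, and -nt requires a thread-enabled 4.5.x build):

  mdrun -nt 1 -v -deffnm test_single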

Carsten


On Dec 22, 2010, at 12:34 PM, Mahnam wrote:

> In God We Trust 
> Hello Dear GMX users 
> I want to do MD on one peptide in water with gromacs 4.5.3. 
> I minimized and equilibrated my system in NPT and NVT for 50 ps , but when I 
> do final mdrun it has load imbalance and when I try  -pd option it stops 
> after 265 ps without any error massage !, can everybody help me. 
> Here is my mdp file .
> constraints =  hbonds 
> integrator  =  md 
> dt  =  0.002 
> nsteps  =  5000 
> nstcomm =  10 
> comm_mode   =  Linear 
> comm_grps   =  protein 
> nstxout =  250 
> nstvout =  1000 
> nstfout =  0 
> nstcalcenergy   =  10 
> nstlog  =  1000 
> nstenergy   =  1000 
> nstlist =  10 
> ns_type =  grid 
> rlist   =  1.2 
> coulombtype =  PME 
> rcoulomb=  1.2 
> rvdw=  1.4 
> fourierspacing  =  0.12 
> fourier_nx  =  0 
> fourier_ny  =  0 
> fourier_nz  =  0 
> pme_order   =  4 
> ewald_rtol  =  1e-5 
> optimize_fft=  yes 
> energygrps  = protein  SOL 
>  
>   
> ; Berendsen temperature coupling is on in three groups 
> Tcoupl  =  v-rescale 
> tau_t   =  0.1   1  
> tc-grps  =  protein   bulk
> ref_t   =  300   300
> ; Pressure coupling is  on 
> Pcoupl  =  parrinello-Rahman 
> tau_p   =  1 
> compressibility =  4.5e-5 
> ref_p   =  1.0 
> ; Generate velocites is on at 300 K. 
> gen_vel =  yes 
> gen_temp=  300.0 
> gen_seed=  173529
>  
> 
> Many thanks in advance for your help and your reply. 
> Yours truly 
> Karim Mahnam 
> Institute of  Biochemistry  and  Biophysics (IBB) 
> Tehran University 
> P.O.box 13145-1384 
> Tehran 
> Iran 
> http://www.ibb.ut.ac.ir/ 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] how to add a electric field to the water box when do a simulation

2010-12-20 Thread Carsten Kutzner
On Dec 20, 2010, at 12:09 PM, 松啸天 wrote:

> dear:
>I would like to use the  electric field inside a box defined by gromacs. 
> So I added E_x 1 10 0 in the .mdp file, is it the right approach?
Yes, this will add an electric field of strength 10 V/nm acting in x-direction.
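In .mdp terms (a sketch; my reading of the three numbers follows the E-field input
format, i.e. one cosine term with amplitude and phase):

  E_x = 1 10 0   ; 1 term, amplitude 10 V/nm, phase 0 -> static field along x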

Carsten
>  i  hope people who knows will help me to add the electric field when i do a 
> simulation.that's all.thank you!
>  
>  yours
>sincerely
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] pullx.xvg / pullf.xvg

2010-12-16 Thread Carsten Kutzner
On Dec 16, 2010, at 2:23 PM, Poojari, Chetan wrote:

> Hi,
> 
> Following were the commands which i used in the umbrella sampling simulations:
> 
> grompp -f md_umbrella.mdp -c conf500.gro -p topol.top -n index.ndx -o 
> umbrella500.tpr
> 
> mdrun -v -deffnm umbrella500
> 
> 
> Output:umbrella500.xvg, umbrella500.xtc, umbrella500.trr, 
> umbrella500.log, umbrella500.gro, umbrella500.edr, umbrella500.cpt
> 
> 
> pullf.xvg and pullx.xvg files were not produced.
With -deffnm you specified a default filename for all output files.
Try to use mdrun -pf pullf.xvg -px pullx.xvg -s input.tpr instead.
Adding -h to mdrun will show you what your output files will be called.
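For example, keeping your -deffnm naming (sketch):

  mdrun -v -deffnm umbrella500 -pf umbrella500_pullf.xvg -px umbrella500_pullx.xvg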

Carsten

> 
> 
> Please can I know should i mention -px  pullx.xvg  and  -pf  pullf.xvg in the 
> mdrun?
> 
> 
> Kind regards,
> chetan
> 
> 
> 
> From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] On Behalf 
> Of chris.ne...@utoronto.ca [chris.ne...@utoronto.ca]
> Sent: 15 December 2010 17:00
> To: gmx-users@gromacs.org
> Subject: [gmx-users] pullx.xvg / pullf.xvg
> 
> please copy and paste your commands your output. It is unlikely that
> any of us are going to do that tutorial in order to understand your
> question.
> 
> -- original message --
> 
> Hi,
> 
> I am following the umbrella sampling tutorial written by Justin Lemkul.
> 
> I was successfully able to run the umbrella sampling simulations, but
> for each configuration it outputted a single xvg file.
> 
> For the data analysis i should have pullf.xvg / pullx.xvg filesbut
> these files are not outputted after the simulation run.
> 
> I haven't made any changes to the .mdp files mentioned in the tutorial.
> 
> 
> Please can i know what might have gone wrong that it has not produced
> pullf.xvg and pullx.xvg files.
> 
> 
> 
> Kind regards,
> chetan.
> 
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> 
> 
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDirig Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
> Dr. Ulrich Krafft (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> 
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g-WHAM

2010-12-14 Thread Carsten Kutzner
Hi Mohsen, 

for a start, it is always a good idea to read the help
text of a command you are interested in, or to check
the most recent version of the manual. Using Gromacs 4.5

g_wham -h 

will guide you to a JCTC paper about g_wham, which
is a nice starting point. Check out this paper as well
as the references therein!

Also Google is your friend here:
http://pubs.acs.org/doi/abs/10.1021/ct100494z
http://onlinelibrary.wiley.com/doi/10.1002/jcc.540130812/pdf
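If you then want to run it on your umbrella windows, a typical call looks roughly
like this (the two .dat files are hypothetical names; each lists, line by line, the
.tpr files and the matching pullf.xvg files of your windows):

g_wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg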

Carsten


On Dec 14, 2010, at 9:20 AM, mohsen ramezanpour wrote:

> Dear All 
> 
> What is the algorithm of g-WHAM?
> in other words, what is the weighted histogram analysis method?
> Thanks in advance for your reply
> Mohsen
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] problem building Gromacs 4.5.3 using the Intel compiler

2010-12-13 Thread Carsten Kutzner
Hi,

you might also need to use the mpiicc compiler wrapper instead
of the mpicc to enforce using icc instead of gcc.
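For example (just a sketch; whether CC or MPICC is picked up depends on your
configure setup, and the paths are the ones from your mail):

export CC=mpiicc
./configure --enable-double --enable-mpi --program-suffix=_mpi_d --prefix=/gpfs/grace/gromacs-4.5.3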

Carsten

On Dec 13, 2010, at 2:20 PM, Mark Abraham wrote:

> 
> 
> On 12/13/10, "Miah Wadud Dr (ITCS)"  wrote:
>> Hello,
>> 
>> I am trying to build Gromacs 4.5.3 using the Intel compiler, but I am 
>> encountering the following problems when I type "make":
> 
> It seems like there's some mismatch between how FFTW was compiled and how 
> you're linking to it. Try a fresh FFTW compilation with this compiler, etc.
> 
> Mark
>> 
>> /bin/sh ../../libtool --tag=CC   --mode=link mpicc  -O3 -tpp7 -axW -ip -w 
>> -msse2 -funroll-all-loops -std=gnu99  -L/gpfs/grace/fftw-3.2.2/lib   -o 
>> grompp grompp.o libgmxpreprocess_mpi_d.la  ../mdlib/libmd_mpi_d.la 
>> ../gmxlib/libgmx_mpi_d.la  -lnsl -lm
>> mpicc -O3 -tpp7 -axW -ip -w -msse2 -funroll-all-loops -std=gnu99 -o grompp 
>> grompp.o  -L/gpfs/grace/fftw-3.2.2/lib ./.libs/libgmxpreprocess_mpi_d.a 
>> /gpfs/ueasystem/grace/gromacs-4.5.3/src/mdlib/.libs/libmd_mpi_d.a 
>> ../mdlib/.libs/libmd_mpi_d.a /gpfs/grace/fftw-3.2.2/lib/libfftw3.a 
>> /gpfs/ueasystem/grace/gromacs-4.5.3/src/gmxlib/.libs/libgmx_mpi_d.a 
>> ../gmxlib/.libs/libgmx_mpi_d.a -lnsl -lm
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(mapflags.o): In function 
>> `timelimit_to_flags':
>> /gpfs/ueasystem/grace/fftw-3.2.2/api/./mapflags.c:70: undefined reference to 
>> `__fmth_i_dlog'
>> /gpfs/ueasystem/grace/fftw-3.2.2/api/./mapflags.c:70: undefined reference to 
>> `__fmth_i_dlog'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(timer.o): In function `elapsed':
>> /gpfs/ueasystem/grace/fftw-3.2.2/kernel/./cycle.h:244: undefined reference 
>> to `__mth_i_dfloatuk'
>> /gpfs/ueasystem/grace/fftw-3.2.2/kernel/./cycle.h:244: undefined reference 
>> to `__mth_i_dfloatuk'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(generic.o): In function `apply':
>> /gpfs/ueasystem/grace/fftw-3.2.2/dft/./generic.c:73: undefined reference to 
>> `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(lt8-generic.o): In function 
>> `apply_r2hc':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./generic.c:74: undefined reference to 
>> `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(lt8-generic.o): In function 
>> `apply_hc2r':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./generic.c:128: undefined reference 
>> to `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(vrank3-transpose.o): In function 
>> `transpose_toms513':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./vrank3-transpose.c:536: undefined 
>> reference to `__c_mzero1'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(trig.o): In function `real_cexp':
>> /gpfs/ueasystem/grace/fftw-3.2.2/kernel/./trig.c:65: undefined reference to 
>> `__fmth_i_dsincos'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(dht-rader.o): In function `apply':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./dht-rader.c:79: undefined reference 
>> to `__c_mzero8'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(dht-rader.o): In function `mkomega':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./dht-rader.c:184: undefined reference 
>> to `__c_mzero8'
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./dht-rader.c:187: undefined reference 
>> to `__c_mcopy8_bwd'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(ct-hc2c-direct.o): In function 
>> `apply_buf':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./ct-hc2c-direct.c:130: undefined 
>> reference to `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(dftw-direct.o): In function 
>> `apply_buf':
>> /gpfs/ueasystem/grace/fftw-3.2.2/dft/./dftw-direct.c:103: undefined 
>> reference to `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(direct.o): In function `apply_buf':
>> /gpfs/ueasystem/grace/fftw-3.2.2/dft/./direct.c:74: undefined reference to 
>> `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(direct-r2c.o): In function `iterate':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./direct-r2c.c:129: undefined 
>> reference to `__builtin_alloca'
>> /gpfs/grace/fftw-3.2.2/lib/libfftw3.a(hc2hc-direct.o): In function 
>> `apply_buf':
>> /gpfs/ueasystem/grace/fftw-3.2.2/rdft/./hc2hc-direct.c:96: undefined 
>> reference to `__builtin_alloca'
>> make[3]: *** [grompp] Error 1
>> make[3]: Leaving directory `/gpfs/ueasystem/grace/gromacs-4.5.3/src/kernel'
>> make[2]: *** [all-recursive] Error 1
>> make[2]: Leaving directory `/gpfs/ueasystem/grace/gromacs-4.5.3/src'
>> make[1]: *** [all] Error 2
>> make[1]: Leaving directory `/gpfs/ueasystem/grace/gromacs-4.5.3/src'
>> make: *** [all-recursive] Error 1
>> [r...@head00 gromacs-4.5.3]#
>> 
>> Any help will be greatly appreciated. The configure command prints the 
>> following:
>> 
>> [r...@head00 gromacs-4.5.3]# ./configure --enable-double --enable-mpi 
>> --program-suffix=_mpi_d --prefix=/gpfs/grace/gromacs-4.5.3
>> checking build system type... x86_64-unknown-linux-gnu
>> checking host system type... x86_64-unknown-linux-gnu
>> checking for a BSD-compatible install... /usr/bin/install -c
>> checking whether build environment is sa

[gmx-users] Re: trouble parallelizing a simulation over a cluster

2010-12-08 Thread Carsten Kutzner
On Dec 8, 2010, at 1:03 PM, Hassan Shallal wrote:

> Thanks a lot Justin for the very helpful answers concerning the pressure 
> equilibration. Using Berendsen Barostat over 200 ps has lead to the correct 
> average pressure...
>  
> I have another issue to discuss with you and with the Gromacs mailing list 
> members;
>  
> I have been trying to run a simulation on a computer cluster for the first 
> time using a sub file script. What happened is that the .sub file attempted 
> to run the simulation 24 times instead of parallelizing it over the 24 
> processors
>  
> Here are the contents of run_1.sub file I tried to use to parallelize the 
> simulation using qsub run_1.sub
>  
> #PBS -S /bin/bash
> #PBS -N run_1
> #PBS -l nodes=3:ppn=8
> module load openmpi/gnu
> mpirun -np 24 /home/hassan/bin/bin/mdrun_mpi -deffnm run_1 -v &> 
> run_1_update.txt
> exit $?
> What happens it that it outputs 24 run_1.log files, starting from 
> #run_1.log1# all the way to #run_1.log23#...Has anyone faced this problem 
> before? and If yes, any hints or solutions?
This typically happens when using a serial mdrun. You should check with ldd
whether mdrun_mpi is linked to the correct mpi library.
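For example, with the path from your script:

ldd /home/hassan/bin/bin/mdrun_mpi | grep -i mpi
# this should list the libmpi of the OpenMPI you load with 'module load openmpi/gnu';
# if nothing MPI-related shows up, the binary was built without MPI support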

Carsten


>  
> I do appreciate any help in that domain
> Hassan
> 
> From: gmx-users-boun...@gromacs.org on behalf of Justin A. Lemkul
> Sent: Mon 12/6/2010 6:43 PM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] pressure fluctuations
> 
> 
> 
> Hassan Shallal wrote:
> > Dear Gromacs users,
> > 
> > I have some concerns about the both the pressure fluctuations and
> > averages I obtained during the equilibration phase. I have already read
> > through several similar posts as well as the following link
> > http://www.gromacs.org/Documentation/Terminology/Pressure. I understand
> > the pressure is a macroscopic rather than instantaneous property and the
> > average is what really matters. I also found out through similar posts
> > that negative average pressure indicates the system tendency to contract.
> > 
> > In the above link, it mentioned that pressure fluctuations should
> > decrease significantly with increasing the system's size. In my cases, I
> > have a fairly big systems (case_1 with *17393* water molecules
> > and case_2 with *11946 *water molecules). However, the pressure still
> > has huge fluctuations (around 500 bars) from the reference value (1
> > bar). Here are the average pressure and density values resulting from
> > the equilibration phases of two cases, please notice the negative
> > average pressure values in both cases...
> > 
> > Case_1_pressure:
> > Energy  Average   Err.Est.   RMSD  Tot-Drift
> > ---
> > Pressure    *-2.48342*   0.92   369.709   -4.89668  (bar)
> > Case_1_density:
> > Energy  Average   Err.Est.   RMSD  Tot-Drift
> > ---
> > Density      1022.89     0.38    3.8253    2.36724  (kg/m^3)
> > Case_2_pressure:
> > Energy  Average   Err.Est.   RMSD  Tot-Drift
> > ---
> > Pressure    *-8.25259*   2.6    423.681   -12.1722  (bar)
> > Case_2_density:
> > Energy  Average   Err.Est.   RMSD  Tot-Drift
> > ---
> > Density      1034.11     0.37    2.49964   1.35551  (kg/m^3)
> > 
> > So I have some questions to address my concerns:
> > 1- each of the above systems has a protein molecule, NaCl to give 0.15 M
> > system and solvent (water) molecules... Could that tendency to contract
> > be an artifact of buffering the system with sodium and chloride ions?
> > 
> 
> I suppose anything is possible, but given that these are fairly standard
> conditions for most simulations, I tend to doubt it.  My own (similar) systems
> do not show this problem.
> 
> > 2- how to deal with the tendency of my system to contract?  Should
> > I change the number of water molecules in the system? 
> > or
> > Is it possible to improve the average pressure of the above systems by
> > increasing the time of equilibration from 100 ps to may be 500 ps or
> > even 1 ns?
> > 
> > 3- Is there a widely used range of average pressure (for ref_p = 1 bar)
> > that indicates acceptable equilibration of the system prior to the
> > production?
> > 
> 
> To answer #2 and #3 simultaneously - equilibration is considered "finished" 
> when
> your system stabilizes at the appropriate conditions (usually temperature and
> pressure).  Your results indicate that your equilibrium is insufficient.
> 
> > 4- I can't understand how the system has a tendency to contract whereas
> > the average density of the solvent is already slightly higher than it

Re: [gmx-users] How to suppress the error "X particles communicated to PME node Y are more than a cell length out of the domain decomposition cell of their charge group"

2010-12-02 Thread Carsten Kutzner
On Dec 2, 2010, at 6:16 AM, WU Yanbin wrote:

> Dear GMXers,
> 
> I'm running a simulation of water contact angle measurement on top of 
> graphite surface. 
> Initially a water cubic box is placed on two-layer graphite surface with the 
> rest of the box being vacuum. The water droplet is relaxed during the 
> simulation to develop a spherical shape.
> 
> An error of "X particles communicated to PME node Y are more than a cell 
> length out of the domain decomposition cell of their charge group" was 
> encountered.
> And I have read the suggested solutions at the link below
> http://www.gromacs.org/Documentation/Errors#X_particles_communicated_to_PME_node_Y_are_more_than_a_cell_length_out_of_the_domain_decomposition_cell_of_their_charge_group.
> 
> I guess the reason for this error in my case is because of the vacuum such 
> that the water molecules at the boundary of the droplet can move fast. I have 
> check the trajectory and the simulation is OK.
> 
> For this situation, is there a way of suppressing this error? Or what else 
> can I do?
If the system is small enough, you can run it on a single core, where
the problem cannot occur. You could also try particle decomposition (-pd)
instead of domain decomposition, or use fewer domains, i.e. fewer cores in
total or at least fewer PP nodes if you use PME/PP splitting. This will at
least reduce the probability of the problem occurring.
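In commands this would look something like (sketches only; adjust the core counts to
your machine, topol.tpr is a placeholder):

mdrun -nt 1 -s topol.tpr                      # single core, no decomposition
mpirun -np 8 mdrun_mpi -pd -s topol.tpr       # particle instead of domain decomposition
mpirun -np 8 mdrun_mpi -npme 4 -s topol.tpr   # fewer PP domains, more PME-only nodes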

Carsten


> 
> PS: the GROMACS version I'm using is GROMACS4.5.
> 
> Thank you.
> 
> Best,
> Yanbin
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Failed to lock: pre.log (Gromacs 4.5.3)

2010-11-26 Thread Carsten Kutzner
Hi,

as a workaround you could run with -noappend and later
concatenate the output files. Then you should have no
problems with locking.
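A sketch of that workflow with your file names (the .part numbers are only
illustrative, mdrun prints the actual ones):

mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -noappend
trjcat  -f pre.xtc pre.part0002.xtc -o pre_all.xtc    # join the trajectory pieces
eneconv -f pre.edr pre.part0002.edr -o pre_all.edr    # join the energy files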

Carsten


On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:

> Hi all,
> 
> I just recompiled GMX4.0.7. Such error doesn't occur. But 4.0.7 is about 30% 
> slower than 4.5.3. So I really appreciate if anyone can help me with it!
> 
> best regards,
> Baofu Qiao
> 
> 
> On 2010-11-25 20:17, Baofu Qiao wrote:
>> Hi all,
>> 
>> I got the error message when I am extending the simulation using the 
>> following command:
>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>> -append 
>> 
>> The previous simulation succeeded. I wonder why pre.log is locked, and what
>> the strange warning "Function not implemented" means.
>> 
>> Any suggestion is appreciated!
>> 
>> *
>> Getting Loaded...
>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>> 
>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>> 
>> ---
>> Program mdrun, VERSION 4.5.3
>> Source code file: checkpoint.c, line: 1750
>> 
>> Fatal error:
>> Failed to lock: pre.log. Function not implemented.
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> 
>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>> 
>> Error on node 0, will try to stop all the nodes
>> Halting parallel program mdrun on CPU 0 out of 64
>> 
>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>> 
>> --
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode -1.
>> 
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --
>> --
>> mpiexec has exited due to process rank 0 with PID 32758 on
>> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists





--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] installation

2010-11-24 Thread Carsten Kutzner
On Nov 24, 2010, at 4:04 PM, Rossella Noschese wrote:

> Hi all, I'm trying to install gromacs.4.5.3 on fedora 13.
> I followed the instruction on the website, I completed my make install and 
> this was the output:
>   GROMACS is installed under /usr/local/gromacs.
> Make sure to update your PATH and MANPATH to find the
> programs and unix manual pages, and possibly LD_LIBRARY_PATH
> or /etc/ld.so.conf if you are using dynamic libraries.
> 
> Please run "make tests" now to verify your installation.
> 
> If you want links to the executables in /usr/local/bin,
> you can issue "make links" now.
> 
> Since I understood I could set my environment later, I directly complete my 
> make links.
> Then I added in my .bashrc the line: 
> source /usr/local/gromacs/bin/GMXRC.bash
> 
> and it was added to mi $PATH and also my $MANPATH seems to be right 
> (/usr/local/gromacs/share/man:)
> 
> When I type GMXRC in the shell it comes:
GMXRC is for setting the paths to your Gromacs executables. You
can do that by typing "source GMXRC", but this you have already done
in your bashrc, so everything is already set up and fine! You will find
all the Gromacs programs at your fingertips, e.g. "mdrun -h" prints
out help for the main MD program.

Carsten


> [rosse...@topolone ~]$ GMXRC
> /usr/local/gromacs/bin/GMXRC: line 35: return: can only `return' from a 
> function or sourced script
> /usr/local/gromacs/bin/GMXRC: line 44: CSH:: command not found
> /usr/local/gromacs/bin/GMXRC.csh: line 8: syntax error near unexpected token 
> `setenv'
> /usr/local/gromacs/bin/GMXRC.csh: line 8: `if (! $?LD_LIBRARY_PATH) setenv 
> LD_LIBRARY_PATH ""'
> 
> Is there anyone that could help me?
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] unexpexted stop of simulation

2010-11-03 Thread Carsten Kutzner
Hi,

there was also an issue with the locking of the general md.log
output file which was resolved for 4.5.2. An update might help.

Carsten


On Nov 3, 2010, at 3:50 PM, Florian Dommert wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 11/03/2010 03:38 PM, Hong, Liang wrote:
>> Dear all,
>> I'm performing a three-day simulation. It runs well for the first day, but 
>> stops for the second one. The error message is below. Does anyone know what 
>> might be the problem? Thanks
>> Liang
>> 
>> Program mdrun, VERSION 4.5.1-dev-20101008-e2cbc-dirty
>> Source code file: /home/z8g/download/gromacs.head/src/gmxlib/checkpoint.c, 
>> line: 1748
>> 
>> Fatal error:
>> Failed to lock: md100ns.log. Already running simulation?
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> 
>> "Sitting on a rooftop watching molecules collide" (A Camp)
>> 
>> Error on node 0, will try to stop all the nodes
>> Halting parallel program mdrun on CPU 0 out of 32
>> 
>> gcq#348: "Sitting on a rooftop watching molecules collide" (A Camp)
>> 
>> --
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode -1.
>> 
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --
>> [node139:04470] [[37327,0],0]-[[37327,1],0] mca_oob_tcp_msg_recv: readv 
>> failed: Connection reset by peer (104)
>> --
>> mpiexec has exited due to process rank 0 with PID 4471 on
>> node node139 exiting without calling "finalize". This may
>> have caused other processes in the application to be
>> terminated by signals sent by mpiexec (as reported here).
> 
> Perhaps the queueing system of your cluster does not allow running a job
> longer than 24h. Or the default is 24h and you have to supply the
> corresponding information to the submission script.
> 
> /Flo
> 
> - -- 
> Florian Dommert
> Dipl.-Phys.
> 
> Institute for Computational Physics
> 
> University Stuttgart
> 
> Pfaffenwaldring 27
> 70569 Stuttgart
> 
> Phone: +49(0)711/685-6-3613
> Fax:   +49-(0)711/685-6-3658
> 
> EMail: domm...@icp.uni-stuttgart.de
> Home: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.10 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
> 
> iEYEARECAAYFAkzRdrEACgkQLpNNBb9GiPm1sgCg3LkRUWgiZvOOH/GIjp5ifbZI
> bJcAn1aamCMWlWTokD1+eDCLG1WhT/rd
> =4Vs3
> -END PGP SIGNATURE-
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists





--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Fwd: -pbc nojump

2010-10-27 Thread Carsten Kutzner

On Oct 27, 2010, at 10:05 AM, leila karami wrote:

> Hi Carsten
>  
> Thanks for your answer. You got my case very well.
>  
> I understand your mean as follows:
> 
> 1)  Trjconv –f a.xtc –s a.tpr –o b.xtc –pbc mol  (output group=water)
> 
> 2)  Trjconv –f a.xtc –s a.tpr –o c.xtc –pbc nojump (output group 
> =protein-dna)
> 
> Is that true?
Yes, I would try it that way.
> 
> You said, (Then overlay both results in the visualization program). How?
In pymol for example, just load both PDBs. 

Carsten


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Fwd: -pbc nojump

2010-10-27 Thread Carsten Kutzner
Hi,

with the nojump option, your water molecules will slowly
diffuse out of the "home" box and appear far away from your
protein if you display the MD system with VMD or pymol.

You can split your trajectory in two parts (using index groups)
and use different options on them individually:
a) on the water part (and also on the ions, if they are present),
   use trjconv -pbc mol
b) on the rest, use trjconv -pbc nojump

Then overlay both results in the visualization program.
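With the file names from your mail this could look like (a sketch; I assume the index
file contains a water+ions group and a protein-DNA group):

trjconv -f a.xtc -s a.tpr -n a.ndx -pbc mol    -o solvent.xtc   # choose the water/ion group
trjconv -f a.xtc -s a.tpr -n a.ndx -pbc nojump -o complex.xtc   # choose the protein-DNA group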

Carsten


On Oct 27, 2010, at 9:40 AM, David van der Spoel wrote:

> 
> 
> 
> I did simulation of protein-dna complex in water solvent. After
> simulation, two strands of dna was separated when I displayed my a.xtc
> with VMD.I used (trjconv –f a.xtc –s a.tpr –n a.ndx –o b.xtc –pbc
> nojump) and problem fixed. But now I have another problem. Before using
> –pbc nojump, there were water molecules in interface of between protein
> and dna but, After using of –pbc nojump, 1) there was no water molecule
> in interface of between protein and dna. 2) The distance between water
> molecules and protein or dna was increased.
> 
> I want to survey interfacial water molecules and dynamics of water
> mediated hydrogen bonds.
> 
> I had sent my question to gromacs mailing list, before, but my problem
> was not solved.
> 
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_dipole ? =>salt-molecule => Does Gromacs consider counter ions?

2010-10-22 Thread Carsten Kutzner
On Oct 22, 2010, at 4:14 PM, Chih-Ying Lin wrote:

> 
> Hi
> Sorry, I ask the same question again because i am not a decent person in this 
> field.
> If possible, someone can give me a quick answer while i am trying to get 
> understanding the source codes.
> My basic understanding is that Gromacs has other approach of calculating 
> dipole moment instead of the following equation.
> dipole moment = 48.0 sum of q_i x_i
> x_i is the atomic position.
Gromacs does not have another approach: it calculates exactly the above
equation. Take a look at the function "mol_dip()" in gmx_dipoles.c. There,
the dipole moment is calculated for a single molecule. A molecule is a group 
of atoms connected by chemical bonds. g_dipoles will only consider the
molecules of the group you are prompted to provide. If you for example
choose 'solvent' then you will get the sum of all the individual dipole
moments of the water molecules in your system. 
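(The factor of about 48.0 in your equation is just the conversion from e*nm to Debye.)
So to get the dipole moments without the counter ions, run something like the sketch
below and pick group 12 at the prompt (file names are placeholders):

g_dipoles -f traj.xtc -s topol.tpr -n index.ndx -o Mtot.xvg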

Carsten
> 
> 
> When I issued the command g_dipole,
> the dialog poped out and asked me to select a group.
> 
> 1. system
> 2. protein
> 
> .
> 
> 11. solvent
> 12. the rest of the salt-molecule except its counter ion
> 13. counter ions (CL-)
> 
> 
> If I select #12, Gromacs will not consider counter ions to calculate the
> dipole moment ???
> 
> 
> Sorry for disturbing people in the Gromacs mailing list.
> Thank you
> Lin
> 
> 
> On 2010-10-22 00.49, Chih-Ying Lin wrote:
> > Hi
> > When I issued the command g_dipole,
> > the dialog poped out and asked me to select a group.
> > 1. system
> > 2. protein
> > 
> > .
> > 
> > 11. solvent
> > 12. the rest of the salt-molecule except its counter ion
> > 13. counter ions (CL-)
> > If I select #12, Gromacs will not consider counter ions to calculate the
> > dipole moment ???
> > Thank you
> > Lin
> >
> you should try to understand what is going on yourself rather than
> sending many email to the mailing list. Please read the source code of
> the program.
> 
> --
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Carsten Kutzner
On Oct 21, 2010, at 4:44 PM, Sander Pronk wrote:

> 
> Thanks for the information; the OpenMPI recommendation is probably because 
> OpenMPI goes to great lengths trying to avoid process migration. The numactl 
> doesn't prevent migration as far as I can tell: it controls where memory gets 
> allocated if it's NUMA. 
My understanding is that processes get pinned to cores with the help of 
the --physcpubind switch to numactl, but please correct me if I am wrong.
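What I have been using is along these lines (a sketch only; pin.sh is a hypothetical
two-line wrapper, and OMPI_COMM_WORLD_LOCAL_RANK is set by OpenMPI for each rank):

#!/bin/bash
# pin.sh: bind this MPI rank to one core and to local memory
exec numactl --physcpubind=$OMPI_COMM_WORLD_LOCAL_RANK --localalloc "$@"

and then launch with

mpirun -np 48 ./pin.sh mdrun_mpi -s topol.tpr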
 
Carsten
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Carsten Kutzner
Hi Sander,

On Oct 21, 2010, at 12:27 PM, Sander Pronk wrote:

> Hi Carsten,
> 
> As Berk noted, we haven't had problems on 24-core machines, but quite frankly 
> I haven't looked at thread migration. 
I did not have any problems on 32-core machines either, only on 48-core ones.
> 
> Currently, the wait states actively yield to the scheduler, which is an 
> opportunity for the scheduler to re-assign threads to different cores. I 
> could set harder thread affinity but that could compromise system 
> responsiveness (when running mdrun on a desktop machine without active 
> yielding, the system slows down noticeably). 
> 
> One thing you could try is to turn on the THREAD_MPI_WAIT_FOR_NO_ONE option 
> in cmake. That turns off the yielding which might change the migration 
> behavior.
I will try that, thanks!
> 
> BTW What do you mean with bad performance, and how do you notice thread 
> migration issues?
A while ago I benchmarked a ~80,000 atom test system (membrane+channel+water, 2 fs time
step, cutoffs @ 1 nm) on a 48-core 1.9 GHz AMD node. My first try gave a lousy 7.5 ns/day
using Gromacs 4.0.7 and IntelMPI. According to AMD, parallel applications should be
run under control of numactl to be compliant with the new memory hierarchy. Also, they
suggest using OpenMPI rather than other MPI libs. With OpenMPI and numactl - which pins
the processes to the cores - the performance was nearly doubled to 14.3 ns/day. Using
Gromacs 4.5 I got 14.0 ns/day with OpenMPI+numactl and 15.2 ns/day with threads (here no
pinning was necessary for the threaded version!)

Now on another machine with identical hardware (but another Linux) I get 4.5.1 timings
that vary a lot (see g_tune_pme snippet below) even between identical runs. One run
actually approaches the expected 15 ns/day, while the others (also with 20 PME-only
nodes) do not. I cannot be sure that thread migration is the problem here, but correct
pinning might be necessary.

Carsten
 


g_tune_pme output snippet for mdrun with threads:
-
Benchmark steps : 1000
dlb equilibration steps : 100
Repeats for each test   : 4

 No.   scaling  rcoulomb  nkx  nky  nkz   spacing  rvdw  tpr file
   0   -input-  1.00   90   88   80  0.119865   1.00  ./Aquaporin_gmx4_bench00.tpr

Individual timings for input file 0 (./Aquaporin_gmx4_bench00.tpr):
PME nodes   Gcycles     ns/day    PME/f    Remark
   24      1804.442      8.736    1.703    OK.
   24      1805.655      8.730    1.689    OK.
   24      1260.351     12.505    0.647    OK.
   24      1954.314      8.064    1.488    OK.
   20      1753.386      8.992    1.960    OK.
   20      1981.032      7.958    2.190    OK.
   20      1344.375     11.721    1.180    OK.
   20      1103.340     14.287    0.896    OK.
   16      1876.134      8.404    1.713    OK.
   16      1844.111      8.551    1.525    OK.
   16      1757.414      8.972    1.845    OK.
   16      1785.050      8.833    1.208    OK.
    0      1851.645      8.520      -      OK.
    0      1871.955      8.427      -      OK.
    0      1978.357      7.974      -      OK.
    0      1848.515      8.534      -      OK.
  -1( 18)  1926.202      8.182    1.453    OK.
  -1( 18)  1195.456     13.184    0.826    OK.
  -1( 18)  1816.765      8.677    1.853    OK.
  -1( 18)  1218.834     12.931    0.884    OK.



> Sander
> 
> On 21 Oct 2010, at 12:03 , Carsten Kutzner wrote:
> 
>> Hi,
>> 
>> does anyone have experience with AMD's 12-core Magny-Cours
>> processors? With 48 cores on a node it is essential that the processes
>> are properly pinned to the cores for optimum performance.  Numactl
>> can do this, but at the moment I do not get good performance with
>> 4.5.1 and threads, which still seem to be migrating around.
>> 
>> Carsten
>> 
>> 

--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Gromacs 4.5.1 on 48 core magny-cours AMDs

2010-10-21 Thread Carsten Kutzner
Hi,

does anyone have experience with AMD's 12-core Magny-Cours
processors? With 48 cores on a node it is essential that the processes
are properly pinned to the cores for optimum performance.  Numactl
can do this, but at the moment I do not get good performance with
4.5.1 and threads, which still seem to be migrating around.

Carsten


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Error on install Gromacs 4

2010-10-20 Thread Carsten Kutzner
On Oct 20, 2010, at 5:17 AM, Son Tung Ngo wrote:

> Dear experts,
> 
> I have just install gromacs 4.5.1 on my cluster (using CentOS that was 
> install openmpi1.5, Platform MPI, fftw3, g77, gcc , g++) but I have problem 
> with size of int :
>  
> [r...@icstcluster gromacs-4.5.1]# ./configure --prefix=/shared/apps/gromacs 
> --enable-mpi --enable-double 
> 
> checking for _aligned_malloc... no
> checking size of int... 0
> checking size of long int... 0
> checking size of long long int... 0
> checking size of off_t... configure: error: in 
> `/shared/apps/source/gromacs-4.5.1':
> configure: error: cannot compute sizeof (off_t)
> 
> Any idea about this?
Hi Son,

search the config.log file for your error message "cannot compute sizeof 
(off_t)".
There you can find some more explanation. Could be a missing library.
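For example:

grep -n -B 20 "cannot compute sizeof (off_t)" config.log   # print 20 lines of context before the error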

Carsten
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] MPI and dual-core laptop

2010-09-28 Thread Carsten Kutzner
Hi,

if you only want to use the two processors of your laptop you
can simply leave out the --enable-mpi flag. Then it will
work in parallel using threads. Use mdrun -nt 2 -s ...
to specify two threads.
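For example (a sketch; pick your own install prefix):

./configure --prefix=$HOME/gromacs    # note: no --enable-mpi
make && make install
mdrun -nt 2 -s topol.tpr              # uses both cores via threads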

If you want to compile with MPI anyway, take a look
at the config.log file (search for 'Cannot compile and link MPI code')
to check the actual cause of your problem,
probably some library was not found.

Carsten


On Sep 27, 2010, at 8:42 PM, simon sham wrote:

> Hi,
> I wanted to test the GROMACS MPI version in my dual-processors laptop. I have 
> installed openmpi 1.4.2 version. However, when I tried to configure GROMACS 
> 4.5.1 with --enable-mpi option, I got the following configuration problem:
> 
> "checking whether the MPI cc command works... configure: error: Cannot 
> compile and link MPI code with mpicc"
> 
> 
> mpicc is in my /usr/local/bin directory.
> 
> Questions:
> 1. Can I run GROMACS 4.5.1 in a dual-processor laptop?
> 2. If yes, how should I configure the software?
> 
> Thanks in advance for your insight.
> 
> Best,
> 
> Simon Sham
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

[gmx-users] Re: efficient use of pme with gromacs

2010-09-22 Thread Carsten Kutzner
Hi Léo,

please keep Gromacs-related issues on the Gromacs-users mailing
list. This will give others with similar problems the possibility to profit 
from 
already answered questions by searching this list. Also, please choose
a descriptive subject (I have done that for you). Thank you!

The message you see

> DD  step 543999  vol min/aver 0.331  load imb.: force 76.6%  pme mesh/force 
> 0.909

is just meant for your information, meaning that the PME mesh processors
finish their calculation earlier than the short-range Coulomb force processors. 
To balance that optimally, please use the g_tune_pme tool, which has been included
since version 4.5.  If you need this tool for 4.0, look here:

http://www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/

The DD info message is however no reason for mdrun to stop. So there must be 
another
problem here. Can you find an error message in any of the output files?
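Once the run itself works, a typical g_tune_pme call would be something like this
(a sketch; 96 is the total number of processes you mentioned, topol.tpr a placeholder):

g_tune_pme -np 96 -s topol.tpr -launch   # tests several PME/PP splits and launches the best one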

Best,
  Carsten


On Sep 21, 2010, at 8:01 PM, Léo Degrève wrote:

> 
> Prof. Kutzner,
> I found papers and other materials that you have published on the efficient 
> use of pme with the gromacs software.
> I have a problem that maybe you can help me, if you wish, to solve since I 
> didn’t found what to do.
> Using gromacs on 96 processors the program stops systematically after 
> warnings like:
> 
> DD  step 543999  vol min/aver 0.331  load imb.: force 76.6%  pme mesh/force 
> 0.909
> 
> Defining –npme according to grompp, the result is the same. Is there a 
> solution?
> Thank you for your attention.
> Léo Degrève
> 
> Grupo de Simulação Molecular
> Departamento de Química  - FFCLRP
> Universidade de São Paulo
> Av. Bandeirantes, 3900
> 14040-901 Ribeirão Preto - SP
> Brazil
> 
>  Fax: +55 (16) 36024838
>  Fone: +55 (16) 3602-3688/ 3602-4372
>  e-mail: l...@obelix.ffclrp.usp.br
>  l...@ffclrp.usp.br
> 
> 
>  http://obelix.ffclrp.usp.br
> 
> 
> 
> 
> 
> ________
> Sent via the WebMail system at srv1.ffclrp.usp.br
> 
> 
> 
> 
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] git gromacs

2010-09-07 Thread Carsten Kutzner
Hi Alan,

'bleeding edge' gromacs development is as always in the 'master' branch.
You will find the latest bugfixes for the 4.5.x versions in the
'release-4-5-patches' branch.
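I.e. (assuming your clone already tracks both branches):

git checkout master                 # bleeding-edge development
git checkout release-4-5-patches    # latest 4.5.x bugfixes
git pull                            # update whichever branch is checked out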

Carsten


On Sep 7, 2010, at 12:09 PM, Alan wrote:

> Hi there,
> 
> Now that gromacs 4.5.1 is released I was wondering which branch should I 
> checkout if I want to test the bleeding edge gromacs development.
> 
> Thanks,
> 
> Alan
> 
> -- 
> Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> Department of Biochemistry, University of Cambridge. 
> 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> >>http://www.bio.cam.ac.uk/~awd28<<
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Problem with installing mdrun-gpu in Gromacs-4.5

2010-09-02 Thread Carsten Kutzner
etting build user & time - OK
>> -- Checking floating point format
>> -- Checking floating point format - unknown
>> -- Checking for 64-bit off_t
>> -- Checking for 64-bit off_t - present
>> -- Checking for fseeko/ftello
>> -- Checking for fseeko/ftello - present
>> -- Checking for return type of signals
>> -- Checking for return type of signals - void
>> -- Checking for SIGUSR1
>> -- Checking for SIGUSR1 - found
>> -- Checking for inline keyword
>> -- Checking for inline keyword - inline
>> -- Checking for inline keyword
>> -- Checking for inline keyword - inline
>> -- Checking for pipe support
>> -- Checking for GCC x86 inline asm
>> -- Checking for GCC x86 inline asm - supported
>> -- Checking for MSVC x86 inline asm
>> -- Checking for MSVC x86 inline asm - not supported
>> -- Checking for system XDR support
>> -- Checking for system XDR support - present
>> -- Using internal FFT library - fftpack
>> CMake Warning (dev) at CMakeLists.txt:664 (add_subdirectory):
>>  The source directory
>> 
>>/home/c_muecksch/gpu_Install_Linux/gromacs-4.5/scripts
>> 
>>  does not contain a CMakeLists.txt file.
>> 
>>  CMake does not support this case but it used to work accidentally and is
>>  being allowed for compatibility.
>> 
>>  Policy CMP0014 is not set: Input directories must have CMakeLists.txt.  Run
>>  "cmake --help-policy CMP0014" for policy details.  Use the cmake_policy
>>  command to set the policy and suppress this warning.
>> This warning is for project developers.  Use -Wno-dev to suppress it.
>> 
>> -- Configuring incomplete, errors occurred!
>> -
>>  
>> 
>> 
>> When I use this -Wno-dev it does not work either. What am I doing wrong?
>> 
>> 
>> Kind regards,
>> Christian Muecksch
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the www interface 
> or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] help with git

2010-08-24 Thread Carsten Kutzner
On Aug 24, 2010, at 12:57 PM, Alan wrote:

> Hi there,
> 
> I want to change from release-4-5-patches to master
> 
> I am trying:
> 
> git reset master
> git checkout master
> 
> git pull
> error: Your local changes to 'include/resall.h' would be overwritten by 
> merge.  Aborting.
> Please, commit your changes or stash them before you can merge.
> 
> git stash
> Saved working directory and index state WIP on master: 5e3473a Merge branch 
> 'release-4-5-patches'
> HEAD is now at 5e3473a Merge branch 'release-4-5-patches'
> 
> But I don't want branch 'release-4-5-patches'!
> 
> Indeed, I am finding git very annoying to use.
> 
> All I wanted in svn lingo is to change to a branch and if there's conflict, 
> ignore all changes in my side and revert any modification to what's in the 
> repository.
git reset --hard 
will remove all your modifications to that branch that are not checked in yet. 
You might
want to save include/resall.h elsewhere if you still need your modifications.

Then 
git checkout master

will check out the master branch. You might need to "git pull" after you checked
out the master so that you are up-to-date with the gromacs repository.

Carsten

> 
> Is it possible with git?
> 
> Thanks,
> 
> Alan
> 
> 
> -- 
> Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> Department of Biochemistry, University of Cambridge. 
> 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> >>http://www.bio.cam.ac.uk/~awd28<<
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] mpi installation problems

2010-07-29 Thread Carsten Kutzner
_Free_mem'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `MPIR_Grequest_set_lang_f77'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Win_create_keyval'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `b_use_gettimeofday'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_get_attr'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Close_port'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Comm_create_errhandler'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Query_thread'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_match_size'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `PMPI_Open_port'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Win_set_attr'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `PMPI_Win_fence'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `MPID_Wtick'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Comm_set_errhandler'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `MPIR_Err_create_code'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_set_name'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Grequest_complete'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Add_error_class'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_delete_attr'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Accumulate'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_create_hindexed'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `PMPI_Alltoallw'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Win_create_errhandler'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `MPI_F_STATUSES_IGNORE'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `PMPI_Win_post'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Comm_get_attr'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Add_error_code'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Comm_call_errhandler'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Type_get_extent'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Publish_name'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Comm_disconnect'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to `PMPI_Get'
> /opt/intel/impi/3.1/lib64/libmpigf.so: undefined reference to 
> `PMPI_Unpublish_name'
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] decomposition

2010-07-26 Thread Carsten Kutzner
Hi Jacopo,

the information about the 7 nodes must have come from somewhere.
What are the exact commands you used? What MPI are you using?

Carsten


On Jul 26, 2010, at 12:35 PM, Jacopo Sgrignani wrote:

> Dear all
> i'm trying to run a MD simulation using domain decomposition but after two
> days i'm only able to get this error:
> 
> 
> There is no domain decomposition for 7 nodes that is compatible with the
> given box and a minimum cell size of 2.37175 nm
> Change the number of nodes or mdrun option -rcon or -dds or your LINCS
> settings
> Look in the log file for details on the domain decomposition
> 
> 
> I don't select a number of nodes but i use the default options, but the
> simulation does not run.
> So could you give me advices to run with domain decomp, or  where can I
> find exaples about this?
> 
> Thanks a lot
> 
> Jacopo
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] Installing gromacs from git

2010-07-10 Thread Carsten Kutzner


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] P4_error for extending coarse grained MD simulations

2010-07-09 Thread Carsten Kutzner
On Jul 9, 2010, at 11:29 AM, 张春雷 wrote:

> Hi Carsten,
> 
> The error message I post is got on a single core with MPI.
> 
> p0_6991:  p4_error: interrupt SIGSEGV: 11
> 
> So it states p0_.
> 
> I don't think the error is due to the MPI program.
> Am I right?
Yes, but there must be some more diagnostic information from 
mdrun about what has gone wrong. Please check stderr / stdout
output files as well as md.log.

Carsten

 
> 
> Justin Zhang
> 
> On July 9, 2010, at 4:00 PM, Carsten Kutzner wrote:
> Hi Justin,
> 
> what kind of error message do you get if you run this system
> on a single core without MPI?
> 
> Carsten
> 
> 
> On Jul 8, 2010, at 9:36 PM, 张春雷 wrote:
> 
>> Dear all,
>> 
>> I posted this question about two weeks ago. Since then I have 
>> followed suggestions from Mark and weixin, but could not fix the problem. Here I 
>> repeat it to seek more help.
>> 
>> I am attempting to extend an MD simulation of a coarse-grained system. The CG models are 
>> in MARTINI form. The simulation was carried out with Gromacs 4.0.2, 4.0.3 or 4.0.7.
>> 
>> I tried to use the checkpoint file state.cpt to extend my simulation. The mdrun 
>> program can read the checkpoint file, but it reported an error like this: 
>> 
>> Reading checkpoint file state.cpt generated: Mon Jun 14 09:48:10 2010
>> 
>> Loaded with Money
>> 
>> starting mdrun 'Protein in POPE bilayer'
>> 2400 steps, 72.0 ps (continuing from step 1200, 36.0 ps).
>> step 1200, remaining runtime: 0 s  
>> p0_6991:  p4_error: interrupt SIGSEGV: 11
>> 
>> I checked the state.cpt file using gmxdump and compared it with other 
>> checkpoint files that can be used for extending all-atom simulations. I 
>> found that in the CG checkpoint file some sections are missing: box-v 
>> (3x3) and thermostat-integral.
>> I am not sure whether these missing sections cause my run to crash. If they do, 
>> could anyone tell me possible reasons for the loss of box-v and 
>> thermostat-integral and how to fix the problem?
>> 
>> Your suggestions are greatly helpful and appreciated.
>> 
>> Justin Zhang
>>  
>> 
>> 
>> On June 25, 2010, at 4:45 PM, 张春雷 wrote:
>> Information shown by gmxcheck:
>> 
>> Checking file state.cpt
>> 
>> # Atoms  9817
>> Last frame -1 time 36.000
>> 
>> Item          #frames  Timestep (ps)
>> Step                1
>> Time                1
>> Lambda              1
>> Coords              1
>> Velocities          1
>> Forces              0
>> Box                 1
>> 
>> Checking file state_prev.cpt
>> 
>> # Atoms  9817
>> Last frame -1 time 359010.000
>> 
>> Item          #frames  Timestep (ps)
>> Step                1
>> Time                1
>> Lambda              1
>> Coords              1
>> Velocities          1
>> Forces              0
>> Box                 1
>> 
>> Checking file md_360ns.trr
>> trn version: GMX_trn_file (double precision)
>> Reading frame 0 time 0.000
>> # Atoms  9817
>> Reading frame 2000 time 30.000
>> 
>> Item          #frames  Timestep (ps)
>> Step             2401  150
>> Time             2401  150
>> Lambda           2401  150
>> Coords           2401  150
>> Velocities       2401  150
>> Forces              0
>> Box              2401  150
>> 
>> Is anything wrong? 
>> 
>> 
>> 2010/6/25 Mark Abraham 
>> 
>> 
>> 
>> - Original Message -
>> From: 张春雷 
>> Date: Friday, June 25, 2010 16:46
>> Subject: Re: [gmx-users] P4_error for extending coarse grained MD simulations
>> To: Discussion list for GROMACS users 
>> 
>> > The last .gro file only provides the coordinates of the system; no velocities are 
>> > recorded. What I am actually trying to achieve is a binary-identical 
>> > trajectory, so I think the velocities from the last step are critical.
>> > 
>> > I have tried another approach in which the checkpoint file is neglected.
>> > 
>> > $mdrun_mpi_d -s md_720ns.tpr  -e md_720ns.edr -o md_720ns.trr -g 
>> > md_720ns.log
>> > 
>> > It works. So the checkpoint file appears to contain some error. But it is 
>> > generated by a normally finished production simulation.
>> 
>> What does gmxcheck say about all the files involved?
>> 
>> Mark
>> 
>> 
>> > Have  you encountered similar things?
>> > 
>> > Thank you for your suggestio

Re: [gmx-users] P4_error for extending coarse grained MD simulations

2010-07-09 Thread Carsten Kutzner
> > Reading file md_720ns.tpr, VERSION 4.0.3 (double precision)
> > 
> > Reading checkpoint file state.cpt generated: Mon Jun 14 09:48:10 2010
> > 
> > Loaded with Money
> > 
> > starting mdrun 'Protein in POPE bilayer'
> > 2400 steps, 72.0 ps (continuing from step 1200, 36.0 ps).
> > step 1200, remaining runtime: 0 s  
> > p0_6991:  p4_error: interrupt SIGSEGV: 11
> > 
> > I have searched the mailing list, but found no similar report. I also searched 
> > Google, but no answer seems satisfactory.
> > 
> > I once extended an all-atom simulation in this way, and the 
> > method mentioned above worked.
> > 
> > Is anyone familiar with MARTINI CG simulation?
> > Could you give me some suggestions?
> > 
> > Many thanks!
> > 
> > Justin
> > 
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at http://www.gromacs.org/search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/mailing_lists/users.php
> > 
> > 
> > --
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at http://www.gromacs.org/search before posting!
> > Please don't post (un)subscribe requests to the list. Use the
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/mailing_lists/users.php
> > 
> > -- 
> > gmx-users mailing listgmx-users@gromacs.org
> > http://lists.gromacs.org/mailman/listinfo/gmx-users
> > Please search the archive at http://www.gromacs.org/search 
> > before posting!
> > Please don't post (un)subscribe requests to the list. Use the 
> > www interface or send it to gmx-users-requ...@gromacs.org.
> > Can't post? Read http://www.gromacs.org/mailing_lists/users.php
> 
> --
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] mdrun_mpi: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory

2010-07-08 Thread Carsten Kutzner
Hi,

you can check with

ldd mdrun_mpi

whether all needed libraries were really found. Is libimf.so in 
/opt/intel/fc/10.1.008/lib? The Intel compilers also come with
files called "iccvars.sh" or "ictvars.sh". If you do

source /path/to/iccvars.sh

everything should be set as needed. Check the Intel Compiler
documentation.
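
For example (the exact paths and script names depend on your compiler installation, so treat these as placeholders):

ldd `which mdrun_mpi` | grep "not found"
source /opt/intel/fc/10.1.008/bin/ifortvars.sh

The first command lists any libraries the runtime linker cannot resolve; the second sets LD_LIBRARY_PATH and related variables the way Intel intends, which is more robust than exporting the path by hand.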

Carsten


On Jul 8, 2010, at 10:52 AM, zhongjin wrote:

> Dear users,
>   When I am using GROMACS 4.0.7 on the compute node and execute the command
> mpiexec -n 4  mdrun_mpi -deffnm SWNT66nvt >/dev/null &
> I run into a problem: mdrun_mpi: error while loading shared libraries: 
> libimf.so: cannot open shared object file: No such file or directory
> But I have added export LD_LIBRARY_PATH=/opt/intel/fc/10.1.008/lib
> in the .bash_profile, and libimf.so is in this directory.
> However, when executing the command mdrun -deffnm SWNT66nvt &, it's OK! Could anybody 
> help me? Thanks a lot!
> Zhongjin He
> 
>  -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] mpi run

2010-07-08 Thread Carsten Kutzner
Hi Mahmoud,

for anyone to be able to help you, you need to provide
a lot more information, at least:
- which MPI library are you using?
- how did you compile and/or install Gromacs?
- what command do you use to run mdrun, and what was
its output?

Best,
  Carsten


On Jul 8, 2010, at 9:41 AM, nanogroup wrote:

> Dear GMX Users,
> 
> I have a PC with 4 CPUs, but Gromacs only uses one CPU.
> 
> The mpirun command works on Linux; however, the mdrun_mpi command does 
> not work.
> 
> Could you please help me set up mdrun_mpi in Gromacs 4.0.4?
> 
> Many thanks,
> Mahmoud
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] Fwd: missing atom

2010-07-03 Thread Carsten Kutzner
Dear Abdul,

please keep all Gromacs-related questions on the mailing list.

Best,
  Carsten



Begin forwarded message:

> From: "abdul wadood" 
> Date: July 3, 2010 8:40:29 AM GMT+02:00
> To: 
> Subject: missing atom
> 
> Dear Carsten
> 
> I am running a simulation of my protein using gromacs with the amber force fields. I 
> have prepared the input file accordingly and have all the required libraries. 
> But the problem is that when I run pdb2gmx to generate the top file, the following 
> error comes up:
> 
> WARNING: atom H is missing in residue LEU 2 in the pdb file
>  You might need to add atom H to the hydrogen database of residue LEU
>  in the file ff???.hdb 
> 
> I tried my best to solve the problem by searching the gromacs website and 
> manual, but could not succeed.
> If you could kindly help me in this respect, your help would be highly appreciated by 
> our research group.
> The input file is attached.
> 
> Many regards
> 
> Abdul Wadood, 
> Research Scholar, 
> Dr.Panjwani Center for Molecular Medicine and 
> Drug Research, 
> International Center for Chemical and 
> Biological Science, 
> University of Karachi, Karachi-75720, Pakistan. 
> Email:wadoodbiochem...@hotmail.com 
> 
> 




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] the job is not being distributed

2010-06-30 Thread Carsten Kutzner
Hi Syed,

you have to give more information for other people to be able to
understand what you are doing. What is the exact sequence of
commands you use to start the mdrun job? What does your 
OpenMPI hostfile look like, what are your nodes called, and what does
mdrun print on its first lines? Without that information, nobody
can help you, because there is no way to tell what might be 
going wrong.
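
As a sketch, a minimal OpenMPI setup could look like this (hostnames, core counts and file names are just placeholders):

# hostfile, one machine per line:
node01 slots=4
node02 slots=4

mpirun -np 8 --hostfile hostfile mdrun_mpi -s topol.tpr

mdrun then reports at start-up how many nodes it is running on, which makes it easy to compare against what you expect.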

Carsten



On Jun 30, 2010, at 8:21 PM, Syed Tarique Moin wrote:

> Hello,
> 
> I have successfully compiled gromacs with openmpi, but I see the same problem: 
> the job is still not distributed to the other nodes; all the 
> processes show up on one node, although they should be distributed. 
> 
> Thanks and regards
> 
> Syed Tarique Moin
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] ED sampling

2010-06-30 Thread Carsten Kutzner
Hi Vijaya,

could it be that you mixed something up when making the .edi file? 
The tool make_edi reads the total number of atoms from the provided
.tpr file and saves this number with the other ED sampling 
information to the .edi file. 

The ED sampling module in mdrun then compares the number of atoms 
from the .edi file with the .tpr file provided to mdrun - to ensure that the 
.edi file was produced for the same MD system.
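
As a rough sketch of a consistent setup (file names are placeholders, option names as in Gromacs 4.x):

make_edi -f eigenvec.trr -s md_system.tpr -o sam.edi -linfix 1
mdrun -s md_system.tpr -ei sam.edi ...

The point is that the same full-system .tpr goes into both make_edi and mdrun; the C-alpha selection enters through the eigenvectors from the covariance analysis, not through a reduced .tpr.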

Carsten


On Jun 29, 2010, at 6:57 PM, vijaya subramanian wrote:

> Hi
> I need some information about how ED sampling works when 
> a subset of the atoms is used for covariance analysis.  Basically I would 
> like to
> move the system along the first eigenvector obtained from covariance analysis 
> of the
> C-alpha atoms only.  From the paper "Toward an Exhaustive Sampling of the 
> Configurational Spaces of two forms of the Peptide Hormone Guanylin"  it 
> appears only the C-alpha atoms are used  to define
> the essential subspace but when I use the following commands, I get an error 
> message saying:
> 
> 
> Fatal error:
> Nr of atoms in edsamp26-180fit.edi (4128) does not match nr of md atoms 
> (294206)
> 
> The commands are:
> tpbconv -s full180.tpr -f full180.trr -extend 5 -o edsam26-180f180.tpr -e 
> full180.edr
> aprun -n 60 $GROMACS_DIR/bin/mdrun -s edsam26-180f180.tpr -nosum -o 
> edsam26-180f180.trr -x edsam26-180f180.xtc -ei edsamp26-180fit.edi -c 
> edsam26-180f180.gro -e edsam26-180f180.edr -g edsam26-180f180.log
> 
> The ED sampling method I am using is  linfix not radacc.
> 
> Thanks
> Vijaya
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] the job is not being distributed

2010-06-28 Thread Carsten Kutzner
So which MPI library are you using?

Carsten


On Jun 28, 2010, at 3:33 PM, Syed Tarique Moin wrote:

> Hi,
> 
> In the case of an Amber simulation, I run the mpirun command and the jobs are 
> distributed across the different nodes, 4 on each machine. But in this case I am 
> observing that all 8 processes are on node01 and there is no sign of them on node02, 
> unlike with Amber. 
> 
> Thanks
> 
> Syed Tarique Moin
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] the job is not being distributed

2010-06-28 Thread Carsten Kutzner
On Jun 28, 2010, at 2:34 PM, Syed Tarique Moin wrote:

> hello,
> 
> I am running a simulation on a dual-core processor using the following command:
> 
> mpirun -np 8 mdrun_mpi -s top
> 
> The job is running, but it is not distributed to the other node; I cannot see 
> any processes on the other nodes. I see them only on node01, which has only 4 
> processors. Can anybody advise me?
> 
> 
> 

Hi,

find out how you generally run a simple parallel job with the MPI framework
that you are using. If that works, Gromacs should also run in parallel.
You are going to have to provide some kind of machine / host / nodefile.

Carsten

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] Should I use separate PME nodes

2010-06-25 Thread Carsten Kutzner
Hi Gaurav,

separate PME nodes usually pay off on a larger number of nodes
(>16). In rare cases, you will see a performance benefit on a small number
of nodes as well. Just try it! Or use g_tune_pme ... ;)
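
As a rough sketch of both options (the numbers are only illustrative, and g_tune_pme is only available if your installation provides it):

mpirun -np 4 mdrun_mpi -npme 1 -s topol.tpr    # dedicate 1 of the 4 processes to PME
g_tune_pme -np 4 -s topol.tpr                  # benchmark several -npme settings automatically

mdrun's -npme flag sets the number of separate PME nodes; with -npme -1 mdrun makes its own guess.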

Carsten


On Jun 25, 2010, at 3:32 PM, Gaurav Goel wrote:

> I ran my simulation in parallel on 4 nodes (with zero separate PME
> nodes). Below is the information printed in md.log.
> 
> I see that PME-Mesh calculations took 60% of CPU time. Any
> recommendations on using 1 or more separate PME nodes to speed up?
> 
> 
> Computing:               M-Number         M-Flops  % Flops
> -----------------------------------------------------------
> Coul(T) + VdW(T)    1761401.496982   119775301.795     20.2
> Outer nonbonded loop 106414.135764     1064141.358      0.2
> Calc Weights          32400.006480     1166400.233      0.2
> Spread Q Bspline    2332800.466560     4665600.933      0.8
> Gather F Bspline    2332800.466560    27993605.599      4.7
> 3D-FFT             47185929.437184   377487435.497     63.6
> Solve PME            675840.135168    43253768.651      7.3
> NS-Pairs             823453.927656    17292532.481      2.9
> Reset In Box           2160.002160        6480.006      0.0
> CG-CoM                 2160.004320        6480.013      0.0
> Virial                11700.002340      210600.042      0.0
> Ext.ens. Update       10800.002160      583200.117      0.1
> Stop-CM               10800.002160      108000.022      0.0
> Calc-Ekin             10800.004320      291600.117      0.0
> -----------------------------------------------------------
> Total                                593905146.863    100.0
> -----------------------------------------------------------
> 
>  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
> 
> Computing:        Nodes  Number     G-Cycles    Seconds     %
> -----------------------------------------------------------------
> Domain decomp.        4     101     3859.416     1488.1    0.6
> Comm. coord.          4     501     1874.635      722.8    0.3
> Neighbor search       4     101    78640.722    30322.2   11.2
> Force                 4     501   180659.902    69658.5   25.8
> Wait + Comm. F        4     501     2578.994      994.4    0.4
> PME mesh              4     501   422268.834   162817.7   60.4
> Write traj.           4   10001       17.526        6.8    0.0
> Update                4     501     2981.794     1149.7    0.4
> Comm. energies        4     501     2633.176     1015.3    0.4
> Rest                  4             3580.341     1380.5    0.5
> -----------------------------------------------------------------
> Total                 4            699095.342   269556.0  100.0
> -----------------------------------------------------------------
> 
> Thanks,
> Gaurav
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] Re: gmx-users Digest, Vol 74, Issue 134

2010-06-24 Thread Carsten Kutzner
Amin,

maybe your MPI-enabled executable is called mdrun_mpi. Check
the directory where mdrun is and make sure (with ldd, for example)
that the mdrun* you are using is linked against the MPI library you are using.
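
For instance (the install path is an assumption, adjust it to your system):

ldd /usr/local/gromacs/bin/mdrun_mpi | grep -i mpi

If no MPI library shows up there, mpirun will simply start several independent copies of a serial mdrun, which is exactly the behaviour described below.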

Carsten



On Jun 24, 2010, at 6:51 AM, Amin Arabbagheri wrote:

> Carsten,
> 
> Thanks for your help, I used something like "mpirun -np 3 mdrun -s topol.tpr";
> it works, but it is something like repeating a single job 3 times 
> simultaneously.
> here is the output on the screen :
> {
>  Back Off! I just backed up md_traj_dam_2nd.trr to ./#md_traj_dam_2nd.trr.1#
> 
> Back Off! I just backed up ener.edr to ./#ener.edr.1#
> 
> Back Off! I just backed up md_traj_dam_2nd.trr to ./#md_traj_dam_2nd.trr.2#
> 
> Back Off! I just backed up ener.edr to ./#ener.edr.2#
> 
> Back Off! I just backed up md_traj_dam_2nd.trr to ./#md_traj_dam_2nd.trr.3#
> 
> Back Off! I just backed up ener.edr to ./#ener.edr.3#
> starting mdrun 'Protein in water'
> 100 steps,   1000.0 ps.
> starting mdrun 'Protein in water'
> 100 steps,   1000.0 ps.
> starting mdrun 'Protein in water'
> 100 steps,   1000.0 ps.
> step 736900, will finish Fri Jun 25 07:45:04 2010
> }
> The estimated time is as long as for a single job!
> 
> --- On Mon, 21/6/10, gmx-users-requ...@gromacs.org 
>  wrote:
> 
> From: gmx-users-requ...@gromacs.org 
> Subject: gmx-users Digest, Vol 74, Issue 134
> To: gmx-users@gromacs.org
> Date: Monday, 21 June, 2010, 9:03
> 
> Send gmx-users mailing list submissions to
> gmx-users@gromacs.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> or, via email, send a message with subject or body 'help' to
> gmx-users-requ...@gromacs.org
> 
> You can reach the person managing the list at
> gmx-users-ow...@gromacs.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gmx-users digest..."
> 
> 
> Today's Topics:
> 
>1. (no subject) (Amin Arabbagheri)
>2. Re: (no subject) (Justin A. Lemkul)
>3. Re: (no subject) (Linus Östberg)
>4. Re: (no subject) (Carsten Kutzner)
>5. Help with defining new residue (OXY--HEME) (Omololu Akin-Ojo)
>6. Re: Help with defining new residue (OXY--HEME) (Justin A. Lemkul)
> 
> 
> --
> 
> Message: 1
> Date: Mon, 21 Jun 2010 05:00:04 -0700 (PDT)
> From: Amin Arabbagheri 
> Subject: [gmx-users] (no subject)
> To: gmx-users@gromacs.org
> Message-ID: <180446.74209...@web50607.mail.re2.yahoo.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Hi all,
> 
> I've installed GROMACS 4.0.7 and MPI libraries using ubuntu synaptic package 
> manager.
> I want to run a simulation in parallel on a multi-processor, single PC, but 
> grompp doesn't accept the -np flag, and also, using -np with 
> mdrun, it still runs as a single job.
> Thanks a lot for any instruction.
> 
> Bests,
> Amin
> 
> 
> 
> 
>   
> -- next part --
> An HTML attachment was scrubbed...
> URL: 
> http://lists.gromacs.org/pipermail/gmx-users/attachments/20100621/fd800779/attachment-0001.html
> 
> --
> 
> Message: 2
> Date: Mon, 21 Jun 2010 08:05:15 -0400
> From: "Justin A. Lemkul" 
> Subject: Re: [gmx-users] (no subject)
> To: Discussion list for GROMACS users 
> Message-ID: <4c1f557b.4090...@vt.edu>
> Content-Type: text/plain; charset=UTF-8; format=flowed
> 
> 
> 
> Amin Arabbagheri wrote:
> > Hi all,
> > 
> > I've installed GROMACS 4.0.7 and MPI libraries using ubuntu synaptic 
> > package manager.
> > I want to run a simulation in parallel on a multi processor, single PC, 
> > but to compile via grompp, it doesn't accept -np flag, and also , using 
> > -np in mdrun, it still runs as a single job.
> > Thanks a lot for any instruction.
> > 
> 
> Regarding grompp:
> 
> http://www.gromacs.org/Documentation/FAQs
> 
> As for mdrun, please provide your actual command line.  The mdrun -np flag is 
> nonfunctional; instead, the number of nodes is taken from the MPI launcher, i.e. the mpirun -np 
> with 
> which mdrun is launched.
> 
> -Justin
> 
> > Bests,
> > Amin
> > 
> > 
> 
> -- 
> 
> 
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> MILES-IGERT Trainee
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
&

Re: [gmx-users] (no subject)

2010-06-21 Thread Carsten Kutzner
Amin,

the -np flag is not necessary any more for grompp in Gromacs 4.0.
For mdrun, just use something like
mpirun -np 4 mdrun -s topol.tpr
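
If the MPI-enabled binary is installed under a different name (often mdrun_mpi), use that name instead. A quick way to see whether the run is really parallel is to look at the top of md.log, which should mention the 4 nodes, e.g. (the exact wording varies between versions):

grep -i "nodes" md.log

If you instead get several independent single-processor runs, the mdrun you launched was built without MPI.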

Carsten


On Jun 21, 2010, at 2:00 PM, Amin Arabbagheri wrote:

> Hi all,
> 
> I've installed GROMACS 4.0.7 and MPI libraries using ubuntu synaptic package 
> manager.
> I want to run a simulation in parallel on a multi-processor, single PC, but 
> grompp doesn't accept the -np flag, and also, using -np with 
> mdrun, it still runs as a single job.
> Thanks a lot for any instruction.
> 
> Bests,
> Amin
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] Re: Parallel instalation: gmx-users Digest, Vol 74, Issue 76

2010-06-18 Thread Carsten Kutzner
Hi Abdul,

please keep Gromacs-related questions on the list. The error
is exactly as printed: you have more than 128 backups of md.log
in your directory, and at this point mdrun does not make any more backups
of md.log. You have to delete the #md.log.*# files.
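
For example, from the directory where the run was started:

rm -f ./#md.log.*#

(The leading ./ only keeps the shell from treating the # as the start of a comment; the pattern matches the backup names mdrun writes, such as #md.log.1#.)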

Carsten


On Jun 18, 2010, at 7:24 AM, abdul wadood wrote:

> Dear Carsten
> 
> When I reduce the number of steps in the mdp file, the error changes to the following:
> 
> 0: Fatal error:
> 0: Won't make more than 128 backups of md.log for you
> 0: ---
> 0:
> 0: "I Wonder, Should I Get Up..." (J0: p0_27074:  p4_error: : -1
> . Lennon)
> 0:
> 0: Halting program mdrun_mpi
> 0:
> 0: gcq#46: "I Wonder, Should I Get Up..." (J. Lennon)
> 0:
> 0: [0] MPI Abort by user Aborting program !
> 0: [0] Aborting program!
> 0: p4_error: latest msg from perror: No such file or directory
> 
> 
> Abdul Wadood, 
> Research Scholar, 
> Dr.Panjwani Center for Molecular Medicine and 
> Drug Research, 
> International Center for Chemical and 
> Biological Science, 
> University of Karachi, Karachi-75720, Pakistan. 
> Email:wadoodbiochem...@hotmail.com 
> 
> 
> 
> 
> From: ckut...@gwdg.de
> Subject: Re: Parallel instalation: gmx-users Digest, Vol 74, Issue 76
> Date: Thu, 17 Jun 2010 16:36:51 +0200
> To: wadoodbiochem...@hotmail.com
> 
> On Jun 17, 2010, at 3:57 PM, abdul wadood wrote:
> 
> Dear Carsten
> 
> I gave the path for the topol.tpr file; now the error has changed to:
> 
> Fatal error:
> 3: Too many LINCS warnings (4254)
> 3: If you know what you are doing you can adjust the lincs warning threshold 
> in your mdp file
> 3: or set the environment variable GMX_MAXCONSTRWARN to -1,
> 3: but normally it is better to fix the problem
> 3: ---
> 3:
> Maybe your system is not well equilibrated, or your time step is too long.
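
A common first test (file names here are only placeholders) is to shorten the time step in the .mdp file and rerun a short test:

# set e.g. dt = 0.001 in the .mdp (instead of 0.002), then:
grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
mpiexec -l -np 4 /usr/local/gromacs/bin/mdrun_mpi -s ./topol.tpr

If the LINCS warnings disappear with the smaller time step, the starting structure probably needs more careful minimization and equilibration before going back to the larger step.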
> 
> Carsten
> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] Re: Parallel instalation: gmx-users Digest, Vol 74, Issue 76

2010-06-17 Thread Carsten Kutzner
On Jun 17, 2010, at 3:42 PM, abdul wadood wrote:

> Dear Carsten 
> 
> The command which I give is 
> 
> mpiexec -l -np 4 /usr/local/gromacs/bin/mdrun_mpi -s topol.tpr
> 
> With this command the same error comes up, which is 
> 
> Can not open file:
> 3: topol.tpr
> 3: ---
Maybe "." (the current directory) is not in your path. Either try

mpiexec -l -np 4 /usr/local/gromacs/bin/mdrun_mpi -s ./topol.tpr

or give the full path name:

mpiexec -l -np 4 /usr/local/gromacs/bin/mdrun_mpi -s  
/absolute/path/to/topol.tpr

Carsten



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] Parallel installation

2010-06-14 Thread Carsten Kutzner
Hi,

you either have to add the directory where mdrun_mpi resides to your
path or you have to give mpirun the full path name of mdrun_mpi.

You can add the Gromacs executables to your path by the command

source /path/to/your/installed/gromacs/bin/GMXRC

Or use

mpirun -np 12 /path/to/your/installed/gromacs/bin/mdrun_mpi -s topol.tpr ...
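
If you want the GMXRC route to be permanent, you can for instance append it to your shell startup file (the path below is just a placeholder for your actual install location):

echo "source /path/to/your/installed/gromacs/bin/GMXRC" >> ~/.bashrc

After opening a new shell, "which mdrun_mpi" should then point to the installed binary.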

Carsten



On Jun 14, 2010, at 7:20 AM, abdul wadood wrote:

> Hello,
> 
> I am a new user of gromacs. I have installed gromacs with MPI enabled, but 
> when I run the command
> 
> "mpirun -np 12 mdrun_mpi -s topol.tpr -o test.trr -x test.xtc -c confout.gro 
> -e test.edr -g test.log"
> 
> The error that comes up is: 
> 
> "Program mdrun_mpi either does not exist, is not
> executable, or is an erroneous argument to mpirun."
> 
> I have searched for the problem on the mailing list but have not found a satisfactory 
> answer to solve my problem. If you could kindly help me in this respect, I would be 
> very thankful to you.
> 
> Your help will be highly appreciated by our research group.
> 
> Many regards
> 
> Abdul Wadood, 
> Research Scholar, 
> Dr.Panjwani Center for Molecular Medicine and 
> Drug Research, 
> International Center for Chemical and 
> Biological Science, 
> University of Karachi, Karachi-75720, Pakistan. 
> Email:wadoodbiochem...@hotmail.com 
> 
> 
> 
> -- 
> gmx-users mailing listgmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne




-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php
