[gmx-users] Suggestion on Simple protocol for MD..

2014-10-01 Thread rama david
Dear Friends,
 I would like your suggestions on my plan of work. The plan is as
follows:

 I have a protein PDB file in which two proteins interact with each other to
form a dimer. The dimer is responsible for various harmful effects in the
cell, so my aim is to inhibit dimer formation. To do so, my plan is to run an
MD simulation of the dimer for 50 ns, or however long is needed to reach a
stable conformation. Then I would do umbrella pulling/sampling (steered
molecular dynamics) so that I can identify the residues that are most
important for the interaction (the hot-spot residues). Afterwards I would
design a peptide from these hot spots and test its biological activity.
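
For concreteness, the pulling step I have in mind would look roughly like the
sketch below (shell; the group names Chain_A/Chain_B, pull rate, force
constant and file names are only placeholders, and the option names follow
the GROMACS 4.6 pull syntax, which changed in 5.0, so I will check the mdp
documentation for the version I use):

# rough sketch only: append a pull section to the run parameters
cat >> pull.mdp << 'EOF'
pull            = umbrella
pull_geometry   = distance
pull_dim        = N N Y        ; pull along z only; adjust to the box/orientation
pull_start      = yes          ; use the initial COM distance as reference
pull_ngroups    = 1
pull_group0     = Chain_A      ; index groups prepared beforehand with make_ndx
pull_group1     = Chain_B
pull_rate1      = 0.01         ; nm/ps
pull_k1         = 1000         ; kJ mol^-1 nm^-2
pull_nstxout    = 500
pull_nstfout    = 500
EOF
grompp -f pull.mdp -c dimer_md.gro -p topol.top -n index.ndx -o pull.tpr
mdrun -deffnm pull

Snapshots along the resulting COM-distance trajectory would then serve as
starting configurations for the umbrella windows (analysed with g_wham);
identifying the hot-spot residues would be a separate analysis on top of this.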
  My questions are as follows:

1.  Is this approach good, or is there a better way to do this?
2.  I have read about computational alanine scanning. Is it possible in the
latest GROMACS version, and if so, how is it done?
3.  If anyone has done this kind of work, please send me a link to the
article. (I am of course also searching for and reading the literature
myself; I have some good articles already, but other good articles are
welcome.)

 I am looking forward to your suggestions.

Thank you very much for your help and for giving your time to my problem.

With Best regards,

 Rama David
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Folding simulation using gromacs

2014-10-01 Thread Mirco Wahab

On 01.10.2014 03:52, AINUN NIZAR M wrote:

I'm a newbie in MD. I would like to simulate protein folding, from the
primary structure to the fully folded state, for a wild-type and a mutated
protein. I will then compare them and make a movie of the transition from the
unfolded to the folded state. I want to use the REMD method, and I would also
like to know the free energy of the system. Can I do this with GROMACS?


You can try that, and it will "work" if you set it up correctly.
Will the results be of any use? Probably not. This is, actually,
a very complicated topic which requires a lot of work up front.

For a start, don't miss this recent article in PLOS Computational Biology
on the topic from the Perutz Lab (Vienna); they use GROMACS.
=> 
http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003638
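
If you do try it anyway, the mechanics of launching a temperature REMD run
are the easy part; roughly (a sketch only: the replica count, exchange
interval and file names are placeholders, with one .tpr per replica prepared
from .mdp files that differ only in ref_t):

# one .tpr per replica (remd_0.tpr ... remd_15.tpr), one MPI rank per replica,
# exchange attempts every 500 steps
mpirun -np 16 mdrun_mpi -multi 16 -replex 500 -s remd_.tpr -deffnm remd_

Choosing the temperature ladder and judging whether the sampling has
converged is where the real work is, not the command line.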


Regards

M.




Re: [gmx-users] performance 1 gpu

2014-10-01 Thread Johnny Lu
That is surprising. I thought the Intel compiler was the best compiler for
Intel CPUs.

On Tue, Sep 30, 2014 at 5:40 PM, Szilárd Páll 
wrote:

> Unfortunately this is an ill-balanced hardware setup (for GROMACS), so
> what you see is not unexpected.
>
> There are a couple of things you can try, but don't expect more than a
> few % improvement:
> - try lowering nstlist (unless you already get a buffer close to 0);
> this will decrease the non-bonded time, and hence the CPU waiting/idling,
> but it will also increase the search time (and DD time if applicable),
> so you'll have to see what works best for you;
> - try the -nb gpu_cpu mode, which splits the non-bonded workload between
> the GPU and the CPU; if you are lucky (i.e. you don't get too much
> non-local load, which is now computed on the CPU), you may be able
> to get a bit better performance.
>
> You may want to try gcc 4.8 or 4.9 and FFTW 3.3.x; you will most
> likely get better performance than with icc + MKL.
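
As a rough sketch, the two runtime suggestions above could be tried like
this (the mdrun flags exist in 4.6, but the -deffnm name is a placeholder for
your own run files, and the rebuild line is site-specific):

# smaller pair-list update interval plus the hybrid non-bonded mode
mdrun -deffnm md -nstlist 20 -nb gpu_cpu

# rebuild against gcc + FFTW instead of icc + MKL
CC=gcc CXX=g++ cmake .. -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON
make -j 16 && make install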
>
>
> On Thu, Sep 25, 2014 at 12:50 PM, Johnny Lu 
> wrote:
> > Hi.
> >
> > I wonder if gromacs 4.6.7 can run faster on xsede.org, because I see the
> > cpu waiting for the gpu in the log.
> >
> > There are 16 cpu cores (2.7 GHz), 1 Phi co-processor, and 1 GPU.
>
> Get that Phi swapped to a GPU and you'll be happier ;)
>
> > I compiled gromacs with gpu support, without phi, and with the intel
> > compiler and mkl.
> >
> > I didn't install 5.0.1 because I worry this bug might mess up
> > equilibration when I switch from one ensemble to another
> > (http://redmine.gromacs.org/issues/1603).
>
> It's been fixed, 5.0.2 will be released soon, so I suggest you wait for it.
>
> > Below are from the log:
> >
> > Gromacs version:VERSION 4.6.7
> > Precision:  single
> > Memory model:   64 bit
> > MPI library:thread_mpi
> > OpenMP support: enabled
> > GPU support:enabled
> > invsqrt routine:gmx_software_invsqrt(x)
> > CPU acceleration:   AVX_256
> > FFT library:MKL
> > Large file support: enabled
> > RDTSCP usage:   enabled
> > Built on:   Wed Sep 24 08:33:22 CDT 2014
> > Built by:   jlu...@login2.stampede.tacc.utexas.edu [CMAKE]
> > Build OS/arch:  Linux 2.6.32-431.17.1.el6.x86_64 x86_64
> > Build CPU vendor:   GenuineIntel
> > Build CPU brand:Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
> > Build CPU family:   6   Model: 45   Stepping: 7
> > Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
> > nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
> > sse4.2 ssse3 tdt x2apic
> > C compiler:
> > /opt/apps/intel/13/composer_xe_2013.3.163/bin/intel64/icc Intel icc (ICC)
> > 13.1.1 20130313
> > C compiler flags:   -mavx -mkl=sequential -std=gnu99 -Wall   -ip
> > -funroll-all-loops  -O3 -DNDEBUG
> > C++ compiler:
> > /opt/apps/intel/13/composer_xe_2013.3.163/bin/intel64/icc Intel icc (ICC)
> > 13.1.1 20130313
> > C++ compiler flags: -mavx   -Wall   -ip -funroll-all-loops  -O3 -DNDEBUG
> > Linked with Intel MKL version 11.0.3.
> > CUDA compiler:  /opt/apps/cuda/6.0/bin/nvcc nvcc: NVIDIA (R) Cuda
> > compiler driver;Copyright (c) 2005-2013 NVIDIA Corporation;Built on
> > Thu_Mar_13_11:58:58_PDT_2014;Cuda compilation tools, release 6.0, V6.0.1
> > CUDA compiler
> >
> flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_20,code=sm_21;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_35,code=compute_35;-use_fast_math;;
> > -mavx;-Wall;-ip;-funroll-all-loops;-O3;-DNDEBUG
> > CUDA driver:6.0
> > CUDA runtime:   6.0
> >
> > ...
> > Using 1 MPI thread
> > Using 16 OpenMP threads
> >
> > Detecting CPU-specific acceleration.
> > Present hardware specification:
> > Vendor: GenuineIntel
> > Brand:  Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
> > Family:  6  Model: 45  Stepping:  7
> > Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
> nonstop_tsc
> > pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2
> ssse3
> > tdt x2apic
> > Acceleration most likely to fit this hardware: AVX_256
> > Acceleration selected at GROMACS compile time: AVX_256
> >
> >
> > 1 GPU detected:
> >   #0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
> >
> > 1 GPU auto-selected for this run.
> > Mapping of GPU to the 1 PP rank in this node: #0
> >
> > Will do PME sum in reciprocal space.
> >
> > ...
> >
> >  M E G A - F L O P S   A C C O U N T I N G
> >
> >  NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
> >  RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
> >  W3=SPC/TIP3p  W4=TIP4p (single or pairs)
> >  V&F=Potential and force  V=Potential only  F=Force only
> >
> >  Computing:                        M-Number        M-Flops     % Flops
> > ----------------------------------------------------------------------
> >  Pair Search distance check     1517304.154000   13655737.386      0.1
> >  NxN Ewald Elec. + VdW [F]370461474.58796

Re: [gmx-users] performance 1 gpu

2014-10-01 Thread Mark Abraham
Hi,

Not really surprising. The compiler teams try to optimize the performance
of lots of different kinds of code on a range of platforms. Some kinds of
code aren't prioritized by a given compiler team. Often there are
trade-offs that mean some kinds of code get faster while others get
slower, perhaps differently for different hardware targets, while everybody
gradually tries to reach nirvana. Actual performance is a matter of how
well a specific code works with a specific compiler on specific hardware.

Mark

On Wed, Oct 1, 2014 at 2:24 PM, Johnny Lu  wrote:

> That is surprising. I thought intel compiler is the best compiler for intel
> cpu.

[gmx-users] Gromacs 5.0.2 released

2014-10-01 Thread Mark Abraham
Hi Gromacs users,


 The official release of Gromacs 5.0.2 is available! It contains a fix for
a major simulation correctness problem with PME + GPUs in 5.0 and 5.0.1.
Please see the link to the release notes below for more details. There are
also some other minor bug fixes and performance enhancements. We encourage
all users to upgrade their installations from earlier 5.0 releases,
particularly for use on GPUs.


 You can find the code, manual, release notes, installation instructions
and test suite at the links below.


 ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.0.2.tar.gz

ftp://ftp.gromacs.org/pub/manual/manual-5.0.2.pdf

http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_5.0.x#Release_notes_for_5.0.2

http://www.gromacs.org/Documentation/Installation_Instructions_for_5.0

http://gerrit.gromacs.org/download/regressiontests-5.0.2.tar.gz
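
 For reference, a typical upgrade from the tarball looks roughly like this
(a sketch only; the install prefix, parallel build width and FFT/GPU choices
are placeholders, so please follow the installation instructions linked above
as the authoritative procedure):

wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.0.2.tar.gz
tar xzf gromacs-5.0.2.tar.gz
cd gromacs-5.0.2 && mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON -DREGRESSIONTEST_DOWNLOAD=ON \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.0.2
make -j 8 && make check
make install
source $HOME/gromacs-5.0.2/bin/GMXRC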


 Happy simulating!


 Mark Abraham

Gromacs development manager


Re: [gmx-users] Performance of GTX 980 and 970

2014-10-01 Thread Pappu Kumar
I am planning to buy 2x GTX 970 for a 5820K overclocked to 4.5 GHz. I have
budget limitations and am not able to afford a workstation with 2x CPUs.
Please let me know if you are aware of any cheaper alternative.


Also, please let me know whether Gromacs could become more GPU-intensive in
the future, allowing more GPUs to be used with one CPU. Thank you.



On Tuesday, 30 September 2014 5:18 PM, Szilárd Páll  
wrote:
 


The 6-core 5820K has only 28 PCI-E lanes rather than the 40 of the
5930K/5960X, which means that with a second GPU you'll get x16/x8 and
with three GPUs x8/x8/x8.

Also note that for the current GROMACS implementation, pairing a 5820K
with two 980s will likely give a rather imbalanced hardware setup -
with three 970s even more so (at least for common types of run
setups). Depending on the exact use case, you may be able to make good
use of 2-3 GPUs even with just a 5820K (e.g. in multi runs, one per
GPU) or in setups with a long cut-off (or without PME), but otherwise you
may not see much benefit from a second GPU, let alone a third.
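
As an illustration of the multi-run case, two independent simulations, one
per GPU, could be started roughly like this (a sketch only; the file names,
rank count and the split of the 5820K's 12 hardware threads are
placeholders):

# two independent runs, one per GTX 970, 6 OpenMP threads each
mpirun -np 2 mdrun_mpi -multi 2 -gpu_id 01 -ntomp 6 -s sim_.tpr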

--
Szilárd



On Mon, Sep 29, 2014 at 5:01 PM, Pappu Kumar  wrote:
> Thank you for your info. I am planning to buy a computer with the following 
> configuration:
>
> Intel 5820K
> Corsair H100i Hydro Cooling Performance
> MSI X99 SLI Plus
> Fractal Design R4
> Seasonic X 1050W
>
> I am wondering if it would be a good idea to go for 3x GTX 970 instead of 2x 
> GTX 980 since the cost is the same. Thank you.


Re: [gmx-users] Performance of GTX 980 and 970

2014-10-01 Thread Szilárd Páll
On Wed, Oct 1, 2014 at 5:40 PM, Pappu Kumar  wrote:
> I am planning to buy 2x GTX 970 for 5820K overclocked to 4.5 GHz. I have
> budget limitations and not able to afford workstations with 2x CPUs. Let me
> know if you are aware of any cheaper alternative.

Sounds like a good investment!

The only case where 2x 970 could perform worse than a single 980 is if
your input system is quite small, and even that should change in future
versions.

> Also let me know if in future Gromacs could become more GPU intensive
> allowing more GPUs with one CPU. Thank you.

Certainly! The bonded interactions will be offloaded in the
near future!

Cheers,
--
Szilárd



[gmx-users] RB dihedral performance greatly improved! [was: Re: GPU waits for CPU, any remedies?]

2014-10-01 Thread Szilárd Páll
Hi,

The SIMD-accelerated RB dihedrals were implemented a few days ago, and as the
change turned out to be relatively minor, we accepted it for the 5.0 series;
it even made it into today's release!

Expect a considerable performance improvement in GPU-accelerated simulations of:
* systems that contain a large number of RB dihedrals;
* inhomogeneous systems that contain some RB dihedrals, when running
in parallel (due to the decreased load imbalance).
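
A quick way to see whether your own runs fall into this category is to check
how much wall time the CPU force work ("Force", which includes the bonded
part) takes relative to PME in the accounting table at the end of the log; a
rough sketch (the log file name is a placeholder):

# pull the relevant rows from the cycle accounting table
grep -A 25 " Computing:" md.log | egrep "Force|PME mesh|Total"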

Cheers,
--
Szilárd


On Thu, Sep 18, 2014 at 10:28 AM, Michael Brunsteiner  wrote:
>
> Dear Szilard,
> thanks for your reply!
> one more question ... you wrote that SIMD-optimized RB dihedrals might get
> implemented soon ... is there perhaps a link on gerrit.gromacs.org that I
> can use to follow the progress there?
> cheers
> michael
>
>
>
> ===
>
>
> Why be happy when you could be normal?
>
> 
> From: Szilárd Páll 
> To: Michael Brunsteiner 
> Cc: Discussion list for GROMACS users ;
> "gromacs.org_gmx-users@maillist.sys.kth.se"
> 
> Sent: Wednesday, September 17, 2014 4:18 PM
>
> Subject: Re: [gmx-users] GPU waits for CPU, any remedies?
>
> Dear Michael,
>
> I checked and indeed, the Ryckaert-Bellemans dihedrals are not SIMD
> accelerated - that's why they are quite slow. While your CPU is the
> bottleneck, and you're quite right that the PP-PME balancing can't do
> much about this kind of imbalance, the good news is that it can be
> faster - even without a new CPU.
>
> With SIMD this will accelerate quite well and will likely cut down
> your bonded time by a lot (I'd guess at least 3-4x with AVX, maybe
> more with FMA). This code has not been SIMD-optimized yet, mostly
> because in typical runs the RB computation takes relatively little
> time, and additionally the way these kernels need to be
> written/rewritten for SIMD acceleration is not very developer-friendly.
> However, it will likely get implemented soon, which in your case will
> bring big improvements.
>
> Cheers,
> --
> Szilárd
>
>
> On Wed, Sep 17, 2014 at 3:01 PM, Michael Brunsteiner 
> wrote:
>>
>> Dear Szilard,
>> yes, it seems I should have done a bit more research regarding
>> the optimal CPU/GPU combination ... and as you point out, the
>> bonded interactions are the culprits ... most often people probably
>> simulate aqueous systems, in which LINCS does most of this work;
>> here I have a polymer glass ... different story ...
>> the flops table you missed was in my previous mail (see below for another
>> copy), and indeed it tells me that 65% of the CPU load is "Force" while
>> only 15.5% is for PME mesh, and I assume only the latter is what can
>> be modified by dynamic load balancing ... I assume this means
>> there is no way to improve things ... I guess I just have to live
>> with the fact that for this type of system my slow CPU is the
>> bottleneck ... if you have any other ideas please let me know ...
>> regards
>> mic
>>
>>
>>
>> :
>>
>>  Computing:            Num     Num      Call    Wall time    Giga-Cycles
>>                        Ranks  Threads   Count      (s)       total sum      %
>> ----------------------------------------------------------------------------
>>  Neighbor search           1     12       251      0.574        23.403     2.1
>>  Launch GPU ops.           1     12     10001      0.627        25.569     2.3
>>  Force                     1     12     10001     17.392       709.604    64.5
>>  PME mesh                  1     12     10001      4.172       170.234    15.5
>>  Wait GPU local            1     12     10001      0.206         8.401     0.8
>>  NB X/F buffer ops.        1     12     19751      0.239         9.736     0.9
>>  Write traj.               1     12        11      0.381        15.554     1.4
>>  Update                    1     12     10001      0.303        12.365     1.1
>>  Constraints               1     12     10001      1.458        59.489     5.4
>>  Rest                                               1.621        66.139     6.0
>> ----------------------------------------------------------------------------
>>  Total                                             26.973      1100.493   100.0
>>
>> ===
>>
>> Why be happy when you could be normal?
>>
>> 
>> On Tue, 9/16/14, Szilárd Páll  wrote:
>>
>>  Subject: Re: [gmx-users] GPU waits for CPU, any remedies?
>>  To: "Michael Brunsteiner" 
>>  Cc: "Discussion list for GROMACS users" ,
>> "gromacs.org_gmx-users@maillist.sys.kth.se"
>> 
>>  Date: Tuesday, September 16, 2014, 6:52 PM
>>
>>  Well, it looks like you are i) unlucky, and ii) limited by the huge
>>  bonded workload.
>>
>>  i) As your system is quite small, mdrun thinks that there are no
>>  convenient grids between 32x32x32 and 28x28x28 (see the PP-PME tuning
>>  output). As the latter corresponds to quite a big jump in cut-off
>>  (from 1.296 to 1.482), which more than doubles the non-bonded workload
>>  and is slower than the former, mdrun sticks to using

[gmx-users] source code routines for electric fields

2014-10-01 Thread 米谷慎
Dear Gromacs experts:

I'd like to add a time-dependent function to the electric field routines
in GROMACS.
I could only find the input (reading) code in src/kernel/readir.c.
Does anyone know which source code routines I should modify to add the
time-dependent functions?
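
I guess a grep like the one below (assuming the 4.6/5.0-era source layout,
where the field parameters read in readir.c appear to be stored as t_cosines
in the input record) should list the candidate files besides readir.c, and
the file on the force side would be the routine to extend:

# find where the electric-field parameters (t_cosines) are referenced
# outside the parser (layout assumed; adjust to your source version)
grep -rln "t_cosines" src/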

Thank you in advance.

Makoto Yoneya, Dr.
AIST, Tsukuba
JAPAN