Re: [gmx-users] Error compiling Gromacs 4.5.4: "relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC"

2011-04-21 Thread Luca Bellucci
Hi all,
I have encountered the same problem.
With this command:
./configure --with-fft=mkl --prefix=/path/gmx-4.5.3 --enable-mpi
"make mdrun" works well
 
When i used the same option with gmx 4.5.4 
 ./configure --with-fft=mkl --prefix=/path/gmx-4.5.4 --enable-mpi
"make mdrun" did not work.
The compilation reported this error:
ld: /usr/local/mpich2-1.3.2p2-install/lib/libmpich.a(allreduce.o): relocation 
R_X86_64_32 against `_2__STRING.14' can not be used when making a shared 
object; recompile with -fPIC
etc..
The problem here seems to be the use of shared versus static libraries.
I ran the configure command several times using "--with-pic" and other
combinations, but I did not resolve the problem.  Perhaps there is an
option that I have not seen!
Anyway, I realized that the default behavior of the "configure" command
has changed.
In fact, with the options reported above, configure.ac for 4.5.3 has, at line ~27:
AC_DISABLE_SHARED

whereas configure.ac for 4.5.4 has:
AC_ENABLE_SHARED
test "$enable_mpi" = "yes" && AC_DISABLE_SHARED

When I changed these two lines to AC_DISABLE_SHARED, gmx 4.5.4 also compiled.
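An equivalent workaround, without editing configure.ac, should be to build static
libraries only via the standard libtool configure flag; a minimal sketch, assuming
the 4.5.4 autoconf build honors --disable-shared:

./configure --with-fft=mkl --prefix=/path/gmx-4.5.4 --enable-mpi --disable-shared
make mdrun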
Luca

> Pablo Englebienne wrote:
> > Hi all,
> > 
> > I'm trying to compile release 4.5.4 on a system that has been running
> > every release since 4.0.4 without a problem. Even 4.5.3 compiled fine
> > with the following configure:
> > 
> > LDFLAGS="-L/cvos/shared/apps/fftw/gcc/64/3.2/lib"
> > CPPFLAGS="-I/cvos/shared/apps/fftw/gcc/64/3.2/include" ./configure
> > --prefix=$HOME/software
> > 
> > The LDFLAGS and CPPFLAGS specify the (non-standard) location of the FFTW
> > libraries and headers. Configure succeeds in creating the Makefiles, but
> > when running make it aborts at this point:
> > 
> > cc  -shared  .libs/calcmu.o .libs/calcvir.o .libs/constr.o
> > .libs/coupling.o .libs/domdec.o .libs/domdec_box.o .libs/domdec_con.o
> > .libs/domdec_network.o .libs/domdec_setup.o .libs/domdec_top.o
> > .libs/ebin.o .libs/edsam.o .libs/ewald.o .libs/force.o .libs/forcerec.o
> > .libs/ghat.o .libs/init.o .libs/mdatom.o .libs/mdebin.o .libs/minimize.o
> > .libs/mvxvf.o .libs/ns.o .libs/nsgrid.o .libs/perf_est.o .libs/genborn.o
> > .libs/genborn_sse2_single.o .libs/genborn_sse2_double.o
> > .libs/genborn_allvsall.o .libs/genborn_allvsall_sse2_single.o
> > .libs/genborn_allvsall_sse2_double.o .libs/gmx_qhop_parm.o
> > .libs/gmx_qhop_xml.o .libs/groupcoord.o .libs/pme.o .libs/pme_pp.o
> > .libs/pppm.o .libs/partdec.o .libs/pull.o .libs/pullutil.o
> > .libs/rf_util.o .libs/shakef.o .libs/sim_util.o .libs/shellfc.o
> > .libs/stat.o .libs/tables.o .libs/tgroup.o .libs/tpi.o .libs/update.o
> > .libs/vcm.o .libs/vsite.o .libs/wall.o .libs/wnblist.o .libs/csettle.o
> > .libs/clincs.o .libs/qmmm.o .libs/gmx_fft.o .libs/gmx_parallel_3dfft.o
> > .libs/fft5d.o .libs/gmx_wallcycle.o .libs/qm_gaussian.o .libs/qm_mopac.o
> > .libs/qm_gamess.o .libs/gmx_fft_fftw2.o .libs/gmx_fft_fftw3.o
> > .libs/gmx_fft_fftpack.o .libs/gmx_fft_mkl.o .libs/qm_orca.o
> > .libs/mdebin_bar.o  -Wl,--rpath
> > -Wl,/home/penglebie/downloads/gromacs-4.5.4/src/gmxlib/.libs -Wl,--rpath
> > -Wl,/home/penglebie/software/lib
> > /cvos/shared/apps/fftw/gcc/64/3.2/lib/libfftw3f.a -lxml2
> > -L/cvos/shared/apps/fftw/gcc/64/3.2/lib ../gmxlib/.libs/libgmx.so -lnsl
> > -lm  -msse2 -pthread -Wl,-soname -Wl,libmd.so.6 -o .libs/libmd.so.6.0.0
> > /usr/bin/ld:
> > /cvos/shared/apps/fftw/gcc/64/3.2/lib/libfftw3f.a(plan-many-dft-r2c.o):
> > relocation R_X86_64_32 against `a local symbol' can not be used when
> > making a shared object; recompile with -fPIC
> > /cvos/shared/apps/fftw/gcc/64/3.2/lib/libfftw3f.a: could not read
> > symbols: Bad value
> > collect2: ld returned 1 exit status
> > make[3]: *** [libmd.la] Error 1
> > make[3]: Leaving directory
> > `/home/penglebie/downloads/gromacs-4.5.4/src/mdlib'
> > make[2]: *** [all-recursive] Error 1
> > make[2]: Leaving directory `/home/penglebie/downloads/gromacs-4.5.4/src'
> > make[1]: *** [all] Error 2
> > make[1]: Leaving directory `/home/penglebie/downloads/gromacs-4.5.4/src'
> > make: *** [all-recursive] Error 1
> > 
> > I see that recently
> > (http://lists.gromacs.org/pipermail/gmx-users/2011-April/059919.html)
> > another user encountered the same problem but this time with version
> > 4.5.3; in my case 4.5.3 compiles fine, the only issue is with 4.5.4.
> 
> The solution is discussed in the installation instructions:
> 
> http://www.gromacs.org/Downloads/Installation_Instructions#Prerequisites
> 
> > The system is running Scientific Linux 5.5.
> > 
> > $ uname -a
> > 
> > Linux ST-HPC-Main 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:12:52 EDT 2009
> > x86_64 x86_64 x86_64 GNU/Linux
> > 
> > 
> > I am puzzled as to why it doesn't work in 4.5.4 but did until the
> > previous release. Did something change in this respect?
> 
> Maybe, but the fact that this issue has come up numerous times in several
> versions suggests not.  As for why 4.5.3 works and 4.5.4 doesn't, I can
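For context: the usual resolution of the "recompile with -fPIC" error on
libfftw3f.a above is to rebuild FFTW either as a shared library or as static
libraries compiled as position-independent code. A minimal sketch, assuming a
standard FFTW 3.x source tree and that GROMACS links the single-precision
library (libfftw3f):

cd fftw-3.2
./configure --enable-float --enable-shared --prefix=$HOME/software/fftw
# alternatively, keep static libraries but build them position-independent:
# ./configure --enable-float --with-pic --prefix=$HOME/software/fftw
make && make install

Then re-run the GROMACS configure with CPPFLAGS/LDFLAGS pointing at the new prefix.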

Re: [gmx-users] Is there still interest in rigid-body simulation?

2011-04-07 Thread Luca Bellucci
Hi Adam,
since I was involved in a rigid-body MD project some time ago, I am
interested to know whether you are following a published or known method.
I do not know your rigid-body algorithm, but I would suggest you take
care with the rotations, because they are not a simple task.
Good luck
Luca

> Hi all,
> I have seen a few posts on gmx-users indicating a desire to treat certain
> atom groups as rigid bodies in MD simulations.  I just started implementing
> this, and so far I have it working for translational forces (not rotation,
> though this should be simple to add), even when the group is split over
> multiple processors.  At the moment I have the rigid body groups specified
> as freeze groups in the mdp file, but there could be a separate option.
>  Would anyone else find this useful?  The problem is that: (a) I am
> modifying GROMACS 4.5.1, so I am some months out of date, and (b) my code
> is probably not to spec.  If it is worthwhile, I can restart from 4.5.4
> (the code modifications are quite small) and make an effort to conform to
> coding standard.  Best,
>
> Adam Herbst




Re: [gmx-users] FEP and loss of performance

2011-04-07 Thread Luca Bellucci
OK, I agree with you: FEP performance is an important issue to resolve, but I
know that there are also other priorities. In any case, thank you for your
interest and your suggestions.

Luca


> I would suggest that you take Chris' advice and post all of this as a
> feature request on redmine.gromacs.org so that it can be put on a to-do
> list.  Enhancing the performance of the free energy code is probably going
> to be a low-priority, long-term goal (in the absence of any proven bug),
> but at least it won't get lost in the shuffle of the mailing list.  If
> there's no record of it in redmine, it likely won't get addressed. 
> Gromacs is undergoing major changes at the moment, so the core developers
> are quite busy with other priorities.
> 
> -Justin
> 
> Luca Bellucci wrote:
> > I posted my test files in:
> > https://www.dropbox.com/link/17.-sUcJyMeEL?k=0f3b6fa098389405e7e15c886dcc
> > 83c1 This is a run for a dialanine peptide in a water box.
> > The side of the cubic box was 40 Å.
> > The directory is organized as follows:
> > TEST\
> > 
> > topol.top
> > 
> > Run-00/confout.gro; Equilibrated structure
> > Run-00/state.cp
> > 
> > MD-std/Commands ; commands to run the simulation , grompp and mdrun
> > 
> > MD-std/md.mdp
> > 
> > MD-FEP/Commands
> > MD-FEP/md.mdp
> > 
> > ~700 kb
> > 
> >> David Mobley wrote:
> >>> Hi,
> >>> 
> >>> This doesn't sound like normal behavior. In fact, this is not what I
> >>> typically observe. While there may be a small performance difference,
> >>> it is probably at the level of a few percent. Certainly not a factor
> >>> of more than 10.
> >> 
> >> I see about a 50% reduction in speed when decoupling small molecules in
> >> water. For me, I don't care if a nanosecond takes 2 or 3 hours.  For
> >> larger systems such as the ones considered here, it seems that the
> >> performance loss is much more dramatic.
> >> 
> >> I can reproduce the poor performance with a simple water box with the
> >> free energy code on.  Decoupling the whole system (or at least, a large
> >> part of it, as was the original intent of this thread, as I understand
> >> it) results in a 1500% slowdown.  Some observations:
> >> 
> >> 1. Water optimizations are turned off when decoupling the water, but
> >> this only accounts for 20% of the slowdown, which is relatively
> >> insignificant.
> >> 
> >> 2. Using lambda=0.9 (from a previous post) in my water box results in
> >> even worse performance, but much of this is due to DD instability.  The
> >> system I used has a few hundred water molecules in it, and after about
> >> 10-12 ps, they collapse in on one another and form clusters,
> >> dramatically shifting the balance of atoms between DD cells.  DLB gets
> >> activated but the force imbalances are around 40%, and the total
> >> slowdown (relative to
> >> non-perturbed trajectories) is 2000%.
> >> 
> >> 3. Using lambda=0 results in stable trajectories with very low
> >> imbalance, but also poor performance.  It seems that mdrun spends all
> >> of its time in
> >> 
> >> the free energy innerloops:
> >>  Computing:                M-Number      M-Flops   % Flops
> >> ----------------------------------------------------------------
> >>  Free energy innerloop   19064.187513  2859628.127    89.1
> >>  Outer nonbonded loop      325.153806     3251.538     0.1
> >>  Calc Weights              231.754635     8343.167     0.3
> >>  Spread Q Bspline         9888.197760    19776.396     0.6
> >>  Gather F Bspline         9888.197760    59329.187     1.8
> >>  3D-FFT                  24406.688124   195253.505     6.1
> >>  Solve PME                 485.109702    31047.021     1.0
> >>  NS-Pairs                  521.616615    10953.949     0.3
> >>  Reset In Box                2.575515        7.727     0.0
> >>  CG-CoM                      7.728090       23.184     0.0
> >>  Virial                      8.176635      147.179     0.0
> >>  Update                     77.251545     2394.798     0.1
> >>  Stop-CM                     0.774045        7.740     0.0
> >>  Calc-Ekin                  77.253090     2085.833     0.1

Re: [gmx-users] FEP and loss of performance

2011-04-06 Thread Luca Bellucci
I posted my test files in: 
https://www.dropbox.com/link/17.-sUcJyMeEL?k=0f3b6fa098389405e7e15c886dcc83c1
This is a run for a dialanine peptide in a water box.
The side of the cubic box was 40 Å.
The directory is organized as follows:
TEST\
topol.top
Run-00/confout.gro; Equilibrated structure
Run-00/state.cp

MD-std/Commands ; commands to run the simulation , grompp and mdrun
MD-std/md.mdp

MD-FEP/Commands
MD-FEP/md.mdp

~700 kb


> David Mobley wrote:
> > Hi,
> > 
> > This doesn't sound like normal behavior. In fact, this is not what I
> > typically observe. While there may be a small performance difference,
> > it is probably at the level of a few percent. Certainly not a factor
> > of more than 10.
> 
> I see about a 50% reduction in speed when decoupling small molecules in
> water. For me, I don't care if a nanosecond takes 2 or 3 hours.  For
> larger systems such as the ones considered here, it seems that the
> performance loss is much more dramatic.
> 
> I can reproduce the poor performance with a simple water box with the free
> energy code on.  Decoupling the whole system (or at least, a large part of
> it, as was the original intent of this thread, as I understand it) results
> in a 1500% slowdown.  Some observations:
> 
> 1. Water optimizations are turned off when decoupling the water, but this
> only accounts for 20% of the slowdown, which is relatively insignificant.
> 
> 2. Using lambda=0.9 (from a previous post) in my water box results in even
> worse performance, but much of this is due to DD instability.  The system
> I used has a few hundred water molecules in it, and after about 10-12 ps,
> they collapse in on one another and form clusters, dramatically shifting
> the balance of atoms between DD cells.  DLB gets activated but the force
> imbalances are around 40%, and the total slowdown (relative to
> non-perturbed trajectories) is 2000%.
> 
> 3. Using lambda=0 results in stable trajectories with very low imbalance,
> but also poor performance.  It seems that mdrun spends all of its time in
> the free energy innerloops:
> 
>  Computing:                M-Number      M-Flops   % Flops
> ----------------------------------------------------------------
>  Free energy innerloop   19064.187513  2859628.127    89.1
>  Outer nonbonded loop      325.153806     3251.538     0.1
>  Calc Weights              231.754635     8343.167     0.3
>  Spread Q Bspline         9888.197760    19776.396     0.6
>  Gather F Bspline         9888.197760    59329.187     1.8
>  3D-FFT                  24406.688124   195253.505     6.1
>  Solve PME                 485.109702    31047.021     1.0
>  NS-Pairs                  521.616615    10953.949     0.3
>  Reset In Box                2.575515        7.727     0.0
>  CG-CoM                      7.728090       23.184     0.0
>  Virial                      8.176635      147.179     0.0
>  Update                     77.251545     2394.798     0.1
>  Stop-CM                     0.774045        7.740     0.0
>  Calc-Ekin                  77.253090     2085.833     0.1
>  Constraint-V               77.253090      618.025     0.0
>  Constraint-Vir              7.726545      185.437     0.0
>  Settle                     51.502060    16635.165     0.5
> ----------------------------------------------------------------
>  Total                                 3209687.978   100.0
> ----------------------------------------------------------------
> 
> > You may want to provide an mdp file and topology, etc. so someone can
> > see if they can reproduce your problem.
> 
> I agree that would be useful.  I can contribute my water box system if it
> would help, as well.
> 
> -Justin
> 
> > Thanks.
> > 
> > On Wed, Apr 6, 2011 at 7:59 AM, Luca Bellucci  wrote:
> >> I followed your suggestions and tried to perform an MD run with GROMACS
> >> and NAMD for a dialanine peptide in a water box. The side of the cubic
> >> box was 40 Å.
> >>
> >> GROMACS:
> >> With the free energy module there is a drop in GROMACS performance of
> >> about 10-20 fold.
> >> Standard MD:      Time:    6.693      6.693    100.0
> >> Free energy MD:   Time:  136.113    136.113    100.0
> >>
> >> NAMD:
> >> With the free energy module there is not a drop in performance 

Re: [gmx-users] FEP and loss of performance

2011-04-06 Thread Luca Bellucci
I followed your suggestions and tried to perform an MD run with GROMACS and
NAMD for a dialanine peptide in a water box. The side of the cubic box was 40 Å.

GROMACS:
With the free energy module there is a drop in GROMACS performance of about
10-20 fold.
Standard MD:      Time:    6.693      6.693    100.0
Free energy MD:   Time:  136.113    136.113    100.0

NAMD:
With the free energy module the drop in performance is not as evident as in
GROMACS.
Standard MD:       6.90
Free energy MD:    9.60

I would like to point out that this kind of calculation is common; in fact, the
GROMACS 4.5.3 manual states: "There is a special option system that couples all
molecule types in the system. This can be useful for equilibrating a system [..]".
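As a concrete illustration, that equilibration use case would look roughly like
the following in the mdp file; this is only a sketch, and the lambda value and
coupling settings below are placeholders rather than values taken from this thread:

free_energy     = yes
init_lambda     = 0.5      ; placeholder
couple-moltype  = system   ; special option: couple all molecule types
couple-lambda0  = vdw-q
couple-lambda1  = none
couple-intramol = no       ; placeholder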

What I would like to understand is whether there is a way to resolve the drop in
GROMACS performance for this kind of calculation.

Luca



> I don't know if it is possible or not. I think that you can enhance
> your chances of developer attention if you develop a small and simple
> test system that reproduces the slowdown and very explicitly state
> your case for why you can't use some other method. I would suggest
> posting that to the mailing list and, if you don't get any response,
> post it as an enhancement request on the redmine page (or whatever has
> taken over from bugzilla).
> 
> Good luck,
> Chris.
> 
> -- original message --
> 
> 
> Yes, I am testing the possibility of performing Hamiltonian REMD.
> Energy barriers can be overcome by increasing the system temperature or by
> scaling the potential energy with a lambda value; in this sense the two
> methods are "equivalent". Both have advantages and disadvantages, but this
> is not the right place to debate that. The main problem seems to be how to
> overcome the loss of GROMACS performance in this kind of calculation.  At
> the moment it looks like an intrinsic code problem.
> Is that possible?
> 
> > >> Dear Chris and Justin
> > >>
> > >> Thank you for your precious suggestions.
> > >> This is a test that I performed on a single machine with 8 cores
> > >> and GROMACS 4.5.4.
> > >>
> > >> I am trying to enhance the sampling of a protein using the decoupling
> > >> scheme of the free energy module of GROMACS.  However, when I decouple
> > >> only the protein, the protein collapsed. Because I simulated in NVT, I
> > >> thought that this was an effect of the solvent, so I was trying to
> > >> decouple the solvent as well to understand the system behavior.
> >
> > > Rather than suspect that the solvent is the problem, it's more likely
> > > that decoupling an entire protein simply isn't stable.  I have never
> > > tried anything that enormous, but the volume change in the system could
> > > be unstable, along with any number of factors, depending on how you
> > > approach it.
> > >
> > > If you're looking for better sampling, REMD is a much more robust
> > > approach than trying to manipulate the interactions of huge parts of
> > > your system using the free energy code.
> >
> > Presumably Luca is interested in some type of Hamiltonian exchange where
> > lambda represents the interactions between the protein and the solvent?
> > This can actually be a useful method for enhancing sampling. I think it's
> > dangerous if we rely too heavily on "try something else". I still see no
> > methodological reason a priori why there should be any actual slowdown,
> > so that makes me think that it's an implementation thing, and there is
> > at least the possibility that this is something that could be fixed as
> > an enhancement.
> >
> > Chris.
> >
> >
> > -Justin
> >
> > > I expected a loss of performance, but not so drastic.
> > > Luca
> > >
> > >> Load balancing problems I can understand, but why would it take longer
> > >> in absolute time? I would have thought that some nodes would simply be
> > >> sitting idle, but this should not cause an increase in the overall
> > >> simulation time (15x at that!).
> > >>
> > >> There must be some extra communication?
> > >>
> > >> I agree with Justin that this seems like a strange thing to do, but
> > >> still I think that there must be some underlying coding issue (probably
> > >> one that only exists because of a rea

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Yes, I am testing the possibility of performing Hamiltonian REMD.
Energy barriers can be overcome by increasing the system temperature or by
scaling the potential energy with a lambda value; in this sense the two methods
are "equivalent". Both have advantages and disadvantages, but this is not the
right place to debate that. The main problem seems to be how to overcome the
loss of GROMACS performance in this kind of calculation.  At the moment it
looks like an intrinsic code problem.
Is that possible?

> >> Dear Chris and Justin
> >>
> >> Thank you for your precious suggestions.
> >> This is a test that I performed on a single machine with 8 cores
> >> and GROMACS 4.5.4.
> >>
> >> I am trying to enhance the sampling of a protein using the decoupling
> >> scheme of the free energy module of GROMACS.  However, when I decouple
> >> only the protein, the protein collapsed. Because I simulated in NVT, I
> >> thought that this was an effect of the solvent, so I was trying to
> >> decouple the solvent as well to understand the system behavior.
>
> > Rather than suspect that the solvent is the problem, it's more likely that
> > decoupling an entire protein simply isn't stable.  I have never tried
> > anything that enormous, but the volume change in the system could be
> > unstable, along with any number of factors, depending on how you approach
> > it.
> >
> > If you're looking for better sampling, REMD is a much more robust approach
> > than trying to manipulate the interactions of huge parts of your system
> > using the free energy code.
>
> Presumably Luca is interested in some type of Hamiltonian exchange where
> lambda represents the interactions between the protein and the solvent?
> This can actually be a useful method for enhancing sampling. I think it's
> dangerous if we rely too heavily on "try something else". I still see no
> methodological reason a priori why there should be any actual slowdown, so
> that makes me think that it's an implementation thing, and there is at
> least the possibility that this is something that could be fixed as an
> enhancement.
>
> Chris.
>
>
> -Justin
>
> > I expected a loss of performance, but not so drastic.
> > Luca
> >
> >> Load balancing problems I can understand, but why would it take longer
> >> in absolute time? I would have thought that some nodes would simply be
> >> sitting idle, but this should not cause an increase in the overall
> >> simulation time (15x at that!).
> >>
> >> There must be some extra communication?
> >>
> >> I agree with Justin that this seems like a strange thing to do, but
> >> still I think that there must be some underlying coding issue (probably
> >> one that only exists because of a reasonable assumption that nobody
> >> would annihilate the largest part of their system).
> >>
> >> Chris.
> >>
> >> Luca Bellucci wrote:
> >>> Hi Chris,
> >>> thanks for the suggestions,
> >>> in the previous mail there is a mistake because
> >>> couple-moltype = SOL (for solvent) and not "Protein_Chain_P".
> >>> Now the problem of the load balance seems reasonable, because
> >>> the water box is large ~9.0 nm.
> >>
> >> Now your outcome makes a lot more sense.  You're decoupling all of the
> >> solvent? I don't see how that is going to be physically stable or terribly




Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Dear Chris and Justin,
Thank you for your precious suggestions.
This is a test that I performed on a single machine with 8 cores
and GROMACS 4.5.4.

I am trying to enhance the sampling of a protein using the decoupling scheme
of the free energy module of GROMACS.  However, when I decouple only the
protein, the protein collapsed. Because I simulated in NVT, I thought that
this was an effect of the solvent, so I was trying to decouple the solvent as
well to understand the system behavior.

I expected a loss of performance, but not so drastic.
Luca

> Load balancing problems I can understand, but why would it take longer
> in absolute time? I would have thought that some nodes would simply be
> sitting idle, but this should not cause an increase in the overall
> simulation time (15x at that!).
>
> There must be some extra communication?
>
> I agree with Justin that this seems like a strange thing to do, but
> still I think that there must be some underlying coding issue (probably
> one that only exists because of a reasonable assumption that nobody
> would annihilate the largest part of their system).
>
> Chris.
>
> Luca Bellucci wrote:
> > Hi Chris,
> > thanks for the suggestions,
> > in the previous mail there is a mistake because
> > couple-moltype = SOL (for solvent) and not "Protein_Chain_P".
> > Now the problem of the load balance seems reasonable, because
> > the water box is large ~9.0 nm.
>
> Now your outcome makes a lot more sense.  You're decoupling all of the
> solvent? I don't see how that is going to be physically stable or terribly
> meaningful, but it explains your performance loss.  You're annihilating a
> significant number of interactions (probably the vast majority of all the
> nonbonded interactions in the system), which I would expect would cause
> continuous load balancing issues.
>
> -Justin
>
> > However, the problem exists and the performance loss is very high, so I
> > have redone the calculations with these commands:
> >
> > grompp -f md.mdp -c ../Run-02/confout.gro -t ../Run-02/state.cpt \
> >        -p ../topo.top -n ../index.ndx -o md.tpr -maxwarn 1
> >
> > mdrun -s md.tpr -o md
> >
> > This is part of the md.mdp file:
> >
> > ; Run parameters
> > ; define       = -DPOSRES
> > integrator     = md
> > nsteps         = 1000
> > dt             = 0.002
> > [..]
> > free_energy    = yes ; /no
> > init_lambda    = 0.9
> > delta_lambda   = 0.0
> > couple-moltype = SOL ; solvent water
> > couple-lambda0 = vdw-q
> > couple-lambda1 = none
> > couple-intramol= yes
> >
> > Result for free energy calculation
> >  Computing:         Nodes   Number    G-Cycles    Seconds     %
> > ----------------------------------------------------------------
> >  Domain decomp.         8      126      22.050        8.3    0.1
> >  DD comm. load          8       15       0.009        0.0    0.0
> >  DD comm. bounds        8       12       0.031        0.0    0.0
> >  Comm. coord.           8     1001      17.319        6.5    0.0
> >  Neighbor search        8      127     436.569      163.7    1.1
> >  Force                  8     1001   34241.576    12840.9   87.8
> >  Wait + Comm. F         8     1001      19.486        7.3    0.0
> >  PME mesh               8     1001    4190.758     1571.6   10.7
> >  Write traj.            8        7       1.827        0.7    0.0
> >  Update                 8     1001      12.557        4.7    0.0
> >  Constraints            8     1001      26.496        9.9    0.1
> >  Comm. energies         8     1002      10.710        4.0    0.0
> >  Rest                   8               25.142        9.4    0.1
> > ----------------------------------------------------------------
> >  Total                  8            39004.531    14627.1  100.0
> > ----------------------------------------------------------------
> >  PME redist. X/F        8     3003    3479.771     1304.9    8.9
> >  PME spread/gather      8     4004     277.574      104.1    0.7
> >  PME 3D-FFT             8     4004     378.090      141.8    1.0
> >  PME solve              8     2002      55.033       20.6    0.1
> > ----------------------------------------------------------------

Re: [gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Hi Chris,
thanks for the suggestions.
In the previous mail there is a mistake: it should read
couple-moltype = SOL (for the solvent) and not "Protein_Chain_P".
Now the load balance problem seems reasonable, because
the water box is large, ~9.0 nm.
However, the problem exists and the performance loss is very high, so I have
redone the calculations with these commands:

grompp -f md.mdp -c ../Run-02/confout.gro -t ../Run-02/state.cpt \
       -p ../topo.top -n ../index.ndx -o md.tpr -maxwarn 1

mdrun -s md.tpr -o md

This is part of the md.mdp file:

; Run parameters
; define       = -DPOSRES
integrator     = md
nsteps         = 1000
dt             = 0.002
[..]
free_energy    = yes ; /no
init_lambda    = 0.9
delta_lambda   = 0.0
couple-moltype = SOL ; solvent water
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol= yes

Result for free energy calculation
 Computing:         Nodes   Number    G-Cycles    Seconds     %
-----------------------------------------------------------------
 Domain decomp.         8      126      22.050        8.3    0.1
 DD comm. load          8       15       0.009        0.0    0.0
 DD comm. bounds        8       12       0.031        0.0    0.0
 Comm. coord.           8     1001      17.319        6.5    0.0
 Neighbor search        8      127     436.569      163.7    1.1
 Force                  8     1001   34241.576    12840.9   87.8
 Wait + Comm. F         8     1001      19.486        7.3    0.0
 PME mesh               8     1001    4190.758     1571.6   10.7
 Write traj.            8        7       1.827        0.7    0.0
 Update                 8     1001      12.557        4.7    0.0
 Constraints            8     1001      26.496        9.9    0.1
 Comm. energies         8     1002      10.710        4.0    0.0
 Rest                   8               25.142        9.4    0.1
-----------------------------------------------------------------
 Total                  8            39004.531    14627.1  100.0
-----------------------------------------------------------------
 PME redist. X/F        8     3003    3479.771     1304.9    8.9
 PME spread/gather      8     4004     277.574      104.1    0.7
 PME 3D-FFT             8     4004     378.090      141.8    1.0
 PME solve              8     2002      55.033       20.6    0.1
-----------------------------------------------------------------
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:   1828.385   1828.385    100.0
                       30:28
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:      3.115      3.223      0.095    253.689

Switching off only the free_energy keyword and redoing the calculation, I get:
 Computing:         Nodes   Number    G-Cycles    Seconds     %
-----------------------------------------------------------------
 Domain decomp.         8       77      10.975        4.1    0.6
 DD comm. load          8        1       0.001        0.0    0.0
 Comm. coord.           8     1001      14.480        5.4    0.8
 Neighbor search        8       78     136.479       51.2    7.3
 Force                  8     1001    1141.115      427.9   61.3
 Wait + Comm. F         8     1001      17.845        6.7    1.0
 PME mesh               8     1001     484.581      181.7   26.0
 Write traj.            8        5       1.221        0.5    0.1
 Update                 8     1001       9.976        3.7    0.5
 Constraints            8     1001      20.275        7.6    1.1
 Comm. energies         8      992       5.933        2.2    0.3
 Rest                   8               19.670        7.4    1.1
-----------------------------------------------------------------
 Total                  8             1862.552      698.5  100.0
-----------------------------------------------------------------
 PME redist. X/F        8     2002      92.204       34.6    5.0
 PME spread/gather      8     2002     192.337       72.1   10.3
 PME 3D-FFT             8     2002     177.373       66.5    9.5
 PME solve              8     1001      22.512        8.4    1.2
-----------------------------------------------------------------
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:     87.309     87.309    100.0
                        1:27
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    439.731     23.995      1.981     12.114
Finished mdrun on node 0 Mon Apr  4 16:52:04 2011

Luca




> If we accept your text at face value, then the simulati

[gmx-users] FEP and loss of performance

2011-04-04 Thread Luca Bellucci
Dear all,
when I run a single free energy simulation,
I notice that there is a loss of performance with respect to
normal MD:

free_energy    = yes
init_lambda    = 0.9
delta_lambda   = 0.0
couple-moltype = Protein_Chain_P
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol= yes

   Average load imbalance: 16.3 %
   Part of the total run time spent waiting due to load imbalance: 12.2 %
   Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 %
   Time:   1852.712   1852.712    100.0

free_energy    = no
   Average load imbalance: 2.7 %
   Part of the total run time spent waiting due to load imbalance: 1.7 %
   Time:    127.394    127.394    100.0

It seems that the loss of performance is due in part to the load imbalance
in the domain decomposition; however, I tried to change
these keywords without any benefit.
Any comment is welcome.

Thanks


[gmx-users] Setting the C6 LJ term for OPLSA FF

2011-04-04 Thread Luca Bellucci
Dear all,
I need to change sigma and epsilon for the non-bonded parameters of the OPLS-AA FF.
In particular, I want to set the attractive part of the LJ potential to zero
(C6=0).
I have read the manual, but unfortunately the explanation given there did not
help me. To understand how this works in a reliable way,
I followed Berk's suggestions available at
http://lists.gromacs.org/pipermail/gmx-users/2010-December/056303.html
and decided to report a simple example.

The main rules are in the forcefield.itp file, and for the OPLS-AA FF they are:
 ; nbfunc   comb-rule   gen-pairs   fudgeLJ   fudgeQQ
      1         3         yes         0.5       0.5

The non-bonded force-field parameters for two atoms are in the ffnonbonded.itp
file and look like:

[ atomtypes ]
; name   bond_type  at.num     mass     charge   ptype   sigma   epsilon
 opls_1      C         6     12.01100    0.500     A     sig_1    eps_1
 opls_2      O         8     15.99940   -0.500     A     sig_2    eps_2

From these values I am going to define the non-bonded parameters between a
pair of atoms as:

[ nonbond_params ]
;   i       j     func      SIG_ij             EPS_ij
 opls_1  opls_2    1    (sig_1*sig_2)^1/2  (eps_1*eps_2)^1/2  ; normal behavior
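Purely as a numerical illustration of the geometric combination above
(hypothetical values, not taken from the OPLS files: sig_1 = 0.400 nm,
sig_2 = 0.225 nm, eps_1 = 0.4 kJ/mol, eps_2 = 0.1 kJ/mol):

[ nonbond_params ]
;   i       j     func   SIG_ij    EPS_ij
 opls_1  opls_2    1      0.300     0.200   ; sqrt(0.400*0.225)=0.300, sqrt(0.4*0.1)=0.200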

However, if I want the attractive C6 term of the LJ potential to be zero, I
should make SIG_ij negative:

[ nonbond_params ]
;   i       j     func      SIG_ij              EPS_ij
 opls_1  opls_2    1    -(sig_1*sig_2)^1/2  (eps_1*eps_2)^1/2  ; -sig_ij -> C6=0

Is this right?

Thanks
 Luca



[gmx-users] Setting the C6 LJ term for OPLSA FF

2011-04-01 Thread Luca Bellucci
Dear all,
I need to change sigma and epsilon for the non-bonded parameters of the OPLS-AA FF.
In particular, I want to set the attractive part of the LJ potential to zero
(C6=0).
I have read the manual, but unfortunately the explanation given there did not
help me. To understand how this works in a reliable way,
I am following Berk's suggestions available at
http://lists.gromacs.org/pipermail/gmx-users/2010-December/056303.html
and decided to report a simple example.

The main rules are in the forcefield.itp file, and for the OPLS-AA FF they are:
 ; nbfunc   comb-rule   gen-pairs   fudgeLJ   fudgeQQ
      1         3         yes         0.5       0.5

The non-bonded force-field parameters for two atoms are in the ffnonbonded.itp
file and look like:

[ atomtypes ]
; name   bond_type  at.num     mass     charge   ptype   sigma   epsilon
 opls_1      C         6     12.01100    0.500     A     sig_1    eps_1
 opls_2      O         8     15.99940   -0.500     A     sig_2    eps_2

From these values I am going to define the non-bonded parameters between a
pair of atoms as:

[ nonbond_params ]
;   i       j     func      SIG_ij             EPS_ij
 opls_1  opls_2    1    (sig_1*sig_2)^1/2  (eps_1*eps_2)^1/2  ; normal behavior

However, if I want the attractive C6 term of the LJ potential to be zero, I
will make SIG_ij negative:

[ nonbond_params ]
;   i       j     func      SIG_ij              EPS_ij
 opls_1  opls_2    1    -(sig_1*sig_2)^1/2  (eps_1*eps_2)^1/2  ; -sig_ij -> C6=0

Is this right?

Thanks
 Luca

PS: because I had some problems with gmx-users mail delivery, I decided to
send this mail to the developers list as well.


[gmx-users] couple-moltype for two molecules

2011-03-31 Thread Luca Bellucci
Dear all,
I tried to use the "couple-moltype = Protein Ligand" directive
to annihilate both the protein and the ligand molecules with the free energy method.
I realized that the couple-moltype key works for only one molecule type.
Is that right?

To perform the same annihilation I therefore used
"couple-moltype = Protein" to annihilate the protein and the dual-topology
formalism (defined in the itp file) to annihilate the ligand; a sketch of the
latter is shown below.
Are these two approaches able to work together?
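For illustration, a minimal sketch of what the dual-topology part of the ligand
itp could look like; the atom names, types, charges and masses below are
hypothetical placeholders, and DUM is assumed to be a user-defined dummy atom
type with zero LJ parameters:

[ atoms ]
;  nr    type    resnr  res  atom  cgnr  charge    mass   typeB  chargeB   massB
    1  opls_135    1    LIG   C1     1   -0.180   12.011   DUM    0.000   12.011
    2  opls_140    1    LIG   H1     1    0.060    1.008   DUM    0.000    1.008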
Thanks
Luca

Re: [gmx-users] scaling of replica exchange

2011-02-22 Thread Luca Bellucci
Hi Valeria,
> Dear all,
> I am making some tests to start using replica exchange molecular dynamics
> on my system in water. The setup is ok (i.e. one replica alone runs
> correctly), but I am not able to parallelize the REMD. Details follow:
> 
> - the test is on 8 temperatures, so 8 replicas
> - Gromacs version 4.5.3
> - One replica alone, in 30 minutes with 256 processors, makes 52500 steps.
> 8 replicas with 256x8 = 2048 processors make 300 (!!) steps each = 2400
> in total (I arrived at these numbers just to see some update of the log
> file: since I am running on a big cluster, I cannot use more than half an
> hour for tests with fewer than 512 processors)
> - I am using mpirun with options -np 256 -s  md_.tpr -multi 8 -replex 1000
I think that with these options you are using 256/8 = 32 CPUs for each replica.
If you want to use 256 for each replica, you can try setting the -np option
to 256x8 = 2048, as sketched below.
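A sketch of the corresponding command line, assuming the MPI-enabled binary is
named mdrun_mpi and the per-replica input files follow the md_.tpr naming used
above:

mpirun -np 2048 mdrun_mpi -s md_.tpr -multi 8 -replex 1000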

Luca
> 
> Do you have any idea?
> Thanks in advance
> 
> Valeria
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists