Hi,

On Fri, May 8, 2015 at 3:01 PM Malcolm Tobias <mtob...@wustl.edu> wrote:

>
> Hi Mark,
>
> On Friday 08 May 2015 11:51:03 Mark Abraham wrote:
> > >
> > > I'm attempting to build gromacs on a new cluster and following the same
> > > recipes that I've used in the past, but encountering a strange
> behavior:
> > > It claims to be using both MPI and OpenMP, but I can see by 'top' and
> the
> > > reported core/walltime that it's really only generating the MPI
> processes
> > > and no threads.
> > >
> >
> > I wouldn't take the output from top completely at face value. Do you get
> > the same performance from -ntomp 1 as -ntomp 4?
>
> I'm not relying on top. I also mentioned that the core/walltime as
> reported by Gromacs suggests that it's only utilizing 2 cores.  I've also
> been comparing the performance to an older cluster.
>

What kind of simulation are you testing with? With the GPU doing the
non-bonded work, a reaction-field water box leaves almost nothing for the
CPU to do, so you'd see essentially no change with the number of threads.
Check with your users, but a PME test case is usually a more representative
benchmark.
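
For example (just a sketch; the binary name, launcher, and input file are
whatever your installation and test system actually use), running the same
PME input with -ntomp 1 and -ntomp 4 and comparing the ns/day reported in
the log files will show whether the extra threads are doing anything:

  mpirun -np 2 mdrun_mpi -ntomp 1 -s pme_test.tpr -deffnm bench_ntomp1
  mpirun -np 2 mdrun_mpi -ntomp 4 -s pme_test.tpr -deffnm bench_ntomp4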

> > > We're running a heterogeneous environment, so I tend to build with
> > > MPI/OpenMP/CUDA and the Intel compilers, but I'm seeing this same sort
> of
> > > behavior with the GNU compilers.  Here's how I'm configuring things:
> > >
> > > [root@login01 build2]# cmake -DGMX_FFT_LIBRARY=mkl -DGMX_MPI=ON
> > > -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda -DGMX_OPENMP=ON
> > > -DCMAKE_INSTALL_PREFIX=/act/gromacs-4.6.7_take2 .. | tee cmake.out
> > >
> >
> > You only need root access for "make install", and not for anything else.
>
> Yes Mark, I ran 'make install' as root.
>
> > Routinely using root means you've probably hosed your system some time...
>
> In 20+ years of managing Unix systems I've managed to hose many a system.
>
> > > Using 2 MPI processes
> > > Using 4 OpenMP threads per MPI process
> > >
> > > although I do see this warning:
> > >
> > > Number of CPUs detected (16) does not match the number reported by
> OpenMP
> > > (1).
> > >
> >
> > Yeah, that happens. There's not really a well-defined standard, so once
> the
> > OS, MPI and OpenMP libraries all combine, things can get messy.
>
> Understood.  On top of that, we're using CPUSETs with our queuing system,
> which can interfere with how the tasks are distributed.  I've tried running
> the job outside of the queuing system and have seen the same behavior.
>

OK. Well, that 1 reported by mdrun is literally the return value of
omp_get_num_procs(), so the solution is to find which part of the ecosystem
is restricting that to 1 and give it a slap ;-) IIRC using -ntomp 4 means
mdrun will go ahead and use 4 threads anyway, but it would be good to fix
the wider problem.
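
A quick way to see what mdrun's environment looks like (hypothetical
commands, nothing GROMACS-specific; run them from inside a job script and
from a plain login shell and compare) is:

  nproc                                      # CPUs this shell is allowed to use
  grep Cpus_allowed_list /proc/self/status   # the cpuset/affinity mask
  echo $OMP_NUM_THREADS                      # a module or MPI wrapper may export this

If the MPI launcher is binding each rank to a single core by default (some
Open MPI versions do; mpirun --report-bindings will show it), then
omp_get_num_procs() inside each rank will indeed report 1.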


> > But if people go around using root routinely... ;-)
>
> As soon as I figure out how to manage a computing cluster without becoming
> root I'll let you know  ;-)
>

Sure, you need root access for the install. You don't need it for running
cmake, which executes a pile of untrusted code ;-)
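
That is, something like this (just a sketch, reusing the paths from your
cmake line) keeps root out of everything except the final step:

  # as an unprivileged user, in the build directory
  cmake -DGMX_FFT_LIBRARY=mkl -DGMX_MPI=ON -DGMX_GPU=ON \
        -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda -DGMX_OPENMP=ON \
        -DCMAKE_INSTALL_PREFIX=/act/gromacs-4.6.7_take2 ..
  make -j 8
  # only the install into /act needs elevated rights
  sudo make install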

> I've got dozens of Gromacs users, so I'm attempting to build the fastest,
> most versatile binary that I can.  Any help that people can offer is
> certainly appreciated.
>

YMMV, but hyperthreads were generally not useful with GROMACS 4.6. That is
changing with newer hardware and newer GROMACS versions, however.
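
If you want to check whether the 16 logical CPUs in that warning include
hyperthreads, something like

  lscpu | egrep 'Thread|Core|Socket'

will show it (standard util-linux, nothing GROMACS-specific). If it reports
2 threads per core, then 2 ranks x 4 threads already covers the 8 physical
cores, and pushing onto the hyperthreads is unlikely to help with 4.6.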

Mark


> Cheers,
> Malcolm
>
>
> --
> Malcolm Tobias
> 314.362.1594
>
>
