Malcolm,
On Mon, May 11, 2015 at 4:23 PM, Malcolm Tobias wrote:
>
> Szilárd,
>
> On Friday 08 May 2015 21:18:12 Szilárd Páll wrote:
>> >> What is your goal with using CPUSETs? Node sharing?
>> >
>> > Correct. While it might be possible to see the cores that have been
>> > assigned to the job and do the correct 'pin setting' it would probably
>> > be ugly. [...]
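A minimal sketch of that "see the cores and do the pin setting" idea, not taken from the thread itself: it assumes Linux cpusets, the GROMACS >= 4.6 pinning flags, and a contiguous core range, and the variable names are hypothetical.

  # Ask the kernel which cores this job's cpuset allows, e.g. "4-7"
  ALLOWED=$(awk '/Cpus_allowed_list/ {print $2}' /proc/self/status)
  # First allowed core; only correct for a contiguous range
  OFFSET=${ALLOWED%%-*}
  # Let mdrun pin its threads, shifted onto the job's cores
  mdrun -ntomp 4 -pin on -pinoffset "$OFFSET"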
Mark,
On Friday 08 May 2015 15:15:31 Mark Abraham wrote:
> > FWIW, I ran the same GROMACS run outside of the queuing system to verify
> > that the CPUSETs were not causing the issue.
> >
>
> MPI gets a chance to play with OMP_NUM_THREADS (and pinning!), too, so your
> tests suggest the issue lies [...]
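One way to take the launcher out of the picture, sketched under the assumption of Open MPI's mpirun (the thread never names the MPI library) and an MPI-enabled binary called mdrun_mpi; the rank and thread counts are illustrative.

  # Stop Open MPI from binding ranks itself so it cannot fight mdrun's
  # pinning, and export the OpenMP thread count to every rank
  mpirun --bind-to none -np 2 -x OMP_NUM_THREADS=4 mdrun_mpi -ntomp 4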
Szilárd,
On Friday 08 May 2015 21:18:12 Szilárd Páll wrote:
> >> What is your goal with using CPUSETs? Node sharing?
> >
> > Correct. While it might be possible to see the cores that have been
> > assigned to the job and do the correct 'pin setting' it would probably be
> > ugly.
>
> Not sure [...]
On Fri, May 8, 2015 at 8:44 PM, Malcolm Tobias wrote:
>
> Szilárd,
>
> On Friday 08 May 2015 20:25:09 Szilárd Páll wrote:
>> > I wouldn't expect the CPUSETs to be problematic, I've been using them with
>> > GROMACS for over a decade now ;-)
>>
>> Thread affinity setting within mdrun has been employed since v4.6 and
>> we do it on a per-thread basis and not doing [...]
Szilárd,
On Friday 08 May 2015 20:25:09 Szilárd Páll wrote:
> > I wouldn't expect the CPUSETs to be problematic, I've been using them with
> > GROMACS for over a decade now ;-)
>
> Thread affinity setting within mdrun has been employed since v4.6 and
> we do it on a per-thread basis and not doing [...]
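The practical consequence, shown with illustrative values and an assumed mdrun_mpi binary name: either mdrun owns thread placement or the outside mechanism (cpuset, MPI launcher) does, not both.

  # mdrun sets per-thread affinity itself
  mdrun_mpi -ntomp 4 -pin on
  # placement left entirely to the cpuset / MPI launcher
  mdrun_mpi -ntomp 4 -pin off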
On Fri, May 8, 2015 at 4:45 PM, Malcolm Tobias wrote:
>
> Szilárd,
>
> On Friday 08 May 2015 15:56:12 Szilárd Páll wrote:
>> What's being utilized vs what's being started are different things. If
>> you don't believe the mdrun output - which is quite likely not wrong
>> about the 2 ranks x 4 threads -, use your favorite tool to check the
>> number of ranks and threads [...]
On Fri, May 8, 2015 at 4:28 PM Malcolm Tobias wrote:
>
> Mark,
>
> On Friday 08 May 2015 13:48:30 Mark Abraham wrote:
>
> > What kind of simulation are you testing with? A reaction-field water box
> > will have almost nothing to do on the CPU, so no real change with
> > #threads. Check with your users, but a PME test case is often more
> > appropriate. [...]
Szilárd,
On Friday 08 May 2015 15:56:12 Szilárd Páll wrote:
> What's being utilized vs what's being started are different things. If
> you don't believe the mdrun output - which is quite likely not wrong
> about the 2 ranks x 4 threads -, use your favorite tool to check the
> number of ranks and threads [...]
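One such tool is ps in thread mode; the process name mdrun is an assumption (a given install may call it mdrun_mpi or gmx).

  # One line per thread, with the CPU (PSR) it last ran on;
  # 2 ranks x 4 OpenMP threads should show 8 lines across 2 PIDs
  ps -eLo pid,tid,psr,pcpu,comm | grep mdrun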
Mark,
On Friday 08 May 2015 13:48:30 Mark Abraham wrote:
> What kind of simulation are you testing with? A reaction-field water box
> will have almost nothing to do on the CPU, so no real change with #threads.
> Check with your users, but a PME test case is often more appropriate.
I have no idea [...]
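For concreteness, a sketch of what switching a test case to PME means at the input level; coulombtype, rcoulomb, and fourierspacing are real .mdp keywords, but the file name and cutoff values here are only illustrative.

  # Move electrostatics from reaction-field to PME so the CPU has real
  # work and thread scaling becomes visible
  printf '%s\n' \
      'coulombtype    = PME' \
      'rcoulomb       = 1.0' \
      'fourierspacing = 0.12' >> test.mdp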
On Fri, May 8, 2015 at 2:50 PM, Malcolm Tobias wrote:
>
> Hi Mark,
>
> On Friday 08 May 2015 11:51:03 Mark Abraham wrote:
>> >
>> > I'm attempting to build GROMACS on a new cluster and following the same
>> > recipes that I've used in the past, but encountering a strange behavior:
>> > It claims [...]
Hi,
On Fri, May 8, 2015 at 3:01 PM Malcolm Tobias wrote:
>
> Hi Mark,
>
> On Friday 08 May 2015 11:51:03 Mark Abraham wrote:
> > >
> > > I'm attempting to build GROMACS on a new cluster and following the same
> > > recipes that I've used in the past, but encountering a strange behavior:
> > > [...]
Hi Mark,
On Friday 08 May 2015 11:51:03 Mark Abraham wrote:
> >
> > I'm attempting to build GROMACS on a new cluster and following the same
> > recipes that I've used in the past, but encountering a strange behavior:
> > It claims to be using both MPI and OpenMP, but I can see by 'top' and the
> > reported core/walltime that it's really only generating the MPI processes [...]
Hi,
On Thu, May 7, 2015 at 6:16 PM Malcolm Tobias wrote:
>
> All,
>
> I'm attempting to build GROMACS on a new cluster and following the same
> recipes that I've used in the past, but encountering a strange behavior:
> It claims to be using both MPI and OpenMP, but I can see by 'top' and the
> reported core/walltime that it's really only generating the MPI processes [...]
All,
I'm attempting to build GROMACS on a new cluster and following the same
recipes that I've used in the past, but encountering a strange behavior: It
claims to be using both MPI and OpenMP, but I can see by 'top' and the reported
core/walltime that it's really only generating the MPI processes [...]
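For reference, a minimal hybrid launch of the kind being debugged in this thread, with illustrative counts and an assumed mdrun_mpi binary name. Pressing 'H' inside top toggles the thread view, so 2 ranks x 4 threads should appear as 8 busy lines rather than 2.

  export OMP_NUM_THREADS=4
  mpirun -np 2 mdrun_mpi -ntomp 4 -v -deffnm test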