case was just to rebuild Gromacs using mvapich2 and
everything appears to be behaving normally.
Thanks for everyone's help on this.
Cheers,
Malcolm
--
Malcolm Tobias
314.362.1594
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
end, I think I can live without the performance increase of pinning the
threads. Since the threads will be confined to the CPUSET, I'm guessing they
are less likely to migrate.
Cheers,
Malcolm
--
CPUSETs? Node sharing?
Correct. While it might be possible to see the cores that have been assigned
to the job and do the correct pin setting, it would probably be ugly.
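The approach described above could be sketched as follows. This is an assumed, Linux-only illustration (not Malcolm's actual setup): `os.sched_getaffinity` reads the cores the CPUSET grants, and `-pin`, `-pinoffset`, and `-ntomp` are real mdrun options, but the surrounding launcher logic is hypothetical.

```python
import os

# Cores this process may run on, as granted by the job's CPUSET
# (os.sched_getaffinity is Linux-only).
allowed = sorted(os.sched_getaffinity(0))
print("allowed cores:", allowed)

# mdrun's -pinoffset counts logical cores from zero, so an explicit pin
# only maps cleanly when the CPUSET is one contiguous block of cores.
offset = allowed[0]
contiguous = allowed == list(range(offset, offset + len(allowed)))

if contiguous:
    # Hypothetical invocation; -ntomp should match the job's thread count.
    print(f"mdrun -ntomp {len(allowed)} -pin on -pinoffset {offset}")
else:
    # Scattered cores can't be expressed with a single offset,
    # which is exactly why this gets ugly.
    print("non-contiguous CPUSET; explicit pinning gets ugly")
```

On a shared node the scheduler may hand out non-contiguous cores, in which case no single `-pinoffset` works, matching the "it would probably be ugly" assessment.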
Cheers,
Malcolm
--
[top(1) per-core output, truncated; only the last line survives]
Cpu7 : 60.3%us, 0.3%sy, 0.0%ni, 39.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Weird. I wonder if anyone else has experience using pinning with CPUSETs?
Malcolm
our queuing system which
can interfere with how the tasks are distributed. I've tried running the job
outside of the queuing system and have seen the same behavior.
> But if people go around using root routinely... ;-)
As soon as I figure out how to manage a computing cluster without becom
of CPUs detected (16) does not match the number reported by OpenMP (1).
I'm not sure how to proceed with debugging this, so any suggestions would be
helpful.
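For reference, the mismatch that warning describes can be checked directly. A minimal sketch, assuming a Linux node where the CPUSET restricts the process: `os.cpu_count()` stands in for what GROMACS detects on the node, and the affinity mask approximates what the OpenMP runtime sees at startup.

```python
import os

detected = os.cpu_count()                # every core on the node
allowed = len(os.sched_getaffinity(0))   # cores the CPUSET actually grants

# If the process starts confined to a single core, the OpenMP runtime
# will report 1 processor even though the node has 16.
if detected != allowed:
    print(f"detected {detected} CPUs, but only {allowed} "
          f"available to this process")
else:
    print(f"all {detected} CPUs available to this process")
```

If the two numbers differ, the affinity mask was narrowed before the OpenMP runtime initialized, which is consistent with a CPUSET (or a pinning MPI launcher) confining the rank to one core.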
Thanks in advance,
Malcolm