Yes, I want a single 32-core MPI job.  No, that does not work.  To test
this, I created a partition (called "test") that contains one 12-core node
and one 20-core node.
sbatch -p test --ntasks=32 lmpFission.pka1.sh
The job will run on 12 cores on each node.  If I rearrange the node order in
the "test" partition so that the BatchHost is a 20-core node, I get a similar
problem, except the job will run on 20 cores on each node, causing
oversubscription on the 12-core node.
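
For reference, this is the sort of minimal script one could use to confirm
where the tasks actually land (a sketch only; the partition name and task
count just mirror the test job above, the rest is standard Slurm/shell):

    #!/bin/bash
    #SBATCH -p test
    #SBATCH --ntasks=32

    # Show what Slurm actually allocated to this job (node list, CPUs per node).
    scontrol show job $SLURM_JOB_ID

    # Launch one task per requested task slot and count how many land on each node.
    srun hostname | sort | uniq -c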
Neither does

sbatch -p test --ntasks=32 --hint=compute_bound --hint=nomultithread -s lmpFission.pka1.sh

work.
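
One possible workaround (just a sketch, assuming the "arbitrary" task
distribution and the SLURM_HOSTFILE mechanism behave as documented in our
version; the node names n001/n002 and the MPI program name are placeholders)
would be to lay the tasks out explicitly in a hostfile and launch with srun
inside the batch script:

    #!/bin/bash
    #SBATCH -p test
    #SBATCH --nodes=2
    #SBATCH --ntasks=32
    #SBATCH --exclusive

    # Build a hostfile with one line per task: 12 lines for the 12-core node
    # and 20 lines for the 20-core node (node names are assumed here).
    HOSTFILE=$SLURM_SUBMIT_DIR/hosts.$SLURM_JOB_ID
    rm -f "$HOSTFILE"
    for i in $(seq 12); do echo n001 >> "$HOSTFILE"; done
    for i in $(seq 20); do echo n002 >> "$HOSTFILE"; done

    # The "arbitrary" distribution places tasks in the order listed in
    # SLURM_HOSTFILE instead of spreading them evenly across the nodes.
    export SLURM_HOSTFILE=$HOSTFILE
    srun -n 32 -m arbitrary ./my_mpi_program

The --exclusive request is only there so that both nodes in the partition are
allocated in full before the hostfile layout is applied; I have not tested
this, so treat it as an idea rather than a fix.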

Is the behaviour I am seeing unusual?  Will upgrading fix this?  I
downloaded and untarred the 14.03.6 bz2 file, but I can see that upgrading
is not straightforward in my case, so I don't want to upgrade until I have
reasonable assurance that the new version is supposed to work the way I want.


On Tue, Jul 22, 2014 at 8:48 PM, Christopher Samuel <sam...@unimelb.edu.au>
wrote:

>
> On 19/07/14 00:29, Andrew Petersen wrote:
>
> > Let's say my heterogeneous cluster has n001 with 12 cores and n002 with
> > 20 cores.  How do I get Slurm to run a job on 12 cores of node 1 and
> > 20 cores on node 2?
>
> I'm assuming you want a single MPI job using 32 cores across both nodes?
>
> Does --ntasks=32 (and no node specification) not work for that?
>
> cheers,
> Chris
> --
>  Christopher Samuel        Senior Systems Administrator
>  VLSCI - Victorian Life Sciences Computation Initiative
>  Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
>  http://www.vlsci.org.au/      http://twitter.com/vlsci
>
