By default, each task is scheduled on its own hardware thread. In your second case you are asking for only one task per core, so this is as expected. More information is available here: http://www.schedmd.com/slurmdocs/mc_support.html
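The arithmetic behind this: with CR_Core, Slurm allocates whole cores, and each task occupies one hardware thread, so the number of CPUs visible in cpuset.cpus is the task count rounded up to a multiple of ThreadsPerCore. A minimal sketch of that rounding in plain shell arithmetic (the helper name `cpus_in_cpuset` is made up for illustration, not a Slurm command):

```shell
#!/bin/sh
# Sketch of the allocation arithmetic under SelectTypeParameters=CR_Core:
# tasks land on hardware threads, but whole cores are allocated, so the
# cpuset size is rounded up to a multiple of ThreadsPerCore.
# Usage: cpus_in_cpuset NTASKS THREADS_PER_CORE   (hypothetical helper)
cpus_in_cpuset() {
    ntasks=$1
    tpc=$2
    cores=$(( (ntasks + tpc - 1) / tpc ))   # ceil(ntasks / tpc) cores
    echo $(( cores * tpc ))                 # threads visible in cpuset.cpus
}

cpus_in_cpuset 2 2   # sbatch -n 2 with ThreadsPerCore=2 -> 2 CPUs, e.g. "0-1"
cpus_in_cpuset 3 2   # sbatch -n 3 -> a second core, 4 CPUs, i.e. "0-3"
```

This matches the behavior reported above: with ThreadsPerCore=2, `-n 2` fits on one core's two threads, and only `-n 3` or `-n 4` spills onto the second core.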
Quoting David Gabriel Simas <dsi...@stanford.edu>:

> With "Sockets=1 CoresPerSocket=2 ThreadsPerCore=2", when I submit a job
> with "sbatch -n 2 ...", cpuset.cpus in
> /cgroup/cpuset/slurm/uid_1000/job_(something)/ will have the value "0-1"
> or "2-3". To get "0-3" there, I need to use "sbatch -n 3" or
> "sbatch -n 4".
>
> However, with "ThreadsPerCore=1", slurm seems to do the right(?) thing:
> "sbatch -n 2" gets me "0-3" in /cgroup/cpuset/.../cpuset.cpus, and
> "sbatch -n 3" or "sbatch -n 4" yields an error.
>
> DGS
>
> ----- Original Message -----
>>
>> I used the same settings you described, on a node with 2 cores and 2
>> threads per core, like this:
>>
>>   NodeName=n0 NodeAddr=xxxxx Sockets=1 CoresPerSocket=2 ThreadsPerCore=2
>>
>> Again, I suggest you check your node and partition definitions. Also,
>> what are you looking at that makes you think you "never get more than
>> one core"?
>>
>> Martin
>>
>> [slurm-dev] Re: Cgroups and cpusets
>> From: David Gabriel Simas <dsi...@stanford.edu>
>> To: "slurm-dev" <slurm-dev@schedmd.com>
>> Date: 03/20/2012 11:21 AM
>>
>> Could you send me your slurm.conf and cgroup.conf files?
>>
>> DGS
>>
>> ----- Original Message -----
>>>
>>> David,
>>>
>>> Based on the information you provide, I would expect all of your
>>> examples to result in an allocation of both cores. I just ran a
>>> similar example and I get the expected behavior. Perhaps there's
>>> something wrong with your node or partition definitions.
>>> Martin
>>>
>>> [slurm-dev] Cgroups and cpusets
>>> From: David Gabriel Simas <dsi...@stanford.edu>
>>> To: "slurm-dev" <slurm-dev@schedmd.com>
>>> Date: 03/19/2012 03:01 PM
>>>
>>> Hello,
>>>
>>> I'm testing Slurm 2.4.0.pre3 on a Fedora 16 system, trying
>>> to understand how cgroups and cpusets work. My slurm.conf
>>> file contains
>>>
>>>   ProctrackType=proctrack/cgroup
>>>   TaskPlugin=task/cgroup
>>>   TaskPluginParam=Cpusets,Verbose
>>>   SelectType=select/cons_res
>>>   SelectTypeParameters=CR_Core_Memory
>>>
>>> and my cgroup.conf file contains
>>>
>>>   ConstrainCores=yes
>>>   TaskAffinity=no
>>>
>>> With this configuration, jobs are limited to a single core:
>>> no matter how many threads or processes the job launches,
>>> they all run on the same core. That's the default behavior
>>> I want.
>>>
>>> However, I can't seem to get more than one core. I've tried
>>>
>>>   sbatch -n 2 ...
>>>   sbatch -c 2 ...
>>>   sbatch -n 2 --ntasks-per-core=1 ...
>>>   sbatch -n 2 -c 2 ...
>>>
>>> and a few others. I never get more than one core. (My test
>>> system is a dual-core laptop with hyperthreads enabled.)
>>>
>>> Is this expected behavior?
>>>
>>> DGS
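Rather than reading the cgroup tree by hand, one way to see which CPUs a job was actually confined to is to print the kernel's view from inside the batch script itself. A minimal sketch (the script name is hypothetical; the cgroup-applied cpuset shows up in the task's allowed-CPU list in /proc):

```shell
#!/bin/sh
#SBATCH -n 2
# check_cpus.sh -- hypothetical test script: print the CPUs this job
# is actually confined to, as seen by the kernel.

# The cpuset applied by task/cgroup is reflected in the allowed list:
grep Cpus_allowed_list /proc/self/status

# Equivalent view via the scheduler affinity mask, if util-linux is present:
command -v taskset >/dev/null && taskset -cp $$
```

With the ThreadsPerCore=2 configuration above, `sbatch -n 2 check_cpus.sh` would be expected to report a two-thread list such as "0-1", matching the cpuset.cpus contents quoted earlier.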