On Sunday, 29 April 2018 6:57:58 PM AEST Mahmood Naderan wrote:
> [root@rocks7 ~]# scontrol show config | fgrep -i rocks7
Ah, I'd forgotten that wouldn't list the NodeName lines from your config file.
Sorry.
> Chris,
> Regarding this section
>
> NodeName=DEFAULT State=UNKNOWN
> NodeName=rock
[root@rocks7 ~]# scontrol show config | fgrep -i rocks7
AccountingStorageHost = rocks7
ControlMachine = rocks7
JobCompHost = rocks7
Slurmctld(primary/backup) at rocks7/(NULL) are UP/DOWN
Chris,
Regarding this section
NodeName=DEFAULT State=UNKNOWN
NodeName=rocks7 NodeAdd
Hi Mahmood,
Not quite what I meant sorry.
What does this say?
scontrol show config | fgrep -i rocks7
cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
SlurmUser=root
SlurmdUser=root
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
CryptoType=crypto/munge
StateSaveLocation=/var/spool/slurm.state
SlurmdSpoolDir=/var/spool/slurmd
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
Pr
On Sunday, 29 April 2018 4:11:39 PM AEST Mahmood Naderan wrote:
> So, I don't know why only 1 core included
What do you have in your slurm.conf for rocks7?
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
[root@rocks7 ~]# slurmd -C
slurmd: Considering each NUMA node as a socket
NodeName=rocks7 CPUs=32 Boards=1 SocketsPerBoard=4 CoresPerSocket=8 ThreadsPerCore=1
RealMemory=64261
UpTime=15-21:30:53
[root@rocks7 ~]# scontrol show node rocks7
NodeName=rocks7 Arch=x86_64 CoresPerSocket=1
CPUAlloc=0 CP
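The mismatch above (`slurmd -C` reports `CoresPerSocket=8` while `scontrol show node` shows `CoresPerSocket=1`) is usually fixed by declaring the node's real topology in slurm.conf. A sketch using the values `slurmd -C` printed, with the `NodeAddr` that appears elsewhere in this thread:

```
# slurm.conf -- sketch based on the slurmd -C output above
NodeName=rocks7 NodeAddr=10.1.1.1 CPUs=32 Boards=1 SocketsPerBoard=4 \
    CoresPerSocket=8 ThreadsPerCore=1 RealMemory=64261 State=UNKNOWN
```

After editing, the daemons need to pick up the change, e.g. via `scontrol reconfigure` or a restart of slurmctld and slurmd.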
On Sunday, 29 April 2018 2:34:09 AM AEST Mahmood Naderan wrote:
> [root@rocks7 ~]# sinfo --list-reasons
> REASON USER TIMESTAMP NODELIST
> Low socket*core*thre root 2018-04-19T16:46:39 rocks7
slurmd thinks that "rocks7" doesn't have enough hardware resources to match how it is defined in slurm.conf.
[root@rocks7 ~]# sinfo --list-reasons
REASON USER TIMESTAMP NODELIST
Low socket*core*thre root 2018-04-19T16:46:39 rocks7
Regards,
Mahmood
On Sat, Apr 28, 2018 at 6:01 PM, Chris Samuel wrote:
> On Saturday, 28 April 2018 7:58:08 PM AEST Mahmood Naderan wrote
On Saturday, 28 April 2018 7:58:08 PM AEST Mahmood Naderan wrote:
> I see that the state of the frontend is Drained. Is that the default
> state?
Probably not. What does "sinfo --list-reasons" say?
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
Hi again
I see that the state of the frontend is Drained. Is that the default
state? The following line
PartitionName=OTHERS AllowAccounts=em1 Nodes=compute-0-[2-3],rocks7
should include all the cores of all the nodes. The computes are set to
idle, but the frontend is drained.
Regards,
Mahmood
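For context: a node drained with a "Low socket*core*thread count" reason does not come back on its own once the resource definition is corrected; it typically has to be resumed by hand. A sketch (run as SlurmUser/root):

```
# clear the drain flag after fixing the node definition in slurm.conf
scontrol update NodeName=rocks7 State=RESUME

# confirm the reason is gone
sinfo --list-reasons
```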
Chris,
So the problem still exists ;)
>Yes, if you are happy
>for the asymmetry then you can do that.
That is the question. MaxCPUsPerNode sets the maximum core count
symmetrically for every node in the partition, so it is not applicable to
asymmetric cases.
Regards,
Mahmood
On Mon, Apr 23
On Sunday, 22 April 2018 12:55:46 PM AEST Mahmood Naderan wrote:
> I think that will limit the other nodes to 20 too. Won't it?
>
> Currently the computes have 32 cores per node and I want all 32 cores. The head
> node also has 32 cores but I want to include only 20 cores.
Apologies, I misunderstood wh
Hello Mahmood,
Am 22.04.2018 um 04:55 schrieb Mahmood Naderan:
> I think that will limit the other nodes to 20 too. Won't it?
you can declare fewer CPUs than are physically available. I do that for our
cluster; it has worked robustly for ages.
> Currently computes have 32 cores per node and I want all 32 core
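The approach Stefan describes, declaring fewer CPUs than the node physically has, would look something like this in slurm.conf (a sketch; only the head node is under-declared, and depending on the Slurm version the `FastSchedule` setting controls whether declared values override what slurmd detects):

```
# sketch: expose only 20 of rocks7's 32 cores to Slurm
NodeName=rocks7 NodeAddr=10.1.1.1 CPUs=20 State=UNKNOWN
NodeName=compute-0-[2-4] CPUs=32 State=UNKNOWN
```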
I think that will limit the other nodes to 20 too. Won't it?
Currently the computes have 32 cores per node and I want all 32 cores. The head
node also has 32 cores but I want to include only 20 cores.
On Sun, Apr 22, 2018, 03:53 Chris Samuel wrote:
>
> All you need to do is add "MaxCPUsPerNode=20" to
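The quoted suggestion refers to the partition-level `MaxCPUsPerNode` option. Applied to the partition shown earlier in the thread, it would look roughly like this (a sketch; note that the limit applies to every node in the partition, which is the asymmetry problem discussed above):

```
# parts file -- sketch: cap job allocations at 20 CPUs on each node in SPEEDY
PartitionName=SPEEDY AllowAccounts=em1 Nodes=compute-0-[2-4],rocks7 MaxCPUsPerNode=20
```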
On Sunday, 22 April 2018 4:41:43 AM AEST Mahmood Naderan wrote:
> Since our head node has 32 cores, I want to add some cores to a
> partition. If I edit the parts file like this
>
> PartitionName=SPEEDY AllowAccounts=em1 Nodes=compute-0-[2-4],rocks7
>
> then it will include all cores.
All you n
Hi,
Since our head node has 32 cores, I want to add some cores to a
partition. If I edit the parts file like this
PartitionName=SPEEDY AllowAccounts=em1 Nodes=compute-0-[2-4],rocks7
then it will include all cores. I think I have to edit slurm.conf like this then
NodeName=rocks7 NodeAddr=10.1.1.1