>Did you do an scontrol reconfigure?
Thank you. That solved the issue.
Regards,
Mahmood
What services did you restart after changing the slurm.conf? Did you do an
scontrol reconfigure?
Do you have any reservations? scontrol show res
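For reference, a rough sketch of the usual sequence after editing slurm.conf
(exact service handling depends on the site; some changes need a
slurmctld/slurmd restart rather than a reconfigure):
scontrol reconfigure          # push the edited slurm.conf to the daemons
sinfo -N -l                   # check node states (drained, down, etc.)
scontrol show reservation     # list reservations that could block the job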
Sean
On Tue, 17 Dec. 2019, 10:35 pm Mahmood Naderan
<mahmood...@gmail.com> wrote:
>Your running job is requesting 6 CPUs per node (4 nodes, 6 CPUs per
>node). That means 6 CPUs are being used on node hpc.
>Your queued job is requesting 5 CPUs per node (4 nodes, 5 CPUs per node).
>In total, if it was running, that would require 11 CPUs on node hpc. But
>hpc only has 10 cores, so it stays pending.
Dear Mahmood,
I'm not aware of any nodes that have 32, or even 10, sockets. Are you
sure you want to use the cluster like that?
Best
Marcus
On 12/17/19 10:03 AM, Mahmood Naderan wrote:
> Please see the latest update
> # for i in {0..2}; do scontrol show node compute-0-$i | grep
> RealMemory; done && scontrol show node hpc | grep RealMemory
Hi Mahmood,
Your running job is requesting 6 CPUs per node (4 nodes, 6 CPUs per node). That
means 6 CPUs are being used on node hpc.
Your queued job is requesting 5 CPUs per node (4 nodes, 5 CPUs per node). In
total, if it was running, that would require 11 CPUs on node hpc. But hpc only
has 10 cores, so it stays pending.
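(A quick way to double-check this outside the thread: compare allocated
versus total CPUs on the node, and look at the job's per-node CPU layout.)
scontrol show node hpc | grep -Eo 'CPUAlloc=[0-9]+|CPUTot=[0-9]+'   # cores in use vs. available
scontrol show -d job <jobid>                                        # per-node CPU_IDs of the job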
Please see the latest update
# for i in {0..2}; do scontrol show node compute-0-$i | grep RealMemory;
done && scontrol show node hpc | grep RealMemory
RealMemory=64259 AllocMem=1024 FreeMem=57163 Sockets=32 Boards=1
RealMemory=120705 AllocMem=1024 FreeMem=97287 Sockets=32 Boards=1
RealMem
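(The same memory figures can also be pulled as one table, assuming a sinfo
recent enough to support these output format fields:)
sinfo -N -O NodeList,Memory,AllocMem,FreeMem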
Dear Mahmood,
could you please show the output of
scontrol show -d job 119
Best
Marcus
On 12/16/19 5:41 PM, Mahmood Naderan wrote:
> Excuse me, I still have the problem. Although I freed memory on the nodes
> as shown below
> RealMemory=64259 AllocMem=1024 FreeMem=61882 Sockets=32 Boards=1
> RealMemory
Excuse me, I still have the problem. Although I freed memory on the nodes as
shown below
RealMemory=64259 AllocMem=1024 FreeMem=61882 Sockets=32 Boards=1
RealMemory=120705 AllocMem=1024 FreeMem=115257 Sockets=32 Boards=1
RealMemory=64259 AllocMem=26624 FreeMem=61795 Sockets=32 Boards=1
RealMemor
Sent: Monday, December 16, 2019 07:56
To: Slurm User Community List
Subject: Re: [slurm-users] Question about memory allocation
>your job will only be runnable on nodes that offer at least 200 GB main memory
>(sum of memory on all sockets/CPUs of the node)
But according to the manual
--mem=<size>[units]: Specify the real memory required per node.
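To make the distinction concrete, two illustrative header fragments (the
numbers are chosen only for the example):
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
#SBATCH --mem=10GB            # per node: 10 GB on each node, 40 GB across the allocation

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
#SBATCH --mem-per-cpu=2GB     # per allocated CPU (one CPU per task here): 5 x 2 GB = 10 GB per node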
From: slurm-users on behalf of Mahmood Naderan
Sent: Monday, December 16, 2019 07:19
To: Slurm User Community List
Subject: [slurm-users] Question about memory allocation
Hi,
If I write
#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
A follow up to the previous email.
Current state of memory of nodes are
RealMemory=64259 AllocMem=1024 FreeMem=38620 Sockets=32 Boards=1
RealMemory=120705 AllocMem=1024 FreeMem=309 Sockets=32 Boards=1
RealMemory=64259 AllocMem=1024 FreeMem=59334 Sockets=32 Boards=1
RealMemory=64259 Al
Hi,
If I write
#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
will it reserve (look for) 200GB of memory for the job? Or is this the
hard limit on the memory required by the job?
Regards,
Mahmood
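For context, the directives in question as a complete minimal script (the
srun line and ./my_app are placeholders, not part of the original message):
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
#SBATCH --mem=10GB        # per the sbatch documentation, a per-node amount, not a job total
srun ./my_app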