Sorry Mahmood,

10 GB per node is requested, not 200 GB per node. Across all nodes this adds 
up to 40 GB in total, since you request 4 nodes. The number of tasks per node 
does not affect this limit.
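
For example, once the job is submitted you can verify the request with 
scontrol (a minimal sketch; the job ID and output values are illustrative, and 
exact field names may vary slightly between Slurm versions):

$ scontrol show job 12345 | grep -E 'NumNodes|TRES|MinMemoryNode'
   NumNodes=4 NumCPUs=20 NumTasks=20 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=20,mem=40G,node=4
   MinCPUsNode=1 MinMemoryNode=10G MinTmpDiskNode=0

MinMemoryNode shows the 10 GB per-node request, and the mem figure in TRES 
the 40 GB total.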


Best ;-)
Sebastian



Sebastian Kraus
Team IT am Institut für Chemie
Gebäude C, Straße des 17. Juni 115, Raum C7

Technische Universität Berlin
Fakultät II
Institut für Chemie
Sekretariat C3
Straße des 17. Juni 135
10623 Berlin


Tel.: +49 30 314 22263
Fax: +49 30 314 29309
Email: sebastian.kr...@tu-berlin.de

________________________________
From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Mahmood 
Naderan <mahmood...@gmail.com>
Sent: Monday, December 16, 2019 07:56
To: Slurm User Community List
Subject: Re: [slurm-users] Question about memory allocation

>Your job will only be runnable on nodes that offer at least 200 GB main 
>memory (sum of memory on all sockets/CPUs of the node)

But according to the manual:

--mem=<size[units]>
Specify the real memory required per node.

So, with

#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

the total requested memory should be 40 GB, not 200 GB.
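
If I instead wanted the memory request to scale with the task count, my 
understanding of man sbatch is that the per-CPU form would do that (assuming 
one CPU per task):

#SBATCH --mem-per-cpu=10GB    # 4 nodes x 5 tasks x 10 GB = 200 GB in total

which would give the 200 GB figure, while --mem stays per node.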

Regards,
Mahmood




On Mon, Dec 16, 2019 at 10:19 AM Mahmood Naderan 
<mahmood...@gmail.com> wrote:
>No, this indicates the amount of resident/real memory requested per node. 
>Your job will only be runnable on nodes that offer at least 200 GB main 
>memory (sum of memory on all sockets/CPUs of the node). Please also have a 
>closer look at man sbatch.


Thanks.
Regarding the status of the nodes, I see:
    RealMemory=120705 AllocMem=1024 FreeMem=309 Sockets=32 Boards=1

The question is: why is FreeMem so low while AllocMem is far less than 
RealMemory?
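
For what it is worth, free -m on one of the nodes reports something like this 
(figures illustrative, not copied from the cluster):

$ free -m
              total        used        free      shared  buff/cache   available
Mem:         120705        2210         309         120      118186      117496

so I wonder whether the low FreeMem simply mirrors the OS free figure, with 
most of the memory sitting in buff/cache rather than in job allocations.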

Regards,
Mahmood




On Mon, Dec 16, 2019 at 10:12 AM Kraus, Sebastian 
<sebastian.kr...@tu-berlin.de> wrote:

Hi Mahmood,


>> will it reserve (look for) 200 GB of memory for the job? Or is this the 
>> hard limit on the memory required by the job?


No, this indicates the amount of resident/real memory requested per node. 
Your job will only be runnable on nodes that offer at least 200 GB main 
memory (sum of memory on all sockets/CPUs of the node). Please also have a 
closer look at man sbatch.

Best
Sebastian



Sebastian Kraus
Team IT am Institut für Chemie

Technische Universität Berlin
Fakultät II
Institut für Chemie
Sekretariat C3
Straße des 17. Juni 135
10623 Berlin

Email: sebastian.kr...@tu-berlin.de

________________________________
From: slurm-users <slurm-users-boun...@lists.schedmd.com> on behalf of Mahmood 
Naderan <mahmood...@gmail.com>
Sent: Monday, December 16, 2019 07:19
To: Slurm User Community List
Subject: [slurm-users] Question about memory allocation

Hi,
If I write

#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

will it reserve (look for) 200 GB of memory for the job? Or is this the hard 
limit on the memory required by the job?

Regards,
Mahmood

