Dear Lev,

I have already tried the --mem parameter with different values.

For example:

sbatch --mem=5GB submit_job
sbatch --mem=18000 submit_job

but unfortunately it gave the same error every time.
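
In case it helps, this is roughly how I understand the directive would be
set inside the batch script itself (just a sketch; my actual submit_job
script contains the real SVM commands, and the names below are
placeholders):

#!/bin/bash
#SBATCH --job-name=svm_predict        # placeholder job name
#SBATCH --ntasks=1                    # single task
#SBATCH --mem=18G                     # memory limit for the whole job
#SBATCH --time=01:00:00               # wall-clock limit

# placeholder for the actual prediction command
./run_svm_predictions.sh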



On Thu, Aug 24, 2017 at 2:32 AM, Lev Lafayette <lev.lafaye...@unimelb.edu.au> wrote:

> On Wed, 2017-08-23 at 01:26 -0600, Sema Atasever wrote:
>
> >
> >
> > Computing predictions by SVM...
> > slurmstepd: Job 3469 exceeded memory limit (4235584 > 2048000), being
> > killed
> > slurmstepd: Exceeded job memory limit
> >
> >
> > How can I fix this problem?
> >
>
> Error messages often give useful information. In this case you haven't
> requested enough memory in your Slurm script.
>
> Memory can be set with the `#SBATCH --mem=[mem][M|G|T]` directive (for the
> entire job) or `#SBATCH --mem-per-cpu=[mem][M|G|T]` (per core).
>
> As a rule of thumb, the maximum request per node should be based around
> total cores - 1, leaving a share for system processes.
>
> All the best,
>
>
> --
> Lev Lafayette, BA (Hons), GradCertTerAdEd (Murdoch), GradCertPM, MBA
> (Tech Mngmnt) (Chifley)
> HPC Support and Training Officer +61383444193 +61432255208
> Department of Infrastructure Services, University of Melbourne
>
>
