Hi Danny,
On 08/08/13 04:08, Danny Auble wrote:
> Just a note, if srun isn't used to launch a task the odds of
> accounting for the step being correct are very low. Using srun is
> the only "known" way to always guarantee accounting for steps to be accurate.
Hello,
Is there a way to increase a users priority on a specific partition
without using qos?
When this user runs jobs on this partition I want their jobs to have a
greater priority than all of the other jobs on this partition.
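One partial approach, sketched below, is to raise the partition's own priority factor under the multifactor priority plugin. Note this boosts all jobs on that partition, not just one user's; a truly per-user boost generally needs QOS or association settings. Partition and node names here are placeholders:

```
# slurm.conf (sketch, assuming the multifactor priority plugin)
PriorityType=priority/multifactor
PriorityWeightPartition=1000
# A higher Priority value raises the partition factor for jobs submitted there
PartitionName=special Nodes=node[01-04] Priority=100 State=UP
```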
Thanks for your help,
Neil Van Lysel
None of the jobs on the partition that use --ntasks-per-node in the sbatch
script are scheduled any more. The log shows:
[2013-08-07T12:07:32.016] cons_res: _can_job_run_on_node: 0 cpus on
gpu-2-13(0), mem 0/245760
[2013-08-07T12:07:32.016] cons_res: _can_job_run_on_node: 0 cpus on
gpu-2-14(0)
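For reference, a minimal job script of the kind being described would look like this; the partition name, task counts, memory request, and executable are all placeholders:

```
#!/bin/bash
#SBATCH --partition=gpu        # hypothetical partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8    # the directive that triggers the behaviour
#SBATCH --mem=4096             # per-node memory request, in MB
srun ./my_app                  # placeholder executable
```

The "0 cpus ... mem 0/245760" lines suggest cons_res decides no CPUs are available on those nodes once the per-node task count is applied.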
Hi Carles,
Thanks for your reply.
I don't see any errors in the log files. I used the -v flag and there aren't
any errors.
Thanks for your explanation about backfill. I used backfilling because it was
the default option, but I had to eliminate the walltime limits due to some
user complaints.
This strange behaviour
Just a note, if srun isn't used to launch a task the odds of accounting
for the step being correct are very low. Using srun is the only "known"
way to always guarantee accounting for steps to be accurate. This also
goes for handling memory limits.
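Danny's point can be illustrated with a job script sketch; the application name is a placeholder. Only the srun launch gives slurmstepd visibility into each task, so sstat/sacct report per-step usage correctly and memory limits are enforced:

```
#!/bin/bash
#SBATCH --ntasks=16
# Launched via srun: each task runs under slurmstepd, so step
# accounting (CPU, memory) is accurate and memory limits apply.
srun ./mpi_app
# By contrast, launching with "mpirun ./mpi_app" means Slurm's
# accounting sees only the launcher's own processes.
```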
Danny
On 08/07/13 00:43, Christopher Samu
Hi Magnus,
Thanks for the reply. I am using
slurm.conf:
SelectTypeParameters=CR_CPU_Memory
to manage memory on the nodes. I am using
partition.conf:
SelectTypeParameters=CR_Core
in one partition to allow gpu jobs to run without memory problems.
The documentation states that this is a valid set-up.
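Pulled together, the configuration being described would look roughly like this in slurm.conf; node and partition names are placeholders, and (as noted elsewhere in the thread) the per-partition override is only supported with certain cluster-wide defaults:

```
# slurm.conf (sketch)
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory          # cluster-wide default
# Per-partition override so GPU jobs are not constrained by memory:
PartitionName=gpu Nodes=gpu-[1-4] SelectTypeParameters=CR_Core
```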
That was old documentation. We'll fix that with the next web page update.
Quoting Jeff Tan:
Dear SchedMD,
Just verifying what might be no more than a missed docupdate: is
CR_Core_Memory definitely implemented in 2.6.0? The cons_res.html that
comes with it says it isn't, but it seems to work.
Hi José Manuel,
Do you see any error on the controller logfile?
Having backfilling with unlimited-time jobs is useless, because it will never
backfill a job, but it should not affect the "non-appearing" jobs issue.
Are any of the user jobs dispatched?
Regards,
Carles Fenoy
Barcelona Supercomputing Center
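Backfill can only fit a job into a gap when it knows how long jobs will run, so rather than removing walltimes entirely, partition time limits keep backfill useful. A sketch, with illustrative names and values:

```
# slurm.conf (sketch)
SchedulerType=sched/backfill
# DefaultTime gives jobs without an explicit walltime a finite limit,
# so the backfill scheduler can still reason about them:
PartitionName=batch Nodes=node[01-32] DefaultTime=01:00:00 MaxTime=7-00:00:00
```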
Hi!
To use the per-partition SelectTypeParameters you must use CR_Socket, or
CR_Core together with CR_ALLOCATE_FULL_SOCKET, as the default
SelectTypeParameters.
You are using CR_CPU_Memory.
Best regards,
Magnus
On 2013-08-05 23:13, Eva Hocks wrote:
I am getting spam messages in the logs:
[2013-08-05T14:04
Hi,
I have a cluster working fine with Slurm 2.3
But I noticed that a specific user cannot queue (or dispatch) many jobs at
the same time. When this user queues many jobs, the later jobs are submitted
correctly (they are given a job number) but are not added to the queue, even
though the cluster has free resources.
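One possible cause worth checking is an association limit such as MaxSubmitJobs capping that user. A hedged way to inspect this, assuming the accounting database is in use (the user name is a placeholder):

```
# Show the user's association limits (sketch)
sacctmgr show assoc where user=someuser
# Compare against how many of that user's jobs are already queued:
squeue -u someuser | wc -l
```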
On 07/08/13 16:59, Janne Blomqvist wrote:
> That is, the memory accounting is per task, and when launching
> using mpirun the number of tasks does not correspond to the number
> of MPI processes, but rather to the number of "orted" processes (1
> per