Thanks.  We will check it out.

-Paul Edmon-

On 02/10/2014 02:19 PM, [email protected] wrote:
Hi Paul,
This should achieve the results that you are looking for using a new configuration parameter. The attached patch, including documentation changes, is built against Slurm version 2.6. You will need to use this as a local patch for now. I will plan to include it as part of the version 14.03 release next month.

Another option might be to configure each partition as a separate cluster and run a separate slurmctld daemon for each partition. That would improve scalability, but make more work for you and perhaps be confusing for the users.

Moe Jette
SchedMD
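
[Editor's note: as a hedged sketch for context, the existing bf_max_job_part option Paul mentions below is set via SchedulerParameters in slurm.conf. The partition names, node ranges, and priorities here are invented for illustration only:]

```
# slurm.conf fragment -- hypothetical partition layout for illustration
# Two partitions at different priorities on non-overlapping hardware
PartitionName=high  Nodes=node[001-064]  Priority=100  Default=NO
PartitionName=low   Nodes=node[065-128]  Priority=10   Default=YES

# Limit how many jobs the backfill scheduler considers per partition;
# this reduces, but does not eliminate, cross-partition interference
SchedulerParameters=bf_max_job_part=50
```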


On 2014-02-10 06:49, Paul Edmon wrote:
How difficult would it be to put a switch into SLURM where, instead
of considering the global priority chain, it would consider each
partition wholly independently in both the backfill and the main
scheduling loops?  In our environment we have many partitions. We
also have people submitting thousands of jobs to those partitions,
and the partitions are at different priorities.  Since SLURM (even in
backfill) runs down the priority chain, higher-priority queues can
impact scheduling in lower-priority queues even if those queues do
not overlap in terms of hardware.  It would be better in our case if
SLURM considered each partition as a wholly independent scheduling
run and did all of them, both for backfill and the main loop.

I know there is the bf_max_job_part option in the backfill loop, but
it would be better to just have each partition be independent, as
that way you don't get any cross-talk.  Can this be done?  It would
be incredibly helpful for our environment.

-Paul Edmon-
