We don't do anything. In our environment it is the user's
responsibility to optimize their code appropriately. Since we have a
great variety of hardware, any modules we build (we have several thousand
of them) are all built generically. If people want processor-specific
optimizations then the
...ah, got it. I was confused by "PI/Lab nodes" in your partition list.
Our QoS/account pair for each investigator condo is our approximate
equivalent of what you're doing with owned partitions.
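As a rough sketch of that QoS/account pairing (the account name, QoS name, and core count below are invented for illustration, not from this thread), the setup might look something like this with sacctmgr:

```shell
# Create an account for the investigator's condo (names are hypothetical)
sacctmgr add account condo_smith Description="Smith lab condo"

# Create a matching QoS capped at the purchased resources
sacctmgr add qos condo_smith GrpTRES=cpu=100

# Attach the QoS to the account so its jobs inherit the cap
sacctmgr modify account condo_smith set QOS=condo_smith
```

Exact flags may differ by Slurm version; this is only the shape of the idea.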
Since we have everything in one partition we segregate processor types via
topology.conf. We break up
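A minimal sketch of segregating processor types via topology.conf (switch and node names here are invented; this requires TopologyPlugin=topology/tree in slurm.conf): each generation gets its own switch so jobs don't span types:

```
# topology.conf (hypothetical node names)
SwitchName=broadwell Nodes=node-bdw[001-100]
SwitchName=skylake   Nodes=node-skx[001-100]
# Top-level switch tying the generations together
SwitchName=core Switches=broadwell,skylake
```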
I don't know offhand. You can sort of construct a similar system in
Slurm, but I've never seen it as a native option.
-Paul Edmon-
On 6/20/19 10:32 AM, John Hearns wrote:
Paul, you refer to banking resources. Which leads me to ask are schemes
such as Gold used these days in Slurm?
Gold was a utility where groups could top up with a virtual amount of money
which would be spent as they consume resources.
Altair also wrote a similar system for PBS, which they offered t
People will specify which partition they need or, if they want multiple
partitions, they use this:
#SBATCH -p general,shared,serial_requeue
The scheduler will then start the job in whichever partition can run it
first. Naturally there is a risk that you will end up running in a more
expensive partition.
Janne, thank you. That FGCI benchmark in a container is pretty smart.
I always say that real application benchmarks beat synthetic benchmarks.
Taking a small mix of applications like that and computing the geometric
mean of the results is a great approach.
Note: *"a reference result run on a Dell PowerEdge C4130"*
In the old da
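As an aside, the geometric mean mentioned above is easy to compute; a minimal Python sketch (the per-application speedup numbers are invented, not FGCI results):

```python
import math

def geometric_mean(values):
    """n-th root of the product of n values; less sensitive to a
    single outlier application than the arithmetic mean."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical speedups of four applications vs. the reference node
print(geometric_mean([1.2, 0.9, 1.5, 1.1]))
```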
On 19/06/2019 22.30, Fulcomer, Samuel wrote:
>
> (...and yes, the name is inspired by a certain OEM's software licensing
> schemes...)
>
> At Brown we run a ~400 node cluster containing nodes of multiple
> architectures (Sandy/Ivy, Haswell/Broadwell, and Sky/Cascade) purchased
> in some cases by
Hi Paul,
Thanks. Your setup is interesting. I see that you have your processor types
segregated in their own partitions (with the exception of the requeue
partition), and that's how you get at the weighting mechanism. Do you have
your users explicitly specify multiple partitions in the batch
co
Hi Alex,
Thanks. The issue is that we don't know where they'll end up running in the
heterogeneous environment. In addition, because the limit is applied by
GrpTRES=cpu=N, someone buying 100 cores today shouldn't get access to 130
of today's cores.
Regards,
Sam
On Wed, Jun 19, 2019 at 3:41 PM Alex
We do a similar thing here at Harvard:
https://www.rc.fas.harvard.edu/fairshare/
We simply weight all the partitions based on their core type and then we
allocate Shares for each account based on what they have purchased. We
don't use QoS at all; we rely purely on fairshare weighting
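One way to express that kind of per-partition weighting (partition names and weight values below are illustrative, not Harvard's actual configuration) is slurm.conf's TRESBillingWeights, so usage in a faster partition is billed against fairshare at a higher rate:

```
# slurm.conf (illustrative weights only)
PartitionName=general   Nodes=... TRESBillingWeights="CPU=1.0"
PartitionName=broadwell Nodes=... TRESBillingWeights="CPU=1.4"
```

Shares per account can then be set to match what each group purchased, e.g. with `sacctmgr modify account <name> set fairshare=<N>`.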
Hey Samuel,
Can't you just adjust the existing "cpu" limit numbers using those same
multipliers? Someone bought 100 CPUs 5 years ago, now that's ~70 CPUs.
Or vice versa, someone buys 100 CPUs today, they get a setting of 130 CPUs
because the CPUs are normalized to the old performance. Since it
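That normalization is just a multiplication; a small sketch (the multiplier values are the thread's own examples, the function name is mine):

```python
def normalized_cpu_limit(purchased_cpus, perf_multiplier):
    """Express a purchased core count in reference-generation cores
    by scaling with a per-generation performance multiplier."""
    return round(purchased_cpus * perf_multiplier)

# 100 cores bought 5 years ago, each worth ~0.7 of a current core
old_limit = normalized_cpu_limit(100, 0.7)   # 70
# 100 current cores expressed in old-generation units at 1.3x
new_limit = normalized_cpu_limit(100, 1.3)   # 130
```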
(...and yes, the name is inspired by a certain OEM's software licensing
schemes...)
At Brown we run a ~400 node cluster containing nodes of multiple
architectures (Sandy/Ivy, Haswell/Broadwell, and Sky/Cascade) purchased in
some cases by University funds and in others by investigator funding
(~50: