On Mon, Jan 21, 2019 at 04:33:38PM +, Patrick Bellasi wrote:
> On 21-Jan 17:12, Peter Zijlstra wrote:
> > On Mon, Jan 21, 2019 at 03:23:11PM +, Patrick Bellasi wrote:
> > > and keep all
> > > the buckets in use at the beginning of a cache line.
> >
> > That; is that the rationale for all
On Tue, Jan 15, 2019 at 10:15:01AM +, Patrick Bellasi wrote:
> +#ifdef CONFIG_UCLAMP_TASK
> +struct uclamp_bucket {
> + unsigned long value : bits_per(SCHED_CAPACITY_SCALE);
> + unsigned long tasks : BITS_PER_LONG - bits_per(SCHED_CAPACITY_SCALE);
> +};
> +struct uclamp_cpu {
> +
On Tue, Jan 15, 2019 at 10:15:01AM +, Patrick Bellasi wrote:
> @@ -835,6 +954,28 @@ static void uclamp_bucket_inc(struct uclamp_se *uc_se, unsigned int clamp_id,
> } while (!atomic_long_try_cmpxchg(&uc_maps[bucket_id].adata,
> &uc_map_old.data, u
Utilization clamping allows clamping the CPU's utilization within a
[util_min, util_max] range, depending on the set of RUNNABLE tasks on
that CPU. Each task references two "clamp buckets" defining its minimum
and maximum (util_{min,max}) utilization "clamp values". A CPU's clamp
bucket is active if there is at least one RUNNABLE task refcounting it.