Re: [PATCH v3 1/1] scripts: Add add-maintainer.py

2023-09-26 Thread Pavan Kondeti
On Tue, Sep 26, 2023 at 05:32:10PM +0530, Pavan Kondeti wrote: > On Sat, Aug 26, 2023 at 01:07:42AM -0700, Guru Das Srinagesh wrote: > > +def gather_maintainers_of_file(patch_file): > > +all_entities_of_patch = dict() > > + > > +# Run get_maintainer.pl on patch

Re: [PATCH v3 1/1] scripts: Add add-maintainer.py

2023-09-26 Thread Pavan Kondeti
On Mon, Aug 28, 2023 at 10:21:32AM +0200, Krzysztof Kozlowski wrote: > On 26/08/2023 10:07, Guru Das Srinagesh wrote: > > This script runs get_maintainer.py on a given patch file (or multiple > > patch files) and adds its output to the patch file in place with the > > appropriate email headers

Re: [PATCH v3 1/1] scripts: Add add-maintainer.py

2023-09-26 Thread Pavan Kondeti
On Sat, Aug 26, 2023 at 01:07:42AM -0700, Guru Das Srinagesh wrote: > +def gather_maintainers_of_file(patch_file): > +all_entities_of_patch = dict() > + > +# Run get_maintainer.pl on patch file > +logging.info("GET: Patch: {}".format(os.path.basename(patch_file))) > +cmd =

Re: [PATCH] cgroup: Relax restrictions on kernel threads moving out of root cpu cgroup

2021-04-06 Thread Pavan Kondeti
On Tue, Apr 06, 2021 at 12:15:24PM -0400, Tejun Heo wrote: > Hello, > > On Tue, Apr 06, 2021 at 08:57:15PM +0530, Pavan Kondeti wrote: > > Yeah. The workqueue attrs comes in handy to reduce the nice/prio of a > > background workqueue if we identify that it is c

Re: [PATCH] cgroup: Relax restrictions on kernel threads moving out of root cpu cgroup

2021-04-06 Thread Pavan Kondeti
Hi Tejun, On Tue, Apr 06, 2021 at 09:36:00AM -0400, Tejun Heo wrote: > Hello, > > On Tue, Apr 06, 2021 at 06:34:21PM +0530, Pavankumar Kondeti wrote: > > In Android GKI, CONFIG_FAIR_GROUP_SCHED is enabled [1] to help prioritize > > important work. Given that CPU shares of root cgroup can't be

Re: [PATCH] cgroup: Relax restrictions on kernel threads moving out of root cpu cgroup

2021-04-06 Thread Pavan Kondeti
Hi Quentin, On Tue, Apr 06, 2021 at 12:10:41PM +, Quentin Perret wrote: > Hi Pavan, > > On Tuesday 06 Apr 2021 at 16:29:13 (+0530), Pavankumar Kondeti wrote: > > In Android GKI, CONFIG_FAIR_GROUP_SCHED is enabled [1] to help prioritize > > important work. Given that CPU shares of root cgroup

Re: [PATCH v2 1/7] sched/fair: Ignore percpu threads for imbalance pulls

2021-02-21 Thread Pavan Kondeti
On Fri, Feb 19, 2021 at 12:59:57PM +, Valentin Schneider wrote: > From: Lingutla Chandrasekhar > > In load balancing, when balancing group is unable to pull task > due to ->cpus_ptr constraints from busy group, then it sets > LBF_SOME_PINNED to lb env flags, as a consequence, sgc->imbalance

Re: [PATCH] sched/fair: Ignore percpu threads for imbalance pulls

2021-02-17 Thread Pavan Kondeti
On Wed, Feb 17, 2021 at 02:50:23PM +, Valentin Schneider wrote: > On 17/02/21 17:38, Lingutla Chandrasekhar wrote: > > In load balancing, when balancing group is unable to pull task > > due to ->cpus_ptr constraints from busy group, then it sets > > LBF_SOME_PINNED to lb env flags, as a

Re: [PATCH] PM / EM: Micro optimization in em_pd_energy

2020-11-23 Thread Pavan Kondeti
On Mon, Nov 23, 2020 at 10:28:39AM +, Quentin Perret wrote: > Hi Pavan, > > On Monday 23 Nov 2020 at 15:47:57 (+0530), Pavankumar Kondeti wrote: > > When the sum of the utilization of CPUs in a power domain is zero, > > s/power/performance > > > return the energy as 0 without doing any
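The micro-optimization under discussion (return 0 energy when the domain's summed utilization is zero, skipping the estimate entirely) can be sketched as a userspace model. All names here (`pd_model`, `pd_energy`) are illustrative stand-ins, not the kernel's `em_pd_energy()` API, and the energy formula is heavily simplified:

```c
#include <assert.h>

/*
 * Toy model of an Energy Model performance-domain energy estimate.
 * The point being illustrated: bail out early when sum_util == 0
 * instead of walking the OPP table for a guaranteed-zero result.
 */
struct pd_model {
	unsigned long cost;		/* cost coefficient of the chosen OPP */
	unsigned long scale_cpu;	/* capacity of one CPU at max frequency */
};

static unsigned long pd_energy(const struct pd_model *pd,
			       unsigned long max_util, unsigned long sum_util)
{
	(void)max_util;			/* real code uses this to pick the OPP */

	if (!sum_util)			/* no utilization: no energy, skip the math */
		return 0;

	/* energy ~ cost * sum_util / scale_cpu (simplified EM formula) */
	return pd->cost * sum_util / pd->scale_cpu;
}
```

With `cost = 1000` and `scale_cpu = 1024`, a `sum_util` of 512 yields 500, while `sum_util = 0` returns immediately.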

Re: [PATCH 1/1] sched/uclamp: release per-task uclamp control if user set to default value

2020-10-05 Thread Pavan Kondeti
On Fri, Oct 02, 2020 at 01:38:12PM +0800, Yun Hsiang wrote: > On Wed, Sep 30, 2020 at 03:12:51PM +0200, Dietmar Eggemann wrote: > Hi Dietmar, > > > Hi Yun, > > > > On 28/09/2020 10:26, Yun Hsiang wrote: > > > If the user wants to release the util clamp and let cgroup to control it, > > > we need

Re: [RFC PATCH v7 12/23] sched: Trivial forced-newidle balancer

2020-09-02 Thread Pavan Kondeti
On Fri, Aug 28, 2020 at 03:51:13PM -0400, Julien Desfossez wrote: > /* > * The static-key + stop-machine variable are needed such that: > * > @@ -4641,7 +4656,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, > struct rq_flags *rf) > struct task_struct *next, *max = NULL; >

Re: Looping more in detach_tasks() when RT and CFS tasks are present

2020-06-24 Thread Pavan Kondeti
Hi Vincent, On Wed, Jun 24, 2020 at 02:39:25PM +0200, Vincent Guittot wrote: > Hi Pavan, > > On Wed, 24 Jun 2020 at 13:42, Pavan Kondeti wrote: > > > > Hi Vincent/Peter, > > > > in load_balance(), we derive env->loop_max based on rq->nr_running. > >

Looping more in detach_tasks() when RT and CFS tasks are present

2020-06-24 Thread Pavan Kondeti
Hi Vincent/Peter, in load_balance(), we derive env->loop_max based on rq->nr_running. When the busiest rq has both RT and CFS tasks, we do more loops in detach_tasks(). Is there any reason for not using rq->cfs.h_nr_running? Lei Wen attempted to fix this before.
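The behaviour being questioned can be sketched in a small userspace model: `load_balance()` clamps `env->loop_max` with `rq->nr_running`, which counts tasks of all classes (RT included), rather than `rq->cfs.h_nr_running` (CFS tasks only). The struct and constant below are illustrative, not the kernel's exact definitions:

```c
#include <assert.h>

#define SCHED_NR_MIGRATE	32	/* sysctl_sched_nr_migrate default */

struct rq_model {
	unsigned int nr_running;	/* all classes: RT + CFS + ... */
	unsigned int cfs_h_nr_running;	/* CFS tasks only */
};

/* Model of how load_balance() derives env->loop_max */
static unsigned int loop_max(const struct rq_model *rq)
{
	return rq->nr_running < SCHED_NR_MIGRATE ?
	       rq->nr_running : SCHED_NR_MIGRATE;
}
```

For a runqueue with 12 tasks of which only 4 are CFS, `detach_tasks()` may iterate up to 12 times even though at most 4 tasks are detachable, which is the extra looping the question points at.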

Re: [PATCH 2/2] sched: Offload wakee task activation if it the wakee is descheduling

2020-05-27 Thread Pavan Kondeti
On Sun, May 24, 2020 at 09:29:56PM +0100, Mel Gorman wrote: > The patch "sched: Optimize ttwu() spinning on p->on_cpu" avoids spinning > on p->on_rq when the task is descheduling but only if the wakee is on > a CPU that does not share cache with the waker. This patch offloads the > activation of

Re: [PATCH] kthread: Use TASK_IDLE state for newly created kernel threads

2020-05-21 Thread Pavan Kondeti
On Thu, May 21, 2020 at 07:56:39AM +0200, Greg Kroah-Hartman wrote: > On Thu, May 21, 2020 at 07:05:44AM +0530, Pavan Kondeti wrote: > > On Wed, May 20, 2020 at 08:18:58PM +0200, Greg Kroah-Hartman wrote: > > > On Wed, May 20, 2020 at 05:25:09PM +0530, Pavankumar Kondeti wrote: &

Re: [PATCH] kthread: Use TASK_IDLE state for newly created kernel threads

2020-05-20 Thread Pavan Kondeti
On Wed, May 20, 2020 at 08:18:58PM +0200, Greg Kroah-Hartman wrote: > On Wed, May 20, 2020 at 05:25:09PM +0530, Pavankumar Kondeti wrote: > > When kernel threads are created for later use, they will be in > > TASK_UNINTERRUPTIBLE state until they are woken up. This results > > in increased loadavg
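The loadavg effect described above can be illustrated with a small model: loadavg counts runnable tasks plus TASK_UNINTERRUPTIBLE sleepers, but TASK_IDLE (TASK_UNINTERRUPTIBLE | TASK_NOLOAD) is excluded. The flag values mirror the kernel's `include/linux/sched.h`; the accounting predicate is a toy, not the kernel's `calc_load` code:

```c
#include <assert.h>

#define TASK_RUNNING		0x0000
#define TASK_INTERRUPTIBLE	0x0001
#define TASK_UNINTERRUPTIBLE	0x0002
#define TASK_NOLOAD		0x0400
#define TASK_IDLE		(TASK_UNINTERRUPTIBLE | TASK_NOLOAD)

/* Does a task in this state contribute to the load average? */
static int contributes_to_load(unsigned int state)
{
	if (state == TASK_RUNNING)
		return 1;
	return (state & TASK_UNINTERRUPTIBLE) && !(state & TASK_NOLOAD);
}
```

A parked kthread sleeping in TASK_UNINTERRUPTIBLE inflates loadavg; the same thread in TASK_IDLE does not, which is the motivation for the patch.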

Re: [PATCH] sched/fair: enqueue_task_fair optimization

2020-05-12 Thread Pavan Kondeti
On Mon, May 11, 2020 at 09:23:01PM +0200, Vincent Guittot wrote: > enqueue_task_fair() jumps to enqueue_throttle when cfs_rq_of(se) is > throttled, which means that se can't be NULL and we can skip the test. > > Signed-off-by: Vincent Guittot > --- > kernel/sched/fair.c | 2 +- > 1 file

Re: [PATCH 1/2] sched/uclamp: Add a new sysctl to control RT default boost value

2020-05-11 Thread Pavan Kondeti
On Mon, May 11, 2020 at 04:40:52PM +0100, Qais Yousef wrote: > RT tasks by default run at the highest capacity/performance level. When > uclamp is selected this default behavior is retained by enforcing the > requested uclamp.min (p->uclamp_req[UCLAMP_MIN]) of the RT tasks to be >

Re: [PATCH] sched/debug: Fix requested task uclamp values shown in procfs

2020-05-10 Thread Pavan Kondeti
On Sun, May 10, 2020 at 05:16:28PM +0100, Valentin Schneider wrote: > > On 10/05/20 13:56, Pavankumar Kondeti wrote: > > The intention of commit 96e74ebf8d59 ("sched/debug: Add task uclamp > > values to SCHED_DEBUG procfs") was to print requested and effective > > task uclamp values. The

Re: [PATCH 13/14] sched: cpufreq: Use IS_ENABLED() for schedutil

2020-05-08 Thread Pavan Kondeti
On Fri, May 08, 2020 at 02:21:29PM +0100, Quentin Perret wrote: > On Friday 08 May 2020 at 11:00:53 (+0530), Pavan Kondeti wrote: > > > -#if defined(CONFIG_ENERGY_MODEL) && > > > defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) > > > +#if defined(CON

Re: [RFC 2/4] sched/core: Set nr_lat_sensitive counter at various scheduler entry/exit points

2020-05-08 Thread Pavan Kondeti
On Fri, May 08, 2020 at 04:45:16PM +0530, Parth Shah wrote: > Hi Pavan, > > Thanks for going through this patch-set. > > On 5/8/20 2:03 PM, Pavan Kondeti wrote: > > Hi Parth, > > > > On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote: > >>

Re: [RFC 3/4] sched/idle: Disable idle call on least latency requirements

2020-05-08 Thread Pavan Kondeti
On Fri, May 08, 2020 at 04:49:04PM +0530, Parth Shah wrote: > Hi Pavan, > > On 5/8/20 2:06 PM, Pavan Kondeti wrote: > > On Thu, May 07, 2020 at 07:07:22PM +0530, Parth Shah wrote: > >> Restrict the call to deeper idle states when the given CPU has been set for > >&g

Re: [RFC 1/4] sched/core: Introduce per_cpu counter to track latency sensitive tasks

2020-05-08 Thread Pavan Kondeti
On Fri, May 08, 2020 at 05:00:44PM +0530, Parth Shah wrote: > > > On 5/8/20 2:10 PM, Pavan Kondeti wrote: > > On Thu, May 07, 2020 at 07:07:20PM +0530, Parth Shah wrote: > >> The "nr_lat_sensitive" per_cpu variable provides hints on the possible > >>

Re: [RFC 1/4] sched/core: Introduce per_cpu counter to track latency sensitive tasks

2020-05-08 Thread Pavan Kondeti
On Thu, May 07, 2020 at 07:07:20PM +0530, Parth Shah wrote: > The "nr_lat_sensitive" per_cpu variable provides hints on the possible > number of latency-sensitive tasks occupying the CPU. These hints further > help in inhibiting the CPUIDLE governor from calling deeper IDLE states > (next patches

Re: [RFC 3/4] sched/idle: Disable idle call on least latency requirements

2020-05-08 Thread Pavan Kondeti
On Thu, May 07, 2020 at 07:07:22PM +0530, Parth Shah wrote: > Restrict the call to deeper idle states when the given CPU has been set for > the least latency requirements > > Signed-off-by: Parth Shah > --- > kernel/sched/idle.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > >

Re: [RFC 2/4] sched/core: Set nr_lat_sensitive counter at various scheduler entry/exit points

2020-05-08 Thread Pavan Kondeti
Hi Parth, On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote: > Monitor tasks at: > 1. wake_up_new_task() - forked tasks > > 2. set_task_cpu() - task migrations, Load balancer > > 3. __sched_setscheduler() - set/unset latency_nice value > Increment the nr_lat_sensitive count on the CPU

Re: [PATCH 04/14] sched: cpufreq: Move sched_cpufreq_governor_change()

2020-05-07 Thread Pavan Kondeti
On Thu, May 07, 2020 at 07:10:02PM +0100, Quentin Perret wrote: > CPUFreq calls into sched_cpufreq_governor_change() when switching > governors, which triggers a sched domain rebuild when entering or > exiting schedutil. > > Move the function to sched/cpufreq.c to prepare the ground for the >

Re: [PATCH 13/14] sched: cpufreq: Use IS_ENABLED() for schedutil

2020-05-07 Thread Pavan Kondeti
Hi Quentin On Thu, May 07, 2020 at 07:10:11PM +0100, Quentin Perret wrote: > The IS_ENABLED() macro evaluates to true when an option is set to =y or > =m. As such, it is a good fit for tristate options. > > In preparation for modularizing schedutil, change all the related ifdefs > to use

Re: [PATCH v2 5/6] sched/deadline: Make DL capacity-aware

2020-05-03 Thread Pavan Kondeti
On Fri, May 01, 2020 at 06:12:07PM +0200, Dietmar Eggemann wrote: > On 30/04/2020 15:10, Pavan Kondeti wrote: > > On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote: > >> From: Luca Abeni > > [...] > > >> @@ -1653,10 +1654,19 @@ select_task_

Re: [PATCH v2 5/6] sched/deadline: Make DL capacity-aware

2020-04-30 Thread Pavan Kondeti
On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote: > From: Luca Abeni > > The current SCHED_DEADLINE (DL) scheduler uses a global EDF scheduling > algorithm w/o considering CPU capacity or task utilization. > This works well on homogeneous systems where DL tasks are guaranteed >

Re: [PATCH v2 6/6] sched/deadline: Implement fallback mechanism for !fit case

2020-04-30 Thread Pavan Kondeti
On Wed, Apr 29, 2020 at 07:39:50PM +0200, Dietmar Eggemann wrote: > On 27/04/2020 16:17, luca abeni wrote: > > Hi Juri, > > > > On Mon, 27 Apr 2020 15:34:38 +0200 > > Juri Lelli wrote: > > > >> Hi, > >> > >> On 27/04/20 10:37, Dietmar Eggemann wrote: > >>> From: Luca Abeni > >>> > >>> When a

Re: [PATCH v2 2/6] sched/deadline: Optimize dl_bw_cpus()

2020-04-30 Thread Pavan Kondeti
On Mon, Apr 27, 2020 at 10:37:05AM +0200, Dietmar Eggemann wrote: > Return the weight of the rd (root domain) span in case it is a subset > of the cpu_active_mask. > > Continue to compute the number of CPUs over rd span and cpu_active_mask > when in hotplug. > > Signed-off-by: Dietmar Eggemann

Re: [PATCH v3 1/2] sched/uclamp: Add a new sysctl to control RT default boost value

2020-04-29 Thread Pavan Kondeti
Hi Qais, On Wed, Apr 29, 2020 at 01:30:57PM +0100, Qais Yousef wrote: > Hi Pavan > > On 04/29/20 17:02, Pavan Kondeti wrote: > > Hi Qais, > > > > On Tue, Apr 28, 2020 at 05:41:33PM +0100, Qais Yousef wrote: > > > > [...] > > > > > >

Re: [PATCH v3 1/2] sched/uclamp: Add a new sysctl to control RT default boost value

2020-04-29 Thread Pavan Kondeti
Hi Qais, On Tue, Apr 28, 2020 at 05:41:33PM +0100, Qais Yousef wrote: [...] > > +static void uclamp_sync_util_min_rt_default(struct task_struct *p) > +{ > + struct uclamp_se *uc_se = &p->uclamp_req[UCLAMP_MIN]; > + > + if (unlikely(rt_task(p)) && !uc_se->user_defined) > +

Re: [PATCH] sched/fair: Speed-up energy-aware wake-ups

2019-09-20 Thread Pavan Kondeti
Hi Quentin, On Fri, Sep 20, 2019 at 11:41:15AM +0200, Quentin Perret wrote: > Hi Pavan, > > On Friday 20 Sep 2019 at 08:32:15 (+0530), Pavan Kondeti wrote: > > Earlier, we are not checking the spare capacity for the prev_cpu. Now that > > the > > continue statement

Re: [PATCH] sched/fair: Speed-up energy-aware wake-ups

2019-09-19 Thread Pavan Kondeti
Hi Quentin, On Thu, Sep 12, 2019 at 11:44:04AM +0200, Quentin Perret wrote: > From: Quentin Perret > > EAS computes the energy impact of migrating a waking task when deciding > on which CPU it should run. However, the current approach is known to > have a high algorithmic complexity, which can

Re: [PATCH v2] sched: fix migration to invalid cpu in __set_cpus_allowed_ptr

2019-09-16 Thread Pavan Kondeti
On Mon, Sep 16, 2019 at 06:53:28AM +, KeMeng Shi wrote: > Oops occur when running qemu on arm64: > Unable to handle kernel paging request at virtual address 08effe40 > Internal error: Oops: 9607 [#1] SMP > Process migration/0 (pid: 12, stack limit = 0x084e3736) >

Re: [PATCH] cpu/hotplug: Abort disabling secondary CPUs if wakeup is pending

2019-06-10 Thread Pavan Kondeti
Hi Rafael/Thomas, On Mon, Jun 3, 2019 at 10:03 AM Pavankumar Kondeti wrote: > > When "deep" suspend is enabled, all CPUs except the primary CPU > are hotplugged out. Since CPU hotplug is a costly operation, > check if we have to abort the suspend in between each CPU > hotplug. This would improve
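The proposed flow (check for a pending wakeup between each costly CPU hotplug, not just once at the start) can be sketched as follows. `freeze_secondary_cpus` and the `abort_step` parameter are userspace stand-ins, not kernel symbols:

```c
#include <assert.h>

/*
 * Model of hotplugging out secondary CPUs during suspend.
 * abort_step < 0 means no wakeup ever arrives.
 * Returns the number of CPUs offlined, or -1 if the suspend
 * was aborted because a wakeup became pending mid-way.
 */
static int freeze_secondary_cpus(int nr_secondary, int abort_step)
{
	int cpu;

	for (cpu = 0; cpu < nr_secondary; cpu++) {
		if (cpu == abort_step)
			return -1;	/* wakeup pending: abort early */
		/* cpu_down(cpu) would go here (the costly operation) */
	}
	return nr_secondary;
}
```

Checking inside the loop means a wakeup arriving after the third hotplug aborts immediately instead of paying for the remaining offline/online round-trips.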

Re: [PATCH] sched, trace: Fix prev_state output in sched_switch tracepoint

2018-11-27 Thread Pavan Kondeti
Hi Peter/Thomas, On Tue, Oct 30, 2018 at 12:25 PM Pavankumar Kondeti wrote: > > commit 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout") > tried to fix the problem introduced by a previous commit efb40f588b43 > ("sched/tracing: Fix trace_sched_switch task-state printing"). However

Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT

2018-10-30 Thread Pavan Kondeti
Hi Vincent, On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote: > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c > index 6806c27..7a69673 100644 > --- a/kernel/sched/fair.c > +++ b/kernel/sched/fair.c > @@ -674,9 +674,8 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct

Re: [PATCH v4 2/2] sched/fair: update scale invariance of PELT

2018-10-23 Thread Pavan Kondeti
Hi Vincent, Thanks for the detailed explanation. On Tue, Oct 23, 2018 at 02:15:08PM +0200, Vincent Guittot wrote: > Hi Pavan, > > On Tue, 23 Oct 2018 at 07:59, Pavan Kondeti wrote: > > > > Hi Vincent, > > > > On Fri, Oct 19, 2018 at 06:17:

Re: [PATCH v4 2/2] sched/fair: update scale invariance of PELT

2018-10-22 Thread Pavan Kondeti
Hi Vincent, On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote: > > /* > + * The clock_pelt scales the time to reflect the effective amount of > + * computation done during the running delta time but then sync back to > + * clock_task when rq is idle. > + * > + * > + * absolute

Re: [RFC PATCH 29/30] softirq: Make softirq processing softinterruptible

2018-10-22 Thread Pavan Kondeti
Hi Frederic, On Wed, Oct 17, 2018 at 02:26:02AM +0200, Frederic Weisbecker wrote: > Hi Pavan, > > On Tue, Oct 16, 2018 at 09:45:52AM +0530, Pavan Kondeti wrote: > > Hi Frederic, > > > > On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote: >

Re: [RFC PATCH 29/30] softirq: Make softirq processing softinterruptible

2018-10-15 Thread Pavan Kondeti
Hi Frederic, On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote: > From: Frederic Weisbecker > > Make do_softirq() re-entrant and allow a vector, being either processed > or disabled, to be interrupted by another vector. This way a vector > won't be able to monopolize the CPU

Re: [PATCH v3 12/14] sched/core: uclamp: add system default clamps

2018-08-16 Thread Pavan Kondeti
On Mon, Aug 06, 2018 at 05:39:44PM +0100, Patrick Bellasi wrote: > Clamp values cannot be tuned at the root cgroup level. Moreover, because > of the delegation model requirements and how the parent clamps > propagation works, if we want to enable subgroups to set a non null > util.min, we need to

Re: [PATCH v3 09/14] sched/core: uclamp: propagate parent clamps

2018-08-16 Thread Pavan Kondeti
On Mon, Aug 06, 2018 at 05:39:41PM +0100, Patrick Bellasi wrote: > In order to properly support hierarchical resources control, the cgroup > delegation model requires that attribute writes from a child group never > fail but still are (potentially) constrained based on parent's assigned >

Re: [PATCH v3 02/14] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups

2018-08-14 Thread Pavan Kondeti
On Mon, Aug 06, 2018 at 05:39:34PM +0100, Patrick Bellasi wrote: > Utilization clamping requires each CPU to know which clamp values are > assigned to tasks that are currently RUNNABLE on that CPU. > Multiple tasks can be assigned the same clamp value and tasks with > different clamp values can be

Re: [PATCH] stop_machine: Disable preemption after queueing stopper threads

2018-08-06 Thread Pavan Kondeti
Hi Prasad, On Wed, Aug 01, 2018 at 01:07:03AM -0700, Sodagudi Prasad wrote: > On 2018-07-30 14:07, Peter Zijlstra wrote: > >On Mon, Jul 30, 2018 at 10:12:43AM -0700, Sodagudi Prasad wrote: > >>How about including below change as well? Currently, there is > >>no way to > >>identify thread

Re: [PATCH v2] stop_machine: Disable preemption when waking two stopper threads

2018-07-01 Thread Pavan Kondeti
Hi Isaac, On Fri, Jun 29, 2018 at 01:55:12PM -0700, Isaac J. Manjarres wrote: > When cpu_stop_queue_two_works() begins to wake the stopper > threads, it does so without preemption disabled, which leads > to the following race condition: > > The source CPU calls cpu_stop_queue_two_works(), with
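The fix pattern under discussion, keeping preemption disabled across both stopper wakeups so the waker cannot be preempted by the first stopper before the second is woken, can be modelled in userspace. `preempt_disable()`, `wake_stopper()`, and the counters are stand-ins for the kernel primitives:

```c
#include <assert.h>

static int preempt_count;		/* model of the preempt counter */
static int woken_with_preempt_off;	/* how many wakeups were safe */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

static void wake_stopper(int cpu)
{
	(void)cpu;
	if (preempt_count > 0)		/* wakeup happened with preemption off */
		woken_with_preempt_off++;
}

/* Model of the fixed cpu_stop_queue_two_works(): cover both wakeups */
static void cpu_stop_queue_two_works(int cpu1, int cpu2)
{
	preempt_disable();
	wake_stopper(cpu1);
	wake_stopper(cpu2);
	preempt_enable();
}
```

In the buggy version the `preempt_disable()`/`preempt_enable()` pair would not span the second wakeup, leaving a window in which the first stopper can preempt the waker.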

Re: [PATCH] stop_machine: Remove cpu swap from stop_two_cpus

2018-06-28 Thread Pavan Kondeti
On Tue, Jun 26, 2018 at 02:28:26PM -0700, Isaac J. Manjarres wrote: > When invoking migrate_swap(), stop_two_cpus() swaps the > source and destination CPU IDs if the destination CPU > ID is greater than the source CPU ID. This leads to the > following race condition: > > The source CPU invokes

Question about wakeup granularity calculation in wakeup_preempt_entity()

2018-06-26 Thread Pavan Kondeti
Hi Peter, I have a question about wakeup granularity calculation while checking if the waking fair task can preempt the current running fair task. static unsigned long wakeup_gran(struct sched_entity *curr, struct sched_entity *se) { unsigned long gran = sysctl_sched_wakeup_granularity;
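The calculation being asked about can be sketched as a simplified userspace model: `wakeup_gran()` scales `sysctl_sched_wakeup_granularity` by the waking entity's load through `calc_delta_fair()`, so a heavier (lower-nice) wakee sees a smaller effective granularity and preempts more easily. The arithmetic below is a simplification of the kernel's fixed-point math, not the exact implementation:

```c
#include <assert.h>

#define NICE_0_LOAD	1024UL

static unsigned long sysctl_sched_wakeup_granularity = 1000000UL; /* 1 ms in ns */

/*
 * Model of calc_delta_fair(gran, se): delta * NICE_0_LOAD / se->load.weight.
 * se_weight stands in for the waking entity's load weight.
 */
static unsigned long wakeup_gran(unsigned long se_weight)
{
	return sysctl_sched_wakeup_granularity * NICE_0_LOAD / se_weight;
}
```

A nice-0 wakee gets the full 1 ms granularity; a wakee with twice the weight gets half of it, so its vruntime lead needs to be smaller before it preempts.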

Re: [RFC PATCH v3 07/10] sched/fair: Introduce an energy estimation helper function

2018-06-19 Thread Pavan Kondeti
On Mon, May 21, 2018 at 03:25:02PM +0100, Quentin Perret wrote: > > +/* > + * Returns the util of "cpu" if "p" wakes up on "dst_cpu". > + */ > +static unsigned long cpu_util_next(int cpu, struct task_struct *p, int > dst_cpu) > +{ > + unsigned long util, util_est; > + struct cfs_rq
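The helper quoted above computes the utilization of a CPU if the task wakes on a given destination: remove the task's contribution where it is currently counted, add it back on the destination. This userspace model keeps only that core logic; the `util_est` handling of the real patch is omitted, and `struct task_model` is an illustrative stand-in:

```c
#include <assert.h>

struct task_model {
	int cpu;		/* CPU the task's util is currently counted on */
	unsigned long util;	/* the task's utilization contribution */
};

/* Model: util of @cpu (whose current util is @cpu_util) if @p wakes on @dst_cpu */
static unsigned long cpu_util_next(int cpu, unsigned long cpu_util,
				   const struct task_model *p, int dst_cpu)
{
	unsigned long util = cpu_util;

	if (p->cpu == cpu)		/* p's util currently on this CPU */
		util -= p->util < util ? p->util : util;
	if (dst_cpu == cpu)		/* p will run here after the wakeup */
		util += p->util;
	return util;
}
```

This is what lets the energy-aware wakeup path compare hypothetical placements without actually migrating the task.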

Re: [RFC PATCH v3 10/10] arch_topology: Start Energy Aware Scheduling

2018-06-19 Thread Pavan Kondeti
On Mon, May 21, 2018 at 03:25:05PM +0100, Quentin Perret wrote: > +static void start_eas_workfn(struct work_struct *work); > +static DECLARE_WORK(start_eas_work, start_eas_workfn); > + > static int > init_cpu_capacity_callback(struct notifier_block *nb, > unsigned

Re: [RFC PATCH v3 09/10] sched/fair: Select an energy-efficient CPU on task wake-up

2018-06-19 Thread Pavan Kondeti
On Tue, Jun 19, 2018 at 08:57:23AM +0100, Quentin Perret wrote: > Hi Pavan, > > On Tuesday 19 Jun 2018 at 10:36:01 (+0530), Pavan Kondeti wrote: > > On Mon, May 21, 2018 at 03:25:04PM +0100, Quentin Perret wrote: > > > > > > > > > + if

Re: [RFC PATCH v3 06/10] sched: Add over-utilization/tipping point indicator

2018-06-19 Thread Pavan Kondeti
On Mon, May 21, 2018 at 03:25:01PM +0100, Quentin Perret wrote: > util_est_enqueue(&rq->cfs, p); > hrtick_update(rq); > @@ -8121,11 +8144,12 @@ static bool update_nohz_stats(struct rq *rq, bool > force) > * @local_group: Does group contain this_cpu. > * @sgs: variable to hold the

Re: [RFC PATCH v3 09/10] sched/fair: Select an energy-efficient CPU on task wake-up

2018-06-18 Thread Pavan Kondeti
On Mon, May 21, 2018 at 03:25:04PM +0100, Quentin Perret wrote: > + if (cpumask_test_cpu(prev_cpu, &p->cpus_allowed)) > + prev_energy = best_energy = compute_energy(p, prev_cpu); > + else > + prev_energy = best_energy = ULONG_MAX; > + > +

Re: [RFC PATCH] kernel/sched/core: busy wait before going idle

2018-04-23 Thread Pavan Kondeti
Hi Nick, On Sun, Apr 15, 2018 at 11:31:49PM +1000, Nicholas Piggin wrote: > This is a quick hack for comments, but I've always wondered -- > if we have a short term polling idle states in cpuidle for performance > -- why not skip the context switch and entry into all the idle states, > and just

Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths

2018-01-25 Thread Pavan Kondeti
On Wed, Jan 24, 2018 at 07:31:38PM +, Patrick Bellasi wrote: > > > > + /* > > > +* These are the main cases covered: > > > +* - if *p is the only task sleeping on this CPU, then: > > > +* cpu_util (== task_util) > util_est (== 0) > > > +* and thus

Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths

2018-01-24 Thread Pavan Kondeti
Hi Patrick, On Tue, Jan 23, 2018 at 06:08:46PM +, Patrick Bellasi wrote: > static unsigned long cpu_util_wake(int cpu, struct task_struct *p) > { > - unsigned long util, capacity; > + long util, util_est; > > /* Task has no contribution or is new */ > if (cpu !=

Re: [RFC PATCH 2/4] softirq: Per vector deferment to workqueue

2018-01-21 Thread Pavan Kondeti
On Sun, Jan 21, 2018 at 05:11:17PM +0100, Frederic Weisbecker wrote: > On Sat, Jan 20, 2018 at 02:11:39PM +0530, Pavan Kondeti wrote: > > Hi Pavan, > > > > I have couple questions/comments. > > > > (1) Since the work is queued on a bounded per-cpu worke

Re: [RFC PATCH 2/4] softirq: Per vector deferment to workqueue

2018-01-20 Thread Pavan Kondeti
Hi Frederic, On Fri, Jan 19, 2018 at 04:46:12PM +0100, Frederic Weisbecker wrote: > Some softirq vectors can be more CPU hungry than others. Especially > networking may sometimes deal with packet storm and need more CPU than > IRQ tail can offer without inducing scheduler latencies. In this case

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
Hi Steve, On Fri, Jan 19, 2018 at 02:51:15PM -0500, Steven Rostedt wrote: > On Sat, 20 Jan 2018 00:27:56 +0530 > Pavan Kondeti <pkond...@codeaurora.org> wrote: > > > Hi Steve, > > > > Thanks for the patch. > > > > On Fri, Jan 19, 2018 at 01:12:54PM

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
Hi Steve, Thanks for the patch. On Fri, Jan 19, 2018 at 01:12:54PM -0500, Steven Rostedt wrote: > On Fri, 19 Jan 2018 13:11:21 -0500 > Steven Rostedt wrote: > > > void rto_push_irq_work_func(struct irq_work *work) > > { > > + struct root_domain *rd = > > +

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
On Fri, Jan 19, 2018 at 01:11:21PM -0500, Steven Rostedt wrote: > On Fri, 19 Jan 2018 23:16:17 +0530 > Pavan Kondeti <pkond...@codeaurora.org> wrote: > > > I am thinking of another problem because of the race between > > rto_push_irq_work_func() and rq_attach_root

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote: > On Fri, 19 Jan 2018 14:53:05 +0530 > Pavan Kondeti <pkond...@codeaurora.org> wrote: > > > I am seeing "spinlock already unlocked" BUG for rd->rto_lock on a 4.9 > > stable kernel based s

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
Hi Steven, On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote: > On Fri, 19 Jan 2018 14:53:05 +0530 > Pavan Kondeti <pkond...@codeaurora.org> wrote: > > > I am seeing "spinlock already unlocked" BUG for rd->rto_lock on a 4.9 > > stable kernel b

Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic

2018-01-19 Thread Pavan Kondeti
Hi Steven, > /* Called from hardirq context */ > -static void try_to_push_tasks(void *arg) > +void rto_push_irq_work_func(struct irq_work *work) > { > - struct rt_rq *rt_rq = arg; > - struct rq *rq, *src_rq; > - int this_cpu; > + struct rq *rq; > int cpu; > > -

Re: [RFC 2/3] sched/fair: use util_est in LB

2017-09-04 Thread Pavan Kondeti
On Mon, Sep 4, 2017 at 7:48 PM, Patrick Bellasi <patrick.bell...@arm.com> wrote: > On 29-Aug 10:15, Pavan Kondeti wrote: >> On Fri, Aug 25, 2017 at 3:50 PM, Patrick Bellasi >> <patrick.bell...@arm.com> wrote: >> > When the scheduler looks at the CP

Re: [RFC 1/3] sched/fair: add util_est on top of PELT

2017-08-29 Thread Pavan Kondeti
On Fri, Aug 25, 2017 at 3:50 PM, Patrick Bellasi wrote: > The util_avg signal computed by PELT is too variable for some use-cases. > For example, a big task waking up after a long sleep period will have its > utilization almost completely decayed. This introduces some
