On Tue, Sep 26, 2023 at 05:32:10PM +0530, Pavan Kondeti wrote:
> On Sat, Aug 26, 2023 at 01:07:42AM -0700, Guru Das Srinagesh wrote:
> > +def gather_maintainers_of_file(patch_file):
> > +    all_entities_of_patch = dict()
> > +
> > +    # Run get_maintainer.pl on patch
On Mon, Aug 28, 2023 at 10:21:32AM +0200, Krzysztof Kozlowski wrote:
> On 26/08/2023 10:07, Guru Das Srinagesh wrote:
> > This script runs get_maintainer.py on a given patch file (or multiple
> > patch files) and adds its output to the patch file in place with the
> > appropriate email headers
On Sat, Aug 26, 2023 at 01:07:42AM -0700, Guru Das Srinagesh wrote:
> +def gather_maintainers_of_file(patch_file):
> +    all_entities_of_patch = dict()
> +
> +    # Run get_maintainer.pl on patch file
> +    logging.info("GET: Patch: {}".format(os.path.basename(patch_file)))
> +    cmd =
On Tue, Apr 06, 2021 at 12:15:24PM -0400, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 06, 2021 at 08:57:15PM +0530, Pavan Kondeti wrote:
> > Yeah. The workqueue attrs comes in handy to reduce the nice/prio of a
> > background workqueue if we identify that it is c
Hi Tejun,
On Tue, Apr 06, 2021 at 09:36:00AM -0400, Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 06, 2021 at 06:34:21PM +0530, Pavankumar Kondeti wrote:
> > In Android GKI, CONFIG_FAIR_GROUP_SCHED is enabled [1] to help prioritize
> > important work. Given that CPU shares of root cgroup can't be
Hi Quentin,
On Tue, Apr 06, 2021 at 12:10:41PM +, Quentin Perret wrote:
> Hi Pavan,
>
> On Tuesday 06 Apr 2021 at 16:29:13 (+0530), Pavankumar Kondeti wrote:
> > In Android GKI, CONFIG_FAIR_GROUP_SCHED is enabled [1] to help prioritize
> > important work. Given that CPU shares of root cgroup
On Fri, Feb 19, 2021 at 12:59:57PM +, Valentin Schneider wrote:
> From: Lingutla Chandrasekhar
>
> In load balancing, when balancing group is unable to pull task
> due to ->cpus_ptr constraints from busy group, then it sets
> LBF_SOME_PINNED to lb env flags, as a consequence, sgc->imbalance
On Wed, Feb 17, 2021 at 02:50:23PM +, Valentin Schneider wrote:
> On 17/02/21 17:38, Lingutla Chandrasekhar wrote:
> > In load balancing, when balancing group is unable to pull task
> > due to ->cpus_ptr constraints from busy group, then it sets
> > LBF_SOME_PINNED to lb env flags, as a
On Mon, Nov 23, 2020 at 10:28:39AM +, Quentin Perret wrote:
> Hi Pavan,
>
> On Monday 23 Nov 2020 at 15:47:57 (+0530), Pavankumar Kondeti wrote:
> > When the sum of the utilization of CPUs in a power domain is zero,
>
> s/power/performance
>
> > return the energy as 0 without doing any
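The change under review amounts to an early exit of this shape (a sketch based on the quoted description, not the exact hunk):

    /* In the energy estimation path: if nothing in this performance
     * domain is busy, the estimated energy is simply zero. */
    if (!sum_util)
            return 0;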
On Fri, Oct 02, 2020 at 01:38:12PM +0800, Yun Hsiang wrote:
> On Wed, Sep 30, 2020 at 03:12:51PM +0200, Dietmar Eggemann wrote:
> Hi Dietmar,
>
> > Hi Yun,
> >
> > On 28/09/2020 10:26, Yun Hsiang wrote:
> > > If the user wants to release the util clamp and let cgroup to control it,
> > > we need
On Fri, Aug 28, 2020 at 03:51:13PM -0400, Julien Desfossez wrote:
> /*
> * The static-key + stop-machine variable are needed such that:
> *
> @@ -4641,7 +4656,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev,
> struct rq_flags *rf)
> struct task_struct *next, *max = NULL;
>
Hi Vincent,
On Wed, Jun 24, 2020 at 02:39:25PM +0200, Vincent Guittot wrote:
> Hi Pavan,
>
> On Wed, 24 Jun 2020 at 13:42, Pavan Kondeti wrote:
> >
> > Hi Vincent/Peter,
> >
> > in load_balance(), we derive env->loop_max based on rq->nr_running.
> >
Hi Vincent/Peter,
In load_balance(), we derive env->loop_max based on rq->nr_running.
When the busiest rq has both RT and CFS tasks, we do more loops in
detach_tasks(). Is there any reason for not using
rq->cfs.h_nr_running?
Lei Wen attempted to fix this before.
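For context, the derivation being questioned looks roughly like this in load_balance()/detach_tasks() of that era (paraphrased, not quoted):

    /* load_balance(): loop_max is capped by the busiest rq's *total*
     * task count, RT tasks included. */
    env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);

    /* detach_tasks(): only CFS tasks are ever considered, so with many
     * RT tasks on busiest the loop budget is larger than needed. */
    while (!list_empty(tasks)) {
            if (++env->loop > env->loop_max)
                    break;
            /* ... try to detach the next CFS task ... */
    }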
On Sun, May 24, 2020 at 09:29:56PM +0100, Mel Gorman wrote:
> The patch "sched: Optimize ttwu() spinning on p->on_cpu" avoids spinning
> on p->on_rq when the task is descheduling but only if the wakee is on
> a CPU that does not share cache with the waker. This patch offloads the
> activation of
On Thu, May 21, 2020 at 07:56:39AM +0200, Greg Kroah-Hartman wrote:
> On Thu, May 21, 2020 at 07:05:44AM +0530, Pavan Kondeti wrote:
> > On Wed, May 20, 2020 at 08:18:58PM +0200, Greg Kroah-Hartman wrote:
> > > On Wed, May 20, 2020 at 05:25:09PM +0530, Pavankumar Kondeti wrote:
On Wed, May 20, 2020 at 08:18:58PM +0200, Greg Kroah-Hartman wrote:
> On Wed, May 20, 2020 at 05:25:09PM +0530, Pavankumar Kondeti wrote:
> > When kernel threads are created for later use, they will be in
> > TASK_UNINTERRUPTIBLE state until they are woken up. This results
> > in increased loadavg
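A minimal illustration of why pre-created kernel threads show up in loadavg (generic kthread usage, not code from the patch):

    #include <linux/kthread.h>
    #include <linux/err.h>

    static int worker_fn(void *data)
    {
            while (!kthread_should_stop())
                    schedule_timeout_interruptible(HZ);
            return 0;
    }

    static struct task_struct *precreated;

    static int __init precreate_worker(void)
    {
            /* kthread_create() leaves the new thread sleeping in
             * TASK_UNINTERRUPTIBLE inside kthread() until its first
             * wake_up_process(); uninterruptible sleepers are exactly
             * what loadavg counts, hence the inflated numbers. */
            precreated = kthread_create(worker_fn, NULL, "idle-worker");
            return PTR_ERR_OR_ZERO(precreated);
    }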
On Mon, May 11, 2020 at 09:23:01PM +0200, Vincent Guittot wrote:
> enqueue_task_fair() jumps to enqueue_throttle when cfs_rq_of(se) is
> throttled, which means that se can't be NULL and we can skip the test.
>
> Signed-off-by: Vincent Guittot
> ---
> kernel/sched/fair.c | 2 +-
> 1 file
On Mon, May 11, 2020 at 04:40:52PM +0100, Qais Yousef wrote:
> RT tasks by default run at the highest capacity/performance level. When
> uclamp is selected this default behavior is retained by enforcing the
> requested uclamp.min (p->uclamp_req[UCLAMP_MIN]) of the RT tasks to be
>
On Sun, May 10, 2020 at 05:16:28PM +0100, Valentin Schneider wrote:
>
> On 10/05/20 13:56, Pavankumar Kondeti wrote:
> > The intention of commit 96e74ebf8d59 ("sched/debug: Add task uclamp
> > values to SCHED_DEBUG procfs") was to print requested and effective
> > task uclamp values. The
On Fri, May 08, 2020 at 02:21:29PM +0100, Quentin Perret wrote:
> On Friday 08 May 2020 at 11:00:53 (+0530), Pavan Kondeti wrote:
> > > -#if defined(CONFIG_ENERGY_MODEL) &&
> > > defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
> > > +#if defined(CON
On Fri, May 08, 2020 at 04:45:16PM +0530, Parth Shah wrote:
> Hi Pavan,
>
> Thanks for going through this patch-set.
>
> On 5/8/20 2:03 PM, Pavan Kondeti wrote:
> > Hi Parth,
> >
> > On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote:
> >>
On Fri, May 08, 2020 at 04:49:04PM +0530, Parth Shah wrote:
> Hi Pavan,
>
> On 5/8/20 2:06 PM, Pavan Kondeti wrote:
> > On Thu, May 07, 2020 at 07:07:22PM +0530, Parth Shah wrote:
> >> Restrict the call to deeper idle states when the given CPU has been set for
On Fri, May 08, 2020 at 05:00:44PM +0530, Parth Shah wrote:
>
>
> On 5/8/20 2:10 PM, Pavan Kondeti wrote:
> > On Thu, May 07, 2020 at 07:07:20PM +0530, Parth Shah wrote:
> >> The "nr_lat_sensitive" per_cpu variable provides hints on the possible
> >>
On Thu, May 07, 2020 at 07:07:20PM +0530, Parth Shah wrote:
> The "nr_lat_sensitive" per_cpu variable provides hints on the possible
> number of latency-sensitive tasks occupying the CPU. This hints further
> helps in inhibiting the CPUIDLE governor from calling deeper IDLE states
> (next patches
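The mechanism described boils down to something like the following (a sketch using the names from the series; where exactly the CPUIDLE governor consumes the hint is an assumption):

    /* One hint counter per CPU, maintained by the scheduler. */
    DEFINE_PER_CPU(int, nr_lat_sensitive);

    /* Consulted from the idle path: while the CPU hosts latency-
     * sensitive tasks, deeper idle states are skipped. */
    static bool allow_deep_idle(int cpu)
    {
            return per_cpu(nr_lat_sensitive, cpu) == 0;
    }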
On Thu, May 07, 2020 at 07:07:22PM +0530, Parth Shah wrote:
> Restrict the call to deeper idle states when the given CPU has been set for
> the least latency requirements
>
> Signed-off-by: Parth Shah
> ---
> kernel/sched/idle.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>
Hi Parth,
On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote:
> Monitor tasks at:
> 1. wake_up_new_task() - forked tasks
>
> 2. set_task_cpu() - task migrations, Load balancer
>
> 3. __sched_setscheduler() - set/unset latency_nice value
> Increment the nr_lat_sensitive count on the CPU
On Thu, May 07, 2020 at 07:10:02PM +0100, Quentin Perret wrote:
> CPUFreq calls into sched_cpufreq_governor_change() when switching
> governors, which triggers a sched domain rebuild when entering or
> exiting schedutil.
>
> Move the function to sched/cpufreq.c to prepare the ground for the
>
Hi Quentin
On Thu, May 07, 2020 at 07:10:11PM +0100, Quentin Perret wrote:
> The IS_ENABLED() macro evaluates to true when an option is set to =y or
> =m. As such, it is a good fit for tristate options.
>
> In preparation for modularizing schedutil, change all the related ifdefs
> to use
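The difference in question, using the schedutil option as the example (standard kconfig macro behaviour):

    #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL           /* true only for =y  */
    /* ... */
    #endif

    #if IS_ENABLED(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)  /* true for =y or =m */
    /* ... */
    #endif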
On Fri, May 01, 2020 at 06:12:07PM +0200, Dietmar Eggemann wrote:
> On 30/04/2020 15:10, Pavan Kondeti wrote:
> > On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote:
> >> From: Luca Abeni
>
> [...]
>
> >> @@ -1653,10 +1654,19 @@ select_task_
On Mon, Apr 27, 2020 at 10:37:08AM +0200, Dietmar Eggemann wrote:
> From: Luca Abeni
>
> The current SCHED_DEADLINE (DL) scheduler uses a global EDF scheduling
> algorithm w/o considering CPU capacity or task utilization.
> This works well on homogeneous systems where DL tasks are guaranteed
>
On Wed, Apr 29, 2020 at 07:39:50PM +0200, Dietmar Eggemann wrote:
> On 27/04/2020 16:17, luca abeni wrote:
> > Hi Juri,
> >
> > On Mon, 27 Apr 2020 15:34:38 +0200
> > Juri Lelli wrote:
> >
> >> Hi,
> >>
> >> On 27/04/20 10:37, Dietmar Eggemann wrote:
> >>> From: Luca Abeni
> >>>
> >>> When a
On Mon, Apr 27, 2020 at 10:37:05AM +0200, Dietmar Eggemann wrote:
> Return the weight of the rd (root domain) span in case it is a subset
> of the cpu_active_mask.
>
> Continue to compute the number of CPUs over rd span and cpu_active_mask
> when in hotplug.
>
> Signed-off-by: Dietmar Eggemann
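The description corresponds to a helper of roughly this shape (reconstructed from the commit message; the real patch may differ in detail):

    static inline int dl_bw_cpus(int i)
    {
            struct root_domain *rd = cpu_rq(i)->rd;
            int cpus;

            /* Fast path: the whole rd span is active, so its weight
             * is the number of usable CPUs. */
            if (cpumask_subset(rd->span, cpu_active_mask))
                    return cpumask_weight(rd->span);

            /* Hotplug in progress: fall back to counting the CPUs in
             * both the rd span and cpu_active_mask. */
            cpus = 0;
            for_each_cpu_and(i, rd->span, cpu_active_mask)
                    cpus++;

            return cpus;
    }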
Hi Qais,
On Wed, Apr 29, 2020 at 01:30:57PM +0100, Qais Yousef wrote:
> Hi Pavan
>
> On 04/29/20 17:02, Pavan Kondeti wrote:
> > Hi Qais,
> >
> > On Tue, Apr 28, 2020 at 05:41:33PM +0100, Qais Yousef wrote:
> >
> > [...]
> >
> > >
>
Hi Qais,
On Tue, Apr 28, 2020 at 05:41:33PM +0100, Qais Yousef wrote:
[...]
>
> +static void uclamp_sync_util_min_rt_default(struct task_struct *p)
> +{
> + struct uclamp_se *uc_se = &p->uclamp_req[UCLAMP_MIN];
> +
> + if (unlikely(rt_task(p)) && !uc_se->user_defined)
> +
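The quoted hunk is cut short by the archive; based on the surrounding discussion the helper looks roughly like this (a sketch; the sysctl name is assumed from the series):

    static void uclamp_sync_util_min_rt_default(struct task_struct *p)
    {
            struct uclamp_se *uc_se = &p->uclamp_req[UCLAMP_MIN];

            /* Only RT tasks whose UCLAMP_MIN was never set by the user
             * follow the system-wide RT default. */
            if (unlikely(rt_task(p)) && !uc_se->user_defined)
                    uclamp_se_set(uc_se,
                                  sysctl_sched_uclamp_util_min_rt_default,
                                  false);
    }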
Hi Quentin,
On Fri, Sep 20, 2019 at 11:41:15AM +0200, Quentin Perret wrote:
> Hi Pavan,
>
> On Friday 20 Sep 2019 at 08:32:15 (+0530), Pavan Kondeti wrote:
> > Earlier, we are not checking the spare capacity for the prev_cpu. Now that
> > the
> > continue statement
Hi Quentin,
On Thu, Sep 12, 2019 at 11:44:04AM +0200, Quentin Perret wrote:
> From: Quentin Perret
>
> EAS computes the energy impact of migrating a waking task when deciding
> on which CPU it should run. However, the current approach is known to
> have a high algorithmic complexity, which can
On Mon, Sep 16, 2019 at 06:53:28AM +, KeMeng Shi wrote:
> Oops occur when running qemu on arm64:
> Unable to handle kernel paging request at virtual address 08effe40
> Internal error: Oops: 9607 [#1] SMP
> Process migration/0 (pid: 12, stack limit = 0x084e3736)
>
Hi Rafael/Thomas,
On Mon, Jun 3, 2019 at 10:03 AM Pavankumar Kondeti
wrote:
>
> When "deep" suspend is enabled, all CPUs except the primary CPU
> are hotplugged out. Since CPU hotplug is a costly operation,
> check if we have to abort the suspend in between each CPU
> hotplug. This would improve
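The proposal amounts to a check between individual hotplug operations, along these lines (a sketch of a freeze_secondary_cpus()-style loop; pm_wakeup_pending() is the usual way to detect an aborted suspend):

    for_each_online_cpu(cpu) {
            if (cpu == primary)
                    continue;

            /* A wakeup source fired while we were busy hotplugging:
             * stop early instead of paying for the remaining CPUs. */
            if (pm_wakeup_pending()) {
                    error = -EBUSY;
                    break;
            }

            error = _cpu_down(cpu, 1, CPUHP_OFFLINE);
            if (error)
                    break;
    }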
Hi Peter/Thomas,
On Tue, Oct 30, 2018 at 12:25 PM Pavankumar Kondeti
wrote:
>
> commit 3f5fe9fef5b2 ("sched/debug: Fix task state recording/printout")
> tried to fix the problem introduced by a previous commit efb40f588b43
> ("sched/tracing: Fix trace_sched_switch task-state printing"). However
Hi Vincent,
On Fri, Oct 26, 2018 at 06:11:43PM +0200, Vincent Guittot wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6806c27..7a69673 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -674,9 +674,8 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct
Hi Vincent,
Thanks for the detailed explanation.
On Tue, Oct 23, 2018 at 02:15:08PM +0200, Vincent Guittot wrote:
> Hi Pavan,
>
> On Tue, 23 Oct 2018 at 07:59, Pavan Kondeti wrote:
> >
> > Hi Vincent,
> >
> > On Fri, Oct 19, 2018 at 06:17:
Hi Vincent,
On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
>
> /*
> + * The clock_pelt scales the time to reflect the effective amount of
> + * computation done during the running delta time but then sync back to
> + * clock_task when rq is idle.
> + *
> + *
> + * absolute
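The comment is truncated here; the scaling it describes ends up looking roughly like this in update_rq_clock_pelt() (sketch; the exact arch_scale_cpu_capacity() signature varies by kernel version):

    /* Scale the elapsed wall time by current CPU capacity and current
     * frequency so that clock_pelt advances at the pace of the
     * "effective" computation actually done. */
    delta = cap_scale(delta, arch_scale_cpu_capacity(cpu_of(rq)));
    delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));

    rq->clock_pelt += delta;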
Hi Frederic,
On Wed, Oct 17, 2018 at 02:26:02AM +0200, Frederic Weisbecker wrote:
> Hi Pavan,
>
> On Tue, Oct 16, 2018 at 09:45:52AM +0530, Pavan Kondeti wrote:
> > Hi Frederic,
> >
> > On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
>
Hi Frederic,
On Thu, Oct 11, 2018 at 01:12:16AM +0200, Frederic Weisbecker wrote:
> From: Frederic Weisbecker
>
> Make do_softirq() re-entrant and allow a vector, being either processed
> or disabled, to be interrupted by another vector. This way a vector
> won't be able to monopolize the CPU
On Mon, Aug 06, 2018 at 05:39:44PM +0100, Patrick Bellasi wrote:
> Clamp values cannot be tuned at the root cgroup level. Moreover, because
> of the delegation model requirements and how the parent clamps
> propagation works, if we want to enable subgroups to set a non null
> util.min, we need to
On Mon, Aug 06, 2018 at 05:39:41PM +0100, Patrick Bellasi wrote:
> In order to properly support hierarchical resources control, the cgroup
> delegation model requires that attribute writes from a child group never
> fail but still are (potentially) constrained based on parent's assigned
>
On Mon, Aug 06, 2018 at 05:39:34PM +0100, Patrick Bellasi wrote:
> Utilization clamping requires each CPU to know which clamp values are
> assigned to tasks that are currently RUNNABLE on that CPU.
> Multiple tasks can be assigned the same clamp value and tasks with
> different clamp values can be
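For context, the per-CPU bookkeeping this leads to is a small refcount per tracked clamp value, roughly (sketch; the field packing follows what eventually landed upstream and may differ from this revision of the series):

    /* How many RUNNABLE tasks on this CPU currently request "value". */
    struct uclamp_bucket {
            unsigned long value : bits_per(SCHED_CAPACITY_SCALE);
            unsigned long tasks : BITS_PER_LONG - bits_per(SCHED_CAPACITY_SCALE);
    };

    struct uclamp_rq {
            unsigned int            value;
            struct uclamp_bucket    bucket[UCLAMP_BUCKETS];
    };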
Hi Prasad,
On Wed, Aug 01, 2018 at 01:07:03AM -0700, Sodagudi Prasad wrote:
> On 2018-07-30 14:07, Peter Zijlstra wrote:
> >On Mon, Jul 30, 2018 at 10:12:43AM -0700, Sodagudi Prasad wrote:
> >>How about including below change as well? Currently, there is
> >>no way to
> >>identify thread
Hi Issac,
On Fri, Jun 29, 2018 at 01:55:12PM -0700, Isaac J. Manjarres wrote:
> When cpu_stop_queue_two_works() begins to wake the stopper
> threads, it does so without preemption disabled, which leads
> to the following race condition:
>
> The source CPU calls cpu_stop_queue_two_works(), with
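The fix that eventually went in keeps the queueing CPU from being preempted between the two wake-ups, roughly (sketch of the tail of cpu_stop_queue_two_works()):

    /* ... both works queued under the stopper locks, wakeq filled ... */

    preempt_disable();      /* don't let the waker be preempted or migrated
                             * between waking the two stopper threads */
    wake_up_q(&wakeq);
    preempt_enable();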
On Tue, Jun 26, 2018 at 02:28:26PM -0700, Isaac J. Manjarres wrote:
> When invoking migrate_swap(), stop_two_cpus() swaps the
> source and destination CPU IDs if the destination CPU
> ID is greater than the source CPU ID. This leads to the
> following race condition:
>
> The source CPU invokes
Hi Peter,
I have a question about wakeup granularity calculation while checking
if the waking fair task can preempt the current running fair task.
static unsigned long
wakeup_gran(struct sched_entity *curr, struct sched_entity *se)
{
unsigned long gran = sysctl_sched_wakeup_granularity;
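For readers without the tree at hand, the function and the check that consumes it look roughly like this (sketch from fair.c of that era; details vary across versions):

    static unsigned long
    wakeup_gran(struct sched_entity *curr, struct sched_entity *se)
    {
            unsigned long gran = sysctl_sched_wakeup_granularity;

            /* Scale by the waking task's weight: a heavy waker gets a
             * smaller granularity (preempts more easily), a light one
             * a larger granularity. */
            return calc_delta_fair(gran, se);
    }

    static int
    wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
    {
            s64 gran, vdiff = curr->vruntime - se->vruntime;

            if (vdiff <= 0)
                    return -1;

            gran = wakeup_gran(curr, se);
            if (vdiff > gran)
                    return 1;

            return 0;
    }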
On Mon, May 21, 2018 at 03:25:02PM +0100, Quentin Perret wrote:
>
> +/*
> + * Returns the util of "cpu" if "p" wakes up on "dst_cpu".
> + */
> +static unsigned long cpu_util_next(int cpu, struct task_struct *p, int
> dst_cpu)
> +{
> + unsigned long util, util_est;
> + struct cfs_rq
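The hunk is cut off; the intent stated in its comment boils down to something like this much-simplified sketch (the real patch also folds in util_est):

    static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
    {
            unsigned long util = READ_ONCE(cpu_rq(cpu)->cfs.avg.util_avg);

            /* Drop p's contribution if it currently counts on this CPU
             * but will wake elsewhere; add it if it will wake here. */
            if (task_cpu(p) == cpu && dst_cpu != cpu)
                    util -= min(util, task_util(p));
            else if (task_cpu(p) != cpu && dst_cpu == cpu)
                    util += task_util(p);

            return min(util, capacity_orig_of(cpu));
    }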
On Mon, May 21, 2018 at 03:25:05PM +0100, Quentin Perret wrote:
> +static void start_eas_workfn(struct work_struct *work);
> +static DECLARE_WORK(start_eas_work, start_eas_workfn);
> +
> static int
> init_cpu_capacity_callback(struct notifier_block *nb,
> unsigned
On Tue, Jun 19, 2018 at 08:57:23AM +0100, Quentin Perret wrote:
> Hi Pavan,
>
> On Tuesday 19 Jun 2018 at 10:36:01 (+0530), Pavan Kondeti wrote:
> > On Mon, May 21, 2018 at 03:25:04PM +0100, Quentin Perret wrote:
> >
> >
> >
> > > + if
On Mon, May 21, 2018 at 03:25:01PM +0100, Quentin Perret wrote:
> util_est_enqueue(&rq->cfs, p);
> hrtick_update(rq);
> @@ -8121,11 +8144,12 @@ static bool update_nohz_stats(struct rq *rq, bool
> force)
> * @local_group: Does group contain this_cpu.
> * @sgs: variable to hold the
On Mon, May 21, 2018 at 03:25:04PM +0100, Quentin Perret wrote:
> + if (cpumask_test_cpu(prev_cpu, &p->cpus_allowed))
> + prev_energy = best_energy = compute_energy(p, prev_cpu);
> + else
> + prev_energy = best_energy = ULONG_MAX;
> +
> +
Hi Nick,
On Sun, Apr 15, 2018 at 11:31:49PM +1000, Nicholas Piggin wrote:
> This is a quick hack for comments, but I've always wondered --
> if we have a short term polling idle states in cpuidle for performance
> -- why not skip the context switch and entry into all the idle states,
> and just
On Wed, Jan 24, 2018 at 07:31:38PM +, Patrick Bellasi wrote:
>
> > > + /*
> > > +* These are the main cases covered:
> > > +* - if *p is the only task sleeping on this CPU, then:
> > > +* cpu_util (== task_util) > util_est (== 0)
> > > +* and thus
Hi Patrick,
On Tue, Jan 23, 2018 at 06:08:46PM +, Patrick Bellasi wrote:
> static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
> {
> - unsigned long util, capacity;
> + long util, util_est;
>
> /* Task has no contribution or is new */
> if (cpu !=
On Sun, Jan 21, 2018 at 05:11:17PM +0100, Frederic Weisbecker wrote:
> On Sat, Jan 20, 2018 at 02:11:39PM +0530, Pavan Kondeti wrote:
>
> Hi Pavan,
>
>
> > I have couple questions/comments.
> >
> > (1) Since the work is queued on a bounded per-cpu worke
Hi Frederic,
On Fri, Jan 19, 2018 at 04:46:12PM +0100, Frederic Weisbecker wrote:
> Some softirq vectors can be more CPU hungry than others. Especially
> networking may sometimes deal with packet storm and need more CPU than
> IRQ tail can offer without inducing scheduler latencies. In this case
Hi Steve,
On Fri, Jan 19, 2018 at 02:51:15PM -0500, Steven Rostedt wrote:
> On Sat, 20 Jan 2018 00:27:56 +0530
> Pavan Kondeti <pkond...@codeaurora.org> wrote:
>
> > Hi Steve,
> >
> > Thanks for the patch.
> >
> > On Fri, Jan 19, 2018 at 01:12:54PM
Hi Steve,
Thanks for the patch.
On Fri, Jan 19, 2018 at 01:12:54PM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 13:11:21 -0500
> Steven Rostedt wrote:
>
> > void rto_push_irq_work_func(struct irq_work *work)
> > {
> > + struct root_domain *rd =
> > + container_of(work,
On Fri, Jan 19, 2018 at 01:11:21PM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 23:16:17 +0530
> Pavan Kondeti wrote:
>
> > I am thinking of another problem because of the race between
> > rto_push_irq_work_func() and rq_attach_root() where rq->rd is modified.
>
On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 14:53:05 +0530
> Pavan Kondeti wrote:
>
> > I am seeing "spinlock already unlocked" BUG for rd->rto_lock on a 4.9
> > stable kernel based system. This issue is observed only
Hi Steven,
On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 14:53:05 +0530
> Pavan Kondeti wrote:
>
> > I am seeing "spinlock already unlocked" BUG for rd->rto_lock on a 4.9
> > stable kernel based system. This issue is
Hi Steven,
> /* Called from hardirq context */
> -static void try_to_push_tasks(void *arg)
> +void rto_push_irq_work_func(struct irq_work *work)
> {
> - struct rt_rq *rt_rq = arg;
> - struct rq *rq, *src_rq;
> - int this_cpu;
> + struct rq *rq;
> int cpu;
>
> -
On Mon, Sep 4, 2017 at 7:48 PM, Patrick Bellasi wrote:
> On 29-Aug 10:15, Pavan Kondeti wrote:
>> On Fri, Aug 25, 2017 at 3:50 PM, Patrick Bellasi
>> wrote:
>> > When the scheduler looks at the CPU utlization, the current PELT value
>> > for a CPU is returned st
On Fri, Aug 25, 2017 at 3:50 PM, Patrick Bellasi
wrote:
> The util_avg signal computed by PELT is too variable for some use-cases.
> For example, a big task waking up after a long sleep period will have its
> utilization almost completely decayed. This introduces some latency before
> schedutil
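As a rough worked example of that decay (standard PELT numbers, not figures from the patch): util_avg halves about every 32ms (the PELT period is ~1ms and y^32 = 1/2), so a task that slept for 100ms wakes up with roughly 0.5^(100/32), i.e. about 11%, of the utilization it had before sleeping. That is why a "big" task can look small to the scheduler right after a long sleep, which is the kind of variability this series sets out to smooth.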