Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-15 Thread Phil Auld
On Wed, May 15, 2024 at 01:06:13PM +0100 Qais Yousef wrote: > On 05/15/24 07:20, Phil Auld wrote: > > On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote: > > > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote: > > > > > > > > Hi Q

Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-15 Thread Phil Auld
On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote: > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote: > > > > Hi Qais, > > > > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote: > > > rt_task() checks if a task has RT priority.

Re: [PATCH] sched/rt: Clean up usage of rt_task()

2024-05-14 Thread Phil Auld
stays as it was but this change makes sense as you have written it too. Cheers, Phil > > No functional changes were intended. > > [1] > https://lore.kernel.org/lkml/20240506100509.gl40...@noisy.programming.kicks-ass.net/ > > Signed-off-by: Qais Yousef > ---

Re: [PATCH 2/2] sched/fair: Relax task_hot() for misfit tasks

2021-04-19 Thread Phil Auld
On Mon, Apr 19, 2021 at 06:17:47PM +0100 Valentin Schneider wrote: > On 19/04/21 08:59, Phil Auld wrote: > > On Fri, Apr 16, 2021 at 10:43:38AM +0100 Valentin Schneider wrote: > >> On 15/04/21 16:39, Rik van Riel wrote: > >> > On Thu, 2021-04-15 at 18:58 +

Re: [PATCH 2/2] sched/fair: Relax task_hot() for misfit tasks

2021-04-19 Thread Phil Auld
are) && > >> + !migrate_degrades_capacity(p, env)) > >> + tsk_cache_hot = 0; > > > > ... I'm starting to wonder if we should not rename the > > tsk_cache_hot variable to something else to make this > > code more readable. Probably in another patch :) > > > > I'd tend to agree, but naming is hard. "migration_harmful" ? I thought Rik meant tsk_cache_hot, for which I'd suggest at least buying a vowel and putting an 'a' in there :) Cheers, Phil > > > -- > > All Rights Reversed. > --

Re: [PATCH v5] audit: log nftables configuration change events once per table

2021-04-01 Thread Phil Sutter
tween running or stopped auditd, at least for large rulesets. Individual calls suffer from added audit logging, but that's expected of course. Tested-by: Phil Sutter Thanks, Phil

Re: [PATCH] audit: log nftables configuration change events once per table

2021-03-19 Thread Phil Sutter
On Thu, Mar 18, 2021 at 02:37:03PM -0400, Richard Guy Briggs wrote: > On 2021-03-18 17:30, Phil Sutter wrote: [...] > > Why did you leave the object-related logs in place? They should reappear > > at commit time just like chains and sets for instance, no? > > There are

Re: [PATCH] audit: log nftables configuration change events once per table

2021-03-18 Thread Phil Sutter
table->handle); > + net->nft.base_seq); > > audit_log_nfcfg(buf, > family, Why did you leave the object-related logs in place? They should reappear at commit time just like chains and sets for instance, no? Thanks, Phil

Re: [PATCH v4 1/4] sched/fair: Introduce primitives for CFS bandwidth burst

2021-03-18 Thread Phil Auld
this sense, I suggest limit burst buffer to 16 times of quota or around. > That should be enough for users to > improve tail latency caused by throttling. And users might choose a smaller > one or even none, if the interference > is unacceptable. What do you think? > Having quotas that can regularly be exceeded by 16 times seems to make the concept of a quota meaningless. I'd have thought a burst would be some small percentage. What if several such containers burst at the same time? Can't that lead to overcommit that can affect other well-behaved containers? Cheers, Phil --
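Phil's worry can be made concrete with a toy model of the proposed burst buffer (plain Python, not kernel code; the accounting here is a simplified assumption, not the patch's actual implementation): unused quota is banked up to `burst` and spent in later periods, so single periods may exceed the quota even though the long-run average stays bounded by it.

```python
def run_periods(quota, burst, demand):
    """Toy model of a CFS bandwidth 'burst' buffer.

    Each period grants `quota` units of runtime; unused runtime is
    banked, capped at `burst`, and may be spent in later periods.
    `demand` is the runtime each period asks for.  Returns the runtime
    actually granted per period.
    """
    banked = 0
    granted = []
    for want in demand:
        available = quota + banked
        used = min(want, available)
        granted.append(used)
        # carry over whatever was not used, capped at the burst buffer
        banked = min(available - used, burst)
    return granted
```

With `quota=10` and `burst=20`, two idle periods let a later period consume 25 units at once, yet the total over four periods never exceeds `4 * quota` — which is exactly the overcommit question Phil raises when several such containers burst simultaneously.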

Re: [PATCH ghak124 v3] audit: log nftables configuration change events

2021-02-12 Thread Phil Sutter
hen? I guess Florian sufficiently illustrated how this would be implemented. > Hope this helps... It does, thanks a lot for the information! Thanks, Phil

Re: [PATCH ghak124 v3] audit: log nftables configuration change events

2021-02-11 Thread Phil Sutter
. Unlike nft monitor, auditd is not designed to be disabled "at will". So turning it off for performance-critical workloads is no option. Cheers, Phil

Re: [RFC/PATCH v2 09/16] soc: bcm: bcm2835-power: Add support for BCM2711's Argon ASB

2021-02-09 Thread Phil Elwell
Nicolas, On Tue, 9 Feb 2021 at 14:00, Nicolas Saenz Julienne wrote: > > On Tue, 2021-02-09 at 13:19 +, Phil Elwell wrote: > > Hi Nicolas, > > > > On Tue, 9 Feb 2021 at 13:00, Nicolas Saenz Julienne > > wrote: > > > > > > In BCM2711 the n

Re: [RFC/PATCH v2 09/16] soc: bcm: bcm2835-power: Add support for BCM2711's Argon ASB

2021-02-09 Thread Phil Elwell
struct platform_device > *pdev) > power->dev = dev; > power->base = pm->base; > power->rpivid_asb = pm->rpivid_asb; > + power->argon_asb = pm->argon_asb; > > - id = ASB_READ(ASB_AXI_BRDG_ID); > + id = ASB_READ(ASB_AXI_BRDG_ID, false); > if (id != 0x62726467 /* "BRDG" */) { > - dev_err(dev, "ASB register ID returned 0x%08x\n", id); > + dev_err(dev, "RPiVid ASB register ID returned 0x%08x\n", id); > return -ENODEV; > } > > + if (pm->argon_asb) { > + id = ASB_READ(ASB_AXI_BRDG_ID, true); > + if (id != 0x62726467 /* "BRDG" */) { > + dev_err(dev, "Argon ASB register ID returned > 0x%08x\n", id); > + return -ENODEV; > + } > + } > + Surely these are the same register. Is this the result of a bad merge? Thanks, Phil

Re: [PATCH 2/2] audit: show (grand)parents information of an audit context

2021-02-03 Thread Phil Zhang (xuanyzha)
the audit log. But we'd like to hear alternatives. On Wed, 2021-02-03 at 18:57 +, Daniel Walker (danielwa) wrote: > On Tue, Feb 02, 2021 at 04:44:47PM -0500, Paul Moore wrote: > > On Tue, Feb 2, 2021 at 4:29 PM Daniel Walker < > > danie...@cisco.com > > > wr

Re: [PATCH] scsi: megaraid_sas: Fix MEGASAS_IOC_FIRMWARE regression

2021-01-04 Thread Phil Oester
On Tue, Jan 05, 2021 at 12:41:04AM +0100, Arnd Bergmann wrote: > Phil Oester reported that a fix for a possible buffer overrun that I > sent caused a regression that manifests in this output: > > Event Message: A PCI parity error was detected on a component at bus 0 > devi

Re: [PATCH 2/3] scsi: megaraid_sas: check user-provided offsets

2021-01-04 Thread Phil Oester
atch and it resolves the regression. It does not trigger the warning message you added. Phil

Re: [PATCH 2/3] scsi: megaraid_sas: check user-provided offsets

2020-12-30 Thread Phil Oester
ice 5 function 0. Severity: Critical Message ID: PCI1308 I reverted this single patch and the errors went away. Thoughts? Phil Oester

Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-09 Thread Phil Auld
On Mon, Nov 09, 2020 at 03:38:15PM + Mel Gorman wrote: > On Mon, Nov 09, 2020 at 10:24:11AM -0500, Phil Auld wrote: > > Hi, > > > > On Fri, Nov 06, 2020 at 04:00:10PM + Mel Gorman wrote: > > > On Fri, Nov 06, 2020 at 02:33:56PM +0100, Vincent Guittot wro

Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-09 Thread Phil Auld
t gen servers. As I mentioned earlier in the thread we have all the 5.9 patches in this area in our development distro kernel (plus a handful from 5.10-rc) and don't see the same effect we see here between 5.8 and 5.9 caused by this patch. But there are other variables there. We've queued up a comparison between that kernel and one with just the patch in question reverted. That may tell us if there is an effect that is otherwise being masked. Jirka - feel free to correct me if I mis-summarized your results :) Cheers, Phil --

Re: [PATCH v1] sched/fair: update_pick_idlest() Select group with lowest group_util when idle_cpus are equal

2020-11-02 Thread Phil Auld
and some minor overall perf gains in a few places, but generally did not see any difference from before the commit mentioned here. I'm wondering, Mel, if you have compared 5.10-rc1? We don't have everything though so it's possible something we have not pulled back is interacting with this p

Re: [PATCH] sched/fair: remove the spin_lock operations

2020-11-02 Thread Phil Auld
estore(_b->lock, flags); It's just a leftover. I agree that if it was there for some other purpose that it would really need a comment. In this case, it's an artifact of patch-based development I think. Cheers, Phil > avid > > - > Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 > 1PT, UK > Registration No: 1397386 (Wales) > --

Re: [PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread Phil Auld
5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct > cfs_bandwidth *cfs_b) > return; > > distribute_cfs_runtime(cfs_b); > - > - raw_spin_lock_irqsave(_b->lock, flags); > - raw_spin_unlock_irqrestore(_b->lock, flags); > } > > /* > -- > 2.29.0 > > Nice :) Reviewed-by: Phil Auld --

Re: [PATCH 0/8] Style and small fixes for core-scheduling

2020-10-28 Thread Phil Auld
advise on any corrections or improvements that can be > made. Thanks for these. I wonder, though, if it would not make more sense to post these changes as comments on the original as-yet-unmerged patches that you are fixing up? Cheers, Phil > > John B. Wyatt IV (8): > sched: Correct

Re: default cpufreq gov, was: [PATCH] sched/fair: check for idle core

2020-10-22 Thread Phil Auld
air finger pointing at one company's test > team. If at least two distos check it out and it still goes wrong, at > least there will be shared blame :/ > > > > Other distros assuming they're watching can nominate their own victim. > > > > But no other victims had been nominated

Re: default cpufreq gov, was: [PATCH] sched/fair: check for idle core

2020-10-22 Thread Phil Auld
nt to SLAB in terms of performance. Block > > multiqueue also had vaguely similar issues before the default changes > > and a period of time before it was removed removed (example whinging mail > > https://lore.kernel.org/lkml/20170803085115.r2jfz2lofy5sp...@techsingularity.net/) > > It's schedutil's turn :P > > > Agreed. I'd like the option to switch back if we make the default change. It's on the table and I'd like to be able to go that way. Cheers, Phil --

Re: [PATCH] [PATCH] of_reserved_mem: Increase the number of reserved regions

2020-10-05 Thread Phil Chang
wrote: > Hi, Phil: > > Phil Chang wrote on Sun, Oct 4, 2020 at 1:51 PM: > > > > Certain SoCs need to support large amount of reserved memory > > regions, especially to follow the GKI rules from Google. > > In MTK new SoC requires more than 68 regions of reserved memory >

[PATCH] [PATCH] of_reserved_mem: Increase the number of reserved regions

2020-10-03 Thread Phil Chang
-by: Joe Liu Signed-off-by: YJ Chiang Signed-off-by: Alix Wu Signed-off-by: Phil Chang --- drivers/of/of_reserved_mem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 46b9371c8a33..595f0741dcef 100644

Re: [PATCH] sched/fair: Remove the force parameter of update_tg_load_avg()

2020-09-25 Thread Phil Auld
update_tg_load_avg(cfs_rq); > propagate_entity_cfs_rq(se); > } > > @@ -10805,7 +10804,7 @@ static void attach_entity_cfs_rq(struct sched_entity > *se) > /* Synchronize entity with its cfs_rq */ > update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : > SKIP_AGE_LOAD); > attach_entity_load_avg(cfs_rq, se); > - update_tg_load_avg(cfs_rq, false); > + update_tg_load_avg(cfs_rq); > propagate_entity_cfs_rq(se); > } > > -- > 2.17.1 > LGTM, Reviewed-by: Phil Auld --

Re: [RFC PATCH v2] sched/fair: select idle cpu from idle cpumask in sched domain

2020-09-24 Thread Phil Auld
On Thu, Sep 24, 2020 at 10:43:12AM -0700 Tim Chen wrote: > > > On 9/24/20 10:13 AM, Phil Auld wrote: > > On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote: > >> > >> > >> On 9/22/20 12:14 AM, Vincent Guittot wrote: > >> > >>>

Re: [RFC PATCH v2] sched/fair: select idle cpu from idle cpumask in sched domain

2020-09-24 Thread Phil Auld
On Thu, Sep 24, 2020 at 09:37:33AM -0700 Tim Chen wrote: > > > On 9/22/20 12:14 AM, Vincent Guittot wrote: > > >> > > And a quick test with hackbench on my octo cores arm64 gives for 12 > > Vincent, > > Is it octo (=10) or octa (=8) cores on a single socket for your system? In what

Re: Re: [PATCH] [PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-09-24 Thread Phil Chang
Actually, in an embedded system with 3GB of memory, the memory bus width is not the same across the 3GB. (The first 2GB is 48-bit wide, and the latter 1GB is 16-bit wide.) For memory-throughput reasons, the hardware IPs need their memory allocated from the first 2GB. And that is why we

Re: [RFC -V2] autonuma: Migrate on fault among multiple bound nodes

2020-09-22 Thread Phil Auld
, > gfp_zone(GFP_HIGHUSER), > @@ -2516,6 +2520,7 @@ int mpol_misplaced(struct page *page, struct > vm_area_struct *vma, unsigned long > > /* Migrate the page towards the node whose CPU is referencing it */ > if (pol->flags & MPOL_F_MORON) { > +moron: > polnid = thisnid; > > if (!should_numa_migrate_memory(current, page, curnid, thiscpu)) > -- > 2.28.0 > Cheers, Phil --

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-18 Thread Phil Auld
On Fri, Sep 18, 2020 at 12:39:28PM -0400 Phil Auld wrote: > Hi Peter, > > On Mon, Sep 14, 2020 at 01:42:02PM +0200 pet...@infradead.org wrote: > > On Mon, Sep 14, 2020 at 12:03:36PM +0200, Vincent Guittot wrote: > > > Vincent Guittot (4): > > > sched/fair: relax

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-18 Thread Phil Auld
l imbalance threshold > > sched/fair: minimize concurrent LBs between domain level > > sched/fair: reduce busy load balance interval > > I see nothing objectionable there, a little more testing can't hurt, but > I'm tempted to apply them. > > Phil, Mel, any chance

[PATCH] [PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-09-16 Thread Phil Chang
of architecture Signed-off-by: Alix Wu Signed-off-by: YJ Chiang Signed-off-by: Phil Chang --- Hi supplement the reason of this usage. Thanks. .../admin-guide/kernel-parameters.txt | 3 +++ arch/arm64/include/asm/memory.h | 2 ++ arch/arm64/mm/init.c

[PATCH] [PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-09-16 Thread Phil Chang
this patch allowing the DMA32 zone be configurable in ARM64. Signed-off-by: Alix Wu Signed-off-by: YJ Chiang Signed-off-by: Phil Chang --- For some devices, the main memory split into 2 part due to the memory architecture, the efficient and less inefficient part. One of the use case is fine

[PATCH] [PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-09-15 Thread Phil Chang
Allowing the DMA32 zone be configurable in ARM64 but at most 4Gb. Signed-off-by: Alix Wu Signed-off-by: YJ Chiang Signed-off-by: Phil Chang --- .../admin-guide/kernel-parameters.txt | 3 ++ arch/arm64/include/asm/memory.h | 2 + arch/arm64/mm/init.c

Re: [PATCH 0/4] sched/fair: Improve fairness between cfs tasks

2020-09-14 Thread Phil Auld
threshold > > sched/fair: minimize concurrent LBs between domain level > > sched/fair: reduce busy load balance interval > > I see nothing objectionable there, a little more testing can't hurt, but > I'm tempted to apply them. > > Phil, Mel, any chance you can run th

Re: [PATCH v2] sched/debug: Add new tracepoint to track cpu_capacity

2020-09-08 Thread Phil Auld
Hi Qais, On Mon, Sep 07, 2020 at 12:02:24PM +0100 Qais Yousef wrote: > On 09/02/20 09:54, Phil Auld wrote: > > > > > > I think this decoupling is not necessary. The natural place for those > > > scheduler trace_event based on trace_points extension files i

Re: Requirements to control kernel isolation/nohz_full at runtime

2020-09-03 Thread Phil Auld
"Cpusets provide a Linux kernel mechanism to constrain which CPUs and > > Memory Nodes are used by a process or set of processes. > > > > The Linux kernel already has a pair of mechanisms to specify on which > > CPUs a task may be scheduled (sched_setaffinity) and on which Memory > > Nodes it may obtain memory (mbind, set_mempolicy). > > > > Cpusets extends these two mechanisms as follows:" > > > > The isolation flags do not necessarily have anything to do with > > tasks, but with CPUs: a given feature is disabled or enabled on a > > given CPU. > > No? > > One cpumask per feature, implemented separately in sysfs, also > seems OK (modulo documentation about the RCU update and users > of the previous versions). > > This is what is being done for rcu_nocbs= already... > Exclusive cpusets are already used to control scheduler load balancing on a group of cpus. It seems to me that this is the same idea and is part of the isolation concept. Having a toggle for each subsystem/feature in cpusets could provide the needed userspace API. Under the covers it might be implemented as twiddling various cpumasks. We need to be shifting to managing load balancing with cpusets anyway. Cheers, Phil --
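For reference, the existing mechanism Phil points to — controlling scheduler load balancing through cpusets — looks roughly like this today (cgroup v1 cpuset interface; the mount point, cpuset name, and CPU list are illustrative assumptions, and extra steps such as CPU exclusivity may be required on a real system):

```shell
# Sketch only: carve CPUs 2-3 out of scheduler load balancing
# via the cgroup v1 cpuset interface.
mkdir /sys/fs/cgroup/cpuset/isolated
echo 2-3 > /sys/fs/cgroup/cpuset/isolated/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/isolated/cpuset.mems
# Disable load balancing within this cpuset; the root cpuset's
# cpuset.sched_load_balance must also be 0 for a sched domain
# to actually be detached.
echo 0   > /sys/fs/cgroup/cpuset/isolated/cpuset.sched_load_balance
# Move the current shell into the isolated set
echo $$  > /sys/fs/cgroup/cpuset/isolated/tasks
```

The proposal in the thread would extend this pattern with one toggle per isolation feature (nohz_full, RCU offload, ...) instead of a separate sysfs cpumask per feature.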

Re: [PATCH v2] sched/debug: Add new tracepoint to track cpu_capacity

2020-09-02 Thread Phil Auld
On Wed, Sep 02, 2020 at 12:44:42PM +0200 Dietmar Eggemann wrote: > + Phil Auld > Thanks Dietmar. > On 28/08/2020 19:26, Qais Yousef wrote: > > On 08/28/20 19:10, Dietmar Eggemann wrote: > >> On 28/08/2020 12:27, Qais Yousef wrote: > >>> On 08/28/20 10

Re: [PATCH 2/4] i2c: at91: implement i2c bus recovery

2020-08-25 Thread Phil Reid
On 25/08/2020 21:28, Wolfram Sang wrote: Hi Phil, yes, this thread is old but a similar issue came up again... On Fri, Oct 25, 2019 at 09:14:00AM +0800, Phil Reid wrote: So at the beginning of a new transfer, we should check if SDA (or SCL?) is low and, if it's true, only then we should

Re: Re: [PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-08-16 Thread Phil Chang
>> this patch allowing the arm64 DMA zone be configurable. >> >> Signed-off-by: Alix Wu >> Signed-off-by: YJ Chiang >> Signed-off-by: Phil Chang >> --- >> Hi >> >> For some devices, the main memory split into 2 part due to the memory

[tip: sched/urgent] sched: Fix use of count for nr_running tracepoint

2020-08-06 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/urgent branch of tip: Commit-ID: a1bd06853ee478d37fae9435c5521e301de94c67 Gitweb: https://git.kernel.org/tip/a1bd06853ee478d37fae9435c5521e301de94c67 Author: Phil Auld AuthorDate: Wed, 05 Aug 2020 16:31:38 -04:00 Committer

[PATCH] sched: Fix use of count for nr_running tracepoint

2020-08-05 Thread Phil Auld
The count field is meant to tell if an update to nr_running is an add or a subtract. Make it do so by adding the missing minus sign. Fixes: 9d246053a691 ("sched: Add a tracepoint to track rq->nr_running") Signed-off-by: Phil Auld --- kernel/sched/sched.h | 2 +- 1 file changed
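The one-line fix described here can be sketched as a Python model of the two helpers (the names follow the kernel's `add_nr_running()`/`sub_nr_running()`, but the data structures and tracepoint hook are illustrative, not the kernel's):

```python
trace_log = []

def trace_update_nr_running(nr_running, change):
    # Stand-in for the trace_sched_update_nr_running_tp tracepoint:
    # records the new count and the signed delta.
    trace_log.append((nr_running, change))

def add_nr_running(rq, count):
    rq["nr_running"] += count
    trace_update_nr_running(rq["nr_running"], count)

def sub_nr_running(rq, count):
    rq["nr_running"] -= count
    # The fix: report the change as a negative delta.  Passing `count`
    # unsigned made a subtract indistinguishable from an add.
    trace_update_nr_running(rq["nr_running"], -count)
```

With the minus sign in place, a consumer of the trace can tell adds from subtracts by the sign of the second field alone.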

[PATCH] ARM64: Setup DMA32 zone size by bootargs

2020-08-03 Thread Phil Chang
this patch allowing the arm64 DMA zone be configurable. Signed-off-by: Alix Wu Signed-off-by: YJ Chiang Signed-off-by: Phil Chang --- Hi For some devices, the main memory split into 2 part due to the memory architecture, the efficient and less inefficient part. One of the use case is fine

[tip: sched/core] sched: Add a tracepoint to track rq->nr_running

2020-07-09 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/core branch of tip: Commit-ID: 9d246053a69196c7c27068870e9b4b66ac536f68 Gitweb: https://git.kernel.org/tip/9d246053a69196c7c27068870e9b4b66ac536f68 Author: Phil Auld AuthorDate: Mon, 29 Jun 2020 15:23:03 -04:00 Committer

Re: [RFC][PATCH] sched: Better document ttwu()

2020-07-02 Thread Phil Auld
not be > - * reordered with p->state check below. This pairs with mb() in > - * set_current_state() the waiting thread does. > + * reordered with p->state check below. This pairs with smp_store_mb() > + * in set_current_state() that the waiting thread does. >

Re: [RFC PATCH 00/13] Core scheduling v5

2020-06-30 Thread Phil Auld
ling and just > use that for tagging. (No need to even have a tag file, just adding/removing > to/from CGroup will tag). > ... this could be an interesting approach. Then the cookie could still be the cgroup address as is and there would be no need for the prctl. At least so it seems.

Re: [PATCH v2] Sched: Add a tracepoint to track rq->nr_running

2020-06-29 Thread Phil Auld
nts are added to add_nr_running() and sub_nr_running() which are in kernel/sched/sched.h. In order to avoid CREATE_TRACE_POINTS in the header a wrapper call is used and the trace/events/sched.h include is moved before sched.h in kernel/sched/core. Signed-off-by: Phil Auld CC: Qais Yousef CC: Ingo Mol

Re: [PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-23 Thread Phil Auld
Hi Qais, On Mon, Jun 22, 2020 at 01:17:47PM +0100 Qais Yousef wrote: > On 06/19/20 10:11, Phil Auld wrote: > > Add a bare tracepoint trace_sched_update_nr_running_tp which tracks > > ->nr_running CPU's rq. This is used to accurately trace this data and > > provide a vi

Re: [PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-19 Thread Phil Auld
On Fri, Jun 19, 2020 at 12:46:41PM -0400 Steven Rostedt wrote: > On Fri, 19 Jun 2020 10:11:20 -0400 > Phil Auld wrote: > > > > > diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h > > index ed168b0e2c53..a6d9fe5a68cf 100644 > > --- a/inclu

[PATCH] Sched: Add a tracepoint to track rq->nr_running

2020-06-19 Thread Phil Auld
nts are added to add_nr_running() and sub_nr_running() which are in kernel/sched/sched.h. Since sched.h includes trace/events/tlb.h via mmu_context.h we had to limit when CREATE_TRACE_POINTS is defined. Signed-off-by: Phil Auld CC: Qais Yousef CC: Ingo Molnar CC: Peter Zijlstra CC: Vincent Guittot

Re: [tip: sched/core] sched/fair: Remove distribute_running from CFS bandwidth

2020-06-08 Thread Phil Auld
On Tue, Jun 09, 2020 at 07:05:38AM +0800 Tao Zhou wrote: > Hi Phil, > > On Mon, Jun 08, 2020 at 10:53:04AM -0400, Phil Auld wrote: > > On Sun, Jun 07, 2020 at 09:25:58AM +0800 Tao Zhou wrote: > > > Hi, > > > > > > On Fri, May 01, 2020 at 06:

Re: [tip: sched/core] sched/fair: Remove distribute_running from CFS bandwidth

2020-06-08 Thread Phil Auld
> > don't start a distribution while one is already running. However, even > > in the event that this race occurs, it is fine to have two distributions > > running (especially now that distribute grabs the cfs_b->lock to > > determine remaining quota before assigning). > > &

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-28 Thread Phil Auld
On Thu, May 28, 2020 at 02:17:19PM -0400 Phil Auld wrote: > On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote: > > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote: > > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote: > > > > On F

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-28 Thread Phil Auld
On Thu, May 28, 2020 at 07:01:28PM +0200 Peter Zijlstra wrote: > On Sun, May 24, 2020 at 10:00:46AM -0400, Phil Auld wrote: > > On Fri, May 22, 2020 at 05:35:24PM -0400 Joel Fernandes wrote: > > > On Fri, May 22, 2020 at 02:59:05PM +0200, Peter Zijlstra wrote: > > >

Re: [PATCH RFC] sched: Add a per-thread core scheduling interface

2020-05-24 Thread Phil Auld
roup. They'd keep their explicitly assigned tags and everything should "just work". There are other reasons to be in a cpu cgroup together than just the core scheduling tag. There are a few other edge cases, like if you are in a cgroup, but have been tagged explicitly with sched_setattr and then get untagged (presumably by setting 0) do you get the cgroup tag or just stay untagged? I think based on per-task winning you'd stay untagged. I suppose you could move out and back in the cgroup to get the tag reapplied (Or maybe the cgroup interface could just be reused with the same value to re-tag everyone who's untagged). Cheers, Phil > thanks, > > - Joel > --

[tip: sched/urgent] sched/fair: Fix enqueue_task_fair() warning some more

2020-05-19 Thread tip-bot2 for Phil Auld
The following commit has been merged into the sched/urgent branch of tip: Commit-ID: b34cb07dde7c2346dec73d053ce926aeaa087303 Gitweb: https://git.kernel.org/tip/b34cb07dde7c2346dec73d053ce926aeaa087303 Author: Phil Auld AuthorDate: Tue, 12 May 2020 09:52:22 -04:00 Committer

Re: netfilter: does the API break or something else ?

2020-05-14 Thread Phil Sutter
Hi, On Wed, May 13, 2020 at 11:20:35PM +0800, Xiubo Li wrote: > Recently I hit one netfilter issue, it seems the API breaks or something > else. Just for the record, this was caused by a misconfigured kernel. Cheers, Phil

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:25:29PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 15:18, Phil Auld wrote: > > > > On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote: > > > On Wed, 13 May 2020 at 15:13, Phil Auld wrote: > > > > > > &g

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:15:53PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 15:13, Phil Auld wrote: > > > > On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote: > > > On Wed, 13 May 2020 at 14:45, Phil Auld wrote: > > > > > > >

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
On Wed, May 13, 2020 at 03:10:28PM +0200 Vincent Guittot wrote: > On Wed, 13 May 2020 at 14:45, Phil Auld wrote: > > > > Hi Vincent, > > > > On Wed, May 13, 2020 at 02:33:35PM +0200 Vincent Guittot wrote: > > > enqueue_task_fair jumps to enqu

Re: [PATCH v2] sched/fair: fix unthrottle_cfs_rq for leaf_cfs_rq list

2020-05-13 Thread Phil Auld
the same pattern as > enqueue_task_fair(). This fixes a problem already faced with the latter and > add an optimization in the last for_each_sched_entity loop. > > Reported-by Tao Zhou > Reviewed-by: Phil Auld > Signed-off-by: Vincent Guittot > --- > > v2 changes: > - R

Re: [PATCH v2] sched/fair: enqueue_task_fair optimization

2020-05-13 Thread Phil Auld
sn't jump to the label then se must be NULL for the loop to terminate. The final loop is a NOP if se is NULL. The check wasn't protecting that. Otherwise still > Reviewed-by: Phil Auld Cheers, Phil > Signed-off-by: Vincent Guittot > --- > > v2 changes: > - Remove useless if s
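Phil's argument about the redundant check can be illustrated with a toy model of the control flow (hypothetical field names, not the real `sched_entity` layout): the first `for_each_sched_entity` loop either breaks out early — the `goto enqueue_throttle` case — or runs until `se` is NULL, and in the latter case the final loop is a NOP, so a separate `if (se)` guard adds nothing.

```python
def enqueue_walk(se):
    """Toy model of enqueue_task_fair()'s control flow.

    `se` is a dict with illustrative keys: "name", "parent", and an
    optional "throttled" flag.  Returns the entities visited by the
    first loop and by the final fix-up loop.
    """
    visited = []
    while se is not None:            # for_each_sched_entity(se)
        visited.append(se["name"])
        if se.get("throttled"):
            break                    # goto enqueue_throttle
        se = se.get("parent")
    # Final loop: if we got here without breaking, se is None and
    # this never executes -- the condition itself is the guard.
    fixups = []
    while se is not None:
        fixups.append(se["name"])
        se = se.get("parent")
    return visited, fixups
```

Walking a two-level hierarchy with no throttling leaves the fix-up loop empty; throttling the parent makes the first loop break there, so the fix-up loop starts from it.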

Re: [PATCH] sched/fair: fix unthrottle_cfs_rq for leaf_cfs_rq list

2020-05-12 Thread Phil Auld
with this one as well. As expected, since the first patch fixed the issue I was seeing and I wasn't hitting the assert here anyway, I didn't hit the assert. But I also didn't hit any other issues, new or old. It makes sense to use the same logic flow here as enqueue_task_fair. Reviewed-by: Phil Auld Cheers, Phil --

Re: [PATCH] sched/fair: enqueue_task_fair optimization

2020-05-12 Thread Phil Auld
ask_struct *p, > int flags) > > } > > +enqueue_throttle: > if (cfs_bandwidth_used()) { > /* >* When bandwidth control is enabled; the cfs_rq_throttled() > -- > 2.17.1 > Reviewed-by: Phil Auld --

Re: [PATCH v3] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
On Tue, May 12, 2020 at 04:10:48PM +0200 Peter Zijlstra wrote: > On Tue, May 12, 2020 at 09:52:22AM -0400, Phil Auld wrote: > > sched/fair: Fix enqueue_task_fair warning some more > > > > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) >

Re: [PATCH v3] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
fixes and review tags. Suggested-by: Vincent Guittot Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli Reviewed-by: Vincent Guittot Reviewed-by: Dietmar Eggemann Fixes: fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) --- kernel

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-12 Thread Phil Auld
Hi Dietmar, On Tue, May 12, 2020 at 11:00:16AM +0200 Dietmar Eggemann wrote: > On 11/05/2020 22:44, Phil Auld wrote: > > On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote: > >> On Thu, 7 May 2020 at 22:36, Phil Auld wrote: > >>> > >>> sche

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-11 Thread Phil Auld
On Mon, May 11, 2020 at 09:25:43PM +0200 Vincent Guittot wrote: > On Thu, 7 May 2020 at 22:36, Phil Auld wrote: > > > > sched/fair: Fix enqueue_task_fair warning some more > > > > The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) > >

Re: [PATCH v2] sched/fair: Fix enqueue_task_fair warning some more

2020-05-07 Thread Phil Auld
ddress this by calling leaf_add_rq_list if there are throttled parents while doing the second for_each_sched_entity loop. Suggested-by: Vincent Guittot Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli --- kernel/sched/fair.c | 7 +++

Re: [PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-07 Thread Phil Auld
Hi Vincent, On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote: > Hi Phil, > > On Wed, 6 May 2020 at 20:05, Phil Auld wrote: > > > > Hi Vincent, > > > > Thanks for taking a look. More below... > > > > On Wed, May 06, 2020 at 06:36:45

Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6

2020-05-07 Thread Phil Auld
initial glance I'm thinking it would be the imbalance_min which is currently hardcoded to 2. But there may be something else... Cheers, Phil > Thanks a lot! > Jirka > > On Thu, May 7, 2020 at 5:54 PM Mel Gorman wrote: > > > > On Thu, May 07, 2020 at 05:24:17PM +0200, Ji

Re: [PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-07 Thread Phil Auld
Hi Vincent, On Thu, May 07, 2020 at 05:06:29PM +0200 Vincent Guittot wrote: > Hi Phil, > > On Wed, 6 May 2020 at 20:05, Phil Auld wrote: > > > > Hi Vincent, > > > > Thanks for taking a look. More below... > > > > On Wed, May 06, 2020 at 06:36:45

Re: [PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-06 Thread Phil Auld
Hi Vincent, Thanks for taking a look. More below... On Wed, May 06, 2020 at 06:36:45PM +0200 Vincent Guittot wrote: > Hi Phil, > > - reply to all this time > > On Wed, 6 May 2020 at 16:18, Phil Auld wrote: > > > > sched/fair: Fix enqueue_task_fair warning some mo

[PATCH] sched/fair: Fix enqueue_task_fair warning some more

2020-05-06 Thread Phil Auld
ddress this issue by saving the se pointer when the first loop exits and resetting it before doing the fix up, if needed. Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Vincent Guittot Cc: Ingo Molnar Cc: Juri Lelli --- kernel/sched/fair.c | 4 1 file changed, 4 insertions(+)

Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

2019-10-21 Thread Phil Auld
st_group > > > > > > kernel/sched/fair.c | 1181 > > > +-- > > > 1 file changed, 682 insertions(+), 499 deletions(-) > > > > Thanks, that's an excellent series! > > > > I've queued it up in sched/core with

Re: [PATCH v3 0/8] sched/fair: rework the CFS load balance

2019-10-09 Thread Phil Auld
On Tue, Oct 08, 2019 at 05:53:11PM +0200 Vincent Guittot wrote: > Hi Phil, > ... > While preparing v4, I have noticed that I have probably oversimplified > the end of find_idlest_group() in patch "sched/fair: optimize > find_idlest_group" when it compares local

Re: [PATCH v3 0/8] sched/fair: rework the CFS load balance

2019-10-08 Thread Phil Auld
is high variance so it may not be anything specific between v1 and v3 here. The initial fixes I made for this issue did not exhibit this behavior. They would have had other issues dealing with overload cases though. In this case, however, there are only 154 or 158 threads on 160 CPUs so not ove

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-07 Thread Phil Auld
20, cfs_quota_us = 3200) [ 1393.965140] cfs_period_timer[cpu11]: period too short, but cannot scale up without losing precision (cfs_period_us = 20, cfs_quota_us = 3200) I suspect going higher could cause the original lockup, but that'd be the case with the old code as well. And this als

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-07 Thread Phil Auld
Hi Xuewei, On Fri, Oct 04, 2019 at 05:28:15PM -0700 Xuewei Zhang wrote: > On Fri, Oct 4, 2019 at 6:14 AM Phil Auld wrote: > > > > On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote: > > > +cc neeln...@google.com and hao...@google.com, they helped a lot >

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-04 Thread Phil Auld
On Thu, Oct 03, 2019 at 07:05:56PM -0700 Xuewei Zhang wrote: > +cc neeln...@google.com and hao...@google.com, they helped a lot > for this issue. Sorry I forgot to include them when sending out the patch. > > On Thu, Oct 3, 2019 at 5:55 PM Phil Auld wrote: > > > > Hi

Re: [PATCH] sched/fair: scale quota and period without losing quota/period ratio precision

2019-10-03 Thread Phil Auld
uota_period/2 and max_cfs_quota_period that would get us out of the loop. Possibly in practice it won't matter but here you trigger the warning and take no action to keep it from continuing. Also, if you are actually hitting this then you might want to just start at a higher but proportional quota a

Re: [PATCH 4.19 32/79] fpga: altera-ps-spi: Fix getting of optional confd gpio

2019-09-22 Thread Phil Reid
confd gpio"); } /* Register manager with unique name */ Best regards, Pavel -- Regards Phil Reid

Re: [PATCH v2 0/8] sched/fair: rework the CFS load balance

2019-08-29 Thread Phil Auld
oup due to using the average load. The second was in fix_small_imbalance(). The "load" of the lu.C tasks was so low it often failed to move anything even when it did find a group that was overloaded (nr_running > width). I have two small patches which fix this but since Vincent was > embarking on a re-work which also addressed this I dropped them. We've also run a series of performance tests we use to check for regressions and did not find any bad results on our workloads and systems. So... Tested-by: Phil Auld Cheers, Phil --

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-29 Thread Phil Auld
On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote: > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: > > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > > > > And given MDS, I'm still not entirely convinced it all makes sense. If &g

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Phil Auld
); > > break; > > + case PR_CORE_ISOLATE: > > +#ifdef CONFIG_SCHED_CORE > > + current->core_cookie = (unsigned long)current; > > This needs to then also force a reschedule of current. And there's the > little issue of what happens if

RE: [PATCH v2] clk: Document of_parse_clkspec() some more

2019-08-27 Thread Phil Edworthy
peculiarity is documented by commit 5c56dfe63b6e ("clk: Add comment > about __of_clk_get_by_name() error values"). > > Let's further document this function so that it's clear what the return > value is and how to use the arguments to parse clk specifiers. > > Cc: Phil

Re: [PATCH 4/4] iio: adc: ina2xx: Use label proper for device identification

2019-08-26 Thread Phil Reid
On 26/08/2019 02:07, Jonathan Cameron wrote: On Wed, 21 Aug 2019 11:12:00 +0200 Michal Simek wrote: On 21. 08. 19 4:11, Phil Reid wrote: On 20/08/2019 22:11, Michal Simek wrote: Add support for using label property for easier device identification via iio framework. Signed-off-by: Michal

Re: [PATCH -next v2] sched/fair: fix -Wunused-but-set-variable warnings

2019-08-23 Thread Phil Auld
but their > > existence would indicate an over-loaded node or too short of a > > cfs_period. Additionally, it would be interesting to see if we could > > capture the offset between when the bandwidth was refilled, and when > > the timer was supposed to fire. I've always done all my calculations > > assuming that the timer fires and is handled exceedingly close to the > > time it was supposed to fire. Although, if the node is running that > > overloaded you probably have many more problems than worrying about > > timer warnings. > > That "overrun" there is not really an overrun - it's the number of > complete periods the timer has been inactive for. It was used so that a > given tg's period timer would keep the same > phase/offset/whatever-you-call-it, even if it goes idle for a while, > rather than having the next period start N ms after a task wakes up. > > Also, poor choices by userspace are not generally something the kernel > WARNs on, as I understand it. I don't think it matters in the start_cfs_bandwidth case, anyway. We do effectively check in sched_cfs_period_timer. Cleanup looks okay to me as well. Cheers, Phil --

Re: iio: Is storing output values to non volatile registers something we should do automatically or leave to userspace action. [was Re: [PATCH] iio: potentiometer: max5432: update the non-volatile pos

2019-08-22 Thread Phil Reid
On 19/08/2019 03:32, Jonathan Cameron wrote: On Mon, 12 Aug 2019 19:08:12 +0800 Phil Reid wrote: G'day Martin / Jonathan, On 12/08/2019 18:37, Martin Kaiser wrote: Hi Jonathan, Thus wrote Jonathan Cameron (ji...@kernel.org): The patch is fine, but I'm wondering about whether we need

Re: [PATCH 4/4] iio: adc: ina2xx: Use label proper for device identification

2019-08-20 Thread Phil Reid
sonally. It'd be nice if it was a core function so it could be an opt in to any iio device. Don't know how well received that'd be thou. -- Regards Phil Reid

Re: [PATCH 2/4] iio: adc: ina2xx: Setup better name then simple ina2xx

2019-08-20 Thread Phil Reid
vm_iio_kfifo_allocate(&indio_dev->dev); -- Regards Phil Reid

Re: [PATCH 1/4] iio: adc: ina2xx: Define *device_node only once

2019-08-20 Thread Phil Reid
>driver_data == ina226) { indio_dev->channels = ina226_channels; indio_dev->num_channels = ARRAY_SIZE(ina226_channels); -- Regards Phil Reid ElectroMagnetic Imaging Technology Pty Ltd Development of Geophysical Instrumentation & Software www.elect

[PATCH] sched/rt: silence double clock update warning by using rq_lock wrappers

2019-08-15 Thread Phil Auld
er does: raw_spin_lock(&rq->lock); update_rq_clock(rq); which triggers the warning because of not using the rq_lock wrappers. So, use the wrappers. Signed-off-by: Phil Auld Cc: Peter Zijlstra (Intel) Cc: Ingo Molnar Cc: Valentin Schneider Cc: Dietmar Eggemann --- ke

Re: [PATCH] sched: use rq_lock/unlock in online_fair_sched_group

2019-08-15 Thread Phil Auld
On Fri, Aug 09, 2019 at 06:43:09PM +0100 Valentin Schneider wrote: > On 09/08/2019 14:33, Phil Auld wrote: > > On Tue, Aug 06, 2019 at 03:03:34PM +0200 Peter Zijlstra wrote: > >> On Thu, Aug 01, 2019 at 09:37:49AM -0400, Phil Auld wrote: > >>> Enabling WARN_DOU

Re: [tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-12 Thread Phil Auld
On Mon, Aug 12, 2019 at 05:52:04AM -0700 tip-bot for Phil Auld wrote: > Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f > Gitweb: > https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f > Author: Phil Auld > AuthorDate: Thu, 1 Aug 2019 09:37:49 -0

[tip:sched/core] sched/fair: Use rq_lock/unlock in online_fair_sched_group

2019-08-12 Thread tip-bot for Phil Auld
Commit-ID: a46d14eca7b75fffe35603aa8b81df654353d80f Gitweb: https://git.kernel.org/tip/a46d14eca7b75fffe35603aa8b81df654353d80f Author: Phil Auld AuthorDate: Thu, 1 Aug 2019 09:37:49 -0400 Committer: Thomas Gleixner CommitDate: Mon, 12 Aug 2019 14:45:34 +0200 sched/fair: Use rq_lock
