Hi Frederic.
On 2/11/26 10:36 PM, Frederic Weisbecker wrote:
On Wed, Feb 11, 2026 at 07:13:45PM +0530, Shrikanth Hegde wrote:
Hi Frederic,
Gave this series a spin on the same system as v1.
On 2/6/26 7:52 PM, Frederic Weisbecker wrote:
Hi,
After the issue reported here:
https://lore.kernel.org/all/[email protected]/
It turns out that the idle cputime accounting is a big mess spread
across two concurrent sets of statistics, each with its own
shortcomings:
* The accounting for online CPUs, which is based on the delta between
tick_nohz_start_idle() and tick_nohz_stop_idle() (see the sketch
after this list).
Pros:
- Works when the tick is off
- Has nsecs granularity
Cons:
- Accounts idle steal time but doesn't subtract it from idle
cputime.
- Assumes CONFIG_IRQ_TIME_ACCOUNTING=y by not accounting IRQ time
itself, but that IRQ time is simply lost when
CONFIG_IRQ_TIME_ACCOUNTING=n
- The windows between 1) the idle task being scheduled in and the
first call to tick_nohz_start_idle() and 2) the last call to
tick_nohz_stop_idle() and the idle task being scheduled out are
blind spots wrt. cputime accounting (though a mostly
insignificant amount)
- Relies on private fields outside of kernel stats, with specific
accessors.
* The accounting for offline CPUs, which is based on ticks and the
jiffies delta during which the tick was stopped.
Pros:
- Handles steal time correctly
- Handles both CONFIG_IRQ_TIME_ACCOUNTING=y and
CONFIG_IRQ_TIME_ACCOUNTING=n correctly.
- Handles the whole idle task
- Accounts directly to kernel stats, without midlayer accumulator.
Cons:
- Doesn't elapse when the tick is off, which makes it unsuitable
for online CPUs.
- Has TICK_NSEC granularity (jiffies)
- Needs to track the dyntick-idle ticks that were accounted and
subtract them from the total jiffies time spent while the tick
was stopped. This is an ugly workaround.
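To make the delta-based scheme concrete, here is a minimal userspace
analogue (all names here are made up for illustration; the real hooks
are tick_nohz_start_idle()/tick_nohz_stop_idle() in
kernel/time/tick-sched.c):

/*
 * Userspace analogue of the delta-based idle accounting: snapshot a
 * nsec clock when "idle" starts, accumulate the delta when it stops.
 * Illustration only; these names do not exist in the kernel.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t idle_entrytime;	/* ns timestamp of idle entry */
static uint64_t idle_sleeptime;	/* accumulated idle ns */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void start_idle(void)
{
	idle_entrytime = now_ns();
}

static void stop_idle(void)
{
	/* Pure delta: works whether or not a tick fired in between. */
	idle_sleeptime += now_ns() - idle_entrytime;
}

int main(void)
{
	struct timespec nap = { 0, 50 * 1000 * 1000 };	/* 50 ms of "idle" */

	start_idle();
	nanosleep(&nap, NULL);
	stop_idle();
	printf("idle: %llu ns\n", (unsigned long long)idle_sleeptime);
	return 0;
}

The cons listed above fall out of this structure: everything between
the two snapshots, including IRQs and stolen time, is blindly counted
as idle.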
Having two different accountings for a single context is not the only
problem: since these accountings are of different natures, it is
possible to observe the global idle time going backward after a CPU goes
offline, as reported by Xin Zhao.
Clean up the situation by introducing a hybrid approach that stays
coherent, fixes the backward jumps and works for both online and offline
CPUs:
* Tick based or native vtime accounting operates before the tick is
stopped and resumes once the tick is restarted.
* When the idle loop starts, switch to dynticks-idle accounting as is
done currently, except that the statistics accumulate directly to the
relevant kernel stat fields.
* Private dyntick cputime accounting fields are removed.
* Works in both the online and offline cases.
* Move most of the relevant code to the common sched/cputime subsystem
* Handle CONFIG_IRQ_TIME_ACCOUNTING=n correctly such that the
dynticks-idle accounting still elapses while in IRQs.
* Correctly subtract idle steal cputime from idle time (sketched
just below).
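As a hedged sketch of those last points (made-up names; the real code
lives in kernel/sched/cputime.c), the idle delta is folded straight
into the final per-CPU stat slot, with steal time carved out first:

#include <stdint.h>

/* Stand-ins for the kernel's per-CPU cpustat slots; illustration only. */
enum stat_idx { STAT_IDLE, STAT_IOWAIT, STAT_STEAL, NR_STATS };
static uint64_t cpustat[NR_STATS];

static void account_idle_delta(uint64_t delta_ns, uint64_t steal_ns, int iowait)
{
	/* Steal time is clamped to the window and removed from idle... */
	if (steal_ns > delta_ns)
		steal_ns = delta_ns;
	cpustat[STAT_STEAL] += steal_ns;

	/*
	 * ...and the remainder goes directly to the final stat field,
	 * with no private midlayer accumulator to drift out of sync.
	 */
	cpustat[iowait ? STAT_IOWAIT : STAT_IDLE] += delta_ns - steal_ns;
}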
Changes since v1:
- Fix deadlock involving double seq count lock on idle
- Fix build breakage on powerpc
- Fix build breakage on s390 (Heiko)
- Fix broken sysfs s390 idle time file (Heiko)
- Convert most ktime usage here into u64 (Peterz)
- Add missing (or too implicit) <linux/sched/clock.h> (Peterz)
- Fix whole idle time accounting breakage due to missing TS_FLAG_ set
on idle entry (Shrikanth Hegde)
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
timers/core-v2
HEAD: 21458b98c80a0567d48131240317b7b73ba34c3c
Thanks,
Frederic
Idle and runtime utilization reported by mpstat while running stress-ng
look correct now.
However, when running hackbench I am noticing the data below; hackbench
shows severe regressions (times in seconds, lower is better).
base: tip/master at 9c61ebbdb587a3950072700ab74a9310afe3ad73.
(nit: patch 7 is already part of tip, so I skipped applying it)
+-----------------------------+-------+---------+---------+
| Test                        | base  | +series | % Diff  |
+-----------------------------+-------+---------+---------+
| HackBench Process 10 groups |  2.23 |    3.05 | -36.77% |
| HackBench Process 20 groups |  4.17 |    5.82 | -39.57% |
| HackBench Process 30 groups |  6.04 |    8.49 | -40.56% |
| HackBench Process 40 groups |  7.90 |   11.10 | -40.51% |
| HackBench thread 10         |  2.44 |    3.36 | -37.70% |
| HackBench thread 20         |  4.57 |    6.35 | -38.95% |
| HackBench Process(Pipe) 10  |  1.76 |    2.29 | -30.11% |
| HackBench Process(Pipe) 20  |  3.49 |    4.76 | -36.39% |
| HackBench Process(Pipe) 30  |  5.21 |    7.13 | -36.85% |
| HackBench Process(Pipe) 40  |  6.89 |    9.31 | -35.12% |
| HackBench thread(Pipe) 10   |  1.91 |    2.50 | -30.89% |
| HackBench thread(Pipe) 20   |  3.74 |    5.16 | -37.97% |
+-----------------------------+-------+---------+---------+
I have these in .config and I don't have nohz_full or isolated cpus.
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y
I did a git bisect and below is what it says.
git bisect start
# status: waiting for both good and bad commits
# bad: [6821315886a3b5267ea31d29dba26fd34647fbbc] sched/cputime: Handle dyntick-idle steal time correctly
git bisect bad 6821315886a3b5267ea31d29dba26fd34647fbbc
# status: waiting for good commit(s), bad commit known
# good: [9c61ebbdb587a3950072700ab74a9310afe3ad73] Merge branch into tip/master: 'x86/sev'
git bisect good 9c61ebbdb587a3950072700ab74a9310afe3ad73
# good: [dc8bb3c84d162f7d9aa6becf9f8392474f92655a] tick/sched: Remove nohz disabled special case in cputime fetch
git bisect good dc8bb3c84d162f7d9aa6becf9f8392474f92655a
# good: [5070a778a581cd668f5d717f85fb22b078d8c20c] tick/sched: Account tickless idle cputime only when tick is stopped
git bisect good 5070a778a581cd668f5d717f85fb22b078d8c20c
# bad: [1e0ccc25a9a74b188b239c4de716fde279adbf8e] sched/cputime: Provide get_cpu_[idle|iowait]_time_us() off-case
git bisect bad 1e0ccc25a9a74b188b239c4de716fde279adbf8e
# bad: [ee7c735b76071000d401869fc2883c451ee3fa61] tick/sched: Consolidate idle time fetching APIs
git bisect bad ee7c735b76071000d401869fc2883c451ee3fa61
# first bad commit: [ee7c735b76071000d401869fc2883c451ee3fa61] tick/sched: Consolidate idle time fetching APIs
I see. Can you try this? (or fetch timers/core-v3 from my tree)
Perhaps that mistake had some impact on cpufreq.
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 057fdc00dbc6..08550a6d9469 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -524,7 +524,7 @@ static u64 get_cpu_sleep_time_us(int cpu, enum cpu_usage_stat idx,
 	do_div(res, NSEC_PER_USEC);

 	if (last_update_time)
-		*last_update_time = res;
+		*last_update_time = ktime_to_us(now);

 	return res;
 }
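To speculate a bit on the cpufreq link: callers of
get_cpu_idle_time_us() treat last_update_time as a wall-clock snapshot
and derive busy time from it, along the lines of this simplified
sketch (not the literal governor code):

/*
 * Hedged sketch of a get_cpu_idle_time_us() caller, e.g. a cpufreq
 * governor computing load; kernel context assumed (the function is
 * declared in <linux/tick.h>). Simplified for illustration.
 */
static unsigned int load_percent(int cpu, u64 *prev_wall, u64 *prev_idle)
{
	u64 wall, idle, dwall, didle;

	idle = get_cpu_idle_time_us(cpu, &wall);	/* wall should be "now" */
	dwall = wall - *prev_wall;
	didle = idle - *prev_idle;
	*prev_wall = wall;
	*prev_idle = idle;

	/*
	 * With the bug, "wall" was the accumulated idle time in us, so
	 * dwall no longer tracks elapsed time and the load estimate
	 * degenerates, which could explain the hackbench regression.
	 */
	if (!dwall || didle > dwall)
		return 100;

	return (unsigned int)(100 * (dwall - didle) / dwall);
}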
I have done testing in the cases below on a PowerNV (Power9) box.
1. CONFIG_VIRT_CPU_ACCOUNTING_GEN + CONFIG_IRQ_TIME_ACCOUNTING=y.
This is the common case of having VTIME_GEN + IRQ_TIME enabled.
2. CONFIG_VIRT_CPU_ACCOUNTING_GEN only.
IRQ_TIME is not selected.
3. CONFIG_VIRT_CPU_ACCOUNTING_NATIVE=y (for this I had to disable
CONFIG_NO_HZ_FULL).
CONFIG_NO_HZ_IDLE=y, CONFIG_NO_HZ_FULL=n and VTIME_GEN=n.
4. CONFIG_TICK_CPU_ACCOUNTING=y
(CONFIG_NO_HZ_FULL=n and CONFIG_NO_HZ_IDLE=y).
In all cases the idle time and iowait time don't go backwards.
So that's a clear win.
Without the patches, iowait did go backwards.
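For reference, a minimal checker along these lines is enough to catch
such backward jumps (a sketch of the idea, not the exact method used
for this testing):

/*
 * Sample the aggregate idle and iowait fields from /proc/stat in a
 * loop and flag any backward movement. Fields are USER_HZ ticks.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long long user, nice, sys, idle, iowait;
	unsigned long long prev_idle = 0, prev_iowait = 0;

	for (;;) {
		FILE *f = fopen("/proc/stat", "r");

		if (!f || fscanf(f, "cpu %llu %llu %llu %llu %llu",
				 &user, &nice, &sys, &idle, &iowait) != 5)
			return 1;
		fclose(f);

		if (idle < prev_idle || iowait < prev_iowait)
			printf("backward jump! idle %llu->%llu iowait %llu->%llu\n",
			       prev_idle, idle, prev_iowait, iowait);
		prev_idle = idle;
		prev_iowait = iowait;
		sleep(1);
	}
}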
So, with that, for the series:
Tested-by: Shrikanth Hegde <[email protected]>
---
However, with the series and NATIVE=y, I am seeing one peculiar thing.
Without series: cpu0 0 0 9 60800 4 2 90 0 0 0 << 608 seconds after boot
(fields are USER_HZ ticks, 100 per second here). That's ok.
With series: cpu0 1 0 17 9122062 0 3 140 0 0 0 << 91220 seconds?? Strange.
However, I see that the passage of time looks normal.
If I do: cat /proc/stat; sleep 5; cat /proc/stat;
then I see the same time difference with and without the series.
So timekeeping works as expected.
Almost all CPUs have similar stats. I am wondering if there is a bug
or some kind of wrapping in mftb which raises an irq, and during that
particular period the values become very large. Even without the
series, I see one or two CPUs with the same huge system time. Maybe
since the series handles the irq case now, it might be showing up on
all CPUs.
This is a slightly older system. I will give this a try on Power10
when I get the systems in a few weeks' time.