Andrey Ryabinin <a...@yandex-team.com> writes:

> cpuacct has 2 different ways of accounting and showing user
> and system times.
>
> The first one uses cpuacct_account_field() to account times
> and cpuacct.stat file to expose them. And this one seems to work ok.
>
> The second one uses the cpuacct_charge() function for accounting and a
> set of cpuacct.usage* files to show times. Despite some attempts to
> fix it in the past it still doesn't work. E.g. while running KVM
> guest the cpuacct_charge() accounts most of the guest time as
> system time. This doesn't match the user/system times shown in
> cpuacct.stat or /proc/<pid>/stat.

I couldn't reproduce this running a CPU-bound load in a KVM guest on a
nohz_full CPU on 5.11.  The time lands almost entirely in cpuacct.usage
and cpuacct.usage_user, while cpuacct.usage_sys stays low.

Could you say more about how you're seeing this?  I don't really doubt
there's a problem, just wondering what you're doing.

> diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
> index 941c28cf9738..7eff79faab0d 100644
> --- a/kernel/sched/cpuacct.c
> +++ b/kernel/sched/cpuacct.c
> @@ -29,7 +29,7 @@ struct cpuacct_usage {
>  struct cpuacct {
>       struct cgroup_subsys_state      css;
>       /* cpuusage holds pointer to a u64-type object on every CPU */
> -     struct cpuacct_usage __percpu   *cpuusage;

Definition of struct cpuacct_usage can go away now.

> @@ -99,7 +99,8 @@ static void cpuacct_css_free(struct cgroup_subsys_state *css)
>  static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
>                                enum cpuacct_stat_index index)
>  {
> -     struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
> +     u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
> +     u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
>       u64 data;

There's a BUG_ON() below this that could probably become a WARN_ON_ONCE()
while you're here.

> @@ -278,8 +274,8 @@ static int cpuacct_stats_show(struct seq_file *sf, void *v)
>       for_each_possible_cpu(cpu) {
>               u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
>  
> -             val[CPUACCT_STAT_USER]   += cpustat[CPUTIME_USER];
> -             val[CPUACCT_STAT_USER]   += cpustat[CPUTIME_NICE];
> +             val[CPUACCT_STAT_USER] += cpustat[CPUTIME_USER];
> +             val[CPUACCT_STAT_USER] += cpustat[CPUTIME_NICE];

unnecessary whitespace change?
