On Fri, 2016-07-08 at 14:30 +0200, Frederic Weisbecker wrote:
> On Thu, Jun 30, 2016 at 03:35:50PM -0400, [email protected] wrote:
> > From: Rik van Riel <[email protected]>
> > 
> > Drop local_irq_save/restore from irqtime_account_irq.
> > Instead, have softirq and hardirq track their time spent
> > independently, with the softirq code subtracting hardirq
> > time that happened during the duration of the softirq run.
> > 
> > The softirq code can be interrupted by hardirq code at
> > any point in time, but it can check whether it got a
> > consistent snapshot of the timekeeping variables it wants,
> > and loop around in the unlikely case that it did not.
> > 
> > Signed-off-by: Rik van Riel <[email protected]>
> 
> So the purpose is to get rid of local_irq_save/restore()?
> Is it really worth such complication?
local_irq_save/restore are quite slow, and appear to be the
largest source of overhead in irq time accounting. However,
I do not have numbers yet, so I have no problem with this
patch being dropped for now.
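
FWIW, the thing that makes dropping the irq disable possible at all
is a seqcount retry on the reader side. On 32-bit, where a u64 load
is not atomic, the softirq exit path can still take a consistent
snapshot of the hardirq time even if a hardirq fires mid-read. Rough
shape only, reusing the per-cpu variables declared in the patch
below, not the literal patch code:

        unsigned int seq;
        u64 hardirq_time;

        do {
                seq = read_seqcount_begin(this_cpu_ptr(&irq_time_seq));
                hardirq_time = __this_cpu_read(cpu_hardirq_time);
        } while (read_seqcount_retry(this_cpu_ptr(&irq_time_seq), seq));

On 64-bit the seqcount is compiled out and these become plain per-cpu
loads and stores, which is where the win over local_irq_save/restore
should come from.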
> > ---
> >  kernel/sched/cputime.c | 72 +++++++++++++++++++++++++++++++++++++++++---------
> >  kernel/sched/sched.h   | 38 +++++++++++++++++++++-----
> > 2 files changed, 90 insertions(+), 20 deletions(-)
> >
> > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> > index a0aefd4c7ea6..b78991fac228 100644
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -26,7 +26,9 @@
> > DEFINE_PER_CPU(u64, cpu_hardirq_time);
> > DEFINE_PER_CPU(u64, cpu_softirq_time);
> >
> > -static DEFINE_PER_CPU(u64, irq_start_time);
> > +static DEFINE_PER_CPU(u64, hardirq_start_time);
> > +static DEFINE_PER_CPU(u64, softirq_start_time);
> > +static DEFINE_PER_CPU(u64, prev_hardirq_time);
> > static int sched_clock_irqtime;
> >
> > void enable_sched_clock_irqtime(void)
> > @@ -41,6 +43,7 @@ void disable_sched_clock_irqtime(void)
> >
> > #ifndef CONFIG_64BIT
> > DEFINE_PER_CPU(seqcount_t, irq_time_seq);
> > +DEFINE_PER_CPU(seqcount_t, softirq_time_seq);
> > #endif /* CONFIG_64BIT */
> >
> > /*
> > @@ -53,36 +56,79 @@ DEFINE_PER_CPU(seqcount_t, irq_time_seq);
> > * softirq -> hardirq, hardirq -> softirq
> > *
> > * When exiting hardirq or softirq time, account the elapsed time.
> > + *
> > + * When exiting softirq time, subtract the amount of hardirq time that
> > + * interrupted this softirq run, to avoid double accounting of that time.
> > */
> > void irqtime_account_irq(struct task_struct *curr, int irqtype)
> > {
> > -        unsigned long flags;
> > +        u64 prev_softirq_start;
> > +        bool leaving_softirq;
> > +        u64 prev_hardirq;
> > +        u64 hardirq_time;
> >          s64 delta;
> >          int cpu;
> > 
> >          if (!sched_clock_irqtime)
> >                  return;
> > 
> > -        local_irq_save(flags);
> > -
> >          cpu = smp_processor_id();
> > -        delta = sched_clock_cpu(cpu) - __this_cpu_read(irq_start_time);
> > -        __this_cpu_add(irq_start_time, delta);
> > 
> > -        irq_time_write_begin();
> > +        /*
> > +         * Hardirq time accounting is pretty straightforward. If not in
> > +         * hardirq context yet (entering hardirq), set the start time.
> > +         * If already in hardirq context (leaving), account the elapsed time.
> > +         */
> > +        if (irqtype == HARDIRQ_OFFSET) {
> > +                bool leaving_hardirq = hardirq_count();
> > +                delta = sched_clock_cpu(cpu) - __this_cpu_read(hardirq_start_time);
> > +                __this_cpu_add(hardirq_start_time, delta);
> > +                if (leaving_hardirq) {
> > +                        hardirq_time_write_begin();
> > +                        __this_cpu_add(cpu_hardirq_time, delta);
> > +                        hardirq_time_write_end();
> > +                }
>
> This doesn't seem to work with nesting hardirqs.
>
> Thanks.
Where does it break?
enter hardirq A -> hardirq_start_time = now
enter hardirq B -> hardirq_start_time = now, account already elapsed time
leave hardirq B -> account elapsed time, set hardirq_start_time = now
leave hardirq A -> account elapsed time

What am I missing, except a softirq-style do-while loop to account
for hardirq A being interrupted by hardirq B while updating the
statistics?
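
To double check the arithmetic, here is a userspace toy model of just
the delta logic above (fake clock, made-up helper names; the nesting
counter stands in for hardirq_count(), and account() stands in for
the HARDIRQ_OFFSET branch of irqtime_account_irq()):

        #include <stdio.h>

        typedef unsigned long long u64;

        static u64 now;                 /* stands in for sched_clock_cpu() */
        static u64 hardirq_start_time;  /* per-cpu in the real patch */
        static u64 cpu_hardirq_time;
        static int nest;                /* stands in for hardirq_count() */

        /* Models the irqtype == HARDIRQ_OFFSET branch. */
        static void account(void)
        {
                int leaving_hardirq = nest;  /* nonzero: hardirq was running */
                u64 delta = now - hardirq_start_time;

                hardirq_start_time += delta;
                if (leaving_hardirq)
                        cpu_hardirq_time += delta;
        }

        int main(void)
        {
                /*
                 * account_irq_enter_time() runs before the preempt count
                 * is raised, account_irq_exit_time() before it is lowered,
                 * hence the ordering of account() and nest++/nest--.
                 */
                now = 0;  account(); nest++;  /* enter hardirq A: nothing accounted */
                now = 5;  account(); nest++;  /* enter hardirq B: accounts A's 5    */
                now = 8;  account(); nest--;  /* leave hardirq B: accounts B's 3    */
                now = 10; account(); nest--;  /* leave hardirq A: accounts A's 2    */

                printf("%llu\n", cpu_hardirq_time);  /* prints 10 */
                return 0;
        }

The whole 0..10 interval is accounted exactly once, split 5+3+2
across the transitions, so plain nesting looks fine to me; the only
hole I can see is a hardirq landing in the middle of the update
itself, which the softirq-style retry loop would close.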
--
All Rights Reversed.