On 05/08/20 14:40, pet...@infradead.org wrote:
> On Mon, Aug 03, 2020 at 09:22:53PM +0200, Thomas Gleixner wrote:
>
>>    totaltime = irqtime + tasktime
>>
>> Ignoring irqtime and pretending that totaltime is what the scheduler
>> can control and deal with is naive at best.
>
> Well no, that's what we call system overhead and is assumed to be
> included in the 'error margin'.
>
> The way things are set up is that we say that, by default, RT tasks can
> consume 95% of cputime and the remaining 5% is sufficient to keep the
> system alive.
>
> Those 5% include all system overhead, IRQs, RCU, !RT workqueues etc..
>
> Obviously IRQ_TIME accounting changes the balance a bit, but that's what
> it is. We can't really do anything better.
>
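
[For reference, the defaults behind that 95%/5% split live in
kernel/sched/core.c:

  /* RT bandwidth defaults: 950000us of runtime per 1000000us period */
  unsigned int sysctl_sched_rt_period = 1000000;
  int sysctl_sched_rt_runtime = 950000;

and are tweakable at runtime via /proc/sys/kernel/sched_rt_{period,runtime}_us.]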

I'm starting to think that as well. I tried some fugly hack of injecting
avg_irq into sched_rt_runtime_exceeded() with something along the lines of:

  irq_time = (rq->avg_irq.util_avg * sched_rt_period(rt_rq)) >>
             SCHED_CAPACITY_SHIFT;
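
Spelled out in context, the hack was something like this (a sketch against
a ~v5.8 kernel/sched/rt.c built with CONFIG_HAVE_SCHED_AVG_IRQ, not the
actual diff):

  static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
  {
          u64 runtime = sched_rt_runtime(rt_rq);
          struct rq *rq = rq_of_rt_rq(rt_rq);
          u64 irq_time;

          if (rt_rq->rt_throttled)
                  return rt_rq_throttled(rt_rq);

          /* avg_irq.util_avg is a [0..SCHED_CAPACITY_SCALE] PELT signal */
          irq_time = (rq->avg_irq.util_avg * sched_rt_period(rt_rq)) >>
                     SCHED_CAPACITY_SHIFT;

          /* count the estimated IRQ time of the period against the budget */
          if (rt_rq->rt_time + irq_time > runtime) {
                  rt_rq->rt_throttled = 1;
                  return rt_rq_throttled(rt_rq);
          }

          return 0;
  }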

It's pretty bad for a few reasons. One is that avg_irq is averaged over its
own (PELT) period, which has nothing to do with the RT period. Another is
that, as Dietmar pointed out, it is CPU- and frequency-invariant, so the
above falls over on big.LITTLE (converting it back into wall-clock time
would need the current CPU capacity and frequency scaled back in).

Making update_curr_rt() use rq_clock() rather than rq_clock_task() makes it
"work" but goes against all the good reasons there were to introduce
rq_clock_task() in the first place.
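
Concretely, that's something like the one-liner below (against ~v5.8
kernel/sched/rt.c). rq_clock() advances while IRQs run, which is exactly
the time rq_clock_task() was introduced to keep out of what tasks get
charged:

  --- a/kernel/sched/rt.c
  +++ b/kernel/sched/rt.c
  @@ static void update_curr_rt(struct rq *rq)
  -	now = rq_clock_task(rq);
  +	now = rq_clock(rq);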

> Apparently this SoC has significant IRQ time for some reason. Also,
> relying on RT throttling for 'correct' behaviour is also wrong. What
> needs to be done is find who is using all this RT time and why, that
> isn't right.

I've been tempted to say the test case is a bit bogus, but I'm not familiar
enough with the RT throttling details to stand that ground. That said, from
both watching the execution and reading the stress-ng source code, it seems
to unconditionally spawn 32 FIFO-50 tasks (there's even an option to make
them FIFO-99!), which is quite a crowd on a single-CPU system.
