On 22/08/2019 04:17, Rik van Riel wrote:
> The current implementation of the CPU controller uses hierarchical
> runqueues, where on wakeup a task is enqueued on its group's runqueue,
> the group is enqueued on the runqueue of the group above it, etc.
>
> This adds a fairly large amount of overhead for workloads that
> do a lot of wakeups a second, especially given that the default systemd
> hierarchy is 2 or 3 levels deep.
>
> This patch series is an attempt at reducing that overhead, by placing
> all the tasks on the same runqueue, and scaling the task priority by
> the priority of the group, which is calculated periodically.
>
> My main TODO items for the next period of time are likely going to
> be testing, testing, and testing. I hope to find and flush out any
> corner cases I can, and make sure performance does not regress
> with any workloads, and hopefully improves for some.
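(For context, a minimal sketch of the two enqueue schemes the cover
letter describes. The types, field names and the 0..1024 scale factor
below are simplified stand-ins of my own, not the actual code from
kernel/sched/fair.c or from this series.)

struct cfs_rq {
	unsigned long nr_running;	/* entities queued on this runqueue */
};

struct sched_entity {
	struct sched_entity *parent;	/* group entity one level up, NULL at root */
	struct cfs_rq *my_rq;		/* runqueue this entity is queued on */
	unsigned long weight;		/* load weight */
	int on_rq;
};

void enqueue_entity(struct cfs_rq *rq, struct sched_entity *se)
{
	rq->nr_running++;
	se->on_rq = 1;
}

/*
 * Hierarchical enqueue: a wakeup walks from the task's entity up
 * through every ancestor group entity, touching one cfs_rq per
 * cgroup level. With a hierarchy 2 or 3 levels deep, that is 3-4
 * enqueues per wakeup.
 */
void enqueue_hierarchical(struct sched_entity *se)
{
	for (; se; se = se->parent) {
		if (se->on_rq)
			break;
		enqueue_entity(se->my_rq, se);
	}
}

/*
 * Flat enqueue in the spirit of the series: the task goes straight
 * onto the CPU's root runqueue; the cgroup hierarchy only contributes
 * a periodically recomputed scaling factor to the task's weight.
 */
void enqueue_flat(struct cfs_rq *root, struct sched_entity *se,
		  unsigned long group_scale /* 0..1024, hypothetical */)
{
	se->weight = se->weight * group_scale / 1024;
	enqueue_entity(root, se);
}
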
I did some testing with a small & simple rt-app based test-case:

2 CPUs (rq->cpu_capacity_orig=1024), CPUfreq performance governor

2 taskgroups /tg0 and /tg1

6 CFS tasks (periodic, 8/16ms (runtime/period)): /tg0 (cpu.shares=1024)
ran 4 tasks and /tg1 (cpu.shares=1024) ran 2 tasks

(arm64 defconfig with !CONFIG_NUMA_BALANCING, !CONFIG_SCHED_AUTOGROUP)

---

v5.2:

The 2 /tg1 tasks ran 8/16ms. The 4 /tg0 tasks ran 4/16ms in the
beginning and then 8/16ms after the 2 /tg1 tasks had finished. This
matches what the shares arithmetic predicts (see the sketch at the
end).

---

v5.2 + v4:

There is no runtime/period pattern visible anymore. I see a lot of
extra wakeup latency for those tasks, though.

v5.2 + (v4 without 07/15, 08/15, 15/15) didn't change much.

---

I could try to reduce the stack even further (e.g. without 13/15).

IMHO it's a good idea to have a set of these small & simple test-cases
handy to verify that the base functionality is still in place. This
might be hard to achieve with benchmarks.
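
(For reference, the expected v5.2 per-task runtimes follow from the
shares arithmetic: with equal cpu.shares, each group is entitled to
one of the two CPUs' worth of bandwidth, split evenly among its
runnable tasks. The helper below is mine, for illustration only.)

#include <stdio.h>

/*
 * Expected per-task runtime within one period, assuming ngroups
 * groups with equal cpu.shares competing for ncpus CPUs, each group
 * splitting its bandwidth evenly among tasks_in_group tasks.
 */
unsigned int expected_runtime_ms(unsigned int ncpus, unsigned int ngroups,
				 unsigned int tasks_in_group,
				 unsigned int period_ms)
{
	return ncpus * period_ms / (ngroups * tasks_in_group);
}

int main(void)
{
	/* matches the observed v5.2 pattern: 4/16ms vs 8/16ms */
	printf("/tg0 task: %u/16ms\n", expected_runtime_ms(2, 2, 4, 16));
	printf("/tg1 task: %u/16ms\n", expected_runtime_ms(2, 2, 2, 16));
	return 0;
}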