On Sun, Jan 30, 2022 at 10:41:34AM -0500, Ryan Kavanagh wrote:
> On Sun, Jan 30, 2022 at 12:39:02AM -0600, Scott Cheloha wrote:
> > > btrace -e 'profile:hz:100 { @[kstack] = count(); }' > /tmp/btrace.out
> > >
> > > for ten seconds and ran the output through
> > >
> > > https://github.com/brendangregg/FlameGraph/raw/master/stackcollapse-bpftrace.pl
> > > https://github.com/brendangregg/FlameGraph/raw/master/flamegraph.pl
> > >
> > > The output of stackcollapse-bpftrace.pl and flamegraph.pl are attached
> > > as btrace.collapsed and btrace.svg.
> >
> > The flamegraph suggests that you spent 10% of that time servicing
> > ichiic(4) interrupts from idle.
> >
> > That could be a fluke though.
>
> In case it was a fluke, I've regenerated the flamegraph on 7.0
> GENERIC.MP#293 amd64 using 10 seconds of output on an idle machine.
> Please see attached.
>
> > What does the main systat view look like in the interrupt column?
> >
> > $ systat 1
>
> Again on #293:
>
> Interrupts (range after idling for a few seconds)
>     247 total     (235-260)
>     200 clock     (200-200)
>      21 ipi       (16-23)
>       1 acpi0     (0-1)
>       6 inteldrm  (5-7)
>         azalia1   (0-0)
>      11 iwm0      (10-16)
>         ehci0     (0-0)
>       1 ahci0     (0-1)
>       1 ichiic0   (0-1)
>       6 pckbc0    (0-0)
>         pckbc0    (0-0)
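For anyone following along, the end-to-end pipeline from the quoted
messages looks roughly like the sketch below. The file names and the
assumption that the FlameGraph scripts sit in the current directory
are mine; the mails above only name the commands.

    # Sample kernel stacks at 100 Hz; interrupt btrace with ^C after
    # about ten seconds.
    btrace -e 'profile:hz:100 { @[kstack] = count(); }' > /tmp/btrace.out

    # Fold the stacks and render the SVG with Brendan Gregg's scripts.
    perl stackcollapse-bpftrace.pl /tmp/btrace.out > btrace.collapsed
    perl flamegraph.pl btrace.collapsed > btrace.svg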
Thank you for providing the traces. Based on these numbers and the similar-looking flamegraph, I'd say you're spending a relatively large amount of time handling ichiic(4) interrupts. I don't know anything about that device, but if you're spending that much time in x86_bus_space_io_read_1() and its _write_1() counterpart, my guess is that the device is slow to respond and the driver is waiting on it. Someone else is going to have to weigh in on the cause and a possible fix.
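To illustrate why a slow device shows up in exactly those functions:
drivers for this kind of controller typically spin on a status
register until the hardware reports completion, and every iteration
of such a loop is one I/O port read that lands in
x86_bus_space_io_read_1() in a profile. A minimal hypothetical sketch
of that pattern follows; this is NOT the actual ichiic(4) code, and
the register offset, bit name, and softc layout are invented:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <machine/bus.h>

    #define FOO_STATUS  0x00    /* made-up status register offset */
    #define FOO_BUSY    0x01    /* made-up "transfer in progress" bit */

    struct foo_softc {
            bus_space_tag_t         sc_iot;
            bus_space_handle_t      sc_ioh;
    };

    /*
     * Poll until the device reports idle. Each bus_space_read_1()
     * call here is one port read, so a sluggish device makes this
     * loop (and x86_bus_space_io_read_1()) hot in a kernel profile.
     */
    static int
    foo_wait_idle(struct foo_softc *sc)
    {
            int i;

            for (i = 0; i < 1000; i++) {
                    if ((bus_space_read_1(sc->sc_iot, sc->sc_ioh,
                        FOO_STATUS) & FOO_BUSY) == 0)
                            return (0);
                    DELAY(10);      /* give the hardware time to settle */
            }
            return (ETIMEDOUT);     /* device never went idle */
    }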