I tried to gather more data on this and made several runs of "hdparm -Tt"
for the good and bad cases:
- For the good case (nosmp)
  - cached reads avg. 184.61MB/sec (MIN=181.02 MAX=186.30 STDDEV=1.62)
  - buffered reads avg. 3.64MB/sec (MIN=3.64 MAX=3.65 STDDEV=0)
- For the bad case
  - cached reads avg. 62.00MB/sec (MIN=21.98 MAX=111.55 STDDEV=27.97)
  - buffered reads avg. 1.35MB/sec (MIN=1.09 MAX=1.86 STDDEV=0.20)
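
For reference, this is roughly how such runs can be scripted and
summarized (a sketch only: /dev/sda is a placeholder for the device
under test, and the awk pattern assumes the usual "... = N.NN MB/sec"
line format of hdparm; needs root):

    #!/bin/sh
    # Run hdparm -Tt a few times and summarize the cached-read figures.
    # For the buffered figures, match 'buffered disk reads' instead.
    for i in 1 2 3 4 5; do
        hdparm -Tt /dev/sda
    done | awk '/cached reads/ {
        v = $(NF-1); sum += v; sumsq += v*v; n++
        if (n == 1 || v < min) min = v
        if (n == 1 || v > max) max = v
    }
    END {
        if (n) printf "cached reads avg. %.2f MB/sec (MIN=%.2f MAX=%.2f STDDEV=%.2f)\n",
                      sum/n, min, max, sqrt(sumsq/n - (sum/n)^2)
    }'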

So not only is the SMP case slower, there is also high variation in
the numbers. Whatever is happening, it is not a linear slowdown.

When trying various options, maxcpus=1 had the same effect as nosmp
(disabling a core later does not improve things). There was one option
(which right now slips my memory) that caused a higher rate of timer
interrupts and led to even worse buffered-read performance (as did
disabling the irqbalance daemon). Using nohlt seemed to have no effect.

Looking at the interrupts, the main difference seemed to be the use of
gp_timer when booted with nosmp or maxcpus=1, versus the local timers
(LOC) in the other case. Booting with 2 CPUs and disabling one did not
change that. I cannot recall whether the IPI counts stopped incrementing
completely or only on the other CPU, but LOC definitely was still used
for CPU#0.
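
The counters can be compared directly from /proc/interrupts before and
after a test run, e.g. (interrupt names as they appear on this
platform; /dev/sda again a placeholder):

    # Snapshot the timer/IPI counters, run the benchmark, then diff:
    grep -E 'gp_timer|LOC|IPI' /proc/interrupts > /tmp/irq.before
    hdparm -Tt /dev/sda
    grep -E 'gp_timer|LOC|IPI' /proc/interrupts > /tmp/irq.after
    diff /tmp/irq.before /tmp/irq.after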
