In that same Google+ post, Arjan van de Ven wrote:

"""
Now, about ondemand and cpufreq.
The ondemand algorithm was designed roughly 10 years ago, for CPUs from that
era. If you look at what ondemand really ends up doing, it is managing the
frequency during idle periods, and 10 years ago that mattered for power.

Today (well, last 5 years), the frequency in idle is zero, and even the
voltage is now zero (NHM and later).... so what frequency the OS picks
during the idle period is completely irrelevant. This, and other things,
make ondemand not a good algorithm for current Intel processors.

...

The new code in the 3.9 kernel, under CONFIG_X86_INTEL_PSTATE, is a fresh
approach to all of this.
"""

https://bugs.launchpad.net/bugs/1579278

Title:
  Consider changing default CPU frequency scaling governor back to
  "performance" (Ubuntu Server)

Status in linux package in Ubuntu:
  Invalid
Status in sysvinit package in Ubuntu:
  New
Status in linux source package in Xenial:
  Invalid
Status in sysvinit source package in Xenial:
  New

Bug description:
  Hi,

  With the new Ubuntu archive servers, we saw consistently high load, and
  after some tinkering we found that most of it came from CPUs being woken
  up to decide whether they should enter idle states. Changing the CPU
  frequency scaling governor to "performance" brought the load down
  considerably.
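
  For illustration, that switch can be reproduced by writing to sysfs as root; a
  rough sketch of the idea (assuming the legacy cpufreq interface), not
  necessarily the exact commands we used:

  | # point every CPU's cpufreq policy at the "performance" governor
  | for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
  |     echo performance > "$g"
  | done
  | # verify: should print a single line, "performance"
  | sort -u /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor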

  Perf report obtained with the following commands:

  | perf record -g -a sleep 10
  | perf report

  | Samples: 287K of event 'cycles:pp', Event count (approx.): 124776998906
  |   Children      Self  Command          Shared Object             Symbol
  | +   55.24%     0.20%  swapper          [kernel.kallsyms]         [k] cpu_startup_entry
  | +   53.51%     0.00%  swapper          [kernel.kallsyms]         [k] start_secondary
  | +   53.02%     0.08%  swapper          [kernel.kallsyms]         [k] call_cpuidle
  | +   52.94%     0.02%  swapper          [kernel.kallsyms]         [k] cpuidle_enter
  | +   31.81%     0.67%  swapper          [kernel.kallsyms]         [k] cpuidle_enter_state
  | +   29.59%     0.12%  swapper          [kernel.kallsyms]         [k] acpi_idle_enter
  | +   29.45%     0.05%  swapper          [kernel.kallsyms]         [k] acpi_idle_do_entry
  | +   29.43%    29.43%  swapper          [kernel.kallsyms]         [k] acpi_processor_ffh_cstate_enter
  | +   20.51%     0.04%  swapper          [kernel.kallsyms]         [k] ret_from_intr
  | +   20.47%     0.12%  swapper          [kernel.kallsyms]         [k] do_IRQ
  | +   19.30%     0.07%  swapper          [kernel.kallsyms]         [k] irq_exit
  | +   19.18%     0.07%  apache2          [kernel.kallsyms]         [k] entry_SYSCALL_64_fastpath
  | +   18.80%     0.17%  swapper          [kernel.kallsyms]         [k] __do_softirq
  | +   16.45%     0.11%  swapper          [kernel.kallsyms]         [k] net_rx_action
  | +   16.25%     0.43%  swapper          [kernel.kallsyms]         [k] be_poll
  | +   14.74%     0.21%  swapper          [kernel.kallsyms]         [k] be_process_rx
  | +   13.61%     0.07%  swapper          [kernel.kallsyms]         [k] napi_gro_frags
  | +   12.58%     0.04%  swapper          [kernel.kallsyms]         [k] netif_receive_skb_internal
  | +   12.48%     0.03%  swapper          [kernel.kallsyms]         [k] __netif_receive_skb
  | +   12.42%     0.24%  swapper          [kernel.kallsyms]         [k] __netif_receive_skb_core
  | +   12.41%     0.00%  apache2          [unknown]                 [k] 0x00007f27983b5028
  | +   12.41%     0.00%  apache2          [unknown]                 [k] 0x00007f2798369028
  | +   11.49%     0.16%  swapper          [kernel.kallsyms]         [k] ip_rcv
  | +   11.29%     0.09%  swapper          [kernel.kallsyms]         [k] ip_rcv_finish
  | +   10.77%     0.05%  swapper          [kernel.kallsyms]         [k] ip_local_deliver
  | +   10.70%     0.06%  swapper          [kernel.kallsyms]         [k] ip_local_deliver_finish
  | +   10.55%     0.22%  swapper          [kernel.kallsyms]         [k] tcp_v4_rcv
  | +   10.10%     0.00%  apache2          [unknown]                 [k] 0000000000000000
  | +   10.01%     0.04%  swapper          [kernel.kallsyms]         [k] tcp_v4_do_rcv

  Expanding a few of those, you'll see:

  | -   55.24%     0.20%  swapper          [kernel.kallsyms]         [k] cpu_startup_entry
  |    - 55.04% cpu_startup_entry
  |       - 52.98% call_cpuidle
  |          + 52.93% cpuidle_enter
  |          + 0.00% ret_from_intr
  |            0.00% cpuidle_enter_state
  |            0.00% irq_entries_start
  |       + 1.14% cpuidle_select
  |       + 0.47% schedule_preempt_disabled
  |         0.10% rcu_idle_enter
  |         0.09% rcu_idle_exit
  |       + 0.05% ret_from_intr
  |       + 0.05% tick_nohz_idle_enter
  |       + 0.04% arch_cpu_idle_enter
  |         0.02% cpuidle_enter
  |         0.02% tick_check_broadcast_expired
  |       + 0.01% cpuidle_reflect
  |         0.01% menu_reflect
  |         0.01% atomic_notifier_call_chain
  |         0.01% local_touch_nmi
  |         0.01% cpuidle_not_available
  |         0.01% menu_select
  |         0.01% cpuidle_get_cpu_driver
  |       + 0.01% tick_nohz_idle_exit
  |       + 0.01% sched_ttwu_pending
  |         0.00% set_cpu_sd_state_idle
  |         0.00% native_irq_return_iret
  |         0.00% schedule
  |       + 0.00% arch_cpu_idle_exit
  |         0.00% __tick_nohz_idle_enter
  |         0.00% irq_entries_start
  |         0.00% sched_clock_idle_wakeup_event
  |         0.00% reschedule_interrupt
  |       + 0.00% apic_timer_interrupt
  |    + 0.20% start_secondary
  |    + 0.00% x86_64_start_kernel
  | +   53.51%     0.00%  swapper          [kernel.kallsyms]         [k] start_secondary
  | +   53.02%     0.08%  swapper          [kernel.kallsyms]         [k] call_cpuidle
  | -   52.94%     0.02%  swapper          [kernel.kallsyms]         [k] cpuidle_enter
  |    - 52.92% cpuidle_enter
  |       + 31.81% cpuidle_enter_state
  |       + 20.01% ret_from_intr
  |       + 0.51% apic_timer_interrupt
  |         0.28% native_irq_return_iret
  |       + 0.09% reschedule_interrupt
  |         0.05% irq_entries_start
  |         0.05% do_IRQ
  |         0.05% common_interrupt
  |         0.02% sched_idle_set_state
  |         0.01% acpi_idle_enter
  |         0.01% ktime_get
  |         0.01% restore_regs_and_iret
  |         0.01% restore_c_regs_and_iret
  |       + 0.01% call_function_single_interrupt
  |         0.00% native_iret
  |       + 0.00% call_function_interrupt
  |         0.00% smp_apic_timer_interrupt
  |         0.00% smp_reschedule_interrupt
  |         0.00% smp_call_function_single_interrupt
  |    + 0.02% start_secondary
