@Haw Loeung: In addition to what I wrote earlier:

> With the new Ubuntu archive servers, we saw constantly high load
> and after some tinkering, we found that it was mostly CPUs
> being woken up to see if they should enter idle states.
> Changing the CPU frequency scaling governor to "performance" saw a 
> considerable drop.
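
Before digging further, it is worth confirming which scaling driver and
governor each of those servers is actually running; the standard
cpufreq sysfs interface shows both, for example:

# scaling driver (presumably intel_pstate here) and per-CPU governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor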

What do you mean by "high load"?
And when you say "saw a considerable drop", does that mean a drop in
wakeups per second, or in load?
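
If the drop was in wakeups, it would be useful to put a number on it.
One rough, setup-independent way (just a sketch) is to compare the
system-wide interrupt rate under each governor, either via vmstat or by
diffing the local timer (LOC) and rescheduling (RES) counters in
/proc/interrupts:

# interrupts ("in") and context switches ("cs") per second, 10 samples
vmstat 1 10

# or snapshot the timer/reschedule interrupt counts 10 seconds apart
grep -E 'LOC|RES' /proc/interrupts
sleep 10
grep -E 'LOC|RES' /proc/interrupts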

Note that you should observe a significant difference in load average
on a server between the powersave and performance governors, and that
actually indicates things are working as they should: at the higher
clock frequency the same work finishes in less busy time, so fewer
tasks are runnable at any instant. For the SpecPower simulator test I
posted above, I'll add some more data for the 0.5X and X lines:

0.5X, where Performance used 31.7% more package power:
  Powersave:   Busy%: 12.58%  Bzy_MHz: 1651  (load average = 1.01)
  Performance: Busy%:  5.04%  Bzy_MHz: 3686  (load average = 0.40)

X, where Performance used 42.1% more package power:
  Powersave:   Busy%: 23.66%  Bzy_MHz: 1798  (load average = 1.89)
  Performance: Busy%: 10.56%  Bzy_MHz: 3681  (load average = 0.84)
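
Incidentally, the two runs are consistent with each other: multiplying
Busy% by Bzy_MHz gives a roughly comparable "effective MHz" under
either governor, so the lower load average under performance reflects
the same work finishing in fewer, faster busy cycles rather than less
work getting done. Back-of-envelope, using only the numbers above:

awk 'BEGIN { print 12.58*1651/100, 5.04*3686/100 }'   # 0.5X: ~208 vs ~186
awk 'BEGIN { print 23.66*1798/100, 10.56*3681/100 }'  # X:    ~425 vs ~389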

Isn't energy consumption what really matters, as long as performance doesn't 
suffer too much?
What I would like to see for your servers is the output of:

sudo turbostat -J -S --debug sleep 300

run with the intel_pstate CPU frequency scaling driver under both the
powersave and performance scaling governors, while your normal
workload is running.
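
Something along these lines would capture both runs back to back; only
a sketch (the settle time and log file names are arbitrary), but it
uses exactly the command above:

for gov in powersave performance; do
    # switch every CPU to the governor under test
    echo "$gov" | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor > /dev/null
    sleep 60    # give the workload a minute to settle under the new governor
    # collect the 300 second summary requested above
    sudo turbostat -J -S --debug sleep 300 > "turbostat-$gov.log" 2>&1
done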

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to sysvinit in Ubuntu.
https://bugs.launchpad.net/bugs/1579278

Title:
  Keep powersave CPU frequency scaling governor for CPUs that support
  intel_pstate

Status in linux package in Ubuntu:
  Invalid
Status in systemd package in Ubuntu:
  Fix Committed
Status in sysvinit package in Ubuntu:
  Invalid
Status in linux source package in Xenial:
  Invalid
Status in systemd source package in Xenial:
  Invalid
Status in sysvinit source package in Xenial:
  Triaged

Bug description:
  Hi,

  With the new Ubuntu archive servers, we saw constantly high load and
  after some tinkering, we found that it was mostly CPUs being woken up
  to see if they should enter idle states. Changing the CPU frequency
  scaling governor to "performance" saw a considerable drop.

  Perf report using the following commands:

  | perf record -g -a sleep 10
  | perf report

  | Samples: 287K of event 'cycles:pp', Event count (approx.): 124776998906
  |   Children      Self  Command          Shared Object             Symbol
  | +   55.24%     0.20%  swapper          [kernel.kallsyms]         [k] cpu_startup_entry
  | +   53.51%     0.00%  swapper          [kernel.kallsyms]         [k] start_secondary
  | +   53.02%     0.08%  swapper          [kernel.kallsyms]         [k] call_cpuidle
  | +   52.94%     0.02%  swapper          [kernel.kallsyms]         [k] cpuidle_enter
  | +   31.81%     0.67%  swapper          [kernel.kallsyms]         [k] cpuidle_enter_state
  | +   29.59%     0.12%  swapper          [kernel.kallsyms]         [k] acpi_idle_enter
  | +   29.45%     0.05%  swapper          [kernel.kallsyms]         [k] acpi_idle_do_entry
  | +   29.43%    29.43%  swapper          [kernel.kallsyms]         [k] acpi_processor_ffh_cstate_enter
  | +   20.51%     0.04%  swapper          [kernel.kallsyms]         [k] ret_from_intr
  | +   20.47%     0.12%  swapper          [kernel.kallsyms]         [k] do_IRQ
  | +   19.30%     0.07%  swapper          [kernel.kallsyms]         [k] irq_exit
  | +   19.18%     0.07%  apache2          [kernel.kallsyms]         [k] entry_SYSCALL_64_fastpath
  | +   18.80%     0.17%  swapper          [kernel.kallsyms]         [k] __do_softirq
  | +   16.45%     0.11%  swapper          [kernel.kallsyms]         [k] net_rx_action
  | +   16.25%     0.43%  swapper          [kernel.kallsyms]         [k] be_poll
  | +   14.74%     0.21%  swapper          [kernel.kallsyms]         [k] be_process_rx
  | +   13.61%     0.07%  swapper          [kernel.kallsyms]         [k] napi_gro_frags
  | +   12.58%     0.04%  swapper          [kernel.kallsyms]         [k] netif_receive_skb_internal
  | +   12.48%     0.03%  swapper          [kernel.kallsyms]         [k] __netif_receive_skb
  | +   12.42%     0.24%  swapper          [kernel.kallsyms]         [k] __netif_receive_skb_core
  | +   12.41%     0.00%  apache2          [unknown]                 [k] 0x00007f27983b5028
  | +   12.41%     0.00%  apache2          [unknown]                 [k] 0x00007f2798369028
  | +   11.49%     0.16%  swapper          [kernel.kallsyms]         [k] ip_rcv
  | +   11.29%     0.09%  swapper          [kernel.kallsyms]         [k] ip_rcv_finish
  | +   10.77%     0.05%  swapper          [kernel.kallsyms]         [k] ip_local_deliver
  | +   10.70%     0.06%  swapper          [kernel.kallsyms]         [k] ip_local_deliver_finish
  | +   10.55%     0.22%  swapper          [kernel.kallsyms]         [k] tcp_v4_rcv
  | +   10.10%     0.00%  apache2          [unknown]                 [k] 0000000000000000
  | +   10.01%     0.04%  swapper          [kernel.kallsyms]         [k] tcp_v4_do_rcv

  Expanding a few of those, you'll see:

  | -   55.24%     0.20%  swapper          [kernel.kallsyms]         [k] cpu_startup_entry
  |    - 55.04% cpu_startup_entry
  |       - 52.98% call_cpuidle
  |          + 52.93% cpuidle_enter
  |          + 0.00% ret_from_intr
  |            0.00% cpuidle_enter_state
  |            0.00% irq_entries_start
  |       + 1.14% cpuidle_select
  |       + 0.47% schedule_preempt_disabled
  |         0.10% rcu_idle_enter
  |         0.09% rcu_idle_exit
  |       + 0.05% ret_from_intr
  |       + 0.05% tick_nohz_idle_enter
  |       + 0.04% arch_cpu_idle_enter
  |         0.02% cpuidle_enter
  |         0.02% tick_check_broadcast_expired
  |       + 0.01% cpuidle_reflect
  |         0.01% menu_reflect
  |         0.01% atomic_notifier_call_chain
  |         0.01% local_touch_nmi
  |         0.01% cpuidle_not_available
  |         0.01% menu_select
  |         0.01% cpuidle_get_cpu_driver
  |       + 0.01% tick_nohz_idle_exit
  |       + 0.01% sched_ttwu_pending
  |         0.00% set_cpu_sd_state_idle
  |         0.00% native_irq_return_iret
  |         0.00% schedule
  |       + 0.00% arch_cpu_idle_exit
  |         0.00% __tick_nohz_idle_enter
  |         0.00% irq_entries_start
  |         0.00% sched_clock_idle_wakeup_event
  |         0.00% reschedule_interrupt
  |       + 0.00% apic_timer_interrupt
  |    + 0.20% start_secondary
  |    + 0.00% x86_64_start_kernel
  | +   53.51%     0.00%  swapper          [kernel.kallsyms]         [k] start_secondary
  | +   53.02%     0.08%  swapper          [kernel.kallsyms]         [k] call_cpuidle
  | -   52.94%     0.02%  swapper          [kernel.kallsyms]         [k] cpuidle_enter
  |    - 52.92% cpuidle_enter
  |       + 31.81% cpuidle_enter_state
  |       + 20.01% ret_from_intr
  |       + 0.51% apic_timer_interrupt
  |         0.28% native_irq_return_iret
  |       + 0.09% reschedule_interrupt
  |         0.05% irq_entries_start
  |         0.05% do_IRQ
  |         0.05% common_interrupt
  |         0.02% sched_idle_set_state
  |         0.01% acpi_idle_enter
  |         0.01% ktime_get
  |         0.01% restore_regs_and_iret
  |         0.01% restore_c_regs_and_iret
  |       + 0.01% call_function_single_interrupt
  |         0.00% native_iret
  |       + 0.00% call_function_interrupt
  |         0.00% smp_apic_timer_interrupt
  |         0.00% smp_reschedule_interrupt
  |         0.00% smp_call_function_single_interrupt
  |    + 0.02% start_secondary

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1579278/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
