Re: [lkp] [sched/fair] 98d8fd8126: -20.8% hackbench.throughput

2015-10-13 Thread Huang, Ying
Dietmar Eggemann  writes:

> Hi Ying,
>
> On 24/09/15 03:00, kernel test robot wrote:
>> FYI, we noticed the below changes on
>> 
>> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
>> commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize 
>> task load and utilization before placing task on rq")
>> 
>> 
>> =========================================================================================
>> tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/mode/ipc:
>>   
>> lkp-ws02/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1600%/process/pipe
>> 
>> commit: 
>>   231678b768da07d19ab5683a39eeb0c250631d02
>>   98d8fd8126676f7ba6e133e65b2ca4b17989d32c
>> 
>> 231678b768da07d1 98d8fd8126676f7ba6e133e65b
>> ---------------- --------------------------
>>          %stddev     %change         %stddev
>>              \          |                \
>>     188818 ±  1%     -20.8%     149585 ±  1%  hackbench.throughput
>
> [...]
>
>> 
>> lkp-ws02: Westmere-EP
>> Memory: 16G
>> 
>> 
>> 
>> 
>>   [ASCII plot: hackbench.time.involuntary_context_switches -- bisect-good
>>   (*) samples cluster between 5e+07 and 1e+08; bisect-bad (O) samples lie
>>   between 2e+08 and 3e+08]
>> 
>>   [ASCII plot: vmstat.system.in -- bisect-good (*) samples cluster around
>>   5-10; bisect-bad (O) samples around 20-30, in the units as plotted]
>> 
>> To reproduce:
>> 
>> git clone 
>> git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
>> cd lkp-tests
>> bin/lkp install job.yaml  # job file is attached in this email
>> bin/lkp run job.yaml
>
> I tried to recreate this on one of my Intel machines (Xeon CPU E5-2650 v2
> @ 2.60GHz) w/ 16 logical CPUs. We haven't seen anything near a 20%
> performance degradation for hackbench when running our hackbench tests
> on 5/6-core ARM machines or on an IVB-EP (2*10*2) Intel machine.
>
> So I cloned the repo:
>
> # git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git lkp-tests
>
> and ran the hackbench example:
>
> root # lkp install $LKP_SRC/jobs/hackbench.yaml
>
> root # lkp split-job $LKP_SRC/jobs/hackbench.yaml
>
> root # lkp run ./hackbench-50%-threads-socket.yaml
>
> 2015-10-12 19:27:20 /usr/bin/hackbench -g 8 --threads -l 6
> Running in threaded mode with 8 groups using 40 file descriptors each
> (== 320 tasks)
> Each sender will pass 6 messages of 100 bytes
> ...
> wait for background monitors: perf-profile uptime proc-vmstat proc-stat
> meminfo slabinfo interrupts softirqs diskstats cpuidle turbostat sched_debug
>
> root # lkp result hackbench
>
> 

Re: [lkp] [sched/fair] 98d8fd8126: -20.8% hackbench.throughput

2015-10-12 Thread Dietmar Eggemann
Hi Ying,

On 24/09/15 03:00, kernel test robot wrote:
> FYI, we noticed the below changes on
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize task 
> load and utilization before placing task on rq")
> 
> 
> =========================================================================================
> tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/mode/ipc:
>   
> lkp-ws02/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1600%/process/pipe
> 
> commit: 
>   231678b768da07d19ab5683a39eeb0c250631d02
>   98d8fd8126676f7ba6e133e65b2ca4b17989d32c
> 
> 231678b768da07d1 98d8fd8126676f7ba6e133e65b
> ---------------- --------------------------
>          %stddev     %change         %stddev
>              \          |                \
>     188818 ±  1%     -20.8%     149585 ±  1%  hackbench.throughput

[...]

> 
> lkp-ws02: Westmere-EP
> Memory: 16G
> 
> 
> 
> 
>   [ASCII plot: hackbench.time.involuntary_context_switches -- bisect-good
>   (*) samples cluster between 5e+07 and 1e+08; bisect-bad (O) samples lie
>   between 2e+08 and 3e+08]
> 
>   [ASCII plot: vmstat.system.in -- bisect-good (*) samples cluster around
>   5-10; bisect-bad (O) samples around 20-30, in the units as plotted]
> 
> To reproduce:
> 
> git clone 
> git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml  # job file is attached in this email
> bin/lkp run job.yaml

I tried to recreate this on one of my Intel machines (Xeon CPU E5-2650 v2
@ 2.60GHz) w/ 16 logical CPUs. We haven't seen anything near a 20%
performance degradation for hackbench when running our hackbench tests
on 5/6-core ARM machines or on an IVB-EP (2*10*2) Intel machine.

So I cloned the repo:

# git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git lkp-tests

and ran the hackbench example:

root # lkp install $LKP_SRC/jobs/hackbench.yaml

root # lkp split-job $LKP_SRC/jobs/hackbench.yaml

root # lkp run ./hackbench-50%-threads-socket.yaml

2015-10-12 19:27:20 /usr/bin/hackbench -g 8 --threads -l 6
Running in threaded mode with 8 groups using 40 file descriptors each
(== 320 tasks)
Each sender will pass 6 messages of 100 bytes
...
wait for background monitors: perf-profile uptime proc-vmstat proc-stat
meminfo slabinfo interrupts softirqs diskstats cpuidle turbostat sched_debug
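
For context on the workload: with "-g 8 --threads", hackbench spawns 8
groups of 20 sender and 20 receiver tasks, which is the "40 file
descriptors each (== 320 tasks)" in the output above, and every sender
writes fixed 100-byte messages. A compilable toy C sketch of one
sender/receiver pair over a socketpair; illustrative only, not
hackbench's actual source:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define MSG_SIZE 100   /* hackbench-style fixed message size */
#define LOOPS    6     /* messages per sender, matching the run above */

int main(void)
{
	int sv[2];
	char buf[MSG_SIZE];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
		perror("socketpair");
		return 1;
	}

	if (fork() == 0) {		/* child: the sender */
		memset(buf, 'x', sizeof(buf));
		for (int i = 0; i < LOOPS; i++)
			write(sv[0], buf, sizeof(buf));
		_exit(0);
	}

	/* parent: the receiver; for brevity assume each read returns
	 * one full message */
	for (int i = 0; i < LOOPS; i++)
		read(sv[1], buf, sizeof(buf));

	wait(NULL);
	return 0;
}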

root # lkp result hackbench

/result/hackbench/50%-threads-socket-50/$MACHINE/ubuntu/defconfig/gcc-4.8/3.16.0-50-generic/0/

But I can't get any statistics out of it?

root # lkp stat hackbench
runs sum average stddev% case

Thanks for 

[lkp] [sched/fair] 98d8fd8126: -20.8% hackbench.throughput

2015-09-23 Thread kernel test robot
FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize task 
load and utilization before placing task on rq")
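
The commit subject describes an ordering change: a new task's load and
utilization signals are initialized before the task is placed on a
runqueue, so enqueue-time placement decisions see non-zero values. A
toy, compilable C sketch of that ordering idea (hypothetical names and
values, not the actual kernel diff):

#include <stdio.h>

/* Toy model only: load_avg/util_avg stand in for the PELT signals,
 * enqueue_task() for placing the task on a runqueue. */
struct toy_task { long load_avg; long util_avg; };

static void init_runnable_average(struct toy_task *p)
{
	/* give the new task a full initial load/utilization estimate */
	p->load_avg = 1024;
	p->util_avg = 1024;
}

static void enqueue_task(struct toy_task *p)
{
	/* placement/load-balance logic reads the signals at this point */
	printf("enqueue sees load_avg=%ld util_avg=%ld\n",
	       p->load_avg, p->util_avg);
}

int main(void)
{
	struct toy_task p = { 0, 0 };

	init_runnable_average(&p);	/* initialize first ... */
	enqueue_task(&p);		/* ... then place on rq */
	return 0;
}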


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/mode/ipc:
  
lkp-ws02/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1600%/process/pipe

commit: 
  231678b768da07d19ab5683a39eeb0c250631d02
  98d8fd8126676f7ba6e133e65b2ca4b17989d32c

231678b768da07d1 98d8fd8126676f7ba6e133e65b
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
    188818 ±  1%     -20.8%     149585 ±  1%  hackbench.throughput
  81712173 ±  4%    +211.8%  2.548e+08 ±  1%  hackbench.time.involuntary_context_switches
  21611286 ±  0%     -19.2%   17453366 ±  1%  hackbench.time.minor_page_faults
      2226 ±  0%      +1.3%       2255 ±  0%  hackbench.time.percent_of_cpu_this_job_got
     12445 ±  0%      +2.1%      12704 ±  0%  hackbench.time.system_time
 2.494e+08 ±  3%    +118.5%  5.448e+08 ±  1%  hackbench.time.voluntary_context_switches
   1097790 ±  0%     +50.6%    1653664 ±  1%  softirqs.RCU
    554877 ±  3%    +137.8%    1319318 ±  1%  vmstat.system.cs
     89017 ±  4%    +187.8%     256235 ±  1%  vmstat.system.in
 1.312e+08 ±  1%     -16.0%  1.102e+08 ±  4%  numa-numastat.node0.local_node
 1.312e+08 ±  1%     -16.0%  1.102e+08 ±  4%  numa-numastat.node0.numa_hit
 1.302e+08 ±  1%     -34.9%   84785305 ±  5%  numa-numastat.node1.local_node
 1.302e+08 ±  1%     -34.9%   84785344 ±  5%  numa-numastat.node1.numa_hit
    302.00 ±  1%     -19.2%     244.00 ±  1%  time.file_system_outputs
  81712173 ±  4%    +211.8%  2.548e+08 ±  1%  time.involuntary_context_switches
  21611286 ±  0%     -19.2%   17453366 ±  1%  time.minor_page_faults
 2.494e+08 ±  3%    +118.5%  5.448e+08 ±  1%  time.voluntary_context_switches
     92.88 ±  0%      +1.3%      94.13 ±  0%  turbostat.%Busy
      2675 ±  0%      +1.8%       2723 ±  0%  turbostat.Avg_MHz
      4.44 ±  1%     -24.9%       3.34 ±  2%  turbostat.CPU%c1
      0.98 ±  2%     -32.2%       0.66 ±  3%  turbostat.CPU%c3
  2.79e+08 ±  4%     -25.2%  2.086e+08 ±  6%  cpuidle.C1-NHM.time
 1.235e+08 ±  4%     -28.6%   88251264 ±  7%  cpuidle.C1E-NHM.time
    243525 ±  4%     -21.9%     190252 ±  8%  cpuidle.C1E-NHM.usage
 1.819e+08 ±  2%     -25.8%   1.35e+08 ±  1%  cpuidle.C3-NHM.time
    260585 ±  1%     -20.4%     207474 ±  2%  cpuidle.C3-NHM.usage
    266207 ±  1%     -39.4%     161453 ±  3%  cpuidle.C6-NHM.usage
    493467 ±  0%     +26.5%     624337 ±  0%  meminfo.Active
    395397 ±  0%     +33.0%     525811 ±  0%  meminfo.Active(anon)
    372719 ±  1%     +34.2%     500207 ±  1%  meminfo.AnonPages
   4543041 ±  1%     +37.5%    6248687 ±  1%  meminfo.Committed_AS
    185265 ±  1%     +16.3%     215373 ±  0%  meminfo.KernelStack
    302233 ±  1%     +37.1%     414289 ±  1%  meminfo.PageTables
    333827 ±  0%     +18.6%     396038 ±  0%  meminfo.SUnreclaim
    380340 ±  0%     +16.6%     443518 ±  0%  meminfo.Slab
     51154 ±143%    -100.0%       5.00 ±100%  latency_stats.avg.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%      30679 ±100%  latency_stats.avg.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
      7795 ±100%   +1304.6%     109497 ± 93%  latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xa0006013.do_one_initcall
    297190 ±117%    -100.0%      23.00 ±100%  latency_stats.max.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
      0.00 ± -1%      +Inf%      97905 ±109%  latency_stats.max.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
     12901 ±131%     -78.9%       2717 ±135%  latency_stats.max.wait_on_page_bit.wait_on_page_read.do_read_cache_page.read_cache_page_gfp.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
    392778 ±128%    -100.0%      75.50 ±100%  latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
     13678 ± 75%     -68.1%       4368 ± 67%  latency_stats.sum.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_get_by_path.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
     19088 ±101%    -100.0%       8.67 ±110%  latency_stats.sum.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
  0.00 
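
Reading the table: the left value column is the parent commit
(231678b768da07d1), the right one the tested commit (98d8fd8126676f7b),
and %change is the relative difference of the two means. A quick check
of the hackbench.throughput row, as a standalone C snippet:

#include <stdio.h>

int main(void)
{
	double parent = 188818;	/* 231678b768da07d1 mean */
	double tested = 149585;	/* 98d8fd8126676f7b mean */

	/* relative change between the two means */
	printf("%%change = %+.1f%%\n", 100.0 * (tested - parent) / parent);
	/* prints: %change = -20.8% */
	return 0;
}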
