[lkp] [thp] 79553da293d: +1.8% fileio.time.file_system_inputs

From: Huang Ying
Date: 2015-07-07
FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 79553da293d38d63097278de13e28a3b371f43c1 ("thp: cleanup khugepaged startup")


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/disk/iosched/fs/nr_threads:
  bay/dd-write/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1HDD/cfq/xfs/10dd

commit: 
  e39155ea11eac6da056b04669d7c9fc612e2065a
  79553da293d38d63097278de13e28a3b371f43c1
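
For reference, the columns in the table below are: mean and relative
standard deviation on the parent commit (e39155ea11ea), the percent
change, and mean and relative standard deviation on the bisected commit
(79553da293d3). The following is a minimal Python sketch of the
arithmetic behind those columns, illustrative only and not the
lkp-tests implementation:

import statistics

def compare(parent_runs, commit_runs):
    """Summarize two sets of per-run samples the way the table below
    does: mean +- relative stddev for each kernel, plus %change."""
    p_mean = statistics.mean(parent_runs)
    c_mean = statistics.mean(commit_runs)
    # Relative stddev, in percent of the mean.
    p_rsd = 100 * statistics.stdev(parent_runs) / p_mean if p_mean else float("nan")
    c_rsd = 100 * statistics.stdev(commit_runs) / c_mean if c_mean else float("nan")
    # Percent change; a zero base yields +Inf%, as in the latency_stats rows.
    change = 100 * (c_mean - p_mean) / p_mean if p_mean else float("inf")
    return p_mean, p_rsd, change, c_mean, c_rsd

# E.g. samples averaging 30460 vs 40920 reproduce the softirqs.BLOCK
# row below: +34.3%.
print(compare([30420, 30500], [40880, 40960]))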

e39155ea11eac6da 79553da293d38d63097278de13
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
     30460 ±  0%     +34.3%      40920 ±  0%  softirqs.BLOCK
    231.27 ±  4%     -17.3%     191.30 ±  3%  uptime.idle
    765.75 ±  7%     +19.7%     916.50 ±  4%  slabinfo.kmalloc-512.active_objs
    972.25 ±  8%     +17.4%       1141 ±  5%  slabinfo.kmalloc-512.num_objs
     74.00 ±  0%     +15.9%      85.75 ± 10%  vmstat.memory.buff
     91092 ±  0%     -62.3%      34370 ±  1%  vmstat.memory.free
     22460 ±  1%     +29.2%      29026 ±  1%  ftrace.global_dirty_state.dirty
     35516 ±  1%     -11.0%      31615 ±  0%  ftrace.global_dirty_state.writeback
      2.00 ±  0%     +50.0%       3.00 ±  0%  ftrace.writeback_single_inode.sda.age
      4913 ±  1%     +35.7%            ±  1%  ftrace.writeback_single_inode.sda.wrote
     36958 ±  4%     -67.3%      12083 ± 28%  meminfo.AnonHugePages
     89634 ±  0%     +29.2%     115815 ±  1%  meminfo.Dirty
   1315242 ±  0%     +10.4%    1451507 ±  0%  meminfo.MemAvailable
     87870 ±  0%     -63.5%      32046 ±  2%  meminfo.MemFree
    142291 ±  1%     -11.2%     126331 ±  0%  meminfo.Writeback
      2.76 ± 14%     +22.8%       3.39 ±  8%  perf-profile.cpu-cycles.__clear_user.iov_iter_zero.read_iter_zero.new_sync_read.__vfs_read
      4.94 ± 16%     +31.2%       6.48 ±  6%  perf-profile.cpu-cycles.__memset.xfs_vm_write_begin.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter
      1.31 ± 34%     -50.4%       0.65 ± 36%  perf-profile.cpu-cycles.end_page_writeback.end_buffer_async_write.xfs_destroy_ioend.xfs_end_io.process_one_work
      0.86 ± 24%     -44.1%       0.48 ± 42%  perf-profile.cpu-cycles.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      1.15 ± 27%     -50.4%       0.57 ± 39%  perf-profile.cpu-cycles.test_clear_page_writeback.end_page_writeback.end_buffer_async_write.xfs_destroy_ioend.xfs_end_io
      0.76 ± 39%     +65.6%       1.26 ± 23%  perf-profile.cpu-cycles.try_to_free_buffers.xfs_vm_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list
      0.00 ± -1%      +Inf%       9581 ± 43%  latency_stats.avg.do_truncate.do_sys_ftruncate.SyS_ftruncate.system_call_fastpath
      0.00 ± -1%      +Inf%     500330 ± 58%  latency_stats.avg.xfs_file_buffered_aio_write.xfs_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
    163371 ±  3%     +26.6%     206775 ±  1%  latency_stats.hits.ring_buffer_wait.wait_on_pipe.tracing_wait_pipe.tracing_read_pipe.__vfs_read.vfs_read.SyS_read.system_call_fastpath
      0.00 ± -1%      +Inf%      32695 ± 10%  latency_stats.max.do_truncate.do_sys_ftruncate.SyS_ftruncate.system_call_fastpath
      0.00 ± -1%      +Inf%     669208 ± 18%  latency_stats.max.xfs_file_buffered_aio_write.xfs_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
      0.00 ± -1%      +Inf%      53331 ± 28%  latency_stats.sum.do_truncate.do_sys_ftruncate.SyS_ftruncate.system_call_fastpath
      0.00 ± -1%      +Inf%     706140 ± 12%  latency_stats.sum.xfs_file_buffered_aio_write.xfs_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
    394.50 ± 24%     -53.2%     184.50 ± 37%  sched_debug.cfs_rq[3]:/.load
     37.00 ± 34%     +51.4%      56.00 ± 17%  sched_debug.cpu#1.cpu_load[1]
     84.75 ± 19%     -52.8%      40.00 ± 68%  sched_debug.cpu#3.cpu_load[0]
     69.25 ± 20%     -50.5%      34.25 ± 34%  sched_debug.cpu#3.cpu_load[1]
     50.25 ± 11%     -45.8%      27.25 ± 23%  sched_debug.cpu#3.cpu_load[2]
     36.25 ±  5%     -37.9%      22.50 ± 22%  sched_debug.cpu#3.cpu_load[3]
    394.50 ± 24%     -53.2%     184.50 ± 37%  sched_debug.cpu#3.load
 2.138e+11 ±  0%      -1.4%  2.108e+11 ±  0%  perf-stat.L1-dcache-loads
 2.483e+08 ±  0%      -4.3%  2.376e+08 ±  0%  perf-stat.L1-dcache-prefetches
 4.605e+09 ±  0%      +7.8%  4.962e+09 ±  0%  perf-stat.L1-icache-load-misses
 5.684e+11 ±  0%      +2.0%  5.797e+11 ±  0%  perf-stat.L1-icache-loads
 1.637e+08 ±  0%     +15.8%  1.895e+08 ±  1%  perf-stat.LLC-load-misses
 1.089e+11 ±  0%      +1.3%  1.103e+11 ±  0%  perf-stat.branch-loads
 8.163e+10 ±  0%      +2.9%  8.396e+10 ±  1%  perf-stat.bus-cycles
 3.116e+08 ±  1%      +9.6%  3.415e+08 ±  0%  perf-stat.cache-misses
   9306030 ±  1%      -1.4%    9171655 ±  0%
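
Two notes on reading the table above. The latency_stats rows showing
"0.00 ± -1%  +Inf%" are paths that recorded no latency at all on the
parent commit, so the percent change is undefined and printed as +Inf%
(the "-1%" appears to be a placeholder for an undefined stddev). The
perf-stat deltas can be sanity-checked from the absolute columns, e.g.
LLC-load-misses: (1.895e8 - 1.637e8) / 1.637e8 = +15.8%. The latency
figures come from the kernel's latencytop facility; the following is a
minimal Python parsing sketch, where the field layout follows the
latencytop proc output and the microsecond units are an assumption not
stated in this report:

def read_latency_stats(path="/proc/latency_stats"):
    """Parse /proc/latency_stats (CONFIG_LATENCYTOP; enable via
    /proc/sys/kernel/latencytop).  Each record is: hit count, total
    latency, max latency (microseconds, assumed), then the call
    stack."""
    records = []
    with open(path) as f:
        next(f)  # skip the "Latency Top version" header line
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue
            count, total, maximum = map(int, fields[:3])
            records.append((count, total, maximum, " ".join(fields[3:])))
    return records

# E.g. find the stack behind the +Inf% ftruncate rows above:
for count, total, maximum, stack in read_latency_stats():
    if "do_truncate" in stack:
        print(count, total, maximum, stack)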
