FYI, we noticed a -6.3% regression of unixbench.score due to commit:

commit 5c0a85fad949212b3e059692deecdeed74ae7ec7 ("mm: make faultaround produce old ptes")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

in testcase: unixbench
on test machine: lituya: 16 threads Haswell High-end Desktop (i7-5960X 3.0G) with 16G memory
with the following parameters: cpufreq_governor=performance/nr_task=1/test=shell8


Details are as below:
-------------------------------------------------------------------------------------------------->


=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/1/debian-x86_64-2015-02-07.cgz/lituya/shell8/unixbench

commit: 
  4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5
  5c0a85fad949212b3e059692deecdeed74ae7ec7

4b50bcc7eda4d3cc 5c0a85fad949212b3e059692de 
---------------- -------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
          3:4          -75%            :4     kmsg.DHCP/BOOTP:Reply_not_for_us,op[#]xid[#]
         %stddev     %change         %stddev
             \          |                \  
     14321 ±  0%      -6.3%      13425 ±  0%  unixbench.score
   1996897 ±  0%      -6.1%    1874635 ±  0%  unixbench.time.involuntary_context_switches
 1.721e+08 ±  0%      -6.2%  1.613e+08 ±  0%  unixbench.time.minor_page_faults
    758.65 ±  0%      -3.0%     735.86 ±  0%  unixbench.time.system_time
    387.66 ±  0%      +5.4%     408.49 ±  0%  unixbench.time.user_time
   5950278 ±  0%      -6.2%    5583456 ±  0%  unixbench.time.voluntary_context_switches
   1960642 ±  0%     -11.4%    1737753 ±  0%  cpuidle.C1-HSW.usage
      5851 ±  0%     -43.8%       3286 ±  1%  proc-vmstat.nr_active_file
     46185 ±  0%     -21.2%      36385 ±  2%  meminfo.Active
     23404 ±  0%     -43.8%      13147 ±  1%  meminfo.Active(file)
      4109 ±  5%     -19.6%       3302 ±  4%  slabinfo.pid.active_objs
      4109 ±  5%     -19.6%       3302 ±  4%  slabinfo.pid.num_objs
     94603 ±  0%      -5.7%      89247 ±  0%  vmstat.system.cs
      8976 ±  0%      -2.5%       8754 ±  0%  vmstat.system.in
      3.38 ±  2%     +11.8%       3.77 ±  0%  turbostat.CPU%c3
      0.24 ±101%     -86.3%       0.03 ± 54%  turbostat.Pkg%pc3
     66.53 ±  0%      -1.7%      65.41 ±  0%  turbostat.PkgWatt
      2061 ±  1%      -8.5%       1886 ±  0%  sched_debug.cfs_rq:/.exec_clock.stddev
    737154 ±  5%     +10.8%     817107 ±  3%  sched_debug.cpu.avg_idle.max
    133057 ±  5%     -33.2%      88864 ± 11%  sched_debug.cpu.avg_idle.min
    181562 ±  8%     +15.9%     210434 ±  3%  sched_debug.cpu.avg_idle.stddev
      0.97 ±  7%     +19.0%       1.16 ±  8%  sched_debug.cpu.clock.stddev
      0.97 ±  7%     +19.0%       1.16 ±  8%  sched_debug.cpu.clock_task.stddev
    248.06 ± 11%     +31.0%     324.94 ±  8%  sched_debug.cpu.cpu_load[1].max
     55.65 ± 14%     +28.1%      71.30 ±  8%  sched_debug.cpu.cpu_load[1].stddev
    233.38 ± 10%     +34.4%     313.56 ±  8%  sched_debug.cpu.cpu_load[2].max
     49.79 ± 15%     +35.6%      67.50 ±  9%  sched_debug.cpu.cpu_load[2].stddev
    233.25 ± 12%     +29.9%     302.94 ±  6%  sched_debug.cpu.cpu_load[3].max
     46.56 ±  8%     +12.2%      52.25 ±  6%  sched_debug.cpu.cpu_load[3].min
     48.51 ± 15%     +31.4%      63.76 ±  7%  sched_debug.cpu.cpu_load[3].stddev
    238.44 ± 12%     +19.0%     283.69 ±  3%  sched_debug.cpu.cpu_load[4].max
     49.56 ±  9%     +13.4%      56.19 ±  4%  sched_debug.cpu.cpu_load[4].min
     48.22 ± 13%     +20.1%      57.93 ±  5%  sched_debug.cpu.cpu_load[4].stddev
     14792 ± 30%     +71.9%      25424 ± 17%  sched_debug.cpu.curr->pid.avg
     42862 ±  1%     +42.6%      61121 ±  0%  sched_debug.cpu.curr->pid.max
     19466 ± 10%     +35.4%      26351 ±  9%  sched_debug.cpu.curr->pid.stddev
      1067 ±  6%     -14.9%     909.35 ±  4%  sched_debug.cpu.ttwu_local.stddev



To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

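If the lkp harness is not available, the job boils down to the steps shown in the
run log at the end of this report: pin every CPU's cpufreq governor to performance
and run UnixBench's shell8 test with a single concurrent copy. A minimal sketch of
that manual path (the ~/UnixBench checkout location is an assumption, not part of
the attached job):

        # set the performance governor on all CPUs, as the job log does per CPU
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
                echo performance > "$g"
        done
        # run the shell8 workload with one copy, matching the logged "./Run shell8 -c 1"
        cd ~/UnixBench && ./Run shell8 -c 1
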

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: unixbench
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  heartbeat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
NFS_HANG_DF_TIMEOUT: 200
NFS_HANG_CHECK_INTERVAL: 900
default-watchdogs:
  oom-killer: 
  watchdog: 
  nfs-hang: 
commit: 5c0a85fad949212b3e059692deecdeed74ae7ec7
model: Haswell High-end Desktop
nr_cpu: 16
memory: 16G
hdd_partitions: 
swap_partitions: 
rootfs_partition: 
description: 16 threads Haswell High-end Desktop (i7-5960X 3.0G) with 16G memory
category: benchmark
nr_task: 1
unixbench:
  test: shell8
queue: bisect
testbox: lituya
tbox_group: lituya
kconfig: x86_64-rhel
enqueue_time: 2016-06-04 03:26:52.444586006 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 101932ca34f6ff20613b88f6bed66fbc4afdfb95
user: lkp
head_commit: 73aa85b30706f742655a10c967c033b56c731aff
base_commit: 1a695a905c18548062509178b98bc91e67510864
branch: internal-devel/devel-hourly-2016060108-internal
result_root: "/result/unixbench/performance-1-shell8/lituya/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/1"
job_file: "/lkp/scheduled/lituya/bisect_unixbench-performance-1-shell8-debian-x86_64-2015-02-07.cgz-x86_64-rhel-5c0a85fad949212b3e059692deecdeed74ae7ec7-20160604-57400-1fovod8-1.yaml"
max_uptime: 1032.28
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lituya/bisect_unixbench-performance-1-shell8-debian-x86_64-2015-02-07.cgz-x86_64-rhel-5c0a85fad949212b3e059692deecdeed74ae7ec7-20160604-57400-1fovod8-1.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=internal-devel/devel-hourly-2016060108-internal
- commit=5c0a85fad949212b3e059692deecdeed74ae7ec7
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/vmlinuz-4.6.0-06629-g5c0a85f
- max_uptime=1032
- RESULT_ROOT=/result/unixbench/performance-1-shell8/lituya/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/1
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/unixbench.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/5c0a85fad949212b3e059692deecdeed74ae7ec7/vmlinuz-4.6.0-06629-g5c0a85f"
dequeue_time: 2016-06-04 03:40:50.807274201 +08:00
job_state: finished
loadavg: 5.70 2.70 1.05 1/257 3744
start_time: '1465010834'
end_time: '1465011023'
version: "/lkp/lkp/.src-20160603-214427"
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-06-04 11:23:32 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-06-04 11:23:33 ./Run shell8 -c 1
