After my first attempt to rework the MSR access functions [1], this is the
result of incorporating the feedback I got.
I have still followed the idea to:

- Reduce the number of MSR access functions by keeping only the ones taking
  64-bit values (instead of the dual 32-bit ones).
- Use inline functions instead of macros for rdmsr*(), removing the
  hard-to-read cases where parameters specified the variables receiving the
  results.

One piece of feedback was NOT to rename the access functions, which my new
approach avoids.

The first 8 patches are a complete set achieving especially the first point
above for the *_on_cpu() functions.

Patch 9 prepares switching the CPU-local MSR access functions so that in the
end only rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe() remain (all taking
64-bit values and implemented as inline functions). For this purpose the
already existing functions/macros are overloaded via macros to accept both
variants (64-bit and dual 32-bit values) during the phase of switching the
different subsystems to the new scheme. This avoids having to either patch
all users of the current functions in a single patch (as done in the first 8
patches), or having to use intermediate function names which would need to be
patched again at the end. The resulting patches would be very hard to review
due to their size.

The last 2 patches are examples of how the switch of a subsystem would look.

Up to now all of this is compile tested only.
[1]: https://lore.kernel.org/lkml/[email protected]/

Juergen Gross (11):
  x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
  x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
  x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
  x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
  x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
  x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use
    rdmsr_safe_on_cpu()
  x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
  x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use
    wrmsr_safe_on_cpu()
  x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
  x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
  x86/cpu/mce: Switch code to use 64-bit rdmsr/wrmsr() variants

 arch/x86/events/core.c                        |  42 ++++---
 arch/x86/events/intel/ds.c                    |  11 +-
 arch/x86/events/intel/pt.c                    |   2 +-
 arch/x86/events/intel/uncore_discovery.c      |   2 +-
 arch/x86/events/intel/uncore_snbep.c          |   2 +-
 arch/x86/events/msr.c                         |   2 +-
 arch/x86/events/perf_event.h                  |  26 ++--
 arch/x86/events/probe.c                       |   2 +-
 arch/x86/events/rapl.c                        |   8 +-
 arch/x86/include/asm/msr.h                    |  90 +++++++------
 arch/x86/include/asm/paravirt.h               |   6 +-
 arch/x86/kernel/acpi/cppc.c                   |   8 +-
 arch/x86/kernel/cpu/intel_epb.c               |   8 +-
 arch/x86/kernel/cpu/mce/amd.c                 | 101 +++++++-------
 arch/x86/kernel/cpu/mce/core.c                |  18 ++-
 arch/x86/kernel/cpu/mce/inject.c              |  40 +++---
 arch/x86/kernel/cpu/mce/intel.c               |  32 ++---
 arch/x86/kernel/cpu/mce/p5.c                  |  16 +--
 arch/x86/kernel/cpu/mce/winchip.c             |  10 +-
 arch/x86/kernel/cpu/microcode/intel.c         |   2 +-
 arch/x86/kernel/msr.c                         |   8 +-
 arch/x86/lib/msr-smp.c                        |  79 ++----------
 drivers/cpufreq/acpi-cpufreq.c                |   4 +-
 drivers/cpufreq/amd-pstate-ut.c               |   2 +-
 drivers/cpufreq/amd-pstate.c                  |  21 ++-
 drivers/cpufreq/amd_freq_sensitivity.c        |   4 +-
 drivers/cpufreq/intel_pstate.c                |  64 +++++-----
 drivers/cpufreq/p4-clockmod.c                 |  32 ++---
 drivers/cpufreq/speedstep-centrino.c          |  27 ++--
 drivers/hwmon/coretemp.c                      |  44 ++++---
 drivers/hwmon/via-cputemp.c                   |  16 +--
 drivers/platform/x86/amd/hfi/hfi.c            |   4 +-
 .../intel/speed_select_if/isst_if_common.c    |  13 ++-
 .../intel/uncore-frequency/uncore-frequency.c |  12 +--
 drivers/powercap/intel_rapl_msr.c             |   2 +-
 drivers/thermal/intel/intel_tcc.c             |  43 ++++---
 drivers/thermal/intel/x86_pkg_temp_thermal.c  |  22 ++--
 37 files changed, 387 insertions(+), 438 deletions(-)

-- 
2.53.0

