Script 'mail_helper' called by obssrc

Hello community,

here is the log from the commit of package kernel-source for openSUSE:Factory checked in at 2025-09-14 18:48:24

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kernel-source (Old)
 and      /work/SRC/openSUSE:Factory/.kernel-source.new.1977 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "kernel-source"

Sun Sep 14 18:48:24 2025 rev:798 rq:1304216 version:6.16.7

Changes:
--------
--- /work/SRC/openSUSE:Factory/kernel-source/dtb-aarch64.changes	2025-09-11 14:37:48.135895109 +0200
+++ /work/SRC/openSUSE:Factory/.kernel-source.new.1977/dtb-aarch64.changes	2025-09-14 18:48:26.645743400 +0200
@@ -1,0 +2,53 @@
+Fri Sep 12 08:03:38 CEST 2025 - [email protected]
+
+- Update
+  patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch
+  (bsc#1012628 CVE-2025-40300).
+  Add CVE number.
+- commit 4e78a24
+
+-------------------------------------------------------------------
+Fri Sep 12 07:46:17 CEST 2025 - [email protected]
+
+- Linux 6.16.7 (bsc#1012628).
+- x86/vmscape: Add old Intel CPUs to affected list (bsc#1012628).
+- x86/vmscape: Warn when STIBP is disabled with SMT (bsc#1012628).
+- x86/bugs: Move cpu_bugs_smt_update() down (bsc#1012628).
+- x86/vmscape: Enable the mitigation (bsc#1012628).
+- Update config files (set MITIGATION_VMSCAPE=y).
+- x86/vmscape: Add conditional IBPB mitigation (bsc#1012628).
+- x86/vmscape: Enumerate VMSCAPE bug (bsc#1012628).
+- Documentation/hw-vuln: Add VMSCAPE documentation (bsc#1012628).
+- commit 6ebd23a
+
+-------------------------------------------------------------------
+Thu Sep 11 18:10:25 CEST 2025 - [email protected]
+
+- tar-up: Set owner of files in generated tar archives to root rather than
+  nobody
+- commit 1c79230
+
+-------------------------------------------------------------------
+Thu Sep 11 17:34:41 CEST 2025 - [email protected]
+
+- tar-up: Also sort generated tar archives
+- commit 688ab6a
+
+-------------------------------------------------------------------
+Thu Sep 11 17:07:59 CEST 2025 - [email protected]
+
+- tar-up: Use the tar utility instead of stable-tar script
+  The stable-tar script no longer works on Tumbleweed.
+  Note: this relies on git setting the permissions uniformly, they cannot
+  be set on tar commandline
+- commit f5c226b
+
+-------------------------------------------------------------------
+Thu Sep 11 09:10:22 CEST 2025 - [email protected]
+
+- Refresh
+  patches.suse/bcachefs-print-message-at-mount-time-regarding-immin.patch.
+  Update the message as discussed in bsc#1248109.
+- commit bf4fa57
+
+-------------------------------------------------------------------
@@ -331,0 +385,12 @@
+Tue Sep 9 13:45:14 CEST 2025 - [email protected]
+
+- scripts/python/kss-dashboard: prepare for the alternative CVE branch
+- commit b421c1b
+
+-------------------------------------------------------------------
+Tue Sep 9 11:45:06 CEST 2025 - [email protected]
+
+- scripts/python/kss-dashboard: speed up patch checking a bit
+- commit 9e99f3b
+
+-------------------------------------------------------------------
@@ -399,0 +465,6 @@
+
+-------------------------------------------------------------------
+Fri Sep 5 17:22:03 CEST 2025 - [email protected]
+
+- scripts/python/kss-dashboard: fetch into repos if stale
+- commit 8b008e3

dtb-armv6l.changes: same change
dtb-armv7l.changes: same change
dtb-riscv64.changes: same change
kernel-64kb.changes: same change
kernel-default.changes: same change
kernel-docs.changes: same change
kernel-kvmsmall.changes: same change
kernel-lpae.changes: same change
kernel-obs-build.changes: same change
kernel-obs-qa.changes: same change
kernel-pae.changes: same change
kernel-source.changes: same change
kernel-syms.changes: same change
kernel-vanilla.changes: same change
kernel-zfcpdump.changes: same change

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ dtb-aarch64.spec ++++++
--- /var/tmp/diff_new_pack.Op2BLB/_old	2025-09-14 18:48:45.610538109 +0200
+++ /var/tmp/diff_new_pack.Op2BLB/_new	2025-09-14 18:48:45.614538277 +0200
@@ -17,7 +17,7 @@
 %define srcversion 6.16
-%define patchversion
6.16.6 +%define patchversion 6.16.7 %define variant %{nil} %include %_sourcedir/kernel-spec-macros @@ -25,9 +25,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: dtb-aarch64 -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif dtb-armv6l.spec: same change dtb-armv7l.spec: same change dtb-riscv64.spec: same change ++++++ kernel-64kb.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.738543474 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.742543641 +0200 @@ -18,8 +18,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %define compress_modules zstd %define compress_vmlinux xz @@ -40,9 +40,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-64kb -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif kernel-default.spec: same change ++++++ kernel-docs.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.806546323 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.806546323 +0200 @@ -17,8 +17,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} 
%define build_html 1 %define build_pdf 0 @@ -28,9 +28,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-docs -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif ++++++ kernel-kvmsmall.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.838547664 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.838547664 +0200 @@ -18,8 +18,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %define compress_modules zstd %define compress_vmlinux xz @@ -40,9 +40,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-kvmsmall -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif kernel-lpae.spec: same change ++++++ kernel-obs-build.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.894550010 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.898550178 +0200 @@ -19,7 +19,7 @@ #!BuildIgnore: post-build-checks -%define patchversion 6.16.6 +%define patchversion 6.16.7 %define variant %{nil} %include %_sourcedir/kernel-spec-macros @@ -38,23 +38,23 @@ %endif %endif %endif -%global kernel_package kernel%kernel_flavor-srchash-ad8b04f0f117450e075d87f288567848190dfa36 +%global kernel_package 
kernel%kernel_flavor-srchash-4e78a24cfd328eb3380ea779cf2726f08e0124ec %endif %if 0%{?rhel_version} %global kernel_package kernel %endif Name: kernel-obs-build -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif Summary: package kernel and initrd for OBS VM builds License: GPL-2.0-only Group: SLES -Provides: kernel-obs-build-srchash-ad8b04f0f117450e075d87f288567848190dfa36 +Provides: kernel-obs-build-srchash-4e78a24cfd328eb3380ea779cf2726f08e0124ec BuildRequires: coreutils BuildRequires: device-mapper BuildRequires: dracut ++++++ kernel-obs-qa.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.926551352 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.926551352 +0200 @@ -17,15 +17,15 @@ # needsrootforbuild -%define patchversion 6.16.6 +%define patchversion 6.16.7 %define variant %{nil} %include %_sourcedir/kernel-spec-macros Name: kernel-obs-qa -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif @@ -36,7 +36,7 @@ # kernel-obs-build must be also configured as VMinstall, but is required # here as well to avoid that qa and build package build parallel %if ! 
0%{?qemu_user_space_build} -BuildRequires: kernel-obs-build-srchash-ad8b04f0f117450e075d87f288567848190dfa36 +BuildRequires: kernel-obs-build-srchash-4e78a24cfd328eb3380ea779cf2726f08e0124ec %endif BuildRequires: modutils ExclusiveArch: aarch64 armv6hl armv7hl ppc64le riscv64 s390x x86_64 ++++++ kernel-pae.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.958552693 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.958552693 +0200 @@ -18,8 +18,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %define compress_modules zstd %define compress_vmlinux xz @@ -40,9 +40,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-pae -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif ++++++ kernel-source.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:45.994554201 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:45.994554201 +0200 @@ -17,8 +17,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %define gcc_package gcc %define gcc_compiler gcc @@ -28,9 +28,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-source -Version: 
6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif ++++++ kernel-syms.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:46.034555878 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:46.034555878 +0200 @@ -16,15 +16,15 @@ # -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %include %_sourcedir/kernel-spec-macros Name: kernel-syms -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif ++++++ kernel-vanilla.spec ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:46.074557554 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:46.078557721 +0200 @@ -18,8 +18,8 @@ %define srcversion 6.16 -%define patchversion 6.16.6 -%define git_commit ad8b04f0f117450e075d87f288567848190dfa36 +%define patchversion 6.16.7 +%define git_commit 4e78a24cfd328eb3380ea779cf2726f08e0124ec %define variant %{nil} %define compress_modules zstd %define compress_vmlinux xz @@ -40,9 +40,9 @@ %(chmod +x %_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,mkspec-dtb,check-module-license,splitflist,mergedep,moddep,modflist,kernel-subpackage-build}) Name: kernel-vanilla -Version: 6.16.6 +Version: 6.16.7 %if 0%{?is_kotd} -Release: <RELEASE>.gad8b04f +Release: <RELEASE>.g4e78a24 %else Release: 0 %endif kernel-zfcpdump.spec: same change ++++++ config.addon.tar.bz2 ++++++ ++++++ config.tar.bz2 ++++++ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/config/i386/pae new/config/i386/pae --- old/config/i386/pae 2025-09-08 11:36:17.000000000 +0200 +++ new/config/i386/pae 2025-09-12 09:00:16.000000000 +0200 @@ -1,6 +1,6 @@ # # Automatically generated file; DO NOT EDIT. 
-# Linux/i386 6.16.4 Kernel Configuration +# Linux/i386 6.16.7 Kernel Configuration # CONFIG_CC_VERSION_TEXT="gcc (scripts/dummy-tools/gcc)" CONFIG_CC_IS_GCC=y @@ -563,6 +563,7 @@ CONFIG_MITIGATION_SRBDS=y CONFIG_MITIGATION_SSB=y CONFIG_MITIGATION_TSA=y +CONFIG_MITIGATION_VMSCAPE=y # # Power management and ACPI options diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/config/x86_64/default new/config/x86_64/default --- old/config/x86_64/default 2025-09-08 11:36:17.000000000 +0200 +++ new/config/x86_64/default 2025-09-12 09:00:16.000000000 +0200 @@ -1,6 +1,6 @@ # # Automatically generated file; DO NOT EDIT. -# Linux/x86_64 6.16.4 Kernel Configuration +# Linux/x86_64 6.16.7 Kernel Configuration # CONFIG_CC_VERSION_TEXT="gcc (scripts/dummy-tools/gcc)" CONFIG_CC_IS_GCC=y @@ -597,6 +597,7 @@ CONFIG_MITIGATION_SSB=y CONFIG_MITIGATION_ITS=y CONFIG_MITIGATION_TSA=y +CONFIG_MITIGATION_VMSCAPE=y CONFIG_ARCH_HAS_ADD_PAGES=y # ++++++ kabi.tar.bz2 ++++++ ++++++ patches.addon.tar.bz2 ++++++ ++++++ patches.apparmor.tar.bz2 ++++++ ++++++ patches.arch.tar.bz2 ++++++ ++++++ patches.drivers.tar.bz2 ++++++ ++++++ patches.drm.tar.bz2 ++++++ ++++++ patches.fixes.tar.bz2 ++++++ ++++++ patches.kabi.tar.bz2 ++++++ ++++++ patches.kernel.org.tar.bz2 ++++++ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-001-Documentation-hw-vuln-Add-VMSCAPE-documentatio.patch new/patches.kernel.org/6.16.7-001-Documentation-hw-vuln-Add-VMSCAPE-documentatio.patch --- old/patches.kernel.org/6.16.7-001-Documentation-hw-vuln-Add-VMSCAPE-documentatio.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-001-Documentation-hw-vuln-Add-VMSCAPE-documentatio.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,155 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:42 -0700 +Subject: [PATCH] Documentation/hw-vuln: Add VMSCAPE documentation +References: bsc#1012628 
+Patch-mainline: 6.16.7 +Git-commit: 9969779d0803f5dcd4460ae7aca2bc3fd91bff12 + +Commit 9969779d0803f5dcd4460ae7aca2bc3fd91bff12 upstream. + +VMSCAPE is a vulnerability that may allow a guest to influence the branch +prediction in host userspace, particularly affecting hypervisors like QEMU. + +Add the documentation. + +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Dave Hansen <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + Documentation/admin-guide/hw-vuln/index.rst | 1 + + Documentation/admin-guide/hw-vuln/vmscape.rst | 110 ++++++++++++++++++ + 2 files changed, 111 insertions(+) + create mode 100644 Documentation/admin-guide/hw-vuln/vmscape.rst + +diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst +index 09890a8f3ee9..8e6130d21de1 100644 +--- a/Documentation/admin-guide/hw-vuln/index.rst ++++ b/Documentation/admin-guide/hw-vuln/index.rst +@@ -25,3 +25,4 @@ are configurable at compile, boot or run time. + rsb + old_microcode + indirect-target-selection ++ vmscape +diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst +new file mode 100644 +index 000000000000..d9b9a2b6c114 +--- /dev/null ++++ b/Documentation/admin-guide/hw-vuln/vmscape.rst +@@ -0,0 +1,110 @@ ++.. SPDX-License-Identifier: GPL-2.0 ++ ++VMSCAPE ++======= ++ ++VMSCAPE is a vulnerability that may allow a guest to influence the branch ++prediction in host userspace. It particularly affects hypervisors like QEMU. ++ ++Even if a hypervisor may not have any sensitive data like disk encryption keys, ++guest-userspace may be able to attack the guest-kernel using the hypervisor as ++a confused deputy. 
++ ++Affected processors ++------------------- ++ ++The following CPU families are affected by VMSCAPE: ++ ++**Intel processors:** ++ - Skylake generation (Parts without Enhanced-IBRS) ++ - Cascade Lake generation - (Parts affected by ITS guest/host separation) ++ - Alder Lake and newer (Parts affected by BHI) ++ ++Note that, BHI affected parts that use BHB clearing software mitigation e.g. ++Icelake are not vulnerable to VMSCAPE. ++ ++**AMD processors:** ++ - Zen series (families 0x17, 0x19, 0x1a) ++ ++** Hygon processors:** ++ - Family 0x18 ++ ++Mitigation ++---------- ++ ++Conditional IBPB ++---------------- ++ ++Kernel tracks when a CPU has run a potentially malicious guest and issues an ++IBPB before the first exit to userspace after VM-exit. If userspace did not run ++between VM-exit and the next VM-entry, no IBPB is issued. ++ ++Note that the existing userspace mitigation against Spectre-v2 is effective in ++protecting the userspace. They are insufficient to protect the userspace VMMs ++from a malicious guest. This is because Spectre-v2 mitigations are applied at ++context switch time, while the userspace VMM can run after a VM-exit without a ++context switch. ++ ++Vulnerability enumeration and mitigation is not applied inside a guest. This is ++because nested hypervisors should already be deploying IBPB to isolate ++themselves from nested guests. ++ ++SMT considerations ++------------------ ++ ++When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be ++vulnerable to cross-thread attacks. For complete protection against VMSCAPE ++attacks in SMT environments, STIBP should be enabled. ++ ++The kernel will issue a warning if SMT is enabled without adequate STIBP ++protection. 
Warning is not issued when: ++ ++- SMT is disabled ++- STIBP is enabled system-wide ++- Intel eIBRS is enabled (which implies STIBP protection) ++ ++System information and options ++------------------------------ ++ ++The sysfs file showing VMSCAPE mitigation status is: ++ ++ /sys/devices/system/cpu/vulnerabilities/vmscape ++ ++The possible values in this file are: ++ ++ * 'Not affected': ++ ++ The processor is not vulnerable to VMSCAPE attacks. ++ ++ * 'Vulnerable': ++ ++ The processor is vulnerable and no mitigation has been applied. ++ ++ * 'Mitigation: IBPB before exit to userspace': ++ ++ Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has ++ run a potentially malicious guest and issues an IBPB before the first ++ exit to userspace after VM-exit. ++ ++ * 'Mitigation: IBPB on VMEXIT': ++ ++ IBPB is issued on every VM-exit. This occurs when other mitigations like ++ RETBLEED or SRSO are already issuing IBPB on VM-exit. ++ ++Mitigation control on the kernel command line ++---------------------------------------------- ++ ++The mitigation can be controlled via the ``vmscape=`` command line parameter: ++ ++ * ``vmscape=off``: ++ ++ Disable the VMSCAPE mitigation. ++ ++ * ``vmscape=ibpb``: ++ ++ Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y). ++ ++ * ``vmscape=force``: ++ ++ Force vulnerability detection and mitigation even on processors that are ++ not known to be affected. 
+-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch new/patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch --- old/patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,156 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:42 -0700 +Subject: [PATCH] x86/vmscape: Enumerate VMSCAPE bug +References: bsc#1012628 CVE-2025-40300 +Patch-mainline: 6.16.7 +Git-commit: a508cec6e5215a3fbc7e73ae86a5c5602187934d + +Commit a508cec6e5215a3fbc7e73ae86a5c5602187934d upstream. + +The VMSCAPE vulnerability may allow a guest to cause Branch Target +Injection (BTI) in userspace hypervisors. + +Kernels (both host and guest) have existing defenses against direct BTI +attacks from guests. There are also inter-process BTI mitigations which +prevent processes from attacking each other. However, the threat in this +case is to a userspace hypervisor within the same process as the attacker. + +Userspace hypervisors have access to their own sensitive data like disk +encryption keys and also typically have access to all guest data. This +means guest userspace may use the hypervisor as a confused deputy to attack +sensitive guest kernel data. There are no existing mitigations for these +attacks. + +Introduce X86_BUG_VMSCAPE for this vulnerability and set it on affected +Intel and AMD CPUs. 
+ +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Borislav Petkov (AMD) <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + arch/x86/include/asm/cpufeatures.h | 1 + + arch/x86/kernel/cpu/common.c | 65 ++++++++++++++++++++---------- + 2 files changed, 44 insertions(+), 22 deletions(-) + +diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h +index 4597ef662122..29a53b97a3d8 100644 +--- a/arch/x86/include/asm/cpufeatures.h ++++ b/arch/x86/include/asm/cpufeatures.h +@@ -548,4 +548,5 @@ + #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ + #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ + #define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */ ++#define X86_BUG_VMSCAPE X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */ + #endif /* _ASM_X86_CPUFEATURES_H */ +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c +index fb50c1dd53ef..acac92fe6c16 100644 +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -1235,6 +1235,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { + #define ITS_NATIVE_ONLY BIT(9) + /* CPU is affected by Transient Scheduler Attacks */ + #define TSA BIT(10) ++/* CPU is affected by VMSCAPE */ ++#define VMSCAPE BIT(11) + + static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { + VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS), +@@ -1246,44 +1248,55 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { + VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS), + VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO), + 
VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS), +- VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS), +- VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS), +- VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS), +- VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS), +- VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS), +- VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS), +- VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS), +- VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED), ++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_ICELAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY), + VULNBL_INTEL_STEPS(INTEL_ICELAKE_D, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY), + VULNBL_INTEL_STEPS(INTEL_ICELAKE_X, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY), +- VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS), +- 
VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS), +- VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS), ++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY), + VULNBL_INTEL_STEPS(INTEL_TIGERLAKE, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY), + VULNBL_INTEL_STEPS(INTEL_LAKEFIELD, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY), +- VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS), +- VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS), +- VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS), +- VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS), +- VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS), +- VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS), ++ VULNBL_INTEL_TYPE(INTEL_ALDERLAKE, ATOM, RFDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ALDERLAKE, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS | VMSCAPE), ++ VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE, ATOM, RFDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P, X86_STEP_MAX, RFDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S, X86_STEP_MAX, RFDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_METEORLAKE_L, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_H, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ARROWLAKE, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_U, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_LUNARLAKE_M, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_SAPPHIRERAPIDS_X, 
X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_GRANITERAPIDS_X, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_EMERALDRAPIDS_X, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT, X86_STEP_MAX, RFDS | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS), + VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_D, X86_STEP_MAX, MMIO | RFDS), + VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RFDS), + VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT, X86_STEP_MAX, RFDS), + VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_D, X86_STEP_MAX, RFDS), + VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_PLUS, X86_STEP_MAX, RFDS), ++ VULNBL_INTEL_STEPS(INTEL_ATOM_CRESTMONT_X, X86_STEP_MAX, VMSCAPE), + + VULNBL_AMD(0x15, RETBLEED), + VULNBL_AMD(0x16, RETBLEED), +- VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO), +- VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO), +- VULNBL_AMD(0x19, SRSO | TSA), +- VULNBL_AMD(0x1a, SRSO), ++ VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE), ++ VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE), ++ VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE), ++ VULNBL_AMD(0x1a, SRSO | VMSCAPE), + {} + }; + +@@ -1542,6 +1555,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) + } + } + ++ /* ++ * Set the bug only on bare-metal. A nested hypervisor should already be ++ * deploying IBPB to isolate itself from nested guests. 
++ */ ++ if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) && ++ !boot_cpu_has(X86_FEATURE_HYPERVISOR)) ++ setup_force_cpu_bug(X86_BUG_VMSCAPE); ++ + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) + return; + +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-003-x86-vmscape-Add-conditional-IBPB-mitigation.patch new/patches.kernel.org/6.16.7-003-x86-vmscape-Add-conditional-IBPB-mitigation.patch --- old/patches.kernel.org/6.16.7-003-x86-vmscape-Add-conditional-IBPB-mitigation.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-003-x86-vmscape-Add-conditional-IBPB-mitigation.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,132 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:42 -0700 +Subject: [PATCH] x86/vmscape: Add conditional IBPB mitigation +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: 2f8f173413f1cbf52660d04df92d0069c4306d25 + +Commit 2f8f173413f1cbf52660d04df92d0069c4306d25 upstream. + +VMSCAPE is a vulnerability that exploits insufficient branch predictor +isolation between a guest and a userspace hypervisor (like QEMU). Existing +mitigations already protect kernel/KVM from a malicious guest. Userspace +can additionally be protected by flushing the branch predictors after a +VMexit. + +Since it is the userspace that consumes the poisoned branch predictors, +conditionally issue an IBPB after a VMexit and before returning to +userspace. Workloads that frequently switch between hypervisor and +userspace will incur the most overhead from the new IBPB. + +This new IBPB is not integrated with the existing IBPB sites. For +instance, a task can use the existing speculation control prctl() to +get an IBPB at context switch time. With this implementation, the +IBPB is doubled up: one at context switch and another before running +userspace. + +The intent is to integrate and optimize these cases post-embargo. 
+ +[ dhansen: elaborate on suboptimal IBPB solution ] + +Suggested-by: Dave Hansen <[email protected]> +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Dave Hansen <[email protected]> +Reviewed-by: Borislav Petkov (AMD) <[email protected]> +Acked-by: Sean Christopherson <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + arch/x86/include/asm/cpufeatures.h | 1 + + arch/x86/include/asm/entry-common.h | 7 +++++++ + arch/x86/include/asm/nospec-branch.h | 2 ++ + arch/x86/kernel/cpu/bugs.c | 8 ++++++++ + arch/x86/kvm/x86.c | 9 +++++++++ + 5 files changed, 27 insertions(+) + +diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h +index 29a53b97a3d8..48ffdbab9145 100644 +--- a/arch/x86/include/asm/cpufeatures.h ++++ b/arch/x86/include/asm/cpufeatures.h +@@ -492,6 +492,7 @@ + #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */ + #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */ + #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */ ++#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */ + + /* + * BUG word(s) +diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h +index d535a97c7284..ce3eb6d5fdf9 100644 +--- a/arch/x86/include/asm/entry-common.h ++++ b/arch/x86/include/asm/entry-common.h +@@ -93,6 +93,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs, + * 8 (ia32) bits. 
+ */ + choose_random_kstack_offset(rdtsc()); ++ ++ /* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */ ++ if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) && ++ this_cpu_read(x86_ibpb_exit_to_user)) { ++ indirect_branch_prediction_barrier(); ++ this_cpu_write(x86_ibpb_exit_to_user, false); ++ } + } + #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare + +diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h +index 10f261678749..e29f82466f43 100644 +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -530,6 +530,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature) + : "memory"); + } + ++DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user); ++ + static inline void indirect_branch_prediction_barrier(void) + { + asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB) +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index d19972d5d729..fdf18bf61490 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -105,6 +105,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base); + DEFINE_PER_CPU(u64, x86_spec_ctrl_current); + EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current); + ++/* ++ * Set when the CPU has run a potentially malicious guest. An IBPB will ++ * be needed to before running userspace. That IBPB will flush the branch ++ * predictor content. 
++ */ ++DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user); ++EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user); ++ + u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB; + + static u64 __ro_after_init x86_arch_cap_msr; +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 7d4cb1cbd629..6b3a64e73f21 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -11145,6 +11145,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) + if (vcpu->arch.guest_fpu.xfd_err) + wrmsrq(MSR_IA32_XFD_ERR, 0); + ++ /* ++ * Mark this CPU as needing a branch predictor flush before running ++ * userspace. Must be done before enabling preemption to ensure it gets ++ * set for the CPU that actually ran the guest, and not the CPU that it ++ * may migrate to. ++ */ ++ if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER)) ++ this_cpu_write(x86_ibpb_exit_to_user, true); ++ + /* + * Consume any pending interrupts, including the possible source of + * VM-Exit on SVM and any ticks that occur between VM-Exit and now. +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-004-x86-vmscape-Enable-the-mitigation.patch new/patches.kernel.org/6.16.7-004-x86-vmscape-Enable-the-mitigation.patch --- old/patches.kernel.org/6.16.7-004-x86-vmscape-Enable-the-mitigation.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-004-x86-vmscape-Enable-the-mitigation.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,282 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:42 -0700 +Subject: [PATCH] x86/vmscape: Enable the mitigation +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: 556c1ad666ad90c50ec8fccb930dd5046cfbecfb + +Commit 556c1ad666ad90c50ec8fccb930dd5046cfbecfb upstream. + +Enable the previously added mitigation for VMscape. Add the cmdline +vmscape={off|ibpb|force} and sysfs reporting. 
+ +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Dave Hansen <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + .../ABI/testing/sysfs-devices-system-cpu | 1 + + .../admin-guide/kernel-parameters.txt | 11 +++ + arch/x86/Kconfig | 9 ++ + arch/x86/kernel/cpu/bugs.c | 90 +++++++++++++++++++ + drivers/base/cpu.c | 3 + + include/linux/cpu.h | 1 + + 6 files changed, 115 insertions(+) + +diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu +index ab8cd337f43a..8aed6d94c4cd 100644 +--- a/Documentation/ABI/testing/sysfs-devices-system-cpu ++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu +@@ -586,6 +586,7 @@ What: /sys/devices/system/cpu/vulnerabilities + /sys/devices/system/cpu/vulnerabilities/srbds + /sys/devices/system/cpu/vulnerabilities/tsa + /sys/devices/system/cpu/vulnerabilities/tsx_async_abort ++ /sys/devices/system/cpu/vulnerabilities/vmscape + Date: January 2018 + Contact: Linux kernel mailing list <[email protected]> + Description: Information about CPU vulnerabilities +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt +index f6d317e1674d..089e1a395178 100644 +--- a/Documentation/admin-guide/kernel-parameters.txt ++++ b/Documentation/admin-guide/kernel-parameters.txt +@@ -3774,6 +3774,7 @@ + srbds=off [X86,INTEL] + ssbd=force-off [ARM64] + tsx_async_abort=off [X86] ++ vmscape=off [X86] + + Exceptions: + This does not have any effect on +@@ -7937,6 +7938,16 @@ + vmpoff= [KNL,S390] Perform z/VM CP command after power off. + Format: <command> + ++ vmscape= [X86] Controls mitigation for VMscape attacks. 
++ VMscape attacks can leak information from a userspace ++ hypervisor to a guest via speculative side-channels. ++ ++ off - disable the mitigation ++ ibpb - use Indirect Branch Prediction Barrier ++ (IBPB) mitigation (default) ++ force - force vulnerability detection even on ++ unaffected processors ++ + vsyscall= [X86-64,EARLY] + Controls the behavior of vsyscalls (i.e. calls to + fixed addresses of 0xffffffffff600x00 from legacy +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index 8bed9030ad47..874c9b264d6f 100644 +--- a/arch/x86/Kconfig ++++ b/arch/x86/Kconfig +@@ -2704,6 +2704,15 @@ config MITIGATION_TSA + security vulnerability on AMD CPUs which can lead to forwarding of + invalid info to subsequent instructions and thus can affect their + timing and thereby cause a leakage. ++ ++config MITIGATION_VMSCAPE ++ bool "Mitigate VMSCAPE" ++ depends on KVM ++ default y ++ help ++ Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security ++ vulnerability on Intel and AMD CPUs that may allow a guest to do ++ Spectre v2 style attacks on userspace hypervisor. 
+ endif + + config ARCH_HAS_ADD_PAGES +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index fdf18bf61490..ae228970ed55 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -96,6 +96,9 @@ static void __init its_update_mitigation(void); + static void __init its_apply_mitigation(void); + static void __init tsa_select_mitigation(void); + static void __init tsa_apply_mitigation(void); ++static void __init vmscape_select_mitigation(void); ++static void __init vmscape_update_mitigation(void); ++static void __init vmscape_apply_mitigation(void); + + /* The base value of the SPEC_CTRL MSR without task-specific bits set */ + u64 x86_spec_ctrl_base; +@@ -235,6 +238,7 @@ void __init cpu_select_mitigations(void) + its_select_mitigation(); + bhi_select_mitigation(); + tsa_select_mitigation(); ++ vmscape_select_mitigation(); + + /* + * After mitigations are selected, some may need to update their +@@ -266,6 +270,7 @@ void __init cpu_select_mitigations(void) + bhi_update_mitigation(); + /* srso_update_mitigation() depends on retbleed_update_mitigation(). 
*/ + srso_update_mitigation(); ++ vmscape_update_mitigation(); + + spectre_v1_apply_mitigation(); + spectre_v2_apply_mitigation(); +@@ -283,6 +288,7 @@ void __init cpu_select_mitigations(void) + its_apply_mitigation(); + bhi_apply_mitigation(); + tsa_apply_mitigation(); ++ vmscape_apply_mitigation(); + } + + /* +@@ -3145,6 +3151,77 @@ static void __init srso_apply_mitigation(void) + } + } + ++#undef pr_fmt ++#define pr_fmt(fmt) "VMSCAPE: " fmt ++ ++enum vmscape_mitigations { ++ VMSCAPE_MITIGATION_NONE, ++ VMSCAPE_MITIGATION_AUTO, ++ VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER, ++ VMSCAPE_MITIGATION_IBPB_ON_VMEXIT, ++}; ++ ++static const char * const vmscape_strings[] = { ++ [VMSCAPE_MITIGATION_NONE] = "Vulnerable", ++ /* [VMSCAPE_MITIGATION_AUTO] */ ++ [VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace", ++ [VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT", ++}; ++ ++static enum vmscape_mitigations vmscape_mitigation __ro_after_init = ++ IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? 
VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE; ++ ++static int __init vmscape_parse_cmdline(char *str) ++{ ++ if (!str) ++ return -EINVAL; ++ ++ if (!strcmp(str, "off")) { ++ vmscape_mitigation = VMSCAPE_MITIGATION_NONE; ++ } else if (!strcmp(str, "ibpb")) { ++ vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER; ++ } else if (!strcmp(str, "force")) { ++ setup_force_cpu_bug(X86_BUG_VMSCAPE); ++ vmscape_mitigation = VMSCAPE_MITIGATION_AUTO; ++ } else { ++ pr_err("Ignoring unknown vmscape=%s option.\n", str); ++ } ++ ++ return 0; ++} ++early_param("vmscape", vmscape_parse_cmdline); ++ ++static void __init vmscape_select_mitigation(void) ++{ ++ if (cpu_mitigations_off() || ++ !boot_cpu_has_bug(X86_BUG_VMSCAPE) || ++ !boot_cpu_has(X86_FEATURE_IBPB)) { ++ vmscape_mitigation = VMSCAPE_MITIGATION_NONE; ++ return; ++ } ++ ++ if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO) ++ vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER; ++} ++ ++static void __init vmscape_update_mitigation(void) ++{ ++ if (!boot_cpu_has_bug(X86_BUG_VMSCAPE)) ++ return; ++ ++ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB || ++ srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT) ++ vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT; ++ ++ pr_info("%s\n", vmscape_strings[vmscape_mitigation]); ++} ++ ++static void __init vmscape_apply_mitigation(void) ++{ ++ if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER) ++ setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER); ++} ++ + #undef pr_fmt + #define pr_fmt(fmt) fmt + +@@ -3396,6 +3473,11 @@ static ssize_t tsa_show_state(char *buf) + return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]); + } + ++static ssize_t vmscape_show_state(char *buf) ++{ ++ return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]); ++} ++ + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, + char *buf, unsigned int bug) + { +@@ -3462,6 +3544,9 @@ static ssize_t cpu_show_common(struct device *dev, 
struct device_attribute *attr + case X86_BUG_TSA: + return tsa_show_state(buf); + ++ case X86_BUG_VMSCAPE: ++ return vmscape_show_state(buf); ++ + default: + break; + } +@@ -3553,6 +3638,11 @@ ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *bu + { + return cpu_show_common(dev, attr, buf, X86_BUG_TSA); + } ++ ++ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf) ++{ ++ return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE); ++} + #endif + + void __warn_thunk(void) +diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c +index efc575a00edd..008da0354fba 100644 +--- a/drivers/base/cpu.c ++++ b/drivers/base/cpu.c +@@ -603,6 +603,7 @@ CPU_SHOW_VULN_FALLBACK(ghostwrite); + CPU_SHOW_VULN_FALLBACK(old_microcode); + CPU_SHOW_VULN_FALLBACK(indirect_target_selection); + CPU_SHOW_VULN_FALLBACK(tsa); ++CPU_SHOW_VULN_FALLBACK(vmscape); + + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); + static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); +@@ -622,6 +623,7 @@ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL); + static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL); + static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL); + static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL); ++static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL); + + static struct attribute *cpu_root_vulnerabilities_attrs[] = { + &dev_attr_meltdown.attr, +@@ -642,6 +644,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { + &dev_attr_old_microcode.attr, + &dev_attr_indirect_target_selection.attr, + &dev_attr_tsa.attr, ++ &dev_attr_vmscape.attr, + NULL + }; + +diff --git a/include/linux/cpu.h b/include/linux/cpu.h +index 6378370a952f..9cc5472b87ea 100644 +--- a/include/linux/cpu.h ++++ b/include/linux/cpu.h +@@ -83,6 +83,7 @@ extern ssize_t cpu_show_old_microcode(struct device *dev, + extern ssize_t 
cpu_show_indirect_target_selection(struct device *dev, + struct device_attribute *attr, char *buf); + extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf); ++extern ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf); + + extern __printf(4, 5) + struct device *cpu_device_create(struct device *parent, void *drvdata, +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-005-x86-bugs-Move-cpu_bugs_smt_update-down.patch new/patches.kernel.org/6.16.7-005-x86-bugs-Move-cpu_bugs_smt_update-down.patch --- old/patches.kernel.org/6.16.7-005-x86-bugs-Move-cpu_bugs_smt_update-down.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-005-x86-bugs-Move-cpu_bugs_smt_update-down.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,215 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:43 -0700 +Subject: [PATCH] x86/bugs: Move cpu_bugs_smt_update() down +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: 6449f5baf9c78a7a442d64f4a61378a21c5db113 + +Commit 6449f5baf9c78a7a442d64f4a61378a21c5db113 upstream. + +cpu_bugs_smt_update() uses global variables from different mitigations. For +SMT updates it can't currently use vmscape_mitigation that is defined after +it. + +Since cpu_bugs_smt_update() depends on many other mitigations, move it +after all mitigations are defined. With that, it can use vmscape_mitigation +in a moment. + +No functional change. 
+ +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Reviewed-by: Dave Hansen <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + arch/x86/kernel/cpu/bugs.c | 165 +++++++++++++++++++------------------ + 1 file changed, 83 insertions(+), 82 deletions(-) + +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index ae228970ed55..4bd9aff80534 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -2369,88 +2369,6 @@ static void update_mds_branch_idle(void) + } + } + +-#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" +-#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n" +-#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n" +- +-void cpu_bugs_smt_update(void) +-{ +- mutex_lock(&spec_ctrl_mutex); +- +- if (sched_smt_active() && unprivileged_ebpf_enabled() && +- spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) +- pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); +- +- switch (spectre_v2_user_stibp) { +- case SPECTRE_V2_USER_NONE: +- break; +- case SPECTRE_V2_USER_STRICT: +- case SPECTRE_V2_USER_STRICT_PREFERRED: +- update_stibp_strict(); +- break; +- case SPECTRE_V2_USER_PRCTL: +- case SPECTRE_V2_USER_SECCOMP: +- update_indir_branch_cond(); +- break; +- } +- +- switch (mds_mitigation) { +- case MDS_MITIGATION_FULL: +- case MDS_MITIGATION_AUTO: +- case MDS_MITIGATION_VMWERV: +- if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY)) +- pr_warn_once(MDS_MSG_SMT); +- update_mds_branch_idle(); +- break; +- case MDS_MITIGATION_OFF: +- break; +- } +- +- switch (taa_mitigation) { +- case TAA_MITIGATION_VERW: +- case TAA_MITIGATION_AUTO: +- case TAA_MITIGATION_UCODE_NEEDED: +- if (sched_smt_active()) +- pr_warn_once(TAA_MSG_SMT); +- break; +- case TAA_MITIGATION_TSX_DISABLED: +- case TAA_MITIGATION_OFF: +- break; +- } +- +- switch (mmio_mitigation) { +- case MMIO_MITIGATION_VERW: +- case MMIO_MITIGATION_AUTO: +- case MMIO_MITIGATION_UCODE_NEEDED: +- if (sched_smt_active()) +- pr_warn_once(MMIO_MSG_SMT); +- break; +- case MMIO_MITIGATION_OFF: +- break; +- } +- +- switch (tsa_mitigation) { +- case TSA_MITIGATION_USER_KERNEL: +- case TSA_MITIGATION_VM: +- case TSA_MITIGATION_AUTO: +- case TSA_MITIGATION_FULL: +- /* +- * TSA-SQ can potentially lead to info leakage between +- * SMT threads. 
+- */ +- if (sched_smt_active()) +- static_branch_enable(&cpu_buf_idle_clear); +- else +- static_branch_disable(&cpu_buf_idle_clear); +- break; +- case TSA_MITIGATION_NONE: +- case TSA_MITIGATION_UCODE_NEEDED: +- break; +- } +- +- mutex_unlock(&spec_ctrl_mutex); +-} +- + #undef pr_fmt + #define pr_fmt(fmt) "Speculative Store Bypass: " fmt + +@@ -3225,6 +3143,89 @@ static void __init vmscape_apply_mitigation(void) + #undef pr_fmt + #define pr_fmt(fmt) fmt + ++#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" ++#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n" ++#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n" ++#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n" ++ ++void cpu_bugs_smt_update(void) ++{ ++ mutex_lock(&spec_ctrl_mutex); ++ ++ if (sched_smt_active() && unprivileged_ebpf_enabled() && ++ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) ++ pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); ++ ++ switch (spectre_v2_user_stibp) { ++ case SPECTRE_V2_USER_NONE: ++ break; ++ case SPECTRE_V2_USER_STRICT: ++ case SPECTRE_V2_USER_STRICT_PREFERRED: ++ update_stibp_strict(); ++ break; ++ case SPECTRE_V2_USER_PRCTL: ++ case SPECTRE_V2_USER_SECCOMP: ++ update_indir_branch_cond(); ++ break; ++ } ++ ++ switch (mds_mitigation) { ++ case MDS_MITIGATION_FULL: ++ case MDS_MITIGATION_AUTO: ++ case MDS_MITIGATION_VMWERV: ++ if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY)) ++ pr_warn_once(MDS_MSG_SMT); ++ update_mds_branch_idle(); ++ break; ++ case MDS_MITIGATION_OFF: ++ break; ++ } ++ ++ switch (taa_mitigation) { ++ case TAA_MITIGATION_VERW: ++ case TAA_MITIGATION_AUTO: ++ case TAA_MITIGATION_UCODE_NEEDED: ++ if (sched_smt_active()) ++ pr_warn_once(TAA_MSG_SMT); ++ break; ++ case TAA_MITIGATION_TSX_DISABLED: ++ case TAA_MITIGATION_OFF: ++ break; ++ } ++ ++ switch (mmio_mitigation) { ++ case MMIO_MITIGATION_VERW: ++ case MMIO_MITIGATION_AUTO: ++ case MMIO_MITIGATION_UCODE_NEEDED: ++ if (sched_smt_active()) ++ pr_warn_once(MMIO_MSG_SMT); ++ break; ++ case MMIO_MITIGATION_OFF: ++ break; ++ } ++ ++ switch (tsa_mitigation) { ++ case TSA_MITIGATION_USER_KERNEL: ++ case TSA_MITIGATION_VM: ++ case TSA_MITIGATION_AUTO: ++ case TSA_MITIGATION_FULL: ++ /* ++ * TSA-SQ can potentially lead to info leakage between ++ * SMT threads. 
++ */ ++ if (sched_smt_active()) ++ static_branch_enable(&cpu_buf_idle_clear); ++ else ++ static_branch_disable(&cpu_buf_idle_clear); ++ break; ++ case TSA_MITIGATION_NONE: ++ case TSA_MITIGATION_UCODE_NEEDED: ++ break; ++ } ++ ++ mutex_unlock(&spec_ctrl_mutex); ++} ++ + #ifdef CONFIG_SYSFS + + #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion" +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-006-x86-vmscape-Warn-when-STIBP-is-disabled-with-S.patch new/patches.kernel.org/6.16.7-006-x86-vmscape-Warn-when-STIBP-is-disabled-with-S.patch --- old/patches.kernel.org/6.16.7-006-x86-vmscape-Warn-when-STIBP-is-disabled-with-S.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-006-x86-vmscape-Warn-when-STIBP-is-disabled-with-S.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,67 @@ +From: Pawan Gupta <[email protected]> +Date: Thu, 14 Aug 2025 10:20:43 -0700 +Subject: [PATCH] x86/vmscape: Warn when STIBP is disabled with SMT +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: b7cc9887231526ca4fa89f3fa4119e47c2dc7b1e + +Commit b7cc9887231526ca4fa89f3fa4119e47c2dc7b1e upstream. + +Cross-thread attacks are generally harder as they require the victim to be +co-located on a core. However, with VMSCAPE the adversary targets belong to +the same guest execution, that are more likely to get co-located. In +particular, a thread that is currently executing userspace hypervisor +(after the IBPB) may still be targeted by a guest execution from a sibling +thread. 
+ +Issue a warning about the potential risk, except when: + +- SMT is disabled +- STIBP is enabled system-wide +- Intel eIBRS is enabled (which implies STIBP protection) + +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + arch/x86/kernel/cpu/bugs.c | 22 ++++++++++++++++++++++ + 1 file changed, 22 insertions(+) + +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index 4bd9aff80534..65e253ef5218 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -3223,6 +3223,28 @@ void cpu_bugs_smt_update(void) + break; + } + ++ switch (vmscape_mitigation) { ++ case VMSCAPE_MITIGATION_NONE: ++ case VMSCAPE_MITIGATION_AUTO: ++ break; ++ case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT: ++ case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER: ++ /* ++ * Hypervisors can be attacked across-threads, warn for SMT when ++ * STIBP is not already enabled system-wide. ++ * ++ * Intel eIBRS (!AUTOIBRS) implies STIBP on. 
++ */ ++ if (!sched_smt_active() || ++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || ++ spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED || ++ (spectre_v2_in_eibrs_mode(spectre_v2_enabled) && ++ !boot_cpu_has(X86_FEATURE_AUTOIBRS))) ++ break; ++ pr_warn_once(VMSCAPE_MSG_SMT); ++ break; ++ } ++ + mutex_unlock(&spec_ctrl_mutex); + } + +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-007-x86-vmscape-Add-old-Intel-CPUs-to-affected-lis.patch new/patches.kernel.org/6.16.7-007-x86-vmscape-Add-old-Intel-CPUs-to-affected-lis.patch --- old/patches.kernel.org/6.16.7-007-x86-vmscape-Add-old-Intel-CPUs-to-affected-lis.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-007-x86-vmscape-Add-old-Intel-CPUs-to-affected-lis.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,55 @@ +From: Pawan Gupta <[email protected]> +Date: Fri, 29 Aug 2025 15:28:52 -0700 +Subject: [PATCH] x86/vmscape: Add old Intel CPUs to affected list +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: 8a68d64bb10334426834e8c273319601878e961e + +Commit 8a68d64bb10334426834e8c273319601878e961e upstream. + +These old CPUs are not tested against VMSCAPE, but are likely vulnerable. 
+ +Signed-off-by: Pawan Gupta <[email protected]> +Signed-off-by: Dave Hansen <[email protected]> +Signed-off-by: Borislav Petkov (AMD) <[email protected]> +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + arch/x86/kernel/cpu/common.c | 21 ++++++++++++--------- + 1 file changed, 12 insertions(+), 9 deletions(-) + +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c +index acac92fe6c16..bce82fa055e4 100644 +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -1239,15 +1239,18 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { + #define VMSCAPE BIT(11) + + static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { +- VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO), +- VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO), +- VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS), +- VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO), +- VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS), ++ VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE_X, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE_X, X86_STEP_MAX, VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_HASWELL, X86_STEP_MAX, SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_HASWELL_L, X86_STEP_MAX, SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_HASWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_HASWELL_X, X86_STEP_MAX, MMIO | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_BROADWELL_D, X86_STEP_MAX, MMIO | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO | VMSCAPE), ++ 
VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS | VMSCAPE), ++ VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | VMSCAPE), + VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | VMSCAPE), +-- +2.51.0 + diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/6.16.7-008-Linux-6.16.7.patch new/patches.kernel.org/6.16.7-008-Linux-6.16.7.patch --- old/patches.kernel.org/6.16.7-008-Linux-6.16.7.patch 1970-01-01 01:00:00.000000000 +0100 +++ new/patches.kernel.org/6.16.7-008-Linux-6.16.7.patch 2025-09-12 09:00:22.000000000 +0200 @@ -0,0 +1,29 @@ +From: Greg Kroah-Hartman <[email protected]> +Date: Thu, 11 Sep 2025 17:23:23 +0200 +Subject: [PATCH] Linux 6.16.7 +References: bsc#1012628 +Patch-mainline: 6.16.7 +Git-commit: 131e2001572ba68b6728bcba91c58647168d237f + +Signed-off-by: Greg Kroah-Hartman <[email protected]> +Signed-off-by: Jiri Slaby <[email protected]> +--- + Makefile | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Makefile b/Makefile +index 0200497da26c..86359283ccc9 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 16 +-SUBLEVEL = 6 ++SUBLEVEL = 7 + EXTRAVERSION = + NAME = Baby Opossum Posse + +-- +2.51.0 + ++++++ patches.rpmify.tar.bz2 ++++++ ++++++ patches.rt.tar.bz2 ++++++ ++++++ patches.suse.tar.bz2 ++++++ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.suse/bcachefs-print-message-at-mount-time-regarding-immin.patch new/patches.suse/bcachefs-print-message-at-mount-time-regarding-immin.patch --- old/patches.suse/bcachefs-print-message-at-mount-time-regarding-immin.patch 2025-09-09 20:01:16.000000000 +0200 +++ 
new/patches.suse/bcachefs-print-message-at-mount-time-regarding-immin.patch 2025-09-11 09:20:20.000000000 +0200 @@ -8,24 +8,20 @@ Signed-off-by: David Disseldorp <[email protected]> --- - fs/bcachefs/super.c | 4 ++++ - 1 file changed, 4 insertions(+) + fs/bcachefs/super.c | 5 +++++ + 1 file changed, 5 insertions(+) -diff --git a/fs/bcachefs/super.c b/fs/bcachefs/super.c -index c46b1053a02c9..343252d489c6c 100644 --- a/fs/bcachefs/super.c +++ b/fs/bcachefs/super.c -@@ -1241,6 +1241,10 @@ int bch2_fs_start(struct bch_fs *c) +@@ -1241,6 +1241,11 @@ err: bch_err_msg(c, ret, "starting filesystem"); else bch_verbose(c, "done starting filesystem"); + -+ pr_crit("Bcachefs may be removed from the kernel very soon. See:\n" -+ "https://lore.kernel.org/all/CAHk-=wi+k8E4kWR8c-nREP0+EA4D+=rz5j0hdk3n6cwgfe0...@mail.gmail.com/\n"); ++ pr_crit("bcachefs will be removed from the SUSE kernel in 6.18.\n" ++ "This kernel may be missing critical bcachefs fixes, due to its mainline transition to \"externally maintained\" status.\n" ++ "See also: https://bugzilla.opensuse.org/show_bug.cgi?id=1248109\n"); + return ret; } --- -2.50.1 - ++++++ series.conf ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:48.646665337 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:48.650665504 +0200 @@ -2009,6 +2009,14 @@ patches.kernel.org/6.16.6-182-riscv-Fix-sparse-warning-about-different-addre.patch patches.kernel.org/6.16.6-183-Revert-drm-i915-gem-Allow-EXEC_CAPTURE-on-reco.patch patches.kernel.org/6.16.6-184-Linux-6.16.6.patch + patches.kernel.org/6.16.7-001-Documentation-hw-vuln-Add-VMSCAPE-documentatio.patch + patches.kernel.org/6.16.7-002-x86-vmscape-Enumerate-VMSCAPE-bug.patch + patches.kernel.org/6.16.7-003-x86-vmscape-Add-conditional-IBPB-mitigation.patch + patches.kernel.org/6.16.7-004-x86-vmscape-Enable-the-mitigation.patch + patches.kernel.org/6.16.7-005-x86-bugs-Move-cpu_bugs_smt_update-down.patch + 
patches.kernel.org/6.16.7-006-x86-vmscape-Warn-when-STIBP-is-disabled-with-S.patch + patches.kernel.org/6.16.7-007-x86-vmscape-Add-old-Intel-CPUs-to-affected-lis.patch + patches.kernel.org/6.16.7-008-Linux-6.16.7.patch ######################################################## # Build fixes that apply to the vanilla kernel too. ++++++ source-timestamp ++++++ --- /var/tmp/diff_new_pack.Op2BLB/_old 2025-09-14 18:48:48.674666510 +0200 +++ /var/tmp/diff_new_pack.Op2BLB/_new 2025-09-14 18:48:48.678666678 +0200 @@ -1,4 +1,4 @@ -2025-09-09 18:01:16 +0000 -GIT Revision: ad8b04f0f117450e075d87f288567848190dfa36 +2025-09-12 07:00:22 +0000 +GIT Revision: 4e78a24cfd328eb3380ea779cf2726f08e0124ec GIT Branch: stable ++++++ sysctl.tar.bz2 ++++++
