cpu_tss_rw was declared with the macro

DECLARE_PER_CPU_PAGE_ALIGNED

but then defined with the macro

DEFINE_PER_CPU_SHARED_ALIGNED

leading to section mismatch warnings, since the two macros place the
variable in different per-CPU subsections. Prefer the matching macro

DEFINE_PER_CPU_PAGE_ALIGNED

so that the definition ends up in the same section as the declaration.

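For reference, the relevant macros look roughly like this in
include/linux/percpu-defs.h (simplified sketch; exact expansions vary
by kernel version and config). The PAGE_ALIGNED variants emit the
symbol into the ..page_aligned per-CPU subsection, while the
SHARED_ALIGNED define uses a different subsection on SMP builds, which
is where the mismatch with the declaration comes from:

	/* Simplified; see include/linux/percpu-defs.h for the real thing. */
	#define DECLARE_PER_CPU_PAGE_ALIGNED(type, name)		\
		DECLARE_PER_CPU_SECTION(type, name, "..page_aligned")	\
		__aligned(PAGE_SIZE)

	#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)		\
		DEFINE_PER_CPU_SECTION(type, name, "..page_aligned")	\
		__aligned(PAGE_SIZE)

	#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)		\
		DEFINE_PER_CPU_SECTION(type, name,			\
				       PER_CPU_SHARED_ALIGNED_SECTION)	\
		____cacheline_aligned_in_smp
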
Suggested-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulni...@google.com>
---
 arch/x86/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index aed9d94bd46f..832a6acd730f 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -47,7 +47,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
        .x86_tss = {
                /*
                 * .sp0 is only used when entering ring 0 from a lower
-- 
2.16.0.rc0.223.g4a4ac83678-goog
