[PATCH v4 16/16] powerpc/traps: Machine check fix RI=0 recoverability check

2020-05-07 Thread Nicholas Piggin
The MSR[RI]=0 recoverability check should be in the recovered machine
check case. Without this, a machine check that hits in an MSR[RI]=0 region
that has, for example, live SRRs will cause the interrupted context to
resume with corrupted registers and crash unpredictably.

This does not affect 64s at the moment, because it does its own early
handling with an RI check, but it may affect 32s.
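
For context, with this change the tail of machine_check_exception() ends up
looking roughly like this (a simplified sketch of the surrounding code in
arch/powerpc/kernel/traps.c, not a literal quote of the file):

	if (recover > 0)
		goto bail;		/* recovered: resume the interrupted context */

	if (debugger_fault_handler(regs))
		goto bail;

	if (check_io_access(regs))
		goto bail;

	die("Machine check", regs, SIGBUS);

	return;

bail:
	/* Resuming is only safe if the interrupted context is recoverable */
	if (!(regs->msr & MSR_RI))
		die("Unrecoverable Machine check", regs, SIGBUS);

	if (!nested)
		nmi_exit();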

Cc: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/traps.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 477befcda8d3..759d8dbf867b 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -873,13 +873,13 @@ void machine_check_exception(struct pt_regs *regs)
 
die("Machine check", regs, SIGBUS);
 
+   return;
+
+bail:
/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
die("Unrecoverable Machine check", regs, SIGBUS);
 
-   return;
-
-bail:
if (!nested)
nmi_exit();
 }
-- 
2.23.0



[PATCH v4 15/16] powerpc/traps: make unrecoverable NMIs die instead of panic

2020-05-07 Thread Nicholas Piggin
System Reset and Machine Check interrupts that are not recoverable due
to being nested or interrupting when RI=0 currently panic. This is not
necessary: the kernel can often just kill the current context and recover.

Reviewed-by: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/traps.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index ee209c5a1ad7..477befcda8d3 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -513,11 +513,11 @@ void system_reset_exception(struct pt_regs *regs)
 #ifdef CONFIG_PPC_BOOK3S_64
BUG_ON(get_paca()->in_nmi == 0);
if (get_paca()->in_nmi > 1)
-   nmi_panic(regs, "Unrecoverable nested System Reset");
+   die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
-   nmi_panic(regs, "Unrecoverable System Reset");
+   die("Unrecoverable System Reset", regs, SIGABRT);
 
if (saved_hsrrs) {
mtspr(SPRN_HSRR0, hsrr0);
@@ -875,7 +875,7 @@ void machine_check_exception(struct pt_regs *regs)
 
/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
-   nmi_panic(regs, "Unrecoverable Machine check");
+   die("Unrecoverable Machine check", regs, SIGBUS);
 
return;
 
-- 
2.23.0



[PATCH v4 14/16] powerpc/traps: system reset do not trace

2020-05-07 Thread Nicholas Piggin
Similarly to the previous patch, do not trace the system reset handler.
This code is used when there is a crash or hang, and tracing disturbs the
system further and has been known to crash in the crash handling path.

Acked-by: Naveen N. Rao 
Reviewed-by: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/traps.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 9f6852322e59..ee209c5a1ad7 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -443,6 +443,9 @@ void system_reset_exception(struct pt_regs *regs)
unsigned long hsrr0, hsrr1;
bool nested = in_nmi();
bool saved_hsrrs = false;
+   u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
+
+   this_cpu_set_ftrace_enabled(0);
 
/*
 * Avoid crashes in case of nested NMI exceptions. Recoverability
@@ -524,6 +527,8 @@ void system_reset_exception(struct pt_regs *regs)
if (!nested)
nmi_exit();
 
+   this_cpu_set_ftrace_enabled(ftrace_enabled);
+
/* What should we do here? We could issue a shutdown or hard reset. */
 }
 
-- 
2.23.0



[PATCH v4 13/16] powerpc/64s: machine check do not trace real-mode handler

2020-05-07 Thread Nicholas Piggin
Rather than add notrace annotations throughout a significant part of the
machine check code across kernel/, pseries/ and powernv/, which can
easily be broken and is infrequently tested, use paca->ftrace_enabled
to blanket-disable tracing of the real-mode non-maskable handler.

Acked-by: Naveen N. Rao 
Reviewed-by: Christophe Leroy 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/mce.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index be7e3f92a7b5..fd90c0eda229 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -571,10 +572,14 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
  *
  * regs->nip and regs->msr contains srr0 and ssr1.
  */
-long machine_check_early(struct pt_regs *regs)
+long notrace machine_check_early(struct pt_regs *regs)
 {
long handled = 0;
bool nested = in_nmi();
+   u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
+
+   this_cpu_set_ftrace_enabled(0);
+
if (!nested)
nmi_enter();
 
@@ -589,6 +594,8 @@ long machine_check_early(struct pt_regs *regs)
if (!nested)
nmi_exit();
 
+   this_cpu_set_ftrace_enabled(ftrace_enabled);
+
return handled;
 }
 
-- 
2.23.0



[PATCH v4 12/16] powerpc: implement ftrace_enabled helper

2020-05-07 Thread Nicholas Piggin
Signed-off-by: Nicholas Piggin 
Reviewed-by: Christophe Leroy 
---
 arch/powerpc/include/asm/ftrace.h | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/include/asm/ftrace.h 
b/arch/powerpc/include/asm/ftrace.h
index f54a08a2cd70..bc76970b6ee5 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -108,9 +108,23 @@ static inline void this_cpu_enable_ftrace(void)
 {
get_paca()->ftrace_enabled = 1;
 }
+
+/* Disable ftrace on this CPU if possible (may not be implemented) */
+static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled)
+{
+   get_paca()->ftrace_enabled = ftrace_enabled;
+}
+
+static inline u8 this_cpu_get_ftrace_enabled(void)
+{
+   return get_paca()->ftrace_enabled;
+}
+
 #else /* CONFIG_PPC64 */
 static inline void this_cpu_disable_ftrace(void) { }
 static inline void this_cpu_enable_ftrace(void) { }
+static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { }
+static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; }
 #endif /* CONFIG_PPC64 */
 #endif /* !__ASSEMBLY__ */
 
-- 
2.23.0



[PATCH v4 11/16] powerpc/64s: machine check interrupt update NMI accounting

2020-05-07 Thread Nicholas Piggin
machine_check_early() is taken as an NMI, so nmi_enter() is used there.
machine_check_exception() is no longer taken as an NMI (it's invoked
via irq_work when a machine check hits in kernel mode), so remove the
nmi_enter() from that case.

In NMI context, hash faults don't try to refill the hash table, which
can lead to crashes accessing non-pinned kernel pages. System reset
still has this potential problem.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/mce.c |  7 +++
 arch/powerpc/kernel/process.c |  2 +-
 arch/powerpc/kernel/traps.c   | 14 +-
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index 8077b5fb18a7..be7e3f92a7b5 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -574,6 +574,9 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
 long machine_check_early(struct pt_regs *regs)
 {
long handled = 0;
+   bool nested = in_nmi();
+   if (!nested)
+   nmi_enter();
 
hv_nmi_check_nonrecoverable(regs);
 
@@ -582,6 +585,10 @@ long machine_check_early(struct pt_regs *regs)
 */
if (ppc_md.machine_check_early)
handled = ppc_md.machine_check_early(regs);
+
+   if (!nested)
+   nmi_exit();
+
return handled;
 }
 
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 9c21288f8645..44410dd3029f 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1421,7 +1421,7 @@ void show_regs(struct pt_regs * regs)
pr_cont("DAR: "REG" DSISR: %08lx ", regs->dar, regs->dsisr);
 #endif
 #ifdef CONFIG_PPC64
-   pr_cont("IRQMASK: %lx ", regs->softe);
+   pr_cont("IRQMASK: %lx IN_NMI:%d IN_MCE:%d", regs->softe, (int)get_paca()->in_nmi, (int)get_paca()->in_mce);
 #endif
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
if (MSR_TM_ACTIVE(regs->msr))
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3fca22276bb1..9f6852322e59 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -823,7 +823,19 @@ int machine_check_generic(struct pt_regs *regs)
 void machine_check_exception(struct pt_regs *regs)
 {
int recover = 0;
-   bool nested = in_nmi();
+   bool nested;
+
+   /*
+* BOOK3S_64 does not call this handler as a non-maskable interrupt
+* (it uses its own early real-mode handler to handle the MCE proper
+* and then raises irq_work to call this handler when interrupts are
+* enabled). Set nested = true for this case, which just makes it avoid
+* the nmi_enter/exit.
+*/
+   if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) || in_nmi())
+   nested = true;
+   else
+   nested = false;
if (!nested)
nmi_enter();
 
-- 
2.23.0



[PATCH v4 10/16] powerpc/pseries: machine check use rtas_call_unlocked with args on stack

2020-05-07 Thread Nicholas Piggin
With the previous patch, machine checks can use rtas_call_unlocked(),
which avoids the rtas spinlock that would deadlock if a machine check
hit while an rtas call was already in progress.

This also avoids the complex rtas error logging, which makes more rtas
calls and uses kmalloc (which can return memory beyond the RMA, which
would also crash).

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/pseries/ras.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/ras.c 
b/arch/powerpc/platforms/pseries/ras.c
index b2adba59f0ff..ce1665e58d9b 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -468,7 +468,15 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct 
pt_regs *regs)
  */
 static void fwnmi_release_errinfo(void)
 {
-   int ret = rtas_call(ibm_nmi_interlock_token, 0, 1, NULL);
+   struct rtas_args rtas_args;
+   int ret;
+
+   /*
+* On pseries, the machine check stack is limited to under 4GB, so
+* args can be on-stack.
+*/
+   rtas_call_unlocked(&rtas_args, ibm_nmi_interlock_token, 0, 1, NULL);
+   ret = be32_to_cpu(rtas_args.rets[0]);
if (ret != 0)
printk(KERN_ERR "FWNMI: nmi-interlock failed: %d\n", ret);
 }
-- 
2.23.0



[PATCH v4 09/16] powerpc/pseries: limit machine check stack to 4GB

2020-05-07 Thread Nicholas Piggin
This allows rtas_args to be put on the machine check stack, which
avoids a lot of complications with re-entrancy deadlocks.

Reviewed-by: Christophe Leroy 
Reviewed-by: Mahesh Salgaonkar 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/setup_64.c | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 8105010b0e76..bb47555d48a2 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -711,7 +711,7 @@ void __init exc_lvl_early_init(void)
  */
 void __init emergency_stack_init(void)
 {
-   u64 limit;
+   u64 limit, mce_limit;
unsigned int i;
 
/*
@@ -728,7 +728,16 @@ void __init emergency_stack_init(void)
 * initialized in kernel/irq.c. These are initialized here in order
 * to have emergency stacks available as early as possible.
 */
-   limit = min(ppc64_bolted_size(), ppc64_rma_size);
+   limit = mce_limit = min(ppc64_bolted_size(), ppc64_rma_size);
+
+   /*
+* Machine check on pseries calls rtas, but can't use the static
+* rtas_args due to a machine check hitting while the lock is held.
+* rtas args have to be under 4GB, so the machine check stack is
+* limited to 4GB so args can be put on stack.
+*/
+   if (firmware_has_feature(FW_FEATURE_LPAR) && mce_limit > SZ_4G)
+   mce_limit = SZ_4G;
 
for_each_possible_cpu(i) {
paca_ptrs[i]->emergency_sp = alloc_stack(limit, i) + 
THREAD_SIZE;
@@ -738,7 +747,7 @@ void __init emergency_stack_init(void)
paca_ptrs[i]->nmi_emergency_sp = alloc_stack(limit, i) + 
THREAD_SIZE;
 
/* emergency stack for machine check exception handling. */
-   paca_ptrs[i]->mc_emergency_sp = alloc_stack(limit, i) + 
THREAD_SIZE;
+   paca_ptrs[i]->mc_emergency_sp = alloc_stack(mce_limit, i) + 
THREAD_SIZE;
 #endif
}
 }
-- 
2.23.0



[PATCH v4 08/16] powerpc/pseries/ras: fwnmi sreset should not interlock

2020-05-07 Thread Nicholas Piggin
PAPR does not specify that fwnmi sreset should be interlocked, and
PowerVM (and therefore now QEMU) do not require it.

These "ibm,nmi-interlock" calls are ignored by firmware, but there
is a possibility that the sreset could have interrupted a machine
check and released the machine check's interlock too early,
corrupting it if another machine check came in.

This is an extremely rare case, but it should be fixed for clarity
and to reduce the code executed in the sreset path. Firmware also
does not provide error information for the sreset case to look at, so
remove that comment.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/pseries/ras.c | 46 +++-
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/ras.c 
b/arch/powerpc/platforms/pseries/ras.c
index fe14186a8cef..b2adba59f0ff 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -406,6 +406,20 @@ static inline struct rtas_error_log *fwnmi_get_errlog(void)
return (struct rtas_error_log *)local_paca->mce_data_buf;
 }
 
+static unsigned long *fwnmi_get_savep(struct pt_regs *regs)
+{
+   unsigned long savep_ra;
+
+   /* Mask top two bits */
+   savep_ra = regs->gpr[3] & ~(0x3UL << 62);
+   if (!VALID_FWNMI_BUFFER(savep_ra)) {
+   printk(KERN_ERR "FWNMI: corrupt r3 0x%016lx\n", regs->gpr[3]);
+   return NULL;
+   }
+
+   return __va(savep_ra);
+}
+
 /*
  * Get the error information for errors coming through the
  * FWNMI vectors.  The pt_regs' r3 will be updated to reflect
@@ -423,20 +437,14 @@ static inline struct rtas_error_log 
*fwnmi_get_errlog(void)
  */
 static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
 {
-   unsigned long savep_ra;
unsigned long *savep;
struct rtas_error_log *h;
 
-   /* Mask top two bits */
-   savep_ra = regs->gpr[3] & ~(0x3UL << 62);
-
-   if (!VALID_FWNMI_BUFFER(savep_ra)) {
-   printk(KERN_ERR "FWNMI: corrupt r3 0x%016lx\n", regs->gpr[3]);
+   savep = fwnmi_get_savep(regs);
+   if (!savep)
return NULL;
-   }
 
-   savep = __va(savep_ra);
-   regs->gpr[3] = be64_to_cpu(savep[0]);   /* restore original r3 */
+   regs->gpr[3] = be64_to_cpu(savep[0]); /* restore original r3 */
 
h = (struct rtas_error_log *)&savep[1];
/* Use the per cpu buffer from paca to store rtas error log */
@@ -483,11 +491,21 @@ int pSeries_system_reset_exception(struct pt_regs *regs)
 #endif
 
if (fwnmi_active) {
-   struct rtas_error_log *errhdr = fwnmi_get_errinfo(regs);
-   if (errhdr) {
-   /* XXX Should look at FWNMI information */
-   }
-   fwnmi_release_errinfo();
+   unsigned long *savep;
+
+   /*
+* Firmware (PowerVM and KVM) saves r3 to a save area like
+* machine check, which is not exactly what PAPR (2.9)
+* suggests but there is no way to detect otherwise, so this
+* is the interface now.
+*
+* System resets do not save any error log or require an
+* "ibm,nmi-interlock" rtas call to release.
+*/
+
+   savep = fwnmi_get_savep(regs);
+   if (savep)
+   regs->gpr[3] = be64_to_cpu(savep[0]); /* restore original r3 */
}
 
if (smp_handle_nmi_ipi(regs))
-- 
2.23.0



[PATCH v4 07/16] powerpc/pseries/ras: fwnmi avoid modifying r3 in error case

2020-05-07 Thread Nicholas Piggin
If there is some error with the fwnmi save area, r3 has already been
modified, which doesn't help with debugging.

Only update r3 when restoring the saved value.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/pseries/ras.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/ras.c 
b/arch/powerpc/platforms/pseries/ras.c
index a5bd0f747bb1..fe14186a8cef 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -423,18 +423,19 @@ static inline struct rtas_error_log 
*fwnmi_get_errlog(void)
  */
 static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
 {
+   unsigned long savep_ra;
unsigned long *savep;
struct rtas_error_log *h;
 
/* Mask top two bits */
-   regs->gpr[3] &= ~(0x3UL << 62);
+   savep_ra = regs->gpr[3] & ~(0x3UL << 62);
 
-   if (!VALID_FWNMI_BUFFER(regs->gpr[3])) {
+   if (!VALID_FWNMI_BUFFER(savep_ra)) {
printk(KERN_ERR "FWNMI: corrupt r3 0x%016lx\n", regs->gpr[3]);
return NULL;
}
 
-   savep = __va(regs->gpr[3]);
+   savep = __va(savep_ra);
regs->gpr[3] = be64_to_cpu(savep[0]);   /* restore original r3 */
 
h = (struct rtas_error_log *)&savep[1];
-- 
2.23.0



[PATCH v4 06/16] powerpc/pseries/ras: FWNMI_VALID off by one

2020-05-07 Thread Nicholas Piggin
This was discovered while developing qemu fwnmi sreset support. This
off-by-one bug means the last 16 bytes of the rtas area can not be used
for a 16 byte save area: for example, a save area starting at 0x7ff0,
which ends exactly at 0x8000, was wrongly rejected.

It's not a serious bug, and the QEMU implementation has to retain a
workaround for old kernels, but it's good to tighten the check.

Acked-by: Mahesh Salgaonkar 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/pseries/ras.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/ras.c 
b/arch/powerpc/platforms/pseries/ras.c
index ac92f8687ea3..a5bd0f747bb1 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -395,10 +395,11 @@ static irqreturn_t ras_error_interrupt(int irq, void 
*dev_id)
 /*
  * Some versions of FWNMI place the buffer inside the 4kB page starting at
  * 0x7000. Other versions place it inside the rtas buffer. We check both.
+ * Minimum size of the buffer is 16 bytes.
  */
 #define VALID_FWNMI_BUFFER(A) \
-   ((((A) >= 0x7000) && ((A) < 0x7ff0)) || \
-   (((A) >= rtas.base) && ((A) < (rtas.base + rtas.size - 16))))
+   ((((A) >= 0x7000) && ((A) <= 0x8000 - 16)) || \
+   (((A) >= rtas.base) && ((A) <= (rtas.base + rtas.size - 16))))
 
 static inline struct rtas_error_log *fwnmi_get_errlog(void)
 {
-- 
2.23.0



[PATCH v4 05/16] powerpc/pseries/ras: avoid calling rtas_token in NMI paths

2020-05-07 Thread Nicholas Piggin
In the interest of reducing code and possible failures in the
machine check and system reset paths, grab the "ibm,nmi-interlock"
token at init time.

Reviewed-by: Christophe Leroy 
Reviewed-by: Mahesh Salgaonkar 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/firmware.h|  1 +
 arch/powerpc/platforms/pseries/ras.c   |  2 +-
 arch/powerpc/platforms/pseries/setup.c | 14 ++
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/firmware.h 
b/arch/powerpc/include/asm/firmware.h
index ca33f4ef6cb4..6003c2e533a0 100644
--- a/arch/powerpc/include/asm/firmware.h
+++ b/arch/powerpc/include/asm/firmware.h
@@ -128,6 +128,7 @@ extern void machine_check_fwnmi(void);
 
 /* This is true if we are using the firmware NMI handler (typically LPAR) */
 extern int fwnmi_active;
+extern int ibm_nmi_interlock_token;
 
 extern unsigned int __start___fw_ftr_fixup, __stop___fw_ftr_fixup;
 
diff --git a/arch/powerpc/platforms/pseries/ras.c 
b/arch/powerpc/platforms/pseries/ras.c
index 1d1da639b8b7..ac92f8687ea3 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -458,7 +458,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct 
pt_regs *regs)
  */
 static void fwnmi_release_errinfo(void)
 {
-   int ret = rtas_call(rtas_token("ibm,nmi-interlock"), 0, 1, NULL);
+   int ret = rtas_call(ibm_nmi_interlock_token, 0, 1, NULL);
if (ret != 0)
printk(KERN_ERR "FWNMI: nmi-interlock failed: %d\n", ret);
 }
diff --git a/arch/powerpc/platforms/pseries/setup.c 
b/arch/powerpc/platforms/pseries/setup.c
index 0c8421dd01ab..dd234095ae4f 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -83,6 +83,7 @@ unsigned long CMO_PageSize = (ASM_CONST(1) << 
IOMMU_PAGE_SHIFT_4K);
 EXPORT_SYMBOL(CMO_PageSize);
 
 int fwnmi_active;  /* TRUE if an FWNMI handler is present */
+int ibm_nmi_interlock_token;
 
 static void pSeries_show_cpuinfo(struct seq_file *m)
 {
@@ -113,9 +114,14 @@ static void __init fwnmi_init(void)
struct slb_entry *slb_ptr;
size_t size;
 #endif
+   int ibm_nmi_register_token;
 
-   int ibm_nmi_register = rtas_token("ibm,nmi-register");
-   if (ibm_nmi_register == RTAS_UNKNOWN_SERVICE)
+   ibm_nmi_register_token = rtas_token("ibm,nmi-register");
+   if (ibm_nmi_register_token == RTAS_UNKNOWN_SERVICE)
+   return;
+
+   ibm_nmi_interlock_token = rtas_token("ibm,nmi-interlock");
+   if (WARN_ON(ibm_nmi_interlock_token == RTAS_UNKNOWN_SERVICE))
return;
 
/* If the kernel's not linked at zero we point the firmware at low
@@ -123,8 +129,8 @@ static void __init fwnmi_init(void)
system_reset_addr  = __pa(system_reset_fwnmi) - PHYSICAL_START;
machine_check_addr = __pa(machine_check_fwnmi) - PHYSICAL_START;
 
-   if (0 == rtas_call(ibm_nmi_register, 2, 1, NULL, system_reset_addr,
-   machine_check_addr))
+   if (0 == rtas_call(ibm_nmi_register_token, 2, 1, NULL,
+  system_reset_addr, machine_check_addr))
fwnmi_active = 1;
 
/*
-- 
2.23.0



[PATCH v4 04/16] powerpc/64s/exceptions: machine check reconcile irq state

2020-05-07 Thread Nicholas Piggin
The pseries fwnmi machine check code trips the soft-irq state warnings in
rtas_call (after the previous patch removed rtas_token from this call
path). Rather than play whack-a-mole with these and forever have fragile
code, it seems better to have the early machine check handler perform the
same kind of irq state reconcile as the other NMI interrupts.
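
In C terms, the reconcile added below amounts to roughly the following
around the early handler call (an illustrative sketch only; the real
change is in assembly, and the field names are taken from struct
paca_struct):

	u8 irq_happened = local_paca->irq_happened;

	local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;

	handled = machine_check_early(regs);

	/* restore the interrupted context's soft-mask state */
	local_paca->irq_happened = irq_happened;
	local_paca->irq_soft_mask = regs->softe;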

  WARNING: CPU: 0 PID: 493 at arch/powerpc/kernel/irq.c:343
  CPU: 0 PID: 493 Comm: a Tainted: GW
  NIP:  c001ed2c LR: c0042c40 CTR: 
  REGS: c001fffd38b0 TRAP: 0700   Tainted: GW
  MSR:  80021003   CR: 28000488  XER: 
  CFAR: c001ec90 IRQMASK: 0
  GPR00: c0043820 c001fffd3b40 c12ba300 
  GPR04: 48000488   deadbeef
  GPR08: 0080   1001
  GPR12:  c14a  
  GPR16:    
  GPR20:    
  GPR24:    
  GPR28:  0001 c1360810 
  NIP [c001ed2c] arch_local_irq_restore.part.0+0xac/0x100
  LR [c0042c40] unlock_rtas+0x30/0x90
  Call Trace:
  [c001fffd3b40] [c1360810] 0xc1360810 (unreliable)
  [c001fffd3b60] [c0043820] rtas_call+0x1c0/0x280
  [c001fffd3bb0] [c00dc328] fwnmi_release_errinfo+0x38/0x70
  [c001fffd3c10] [c00dcd8c] 
pseries_machine_check_realmode+0x1dc/0x540
  [c001fffd3cd0] [c003fe04] machine_check_early+0x54/0x70
  [c001fffd3d00] [c0008384] machine_check_early_common+0x134/0x1f0
  --- interrupt: 200 at 0x13f1307c8
  LR = 0x7fff888b8528
  Instruction dump:
  6000 7d2000a6 71298000 41820068 3922 7d210164 4b9c 6000
  6000 7d2000a6 71298000 4c820020 <0fe0> 4e800020 6000 6000

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index a42b73efb1a9..072772803b7c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1116,11 +1116,30 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
li  r10,MSR_RI
mtmsrd  r10,1
 
+   /*
+* Set IRQS_ALL_DISABLED and save PACAIRQHAPPENED (see
+* system_reset_common)
+*/
+   li  r10,IRQS_ALL_DISABLED
+   stb r10,PACAIRQSOFTMASK(r13)
+   lbz r10,PACAIRQHAPPENED(r13)
+   std r10,RESULT(r1)
+   ori r10,r10,PACA_IRQ_HARD_DIS
+   stb r10,PACAIRQHAPPENED(r13)
+
addir3,r1,STACK_FRAME_OVERHEAD
bl  machine_check_early
std r3,RESULT(r1)   /* Save result */
ld  r12,_MSR(r1)
 
+   /*
+* Restore soft mask settings.
+*/
+   ld  r10,RESULT(r1)
+   stb r10,PACAIRQHAPPENED(r13)
+   ld  r10,SOFTE(r1)
+   stb r10,PACAIRQSOFTMASK(r13)
+
 #ifdef CONFIG_PPC_P7_NAP
/*
 * Check if thread was in power saving mode. We come here when any
-- 
2.23.0



[PATCH v4 03/16] powerpc/64s/exceptions: Change irq reconcile for NMIs from reusing _DAR to RESULT

2020-05-07 Thread Nicholas Piggin
A spare interrupt stack slot is needed to save irq state when
reconciling NMIs (sreset and decrementer soft-nmi). _DAR is used
for this, but we want to reconcile machine checks as well, which
do use _DAR. Switch to using RESULT instead, as it's used by
system calls.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 3322000316ab..a42b73efb1a9 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -939,13 +939,13 @@ EXC_COMMON_BEGIN(system_reset_common)
 * the right thing. We do not want to reconcile because that goes
 * through irq tracing which we don't want in NMI.
 *
-* Save PACAIRQHAPPENED to _DAR (otherwise unused), and set HARD_DIS
+* Save PACAIRQHAPPENED to RESULT (otherwise unused), and set HARD_DIS
 * as we are running with MSR[EE]=0.
 */
li  r10,IRQS_ALL_DISABLED
stb r10,PACAIRQSOFTMASK(r13)
lbz r10,PACAIRQHAPPENED(r13)
-   std r10,_DAR(r1)
+   std r10,RESULT(r1)
ori r10,r10,PACA_IRQ_HARD_DIS
stb r10,PACAIRQHAPPENED(r13)
 
@@ -966,7 +966,7 @@ EXC_COMMON_BEGIN(system_reset_common)
/*
 * Restore soft mask settings.
 */
-   ld  r10,_DAR(r1)
+   ld  r10,RESULT(r1)
stb r10,PACAIRQHAPPENED(r13)
ld  r10,SOFTE(r1)
stb r10,PACAIRQSOFTMASK(r13)
@@ -2743,7 +2743,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
li  r10,IRQS_ALL_DISABLED
stb r10,PACAIRQSOFTMASK(r13)
lbz r10,PACAIRQHAPPENED(r13)
-   std r10,_DAR(r1)
+   std r10,RESULT(r1)
ori r10,r10,PACA_IRQ_HARD_DIS
stb r10,PACAIRQHAPPENED(r13)
 
@@ -2757,7 +2757,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
/*
 * Restore soft mask settings.
 */
-   ld  r10,_DAR(r1)
+   ld  r10,RESULT(r1)
stb r10,PACAIRQHAPPENED(r13)
ld  r10,SOFTE(r1)
stb r10,PACAIRQSOFTMASK(r13)
-- 
2.23.0



[PATCH v4 02/16] powerpc/64s/exceptions: Fix in_mce accounting in unrecoverable path

2020-05-07 Thread Nicholas Piggin
Acked-by: Mahesh Salgaonkar 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 4 
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index bbf3109c5cba..3322000316ab 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1267,6 +1267,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
andcr10,r10,r3
mtmsrd  r10
 
+   lhz r12,PACA_IN_MCE(r13)
+   subir12,r12,1
+   sth r12,PACA_IN_MCE(r13)
+
/* Invoke machine_check_exception to print MCE event and panic. */
addir3,r1,STACK_FRAME_OVERHEAD
bl  machine_check_exception
-- 
2.23.0



[PATCH v4 01/16] powerpc/64s/exception: Fix machine check no-loss idle wakeup

2020-05-07 Thread Nicholas Piggin
The architecture allows machine check exceptions to cause idle
wakeups which resume at the 0x200 address and have to return via
the idle wakeup code, but the early machine check handler is run
first.

The case of a no-state-loss sleep is broken because the early
handler uses the non-volatile register r1, which is needed for the
wakeup protocol, but it is not restored.

Fix this by loading r1 from the MCE exception frame before returning
to the idle wakeup code. Also update the comment which has become
stale since the idle rewrite in C.

Fixes: 10d91611f426d ("powerpc/64s: Reimplement book3s idle code in C")
Signed-off-by: Nicholas Piggin 

This crash was found, and the fix confirmed, with a machine check
injection test in the qemu powernv model (which is not upstream in qemu yet).
---
 arch/powerpc/kernel/exceptions-64s.S | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 728ccb0f560c..bbf3109c5cba 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1224,17 +1224,19 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
bl  machine_check_queue_event
 
/*
-* We have not used any non-volatile GPRs here, and as a rule
-* most exception code including machine check does not.
-* Therefore PACA_NAPSTATELOST does not need to be set. Idle
-* wakeup will restore volatile registers.
+* GPR-loss wakeups are relatively straightforward, because the
+* idle sleep code has saved all non-volatile registers on its
+* own stack, and r1 in PACAR1.
 *
-* Load the original SRR1 into r3 for pnv_powersave_wakeup_mce.
+* For no-loss wakeups the r1 and lr registers used by the
+* early machine check handler have to be restored first. r2 is
+* the kernel TOC, so no need to restore it.
 *
 * Then decrement MCE nesting after finishing with the stack.
 */
ld  r3,_MSR(r1)
ld  r4,_LINK(r1)
+   ld  r1,GPR1(r1)
 
lhz r11,PACA_IN_MCE(r13)
subir11,r11,1
@@ -1243,7 +1245,7 @@ EXC_COMMON_BEGIN(machine_check_idle_common)
mtlrr4
rlwinm  r10,r3,47-31,30,31
cmpwi   cr1,r10,2
-   bltlr   cr1 /* no state loss, return to idle caller */
+   bltlr   cr1 /* no state loss, return to idle caller with r3=SRR1 */
b   idle_return_gpr_loss
 #endif
 
-- 
2.23.0



[PATCH v4 00/16] powerpc: machine check and system reset fixes

2020-05-07 Thread Nicholas Piggin
Since v3, I fixed a compile error and returned the generic machine check
exception handler to be an NMI on 32 and 64e, as caught by Christophe's
review.

Also added the last patch, which I just found by looking at the code; a
review for 32s would be good.

Thanks,
Nick

Nicholas Piggin (16):
  powerpc/64s/exception: Fix machine check no-loss idle wakeup
  powerpc/64s/exceptions: Fix in_mce accounting in unrecoverable path
  powerpc/64s/exceptions: Change irq reconcile for NMIs from reusing
_DAR to RESULT
  powerpc/64s/exceptions: machine check reconcile irq state
  powerpc/pseries/ras: avoid calling rtas_token in NMI paths
  powerpc/pseries/ras: FWNMI_VALID off by one
  powerpc/pseries/ras: fwnmi avoid modifying r3 in error case
  powerpc/pseries/ras: fwnmi sreset should not interlock
  powerpc/pseries: limit machine check stack to 4GB
  powerpc/pseries: machine check use rtas_call_unlocked with args on
stack
  powerpc/64s: machine check interrupt update NMI accounting
  powerpc: implement ftrace_enabled helper
  powerpc/64s: machine check do not trace real-mode handler
  powerpc/traps: system reset do not trace
  powerpc/traps: make unrecoverable NMIs die instead of panic
  powerpc/traps: Machine check fix RI=0 recoverability check

 arch/powerpc/include/asm/firmware.h|  1 +
 arch/powerpc/include/asm/ftrace.h  | 14 ++
 arch/powerpc/kernel/exceptions-64s.S   | 47 +++-
 arch/powerpc/kernel/mce.c  | 16 ++-
 arch/powerpc/kernel/process.c  |  2 +-
 arch/powerpc/kernel/setup_64.c | 15 +--
 arch/powerpc/kernel/traps.c| 31 ++---
 arch/powerpc/platforms/pseries/ras.c   | 60 +++---
 arch/powerpc/platforms/pseries/setup.c | 14 --
 9 files changed, 157 insertions(+), 43 deletions(-)

-- 
2.23.0



Re: Fwd: [CRON] Broken: ClangBuiltLinux/continuous-integration#1432 (master - 0aceafc)

2020-05-07 Thread Michael Ellerman
Nick Desaulniers  writes:
> Looks like ppc64le powernv_defconfig is suddenly failing the locking
> torture tests, then locks up?
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/jobs/329211572#L3111-L3167
> Any recent changes related here in -next?  I believe this is the first
> failure, so I'll report back if we see this again.

Thanks for the report.

There's nothing newly in next-20200507 that seems related.

Odd that it just showed up.

cheers


> -- Forwarded message -
> From: Travis CI 
> Date: Thu, May 7, 2020 at 9:40 AM
> Subject: [CRON] Broken: ClangBuiltLinux/continuous-integration#1432 (master
> - 0aceafc)
> To: , 
>
>
> ClangBuiltLinux / continuous-integration
> <https://travis-ci.com/github/ClangBuiltLinux/continuous-integration?utm_medium=notification_source=email>
>
> Branch: master
> <https://github.com/ClangBuiltLinux/continuous-integration/tree/master>
> Build #1432 was broken
> <https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/builds/164415390?utm_medium=notification_source=email>
> Duration: 7 hrs, 0 mins, and 54 secs
>
> Nick Desaulniers
> 0aceafc CHANGESET
> <https://github.com/ClangBuiltLinux/continuous-integration/compare/877d002bdcfe6bc5cb0255c3c39192e8175e2c19...0aceafcfcca7c4a095957efae0939a612d755077>
>
> Merge pull request #182 from ClangBuiltLinux/i386
>
> i386
>
>
> -- 
> Thanks,
> ~Nick Desaulniers


[PATCH V3 3/3] mm/hugetlb: Define a generic fallback for arch_clear_hugepage_flags()

2020-05-07 Thread Anshuman Khandual
There are multiple similar definitions for arch_clear_hugepage_flags() on
various platforms. Let's just add a generic fallback definition for
platforms that do not override it. This helps reduce code duplication.
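
For reference, the generic side of this (the include/linux/hugetlb.h hunk,
which is truncated in the diff below) would look something like the
following sketch, using the "#ifndef func" convention adopted in v2:

#ifndef arch_clear_hugepage_flags
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
#endif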

Cc: Russell King 
Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Tony Luck 
Cc: Fenghua Yu 
Cc: Thomas Bogendoerfer 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Michael Ellerman 
Cc: Paul Walmsley 
Cc: Palmer Dabbelt 
Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: "H. Peter Anvin" 
Cc: Mike Kravetz 
Cc: Andrew Morton 
Cc: x...@kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-a...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/arm/include/asm/hugetlb.h | 1 +
 arch/arm64/include/asm/hugetlb.h   | 1 +
 arch/ia64/include/asm/hugetlb.h| 4 
 arch/mips/include/asm/hugetlb.h| 4 
 arch/parisc/include/asm/hugetlb.h  | 4 
 arch/powerpc/include/asm/hugetlb.h | 4 
 arch/riscv/include/asm/hugetlb.h   | 4 
 arch/s390/include/asm/hugetlb.h| 1 +
 arch/sh/include/asm/hugetlb.h  | 1 +
 arch/sparc/include/asm/hugetlb.h   | 4 
 arch/x86/include/asm/hugetlb.h | 4 
 include/linux/hugetlb.h| 5 +
 12 files changed, 9 insertions(+), 28 deletions(-)

diff --git a/arch/arm/include/asm/hugetlb.h b/arch/arm/include/asm/hugetlb.h
index 9ecd516d1ff7..d02d6ca88e92 100644
--- a/arch/arm/include/asm/hugetlb.h
+++ b/arch/arm/include/asm/hugetlb.h
@@ -18,5 +18,6 @@ static inline void arch_clear_hugepage_flags(struct page 
*page)
 {
clear_bit(PG_dcache_clean, &page->flags);
 }
+#define arch_clear_hugepage_flags arch_clear_hugepage_flags
 
 #endif /* _ASM_ARM_HUGETLB_H */
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 8f58e052697a..94ba0c5bced2 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -21,6 +21,7 @@ static inline void arch_clear_hugepage_flags(struct page 
*page)
 {
clear_bit(PG_dcache_clean, &page->flags);
 }
+#define arch_clear_hugepage_flags arch_clear_hugepage_flags
 
 extern pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
struct page *page, int writable);
diff --git a/arch/ia64/include/asm/hugetlb.h b/arch/ia64/include/asm/hugetlb.h
index 6ef50b9a4bdf..7e46ebde8c0c 100644
--- a/arch/ia64/include/asm/hugetlb.h
+++ b/arch/ia64/include/asm/hugetlb.h
@@ -28,10 +28,6 @@ static inline void huge_ptep_clear_flush(struct 
vm_area_struct *vma,
 {
 }
 
-static inline void arch_clear_hugepage_flags(struct page *page)
-{
-}
-
 #include 
 
 #endif /* _ASM_IA64_HUGETLB_H */
diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index 8b201e281f67..10e3be870df7 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -75,10 +75,6 @@ static inline int huge_ptep_set_access_flags(struct 
vm_area_struct *vma,
return changed;
 }
 
-static inline void arch_clear_hugepage_flags(struct page *page)
-{
-}
-
 #include 
 
 #endif /* __ASM_HUGETLB_H */
diff --git a/arch/parisc/include/asm/hugetlb.h 
b/arch/parisc/include/asm/hugetlb.h
index 411d9d867baa..a69cf9efb0c1 100644
--- a/arch/parisc/include/asm/hugetlb.h
+++ b/arch/parisc/include/asm/hugetlb.h
@@ -42,10 +42,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 unsigned long addr, pte_t *ptep,
 pte_t pte, int dirty);
 
-static inline void arch_clear_hugepage_flags(struct page *page)
-{
-}
-
 #include 
 
 #endif /* _ASM_PARISC64_HUGETLB_H */
diff --git a/arch/powerpc/include/asm/hugetlb.h 
b/arch/powerpc/include/asm/hugetlb.h
index b167c869d72d..e6dfa63da552 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -61,10 +61,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
   unsigned long addr, pte_t *ptep,
   pte_t pte, int dirty);
 
-static inline void arch_clear_hugepage_flags(struct page *page)
-{
-}
-
 #include 
 
 #else /* ! CONFIG_HUGETLB_PAGE */
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 866f6ae6467c..a5c2ca1d1cd8 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -5,8 +5,4 @@
 #include 
 #include 
 
-static inline void arch_clear_hugepage_flags(struct page *page)
-{
-}
-
 #endif /* _ASM_RISCV_HUGETLB_H */
diff --git 

[PATCH V3 2/3] mm/hugetlb: Define a generic fallback for is_hugepage_only_range()

2020-05-07 Thread Anshuman Khandual
There are multiple similar definitions for is_hugepage_only_range() on
various platforms. Let's just add a generic fallback definition for
platforms that do not override it. This helps reduce code duplication.
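
For reference, the generic fallback added to include/linux/hugetlb.h (not
visible in the diff excerpt below) would be along these lines, following
the same "#ifndef func" convention (sketch only):

#ifndef is_hugepage_only_range
static inline int is_hugepage_only_range(struct mm_struct *mm,
					 unsigned long addr, unsigned long len)
{
	return 0;
}
#define is_hugepage_only_range is_hugepage_only_range
#endif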

Cc: Russell King 
Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Tony Luck 
Cc: Fenghua Yu 
Cc: Thomas Bogendoerfer 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Michael Ellerman 
Cc: Paul Walmsley 
Cc: Palmer Dabbelt 
Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: "H. Peter Anvin" 
Cc: Mike Kravetz 
Cc: Andrew Morton 
Cc: x...@kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-a...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/arm/include/asm/hugetlb.h | 6 --
 arch/arm64/include/asm/hugetlb.h   | 6 --
 arch/ia64/include/asm/hugetlb.h| 1 +
 arch/mips/include/asm/hugetlb.h| 7 ---
 arch/parisc/include/asm/hugetlb.h  | 6 --
 arch/powerpc/include/asm/hugetlb.h | 1 +
 arch/riscv/include/asm/hugetlb.h   | 6 --
 arch/s390/include/asm/hugetlb.h| 7 ---
 arch/sh/include/asm/hugetlb.h  | 6 --
 arch/sparc/include/asm/hugetlb.h   | 6 --
 arch/x86/include/asm/hugetlb.h | 6 --
 include/linux/hugetlb.h| 9 +
 12 files changed, 11 insertions(+), 56 deletions(-)

diff --git a/arch/arm/include/asm/hugetlb.h b/arch/arm/include/asm/hugetlb.h
index 318dcf5921ab..9ecd516d1ff7 100644
--- a/arch/arm/include/asm/hugetlb.h
+++ b/arch/arm/include/asm/hugetlb.h
@@ -14,12 +14,6 @@
 #include 
 #include 
 
-static inline int is_hugepage_only_range(struct mm_struct *mm,
-unsigned long addr, unsigned long len)
-{
-   return 0;
-}
-
 static inline void arch_clear_hugepage_flags(struct page *page)
 {
clear_bit(PG_dcache_clean, &page->flags);
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index b88878ddc88b..8f58e052697a 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -17,12 +17,6 @@
 extern bool arch_hugetlb_migration_supported(struct hstate *h);
 #endif
 
-static inline int is_hugepage_only_range(struct mm_struct *mm,
-unsigned long addr, unsigned long len)
-{
-   return 0;
-}
-
 static inline void arch_clear_hugepage_flags(struct page *page)
 {
clear_bit(PG_dcache_clean, &page->flags);
diff --git a/arch/ia64/include/asm/hugetlb.h b/arch/ia64/include/asm/hugetlb.h
index 36cc0396b214..6ef50b9a4bdf 100644
--- a/arch/ia64/include/asm/hugetlb.h
+++ b/arch/ia64/include/asm/hugetlb.h
@@ -20,6 +20,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
return (REGION_NUMBER(addr) == RGN_HPAGE ||
REGION_NUMBER((addr)+(len)-1) == RGN_HPAGE);
 }
+#define is_hugepage_only_range is_hugepage_only_range
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index 425bb6fc3bda..8b201e281f67 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -11,13 +11,6 @@
 
 #include 
 
-static inline int is_hugepage_only_range(struct mm_struct *mm,
-unsigned long addr,
-unsigned long len)
-{
-   return 0;
-}
-
 #define __HAVE_ARCH_PREPARE_HUGEPAGE_RANGE
 static inline int prepare_hugepage_range(struct file *file,
 unsigned long addr,
diff --git a/arch/parisc/include/asm/hugetlb.h 
b/arch/parisc/include/asm/hugetlb.h
index 7cb595dcb7d7..411d9d867baa 100644
--- a/arch/parisc/include/asm/hugetlb.h
+++ b/arch/parisc/include/asm/hugetlb.h
@@ -12,12 +12,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long 
addr,
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
  pte_t *ptep);
 
-static inline int is_hugepage_only_range(struct mm_struct *mm,
-unsigned long addr,
-unsigned long len) {
-   return 0;
-}
-
 /*
  * If the arch doesn't supply something else, assume that hugepage
  * size aligned regions are ok without further preparation.
diff --git a/arch/powerpc/include/asm/hugetlb.h 
b/arch/powerpc/include/asm/hugetlb.h
index bd6504c28c2f..b167c869d72d 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ 

[PATCH V3 0/3] mm/hugetlb: Add some new generic fallbacks

2020-05-07 Thread Anshuman Khandual
This series adds the following new generic fallbacks. Before that it drops
__HAVE_ARCH_HUGE_PTEP_GET from arm64 platform.

1. is_hugepage_only_range()
2. arch_clear_hugepage_flags()

This has been boot tested on arm64 and x86 platforms, and build tested on
some more platforms including the changed ones here. This series applies
on v5.7-rc4. After this, arm (32-bit) remains the sole platform defining
its own huge_ptep_get() via __HAVE_ARCH_HUGE_PTEP_GET.

Changes in V3:

- Added READ_ONCE() in generic huge_ptep_get() per Will
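
For reference, the generic huge_ptep_get() referred to here would read
roughly as follows in include/asm-generic/hugetlb.h (a sketch based on
the description above, not quoted from the series):

#ifndef __HAVE_ARCH_HUGE_PTEP_GET
static inline pte_t huge_ptep_get(pte_t *ptep)
{
	return READ_ONCE(*ptep);
}
#endif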

Changes in V2: 
(https://patchwork.kernel.org/project/linux-mm/list/?series=282947)

- Adopted "#ifndef func" method (adding a single symbol to namespace) per Andrew
- Updated the commit messages in [PATCH 2/3] and [PATCH 3/3] as required

Changes in V1: 
(https://patchwork.kernel.org/project/linux-mm/list/?series=270677)

Cc: Russell King 
Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Tony Luck 
Cc: Fenghua Yu 
Cc: Thomas Bogendoerfer 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Michael Ellerman 
Cc: Paul Walmsley 
Cc: Palmer Dabbelt 
Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: "H. Peter Anvin" 
Cc: Mike Kravetz 
Cc: Andrew Morton 
Cc: x...@kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-a...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org

Anshuman Khandual (3):
  arm64/mm: Drop __HAVE_ARCH_HUGE_PTEP_GET
  mm/hugetlb: Define a generic fallback for is_hugepage_only_range()
  mm/hugetlb: Define a generic fallback for arch_clear_hugepage_flags()

 arch/arm/include/asm/hugetlb.h |  7 +--
 arch/arm64/include/asm/hugetlb.h   | 13 +
 arch/ia64/include/asm/hugetlb.h|  5 +
 arch/mips/include/asm/hugetlb.h| 11 ---
 arch/parisc/include/asm/hugetlb.h  | 10 --
 arch/powerpc/include/asm/hugetlb.h |  5 +
 arch/riscv/include/asm/hugetlb.h   | 10 --
 arch/s390/include/asm/hugetlb.h|  8 +---
 arch/sh/include/asm/hugetlb.h  |  7 +--
 arch/sparc/include/asm/hugetlb.h   | 10 --
 arch/x86/include/asm/hugetlb.h | 10 --
 include/asm-generic/hugetlb.h  |  2 +-
 include/linux/hugetlb.h| 14 ++
 13 files changed, 21 insertions(+), 91 deletions(-)

-- 
2.20.1



Re: [PATCH v8 22/30] powerpc: Define new SRR1 bits for a future ISA version

2020-05-07 Thread Jordan Niethe
Hi mpe,
Could you please take the following changes for the commit message.
In the patch title
s/a future ISA version/ISA v3.1/

On Wed, May 6, 2020 at 1:47 PM Jordan Niethe  wrote:
>
> Add the BOUNDARY SRR1 bit definition for when the cause of an alignment
> exception is a prefixed instruction that crosses a 64-byte boundary.
> Add the PREFIXED SRR1 bit definition for exceptions caused by prefixed
> instructions.
>
> Bit 35 of SRR1 is called SRR1_ISI_N_OR_G. This name comes from it being
> used to indicate that an ISI was due to the access being no-exec or
> guarded. A future ISA version adds another purpose. It is also set if

s/A future ISA version/ISA v3.1/

> there is an access in a cache-inhibited location for prefixed
> instruction.  Rename from SRR1_ISI_N_OR_G to SRR1_ISI_N_G_OR_CIP.
>
> Reviewed-by: Alistair Popple 
> Signed-off-by: Jordan Niethe 
> ---
> v2: Combined all the commits concerning SRR1 bits.
> ---


Re: [PATCH v8 11/30] powerpc: Use a datatype for instructions

2020-05-07 Thread Jordan Niethe
Hi mpe,
On Wed, May 6, 2020 at 1:45 PM Jordan Niethe  wrote:
>
> Currently unsigned ints are used to represent instructions on powerpc.
> This has worked well as instructions have always been 4 byte words.
> However, a future ISA version will introduce some changes to

s/a future ISA version will introduce/ISA v3.1 introduces/

> instructions that mean this scheme will no longer work as well. This
> change is Prefixed Instructions. A prefixed instruction is made up of a
> word prefix followed by a word suffix to make an 8 byte double word
> instruction. No matter the endianness of the system the prefix always
> comes first. Prefixed instructions are only planned for powerpc64.
>
> Introduce a ppc_inst type to represent both prefixed and word
> instructions on powerpc64 while keeping it possible to exclusively have
> word instructions on powerpc32.
>
> Signed-off-by: Jordan Niethe 
> ---


Re: [PATCH 1/3] powerpc/va: Add a __va() variant that doesn't do input validation

2020-05-07 Thread kbuild test robot
Hi "Aneesh,

I love your patch! Perhaps something to improve:

[auto build test WARNING on powerpc/next]
[also build test WARNING on v5.7-rc4 next-20200507]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:
https://github.com/0day-ci/linux/commits/Aneesh-Kumar-K-V/powerpc-va-Add-a-__va-variant-that-doesn-t-do-input-validation/20200508-042829
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allyesconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day GCC_VERSION=9.3.0 make.cross ARCH=powerpc

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All warnings (new ones prefixed by >>):

   arch/powerpc/platforms/powernv/opal-core.c: In function 'read_opalcore':
>> arch/powerpc/platforms/powernv/opal-core.c:199:20: warning: passing argument 1 of '__va' makes integer from pointer without a cast [-Wint-conversion]
 199 |memcpy(to, __va(addr), tsz);
 |^~~~
 ||
 |void *
   In file included from arch/powerpc/include/asm/mmu.h:132,
from arch/powerpc/include/asm/lppaca.h:47,
from arch/powerpc/include/asm/paca.h:17,
from arch/powerpc/include/asm/current.h:13,
from include/linux/thread_info.h:21,
from include/asm-generic/preempt.h:5,
from ./arch/powerpc/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:78,
from include/linux/spinlock.h:51,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:6,
from include/linux/mm.h:10,
from include/linux/memblock.h:13,
from arch/powerpc/platforms/powernv/opal-core.c:11:
   arch/powerpc/include/asm/page.h:229:38: note: expected 'phys_addr_t' {aka 
'long long unsigned int'} but argument is of type 'void *'
 229 | static inline void *__va(phys_addr_t addr)
 |  ^~~~

vim +/__va +199 arch/powerpc/platforms/powernv/opal-core.c

6f713d18144ce86 Hari Bathini 2019-09-11  156  
6f713d18144ce86 Hari Bathini 2019-09-11  157  /*
6f713d18144ce86 Hari Bathini 2019-09-11  158   * Read from the ELF header and 
then the crash dump.
6f713d18144ce86 Hari Bathini 2019-09-11  159   * Returns number of bytes read 
on success, -errno on failure.
6f713d18144ce86 Hari Bathini 2019-09-11  160   */
6f713d18144ce86 Hari Bathini 2019-09-11  161  static ssize_t 
read_opalcore(struct file *file, struct kobject *kobj,
6f713d18144ce86 Hari Bathini 2019-09-11  162 struct 
bin_attribute *bin_attr, char *to,
6f713d18144ce86 Hari Bathini 2019-09-11  163 loff_t 
pos, size_t count)
6f713d18144ce86 Hari Bathini 2019-09-11  164  {
6f713d18144ce86 Hari Bathini 2019-09-11  165struct opalcore *m;
6f713d18144ce86 Hari Bathini 2019-09-11  166ssize_t tsz, avail;
6f713d18144ce86 Hari Bathini 2019-09-11  167loff_t tpos = pos;
6f713d18144ce86 Hari Bathini 2019-09-11  168  
6f713d18144ce86 Hari Bathini 2019-09-11  169if (pos >= 
oc_conf->opalcore_size)
6f713d18144ce86 Hari Bathini 2019-09-11  170return 0;
6f713d18144ce86 Hari Bathini 2019-09-11  171  
6f713d18144ce86 Hari Bathini 2019-09-11  172/* Adjust count if it goes 
beyond opalcore size */
6f713d18144ce86 Hari Bathini 2019-09-11  173avail = oc_conf->opalcore_size 
- pos;
6f713d18144ce86 Hari Bathini 2019-09-11  174if (count > avail)
6f713d18144ce86 Hari Bathini 2019-09-11  175count = avail;
6f713d18144ce86 Hari Bathini 2019-09-11  176  
6f713d18144ce86 Hari Bathini 2019-09-11  177if (count == 0)
6f713d18144ce86 Hari Bathini 2019-09-11  178return 0;
6f713d18144ce86 Hari Bathini 2019-09-11  179  
6f713d18144ce86 Hari Bathini 2019-09-11  180/* Read ELF core header and/or 
PT_NOTE segment */
6f713d18144ce86 Hari Bathini 2019-09-11  181if (tpos < 
oc_conf->opalcorebuf_sz) {
6f713d18144ce86 Hari Bathini 2019-09-11  182tsz = min_t(size_t, 
oc_conf->opalcorebuf_sz - tpos, count);
6f713d18144ce86 Hari Bathini 2019-09-11  183memcpy(to, 
oc_conf->opalcorebuf + tpos, tsz);
6f713d18144ce86 Hari Bathini 2019-09-11  184to += tsz;
6f713d18144ce86 Hari Bathini 2019-09-11  185tpos += tsz;
6f713d18144ce86 Hari 

Re: [PATCH v8 11/30] powerpc: Use a datatype for instructions

2020-05-07 Thread Jordan Niethe
On Wed, May 6, 2020 at 1:45 PM Jordan Niethe  wrote:
>
> Currently unsigned ints are used to represent instructions on powerpc.
> This has worked well as instructions have always been 4 byte words.
> However, a future ISA version will introduce some changes to
> instructions that mean this scheme will no longer work as well. This
> change is Prefixed Instructions. A prefixed instruction is made up of a
> word prefix followed by a word suffix to make an 8 byte double word
> instruction. No matter the endianness of the system the prefix always
> comes first. Prefixed instructions are only planned for powerpc64.
>
> Introduce a ppc_inst type to represent both prefixed and word
> instructions on powerpc64 while keeping it possible to exclusively have
> word instructions on powerpc32.
>
> Signed-off-by: Jordan Niethe 
> ---
> v4: New to series
> v5: Add to epapr_paravirt.c, kgdb.c
> v6: - setup_32.c: machine_init(): Use type
> - feature-fixups.c: do_final_fixups(): Use type
> - optprobes.c: arch_prepare_optimized_kprobe(): change a void * to
>   struct ppc_inst *
> - fault.c: store_updates_sp(): Use type
> - Change ppc_inst_equal() implementation from memcpy()
> v7: - Fix compilation issue in early_init_dt_scan_epapr() and
>   do_patch_instruction() with CONFIG_STRICT_KERNEL_RWX
> v8: - style
> - Use in crash_dump.c, mpc86xx_smp.c, smp.c
> ---
>  arch/powerpc/include/asm/code-patching.h  | 32 -
>  arch/powerpc/include/asm/inst.h   | 18 +++--
>  arch/powerpc/include/asm/sstep.h  |  5 +-
>  arch/powerpc/include/asm/uprobes.h|  5 +-
>  arch/powerpc/kernel/align.c   |  4 +-
>  arch/powerpc/kernel/crash_dump.c  |  2 +-
>  arch/powerpc/kernel/epapr_paravirt.c  |  6 +-
>  arch/powerpc/kernel/hw_breakpoint.c   |  4 +-
>  arch/powerpc/kernel/jump_label.c  |  2 +-
>  arch/powerpc/kernel/kgdb.c|  4 +-
>  arch/powerpc/kernel/kprobes.c |  8 +--
>  arch/powerpc/kernel/mce_power.c   |  5 +-
>  arch/powerpc/kernel/optprobes.c   | 64 +
>  arch/powerpc/kernel/setup_32.c|  4 +-
>  arch/powerpc/kernel/trace/ftrace.c| 83 ---
>  arch/powerpc/kernel/vecemu.c  |  5 +-
>  arch/powerpc/lib/code-patching.c  | 76 ++---
>  arch/powerpc/lib/feature-fixups.c | 60 
>  arch/powerpc/lib/sstep.c  |  4 +-
>  arch/powerpc/lib/test_emulate_step.c  |  9 +--
>  arch/powerpc/mm/fault.c   |  4 +-
>  arch/powerpc/perf/core-book3s.c   |  4 +-
>  arch/powerpc/platforms/86xx/mpc86xx_smp.c |  4 +-
>  arch/powerpc/platforms/powermac/smp.c |  4 +-
>  arch/powerpc/xmon/xmon.c  | 22 +++---
>  arch/powerpc/xmon/xmon_bpts.h |  6 +-
>  26 files changed, 233 insertions(+), 211 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/code-patching.h 
> b/arch/powerpc/include/asm/code-patching.h
> index 48e021957ee5..eacc9102c251 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -23,33 +23,33 @@
>  #define BRANCH_ABSOLUTE0x2
>
>  bool is_offset_in_branch_range(long offset);
> -int create_branch(unsigned int *instr, const unsigned int *addr,
> +int create_branch(struct ppc_inst *instr, const struct ppc_inst *addr,
>   unsigned long target, int flags);
> -int create_cond_branch(unsigned int *instr, const unsigned int *addr,
> +int create_cond_branch(struct ppc_inst *instr, const struct ppc_inst *addr,
>unsigned long target, int flags);
> -int patch_branch(unsigned int *addr, unsigned long target, int flags);
> -int patch_instruction(unsigned int *addr, unsigned int instr);
> -int raw_patch_instruction(unsigned int *addr, unsigned int instr);
> +int patch_branch(struct ppc_inst *addr, unsigned long target, int flags);
> +int patch_instruction(struct ppc_inst *addr, struct ppc_inst instr);
> +int raw_patch_instruction(struct ppc_inst *addr, struct ppc_inst instr);
>
>  static inline unsigned long patch_site_addr(s32 *site)
>  {
> return (unsigned long)site + *site;
>  }
>
> -static inline int patch_instruction_site(s32 *site, unsigned int instr)
> +static inline int patch_instruction_site(s32 *site, struct ppc_inst instr)
>  {
> -   return patch_instruction((unsigned int *)patch_site_addr(site), 
> instr);
> +   return patch_instruction((struct ppc_inst *)patch_site_addr(site), 
> instr);
>  }
>
>  static inline int patch_branch_site(s32 *site, unsigned long target, int 
> flags)
>  {
> -   return patch_branch((unsigned int *)patch_site_addr(site), target, 
> flags);
> +   return patch_branch((struct ppc_inst *)patch_site_addr(site), target, 
> flags);
>  }
>
>  static inline int modify_instruction(unsigned int *addr, unsigned int clr,
>  unsigned int set)
>  {
> -   return 

[PATCH V3.1] kmap: Consolidate kmap_prot definitions

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Most architectures define kmap_prot to be PAGE_KERNEL.

Let sparc and xtensa define their own, and define PAGE_KERNEL as the
default if not overridden.
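
A minimal sketch of the arrangement described above (the include/linux/highmem.h
hunk is not visible here, so the exact form is an assumption):

    /* include/linux/highmem.h, after the arch header has been pulled in */
    #ifndef kmap_prot
    #define kmap_prot PAGE_KERNEL
    #endif

Architectures that need something else (sparc, xtensa) keep their own
definition; everyone else picks up PAGE_KERNEL.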

Suggested-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V3:
Fix semicolon in macro

Changes from V2:
New Patch for this series
---
 arch/arc/include/asm/highmem.h| 3 ---
 arch/arm/include/asm/highmem.h| 2 --
 arch/csky/include/asm/highmem.h   | 2 --
 arch/microblaze/include/asm/highmem.h | 1 -
 arch/mips/include/asm/highmem.h   | 2 --
 arch/nds32/include/asm/highmem.h  | 1 -
 arch/powerpc/include/asm/highmem.h| 1 -
 arch/sparc/include/asm/highmem.h  | 3 ++-
 arch/sparc/mm/highmem.c   | 4 
 arch/x86/include/asm/fixmap.h | 1 -
 include/linux/highmem.h   | 4 
 11 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 70900a73bfc8..6e5eafb3afdd 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -25,9 +25,6 @@
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 #define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
 
-#define kmap_prot  PAGE_KERNEL
-
-
 #include 
 
 extern void kmap_init(void);
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index b0d4bd8dc3c1..31811be38d78 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -10,8 +10,6 @@
 #define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-#define kmap_prot  PAGE_KERNEL
-
 #define flush_cache_kmaps() \
do { \
if (cache_is_vivt()) \
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index ea2f3f39174d..14645e3d5cd5 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -38,8 +38,6 @@ extern void *kmap_atomic_pfn(unsigned long pfn);
 
 extern void kmap_init(void);
 
-#define kmap_prot PAGE_KERNEL
-
 #endif /* __KERNEL__ */
 
 #endif /* __ASM_CSKY_HIGHMEM_H */
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index d7c55cfd27bd..284ca8fb54c1 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -25,7 +25,6 @@
 #include 
 #include 
 
-#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 76dec0bd4f59..f1f788b57166 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -54,8 +54,6 @@ extern void *kmap_atomic_pfn(unsigned long pfn);
 
 extern void kmap_init(void);
 
-#define kmap_prot PAGE_KERNEL
-
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_HIGHMEM_H */
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index a48a6536d41a..5717647d14d1 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -32,7 +32,6 @@
 #define LAST_PKMAP_MASK(LAST_PKMAP - 1)
 #define PKMAP_NR(virt) (((virt) - (PKMAP_BASE)) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
-#define kmap_prot  PAGE_KERNEL
 
 static inline void flush_cache_kmaps(void)
 {
diff --git a/arch/powerpc/include/asm/highmem.h 
b/arch/powerpc/include/asm/highmem.h
index 8d8ee3fcd800..104026f7d6bc 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -29,7 +29,6 @@
 #include 
 #include 
 
-#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index f4babe67cb5d..ddb03c04f1f3 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -25,11 +25,12 @@
 #include 
 #include 
 #include 
+#include 
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
 
-extern pgprot_t kmap_prot;
+#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)
 extern pte_t *pkmap_page_table;
 
 void kmap_init(void) __init;
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index 414f578d1e57..d237d902f9c3 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -32,9 +32,6 @@
 #include 
 #include 
 
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
-
 static pte_t *kmap_pte;
 
 void __init kmap_init(void)
@@ -51,7 +48,6 @@ void __init kmap_init(void)
 
 /* cache the first kmap pte */
 kmap_pte = pte_offset_kernel(dir, address);
-kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
 }
 
 void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
diff --git a/arch/x86/include/asm/fixmap.h 

Re: [PATCH V3 15/15] kmap: Consolidate kmap_prot definitions

2020-05-07 Thread Ira Weiny
On Thu, May 07, 2020 at 01:53:07PM -0700, Andrew Morton wrote:
> On Thu,  7 May 2020 08:00:03 -0700 ira.we...@intel.com wrote:
> 
> > From: Ira Weiny 
> > 
> > Most architectures define kmap_prot to be PAGE_KERNEL.
> > 
> > Let sparc and xtensa define there own and define PAGE_KERNEL as the
> > default if not overridden.
> > 
> 
> checkpatch considered useful ;)

Yes, sorry...  V3.1 on its way...

Ira

> 
> 
> From: Andrew Morton 
> Subject: kmap-consolidate-kmap_prot-definitions-checkpatch-fixes
> 
> WARNING: macros should not use a trailing semicolon
> #134: FILE: arch/sparc/include/asm/highmem.h:33:
> +#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
> 
> total: 0 errors, 1 warnings, 100 lines checked
> 
> NOTE: For some of the reported defects, checkpatch may be able to
>   mechanically convert to the typical style using --fix or --fix-inplace.
> 
> ./patches/kmap-consolidate-kmap_prot-definitions.patch has style problems, 
> please review.
> 
> NOTE: If any of the errors are false positives, please report
>   them to the maintainer, see CHECKPATCH in MAINTAINERS.
> 
> Please run checkpatch prior to sending patches
> 
> Cc: Ira Weiny 
> Signed-off-by: Andrew Morton 
> ---
> 
>  arch/sparc/include/asm/highmem.h |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- 
> a/arch/sparc/include/asm/highmem.h~kmap-consolidate-kmap_prot-definitions-checkpatch-fixes
> +++ a/arch/sparc/include/asm/highmem.h
> @@ -30,7 +30,7 @@
>  /* declarations for highmem.c */
>  extern unsigned long highstart_pfn, highend_pfn;
>  
> -#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
> +#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)
>  extern pte_t *pkmap_page_table;
>  
>  void kmap_init(void) __init;
> _
> 
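
For reference, the reason checkpatch flags the trailing semicolon: kmap_prot is
used in expression context, so with

    #define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);

a use such as (hypothetical, for illustration only)

    pte_t pte = mk_pte(page, kmap_prot);

expands to mk_pte(page, __pgprot(...);); and no longer compiles. Dropping the
semicolon, as in the fixup above and the V3.1 repost, keeps the macro a drop-in
replacement for the old pgprot_t variable.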


Re: [PATCH V3 13/15] parisc/kmap: Remove duplicate kmap code

2020-05-07 Thread Ira Weiny
On Thu, May 07, 2020 at 01:52:58PM -0700, Andrew Morton wrote:
> On Thu,  7 May 2020 08:00:01 -0700 ira.we...@intel.com wrote:
> 
> > parisc reimplements the kmap calls except to flush it's dcache.  This is
> > arguably an abuse of kmap but regardless it is messy and confusing.
> > 
> > Remove the duplicate code and have parisc define
> > ARCH_HAS_FLUSH_ON_KUNMAP for a kunmap_flush_on_unmap() architecture
> > specific call to flush the cache.
> 
> checkpatch says:
> 
> ERROR: #define of 'ARCH_HAS_FLUSH_ON_KUNMAP' is wrong - use Kconfig variables 
> or standard guards instead
> #69: FILE: arch/parisc/include/asm/cacheflush.h:103:
> +#define ARCH_HAS_FLUSH_ON_KUNMAP
> 
> which is fair enough, I guess.  More conventional would be
> 
> arch/parisc/include/asm/cacheflush.h:
> 
> static inline void kunmap_flush_on_unmap(void *addr)
> {
>   ...
> }
> #define kunmap_flush_on_unmap kunmap_flush_on_unmap
> 
> 
> include/linux/highmem.h:
> 
> #ifndef kunmap_flush_on_unmap
> static inline void kunmap_flush_on_unmap(void *addr)
> {
> }
> #define kunmap_flush_on_unmap kunmap_flush_on_unmap
> #endif
> 
> 
> static inline void kunmap_atomic_high(void *addr)
> {
>   /* Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
>* handles re-enabling faults + preemption */
>   kunmap_flush_on_unmap(addr);
> }
> 
> 
> but I don't really think it's worth bothering changing it.
> 
> (Ditto patch 3/15)

Yes I was following the pattern already there.

I'll fix up the last patch now.
Ira



[PATCH] powerpc/mm: Replace zero-length array with flexible-array

2020-05-07 Thread Gustavo A. R. Silva
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these is a flexible array member[1][2],
introduced in C99:

struct foo {
int stuff;
struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.
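
As a concrete illustration of both points (hypothetical struct, not taken from
this patch):

    struct demo {
            int stuff;
            u8 data[];      /* flexible array member: must be last */
    };

The compiler now rejects any member declared after 'data', and applying sizeof
to 'data' is a hard error instead of silently evaluating to 0 as the old
"u8 data[0]" form did. Allocation sites are unchanged, e.g. with n elements:

    struct demo *d = kmalloc(struct_size(d, data, n), GFP_KERNEL);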

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva 
---
 arch/powerpc/mm/hugetlbpage.c   |2 +-
 tools/testing/selftests/powerpc/pmu/ebb/trace.h |4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 33b3461d91e8..d06efb946c7d 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -253,7 +253,7 @@ int __init alloc_bootmem_huge_page(struct hstate *h)
 struct hugepd_freelist {
struct rcu_head rcu;
unsigned int index;
-   void *ptes[0];
+   void *ptes[];
 };
 
 static DEFINE_PER_CPU(struct hugepd_freelist *, hugepd_freelist_cur);
diff --git a/tools/testing/selftests/powerpc/pmu/ebb/trace.h 
b/tools/testing/selftests/powerpc/pmu/ebb/trace.h
index 7c0fb5d2bdb1..da2a3be5441f 100644
--- a/tools/testing/selftests/powerpc/pmu/ebb/trace.h
+++ b/tools/testing/selftests/powerpc/pmu/ebb/trace.h
@@ -18,7 +18,7 @@ struct trace_entry
 {
u8 type;
u8 length;
-   u8 data[0];
+   u8 data[];
 };
 
 struct trace_buffer
@@ -26,7 +26,7 @@ struct trace_buffer
u64  size;
bool overflow;
void *tail;
-   u8   data[0];
+   u8   data[];
 };
 
 struct trace_buffer *trace_buffer_allocate(u64 size);
-- 
2.26.2




[PATCH] powerpc: Replace zero-length array with flexible-array

2020-05-07 Thread Gustavo A. R. Silva
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these is a flexible array member[1][2],
introduced in C99:

struct foo {
int stuff;
struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva 
---
 arch/powerpc/platforms/powermac/nvram.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powermac/nvram.c 
b/arch/powerpc/platforms/powermac/nvram.c
index dc7a5bae8f1c..853ccc4480e2 100644
--- a/arch/powerpc/platforms/powermac/nvram.c
+++ b/arch/powerpc/platforms/powermac/nvram.c
@@ -55,7 +55,7 @@ struct chrp_header {
   u8   cksum;
   u16  len;
   char  name[12];
-  u8   data[0];
+  u8   data[];
 };
 
 struct core99_header {



[PATCH] treewide: Replace zero-length array with flexible-array

2020-05-07 Thread Gustavo A. R. Silva
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these is a flexible array member[1][2],
introduced in C99:

struct foo {
int stuff;
struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So, this work (flexible-array member conversions) will also
help to get completely rid of those sorts of issues.

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva 
---
 include/linux/fsl/bestcomm/bestcomm.h |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/fsl/bestcomm/bestcomm.h 
b/include/linux/fsl/bestcomm/bestcomm.h
index a0e2e6b19b57..154e541ce57e 100644
--- a/include/linux/fsl/bestcomm/bestcomm.h
+++ b/include/linux/fsl/bestcomm/bestcomm.h
@@ -27,7 +27,7 @@
  */
 struct bcom_bd {
u32 status;
-   u32 data[0];/* variable payload size */
+   u32 data[]; /* variable payload size */
 };
 
 /*  */



Re: [PATCH net 11/16] net: ethernet: marvell: mvneta: fix fixed-link phydev leaks

2020-05-07 Thread Naresh Kamboju
On Thu, 7 May 2020 at 16:43, Greg Kroah-Hartman
 wrote:
>

> > >
> > > Greg, 3f65047c853a ("of_mdio: add helper to deregister fixed-link
> > > PHYs") needs to be backported as well for these.
> > >
> > > Original series can be found here:
> > >
> > > 
> > > https://lkml.kernel.org/r/1480357509-28074-1-git-send-email-jo...@kernel.org
> >
> > Ah, thanks for that, I thought I dropped all of the ones that caused
> > build errors, but missed the above one.  I'll go take the whole series
> > instead.
>
> This should now all be fixed up, thanks.

While building the kernel image for the arm architecture on the stable-rc 4.4
branch, the following build error was found.

of_mdio: add helper to deregister fixed-link PHYs
commit 3f65047c853a2a5abcd8ac1984af3452b5df4ada upstream.

Add helper to deregister fixed-link PHYs registered using
of_phy_register_fixed_link().

Convert the two drivers that care to deregister their fixed-link PHYs to
use the new helper, but note that most drivers currently fail to do so.

Signed-off-by: Johan Hovold 
Signed-off-by: David S. Miller 
[only take helper function for 4.4.y - gregkh]
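
For context, the intended pairing of the new helper in a driver's remove/error
path looks roughly like this (hypothetical sketch, not taken from this thread):

    /* np is the device_node passed earlier to of_phy_register_fixed_link() */
    if (of_phy_is_fixed_link(np))
            of_phy_deregister_fixed_link(np);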

 # make -sk KBUILD_BUILD_USER=TuxBuild -C/linux -j16 ARCH=arm
CROSS_COMPILE=arm-linux-gnueabihf- HOSTCC=gcc CC="sccache
arm-linux-gnueabihf-gcc" O=build zImage
70 #
71 ../drivers/of/of_mdio.c: In function ‘of_phy_deregister_fixed_link’:
72 ../drivers/of/of_mdio.c:379:2: error: implicit declaration of
function ‘fixed_phy_unregister’; did you mean ‘fixed_phy_register’?
[-Werror=implicit-function-declaration]
73  379 | fixed_phy_unregister(phydev);
74  | ^~~~
75  | fixed_phy_register
76 ../drivers/of/of_mdio.c:381:22: error: ‘struct phy_device’ has no
member named ‘mdio’; did you mean ‘mdix’?
77  381 | put_device(>mdio.dev); /* of_phy_find_device() */
78  | ^~~~
79  | mdix

>
> greg k-h


[PATCH 3/5] powerpc/mpc85xx: Add Cyrus Power-off and reset

2020-05-07 Thread Darren Stevens
The Cyrus board has GPIO-connected power-off and reset lines; add device tree
nodes describing them.

Signed-off-by: Darren Stevens 

---

 arch/powerpc/boot/dts/fsl/cyrus_p5020.dts | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts index bdf0405..f0548fe
100644 --- a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
+++ b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
@@ -73,6 +73,16 @@
};
};
 
+   gpio-poweroff {
+   compatible = "gpio-poweroff";
+   gpios = < 3 1>;
+   };
+
+   gpio-restart {
+   compatible = "gpio-restart";
+   gpios = < 2 1>;
+   };
+
fman@40 {
mdio@e1120 {
phy3: ethernet-phy@3 {


[PATCH 5/5] powerpc/mpc85xx: Add Cyrus P5040 device tree source

2020-05-07 Thread Darren Stevens
The Cyrus P5040 does not currently have a dts file in Linux; add one.

Signed-off-by: Darren Stevens 
Tested-by: Christian Zigotzky 

---
 arch/powerpc/boot/dts/fsl/cyrus_p5040.dts | 235 ++
 1 file changed, 235 insertions(+)

diff --git a/arch/powerpc/boot/dts/fsl/cyrus_p5040.dts 
b/arch/powerpc/boot/dts/fsl/cyrus_p5040.dts
new file mode 100644
index 000..596ee19
--- /dev/null
+++ b/arch/powerpc/boot/dts/fsl/cyrus_p5040.dts
@@ -0,0 +1,235 @@
+/*
+ * Cyrus 5040 Device Tree Source, based on p5040ds.dts
+ *
+ * Copyright 2020 Darren Stevens
+ *
+ * p5040ds.dts Copyright 2012 - 2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ *   notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ *   notice, this list of conditions and the following disclaimer in the
+ *   documentation and/or other materials provided with the distribution.
+ * * Neither the name of Freescale Semiconductor nor the
+ *   names of its contributors may be used to endorse or promote products
+ *   derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * This software is provided by Freescale Semiconductor "as is" and any
+ * express or implied warranties, including, but not limited to, the implied
+ * warranties of merchantability and fitness for a particular purpose are
+ * disclaimed. In no event shall Freescale Semiconductor be liable for any
+ * direct, indirect, incidental, special, exemplary, or consequential damages
+ * (including, but not limited to, procurement of substitute goods or services;
+ * loss of use, data, or profits; or business interruption) however caused and
+ * on any theory of liability, whether in contract, strict liability, or tort
+ * (including negligence or otherwise) arising in any way out of the use of 
this
+ * software, even if advised of the possibility of such damage.
+ */
+
+/include/ "p5040si-pre.dtsi"
+
+/ {
+   model = "varisys,CYRUS5040";
+   compatible = "varisys,CYRUS";
+   #address-cells = <2>;
+   #size-cells = <2>;
+   interrupt-parent = <>;
+
+   aliases{
+   ethernet0 = 
+   ethernet1 = 
+   };
+
+   memory {
+   device_type = "memory";
+   };
+
+   reserved-memory {
+   #address-cells = <2>;
+   #size-cells = <2>;
+   ranges;
+
+   bman_fbpr: bman-fbpr {
+   size = <0 0x100>;
+   alignment = <0 0x100>;
+   };
+   qman_fqd: qman-fqd {
+   size = <0 0x40>;
+   alignment = <0 0x40>;
+   };
+   qman_pfdr: qman-pfdr {
+   size = <0 0x200>;
+   alignment = <0 0x200>;
+   };
+   };
+
+   dcsr: dcsr@f {
+   ranges = <0x 0xf 0x 0x01008000>;
+   };
+
+   bportals: bman-portals@ff400 {
+   ranges = <0x0 0xf 0xf400 0x20>;
+   };
+
+   qportals: qman-portals@ff420 {
+   ranges = <0x0 0xf 0xf420 0x20>;
+   };
+
+   soc: soc@ffe00 {
+   ranges = <0x 0xf 0xfe00 0x100>;
+   reg = <0xf 0xfe00 0 0x1000>;
+   spi@11 {
+   };
+
+   i2c@118100 {
+   };
+
+   i2c@119100 {
+   rtc@6f {
+   compatible = "microchip,mcp7941x";
+   reg = <0x6f>;
+   };
+   };
+
+   gpio-poweroff {
+   compatible = "gpio-poweroff";
+   gpios = < 3 1>;
+   };
+
+   gpio-restart {
+   compatible = "gpio-restart";
+   gpios = < 2 1>;
+   };
+
+   leds {
+   compatible = "gpio-leds";
+   hdd {
+   label = "Disk activity";
+   gpios = < 5 0>;
+   linux,default-trigger = "disk-activity";
+   };
+   };
+
+   fman@40 {
+   mdio@e1120 {
+   phy3: ethernet-phy@3 {
+   reg = <0x3>;
+

[PATCH 1/5] powerpc/mpc85xx: Define ethernet port aliases in board dts file

2020-05-07 Thread Darren Stevens
In patch da414bb923d9 (Add FSL Qoriq DPAA FMan support to the SoC
device tree(s)) we added aliases for all ethernet ports and linked
them to specific hardware devices, but we put them in the pre.dtsi
include file, meaning any board wishing to use this file is stuck with
this port layout even if it doesn't match the board's hardware. The Cyrus
5020 and 5040 boards are examples: they are based on the p5020 reference
design but only have two ethernet ports.
Fix the problem by moving the ethernet aliases to the board's dts file,
where we define the phy aliases.

Signed-off-by: Darren Stevens 

---

Only patched the p5020ds and p5040ds as they are the boards I work
with. Others may need looking at.

 arch/powerpc/boot/dts/fsl/p5020ds.dts  |  7 +++
 arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi |  6 --
 arch/powerpc/boot/dts/fsl/p5040ds.dts  | 13 +
 arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi | 12 
 4 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/boot/dts/fsl/p5020ds.dts
b/arch/powerpc/boot/dts/fsl/p5020ds.dts index b24adf9..cdf0559 100644
--- a/arch/powerpc/boot/dts/fsl/p5020ds.dts
+++ b/arch/powerpc/boot/dts/fsl/p5020ds.dts
@@ -53,6 +53,13 @@
emi1_rgmii = _mdio_rgmii;
emi1_sgmii = _mdio_sgmii;
emi2_xgmii = _mdio_xgmii;
+
+   ethernet0 = 
+   ethernet1 = 
+   ethernet2 = 
+   ethernet3 = 
+   ethernet4 = 
+   ethernet5 = 
};
 
memory {
diff --git a/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi
b/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi index 2d74ea8..8bc7a75
100644 --- a/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p5020si-pre.dtsi
@@ -81,12 +81,6 @@
raideng_jr3 = _jr3;
 
fman0 = 
-   ethernet0 = 
-   ethernet1 = 
-   ethernet2 = 
-   ethernet3 = 
-   ethernet4 = 
-   ethernet5 = 
};
 
cpus {
diff --git a/arch/powerpc/boot/dts/fsl/p5040ds.dts
b/arch/powerpc/boot/dts/fsl/p5040ds.dts index 30850b3..bffbba5 100644
--- a/arch/powerpc/boot/dts/fsl/p5040ds.dts
+++ b/arch/powerpc/boot/dts/fsl/p5040ds.dts
@@ -65,6 +65,19 @@
hydra_sg_slot6 = _sg_slot6;
hydra_xg_slot1 = _xg_slot1;
hydra_xg_slot2 = _xg_slot2;
+
+   ethernet0 = 
+   ethernet1 = 
+   ethernet2 = 
+   ethernet3 = 
+   ethernet4 = 
+   ethernet5 = 
+   ethernet6 = 
+   ethernet7 = 
+   ethernet8 = 
+   ethernet9 = 
+   ethernet10 = 
+   ethernet11 = 
};
 
memory {
diff --git a/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi
b/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi index ed89dbb..bc4e0bc
100644 --- a/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi
+++ b/arch/powerpc/boot/dts/fsl/p5040si-pre.dtsi
@@ -81,18 +81,6 @@
 
fman0 = 
fman1 = 
-   ethernet0 = 
-   ethernet1 = 
-   ethernet2 = 
-   ethernet3 = 
-   ethernet4 = 
-   ethernet5 = 
-   ethernet6 = 
-   ethernet7 = 
-   ethernet8 = 
-   ethernet9 = 
-   ethernet10 = 
-   ethernet11 = 
};
 
cpus {


[PATCH 4/5] powerpc/mpc85xx: Add Cyrus HDD LED

2020-05-07 Thread Darren Stevens
The Cyrus board has its HDD LED connected to a GPIO pin. Add a device
tree entry for this.

Signed-off-By: Darren Stevens 

---
 arch/powerpc/boot/dts/fsl/cyrus_p5020.dts | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts index f0548fe..74c100f
100644 --- a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
+++ b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
@@ -83,6 +83,16 @@
gpios = < 2 1>;
};
 
+   leds {
+   compatible = "gpio-leds";
+
+   hdd {
+   label = "Disk Activity";
+   gpios = < 5 0>;
+   linux,default-trigger =
"disk-activity";
+   };
+   };
+
fman@40 {
mdio@e1120 {
phy3: ethernet-phy@3 {


[PATCH 2/5] powerpc/mpc85xx: Activate Cyrus P5020 ethernet

2020-05-07 Thread Darren Stevens
The Cyrus P5020 board has two ethernet ports; add the required device tree
entries.

Signed-off-by: Darren Stevens 

---

 arch/powerpc/boot/dts/fsl/cyrus_p5020.dts | 39
 +++ 1 file changed, 39 insertions(+)

diff --git a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts index 40ba060..bdf0405
100644 --- a/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
+++ b/arch/powerpc/boot/dts/fsl/cyrus_p5020.dts
@@ -17,6 +17,11 @@
#size-cells = <2>;
interrupt-parent = <>;
 
+   aliases {
+   ethernet0 = 
+   ethernet1 = 
+   };
+
memory {
device_type = "memory";
};
@@ -67,6 +72,40 @@
reg = <0x6f>;
};
};
+
+   fman@40 {
+   mdio@e1120 {
+   phy3: ethernet-phy@3 {
+   reg = <0x3>;
+   };
+
+   phy7: ethernet-phy@7 {
+   reg = <0x7>;
+   };
+   };
+
+   ethernet@e {
+   status = "disabled";
+   };
+
+   ethernet@e2000 {
+   status = "disabled";
+   };
+
+   ethernet@e4000 {
+   status = "disabled";
+   };
+
+   ethernet@e6000 {
+   phy-handle = <>;
+   phy-connection-type = "rgmii";
+   };
+
+   ethernet@e8000 {
+   phy-handle = <>;
+   phy-connection-type = "rgmii";
+   };
+   };
};
 
rio: rapidio@ffe0c {


Re: [PATCH V3 15/15] kmap: Consolidate kmap_prot definitions

2020-05-07 Thread Andrew Morton
On Thu,  7 May 2020 08:00:03 -0700 ira.we...@intel.com wrote:

> From: Ira Weiny 
> 
> Most architectures define kmap_prot to be PAGE_KERNEL.
> 
> Let sparc and xtensa define there own and define PAGE_KERNEL as the
> default if not overridden.
> 

checkpatch considered useful ;)


From: Andrew Morton 
Subject: kmap-consolidate-kmap_prot-definitions-checkpatch-fixes

WARNING: macros should not use a trailing semicolon
#134: FILE: arch/sparc/include/asm/highmem.h:33:
+#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);

total: 0 errors, 1 warnings, 100 lines checked

NOTE: For some of the reported defects, checkpatch may be able to
  mechanically convert to the typical style using --fix or --fix-inplace.

./patches/kmap-consolidate-kmap_prot-definitions.patch has style problems, 
please review.

NOTE: If any of the errors are false positives, please report
  them to the maintainer, see CHECKPATCH in MAINTAINERS.

Please run checkpatch prior to sending patches

Cc: Ira Weiny 
Signed-off-by: Andrew Morton 
---

 arch/sparc/include/asm/highmem.h |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- 
a/arch/sparc/include/asm/highmem.h~kmap-consolidate-kmap_prot-definitions-checkpatch-fixes
+++ a/arch/sparc/include/asm/highmem.h
@@ -30,7 +30,7 @@
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
 
-#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
+#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)
 extern pte_t *pkmap_page_table;
 
 void kmap_init(void) __init;
_



Re: [PATCH V3 13/15] parisc/kmap: Remove duplicate kmap code

2020-05-07 Thread Andrew Morton
On Thu,  7 May 2020 08:00:01 -0700 ira.we...@intel.com wrote:

> parisc reimplements the kmap calls except to flush it's dcache.  This is
> arguably an abuse of kmap but regardless it is messy and confusing.
> 
> Remove the duplicate code and have parisc define
> ARCH_HAS_FLUSH_ON_KUNMAP for a kunmap_flush_on_unmap() architecture
> specific call to flush the cache.

checkpatch says:

ERROR: #define of 'ARCH_HAS_FLUSH_ON_KUNMAP' is wrong - use Kconfig variables 
or standard guards instead
#69: FILE: arch/parisc/include/asm/cacheflush.h:103:
+#define ARCH_HAS_FLUSH_ON_KUNMAP

which is fair enough, I guess.  More conventional would be

arch/parisc/include/asm/cacheflush.h:

static inline void kunmap_flush_on_unmap(void *addr)
{
...
}
#define kunmap_flush_on_unmap kunmap_flush_on_unmap


include/linux/highmem.h:

#ifndef kunmap_flush_on_unmap
static inline void kunmap_flush_on_unmap(void *addr)
{
}
#define kunmap_flush_on_unmap kunmap_flush_on_unmap
#endif


static inline void kunmap_atomic_high(void *addr)
{
/* Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
 * handles re-enabling faults + preemption */
kunmap_flush_on_unmap(addr);
}


but I don't really think it's worth bothering changing it.  

(Ditto patch 3/15)


Re: [PATCH v4 02/14] arm: add support for folded p4d page tables

2020-05-07 Thread Mike Rapoport
Hi,

On Thu, May 07, 2020 at 02:16:56PM +0200, Marek Szyprowski wrote:
> Hi
> 
> On 14.04.2020 17:34, Mike Rapoport wrote:
> > From: Mike Rapoport 
> >
> > Implement primitives necessary for the 4th level folding, add walks of p4d
> > level where appropriate, and remove __ARCH_USE_5LEVEL_HACK.
> >
> > Signed-off-by: Mike Rapoport 
> 
> Today I've noticed that kexec is broken on ARM 32bit. Bisecting between 
> current linux-next and v5.7-rc1 pointed to this commit. I've tested this 
> on Odroid XU4 and Raspberry Pi4 boards. Here is the relevant log:
> 
> # kexec --kexec-syscall -l zImage --append "$(cat /proc/cmdline)"
> memory_range[0]:0x4000..0xbe9f
> memory_range[0]:0x4000..0xbe9f
> # kexec -e
> kexec_core: Starting new kernel
> 8<--- cut here ---
> Unable to handle kernel paging request at virtual address c010f1f4
> pgd = c6817793
> [c010f1f4] *pgd=441e(bad)
> Internal error: Oops: 80d [#1] PREEMPT ARM
> Modules linked in:
> CPU: 0 PID: 1329 Comm: kexec Tainted: G    W 
> 5.7.0-rc3-00127-g6cba81ed0f62 #611
> Hardware name: Samsung Exynos (Flattened Device Tree)
> PC is at machine_kexec+0x40/0xfc

Any chance you have the debug info in this kernel?
scripts/faddr2line would come in handy here.

> LR is at 0x
> pc : []    lr : []    psr: 6013
> sp : ebc13e60  ip : 40008000  fp : 0001
> r10: 0058  r9 : fee1dead  r8 : 0001
> r7 : c121387c  r6 : 6c224000  r5 : ece40c00  r4 : ec222000
> r3 : c010f1f4  r2 : c110  r1 : c110  r0 : 418d
> Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
> Control: 10c5387d  Table: 6bc14059  DAC: 0051
> Process kexec (pid: 1329, stack limit = 0x366bb4dc)
> Stack: (0xebc13e60 to 0xebc14000)
> ...
> [] (machine_kexec) from [] (kernel_kexec+0x74/0x7c)
> [] (kernel_kexec) from [] (__do_sys_reboot+0x1f8/0x210)
> [] (__do_sys_reboot) from [] (ret_fast_syscall+0x0/0x28)
> Exception stack(0xebc13fa8 to 0xebc13ff0)
> ...
> ---[ end trace 3e8d6c81723c778d ]---
> 1329 Segmentation fault  ./kexec -e
> 
> > ---
> >   arch/arm/include/asm/pgtable.h |  1 -
> >   arch/arm/lib/uaccess_with_memcpy.c |  7 +-
> >   arch/arm/mach-sa1100/assabet.c |  2 +-
> >   arch/arm/mm/dump.c | 29 +-
> >   arch/arm/mm/fault-armv.c   |  7 +-
> >   arch/arm/mm/fault.c| 22 ++--
> >   arch/arm/mm/idmap.c|  3 ++-
> >   arch/arm/mm/init.c |  2 +-
> >   arch/arm/mm/ioremap.c  | 12 ++---
> >   arch/arm/mm/mm.h   |  2 +-
> >   arch/arm/mm/mmu.c  | 35 +-
> >   arch/arm/mm/pgd.c  | 40 --
> >   12 files changed, 125 insertions(+), 37 deletions(-)
> >
> > ...
> 
> Best regards
> -- 
> Marek Szyprowski, PhD
> Samsung R&D Institute Poland
> 

-- 
Sincerely yours,
Mike.


Re: [PATCH v2 12/45] powerpc/ptdump: Properly handle non standard page size

2020-05-07 Thread kbuild test robot
Hi Christophe,

I love your patch! Yet something to improve:

[auto build test ERROR on v5.7-rc4]
[cannot apply to powerpc/next next-20200507]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:
https://github.com/0day-ci/linux/commits/Christophe-Leroy/Use-hugepages-to-map-kernel-mem-on-8xx/20200507-060838
base:0e698dfa282211e414076f9dc7e83c1c288314fd
config: powerpc-randconfig-r013-20200507 (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day GCC_VERSION=9.3.0 make.cross 
ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All errors (new ones prefixed by >>):

   arch/powerpc/mm/ptdump/ptdump.c: In function 'walk_pmd':
>> arch/powerpc/mm/ptdump/ptdump.c:285:42: error: 'PTE_SHIFT' undeclared (first 
>> use in this function); did you mean 'PUD_SHIFT'?
 285 |note_page(st, addr, 3, pmd_val(*pmd), PTE_SHIFT);
 |  ^
 |  PUD_SHIFT
   arch/powerpc/mm/ptdump/ptdump.c:285:42: note: each undeclared identifier is 
reported only once for each function it appears in

vim +285 arch/powerpc/mm/ptdump/ptdump.c

   272  
   273  static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long 
start)
   274  {
   275  pmd_t *pmd = pmd_offset(pud, 0);
   276  unsigned long addr;
   277  unsigned int i;
   278  
   279  for (i = 0; i < PTRS_PER_PMD; i++, pmd++) {
   280  addr = start + i * PMD_SIZE;
   281  if (!pmd_none(*pmd) && !pmd_is_leaf(*pmd))
   282  /* pmd exists */
   283  walk_pte(st, pmd, addr);
   284  else
 > 285  note_page(st, addr, 3, pmd_val(*pmd), 
 > PTE_SHIFT);
   286  }
   287  }
   288  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org




[PATCH V3 15/15] kmap: Consolidate kmap_prot definitions

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Most architectures define kmap_prot to be PAGE_KERNEL.

Let sparc and xtensa define their own, and define PAGE_KERNEL as the
default if not overridden.

Suggested-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V2:
New Patch for this series
---
 arch/arc/include/asm/highmem.h| 3 ---
 arch/arm/include/asm/highmem.h| 2 --
 arch/csky/include/asm/highmem.h   | 2 --
 arch/microblaze/include/asm/highmem.h | 1 -
 arch/mips/include/asm/highmem.h   | 2 --
 arch/nds32/include/asm/highmem.h  | 1 -
 arch/powerpc/include/asm/highmem.h| 1 -
 arch/sparc/include/asm/highmem.h  | 3 ++-
 arch/sparc/mm/highmem.c   | 4 
 arch/x86/include/asm/fixmap.h | 1 -
 include/linux/highmem.h   | 4 
 11 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 70900a73bfc8..6e5eafb3afdd 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -25,9 +25,6 @@
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 #define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
 
-#define kmap_prot  PAGE_KERNEL
-
-
 #include 
 
 extern void kmap_init(void);
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index b0d4bd8dc3c1..31811be38d78 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -10,8 +10,6 @@
 #define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-#define kmap_prot  PAGE_KERNEL
-
 #define flush_cache_kmaps() \
do { \
if (cache_is_vivt()) \
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index ea2f3f39174d..14645e3d5cd5 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -38,8 +38,6 @@ extern void *kmap_atomic_pfn(unsigned long pfn);
 
 extern void kmap_init(void);
 
-#define kmap_prot PAGE_KERNEL
-
 #endif /* __KERNEL__ */
 
 #endif /* __ASM_CSKY_HIGHMEM_H */
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index d7c55cfd27bd..284ca8fb54c1 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -25,7 +25,6 @@
 #include 
 #include 
 
-#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 76dec0bd4f59..f1f788b57166 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -54,8 +54,6 @@ extern void *kmap_atomic_pfn(unsigned long pfn);
 
 extern void kmap_init(void);
 
-#define kmap_prot PAGE_KERNEL
-
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_HIGHMEM_H */
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index a48a6536d41a..5717647d14d1 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -32,7 +32,6 @@
 #define LAST_PKMAP_MASK(LAST_PKMAP - 1)
 #define PKMAP_NR(virt) (((virt) - (PKMAP_BASE)) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
-#define kmap_prot  PAGE_KERNEL
 
 static inline void flush_cache_kmaps(void)
 {
diff --git a/arch/powerpc/include/asm/highmem.h 
b/arch/powerpc/include/asm/highmem.h
index 8d8ee3fcd800..104026f7d6bc 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -29,7 +29,6 @@
 #include 
 #include 
 
-#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index f4babe67cb5d..37f8694bde84 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -25,11 +25,12 @@
 #include 
 #include 
 #include 
+#include 
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
 
-extern pgprot_t kmap_prot;
+#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
 extern pte_t *pkmap_page_table;
 
 void kmap_init(void) __init;
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index 414f578d1e57..d237d902f9c3 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -32,9 +32,6 @@
 #include 
 #include 
 
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
-
 static pte_t *kmap_pte;
 
 void __init kmap_init(void)
@@ -51,7 +48,6 @@ void __init kmap_init(void)
 
 /* cache the first kmap pte */
 kmap_pte = pte_offset_kernel(dir, address);
-kmap_prot = __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE);
 }
 
 void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 28183ee3cc42..b9527a54db99 100644
--- 

[PATCH V3 14/15] sparc: Remove unnecessary includes

2020-05-07 Thread ira . weiny
From: Ira Weiny 

linux/highmem.h has not been needed for the pte_offset_map =>
kmap_atomic use in sparc for some time (~2002).

Remove this include.

Suggested-by: Al Viro 
Signed-off-by: Ira Weiny 

---
Changes from V2:
New Patch for this series
---
 arch/sparc/mm/io-unit.c | 1 -
 arch/sparc/mm/iommu.c   | 1 -
 2 files changed, 2 deletions(-)

diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c
index 289276b99b01..08238d989cfd 100644
--- a/arch/sparc/mm/io-unit.c
+++ b/arch/sparc/mm/io-unit.c
@@ -10,7 +10,6 @@
 #include 
 #include 
 #include 
-#include  /* pte_offset_map => kmap_atomic */
 #include 
 #include 
 #include 
diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index b00dde13681b..f1e08e30b64e 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#include  /* pte_offset_map => kmap_atomic */
 #include 
 #include 
 #include 
-- 
2.25.1



[PATCH V3 12/15] kmap: Remove kmap_atomic_to_page()

2020-05-07 Thread ira . weiny
From: Ira Weiny 

kmap_atomic_to_page() has no callers and is only defined on 1 arch and
declared on another.  Remove it.

Suggested-by: Al Viro 
Signed-off-by: Ira Weiny 

---
Changes from V2:
New Patch for this series
---
 arch/csky/include/asm/highmem.h  |  1 -
 arch/csky/mm/highmem.c   | 13 -
 arch/nds32/include/asm/highmem.h |  1 -
 3 files changed, 15 deletions(-)

diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 263fbddcd0a3..ea2f3f39174d 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -33,7 +33,6 @@ extern pte_t *pkmap_page_table;
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
-extern struct page *kmap_atomic_to_page(void *ptr);
 
 #define flush_cache_kmaps() do {} while (0)
 
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 3ae5c8cd7619..3b3f622f5ae9 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -81,19 +81,6 @@ void *kmap_atomic_pfn(unsigned long pfn)
return (void *) vaddr;
 }
 
-struct page *kmap_atomic_to_page(void *ptr)
-{
-   unsigned long idx, vaddr = (unsigned long)ptr;
-   pte_t *pte;
-
-   if (vaddr < FIXADDR_START)
-   return virt_to_page(ptr);
-
-   idx = virt_to_fix(vaddr);
-   pte = kmap_pte - (idx - FIX_KMAP_BEGIN);
-   return pte_page(*pte);
-}
-
 static void __init kmap_pages_init(void)
 {
unsigned long vaddr;
diff --git a/arch/nds32/include/asm/highmem.h b/arch/nds32/include/asm/highmem.h
index 4d21308549c9..a48a6536d41a 100644
--- a/arch/nds32/include/asm/highmem.h
+++ b/arch/nds32/include/asm/highmem.h
@@ -52,7 +52,6 @@ extern void kmap_init(void);
  */
 #ifdef CONFIG_HIGHMEM
 extern void *kmap_atomic_pfn(unsigned long pfn);
-extern struct page *kmap_atomic_to_page(void *ptr);
 #endif
 
 #endif
-- 
2.25.1



[PATCH V3 11/15] drm: Remove drm specific kmap_atomic code

2020-05-07 Thread ira . weiny
From: Ira Weiny 

kmap_atomic_prot() is now exported by all architectures.  Use this
function rather than open coding a driver specific kmap_atomic.

Acked-by: Daniel Vetter 
Reviewed-by: Christian König 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 
---
 drivers/gpu/drm/ttm/ttm_bo_util.c| 56 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 16 
 include/drm/ttm/ttm_bo_api.h |  4 --
 3 files changed, 12 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 52d2b71f1588..f09b096ba4fd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -257,54 +257,6 @@ static int ttm_copy_io_page(void *dst, void *src, unsigned 
long page)
return 0;
 }
 
-#ifdef CONFIG_X86
-#define __ttm_kmap_atomic_prot(__page, __prot) kmap_atomic_prot(__page, __prot)
-#define __ttm_kunmap_atomic(__addr) kunmap_atomic(__addr)
-#else
-#define __ttm_kmap_atomic_prot(__page, __prot) vmap(&__page, 1, 0,  __prot)
-#define __ttm_kunmap_atomic(__addr) vunmap(__addr)
-#endif
-
-
-/**
- * ttm_kmap_atomic_prot - Efficient kernel map of a single page with
- * specified page protection.
- *
- * @page: The page to map.
- * @prot: The page protection.
- *
- * This function maps a TTM page using the kmap_atomic api if available,
- * otherwise falls back to vmap. The user must make sure that the
- * specified page does not have an aliased mapping with a different caching
- * policy unless the architecture explicitly allows it. Also mapping and
- * unmapping using this api must be correctly nested. Unmapping should
- * occur in the reverse order of mapping.
- */
-void *ttm_kmap_atomic_prot(struct page *page, pgprot_t prot)
-{
-   if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
-   return kmap_atomic(page);
-   else
-   return __ttm_kmap_atomic_prot(page, prot);
-}
-EXPORT_SYMBOL(ttm_kmap_atomic_prot);
-
-/**
- * ttm_kunmap_atomic_prot - Unmap a page that was mapped using
- * ttm_kmap_atomic_prot.
- *
- * @addr: The virtual address from the map.
- * @prot: The page protection.
- */
-void ttm_kunmap_atomic_prot(void *addr, pgprot_t prot)
-{
-   if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
-   kunmap_atomic(addr);
-   else
-   __ttm_kunmap_atomic(addr);
-}
-EXPORT_SYMBOL(ttm_kunmap_atomic_prot);
-
 static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
unsigned long page,
pgprot_t prot)
@@ -316,13 +268,13 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void 
*src,
return -ENOMEM;
 
src = (void *)((unsigned long)src + (page << PAGE_SHIFT));
-   dst = ttm_kmap_atomic_prot(d, prot);
+   dst = kmap_atomic_prot(d, prot);
if (!dst)
return -ENOMEM;
 
memcpy_fromio(dst, src, PAGE_SIZE);
 
-   ttm_kunmap_atomic_prot(dst, prot);
+   kunmap_atomic(dst);
 
return 0;
 }
@@ -338,13 +290,13 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void 
*dst,
return -ENOMEM;
 
dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT));
-   src = ttm_kmap_atomic_prot(s, prot);
+   src = kmap_atomic_prot(s, prot);
if (!src)
return -ENOMEM;
 
memcpy_toio(dst, src, PAGE_SIZE);
 
-   ttm_kunmap_atomic_prot(src, prot);
+   kunmap_atomic(src);
 
return 0;
 }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
index bb46ca0c458f..94d456a1d1a9 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
@@ -374,12 +374,12 @@ static int vmw_bo_cpu_blit_line(struct 
vmw_bo_blit_line_data *d,
copy_size = min_t(u32, copy_size, PAGE_SIZE - src_page_offset);
 
if (unmap_src) {
-   ttm_kunmap_atomic_prot(d->src_addr, d->src_prot);
+   kunmap_atomic(d->src_addr);
d->src_addr = NULL;
}
 
if (unmap_dst) {
-   ttm_kunmap_atomic_prot(d->dst_addr, d->dst_prot);
+   kunmap_atomic(d->dst_addr);
d->dst_addr = NULL;
}
 
@@ -388,8 +388,8 @@ static int vmw_bo_cpu_blit_line(struct 
vmw_bo_blit_line_data *d,
return -EINVAL;
 
d->dst_addr =
-   ttm_kmap_atomic_prot(d->dst_pages[dst_page],
-d->dst_prot);
+   kmap_atomic_prot(d->dst_pages[dst_page],
+d->dst_prot);
if (!d->dst_addr)
return -ENOMEM;
 
@@ -401,8 +401,8 @@ static int vmw_bo_cpu_blit_line(struct 
vmw_bo_blit_line_data *d,

[PATCH V3 13/15] parisc/kmap: Remove duplicate kmap code

2020-05-07 Thread ira . weiny
From: Ira Weiny 

parisc reimplements the kmap calls except to flush its dcache.  This is
arguably an abuse of kmap but regardless it is messy and confusing.

Remove the duplicate code and have parisc define
ARCH_HAS_FLUSH_ON_KUNMAP for a kunmap_flush_on_unmap() architecture
specific call to flush the cache.

Suggested-by: Al Viro 
Signed-off-by: Ira Weiny 

---
Changes from V2:
New Patch for this series
---
 arch/parisc/include/asm/cacheflush.h | 28 ++--
 include/linux/highmem.h  | 10 +++---
 2 files changed, 9 insertions(+), 29 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h 
b/arch/parisc/include/asm/cacheflush.h
index 119c9a7681bc..99663fc1f997 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -100,35 +100,11 @@ flush_anon_page(struct vm_area_struct *vma, struct page 
*page, unsigned long vma
}
 }
 
-#include 
-
-#define ARCH_HAS_KMAP
-
-static inline void *kmap(struct page *page)
-{
-   might_sleep();
-   return page_address(page);
-}
-
-static inline void kunmap(struct page *page)
-{
-   flush_kernel_dcache_page_addr(page_address(page));
-}
-
-static inline void *kmap_atomic(struct page *page)
-{
-   preempt_disable();
-   pagefault_disable();
-   return page_address(page);
-}
-
-static inline void kunmap_atomic_high(void *addr)
+#define ARCH_HAS_FLUSH_ON_KUNMAP
+static inline void kunmap_flush_on_unmap(void *addr)
 {
flush_kernel_dcache_page_addr(addr);
 }
 
-#define kmap_atomic_prot(page, prot)   kmap_atomic(page)
-#define kmap_atomic_pfn(pfn)   kmap_atomic(pfn_to_page(pfn))
-
 #endif /* _PARISC_CACHEFLUSH_H */
 
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 89838306f50d..cc0c3904e501 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -129,7 +129,6 @@ static inline struct page *kmap_to_page(void *addr)
 
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
-#ifndef ARCH_HAS_KMAP
 static inline void *kmap(struct page *page)
 {
might_sleep();
@@ -138,6 +137,9 @@ static inline void *kmap(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
+#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
+   kunmap_flush_on_unmap(page_address(page));
+#endif
 }
 
 static inline void *kmap_atomic(struct page *page)
@@ -150,14 +152,16 @@ static inline void *kmap_atomic(struct page *page)
 
 static inline void kunmap_atomic_high(void *addr)
 {
-   /* Nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
+   /* Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
 * handles re-enabling faults + preemption */
+#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
+   kunmap_flush_on_unmap(addr);
+#endif
 }
 
 #define kmap_atomic_pfn(pfn)   kmap_atomic(pfn_to_page(pfn))
 
 #define kmap_flush_unused()do {} while(0)
-#endif
 
 #endif /* CONFIG_HIGHMEM */
 
-- 
2.25.1



[PATCH V3 10/15] arch/kmap: Define kmap_atomic_prot() for all arch's

2020-05-07 Thread ira . weiny
From: Ira Weiny 

To support kmap_atomic_prot(), all architectures need to support
protections passed to their kmap_atomic_high() function.  Pass
protections into kmap_atomic_high() and change the name to
kmap_atomic_high_prot() to match.

Then define kmap_atomic_prot() as a core function which calls
kmap_atomic_high_prot() when needed.

Finally, redefine kmap_atomic() as a wrapper of kmap_atomic_prot() with
the default kmap_prot exported by the architectures.
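
Put together, the consolidated core looks roughly like the per-arch helpers this
series removes (compare the microblaze version deleted below); the actual
include/linux/highmem.h hunk is truncated in this digest, so treat this as a
sketch rather than the literal patch content:

    static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
    {
            preempt_disable();
            pagefault_disable();
            if (!PageHighMem(page))
                    return page_address(page);
            return kmap_atomic_high_prot(page, prot);
    }

    #define kmap_atomic(page)   kmap_atomic_prot(page, kmap_prot)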

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V1:
Adjust for bisect-ability
Adjust for removing kunmap_atomic_high
Remove kmap_atomic_high_prot declarations
---
 arch/arc/mm/highmem.c |  6 +++---
 arch/arm/mm/highmem.c |  6 +++---
 arch/csky/mm/highmem.c|  6 +++---
 arch/microblaze/include/asm/highmem.h | 16 
 arch/mips/mm/highmem.c|  6 +++---
 arch/nds32/mm/highmem.c   |  6 +++---
 arch/powerpc/include/asm/highmem.h| 17 -
 arch/sparc/mm/highmem.c   |  6 +++---
 arch/x86/include/asm/highmem.h| 14 --
 arch/xtensa/mm/highmem.c  |  6 +++---
 include/linux/highmem.h   |  7 ---
 11 files changed, 25 insertions(+), 71 deletions(-)

diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 5d3eab4ac0b0..479b0d72d3cf 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,7 +49,7 @@
 extern pte_t * pkmap_page_table;
 static pte_t * fixmap_page_table;
 
-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
int idx, cpu_idx;
unsigned long vaddr;
@@ -59,11 +59,11 @@ void *kmap_atomic_high(struct page *page)
vaddr = FIXMAP_ADDR(idx);
 
set_pte_at(_mm, vaddr, fixmap_page_table + idx,
-  mk_pte(page, kmap_prot));
+  mk_pte(page, prot));
 
return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void kunmap_atomic_high(void *kv)
 {
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index ac8394655a6e..e013f6b81328 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,7 +31,7 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
 }
 
-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
unsigned int idx;
unsigned long vaddr;
@@ -67,11 +67,11 @@ void *kmap_atomic_high(struct page *page)
 * in place, so the contained TLB flush ensures the TLB is updated
 * with the new mapping.
 */
-   set_fixmap_pte(idx, mk_pte(page, kmap_prot));
+   set_fixmap_pte(idx, mk_pte(page, prot));
 
return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void kunmap_atomic_high(void *kvaddr)
 {
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index f4311669b5bb..3ae5c8cd7619 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(kmap_flush_tlb);
 
 EXPORT_SYMBOL(kmap);
 
-void *kmap_atomic_high(struct page *page)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
unsigned long vaddr;
int idx, type;
@@ -32,12 +32,12 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
 #endif
-   set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+   set_pte(kmap_pte-idx, mk_pte(page, prot));
flush_tlb_one((unsigned long)vaddr);
 
return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_high);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void kunmap_atomic_high(void *kvaddr)
 {
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index 90d96239152f..d7c55cfd27bd 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,22 +51,6 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
-{
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-
-   return kmap_atomic_high_prot(page, prot);
-}
-
-static inline void *kmap_atomic_high(struct page *page)
-{
-   return kmap_atomic_high_prot(page, kmap_prot);
-}
-
 #define flush_cache_kmaps(){ flush_icache(); flush_dcache(); }
 
 #endif /* __KERNEL__ */
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 87023bd1a33c..37e244cdb14e 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -18,7 +18,7 @@ void kmap_flush_tlb(unsigned long addr)
 }
 

[PATCH V3 08/15] arch/kmap: Ensure kmap_prot visibility

2020-05-07 Thread ira . weiny
From: Ira Weiny 

We want to support kmap_atomic_prot() on all architectures and it makes
sense to define kmap_atomic() to use the default kmap_prot.

So we ensure all arches have a globally available kmap_prot either as a
define or exported symbol.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 
---
 arch/microblaze/include/asm/highmem.h | 2 +-
 arch/microblaze/mm/init.c | 3 ---
 arch/powerpc/include/asm/highmem.h| 2 +-
 arch/powerpc/mm/mem.c | 3 ---
 arch/sparc/mm/highmem.c   | 1 +
 5 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index c3cbda90391d..90d96239152f 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -25,8 +25,8 @@
 #include 
 #include 
 
+#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
-extern pgprot_t kmap_prot;
 extern pte_t *pkmap_page_table;
 
 /*
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 1ffbfa96b9b8..a467686c13af 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -49,8 +49,6 @@ unsigned long lowmem_size;
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
 EXPORT_SYMBOL(kmap_pte);
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
 
 static inline pte_t *virt_to_kpte(unsigned long vaddr)
 {
@@ -68,7 +66,6 @@ static void __init highmem_init(void)
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
 
kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
-   kmap_prot = PAGE_KERNEL;
 }
 
 static void highmem_setup(void)
diff --git a/arch/powerpc/include/asm/highmem.h 
b/arch/powerpc/include/asm/highmem.h
index 373a470df205..ee5de974c5ef 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -29,8 +29,8 @@
 #include 
 #include 
 
+#define kmap_prot  PAGE_KERNEL
 extern pte_t *kmap_pte;
-extern pgprot_t kmap_prot;
 extern pte_t *pkmap_page_table;
 
 /*
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 041ed7cfd341..3f642b058731 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -64,8 +64,6 @@ bool init_mem_is_free;
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
 EXPORT_SYMBOL(kmap_pte);
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
 #endif
 
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
@@ -245,7 +243,6 @@ void __init paging_init(void)
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
 
kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
-   kmap_prot = PAGE_KERNEL;
 #endif /* CONFIG_HIGHMEM */
 
printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",
diff --git a/arch/sparc/mm/highmem.c b/arch/sparc/mm/highmem.c
index 469786bc430f..9f06d75e88e1 100644
--- a/arch/sparc/mm/highmem.c
+++ b/arch/sparc/mm/highmem.c
@@ -33,6 +33,7 @@
 #include 
 
 pgprot_t kmap_prot;
+EXPORT_SYMBOL(kmap_prot);
 
 static pte_t *kmap_pte;
 
-- 
2.25.1



[PATCH V3 09/15] arch/kmap: Don't hard code kmap_prot values

2020-05-07 Thread ira . weiny
From: Ira Weiny 

To support kmap_atomic_prot() on all architectures, each arch must
honor the protection value passed in to it.

Change csky, mips, nds32 and xtensa to use their global constant
kmap_prot rather than a hard-coded value that was equal to it.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V1:
Mention that kmap_prot is a constant in commit message
---
 arch/csky/mm/highmem.c   | 2 +-
 arch/mips/mm/highmem.c   | 2 +-
 arch/nds32/mm/highmem.c  | 2 +-
 arch/xtensa/mm/highmem.c | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 0aafbbbe651c..f4311669b5bb 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -32,7 +32,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
 #endif
-   set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+   set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
flush_tlb_one((unsigned long)vaddr);
 
return (void *)vaddr;
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 155fbb107b35..87023bd1a33c 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -29,7 +29,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte - idx)));
 #endif
-   set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+   set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
local_flush_tlb_one((unsigned long)vaddr);
 
return (void*) vaddr;
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index f6e6915c0d31..809f8c830f06 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -21,7 +21,7 @@ void *kmap_atomic_high(struct page *page)
 
idx = type + KM_TYPE_NR * smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-   pte = (page_to_pfn(page) << PAGE_SHIFT) | (PAGE_KERNEL);
+   pte = (page_to_pfn(page) << PAGE_SHIFT) | (kmap_prot);
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
set_pte(ptep, pte);
 
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 4de323e43682..50168b09510a 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -48,7 +48,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
BUG_ON(!pte_none(*(kmap_pte + idx)));
 #endif
-   set_pte(kmap_pte + idx, mk_pte(page, PAGE_KERNEL_EXEC));
+   set_pte(kmap_pte + idx, mk_pte(page, kmap_prot));
 
return (void *)vaddr;
 }
-- 
2.25.1



[PATCH V3 06/15] arch/kmap_atomic: Consolidate duplicate code

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Every arch has the same code to ensure atomic operations and the same
check for a !HIGHMEM page.

Remove the duplicate code by defining a core kmap_atomic() which only
calls the arch-specific kmap_atomic_high() when the page is high memory.
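
Roughly, the consolidated core helper looks like this (sketch only,
assuming the arch hook keeps the kmap_atomic_high() name):

static inline void *kmap_atomic(struct page *page)
{
        preempt_disable();
        pagefault_disable();
        if (!PageHighMem(page))
                return page_address(page);
        return kmap_atomic_high(page);
}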

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V1:
Adjust to preserve bisect-ability
Remove unneeded kmap_atomic_high declarations
---
 arch/arc/include/asm/highmem.h|  1 -
 arch/arc/mm/highmem.c |  9 ++---
 arch/arm/include/asm/highmem.h|  1 -
 arch/arm/mm/highmem.c |  9 ++---
 arch/csky/include/asm/highmem.h   |  1 -
 arch/csky/mm/highmem.c|  9 ++---
 arch/microblaze/include/asm/highmem.h |  4 ++--
 arch/mips/include/asm/highmem.h   |  1 -
 arch/mips/mm/cache.c  |  2 +-
 arch/mips/mm/highmem.c| 18 ++
 arch/nds32/include/asm/highmem.h  |  1 -
 arch/nds32/mm/highmem.c   |  9 ++---
 arch/powerpc/include/asm/highmem.h|  4 ++--
 arch/powerpc/mm/highmem.c |  6 --
 arch/sparc/include/asm/highmem.h  |  1 -
 arch/sparc/mm/highmem.c   |  9 ++---
 arch/x86/include/asm/highmem.h|  5 -
 arch/x86/mm/highmem_32.c  | 14 --
 arch/xtensa/include/asm/highmem.h |  1 -
 arch/xtensa/mm/highmem.c  |  9 ++---
 include/linux/highmem.h   | 23 +++
 21 files changed, 46 insertions(+), 91 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 8387a5596a91..db425cd38545 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,7 +30,6 @@
 
 #include 
 
-extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 
 extern void kmap_init(void);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 4db13a6b9f3b..0964b011c29f 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,16 +49,11 @@
 extern pte_t * pkmap_page_table;
 static pte_t * fixmap_page_table;
 
-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
 {
int idx, cpu_idx;
unsigned long vaddr;
 
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-
cpu_idx = kmap_atomic_idx_push();
idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
vaddr = FIXMAP_ADDR(idx);
@@ -68,7 +63,7 @@ void *kmap_atomic(struct page *page)
 
return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);
 
 void __kunmap_atomic(void *kv)
 {
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index 736f65283e7b..8c80bfe18a34 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -60,7 +60,6 @@ static inline void *kmap_high_get(struct page *page)
  * when CONFIG_HIGHMEM is not set.
  */
 #ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
 #endif
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index c700b32350ee..075fdc235091 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,18 +31,13 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
 }
 
-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
 {
unsigned int idx;
unsigned long vaddr;
void *kmap;
int type;
 
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-
 #ifdef CONFIG_DEBUG_HIGHMEM
/*
 * There is no cache coherency issue when non VIVT, so force the
@@ -76,7 +71,7 @@ void *kmap_atomic(struct page *page)
 
return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic);
+EXPORT_SYMBOL(kmap_atomic_high);
 
 void __kunmap_atomic(void *kvaddr)
 {
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index be11c5b67122..8ceee12f9bc1 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -32,7 +32,6 @@ extern pte_t *pkmap_page_table;
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
 extern struct page *kmap_atomic_to_page(void *ptr);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index e9952211264b..63d74b47eee6 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,16 +21,11 @@ EXPORT_SYMBOL(kmap_flush_tlb);
 
 EXPORT_SYMBOL(kmap);
 
-void *kmap_atomic(struct page *page)
+void *kmap_atomic_high(struct page *page)
 {
unsigned long vaddr;
int idx, 

[PATCH V3 07/15] arch/kunmap_atomic: Consolidate duplicate code

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Every single architecture (including !CONFIG_HIGHMEM) calls...

pagefault_enable();
preempt_enable();

... before returning from __kunmap_atomic().  Lift this code into the
kunmap_atomic() macro.

While we are at it, rename __kunmap_atomic() to kunmap_atomic_high() to
be consistent.
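
Roughly, the resulting core macro (sketch only; the existing
BUILD_BUG_ON is assumed to stay and only guards against passing a
struct page * by mistake):

#define kunmap_atomic(addr)                                     \
do {                                                            \
        BUILD_BUG_ON(__same_type((addr), struct page *));       \
        kunmap_atomic_high(addr);                               \
        pagefault_enable();                                     \
        preempt_enable();                                       \
} while (0)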

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V1:
Adjust to preserve bisect-ability
Remove unneeded kunmap_atomic_high() declarations
---
 arch/arc/include/asm/highmem.h|  2 --
 arch/arc/mm/highmem.c |  7 ++-
 arch/arm/include/asm/highmem.h|  1 -
 arch/arm/mm/highmem.c |  6 ++
 arch/csky/include/asm/highmem.h   |  1 -
 arch/csky/mm/highmem.c|  9 +++--
 arch/microblaze/include/asm/highmem.h |  1 -
 arch/microblaze/mm/highmem.c  |  6 ++
 arch/mips/include/asm/highmem.h   |  1 -
 arch/mips/mm/cache.c  |  4 ++--
 arch/mips/mm/highmem.c|  6 ++
 arch/nds32/include/asm/highmem.h  |  1 -
 arch/nds32/mm/highmem.c   |  6 ++
 arch/parisc/include/asm/cacheflush.h  |  4 +---
 arch/powerpc/include/asm/highmem.h|  1 -
 arch/powerpc/mm/highmem.c |  6 ++
 arch/sparc/include/asm/highmem.h  |  2 --
 arch/sparc/mm/highmem.c   |  6 ++
 arch/x86/include/asm/highmem.h|  1 -
 arch/x86/mm/highmem_32.c  |  7 ++-
 arch/xtensa/include/asm/highmem.h |  2 --
 arch/xtensa/mm/highmem.c  |  7 ++-
 include/linux/highmem.h   | 11 +++
 23 files changed, 31 insertions(+), 67 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index db425cd38545..70900a73bfc8 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,8 +30,6 @@
 
 #include 
 
-extern void __kunmap_atomic(void *kvaddr);
-
 extern void kmap_init(void);
 
 static inline void flush_cache_kmaps(void)
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 0964b011c29f..5d3eab4ac0b0 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -65,7 +65,7 @@ void *kmap_atomic_high(struct page *page)
 }
 EXPORT_SYMBOL(kmap_atomic_high);
 
-void __kunmap_atomic(void *kv)
+void kunmap_atomic_high(void *kv)
 {
unsigned long kvaddr = (unsigned long)kv;
 
@@ -87,11 +87,8 @@ void __kunmap_atomic(void *kv)
 
kmap_atomic_idx_pop();
}
-
-   pagefault_enable();
-   preempt_enable();
 }
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
 
 static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
 {
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index 8c80bfe18a34..b0d4bd8dc3c1 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -60,7 +60,6 @@ static inline void *kmap_high_get(struct page *page)
  * when CONFIG_HIGHMEM is not set.
  */
 #ifdef CONFIG_HIGHMEM
-extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
 #endif
 
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index 075fdc235091..ac8394655a6e 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -73,7 +73,7 @@ void *kmap_atomic_high(struct page *page)
 }
 EXPORT_SYMBOL(kmap_atomic_high);
 
-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
 {
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int idx, type;
@@ -95,10 +95,8 @@ void __kunmap_atomic(void *kvaddr)
/* this address was obtained through kmap_high_get() */
kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
}
-   pagefault_enable();
-   preempt_enable();
 }
-EXPORT_SYMBOL(__kunmap_atomic);
+EXPORT_SYMBOL(kunmap_atomic_high);
 
 void *kmap_atomic_pfn(unsigned long pfn)
 {
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 8ceee12f9bc1..263fbddcd0a3 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -32,7 +32,6 @@ extern pte_t *pkmap_page_table;
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
 extern struct page *kmap_atomic_to_page(void *ptr);
 
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 63d74b47eee6..0aafbbbe651c 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -39,13 +39,13 @@ void *kmap_atomic_high(struct page *page)
 }
 EXPORT_SYMBOL(kmap_atomic_high);
 
-void __kunmap_atomic(void *kvaddr)
+void kunmap_atomic_high(void *kvaddr)
 {
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
int idx;
 
if (vaddr < FIXADDR_START)
-   goto out;
+   return;
 
 #ifdef CONFIG_DEBUG_HIGHMEM
idx = 

[PATCH V3 04/15] arch/kunmap: Remove duplicate kunmap implementations

2020-05-07 Thread ira . weiny
From: Ira Weiny 

All architectures do exactly the same thing for kunmap(); remove all the
duplicate definitions and lift the call to the core.

This also has the benefit of changing kunmap() on a number of
architectures to be an inline call rather than an actual function.
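
The lifted core version is essentially the block being removed from
each arch, roughly:

static inline void kunmap(struct page *page)
{
        might_sleep();
        if (!PageHighMem(page))
                return;
        kunmap_high(page);
}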

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 
---
 arch/arc/include/asm/highmem.h| 10 --
 arch/arm/include/asm/highmem.h|  3 ---
 arch/arm/mm/highmem.c |  9 -
 arch/csky/include/asm/highmem.h   |  3 ---
 arch/csky/mm/highmem.c|  9 -
 arch/microblaze/include/asm/highmem.h |  9 -
 arch/mips/include/asm/highmem.h   |  3 ---
 arch/mips/mm/highmem.c|  9 -
 arch/nds32/include/asm/highmem.h  |  3 ---
 arch/nds32/mm/highmem.c   | 10 --
 arch/powerpc/include/asm/highmem.h|  9 -
 arch/sparc/include/asm/highmem.h  | 10 --
 arch/x86/include/asm/highmem.h|  4 
 arch/x86/mm/highmem_32.c  |  9 -
 arch/xtensa/include/asm/highmem.h | 10 --
 include/linux/highmem.h   |  9 +
 16 files changed, 9 insertions(+), 110 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 96eb67c86961..8387a5596a91 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -32,7 +32,6 @@
 
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
-extern void kunmap_high(struct page *page);
 
 extern void kmap_init(void);
 
@@ -41,15 +40,6 @@ static inline void flush_cache_kmaps(void)
flush_cache_all();
 }
 
-static inline void kunmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return;
-   kunmap_high(page);
-}
-
-
 #endif
 
 #endif
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index c917522541de..736f65283e7b 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -20,8 +20,6 @@
 
 extern pte_t *pkmap_page_table;
 
-extern void kunmap_high(struct page *page);
-
 /*
  * The reason for kmap_high_get() is to ensure that the currently kmap'd
  * page usage count does not decrease to zero while we're using its
@@ -62,7 +60,6 @@ static inline void *kmap_high_get(struct page *page)
  * when CONFIG_HIGHMEM is not set.
  */
 #ifdef CONFIG_HIGHMEM
-extern void kunmap(struct page *page);
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index e8ba37c36590..c700b32350ee 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,15 +31,6 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
 }
 
-void kunmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return;
-   kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
 void *kmap_atomic(struct page *page)
 {
unsigned int idx;
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index 9d0516e38110..be11c5b67122 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -30,11 +30,8 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void kunmap_high(struct page *page);
-
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void kunmap(struct page *page);
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 extern void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 4a3c273bc8b9..e9952211264b 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -21,15 +21,6 @@ EXPORT_SYMBOL(kmap_flush_tlb);
 
 EXPORT_SYMBOL(kmap);
 
-void kunmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return;
-   kunmap_high(page);
-}
-EXPORT_SYMBOL(kunmap);
-
 void *kmap_atomic(struct page *page)
 {
unsigned long vaddr;
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index 8c5bfd228bd8..0c94046f2d58 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,18 +51,9 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void kunmap_high(struct page *page);
 extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
 extern void __kunmap_atomic(void *kvaddr);
 
-static inline void kunmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return;
-   kunmap_high(page);
-}

[PATCH V3 03/15] arch/kmap: Remove redundant arch specific kmaps

2020-05-07 Thread ira . weiny
From: Ira Weiny 

The kmap code for all the architectures is almost 100% identical.

Lift the common code to the core.  Use ARCH_HAS_KMAP_FLUSH_TLB to
indicate if an arch defines kmap_flush_tlb() and call it if needed.

This also has the benefit of changing kmap() on a number of
architectures to be an inline call rather than an actual function.
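
Roughly, the lifted core kmap() (sketch only; archs that do not define
ARCH_HAS_KMAP_FLUSH_TLB are assumed to get a no-op stub):

#ifndef ARCH_HAS_KMAP_FLUSH_TLB
static inline void kmap_flush_tlb(unsigned long addr) { }
#endif

static inline void *kmap(struct page *page)
{
        void *addr;

        might_sleep();
        if (!PageHighMem(page))
                return page_address(page);
        addr = kmap_high(page);
        kmap_flush_tlb((unsigned long)addr);

        return addr;
}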

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 
---
 arch/arc/include/asm/highmem.h|  2 --
 arch/arc/mm/highmem.c | 10 --
 arch/arm/include/asm/highmem.h|  2 --
 arch/arm/mm/highmem.c |  9 -
 arch/csky/include/asm/highmem.h   |  4 ++--
 arch/csky/mm/highmem.c| 14 --
 arch/microblaze/include/asm/highmem.h |  9 -
 arch/mips/include/asm/highmem.h   |  4 ++--
 arch/mips/mm/highmem.c| 14 +++---
 arch/nds32/include/asm/highmem.h  |  2 --
 arch/nds32/mm/highmem.c   | 12 
 arch/powerpc/include/asm/highmem.h|  9 -
 arch/sparc/include/asm/highmem.h  |  9 -
 arch/x86/include/asm/highmem.h|  2 --
 arch/x86/mm/highmem_32.c  |  9 -
 arch/xtensa/include/asm/highmem.h |  9 -
 include/linux/highmem.h   | 18 ++
 17 files changed, 29 insertions(+), 109 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 042e92921c4c..96eb67c86961 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -30,8 +30,6 @@
 
 #include 
 
-extern void *kmap(struct page *page);
-extern void *kmap_high(struct page *page);
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
 extern void kunmap_high(struct page *page);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 39ef7b9a3aa9..4db13a6b9f3b 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -49,16 +49,6 @@
 extern pte_t * pkmap_page_table;
 static pte_t * fixmap_page_table;
 
-void *kmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return page_address(page);
-
-   return kmap_high(page);
-}
-EXPORT_SYMBOL(kmap);
-
 void *kmap_atomic(struct page *page)
 {
int idx, cpu_idx;
diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index eb4e4207cd3c..c917522541de 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -20,7 +20,6 @@
 
 extern pte_t *pkmap_page_table;
 
-extern void *kmap_high(struct page *page);
 extern void kunmap_high(struct page *page);
 
 /*
@@ -63,7 +62,6 @@ static inline void *kmap_high_get(struct page *page)
  * when CONFIG_HIGHMEM is not set.
  */
 #ifdef CONFIG_HIGHMEM
-extern void *kmap(struct page *page);
 extern void kunmap(struct page *page);
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index cc6eb79ef20c..e8ba37c36590 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -31,15 +31,6 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
return *ptep;
 }
 
-void *kmap(struct page *page)
-{
-   might_sleep();
-   if (!PageHighMem(page))
-   return page_address(page);
-   return kmap_high(page);
-}
-EXPORT_SYMBOL(kmap);
-
 void kunmap(struct page *page)
 {
might_sleep();
diff --git a/arch/csky/include/asm/highmem.h b/arch/csky/include/asm/highmem.h
index a345a2f2c22e..9d0516e38110 100644
--- a/arch/csky/include/asm/highmem.h
+++ b/arch/csky/include/asm/highmem.h
@@ -30,10 +30,10 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_high(struct page *page);
 extern void kunmap_high(struct page *page);
 
-extern void *kmap(struct page *page);
+#define ARCH_HAS_KMAP_FLUSH_TLB
+extern void kmap_flush_tlb(unsigned long addr);
 extern void kunmap(struct page *page);
 extern void *kmap_atomic(struct page *page);
 extern void __kunmap_atomic(void *kvaddr);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 690d678649d1..4a3c273bc8b9 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -13,18 +13,12 @@ static pte_t *kmap_pte;
 
 unsigned long highstart_pfn, highend_pfn;
 
-void *kmap(struct page *page)
+void kmap_flush_tlb(unsigned long addr)
 {
-   void *addr;
-
-   might_sleep();
-   if (!PageHighMem(page))
-   return page_address(page);
-   addr = kmap_high(page);
-   flush_tlb_one((unsigned long)addr);
-
-   return addr;
+   flush_tlb_one(addr);
 }
+EXPORT_SYMBOL(kmap_flush_tlb);
+
 EXPORT_SYMBOL(kmap);
 
 void kunmap(struct page *page)
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index 99ced7278b5c..8c5bfd228bd8 100644
--- 

[PATCH V3 05/15] {x86,powerpc,microblaze}/kmap: Move preempt disable

2020-05-07 Thread ira . weiny
From: Ira Weiny 

During this kmap() conversion series we must maintain bisect-ability.
To do this, kmap_atomic_prot() in x86, powerpc, and microblaze needs to
remain functional.

Create a temporary inline version of kmap_atomic_prot within these
architectures so we can rework their kmap_atomic() calls and then lift
kmap_atomic_prot() to the core.

Reviewed-by: Christoph Hellwig 
Suggested-by: Al Viro 
Signed-off-by: Ira Weiny 

---
Changes from V2:
Fix microblaze not being static inline

Changes from V1:
New patch
---
 arch/microblaze/include/asm/highmem.h | 11 ++-
 arch/microblaze/mm/highmem.c  | 10 ++
 arch/powerpc/include/asm/highmem.h| 11 ++-
 arch/powerpc/mm/highmem.c |  9 ++---
 arch/x86/include/asm/highmem.h| 11 ++-
 arch/x86/mm/highmem_32.c  | 10 ++
 6 files changed, 36 insertions(+), 26 deletions(-)

diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index 0c94046f2d58..c38d920a1171 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,7 +51,16 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+   preempt_disable();
+   pagefault_disable();
+   if (!PageHighMem(page))
+   return page_address(page);
+
+   return kmap_atomic_high_prot(page, prot);
+}
 extern void __kunmap_atomic(void *kvaddr);
 
 static inline void *kmap_atomic(struct page *page)
diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
index d7569f77fa15..0e3efaa8a004 100644
--- a/arch/microblaze/mm/highmem.c
+++ b/arch/microblaze/mm/highmem.c
@@ -32,18 +32,12 @@
  */
 #include 
 
-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
 
unsigned long vaddr;
int idx, type;
 
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -55,7 +49,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 
return (void *) vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void __kunmap_atomic(void *kvaddr)
 {
diff --git a/arch/powerpc/include/asm/highmem.h 
b/arch/powerpc/include/asm/highmem.h
index ba3371977d49..d049806a8354 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -59,7 +59,16 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+   preempt_disable();
+   pagefault_disable();
+   if (!PageHighMem(page))
+   return page_address(page);
+
+   return kmap_atomic_high_prot(page, prot);
+}
 extern void __kunmap_atomic(void *kvaddr);
 
 static inline void *kmap_atomic(struct page *page)
diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
index 320c1672b2ae..f075cef6d663 100644
--- a/arch/powerpc/mm/highmem.c
+++ b/arch/powerpc/mm/highmem.c
@@ -30,16 +30,11 @@
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
unsigned long vaddr;
int idx, type;
 
-   preempt_disable();
-   pagefault_disable();
-   if (!PageHighMem(page))
-   return page_address(page);
-
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -49,7 +44,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 
return (void*) vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void __kunmap_atomic(void *kvaddr)
 {
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 90b96594d6c5..61f47fef40e5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,7 +58,16 @@ extern unsigned long highstart_pfn, highend_pfn;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern 

[PATCH V3 02/15] arch/xtensa: Move kmap build bug out of the way

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Move the kmap() build bug to kmap_init() to facilitate patches to lift
kmap() to the core.

Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 

---
Changes from V1:
combine code onto 1 line.
---
 arch/xtensa/include/asm/highmem.h | 5 -
 arch/xtensa/mm/highmem.c  | 4 
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/xtensa/include/asm/highmem.h 
b/arch/xtensa/include/asm/highmem.h
index 413848cc1e56..a9587c85be85 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -68,11 +68,6 @@ void kunmap_high(struct page *page);
 
 static inline void *kmap(struct page *page)
 {
-   /* Check if this memory layout is broken because PKMAP overlaps
-* page table.
-*/
-   BUILD_BUG_ON(PKMAP_BASE <
-TLBTEMP_BASE_1 + TLBTEMP_SIZE);
might_sleep();
if (!PageHighMem(page))
return page_address(page);
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 184ceadccc1a..da734a2ed641 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -88,6 +88,10 @@ void __init kmap_init(void)
 {
unsigned long kmap_vstart;
 
+   /* Check if this memory layout is broken because PKMAP overlaps
+* page table.
+*/
+   BUILD_BUG_ON(PKMAP_BASE < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
/* cache the first kmap pte */
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
-- 
2.25.1



[PATCH V3 00/15] Remove duplicated kmap code

2020-05-07 Thread ira . weiny
From: Ira Weiny 

The kmap infrastructure has been copied almost verbatim to every architecture.
This series consolidates obvious duplicated code by defining core functions
which call into the architectures only when needed.

Some of the k[un]map_atomic() implementations share similarities, but not
enough to warrant further consolidation.

In addition we remove a duplicate implementation of kmap() in DRM.

Testing was done by 0day to cover all the architectures I can't readily
build/test.

---
Changes from V2:
Collect review/acks
Add kmap_prot consolidation patch from Christoph
Add 3 suggested patches from Al Viro
Fix include for microblaze
Fix static inline for microblaze

Changes from V1:
Fix bisect-ability
Update commit message and fix line lengths
Remove unneeded kunmap_atomic_high() declarations
Remove unneeded kmap_atomic_high() declarations
collect reviews
rebase to 5.7-rc4

Changes from V0:
Define kmap_flush_tlb() and make kmap() truly arch independent.
Redefine the k[un]map_atomic_* code to call into the architectures for
high mem pages
Ensure all architectures define kmap_prot, use it appropriately, and
define kmap_atomic_prot()
Remove drm implementation of kmap_atomic()


Ira Weiny (15):
  arch/kmap: Remove BUG_ON()
  arch/xtensa: Move kmap build bug out of the way
  arch/kmap: Remove redundant arch specific kmaps
  arch/kunmap: Remove duplicate kunmap implementations
  {x86,powerpc,microblaze}/kmap: Move preempt disable
  arch/kmap_atomic: Consolidate duplicate code
  arch/kunmap_atomic: Consolidate duplicate code
  arch/kmap: Ensure kmap_prot visibility
  arch/kmap: Don't hard code kmap_prot values
  arch/kmap: Define kmap_atomic_prot() for all arch's
  drm: Remove drm specific kmap_atomic code
  kmap: Remove kmap_atomic_to_page()
  parisc/kmap: Remove duplicate kmap code
  sparc: Remove unnecessary includes
  kmap: Consolidate kmap_prot definitions

 arch/arc/include/asm/highmem.h| 18 ---
 arch/arc/mm/highmem.c | 28 ++
 arch/arm/include/asm/highmem.h|  9 
 arch/arm/mm/highmem.c | 35 ++---
 arch/csky/include/asm/highmem.h   | 12 +
 arch/csky/mm/highmem.c| 56 
 arch/microblaze/include/asm/highmem.h | 27 --
 arch/microblaze/mm/highmem.c  | 16 ++
 arch/microblaze/mm/init.c |  3 --
 arch/mips/include/asm/highmem.h   | 11 +---
 arch/mips/mm/cache.c  |  6 +--
 arch/mips/mm/highmem.c| 49 +++---
 arch/nds32/include/asm/highmem.h  |  9 
 arch/nds32/mm/highmem.c   | 39 ++
 arch/parisc/include/asm/cacheflush.h  | 30 +--
 arch/powerpc/include/asm/highmem.h| 28 --
 arch/powerpc/mm/highmem.c | 21 ++--
 arch/powerpc/mm/mem.c |  3 --
 arch/sparc/include/asm/highmem.h  | 25 +
 arch/sparc/mm/highmem.c   | 20 ++--
 arch/sparc/mm/io-unit.c   |  1 -
 arch/sparc/mm/iommu.c |  1 -
 arch/x86/include/asm/fixmap.h |  1 -
 arch/x86/include/asm/highmem.h|  9 
 arch/x86/mm/highmem_32.c  | 50 ++
 arch/xtensa/include/asm/highmem.h | 27 --
 arch/xtensa/mm/highmem.c  | 22 
 drivers/gpu/drm/ttm/ttm_bo_util.c | 56 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_blit.c  | 16 +++---
 include/drm/ttm/ttm_bo_api.h  |  4 --
 include/linux/highmem.h   | 74 ---
 31 files changed, 150 insertions(+), 556 deletions(-)

-- 
2.25.1



[PATCH V3 01/15] arch/kmap: Remove BUG_ON()

2020-05-07 Thread ira . weiny
From: Ira Weiny 

Replace the use of BUG_ON(in_interrupt()) in kmap() and kunmap()
with might_sleep().

Besides the benefits of might_sleep(), this normalizes the
implementations such that they can be made generic in subsequent
patches.

Reviewed-by: Dan Williams 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Ira Weiny 
---
 arch/arc/include/asm/highmem.h| 2 +-
 arch/arc/mm/highmem.c | 2 +-
 arch/arm/mm/highmem.c | 2 +-
 arch/csky/mm/highmem.c| 2 +-
 arch/microblaze/include/asm/highmem.h | 2 +-
 arch/mips/mm/highmem.c| 2 +-
 arch/nds32/mm/highmem.c   | 2 +-
 arch/powerpc/include/asm/highmem.h| 2 +-
 arch/sparc/include/asm/highmem.h  | 4 ++--
 arch/x86/mm/highmem_32.c  | 3 +--
 arch/xtensa/include/asm/highmem.h | 4 ++--
 11 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/arc/include/asm/highmem.h b/arch/arc/include/asm/highmem.h
index 1af00accb37f..042e92921c4c 100644
--- a/arch/arc/include/asm/highmem.h
+++ b/arch/arc/include/asm/highmem.h
@@ -45,7 +45,7 @@ static inline void flush_cache_kmaps(void)
 
 static inline void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index fc8849e4f72e..39ef7b9a3aa9 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -51,7 +51,7 @@ static pte_t * fixmap_page_table;
 
 void *kmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return page_address(page);
 
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index a76f8ace9ce6..cc6eb79ef20c 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -42,7 +42,7 @@ EXPORT_SYMBOL(kmap);
 
 void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 813129145f3d..690d678649d1 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -29,7 +29,7 @@ EXPORT_SYMBOL(kmap);
 
 void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/microblaze/include/asm/highmem.h 
b/arch/microblaze/include/asm/highmem.h
index 332c78e15198..99ced7278b5c 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -66,7 +66,7 @@ static inline void *kmap(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index d08e6d7d533b..edd889f6cede 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -28,7 +28,7 @@ EXPORT_SYMBOL(kmap);
 
 void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index 022779af6148..4c7c28e994ea 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -24,7 +24,7 @@ EXPORT_SYMBOL(kmap);
 
 void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/powerpc/include/asm/highmem.h 
b/arch/powerpc/include/asm/highmem.h
index a4b65b186ec6..529512f6d65a 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -74,7 +74,7 @@ static inline void *kmap(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h
index 18d776925c45..7dd2d4b3f980 100644
--- a/arch/sparc/include/asm/highmem.h
+++ b/arch/sparc/include/asm/highmem.h
@@ -55,7 +55,7 @@ void kunmap_high(struct page *page);
 
 static inline void *kmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return page_address(page);
return kmap_high(page);
@@ -63,7 +63,7 @@ static inline void *kmap(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
-   BUG_ON(in_interrupt());
+   might_sleep();
if (!PageHighMem(page))
return;
kunmap_high(page);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 0a1898b8552e..8af66382672b 100644
--- a/arch/x86/mm/highmem_32.c
+++ 

powerpc/pci: [PATCH 1/1]: PCIE PHB reset

2020-05-07 Thread wenxiong
From: Wen Xiong 

Several device drivers hit EEH (Extended Error Handling) when triggering
kdump on Pseries PowerVM. This patch implements a reset of the PHBs in
the pseries PCI code. The PHB reset stops all PCI transactions from the
previous kernel. We have tested the patch in several environments:
- direct slot adapters
- adapters under the switch
- a VF adapter in PowerVM
- a VF adapter/adapter in KVM guest.

Signed-off-by: Wen Xiong 
---
 arch/powerpc/platforms/pseries/pci.c | 153 +++
 1 file changed, 153 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/pci.c 
b/arch/powerpc/platforms/pseries/pci.c
index 911534b89c85..aac7f00696d2 100644
--- a/arch/powerpc/platforms/pseries/pci.c
+++ b/arch/powerpc/platforms/pseries/pci.c
@@ -11,6 +11,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -354,3 +356,154 @@ int pseries_root_bridge_prepare(struct pci_host_bridge 
*bridge)
 
return 0;
 }
+
+/**
+ * pseries_get_pdn_addr - Retrieve PHB address
+ * @phb: PCI controller
+ *
+ * Retrieve the associated PHB address. There are actually two RTAS
+ * function calls dedicated to this purpose; we need to try the new
+ * one first and then fall back to the old one. Besides, you should
+ * make sure the config address is figured out from the FDT node
+ * before calling this function.
+ *
+ */
+static int pseries_get_pdn_addr(struct pci_controller *phb)
+{
+   int ret = -1;
+   int rets[3];
+   int ibm_get_config_addr_info;
+   int ibm_get_config_addr_info2;
+   int config_addr = 0;
+   struct pci_dn *root_pdn, *pdn;
+
+   ibm_get_config_addr_info2   = rtas_token("ibm,get-config-addr-info2");
+   ibm_get_config_addr_info= rtas_token("ibm,get-config-addr-info");
+
+   root_pdn = PCI_DN(phb->dn);
+   pdn = list_first_entry(&root_pdn->child_list, struct pci_dn, list);
+   config_addr = (pdn->busno << 16) | (pdn->devfn << 8);
+
+   if (ibm_get_config_addr_info2 != RTAS_UNKNOWN_SERVICE) {
+   /*
+* First of all, we need to make sure there has one PE
+* associated with the device. Otherwise, PE address is
+* meaningless.
+*/
+   ret = rtas_call(ibm_get_config_addr_info2, 4, 2, rets,
+   config_addr, BUID_HI(pdn->phb->buid),
+   BUID_LO(pdn->phb->buid), 1);
+   if (ret || (rets[0] == 0)) {
+   pr_warn("%s: Failed to get address for PHB#%x-PE# "
+   "option=%d config_addr=%x\n",
+   __func__, pdn->phb->global_number, 1, rets[0]);
+   return -1;
+   }
+
+   /* Retrieve the associated PE config address */
+   ret = rtas_call(ibm_get_config_addr_info2, 4, 2, rets,
+   config_addr, BUID_HI(pdn->phb->buid),
+   BUID_LO(pdn->phb->buid), 0);
+   if (ret) {
+   pr_warn("%s: Failed to get address for PHB#%x-PE# "
+   "option=%d config_addr=%x\n",
+   __func__, pdn->phb->global_number, 0, rets[0]);
+   return -1;
+   }
+   return rets[0];
+   }
+
+   if (ibm_get_config_addr_info != RTAS_UNKNOWN_SERVICE) {
+   ret = rtas_call(ibm_get_config_addr_info, 4, 2, rets,
+   config_addr, BUID_HI(pdn->phb->buid),
+   BUID_LO(pdn->phb->buid), 0);
+   if (ret || rets[0]) {
+   pr_warn("%s: Failed to get address for PHB#%x-PE# "
+   "config_addr=%x\n",
+   __func__, pdn->phb->global_number, rets[0]);
+   return -1;
+   }
+   return rets[0];
+   }
+
+   return ret;
+}
+
+static int __init pseries_phb_reset(void)
+{
+   struct pci_controller *phb;
+   int config_addr;
+   int ibm_set_slot_reset;
+   int ibm_configure_pe;
+   int ret;
+
+   if (is_kdump_kernel() || reset_devices) {
+   pr_info("Issue PHB reset ...\n");
+   ibm_set_slot_reset = rtas_token("ibm,set-slot-reset");
+   ibm_configure_pe = rtas_token("ibm,configure-pe");
+
+   if (ibm_set_slot_reset == RTAS_UNKNOWN_SERVICE ||
+   ibm_configure_pe == RTAS_UNKNOWN_SERVICE) {
+   pr_info("%s: EEH functionality not supported\n",
+   __func__);
+   }
+
+   list_for_each_entry(phb, &hose_list, list_node) {
+   config_addr = pseries_get_pdn_addr(phb);
+   if (config_addr == -1)
+   continue;
+
+   ret = rtas_call(ibm_set_slot_reset, 4, 1, NULL,
+   config_addr, BUID_HI(phb->buid),
+   

Re: [PATCH v2 15/28] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec

2020-05-07 Thread Aneesh Kumar K.V
"Aneesh Kumar K.V"  writes:

> On fork, we inherit from the parent and on exec, we should switch to 
> default_amr values.
>
> Also, avoid changing the AMR register value within the kernel. The kernel now 
> runs with
> different AMR values.
>
> Signed-off-by: Aneesh Kumar K.V 
> ---
>  arch/powerpc/include/asm/book3s/64/kup.h |  2 ++
>  arch/powerpc/kernel/process.c| 19 ++-
>  arch/powerpc/mm/book3s64/pkeys.c | 18 ++
>  3 files changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
> b/arch/powerpc/include/asm/book3s/64/kup.h
> index 67320a990f3f..fe1818954e51 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
> @@ -171,6 +171,8 @@
>  #include 
>  
>  extern u64 default_uamor;
> +extern u64 default_amr;
> +extern u64 default_iamr;
>  
>  static inline void kuap_restore_user_amr(struct pt_regs *regs)
>  {
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 9ef95a1217ef..0ab9a8cf1bcb 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1474,7 +1474,25 @@ void arch_setup_new_exec(void)
>   current->thread.regs = regs - 1;
>   }
>  
> +#ifdef CONFIG_PPC_MEM_KEYS
> + current->thread.regs->kuap  = default_amr;
> + current->thread.regs->kuep  = default_iamr;
> +#endif
> +
>  }
> +#else
> +void arch_setup_new_exec(void)
> +{
> + /*
> +  * If we exec out of a kernel thread then thread.regs will not be
> +  * set.  Do it now.
> +  */
> + if (!current->thread.regs) {
> + struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
> + current->thread.regs = regs - 1;
> + }
> +}
> +
>  #endif
>  
>  #ifdef CONFIG_PPC64
> @@ -1809,7 +1827,6 @@ void start_thread(struct pt_regs *regs, unsigned long 
> start, unsigned long sp)
>   current->thread.load_tm = 0;
>  #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
>  
> - thread_pkey_regs_init(&current->thread);
>  }
>  EXPORT_SYMBOL(start_thread);
>  
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c 
> b/arch/powerpc/mm/book3s64/pkeys.c
> index 976f65f27324..5012b57af808 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -20,8 +20,8 @@ int  max_pkey;  /* Maximum key value 
> supported */
>   */
>  u32  reserved_allocation_mask;
>  static u32  initial_allocation_mask;   /* Bits set for the initially 
> allocated keys */
> -static u64 default_amr;
> -static u64 default_iamr;
> +u64 default_amr;
> +u64 default_iamr;
>  /* Allow all keys to be modified by default */
>  u64 default_uamor = ~0x0UL;
>  /*
> @@ -387,20 +387,6 @@ void thread_pkey_regs_restore(struct thread_struct 
> *new_thread,
>   write_uamor(new_thread->uamor);
>  }
>  
> -void thread_pkey_regs_init(struct thread_struct *thread)
> -{
> - if (!mmu_has_feature(MMU_FTR_PKEY))
> - return;
> -
> - thread->amr   = default_amr;
> - thread->iamr  = default_iamr;
> - thread->uamor = default_uamor;
> -
> - write_amr(default_amr);
> - write_iamr(default_iamr);
> - write_uamor(default_uamor);
> -}
> -
>  int execute_only_pkey(struct mm_struct *mm)
>  {
>   if (static_branch_likely(_pkey_disabled))
> -- 
> 2.26.2

Needs this change to fix a build error.

---
 arch/powerpc/include/asm/thread_info.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h 
b/arch/powerpc/include/asm/thread_info.h
index ca6c97025704..9418dff1cfe1 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -77,10 +77,8 @@ struct thread_info {
 /* how to get the thread information struct from C */
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct 
*src);
 
-#ifdef CONFIG_PPC_BOOK3S_64
 void arch_setup_new_exec(void);
 #define arch_setup_new_exec arch_setup_new_exec
-#endif
 
 #endif /* __ASSEMBLY__ */
 
-- 
2.26.2



[PATCH 2/3] powerpc: Fix instruction dumping to use address value correctly

2020-05-07 Thread Aneesh Kumar K.V
Avoid updating the address value in the loop. If in real mode, convert the
address outside the loop. Also, use ___va(), which skips the input
validation, to convert the real address. We can get interrupts with IR=0
and with an NIP value > PAGE_OFFSET.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/processor.h |  9 +
 arch/powerpc/kernel/process.c| 18 ++
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h 
b/arch/powerpc/include/asm/processor.h
index 591987da44e2..4e7a5adc04df 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -436,6 +436,15 @@ extern void cvt_fd(float *from, double *to);
 extern void cvt_df(double *from, float *to);
 extern void _nmask_and_or_msr(unsigned long nmask, unsigned long or_val);
 
+static inline unsigned long fixup_real_addr(struct pt_regs *regs,
+   unsigned long addr)
+{
+   if (!(regs->msr & MSR_IR))
+   return (unsigned long)___va(addr);
+
+   return addr;
+}
+
 #ifdef CONFIG_PPC64
 /*
  * We handle most unaligned accesses in hardware. On the other hand 
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 682d421f..36cc03c33d25 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1238,29 +1238,31 @@ struct task_struct *__switch_to(struct task_struct 
*prev,
 static void show_instructions(struct pt_regs *regs)
 {
int i;
+   unsigned long nip = regs->nip;
unsigned long pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int));
 
printk("Instruction dump:");
 
+
+#if !defined(CONFIG_BOOKE)
+   /* If executing with the IMMU off, adjust pc rather
+* than print XXXXXXXX.
+*/
+   pc = fixup_real_addr(regs, pc);
+   nip = fixup_real_addr(regs, nip);
+#endif
for (i = 0; i < NR_INSN_TO_PRINT; i++) {
int instr;
 
if (!(i % 8))
pr_cont("\n");
 
-#if !defined(CONFIG_BOOKE)
-   /* If executing with the IMMU off, adjust pc rather
-* than print XXXXXXXX.
-*/
-   if (!(regs->msr & MSR_IR))
-   pc = (unsigned long)phys_to_virt(pc);
-#endif
 
if (!__kernel_text_address(pc) ||
probe_kernel_address((const void *)pc, instr)) {
pr_cont("XXXXXXXX ");
} else {
-   if (regs->nip == pc)
+   if (nip == pc)
pr_cont("<%08x> ", instr);
else
pr_cont("%08x ", instr);
-- 
2.26.2



[PATCH 3/3] powerpc: Avoid open coding fixup_real_addr

2020-05-07 Thread Aneesh Kumar K.V
Use the newly added helper.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/kernel/traps.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index a47fb49b7af8..503097c6aab2 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1479,12 +1479,10 @@ void program_check_exception(struct pt_regs *regs)
== NOTIFY_STOP)
goto bail;
 
-   bugaddr = regs->nip;
/*
 * Fixup bugaddr for BUG_ON() in real mode
 */
-   if (!is_kernel_addr(bugaddr) && !(regs->msr & MSR_IR))
-   bugaddr += PAGE_OFFSET;
+   bugaddr = fixup_real_addr(regs, regs->nip);
 
if (!(regs->msr & MSR_PR) &&  /* not user-mode */
report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
-- 
2.26.2



[PATCH 1/3] powerpc/va: Add a __va() variant that doesn't do input validation

2020-05-07 Thread Aneesh Kumar K.V
On ppc64, __va(x) checks that the input argument is less than PAGE_OFFSET.
In certain code paths, we want to skip that check. Add a variant ___va(x)
to be used in such cases.

Switch the #define to a static inline. __pa() still doesn't benefit from
this, but a static inline done in this patch is better than a multi-line
#define. For __va() we get the benefit of type checking. We still have to
keep the macro __pa(x) to avoid a large number of compilation errors with
the change.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/page.h | 38 -
 arch/powerpc/mm/nohash/book3e_pgtable.c |  2 +-
 2 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 3ee8df0f66e0..a3a2725a80ab 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -9,6 +9,7 @@
 #ifndef __ASSEMBLY__
 #include 
 #include 
+#include 
 #else
 #include 
 #endif
@@ -208,30 +209,41 @@ static inline bool pfn_valid(unsigned long pfn)
  * the other definitions for __va & __pa.
  */
 #if defined(CONFIG_PPC32) && defined(CONFIG_BOOKE)
-#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
+#define ___va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
 #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
+#define __va(x) ___va(x)
 #else
 #ifdef CONFIG_PPC64
+
+#ifndef __ASSEMBLY__
 /*
  * gcc miscompiles (unsigned long)(_var) - PAGE_OFFSET
  * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
  * This also results in better code generation.
  */
-#define __va(x)
\
-({ \
-   VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);  \
-   (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);\
-})
-
-#define __pa(x)
\
-({ \
-   VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);   \
-   (unsigned long)(x) & 0x0fffUL;  \
-})
+static inline void *___va(phys_addr_t addr)
+{
+   return (void *)(addr | PAGE_OFFSET);
+}
+
+static inline void *__va(phys_addr_t addr)
+{
+   VIRTUAL_BUG_ON((unsigned long)(addr) >= PAGE_OFFSET);
+   return ___va(addr);
+}
+
+static inline phys_addr_t ___pa(void *addr)
+{
+   VIRTUAL_BUG_ON((unsigned long)(addr) < PAGE_OFFSET);
+   return (phys_addr_t)((unsigned long)addr & 0x0fffUL);
+}
+#define __pa(x) ___pa((void *)(x))
+#endif /*  __ASSEMBLY__ */
 
 #else /* 32-bit, non book E */
-#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - 
MEMORY_START))
+#define ___va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - 
MEMORY_START))
 #define __pa(x) ((unsigned long)(x) - PAGE_OFFSET + MEMORY_START)
+#define __va(x) ___va(x)
 #endif
 #endif
 
diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c 
b/arch/powerpc/mm/nohash/book3e_pgtable.c
index 4637fdd469cf..a8ce309ce740 100644
--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
+++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
@@ -60,7 +60,7 @@ static void __init *early_alloc_pgtable(unsigned long size)
 
if (!ptr)
panic("%s: Failed to allocate %lu bytes align=0x%lx 
max_addr=%lx\n",
- __func__, size, size, __pa(MAX_DMA_ADDRESS));
+ __func__, size, size, (unsigned 
long)__pa(MAX_DMA_ADDRESS));
 
return ptr;
 }
-- 
2.26.2



Re: [PATCH] powerpc/uaccess: Don't use "m<>" constraint

2020-05-07 Thread Segher Boessenkool
On Thu, May 07, 2020 at 10:33:24PM +1000, Michael Ellerman wrote:
> The "m<>" constraint breaks compilation with GCC 4.6.x era compilers.
> 
> The use of the constraint allows the compiler to use update-form
> instructions, however in practice current compilers never generate
> those forms for any of the current uses of __put_user_asm_goto().
> 
> We anticipate that GCC 4.6 will be declared unsupported for building
> the kernel in the not too distant future. So for now just switch to
> the "m" constraint.
> 
> Fixes: 334710b1496a ("powerpc/uaccess: Implement unsafe_put_user() using 'asm 
> goto'")
> Signed-off-by: Michael Ellerman 

Acked-by: Segher Boessenkool 

Thanks!  So much trouble for what looked like such a simple change, all
those years ago :-(


Segher


Re: [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters

2020-05-07 Thread Tianjia Zhang




On 2020/5/5 16:39, Marc Zyngier wrote:

Hi Tianjia,

On 2020-04-27 05:35, Tianjia Zhang wrote:

In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
structure. For historical reasons, many kvm-related function parameters
retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
patch does a unified cleanup of these remaining redundant parameters.

Signed-off-by: Tianjia Zhang 


On the face of it, this looks OK, but I haven't tried to run the
resulting kernel. I'm not opposed to taking this patch *if* there
is an agreement across architectures to take the series (I value
consistency over the janitorial exercise).

Another thing is that this is going to conflict with the set of
patches that move the KVM/arm code back where it belongs (arch/arm64/kvm),
so I'd probably cherry-pick that one directly.

Thanks,

     M.



Do I need to submit this set of patches separately for each
architecture, or could it be merged at once? If necessary, I will
resubmit based on the latest mainline.

Thanks,
Tianjia


[PATCH] powerpc/uaccess: Don't use "m<>" constraint

2020-05-07 Thread Michael Ellerman
The "m<>" constraint breaks compilation with GCC 4.6.x era compilers.

The use of the constraint allows the compiler to use update-form
instructions, however in practice current compilers never generate
those forms for any of the current uses of __put_user_asm_goto().

We anticipate that GCC 4.6 will be declared unsupported for building
the kernel in the not too distant future. So for now just switch to
the "m" constraint.
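
For illustration only (example_put() is hypothetical, not kernel code):
with "m<>" the compiler may select an update-mode address, in which case
the "%U" modifier emits a 'u' suffix and an update-form store such as
stwu; with plain "m" only the base or indexed forms (stw/stwx) are
generated.

static inline void example_put(unsigned int x, unsigned int *p)
{
        /* hypothetical sketch mirroring the %U/%X usage above */
        asm volatile("stw%U0%X0 %1,%0" : "=m" (*p) : "r" (x));
}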

Fixes: 334710b1496a ("powerpc/uaccess: Implement unsafe_put_user() using 'asm 
goto'")
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/uaccess.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/uaccess.h 
b/arch/powerpc/include/asm/uaccess.h
index 62cc8d7640ec..164112007f54 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -210,7 +210,7 @@ do {
\
"1: " op "%U1%X1 %0,%1  # put_user\n"   \
EX_TABLE(1b, %l2)   \
:   \
-   : "r" (x), "m<>" (*addr)\
+   : "r" (x), "m" (*addr)  \
:   \
: label)
 
-- 
2.25.1



Re: [PATCH v4 02/14] arm: add support for folded p4d page tables

2020-05-07 Thread Marek Szyprowski
Hi

On 14.04.2020 17:34, Mike Rapoport wrote:
> From: Mike Rapoport 
>
> Implement primitives necessary for the 4th level folding, add walks of p4d
> level where appropriate, and remove __ARCH_USE_5LEVEL_HACK.
>
> Signed-off-by: Mike Rapoport 

Today I've noticed that kexec is broken on ARM 32bit. Bisecting between 
current linux-next and v5.7-rc1 pointed to this commit. I've tested this 
on Odroid XU4 and Raspberry Pi4 boards. Here is the relevant log:

# kexec --kexec-syscall -l zImage --append "$(cat /proc/cmdline)"
memory_range[0]:0x4000..0xbe9f
memory_range[0]:0x4000..0xbe9f
# kexec -e
kexec_core: Starting new kernel
8<--- cut here ---
Unable to handle kernel paging request at virtual address c010f1f4
pgd = c6817793
[c010f1f4] *pgd=441e(bad)
Internal error: Oops: 80d [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 1329 Comm: kexec Tainted: G    W 
5.7.0-rc3-00127-g6cba81ed0f62 #611
Hardware name: Samsung Exynos (Flattened Device Tree)
PC is at machine_kexec+0x40/0xfc
LR is at 0x
pc : []    lr : []    psr: 6013
sp : ebc13e60  ip : 40008000  fp : 0001
r10: 0058  r9 : fee1dead  r8 : 0001
r7 : c121387c  r6 : 6c224000  r5 : ece40c00  r4 : ec222000
r3 : c010f1f4  r2 : c110  r1 : c110  r0 : 418d
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 6bc14059  DAC: 0051
Process kexec (pid: 1329, stack limit = 0x366bb4dc)
Stack: (0xebc13e60 to 0xebc14000)
...
[] (machine_kexec) from [] (kernel_kexec+0x74/0x7c)
[] (kernel_kexec) from [] (__do_sys_reboot+0x1f8/0x210)
[] (__do_sys_reboot) from [] (ret_fast_syscall+0x0/0x28)
Exception stack(0xebc13fa8 to 0xebc13ff0)
...
---[ end trace 3e8d6c81723c778d ]---
1329 Segmentation fault  ./kexec -e

> ---
>   arch/arm/include/asm/pgtable.h |  1 -
>   arch/arm/lib/uaccess_with_memcpy.c |  7 +-
>   arch/arm/mach-sa1100/assabet.c |  2 +-
>   arch/arm/mm/dump.c | 29 +-
>   arch/arm/mm/fault-armv.c   |  7 +-
>   arch/arm/mm/fault.c| 22 ++--
>   arch/arm/mm/idmap.c|  3 ++-
>   arch/arm/mm/init.c |  2 +-
>   arch/arm/mm/ioremap.c  | 12 ++---
>   arch/arm/mm/mm.h   |  2 +-
>   arch/arm/mm/mmu.c  | 35 +-
>   arch/arm/mm/pgd.c  | 40 --
>   12 files changed, 125 insertions(+), 37 deletions(-)
>
> ...

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



Re: [PATCH -next] soc: fsl_asrc: Make some functions static

2020-05-07 Thread Mark Brown
On Thu, 7 May 2020 10:29:59 +0800, ChenTao wrote:
> Fix the following warning:
> 
> sound/soc/fsl/fsl_asrc.c:157:5: warning:
> symbol 'fsl_asrc_request_pair' was not declared. Should it be static?
> sound/soc/fsl/fsl_asrc.c:200:6: warning:
> symbol 'fsl_asrc_release_pair' was not declared. Should it be static?
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-5.8

Thanks!

[1/1] soc: fsl_asrc: Make some functions static
  commit: c16e923dd635d383026a306acea540b8e0706c88

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark


[PATCH v2 4/4] powerpc: Use trap metadata to prevent double restart rather than zeroing trap

2020-05-07 Thread Michael Ellerman
From: Nicholas Piggin 

It's not very nice to zero trap for this, because then system calls no
longer have the trap_is_syscall(regs) invariant, and we can't distinguish
between sc and scv system calls (in a later patch).

Take one last unused bit from the low bits of the pt_regs.trap word
for this instead. There is not a really good reason why it should be
in trap as opposed to another field, but trap has some concept of
flags and it exists. Ideally I think we would move trap to a 2-byte
field and have 2 more bytes available independently.

Add a selftest case for this, which can be seen to fail if
trap_norestart() is changed to return false.

Signed-off-by: Nicholas Piggin 
[mpe: Make them static inlines]
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/ptrace.h |  22 ++-
 arch/powerpc/kernel/signal.c  |   7 +-
 arch/powerpc/kernel/signal_32.c   |   2 +-
 arch/powerpc/kernel/signal_64.c   |  10 +-
 .../testing/selftests/powerpc/signal/Makefile |   2 +-
 .../powerpc/signal/sig_sc_double_restart.c| 174 ++
 6 files changed, 201 insertions(+), 16 deletions(-)
 create mode 100644 
tools/testing/selftests/powerpc/signal/sig_sc_double_restart.c

v2: mpe: Make them static inlines

diff --git a/arch/powerpc/include/asm/ptrace.h 
b/arch/powerpc/include/asm/ptrace.h
index 5db45790a087..b92877a81626 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -182,13 +182,13 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
 
 #ifdef __powerpc64__
 #ifdef CONFIG_PPC_BOOK3S
-#define TRAP_FLAGS_MASK0
-#define TRAP(regs) ((regs)->trap)
+#define TRAP_FLAGS_MASK0x10
+#define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)true
 #define SET_FULL_REGS(regs)do { } while (0)
 #else
-#define TRAP_FLAGS_MASK0x1
-#define TRAP(regs) ((regs)->trap & ~0x1)
+#define TRAP_FLAGS_MASK0x11
+#define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)(((regs)->trap & 1) == 0)
 #define SET_FULL_REGS(regs)((regs)->trap |= 1)
 #endif
@@ -202,8 +202,8 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
  * On 4xx we use the next bit to indicate whether the exception
  * is a critical exception (1 means it is).
  */
-#define TRAP_FLAGS_MASK0xF
-#define TRAP(regs) ((regs)->trap & ~0xF)
+#define TRAP_FLAGS_MASK0x1F
+#define TRAP(regs) ((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)(((regs)->trap & 1) == 0)
 #define SET_FULL_REGS(regs)((regs)->trap |= 1)
 #define IS_CRITICAL_EXC(regs)  (((regs)->trap & 2) != 0)
@@ -227,6 +227,16 @@ static inline bool trap_is_syscall(struct pt_regs *regs)
return TRAP(regs) == 0xc00;
 }
 
+static inline bool trap_norestart(struct pt_regs *regs)
+{
+   return regs->trap & 0x10;
+}
+
+static inline void set_trap_norestart(struct pt_regs *regs)
+{
+   regs->trap |= 16;
+}
+
 #define arch_has_single_step() (1)
 #ifndef CONFIG_BOOK3S_601
 #define arch_has_block_step()  (true)
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index f2be9e960c2e..a46c3fdb6853 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -201,6 +201,9 @@ static void check_syscall_restart(struct pt_regs *regs, 
struct k_sigaction *ka,
if (!trap_is_syscall(regs))
return;
 
+   if (trap_norestart(regs))
+   return;
+
/* error signalled ? */
if (!(regs->ccr & 0x1000))
return;
@@ -258,7 +261,7 @@ static void do_signal(struct task_struct *tsk)
if (ksig.sig <= 0) {
/* No signal to deliver -- put the saved sigmask back */
restore_saved_sigmask();
-   tsk->thread.regs->trap = 0;
+   set_trap_norestart(tsk->thread.regs);
return;   /* no signals delivered */
}
 
@@ -285,7 +288,7 @@ static void do_signal(struct task_struct *tsk)
ret = handle_rt_signal64(&ksig, oldset, tsk);
}
 
-   tsk->thread.regs->trap = 0;
+   set_trap_norestart(tsk->thread.regs);
signal_setup_done(ret, &ksig, test_thread_flag(TIF_SINGLESTEP));
 }
 
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 4f96d29a22bf..ae3da7440b2f 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -500,7 +500,7 @@ static long restore_user_regs(struct pt_regs *regs,
if (!sig)
save_r2 = (unsigned int)regs->gpr[2];
err = restore_general_regs(regs, sr);
-   regs->trap = 0;
+   set_trap_norestart(regs);
err |= __get_user(msr, &sr->mc_gregs[PT_MSR]);
if (!sig)
regs->gpr[2] = (unsigned long) save_r2;
diff 
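
To see the invariant concretely, here is a minimal stand-alone sketch (not
part of the series) of the same masking on a bare trap word, using the 64e
TRAP_FLAGS_MASK value from the ptrace.h hunk above:

#include <assert.h>

/* 64e case from the hunk above: full-regs bit (0x1) | norestart bit (0x10) */
#define TRAP_FLAGS_MASK	0x11
/* the kernel's TRAP(regs) masking, applied here to a bare trap word */
#define TRAP(trap)	((trap) & ~TRAP_FLAGS_MASK)

int main(void)
{
	unsigned long trap = 0xc00;	/* system call trap number */

	trap |= 0x10;			/* what set_trap_norestart() does */
	assert(TRAP(trap) == 0xc00);	/* trap_is_syscall() would still hold */
	assert(trap & 0x10);		/* trap_norestart() would report true */
	return 0;
}

With the old approach of writing 0 into trap, the first assertion would
fail, which is exactly the invariant the commit message wants to preserve.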

[PATCH v2 3/4] powerpc: trap_is_syscall() helper to hide syscall trap number

2020-05-07 Thread Michael Ellerman
From: Nicholas Piggin 

A new system call interrupt will be added with a new trap number.
Hide the explicit 0xc00 test behind an accessor to reduce churn
in callers.

Signed-off-by: Nicholas Piggin 
[mpe: Make it a static inline]
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/ptrace.h  | 5 +
 arch/powerpc/include/asm/syscall.h | 5 -
 arch/powerpc/kernel/process.c  | 2 +-
 arch/powerpc/kernel/signal.c   | 2 +-
 arch/powerpc/xmon/xmon.c   | 2 +-
 5 files changed, 12 insertions(+), 4 deletions(-)

v2: mpe: Make it a static inline

diff --git a/arch/powerpc/include/asm/ptrace.h 
b/arch/powerpc/include/asm/ptrace.h
index 7c585bddc06e..5db45790a087 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -222,6 +222,11 @@ static inline void set_trap(struct pt_regs *regs, unsigned 
long val)
regs->trap = (regs->trap & TRAP_FLAGS_MASK) | (val & ~TRAP_FLAGS_MASK);
 }
 
+static inline bool trap_is_syscall(struct pt_regs *regs)
+{
+   return TRAP(regs) == 0xc00;
+}
+
 #define arch_has_single_step() (1)
 #ifndef CONFIG_BOOK3S_601
 #define arch_has_block_step()  (true)
diff --git a/arch/powerpc/include/asm/syscall.h 
b/arch/powerpc/include/asm/syscall.h
index 38d62acfdce7..fd1b518eed17 100644
--- a/arch/powerpc/include/asm/syscall.h
+++ b/arch/powerpc/include/asm/syscall.h
@@ -26,7 +26,10 @@ static inline int syscall_get_nr(struct task_struct *task, 
struct pt_regs *regs)
 * This is important for seccomp so that compat tasks can set r0 = -1
 * to reject the syscall.
 */
-   return TRAP(regs) == 0xc00 ? regs->gpr[0] : -1;
+   if (trap_is_syscall(regs))
+   return regs->gpr[0];
+   else
+   return -1;
 }
 
 static inline void syscall_rollback(struct task_struct *task,
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 8af3583546b7..db766252238f 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1413,7 +1413,7 @@ void show_regs(struct pt_regs * regs)
print_msr_bits(regs->msr);
pr_cont("  CR: %08lx  XER: %08lx\n", regs->ccr, regs->xer);
trap = TRAP(regs);
-   if ((TRAP(regs) != 0xc00) && cpu_has_feature(CPU_FTR_CFAR))
+   if (!trap_is_syscall(regs) && cpu_has_feature(CPU_FTR_CFAR))
pr_cont("CFAR: "REG" ", regs->orig_gpr3);
if (trap == 0x200 || trap == 0x300 || trap == 0x600)
 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index a264989626fd..f2be9e960c2e 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -198,7 +198,7 @@ static void check_syscall_restart(struct pt_regs *regs, 
struct k_sigaction *ka,
int restart = 1;
 
/* syscall ? */
-   if (TRAP(regs) != 0x0C00)
+   if (!trap_is_syscall(regs))
return;
 
/* error signalled ? */
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 92761e47fb5c..a7430632bab4 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1776,7 +1776,7 @@ static void prregs(struct pt_regs *fp)
 #endif
printf("pc  = ");
xmon_print_symbol(fp->nip, " ", "\n");
-   if (TRAP(fp) != 0xc00 && cpu_has_feature(CPU_FTR_CFAR)) {
+   if (!trap_is_syscall(fp) && cpu_has_feature(CPU_FTR_CFAR)) {
printf("cfar= ");
xmon_print_symbol(fp->orig_gpr3, " ", "\n");
}
-- 
2.25.1
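
One way to read the "reduce churn" argument: once the callers above go
through the accessor, teaching the kernel about a second system call trap
number only touches the header. A rough sketch of such a follow-up is
below; the 0x3000 value for the scv interrupt is an assumption here, not
something this patch defines.

/* Rough sketch of a possible follow-up, not part of this patch. */
static inline bool trap_is_scv(struct pt_regs *regs)
{
	return TRAP(regs) == 0x3000;	/* assumed scv trap number */
}

static inline bool trap_is_syscall(struct pt_regs *regs)
{
	return trap_is_scv(regs) || TRAP(regs) == 0xc00;
}

With that shape, none of the call sites converted in syscall.h, process.c,
signal.c or xmon.c would need to change again.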



[PATCH v2 2/4] powerpc: Use set_trap() and avoid open-coding trap masking

2020-05-07 Thread Michael Ellerman
From: Nicholas Piggin 

The pt_regs.trap field keeps 4 low bits for some metadata about the
trap or how it was handled, which is masked off in order to test the
architectural trap number.

Add a set_trap() accessor to set this, equivalent to TRAP() for
returning it. This is actually not quite the equivalent of TRAP()
because it always clears the low bits, which may be harmless if
it can only be updated via ptrace syscall, but it seems dangerous.

In fact setting TRAP from ptrace doesn't seem like a great idea
so maybe it's better deleted.

Signed-off-by: Nicholas Piggin 
[mpe: Make it a static inline rather than a shouty macro]
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/ptrace.h| 8 
 arch/powerpc/kernel/ptrace/ptrace-tm.c   | 2 +-
 arch/powerpc/kernel/ptrace/ptrace-view.c | 2 +-
 arch/powerpc/xmon/xmon.c | 2 +-
 4 files changed, 11 insertions(+), 3 deletions(-)

v2: mpe: Make it a static inline rather than a shouty macro

diff --git a/arch/powerpc/include/asm/ptrace.h 
b/arch/powerpc/include/asm/ptrace.h
index 89f31d5a8062..7c585bddc06e 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -182,10 +182,12 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
 
 #ifdef __powerpc64__
 #ifdef CONFIG_PPC_BOOK3S
+#define TRAP_FLAGS_MASK0
 #define TRAP(regs) ((regs)->trap)
 #define FULL_REGS(regs)true
 #define SET_FULL_REGS(regs)do { } while (0)
 #else
+#define TRAP_FLAGS_MASK0x1
 #define TRAP(regs) ((regs)->trap & ~0x1)
 #define FULL_REGS(regs)(((regs)->trap & 1) == 0)
 #define SET_FULL_REGS(regs)((regs)->trap |= 1)
@@ -200,6 +202,7 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
  * On 4xx we use the next bit to indicate whether the exception
  * is a critical exception (1 means it is).
  */
+#define TRAP_FLAGS_MASK0xF
 #define TRAP(regs) ((regs)->trap & ~0xF)
 #define FULL_REGS(regs)(((regs)->trap & 1) == 0)
 #define SET_FULL_REGS(regs)((regs)->trap |= 1)
@@ -214,6 +217,11 @@ do {   
  \
 } while (0)
 #endif /* __powerpc64__ */
 
+static inline void set_trap(struct pt_regs *regs, unsigned long val)
+{
+   regs->trap = (regs->trap & TRAP_FLAGS_MASK) | (val & ~TRAP_FLAGS_MASK);
+}
+
 #define arch_has_single_step() (1)
 #ifndef CONFIG_BOOK3S_601
 #define arch_has_block_step()  (true)
diff --git a/arch/powerpc/kernel/ptrace/ptrace-tm.c 
b/arch/powerpc/kernel/ptrace/ptrace-tm.c
index d75aff31f637..32d62c606681 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-tm.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-tm.c
@@ -43,7 +43,7 @@ static int set_user_ckpt_msr(struct task_struct *task, 
unsigned long msr)
 
 static int set_user_ckpt_trap(struct task_struct *task, unsigned long trap)
 {
-   task->thread.ckpt_regs.trap = trap & 0xfff0;
+   set_trap(&task->thread.ckpt_regs, trap);
return 0;
 }
 
diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c 
b/arch/powerpc/kernel/ptrace/ptrace-view.c
index 15e3b79b6395..caeb5822a8f4 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
@@ -149,7 +149,7 @@ static int set_user_dscr(struct task_struct *task, unsigned 
long dscr)
  */
 static int set_user_trap(struct task_struct *task, unsigned long trap)
 {
-   task->thread.regs->trap = trap & 0xfff0;
+   set_trap(task->thread.regs, trap);
return 0;
 }
 
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 7af840c0fc93..92761e47fb5c 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1178,7 +1178,7 @@ static int do_step(struct pt_regs *regs)
return 0;
}
if (stepped > 0) {
-   regs->trap = 0xd00 | (regs->trap & 1);
+   set_trap(regs, 0xd00);
printf("stepped to ");
xmon_print_symbol(regs->nip, " ", "\n");
ppc_inst_dump(regs->nip, 1, 0);
-- 
2.25.1
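
The accessor added above keeps whatever flag bits are already in
regs->trap and drops any stray low bits from the incoming value; the
arithmetic is easier to see on a bare trap word. A small stand-alone
sketch, assuming the 32-bit/4xx TRAP_FLAGS_MASK of 0xF:

#include <assert.h>

#define TRAP_FLAGS_MASK	0xF	/* 32-bit / 4xx case */

/* the new accessor's expression, applied to a bare trap word */
static unsigned long set_trap_word(unsigned long trap, unsigned long val)
{
	return (trap & TRAP_FLAGS_MASK) | (val & ~TRAP_FLAGS_MASK);
}

int main(void)
{
	unsigned long trap = 0x701;		/* trap 0x700, flag bit 0 set */

	trap = set_trap_word(trap, 0xd05);	/* stray low bits of val dropped */
	assert(trap == 0xd01);			/* flags kept, trap number replaced */
	return 0;
}

The old ptrace store, trap & 0xfff0, would have produced 0xd00 and lost
the saved flag bits, while the xmon version preserved only bit 0 by hand.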



[PATCH v2 1/4] powerpc/64s: Always has full regs, so remove remnant checks

2020-05-07 Thread Michael Ellerman
From: Nicholas Piggin 

Signed-off-by: Nicholas Piggin 
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/ptrace.h | 23 ---
 arch/powerpc/kernel/process.c |  2 +-
 2 files changed, 17 insertions(+), 8 deletions(-)

v2: Unchanged.

diff --git a/arch/powerpc/include/asm/ptrace.h 
b/arch/powerpc/include/asm/ptrace.h
index e0195e6b892b..89f31d5a8062 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -179,6 +179,20 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
 
 #define current_pt_regs() \
((struct pt_regs *)((unsigned long)task_stack_page(current) + 
THREAD_SIZE) - 1)
+
+#ifdef __powerpc64__
+#ifdef CONFIG_PPC_BOOK3S
+#define TRAP(regs) ((regs)->trap)
+#define FULL_REGS(regs)true
+#define SET_FULL_REGS(regs)do { } while (0)
+#else
+#define TRAP(regs) ((regs)->trap & ~0x1)
+#define FULL_REGS(regs)(((regs)->trap & 1) == 0)
+#define SET_FULL_REGS(regs)((regs)->trap |= 1)
+#endif
+#define CHECK_FULL_REGS(regs)  BUG_ON(!FULL_REGS(regs))
+#define NV_REG_POISON  0xdeadbeefdeadbeefUL
+#else
 /*
  * We use the least-significant bit of the trap field to indicate
  * whether we have saved the full set of registers, or only a
@@ -186,17 +200,12 @@ extern int ptrace_put_reg(struct task_struct *task, int 
regno,
  * On 4xx we use the next bit to indicate whether the exception
  * is a critical exception (1 means it is).
  */
+#define TRAP(regs) ((regs)->trap & ~0xF)
 #define FULL_REGS(regs)(((regs)->trap & 1) == 0)
-#ifndef __powerpc64__
+#define SET_FULL_REGS(regs)((regs)->trap |= 1)
 #define IS_CRITICAL_EXC(regs)  (((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs)(((regs)->trap & 4) != 0)
 #define IS_DEBUG_EXC(regs) (((regs)->trap & 8) != 0)
-#endif /* ! __powerpc64__ */
-#define TRAP(regs) ((regs)->trap & ~0xF)
-#ifdef __powerpc64__
-#define NV_REG_POISON  0xdeadbeefdeadbeefUL
-#define CHECK_FULL_REGS(regs)  BUG_ON(regs->trap & 1)
-#else
 #define NV_REG_POISON  0xdeadbeef
 #define CHECK_FULL_REGS(regs)\
 do { \
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 8479c762aef2..8af3583546b7 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1720,7 +1720,7 @@ void start_thread(struct pt_regs *regs, unsigned long 
start, unsigned long sp)
 * FULL_REGS(regs) return true.  This is necessary to allow
 * ptrace to examine the thread immediately after exec.
 */
-   regs->trap &= ~1UL;
+   SET_FULL_REGS(regs);
 
 #ifdef CONFIG_PPC32
regs->mq = 0;
-- 
2.25.1



Re: [PATCH net 11/16] net: ethernet: marvell: mvneta: fix fixed-link phydev leaks

2020-05-07 Thread Greg Kroah-Hartman
On Thu, May 07, 2020 at 08:47:34AM +0200, Greg Kroah-Hartman wrote:
> On Thu, May 07, 2020 at 08:44:12AM +0200, Johan Hovold wrote:
> > On Thu, May 07, 2020 at 12:27:53AM +0530, Naresh Kamboju wrote:
> > > On Tue, 29 Nov 2016 at 00:00, Johan Hovold  wrote:
> > > >
> > > > Make sure to deregister and free any fixed-link PHY registered using
> > > > of_phy_register_fixed_link() on probe errors and on driver unbind.
> > > >
> > > > Fixes: 83895bedeee6 ("net: mvneta: add support for fixed links")
> > > > Signed-off-by: Johan Hovold 
> > > > ---
> > > >  drivers/net/ethernet/marvell/mvneta.c | 5 +
> > > >  1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> > > > b/drivers/net/ethernet/marvell/mvneta.c
> > > > index 0c0a45af950f..707bc4680b9b 100644
> > > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > > @@ -4191,6 +4191,8 @@ static int mvneta_probe(struct platform_device 
> > > > *pdev)
> > > > clk_disable_unprepare(pp->clk);
> > > >  err_put_phy_node:
> > > > of_node_put(phy_node);
> > > > +   if (of_phy_is_fixed_link(dn))
> > > > +   of_phy_deregister_fixed_link(dn);
> > > 
> > > While building kernel Image for arm architecture on stable-rc 4.4 branch
> > > the following build error found.
> > > 
> > > drivers/net/ethernet/marvell/mvneta.c:3442:3: error: implicit
> > > declaration of function 'of_phy_deregister_fixed_link'; did you mean
> > > 'of_phy_register_fixed_link'? [-Werror=implicit-function-declaration]
> > > |of_phy_deregister_fixed_link(dn);
> > > |^~~~
> > > |of_phy_register_fixed_link
> > > 
> > > ref:
> > > https://gitlab.com/Linaro/lkft/kernel-runs/-/jobs/541374729
> > 
> > Greg, 3f65047c853a ("of_mdio: add helper to deregister fixed-link
> > PHYs") needs to be backported as well for these.
> > 
> > Original series can be found here:
> > 
> > 
> > https://lkml.kernel.org/r/1480357509-28074-1-git-send-email-jo...@kernel.org
> 
> Ah, thanks for that, I thought I dropped all of the ones that caused
> build errors, but missed the above one.  I'll go take the whole series
> instead.

This should now all be fixed up, thanks.

greg k-h


Re: [PATCH -next] ALSA: sound/ppc: Use bitwise instead of arithmetic operator for flags

2020-05-07 Thread Takashi Iwai
On Thu, 07 May 2020 05:54:07 +0200,
Samuel Zou wrote:
> 
> Fix the following coccinelle warnings:
> 
> sound/ppc/pmac.c:729:57-58: WARNING: sum of probable bitmasks, consider |
> sound/ppc/pmac.c:229:37-38: WARNING: sum of probable bitmasks, consider |
> 
> Reported-by: Hulk Robot 
> Signed-off-by: Samuel Zou 

Applied, thanks.


Takashi


Re: [PATCH] powerpc/spufs: Add rcu_read_lock() around fcheck()

2020-05-07 Thread Christoph Hellwig
On Wed, Apr 29, 2020 at 09:42:39PM +1000, Michael Ellerman wrote:
> Christoph Hellwig  writes:
> > On Tue, Apr 28, 2020 at 09:48:11PM +1000, Michael Ellerman wrote:
> >> 
> >> This comes from fcheck_files() via fcheck().
> >> 
> >> It's pretty clearly documented that fcheck() must be wrapped with
> >> rcu_read_lock(), so fix it.
> >
> > But for this to actually be useful you'd need the rcu read lock until
> > you are done with the file (or got a reference).
> 
> Hmm OK. My reasoning was that we were done with the struct file, because
> we return the ctx that's hanging off the inode.
> 
> + ctx = SPUFS_I(file_inode(file))->i_ctx;
> 
> But I guess the lifetime of the ctx is not guaranteed if the file goes
> away.
> 
> It looks like the only long lived reference on the ctx is the one
> taken in spufs_new_file() and dropped in spufs_evict_inode().
> 
> So if we take a reference to the ctx with the RCU lock held we should be
> safe, I think. But I've definitely exhausted my spufs/vfs knowledge at
> this point.
> 
> Something like below.

Looks reasonable.
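
For readers following along, the pattern being discussed is roughly the
sketch below: do the fd lookup under rcu_read_lock() and pin the context
before dropping the lock. This is only an illustration of the idea, not
the eventual patch; get_spu_context()/put_spu_context() are assumed to be
the existing spufs kref helpers, and the caller is assumed to have already
checked that the file really is a spufs context file.

	struct spu_context *ctx = NULL;
	struct file *file;

	rcu_read_lock();
	file = fcheck(fd);
	if (file)
		/* take a reference so ctx outlives the RCU section */
		ctx = get_spu_context(SPUFS_I(file_inode(file))->i_ctx);
	rcu_read_unlock();

	if (ctx) {
		/* ... use ctx ... */
		put_spu_context(ctx);
	}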


Re: [PATCH net 11/16] net: ethernet: marvell: mvneta: fix fixed-link phydev leaks

2020-05-07 Thread Greg Kroah-Hartman
On Thu, May 07, 2020 at 08:44:12AM +0200, Johan Hovold wrote:
> On Thu, May 07, 2020 at 12:27:53AM +0530, Naresh Kamboju wrote:
> > On Tue, 29 Nov 2016 at 00:00, Johan Hovold  wrote:
> > >
> > > Make sure to deregister and free any fixed-link PHY registered using
> > > of_phy_register_fixed_link() on probe errors and on driver unbind.
> > >
> > > Fixes: 83895bedeee6 ("net: mvneta: add support for fixed links")
> > > Signed-off-by: Johan Hovold 
> > > ---
> > >  drivers/net/ethernet/marvell/mvneta.c | 5 +
> > >  1 file changed, 5 insertions(+)
> > >
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> > > b/drivers/net/ethernet/marvell/mvneta.c
> > > index 0c0a45af950f..707bc4680b9b 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > @@ -4191,6 +4191,8 @@ static int mvneta_probe(struct platform_device 
> > > *pdev)
> > > clk_disable_unprepare(pp->clk);
> > >  err_put_phy_node:
> > > of_node_put(phy_node);
> > > +   if (of_phy_is_fixed_link(dn))
> > > +   of_phy_deregister_fixed_link(dn);
> > 
> > While building kernel Image for arm architecture on stable-rc 4.4 branch
> > the following build error found.
> > 
> > drivers/net/ethernet/marvell/mvneta.c:3442:3: error: implicit
> > declaration of function 'of_phy_deregister_fixed_link'; did you mean
> > 'of_phy_register_fixed_link'? [-Werror=implicit-function-declaration]
> > |of_phy_deregister_fixed_link(dn);
> > |^~~~
> > |of_phy_register_fixed_link
> > 
> > ref:
> > https://gitlab.com/Linaro/lkft/kernel-runs/-/jobs/541374729
> 
> Greg, 3f65047c853a ("of_mdio: add helper to deregister fixed-link
> PHYs") needs to be backported as well for these.
> 
> Original series can be found here:
> 
>   
> https://lkml.kernel.org/r/1480357509-28074-1-git-send-email-jo...@kernel.org

Ah, thanks for that, I thought I dropped all of the ones that caused
build errors, but missed the above one.  I'll go take the whole series
instead.

greg k-h


Re: [PATCH net 11/16] net: ethernet: marvell: mvneta: fix fixed-link phydev leaks

2020-05-07 Thread Johan Hovold
On Thu, May 07, 2020 at 12:27:53AM +0530, Naresh Kamboju wrote:
> On Tue, 29 Nov 2016 at 00:00, Johan Hovold  wrote:
> >
> > Make sure to deregister and free any fixed-link PHY registered using
> > of_phy_register_fixed_link() on probe errors and on driver unbind.
> >
> > Fixes: 83895bedeee6 ("net: mvneta: add support for fixed links")
> > Signed-off-by: Johan Hovold 
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 5 +
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> > b/drivers/net/ethernet/marvell/mvneta.c
> > index 0c0a45af950f..707bc4680b9b 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -4191,6 +4191,8 @@ static int mvneta_probe(struct platform_device *pdev)
> > clk_disable_unprepare(pp->clk);
> >  err_put_phy_node:
> > of_node_put(phy_node);
> > +   if (of_phy_is_fixed_link(dn))
> > +   of_phy_deregister_fixed_link(dn);
> 
> While building kernel Image for arm architecture on stable-rc 4.4 branch
> the following build error found.
> 
> drivers/net/ethernet/marvell/mvneta.c:3442:3: error: implicit
> declaration of function 'of_phy_deregister_fixed_link'; did you mean
> 'of_phy_register_fixed_link'? [-Werror=implicit-function-declaration]
> |of_phy_deregister_fixed_link(dn);
> |^~~~
> |of_phy_register_fixed_link
> 
> ref:
> https://gitlab.com/Linaro/lkft/kernel-runs/-/jobs/541374729

Greg, 3f65047c853a ("of_mdio: add helper to deregister fixed-link
PHYs") needs to be backported as well for these.

Original series can be found here:


https://lkml.kernel.org/r/1480357509-28074-1-git-send-email-jo...@kernel.org

Johan