[PATCH] powerpc/epapr: Move register keyword at the beginning of declaration

2018-01-30 Thread Mathieu Malaterre
Fix the warning emitted for each register unsigned long declaration (r0, r3-r12)
during a W=1 compilation:

./arch/powerpc/include/asm/epapr_hcalls.h:479:2: warning: ‘register’ is not at beginning of declaration [-Wold-style-declaration]
  unsigned long register r[\d] asm("r[\d]");
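
For reference, a minimal standalone sketch (not taken from the kernel tree; the
function names are illustrative and it assumes a powerpc target because of the
"r3" register name) showing the ordering GCC warns about and the ordering it
accepts cleanly under W=1:

/* Illustrative only -- not from the patch. */
unsigned long old_style(unsigned long in)
{
	unsigned long register r3 asm("r3") = in;	/* warns: 'register' is not at beginning of declaration */

	asm volatile("nop" : "+r"(r3));
	return r3;
}

unsigned long new_style(unsigned long in)
{
	register unsigned long r3 asm("r3") = in;	/* clean: storage-class specifier comes first */

	asm volatile("nop" : "+r"(r3));
	return r3;
}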

Signed-off-by: Mathieu Malaterre 
---
 arch/powerpc/include/asm/epapr_hcalls.h | 22 +++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/epapr_hcalls.h 
b/arch/powerpc/include/asm/epapr_hcalls.h
index 90863245df53..d3a7e36f1402 100644
--- a/arch/powerpc/include/asm/epapr_hcalls.h
+++ b/arch/powerpc/include/asm/epapr_hcalls.h
@@ -466,17 +466,17 @@ static inline unsigned long epapr_hypercall(unsigned long 
*in,
unsigned long *out,
unsigned long nr)
 {
-   unsigned long register r0 asm("r0");
-   unsigned long register r3 asm("r3") = in[0];
-   unsigned long register r4 asm("r4") = in[1];
-   unsigned long register r5 asm("r5") = in[2];
-   unsigned long register r6 asm("r6") = in[3];
-   unsigned long register r7 asm("r7") = in[4];
-   unsigned long register r8 asm("r8") = in[5];
-   unsigned long register r9 asm("r9") = in[6];
-   unsigned long register r10 asm("r10") = in[7];
-   unsigned long register r11 asm("r11") = nr;
-   unsigned long register r12 asm("r12");
+   register unsigned long r0 asm("r0");
+   register unsigned long r3 asm("r3") = in[0];
+   register unsigned long r4 asm("r4") = in[1];
+   register unsigned long r5 asm("r5") = in[2];
+   register unsigned long r6 asm("r6") = in[3];
+   register unsigned long r7 asm("r7") = in[4];
+   register unsigned long r8 asm("r8") = in[5];
+   register unsigned long r9 asm("r9") = in[6];
+   register unsigned long r10 asm("r10") = in[7];
+   register unsigned long r11 asm("r11") = nr;
+   register unsigned long r12 asm("r12");
 
	asm volatile("bl	epapr_hypercall_start"
 : "=r"(r0), "=r"(r3), "=r"(r4), "=r"(r5), "=r"(r6),
-- 
2.11.0



Re: [PATCH v11 0/3] mm, x86, powerpc: Enhancements to Memory Protection Keys.

2018-01-30 Thread Ingo Molnar

* Ram Pai  wrote:

> This patch series provides arch-neutral enhancements to
> enable memory-keys on new architectures, and the corresponding
> changes in x86 and powerpc specific code to support that.
> 
> a) Provides the ability to support up to 32 keys.  PowerPC
>    can handle 32 keys and hence needs this.
> 
> b) Arch-neutral code, and not the arch-specific code,
>    determines the format of the string that displays the key
>    for each vma in smaps.
> 
> PowerPC implementation of memory-keys is now in powerpc/next tree.
> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=next&id=92e3da3cf193fd27996909956c12a23c0333da44

All three patches look sane to me. If you would like to carry these generic bits
in the PowerPC tree as well then:

  Reviewed-by: Ingo Molnar 

Thanks,

Ingo


[PATCH] KVM: PPC: Book3S PR: close a race window when SVCPU pointer is held before kvmppc_copy_from_svcpu()

2018-01-30 Thread wei . guo . simon
From: Simon Guo 

Commit 40fdd8c88c4a ("KVM: PPC: Book3S: PR: Make svcpu -> vcpu store
preempt savvy") and commit 3d3319b45eea ("KVM: PPC: Book3S: PR: Enable
interrupts earlier") try to turn on preemption early when returning
to the highmem guest exit handler.

However, there is a race window in the following code in
arch/powerpc/kvm/book3s_interrupts.S:

highmem guest exit handler:
...
195 GET_SHADOW_VCPU(r4)
196 bl  FUNC(kvmppc_copy_from_svcpu)
...
239 bl  FUNC(kvmppc_handle_exit_pr)

If preemption occurs between lines 195 and 196, the call at line 196 may
operate on a stale SVCPU reference, as in the following sequence:
1) Qemu task T1 runs GET_SHADOW_VCPU(r4) at line 195 on CPU A.
2) T1 is preempted and switched out of CPU A. As a result, the switch-out
path checks CPU A's svcpu->in_use (1 at this point) and flushes CPU A's
svcpu into T1's vcpu.
3) Another task T2 is switched in on CPU A and may set CPU A's
svcpu->in_use back to 1.
4) T1 is then scheduled onto CPU B, but it still holds CPU A's svcpu
reference in r4. It executes kvmppc_copy_from_svcpu() with that r4 and
corrupts T1's vcpu with T2's content. T2's vcpu is also impacted.

This patch moves the in_use flag from the svcpu into the vcpu so that
vcpus sharing the same svcpu work properly, fixing the above case.
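
A much-simplified sketch (types and names below are invented, not the kernel's)
of why keying the flag off the task-local vcpu rather than the per-physical-CPU
svcpu closes the window:

/* Simplified model, invented types -- not kernel code. */
#include <stdbool.h>

struct svcpu_model { int scratch; };		/* one per physical CPU */
struct vcpu_model  { bool svcpu_in_use; };	/* one per guest vcpu, travels with the task */

static void copy_from_svcpu_model(struct vcpu_model *v, struct svcpu_model *s)
{
	/*
	 * Before the patch the flag lived in the per-CPU shadow struct, so
	 * after T1 migrated from CPU A to CPU B it could observe a value that
	 * T2 had set on CPU A.  With the flag in the vcpu, preemption or
	 * migration cannot make this copy-back act on state that belongs to
	 * another task's vcpu.
	 */
	if (!v->svcpu_in_use)
		return;
	(void)s;	/* register copy from s to v elided in this sketch */
	v->svcpu_in_use = false;
}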

Signed-off-by: Simon Guo 
---
 arch/powerpc/include/asm/kvm_book3s_asm.h |  1 -
 arch/powerpc/include/asm/kvm_host.h   |  4 
 arch/powerpc/kvm/book3s_pr.c  | 12 ++--
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_asm.h 
b/arch/powerpc/include/asm/kvm_book3s_asm.h
index ab386af..9a8ef23 100644
--- a/arch/powerpc/include/asm/kvm_book3s_asm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_asm.h
@@ -142,7 +142,6 @@ struct kvmppc_host_state {
 };
 
 struct kvmppc_book3s_shadow_vcpu {
-   bool in_use;
ulong gpr[14];
u32 cr;
ulong xer;
diff --git a/arch/powerpc/include/asm/kvm_host.h 
b/arch/powerpc/include/asm/kvm_host.h
index 3aa5b57..4f54daf 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -781,6 +781,10 @@ struct kvm_vcpu_arch {
struct dentry *debugfs_dir;
struct dentry *debugfs_timings;
 #endif /* CONFIG_KVM_BOOK3S_HV_EXIT_TIMING */
+   bool svcpu_in_use; /* indicates whether the current vcpu still needs to
+                       * copy svcpu content back to the vcpu.
+                       * false: no copy needed; true: copy needed
+                       */
 };
 
 #define VCPU_FPR(vcpu, i)  (vcpu)->arch.fp.fpr[i][TS_FPROFFSET]
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 7deaeeb..d791142 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -98,7 +98,7 @@ static void kvmppc_core_vcpu_load_pr(struct kvm_vcpu *vcpu, 
int cpu)
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
memcpy(svcpu->slb, to_book3s(vcpu)->slb_shadow, sizeof(svcpu->slb));
svcpu->slb_max = to_book3s(vcpu)->slb_shadow_max;
-   svcpu->in_use = 0;
+   vcpu->arch.svcpu_in_use = 0;
svcpu_put(svcpu);
 #endif
 
@@ -120,9 +120,9 @@ static void kvmppc_core_vcpu_put_pr(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_PPC_BOOK3S_64
struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
-   if (svcpu->in_use) {
+   if (vcpu->arch.svcpu_in_use)
kvmppc_copy_from_svcpu(vcpu, svcpu);
-   }
+
memcpy(to_book3s(vcpu)->slb_shadow, svcpu->slb, sizeof(svcpu->slb));
to_book3s(vcpu)->slb_shadow_max = svcpu->slb_max;
svcpu_put(svcpu);
@@ -176,7 +176,7 @@ void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu 
*svcpu,
vcpu->arch.entry_vtb = get_vtb();
if (cpu_has_feature(CPU_FTR_ARCH_207S))
vcpu->arch.entry_ic = mfspr(SPRN_IC);
-   svcpu->in_use = true;
+   vcpu->arch.svcpu_in_use = true;
 }
 
 /* Copy data touched by real-mode code from shadow vcpu back to vcpu */
@@ -193,7 +193,7 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 * Maybe we were already preempted and synced the svcpu from
 * our preempt notifiers. Don't bother touching this svcpu then.
 */
-   if (!svcpu->in_use)
+   if (!vcpu->arch.svcpu_in_use)
goto out;
 
vcpu->arch.gpr[0] = svcpu->gpr[0];
@@ -230,7 +230,7 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
to_book3s(vcpu)->vtb += get_vtb() - vcpu->arch.entry_vtb;
if (cpu_has_feature(CPU_FTR_ARCH_207S))
vcpu->arch.ic += mfspr(SPRN_IC) - vcpu->arch.entry_ic;
-   svcpu->in_use = false;
+   vcpu->arch.svcpu_in_use = false;
 
 out:
preempt_enable();
-- 
1.8.3.1



Re: [PATCH] ibmvfc: fix misdefined reserved field in ibmvfc_fcp_rsp_info

2018-01-30 Thread Martin K. Petersen

Tyrel,

> The fcp_rsp_info structure as defined in the FC spec has an initial 3-byte
> reserved field. The ibmvfc driver mistakenly defined this field as 4 bytes,
> resulting in the rsp_code field being defined in what should be the start of
> the second reserved field and thus always being reported as zero by the
> driver.
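
For context, a sketch of the layout mismatch described above (illustrative
declarations following the FC spec wording, not the driver's actual structs):

/* Illustrative only -- not the ibmvfc driver's declarations. */
struct rsp_info_wrong {
	unsigned char reserved[4];	/* one byte too many ...                       */
	unsigned char rsp_code;		/* ... so this reads a spec-reserved byte: 0   */
};

struct rsp_info_per_spec {
	unsigned char reserved[3];	/* FC spec: 3 initial reserved bytes */
	unsigned char rsp_code;		/* byte 3: the response code         */
};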

Applied to 4.16/scsi-fixes, thanks!

-- 
Martin K. Petersen  Oracle Linux Engineering


Re: [PATCH] ocxl: fix signed comparison with less than zero

2018-01-30 Thread Andrew Donnellan

On 31/01/18 02:11, Colin King wrote:

From: Colin Ian King 

Currently the comparison of used < 0 is always false because
used is a size_t. Fix this by making used a ssize_t type.

Detected by Coccinelle:
drivers/misc/ocxl/file.c:320:6-10: WARNING: Unsigned expression
compared with zero: used < 0

Fixes: 5ef3166e8a32 ("ocxl: Driver code for 'generic' opencapi devices")
Signed-off-by: Colin Ian King 


Thanks for picking this up!

Acked-by: Andrew Donnellan 


---
  drivers/misc/ocxl/file.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
index c90c1a578d2f..1287e4430e6b 100644
--- a/drivers/misc/ocxl/file.c
+++ b/drivers/misc/ocxl/file.c
@@ -277,7 +277,7 @@ static ssize_t afu_read(struct file *file, char __user 
*buf, size_t count,
struct ocxl_context *ctx = file->private_data;
struct ocxl_kernel_event_header header;
ssize_t rc;
-   size_t used = 0;
+   ssize_t used = 0;
DEFINE_WAIT(event_wait);
  
	memset(&header, 0, sizeof(header));




--
Andrew Donnellan  OzLabs, ADL Canberra
andrew.donnel...@au1.ibm.com  IBM Australia Limited



[PATCH v11 3/3] mm, x86: display pkey in smaps only if arch supports pkeys

2018-01-30 Thread Ram Pai
Currently the architecture-specific code is expected to
display the protection keys in smaps for a given vma.
This can lead to redundant code and possibly to divergent
formats in which the key gets displayed.

This patch changes the implementation. It displays the
pkey only if the architecture supports pkeys, i.e. if
arch_pkeys_enabled() returns true. The patch provides
the x86 implementation of arch_pkeys_enabled().

The x86 arch_show_smap() function is not needed anymore,
so delete it.

Signed-off-by: Ram Pai 
---
 arch/x86/include/asm/pkeys.h |1 +
 arch/x86/kernel/fpu/xstate.c |5 +
 arch/x86/kernel/setup.c  |8 
 fs/proc/task_mmu.c   |9 -
 include/linux/pkeys.h|6 ++
 5 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
index a0ba1ff..f6c287b 100644
--- a/arch/x86/include/asm/pkeys.h
+++ b/arch/x86/include/asm/pkeys.h
@@ -6,6 +6,7 @@
 
 extern int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
unsigned long init_val);
+extern bool arch_pkeys_enabled(void);
 
 /*
  * Try to dedicate one of the protection keys to be used as an
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 87a57b7..4f566e9 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -945,6 +945,11 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int 
pkey,
 
return 0;
 }
+
+bool arch_pkeys_enabled(void)
+{
+   return boot_cpu_has(X86_FEATURE_OSPKE);
+}
 #endif /* ! CONFIG_ARCH_HAS_PKEYS */
 
 /*
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 8af2e8d..ddf945a 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1326,11 +1326,3 @@ static int __init register_kernel_offset_dumper(void)
return 0;
 }
 __initcall(register_kernel_offset_dumper);
-
-void arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
-{
-   if (!boot_cpu_has(X86_FEATURE_OSPKE))
-   return;
-
-   seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
-}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 0edd4da..6f9fbde 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -728,10 +729,6 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long 
hmask,
 }
 #endif /* HUGETLB_PAGE */
 
-void __weak arch_show_smap(struct seq_file *m, struct vm_area_struct *vma)
-{
-}
-
 static int show_smap(struct seq_file *m, void *v, int is_pid)
 {
struct proc_maps_private *priv = m->private;
@@ -851,9 +848,11 @@ static int show_smap(struct seq_file *m, void *v, int 
is_pid)
   (unsigned long)(mss->pss >> (10 + PSS_SHIFT)));
 
if (!rollup_mode) {
-   arch_show_smap(m, vma);
+   if (arch_pkeys_enabled())
+   seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
show_smap_vma_flags(m, vma);
}
+
m_cache_vma(m, vma);
return ret;
 }
diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
index 0794ca7..dfdc609 100644
--- a/include/linux/pkeys.h
+++ b/include/linux/pkeys.h
@@ -13,6 +13,7 @@
 #define arch_override_mprotect_pkey(vma, prot, pkey) (0)
 #define PKEY_DEDICATED_EXECUTE_ONLY 0
 #define ARCH_VM_PKEY_FLAGS 0
+#define vma_pkey(vma) 0
 
 static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
 {
@@ -35,6 +36,11 @@ static inline int arch_set_user_pkey_access(struct 
task_struct *tsk, int pkey,
return 0;
 }
 
+static inline bool arch_pkeys_enabled(void)
+{
+   return false;
+}
+
 static inline void copy_init_pkru_to_fpregs(void)
 {
 }
-- 
1.7.1



[PATCH v11 2/3] mm, powerpc, x86: introduce an additional vma bit for powerpc pkey

2018-01-30 Thread Ram Pai
Currently only 4 bits are allocated in the vma flags to hold 16
keys. This is sufficient for x86. PowerPC supports 32 keys,
which needs 5 bits. This patch allocates one additional bit.
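
For illustration, a sketch of how a fifth pkey bit lets the key value reach 31
(the shift and mask below are assumptions for the sketch, assuming a 64-bit
vm_flags with the pkey bits starting at VM_HIGH_ARCH_BIT_0 = 32; they are not
the patch's definitions):

/* Sketch only -- 4 bits cover keys 0..15, 5 bits cover keys 0..31. */
#define SKETCH_PKEY_SHIFT	32
#define SKETCH_PKEY_MASK	(0x1fUL << SKETCH_PKEY_SHIFT)

static inline int sketch_vma_pkey(unsigned long vm_flags)
{
	return (vm_flags & SKETCH_PKEY_MASK) >> SKETCH_PKEY_SHIFT;
}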

Acked-by: Balbir Singh 
Signed-off-by: Ram Pai 
---
 fs/proc/task_mmu.c |1 +
 include/linux/mm.h |3 ++-
 2 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index b139617..0edd4da 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -680,6 +680,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct 
vm_area_struct *vma)
[ilog2(VM_PKEY_BIT1)]   = "",
[ilog2(VM_PKEY_BIT2)]   = "",
[ilog2(VM_PKEY_BIT3)]   = "",
+   [ilog2(VM_PKEY_BIT4)]   = "",
 #endif /* CONFIG_ARCH_HAS_PKEYS */
};
size_t i;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 01381d3..ebcb997 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -231,9 +231,10 @@ extern int overcommit_kbytes_handler(struct ctl_table *, 
int, void __user *,
 #ifdef CONFIG_ARCH_HAS_PKEYS
 # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
 # define VM_PKEY_BIT0  VM_HIGH_ARCH_0  /* A protection key is a 4-bit value */
-# define VM_PKEY_BIT1  VM_HIGH_ARCH_1
+# define VM_PKEY_BIT1  VM_HIGH_ARCH_1  /* on x86 and 5-bit value on ppc64   */
 # define VM_PKEY_BIT2  VM_HIGH_ARCH_2
 # define VM_PKEY_BIT3  VM_HIGH_ARCH_3
+# define VM_PKEY_BIT4  VM_HIGH_ARCH_4
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
 #if defined(CONFIG_X86)
-- 
1.7.1



[PATCH v11 1/3] mm, powerpc, x86: define VM_PKEY_BITx bits if CONFIG_ARCH_HAS_PKEYS is enabled

2018-01-30 Thread Ram Pai
VM_PKEY_BITx are defined only if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
is enabled. Powerpc also needs these bits. Hence let's define the
VM_PKEY_BITx bits for any architecture that enables
CONFIG_ARCH_HAS_PKEYS.

Signed-off-by: Ram Pai 
---
 fs/proc/task_mmu.c |4 ++--
 include/linux/mm.h |9 +
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 339e4c1..b139617 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -674,13 +674,13 @@ static void show_smap_vma_flags(struct seq_file *m, 
struct vm_area_struct *vma)
[ilog2(VM_MERGEABLE)]   = "mg",
[ilog2(VM_UFFD_MISSING)]= "um",
[ilog2(VM_UFFD_WP)] = "uw",
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_ARCH_HAS_PKEYS
/* These come out via ProtectionKey: */
[ilog2(VM_PKEY_BIT0)]   = "",
[ilog2(VM_PKEY_BIT1)]   = "",
[ilog2(VM_PKEY_BIT2)]   = "",
[ilog2(VM_PKEY_BIT3)]   = "",
-#endif
+#endif /* CONFIG_ARCH_HAS_PKEYS */
};
size_t i;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..01381d3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -228,15 +228,16 @@ extern int overcommit_kbytes_handler(struct ctl_table *, 
int, void __user *,
 #define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
-#if defined(CONFIG_X86)
-# define VM_PAT	VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
-#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
+#ifdef CONFIG_ARCH_HAS_PKEYS
 # define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
 # define VM_PKEY_BIT0  VM_HIGH_ARCH_0  /* A protection key is a 4-bit value */
 # define VM_PKEY_BIT1  VM_HIGH_ARCH_1
 # define VM_PKEY_BIT2  VM_HIGH_ARCH_2
 # define VM_PKEY_BIT3  VM_HIGH_ARCH_3
-#endif
+#endif /* CONFIG_ARCH_HAS_PKEYS */
+
+#if defined(CONFIG_X86)
+# define VM_PAT	VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
 #elif defined(CONFIG_PPC)
 # define VM_SAO	VM_ARCH_1	/* Strong Access Ordering (powerpc) */
 #elif defined(CONFIG_PARISC)
-- 
1.7.1



[PATCH v11 0/3] mm, x86, powerpc: Enhancements to Memory Protection Keys.

2018-01-30 Thread Ram Pai
This patch series provides arch-neutral enhancements to
enable memory-keys on new architectures, and the corresponding
changes in x86 and powerpc specific code to support that.

a) Provides the ability to support up to 32 keys.  PowerPC
   can handle 32 keys and hence needs this.

b) Arch-neutral code, and not the arch-specific code,
   determines the format of the string that displays the key
   for each vma in smaps.

PowerPC implementation of memory-keys is now in powerpc/next tree.
https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=next&id=92e3da3cf193fd27996909956c12a23c0333da44

History:
---
version v11:
(1) the code that displays the key in smaps is no longer
    defined under CONFIG_ARCH_HAS_PKEYS.
    - comment by Eric W. Biederman and Michal Hocko
(2) merged the two patches that implemented (1).
    - comment by Michal Hocko

version prior to v11:
(1) used one additional bit from VM_HIGH_ARCH_*
to support 32 keys.
- Suggestion by Dave Hansen.
(2) powerpc specific changes to support memory keys.

Ram Pai (3):
  mm, powerpc, x86: define VM_PKEY_BITx bits if CONFIG_ARCH_HAS_PKEYS
is enabled
  mm, powerpc, x86: introduce an additional vma bit for powerpc pkey
  mm, x86: display pkey in smaps only if arch supports pkeys

 arch/x86/include/asm/pkeys.h |1 +
 arch/x86/kernel/fpu/xstate.c |5 +
 arch/x86/kernel/setup.c  |8 
 fs/proc/task_mmu.c   |   14 +++---
 include/linux/mm.h   |   12 +++-
 include/linux/pkeys.h|6 ++
 6 files changed, 26 insertions(+), 20 deletions(-)


Re: [PATCH v10 27/27] mm: display pkey in smaps if arch_pkeys_enabled() is true

2018-01-30 Thread Ram Pai
On Tue, Jan 30, 2018 at 01:16:11PM +0100, Michal Hocko wrote:
> On Thu 18-01-18 17:50:48, Ram Pai wrote:
> [...]
> > @@ -851,9 +848,13 @@ static int show_smap(struct seq_file *m, void *v, int 
> > is_pid)
> >(unsigned long)(mss->pss >> (10 + PSS_SHIFT)));
> >  
> > if (!rollup_mode) {
> > -   arch_show_smap(m, vma);
> > +#ifdef CONFIG_ARCH_HAS_PKEYS
> > +   if (arch_pkeys_enabled())
> > +   seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
> > +#endif
> > show_smap_vma_flags(m, vma);
> > }
> > +
> 
> Why do you need to add ifdef here? The previous patch should make
> arch_pkeys_enabled == F when CONFIG_ARCH_HAS_PKEYS=n.

You are right, it need not be wrapped in CONFIG_ARCH_HAS_PKEYS. I had to do it
because vma_pkey(vma) is not defined on some architectures.

I will provide a generic vma_pkey() definition for architectures that do 
not support PKEYS.
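
In v11 that generic fallback appears in patch 3/3 above as a one-line stub
added to include/linux/pkeys.h:

#define vma_pkey(vma) 0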



> Btw. could you
> merge those two patches into one. It is usually much easier to review a
> new helper function if it is added along with a user.


ok.

Thanks,
RP



[PATCH] ocxl: fix signed comparison with less than zero

2018-01-30 Thread Colin King
From: Colin Ian King 

Currently the comparison of used < 0 is always false because
used is a size_t. Fix this by making used a ssize_t type.

Detected by Coccinelle:
drivers/misc/ocxl/file.c:320:6-10: WARNING: Unsigned expression
compared with zero: used < 0
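
A tiny standalone sketch (not from the driver; function names are illustrative)
of why the unsigned check is dead code and the signed one is not:

/* Illustrative only -- not from drivers/misc/ocxl/file.c. */
#include <stddef.h>
#include <sys/types.h>

static int unsigned_check(size_t used)
{
	return used < 0;	/* always 0: a size_t can never be negative */
}

static int signed_check(ssize_t used)
{
	return used < 0;	/* meaningful once 'used' is a ssize_t */
}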

Fixes: 5ef3166e8a32 ("ocxl: Driver code for 'generic' opencapi devices")
Signed-off-by: Colin Ian King 
---
 drivers/misc/ocxl/file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
index c90c1a578d2f..1287e4430e6b 100644
--- a/drivers/misc/ocxl/file.c
+++ b/drivers/misc/ocxl/file.c
@@ -277,7 +277,7 @@ static ssize_t afu_read(struct file *file, char __user 
*buf, size_t count,
struct ocxl_context *ctx = file->private_data;
struct ocxl_kernel_event_header header;
ssize_t rc;
-   size_t used = 0;
+   ssize_t used = 0;
DEFINE_WAIT(event_wait);
 
	memset(&header, 0, sizeof(header));
-- 
2.15.1



[PATCH v2] macintosh: Add module license to ans-lcd

2018-01-30 Thread Larry Finger
In kernel 4.15, the modprobe step on my PowerBook G4 started complaining that
there was no module license for ans-lcd.

Signed-off-by: Larry Finger 
---
v2 - fixed typo in commit message
---
 drivers/macintosh/ans-lcd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/macintosh/ans-lcd.c b/drivers/macintosh/ans-lcd.c
index 1de81d922d8a..c8e078b911c7 100644
--- a/drivers/macintosh/ans-lcd.c
+++ b/drivers/macintosh/ans-lcd.c
@@ -201,3 +201,4 @@ anslcd_exit(void)
 
 module_init(anslcd_init);
 module_exit(anslcd_exit);
+MODULE_LICENSE("GPL v2");
-- 
2.16.1



Re: [PATCH v10 27/27] mm: display pkey in smaps if arch_pkeys_enabled() is true

2018-01-30 Thread Michal Hocko
On Thu 18-01-18 17:50:48, Ram Pai wrote:
[...]
> @@ -851,9 +848,13 @@ static int show_smap(struct seq_file *m, void *v, int 
> is_pid)
>  (unsigned long)(mss->pss >> (10 + PSS_SHIFT)));
>  
>   if (!rollup_mode) {
> - arch_show_smap(m, vma);
> +#ifdef CONFIG_ARCH_HAS_PKEYS
> + if (arch_pkeys_enabled())
> + seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
> +#endif
>   show_smap_vma_flags(m, vma);
>   }
> +

Why do you need to add ifdef here? The previous patch should make
arch_pkeys_enabled == F when CONFIG_ARCH_HAS_PKEYS=n. Btw. could you
merge those two patches into one. It is usually much easier to review a
new helper function if it is added along with a user.

>   m_cache_vma(m, vma);
>   return ret;
>  }
> -- 
> 1.7.1

-- 
Michal Hocko
SUSE Labs