Re: [PATCH] mpc832x_rdb: fix of_irq_to_resource() error check

2017-07-29 Thread Scott Wood
On Sat, 2017-07-29 at 22:52 +0300, Sergei Shtylyov wrote:
> of_irq_to_resource() has recently been fixed to return negative error numbers
> along with 0 in case of failure; however, the Freescale MPC832x RDB board
> code still only regards 0 as a failure indication -- fix it up.
> 
> Fixes: 7a4228bbff76 ("of: irq: use of_irq_get() in of_irq_to_resource()")
> Signed-off-by: Sergei Shtylyov 
> 
> ---
> The patch is against the 'master' branch of Scott Wood's 'linux.git' repo
> (the 'fixes' branch is too far behind).

The master branch is also old.  Those branches are only used when needed to
apply patches; I don't update them just to sync up.  If they're older than
what's in Michael's or Linus's tree (as they almost always are), then use
those instead.

Not that I expect it to make a difference to this patch...

-Scott



Re: [RFC v6 21/62] powerpc: introduce execute-only pkey

2017-07-29 Thread Ram Pai
On Fri, Jul 28, 2017 at 07:17:13PM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> > --- a/arch/powerpc/mm/pkeys.c
> > +++ b/arch/powerpc/mm/pkeys.c
> > @@ -97,3 +97,60 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> > init_iamr(pkey, new_iamr_bits);
> > return 0;
> >  }
> > +
> > +static inline bool pkey_allows_readwrite(int pkey)
> > +{
> > +   int pkey_shift = pkeyshift(pkey);
> > +
> > +   if (!(read_uamor() & (0x3UL << pkey_shift)))
> > +   return true;
> > +
> > +   return !(read_amr() & ((AMR_RD_BIT|AMR_WR_BIT) << pkey_shift));
> > +}
> > +
> > +int __execute_only_pkey(struct mm_struct *mm)
> > +{
> > +   bool need_to_set_mm_pkey = false;
> > +   int execute_only_pkey = mm->context.execute_only_pkey;
> > +   int ret;
> > +
> > +   /* Do we need to assign a pkey for mm's execute-only maps? */
> > +   if (execute_only_pkey == -1) {
> > +   /* Go allocate one to use, which might fail */
> > +   execute_only_pkey = mm_pkey_alloc(mm);
> > +   if (execute_only_pkey < 0)
> > +   return -1;
> > +   need_to_set_mm_pkey = true;
> > +   }
> > +
> > +   /*
> > +* We do not want to go through the relatively costly
> > +* dance to set AMR if we do not need to.  Check it
> > +* first and assume that if the execute-only pkey is
> > +* readwrite-disabled than we do not have to set it
> > +* ourselves.
> > +*/
> > +   if (!need_to_set_mm_pkey &&
> > +   !pkey_allows_readwrite(execute_only_pkey))
^
Here uamor and amr are read once each.

> > +   return execute_only_pkey;
> > +
> > +   /*
> > +* Set up AMR so that it denies access for everything
> > +* other than execution.
> > +*/
> > +   ret = __arch_set_user_pkey_access(current, execute_only_pkey,
> > +   (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
^^^
here amr and iamr are written once each if the
function returns successfully.
> > +   /*
> > +* If the AMR-set operation failed somehow, just return
> > +* 0 and effectively disable execute-only support.
> > +*/
> > +   if (ret) {
> > +   mm_set_pkey_free(mm, execute_only_pkey);
^^^
here, only if __arch_set_user_pkey_access() fails,
will amr, iamr and uamor be written once each.

> > +   return -1;
> > +   }
> > +
> > +   /* We got one, store it and use it from here on out */
> > +   if (need_to_set_mm_pkey)
> > +   mm->context.execute_only_pkey = execute_only_pkey;
> > +   return execute_only_pkey;
> > +}
> 
> If you follow the code flow in __execute_only_pkey, the AMR and UAMOR
> are read 3 times in total, and AMR is written twice. IAMR is read and
> written twice. Since they are SPRs and access to them is slow (or isn't
> it?), is it worth it to read them once in __execute_only_pkey and pass
> down their values to the callees, and then write them once at the end of
> the function?

If my calculations are right:
uamor may be read once and may be written once.
amr may be read once and is written once.
iamr is written once.
So not that bad, I think.

RP



Re: [RFC v6 27/62] powerpc: helper to validate key-access permissions of a pte

2017-07-29 Thread Ram Pai
On Fri, Jul 28, 2017 at 06:00:02PM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> > --- a/arch/powerpc/mm/pkeys.c
> > +++ b/arch/powerpc/mm/pkeys.c
> > @@ -201,3 +201,36 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
> >  */
> > return vma_pkey(vma);
> >  }
> > +
> > +static bool pkey_access_permitted(int pkey, bool write, bool execute)
> > +{
> > +   int pkey_shift;
> > +   u64 amr;
> > +
> > +   if (!pkey)
> > +   return true;
> > +
> > +   pkey_shift = pkeyshift(pkey);
> > +   if (!(read_uamor() & (0x3UL << pkey_shift)))
> > +   return true;
> > +
> > +   if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift)))
> > +   return true;
> > +
> > +   if (!write) {
> > +   amr = read_amr();
> > +   if (!(amr & (AMR_RD_BIT << pkey_shift)))
> > +   return true;
> > +   }
> > +
> > +   amr = read_amr(); /* delay reading amr uptil absolutely needed */
> 
> Actually, this is causing amr to be read twice in case control enters
> the "if (!write)" block above but doesn't enter the other if block nested
> in it.
> 
> read_amr should be called only once, right before "if (!write)".

The code can be simplified without having to read amr twice.
Will fix it.

thanks,
RP

> 
> -- 
> Thiago Jung Bauermann
> IBM Linux Technology Center

-- 
Ram Pai



Re: [RFC v6 15/62] powerpc: helper functions to initialize AMR, IAMR and UMOR registers

2017-07-29 Thread Ram Pai
On Thu, Jul 27, 2017 at 05:40:44PM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> 
> > Introduce helper functions that can initialize the bits in the AMR,
> > IAMR and UMOR register; the bits that correspond to the given pkey.
> >
> > Signed-off-by: Ram Pai 
> 
> s/UMOR/UAMOR/ here and in the subject as well.

yes. fixed it.

> 
> > --- a/arch/powerpc/mm/pkeys.c
> > +++ b/arch/powerpc/mm/pkeys.c
> > @@ -16,3 +16,47 @@
> >  #include /* PKEY_*   */
> >
> >  bool pkey_inited;
> > +#define pkeyshift(pkey) ((arch_max_pkey()-pkey-1) * AMR_BITS_PER_PKEY)
> > +
> > +static inline void init_amr(int pkey, u8 init_bits)
> > +{
> > +   u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
> > +   u64 old_amr = read_amr() & ~((u64)(0x3ul) << pkeyshift(pkey));
> > +
> > +   write_amr(old_amr | new_amr_bits);
> > +}
> > +
> > +static inline void init_iamr(int pkey, u8 init_bits)
> > +{
> > +   u64 new_iamr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
> > +   u64 old_iamr = read_iamr() & ~((u64)(0x3ul) << pkeyshift(pkey));
> > +
> > +   write_amr(old_iamr | new_iamr_bits);
> > +}
> 
> init_iamr should call write_iamr, not write_amr.

excellent catch. thanks.
RP



Re: [RFC v6 20/62] powerpc: store and restore the pkey state across context switches

2017-07-29 Thread Ram Pai
On Thu, Jul 27, 2017 at 02:32:59PM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> 
> > Store and restore the AMR, IAMR and UMOR register state of the task
> > before scheduling out and after scheduling in, respectively.
> >
> > Signed-off-by: Ram Pai 
> 
> s/UMOR/UAMOR/
> 
> > diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> > index 2ad725e..9429361 100644
> > --- a/arch/powerpc/kernel/process.c
> > +++ b/arch/powerpc/kernel/process.c
> > @@ -1096,6 +1096,11 @@ static inline void save_sprs(struct thread_struct *t)
> > t->tar = mfspr(SPRN_TAR);
> > }
> >  #endif
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +   t->amr = mfspr(SPRN_AMR);
> > +   t->iamr = mfspr(SPRN_IAMR);
> > +   t->uamor = mfspr(SPRN_UAMOR);
> > +#endif
> >  }
> >
> >  static inline void restore_sprs(struct thread_struct *old_thread,
> > @@ -1131,6 +1136,14 @@ static inline void restore_sprs(struct thread_struct *old_thread,
> > mtspr(SPRN_TAR, new_thread->tar);
> > }
> >  #endif
> > +#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
> > +   if (old_thread->amr != new_thread->amr)
> > +   mtspr(SPRN_AMR, new_thread->amr);
> > +   if (old_thread->iamr != new_thread->iamr)
> > +   mtspr(SPRN_IAMR, new_thread->iamr);
> > +   if (old_thread->uamor != new_thread->uamor)
> > +   mtspr(SPRN_UAMOR, new_thread->uamor);
> > +#endif
> >  }
> 
> Shouldn't the saving and restoring of the SPRs be guarded by a check for
> whether memory protection keys are enabled? What happens when trying to
> access these registers on a CPU which doesn't have them?

Good point, we need to guard it. However, I think these registers have been
available since POWER6.

RP



Re: [RFC v6 19/62] powerpc: ability to create execute-disabled pkeys

2017-07-29 Thread Ram Pai
On Thu, Jul 27, 2017 at 11:54:31AM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> 
> > --- a/arch/powerpc/include/asm/pkeys.h
> > +++ b/arch/powerpc/include/asm/pkeys.h
> > @@ -2,6 +2,18 @@
> >  #define _ASM_PPC64_PKEYS_H
> >
> >  extern bool pkey_inited;
> > +/* override any generic PKEY Permission defines */
> > +#undef  PKEY_DISABLE_ACCESS
> > +#define PKEY_DISABLE_ACCESS0x1
> > +#undef  PKEY_DISABLE_WRITE
> > +#define PKEY_DISABLE_WRITE 0x2
> > +#undef  PKEY_DISABLE_EXECUTE
> > +#define PKEY_DISABLE_EXECUTE   0x4
> > +#undef  PKEY_ACCESS_MASK
> > +#define PKEY_ACCESS_MASK   (PKEY_DISABLE_ACCESS |\
> > +   PKEY_DISABLE_WRITE  |\
> > +   PKEY_DISABLE_EXECUTE)
> > +
> 
> Is it ok to #undef macros from another header? Especially since said
> header is in uapi (include/uapi/asm-generic/mman-common.h).
> 
> Also, it's unnecessary to undef the _ACCESS and _WRITE macros since they
> are identical to the original definition. And since these macros are
> originally defined in an uapi header, the powerpc-specific ones should
> be in an uapi header as well, if I understand it correctly.

The architecture-neutral code allows the implementation to define the
macros to its taste. The powerpc headers, for legacy reasons, include the
include/uapi/asm-generic/mman-common.h header, which provides generic
definitions of only PKEY_DISABLE_ACCESS and PKEY_DISABLE_WRITE.
Unfortunately we end up importing them, and I don't want to depend on
them: any change there could affect us. For example, if the generic uapi
header changed PKEY_DISABLE_ACCESS to 0x4, we would have a conflict with
PKEY_DISABLE_EXECUTE. Hence I undef them and define them my way.

> 
> An alternative solution is to define only PKEY_DISABLE_EXECUTE in
> arch/powerpc/include/uapi/asm/mman.h and then test for its existence to
> properly define PKEY_ACCESS_MASK in
> include/uapi/asm-generic/mman-common.h. What do you think of the code
> below?
> 
> diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
> index e31f5ee8e81f..67e6a3a343ae 100644
> --- a/arch/powerpc/include/asm/pkeys.h
> +++ b/arch/powerpc/include/asm/pkeys.h
> @@ -4,17 +4,6 @@
>  #include 
> 
>  extern bool pkey_inited;
> -/* override any generic PKEY Permission defines */
> -#undef  PKEY_DISABLE_ACCESS
> -#define PKEY_DISABLE_ACCESS0x1
> -#undef  PKEY_DISABLE_WRITE
> -#define PKEY_DISABLE_WRITE 0x2
> -#undef  PKEY_DISABLE_EXECUTE
> -#define PKEY_DISABLE_EXECUTE   0x4
> -#undef  PKEY_ACCESS_MASK
> -#define PKEY_ACCESS_MASK   (PKEY_DISABLE_ACCESS |\
> - PKEY_DISABLE_WRITE  |\
> - PKEY_DISABLE_EXECUTE)
> 
>  #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
>   VM_PKEY_BIT3 | VM_PKEY_BIT4)
> diff --git a/arch/powerpc/include/uapi/asm/mman.h b/arch/powerpc/include/uapi/asm/mman.h
> index ab45cc2f3101..dee43feb7c53 100644
> --- a/arch/powerpc/include/uapi/asm/mman.h
> +++ b/arch/powerpc/include/uapi/asm/mman.h
> @@ -45,4 +45,6 @@
>  #define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)  /* 1GB   HugeTLB Page */
>  #define MAP_HUGE_16GB(34 << MAP_HUGE_SHIFT)  /* 16GB  HugeTLB Page */
> 
> +#define PKEY_DISABLE_EXECUTE   0x4
> +
>  #endif /* _UAPI_ASM_POWERPC_MMAN_H */
> diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
> index 72eb9a1bde79..777f8f8dff47 100644
> --- a/arch/powerpc/mm/pkeys.c
> +++ b/arch/powerpc/mm/pkeys.c
> @@ -12,7 +12,7 @@
>   * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>   * more details.
>   */
> -#include 
> +#include 
>  #include /* PKEY_*   */
> 
>  bool pkey_inited;
> diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
> index 8c27db0c5c08..93e3841d9ada 100644
> --- a/include/uapi/asm-generic/mman-common.h
> +++ b/include/uapi/asm-generic/mman-common.h
> @@ -74,7 +74,15 @@
> 
>  #define PKEY_DISABLE_ACCESS  0x1
>  #define PKEY_DISABLE_WRITE   0x2
> +
> +/* The arch-specific code may define PKEY_DISABLE_EXECUTE */
> +#ifdef PKEY_DISABLE_EXECUTE
> +#define PKEY_ACCESS_MASK   (PKEY_DISABLE_ACCESS |\
> + PKEY_DISABLE_WRITE  |   \
> + PKEY_DISABLE_EXECUTE)
> +#else
>  #define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
>PKEY_DISABLE_WRITE)
> +#endif
> 
>  #endif /* __ASM_GENERIC_MMAN_COMMON_H */

I suppose we can do it this way as well, but I don't like the way it
spreads the defines across multiple files.

> 
> 
> > diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
> > index 98d0391..b9ad98d 100644
> > --- a/arch/powerpc/mm/pkeys.c
> > +++ b/arch/powerpc/mm/pkeys.c
> > @@ -73,6 +73,7 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> > unsigned long init_val)
> >  

Re: [RFC PATCH v1] powerpc/radix/kasan: KASAN support for Radix

2017-07-29 Thread Balbir Singh
On Sun, Jul 30, 2017 at 8:58 AM, Balbir Singh  wrote:
>> +
>> +extern struct static_key_false powerpc_kasan_enabled_key;
>> +#define check_return_arch_not_ready() \
>> +   do {\
> >> +   if (!static_branch_likely(&powerpc_kasan_enabled_key))  \
>> +   return; \
>> +   } while (0)
>
> This is supposed to call __mem*() before returning, I'll do a new RFC,
> I must have missed it in my rebasing somewhere

Sorry for the noise; I am sleep-deprived. I was trying to say that this
does not work for hash (with disable_radix on the command line).

Balbir Singh.


Re: [RFC v6 17/62] powerpc: implementation for arch_set_user_pkey_access()

2017-07-29 Thread Ram Pai
On Thu, Jul 27, 2017 at 11:15:36AM -0300, Thiago Jung Bauermann wrote:
> 
> Ram Pai  writes:
> > @@ -113,10 +117,14 @@ static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> > return 0;
> >  }
> >
> > +extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> > +   unsigned long init_val);
> > static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
> > unsigned long init_val)
> >  {
> > -   return 0;
> > +   if (!pkey_inited)
> > +   return -1;
> > +   return __arch_set_user_pkey_access(tsk, pkey, init_val);
> >  }
> 
> If non-zero, the return value of this function will be passed to
> userspace by the pkey_alloc syscall. Shouldn't it be returning an errno
> macro such as -EPERM?

Yes, it should be -EINVAL. Fixed it.

> 
> Also, why are there both arch_set_user_pkey_access and
> __arch_set_user_pkey_access? Is it a speed optimization so that the
> early return is inlined into the caller? Ditto for execute_only_pkey
> and __arch_override_mprotect_pkey.

arch_set_user_pkey_access() is the interface expected by the
architecture-independent code. __arch_set_user_pkey_access() is a
powerpc-internal function that implements the bulk of the work; it is
called only by the pkeys-internal code. This gives me the flexibility to
change the implementation without having to worry about changing the
interface.

RP



Re: [RFC PATCH v1] powerpc/radix/kasan: KASAN support for Radix

2017-07-29 Thread Balbir Singh
> +
> +extern struct static_key_false powerpc_kasan_enabled_key;
> +#define check_return_arch_not_ready() \
> +   do {\
> +   if (!static_branch_likely(&powerpc_kasan_enabled_key))  \
> +   return; \
> +   } while (0)

This is supposed to call __mem*() before returning, I'll do a new RFC,
I must have missed it in my rebasing somewhere

Balbir


Re: [RFC v6 13/62] powerpc: track allocation status of all pkeys

2017-07-29 Thread Ram Pai
On Thu, Jul 27, 2017 at 11:01:44AM -0300, Thiago Jung Bauermann wrote:
> 
> Hello Ram,
> 
> I'm still going through the patches and haven't formed a full picture of
> the feature in my mind yet, so my comments today won't be particularly
> insightful...
> 
> But hopefully the comments that I currently have will be helpful anyway.

Sure, thanks for taking the time to look through the patches.

> 
> Ram Pai  writes:
> > diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
> > index 203d7de..09b268e 100644
> > --- a/arch/powerpc/include/asm/pkeys.h
> > +++ b/arch/powerpc/include/asm/pkeys.h
> > @@ -2,21 +2,87 @@
> >  #define _ASM_PPC64_PKEYS_H
> >
> >  extern bool pkey_inited;
> > -#define ARCH_VM_PKEY_FLAGS 0
> > +#define arch_max_pkey()  32
> > +#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
> > +   VM_PKEY_BIT3 | VM_PKEY_BIT4)
> > +/*
> > + * Bits are in BE format.
> > + * NOTE: key 31, 1, 0 are not used.
> > + * key 0 is used by default. It give read/write/execute permission.
> > + * key 31 is reserved by the hypervisor.
> > + * key 1 is recommended to be not used.
> > + * PowerISA(3.0) page 1015, programming note.
> > + */
> > +#define PKEY_INITIAL_ALLOCAION  0xc001
> 
> There's a typo in the macro name, should be "ALLOCATION".

Thanks, fixed it. The new version of the code calculates the allocation
mask at runtime, depending on the number of keys specified by the device
tree as well as other factors, so the above macro is replaced by a
variable 'initial_allocation_mask'.

RP

> 
> -- 
> Thiago Jung Bauermann
> IBM Linux Technology Center

-- 
Ram Pai



[PATCH] mpc832x_rdb: fix of_irq_to_resource() error check

2017-07-29 Thread Sergei Shtylyov
of_irq_to_resource() has recently been fixed to return negative error numbers
along with 0 in case of failure; however, the Freescale MPC832x RDB board
code still only regards 0 as a failure indication -- fix it up.

Fixes: 7a4228bbff76 ("of: irq: use of_irq_get() in of_irq_to_resource()")
Signed-off-by: Sergei Shtylyov 

---
The patch is against the 'master' branch of Scott Wood's 'linux.git' repo
(the 'fixes' branch is too far behind).

 arch/powerpc/platforms/83xx/mpc832x_rdb.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux/arch/powerpc/platforms/83xx/mpc832x_rdb.c
===
--- linux.orig/arch/powerpc/platforms/83xx/mpc832x_rdb.c
+++ linux/arch/powerpc/platforms/83xx/mpc832x_rdb.c
@@ -89,7 +89,7 @@ static int __init of_fsl_spi_probe(char
goto err;
 
ret = of_irq_to_resource(np, 0, &res[1]);
-   if (!ret)
+   if (ret <= 0)
goto err;
 
pdev = platform_device_alloc("mpc83xx_spi", i);



[RFC PATCH v1] powerpc/radix/kasan: KASAN support for Radix

2017-07-29 Thread Balbir Singh
This is the first attempt to implement KASAN for radix
on powerpc64. Aneesh Kumar implemented KASAN for hash 64
in limited mode (support only for kernel linear mapping)
(https://lwn.net/Articles/655642/)

This patch does the following:
1. Defines its own zero_page, pte, pmd and pud because
the generic PTRS_PER_PTE, etc. are variables on ppc64
book3s. Since the implementation is for radix, we use
the radix constants. This patch uses ARCH_DEFINES_KASAN_ZERO_PTE
for that purpose.
2. There is a new function check_return_arch_not_ready()
which is defined for ppc64/book3s/radix and overrides the
checks in check_memory_region_inline() until kasan setup is
done for the architecture. This is needed for powerpc: a lot
of functions are called in real mode prior to MMU paging init.
We could fix some of this by using the kasan_early_init()
bits, but that just maps the zero page and does not do useful
reporting. For this RFC we just delay the checks in the mem*
functions till kasan_init().
3. This patch renames memcpy/memset/memmove to their
equivalent __memcpy/__memset/__memmove and for files
that skip KASAN via KASAN_SANITIZE, we use the __
variants. This is largely based on Aneesh's patchset
mentioned above
4. In paca.c, some explicit memcpy inserted by the
compiler/linker is replaced via explicit memcpy
for structure content copying
5. prom_init and a few other files have KASAN_SANITIZE
set to n, I think with the delayed checks (#2 above)
we might be able to work around many of them
6. Resizing of the virtual address space is done a little
aggressively: the size is reduced to 1/4 and in total to 1/2.
For the RFC this was considered OK, since this is just a
debug tool for developers. This can be revisited in the
final implementation.

Tests:

I ran test_kasan.ko and it reported errors for all test
cases except for

kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
kasan test: kasan_stack_oob out-of-bounds on stack
kasan test: kasan_global_oob out-of-bounds global variable
kasan test: use_after_scope_test use-after-scope on int
kasan test: use_after_scope_test use-after-scope on array

Based on my understanding of the test, an expected kasan bug report
should follow each test, starting with a "===" line.

Signed-off-by: Balbir Singh 
---
 arch/powerpc/Kconfig |   1 +
 arch/powerpc/include/asm/book3s/64/pgtable.h |   1 +
 arch/powerpc/include/asm/book3s/64/radix-kasan.h |  56 +++
 arch/powerpc/include/asm/book3s/64/radix.h   |   9 ++
 arch/powerpc/include/asm/kasan.h |  24 +
 arch/powerpc/include/asm/string.h|  24 +
 arch/powerpc/kernel/Makefile |   5 +
 arch/powerpc/kernel/cputable.c   |   6 +-
 arch/powerpc/kernel/paca.c   |   2 +-
 arch/powerpc/kernel/prom_init_check.sh   |   3 +-
 arch/powerpc/kernel/setup-common.c   |   3 +
 arch/powerpc/kernel/setup_64.c   |   1 -
 arch/powerpc/lib/mem_64.S|  20 +++-
 arch/powerpc/lib/memcpy_64.S |  10 +-
 arch/powerpc/mm/Makefile |   3 +
 arch/powerpc/mm/radix_kasan_init.c   | 120 +++
 include/linux/kasan.h|   7 ++
 mm/kasan/kasan.c |   2 +
 mm/kasan/kasan_init.c|   2 +
 19 files changed, 290 insertions(+), 9 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/radix-kasan.h
 create mode 100644 arch/powerpc/include/asm/kasan.h
 create mode 100644 arch/powerpc/mm/radix_kasan_init.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 36f858c37ca7..83b882e00fcf 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -160,6 +160,7 @@ config PPC
select GENERIC_TIME_VSYSCALL
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL
+   select HAVE_ARCH_KASAN if (PPC_BOOK3S && PPC64 && SPARSEMEM_VMEMMAP)
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index d1da415e283c..7b8afe97bb8e 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -299,6 +299,7 @@ extern unsigned long pci_io_base;
  * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
  */
 #define KERN_IO_START  (KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
+
 #define FULL_IO_SIZE   0x8000ul
 #define  ISA_IO_BASE   (KERN_IO_START)
 #define  ISA_IO_END(KERN_IO_START + 0x1ul)
diff --git a/arch/powerpc/include/asm/book3s/64/radix-kasan.h b/arch/powerpc/include/asm/book3s/64/radix-kasan.h
new file mode 100644
index ..67022dde6548
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/radix-kasan.h

[RFC PATCH] powerpc: improve accounting of non maskable interrupts

2017-07-29 Thread Nicholas Piggin
This fixes a case of double counting MCEs on PowerNV.

Adds a counter for the system reset interrupt, which will
see more use as a debugging NMI.

Adds a soft-NMI counter for the 64s watchdog. Although this could cause
confusion because it only fires when interrupts are soft-disabled, so it
won't increment much even when the watchdog is running.

Signed-off-by: Nicholas Piggin 
---
I can split these out or drop any objectionable bits. At least the
MCE we should fix, not sure if the other bits are wanted.

Thanks,
Nick

 arch/powerpc/include/asm/hardirq.h |  4 
 arch/powerpc/kernel/irq.c  | 16 
 arch/powerpc/kernel/traps.c|  9 +
 arch/powerpc/kernel/watchdog.c |  3 +++
 4 files changed, 32 insertions(+)

diff --git a/arch/powerpc/include/asm/hardirq.h b/arch/powerpc/include/asm/hardirq.h
index 8add8b861e8d..a3c83ec416c6 100644
--- a/arch/powerpc/include/asm/hardirq.h
+++ b/arch/powerpc/include/asm/hardirq.h
@@ -12,6 +12,10 @@ typedef struct {
unsigned int mce_exceptions;
unsigned int spurious_irqs;
unsigned int hmi_exceptions;
+   unsigned int sreset_irqs;
+#if defined(CONFIG_HARDLOCKUP_DETECTOR) && defined(CONFIG_HAVE_HARDLOCKUP_DETECTOR_ARCH)
+   unsigned int soft_nmi_irqs;
+#endif
 #ifdef CONFIG_PPC_DOORBELL
unsigned int doorbell_irqs;
 #endif
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 0bcec745a672..36250df64615 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -470,6 +470,18 @@ int arch_show_interrupts(struct seq_file *p, int prec)
seq_printf(p, "  Hypervisor Maintenance Interrupts\n");
}
 
+   seq_printf(p, "%*s: ", prec, "NMI");
+   for_each_online_cpu(j)
+   seq_printf(p, "%10u ", per_cpu(irq_stat, j).sreset_irqs);
+   seq_printf(p, "  System Reset interrupts\n");
+
+#if defined(CONFIG_HARDLOCKUP_DETECTOR) && defined(CONFIG_HAVE_HARDLOCKUP_DETECTOR_ARCH)
+   seq_printf(p, "%*s: ", prec, "WDG");
+   for_each_online_cpu(j)
+   seq_printf(p, "%10u ", per_cpu(irq_stat, j).soft_nmi_irqs);
+   seq_printf(p, "  Watchdog soft-NMI interrupts\n");
+#endif
+
 #ifdef CONFIG_PPC_DOORBELL
if (cpu_has_feature(CPU_FTR_DBELL)) {
seq_printf(p, "%*s: ", prec, "DBL");
@@ -494,6 +506,10 @@ u64 arch_irq_stat_cpu(unsigned int cpu)
sum += per_cpu(irq_stat, cpu).spurious_irqs;
sum += per_cpu(irq_stat, cpu).timer_irqs_others;
sum += per_cpu(irq_stat, cpu).hmi_exceptions;
+   sum += per_cpu(irq_stat, cpu).sreset_irqs;
+#ifdef CONFIG_HARDLOCKUP_DETECTOR
+   sum += per_cpu(irq_stat, cpu).soft_nmi_irqs;
+#endif
 #ifdef CONFIG_PPC_DOORBELL
sum += per_cpu(irq_stat, cpu).doorbell_irqs;
 #endif
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index bfcfd9ef09f2..6a892ca7bf18 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -288,6 +288,8 @@ void system_reset_exception(struct pt_regs *regs)
if (!nested)
nmi_enter();
 
+   __this_cpu_inc(irq_stat.sreset_irqs);
+
/* See if any machine dependent calls */
if (ppc_md.system_reset_exception) {
if (ppc_md.system_reset_exception(regs))
@@ -755,7 +757,14 @@ void machine_check_exception(struct pt_regs *regs)
enum ctx_state prev_state = exception_enter();
int recover = 0;
 
+#ifdef CONFIG_PPC_BOOK3S_64
+   /* 64s accounts the mce in machine_check_early when in HVMODE */
+   if (!cpu_has_feature(CPU_FTR_HVMODE))
+   __this_cpu_inc(irq_stat.mce_exceptions);
+#else
__this_cpu_inc(irq_stat.mce_exceptions);
+#endif
+
 
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
 
diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index b67f8b03a32d..4b9a567c9975 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -204,6 +204,9 @@ void soft_nmi_interrupt(struct pt_regs *regs)
return;
 
nmi_enter();
+
+   __this_cpu_inc(irq_stat.soft_nmi_irqs);
+
tb = get_tb();
if (tb - per_cpu(wd_timer_tb, cpu) >= wd_panic_timeout_tb) {
per_cpu(wd_timer_tb, cpu) = tb;
-- 
2.11.0



[PATCH] powerpc/64s: watchdog fix stack setup

2017-07-29 Thread Nicholas Piggin
The watchdog soft-NMI exception stack setup loads a stack pointer
twice, which is an obvious error. It ends up using the system reset
interrupt (true-NMI) stack, which is also a bug because the watchdog
could be preempted by a system reset interrupt that overwrites the
NMI stack.

Change the soft-NMI to use the "emergency stack". The current kernel
stack is not used, because of the longer-term goal to prevent
asynchronous stack access using soft-disable.

Signed-off-by: Nicholas Piggin 
---

This was tested by booting a kernel and verifying there was some
soft NMI activity, and also by deliberately causing a watchdog
lockup from the soft NMI path. Seems to be working.

In the system simulator you can inject a system reset when in the
soft_nmi_interrupt function and things go haywire without this
patch. 

 arch/powerpc/kernel/exceptions-64s.S | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 9029afd1fa2a..f14f3c04ec7e 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1325,10 +1325,18 @@ EXC_VIRT_NONE(0x5800, 0x100)
std r10,PACA_EXGEN+EX_R13(r13); \
EXCEPTION_PROLOG_PSERIES_1(soft_nmi_common, _H)
 
+/*
+ * Branch to soft_nmi_interrupt using the emergency stack. The emergency
+ * stack is one that is usable by maskable interrupts so long as MSR_EE
+ * remains off. It is used for recovery when something has corrupted the
+ * normal kernel stack, for example. The "soft NMI" must not use the process
+ * stack because we want irq disabled sections to avoid touching the stack
+ * at all (other than PMU interrupts), so use the emergency stack for this,
+ * and run it entirely with interrupts hard disabled.
+ */
 EXC_COMMON_BEGIN(soft_nmi_common)
mr  r10,r1
ld  r1,PACAEMERGSP(r13)
-   ld  r1,PACA_NMI_EMERG_SP(r13)
subi    r1,r1,INT_FRAME_SIZE
EXCEPTION_COMMON_NORET_STACK(PACA_EXGEN, 0x900,
system_reset, soft_nmi_interrupt,
-- 
2.11.0



[PATCH 5/5] Use __func__ instead of function name

2017-07-29 Thread SZ Lin
Fix the following checkpatch.pl warning:
WARNING: Prefer using '"%s...", __func__' to using the function's name, in a string

Signed-off-by: SZ Lin 
---
 drivers/char/tpm/tpm_ibmvtpm.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
index e75a674b44ac..2d33acc43e25 100644
--- a/drivers/char/tpm/tpm_ibmvtpm.c
+++ b/drivers/char/tpm/tpm_ibmvtpm.c
@@ -151,7 +151,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]),
  be64_to_cpu(word[1]));
if (rc != H_SUCCESS) {
-   dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+   dev_err(ibmvtpm->dev, "%s failed rc=%d\n", __func__, rc);
rc = 0;
ibmvtpm->tpm_processing_cmd = false;
} else
@@ -193,7 +193,7 @@ static int ibmvtpm_crq_get_rtce_size(struct ibmvtpm_dev *ibmvtpm)
  cpu_to_be64(buf[1]));
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
-   "ibmvtpm_crq_get_rtce_size failed rc=%d\n", rc);
+   "%s failed rc=%d\n", __func__, rc);
 
return rc;
 }
@@ -221,7 +221,7 @@ static int ibmvtpm_crq_get_version(struct ibmvtpm_dev *ibmvtpm)
  cpu_to_be64(buf[1]));
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
-   "ibmvtpm_crq_get_version failed rc=%d\n", rc);
+   "%s failed rc=%d\n", __func__, rc);
 
return rc;
 }
@@ -241,7 +241,7 @@ static int ibmvtpm_crq_send_init_complete(struct ibmvtpm_dev *ibmvtpm)
rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_COMP_CMD, 0);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
-   "ibmvtpm_crq_send_init_complete failed rc=%d\n", rc);
+   "%s failed rc=%d\n", __func__, rc);
 
return rc;
 }
@@ -261,7 +261,7 @@ static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
rc = ibmvtpm_send_crq(ibmvtpm->vdev, INIT_CRQ_CMD, 0);
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
-   "ibmvtpm_crq_send_init failed rc=%d\n", rc);
+   "%s failed rc=%d\n", __func__, rc);
 
return rc;
 }
@@ -351,7 +351,7 @@ static int tpm_ibmvtpm_suspend(struct device *dev)
  cpu_to_be64(buf[1]));
if (rc != H_SUCCESS)
dev_err(ibmvtpm->dev,
-   "tpm_ibmvtpm_suspend failed rc=%d\n", rc);
+   "%s failed rc=%d\n", __func__, rc);
 
return rc;
 }
-- 
2.13.3



[PATCH 4/5] Remove unnecessary 'out of memory' message

2017-07-29 Thread SZ Lin
WARNING: Possible unnecessary 'out of memory' message
+   if (!ibmvtpm->rtce_buf) {
+   dev_err(ibmvtpm->dev, "Failed to allocate 
memory for rtce buffer\n");

WARNING: Possible unnecessary 'out of memory' message
+   if (!ibmvtpm) {
+   dev_err(dev, "kzalloc for ibmvtpm failed\n");

Signed-off-by: SZ Lin 
---
 drivers/char/tpm/tpm_ibmvtpm.c | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
index e53b9fb517d9..e75a674b44ac 100644
--- a/drivers/char/tpm/tpm_ibmvtpm.c
+++ b/drivers/char/tpm/tpm_ibmvtpm.c
@@ -501,10 +501,8 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
ibmvtpm->rtce_size = be16_to_cpu(crq->len);
ibmvtpm->rtce_buf = kmalloc(ibmvtpm->rtce_size,
GFP_ATOMIC);
-   if (!ibmvtpm->rtce_buf) {
-   dev_err(ibmvtpm->dev, "Failed to allocate memory for rtce buffer\n");
+   if (!ibmvtpm->rtce_buf)
return;
-   }
 
ibmvtpm->rtce_dma_handle = dma_map_single(ibmvtpm->dev,
ibmvtpm->rtce_buf, ibmvtpm->rtce_size,
@@ -584,10 +582,8 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
return PTR_ERR(chip);
 
ibmvtpm = kzalloc(sizeof(struct ibmvtpm_dev), GFP_KERNEL);
-   if (!ibmvtpm) {
-   dev_err(dev, "kzalloc for ibmvtpm failed\n");
+   if (!ibmvtpm)
goto cleanup;
-   }
 
ibmvtpm->dev = dev;
ibmvtpm->vdev = vio_dev;
-- 
2.13.3



[PATCH 2/5] Fix "ERROR: code indent should use tabs where possible"

2017-07-29 Thread SZ Lin
ERROR: code indent should use tabs where possible
+^I^I "Need to wait for TPM to finish\n");$

Signed-off-by: SZ Lin 
---
 drivers/char/tpm/tpm_ibmvtpm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
index f01d083eced2..23913fc86158 100644
--- a/drivers/char/tpm/tpm_ibmvtpm.c
+++ b/drivers/char/tpm/tpm_ibmvtpm.c
@@ -127,7 +127,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
 
if (ibmvtpm->tpm_processing_cmd) {
dev_info(ibmvtpm->dev,
-"Need to wait for TPM to finish\n");
+   "Need to wait for TPM to finish\n");
/* wait for previous command to finish */
sig = wait_event_interruptible(ibmvtpm->wq, 
!ibmvtpm->tpm_processing_cmd);
if (sig)
-- 
2.13.3



[PATCH 3/5] Fix 'void function return statements are not generally useful' warning

2017-07-29 Thread SZ Lin
WARNING: void function return statements are not generally useful
+   return;
+}

Signed-off-by: SZ Lin 
---
 drivers/char/tpm/tpm_ibmvtpm.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
index 23913fc86158..e53b9fb517d9 100644
--- a/drivers/char/tpm/tpm_ibmvtpm.c
+++ b/drivers/char/tpm/tpm_ibmvtpm.c
@@ -531,7 +531,6 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
return;
}
}
-   return;
 }
 
 /**
-- 
2.13.3



[PATCH 1/5] Fix packed and aligned attribute warnings.

2017-07-29 Thread SZ Lin
WARNING: __packed is preferred over __attribute__((packed))
+} __attribute__((packed, aligned(8)));

WARNING: __aligned(size) is preferred over __attribute__((aligned(size)))
+} __attribute__((packed, aligned(8)));

Signed-off-by: SZ Lin 
---
 drivers/char/tpm/tpm_ibmvtpm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_ibmvtpm.h b/drivers/char/tpm/tpm_ibmvtpm.h
index 91dfe766d080..9f708ca3dc84 100644
--- a/drivers/char/tpm/tpm_ibmvtpm.h
+++ b/drivers/char/tpm/tpm_ibmvtpm.h
@@ -25,7 +25,7 @@ struct ibmvtpm_crq {
__be16 len;
__be32 data;
__be64 reserved;
-} __attribute__((packed, aligned(8)));
+} __packed __aligned(8);
 
 struct ibmvtpm_crq_queue {
struct ibmvtpm_crq *crq_addr;
-- 
2.13.3



[PATCH 0/5] tpm: tpm_ibmvtpm: style fixes

2017-07-29 Thread SZ Lin
Fix style warnings and errors in the tpm_ibmvtpm.c driver reported by checkpatch.pl.

SZ Lin (5):
  Fix packed and aligned attribute warnings.
  Fix "ERROR: code indent should use tabs where possible"
  Fix 'void function return statements are not generally useful' warning
  Remove unnecessary 'out of memory' message
  Use __func__ instead of function name

 drivers/char/tpm/tpm_ibmvtpm.c | 23 +--
 drivers/char/tpm/tpm_ibmvtpm.h |  2 +-
 2 files changed, 10 insertions(+), 15 deletions(-)

--
2.13.3