Re: [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual



On 6/24/22 10:52, Christophe Leroy wrote:
> 
> 
> Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
>> protection_map[] has already been moved inside those platforms which enable
> 
> Usually "already" means before your series.
> 
> Your series is the one that moves protection_map[] so I would have just 
> said "Now that protection_map[] has been moved inside those platforms 
> which enable "

Got it, will update the commit message.

> 
>> ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
>> protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
>>
>> Cc: Andrew Morton 
>> Cc: linux...@kvack.org
>> Cc: linux-ker...@vger.kernel.org
>> Signed-off-by: Anshuman Khandual 
>> ---
>>   include/linux/mm.h | 2 +-
>>   mm/mmap.c  | 5 +
>>   2 files changed, 2 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 237828c2bae2..70d900f6df43 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>>* mapping from the currently active vm_flags protection bits (the
>>* low four bits) to a page protection mask..
>>*/
>> -#ifdef __P000
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>   extern pgprot_t protection_map[16];
> 
> Is this declaration still needed ? I have the feeling that 
> protection_map[] is only used in mm/mmap.c now.

At this point the generic protection_map[] array is still used via
this declaration on many (!ARCH_HAS_VM_GET_PAGE_PROT) platforms such
as mips, m68k, arm etc.

> 
>>   #endif
>>   
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 55c30aee3999..43db3bd49071 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>>* w: (no) no
>>* x: (yes) yes
>>*/
>> -#ifdef __P000
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>   pgprot_t protection_map[16] __ro_after_init = {
> 
> Should this be static, as it seems to now be used only in this file ?

It is still used by some platforms, as mentioned above.

> And it could also be 'const' instead of __ro_after_init.

Then it should be possible to make it 'const' for the mips, m68k and arm
platforms. But should this even be changed, if it is going to be dropped
eventually?

> 
>>  [VM_NONE]   = __P000,
>>  [VM_READ]   = __P001,
>> @@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>>  [VM_SHARED | VM_EXEC | VM_WRITE]= __S110,
>>  [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __S111
>>   };
>> -#endif
>> -
>> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>   DECLARE_VM_GET_PAGE_PROT
>>   #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>>   


Re: [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Christophe Leroy


Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
> Now all the platforms enable ARCH_HAS_VM_GET_PAGE_PROT. They define and
> export their own vm_get_page_prot(), either custom or via the standard
> DECLARE_VM_GET_PAGE_PROT. Hence there is no need for a default generic
> fallback for vm_get_page_prot(). Just drop this fallback and also the
> ARCH_HAS_VM_GET_PAGE_PROT mechanism.
> 
> Cc: Andrew Morton 
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual 
> ---
>   arch/alpha/Kconfig  |  1 -
>   arch/arc/Kconfig|  1 -
>   arch/arm/Kconfig|  1 -
>   arch/arm64/Kconfig  |  1 -
>   arch/csky/Kconfig   |  1 -
>   arch/hexagon/Kconfig|  1 -
>   arch/ia64/Kconfig   |  1 -
>   arch/loongarch/Kconfig  |  1 -
>   arch/m68k/Kconfig   |  1 -
>   arch/microblaze/Kconfig |  1 -
>   arch/mips/Kconfig   |  1 -
>   arch/nios2/Kconfig  |  1 -
>   arch/openrisc/Kconfig   |  1 -
>   arch/parisc/Kconfig |  1 -
>   arch/powerpc/Kconfig|  1 -
>   arch/riscv/Kconfig  |  1 -
>   arch/s390/Kconfig   |  1 -
>   arch/sh/Kconfig |  1 -
>   arch/sparc/Kconfig  |  1 -
>   arch/um/Kconfig |  1 -
>   arch/x86/Kconfig|  1 -
>   arch/xtensa/Kconfig |  1 -
>   include/linux/mm.h  |  3 ---
>   mm/Kconfig  |  3 ---
>   mm/mmap.c   | 22 --
>   25 files changed, 50 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 43db3bd49071..3557fe83d124 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,28 +101,6 @@ static void unmap_region(struct mm_struct *mm,
>*  w: (no) no
>*  x: (yes) yes
>*/

The above comment is now orphaned. I think it should go in linux/mm.h

> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t protection_map[16] __ro_after_init = {
> - [VM_NONE]   = __P000,
> - [VM_READ]   = __P001,
> - [VM_WRITE]  = __P010,
> - [VM_WRITE | VM_READ]= __P011,
> - [VM_EXEC]   = __P100,
> - [VM_EXEC | VM_READ] = __P101,
> - [VM_EXEC | VM_WRITE]= __P110,
> - [VM_EXEC | VM_WRITE | VM_READ]  = __P111,
> - [VM_SHARED] = __S000,
> - [VM_SHARED | VM_READ]   = __S001,
> - [VM_SHARED | VM_WRITE]  = __S010,
> - [VM_SHARED | VM_WRITE | VM_READ]= __S011,
> - [VM_SHARED | VM_EXEC]   = __S100,
> - [VM_SHARED | VM_EXEC | VM_READ] = __S101,
> - [VM_SHARED | VM_EXEC | VM_WRITE]= __S110,
> - [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __S111
> -};
> -DECLARE_VM_GET_PAGE_PROT
> -#endif   /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
> -
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
>   {
>   return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));

Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT

2022-06-23 Thread Christophe Leroy


Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
> This just converts the generic vm_get_page_prot() implementation into a new
> macro, i.e. DECLARE_VM_GET_PAGE_PROT, which can later be used across platforms
> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
> functional change.
> 
> Cc: Andrew Morton 
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Suggested-by: Christoph Hellwig 
> Signed-off-by: Anshuman Khandual 
> ---
>   include/linux/mm.h | 8 
>   mm/mmap.c  | 6 +-
>   2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 47bfe038d46e..237828c2bae2 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
>   extern pgprot_t protection_map[16];
>   #endif
>   

I think the comment above protection_map[16] in mm/mmap.c should be 
moved here.

> +#define DECLARE_VM_GET_PAGE_PROT \
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)\
> +{\
> + return protection_map[vm_flags &\
> + (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];\
> +}\
> +EXPORT_SYMBOL(vm_get_page_prot);
> +
>   /*
>* The default fault flags that should be used by most of the
>* arch-specific page fault handlers.
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b01f0280bda2..55c30aee3999 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -123,11 +123,7 @@ pgprot_t protection_map[16] __ro_after_init = {
>   #endif
>   
>   #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> -pgprot_t vm_get_page_prot(unsigned long vm_flags)
> -{
> - return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
> -}
> -EXPORT_SYMBOL(vm_get_page_prot);
> +DECLARE_VM_GET_PAGE_PROT
>   #endif  /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>   
>   static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)

Re: [PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Christophe Leroy


Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
> This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
> vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
> up a private and static protection_map[] array. Subsequently all the __SXXX
> and __PXXX macros, which are no longer needed, can be dropped.

In this patch and all following ones, can't protection_map[] be const 
instead of __ro_after_init ?

> 
> Cc: Michal Simek 
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual 
> ---
>   arch/microblaze/Kconfig   |  1 +
>   arch/microblaze/include/asm/pgtable.h | 17 -
>   arch/microblaze/mm/init.c | 20 
>   3 files changed, 21 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
> index 8cf429ad1c84..15f91ba8a0c4 100644
> --- a/arch/microblaze/Kconfig
> +++ b/arch/microblaze/Kconfig
> @@ -7,6 +7,7 @@ config MICROBLAZE
>   select ARCH_HAS_GCOV_PROFILE_ALL
>   select ARCH_HAS_SYNC_DMA_FOR_CPU
>   select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> + select ARCH_HAS_VM_GET_PAGE_PROT
>   select ARCH_MIGHT_HAVE_PC_PARPORT
>   select ARCH_WANT_IPC_PARSE_VERSION
>   select BUILDTIME_TABLE_SORT
> diff --git a/arch/microblaze/include/asm/pgtable.h 
> b/arch/microblaze/include/asm/pgtable.h
> index 0c72646370e1..ba348e997dbb 100644
> --- a/arch/microblaze/include/asm/pgtable.h
> +++ b/arch/microblaze/include/asm/pgtable.h
> @@ -204,23 +204,6 @@ extern pte_t *va_to_pte(unsigned long address);
>* We consider execute permission the same as read.
>* Also, write permissions imply read permissions.
>*/
> -#define __P000   PAGE_NONE
> -#define __P001   PAGE_READONLY_X
> -#define __P010   PAGE_COPY
> -#define __P011   PAGE_COPY_X
> -#define __P100   PAGE_READONLY
> -#define __P101   PAGE_READONLY_X
> -#define __P110   PAGE_COPY
> -#define __P111   PAGE_COPY_X
> -
> -#define __S000   PAGE_NONE
> -#define __S001   PAGE_READONLY_X
> -#define __S010   PAGE_SHARED
> -#define __S011   PAGE_SHARED_X
> -#define __S100   PAGE_READONLY
> -#define __S101   PAGE_READONLY_X
> -#define __S110   PAGE_SHARED
> -#define __S111   PAGE_SHARED_X
>   
>   #ifndef __ASSEMBLY__
>   /*
> diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
> index f4e503461d24..315fd5024f00 100644
> --- a/arch/microblaze/mm/init.c
> +++ b/arch/microblaze/mm/init.c
> @@ -285,3 +285,23 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t 
> mask)
>   
>   return p;
>   }
> +
> +static pgprot_t protection_map[16] __ro_after_init = {
> + [VM_NONE]   = PAGE_NONE,
> + [VM_READ]   = PAGE_READONLY_X,
> + [VM_WRITE]  = PAGE_COPY,
> + [VM_WRITE | VM_READ]= PAGE_COPY_X,
> + [VM_EXEC]   = PAGE_READONLY,
> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_EXEC | VM_WRITE]= PAGE_COPY,
> + [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
> + [VM_SHARED] = PAGE_NONE,
> + [VM_SHARED | VM_READ]   = PAGE_READONLY_X,
> + [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
> + [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED_X,
> + [VM_SHARED | VM_EXEC]   = PAGE_READONLY,
> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED,
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
> +};
> +DECLARE_VM_GET_PAGE_PROT

Re: [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform

2022-06-23 Thread Anshuman Khandual



On 6/24/22 10:48, Christophe Leroy wrote:
> 
> 
> Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
>> This moves protection_map[] inside the platform and while here, also enable
>> ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.
> 
> Not only 32 bit platforms, also nohash 64 (aka book3e/64)

Sure, will update the commit message.

> 
>>
>> Cc: Michael Ellerman 
>> Cc: Paul Mackerras 
>> Cc: Nicholas Piggin 
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: linux-ker...@vger.kernel.org
>> Signed-off-by: Anshuman Khandual 
>> ---
>>   arch/powerpc/Kconfig   |  2 +-
>>   arch/powerpc/include/asm/pgtable.h | 20 +---
>>   arch/powerpc/mm/pgtable.c  | 24 
>>   3 files changed, 26 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index c2ce2e60c8f0..1035d172c7dd 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -140,7 +140,7 @@ config PPC
>>  select ARCH_HAS_TICK_BROADCAST  if GENERIC_CLOCKEVENTS_BROADCAST
>>  select ARCH_HAS_UACCESS_FLUSHCACHE
>>  select ARCH_HAS_UBSAN_SANITIZE_ALL
>> -select ARCH_HAS_VM_GET_PAGE_PROTif PPC_BOOK3S_64
>> +select ARCH_HAS_VM_GET_PAGE_PROT
>>  select ARCH_HAVE_NMI_SAFE_CMPXCHG
>>  select ARCH_KEEP_MEMBLOCK
>>  select ARCH_MIGHT_HAVE_PC_PARPORT
>> diff --git a/arch/powerpc/include/asm/pgtable.h 
>> b/arch/powerpc/include/asm/pgtable.h
>> index d564d0ecd4cd..bf98db844579 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -20,25 +20,6 @@ struct mm_struct;
>>   #include 
>>   #endif /* !CONFIG_PPC_BOOK3S */
>>   
>> -/* Note due to the way vm flags are laid out, the bits are XWR */
>> -#define __P000  PAGE_NONE
>> -#define __P001  PAGE_READONLY
>> -#define __P010  PAGE_COPY
>> -#define __P011  PAGE_COPY
>> -#define __P100  PAGE_READONLY_X
>> -#define __P101  PAGE_READONLY_X
>> -#define __P110  PAGE_COPY_X
>> -#define __P111  PAGE_COPY_X
>> -
>> -#define __S000  PAGE_NONE
>> -#define __S001  PAGE_READONLY
>> -#define __S010  PAGE_SHARED
>> -#define __S011  PAGE_SHARED
>> -#define __S100  PAGE_READONLY_X
>> -#define __S101  PAGE_READONLY_X
>> -#define __S110  PAGE_SHARED_X
>> -#define __S111  PAGE_SHARED_X
>> -
>>   #ifndef __ASSEMBLY__
>>   
>>   #ifndef MAX_PTRS_PER_PGD
>> @@ -79,6 +60,7 @@ extern void paging_init(void);
>>   void poking_init(void);
>>   
>>   extern unsigned long ioremap_bot;
>> +extern pgprot_t protection_map[16] __ro_after_init;
>>   
>>   /*
>>* kern_addr_valid is intended to indicate whether an address is a valid
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index e6166b71d36d..618f30d35b17 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
>>  return ret_pte;
>>   }
>>   EXPORT_SYMBOL_GPL(__find_linux_pte);
>> +
>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>> +pgprot_t protection_map[16] __ro_after_init = {
> 
> I can't see any place where protection_map[] gets modified. This could 
> be made const.

Sure, will make it a const as in case for many other platforms as well.

> 
>> +[VM_NONE]   = PAGE_NONE,
>> +[VM_READ]   = PAGE_READONLY,
>> +[VM_WRITE]  = PAGE_COPY,
>> +[VM_WRITE | VM_READ]= PAGE_COPY,
>> +[VM_EXEC]   = PAGE_READONLY_X,
>> +[VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> +[VM_EXEC | VM_WRITE]= PAGE_COPY_X,
>> +[VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
>> +[VM_SHARED] = PAGE_NONE,
>> +[VM_SHARED | VM_READ]   = PAGE_READONLY,
>> +[VM_SHARED | VM_WRITE]  = PAGE_SHARED,
>> +[VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
>> +[VM_SHARED | VM_EXEC]   = PAGE_READONLY_X,
>> +[VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
>> +[VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_X,
>> +[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
>> +};
>> +
>> +#ifndef CONFIG_PPC_BOOK3S_64
>> +DECLARE_VM_GET_PAGE_PROT
>> +#endif


Re: [PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Christophe Leroy


Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
> protection_map[] has already been moved inside those platforms which enable

Usually "already" means before your series.

Your series is the one that moves protection_map[] so I would have just 
said "Now that protection_map[] has been moved inside those platforms 
which enable "

> ARCH_HAS_VM_GET_PAGE_PROT. Hence generic protection_map[] array now can be
> protected with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.
> 
> Cc: Andrew Morton 
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual 
> ---
>   include/linux/mm.h | 2 +-
>   mm/mmap.c  | 5 +
>   2 files changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 237828c2bae2..70d900f6df43 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
>* mapping from the currently active vm_flags protection bits (the
>* low four bits) to a page protection mask..
>*/
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   extern pgprot_t protection_map[16];

Is this declaration still needed ? I have the feeling that 
protection_map[] is only used in mm/mmap.c now.

>   #endif
>   
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 55c30aee3999..43db3bd49071 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>*  w: (no) no
>*  x: (yes) yes
>*/
> -#ifdef __P000
> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   pgprot_t protection_map[16] __ro_after_init = {

Should this be static, as it seems to now be used only in this file ?
And it could also be 'const' instead of __ro_after_init.

>   [VM_NONE]   = __P000,
>   [VM_READ]   = __P001,
> @@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
>   [VM_SHARED | VM_EXEC | VM_WRITE]= __S110,
>   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __S111
>   };
> -#endif
> -
> -#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>   DECLARE_VM_GET_PAGE_PROT
>   #endif  /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
>   

Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms

2022-06-23 Thread Christoph Hellwig
On Fri, Jun 24, 2022 at 10:50:33AM +0530, Anshuman Khandual wrote:
> > On most architectures this should be const now, only very few ever
> > modify it.
> 
> Will make it a 'static const pgprot_t protection_map[16] __ro_after_init'
> on platforms that do not change the protection_map[] even during boot.

No need for __ro_after_init when it is already declared const.


Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms

2022-06-23 Thread Anshuman Khandual



On 6/24/22 10:42, Christoph Hellwig wrote:
> On Fri, Jun 24, 2022 at 10:13:13AM +0530, Anshuman Khandual wrote:
>> vm_get_page_prot(), in order for it to be reused on platforms that do not
>> require custom implementation. Finally, ARCH_HAS_VM_GET_PAGE_PROT can just
>> be dropped, as all platforms now define and export vm_get_page_prot(), via
>> looking up a private and static protection_map[] array. protection_map[]
>> data type is the following for all platforms without deviation (except the
>> powerpc one which is shared between 32 and 64 bit platforms), keeping it
>> unchanged for now.
>>
>> static pgprot_t protection_map[16] __ro_after_init
> 
> On most architectures this should be const now, only very few ever
> modify it.

Will make it a 'static const pgprot_t protection_map[16] __ro_after_init'
on platforms that do not change the protection_map[] even during boot.


Re: [PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform

2022-06-23 Thread Christophe Leroy


Le 24/06/2022 à 06:43, Anshuman Khandual a écrit :
> This moves protection_map[] inside the platform and while here, also enable
> ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.

Not only 32 bit platforms, also nohash 64 (aka book3e/64)

> 
> Cc: Michael Ellerman 
> Cc: Paul Mackerras 
> Cc: Nicholas Piggin 
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual 
> ---
>   arch/powerpc/Kconfig   |  2 +-
>   arch/powerpc/include/asm/pgtable.h | 20 +---
>   arch/powerpc/mm/pgtable.c  | 24 
>   3 files changed, 26 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index c2ce2e60c8f0..1035d172c7dd 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -140,7 +140,7 @@ config PPC
>   select ARCH_HAS_TICK_BROADCAST  if GENERIC_CLOCKEVENTS_BROADCAST
>   select ARCH_HAS_UACCESS_FLUSHCACHE
>   select ARCH_HAS_UBSAN_SANITIZE_ALL
> - select ARCH_HAS_VM_GET_PAGE_PROTif PPC_BOOK3S_64
> + select ARCH_HAS_VM_GET_PAGE_PROT
>   select ARCH_HAVE_NMI_SAFE_CMPXCHG
>   select ARCH_KEEP_MEMBLOCK
>   select ARCH_MIGHT_HAVE_PC_PARPORT
> diff --git a/arch/powerpc/include/asm/pgtable.h 
> b/arch/powerpc/include/asm/pgtable.h
> index d564d0ecd4cd..bf98db844579 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -20,25 +20,6 @@ struct mm_struct;
>   #include 
>   #endif /* !CONFIG_PPC_BOOK3S */
>   
> -/* Note due to the way vm flags are laid out, the bits are XWR */
> -#define __P000   PAGE_NONE
> -#define __P001   PAGE_READONLY
> -#define __P010   PAGE_COPY
> -#define __P011   PAGE_COPY
> -#define __P100   PAGE_READONLY_X
> -#define __P101   PAGE_READONLY_X
> -#define __P110   PAGE_COPY_X
> -#define __P111   PAGE_COPY_X
> -
> -#define __S000   PAGE_NONE
> -#define __S001   PAGE_READONLY
> -#define __S010   PAGE_SHARED
> -#define __S011   PAGE_SHARED
> -#define __S100   PAGE_READONLY_X
> -#define __S101   PAGE_READONLY_X
> -#define __S110   PAGE_SHARED_X
> -#define __S111   PAGE_SHARED_X
> -
>   #ifndef __ASSEMBLY__
>   
>   #ifndef MAX_PTRS_PER_PGD
> @@ -79,6 +60,7 @@ extern void paging_init(void);
>   void poking_init(void);
>   
>   extern unsigned long ioremap_bot;
> +extern pgprot_t protection_map[16] __ro_after_init;
>   
>   /*
>* kern_addr_valid is intended to indicate whether an address is a valid
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index e6166b71d36d..618f30d35b17 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
>   return ret_pte;
>   }
>   EXPORT_SYMBOL_GPL(__find_linux_pte);
> +
> +/* Note due to the way vm flags are laid out, the bits are XWR */
> +pgprot_t protection_map[16] __ro_after_init = {

I can't see any place where protection_map[] gets modified. This could 
be made const.

> + [VM_NONE]   = PAGE_NONE,
> + [VM_READ]   = PAGE_READONLY,
> + [VM_WRITE]  = PAGE_COPY,
> + [VM_WRITE | VM_READ]= PAGE_COPY,
> + [VM_EXEC]   = PAGE_READONLY_X,
> + [VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_EXEC | VM_WRITE]= PAGE_COPY_X,
> + [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
> + [VM_SHARED] = PAGE_NONE,
> + [VM_SHARED | VM_READ]   = PAGE_READONLY,
> + [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
> + [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
> + [VM_SHARED | VM_EXEC]   = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_X,
> + [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
> +};
> +
> +#ifndef CONFIG_PPC_BOOK3S_64
> +DECLARE_VM_GET_PAGE_PROT
> +#endif

Re: [PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms

2022-06-23 Thread Christoph Hellwig
On Fri, Jun 24, 2022 at 10:13:13AM +0530, Anshuman Khandual wrote:
> vm_get_page_prot(), in order for it to be reused on platforms that do not
> require custom implementation. Finally, ARCH_HAS_VM_GET_PAGE_PROT can just
> be dropped, as all platforms now define and export vm_get_page_prot(), via
> looking up a private and static protection_map[] array. protection_map[]
> data type is the following for all platforms without deviation (except the
> powerpc one which is shared between 32 and 64 bit platforms), keeping it
> unchanged for now.
> 
> static pgprot_t protection_map[16] __ro_after_init

On most architectures this should be const now, only very few ever
modify it.


Re: [PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Christoph Hellwig
Looks good:

Reviewed-by: Christoph Hellwig 


Re: [PATCH V4 16/26] riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Christoph Hellwig
On Fri, Jun 24, 2022 at 10:13:29AM +0530, Anshuman Khandual wrote:
index d466ec670e1f..f976580500b1 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -288,6 +288,26 @@ static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] 
> __initdata __aligned(PAG
>  #define early_pg_dir   ((pgd_t *)XIP_FIXUP(early_pg_dir))
>  #endif /* CONFIG_XIP_KERNEL */
>  
> +static pgprot_t protection_map[16] __ro_after_init = {

Can't this be marked const now?



Re: [PATCH V4 06/26] x86/mm: Move protection_map[] inside the platform

2022-06-23 Thread Christoph Hellwig
Looks good:

Reviewed-by: Christoph Hellwig 


Re: [PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT

2022-06-23 Thread Christoph Hellwig
On Fri, Jun 24, 2022 at 10:13:15AM +0530, Anshuman Khandual wrote:
> This just converts the generic vm_get_page_prot() implementation into a new
> macro, i.e. DECLARE_VM_GET_PAGE_PROT, which can later be used across platforms
> when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
> functional change.

mm.h is a huge header included by almost everything in the kernel.
I'd rather have it in something only included in a few files.  If we
can't find anything suitable, it might even be worth adding a header
just for this.


Re: [PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000

2022-06-23 Thread Christoph Hellwig
Looks good:

Reviewed-by: Christoph Hellwig 


[PATCH V4 26/26] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
Now all the platforms enable ARCH_HAS_VM_GET_PAGE_PROT. They define and export
their own vm_get_page_prot(), either custom or via the standard
DECLARE_VM_GET_PAGE_PROT. Hence there is no need for a default generic
fallback for vm_get_page_prot(). Just drop this fallback and also the
ARCH_HAS_VM_GET_PAGE_PROT mechanism.

Cc: Andrew Morton 
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/alpha/Kconfig  |  1 -
 arch/arc/Kconfig|  1 -
 arch/arm/Kconfig|  1 -
 arch/arm64/Kconfig  |  1 -
 arch/csky/Kconfig   |  1 -
 arch/hexagon/Kconfig|  1 -
 arch/ia64/Kconfig   |  1 -
 arch/loongarch/Kconfig  |  1 -
 arch/m68k/Kconfig   |  1 -
 arch/microblaze/Kconfig |  1 -
 arch/mips/Kconfig   |  1 -
 arch/nios2/Kconfig  |  1 -
 arch/openrisc/Kconfig   |  1 -
 arch/parisc/Kconfig |  1 -
 arch/powerpc/Kconfig|  1 -
 arch/riscv/Kconfig  |  1 -
 arch/s390/Kconfig   |  1 -
 arch/sh/Kconfig |  1 -
 arch/sparc/Kconfig  |  1 -
 arch/um/Kconfig |  1 -
 arch/x86/Kconfig|  1 -
 arch/xtensa/Kconfig |  1 -
 include/linux/mm.h  |  3 ---
 mm/Kconfig  |  3 ---
 mm/mmap.c   | 22 --
 25 files changed, 50 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index db1c8b329461..7d0d26b5b3f5 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,7 +2,6 @@
 config ALPHA
bool
default y
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 8be56a5d8a9b..9e3653253ef2 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,7 +13,6 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e153b6d4fc5b..7630ba9cb6cc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -24,7 +24,6 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..7030bf3f8d6f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -45,7 +45,6 @@ config ARM64
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 588b8a9c68ed..21d72b078eef 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,7 +6,6 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && 
$(cc-option,-mbacktrace)
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index bc4ceecd0588..54eadf265178 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,7 +6,6 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 0510a5737711..cb93769a9f2a 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -12,7 +12,6 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index fd07b8e760ee..1920d52653b4 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,7 +9,6 @@ config LOONGARCH
select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_PTE_SPECIAL
-   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_INLINE_READ_LOCK if !PREEMPTION
select 

[PATCH V4 25/26] sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all the __SXXX
and __PXXX macros, which are no longer needed, can be dropped.

Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: linux...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/sh/Kconfig   |  1 +
 arch/sh/include/asm/pgtable.h | 17 -
 arch/sh/mm/mmap.c | 20 
 3 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 5f220e903e5a..91f3ea325388 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -12,6 +12,7 @@ config SUPERH
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HIBERNATION_POSSIBLE if MMU
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index d7ddb1ec86a0..6fb9ec54cf9b 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -89,23 +89,6 @@ static inline unsigned long phys_addr_mask(void)
  * completely separate permission bits for user and kernel space.
  */
 /*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXECREAD
-#define __P101 PAGE_EXECREAD
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_WRITEONLY
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXECREAD
-#define __S101 PAGE_EXECREAD
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
 
 typedef pte_t *pte_addr_t;
 
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6a1a1297baae..0a61ce6950bb 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -162,3 +162,23 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t 
size)
 {
return 1;
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_EXECREAD,
+   [VM_EXEC | VM_READ] = PAGE_EXECREAD,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_WRITEONLY,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_EXECREAD,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_EXECREAD,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_RWX,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 24/26] um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Jeff Dike 
Cc: linux...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/um/Kconfig   |  1 +
 arch/um/include/asm/pgtable.h | 17 -
 arch/um/kernel/mem.c  | 20 
 arch/x86/um/mem_32.c  |  2 +-
 4 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 4ec22e156a2e..7fb43654e5b5 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -10,6 +10,7 @@ config UML
select ARCH_HAS_KCOV
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index 167e236d9bb8..66bc3f99d9be 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -68,23 +68,6 @@ extern unsigned long end_iomem;
  * Also, write permissions imply read permissions. This is the closest we can
  * get..
  */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
 
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 15295c3237a0..26ef8a77be59 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -197,3 +197,23 @@ void *uml_kmalloc(int size, int flags)
 {
return kmalloc(size, flags);
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_READONLY,
+   [VM_EXEC | VM_READ] = PAGE_READONLY,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
index 19c5dbd46770..cafd01f730da 100644
--- a/arch/x86/um/mem_32.c
+++ b/arch/x86/um/mem_32.c
@@ -17,7 +17,7 @@ static int __init gate_vma_init(void)
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
-   gate_vma.vm_page_prot = __P101;
+   gate_vma.vm_page_prot = PAGE_READONLY;
 
return 0;
 }
-- 
2.25.1



[PATCH V4 23/26] arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Russell King 
Cc: Arnd Bergmann 
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/arm/Kconfig   |  1 +
 arch/arm/include/asm/pgtable.h | 17 -
 arch/arm/lib/uaccess_with_memcpy.c |  2 +-
 arch/arm/mm/mmu.c  | 20 
 4 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7630ba9cb6cc..e153b6d4fc5b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -24,6 +24,7 @@ config ARM
select ARCH_HAS_SYNC_DMA_FOR_CPU if SWIOTLB || !MMU
select ARCH_HAS_TEARDOWN_DMA_OPS if MMU
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_HAVE_NMI_SAFE_CMPXCHG if CPU_V7 || CPU_V7M || CPU_V6K
select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index cd1f84bb40ae..78a532068fec 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -137,23 +137,6 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
  *  2) If we could do execute protection, then read is implied
  *  3) write implies read permissions
  */
-#define __P000  __PAGE_NONE
-#define __P001  __PAGE_READONLY
-#define __P010  __PAGE_COPY
-#define __P011  __PAGE_COPY
-#define __P100  __PAGE_READONLY_EXEC
-#define __P101  __PAGE_READONLY_EXEC
-#define __P110  __PAGE_COPY_EXEC
-#define __P111  __PAGE_COPY_EXEC
-
-#define __S000  __PAGE_NONE
-#define __S001  __PAGE_READONLY
-#define __S010  __PAGE_SHARED
-#define __S011  __PAGE_SHARED
-#define __S100  __PAGE_READONLY_EXEC
-#define __S101  __PAGE_READONLY_EXEC
-#define __S110  __PAGE_SHARED_EXEC
-#define __S111  __PAGE_SHARED_EXEC
 
 #ifndef __ASSEMBLY__
 /*
diff --git a/arch/arm/lib/uaccess_with_memcpy.c b/arch/arm/lib/uaccess_with_memcpy.c
index c30b689bec2e..14eecaaf295f 100644
--- a/arch/arm/lib/uaccess_with_memcpy.c
+++ b/arch/arm/lib/uaccess_with_memcpy.c
@@ -237,7 +237,7 @@ static int __init test_size_treshold(void)
if (!dst_page)
goto no_dst;
kernel_ptr = page_address(src_page);
-   user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__P010));
+   user_ptr = vmap(&dst_page, 1, VM_IOREMAP, __pgprot(__PAGE_COPY));
if (!user_ptr)
goto no_vmap;
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 5e2be37a198e..2722abddd725 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -405,6 +405,26 @@ void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
 }
 
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = __PAGE_NONE,
+   [VM_READ]   = __PAGE_READONLY,
+   [VM_WRITE]  = __PAGE_COPY,
+   [VM_WRITE | VM_READ]= __PAGE_COPY,
+   [VM_EXEC]   = __PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_READ] = __PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_WRITE]= __PAGE_COPY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = __PAGE_COPY_EXEC,
+   [VM_SHARED] = __PAGE_NONE,
+   [VM_SHARED | VM_READ]   = __PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = __PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= __PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = __PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_READ] = __PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= __PAGE_SHARED_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
+
 /*
  * Adjust the PMD section entries according to the CPU in use.
  */
-- 
2.25.1



[PATCH V4 19/26] ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: linux-i...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/ia64/Kconfig   |  1 +
 arch/ia64/include/asm/pgtable.h | 18 --
 arch/ia64/mm/init.c | 28 +++-
 3 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index cb93769a9f2a..0510a5737711 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -12,6 +12,7 @@ config IA64
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 7aa8f2330fb1..6925e28ae61d 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -161,24 +161,6 @@
  * attempts to write to the page.
  */
/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_READONLY   /* write to priv pg -> copy & make writable */
-#define __P011 PAGE_READONLY   /* ditto */
-#define __P100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __P101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED /* we don't have (and don't need) write-only */
-#define __S011 PAGE_SHARED
-#define __S100 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_X_RX)
-#define __S101 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
-#define __S110 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-#define __S111 __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RWX)
-
 #define pgd_ERROR(e)   printk("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))
 #if CONFIG_PGTABLE_LEVELS == 4
 #define pud_ERROR(e)   printk("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e))
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 855d949d81df..9c91df243d62 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -273,7 +273,7 @@ static int __init gate_vma_init(void)
gate_vma.vm_start = FIXADDR_USER_START;
gate_vma.vm_end = FIXADDR_USER_END;
gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC;
-   gate_vma.vm_page_prot = __P101;
+   gate_vma.vm_page_prot = __pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX);
 
return 0;
 }
@@ -490,3 +490,29 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_READONLY,
+   [VM_WRITE | VM_READ]= PAGE_READONLY,
+   [VM_EXEC]   = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_X_RX),
+   [VM_EXEC | VM_READ] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_RX),
+   [VM_EXEC | VM_WRITE]= PAGE_COPY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_EXEC,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_X_RX),
+   [VM_SHARED | VM_EXEC | VM_READ] = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_RX),
+   [VM_SHARED | VM_EXEC | VM_WRITE]= __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_RWX),
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __pgprot(__ACCESS_BITS | _PAGE_PL_3 |
+  _PAGE_AR_RWX)
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 22/26] arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Vineet Gupta 
Cc: linux-snps-...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/arc/Kconfig  |  1 +
 arch/arc/include/asm/pgtable-bits-arcv2.h | 18 --
 arch/arc/mm/mmap.c| 20 
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 9e3653253ef2..8be56a5d8a9b 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -13,6 +13,7 @@ config ARC
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select ARCH_32BIT_OFF_T
select BUILDTIME_TABLE_SORT
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 183d23bc1e00..b23be557403e 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -72,24 +72,6 @@
  * This is to enable COW mechanism
  */
/* xwr */
-#define __P000  PAGE_U_NONE
-#define __P001  PAGE_U_R
-#define __P010  PAGE_U_R   /* Pvt-W => !W */
-#define __P011  PAGE_U_R   /* Pvt-W => !W */
-#define __P100  PAGE_U_X_R /* X => R */
-#define __P101  PAGE_U_X_R
-#define __P110  PAGE_U_X_R /* Pvt-W => !W and X => R */
-#define __P111  PAGE_U_X_R /* Pvt-W => !W */
-
-#define __S000  PAGE_U_NONE
-#define __S001  PAGE_U_R
-#define __S010  PAGE_U_W_R /* W => R */
-#define __S011  PAGE_U_W_R
-#define __S100  PAGE_U_X_R /* X => R */
-#define __S101  PAGE_U_X_R
-#define __S110  PAGE_U_X_W_R   /* X => R */
-#define __S111  PAGE_U_X_W_R
-
 #ifndef __ASSEMBLY__
 
 #define pte_write(pte) (pte_val(pte) & _PAGE_WRITE)
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 722d26b94307..7dd50b66f266 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -74,3 +74,23 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
info.align_offset = pgoff << PAGE_SHIFT;
return vm_unmapped_area(&info);
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_U_NONE,
+   [VM_READ]   = PAGE_U_R,
+   [VM_WRITE]  = PAGE_U_R,
+   [VM_WRITE | VM_READ]= PAGE_U_R,
+   [VM_EXEC]   = PAGE_U_X_R,
+   [VM_EXEC | VM_READ] = PAGE_U_X_R,
+   [VM_EXEC | VM_WRITE]= PAGE_U_X_R,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_U_X_R,
+   [VM_SHARED] = PAGE_U_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_U_R,
+   [VM_SHARED | VM_WRITE]  = PAGE_U_W_R,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_U_W_R,
+   [VM_SHARED | VM_EXEC]   = PAGE_U_X_R,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_U_X_R,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_U_X_W_R,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_U_X_W_R
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 18/26] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: linux-s...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/s390/Kconfig   |  1 +
 arch/s390/include/asm/pgtable.h | 17 -
 arch/s390/mm/mmap.c | 20 
 3 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 91c0b80a8bf0..c4481377ca83 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -81,6 +81,7 @@ config S390
select ARCH_HAS_SYSCALL_WRAPPER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_HAS_VDSO_DATA
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index a397b072a580..c63a05b5368a 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -424,23 +424,6 @@ static inline int is_module_addr(void *addr)
  * implies read permission.
  */
  /*xwr*/
-#define __P000 PAGE_NONE
-#define __P001 PAGE_RO
-#define __P010 PAGE_RO
-#define __P011 PAGE_RO
-#define __P100 PAGE_RX
-#define __P101 PAGE_RX
-#define __P110 PAGE_RX
-#define __P111 PAGE_RX
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_RO
-#define __S010 PAGE_RW
-#define __S011 PAGE_RW
-#define __S100 PAGE_RX
-#define __S101 PAGE_RX
-#define __S110 PAGE_RWX
-#define __S111 PAGE_RWX
 
 /*
  * Segment entry (large page) protection definitions.
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index d545f5c39f7e..c745b545012b 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -188,3 +188,23 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_RO,
+   [VM_WRITE]  = PAGE_RO,
+   [VM_WRITE | VM_READ]= PAGE_RO,
+   [VM_EXEC]   = PAGE_RX,
+   [VM_EXEC | VM_READ] = PAGE_RX,
+   [VM_EXEC | VM_WRITE]= PAGE_RX,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_RX,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_RO,
+   [VM_SHARED | VM_WRITE]  = PAGE_RW,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_RW,
+   [VM_SHARED | VM_EXEC]   = PAGE_RX,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_RX,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_RWX,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 21/26] m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Thomas Bogendoerfer 
Cc: linux-m...@lists.linux-m68k.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/m68k/Kconfig|  1 +
 arch/m68k/include/asm/mcf_pgtable.h  | 54 ---
 arch/m68k/include/asm/motorola_pgtable.h | 22 --
 arch/m68k/include/asm/sun3_pgtable.h | 17 
 arch/m68k/mm/mcfmmu.c| 55 
 arch/m68k/mm/motorola.c  | 20 +
 arch/m68k/mm/sun3mmu.c   | 20 +
 7 files changed, 96 insertions(+), 93 deletions(-)

diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 936cce42ae9a..49aa0cf13e96 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -7,6 +7,7 @@ config M68K
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
select ARCH_NO_PREEMPT if !COLDFIRE
diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h
index 94f38d76e278..0e9c1b28dcab 100644
--- a/arch/m68k/include/asm/mcf_pgtable.h
+++ b/arch/m68k/include/asm/mcf_pgtable.h
@@ -91,60 +91,6 @@
  * for use. In general, the bit positions are xwr, and P-items are
  * private, the S-items are shared.
  */
-#define __P000 PAGE_NONE
-#define __P001 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE)
-#define __P010 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_WRITABLE)
-#define __P011 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE \
-| CF_PAGE_WRITABLE)
-#define __P100 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_EXEC)
-#define __P101 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE \
-| CF_PAGE_EXEC)
-#define __P110 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_WRITABLE \
-| CF_PAGE_EXEC)
-#define __P111 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE \
-| CF_PAGE_WRITABLE \
-| CF_PAGE_EXEC)
-
-#define __S000 PAGE_NONE
-#define __S001 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE)
-#define __S010 PAGE_SHARED
-#define __S011 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_SHARED \
-| CF_PAGE_READABLE)
-#define __S100 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_EXEC)
-#define __S101 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_READABLE \
-| CF_PAGE_EXEC)
-#define __S110 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_SHARED \
-| CF_PAGE_EXEC)
-#define __S111 __pgprot(CF_PAGE_VALID \
-| CF_PAGE_ACCESSED \
-| CF_PAGE_SHARED \
-| CF_PAGE_READABLE \
-| CF_PAGE_EXEC)
-
 #define PTE_MASK   PAGE_MASK
 #define CF_PAGE_CHG_MASK (PTE_MASK | CF_PAGE_ACCESSED | CF_PAGE_DIRTY)
 
diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 7c9b56e2a750..63aaece0722f 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -83,28 +83,6 @@ extern unsigned long mm_cachebits;
 #define PAGE_COPY_C   __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)
 #define PAGE_READONLY_C   __pgprot(_PAGE_PRESENT | _PAGE_RONLY | _PAGE_ACCESSED)
 
-/*
- * The m68k can't do page 

[PATCH V4 17/26] csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Geert Uytterhoeven 
Cc: linux-c...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/csky/Kconfig   |  1 +
 arch/csky/include/asm/pgtable.h | 18 --
 arch/csky/mm/init.c | 20 
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 21d72b078eef..588b8a9c68ed 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -6,6 +6,7 @@ config CSKY
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index bbe24511..229a5f4ad7fc 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -77,24 +77,6 @@
 #define MAX_SWAPFILES_CHECK() \
BUILD_BUG_ON(MAX_SWAPFILES_SHIFT != 5)
 
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_READ
-#define __P011 PAGE_READ
-#define __P100 PAGE_READ
-#define __P101 PAGE_READ
-#define __P110 PAGE_READ
-#define __P111 PAGE_READ
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_WRITE
-#define __S011 PAGE_WRITE
-#define __S100 PAGE_READ
-#define __S101 PAGE_READ
-#define __S110 PAGE_WRITE
-#define __S111 PAGE_WRITE
-
 extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 #define ZERO_PAGE(vaddr)   (virt_to_page(empty_zero_page))
 
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bf2004aa811a..1bf7b2a748fd 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -197,3 +197,23 @@ void __init fixaddr_init(void)
vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
fixrange_init(vaddr, vaddr + PMD_SIZE, swapper_pg_dir);
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READ,
+   [VM_WRITE]  = PAGE_READ,
+   [VM_WRITE | VM_READ]= PAGE_READ,
+   [VM_EXEC]   = PAGE_READ,
+   [VM_EXEC | VM_READ] = PAGE_READ,
+   [VM_EXEC | VM_WRITE]= PAGE_READ,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_READ,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READ,
+   [VM_SHARED | VM_WRITE]  = PAGE_WRITE,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_WRITE,
+   [VM_SHARED | VM_EXEC]   = PAGE_READ,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READ,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_WRITE,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_WRITE
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 20/26] mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Thomas Bogendoerfer 
Cc: linux-m...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/mips/Kconfig   |  1 +
 arch/mips/include/asm/pgtable.h | 22 --
 arch/mips/mm/cache.c|  3 +++
 3 files changed, 4 insertions(+), 22 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index db09d45d59ec..d0b7eb11ec81 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -14,6 +14,7 @@ config MIPS
select ARCH_HAS_STRNLEN_USER
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_KEEP_MEMBLOCK
select ARCH_SUPPORTS_UPROBES
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 374c6322775d..6caec386ad2f 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -41,28 +41,6 @@ struct vm_area_struct;
  * by reasonable means..
  */
 
-/*
- * Dummy values to fill the table in mmap.c
- * The real values will be generated at runtime
- */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
 extern unsigned long _page_cachable_default;
 extern void __update_cache(unsigned long address, pte_t pte);
 
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 7be7240f7703..11b3e7ddafd5 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -159,6 +159,9 @@ EXPORT_SYMBOL(_page_cachable_default);
 
 #define PM(p)  __pgprot(_page_cachable_default | (p))
 
+static pgprot_t protection_map[16] __ro_after_init;
+DECLARE_VM_GET_PAGE_PROT
+
 static inline void setup_protection_map(void)
 {
protection_map[0]  = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
-- 
2.25.1



[PATCH V4 16/26] riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Paul Walmsley 
Cc: Palmer Dabbelt 
Cc: linux-ri...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/riscv/Kconfig   |  1 +
 arch/riscv/include/asm/pgtable.h | 20 
 arch/riscv/mm/init.c | 20 
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b..583389d4e43a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -32,6 +32,7 @@ config RISCV
select ARCH_HAS_STRICT_MODULE_RWX if MMU && !XIP_KERNEL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UBSAN_SANITIZE_ALL
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 1d1be9d9419c..23e643db6575 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -186,26 +186,6 @@ extern struct pt_alloc_ops pt_ops __initdata;
 
 extern pgd_t swapper_pg_dir[];
 
-/* MAP_PRIVATE permissions: xwr (copy-on-write) */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READ
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_EXEC
-#define __P101 PAGE_READ_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_READ_EXEC
-
-/* MAP_SHARED permissions: xwr */
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READ
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_EXEC
-#define __S101 PAGE_READ_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_present(pmd_t pmd)
 {
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index d466ec670e1f..f976580500b1 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -288,6 +288,26 @@ static pmd_t __maybe_unused early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE)
 #define early_pg_dir   ((pgd_t *)XIP_FIXUP(early_pg_dir))
 #endif /* CONFIG_XIP_KERNEL */
 
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READ,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_EXEC,
+   [VM_EXEC | VM_READ] = PAGE_READ_EXEC,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_READ_EXEC,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READ,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_EXEC,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READ_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
+
 void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
 {
unsigned long addr = __fix_to_virt(idx);
-- 
2.25.1



[PATCH V4 15/26] nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all the __SXXX and __PXXX macros can be dropped, as they are no longer needed.

Cc: Dinh Nguyen 
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/nios2/Kconfig   |  1 +
 arch/nios2/include/asm/pgtable.h | 16 
 arch/nios2/mm/init.c | 20 
 3 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 4167f1eb4cd8..e0459dffd218 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -6,6 +6,7 @@ config NIOS2
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_HAS_DMA_SET_UNCACHED
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_SWAP
select COMMON_CLK
select TIMER_OF
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 262d0609268c..470516d4555e 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -40,24 +40,8 @@ struct mm_struct;
  */
 
 /* Remove W bit on private pages for COW support */
-#define __P000 MKP(0, 0, 0)
-#define __P001 MKP(0, 0, 1)
-#define __P010 MKP(0, 0, 0)/* COW */
-#define __P011 MKP(0, 0, 1)/* COW */
-#define __P100 MKP(1, 0, 0)
-#define __P101 MKP(1, 0, 1)
-#define __P110 MKP(1, 0, 0)/* COW */
-#define __P111 MKP(1, 0, 1)/* COW */
 
 /* Shared pages can have exact HW mapping */
-#define __S000 MKP(0, 0, 0)
-#define __S001 MKP(0, 0, 1)
-#define __S010 MKP(0, 1, 0)
-#define __S011 MKP(0, 1, 1)
-#define __S100 MKP(1, 0, 0)
-#define __S101 MKP(1, 0, 1)
-#define __S110 MKP(1, 1, 0)
-#define __S111 MKP(1, 1, 1)
 
 /* Used all over the kernel */
 #define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_CACHED | _PAGE_READ | \
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 613fcaa5988a..9a3dd4c80d70 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -124,3 +124,23 @@ const char *arch_vma_name(struct vm_area_struct *vma)
 {
return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = MKP(0, 0, 0),
+   [VM_READ]   = MKP(0, 0, 1),
+   [VM_WRITE]  = MKP(0, 0, 0),
+   [VM_WRITE | VM_READ]= MKP(0, 0, 1),
+   [VM_EXEC]   = MKP(1, 0, 0),
+   [VM_EXEC | VM_READ] = MKP(1, 0, 1),
+   [VM_EXEC | VM_WRITE]= MKP(1, 0, 0),
+   [VM_EXEC | VM_WRITE | VM_READ]  = MKP(1, 0, 1),
+   [VM_SHARED] = MKP(0, 0, 0),
+   [VM_SHARED | VM_READ]   = MKP(0, 0, 1),
+   [VM_SHARED | VM_WRITE]  = MKP(0, 1, 0),
+   [VM_SHARED | VM_WRITE | VM_READ]= MKP(0, 1, 1),
+   [VM_SHARED | VM_EXEC]   = MKP(1, 0, 0),
+   [VM_SHARED | VM_EXEC | VM_READ] = MKP(1, 0, 1),
+   [VM_SHARED | VM_EXEC | VM_WRITE]= MKP(1, 1, 0),
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = MKP(1, 1, 1)
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 11/26] xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Chris Zankel 
Cc: Guo Ren 
Cc: linux-xte...@linux-xtensa.org
Cc: linux-c...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/xtensa/Kconfig   |  1 +
 arch/xtensa/include/asm/pgtable.h | 18 --
 arch/xtensa/mm/init.c | 20 
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 0b0f0172cced..4c0d83520ff1 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -11,6 +11,7 @@ config XTENSA
select ARCH_HAS_DMA_SET_UNCACHED if MMU
select ARCH_HAS_STRNCPY_FROM_USER if !KASAN
select ARCH_HAS_STRNLEN_USER
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 0a91376131c5..e0d5531ae00d 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -200,24 +200,6 @@
  * What follows is the closest we can get by reasonable means..
  * See linux/mm/mmap.c for protection_map[] array that uses these definitions.
  */
-#define __P000 PAGE_NONE   /* private --- */
-#define __P001 PAGE_READONLY   /* private --r */
-#define __P010 PAGE_COPY   /* private -w- */
-#define __P011 PAGE_COPY   /* private -wr */
-#define __P100 PAGE_READONLY_EXEC  /* private x-- */
-#define __P101 PAGE_READONLY_EXEC  /* private x-r */
-#define __P110 PAGE_COPY_EXEC  /* private xw- */
-#define __P111 PAGE_COPY_EXEC  /* private xwr */
-
-#define __S000 PAGE_NONE   /* shared  --- */
-#define __S001 PAGE_READONLY   /* shared  --r */
-#define __S010 PAGE_SHARED /* shared  -w- */
-#define __S011 PAGE_SHARED /* shared  -wr */
-#define __S100 PAGE_READONLY_EXEC  /* shared  x-- */
-#define __S101 PAGE_READONLY_EXEC  /* shared  x-r */
-#define __S110 PAGE_SHARED_EXEC/* shared  xw- */
-#define __S111 PAGE_SHARED_EXEC/* shared  xwr */
-
 #ifndef __ASSEMBLY__
 
 #define pte_ERROR(e) \
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 6a32b2cf2718..7d5ac1b049c3 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -216,3 +216,23 @@ static int __init parse_memmap_opt(char *str)
return 0;
 }
 early_param("memmap", parse_memmap_opt);
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_EXEC,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_EXEC
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 14/26] alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Richard Henderson 
Cc: linux-al...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/alpha/Kconfig   |  1 +
 arch/alpha/include/asm/pgtable.h | 17 -
 arch/alpha/mm/init.c | 22 ++
 3 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 7d0d26b5b3f5..db1c8b329461 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,6 +2,7 @@
 config ALPHA
bool
default y
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_32BIT_USTAT_F_TINODE
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 170451fde043..3ea9661c09ff 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -116,23 +116,6 @@ struct vm_area_struct;
  * arch/alpha/mm/fault.c)
  */
/* xwr */
-#define __P000 _PAGE_P(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __P001 _PAGE_P(_PAGE_FOE | _PAGE_FOW)
-#define __P010 _PAGE_P(_PAGE_FOE)
-#define __P011 _PAGE_P(_PAGE_FOE)
-#define __P100 _PAGE_P(_PAGE_FOW | _PAGE_FOR)
-#define __P101 _PAGE_P(_PAGE_FOW)
-#define __P110 _PAGE_P(0)
-#define __P111 _PAGE_P(0)
-
-#define __S000 _PAGE_S(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR)
-#define __S001 _PAGE_S(_PAGE_FOE | _PAGE_FOW)
-#define __S010 _PAGE_S(_PAGE_FOE)
-#define __S011 _PAGE_S(_PAGE_FOE)
-#define __S100 _PAGE_S(_PAGE_FOW | _PAGE_FOR)
-#define __S101 _PAGE_S(_PAGE_FOW)
-#define __S110 _PAGE_S(0)
-#define __S111 _PAGE_S(0)
 
 /*
  * pgprot_noncached() is only for infiniband pci support, and a real
diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index 7511723b7669..a2350b2f44d0 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -280,3 +280,25 @@ mem_init(void)
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
memblock_free_all();
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = _PAGE_P(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR),
+   [VM_READ]   = _PAGE_P(_PAGE_FOE | _PAGE_FOW),
+   [VM_WRITE]  = _PAGE_P(_PAGE_FOE),
+   [VM_WRITE | VM_READ]= _PAGE_P(_PAGE_FOE),
+   [VM_EXEC]   = _PAGE_P(_PAGE_FOW | _PAGE_FOR),
+   [VM_EXEC | VM_READ] = _PAGE_P(_PAGE_FOW),
+   [VM_EXEC | VM_WRITE]= _PAGE_P(0),
+   [VM_EXEC | VM_WRITE | VM_READ]  = _PAGE_P(0),
+   [VM_SHARED] = _PAGE_S(_PAGE_FOE | _PAGE_FOW | _PAGE_FOR),
+   [VM_SHARED | VM_READ]   = _PAGE_S(_PAGE_FOE | _PAGE_FOW),
+   [VM_SHARED | VM_WRITE]  = _PAGE_S(_PAGE_FOE),
+   [VM_SHARED | VM_WRITE | VM_READ]= _PAGE_S(_PAGE_FOE),
+   [VM_SHARED | VM_EXEC]   = _PAGE_S(_PAGE_FOW | _PAGE_FOR),
+   [VM_SHARED | VM_EXEC | VM_READ] = _PAGE_S(_PAGE_FOW),
+   [VM_SHARED | VM_EXEC | VM_WRITE]= _PAGE_S(0),
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = _PAGE_S(0)
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 10/26] openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Jonas Bonn 
Cc: openr...@lists.librecores.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/openrisc/Kconfig   |  1 +
 arch/openrisc/include/asm/pgtable.h | 18 --
 arch/openrisc/mm/init.c | 20 
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index e814df4c483c..fe0dfb50eb86 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -10,6 +10,7 @@ config OPENRISC
select ARCH_HAS_DMA_SET_UNCACHED
select ARCH_HAS_DMA_CLEAR_UNCACHED
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+   select ARCH_HAS_VM_GET_PAGE_PROT
select COMMON_CLK
select OF
select OF_EARLY_FLATTREE
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index c3abbf71e09f..dcae8aea132f 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -176,24 +176,6 @@ extern void paging_init(void);
__pgprot(_PAGE_ALL | _PAGE_SRE | _PAGE_SWE \
 | _PAGE_SHARED | _PAGE_DIRTY | _PAGE_EXEC | _PAGE_CI)
 
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
-
 /* zero page used for uninitialized stuff */
 extern unsigned long empty_zero_page[2048];
 #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 3a021ab6f1ae..a654b9dcba91 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -208,3 +208,23 @@ void __init mem_init(void)
mem_init_done = 1;
return;
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY_X,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY_X,
+   [VM_EXEC]   = PAGE_READONLY,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY_X,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED_X,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 13/26] parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: "James E.J. Bottomley" 
Cc: linux-par...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/parisc/Kconfig   |  1 +
 arch/parisc/include/asm/pgtable.h | 18 --
 arch/parisc/mm/init.c | 20 
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 5f2448dc5a2b..90eabc846f81 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -11,6 +11,7 @@ config PARISC
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_UBSAN_SANITIZE_ALL
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_PTE_SPECIAL
select ARCH_NO_SG_CHAIN
select ARCH_SUPPORTS_HUGETLBFS if PA20
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 69765a6dbe89..6a1899a9b420 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -271,24 +271,6 @@ extern void __update_cache(pte_t pte);
  */
 
 /*xwr*/
-#define __P000  PAGE_NONE
-#define __P001  PAGE_READONLY
-#define __P010  __P000 /* copy on write */
-#define __P011  __P001 /* copy on write */
-#define __P100  PAGE_EXECREAD
-#define __P101  PAGE_EXECREAD
-#define __P110  __P100 /* copy on write */
-#define __P111  __P101 /* copy on write */
-
-#define __S000  PAGE_NONE
-#define __S001  PAGE_READONLY
-#define __S010  PAGE_WRITEONLY
-#define __S011  PAGE_SHARED
-#define __S100  PAGE_EXECREAD
-#define __S101  PAGE_EXECREAD
-#define __S110  PAGE_RWX
-#define __S111  PAGE_RWX
-
 
 extern pgd_t swapper_pg_dir[]; /* declared in init_task.c */
 
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 0a81499dd35e..451f20f87711 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -871,3 +871,23 @@ void flush_tlb_all(void)
spin_unlock(&sid_lock);
 }
 #endif
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_NONE,
+   [VM_WRITE | VM_READ]= PAGE_READONLY,
+   [VM_EXEC]   = PAGE_EXECREAD,
+   [VM_EXEC | VM_READ] = PAGE_EXECREAD,
+   [VM_EXEC | VM_WRITE]= PAGE_EXECREAD,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_EXECREAD,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_WRITEONLY,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_EXECREAD,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_EXECREAD,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_RWX,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_RWX
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 09/26] loongarch/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Huacai Chen 
Cc: WANG Xuerui 
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/loongarch/Kconfig|  1 +
 arch/loongarch/include/asm/pgtable-bits.h | 19 --
 arch/loongarch/mm/cache.c | 46 +++
 3 files changed, 47 insertions(+), 19 deletions(-)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 1920d52653b4..fd07b8e760ee 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,6 +9,7 @@ config LOONGARCH
select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_PTE_SPECIAL
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_INLINE_READ_LOCK if !PREEMPTION
select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
index 3badd112d9ab..9ca147a29bab 100644
--- a/arch/loongarch/include/asm/pgtable-bits.h
+++ b/arch/loongarch/include/asm/pgtable-bits.h
@@ -83,25 +83,6 @@
 _PAGE_GLOBAL | _PAGE_KERN |  _CACHE_SUC)
 #define PAGE_KERNEL_WUC __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
 _PAGE_GLOBAL | _PAGE_KERN |  _CACHE_WUC)
-
-#define __P000 __pgprot(_CACHE_CC | _PAGE_USER | _PAGE_PROTNONE | _PAGE_NO_EXEC | _PAGE_NO_READ)
-#define __P001 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P010 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P011 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __P100 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P101 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P110 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __P111 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-
-#define __S000 __pgprot(_CACHE_CC | _PAGE_USER | _PAGE_PROTNONE | _PAGE_NO_EXEC | _PAGE_NO_READ)
-#define __S001 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC)
-#define __S010 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE)
-#define __S011 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE)
-#define __S100 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __S101 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT)
-#define __S110 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_WRITE)
-#define __S111 __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_WRITE)
-
 #ifndef __ASSEMBLY__
 
 #define pgprot_noncached pgprot_noncached
diff --git a/arch/loongarch/mm/cache.c b/arch/loongarch/mm/cache.c
index 9e5ce5aa73f7..aa4ea357ea44 100644
--- a/arch/loongarch/mm/cache.c
+++ b/arch/loongarch/mm/cache.c
@@ -139,3 +139,49 @@ void cpu_cache_init(void)
 
shm_align_mask = PAGE_SIZE - 1;
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = __pgprot(_CACHE_CC | _PAGE_USER | _PAGE_PROTNONE | _PAGE_NO_EXEC | _PAGE_NO_READ),
+   [VM_READ]   = __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC),
+   [VM_WRITE]  = __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC),
+   [VM_WRITE | VM_READ]= __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT | _PAGE_NO_EXEC),
+   [VM_EXEC]   = __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT),
+   [VM_EXEC | VM_READ] = __pgprot(_CACHE_CC | _PAGE_VALID | _PAGE_USER | _PAGE_PRESENT),

[PATCH V4 12/26] hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Brian Cain 
Cc: linux-hexa...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/hexagon/Kconfig   |  1 +
 arch/hexagon/include/asm/pgtable.h | 27 ---
 arch/hexagon/mm/init.c | 42 ++
 3 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 54eadf265178..bc4ceecd0588 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -6,6 +6,7 @@ config HEXAGON
def_bool y
select ARCH_32BIT_OFF_T
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_NO_PREEMPT
select DMA_GLOBAL_POOL
# Other pending projects/to-do items.
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 0610724d6a28..f7048c18b6f9 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -126,33 +126,6 @@ extern unsigned long _dflt_cache_att;
  */
 #define CACHEDEF   (CACHE_DEFAULT << 6)
 
-/* Private (copy-on-write) page protections. */
-#define __P000 __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF)
-#define __P001 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | CACHEDEF)
-#define __P010 __P000  /* Write-only copy-on-write */
-#define __P011 __P001  /* Read/Write copy-on-write */
-#define __P100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
-   _PAGE_EXECUTE | CACHEDEF)
-#define __P101 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | \
-   _PAGE_READ | CACHEDEF)
-#define __P110 __P100  /* Write/execute copy-on-write */
-#define __P111 __P101  /* Read/Write/Execute, copy-on-write */
-
-/* Shared page protections. */
-#define __S000 __P000
-#define __S001 __P001
-#define __S010 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
-   _PAGE_WRITE | CACHEDEF)
-#define __S011 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
-   _PAGE_WRITE | CACHEDEF)
-#define __S100 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
-   _PAGE_EXECUTE | CACHEDEF)
-#define __S101 __P101
-#define __S110 __pgprot(_PAGE_PRESENT | _PAGE_USER | \
-   _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)
-#define __S111 __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | \
-   _PAGE_EXECUTE | _PAGE_WRITE | CACHEDEF)
-
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];  /* located in head.S */
 
 /*  HUGETLB not working currently  */
diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index 3167a3b5c97b..5d4a44a48ad0 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -234,3 +234,45 @@ void __init setup_arch_memory(void)
 *  which is called by start_kernel() later on in the process
 */
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF),
+   [VM_READ]   = __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | CACHEDEF),
+   [VM_WRITE]  = __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF),
+   [VM_WRITE | VM_READ]= __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_READ | CACHEDEF),
+   [VM_EXEC]   = __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | CACHEDEF),
+   [VM_EXEC | VM_READ] = __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | _PAGE_READ | CACHEDEF),
+   [VM_EXEC | VM_WRITE]= __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | CACHEDEF),
+   [VM_EXEC | VM_WRITE | VM_READ]  = __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_EXECUTE | _PAGE_READ | CACHEDEF),
+   [VM_SHARED] = __pgprot(_PAGE_PRESENT | _PAGE_USER | CACHEDEF),

[PATCH V4 08/26] microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports standard
vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT, which looks
up a private and static protection_map[] array. Subsequently all __SXXX and
__PXXX macros, which are no longer needed, can be dropped.

Cc: Michal Simek 
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/microblaze/Kconfig   |  1 +
 arch/microblaze/include/asm/pgtable.h | 17 -
 arch/microblaze/mm/init.c | 20 
 3 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 8cf429ad1c84..15f91ba8a0c4 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -7,6 +7,7 @@ config MICROBLAZE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_TABLE_SORT
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index 0c72646370e1..ba348e997dbb 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -204,23 +204,6 @@ extern pte_t *va_to_pte(unsigned long address);
  * We consider execute permission the same as read.
  * Also, write permissions imply read permissions.
  */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
 
 #ifndef __ASSEMBLY__
 /*
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index f4e503461d24..315fd5024f00 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -285,3 +285,23 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
 
return p;
 }
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY_X,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY_X,
+   [VM_EXEC]   = PAGE_READONLY,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY_X,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED_X,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
+};
+DECLARE_VM_GET_PAGE_PROT
-- 
2.25.1



[PATCH V4 07/26] mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
Now that protection_map[] has been moved inside those platforms which enable
ARCH_HAS_VM_GET_PAGE_PROT, the generic protection_map[] array can be protected
with CONFIG_ARCH_HAS_VM_GET_PAGE_PROT instead of __P000.

Cc: Andrew Morton 
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 include/linux/mm.h | 2 +-
 mm/mmap.c  | 5 +
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 237828c2bae2..70d900f6df43 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,7 @@ extern unsigned int kobjsize(const void *objp);
  * mapping from the currently active vm_flags protection bits (the
  * low four bits) to a page protection mask..
  */
-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 extern pgprot_t protection_map[16];
 #endif
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 55c30aee3999..43db3bd49071 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,7 +101,7 @@ static void unmap_region(struct mm_struct *mm,
  * w: (no) no
  * x: (yes) yes
  */
-#ifdef __P000
+#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE]   = __P000,
[VM_READ]   = __P001,
@@ -120,9 +120,6 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE]= __S110,
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __S111
 };
-#endif
-
-#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 DECLARE_VM_GET_PAGE_PROT
 #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
 
-- 
2.25.1



[PATCH V4 06/26] x86/mm: Move protection_map[] inside the platform

2022-06-23 Thread Anshuman Khandual
This moves protection_map[] inside the platform and makes it static. This
also defines a helper function add_encrypt_protection_map() that can update
the protection_map[] array with pgprot_encrypted().

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/x86/include/asm/pgtable_types.h | 19 ---
 arch/x86/mm/mem_encrypt_amd.c|  7 +++
 arch/x86/mm/pgprot.c | 27 +++
 3 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index bdaf8391e2e0..aa174fed3a71 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -230,25 +230,6 @@ enum page_cache_mode {
 
 #endif /* __ASSEMBLY__ */
 
-/* xwr */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_EXEC
-#define __P101 PAGE_READONLY_EXEC
-#define __P110 PAGE_COPY_EXEC
-#define __P111 PAGE_COPY_EXEC
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_EXEC
-#define __S101 PAGE_READONLY_EXEC
-#define __S110 PAGE_SHARED_EXEC
-#define __S111 PAGE_SHARED_EXEC
-
 /*
  * early identity mapping  pte attrib macros.
  */
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index f6d038e2cd8e..4b3ec87e8c7d 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -484,10 +484,10 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
enc_dec_hypercall(vaddr, npages, enc);
 }
 
+void add_encrypt_protection_map(void);
+
 void __init sme_early_init(void)
 {
-   unsigned int i;
-
if (!sme_me_mask)
return;
 
@@ -496,8 +496,7 @@ void __init sme_early_init(void)
__supported_pte_mask = __sme_set(__supported_pte_mask);
 
/* Update the protection map with memory encryption mask */
-   for (i = 0; i < ARRAY_SIZE(protection_map); i++)
-   protection_map[i] = pgprot_encrypted(protection_map[i]);
+   add_encrypt_protection_map();
 
x86_platform.guest.enc_status_change_prepare = amd_enc_status_change_prepare;
x86_platform.guest.enc_status_change_finish  = amd_enc_status_change_finish;
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
index 763742782286..b867839b16aa 100644
--- a/arch/x86/mm/pgprot.c
+++ b/arch/x86/mm/pgprot.c
@@ -4,6 +4,33 @@
 #include 
 #include 
 
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_EXEC,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_EXEC
+};
+
+void add_encrypt_protection_map(void)
+{
+   unsigned int i;
+
+   for (i = 0; i < ARRAY_SIZE(protection_map); i++)
+   protection_map[i] = pgprot_encrypted(protection_map[i]);
+}
+
 pgprot_t vm_get_page_prot(unsigned long vm_flags)
 {
unsigned long val = pgprot_val(protection_map[vm_flags &
-- 
2.25.1



[PATCH V4 05/26] arm64/mm: Move protection_map[] inside the platform

2022-06-23 Thread Anshuman Khandual
This moves protection_map[] inside the platform and makes it static.

Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/arm64/include/asm/pgtable-prot.h | 18 --
 arch/arm64/mm/mmap.c  | 21 +
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 62e0ebeed720..9b165117a454 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -89,24 +89,6 @@ extern bool arm64_use_ng_mappings;
#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
#define PAGE_EXECONLY  __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
 
-#define __P000  PAGE_NONE
-#define __P001  PAGE_READONLY
-#define __P010  PAGE_READONLY
-#define __P011  PAGE_READONLY
-#define __P100  PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __P101  PAGE_READONLY_EXEC
-#define __P110  PAGE_READONLY_EXEC
-#define __P111  PAGE_READONLY_EXEC
-
-#define __S000  PAGE_NONE
-#define __S001  PAGE_READONLY
-#define __S010  PAGE_SHARED
-#define __S011  PAGE_SHARED
-#define __S100  PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
-#define __S101  PAGE_READONLY_EXEC
-#define __S110  PAGE_SHARED_EXEC
-#define __S111  PAGE_SHARED_EXEC
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_PROT_H */
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 78e9490f748d..8f5b7ce857ed 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -13,6 +13,27 @@
 #include 
 #include 
 
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_READONLY,
+   [VM_WRITE | VM_READ]= PAGE_READONLY,
+   /* PAGE_EXECONLY if Enhanced PAN */
+   [VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_WRITE]= PAGE_READONLY_EXEC,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_READONLY_EXEC,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   /* PAGE_EXECONLY if Enhanced PAN */
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_EXEC,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_EXEC
+};
+
 /*
  * You really shouldn't be using read() or write() on /dev/mem.  This might go
  * away in the future.
-- 
2.25.1



[PATCH V4 04/26] sparc/mm: Move protection_map[] inside the platform

2022-06-23 Thread Anshuman Khandual
This moves protection_map[] inside the platform and while here, also enables
ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.

Cc: "David S. Miller" 
Cc: sparcli...@vger.kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/sparc/Kconfig  |  2 +-
 arch/sparc/include/asm/pgtable_32.h | 19 ---
 arch/sparc/include/asm/pgtable_64.h | 19 ---
 arch/sparc/mm/init_32.c | 20 
 arch/sparc/mm/init_64.c |  3 +++
 5 files changed, 24 insertions(+), 39 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index ba449c47effd..09f868613a4d 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -13,6 +13,7 @@ config 64BIT
 config SPARC
bool
default y
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
select ARCH_MIGHT_HAVE_PC_SERIO
select DMA_OPS
@@ -84,7 +85,6 @@ config SPARC64
select PERF_USE_VMALLOC
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select HAVE_C_RECORDMCOUNT
-   select ARCH_HAS_VM_GET_PAGE_PROT
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
diff --git a/arch/sparc/include/asm/pgtable_32.h 
b/arch/sparc/include/asm/pgtable_32.h
index 4866625da314..8ff549004fac 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -64,25 +64,6 @@ void paging_init(void);
 
 extern unsigned long ptr_in_current_pgd;
 
-/* xwr */
-#define __P000  PAGE_NONE
-#define __P001  PAGE_READONLY
-#define __P010  PAGE_COPY
-#define __P011  PAGE_COPY
-#define __P100  PAGE_READONLY
-#define __P101  PAGE_READONLY
-#define __P110  PAGE_COPY
-#define __P111  PAGE_COPY
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED
-
 /* First physical page can be anywhere, the following is needed so that
  * va-->pa and vice versa conversions work properly without performance
  * hit for all __pa()/__va() operations.
diff --git a/arch/sparc/include/asm/pgtable_64.h 
b/arch/sparc/include/asm/pgtable_64.h
index 4679e45c8348..a779418ceba9 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -187,25 +187,6 @@ bool kern_addr_valid(unsigned long addr);
 #define _PAGE_SZHUGE_4U_PAGE_SZ4MB_4U
 #define _PAGE_SZHUGE_4V_PAGE_SZ4MB_4V
 
-/* These are actually filled in at boot time by sun4{u,v}_pgprot_init() */
-#define __P000 __pgprot(0)
-#define __P001 __pgprot(0)
-#define __P010 __pgprot(0)
-#define __P011 __pgprot(0)
-#define __P100 __pgprot(0)
-#define __P101 __pgprot(0)
-#define __P110 __pgprot(0)
-#define __P111 __pgprot(0)
-
-#define __S000 __pgprot(0)
-#define __S001 __pgprot(0)
-#define __S010 __pgprot(0)
-#define __S011 __pgprot(0)
-#define __S100 __pgprot(0)
-#define __S101 __pgprot(0)
-#define __S110 __pgprot(0)
-#define __S111 __pgprot(0)
-
 #ifndef __ASSEMBLY__
 
 pte_t mk_pte_io(unsigned long, pgprot_t, int, unsigned long);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index 1e9f577f084d..8693e4e28b86 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -302,3 +302,23 @@ void sparc_flush_page_to_ram(struct page *page)
__flush_page_to_ram(vaddr);
 }
 EXPORT_SYMBOL(sparc_flush_page_to_ram);
+
+static pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_READONLY,
+   [VM_EXEC | VM_READ] = PAGE_READONLY,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED
+};
+DECLARE_VM_GET_PAGE_PROT
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index f6174df2d5af..d6faee23c77d 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2634,6 +2634,9 @@ void 

[PATCH V4 03/26] powerpc/mm: Move protection_map[] inside the platform

2022-06-23 Thread Anshuman Khandual
This moves protection_map[] inside the platform and while here, also enables
ARCH_HAS_VM_GET_PAGE_PROT on 32 bit platforms via DECLARE_VM_GET_PAGE_PROT.

Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 arch/powerpc/Kconfig   |  2 +-
 arch/powerpc/include/asm/pgtable.h | 20 +---
 arch/powerpc/mm/pgtable.c  | 24 
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c2ce2e60c8f0..1035d172c7dd 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,7 +140,7 @@ config PPC
select ARCH_HAS_TICK_BROADCAST  if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAS_UACCESS_FLUSHCACHE
select ARCH_HAS_UBSAN_SANITIZE_ALL
-   select ARCH_HAS_VM_GET_PAGE_PROTif PPC_BOOK3S_64
+   select ARCH_HAS_VM_GET_PAGE_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/powerpc/include/asm/pgtable.h 
b/arch/powerpc/include/asm/pgtable.h
index d564d0ecd4cd..bf98db844579 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -20,25 +20,6 @@ struct mm_struct;
 #include 
 #endif /* !CONFIG_PPC_BOOK3S */
 
-/* Note due to the way vm flags are laid out, the bits are XWR */
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_X
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY_X
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_X
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED_X
-#define __S111 PAGE_SHARED_X
-
 #ifndef __ASSEMBLY__
 
 #ifndef MAX_PTRS_PER_PGD
@@ -79,6 +60,7 @@ extern void paging_init(void);
 void poking_init(void);
 
 extern unsigned long ioremap_bot;
+extern pgprot_t protection_map[16] __ro_after_init;
 
 /*
  * kern_addr_valid is intended to indicate whether an address is a valid
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index e6166b71d36d..618f30d35b17 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -472,3 +472,27 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
return ret_pte;
 }
 EXPORT_SYMBOL_GPL(__find_linux_pte);
+
+/* Note due to the way vm flags are laid out, the bits are XWR */
+pgprot_t protection_map[16] __ro_after_init = {
+   [VM_NONE]   = PAGE_NONE,
+   [VM_READ]   = PAGE_READONLY,
+   [VM_WRITE]  = PAGE_COPY,
+   [VM_WRITE | VM_READ]= PAGE_COPY,
+   [VM_EXEC]   = PAGE_READONLY_X,
+   [VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_EXEC | VM_WRITE]= PAGE_COPY_X,
+   [VM_EXEC | VM_WRITE | VM_READ]  = PAGE_COPY_X,
+   [VM_SHARED] = PAGE_NONE,
+   [VM_SHARED | VM_READ]   = PAGE_READONLY,
+   [VM_SHARED | VM_WRITE]  = PAGE_SHARED,
+   [VM_SHARED | VM_WRITE | VM_READ]= PAGE_SHARED,
+   [VM_SHARED | VM_EXEC]   = PAGE_READONLY_X,
+   [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY_X,
+   [VM_SHARED | VM_EXEC | VM_WRITE]= PAGE_SHARED_X,
+   [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = PAGE_SHARED_X
+};
+
+#ifndef CONFIG_PPC_BOOK3S_64
+DECLARE_VM_GET_PAGE_PROT
+#endif
-- 
2.25.1



[PATCH V4 02/26] mm/mmap: Define DECLARE_VM_GET_PAGE_PROT

2022-06-23 Thread Anshuman Khandual
This just converts the generic vm_get_page_prot() implementation into a new
macro, i.e. DECLARE_VM_GET_PAGE_PROT, which can later be used across platforms
when enabling them with ARCH_HAS_VM_GET_PAGE_PROT. This does not create any
functional change.

Cc: Andrew Morton 
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Suggested-by: Christoph Hellwig 
Signed-off-by: Anshuman Khandual 
---
 include/linux/mm.h | 8 
 mm/mmap.c  | 6 +-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bfe038d46e..237828c2bae2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -428,6 +428,14 @@ extern unsigned int kobjsize(const void *objp);
 extern pgprot_t protection_map[16];
 #endif
 
+#define DECLARE_VM_GET_PAGE_PROT   \
+pgprot_t vm_get_page_prot(unsigned long vm_flags)  \
+{  \
+   return protection_map[vm_flags &\
+   (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];\
+}  \
+EXPORT_SYMBOL(vm_get_page_prot);
+
 /*
  * The default fault flags that should be used by most of the
  * arch-specific page fault handlers.
diff --git a/mm/mmap.c b/mm/mmap.c
index b01f0280bda2..55c30aee3999 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -123,11 +123,7 @@ pgprot_t protection_map[16] __ro_after_init = {
 #endif
 
 #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
-{
-   return protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
-}
-EXPORT_SYMBOL(vm_get_page_prot);
+DECLARE_VM_GET_PAGE_PROT
 #endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */
 
 static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
-- 
2.25.1



[PATCH V4 01/26] mm/mmap: Build protect protection_map[] with __P000

2022-06-23 Thread Anshuman Khandual
Build protect the generic protection_map[] array with __P000, so that it can be
moved inside all the platforms one after the other. Otherwise there will be
build failures during this process. CONFIG_ARCH_HAS_VM_GET_PAGE_PROT cannot
be used for this purpose as only certain platforms enable this config now.

Cc: Andrew Morton 
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Suggested-by: Christophe Leroy 
Signed-off-by: Anshuman Khandual 
---
 include/linux/mm.h | 2 ++
 mm/mmap.c  | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..47bfe038d46e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -424,7 +424,9 @@ extern unsigned int kobjsize(const void *objp);
  * mapping from the currently active vm_flags protection bits (the
  * low four bits) to a page protection mask..
  */
+#ifdef __P000
 extern pgprot_t protection_map[16];
+#endif
 
 /*
  * The default fault flags that should be used by most of the
diff --git a/mm/mmap.c b/mm/mmap.c
index 61e6135c54ef..b01f0280bda2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
  * w: (no) no
  * x: (yes) yes
  */
+#ifdef __P000
 pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE]   = __P000,
[VM_READ]   = __P001,
@@ -119,6 +120,7 @@ pgprot_t protection_map[16] __ro_after_init = {
[VM_SHARED | VM_EXEC | VM_WRITE]= __S110,
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]  = __S111
 };
+#endif
 
 #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
 pgprot_t vm_get_page_prot(unsigned long vm_flags)
-- 
2.25.1



[PATCH V4 00/26] mm/mmap: Drop __SXXX/__PXXX macros from across platforms

2022-06-23 Thread Anshuman Khandual
__SXXX/__PXXX macros are an unnecessary abstraction layer in creating the
generic protection_map[] array which is used for vm_get_page_prot(). This
abstraction layer can be avoided, if the platforms just define the array
protection_map[] for all possible vm_flags access permission combinations
and also export vm_get_page_prot() implementation.

This series drops the __SXXX/__PXXX macros from across platforms in the tree.
First it build protects the generic protection_map[] array with '#ifdef __P000'
and moves it inside the platforms which enable ARCH_HAS_VM_GET_PAGE_PROT. Later
it build protects the same array with '#ifdef ARCH_HAS_VM_GET_PAGE_PROT' and
moves it inside the remaining platforms while enabling ARCH_HAS_VM_GET_PAGE_PROT.
It then adds a new macro, DECLARE_VM_GET_PAGE_PROT, defining the current generic
vm_get_page_prot(), so that it can be reused on platforms that do not require
a custom implementation. Finally, ARCH_HAS_VM_GET_PAGE_PROT can just be
dropped, as all platforms now define and export vm_get_page_prot() via a
lookup into a private and static protection_map[] array. The protection_map[]
data type is the same on all platforms without deviation (except the powerpc
one, which is shared between 32 and 64 bit platforms), and is kept unchanged
for now.

static pgprot_t protection_map[16] __ro_after_init

This series applies on v5.19-rc3 and has been build tested for multiple
platforms. While here it has dropped off all previous tags from folks after
the current restructuring. Series common CC list has been expanded to cover
all impacted platforms for wider reach.

- Anshuman

Changes in V4:

- Both protection_map[] and vm_get_page_prot() moves inside all platforms
- Split patches to create modular changes for individual platforms
- Add macro DECLARE_VM_GET_PAGE_PROT defining generic vm_get_page_prot()
- Drop ARCH_HAS_VM_GET_PAGE_PROT

Changes in V3:

https://lore.kernel.org/all/20220616040924.1022607-1-anshuman.khand...@arm.com/

- Fix build issues on powerpc and riscv

Changes in V2:

https://lore.kernel.org/all/20220613053354.553579-1-anshuman.khand...@arm.com/

- Add 'const' identifier to protection_map[] on powerpc
- Dropped #ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT check from sparc 32
- Dropped protection_map[] init from sparc 64
- Dropped all new platform changes subscribing ARCH_HAS_VM_GET_PAGE_PROT
- Added a second patch which moves generic protection_map[] array into
  all remaining platforms (!ARCH_HAS_VM_GET_PAGE_PROT)

Changes in V1:

https://lore.kernel.org/all/20220603101411.488970-1-anshuman.khand...@arm.com/

Cc: Andrew Morton 
Cc: Christoph Hellwig 
Cc: Christophe Leroy 
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparcli...@vger.kernel.org
Cc: x...@kernel.org
Cc: openr...@lists.librecores.org
Cc: linux-xte...@linux-xtensa.org
Cc: linux-c...@vger.kernel.org
Cc: linux-hexa...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linux-al...@vger.kernel.org
Cc: linux-ri...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-s...@vger.kernel.org
Cc: linux-i...@vger.kernel.org
Cc: linux-m...@vger.kernel.org
Cc: linux-m...@lists.linux-m68k.org
Cc: linux-snps-...@lists.infradead.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux...@lists.infradead.org
Cc: linux...@vger.kernel.org
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org

Anshuman Khandual (26):
  mm/mmap: Build protect protection_map[] with __P000
  mm/mmap: Define DECLARE_VM_GET_PAGE_PROT
  powerpc/mm: Move protection_map[] inside the platform
  sparc/mm: Move protection_map[] inside the platform
  arm64/mm: Move protection_map[] inside the platform
  x86/mm: Move protection_map[] inside the platform
  mm/mmap: Build protect protection_map[] with ARCH_HAS_VM_GET_PAGE_PROT
  microblaze/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  loongarch/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  openrisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  xtensa/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  hexagon/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  parisc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  alpha/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  nios2/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  riscv/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  csky/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  ia64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  mips/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  m68k/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  arc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  arm/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  um/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  sh/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
  mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT

 arch/alpha/include/asm/pgtable.h  | 17 ---
 arch/alpha/mm/init.c  | 22 +
 arch/arc/include/asm/pgtable-bits-arcv2.h | 18 
 arch/arc/mm/mmap.c| 20 +
 arch/arm/include/asm/pgtable.h| 17 ---
 arch/arm/lib/uaccess_with_memcpy.c|  2 +-
 arch/arm/mm/mmu.c | 20 +
 arch/arm64/Kconfig

Re: [PATCH v3 0/5] of: add of_property_alloc/free() and of_node_alloc()

2022-06-23 Thread Frank Rowand
Sorry for the lack of response, it's been a busy week.  I will get to this
soon.

-Frank

On 6/20/22 06:41, Clément Léger wrote:
> In order to be able to create new nodes and properties dynamically from
> drivers, add of_property_alloc/free() and of_node_alloc(). These
> functions can be used to create new nodes and properties flagged with
> OF_DYNAMIC and to free them.
> 
> Some powerpc code was already doing such operations and thus, these
> functions have been used to replace the manual creation of nodes and
> properties. This code has been more than simply replaced to allow using
> of_node_put() rather than a manual deletion of the properties.
> Unfortunately, as I don't own a powerpc platform, it would need to be
> tested.
> 
> ---
> 
> Changes in V3:
> - Remove gfpflag attribute from of_node_alloc() and of_property_alloc().
> - Removed allocflags from __of_node_dup().
> - Rework powerpc code to only use of_node_put().
> - Fix properties free using of_node_property in OF unittests.
> 
> Changes in V2:
> - Remove of_node_free()
> - Rework property allocation to allocate both property and value with
>   1 allocation
> - Rework node allocation to allocate name at the same time the node is
>   allocated
> - Remove extern from definitions
> - Remove of_property_alloc() value_len parameter and add more
>   explanation for the arguments
> - Add a check in of_property_free to check OF_DYNAMIC flag
> - Add a commit which constify the property argument of
>   of_property_check_flags()
> 
> Clément Léger (5):
>   of: constify of_property_check_flags() prop argument
>   of: remove __of_node_dup() allocflags parameter
>   of: dynamic: add of_property_alloc() and of_property_free()
>   of: dynamic: add of_node_alloc()
>   powerpc/pseries: use of_property_alloc/free() and of_node_alloc()
> 
>  arch/powerpc/platforms/pseries/dlpar.c|  62 +---
>  .../platforms/pseries/hotplug-memory.c|  21 +--
>  arch/powerpc/platforms/pseries/reconfig.c | 123 ++--
>  drivers/of/dynamic.c  | 137 --
>  drivers/of/of_private.h   |  19 ++-
>  drivers/of/overlay.c  |   2 +-
>  drivers/of/unittest.c |  24 ++-
>  include/linux/of.h|  24 ++-
>  8 files changed, 191 insertions(+), 221 deletions(-)
> 



Re: [PATCH RFC] drivers/usb/ehci-fsl: Fix interrupt setup in host mode.

2022-06-23 Thread Michael Ellerman
Darren Stevens  writes:
> In patch a1a2b7125e1079 (Drop static setup of IRQ resource from DT
> core) we stopped platform_get_resource() from returning the IRQ, as all
> drivers were supposed to have switched to platform_get_irq().
> Unfortunately, the Freescale EHCI driver in host mode got missed. Fix
> it. Also fix the allocation of resources to work with the current kernel.
>
> Fixes: a1a2b7125e1079 ("Drop static setup of IRQ resource from DT core")
> Reported-by: Christian Zigotzky 
> Signed-off-by: Darren Stevens 
> ---
> Tested on AmigaOne X5000/20 and X5000/40 not sure if this is entirely
> correct fix though. Contains code by Rob Herring (in fsl-mph-dr-of.c)

It looks like this driver is used on some arm/arm64 boards:

  $ git grep -l fsl-usb2-dr arch/arm*/boot/dts
  arch/arm/boot/dts/ls1021a.dtsi
  arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi

Which is for the "Layerscape-1012A family SoC".

Have we heard of any bug reports from users of those boards? Is it wired
up differently or otherwise immune to the problem?

I've added the Layerscape maintainers to Cc.

cheers

> diff --git a/drivers/usb/host/fsl-mph-dr-of.c
> b/drivers/usb/host/fsl-mph-dr-of.c index 44a7e58..766e4ab 100644
> --- a/drivers/usb/host/fsl-mph-dr-of.c
> +++ b/drivers/usb/host/fsl-mph-dr-of.c
> @@ -80,8 +80,6 @@ static struct platform_device
> *fsl_usb2_device_register( const char *name, int id)
>  {
>   struct platform_device *pdev;
> - const struct resource *res = ofdev->resource;
> - unsigned int num = ofdev->num_resources;
>   int retval;
>  
>   pdev = platform_device_alloc(name, id);
> @@ -106,11 +104,8 @@ static struct platform_device
> *fsl_usb2_device_register( if (retval)
>   goto error;
>  
> - if (num) {
> - retval = platform_device_add_resources(pdev, res, num);
> - if (retval)
> - goto error;
> - }
> + pdev->dev.of_node = ofdev->dev.of_node;
> + pdev->dev.of_node_reused = true;
>  
>   retval = platform_device_add(pdev);
>   if (retval)


Re: [PATCH] powerpc: dts: Add DTS file for CZ.NIC Turris 1.x routers

2022-06-23 Thread Michael Ellerman
Pali Rohár  writes:
> CZ.NIC Turris 1.0 and 1.1 are open source routers; they have a dual-core
> PowerPC Freescale P2020 CPU and are based on Freescale P2020RDB-PC-A board.
> Hardware design is fully open source, all firmware and hardware design
> files are available at Turris project website:
>
> https://docs.turris.cz/hw/turris-1x/turris-1x/
> https://project.turris.cz/en/hardware.html
>
> Signed-off-by: Pali Rohár 
> ---
>  arch/powerpc/boot/dts/turris1x.dts | 470 +
>  1 file changed, 470 insertions(+)
>  create mode 100644 arch/powerpc/boot/dts/turris1x.dts

The headers say you Cc'ed this to the devicetree list, but I don't see
it in the devicetree patchwork:

  
https://patchwork.ozlabs.org/project/devicetree-bindings/list/?state=*=turris=both

Which means it hasn't been run through Rob's CI scripts.

Maybe try a resend?

cheers


Re: [PATCH 3/3] powerpc/mm: Use VMALLOC_START to validate addr

2022-06-23 Thread Michael Ellerman
"Aneesh Kumar K.V"  writes:
> Instead of high_memory use VMALLOC_START to validate that the address is
> not in the vmalloc range.
>
> Cc: Kefeng Wang 
> Cc: Christophe Leroy 
> Signed-off-by: Aneesh Kumar K.V 

Isn't this really the fix for ffa0b64e3be5 ("powerpc: Fix
virt_addr_valid() for 64-bit Book3E & 32-bit") ?

cheers

> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index e5f75c70eda8..256cad69e42e 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -134,7 +134,7 @@ static inline bool pfn_valid(unsigned long pfn)
>  
>  #define virt_addr_valid(vaddr)   ({  
> \
>   unsigned long _addr = (unsigned long)vaddr; \
> - _addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&   \
> + _addr >= PAGE_OFFSET && _addr < (unsigned long)VMALLOC_START && \
>   pfn_valid(virt_to_pfn(_addr));  \
>  })
>  
> -- 
> 2.36.1


Re: [PATCH] kexec: replace crash_mem_range with range

2022-06-23 Thread Baoquan He
Hi,

On 06/15/22 at 07:37pm, Li Chen wrote:
> Hi Baoquan,
> 
>   On Wed, 15 Jun 2022 19:03:53 -0700 Baoquan He  wrote 
> 
>  > On 06/14/22 at 10:04pm, Li Chen wrote:
>  > > From: Li Chen 
>  > > 
>  > > We already have struct range, so just use it.
>  > 
>  > Looks good, have you tested it?
> 
> No, I don't have ppc machine, just pass compile on x86.

Can someone from the ppc side help test this patch? If not, I will try to
find a ppc machine to run the test. Thanks in advance.

Thanks
Baoquan

> 
> 
>  > 
>  > > 
>  > > Signed-off-by: Li Chen 
>  > > ---
>  > >  arch/powerpc/kexec/file_load_64.c | 2 +-
>  > >  arch/powerpc/kexec/ranges.c   | 8 
>  > >  include/linux/kexec.h | 7 ++-
>  > >  kernel/kexec_file.c   | 2 +-
>  > >  4 files changed, 8 insertions(+), 11 deletions(-)
>  > > 
>  > > diff --git a/arch/powerpc/kexec/file_load_64.c 
> b/arch/powerpc/kexec/file_load_64.c
>  > > index b4981b651d9a..583b7fc478f2 100644
>  > > --- a/arch/powerpc/kexec/file_load_64.c
>  > > +++ b/arch/powerpc/kexec/file_load_64.c
>  > > @@ -34,7 +34,7 @@ struct umem_info {
>  > >  
>  > >  /* usable memory ranges to look up */
>  > >  unsigned int nr_ranges;
>  > > -const struct crash_mem_range *ranges;
>  > > +const struct range *ranges;
>  > >  };
>  > >  
>  > >  const struct kexec_file_ops * const kexec_file_loaders[] = {
>  > > diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
>  > > index 563e9989a5bf..5fc53a5fcfdf 100644
>  > > --- a/arch/powerpc/kexec/ranges.c
>  > > +++ b/arch/powerpc/kexec/ranges.c
>  > > @@ -33,7 +33,7 @@
>  > >  static inline unsigned int get_max_nr_ranges(size_t size)
>  > >  {
>  > >  return ((size - sizeof(struct crash_mem)) /
>  > > -sizeof(struct crash_mem_range));
>  > > +sizeof(struct range));
>  > >  }
>  > >  
>  > >  /**
>  > > @@ -51,7 +51,7 @@ static inline size_t get_mem_rngs_size(struct 
> crash_mem *mem_rngs)
>  > >  return 0;
>  > >  
>  > >  size = (sizeof(struct crash_mem) +
>  > > -(mem_rngs->max_nr_ranges * sizeof(struct crash_mem_range)));
>  > > +(mem_rngs->max_nr_ranges * sizeof(struct range)));
>  > >  
>  > >  /*
>  > >   * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ.
>  > > @@ -98,7 +98,7 @@ static int __add_mem_range(struct crash_mem 
> **mem_ranges, u64 base, u64 size)
>  > >   */
>  > >  static void __merge_memory_ranges(struct crash_mem *mem_rngs)
>  > >  {
>  > > -struct crash_mem_range *ranges;
>  > > +struct range *ranges;
>  > >  int i, idx;
>  > >  
>  > >  if (!mem_rngs)
>  > > @@ -123,7 +123,7 @@ static void __merge_memory_ranges(struct crash_mem 
> *mem_rngs)
>  > >  /* cmp_func_t callback to sort ranges with sort() */
>  > >  static int rngcmp(const void *_x, const void *_y)
>  > >  {
>  > > -const struct crash_mem_range *x = _x, *y = _y;
>  > > +const struct range *x = _x, *y = _y;
>  > >  
>  > >  if (x->start > y->start)
>  > >  return 1;
>  > > diff --git a/include/linux/kexec.h b/include/linux/kexec.h
>  > > index 58d1b58a971e..d7ab4ad4c619 100644
>  > > --- a/include/linux/kexec.h
>  > > +++ b/include/linux/kexec.h
>  > > @@ -17,6 +17,7 @@
>  > >  
>  > >  #include 
>  > >  #include 
>  > > +#include 
>  > >  
>  > >  #include 
>  > >  
>  > > @@ -214,14 +215,10 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf);
>  > >  /* Alignment required for elf header segment */
>  > >  #define ELF_CORE_HEADER_ALIGN   4096
>  > >  
>  > > -struct crash_mem_range {
>  > > -u64 start, end;
>  > > -};
>  > > -
>  > >  struct crash_mem {
>  > >  unsigned int max_nr_ranges;
>  > >  unsigned int nr_ranges;
>  > > -struct crash_mem_range ranges[];
>  > > +struct range ranges[];
>  > >  };
>  > >  
>  > >  extern int crash_exclude_mem_range(struct crash_mem *mem,
>  > > diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
>  > > index 8347fc158d2b..f2758af86b93 100644
>  > > --- a/kernel/kexec_file.c
>  > > +++ b/kernel/kexec_file.c
>  > > @@ -1183,7 +1183,7 @@ int crash_exclude_mem_range(struct crash_mem *mem,
>  > >  {
>  > >  int i, j;
>  > >  unsigned long long start, end, p_start, p_end;
>  > > -struct crash_mem_range temp_range = {0, 0};
>  > > +struct range temp_range = {0, 0};
>  > >  
>  > >  for (i = 0; i < mem->nr_ranges; i++) {
>  > >  start = mem->ranges[i].start;
>  > > -- 
>  > > 2.36.1
>  > > 
>  > > 
>  > > 
>  > > ___
>  > > kexec mailing list
>  > > ke...@lists.infradead.org
>  > > http://lists.infradead.org/mailman/listinfo/kexec
>  > > 
>  > 
>  > 
> 



[PATCH] powerpc/prom_init: Fix kernel config grep

2022-06-23 Thread Liam Howlett
When searching for config options, use the KCONFIG_CONFIG shell variable so
that builds using non-standard config locations work.

Signed-off-by: Liam R. Howlett 
---
 arch/powerpc/kernel/prom_init_check.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/prom_init_check.sh 
b/arch/powerpc/kernel/prom_init_check.sh
index b183ab9c5107..dfa5f729f774 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -13,7 +13,7 @@
 # If you really need to reference something from prom_init.o add
 # it to the list below:
 
-grep "^CONFIG_KASAN=y$" .config >/dev/null
+grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
 if [ $? -eq 0 ]
 then
MEM_FUNCS="__memcpy __memset"
-- 
2.35.1
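
A hedged sketch of the resulting idiom, written as a standalone helper rather
than the real prom_init_check.sh logic; the function name and the fallback to
.config when KCONFIG_CONFIG is unset are assumptions for illustration.

```shell
# kasan_mem_funcs CONFIG_FILE: print the mem function list matching the
# given kernel config (sketch only, not the actual script).
kasan_mem_funcs() {
	# Honor KCONFIG_CONFIG (set for O=/KBUILD_OUTPUT builds); the
	# fallback to .config is an assumption for illustration.
	config="${1:-${KCONFIG_CONFIG:-.config}}"
	if grep -q "^CONFIG_KASAN=y$" "$config"; then
		echo "__memcpy __memset"
	else
		echo "memcpy memset"
	fi
}
```

Using `grep -q` also avoids the explicit `$?` test and the `/dev/null`
redirect in the original script.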


[PATCH] crypto: vmx - drop unexpected word 'for' in comments

2022-06-23 Thread Jiang Jian
There is an unexpected word 'for' in the comments that needs to be dropped.

file - drivers/crypto/vmx/ghashp8-ppc.pl
line - 19

"# GHASH for for PowerISA v2.07."

changed to:

"# GHASH for PowerISA v2.07."

Signed-off-by: Jiang Jian 
---
 drivers/crypto/vmx/ghashp8-ppc.pl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/vmx/ghashp8-ppc.pl 
b/drivers/crypto/vmx/ghashp8-ppc.pl
index 09bba1852eec..041e633c214f 100644
--- a/drivers/crypto/vmx/ghashp8-ppc.pl
+++ b/drivers/crypto/vmx/ghashp8-ppc.pl
@@ -16,7 +16,7 @@
 # details see https://www.openssl.org/~appro/cryptogams/.
 # 
 #
-# GHASH for for PowerISA v2.07.
+# GHASH for PowerISA v2.07.
 #
 # July 2014
 #
-- 
2.17.1



[PATCH] KVM: Fix spelling mistake

2022-06-23 Thread Zhang Jiaming
Change 'subsquent' to 'subsequent'.
Change 'accross' to 'across'.

Signed-off-by: Zhang Jiaming 
---
 arch/powerpc/kvm/book3s_xive.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
index 4ca23644f752..b4b680f2d853 100644
--- a/arch/powerpc/kvm/book3s_xive.c
+++ b/arch/powerpc/kvm/book3s_xive.c
@@ -539,7 +539,7 @@ static int xive_vm_h_eoi(struct kvm_vcpu *vcpu, unsigned 
long xirr)
if (irq == XICS_IPI || irq == 0) {
/*
 * This barrier orders the setting of xc->cppr vs.
-* subsquent test of xc->mfrr done inside
+* subsequent test of xc->mfrr done inside
 * scan_interrupts and push_pending_to_hw
 */
smp_mb();
@@ -563,7 +563,7 @@ static int xive_vm_h_eoi(struct kvm_vcpu *vcpu, unsigned 
long xirr)
/*
 * This barrier orders both setting of in_eoi above vs,
 * subsequent test of guest_priority, and the setting
-* of xc->cppr vs. subsquent test of xc->mfrr done inside
+* of xc->cppr vs. subsequent test of xc->mfrr done inside
 * scan_interrupts and push_pending_to_hw
 */
smp_mb();
@@ -2392,7 +2392,7 @@ static int xive_set_source(struct kvmppc_xive *xive, long 
irq, u64 addr)
/*
 * Now, we select a target if we have one. If we don't we
 * leave the interrupt untargetted. It means that an interrupt
-* can become "untargetted" accross migration if it was masked
+* can become "untargetted" across migration if it was masked
 * by set_xive() but there is little we can do about it.
 */
 
-- 
2.25.1



[PATCH] powerpc/eeh: drop unexpected word 'for' in comments

2022-06-23 Thread Jiang Jian
There is an unexpected word 'for' in the comments that needs to be dropped.

file - arch/powerpc/kernel/eeh_driver.c
line - 753

* presence state. This might happen for for PCIe slots if the PE containing

changed to:

* presence state. This might happen for PCIe slots if the PE containing

Signed-off-by: Jiang Jian 
---
 arch/powerpc/kernel/eeh_driver.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
index 260273e56431..f279295179bd 100644
--- a/arch/powerpc/kernel/eeh_driver.c
+++ b/arch/powerpc/kernel/eeh_driver.c
@@ -750,7 +750,7 @@ static void eeh_pe_cleanup(struct eeh_pe *pe)
  * @pdev: pci_dev to check
  *
  * This function may return a false positive if we can't determine the slot's
- * presence state. This might happen for for PCIe slots if the PE containing
+ * presence state. This might happen for PCIe slots if the PE containing
  * the upstream bridge is also frozen, or the bridge is part of the same PE
  * as the device.
  *
-- 
2.17.1



[PATCH] powerpc/64s: drop unexpected word 'and' in the comments

2022-06-23 Thread Jiang Jian
There is an unexpected word 'and' in the comments that needs to be dropped.

file: arch/powerpc/kernel/exceptions-64s.S
line: 2782

* - If it was a decrementer interrupt, we bump the dec to max and and return.

changed to:

* - If it was a decrementer interrupt, we bump the dec to max and return.

Signed-off-by: Jiang Jian 
---
 arch/powerpc/kernel/exceptions-64s.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index b66dd6f775a4..3d0dc133a9ae 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2779,7 +2779,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
 
 /*
  * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
- * - If it was a decrementer interrupt, we bump the dec to max and and return.
+ * - If it was a decrementer interrupt, we bump the dec to max and return.
  * - If it was a doorbell we return immediately since doorbells are edge
  *   triggered and won't automatically refire.
  * - If it was a HMI we return immediately since we handled it in realmode
-- 
2.17.1



[PATCH] powerpc/ptrace: drop unexpected word 'and' in the comments

2022-06-23 Thread Jiang Jian
there is an unexpected word 'and' in the comments that needs to be dropped

file & line:
arch/powerpc/kernel/ptrace/ptrace-vsx.c:74:

* Currently to set and and get all the vsx state, you need to call
changed to:
* Currently to set and get all the vsx state, you need to call

Signed-off-by: Jiang Jian 
---
 arch/powerpc/kernel/ptrace/ptrace-vsx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/ptrace/ptrace-vsx.c 
b/arch/powerpc/kernel/ptrace/ptrace-vsx.c
index 1da4303128ef..7df08004c47d 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-vsx.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-vsx.c
@@ -71,7 +71,7 @@ int fpr_set(struct task_struct *target, const struct 
user_regset *regset,
 }
 
 /*
- * Currently to set and and get all the vsx state, you need to call
+ * Currently to set and get all the vsx state, you need to call
  * the fp and VMX calls as well.  This only get/sets the lower 32
  * 128bit VSX registers.
  */
-- 
2.17.1



[PATCH] cxl: drop unexpected word "the" in the comments

2022-06-23 Thread Jiang Jian
there is an unexpected word "the" in the comments that needs to be dropped

file: drivers/misc/cxl/cxl.h
line: 1107
+/* check if the given pci_dev is on the the cxl vphb bus */
changed to
+/* check if the given pci_dev is on the cxl vphb bus */

Signed-off-by: Jiang Jian 
---
 drivers/misc/cxl/cxl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 7a6dd91987fd..0562071cdd4a 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -1104,7 +1104,7 @@ extern const struct cxl_backend_ops cxl_native_ops;
 extern const struct cxl_backend_ops cxl_guest_ops;
 extern const struct cxl_backend_ops *cxl_ops;
 
-/* check if the given pci_dev is on the the cxl vphb bus */
+/* check if the given pci_dev is on the cxl vphb bus */
 bool cxl_pci_is_vphb_device(struct pci_dev *dev);
 
 /* decode AFU error bits in the PSL register PSL_SERR_An */
-- 
2.17.1



[PATCH] crypto: nx - drop unexpected word "the"

2022-06-23 Thread Jiang Jian
there is an unexpected word "the" in the comments that needs to be dropped

>- * The DDE is setup with the the DDE count, byte count, and address of
>+ * The DDE is setup with the DDE count, byte count, and address of

Signed-off-by: Jiang Jian 
---
 drivers/crypto/nx/nx-common-powernv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/nx/nx-common-powernv.c 
b/drivers/crypto/nx/nx-common-powernv.c
index f418817c0f43..f34c75a862f2 100644
--- a/drivers/crypto/nx/nx-common-powernv.c
+++ b/drivers/crypto/nx/nx-common-powernv.c
@@ -75,7 +75,7 @@ static int (*nx842_powernv_exec)(const unsigned char *in,
 /**
  * setup_indirect_dde - Setup an indirect DDE
  *
- * The DDE is setup with the the DDE count, byte count, and address of
+ * The DDE is setup with the DDE count, byte count, and address of
  * first direct DDE in the list.
  */
 static void setup_indirect_dde(struct data_descriptor_entry *dde,
-- 
2.17.1



[linux-next:master] BUILD REGRESSION 08897940f458ee55416cf80ab13d2d8b9f20038e

2022-06-23 Thread kernel test robot
tree/branch: 
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
branch HEAD: 08897940f458ee55416cf80ab13d2d8b9f20038e  Add linux-next specific 
files for 20220623

Error/Warning reports:

https://lore.kernel.org/linux-mm/202206212029.yr5m7cd3-...@intel.com
https://lore.kernel.org/linux-mm/202206212033.3lgl72fw-...@intel.com
https://lore.kernel.org/lkml/202206071511.fi7wldzo-...@intel.com

Error/Warning: (recently discovered and may have been fixed)

arch/powerpc/kernel/interrupt.c:542:55: error: suggest braces around empty body 
in an 'if' statement [-Werror=empty-body]
arch/powerpc/kernel/interrupt.c:542:55: warning: suggest braces around empty 
body in an 'if' statement [-Wempty-body]
drivers/gpu/drm/amd/amdgpu/../display/dc/core/dc_link.c:1025:33: warning: 
variable 'pre_connection_type' set but not used [-Wunused-but-set-variable]
net/ipv6/raw.c:335:25: warning: variable 'saddr' set but not used 
[-Wunused-but-set-variable]
net/ipv6/raw.c:335:32: warning: variable 'saddr' set but not used 
[-Wunused-but-set-variable]
net/ipv6/raw.c:335:33: warning: variable 'daddr' set but not used 
[-Wunused-but-set-variable]
net/ipv6/raw.c:335:40: warning: variable 'daddr' set but not used 
[-Wunused-but-set-variable]

Unverified Error/Warning (likely false positive, please contact us if 
interested):

drivers/ufs/host/ufs-mediatek.c:1391:5: sparse: sparse: symbol 
'ufs_mtk_runtime_suspend' was not declared. Should it be static?
drivers/ufs/host/ufs-mediatek.c:1405:5: sparse: sparse: symbol 
'ufs_mtk_runtime_resume' was not declared. Should it be static?

Error/Warning ids grouped by kconfigs:

gcc_recent_errors
|-- alpha-allmodconfig
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- alpha-allyesconfig
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arc-allyesconfig
|   |-- 
drivers-gpu-drm-amd-amdgpu-..-display-dc-core-dc_link.c:warning:variable-pre_connection_type-set-but-not-used
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm-allyesconfig
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm-aspeed_g5_defconfig
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm-defconfig
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm-u8500_defconfig
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm64-allyesconfig
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- arm64-randconfig-s031-20220622
|   |-- 
drivers-misc-lkdtm-cfi.c:sparse:sparse:Using-plain-integer-as-NULL-pointer
|   |-- 
drivers-ufs-host-ufs-mediatek.c:sparse:sparse:symbol-ufs_mtk_runtime_resume-was-not-declared.-Should-it-be-static
|   |-- 
drivers-ufs-host-ufs-mediatek.c:sparse:sparse:symbol-ufs_mtk_runtime_suspend-was-not-declared.-Should-it-be-static
|   |-- 
drivers-vfio-pci-vfio_pci_config.c:sparse:sparse:restricted-pci_power_t-degrades-to-integer
|   |-- 
fs-xfs-xfs_file.c:sparse:sparse:incorrect-type-in-assignment-(different-base-types)-expected-restricted-vm_fault_t-usertype-ret-got-int
|   |-- 
fs-xfs-xfs_file.c:sparse:sparse:incorrect-type-in-return-expression-(different-base-types)-expected-int-got-restricted-vm_fault_t
|   `-- 
kernel-signal.c:sparse:sparse:incorrect-type-in-argument-(different-address-spaces)-expected-struct-lockdep_map-const-lock-got-struct-lockdep_map-noderef-__rcu
|-- i386-allyesconfig
|   |-- 
drivers-staging-rtl8723bs-hal-hal_btcoex.c:warning:variable-pHalData-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   |-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|   `-- ntb_perf.c:(.text):undefined-reference-to-__umoddi3
|-- i386-debian-10.3
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- i386-debian-10.3-kselftests
|   |-- net-ipv6-raw.c:warning:variable-daddr-set-but-not-used
|   `-- net-ipv6-raw.c:warning:variable-saddr-set-but-not-used
|-- i386-defconfig
|   |-- net

Re: [PATCH] powerpc/xive/spapr: correct bitmap allocation size

2022-06-23 Thread Cédric Le Goater

On 6/23/22 20:25, Nathan Lynch wrote:

kasan detects access beyond the end of the xibm->bitmap allocation:

BUG: KASAN: slab-out-of-bounds in _find_first_zero_bit+0x40/0x140
Read of size 8 at addr c0001d1d0118 by task swapper/0/1

CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc2-1-g90df023b36dd #28
Call Trace:
[c0001d98f770] [c12baab8] dump_stack_lvl+0xac/0x108 (unreliable)
[c0001d98f7b0] [c068faac] print_report+0x37c/0x710
[c0001d98f880] [c06902c0] kasan_report+0x110/0x354
[c0001d98f950] [c0692324] __asan_load8+0xa4/0xe0
[c0001d98f970] [c11c6ed0] _find_first_zero_bit+0x40/0x140
[c0001d98f9b0] [c00dbfbc] xive_spapr_get_ipi+0xcc/0x260
[c0001d98fa70] [c00d6d28] xive_setup_cpu_ipi+0x1e8/0x450
[c0001d98fb30] [c4032a20] pSeries_smp_probe+0x5c/0x118
[c0001d98fb60] [c4018b44] smp_prepare_cpus+0x944/0x9ac
[c0001d98fc90] [c4009f9c] kernel_init_freeable+0x2d4/0x640
[c0001d98fd90] [c00131e8] kernel_init+0x28/0x1d0
[c0001d98fe10] [c000cd54] ret_from_kernel_thread+0x5c/0x64

Allocated by task 0:
  kasan_save_stack+0x34/0x70
  __kasan_kmalloc+0xb4/0xf0
  __kmalloc+0x268/0x540
  xive_spapr_init+0x4d0/0x77c
  pseries_init_irq+0x40/0x27c
  init_IRQ+0x44/0x84
  start_kernel+0x2a4/0x538
  start_here_common+0x1c/0x20

The buggy address belongs to the object at c0001d1d0118
  which belongs to the cache kmalloc-8 of size 8
The buggy address is located 0 bytes inside of
  8-byte region [c0001d1d0118, c0001d1d0120)

The buggy address belongs to the physical page:
page:c00c00074740 refcount:1 mapcount:0 mapping: 
index:0xc0001d1d0558 pfn:0x1d1d
flags: 0x700200(slab|node=0|zone=0|lastcpupid=0x7)
raw: 00700200 c0001d0003c8 c0001d0003c8 c0001d010480
raw: c0001d1d0558 01e1000a 0001 
page dumped because: kasan: bad access detected

Memory state around the buggy address:
  c0001d1d: fc 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
  c0001d1d0080: fc fc 00 fc fc fc fc fc fc fc fc fc fc fc fc fc

c0001d1d0100: fc fc fc 02 fc fc fc fc fc fc fc fc fc fc fc fc

 ^
  c0001d1d0180: fc fc fc fc 04 fc fc fc fc fc fc fc fc fc fc fc
  c0001d1d0200: fc fc fc fc fc 04 fc fc fc fc fc fc fc fc fc fc

This happens because the allocation uses the wrong unit (bits) when it
should pass (BITS_TO_LONGS(count) * sizeof(long)) or equivalent. With small
numbers of bits, the allocated object can be smaller than sizeof(long),
which results in invalid accesses.

Use bitmap_zalloc() to allocate and initialize the irq bitmap, paired with
bitmap_free() for consistency.

Signed-off-by: Nathan Lynch 



Reviewed-by: Cédric Le Goater 

Thanks,

C.


---
  arch/powerpc/sysdev/xive/spapr.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index 7d5128676e83..d02911e78cfc 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -15,6 +15,7 @@
  #include 
  #include 
  #include 
+#include <linux/bitmap.h>
  #include 
  #include 
  #include 
@@ -57,7 +58,7 @@ static int __init xive_irq_bitmap_add(int base, int count)
	spin_lock_init(&xibm->lock);
xibm->base = base;
xibm->count = count;
-   xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
+   xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
if (!xibm->bitmap) {
kfree(xibm);
return -ENOMEM;
@@ -75,7 +76,7 @@ static void xive_irq_bitmap_remove_all(void)
  
	list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {

	list_del(&xibm->list);
-   kfree(xibm->bitmap);
+   bitmap_free(xibm->bitmap);
kfree(xibm);
}
  }




[PATCH] powerpc/xive/spapr: correct bitmap allocation size

2022-06-23 Thread Nathan Lynch
kasan detects access beyond the end of the xibm->bitmap allocation:

BUG: KASAN: slab-out-of-bounds in _find_first_zero_bit+0x40/0x140
Read of size 8 at addr c0001d1d0118 by task swapper/0/1

CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc2-1-g90df023b36dd #28
Call Trace:
[c0001d98f770] [c12baab8] dump_stack_lvl+0xac/0x108 (unreliable)
[c0001d98f7b0] [c068faac] print_report+0x37c/0x710
[c0001d98f880] [c06902c0] kasan_report+0x110/0x354
[c0001d98f950] [c0692324] __asan_load8+0xa4/0xe0
[c0001d98f970] [c11c6ed0] _find_first_zero_bit+0x40/0x140
[c0001d98f9b0] [c00dbfbc] xive_spapr_get_ipi+0xcc/0x260
[c0001d98fa70] [c00d6d28] xive_setup_cpu_ipi+0x1e8/0x450
[c0001d98fb30] [c4032a20] pSeries_smp_probe+0x5c/0x118
[c0001d98fb60] [c4018b44] smp_prepare_cpus+0x944/0x9ac
[c0001d98fc90] [c4009f9c] kernel_init_freeable+0x2d4/0x640
[c0001d98fd90] [c00131e8] kernel_init+0x28/0x1d0
[c0001d98fe10] [c000cd54] ret_from_kernel_thread+0x5c/0x64

Allocated by task 0:
 kasan_save_stack+0x34/0x70
 __kasan_kmalloc+0xb4/0xf0
 __kmalloc+0x268/0x540
 xive_spapr_init+0x4d0/0x77c
 pseries_init_irq+0x40/0x27c
 init_IRQ+0x44/0x84
 start_kernel+0x2a4/0x538
 start_here_common+0x1c/0x20

The buggy address belongs to the object at c0001d1d0118
 which belongs to the cache kmalloc-8 of size 8
The buggy address is located 0 bytes inside of
 8-byte region [c0001d1d0118, c0001d1d0120)

The buggy address belongs to the physical page:
page:c00c00074740 refcount:1 mapcount:0 mapping: 
index:0xc0001d1d0558 pfn:0x1d1d
flags: 0x700200(slab|node=0|zone=0|lastcpupid=0x7)
raw: 00700200 c0001d0003c8 c0001d0003c8 c0001d010480
raw: c0001d1d0558 01e1000a 0001 
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 c0001d1d: fc 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 c0001d1d0080: fc fc 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
>c0001d1d0100: fc fc fc 02 fc fc fc fc fc fc fc fc fc fc fc fc
^
 c0001d1d0180: fc fc fc fc 04 fc fc fc fc fc fc fc fc fc fc fc
 c0001d1d0200: fc fc fc fc fc 04 fc fc fc fc fc fc fc fc fc fc

This happens because the allocation uses the wrong unit (bits) when it
should pass (BITS_TO_LONGS(count) * sizeof(long)) or equivalent. With small
numbers of bits, the allocated object can be smaller than sizeof(long),
which results in invalid accesses.

Use bitmap_zalloc() to allocate and initialize the irq bitmap, paired with
bitmap_free() for consistency.

Signed-off-by: Nathan Lynch 
---
 arch/powerpc/sysdev/xive/spapr.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index 7d5128676e83..d02911e78cfc 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include <linux/bitmap.h>
 #include 
 #include 
 #include 
@@ -57,7 +58,7 @@ static int __init xive_irq_bitmap_add(int base, int count)
	spin_lock_init(&xibm->lock);
xibm->base = base;
xibm->count = count;
-   xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
+   xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
if (!xibm->bitmap) {
kfree(xibm);
return -ENOMEM;
@@ -75,7 +76,7 @@ static void xive_irq_bitmap_remove_all(void)
 
	list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
	list_del(&xibm->list);
-   kfree(xibm->bitmap);
+   bitmap_free(xibm->bitmap);
kfree(xibm);
}
 }
-- 
2.35.1



Re: [PATCH v2 4/4] pseries/mobility: Set NMI watchdog factor during LPM

2022-06-23 Thread Nathan Lynch
Laurent Dufour  writes:
> diff --git a/arch/powerpc/platforms/pseries/mobility.c 
> b/arch/powerpc/platforms/pseries/mobility.c
> index 179bbd4ae881..4284ceaf9060 100644
> --- a/arch/powerpc/platforms/pseries/mobility.c
> +++ b/arch/powerpc/platforms/pseries/mobility.c
> @@ -48,6 +48,39 @@ struct update_props_workarea {
>  #define MIGRATION_SCOPE  (1)
>  #define PRRN_SCOPE -2
>  
> +#ifdef CONFIG_PPC_WATCHDOG
> +static unsigned int lpm_nmi_wd_factor = 200;
> +
> +#ifdef CONFIG_SYSCTL
> +static struct ctl_table lpm_nmi_wd_factor_ctl_table[] = {
> + {
> + .procname   = "lpm_nmi_watchdog_factor",

Assuming the basic idea is acceptable, I suggest making the user-visible
name more generic (e.g. "nmi_watchdog_factor") in case it makes sense to
apply this to other contexts in the future.

> + .data   = &lpm_nmi_wd_factor,
> + .maxlen = sizeof(int),
> + .mode   = 0644,
> + .proc_handler   = proc_douintvec_minmax,
> + },
> + {}
> +};
> +static struct ctl_table lpm_nmi_wd_factor_sysctl_root[] = {
> + {
> + .procname   = "kernel",
> + .mode   = 0555,
> + .child  = lpm_nmi_wd_factor_ctl_table,
> + },
> + {}
> +};
> +
> +static int __init register_lpm_nmi_wd_factor_sysctl(void)
> +{
> + register_sysctl_table(lpm_nmi_wd_factor_sysctl_root);
> +
> + return 0;
> +}
> +device_initcall(register_lpm_nmi_wd_factor_sysctl);
> +#endif /* CONFIG_SYSCTL */
> +#endif /* CONFIG_PPC_WATCHDOG */
> +
>  static int mobility_rtas_call(int token, char *buf, s32 scope)
>  {
>   int rc;
> @@ -702,6 +735,7 @@ static int pseries_suspend(u64 handle)
>  static int pseries_migrate_partition(u64 handle)
>  {
>   int ret;
> + unsigned int factor = lpm_nmi_wd_factor;
>  
>   ret = wait_for_vasi_session_suspending(handle);
>   if (ret)
> @@ -709,6 +743,13 @@ static int pseries_migrate_partition(u64 handle)
>  
>   vas_migration_handler(VAS_SUSPEND);
>  
> +#ifdef CONFIG_PPC_WATCHDOG
> + if (factor) {
> + pr_info("Set the NMI watchdog factor to %u%%\n", factor);
> + watchdog_nmi_set_lpm_factor(factor);
> + }
> +#endif /* CONFIG_PPC_WATCHDOG */
> +
>   ret = pseries_suspend(handle);
>   if (ret == 0) {
>   post_mobility_fixup();
> @@ -716,6 +757,13 @@ static int pseries_migrate_partition(u64 handle)
>   } else
>   pseries_cancel_migration(handle, ret);
>  
> +#ifdef CONFIG_PPC_WATCHDOG
> + if (factor) {
> + pr_info("Restoring NMI watchdog timer\n");
> + watchdog_nmi_set_lpm_factor(0);
> + }
> +#endif /* CONFIG_PPC_WATCHDOG */
> +

A couple more suggestions:

* Move the prints into a single statement in watchdog_nmi_set_lpm_factor().

* Add no-op versions of watchdog_nmi_set_lpm_factor for
  !CONFIG_PPC_WATCHDOG so we can minimize the #ifdef here.

Otherwise this looks fine to me.


Re: [PATCH v4 2/2] PCI/DPC: Disable DPC service when link is in L2/L3 ready, L2 and L3 state

2022-06-23 Thread Bjorn Helgaas
On Tue, Jun 21, 2022 at 10:27:31AM +0800, Kai-Heng Feng wrote:
> On Mon, Apr 18, 2022 at 10:41 AM Sathyanarayanan Kuppuswamy
>  wrote:
> > On 4/8/22 8:31 AM, Kai-Heng Feng wrote:
> > > On Intel Alder Lake platforms, Thunderbolt entering D3cold can cause
> > > some errors reported by AER:
> > > [   30.100211] pcieport :00:1d.0: AER: Uncorrected (Non-Fatal) error 
> > > received: :00:1d.0
> > > [   30.100251] pcieport :00:1d.0: PCIe Bus Error: 
> > > severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)
> > > [   30.100256] pcieport :00:1d.0:   device [8086:7ab0] error 
> > > status/mask=0010/4000
> > > [   30.100262] pcieport :00:1d.0:[20] UnsupReq   
> > > (First)
> > > [   30.100267] pcieport :00:1d.0: AER:   TLP Header: 3400 
> > > 0852  
> > > [   30.100372] thunderbolt :0a:00.0: AER: can't recover (no 
> > > error_detected callback)
> > > [   30.100401] xhci_hcd :3e:00.0: AER: can't recover (no 
> > > error_detected callback)
> > > [   30.100427] pcieport :00:1d.0: AER: device recovery failed
> > >
> > > Since AER is disabled in previous patch for a Link in L2/L3 Ready, L2
> > > and L3, also disable DPC here as DPC depends on AER to work.
> > >
> > > Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215453
> > > Reviewed-by: Mika Westerberg
> > > Signed-off-by: Kai-Heng Feng
> >
> > Reviewed-by: Kuppuswamy Sathyanarayanan
> > 
> 
> A gentle ping...

See questions here:
https://lore.kernel.org/r/2022042433.GA1464120@bhelgaas


Re: [PATCH] powerpc/64e: Rewrite p4d_populate() as a static inline function

2022-06-23 Thread Mike Rapoport
On Thu, Jun 23, 2022 at 10:56:57AM +0200, Christophe Leroy wrote:
> Rewrite p4d_populate() as a static inline function instead of
> a macro.
> 
> This change allows typechecking and would have helped detecting
> a recently found bug in map_kernel_page().
> 
> Cc: Mike Rapoport 
> Signed-off-by: Christophe Leroy 

Acked-by: Mike Rapoport 

> ---
>  arch/powerpc/include/asm/nohash/64/pgalloc.h | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h 
> b/arch/powerpc/include/asm/nohash/64/pgalloc.h
> index 668aee6017e7..e50b211becb3 100644
> --- a/arch/powerpc/include/asm/nohash/64/pgalloc.h
> +++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
> @@ -15,7 +15,10 @@ struct vmemmap_backing {
>  };
>  extern struct vmemmap_backing *vmemmap_list;
>  
> -#define p4d_populate(MM, P4D, PUD)   p4d_set(P4D, (unsigned long)PUD)
> +static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
> +{
> + p4d_set(p4d, (unsigned long)pud);
> +}
>  
>  static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
>  {
> -- 
> 2.36.1
> 

-- 
Sincerely yours,
Mike.


Re: [PATCH] powerpc/book3e: Fix PUD allocation size in map_kernel_page()

2022-06-23 Thread Mike Rapoport
On Thu, Jun 23, 2022 at 10:56:17AM +0200, Christophe Leroy wrote:
> Commit 2fb4706057bc ("powerpc: add support for folded p4d page tables")
> erroneously changed PUD setup to a mix of PMD and PUD. Fix it.
> 
> While at it, use PTE_TABLE_SIZE instead of PAGE_SIZE for PTE tables
> in order to avoid any confusion.
> 
> Fixes: 2fb4706057bc ("powerpc: add support for folded p4d page tables")
> Cc: sta...@vger.kernel.org
> Cc: Mike Rapoport 
> Signed-off-by: Christophe Leroy 

Acked-by: Mike Rapoport 

> ---
>  arch/powerpc/mm/nohash/book3e_pgtable.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c 
> b/arch/powerpc/mm/nohash/book3e_pgtable.c
> index 7d4368d055a6..b80fc4a91a53 100644
> --- a/arch/powerpc/mm/nohash/book3e_pgtable.c
> +++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
> @@ -96,8 +96,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long 
> pa, pgprot_t prot)
>   pgdp = pgd_offset_k(ea);
>   p4dp = p4d_offset(pgdp, ea);
>   if (p4d_none(*p4dp)) {
> - pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
> - p4d_populate(&init_mm, p4dp, pmdp);
> + pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
> + p4d_populate(&init_mm, p4dp, pudp);
>   }
>   pudp = pud_offset(p4dp, ea);
>   if (pud_none(*pudp)) {
> @@ -106,7 +106,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long 
> pa, pgprot_t prot)
>   }
>   pmdp = pmd_offset(pudp, ea);
>   if (!pmd_present(*pmdp)) {
> - ptep = early_alloc_pgtable(PAGE_SIZE);
> + ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
>   pmd_populate_kernel(&init_mm, pmdp, ptep);
>   }
>   ptep = pte_offset_kernel(pmdp, ea);
> -- 
> 2.36.1
> 

-- 
Sincerely yours,
Mike.


Re: [PATCH v2 2/3] scsi: BusLogic remove bus_to_virt

2022-06-23 Thread Arnd Bergmann
On Tue, Jun 21, 2022 at 11:56 PM Khalid Aziz  wrote:
> >   while ((comp_code = next_inbox->comp_code) != BLOGIC_INBOX_FREE) {
> > - /*
> > -We are only allowed to do this because we limit our
> > -architectures we run on to machines where bus_to_virt(
> > -actually works.  There *needs* to be a dma_addr_to_virt()
> > -in the new PCI DMA mapping interface to replace
> > -bus_to_virt() or else this code is going to become very
> > -innefficient.
> > -  */
> > - struct blogic_ccb *ccb =
> > - (struct blogic_ccb *) bus_to_virt(next_inbox->ccb);
> > + struct blogic_ccb *ccb = blogic_inbox_to_ccb(adapter, adapter->next_inbox);
>
> This change looks good enough as workaround to not use bus_to_virt() for
> now. There are two problems I see though. One, I do worry about
> blogic_inbox_to_ccb() returning NULL for ccb which should not happen
> unless the mailbox pointer was corrupted which would indicate a bigger
> problem. Nevertheless a NULL pointer causing kernel panic concerns me.
> How about adding a check before we dereference ccb?

Right, makes sense

> Second, with this patch applied, I am seeing errors from the driver:
>
> =
> [ 1623.902685]  sdb: sdb1 sdb2
> [ 1623.903245] sd 2:0:0:0: [sdb] Attached SCSI disk
> [ 1623.911000] scsi2: Illegal CCB #76 status 2 in Incoming Mailbox
> [ 1623.911005] scsi2: Illegal CCB #76 status 2 in Incoming Mailbox
> [ 1623.911070] scsi2: Illegal CCB #79 status 2 in Incoming Mailbox
> [ 1651.458008] scsi2: Warning: Partition Table appears to have Geometry
> 256/63 which is
> [ 1651.458013] scsi2: not compatible with current BusLogic Host Adapter
> Geometry 255/63
> [ 1658.797609] scsi2: Resetting BusLogic BT-958D Failed
> [ 1659.533208] sd 2:0:0:0: Device offlined - not ready after error recovery
> [ 1659.51] sd 2:0:0:0: Device offlined - not ready after error recovery
> [ 1659.53] sd 2:0:0:0: Device offlined - not ready after error recovery
> [ 1659.533342] sd 2:0:0:0: [sdb] tag#101 FAILED Result:
> hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK cmd_age=35s
> [ 1659.533345] sd 2:0:0:0: [sdb] tag#101 CDB: Read(10) 28 00 00 00 00 28
> 00 00 10 00
> [ 1659.533346] I/O error, dev sdb, sector 40 op 0x0:(READ) flags 0x80700
> phys_seg 1 prio class 0
>
> =
>
> This is on VirtualBox using emulated BusLogic adapter.
>
> This patch needs more refinement.

Thanks for testing the patch, too bad it didn't work. At least I quickly found
one stupid mistake on my end, hope it's the only one.

Can you test it again with this patch on top?

diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
index d057abfcdd5c..9e67f2ee25ee 100644
--- a/drivers/scsi/BusLogic.c
+++ b/drivers/scsi/BusLogic.c
@@ -2554,8 +2554,14 @@ static void blogic_scan_inbox(struct blogic_adapter *adapter)
enum blogic_cmplt_code comp_code;

while ((comp_code = next_inbox->comp_code) != BLOGIC_INBOX_FREE) {
-   struct blogic_ccb *ccb = blogic_inbox_to_ccb(adapter, adapter->next_inbox);
-   if (comp_code != BLOGIC_CMD_NOTFOUND) {
+   struct blogic_ccb *ccb = blogic_inbox_to_ccb(adapter, next_inbox);
+   if (!ccb) {
+   /*
+* This should never happen, unless the CCB list is
+* corrupted in memory.
+*/
+   blogic_warn("Could not find CCB for dma address 0x%x\n", adapter, next_inbox->ccb);
+   } else if (comp_code != BLOGIC_CMD_NOTFOUND) {
if (ccb->status == BLOGIC_CCB_ACTIVE ||
ccb->status == BLOGIC_CCB_RESET) {


Re: [RFC PATCH v2 2/3] fs: define a firmware security filesystem named fwsecurityfs

2022-06-23 Thread James Bottomley
On Thu, 2022-06-23 at 10:54 +0200, Greg Kroah-Hartman wrote:
[...]
> > diff --git a/fs/fwsecurityfs/inode.c b/fs/fwsecurityfs/inode.c
> > new file mode 100644
> > index ..5d06dc0de059
> > --- /dev/null
> > +++ b/fs/fwsecurityfs/inode.c
> > @@ -0,0 +1,159 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright (C) 2022 IBM Corporation
> > + * Author: Nayna Jain 
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +
> > +#include "internal.h"
> > +
> > +int fwsecurityfs_remove_file(struct dentry *dentry)
> > +{
> > +   drop_nlink(d_inode(dentry));
> > +   dput(dentry);
> > +   return 0;
> > +};
> > +EXPORT_SYMBOL_GPL(fwsecurityfs_remove_file);
> > +
> > +int fwsecurityfs_create_file(const char *name, umode_t mode,
> > +   u16 filesize, struct dentry *parent,
> > +   struct dentry *dentry,
> > +   const struct file_operations *fops)
> > +{
> > +   struct inode *inode;
> > +   int error;
> > +   struct inode *dir;
> > +
> > +   if (!parent)
> > +   return -EINVAL;
> > +
> > +   dir = d_inode(parent);
> > +   pr_debug("securityfs: creating file '%s'\n", name);
> 
> Did you forget to call simple_pin_fs() here or anywhere else?
> 
> And this can be just one function with the directory creation file,
> just check the mode and you will be fine.  Look at securityfs as an
> example of how to make this simpler.

Actually, before you go down this route can you consider the namespace
ramifications.  In fact we're just having to rework securityfs to pull
out all the simple_pin_... calls because simple_pin_... is completely
inimical to namespaces.

The first thing to consider is if you simply use securityfs you'll
inherit all the simple_pin_... removal work and be namespace ready.  It
could be that creating a new filesystem that can't be namespaced is the
right thing to do here, but at least ask the question: would we ever
want any of these files to be presented selectively inside containers? 
If the answer is "yes" then simple_pin_... is the wrong interface.

James




[PATCH 2/2] powerpc/numa: Return the first online node if device tree mapping returns a not online node

2022-06-23 Thread Aneesh Kumar K.V
While building the cpu_to_node map, make sure we always use an online node
to build the mapping table. In general this should not be an issue
because the kernel uses a similar lookup mechanism (vphn_get_nid()) to mark
nodes online (setup_node_data()). Hence NUMA nodes we find during
lookup in numa_setup_cpu() will always be found online.

To keep the logic simple and correct, make sure that if the hypervisor
or device tree returns a node that is not online, we don't use it to build
the map table. Instead, use first_online_node.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/numa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0801b2ce9b7d..f387b9eb9dc9 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -741,7 +741,7 @@ static int numa_setup_cpu(unsigned long lcpu)
of_node_put(cpu);
 
 out_present:
-   if (nid < 0 || !node_possible(nid))
+   if (nid < 0 || !node_online(nid))
nid = first_online_node;
 
/*
-- 
2.36.1



[PATCH 1/2] powerpc/numa: Return the first online node instead of 0

2022-06-23 Thread Aneesh Kumar K.V
If early cpu to node mapping finds an invalid node id, return
the first online node instead of node 0.

With commit e75130f20b1f ("powerpc/numa: Offline memoryless cpuless node 0")
the kernel marks node 0 offline in certain scenarios.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/topology.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/topology.h 
b/arch/powerpc/include/asm/topology.h
index 8a4d4f4d9749..704088b1d53c 100644
--- a/arch/powerpc/include/asm/topology.h
+++ b/arch/powerpc/include/asm/topology.h
@@ -60,7 +60,7 @@ static inline int early_cpu_to_node(int cpu)
 * Fall back to node 0 if nid is unset (it should be, except bugs).
 * This allows callers to safely do NODE_DATA(early_cpu_to_node(cpu)).
 */
-   return (nid < 0) ? 0 : nid;
+   return (nid < 0) ? first_online_node : nid;
 }
 
 int of_drconf_to_nid_single(struct drmem_lmb *lmb);
-- 
2.36.1



[PATCH 3/3] powerpc/mm: Use VMALLOC_START to validate addr

2022-06-23 Thread Aneesh Kumar K.V
Instead of high_memory, use VMALLOC_START to validate that the address is
not in the vmalloc range.

Cc: Kefeng Wang 
Cc: Christophe Leroy 
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index e5f75c70eda8..256cad69e42e 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -134,7 +134,7 @@ static inline bool pfn_valid(unsigned long pfn)
 
 #define virt_addr_valid(vaddr) ({  \
unsigned long _addr = (unsigned long)vaddr; \
-   _addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&   \
+   _addr >= PAGE_OFFSET && _addr < (unsigned long)VMALLOC_START && \
pfn_valid(virt_to_pfn(_addr));  \
 })
 
-- 
2.36.1



[PATCH 1/3] powerpc/memhotplug: Add add_pages override for PPC

2022-06-23 Thread Aneesh Kumar K.V
With commit ffa0b64e3be5 ("powerpc: Fix virt_addr_valid() for 64-bit Book3E &
32-bit") the kernel now validates the address against the high_memory value.
This results in the BUG_ON below with dax pfns.

[  635.798741][T26531] kernel BUG at mm/page_alloc.c:5521!
1:mon> e
cpu 0x1: Vector: 700 (Program Check) at [c7287630]
pc: c055ed48: free_pages.part.0+0x48/0x110
lr: c053ca70: tlb_finish_mmu+0x80/0xd0
sp: c72878d0
   msr: 8282b033
  current = 0xcafabe00
  paca= 0xc0037300   irqmask: 0x03   irq_happened: 0x05
pid   = 26531, comm = 50-landscape-sy
kernel BUG at :5521!
Linux version 5.19.0-rc3-14659-g4ec05be7c2e1 (kvaneesh@ltc-boston8) (gcc 
(Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) 
#625 SMP Thu Jun 23 00:35:43 CDT 2022
1:mon> t
[link register   ] c053ca70 tlb_finish_mmu+0x80/0xd0
[c72878d0] c053ca54 tlb_finish_mmu+0x64/0xd0 (unreliable)
[c7287900] c0539424 exit_mmap+0xe4/0x2a0
[c72879e0] c019fc1c mmput+0xcc/0x210
[c7287a20] c0629230 begin_new_exec+0x5e0/0xf40
[c7287ae0] c070b3cc load_elf_binary+0x3ac/0x1e00
[c7287c10] c0627af0 bprm_execve+0x3b0/0xaf0
[c7287cd0] c0628414 do_execveat_common.isra.0+0x1e4/0x310
[c7287d80] c062858c sys_execve+0x4c/0x60
[c7287db0] c002c1b0 system_call_exception+0x160/0x2c0
[c7287e10] c000c53c system_call_common+0xec/0x250

The fix is to make sure high_memory is updated on memory hotplug. This is
similar to what x86 does in commit 3072e413e305 ("mm/memory_hotplug:
introduce add_pages").

Fixes: ffa0b64e3be5 ("powerpc: Fix virt_addr_valid() for 64-bit Book3E & 
32-bit")
Cc: Kefeng Wang 
Cc: Christophe Leroy 
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/Kconfig  |  1 +
 arch/powerpc/mm/mem.c | 32 +++-
 arch/x86/Kconfig  |  5 +
 mm/Kconfig|  3 +++
 4 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c2ce2e60c8f0..20c1f8e26c96 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -112,6 +112,7 @@ config PPC
select ARCH_DISABLE_KASAN_INLINEif PPC_RADIX_MMU
select ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_ENABLE_MEMORY_HOTREMOVE
+   select ARCH_HAS_ADD_PAGES   if ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_HAS_COPY_MC if PPC64
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 52b77684acda..2a63920c369d 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -105,6 +105,36 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
vm_unmap_aliases();
 }
 
+/*
+ * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
+ * updating.
+ */
+static void update_end_of_memory_vars(u64 start, u64 size)
+{
+   unsigned long end_pfn = PFN_UP(start + size);
+
+   if (end_pfn > max_pfn) {
+   max_pfn = end_pfn;
+   max_low_pfn = end_pfn;
+   high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
+   }
+}
+
+int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+   struct mhp_params *params)
+{
+   int ret;
+
+   ret = __add_pages(nid, start_pfn, nr_pages, params);
+   WARN_ON_ONCE(ret);
+
+   /* update max_pfn, max_low_pfn and high_memory */
+   update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+ nr_pages << PAGE_SHIFT);
+
+   return ret;
+}
+
 int __ref arch_add_memory(int nid, u64 start, u64 size,
  struct mhp_params *params)
 {
@@ -115,7 +145,7 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
rc = arch_create_linear_mapping(nid, start, size, params);
if (rc)
return rc;
-   rc = __add_pages(nid, start_pfn, nr_pages, params);
+   rc = add_pages(nid, start_pfn, nr_pages, params);
if (rc)
arch_remove_linear_mapping(start, size);
return rc;
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be0b95e51df6..151ddb96ae46 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -68,6 +68,7 @@ config X86
select ARCH_ENABLE_SPLIT_PMD_PTLOCK if (PGTABLE_LEVELS > 2) && (X86_64 
|| X86_PAE)
select ARCH_ENABLE_THP_MIGRATION if X86_64 && TRANSPARENT_HUGEPAGE
select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
+   select ARCH_HAS_ADD_PAGES   if ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
@@ -2453,10 +2454,6 @@ source "kernel/livepatch/Kconfig"
 
 endmenu
 
-config ARCH_HAS_ADD_PAGES
-   def_bool y
-   depends on 

[PATCH 2/3] powerpc/mm: Update max/min_low_pfn in the same function

2022-06-23 Thread Aneesh Kumar K.V
Use mem_topology_setup() to update max/min_low_pfn for both the
CONFIG_NUMA enabled and disabled cases.

This also adds the min_low_pfn update to the CONFIG_NUMA path, which
previously left it initialized to zero.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/numa.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 0801b2ce9b7d..b44ce71917d7 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1160,6 +1160,9 @@ void __init mem_topology_setup(void)
 {
int cpu;
 
+   max_low_pfn = max_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
+   min_low_pfn = MEMORY_START >> PAGE_SHIFT;
+
/*
 * Linux/mm assumes node 0 to be online at boot. However this is not
 * true on PowerPC, where node 0 is similar to any other node, it
@@ -1204,9 +1207,6 @@ void __init initmem_init(void)
 {
int nid;
 
-   max_low_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT;
-   max_pfn = max_low_pfn;
-
memblock_dump_all();
 
for_each_online_node(nid) {
-- 
2.36.1



[PATCH] powerpc/powermac: Remove empty function note_scsi_host()

2022-06-23 Thread Christophe Leroy
note_scsi_host() has been an empty function since
commit 6ee0d9f744d4 ("[POWERPC] Remove unused old code
from powermac setup code").

Remove it.

Reported-by: kernel test robot 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/setup.h| 1 -
 arch/powerpc/platforms/powermac/setup.c | 7 ---
 drivers/scsi/mesh.c | 5 -
 3 files changed, 13 deletions(-)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 8fa37ef5da4d..07b487896c27 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -12,7 +12,6 @@ extern unsigned long long memory_limit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
 struct device_node;
-extern void note_scsi_host(struct device_node *, void *);
 
 /* Used in very early kernel initialization. */
 extern unsigned long reloc_offset(void);
diff --git a/arch/powerpc/platforms/powermac/setup.c 
b/arch/powerpc/platforms/powermac/setup.c
index f71735ec449f..04daa7f0a03c 100644
--- a/arch/powerpc/platforms/powermac/setup.c
+++ b/arch/powerpc/platforms/powermac/setup.c
@@ -320,13 +320,6 @@ static void __init pmac_setup_arch(void)
 #endif /* CONFIG_ADB */
 }
 
-#ifdef CONFIG_SCSI
-void note_scsi_host(struct device_node *node, void *host)
-{
-}
-EXPORT_SYMBOL(note_scsi_host);
-#endif
-
 static int initializing = 1;
 
 static int pmac_late_init(void)
diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c
index 322d3ad38159..1f8e240592a9 100644
--- a/drivers/scsi/mesh.c
+++ b/drivers/scsi/mesh.c
@@ -1882,11 +1882,6 @@ static int mesh_probe(struct macio_dev *mdev, const 
struct of_device_id *match)
goto out_release;
}

-   /* Old junk for root discovery, that will die ultimately */
-#if !defined(MODULE)
-   note_scsi_host(mesh, mesh_host);
-#endif
-
mesh_host->base = macio_resource_start(mdev, 0);
mesh_host->irq = macio_irq(mdev, 0);
ms = (struct mesh_state *) mesh_host->hostdata;
-- 
2.36.1



[PATCH] powerpc/64e: Rewrite p4d_populate() as a static inline function

2022-06-23 Thread Christophe Leroy
Rewrite p4d_populate() as a static inline function instead of
a macro.

This change allows typechecking and would have helped detect
a recently found bug in map_kernel_page().

Cc: Mike Rapoport 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/nohash/64/pgalloc.h | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h 
b/arch/powerpc/include/asm/nohash/64/pgalloc.h
index 668aee6017e7..e50b211becb3 100644
--- a/arch/powerpc/include/asm/nohash/64/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
@@ -15,7 +15,10 @@ struct vmemmap_backing {
 };
 extern struct vmemmap_backing *vmemmap_list;
 
-#define p4d_populate(MM, P4D, PUD) p4d_set(P4D, (unsigned long)PUD)
+static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
+{
+   p4d_set(p4d, (unsigned long)pud);
+}
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-- 
2.36.1



[PATCH] powerpc/book3e: Fix PUD allocation size in map_kernel_page()

2022-06-23 Thread Christophe Leroy
Commit 2fb4706057bc ("powerpc: add support for folded p4d page tables")
erroneously changed PUD setup to a mix of PMD and PUD. Fix it.

While at it, use PTE_TABLE_SIZE instead of PAGE_SIZE for PTE tables
in order to avoid any confusion.

Fixes: 2fb4706057bc ("powerpc: add support for folded p4d page tables")
Cc: sta...@vger.kernel.org
Cc: Mike Rapoport 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/mm/nohash/book3e_pgtable.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c 
b/arch/powerpc/mm/nohash/book3e_pgtable.c
index 7d4368d055a6..b80fc4a91a53 100644
--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
+++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
@@ -96,8 +96,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, 
pgprot_t prot)
pgdp = pgd_offset_k(ea);
p4dp = p4d_offset(pgdp, ea);
if (p4d_none(*p4dp)) {
-   pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
-   p4d_populate(_mm, p4dp, pmdp);
+   pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
+   p4d_populate(_mm, p4dp, pudp);
}
pudp = pud_offset(p4dp, ea);
if (pud_none(*pudp)) {
@@ -106,7 +106,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long 
pa, pgprot_t prot)
}
pmdp = pmd_offset(pudp, ea);
if (!pmd_present(*pmdp)) {
-   ptep = early_alloc_pgtable(PAGE_SIZE);
+   ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
pmd_populate_kernel(_mm, pmdp, ptep);
}
ptep = pte_offset_kernel(pmdp, ea);
-- 
2.36.1



Re: [RFC PATCH v2 2/3] fs: define a firmware security filesystem named fwsecurityfs

2022-06-23 Thread Greg Kroah-Hartman
On Wed, Jun 22, 2022 at 05:56:47PM -0400, Nayna Jain wrote:
> securityfs is meant for Linux security subsystems to expose policies/logs
> or any other information. However, there are various firmware security
> features which expose their variables for user management via the kernel.
> There is currently no single place to expose these variables. Different
> platforms use sysfs, a platform-specific filesystem (efivarfs) or the
> securityfs interface as they find appropriate. Thus, there is a gap in
> kernel interfaces to expose variables for security features.
> 
> Define a firmware security filesystem (fwsecurityfs) to be used for
> exposing variables managed by firmware and to be used by firmware
> enabled security features. These variables are platform specific.
> The filesystem lets platforms implement their own underlying
> semantics by defining their own inode and file operations.
> 
> Similar to securityfs, the firmware security filesystem is recommended
> to be exposed on a well known mount point /sys/firmware/security.
> Platforms can define their own directory or file structure under this path.
> 
> Example:
> 
> # mount -t fwsecurityfs fwsecurityfs /sys/firmware/security
> 
> # cd /sys/firmware/security/
> 
> Signed-off-by: Nayna Jain 
> ---
>  fs/Kconfig   |   1 +
>  fs/Makefile  |   1 +
>  fs/fwsecurityfs/Kconfig  |  14 +++
>  fs/fwsecurityfs/Makefile |  10 +++
>  fs/fwsecurityfs/inode.c  | 159 +++
>  fs/fwsecurityfs/internal.h   |  13 +++
>  fs/fwsecurityfs/super.c  | 154 +
>  include/linux/fwsecurityfs.h |  29 +++
>  include/uapi/linux/magic.h   |   1 +
>  9 files changed, 382 insertions(+)
>  create mode 100644 fs/fwsecurityfs/Kconfig
>  create mode 100644 fs/fwsecurityfs/Makefile
>  create mode 100644 fs/fwsecurityfs/inode.c
>  create mode 100644 fs/fwsecurityfs/internal.h
>  create mode 100644 fs/fwsecurityfs/super.c
>  create mode 100644 include/linux/fwsecurityfs.h
> 
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 5976eb33535f..19ea28143428 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -276,6 +276,7 @@ config ARCH_HAS_GIGANTIC_PAGE
>  
>  source "fs/configfs/Kconfig"
>  source "fs/efivarfs/Kconfig"
> +source "fs/fwsecurityfs/Kconfig"
>  
>  endmenu
>  
> diff --git a/fs/Makefile b/fs/Makefile
> index 208a74e0b00e..5792cd0443cb 100644
> --- a/fs/Makefile
> +++ b/fs/Makefile
> @@ -137,6 +137,7 @@ obj-$(CONFIG_F2FS_FS) += f2fs/
>  obj-$(CONFIG_CEPH_FS)+= ceph/
>  obj-$(CONFIG_PSTORE) += pstore/
>  obj-$(CONFIG_EFIVAR_FS)  += efivarfs/
> +obj-$(CONFIG_FWSECURITYFS)   += fwsecurityfs/
>  obj-$(CONFIG_EROFS_FS)   += erofs/
>  obj-$(CONFIG_VBOXSF_FS)  += vboxsf/
>  obj-$(CONFIG_ZONEFS_FS)  += zonefs/
> diff --git a/fs/fwsecurityfs/Kconfig b/fs/fwsecurityfs/Kconfig
> new file mode 100644
> index ..f1665511eeb9
> --- /dev/null
> +++ b/fs/fwsecurityfs/Kconfig
> @@ -0,0 +1,14 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Copyright (C) 2022 IBM Corporation
> +# Author: Nayna Jain 
> +#
> +
> +config FWSECURITYFS
> + bool "Enable the fwsecurityfs filesystem"
> + help
> +   This will build the fwsecurityfs file system which is recommended
> +   to be mounted on /sys/firmware/security. This can be used by
> +   platforms to expose their variables which are managed by firmware.
> +
> +   If you are unsure how to answer this question, answer N.
> diff --git a/fs/fwsecurityfs/Makefile b/fs/fwsecurityfs/Makefile
> new file mode 100644
> index ..b9931d180178
> --- /dev/null
> +++ b/fs/fwsecurityfs/Makefile
> @@ -0,0 +1,10 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Copyright (C) 2022 IBM Corporation
> +# Author: Nayna Jain 
> +#
> +# Makefile for the firmware security filesystem
> +
> +obj-$(CONFIG_FWSECURITYFS)   += fwsecurityfs.o
> +
> +fwsecurityfs-objs:= inode.o super.o
> diff --git a/fs/fwsecurityfs/inode.c b/fs/fwsecurityfs/inode.c
> new file mode 100644
> index ..5d06dc0de059
> --- /dev/null
> +++ b/fs/fwsecurityfs/inode.c
> @@ -0,0 +1,159 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2022 IBM Corporation
> + * Author: Nayna Jain 
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "internal.h"
> +
> +int fwsecurityfs_remove_file(struct dentry *dentry)
> +{
> + drop_nlink(d_inode(dentry));
> + dput(dentry);
> + return 0;
> +};
> +EXPORT_SYMBOL_GPL(fwsecurityfs_remove_file);
> +
> +int fwsecurityfs_create_file(const char *name, umode_t mode,
> + u16 filesize, struct dentry *parent,
> + struct dentry *dentry,
> + const 

Re: [PATCH v6 32/33] arm64: irq-gic: Replace unreachable() with -EINVAL

2022-06-23 Thread Marc Zyngier

On 2022-06-23 02:49, Chen Zhongjin wrote:

Using unreachable() in the default case of a switch generates an extra
branch at the end of the function, and the compiler won't generate a ret
to close this branch because it knows it's unreachable.

If there's no instruction in this branch, the compiler will generate a NOP,
which confuses objtool into warning about this NOP as a fall-through branch.

In fact these branches are actually unreachable, so we can replace
unreachable() with returning a -EINVAL value.

Signed-off-by: Chen Zhongjin 
---
 arch/arm64/kvm/hyp/vgic-v3-sr.c | 7 +++
 drivers/irqchip/irq-gic-v3.c| 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)


Basic courtesy would have been to Cc the maintainers of this code.



diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c 
b/arch/arm64/kvm/hyp/vgic-v3-sr.c

index 4fb419f7b8b6..f3cee92c3038 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -6,7 +6,6 @@

 #include 

-#include 
 #include 
 #include 

@@ -55,7 +54,7 @@ static u64 __gic_v3_get_lr(unsigned int lr)
return read_gicreg(ICH_LR15_EL2);
}

-   unreachable();
+   return -EINVAL;


NAK. That's absolutely *wrong*, and will hide future bugs.
Nothing checks for -EINVAL, and we *never* expect to
reach this, hence the perfectly valid annotation.

If something needs fixing, it probably is your tooling.

M.
--
Jazz is not dead. It just smells funny...