Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-05 Thread Laura Abbott

On 11/04/2015 11:44 PM, Ard Biesheuvel wrote:

On 3 November 2015 at 22:48, Laura Abbott  wrote:


Currently, the set_memory_* functions that are implemented for arm64
are restricted to module addresses only. This was mostly done
because arm64 maps normal zone memory with larger page sizes to
improve TLB performance. This has the side effect though of making it
difficult to adjust attributes at the PAGE_SIZE granularity. There are
an increasing number of use cases related to security where it is
necessary to change the attributes of kernel memory. Add functionality
to the page attribute changing code under a Kconfig to let systems
designers decide if they want to make the trade off of security for TLB
pressure.

Signed-off-by: Laura Abbott 
---
  arch/arm64/Kconfig.debug | 11 +++
  arch/arm64/mm/mm.h   |  3 ++
  arch/arm64/mm/mmu.c  |  2 +-
  arch/arm64/mm/pageattr.c | 74 
  4 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index d6285ef..abc6922 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA

   If in doubt, say N

+config DEBUG_CHANGE_PAGEATTR


I don't think this belongs in Kconfig.debug, and I don't think this
should default to off.

We know we currently have no users of set_memory_xx() in arch/arm64
that operate on linear mapping addresses, so we will not introduce any
performance regressions by adding this functionality now. By putting
this feature behind a debug option, every newly introduced call to
set_memory_xx() that operates on linear or vmalloc() addresses needs
to deal with -EINVAL (or depend on DEBUG_CHANGE_PAGEATTR), or it may
error out at runtime if the feature is not enabled.
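
For illustration, a minimal sketch of that burden on a hypothetical
new caller (protect_buffer() is invented for this example and is not
part of the patch):

/* Hypothetical caller: with the feature behind a Kconfig option,
 * a new set_memory_*() user operating on linear-map or vmalloc()
 * addresses has to cope with -EINVAL when the option is disabled.
 */
static int protect_buffer(void *buf, int numpages)
{
	int ret = set_memory_ro((unsigned long)buf, numpages);

	/* CONFIG_DEBUG_CHANGE_PAGEATTR is off: the range is rejected,
	 * so the caller needs a fallback or a Kconfig dependency.
	 */
	if (ret == -EINVAL)
		pr_warn("set_memory_ro not supported for this range\n");

	return ret;
}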



I stuck it in Kconfig.debug to have it match with the rest of the
module and DEBUG_RODATA options. I'll pull it out.
 

+   bool "Allow all kernel memory to have attributes changed"
+   help
+ If this option is selected, APIs that change page attributes
+ (RW <-> RO, X <-> NX) will be valid for all memory mapped in
+ the kernel space. The trade off is that there may be increased
+ TLB pressure from finer grained page mapping. Turn on this option
+ if performance is more important than security
+


This is backwards


+ If in doubt, say N
+
  source "drivers/hwtracing/coresight/Kconfig"

  endmenu


[...]


diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index e47ed1c..48a4ce9 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c


[...]


@@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages,
 int ret;
 struct page_change_data data;

+   if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr))
+   return -EINVAL;
+


Doesn't this exclude the module area?


 if (!IS_ALIGNED(addr, PAGE_SIZE)) {
 start &= PAGE_MASK;
 end = start + size;
 WARN_ON_ONCE(1);
 }

-   if (start < MODULES_VADDR || start >= MODULES_END)
-   return -EINVAL;
-
-   if (end < MODULES_VADDR || end >= MODULES_END)
-   return -EINVAL;
+   ret = check_address(addr);
+   if (ret)
+   return ret;

 data.set_mask = set_mask;
 data.clear_mask = clear_mask;
--
2.4.3





Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-05 Thread Laura Abbott

On 11/05/2015 03:10 AM, Xishi Qiu wrote:

On 2015/11/5 2:12, Laura Abbott wrote:


On 11/03/2015 06:59 PM, zhong jiang wrote:


Hi Laura

This patch seems valid, but part of it does not feel reasonable to me.
The large pages are there to make TLB performance better, so we should
split one only when it is necessary. Therefore, I think we should first
try to keep the large mapping, and split it only if that fails.



I'm not quite sure I understand the request. We know we are going
to have to have something mapped at page size granularity so we
are going to have to break down the larger mappings no matter
what. Can you explain a bit more where you think we could try to
keep the larger mappings?



Hi Laura,

He means something like this: if the range is aligned to a large page,
we do not need to split it; we can just change the flags on the
large-page entry.



This complicates the logic for doing the update. apply_to_page_range()
nicely walks across all the 4K pages and does the update. It looks
like x86 does do the check to keep the large pages though, so I'll
give it some thought.
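
A minimal sketch of that check, assuming the usual PMD_MASK/PMD_SIZE
definitions (range_covers_pmd() is a hypothetical helper, not code
from this patch or from x86):

/* A PMD-level section mapping only needs to be split when the
 * requested range does not cover the whole section; if it does,
 * the attributes could be changed on the section entry directly,
 * keeping the large mapping intact.
 */
static bool range_covers_pmd(unsigned long start, unsigned long end,
			     unsigned long addr)
{
	unsigned long sect_start = addr & PMD_MASK;

	return start <= sect_start && end >= sect_start + PMD_SIZE;
}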


I have one more question.

alloc_init_pud()
...
if (!pud_none(old_pud))
...
memblock_free(table, PAGE_SIZE);
...

Here we free the memory of the pmd page, so why not also free the
memory of the 512 pte pages, if the 512 old pmds are not none?



It would be nice to reclaim the memory but I'm not sure if that will
work. The memory was allocated before any of the regular kernel data
structures were set up. It's not clear if giving the pages back to
the buddy allocator would actually work. I'll take a look though.
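
If handing those pages back did turn out to be safe, the handover
would presumably use something like the sketch below
(reclaim_early_table() is hypothetical, and whether this is valid
for memblock-time allocations is exactly the open question above):

/* Hypothetical sketch: memblock-time allocations are marked
 * PG_reserved in the memmap, so returning one such page to the
 * buddy allocator would go through free_reserved_page(), which
 * clears the reserved bit and frees the page.
 */
static void reclaim_early_table(void *table)
{
	free_reserved_page(virt_to_page(table));
}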
 

Thanks,
Xishi Qiu



Thanks,
Laura



Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-05 Thread Xishi Qiu
On 2015/11/4 5:48, Laura Abbott wrote:

> 
> Currently, the set_memory_* functions that are implemented for arm64
> are restricted to module addresses only. This was mostly done
> because arm64 maps normal zone memory with larger page sizes to
> improve TLB performance. This has the side effect though of making it
> difficult to adjust attributes at the PAGE_SIZE granularity. There are
> an increasing number of use cases related to security where it is
> necessary to change the attributes of kernel memory. Add functionality
> to the page attribute changing code under a Kconfig to let systems
> designers decide if they want to make the trade off of security for TLB
> pressure.
> 
> Signed-off-by: Laura Abbott 
> ---
>  arch/arm64/Kconfig.debug | 11 +++
>  arch/arm64/mm/mm.h   |  3 ++
>  arch/arm64/mm/mmu.c  |  2 +-
>  arch/arm64/mm/pageattr.c | 74 
> 
>  4 files changed, 84 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index d6285ef..abc6922 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA
>  
> If in doubt, say N
>  
> +config DEBUG_CHANGE_PAGEATTR
> + bool "Allow all kernel memory to have attributes changed"
> + help
> +   If this option is selected, APIs that change page attributes
> +   (RW <-> RO, X <-> NX) will be valid for all memory mapped in
> +   the kernel space. The trade off is that there may be increased
> +   TLB pressure from finer grained page mapping. Turn on this option
> +   if performance is more important than security
> +
> +   If in doubt, say N
> +
>  source "drivers/hwtracing/coresight/Kconfig"
>  
>  endmenu
> diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
> index ef47d99..7b0dcc4 100644
> --- a/arch/arm64/mm/mm.h
> +++ b/arch/arm64/mm/mm.h
> @@ -1,3 +1,6 @@
>  extern void __init bootmem_init(void);
>  
>  void fixup_init(void);
> +
> +void split_pud(pud_t *old_pud, pmd_t *pmd);
> +void split_pmd(pmd_t *pmd, pte_t *pte);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ff41efa..cefad2d 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz)
>  /*
>   * remap a PMD into pages
>   */
> -static void split_pmd(pmd_t *pmd, pte_t *pte)
> +void split_pmd(pmd_t *pmd, pte_t *pte)
>  {
>   unsigned long pfn = pmd_pfn(*pmd);
>   unsigned long addr = pfn << PAGE_SHIFT;
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index e47ed1c..48a4ce9 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -15,9 +15,12 @@
>  #include <linux/module.h>
>  #include <linux/sched.h>
>  
> +#include <asm/pgalloc.h>
>  #include <asm/pgtable.h>
>  #include <asm/tlbflush.h>
>  
> +#include "mm.h"
> +
>  struct page_change_data {
>   pgprot_t set_mask;
>   pgprot_t clear_mask;
> @@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
>   return 0;
>  }
>  
> +#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR
> +static int check_address(unsigned long addr)
> +{
> + pgd_t *pgd = pgd_offset_k(addr);
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *pte;
> + int ret = -EFAULT;
> +
> + if (pgd_none(*pgd))
> + goto out;
> +
> + pud = pud_offset(pgd, addr);
> + if (pud_none(*pud))
> + goto out;
> +
> + if (pud_sect(*pud)) {
> + pmd = pmd_alloc_one(&init_mm, addr);
> + if (!pmd) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + split_pud(pud, pmd);
> + pud_populate(&init_mm, pud, pmd);
> + }
> +
> + pmd = pmd_offset(pud, addr);
> + if (pmd_none(*pmd))
> + goto out;
> +
> + if (pmd_sect(*pmd)) {
> + pte = pte_alloc_one_kernel(&init_mm, addr);
> + if (!pte) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + split_pmd(pmd, pte);
> + __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE);
> + }
> +
> + pte = pte_offset_kernel(pmd, addr);
> + if (pte_none(*pte))
> + goto out;
> +
> + flush_tlb_all();
> + ret = 0;
> +
> +out:
> + return ret;
> +}
> +#else
> +static int check_address(unsigned long addr)
> +{
> + if (addr < MODULES_VADDR || addr >= MODULES_END)
> + return -EINVAL;
> +
> + return 0;
> +}
> +#endif
> +
>  static int change_memory_common(unsigned long addr, int numpages,
>   pgprot_t set_mask, pgprot_t clear_mask)
>  {
> @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages,
>   int ret;
>   struct page_change_data data;
>  
> + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr))
> + return -EINVAL;
> +
>   if (!IS_ALIGNED(addr, PAGE_SIZE)) {
>   start &= PAGE_MASK;
> 

Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-05 Thread Xishi Qiu
On 2015/11/5 2:12, Laura Abbott wrote:

> On 11/03/2015 06:59 PM, zhong jiang wrote:
>>
>> Hi Laura
>>
>> This patch seems valid, but part of it does not feel reasonable to me.
>> The large pages are there to make TLB performance better, so we should
>> split one only when it is necessary. Therefore, I think we should first
>> try to keep the large mapping, and split it only if that fails.
>>
> 
> I'm not quite sure I understand the request. We know we are going
> to have to have something mapped at page size granularity so we
> are going to have to break down the larger mappings no matter
> what. Can you explain a bit more where you think we could try to
> keep the larger mappings?
> 

Hi Laura,

He means something like this: if the range is aligned to a large page,
we do not need to split it; we can just change the flags on the
large-page entry.

I have one more question.

alloc_init_pud()
...
if (!pud_none(old_pud))
...
memblock_free(table, PAGE_SIZE);
...

Here we free the memory of the pmd page, so why not also free the
memory of the 512 pte pages, if the 512 old pmds are not none?

Thanks,
Xishi Qiu

> At least two things I noticed looking at this again though:
> - This only splits the start address. If the range happens
> to overlap a pud/pmd this won't work. I'll address that in v2
> (a sketch of the idea follows below).
> - We're always flushing the TLB even if nothing changed. Was
> this what you were referring to?
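
A sketch of that first fix, assuming the split is simply repeated
for every page in the range (check_address_range() is a hypothetical
name, not the actual v2 code):

/* Hypothetical sketch: run the existing split logic for each page
 * in the range, so a range crossing a pud/pmd boundary gets all of
 * its section mappings broken down, not just the one containing
 * the start address.
 */
static int check_address_range(unsigned long addr, int numpages)
{
	int i, ret;

	for (i = 0; i < numpages; i++) {
		ret = check_address(addr + i * PAGE_SIZE);
		if (ret)
			return ret;
	}

	return 0;
}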
> 
>  
>> thanks
>> zhongjiang
>>
>>
> 
> Thanks,
> Laura
> 
> 
> .
> 





Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-04 Thread Ard Biesheuvel
On 3 November 2015 at 22:48, Laura Abbott  wrote:
>
> Currently, the set_memory_* functions that are implemented for arm64
> are restricted to module addresses only. This was mostly done
> because arm64 maps normal zone memory with larger page sizes to
> improve TLB performance. This has the side effect though of making it
> difficult to adjust attributes at the PAGE_SIZE granularity. There are
> an increasing number of use cases related to security where it is
> necessary to change the attributes of kernel memory. Add functionality
> to the page attribute changing code under a Kconfig to let systems
> designers decide if they want to make the trade off of security for TLB
> pressure.
>
> Signed-off-by: Laura Abbott 
> ---
>  arch/arm64/Kconfig.debug | 11 +++
>  arch/arm64/mm/mm.h   |  3 ++
>  arch/arm64/mm/mmu.c  |  2 +-
>  arch/arm64/mm/pageattr.c | 74 
> 
>  4 files changed, 84 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index d6285ef..abc6922 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA
>
>   If in doubt, say N
>
> +config DEBUG_CHANGE_PAGEATTR

I don't think this belongs in Kconfig.debug, and I don't think this
should default to off.

We know we currently have no users of set_memory_xx() in arch/arm64
that operate on linear mapping addresses, so we will not introduce any
performance regressions by adding this functionality now. By putting
this feature behind a debug option, every newly introduced call to
set_memory_xx() that operates on linear or vmalloc() addresses needs
to deal with -EINVAL (or depend on DEBUG_CHANGE_PAGEATTR), or it may
error out at runtime if the feature is not enabled.

> +   bool "Allow all kernel memory to have attributes changed"
> +   help
> + If this option is selected, APIs that change page attributes
> + (RW <-> RO, X <-> NX) will be valid for all memory mapped in
> + the kernel space. The trade off is that there may be increased
> + TLB pressure from finer grained page mapping. Turn on this option
> + if performance is more important than security
> +

This is backwards

> + If in doubt, say N
> +
>  source "drivers/hwtracing/coresight/Kconfig"
>
>  endmenu

[...]

> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index e47ed1c..48a4ce9 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c

[...]

> @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages,
> int ret;
> struct page_change_data data;
>
> +   if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr))
> +   return -EINVAL;
> +

Doesn't this exclude the module area?

> if (!IS_ALIGNED(addr, PAGE_SIZE)) {
> start &= PAGE_MASK;
> end = start + size;
> WARN_ON_ONCE(1);
> }
>
> -   if (start < MODULES_VADDR || start >= MODULES_END)
> -   return -EINVAL;
> -
> -   if (end < MODULES_VADDR || end >= MODULES_END)
> -   return -EINVAL;
> +   ret = check_address(addr);
> +   if (ret)
> +   return ret;
>
> data.set_mask = set_mask;
> data.clear_mask = clear_mask;
> --
> 2.4.3
>
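
To make the module-area concern concrete: on arm64 at the time, the
module region sits below PAGE_OFFSET and outside the vmalloc area, so
the new guard rejects module addresses that the old
MODULES_VADDR/MODULES_END check accepted. A sketch of a guard that
keeps modules valid (addr_is_changeable() is a hypothetical helper,
assuming that memory layout):

/* Hypothetical sketch: accept the linear map, the vmalloc area and
 * the module region, instead of only the first two.
 */
static bool addr_is_changeable(unsigned long addr)
{
	if (addr >= PAGE_OFFSET)		/* linear mapping */
		return true;
	if (is_vmalloc_addr((void *)addr))	/* vmalloc area */
		return true;
	return addr >= MODULES_VADDR && addr < MODULES_END;
}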


Re: [PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-03 Thread zhong jiang
On 2015/11/4 5:48, Laura Abbott wrote:
> 
> Currently, the set_memory_* functions that are implemented for arm64
> are restricted to module addresses only. This was mostly done
> because arm64 maps normal zone memory with larger page sizes to
> improve TLB performance. This has the side effect though of making it
> difficult to adjust attributes at the PAGE_SIZE granularity. There are
> an increasing number of use cases related to security where it is
> necessary to change the attributes of kernel memory. Add functionality
> to the page attribute changing code under a Kconfig to let systems
> designers decide if they want to make the trade off of security for TLB
> pressure.
> 
> Signed-off-by: Laura Abbott 
> ---
>  arch/arm64/Kconfig.debug | 11 +++
>  arch/arm64/mm/mm.h   |  3 ++
>  arch/arm64/mm/mmu.c  |  2 +-
>  arch/arm64/mm/pageattr.c | 74 
> 
>  4 files changed, 84 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
> index d6285ef..abc6922 100644
> --- a/arch/arm64/Kconfig.debug
> +++ b/arch/arm64/Kconfig.debug
> @@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA
>  
> If in doubt, say N
>  
> +config DEBUG_CHANGE_PAGEATTR
> + bool "Allow all kernel memory to have attributes changed"
> + help
> +   If this option is selected, APIs that change page attributes
> +   (RW <-> RO, X <-> NX) will be valid for all memory mapped in
> +   the kernel space. The trade off is that there may be increased
> +   TLB pressure from finer grained page mapping. Turn on this option
> +   if performance is more important than security
> +
> +   If in doubt, say N
> +
>  source "drivers/hwtracing/coresight/Kconfig"
>  
>  endmenu
> diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
> index ef47d99..7b0dcc4 100644
> --- a/arch/arm64/mm/mm.h
> +++ b/arch/arm64/mm/mm.h
> @@ -1,3 +1,6 @@
>  extern void __init bootmem_init(void);
>  
>  void fixup_init(void);
> +
> +void split_pud(pud_t *old_pud, pmd_t *pmd);
> +void split_pmd(pmd_t *pmd, pte_t *pte);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ff41efa..cefad2d 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz)
>  /*
>   * remap a PMD into pages
>   */
> -static void split_pmd(pmd_t *pmd, pte_t *pte)
> +void split_pmd(pmd_t *pmd, pte_t *pte)
>  {
>   unsigned long pfn = pmd_pfn(*pmd);
>   unsigned long addr = pfn << PAGE_SHIFT;
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index e47ed1c..48a4ce9 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -15,9 +15,12 @@
>  #include <linux/module.h>
>  #include <linux/sched.h>
>  
> +#include <asm/pgalloc.h>
>  #include <asm/pgtable.h>
>  #include <asm/tlbflush.h>
>  
> +#include "mm.h"
> +
>  struct page_change_data {
>   pgprot_t set_mask;
>   pgprot_t clear_mask;
> @@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
>   return 0;
>  }
>  
> +#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR
> +static int check_address(unsigned long addr)
> +{
> + pgd_t *pgd = pgd_offset_k(addr);
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *pte;
> + int ret = -EFAULT;
> +
> + if (pgd_none(*pgd))
> + goto out;
> +
> + pud = pud_offset(pgd, addr);
> + if (pud_none(*pud))
> + goto out;
> +
> + if (pud_sect(*pud)) {
> + pmd = pmd_alloc_one(&init_mm, addr);
> + if (!pmd) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + split_pud(pud, pmd);
> + pud_populate(&init_mm, pud, pmd);
> + }
> +
> + pmd = pmd_offset(pud, addr);
> + if (pmd_none(*pmd))
> + goto out;
> +
> + if (pmd_sect(*pmd)) {
> + pte = pte_alloc_one_kernel(&init_mm, addr);
> + if (!pte) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + split_pmd(pmd, pte);
> + __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE);
> + }
> +
> + pte = pte_offset_kernel(pmd, addr);
> + if (pte_none(*pte))
> + goto out;
> +
> + flush_tlb_all();
> + ret = 0;
> +
> +out:
> + return ret;
> +}
> +#else
> +static int check_address(unsigned long addr)
> +{
> + if (addr < MODULES_VADDR || addr >= MODULES_END)
> + return -EINVAL;
> +
> + return 0;
> +}
> +#endif
> +
>  static int change_memory_common(unsigned long addr, int numpages,
>   pgprot_t set_mask, pgprot_t clear_mask)
>  {
> @@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages,
>   int ret;
>   struct page_change_data data;
>  
> + if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr))
> + return -EINVAL;
> +
>   if (!IS_ALIGNED(addr, PAGE_SIZE)) {
>   start &= PAGE_MASK;
>  

[PATCH 2/2] arm64: Allow changing of attributes outside of modules

2015-11-03 Thread Laura Abbott

Currently, the set_memory_* functions that are implemented for arm64
are restricted to module addresses only. This was mostly done
because arm64 maps normal zone memory with larger page sizes to
improve TLB performance. This has the side effect though of making it
difficult to adjust attributes at the PAGE_SIZE granularity. There are
an increasing number of use cases related to security where it is
necessary to change the attributes of kernel memory. Add functionality
to the page attribute changing code under a Kconfig to let systems
designers decide if they want to make the trade off of security for TLB
pressure.

Signed-off-by: Laura Abbott 
---
 arch/arm64/Kconfig.debug | 11 +++
 arch/arm64/mm/mm.h   |  3 ++
 arch/arm64/mm/mmu.c  |  2 +-
 arch/arm64/mm/pageattr.c | 74 
 4 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index d6285ef..abc6922 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -89,6 +89,17 @@ config DEBUG_ALIGN_RODATA
 
  If in doubt, say N
 
+config DEBUG_CHANGE_PAGEATTR
+   bool "Allow all kernel memory to have attributes changed"
+   help
+ If this option is selected, APIs that change page attributes
+ (RW <-> RO, X <-> NX) will be valid for all memory mapped in
+ the kernel space. The trade off is that there may be increased
+ TLB pressure from finer grained page mapping. Turn on this option
+ if performance is more important than security
+
+ If in doubt, say N
+
 source "drivers/hwtracing/coresight/Kconfig"
 
 endmenu
diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
index ef47d99..7b0dcc4 100644
--- a/arch/arm64/mm/mm.h
+++ b/arch/arm64/mm/mm.h
@@ -1,3 +1,6 @@
 extern void __init bootmem_init(void);
 
 void fixup_init(void);
+
+void split_pud(pud_t *old_pud, pmd_t *pmd);
+void split_pmd(pmd_t *pmd, pte_t *pte);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ff41efa..cefad2d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -72,7 +72,7 @@ static void __init *early_alloc(unsigned long sz)
 /*
  * remap a PMD into pages
  */
-static void split_pmd(pmd_t *pmd, pte_t *pte)
+void split_pmd(pmd_t *pmd, pte_t *pte)
 {
unsigned long pfn = pmd_pfn(*pmd);
unsigned long addr = pfn << PAGE_SHIFT;
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index e47ed1c..48a4ce9 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -15,9 +15,12 @@
 #include <linux/module.h>
 #include <linux/sched.h>
 
+#include <asm/pgalloc.h>
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
 
+#include "mm.h"
+
 struct page_change_data {
pgprot_t set_mask;
pgprot_t clear_mask;
@@ -36,6 +39,66 @@ static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
return 0;
 }
 
+#ifdef CONFIG_DEBUG_CHANGE_PAGEATTR
+static int check_address(unsigned long addr)
+{
+   pgd_t *pgd = pgd_offset_k(addr);
+   pud_t *pud;
+   pmd_t *pmd;
+   pte_t *pte;
+   int ret = -EFAULT;
+
+   if (pgd_none(*pgd))
+   goto out;
+
+   pud = pud_offset(pgd, addr);
+   if (pud_none(*pud))
+   goto out;
+
+   if (pud_sect(*pud)) {
+   pmd = pmd_alloc_one(&init_mm, addr);
+   if (!pmd) {
+   ret = -ENOMEM;
+   goto out;
+   }
+   split_pud(pud, pmd);
+   pud_populate(&init_mm, pud, pmd);
+   }
+
+   pmd = pmd_offset(pud, addr);
+   if (pmd_none(*pmd))
+   goto out;
+
+   if (pmd_sect(*pmd)) {
+   pte = pte_alloc_one_kernel(&init_mm, addr);
+   if (!pte) {
+   ret = -ENOMEM;
+   goto out;
+   }
+   split_pmd(pmd, pte);
+   __pmd_populate(pmd, __pa(pte), PMD_TYPE_TABLE);
+   }
+
+   pte = pte_offset_kernel(pmd, addr);
+   if (pte_none(*pte))
+   goto out;
+
+   flush_tlb_all();
+   ret = 0;
+
+out:
+   return ret;
+}
+#else
+static int check_address(unsigned long addr)
+{
+   if (addr < MODULES_VADDR || addr >= MODULES_END)
+   return -EINVAL;
+
+   return 0;
+}
+#endif
+
 static int change_memory_common(unsigned long addr, int numpages,
pgprot_t set_mask, pgprot_t clear_mask)
 {
@@ -45,17 +108,18 @@ static int change_memory_common(unsigned long addr, int numpages,
int ret;
struct page_change_data data;
 
+   if (addr < PAGE_OFFSET && !is_vmalloc_addr((void *)addr))
+   return -EINVAL;
+
if (!IS_ALIGNED(addr, PAGE_SIZE)) {
start &= PAGE_MASK;
end = start + size;
WARN_ON_ONCE(1);
}
 
-   if (start < MODULES_VADDR || start >= MODULES_END)
-   return -EINVAL;
-
-   if (end < MODULES_VADDR || end >= MODULES_END)
-   return 
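
For context, the entry points affected by this change all funnel into
change_memory_common(); set_memory_ro() in pageattr.c looks roughly
like the sketch below (paraphrased, so treat the exact masks as an
assumption):

int set_memory_ro(unsigned long addr, int numpages)
{
	/* Set the read-only bit and clear the write bit on every pte
	 * in the range; the other set_memory_* helpers differ only
	 * in the masks they pass.
	 */
	return change_memory_common(addr, numpages,
					__pgprot(PTE_RDONLY),
					__pgprot(PTE_WRITE));
}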
