Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-06 Thread Laura Abbott
On 12/06/2016 09:02 AM, Mark Rutland wrote:
> Hi,
> 
> As a heads-up, it looks like this got mangled somewhere. In the hunk at
> arch/arm64/mm/kasan_init.c:68, 'do' in the context became 'edo'.
> Deleting the 'e' makes it apply.
> 

Argh, this must have come in while editing the .patch before e-mailing.
Sorry about that.

> I think this is almost there; other than James's hibernate bug I only
> see one real issue, and everything else is a minor nit.
> 
> On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
>> __pa_symbol is technically the marco that should be used for kernel
>> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
>> will do bounds checking. As part of this, introduce lm_alias, a
>> macro which wraps the __va(__pa(...)) idiom used a few places to
>> get the alias.
> 
> I think the addition of the lm_alias() macro under include/mm should be
> a separate preparatory patch. That way it's separate from the
> arm64-specific parts, and more obvious to !arm64 people reviewing the
> other parts.
> 

I debated if it was more obvious to show how it was used in context
vs a stand-alone patch. I think you're right that for non-arm64 reviewers
the separate patch would be easier to find.

>> Signed-off-by: Laura Abbott 
>> ---
>> v4: Stop calling __va early, conversion of a few more sites. I decided against
>> wrapping the __p*d_populate calls into new functions since the call sites
>> should be limited.
>> ---
>>  arch/arm64/include/asm/kvm_mmu.h  |  4 ++--
>>  arch/arm64/include/asm/memory.h   |  2 ++
>>  arch/arm64/include/asm/mmu_context.h  |  6 +++---
>>  arch/arm64/include/asm/pgtable.h  |  2 +-
>>  arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
>>  arch/arm64/kernel/cpu-reset.h |  2 +-
>>  arch/arm64/kernel/cpufeature.c|  2 +-
>>  arch/arm64/kernel/hibernate.c | 13 +
>>  arch/arm64/kernel/insn.c  |  2 +-
>>  arch/arm64/kernel/psci.c  |  2 +-
>>  arch/arm64/kernel/setup.c |  8 
>>  arch/arm64/kernel/smp_spin_table.c|  2 +-
>>  arch/arm64/kernel/vdso.c  |  4 ++--
>>  arch/arm64/mm/init.c  | 11 ++-
>>  arch/arm64/mm/kasan_init.c| 21 +---
>>  arch/arm64/mm/mmu.c   | 32 +++
>>  drivers/firmware/psci.c   |  2 +-
>>  include/linux/mm.h|  4 
>>  18 files changed, 70 insertions(+), 51 deletions(-)
> 
> It looks like we need to make sure these (directly) include 
> for __pa_symbol() and lm_alias(), or there's some fragility, e.g.
> 
> [mark@leverpostej:~/src/linux]% uselinaro 15.08 make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j10 -s
> arch/arm64/kernel/psci.c: In function 'cpu_psci_cpu_boot':
> arch/arm64/kernel/psci.c:48:50: error: implicit declaration of function '__pa_symbol' [-Werror=implicit-function-declaration]
>   int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
>   ^
> cc1: some warnings being treated as errors
> make[1]: *** [arch/arm64/kernel/psci.o] Error 1
> make: *** [arch/arm64/kernel] Error 2
> make: *** Waiting for unfinished jobs
> 

Right, I'll double check. 

>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>>  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
>>  #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>>  #define virt_to_pfn(x)	__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
>> +#define sym_to_pfn(x)	__phys_to_pfn(__pa_symbol(x))
>> +#define lm_alias(x)	__va(__pa_symbol(x))
> 
> As Catalin mentioned, we should be able to drop this copy of lm_alias(),
> given we have the same in the core headers.
> 
>> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
>> index a2c2478..79cd86b 100644
>> --- a/arch/arm64/kernel/vdso.c
>> +++ b/arch/arm64/kernel/vdso.c
>> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>>  return -ENOMEM;
>>  
>>  /* Grab the vDSO data page. */
>> -vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
>> +vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
>>  
>>  /* Grab the vDSO code pages. */
>>  for (i = 0; i < vdso_pages; i++)
>> -	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
>> +	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);
> 
> I see you added sym_to_pfn(), which we can use here to keep this short
> and legible. It might also be worth using a temporary pfn_t, e.g.
> 
>   pfn = sym_to_pfn(&vdso_start);
> 
>   for (i = 0; i < vdso_pages; i++)
>   vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
> 

Good idea.
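
For reference, folding both of Mark's suggestions together, the reworked
vdso_init() fragment would look roughly like the sketch below (illustrative
only, not the final patch; it assumes the symbol is &vdso_start as in the
existing code, and leaves the rest of the function untouched):

	int i;
	unsigned long pfn;

	/* Grab the vDSO data page. */
	vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));

	/* Grab the vDSO code pages. */
	pfn = sym_to_pfn(&vdso_start);

	for (i = 0; i < vdso_pages; i++)
		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);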


Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-06 Thread Mark Rutland
Hi,

As a heads-up, it looks like this got mangled somewhere. In the hunk at
arch/arm64/mm/kasan_init.c:68, 'do' in the context became 'edo'.
Deleting the 'e' makes it apply.

I think this is almost there; other than James's hibernate bug I only
see one real issue, and everything else is a minor nit.

On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
> __pa_symbol is technically the marco that should be used for kernel
> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.

I think the addition of the lm_alias() macro under include/mm should be
a separate preparatory patch. That way it's separate from the
arm64-specific parts, and more obvious to !arm64 people reviewing the
other parts.

> Signed-off-by: Laura Abbott 
> ---
> v4: Stop calling __va early, conversion of a few more sites. I decided against
> wrapping the __p*d_populate calls into new functions since the call sites
> should be limited.
> ---
>  arch/arm64/include/asm/kvm_mmu.h  |  4 ++--
>  arch/arm64/include/asm/memory.h   |  2 ++
>  arch/arm64/include/asm/mmu_context.h  |  6 +++---
>  arch/arm64/include/asm/pgtable.h  |  2 +-
>  arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
>  arch/arm64/kernel/cpu-reset.h |  2 +-
>  arch/arm64/kernel/cpufeature.c|  2 +-
>  arch/arm64/kernel/hibernate.c | 13 +
>  arch/arm64/kernel/insn.c  |  2 +-
>  arch/arm64/kernel/psci.c  |  2 +-
>  arch/arm64/kernel/setup.c |  8 
>  arch/arm64/kernel/smp_spin_table.c|  2 +-
>  arch/arm64/kernel/vdso.c  |  4 ++--
>  arch/arm64/mm/init.c  | 11 ++-
>  arch/arm64/mm/kasan_init.c| 21 +---
>  arch/arm64/mm/mmu.c   | 32 +++
>  drivers/firmware/psci.c   |  2 +-
>  include/linux/mm.h|  4 
>  18 files changed, 70 insertions(+), 51 deletions(-)

It looks like we need to make sure these (directly) include 
for __pa_symbol() and lm_alias(), or there's some fragility, e.g.

[mark@leverpostej:~/src/linux]% uselinaro 15.08 make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j10 -s
arch/arm64/kernel/psci.c: In function 'cpu_psci_cpu_boot':
arch/arm64/kernel/psci.c:48:50: error: implicit declaration of function '__pa_symbol' [-Werror=implicit-function-declaration]
  int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
  ^
cc1: some warnings being treated as errors
make[1]: *** [arch/arm64/kernel/psci.o] Error 1
make: *** [arch/arm64/kernel] Error 2
make: *** Waiting for unfinished jobs
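
A direct include is the obvious fix, sketched below. The header name is an
assumption (the archived mail ate the original <...> reference above); it
follows from the include/linux/mm.h hunk in this patch, which carries the
generic lm_alias() fallback:

	/*
	 * Hypothetical sketch, e.g. near the top of arch/arm64/kernel/psci.c:
	 * pull in the macro definitions directly instead of relying on
	 * indirect inclusion. The header choice is an assumption, see above.
	 */
	#include <linux/mm.h>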

> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
>  #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>  #define virt_to_pfn(x)	__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> +#define sym_to_pfn(x)	__phys_to_pfn(__pa_symbol(x))
> +#define lm_alias(x)	__va(__pa_symbol(x))

As Catalin mentioned, we should be able to drop this copy of lm_alias(),
given we have the same in the core headers.

> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index a2c2478..79cd86b 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -140,11 +140,11 @@ static int __init vdso_init(void)
>   return -ENOMEM;
>  
>   /* Grab the vDSO data page. */
> - vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
> + vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
>  
>   /* Grab the vDSO code pages. */
>   for (i = 0; i < vdso_pages; i++)
> -	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
> +	vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa_symbol(&vdso_start)) + i);

I see you added sym_to_pfn(), which we can use here to keep this short
and legible. It might also be worth using a temporary pfn_t, e.g.

pfn = sym_to_pfn(&vdso_start);

for (i = 0; i < vdso_pages; i++)
vdso_pagelist[i + 1] = pfn_to_page(pfn + i);

> diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
> index 8263429..9defbe2 100644
> --- a/drivers/firmware/psci.c
> +++ b/drivers/firmware/psci.c
> @@ -383,7 +383,7 @@ static int psci_suspend_finisher(unsigned long index)
>   u32 *state = __this_cpu_read(psci_power_state);
>  
>   return psci_ops.cpu_suspend(state[index - 1],
> - virt_to_phys(cpu_resume));
> + __pa_symbol(cpu_resume));
>  }
>  
>  int 

Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-06 Thread Mark Rutland
On Thu, Dec 01, 2016 at 12:04:27PM +, James Morse wrote:
> On 29/11/16 18:55, Laura Abbott wrote:
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index d55a7b0..4f0c77d 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
> >  * Since we only copied the linear map, we need to find restore_pblist's
> >  * linear map address.
> >  */
> > -   lm_restore_pblist = LMADDR(restore_pblist);
> > +   lm_restore_pblist = lm_alias(restore_pblist);
> >  
> > /*
> >  * We need a zero page that is zero before & after resume in order to
> 
> This change causes resume from hibernate to panic in:
> > VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
> > x > (unsigned long) KERNEL_END);
> 
> It looks like kaslr's relocation code has already fixed restore_pblist, so
> your debug virtual check catches this doing the wrong thing. My bug.
> 
> readelf -s vmlinux | grep ...
> > 103495: 0808 0 NOTYPE  GLOBAL DEFAULT1 _text
> >  92104: 08e43860 8 OBJECT  GLOBAL DEFAULT   24 restore_pblist
> > 105442: 08e85000 0 NOTYPE  GLOBAL DEFAULT   24 _end
> 
> But restore_pblist == 0x800971b7f998 when passed to __phys_addr_symbol().

I think KASLR's a red herring; it shouldn't change the distance between
the restore_pblist symbol and {_text,_end}.

Above, 08e43860 is the location of the pointer in the kernel
image (i.e. it's &restore_pblist). 0x800971b7f998 is the pointer
that was assigned to restore_pblist. For KASLR, the low bits (at least
up to a page boundary) shouldn't change across relocation.

Assuming it's only ever assigned a dynamic allocation, which should fall
in the linear map, the LMADDR() dance doesn't appear to be necessary.

> This fixes the problem:
> %<
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index 4f0c77d2ff7a..8bed26a2d558 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -457,7 +457,6 @@ int swsusp_arch_resume(void)
> void *zero_page;
> size_t exit_size;
> pgd_t *tmp_pg_dir;
> -   void *lm_restore_pblist;
> phys_addr_t phys_hibernate_exit;
> void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
>   void *, phys_addr_t, phys_addr_t);
> @@ -478,12 +477,6 @@ int swsusp_arch_resume(void)
> goto out;
> 
> /*
> -* Since we only copied the linear map, we need to find restore_pblist's
> -* linear map address.
> -*/
> -   lm_restore_pblist = lm_alias(restore_pblist);
> -
> -   /*
>  * We need a zero page that is zero before & after resume in order to
>  * to break before make on the ttbr1 page tables.
>  */
> @@ -534,7 +527,7 @@ int swsusp_arch_resume(void)
> }
> 
> hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1,
> -  resume_hdr.reenter_kernel, lm_restore_pblist,
> +  resume_hdr.reenter_kernel, restore_pblist,
>    resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page));
> 
>  out:
> %<

Folding that in (or having it as a preparatory cleanup patch) makes
sense to me. AFAICT the logic was valid (albeit confused) until now, so
it's not strictly a fix.

Thanks,
Mark.


Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-06 Thread Catalin Marinas
On Mon, Dec 05, 2016 at 04:50:33PM -0800, Florian Fainelli wrote:
> On 11/29/2016 10:55 AM, Laura Abbott wrote:
> > __pa_symbol is technically the marco that should be used for kernel
> > symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> > will do bounds checking. As part of this, introduce lm_alias, a
> > macro which wraps the __va(__pa(...)) idiom used a few places to
> > get the alias.
> > 
> > Signed-off-by: Laura Abbott 
> > ---
> > v4: Stop calling __va early, conversion of a few more sites. I decided against
> > wrapping the __p*d_populate calls into new functions since the call sites
> > should be limited.
> > ---
> 
> 
> > -   pud_populate(&init_mm, pud, bm_pmd);
> > +   if (pud_none(*pud))
> > +   __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
> > pmd = fixmap_pmd(addr);
> > -   pmd_populate_kernel(&init_mm, pmd, bm_pte);
> > +   __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
> 
> Is there a particular reason why pmd_populate_kernel() is not changed to
> use __pa_symbol() instead of using __pa()? The other users in the arm64
> kernel is arch/arm64/kernel/hibernate.c which seems to call this against
> kernel symbols as well?

create_safe_exec_page() may allocate a pte from the linear map and
passes such pointer to pmd_populate_kernel(). The copy_pte() function
does something similar. In addition, we have the generic
__pte_alloc_kernel() in mm/memory.c using linear addresses.
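
In other words, pmd_populate_kernel() can be handed pointers from either
address space, so it has to keep __pa(); only call sites known to pass
kernel-image symbols get switched over. A rough sketch of the split (the
fixmap names come from the quoted hunk; addr, pmd and the linear-map
example are illustrative assumptions):

	/* Table allocated from the linear map: __pa() is the right move. */
	pte_t *pte = pte_alloc_one_kernel(&init_mm, addr);
	pmd_populate_kernel(&init_mm, pmd, pte);

	/* Static kernel-image symbol (the fixmap's bm_pte): __pa_symbol(). */
	__pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);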

-- 
Catalin


Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-05 Thread Florian Fainelli
On 11/29/2016 10:55 AM, Laura Abbott wrote:
> __pa_symbol is technically the marco that should be used for kernel
> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.
> 
> Signed-off-by: Laura Abbott 
> ---
> v4: Stop calling __va early, conversion of a few more sites. I decided against
> wrapping the __p*d_populate calls into new functions since the call sites
> should be limited.
> ---


> - pud_populate(&init_mm, pud, bm_pmd);
> + if (pud_none(*pud))
> + __pud_populate(pud, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
>   pmd = fixmap_pmd(addr);
> - pmd_populate_kernel(&init_mm, pmd, bm_pte);
> + __pmd_populate(pmd, __pa_symbol(bm_pte), PMD_TYPE_TABLE);

Is there a particular reason why pmd_populate_kernel() is not changed to
use __pa_symbol() instead of using __pa()? The other users in the arm64
kernel is arch/arm64/kernel/hibernate.c which seems to call this against
kernel symbols as well?
-- 
Florian


Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-12-01 Thread James Morse
Hi Laura,

On 29/11/16 18:55, Laura Abbott wrote:
> __pa_symbol is technically the marco that should be used for kernel

macro

> symbols. Switch to this as a pre-requisite for DEBUG_VIRTUAL which
> will do bounds checking. As part of this, introduce lm_alias, a
> macro which wraps the __va(__pa(...)) idiom used a few places to
> get the alias.

> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index d55a7b0..4f0c77d 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -484,7 +481,7 @@ int swsusp_arch_resume(void)
>* Since we only copied the linear map, we need to find restore_pblist's
>* linear map address.
>*/
> - lm_restore_pblist = LMADDR(restore_pblist);
> + lm_restore_pblist = lm_alias(restore_pblist);
>  
>   /*
>* We need a zero page that is zero before & after resume in order to

This change causes resume from hibernate to panic in:
> VIRTUAL_BUG_ON(x < (unsigned long) KERNEL_START ||
>   x > (unsigned long) KERNEL_END);

It looks like kaslr's relocation code has already fixed restore_pblist, so your
debug virtual check catches this doing the wrong thing. My bug.

readelf -s vmlinux | grep ...
> 103495: 0808 0 NOTYPE  GLOBAL DEFAULT1 _text
>  92104: 08e43860 8 OBJECT  GLOBAL DEFAULT   24 restore_pblist
> 105442: 08e85000 0 NOTYPE  GLOBAL DEFAULT   24 _end

But restore_pblist == 0x800971b7f998 when passed to __phys_addr_symbol().

This fixes the problem:
%<
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 4f0c77d2ff7a..8bed26a2d558 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -457,7 +457,6 @@ int swsusp_arch_resume(void)
void *zero_page;
size_t exit_size;
pgd_t *tmp_pg_dir;
-   void *lm_restore_pblist;
phys_addr_t phys_hibernate_exit;
void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
  void *, phys_addr_t, phys_addr_t);
@@ -478,12 +477,6 @@ int swsusp_arch_resume(void)
goto out;

/*
-* Since we only copied the linear map, we need to find restore_pblist's
-* linear map address.
-*/
-   lm_restore_pblist = lm_alias(restore_pblist);
-
-   /*
 * We need a zero page that is zero before & after resume in order to
 * to break before make on the ttbr1 page tables.
 */
@@ -534,7 +527,7 @@ int swsusp_arch_resume(void)
}

hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1,
-  resume_hdr.reenter_kernel, lm_restore_pblist,
+  resume_hdr.reenter_kernel, restore_pblist,
   resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page));

 out:
%<

I can post it as a separate fixes patch if you prefer.

I also tested kexec. FWIW:
Tested-by: James Morse 


Thanks,

James

[0] Trace
[4.191607] Freezing user space processes ... (elapsed 0.000 seconds) done.
[4.224251] random: fast init done
[4.243825] PM: Using 3 thread(s) for decompression.
[4.243825] PM: Loading and decompressing image data (90831 pages)...
[4.255257] hibernate: Hibernated on CPU 0 [mpidr:0x100]
[5.213469] PM: Image loading progress:   0%
[5.579886] PM: Image loading progress:  10%
[5.740234] ata2: SATA link down (SStatus 0 SControl 0)
[5.760435] PM: Image loading progress:  20%
[5.970647] PM: Image loading progress:  30%
[6.563108] PM: Image loading progress:  40%
[6.848389] PM: Image loading progress:  50%
[7.526272] PM: Image loading progress:  60%
[7.702727] PM: Image loading progress:  70%
[7.899754] PM: Image loading progress:  80%
[8.100703] PM: Image loading progress:  90%
[8.300978] PM: Image loading progress: 100%
[8.305441] PM: Image loading done.
[8.308975] PM: Read 363324 kbytes in 4.05 seconds (89.70 MB/s)
[8.344299] PM: quiesce of devices complete after 22.706 msecs
[8.350762] PM: late quiesce of devices complete after 0.596 msecs
[8.381334] PM: noirq quiesce of devices complete after 24.365 msecs
[8.387729] Disabling non-boot CPUs ...
[8.412500] CPU1: shutdown
[8.415211] psci: CPU1 killed.
[8.460465] CPU2: shutdown
[8.463175] psci: CPU2 killed.
[8.504447] CPU3: shutdown
[8.507156] psci: CPU3 killed.
[8.540375] CPU4: shutdown
[8.543084] psci: CPU4 killed.
[8.580333] CPU5: shutdown
[8.583043] psci: CPU5 killed.
[8.601206] [ cut here ]
[8.601206] kernel BUG at ../arch/arm64/mm/physaddr.c:25!
[8.601206] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[8.601206] Modules linked in:
[8.601206] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.0-rc7-00010-g27c672
[8.601206] Hardware name: ARM Juno development board 

Re: [PATCHv4 05/10] arm64: Use __pa_symbol for kernel symbols

2016-11-30 Thread Catalin Marinas
On Tue, Nov 29, 2016 at 10:55:24AM -0800, Laura Abbott wrote:
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -205,6 +205,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
>  #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>  #define virt_to_pfn(x)	__phys_to_pfn(__virt_to_phys((unsigned long)(x)))
> +#define sym_to_pfn(x)	__phys_to_pfn(__pa_symbol(x))
> +#define lm_alias(x)	__va(__pa_symbol(x))
[...]
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -76,6 +76,10 @@ extern int mmap_rnd_compat_bits __read_mostly;
>  #define page_to_virt(x)  __va(PFN_PHYS(page_to_pfn(x)))
>  #endif
>  
> +#ifndef lm_alias
> +#define lm_alias(x)  __va(__pa_symbol(x))
> +#endif

You can drop the arm64-specific lm_alias macro as it's the same as the
generic one you introduced in the same patch.
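
To spell out why the copy is redundant: the generic definition is guarded,
so an architecture only needs its own lm_alias if it wants different
behaviour. A sketch (arm64's version is byte-for-byte the same, hence it
can simply go):

	/* include/linux/mm.h -- generic fallback from this patch: */
	#ifndef lm_alias
	#define lm_alias(x)	__va(__pa_symbol(x))
	#endif

	/*
	 * An arch override would have to be defined in headers included
	 * before this point; arm64's identical copy in <asm/memory.h>
	 * adds nothing, so it can be dropped.
	 */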

-- 
Catalin

