Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-03 Thread Rafael J. Wysocki
On Tue, Aug 2, 2016 at 4:34 PM, Thomas Garnier  wrote:
> On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki  wrote:
>> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>>> When KASLR memory randomization is used, __PAGE_OFFSET is a global
>>> variable changed during boot. The assembly code was using the variable
>>> as an immediate value to calculate the cr3 physical address. The
>>> physical address was incorrect, resulting in a GP fault.
>>>
>>> Signed-off-by: Thomas Garnier 
>>> ---
>>>  arch/x86/power/hibernate_asm_64.S | 12 +++-
>>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/power/hibernate_asm_64.S 
>>> b/arch/x86/power/hibernate_asm_64.S
>>> index 8eee0e9..8db4905 100644
>>> --- a/arch/x86/power/hibernate_asm_64.S
>>> +++ b/arch/x86/power/hibernate_asm_64.S
>>> @@ -23,6 +23,16 @@
>>>  #include 
>>>  #include 
>>>
>>> +/*
>>> + * A global variable holds the page_offset when KASLR memory randomization
>>> + * is enabled.
>>> + */
>>> +#ifdef CONFIG_RANDOMIZE_MEMORY
>>> +#define __PAGE_OFFSET_REF __PAGE_OFFSET
>>> +#else
>>> +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
>>> +#endif
>>> +
>>>  ENTRY(swsusp_arch_suspend)
>>>   movq	$saved_context, %rax
>>>   movq	%rsp, pt_regs_sp(%rax)
>>> @@ -72,7 +82,7 @@ ENTRY(restore_image)
>>>   /* code below has been relocated to a safe page */
>>>  ENTRY(core_restore_code)
>>>   /* switch to temporary page tables */
>>> - movq	$__PAGE_OFFSET, %rcx
>>> + movq	__PAGE_OFFSET_REF, %rcx
>>>   subq	%rcx, %rax
>>>   movq	%rax, %cr3
>>>   /* flush TLB */
>>>
>>
>> I'm not particularly liking the #ifdefs and they won't be really
>> necessary if the subtraction is carried out by the C code IMO.
>>
>> What about the patch below instead?
>>
>
> Yes, I think that's a good idea. I will test it and send PATCH v2.

No need to send this patch again.  Please just let me know if it works
for you. :-)

Thanks,
Rafael


Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-02 Thread Rafael J. Wysocki
On Tue, Aug 2, 2016 at 10:59 PM, Thomas Garnier  wrote:
> On Tue, Aug 2, 2016 at 1:47 PM, Rafael J. Wysocki  wrote:
>> On Tue, Aug 2, 2016 at 4:34 PM, Thomas Garnier  wrote:
>>> On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki  
>>> wrote:
 On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
> When KASLR memory randomization is used, __PAGE_OFFSET is a global
> variable changed during boot. The assembly code was using the variable
> as an immediate value to calculate the cr3 physical address. The
> physical address was incorrect, resulting in a GP fault.
>
> Signed-off-by: Thomas Garnier 
> ---
>  arch/x86/power/hibernate_asm_64.S | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/power/hibernate_asm_64.S 
> b/arch/x86/power/hibernate_asm_64.S
> index 8eee0e9..8db4905 100644
> --- a/arch/x86/power/hibernate_asm_64.S
> +++ b/arch/x86/power/hibernate_asm_64.S
> @@ -23,6 +23,16 @@
>  #include 
>  #include 
>
> +/*
> + * A global variable holds the page_offset when KASLR memory 
> randomization
> + * is enabled.
> + */
> +#ifdef CONFIG_RANDOMIZE_MEMORY
> +#define __PAGE_OFFSET_REF __PAGE_OFFSET
> +#else
> +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
> +#endif
> +
>  ENTRY(swsusp_arch_suspend)
>   movq	$saved_context, %rax
>   movq	%rsp, pt_regs_sp(%rax)
> @@ -72,7 +82,7 @@ ENTRY(restore_image)
>   /* code below has been relocated to a safe page */
>  ENTRY(core_restore_code)
>   /* switch to temporary page tables */
> - movq	$__PAGE_OFFSET, %rcx
> + movq	__PAGE_OFFSET_REF, %rcx
>   subq	%rcx, %rax
>   movq	%rax, %cr3
>   /* flush TLB */
>

 I'm not particularly liking the #ifdefs and they won't be really
 necessary if the subtraction is carried out by the C code IMO.

 What about the patch below instead?

>>>
>>> Yes, I think that's a good idea. I will test it and send PATCH v2.
>>
>> No need to send this patch again.  Please just let me know if it works
>> for you. :-)
>>
>
> It worked well when I tested it and I agree that's a better approach.

OK, thanks!

Let me add a changelog to it then.


Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-02 Thread Thomas Garnier
On Tue, Aug 2, 2016 at 1:47 PM, Rafael J. Wysocki  wrote:
> On Tue, Aug 2, 2016 at 4:34 PM, Thomas Garnier  wrote:
>> On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki  wrote:
>>> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
 When KASLR memory randomization is used, __PAGE_OFFSET is a global
 variable changed during boot. The assembly code was using the variable
 as an immediate value to calculate the cr3 physical address. The
 physical address was incorrect, resulting in a GP fault.

 Signed-off-by: Thomas Garnier 
 ---
  arch/x86/power/hibernate_asm_64.S | 12 +++-
  1 file changed, 11 insertions(+), 1 deletion(-)

 diff --git a/arch/x86/power/hibernate_asm_64.S 
 b/arch/x86/power/hibernate_asm_64.S
 index 8eee0e9..8db4905 100644
 --- a/arch/x86/power/hibernate_asm_64.S
 +++ b/arch/x86/power/hibernate_asm_64.S
 @@ -23,6 +23,16 @@
  #include 
  #include 

 +/*
 + * A global variable holds the page_offset when KASLR memory randomization
 + * is enabled.
 + */
 +#ifdef CONFIG_RANDOMIZE_MEMORY
 +#define __PAGE_OFFSET_REF __PAGE_OFFSET
 +#else
 +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
 +#endif
 +
  ENTRY(swsusp_arch_suspend)
  movq	$saved_context, %rax
  movq	%rsp, pt_regs_sp(%rax)
 @@ -72,7 +82,7 @@ ENTRY(restore_image)
   /* code below has been relocated to a safe page */
  ENTRY(core_restore_code)
   /* switch to temporary page tables */
 - movq	$__PAGE_OFFSET, %rcx
 + movq	__PAGE_OFFSET_REF, %rcx
   subq	%rcx, %rax
   movq	%rax, %cr3
   /* flush TLB */

>>>
>>> I'm not particularly liking the #ifdefs and they won't be really
>>> necessary if the subtraction is carried out by the C code IMO.
>>>
>>> What about the patch below instead?
>>>
>>
>> Yes, I think that's a good idea. I will test it and send PATCH v2.
>
> No need to send this patch again.  Please just let me know if it works
> for you. :-)
>

It worked well when I tested it and I agree that's a better approach.

> Thanks,
> Rafael


Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-02 Thread Thomas Garnier
On Mon, Aug 1, 2016 at 5:38 PM, Rafael J. Wysocki  wrote:
> On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
>> When KASLR memory randomization is used, __PAGE_OFFSET is a global
>> variable changed during boot. The assembly code was using the variable
>> as an immediate value to calculate the cr3 physical address. The
>> physical address was incorrect, resulting in a GP fault.
>>
>> Signed-off-by: Thomas Garnier 
>> ---
>>  arch/x86/power/hibernate_asm_64.S | 12 +++-
>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/power/hibernate_asm_64.S 
>> b/arch/x86/power/hibernate_asm_64.S
>> index 8eee0e9..8db4905 100644
>> --- a/arch/x86/power/hibernate_asm_64.S
>> +++ b/arch/x86/power/hibernate_asm_64.S
>> @@ -23,6 +23,16 @@
>>  #include 
>>  #include 
>>
>> +/*
>> + * A global variable holds the page_offset when KASLR memory randomization
>> + * is enabled.
>> + */
>> +#ifdef CONFIG_RANDOMIZE_MEMORY
>> +#define __PAGE_OFFSET_REF __PAGE_OFFSET
>> +#else
>> +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
>> +#endif
>> +
>>  ENTRY(swsusp_arch_suspend)
>>   movq	$saved_context, %rax
>>   movq	%rsp, pt_regs_sp(%rax)
>> @@ -72,7 +82,7 @@ ENTRY(restore_image)
>>   /* code below has been relocated to a safe page */
>>  ENTRY(core_restore_code)
>>   /* switch to temporary page tables */
>> - movq	$__PAGE_OFFSET, %rcx
>> + movq	__PAGE_OFFSET_REF, %rcx
>>   subq	%rcx, %rax
>>   movq	%rax, %cr3
>>   /* flush TLB */
>>
>
> I'm not particularly liking the #ifdefs and they won't be really
> necessary if the subtraction is carried out by the C code IMO.
>
> What about the patch below instead?
>

Yes, I think that's a good idea. I will test it and send PATCH v2.

Thanks for the quick feedback.

> ---
>  arch/x86/power/hibernate_64.c |   18 +-
>  arch/x86/power/hibernate_asm_64.S |2 --
>  2 files changed, 9 insertions(+), 11 deletions(-)
>
> Index: linux-pm/arch/x86/power/hibernate_asm_64.S
> ===
> --- linux-pm.orig/arch/x86/power/hibernate_asm_64.S
> +++ linux-pm/arch/x86/power/hibernate_asm_64.S
> @@ -72,8 +72,6 @@ ENTRY(restore_image)
> /* code below has been relocated to a safe page */
>  ENTRY(core_restore_code)
> /* switch to temporary page tables */
> -   movq	$__PAGE_OFFSET, %rcx
> -   subq	%rcx, %rax
> movq	%rax, %cr3
> /* flush TLB */
> movq	%rbx, %rcx
> Index: linux-pm/arch/x86/power/hibernate_64.c
> ===
> --- linux-pm.orig/arch/x86/power/hibernate_64.c
> +++ linux-pm/arch/x86/power/hibernate_64.c
> @@ -37,11 +37,11 @@ unsigned long jump_address_phys;
>   */
>  unsigned long restore_cr3 __visible;
>
> -pgd_t *temp_level4_pgt __visible;
> +unsigned long temp_level4_pgt __visible;
>
>  unsigned long relocated_restore_code __visible;
>
> -static int set_up_temporary_text_mapping(void)
> +static int set_up_temporary_text_mapping(pgd_t *pgd)
>  {
> pmd_t *pmd;
> pud_t *pud;
> @@ -71,7 +71,7 @@ static int set_up_temporary_text_mapping
> __pmd((jump_address_phys & PMD_MASK) | 
> __PAGE_KERNEL_LARGE_EXEC));
> set_pud(pud + pud_index(restore_jump_address),
> __pud(__pa(pmd) | _KERNPG_TABLE));
> -   set_pgd(temp_level4_pgt + pgd_index(restore_jump_address),
> +   set_pgd(pgd + pgd_index(restore_jump_address),
> __pgd(__pa(pud) | _KERNPG_TABLE));
>
> return 0;
> @@ -90,15 +90,16 @@ static int set_up_temporary_mappings(voi
> .kernel_mapping = true,
> };
> unsigned long mstart, mend;
> +   pgd_t *pgd;
> int result;
> int i;
>
> -   temp_level4_pgt = (pgd_t *)get_safe_page(GFP_ATOMIC);
> -   if (!temp_level4_pgt)
> +   pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
> +   if (!pgd)
> return -ENOMEM;
>
> /* Prepare a temporary mapping for the kernel text */
> -   result = set_up_temporary_text_mapping();
> +   result = set_up_temporary_text_mapping(pgd);
> if (result)
> return result;
>
> @@ -107,13 +108,12 @@ static int set_up_temporary_mappings(voi
> mstart = pfn_mapped[i].start << PAGE_SHIFT;
> mend   = pfn_mapped[i].end << PAGE_SHIFT;
>
> -   result = kernel_ident_mapping_init(&info, temp_level4_pgt,
> -  mstart, mend);
> -
> +   result = kernel_ident_mapping_init(&info, pgd, mstart, mend);
> if (result)
> return result;
> }
>
> +   temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;
> return 0;
>  }
>
>


Re: [PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-01 Thread Rafael J. Wysocki
On Monday, August 01, 2016 10:08:00 AM Thomas Garnier wrote:
> When KASLR memory randomization is used, __PAGE_OFFSET is a global
> variable changed during boot. The assembly code was using the variable
> as an immediate value to calculate the cr3 physical address. The
> physical address was incorrect, resulting in a GP fault.
> 
> Signed-off-by: Thomas Garnier 
> ---
>  arch/x86/power/hibernate_asm_64.S | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/power/hibernate_asm_64.S 
> b/arch/x86/power/hibernate_asm_64.S
> index 8eee0e9..8db4905 100644
> --- a/arch/x86/power/hibernate_asm_64.S
> +++ b/arch/x86/power/hibernate_asm_64.S
> @@ -23,6 +23,16 @@
>  #include 
>  #include 
>  
> +/*
> + * A global variable holds the page_offset when KASLR memory randomization
> + * is enabled.
> + */
> +#ifdef CONFIG_RANDOMIZE_MEMORY
> +#define __PAGE_OFFSET_REF __PAGE_OFFSET
> +#else
> +#define __PAGE_OFFSET_REF $__PAGE_OFFSET
> +#endif
> +
>  ENTRY(swsusp_arch_suspend)
>   movq	$saved_context, %rax
>   movq	%rsp, pt_regs_sp(%rax)
> @@ -72,7 +82,7 @@ ENTRY(restore_image)
>   /* code below has been relocated to a safe page */
>  ENTRY(core_restore_code)
>   /* switch to temporary page tables */
> - movq	$__PAGE_OFFSET, %rcx
> + movq	__PAGE_OFFSET_REF, %rcx
>   subq	%rcx, %rax
>   movq	%rax, %cr3
>   /* flush TLB */
> 

I'm not particularly liking the #ifdefs and they won't be really
necessary if the subtraction is carried out by the C code IMO.

What about the patch below instead?

---
 arch/x86/power/hibernate_64.c |   18 +-
 arch/x86/power/hibernate_asm_64.S |2 --
 2 files changed, 9 insertions(+), 11 deletions(-)

Index: linux-pm/arch/x86/power/hibernate_asm_64.S
===
--- linux-pm.orig/arch/x86/power/hibernate_asm_64.S
+++ linux-pm/arch/x86/power/hibernate_asm_64.S
@@ -72,8 +72,6 @@ ENTRY(restore_image)
/* code below has been relocated to a safe page */
 ENTRY(core_restore_code)
/* switch to temporary page tables */
-   movq	$__PAGE_OFFSET, %rcx
-   subq	%rcx, %rax
movq	%rax, %cr3
/* flush TLB */
movq	%rbx, %rcx
Index: linux-pm/arch/x86/power/hibernate_64.c
===
--- linux-pm.orig/arch/x86/power/hibernate_64.c
+++ linux-pm/arch/x86/power/hibernate_64.c
@@ -37,11 +37,11 @@ unsigned long jump_address_phys;
  */
 unsigned long restore_cr3 __visible;
 
-pgd_t *temp_level4_pgt __visible;
+unsigned long temp_level4_pgt __visible;
 
 unsigned long relocated_restore_code __visible;
 
-static int set_up_temporary_text_mapping(void)
+static int set_up_temporary_text_mapping(pgd_t *pgd)
 {
pmd_t *pmd;
pud_t *pud;
@@ -71,7 +71,7 @@ static int set_up_temporary_text_mapping
__pmd((jump_address_phys & PMD_MASK) | 
__PAGE_KERNEL_LARGE_EXEC));
set_pud(pud + pud_index(restore_jump_address),
__pud(__pa(pmd) | _KERNPG_TABLE));
-   set_pgd(temp_level4_pgt + pgd_index(restore_jump_address),
+   set_pgd(pgd + pgd_index(restore_jump_address),
__pgd(__pa(pud) | _KERNPG_TABLE));
 
return 0;
@@ -90,15 +90,16 @@ static int set_up_temporary_mappings(voi
.kernel_mapping = true,
};
unsigned long mstart, mend;
+   pgd_t *pgd;
int result;
int i;
 
-   temp_level4_pgt = (pgd_t *)get_safe_page(GFP_ATOMIC);
-   if (!temp_level4_pgt)
+   pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
+   if (!pgd)
return -ENOMEM;
 
/* Prepare a temporary mapping for the kernel text */
-   result = set_up_temporary_text_mapping();
+   result = set_up_temporary_text_mapping(pgd);
if (result)
return result;
 
@@ -107,13 +108,12 @@ static int set_up_temporary_mappings(voi
mstart = pfn_mapped[i].start << PAGE_SHIFT;
mend   = pfn_mapped[i].end << PAGE_SHIFT;
 
-   result = kernel_ident_mapping_init(&info, temp_level4_pgt,
-  mstart, mend);
-
+   result = kernel_ident_mapping_init(&info, pgd, mstart, mend);
if (result)
return result;
}
 
+   temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;
return 0;
 }
 
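
The one-line addition above is the crux of the alternative: for an address in
the kernel's direct mapping, the physical address is simply the virtual
address minus __PAGE_OFFSET, and in C that subtraction works the same whether
__PAGE_OFFSET is a build-time constant or expands to the boot-time variable
page_offset_base. The assembly then receives a value it can write straight
into %cr3. A minimal user-space sketch of the arithmetic (the base value is
illustrative only; with CONFIG_RANDOMIZE_MEMORY the real offset is chosen
during boot):

#include <stdint.h>
#include <stdio.h>

/* Illustrative value only; the kernel reads page_offset_base at boot. */
#define PAGE_OFFSET 0xffff880000000000UL

/* Same arithmetic as "temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;".
 * Valid only for addresses inside the direct mapping. */
static uint64_t virt_to_phys_directmap(uint64_t vaddr)
{
	return vaddr - PAGE_OFFSET;
}

int main(void)
{
	uint64_t pgd_virt = 0xffff880012345000UL; /* hypothetical direct-map address */

	printf("phys = 0x%llx\n",
	       (unsigned long long)virt_to_phys_directmap(pgd_virt));
	return 0;
}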


[PATCH v1 2/2] x86/power/64: Fix __PAGE_OFFSET usage on restore

2016-08-01 Thread Thomas Garnier
When KASLR memory randomization is used, __PAGE_OFFSET is a global
variable changed during boot. The assembly code was using the variable
as an immediate value to calculate the cr3 physical address. The
physical address was incorrect, resulting in a GP fault.

Signed-off-by: Thomas Garnier 
---
 arch/x86/power/hibernate_asm_64.S | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/power/hibernate_asm_64.S 
b/arch/x86/power/hibernate_asm_64.S
index 8eee0e9..8db4905 100644
--- a/arch/x86/power/hibernate_asm_64.S
+++ b/arch/x86/power/hibernate_asm_64.S
@@ -23,6 +23,16 @@
 #include 
 #include 
 
+/*
+ * A global variable holds the page_offset when KASLR memory randomization
+ * is enabled.
+ */
+#ifdef CONFIG_RANDOMIZE_MEMORY
+#define __PAGE_OFFSET_REF __PAGE_OFFSET
+#else
+#define __PAGE_OFFSET_REF $__PAGE_OFFSET
+#endif
+
 ENTRY(swsusp_arch_suspend)
movq	$saved_context, %rax
movq	%rsp, pt_regs_sp(%rax)
@@ -72,7 +82,7 @@ ENTRY(restore_image)
/* code below has been relocated to a safe page */
 ENTRY(core_restore_code)
/* switch to temporary page tables */
-   movq	$__PAGE_OFFSET, %rcx
+   movq	__PAGE_OFFSET_REF, %rcx
subq	%rcx, %rax
movq	%rax, %cr3
/* flush TLB */
-- 
2.8.0.rc3.226.g39d4020
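
For reference, the failure mode reduces to AT&T-syntax operand forms. Without
CONFIG_RANDOMIZE_MEMORY, __PAGE_OFFSET is a plain constant and the immediate
form $__PAGE_OFFSET is correct. With randomization enabled, __PAGE_OFFSET
expands to the variable page_offset_base, so $__PAGE_OFFSET assembles to the
symbol's link-time address rather than its boot-time value, while the bare
symbol form selected by __PAGE_OFFSET_REF performs a memory load of the
current value. A standalone GNU-as sketch of the difference (illustrative
names, not the kernel source itself):

	.data
page_offset_base:			# stand-in for the kernel variable
	.quad	0xffff880000000000	# rewritten during boot under KASLR

	.text
	.globl	load_page_offset
load_page_offset:
	movq	$page_offset_base, %rcx	# immediate: the symbol's address,
					# not the offset -- the bug
	movq	page_offset_base, %rcx	# memory operand: the 8-byte value
					# stored there -- what the fix loads
	ret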