On Mon, Mar 14, 2016 at 10:33:01AM +0000, Matt Fleming wrote:
> Scott reports that with the new separate EFI page tables he's seeing
> the following error on boot, caused by setting reserved bits in the
> page table structures (fault code is PF_RSVD | PF_PROT),
> 
>   swapper/0: Corrupted page table at address 17b102020
>   PGD 17b0e5063 PUD 1400000e3
>   Bad pagetable: 0009 [#1] SMP
> 
> On first inspection the PUD is using a 1GB page size (_PAGE_PSE) and
> looks fine but that's only true if support for 1GB PUD pages
> ("pdpe1gb") is present in the cpu.
> 
> Scott's Intel Celeron N2820 does not have that feature and so the
> _PAGE_PSE bit is reserved. Fix this issue by making the 1GB mapping
> code conditional on "cpu_has_gbpages".
> 
> This issue didn't come up in the past because the required mapping for
> the faulting address (0x17b102020) will already have been setup by the
> kernel in early boot before we got to efi_map_regions(), but we no
> longer use the standard kernel page tables during EFI calls.
> 
> Reported-by: Scott Ashcroft <[email protected]>
> Tested-by: Scott Ashcroft <[email protected]>
> Cc: Ard Biesheuvel <[email protected]>
> Cc: Ben Hutchings <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: Brian Gerst <[email protected]>
> Cc: Denys Vlasenko <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Maarten Lankhorst <[email protected]>
> Cc: Matthew Garrett <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Raphael Hertzog <[email protected]>
> Cc: Roger Shimizu <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: [email protected]
> Signed-off-by: Matt Fleming <[email protected]>
> ---
>  arch/x86/mm/pageattr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index 14c38ae80409..fcf8e290740a 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -1055,7 +1055,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
>       /*
>        * Map everything starting from the Gb boundary, possibly with 1G pages
>        */
> -     while (end - start >= PUD_SIZE) {
> +     while (cpu_has_gbpages && end - start >= PUD_SIZE) {
>               set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
>                                  massage_pgprot(pud_pgprot)));
>  
> --

Yap, looks ok to me as a minimal fix:

Acked-by: Borislav Petkov <[email protected]>

As a future cleanup, I'd carve out the sections of populate_pud() which
map the stuff up to the Gb boundary and the trailing leftover into a
helper, say, __populate_pud_chunk() or so which goes and populates with
smaller sizes, i.e., 2M and 4K and the lower levels.

This'll make populate_pud() more readable too.

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
