Re: [PATCH] [9/9] GBPAGES: Do kernel direct mapping at boot using GB pages

2008-01-31 Thread Andi Kleen
On Thu, Jan 31, 2008 at 06:10:02PM +0100, Thomas Gleixner wrote:
> On Thu, 31 Jan 2008, Andi Kleen wrote:
> 
> > On Thu, Jan 31, 2008 at 05:17:41PM +0100, Thomas Gleixner wrote:
> > > On Tue, 29 Jan 2008, Andi Kleen wrote:
> > > >
> > > > +static unsigned long direct_entry(unsigned long paddr)
> > > 
> > > Please use a more sensible function name. This one has no association
> > > with the functionality at all.
> > 
> > Can you suggest one? I honestly cannot think of a better one.
> > It's an "entry" in the "direct" mapping.
> 
> Doh, yes :) It just did not trigger.

I got rid of it now by switching over to the standard pte primitives
(with the usual casts). It's not prettier, but at least more consistent.
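For the archive, a user-space sketch of what "the standard pte primitives (with the usual casts)" plausibly looks like. pfn_pte() and pte_val() are the real kernel helper names, but the struct layout, the flag value, and this exact rewrite are my guesses, not Andi's actual follow-up patch:

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* Stand-in for __PAGE_KERNEL_LARGE: PRESENT|RW|ACCESSED|DIRTY|PSE|GLOBAL.
 * The value is illustrative, not authoritative. */
#define PAGE_KERNEL_LARGE_BITS 0x1e3ULL

/* The kernel wraps pte values in a struct for type checking; hence
 * "the usual casts" when reusing pte helpers for pmd/pud entries. */
typedef struct { uint64_t pte; } pte_t;

pte_t pfn_pte(uint64_t pfn, uint64_t prot)
{
    return (pte_t){ (pfn << PAGE_SHIFT) | prot };
}

uint64_t pte_val(pte_t p)
{
    return p.pte;
}

/* direct_entry() re-expressed through the primitives instead of
 * open-coded or-and-mask arithmetic. */
uint64_t direct_entry(uint64_t paddr)
{
    return pte_val(pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_LARGE_BITS));
}
```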

I hope I didn't break the paravirt case, though. We'll see.

-Andi

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] [9/9] GBPAGES: Do kernel direct mapping at boot using GB pages

2008-01-31 Thread Thomas Gleixner
On Thu, 31 Jan 2008, Andi Kleen wrote:

> On Thu, Jan 31, 2008 at 05:17:41PM +0100, Thomas Gleixner wrote:
> > On Tue, 29 Jan 2008, Andi Kleen wrote:
> > >
> > > +static unsigned long direct_entry(unsigned long paddr)
> > 
> > Please use a more sensible function name. This one has no association
> > with the functionality at all.
> 
> Can you suggest one? I honestly cannot think of a better one.
> It's an "entry" in the "direct" mapping.

Doh, yes :) It just did not trigger.

> > > + if (direct_gbpages >= 0 && cpu_has_gbpages) {
> > > + printk(KERN_INFO "Using GB pages for direct mapping\n");
> > > + direct_gbpages = 1;
> > > + } else
> > > + direct_gbpages = 0;
> > > +}
> > 
> > Please use simple boolean logic. gbpages are either enabled or disabled.
> 
> It's a fairly standard, widely used idiom for command-line options because
> it allows both a default and forcing. E.g., there can be cases when the kernel
> disables it, but then it makes sense for the command-line option
> to override it. There is already one case in the kernel that
> disables it.

Right, but please use some constants. I was tripping over the 

   if (direct_gbpages > 0)

somewhere else in the code and it would have been simpler to see
something like:

if (direct_gbpages == GBPAGES_ENABLED)
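Roughly what such constants could look like (the GBPAGES_* names follow Thomas's suggestion; the exact states and values are an assumption based on the tri-state flag in the patch):

```c
/* Named states for the tri-state direct_gbpages flag, instead of
 * bare -1/0/1 comparisons scattered through the code. */
enum gbpages_state {
    GBPAGES_FORCED_OFF = -1,  /* disabled on the command line */
    GBPAGES_AUTO       = 0,   /* default: decide from CPU capability */
    GBPAGES_ENABLED    = 1,
};

enum gbpages_state direct_gbpages = GBPAGES_AUTO;

/* The check Thomas tripped over, now self-documenting. */
int gbpages_active(void)
{
    return direct_gbpages == GBPAGES_ENABLED;
}
```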

> > >   * prefetches from the CPU leading to inconsistent cache lines.
> > > @@ -467,6 +511,8 @@ __clear_kernel_mapping(unsigned long add
> > >   continue;
> > >  
> > >   pud = pud_offset(pgd, address);
> > > + if (pud_large(*pud))
> > > + split_gb_page(pud, __pa(address));
> > >   if (pud_none(*pud))
> > >   continue;
> > 
> > As I said before, this needs to use CPA and not implement another variant.
> 
> Ok, I can switch GART over to CPA, but that will make the patch
> much more intrusive and a little riskier. Is that ok for you,
> and will it still be considered a .25 candidate?

We have the code in place already and we need to shake out any
remaining problems anyway. So using the CPA code would be preferred as
it adds another user. Can you give it a try, please?

Thanks,
tglx


Re: [PATCH] [9/9] GBPAGES: Do kernel direct mapping at boot using GB pages

2008-01-31 Thread Andi Kleen
On Thu, Jan 31, 2008 at 05:17:41PM +0100, Thomas Gleixner wrote:
> On Tue, 29 Jan 2008, Andi Kleen wrote:
> >
> > +static unsigned long direct_entry(unsigned long paddr)
> 
> Please use a more sensible function name. This one has no association
> with the functionality at all.

Can you suggest one? I honestly cannot think of a better one.
It's an "entry" in the "direct" mapping.

> > +   if (direct_gbpages >= 0 && cpu_has_gbpages) {
> > +   printk(KERN_INFO "Using GB pages for direct mapping\n");
> > +   direct_gbpages = 1;
> > +   } else
> > +   direct_gbpages = 0;
> > +}
> 
> Please use simple boolean logic. gbpages are either enabled or disabled.

It's a fairly standard, widely used idiom for command-line options because
it allows both a default and forcing. E.g., there can be cases when the kernel
disables it, but then it makes sense for the command-line option
to override it. There is already one case in the kernel that
disables it.
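The idiom, sketched in plain C (the handler name and the exact encoding -1 = forced off, 0 = default, 1 = on are my reading of the patch, not verbatim kernel code):

```c
#include <string.h>

/* -1: forced off on the command line, 0: default, 1: enabled. */
int direct_gbpages;

/* Hypothetical early-param handler for "direct_gbpages=off". */
void parse_direct_gbpages(const char *arg)
{
    if (arg && strcmp(arg, "off") == 0)
        direct_gbpages = -1;
}

/* Resolve the tri-state at init time, as init_gbpages() does:
 * a forced-off request always wins; the default is upgraded to
 * "on" only when the CPU actually supports 1GB pages. */
void resolve_gbpages(int cpu_has_gbpages)
{
    if (direct_gbpages >= 0 && cpu_has_gbpages)
        direct_gbpages = 1;
    else
        direct_gbpages = 0;
}
```

This is why a plain boolean is not enough: the forced-off state must be distinguishable from the not-yet-decided default.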
 
> >   * prefetches from the CPU leading to inconsistent cache lines.
> > @@ -467,6 +511,8 @@ __clear_kernel_mapping(unsigned long add
> > continue;
> >  
> > pud = pud_offset(pgd, address);
> > +   if (pud_large(*pud))
> > +   split_gb_page(pud, __pa(address));
> > if (pud_none(*pud))
> > continue;
> 
> As I said before, this needs to use CPA and not implement another variant.

Ok, I can switch GART over to CPA, but that will make the patch
much more intrusive and a little riskier. Is that ok for you,
and will it still be considered a .25 candidate?

-Andi


Re: [PATCH] [9/9] GBPAGES: Do kernel direct mapping at boot using GB pages

2008-01-31 Thread Thomas Gleixner
On Tue, 29 Jan 2008, Andi Kleen wrote:
>
> +static unsigned long direct_entry(unsigned long paddr)

Please use a more sensible function name. This one has no association
with the functionality at all.

> +{
> + unsigned long entry;

New line please

> + entry = __PAGE_KERNEL_LARGE|paddr;
> + entry &= __supported_pte_mask;
> + return entry;
> +}

> +static void init_gbpages(void)
> +{
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> + /* debug pagealloc causes too much recursion with gbpages */
> + if (direct_gbpages == 0)
> + return;
> +#endif
> + if (direct_gbpages >= 0 && cpu_has_gbpages) {
> + printk(KERN_INFO "Using GB pages for direct mapping\n");
> + direct_gbpages = 1;
> + } else
> + direct_gbpages = 0;
> +}

Please use simple boolean logic. gbpages are either enabled or disabled.

> +static void split_gb_page(pud_t *pud, unsigned long paddr)
> +{
> + int i;
> + pmd_t *pmd;
> + struct page *p = alloc_page(GFP_KERNEL);
> + if (!p)
> + return;
> +
> + paddr &= PUD_PAGE_MASK;
> + pmd = page_address(p);
> + for (i = 0; i < PTRS_PER_PTE; i++, paddr += PMD_PAGE_SIZE)
> + pmd[i] = __pmd(direct_entry(paddr));
> + pud_populate(NULL, pud, pmd);
> +}
> +
>  /*
>   * Unmap a kernel mapping if it exists. This is useful to avoid
>   * prefetches from the CPU leading to inconsistent cache lines.
> @@ -467,6 +511,8 @@ __clear_kernel_mapping(unsigned long add
>   continue;
>  
>   pud = pud_offset(pgd, address);
> + if (pud_large(*pud))
> + split_gb_page(pud, __pa(address));
>   if (pud_none(*pud))
>   continue;

As I said before, this needs to use CPA and not implement another variant.

Thanks,
tglx
  
 


[PATCH] [9/9] GBPAGES: Do kernel direct mapping at boot using GB pages

2008-01-28 Thread Andi Kleen

This should decrease TLB pressure because the kernel will need
fewer TLB faults for its own data accesses.

Only done for 64-bit because i386 does not support GB page tables.

This only applies to the data portion of the direct mapping; the
kernel text mapping stays with 2MB pages because the AMD Fam10h
microarchitecture does not support GB ITLBs and AMD recommends 
against using GB mappings for code.

Can be disabled with direct_gbpages=off

Signed-off-by: Andi Kleen <[EMAIL PROTECTED]>

---
 arch/x86/mm/init_64.c |   64 ++
 1 file changed, 55 insertions(+), 9 deletions(-)

Index: linux/arch/x86/mm/init_64.c
===================================================================
--- linux.orig/arch/x86/mm/init_64.c
+++ linux/arch/x86/mm/init_64.c
@@ -279,13 +279,20 @@ __meminit void early_iounmap(void *addr,
__flush_tlb_all();
 }
 
+static unsigned long direct_entry(unsigned long paddr)
+{
+   unsigned long entry;
+   entry = __PAGE_KERNEL_LARGE|paddr;
+   entry &= __supported_pte_mask;
+   return entry;
+}
+
 static void __meminit
 phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end)
 {
int i = pmd_index(address);
 
for (; i < PTRS_PER_PMD; i++, address += PMD_SIZE) {
-   unsigned long entry;
pmd_t *pmd = pmd_page + pmd_index(address);
 
if (address >= end) {
@@ -299,9 +306,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned 
if (pmd_val(*pmd))
continue;
 
-   entry = __PAGE_KERNEL_LARGE|_PAGE_GLOBAL|address;
-   entry &= __supported_pte_mask;
-   set_pmd(pmd, __pmd(entry));
+   set_pmd(pmd, __pmd(direct_entry(address)));
}
 }
 
@@ -335,7 +340,13 @@ phys_pud_init(pud_t *pud_page, unsigned 
}
 
if (pud_val(*pud)) {
-   phys_pmd_update(pud, addr, end);
+   if (!pud_large(*pud))
+   phys_pmd_update(pud, addr, end);
+   continue;
+   }
+
+   if (direct_gbpages > 0) {
+   set_pud(pud, __pud(direct_entry(addr)));
continue;
}
 
@@ -356,9 +367,11 @@ static void __init find_early_table_spac
unsigned long puds, pmds, tables, start;
 
puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
-   pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
-   tables = round_up(puds * sizeof(pud_t), PAGE_SIZE) +
-round_up(pmds * sizeof(pmd_t), PAGE_SIZE);
+   tables = round_up(puds * sizeof(pud_t), PAGE_SIZE);
+   if (!direct_gbpages) {
+   pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
+   tables += round_up(pmds * sizeof(pmd_t), PAGE_SIZE);
+   }
 
/*
 * RED-PEN putting page tables only on node 0 could
@@ -378,6 +391,20 @@ static void __init find_early_table_spac
(table_start << PAGE_SHIFT) + tables);
 }
 
+static void init_gbpages(void)
+{
+#ifdef CONFIG_DEBUG_PAGEALLOC
+   /* debug pagealloc causes too much recursion with gbpages */
+   if (direct_gbpages == 0)
+   return;
+#endif
+   if (direct_gbpages >= 0 && cpu_has_gbpages) {
+   printk(KERN_INFO "Using GB pages for direct mapping\n");
+   direct_gbpages = 1;
+   } else
+   direct_gbpages = 0;
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
@@ -396,8 +423,10 @@ void __init_refok init_memory_mapping(un
 * memory mapped. Unfortunately this is done currently before the
 * nodes are discovered.
 */
-   if (!after_bootmem)
+   if (!after_bootmem) {
+   init_gbpages();
find_early_table_space(end);
+   }
 
start = (unsigned long)__va(start);
end = (unsigned long)__va(end);
@@ -444,6 +473,21 @@ void __init paging_init(void)
 }
 #endif
 
+static void split_gb_page(pud_t *pud, unsigned long paddr)
+{
+   int i;
+   pmd_t *pmd;
+   struct page *p = alloc_page(GFP_KERNEL);
+   if (!p)
+   return;
+
+   paddr &= PUD_PAGE_MASK;
+   pmd = page_address(p);
+   for (i = 0; i < PTRS_PER_PTE; i++, paddr += PMD_PAGE_SIZE)
+   pmd[i] = __pmd(direct_entry(paddr));
+   pud_populate(NULL, pud, pmd);
+}
+
 /*
  * Unmap a kernel mapping if it exists. This is useful to avoid
  * prefetches from the CPU leading to inconsistent cache lines.
@@ -467,6 +511,8 @@ __clear_kernel_mapping(unsigned long add
continue;
 
pud = pud_offset(pgd, address);
+   if (pud_large(*pud))
+   split_gb_page(pud, __pa(address));
if (pud_none(*pud))
continue;
 
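To make the find_early_table_space() change above concrete: with GB pages the early mapping needs only the PUD pages, while the 2MB path also needs one 4KB PMD page per mapped gigabyte. A user-space recomputation of the two sizes (x86-64 constants hard-coded; sizeof(pud_t) == sizeof(pmd_t) == 8 assumed):

```c
#include <stdint.h>

#define EARLY_PAGE_SIZE 4096ULL
#define PMD_SHIFT 21   /* one pmd entry maps 2MB */
#define PUD_SHIFT 30   /* one pud entry maps 1GB */

static uint64_t round_up_pg(uint64_t x)
{
    return (x + EARLY_PAGE_SIZE - 1) & ~(EARLY_PAGE_SIZE - 1);
}

/* Bytes of early page-table memory to direct-map [0, end), mirroring
 * the before/after arithmetic in find_early_table_space(). */
uint64_t table_space(uint64_t end, int use_gbpages)
{
    uint64_t puds = (end + (1ULL << PUD_SHIFT) - 1) >> PUD_SHIFT;
    uint64_t tables = round_up_pg(puds * 8);

    if (!use_gbpages) {
        uint64_t pmds = (end + (1ULL << PMD_SHIFT) - 1) >> PMD_SHIFT;
        tables += round_up_pg(pmds * 8);
    }
    return tables;
}
```

For 64GB of RAM that is one 4KB PUD page either way, plus 64 PMD pages (256KB) that the GB-page path no longer has to reserve.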
