On Wed, 29 Oct 2008 14:41:14 -0700
Grant Erickson <[EMAIL PROTECTED]> wrote:

> If the size of RAM is not an exact power of two, we may not have
> covered RAM in its entirety with large 16 and 4 MiB
> pages. Consequently, restrict the top end of RAM currently allocable
> by updating '__initial_memory_limit_addr' so that calls to the LMB to
> allocate PTEs for "tail" coverage with normal-sized pages (or other
> reasons) do not attempt to allocate outside the allowed range.
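The clamp described above presumably boils down to recording how many bytes
mmu_mapin_ram() actually covered with large pages and lowering the early LMB
ceiling to match. A rough sketch, assuming the routine accumulates the covered
size in a local 'mapped' variable (the names and exact placement here are
guesses, not taken from the patch itself):

    unsigned long __init mmu_mapin_ram(void)
    {
            unsigned long mapped = 0;

            /* ... cover as much of RAM as possible with 16 MiB and
             * 4 MiB large pages, adding each mapping's size to
             * 'mapped' ... */

            /* RAM whose size is not an exact power of two may not be
             * fully covered above; keep early LMB allocations (e.g.
             * PTEs for the "tail") below what is actually mapped. */
            __initial_memory_limit_addr = memstart_addr + mapped;

            return mapped;
    }
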
> 
> Signed-off-by: Grant Erickson <[EMAIL PROTECTED]>
> ---
> 
> This bug was discovered in the course of working on CONFIG_LOGBUFFER support
> (see http://ozlabs.org/pipermail/linuxppc-dev/2008-October/064685.html).
> However, the bug is triggered quite easily independently of that feature
> by specifying a memory limit via the 'mem=' kernel command line parameter
> that results in a memory size that is not an exact power of two.
> 
> For example, on the AMCC PowerPC 405EXr "Haleakala" board with 256 MiB
> of RAM, mmu_mapin_ram() normally covers RAM with precisely sixteen 16 MiB
> large pages. However, if a memory limit of 256 MiB - 20 KiB (as might
> be the case for CONFIG_LOGBUFFER) is put in place with
> "mem=268414976", then large pages only cover (16 MiB * 15) + (4 MiB *
> 3) = 252 MiB with a 4 MiB - 20 KiB "tail" to cover with normal, 4 KiB
> pages via map_page().
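
Spelled out, the arithmetic in that example can be checked in isolation; the
15/3 split of large pages below is taken from the paragraph above rather than
computed the way the kernel does it:

    #include <stdio.h>

    int main(void)
    {
            unsigned long mib = 1024UL * 1024;
            unsigned long total  = 256 * mib - 20 * 1024;            /* mem=268414976 */
            unsigned long mapped = 15 * (16 * mib) + 3 * (4 * mib);  /* 252 MiB */

            /* Remaining "tail" to be covered with 4 KiB pages:
             * 4 MiB - 20 KiB = 4173824 bytes. */
            printf("total=%lu mapped=%lu tail=%lu\n",
                   total, mapped, total - mapped);
            return 0;
    }
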
> 
> Unfortunately, if __initial_memory_limit_addr is not updated from its
> initial value of 0x10000000 (256 MiB) to reflect what was actually
> mapped via mmu_mapin_ram(), the following happens during the "tail"
> mapping when the first PTE is allocated at 0xFFFA000 (rather than the
> desired 0xFBFF000):
> 
>     mapin_ram
>         mmu_mapin_ram
>         map_page
>             pte_alloc_kernel
>                 pte_alloc_one_kernel
>                     early_get_page
>                         lmb_alloc_base
>                     clear_page
>                         clear_pages
>                             dcbz    0,page  <-- BOOM!
> 
> a non-recoverable page fault.
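
For reference, the reason the limit matters at that point: before the normal
allocators are up, early_get_page() hands back a page from the LMB bounded by
__initial_memory_limit_addr, roughly along these lines (paraphrased from the
arch/powerpc code of that era, so details may differ):

    /* Simplified; the real function also falls back to the bootmem
     * allocator once it is available. */
    void __init *early_get_page(void)
    {
            /* This bound is only safe if it never exceeds what
             * mmu_mapin_ram() actually covered with large pages;
             * otherwise the page returned here is unmapped and the
             * dcbz in clear_page()/clear_pages() takes an
             * unrecoverable fault. */
            return __va(lmb_alloc_base(PAGE_SIZE, PAGE_SIZE,
                                       __initial_memory_limit_addr));
    }
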

Nice catch.  I was looking to see if 44x had the same problem, but I
don't think it does because we simply over-map DRAM there.  Does that
seem correct to you, or am I missing something on 44x that would cause
this same problem?

josh
_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@ozlabs.org
https://ozlabs.org/mailman/listinfo/linuxppc-dev
