Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
Reza Arbab writes:

> On Mon, Jan 30, 2017 at 07:38:18PM +1100, Michael Ellerman wrote:
>> Doesn't build.
>>
>> In file included from ../include/linux/kernel.h:13:0,
>>                  from ../include/linux/sched.h:17,
>>                  from ../arch/powerpc/mm/pgtable-radix.c:11:
>> ../arch/powerpc/mm/pgtable-radix.c: In function ‘create_physical_mapping’:
>> ../include/linux/printk.h:299:2: error: ‘mapping_size’ may be used
>> uninitialized in this function [-Werror=maybe-uninitialized]
>>   printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
>>   ^~
>> ../arch/powerpc/mm/pgtable-radix.c:123:22: note: ‘mapping_size’ was declared
>> here
>>   unsigned long addr, mapping_size;
>
> Doh. Could you please do the following for now?
>
> -	unsigned long addr, mapping_size;
> +	unsigned long addr, mapping_size = 0;

Thanks.

> I'd like to delay spinning v6 with this until I see any input you might
> have on the rest of the set.

I don't think I have any at the moment. So I'll just fold the above into v5.

> And for future reference, how are you ending up with
> -Werror=maybe-uninitialized? On powerpc/next, with pseries_le_defconfig,
> I get -Wno-maybe-uninitialized.

By default. Probably you're not getting it because your compiler is too
old. If I'm reading Makefile correctly it's enabled for GCC 4.9 or later:

KBUILD_CFLAGS	+= $(call cc-ifversion, -lt, 0409, \
			$(call cc-disable-warning,maybe-uninitialized,))

I'm using GCC 6.2.0.

cheers
Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
On Mon, Jan 30, 2017 at 07:38:18PM +1100, Michael Ellerman wrote:
> Doesn't build.
>
> In file included from ../include/linux/kernel.h:13:0,
>                  from ../include/linux/sched.h:17,
>                  from ../arch/powerpc/mm/pgtable-radix.c:11:
> ../arch/powerpc/mm/pgtable-radix.c: In function ‘create_physical_mapping’:
> ../include/linux/printk.h:299:2: error: ‘mapping_size’ may be used
> uninitialized in this function [-Werror=maybe-uninitialized]
>   printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
>   ^~
> ../arch/powerpc/mm/pgtable-radix.c:123:22: note: ‘mapping_size’ was declared
> here
>   unsigned long addr, mapping_size;

Doh. Could you please do the following for now?

-	unsigned long addr, mapping_size;
+	unsigned long addr, mapping_size = 0;

I'd like to delay spinning v6 with this until I see any input you might
have on the rest of the set.

And for future reference, how are you ending up with
-Werror=maybe-uninitialized? On powerpc/next, with pseries_le_defconfig,
I get -Wno-maybe-uninitialized.

-- 
Reza Arbab
Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
Reza Arbab writes:

> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
> index 623a0dc..2ce1354 100644
> --- a/arch/powerpc/mm/pgtable-radix.c
> +++ b/arch/powerpc/mm/pgtable-radix.c
> @@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
>  	return 0;
>  }
>
> +static inline void __meminit print_mapping(unsigned long start,
> +					   unsigned long end,
> +					   unsigned long size)
> +{
> +	if (end <= start)
> +		return;
> +
> +	pr_info("Mapped range 0x%lx - 0x%lx with 0x%lx\n", start, end, size);
> +}
> +
> +static int __meminit create_physical_mapping(unsigned long start,
> +					     unsigned long end)
> +{
> +	unsigned long addr, mapping_size;
> +
> +	start = _ALIGN_UP(start, PAGE_SIZE);
> +	for (addr = start; addr < end; addr += mapping_size) {
> +		unsigned long gap, previous_size;
> +		int rc;
> +
> +		gap = end - addr;
> +		previous_size = mapping_size;
> +
> +		if (IS_ALIGNED(addr, PUD_SIZE) && gap >= PUD_SIZE &&
> +		    mmu_psize_defs[MMU_PAGE_1G].shift)
> +			mapping_size = PUD_SIZE;
> +		else if (IS_ALIGNED(addr, PMD_SIZE) && gap >= PMD_SIZE &&
> +			 mmu_psize_defs[MMU_PAGE_2M].shift)
> +			mapping_size = PMD_SIZE;
> +		else
> +			mapping_size = PAGE_SIZE;
> +
> +		if (mapping_size != previous_size) {
> +			print_mapping(start, addr, previous_size);
> +			start = addr;
> +		}
> +
> +		rc = radix__map_kernel_page((unsigned long)__va(addr), addr,
> +					    PAGE_KERNEL_X, mapping_size);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	print_mapping(start, addr, mapping_size);

Doesn't build.

In file included from ../include/linux/kernel.h:13:0,
                 from ../include/linux/sched.h:17,
                 from ../arch/powerpc/mm/pgtable-radix.c:11:
../arch/powerpc/mm/pgtable-radix.c: In function ‘create_physical_mapping’:
../include/linux/printk.h:299:2: error: ‘mapping_size’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
  printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
  ^~
../arch/powerpc/mm/pgtable-radix.c:123:22: note: ‘mapping_size’ was declared
here
  unsigned long addr, mapping_size;
                      ^~~~

cheers
Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
On Tue, Jan 17, 2017 at 12:34:56PM -0600, Reza Arbab wrote:
> Thanks for your review!
>
> On Tue, Jan 17, 2017 at 12:16:35PM +0530, Balbir Singh wrote:
> > On Mon, Jan 16, 2017 at 01:07:43PM -0600, Reza Arbab wrote:
> > > --- a/arch/powerpc/mm/pgtable-radix.c
> > > +++ b/arch/powerpc/mm/pgtable-radix.c
> > > @@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
> > >  	return 0;
> > >  }
> > >
> > > +static inline void __meminit print_mapping(unsigned long start,
> > > +					   unsigned long end,
> > > +					   unsigned long size)
> > > +{
> > > +	if (end <= start)
> > > +		return;
> >
> > Should we pr_err for start > end?
>
> I think that would be overkill. The way this little inline is called,
> start > end is not possible. The real point is not to print anything if
> start == end. Using <= just seemed better in context.

Agreed

> > Should we try a lower page size if map_kernel_page fails for this
> > mapping_size?
>
> The only way map_kernel_page can fail is -ENOMEM. If that's the case,
> there's no way we're going to be able to map this range at all. Better
> to fail fast here, I would think.

I think I am OK with this implementation for now.

Balbir Singh.
Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
Thanks for your review!

On Tue, Jan 17, 2017 at 12:16:35PM +0530, Balbir Singh wrote:
> On Mon, Jan 16, 2017 at 01:07:43PM -0600, Reza Arbab wrote:
> > --- a/arch/powerpc/mm/pgtable-radix.c
> > +++ b/arch/powerpc/mm/pgtable-radix.c
> > @@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
> >  	return 0;
> >  }
> >
> > +static inline void __meminit print_mapping(unsigned long start,
> > +					   unsigned long end,
> > +					   unsigned long size)
> > +{
> > +	if (end <= start)
> > +		return;
>
> Should we pr_err for start > end?

I think that would be overkill. The way this little inline is called,
start > end is not possible. The real point is not to print anything if
start == end. Using <= just seemed better in context.

> > +static int __meminit create_physical_mapping(unsigned long start,
> > +					     unsigned long end)
> > +{
> > +	unsigned long addr, mapping_size;
> > +
> > +	start = _ALIGN_UP(start, PAGE_SIZE);
> > +	for (addr = start; addr < end; addr += mapping_size) {
> > +		unsigned long gap, previous_size;
> > +		int rc;
> > +
> > +		gap = end - addr;
> > +		previous_size = mapping_size;
> > +
> > +		if (IS_ALIGNED(addr, PUD_SIZE) && gap >= PUD_SIZE &&
> > +		    mmu_psize_defs[MMU_PAGE_1G].shift)
> > +			mapping_size = PUD_SIZE;
> > +		else if (IS_ALIGNED(addr, PMD_SIZE) && gap >= PMD_SIZE &&
> > +			 mmu_psize_defs[MMU_PAGE_2M].shift)
> > +			mapping_size = PMD_SIZE;
> > +		else
> > +			mapping_size = PAGE_SIZE;
> > +
> > +		if (mapping_size != previous_size) {
> > +			print_mapping(start, addr, previous_size);
> > +			start = addr;
> > +		}
> > +
> > +		rc = radix__map_kernel_page((unsigned long)__va(addr), addr,
> > +					    PAGE_KERNEL_X, mapping_size);
> > +		if (rc)
> > +			return rc;
>
> Should we try a lower page size if map_kernel_page fails for this
> mapping_size?

The only way map_kernel_page can fail is -ENOMEM. If that's the case,
there's no way we're going to be able to map this range at all. Better
to fail fast here, I would think.

-- 
Reza Arbab
Re: [PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
On Mon, Jan 16, 2017 at 01:07:43PM -0600, Reza Arbab wrote:
> Move the page mapping code in radix_init_pgtable() into a separate
> function that will also be used for memory hotplug.
>
> The current goto loop progressively decreases its mapping size as it
> covers the tail of a range whose end is unaligned. Change this to a for
> loop which can do the same for both ends of the range.
>
> Signed-off-by: Reza Arbab
> ---
>  arch/powerpc/mm/pgtable-radix.c | 88 +++--
>  1 file changed, 50 insertions(+), 38 deletions(-)
>
> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
> index 623a0dc..2ce1354 100644
> --- a/arch/powerpc/mm/pgtable-radix.c
> +++ b/arch/powerpc/mm/pgtable-radix.c
> @@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
>  	return 0;
>  }
>
> +static inline void __meminit print_mapping(unsigned long start,
> +					   unsigned long end,
> +					   unsigned long size)
> +{
> +	if (end <= start)
> +		return;

Should we pr_err for start > end?

> +
> +	pr_info("Mapped range 0x%lx - 0x%lx with 0x%lx\n", start, end, size);
> +}
> +
> +static int __meminit create_physical_mapping(unsigned long start,
> +					     unsigned long end)
> +{
> +	unsigned long addr, mapping_size;
> +
> +	start = _ALIGN_UP(start, PAGE_SIZE);
> +	for (addr = start; addr < end; addr += mapping_size) {
> +		unsigned long gap, previous_size;
> +		int rc;
> +
> +		gap = end - addr;
> +		previous_size = mapping_size;
> +
> +		if (IS_ALIGNED(addr, PUD_SIZE) && gap >= PUD_SIZE &&
> +		    mmu_psize_defs[MMU_PAGE_1G].shift)
> +			mapping_size = PUD_SIZE;
> +		else if (IS_ALIGNED(addr, PMD_SIZE) && gap >= PMD_SIZE &&
> +			 mmu_psize_defs[MMU_PAGE_2M].shift)
> +			mapping_size = PMD_SIZE;
> +		else
> +			mapping_size = PAGE_SIZE;
> +
> +		if (mapping_size != previous_size) {
> +			print_mapping(start, addr, previous_size);
> +			start = addr;
> +		}
> +
> +		rc = radix__map_kernel_page((unsigned long)__va(addr), addr,
> +					    PAGE_KERNEL_X, mapping_size);
> +		if (rc)
> +			return rc;

Should we try a lower page size if map_kernel_page fails for this
mapping_size?

I like the cleanup very much, BTW

Balbir Singh.
[PATCH v5 1/4] powerpc/mm: refactor radix physical page mapping
Move the page mapping code in radix_init_pgtable() into a separate
function that will also be used for memory hotplug.

The current goto loop progressively decreases its mapping size as it
covers the tail of a range whose end is unaligned. Change this to a for
loop which can do the same for both ends of the range.

Signed-off-by: Reza Arbab
---
 arch/powerpc/mm/pgtable-radix.c | 88 +++--
 1 file changed, 50 insertions(+), 38 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 623a0dc..2ce1354 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -107,54 +107,66 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 	return 0;
 }
 
+static inline void __meminit print_mapping(unsigned long start,
+					   unsigned long end,
+					   unsigned long size)
+{
+	if (end <= start)
+		return;
+
+	pr_info("Mapped range 0x%lx - 0x%lx with 0x%lx\n", start, end, size);
+}
+
+static int __meminit create_physical_mapping(unsigned long start,
+					     unsigned long end)
+{
+	unsigned long addr, mapping_size;
+
+	start = _ALIGN_UP(start, PAGE_SIZE);
+	for (addr = start; addr < end; addr += mapping_size) {
+		unsigned long gap, previous_size;
+		int rc;
+
+		gap = end - addr;
+		previous_size = mapping_size;
+
+		if (IS_ALIGNED(addr, PUD_SIZE) && gap >= PUD_SIZE &&
+		    mmu_psize_defs[MMU_PAGE_1G].shift)
+			mapping_size = PUD_SIZE;
+		else if (IS_ALIGNED(addr, PMD_SIZE) && gap >= PMD_SIZE &&
+			 mmu_psize_defs[MMU_PAGE_2M].shift)
+			mapping_size = PMD_SIZE;
+		else
+			mapping_size = PAGE_SIZE;
+
+		if (mapping_size != previous_size) {
+			print_mapping(start, addr, previous_size);
+			start = addr;
+		}
+
+		rc = radix__map_kernel_page((unsigned long)__va(addr), addr,
+					    PAGE_KERNEL_X, mapping_size);
+		if (rc)
+			return rc;
+	}
+
+	print_mapping(start, addr, mapping_size);
+	return 0;
+}
+
 static void __init radix_init_pgtable(void)
 {
-	int loop_count;
-	u64 base, end, start_addr;
 	unsigned long rts_field;
 	struct memblock_region *reg;
-	unsigned long linear_page_size;
 
 	/* We don't support slb for radix */
 	mmu_slb_size = 0;
 	/*
	 * Create the linear mapping, using standard page size for now
	 */
-	loop_count = 0;
-	for_each_memblock(memory, reg) {
-
-		start_addr = reg->base;
-
-redo:
-		if (loop_count < 1 && mmu_psize_defs[MMU_PAGE_1G].shift)
-			linear_page_size = PUD_SIZE;
-		else if (loop_count < 2 && mmu_psize_defs[MMU_PAGE_2M].shift)
-			linear_page_size = PMD_SIZE;
-		else
-			linear_page_size = PAGE_SIZE;
-
-		base = _ALIGN_UP(start_addr, linear_page_size);
-		end = _ALIGN_DOWN(reg->base + reg->size, linear_page_size);
-
-		pr_info("Mapping range 0x%lx - 0x%lx with 0x%lx\n",
-			(unsigned long)base, (unsigned long)end,
-			linear_page_size);
-
-		while (base < end) {
-			radix__map_kernel_page((unsigned long)__va(base),
-					       base, PAGE_KERNEL_X,
-					       linear_page_size);
-			base += linear_page_size;
-		}
-		/*
-		 * map the rest using lower page size
-		 */
-		if (end < reg->base + reg->size) {
-			start_addr = end;
-			loop_count++;
-			goto redo;
-		}
-	}
+	for_each_memblock(memory, reg)
+		WARN_ON(create_physical_mapping(reg->base,
+						reg->base + reg->size));
 
 	/*
	 * Allocate Partition table and process table for the
	 * host.
-- 
1.8.3.1