On Fri, 2018-09-14 at 14:36 -0600, Toshi Kani wrote:
> On Wed, 2018-09-12 at 11:26 +0100, Will Deacon wrote:
> > The recently merged API for ensuring break-before-make on page-table
> > entries when installing huge mappings in the vmalloc/ioremap region is
> > fairly counter-intuitive, resulting in the arch freeing functions
> > (e.g. pmd_free_pte_page()) being called even on entries that aren't
> > present. This resulted in a minor bug in the arm64 implementation, giving
> > rise to spurious VM_WARN messages.
> > 
> > This patch moves the pXd_present() checks out into the core code,
> > refactoring the callsites at the same time so that we avoid the complex
> > conjunctions when determining whether or not we can put down a huge
> > mapping.
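[ For reference, the callsite shape this replaces is roughly the
  following (paraphrased from the old lib/ioremap.c pmd loop, not an
  exact quote):

	if (ioremap_pmd_enabled() &&
	    ((next - addr) == PMD_SIZE) &&
	    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
	    pmd_free_pte_page(pmd, addr)) {
		if (pmd_set_huge(pmd, phys_addr + addr, prot))
			continue;
	}

  i.e. the present/free decision was buried inside one big conjunction
  at each callsite. ]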
> > 
> > Cc: Chintan Pandya <cpan...@codeaurora.org>
> > Cc: Toshi Kani <toshi.k...@hpe.com>
> > Cc: Thomas Gleixner <t...@linutronix.de>
> > Cc: Michal Hocko <mho...@suse.com>
> > Cc: Andrew Morton <a...@linux-foundation.org>
> > Suggested-by: Linus Torvalds <torva...@linux-foundation.org>
> > Signed-off-by: Will Deacon <will.dea...@arm.com>
> 
> Yes, this looks nicer.
> 
> Reviewed-by: Toshi Kani <toshi.k...@hpe.com>

Sorry, I take it back since I got a question...

> +static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
> +                             unsigned long end, phys_addr_t phys_addr,
> +                             pgprot_t prot)
> +{
> +     if (!ioremap_pmd_enabled())
> +             return 0;
> +
> +     if ((end - addr) != PMD_SIZE)
> +             return 0;
> +
> +     if (!IS_ALIGNED(phys_addr, PMD_SIZE))
> +             return 0;
> +
> +     if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
> +             return 0;

Is pmd_present() the proper check here?  We probably do not have this
case for ioremap, but I wonder if one can drop the P-bit on a pmd while
it still has a pte page underneath.
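
To make the concern concrete, a purely hypothetical sequence (we
probably never hit this from ioremap itself; pmd_mknotpresent() below
just stands in for any path that clears the present bit, and "pte" is
an assumed pte page):

	/* pmd points at a pte page */
	pmd_populate_kernel(&init_mm, pmd, pte);

	/* present bit cleared, but the pte page is still attached */
	set_pmd(pmd, pmd_mknotpresent(*pmd));

	/*
	 * At this point pmd_present(*pmd) is false, so the new check
	 * would skip pmd_free_pte_page() and call pmd_set_huge() on
	 * top of the stale pte page, whereas a !pmd_none() check
	 * would still see the stale entry.
	 */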

Thanks,
-Toshi


> +
> +     return pmd_set_huge(pmd, phys_addr, prot);
> +}
> +

