>> For example, you may have a single page (start,end) address range
>> to free, but if this is enclosed by a large enough (floor,ceiling)
>> then it may free an entire pgd entry.
>>
>> I assume the intention of the API would be to provide the full
>> pgd width in that case?
>
>Yes, that is what should happen if the full PGD entry is liberated.
>
>Any time page table chunks are liberated, they have to be included
>in the range passed to the flush_tlb_pgtables() call.
So should this part of Hugh's code:

	/*
	 * Optimization: gather nearby vmas into one call down
	 */
	while (next && next->vm_start <= vma->vm_end + PMD_SIZE
	  && !is_hugepage_only_range(next->vm_start, HPAGE_SIZE)) {
		vma = next;
		next = vma->vm_next;
	}
	free_pgd_range(tlb, addr, vma->vm_end,
		floor, next? next->vm_start: ceiling);

be changed to use pgd_addr_end() to gather up all the vmas that are
mapped by a single pgd, instead of just spanning out to the next
PMD_SIZE?

On ia64 we can have a vma big enough to require more than one pgd,
but in the case where we do span, we won't cross the problematic pgd
boundaries where the holes in the address space are lurking.

-Tony
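
Something along these lines is what I had in mind (completely untested
sketch; the pgd_addr_end() bound, and treating a zero ceiling as
TASK_SIZE, are my assumptions about how it would plug into Hugh's loop,
not anything his code does today):

	/*
	 * Gather every vma that lives under the same pgd entry.
	 * pgd_addr_end() clamps at the next pgd boundary, so the
	 * gathering stops before we ever reach one of the ia64
	 * region holes.  Falling back to TASK_SIZE when ceiling
	 * is 0 ("no ceiling") is an assumption on my part.
	 */
	unsigned long boundary;

	boundary = pgd_addr_end(addr, ceiling ? ceiling : TASK_SIZE);
	while (next && next->vm_start < boundary
	  && !is_hugepage_only_range(next->vm_start, HPAGE_SIZE)) {
		vma = next;
		next = vma->vm_next;
	}
	free_pgd_range(tlb, addr, vma->vm_end,
		floor, next? next->vm_start: ceiling);

That way the gathering stops at a pgd boundary no matter how far apart
the vmas are, which on ia64 means we never walk into one of the
unmapped regions.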