On Mon, Jun 16, 2025 at 01:25:01PM -0300, Jason Gunthorpe wrote:
> On Sun, Jun 15, 2025 at 10:02:03PM -0700, Nicolin Chen wrote:
> >  FIXTURE_TEARDOWN(iommufd_dirty_tracking)
> >  {
> > -   munmap(self->buffer, variant->buffer_size);
> > -   munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
> > +   unsigned long size = variant->buffer_size;
> > +
> > +   if (variant->hugepages)
> > +           size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
> > +   munmap(self->buffer, size);
> > +   free(self->buffer);
> > +   free(self->bitmap);
> >     teardown_iommufd(self->fd, _metadata);
> 
> munmap followed by free isn't right..

You are right. I re-checked with Copilot and it says the same thing.
I think the whole posix_memalign() + mmap() combination confused me..

Yet, should the bitmap pair with free() since it's allocated by a
posix_memalign() call?

> This code is using the glibc allocator to get a bunch of pages mmap'd
> to an aligned location then replacing the pages with MAP_SHARED and
> maybe MAP_HUGETLB versions.

I also studied some use cases with Copilot. It says that, to use
the posix_memalign() + mmap() combination, we should do:
        void *aligned_ptr;
        void *mapped;

        posix_memalign(&aligned_ptr, pagesize, pagesize);
        munmap(aligned_ptr, pagesize);
        mapped = mmap(aligned_ptr, pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        munmap(mapped, pagesize);
        // No free() after munmap().

---breakdown---
Before `posix_memalign()`:
  [ heap memory unused ]

After `posix_memalign()`:
  [ posix_memalign() memory ]  ← managed by malloc/free
  ↑ aligned_ptr

After `munmap(aligned_ptr)`:
  [ unmapped memory ]          ← allocator no longer owns it

After `mmap(aligned_ptr, ..., MAP_FIXED)`:
  [ anonymous mmap region ]    ← fully remapped, under your control
  ↑ mapped
---end---
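The sequence above can be exercised as a complete program. This is only
a minimal sketch of the technique, not the selftest code: it uses a 2 MiB
alignment as a stand-in for HUGEPAGE_SIZE, and a 4 MiB size so that glibc
serves the posix_memalign() request via mmap() (above the default
mmap_threshold), matching the large-buffer case the selftest actually hits:
---code---
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t alignment = 2 * 1024 * 1024;	/* stand-in for HUGEPAGE_SIZE */
	size_t size = 4 * 1024 * 1024;		/* large => glibc mmap()s it */
	void *aligned_ptr, *mapped;

	/* Step 1: let the allocator find an aligned address. */
	if (posix_memalign(&aligned_ptr, alignment, size))
		return 1;

	/* Step 2: take the range away from the allocator. */
	if (munmap(aligned_ptr, size))
		return 1;

	/* Step 3: install our own mapping at the exact same address. */
	mapped = mmap(aligned_ptr, size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (mapped == MAP_FAILED)
		return 1;

	/* MAP_FIXED success means we got exactly the address we asked for. */
	if (mapped != aligned_ptr)
		return 1;

	memset(mapped, 0xaa, size);

	/* Step 4: munmap() only -- never free() this range afterwards. */
	munmap(mapped, size);
	printf("ok\n");
	return 0;
}
---end---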

It points out that the heap bookkeeping will be silently clobbered
without the munmap() in between (which is what our code is doing):
---breakdown---
After `posix_memalign()`:
  [ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
  [ anonymous mmap region ]    ← malloc still thinks it owns this (!)
  ↑ mapped
---end---

It also gives a simpler solution for memory that is not huge-page
backed but is huge-page aligned (our !variant->hugepages case):
---code---
void *ptr;
size_t alignment = 2 * 1024 * 1024; // or whatever HUGEPAGE_SIZE was
size_t size = variant->buffer_size;

// Step 1: Use posix_memalign to get an aligned pointer
if (posix_memalign(&ptr, alignment, size) != 0) {
    perror("posix_memalign");
    return -1;
}

// Use the memory directly
self->buffer = ptr;

// Access/manipulate the memory as needed...

// Step 2: Clean up when done
free(self->buffer);
---end---

Also, for the huge page case, there is no need for posix_memalign():
"Hugepages are not part of the standard heap, so allocator functions
 like posix_memalign() or malloc() don't help and can even get in the
 way."

Instead, it suggests a cleaner version without posix_memalign():
---code---
void *addr = mmap(NULL, variant->buffer_size, PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
                  -1, 0);
if (addr == MAP_FAILED) { perror("mmap"); return -1; }
---end---
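Regarding the question above about the bitmap: yes, whichever call
created a region determines how it is released. Here is a minimal
sketch (with made-up sizes, not the selftest's actual fields) showing
the two allocations side by side, each paired with its own release,
and with MAP_HUGETLB left out so it runs without reserved hugepages:
---code---
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t buffer_size = 1 << 20;	/* stand-in for variant->buffer_size */
	size_t bitmap_size = 4096;	/* stand-in for the dirty bitmap size */
	void *buffer, *bitmap;

	/* buffer: backed directly by mmap(), so it pairs with munmap(). */
	buffer = mmap(NULL, buffer_size, PROT_READ | PROT_WRITE,
		      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buffer == MAP_FAILED)
		return 1;

	/* bitmap: from the glibc allocator, so it pairs with free(). */
	if (posix_memalign(&bitmap, 4096, bitmap_size))
		return 1;
	memset(bitmap, 0, bitmap_size);

	/* Teardown: each allocation released by its matching call only. */
	munmap(buffer, buffer_size);
	free(bitmap);
	printf("ok\n");
	return 0;
}
---end---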

Should we follow?

Thanks
Nicolin
