john cooper wrote:
I like it even less.  MAP_POPULATE does not fault in physical
hpages to the map.  Again this was a qemu-side interim bandaid.

Really?  That would seem like a bug in hugetlbfs to me.
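To make the point of contention concrete, here is a minimal sketch (not code from the patch; the file path, hugepage size, and helper name are illustrative) of mapping guest RAM from a hugetlbfs file with MAP_POPULATE:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch only: map 'len' bytes (a multiple of the hugepage size) of
 * guest RAM from a hugetlbfs file and ask the kernel to populate it.
 * The disputed question above is whether the populate pass really
 * faults in physical hugepages or only sets up the VMA. */
void *map_hugetlb_populate(const char *hugefile, size_t len)
{
    int fd = open(hugefile, O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("open hugetlbfs file");
        return MAP_FAILED;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_POPULATE, fd, 0);
    if (p == MAP_FAILED)
        perror("mmap hugetlbfs");
    close(fd);            /* the mapping keeps its own reference */
    return p;
}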

+/* we failed to fault in hpage *a, fall back to conventional page mapping
+ */
+int remap_hpage(void *a, int sz)
+{
+    ASSERT(!(sz & (EXEC_PAGESIZE - 1)));
+    if (munmap(a, sz) < 0)
+        perror("remap_hpage: munmap");
+    else if (mmap(a, sz, PROT_READ|PROT_WRITE,
+                  MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0) == MAP_FAILED)
+        perror("remap_hpage: mmap");
+    else
+        return (1);
+    return (0);
+}

I think this would be simpler with MAP_POPULATE, since you could fail (and fall back) in large chunks of memory instead of potentially ending up with a highly fragmented set of VMAs.
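Roughly what I have in mind, as an illustrative sketch rather than the actual qemu code (the helper name and single-chunk granularity are assumptions): try the whole region as hugetlbfs with MAP_POPULATE and, if that fails, fall back to one anonymous small-page mapping, so the region stays a single VMA either way.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: hugepages for the whole chunk, or 4K pages for the whole
 * chunk, but never a per-hugepage patchwork of VMAs. */
void *map_region(const char *hugefile, size_t len)
{
    int fd = open(hugefile, O_CREAT | O_RDWR, 0600);
    if (fd >= 0) {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_POPULATE, fd, 0);
        close(fd);
        if (p != MAP_FAILED)
            return p;                   /* backed by hugepages */
        perror("hugetlbfs mmap, falling back to small pages");
    }
    /* One anonymous mapping covers the whole chunk with 4K pages. */
    return mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}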

Here for 4K pages we only need to set up the map.  If we later
fault on a physically absent 4K page we'll wait if a page isn't
immediately available.  By contrast, in the case of an hpage being
unavailable, we'll terminate.  Note that at this point we've effectively
locked onto whatever hpages we've been able to map, as they can't
be reclaimed from us until we exit.
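For illustration only (not from the patch; the hugepage size constant and the touch-per-page loop are assumptions), this is the kind of up-front touch that forces each hpage to be physically allocated at setup time, so a shortage shows up immediately rather than while the guest is running:

#include <stddef.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)   /* assumed 2MB hugepages */

/* Write one byte per hugepage so the kernel must back each page with
 * a physical hugepage now; from then on those pages stay ours until
 * the mapping is torn down. */
static void prefault_hpages(void *base, size_t len)
{
    volatile char *p = base;
    size_t off;

    for (off = 0; off < len; off += HPAGE_SIZE)
        p[off] = p[off];
}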

Right now, yes. Once we drop references to the large pages, there's nothing preventing them from being reclaimed in the future. That's what I'm concerned about.

Regards,

Anthony Liguori
