Ben-Ami Yassour wrote:
Anthony Liguori <[EMAIL PROTECTED]> wrote on 04/29/2008 05:32:09 PM:

Subject: [PATCH] Handle vma regions with no backing page

This patch allows VMAs that contain no backing page to be used for guest memory. This is a drop-in replacement for Ben-Ami's first patch in his direct mmio series. Here, we continue to allow mmio pages to be represented in the rmap.

 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
-   return pfn_to_page(gfn_to_pfn(kvm, gfn));
+   pfn_t pfn;
+
+   pfn = gfn_to_pfn(kvm, gfn);
+   if (pfn_valid(pfn))
+      return pfn_to_page(pfn);
+
+   return NULL;
 }
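
As an illustration of the new contract (the caller below is made up, not part
of the patch): gfn_to_page() can now return NULL for a gfn that maps to an
mmio region with no struct page, so callers have to check for it.

static int example_read_guest_page(struct kvm *kvm, gfn_t gfn)
{
        struct page *page = gfn_to_page(kvm, gfn);

        if (!page)
                return -EFAULT;         /* mmio or otherwise unbacked gfn */

        /* ... use the page ..., then drop the reference taken for us */
        kvm_release_page_clean(page);
        return 0;
}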

We noticed that pfn_valid() does not always work as this patch expects, i.e. as an indication that a pfn has a backing struct page. We have seen a case where CONFIG_NUMA was not set and pfn_valid() returned 1 for an mmio pfn. After rebuilding with CONFIG_NUMA set it worked as expected, since a different implementation of pfn_valid() is used.
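
A minimal userspace model of why the two pfn_valid() flavours can disagree
(the section size, pfn values and array names below are made up for
illustration; the real kernel macros are more involved). With FLATMEM, the
usual choice when CONFIG_NUMA is off, pfn_valid() is roughly just an
upper-bound check on the pfn, so an mmio pfn that lies below the end of the
mem_map still passes; with SPARSEMEM/DISCONTIGMEM the check also consults the
per-section (or per-node) memmap and rejects holes:

#include <stdbool.h>
#include <stdio.h>

#define SECTION_SHIFT   15                      /* illustrative section size */

static unsigned long max_mapnr = 0x100000;      /* pfns covered by mem_map   */
static bool section_has_memmap[0x100000 >> SECTION_SHIFT];

/* FLATMEM-style check: only an upper bound on the pfn. */
static bool pfn_valid_flat(unsigned long pfn)
{
        return pfn < max_mapnr;
}

/* SPARSEMEM-style check: the pfn's section must actually have a memmap. */
static bool pfn_valid_sparse(unsigned long pfn)
{
        return pfn < max_mapnr && section_has_memmap[pfn >> SECTION_SHIFT];
}

int main(void)
{
        unsigned long mmio_pfn = 0x80000;       /* mmio hole below max_mapnr */
        unsigned long s;

        /* mark every section populated except the one holding the mmio hole */
        for (s = 0; s < (max_mapnr >> SECTION_SHIFT); s++)
                section_has_memmap[s] = (s != (mmio_pfn >> SECTION_SHIFT));

        printf("flat:   %d\n", pfn_valid_flat(mmio_pfn));   /* prints 1: false positive */
        printf("sparse: %d\n", pfn_valid_sparse(mmio_pfn)); /* prints 0 */
        return 0;
}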

How should we overcome this issue?


Looks like we need to reintroduce a refcount bit in the pte, and check the page using the VMA.

Nick Piggin's lockless pagecache patches, which have the same issue, also introduce a pte_special bit. We could follow a similar route.

http://www.mail-archive.com/[EMAIL PROTECTED]/msg04789.html
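
A rough sketch of the vma-based fallback, just to make the idea concrete; the
helper name, the bad_pfn sentinel and the error handling are made up, locking
(mmap_sem around find_vma()) is elided, and this is not an actual patch:

static pfn_t hva_to_pfn_sketch(struct kvm *kvm, unsigned long addr)
{
        struct page *page[1];
        struct vm_area_struct *vma;
        int npages;

        npages = get_user_pages(current, current->mm, addr, 1, 1, 0,
                                page, NULL);
        if (npages == 1)
                return page_to_pfn(page[0]);

        /* No struct page behind this hva: see if it is a raw pfn mapping. */
        vma = find_vma(current->mm, addr);
        if (vma && (vma->vm_flags & (VM_IO | VM_PFNMAP)))
                return ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

        return bad_pfn;         /* hypothetical sentinel the caller checks */
}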

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

