From: Matthew Wilcox <wi...@linux.intel.com>

If the first access to a huge page was a store, there would be no existing
zero pmd in this process's page tables.  There could, however, be a zero pmd
in another process's page tables if that process had previously done a load.
We can detect this case by noticing that the buffer_head returned from the
filesystem is marked new (buffer_new()), and unmap the zero page from the
page tables of any other process mapping this range.
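
For reference, a rough sketch of how this part of __dax_pmd_fault() reads
once the hunk below is applied (the surrounding lines are paraphrased from
the existing function and are not part of this patch):

	/* inside __dax_pmd_fault(), after get_block() has filled in bh */
	if ((pgoff | PG_PMD_COLOUR) >= size)
		goto fallback;

	/*
	 * buffer_new() is set by the filesystem when it has just
	 * allocated this block, which is the only case where another
	 * process may still hold a huge zero pmd over this range.
	 * Tear those mappings down before mapping the new block.
	 */
	if (buffer_new(&bh))
		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT,
				    PMD_SIZE, 0);

	if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
		/* read fault over a hole: huge zero page path */
		...
	}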

Reported-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Signed-off-by: Matthew Wilcox <wi...@linux.intel.com>
---
 fs/dax.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index bf9a22b..233e61e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -568,7 +568,11 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
        if ((pgoff | PG_PMD_COLOUR) >= size)
                goto fallback;
 
-       if (is_huge_zero_pmd(*pmd))
+       /*
+        * If we allocated new storage, make sure no process has any
+        * zero pages covering this hole
+        */
+       if (buffer_new(&bh))
                unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
 
        if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
-- 
2.1.4
