[patch 3/7] mm: fix fault vs invalidate race for linear mappings

2007-01-12, Nick Piggin
Fix the race between invalidate_inode_pages and do_no_page.

Andrea Arcangeli identified a subtle race between invalidation of
pages from pagecache with userspace mappings, and do_no_page.

The issue is that invalidation has to shoot down all mappings to the
page, before it can be discarded from the pagecache. Between shooting
down ptes to a particular page, and actually dropping the struct page
from the pagecache, do_no_page from any process might fault on that
page and establish a new mapping to the page just before it gets
discarded from the pagecache.
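
Schematically, the lost race looks like this (an illustrative
interleaving, not verbatim kernel code):

	invalidator				faulting task
	-----------				-------------
	unmap_mapping_range()
	  [ptes for the page are gone]
						do_no_page()
						  ->nopage finds the page in
						  pagecache, takes a reference
	remove page from pagecache
	  [page->mapping = NULL]
						  installs a pte pointing at
						  the now-dangling page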

The most common case where such invalidation is used is file
truncation. That case was catered for with a sort of open-coded
seqlock between the file's i_size and its truncate_count.

Truncation decreases i_size, then increments truncate_count before
unmapping userspace pages; do_no_page reads truncate_count, then finds
the page if it is within i_size, and finally rechecks truncate_count
under the page table lock, backing out and retrying if it has changed
in the meantime (the ptl serialises against unmapping, and ensures
that an updated truncate_count is actually visible).
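
In code, the existing protocol is roughly the following (a condensed
sketch of 2.6-era vmtruncate and do_no_page, reordered for clarity;
error handling and most locking elided):

	/* truncation (vmtruncate) */
	i_size_write(inode, offset);		/* 1. shrink i_size */
	mapping->truncate_count++;		/* 2. bump the sequence count */
	unmap_mapping_range(mapping, ...);	/* 3. shoot down ptes */

	/* fault (do_no_page) */
retry:
	sequence = mapping->truncate_count;
	smp_rmb();				/* order against i_size read */
	new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
	...
	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
	if (unlikely(sequence != mapping->truncate_count)) {
		/* raced with truncation: back out and retry */
		pte_unmap_unlock(page_table, ptl);
		page_cache_release(new_page);
		goto retry;
	}
	/* install the pte for new_page, then pte_unmap_unlock() */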

Complexity and documentation issues aside, the locking protocol fails
in the case where we would like to invalidate pagecache inside i_size.
do_no_page can come in at any time, and filemap_nopage is not aware of
the invalidation in progress (unlike the outside-i_size case, which
the i_size check catches). The end result is that dangling
(->mapping == NULL) pages that appear to belong to a particular file
may be mapped into userspace with nonsense data. Valid mappings of the
same offset will see a different page.

Andrea implemented two working fixes, one using a real seqlock, the
other using a page->flags bit. He also proposed using the page lock in
do_no_page, but that was initially considered too heavyweight.
However, the page lock is not a global or per-file lock, and the
page's cacheline is already modified in do_no_page to increment _count
and _mapcount anyway, so a further modification should not be a large
performance hit. Scalability is not an issue.

This patch implements the latter approach. ->nopage implementations
return with the page locked if it is possible for their underlying
file to be invalidated (in that case, they must set a special vm_flags
bit to indicate this). do_no_page only unlocks the page after setting
up the mapping completely. Invalidation is excluded because it holds
the page lock while invalidating each page (and ensures that the page
is not mapped while holding the lock).
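
In the same condensed style, the new protocol looks approximately like
this (a sketch of the locking rules just described, not the literal
patched code; rmap and accounting details elided):

	/* fault (do_no_page): ->nopage now returns the page locked */
	new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
	...
	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
	/* the page lock has been held since the pagecache lookup, so
	 * the page cannot have been invalidated under us */
	set_pte_at(mm, address, page_table,
		   mk_pte(new_page, vma->vm_page_prot));
	pte_unmap_unlock(page_table, ptl);
	unlock_page(new_page);		/* only now may invalidation proceed */

	/* invalidation (truncate_inode_pages / invalidate_inode_pages2):
	 * hold the page lock across the whole critical section */
	lock_page(page);
	if (page_mapped(page))
		unmap_mapping_range(mapping, ...);	/* shoot down ptes */
	remove_from_page_cache(page);	/* no new mapping can race in */
	unlock_page(page);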

This also allows significant simplifications in do_no_page, because
we have the page locked in the right place in the pagecache from the
start.

Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -168,6 +168,11 @@ extern unsigned int kobjsize(const void 
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 #define VM_MAPPED_COPY	0x01000000	/* T if mapped copy of data (nommu mmap) */
 #define VM_INSERTPAGE	0x02000000	/* The vma has had "vm_insert_page()" done on it */
+#define VM_CAN_INVALIDATE 0x04000000	/* The mapping may be invalidated,
+					 * eg. truncate or invalidate_inode_*.
+					 * In this case, do_no_page must
+					 * return with the page locked.
+					 */
 
 #ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
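
(Not visible in the hunks quoted here, but implied by the BUG_ON added
to filemap_nopage below: filesystems using the generic code must opt in
at mmap time, presumably somewhere like generic_file_mmap, with
something along the lines of

	vma->vm_flags |= VM_CAN_INVALIDATE;

so that do_no_page knows this ->nopage returns a locked page.)
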
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1349,9 +1349,10 @@ struct page *filemap_nopage(struct vm_ar
unsigned long size, pgoff;
int did_readaround = 0, majmin = VM_FAULT_MINOR;
 
+   BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
+
pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
 
-retry_all:
size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
if (pgoff >= size)
goto outside_data_content;
@@ -1373,7 +1374,7 @@ retry_all:
 * Do we have something in the page cache already?
 */
 retry_find:
-   page = find_get_page(mapping, pgoff);
+   page = find_lock_page(mapping, pgoff);
if (!page) {
unsigned long ra_pages;
 
@@ -1407,7 +1408,7 @@ retry_find:
start = pgoff - ra_pages / 2;
do_page_cache_readahead(mapping, file, start, ra_pages);
}
-   page = find_get_page(mapping, pgoff);
+   page = find_lock_page(mapping, pgoff);
if (!page)
goto no_cached_page;
}
@@ -1416,13 +1417,19 @@ retry_find:
