Re: [patch 10/10] mm: fix pagecache write deadlocks

Nick Piggin wrote:

> @@ -1878,31 +1889,88 @@ generic_file_buffered_write(struct kiocb
> 			break;
> 		}
>
> +		/*
> +		 * non-uptodate pages cannot cope with short copies, and we
> +		 * cannot take a pagefault with the destination page locked.
> +		 * So pin the source page to copy it.
> +		 */
> +		if (!PageUptodate(page)) {
> +			unlock_page(page);
> +
> +			bytes = min(bytes, PAGE_CACHE_SIZE -
> +				((unsigned long)buf & ~PAGE_CACHE_MASK));
> +
> +			/*
> +			 * Cannot get_user_pages with a page locked for the
> +			 * same reason as we can't take a page fault with a
> +			 * page locked (as explained below).
> +			 */
> +			status = get_user_pages(current, current->mm,
> +					(unsigned long)buf & PAGE_CACHE_MASK, 1,
> +					0, 0, &src_page, NULL);

Thinko... get_user_pages needs to be called with mmap_sem held, obviously.

--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
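For reference, the corrected call site would presumably take mmap_sem for read around the lookup, along these lines (a sketch only, against the 2.6-era get_user_pages() API; the exact placement in the eventual patch may differ):

```
	/*
	 * Sketch: get_user_pages() walks the page tables of the given mm,
	 * so the caller must hold mm->mmap_sem (for read is sufficient
	 * here, since we only pin an existing page) across the call.
	 */
	down_read(&current->mm->mmap_sem);
	status = get_user_pages(current, current->mm,
			(unsigned long)buf & PAGE_CACHE_MASK, 1,
			0, 0, &src_page, NULL);
	up_read(&current->mm->mmap_sem);
```

Note that this must still happen only after unlock_page(), for the reason given in the quoted comment: taking mmap_sem and then faulting with the pagecache page locked is exactly the lock-ordering cycle the patch is trying to break.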
[patch 10/10] mm: fix pagecache write deadlocks
Modify the core write() code so that it won't take a pagefault while holding a
lock on the pagecache page. There are a number of different deadlocks possible
if we try to do such a thing:

1.  generic_buffered_write
2.      lock_page
3.      prepare_write
4.      unlock_page+vmtruncate
5.      copy_from_user
6.              mmap_sem(r)
7.                      handle_mm_fault
8.                              lock_page (filemap_nopage)
9.      commit_write
a.      unlock_page

b.  sys_munmap / sys_mlock / others
c.      mmap_sem(w)
d.      make_pages_present
e.              get_user_pages
f.                      handle_mm_fault
g.                              lock_page (filemap_nopage)

2,8     - recursive deadlock if page is same
2,8;2,8 - ABBA deadlock if page is different
2,6;c,g - ABBA deadlock if page is same

The solution is as follows:

1. If we find the destination page is uptodate, continue as normal, but use
   atomic usercopies which do not take pagefaults and do not zero the uncopied
   tail of the destination. The destination is already uptodate, so we can
   commit_write the full length even if there was a partial copy: it does not
   matter that the tail was not modified, because if it is dirtied and written
   back to disk it will not cause any problems (uptodate *means* that the
   destination page is as new or newer than the copy on disk).

1a. The above requires that fault_in_pages_readable correctly returns access
    information, because atomic usercopies cannot distinguish non-present
    pages in a readable mapping from lack of a readable mapping.

2. If we find the destination page is non uptodate, unlock it (this could be
   made slightly more optimal), then find and pin the source page with
   get_user_pages. Relock the destination page and continue with the copy.
   However, instead of a usercopy (which might take a fault), copy the data
   via the kernel address space.
(also, rename maxlen to seglen, because it was confusing)

Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>

Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -1843,11 +1843,12 @@ generic_file_buffered_write(struct kiocb
 	filemap_set_next_iovec(&cur_iov, nr_segs, &iov_offset, written);

 	do {
+		struct page *src_page;
 		struct page *page;
 		pgoff_t index;		/* Pagecache index for current page */
 		unsigned long offset;	/* Offset into pagecache page */
-		unsigned long maxlen;	/* Bytes remaining in current iovec */
-		size_t bytes;		/* Bytes to write to page */
+		unsigned long seglen;	/* Bytes remaining in current iovec */
+		unsigned long bytes;	/* Bytes to write to page */
 		size_t copied;		/* Bytes copied from user */

 		buf = cur_iov->iov_base + iov_offset;
@@ -1857,20 +1858,30 @@ generic_file_buffered_write(struct kiocb
 		if (bytes > count)
 			bytes = count;

-		maxlen = cur_iov->iov_len - iov_offset;
-		if (maxlen > bytes)
-			maxlen = bytes;
+		/*
+		 * a non-NULL src_page indicates that we're doing the
+		 * copy via get_user_pages and kmap.
+		 */
+		src_page = NULL;
+
+		seglen = cur_iov->iov_len - iov_offset;
+		if (seglen > bytes)
+			seglen = bytes;

-#ifndef CONFIG_DEBUG_VM
 		/*
 		 * Bring in the user page that we will copy from _first_.
 		 * Otherwise there's a nasty deadlock on copying from the
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
+		 *
+		 * Not only is this an optimisation, but it is also required
+		 * to check that the address is actually valid, when atomic
+		 * usercopies are used, below.
 		 */
-		fault_in_pages_readable(buf, maxlen);
-#endif
+		if (unlikely(fault_in_pages_readable(buf, seglen))) {
+			status = -EFAULT;
+			break;
+		}

 		page = __grab_cache_page(mapping, index);
 		if (!page) {
@@ -1878,31 +1889,88 @@ generic_file_buffered_write(struct kiocb
 			break;
 		}

+		/*
+		 * non-uptodate pages cannot cope with short copies, and we
+		 * cannot take a pagefault with the destination page locked.
+		 * So pin the source page to copy it.
+		 */
+		if (!PageUptodate(page)) {
+			unlock_page(page);
+
+			bytes = min(bytes, PAGE_CACHE_SIZE -
+				((unsigned long)buf & ~PAGE_CACHE_MASK));
+
+			/*
+			 * Cannot get_user_pages with a page locked for the
+			 * same reason as we can't take a page fault with a
+			 * page locked (as explained below).
+			 */