Re: [PATCH 4/5] vrange: Set affected pages referenced when marking volatile

2014-03-23 Thread KOSAKI Motohiro
On Fri, Mar 21, 2014 at 2:17 PM, John Stultz  wrote:
> One issue that some potential users were concerned about was that
> they wanted to ensure that all the pages from one volatile range
> were purged before we purge pages from a different volatile range.
> This would prevent the case where they have 4 large objects, and
> the system purges one page from each object, causing all of the
> objects to have to be re-created.
>
> The counter-point to this case is when an application is using the
> SIGBUS semantics to continue to access pages after they have been
> marked volatile. In that case, the desire was that the most recently
> touched pages be purged last, and only the "cold" pages be purged
> from the specified range.
>
> Instead of adding option flags for the various usage models (at least
> initially), one way of getting a solution for both uses would be to
> have the act of marking pages as volatile in effect mark the pages
> as accessed. Since all of the pages in the range would be marked
> together, they would be of the same "age" and would (approximately)
> be purged together. Further, if any pages in the range were accessed
> after being marked volatile, they would be moved to the end of the
> lru and be purged later.

If you run after two hares, you will catch neither. I suspect this patch
won't make any user happy.
I suggest aiming at the former case (object-level caching) and handling
the latter in a separate patch-kit.


[PATCH 4/5] vrange: Set affected pages referenced when marking volatile

2014-03-21 Thread John Stultz
One issue that some potential users were concerned about was that
they wanted to ensure that all the pages from one volatile range
were purged before we purge pages from a different volatile range.
This would prevent the case where they have 4 large objects, and
the system purges one page from each object, causing all of the
objects to have to be re-created.

The counter-point to this case is when an application is using the
SIGBUS semantics to continue to access pages after they have been
marked volatile. In that case, the desire was that the most recently
touched pages be purged last, and only the "cold" pages be purged
from the specified range.
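
To make that second usage model concrete, here is a rough userspace
sketch (not from this patch series). The mark_volatile(),
mark_nonvolatile() and regenerate_object() helpers are hypothetical
stand-ins for however the application wraps the proposed interface:

#include <signal.h>
#include <setjmp.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical wrappers around the proposed volatile-range interface. */
extern void mark_volatile(void *addr, size_t len);
extern void mark_nonvolatile(void *addr, size_t len);
extern void regenerate_object(void *addr, size_t len);

static sigjmp_buf purged_env;
static void *cache_base;
static size_t cache_len;

static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
	/* We touched a purged volatile page: unwind and regenerate. */
	siglongjmp(purged_env, 1);
}

static void read_cached_object(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

	if (sigsetjmp(purged_env, 1)) {
		/* Purged under us: make it stable again and rebuild it. */
		mark_nonvolatile(cache_base, cache_len);
		regenerate_object(cache_base, cache_len);
	}

	/* ... access cache_base while it remains marked volatile ... */
}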

Instead of adding option flags for the various usage models (at least
initially), one way of getting a solution for both uses would be to
have the act of marking pages as volatile in effect mark the pages
as accessed. Since all of the pages in the range would be marked
together, they would be of the same "age" and would (approximately)
be purged together. Further, if any pages in the range were accessed
after being marked volatile, they would be moved to the end of the
lru and be purged later.

This patch provides this solution by walking the pages in the range
and setting them accessed when the range is marked volatile.

This does have a performance impact, as we have to touch each page
when marking it volatile. Additionally, while setting all the
pages to the same age solves the basic problem, there is still an
open question: what age should all the pages be set to?

One could consider them all recently accessed, which would put them
at the end of the active lru. Or one could possibly move them all to
the end of the inactive lru, making them more likely to be purged
sooner.
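
If it helps make that choice concrete, here is a purely illustrative
sketch (not part of this patch) of the decision point; both helpers
already exist in mm/swap.c, and the wrapper name below is made up:

#include <linux/swap.h>	/* mark_page_accessed(), deactivate_page() */

/* Illustration only: which existing LRU helper should the pte walker
 * call for each resident page in the range? */
static void vrange_age_page(struct page *page, bool treat_as_recent)
{
	if (treat_as_recent)
		mark_page_accessed(page);	/* age toward the active lru */
	else
		deactivate_page(page);		/* hint it as a reclaim
						 * candidate on the inactive lru */
}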

Another possibility would be to not affect the pages at all when
marking them as volatile, and allow applications to use madvise
prior to marking any pages as volatile to age them together, if
that behavior was needed. In that case this patch would be
unnecessary.

Thoughts on the best approach would be greatly appreciated.


Cc: Andrew Morton 
Cc: Android Kernel Team 
Cc: Johannes Weiner 
Cc: Robert Love 
Cc: Mel Gorman 
Cc: Hugh Dickins 
Cc: Dave Hansen 
Cc: Rik van Riel 
Cc: Dmitry Adamushko 
Cc: Neil Brown 
Cc: Andrea Arcangeli 
Cc: Mike Hommey 
Cc: Taras Glek 
Cc: Jan Kara 
Cc: KOSAKI Motohiro 
Cc: Michel Lespinasse 
Cc: Minchan Kim 
Cc: linux...@kvack.org 
Signed-off-by: John Stultz 
---
 mm/vrange.c | 71 +
 1 file changed, 71 insertions(+)

diff --git a/mm/vrange.c b/mm/vrange.c
index 28ceb6f..9be8f45 100644
--- a/mm/vrange.c
+++ b/mm/vrange.c
@@ -79,6 +79,73 @@ static int vrange_check_purged(struct mm_struct *mm,
 
 }
 
+
+/**
+ * vrange_mark_accessed_pte - Marks pte pages in range accessed
+ *
+ * Iterates over the ptes in the pmd and marks the corresponding page
+ * as accessed. This ensures all the pages in the range are of the
+ * same "age", so that when pages are purged, we will most likely purge
+ * them together.
+ */
+static int vrange_mark_accessed_pte(pmd_t *pmd, unsigned long addr,
+				unsigned long end, struct mm_walk *walk)
+{
+	struct vm_area_struct *vma = walk->private;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	if (pmd_trans_huge(*pmd))
+		return 0;
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		if (pte_present(*pte)) {
+			struct page *page;
+
+			page = vm_normal_page(vma, addr, *pte);
+			if (IS_ERR_OR_NULL(page))
+				break;
+			get_page(page);
+			/*
+			 * XXX - So here we may want to do something
+			 * other than marking the page accessed.
+			 * Setting them all to be the same "age" ensures
+			 * they are purged together, but it's not clear
+			 * what that "age" should be.
+			 */
+			mark_page_accessed(page);
+			put_page(page);
+		}
+	}
+	pte_unmap_unlock(pte - 1, ptl);
+	cond_resched();
+
+	return 0;
+}
+
+
+/**
+ * vrange_mark_range_accessed - Sets up a mm_walk to mark pages accessed
+ *
+ * Sets up and calls walk_page_range() to mark affected pages as accessed.
+ */
+static void vrange_mark_range_accessed(struct vm_area_struct *vma,
+					unsigned long start,
+					unsigned long end)
+{
+	struct mm_walk vrange_walk = {
+		.pmd_entry = vrange_mark_accessed_pte,
+		.mm = vma->vm_mm,
+		.private = vma,
+	};
+
+	walk_page_range(start, end, &vrange_walk);
+}
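
The call site for vrange_mark_range_accessed() is not visible in the
quoted portion of the diff. As a purely hypothetical sketch (the caller
name below is made up, not from the patch), the intent is roughly:

/* Hypothetical caller: invoked per-vma while marking [start, end)
 * volatile, before the range is actually flagged as volatile. */
static void vrange_mark_volatile_one(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
	vrange_mark_range_accessed(vma, start, end);
	/* ... existing code that marks the range volatile ... */
}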