On 03/30/2016 09:12 AM, Minchan Kim wrote:
Now, VM has a feature to migrate non-lru movable pages so
balloon doesn't need custom migration hooks in migrate.c
and compact.c. Instead, this patch implements page->mapping
->{isolate|migrate|putback} functions.

With that, we could remove hooks for ballooning in general
migration functions and make balloon compaction simple.

Cc: [email protected]
Cc: Rafael Aquini <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Signed-off-by: Gioh Kim <[email protected]>
Signed-off-by: Minchan Kim <[email protected]>

I'm not familiar with the inode and pseudofs stuff, so just some things I noticed:

-#define PAGE_MOVABLE_MAPCOUNT_VALUE (-255)
+#define PAGE_MOVABLE_MAPCOUNT_VALUE (-256)
+#define PAGE_BALLOON_MAPCOUNT_VALUE PAGE_MOVABLE_MAPCOUNT_VALUE

  static inline int PageMovable(struct page *page)
  {
-       return ((test_bit(PG_movable, &(page)->flags) &&
-               atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE)
-               || PageBalloon(page));
+       return (test_bit(PG_movable, &(page)->flags) &&
+               atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE);
  }

  /* Caller should hold a PG_lock */
@@ -645,6 +626,35 @@ static inline void __ClearPageMovable(struct page *page)

  PAGEFLAG(Isolated, isolated, PF_ANY);

+static inline int PageBalloon(struct page *page)
+{
+       return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE
+               && PagePrivate2(page);
+}

Hmm, so you are now using the PG_private_2 flag here, but that's not documented. Also, the only caller of PageBalloon() seems to be stable_page_flags(), which will now report all movable pages with PG_private_2 set as KPF_BALLOON. That seems like overkill and is also not reliable. Could it test e.g. page->mapping instead?

Or, if we manage to get rid of PAGE_MOVABLE_MAPCOUNT_VALUE, maybe we can keep PAGE_BALLOON_MAPCOUNT_VALUE simply to distinguish balloon pages for stable_page_flags().

@@ -1033,7 +1019,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
  out:
        /* If migration is successful, move newpage to right list */
        if (rc == MIGRATEPAGE_SUCCESS) {
-               if (unlikely(__is_movable_balloon_page(newpage)))
+               if (unlikely(PageMovable(newpage)))
                        put_page(newpage);
                else
                        putback_lru_page(newpage);

Hmm, shouldn't the condition have been changed to

if (unlikely(__is_movable_balloon_page(newpage)) || PageMovable(newpage))

by patch 02/16? And this patch should be just removing the balloon-specific check? Otherwise it seems like between patches 02 and 04, other kinds of PageMovable pages were unnecessarily/wrongly routed through putback_lru_page()?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d82196244340..c7696a2e11c7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1254,7 +1254,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,

        list_for_each_entry_safe(page, next, page_list, lru) {
                if (page_is_file_cache(page) && !PageDirty(page) &&
-                   !isolated_balloon_page(page)) {
+                   !PageIsolated(page)) {
                        ClearPageActive(page);
                        list_move(&page->lru, &clean_pages);
                }

At first glance this looks like the same comment as above, but looking closer it's even weirder: isolated_balloon_page() has been simply PageBalloon() since commit d6d86c0a7f8dd, which was already odd. You replace it with a !PageIsolated() check, which looks more correct, so OK. Except for the potential false positive with PG_owner_priv_1.

Thanks.
