Re: [PATCH] zsmalloc: simplify init_zspage free obj linking

2014-09-15 Thread Minchan Kim
On Mon, Sep 15, 2014 at 04:58:50PM -0400, Dan Streetman wrote:
> Change zsmalloc init_zspage() logic to iterate through each object on
> each of its pages, checking the offset to verify the object is on the
> current page before linking it into the zspage.
> 
> The current zsmalloc init_zspage free object linking code has logic
> that relies on there only being one page per zspage when PAGE_SIZE
> is a multiple of class->size.  It calculates the number of objects
> for the current page, and iterates through all of them plus one,
> to account for the assumed partial object at the end of the page.
> While this currently works, the logic can be simplified to just
> link the object at each successive offset until the offset reaches
> or exceeds PAGE_SIZE, which does not rely on PAGE_SIZE being a
> multiple of class->size.
> 
> Signed-off-by: Dan Streetman 
> Cc: Minchan Kim 
Acked-by: Minchan Kim 

-- 
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH] zsmalloc: simplify init_zspage free obj linking

2014-09-15 Thread Dan Streetman
Change zsmalloc init_zspage() logic to iterate through each object on
each of its pages, checking the offset to verify the object is on the
current page before linking it into the zspage.

The current zsmalloc init_zspage free object linking code has logic
that relies on there only being one page per zspage when PAGE_SIZE
is a multiple of class->size.  It calculates the number of objects
for the current page, and iterates through all of them plus one,
to account for the assumed partial object at the end of the page.
While this currently works, the logic can be simplified to just
link the object at each successive offset until the offset reaches
or exceeds PAGE_SIZE, which does not rely on PAGE_SIZE being a
multiple of class->size.

Signed-off-by: Dan Streetman 
Cc: Minchan Kim 
---
 mm/zsmalloc.c | 14 +-
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c4a9157..03aa72f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
while (page) {
struct page *next_page;
struct link_free *link;
-   unsigned int i, objs_on_page;
+   unsigned int i = 1;
 
/*
 * page->index stores offset of first object starting
@@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
 
link = (struct link_free *)kmap_atomic(page) +
off / sizeof(*link);
-   objs_on_page = (PAGE_SIZE - off) / class->size;
 
-   for (i = 1; i <= objs_on_page; i++) {
-   off += class->size;
-   if (off < PAGE_SIZE) {
-   link->next = obj_location_to_handle(page, i);
-   link += class->size / sizeof(*link);
-   }
+   while ((off += class->size) < PAGE_SIZE) {
+   link->next = obj_location_to_handle(page, i++);
+   link += class->size / sizeof(*link);
}
 
/*
@@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
link->next = obj_location_to_handle(next_page, 0);
kunmap_atomic(link);
page = next_page;
-   off = (off + class->size) % PAGE_SIZE;
+   off %= PAGE_SIZE;
}
 }
 
-- 
1.8.3.1
