Re: [PATCH 01/10] zsmalloc: fix init_zspage free obj linking

2014-09-14 Thread Minchan Kim
On Fri, Sep 12, 2014 at 12:43:22PM -0400, Dan Streetman wrote:
> On Fri, Sep 12, 2014 at 12:59 AM, Minchan Kim  wrote:
> > On Thu, Sep 11, 2014 at 04:53:52PM -0400, Dan Streetman wrote:
> >> When zsmalloc creates a new zspage, it initializes each object it contains
> >> with a link to the next object, so that the zspage has a singly-linked list
> >> of its free objects.  However, the logic that sets up the links is wrong,
> >> and in the case of objects that are precisely aligned with the page boundaries
> >> (e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
> >> next page is skipped, due to incrementing the offset twice.  The logic can be
> >> simplified, as it doesn't need to calculate how many objects can fit on the
> >> current page; simply checking the offset for each object is enough.
> >
> > If objects are precisely aligned with the page boundary, pages_per_zspage
> > should be 1 so there is no next page.
> 
> ah, ok.  I wonder if it should be changed anyway so it doesn't rely on
> that detail, in case that's ever changed in the future.  It's not
> obvious the existing logic relies on that for correct operation.  And
> this simplifies the logic too.

Correct description and resend if you want.

-- 
Kind regards,
Minchan Kim
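
To make the boundary-alignment point concrete: when the object size divides
PAGE_SIZE exactly, a single page already has zero trailing waste, so no
multi-page zspage is ever created for that class.  The sketch below is a
minimal userspace model of that selection, assuming a 4096-byte PAGE_SIZE and
a 4-page cap; it only mimics the waste comparison, not the kernel's actual
get_pages_per_zspage().

#include <stdio.h>

#define PAGE_SIZE 4096
#define MAX_PAGES 4     /* assumed cap, standing in for ZS_MAX_PAGES_PER_ZSPAGE */

/* illustrative model: pick the smallest page count with the least
 * trailing waste; the real kernel logic differs in detail */
static int pages_per_zspage(int size)
{
        int i, best = 1, min_waste = PAGE_SIZE;

        for (i = 1; i <= MAX_PAGES; i++) {
                int waste = (i * PAGE_SIZE) % size;

                if (waste < min_waste) {
                        min_waste = waste;
                        best = i;
                }
        }
        return best;
}

int main(void)
{
        /* exact divisor of PAGE_SIZE: zero waste on one page */
        printf("size %d -> %d page(s)\n", PAGE_SIZE / 2,
               pages_per_zspage(PAGE_SIZE / 2));
        /* a non-divisor needs more pages to shrink the waste */
        printf("size %d -> %d page(s)\n", 2720, pages_per_zspage(2720));
        return 0;
}

This prints 1 page for 2048-byte objects and 2 pages for 2720-byte ones,
matching the observation that a boundary-aligned class never has a next page.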


Re: [PATCH 01/10] zsmalloc: fix init_zspage free obj linking

2014-09-12 Thread Dan Streetman
On Fri, Sep 12, 2014 at 12:59 AM, Minchan Kim  wrote:
> On Thu, Sep 11, 2014 at 04:53:52PM -0400, Dan Streetman wrote:
>> When zsmalloc creates a new zspage, it initializes each object it contains
>> with a link to the next object, so that the zspage has a singly-linked list
>> of its free objects.  However, the logic that sets up the links is wrong,
>> and in the case of objects that are precisely aligned with the page boundaries
>> (e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
>> next page is skipped, due to incrementing the offset twice.  The logic can be
>> simplified, as it doesn't need to calculate how many objects can fit on the
>> current page; simply checking the offset for each object is enough.
>
> If objects are precisely aligned with the page boundary, pages_per_zspage
> should be 1 so there is no next page.

ah, ok.  I wonder if it should be changed anyway so it doesn't rely on
that detail, in case that's ever changed in the future.  It's not
obvious the existing logic relies on that for correct operation.  And
this simplifies the logic too.

>
>>
>> Change zsmalloc init_zspage() logic to iterate through each object on
>> each of its pages, checking the offset to verify the object is on the
>> current page before linking it into the zspage.
>>
>> Signed-off-by: Dan Streetman 
>> Cc: Minchan Kim 
>> ---
>>  mm/zsmalloc.c | 14 +++++---------
>>  1 file changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index c4a9157..03aa72f 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>   while (page) {
>>   struct page *next_page;
>>   struct link_free *link;
>> - unsigned int i, objs_on_page;
>> + unsigned int i = 1;
>>
>>   /*
>>* page->index stores offset of first object starting
>> @@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>
>>   link = (struct link_free *)kmap_atomic(page) +
>>   off / sizeof(*link);
>> - objs_on_page = (PAGE_SIZE - off) / class->size;
>>
>> - for (i = 1; i <= objs_on_page; i++) {
>> - off += class->size;
>> - if (off < PAGE_SIZE) {
>> - link->next = obj_location_to_handle(page, i);
>> - link += class->size / sizeof(*link);
>> - }
>> + while ((off += class->size) < PAGE_SIZE) {
>> + link->next = obj_location_to_handle(page, i++);
>> + link += class->size / sizeof(*link);
>>   }
>>
>>   /*
>> @@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>   link->next = obj_location_to_handle(next_page, 0);
>>   kunmap_atomic(link);
>>   page = next_page;
>> - off = (off + class->size) % PAGE_SIZE;
>> + off %= PAGE_SIZE;
>>   }
>>  }
>>
>> --
>> 1.8.3.1
>>
>
> --
> Kind regards,
> Minchan Kim
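
To see the double increment described above, here is a standalone trace of
the old loop's offset arithmetic for the boundary-aligned case.  PAGE_SIZE,
the two-page walk, and the reduction of linking to a counter are all assumed
for illustration (as Minchan notes, such a class never actually gets a second
page, so this models the hypothetical).

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
        int size = PAGE_SIZE / 2;       /* objects exactly fill each page */
        int page, off = 0, linked = 0;

        for (page = 0; page < 2; page++) {
                int i, objs_on_page = (PAGE_SIZE - off) / size;

                /* old init_zspage() loop: off is bumped even for the last
                 * object on the page, whose link is never written */
                for (i = 1; i <= objs_on_page; i++) {
                        off += size;
                        if (off < PAGE_SIZE)
                                linked++;
                }
                /* old end-of-page step then adds size a second time */
                off = (off + size) % PAGE_SIZE;
                printf("page %d done, next page starts at off %d (expected 0)\n",
                       page, off);
        }
        printf("links written inside pages: %d\n", linked);
        return 0;
}

With 2048-byte objects the trace reports the next page starting at offset
2048, so the object at offset 0 of the second page is never initialized,
which is exactly the skipped object from the commit message.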


Re: [PATCH 01/10] zsmalloc: fix init_zspage free obj linking

2014-09-11 Thread Minchan Kim
On Thu, Sep 11, 2014 at 04:53:52PM -0400, Dan Streetman wrote:
> When zsmalloc creates a new zspage, it initializes each object it contains
> with a link to the next object, so that the zspage has a singly-linked list
> of its free objects.  However, the logic that sets up the links is wrong,
> and in the case of objects that are precisely aligned with the page boundaries
> (e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
> next page is skipped, due to incrementing the offset twice.  The logic can be
> simplified, as it doesn't need to calculate how many objects can fit on the
> current page; simply checking the offset for each object is enough.

If objects are precisely aligned with the page boundary, pages_per_zspage
should be 1 so there is no next page.

> 
> Change zsmalloc init_zspage() logic to iterate through each object on
> each of its pages, checking the offset to verify the object is on the
> current page before linking it into the zspage.
> 
> Signed-off-by: Dan Streetman 
> Cc: Minchan Kim 
> ---
>  mm/zsmalloc.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c4a9157..03aa72f 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>   while (page) {
>   struct page *next_page;
>   struct link_free *link;
> - unsigned int i, objs_on_page;
> + unsigned int i = 1;
>  
>   /*
>* page->index stores offset of first object starting
> @@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>  
>   link = (struct link_free *)kmap_atomic(page) +
>   off / sizeof(*link);
> - objs_on_page = (PAGE_SIZE - off) / class->size;
>  
> - for (i = 1; i <= objs_on_page; i++) {
> - off += class->size;
> - if (off < PAGE_SIZE) {
> - link->next = obj_location_to_handle(page, i);
> - link += class->size / sizeof(*link);
> - }
> + while ((off += class->size) < PAGE_SIZE) {
> + link->next = obj_location_to_handle(page, i++);
> + link += class->size / sizeof(*link);
>   }
>  
>   /*
> @@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>   link->next = obj_location_to_handle(next_page, 0);
>   kunmap_atomic(link);
>   page = next_page;
> - off = (off + class->size) % PAGE_SIZE;
> + off %= PAGE_SIZE;
>   }
>  }
>  
> -- 
> 1.8.3.1
> 

-- 
Kind regards,
Minchan Kim
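
One detail worth spelling out: for sizes that do not divide PAGE_SIZE, the
old final step off = (off + class->size) % PAGE_SIZE was correct, because the
extra class->size accounted for the object spanning into the next page.  The
patched while loop has already advanced off past the page end by the time it
exits, so off %= PAGE_SIZE lands on the same spillover.  A small sketch with
an assumed 2720-byte class size (first page, first object at offset 0) shows
both forms agreeing there and diverging only in the aligned case:

#include <stdio.h>

#define PAGE_SIZE 4096

static int old_carry(int size)
{
        /* loop over the fully fitting objects, then add size once
         * more for the object spanning into the next page */
        int off = (PAGE_SIZE / size) * size;

        return (off + size) % PAGE_SIZE;
}

static int new_carry(int size)
{
        /* keep adding size until past the page end, keep the spillover */
        int off = 0;

        while ((off += size) < PAGE_SIZE)
                ;
        return off % PAGE_SIZE;
}

int main(void)
{
        printf("size 2720: old %d, new %d\n", old_carry(2720), new_carry(2720));
        printf("size 2048: old %d, new %d\n", old_carry(2048), new_carry(2048));
        return 0;
}

The outputs are 1344/1344 for the 2720-byte size and 2048/0 for the
2048-byte size; only the boundary-aligned class disagrees, and 0 is the
correct starting offset for the next page.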


Re: [PATCH 01/10] zsmalloc: fix init_zspage free obj linking

2014-09-11 Thread Seth Jennings
On Thu, Sep 11, 2014 at 04:53:52PM -0400, Dan Streetman wrote:
> When zsmalloc creates a new zspage, it initializes each object it contains
> with a link to the next object, so that the zspage has a singly-linked list
> of its free objects.  However, the logic that sets up the links is wrong,
> and in the case of objects that are precisely aligned with the page boundaries
> (e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
> next page is skipped, due to incrementing the offset twice.  The logic can be
> simplified, as it doesn't need to calculate how many objects can fit on the
> current page; simply checking the offset for each object is enough.
> 
> Change zsmalloc init_zspage() logic to iterate through each object on
> each of its pages, checking the offset to verify the object is on the
> current page before linking it into the zspage.
> 
> Signed-off-by: Dan Streetman 
> Cc: Minchan Kim 

This one stands on its own as a bugfix.

Reviewed-by: Seth Jennings 

> ---
>  mm/zsmalloc.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c4a9157..03aa72f 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>   while (page) {
>   struct page *next_page;
>   struct link_free *link;
> - unsigned int i, objs_on_page;
> + unsigned int i = 1;
>  
>   /*
>* page->index stores offset of first object starting
> @@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>  
>   link = (struct link_free *)kmap_atomic(page) +
>   off / sizeof(*link);
> - objs_on_page = (PAGE_SIZE - off) / class->size;
>  
> - for (i = 1; i <= objs_on_page; i++) {
> - off += class->size;
> - if (off < PAGE_SIZE) {
> - link->next = obj_location_to_handle(page, i);
> - link += class->size / sizeof(*link);
> - }
> + while ((off += class->size) < PAGE_SIZE) {
> + link->next = obj_location_to_handle(page, i++);
> + link += class->size / sizeof(*link);
>   }
>  
>   /*
> @@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>   link->next = obj_location_to_handle(next_page, 0);
>   kunmap_atomic(link);
>   page = next_page;
> - off = (off + class->size) % PAGE_SIZE;
> + off %= PAGE_SIZE;
>   }
>  }
>  
> -- 
> 1.8.3.1
> 


[PATCH 01/10] zsmalloc: fix init_zspage free obj linking

2014-09-11 Thread Dan Streetman
When zsmalloc creates a new zspage, it initializes each object it contains
with a link to the next object, so that the zspage has a singly-linked list
of its free objects.  However, the logic that sets up the links is wrong,
and in the case of objects that are precisely aligned with the page boundaries
(e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
next page is skipped, due to incrementing the offset twice.  The logic can be
simplified, as it doesn't need to calculate how many objects can fit on the
current page; simply checking the offset for each object is enough.

Change zsmalloc init_zspage() logic to iterate through each object on
each of its pages, checking the offset to verify the object is on the
current page before linking it into the zspage.

Signed-off-by: Dan Streetman 
Cc: Minchan Kim 
---
 mm/zsmalloc.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c4a9157..03aa72f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
while (page) {
struct page *next_page;
struct link_free *link;
-   unsigned int i, objs_on_page;
+   unsigned int i = 1;
 
/*
 * page->index stores offset of first object starting
@@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
 
link = (struct link_free *)kmap_atomic(page) +
off / sizeof(*link);
-   objs_on_page = (PAGE_SIZE - off) / class->size;
 
-   for (i = 1; i <= objs_on_page; i++) {
-   off += class->size;
-   if (off < PAGE_SIZE) {
-   link->next = obj_location_to_handle(page, i);
-   link += class->size / sizeof(*link);
-   }
+   while ((off += class->size) < PAGE_SIZE) {
+   link->next = obj_location_to_handle(page, i++);
+   link += class->size / sizeof(*link);
}
 
/*
@@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
link->next = obj_location_to_handle(next_page, 0);
kunmap_atomic(link);
page = next_page;
-   off = (off + class->size) % PAGE_SIZE;
+   off %= PAGE_SIZE;
}
 }
 
-- 
1.8.3.1
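
As a quick sanity check of the rewritten loop, here is a minimal userspace
walk (PAGE_SIZE and the page count are assumed, and obj_location_to_handle()
is reduced to a printout) showing that every object starting on each page is
now visited, including the first object of the second page in the
boundary-aligned case:

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
        int size = PAGE_SIZE / 2;       /* the boundary-aligned case */
        int page, off = 0;

        for (page = 0; page < 2; page++) {
                printf("page %d: first object at off %d\n", page, off);
                /* patched loop: one link per object starting on this page */
                while ((off += size) < PAGE_SIZE)
                        printf("  link written for object at off %d\n", off);
                /* patched carry: keep only the spillover */
                off %= PAGE_SIZE;
        }
        return 0;
}

Under the old arithmetic the second page would report its first object at
offset 2048; here it correctly reports offset 0.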
