Re: [PATCH v2] mm/vmalloc: terminate searching since one node found

2017-07-18 Thread Zhaoyang Huang
On Mon, Jul 17, 2017 at 4:29 PM, Michal Hocko  wrote:
> On Mon 17-07-17 15:27:31, Zhaoyang Huang wrote:
>> From: Zhaoyang Huang 
>>
>> There is no need to find the very beginning of the free area within
>> alloc_vmap_area; this can be decided by examining each node during the walk.
>>
>> With the current approach, the worst case is that the starting node found
>> for searching the 'vmap_area_list' is close to 'vstart', while the final
>> available area is near the tail of the list (especially for the left branch).
>> This commit makes the list search start at the first available node, which
>> saves the time of walking the rb tree (1) and walking the list (2).
>>
>>         vmap_area_root
>>            /      \
>>       tmp_next     U
>>          /   (1)
>>        tmp
>>         /
>>       ...
>>        /
>>     first (current approach)
>>
>> vmap_area_list->...->first->...->tmp->tmp_next
>>                            (2)
>
> This still doesn't answer questions posted for your previous version
> http://lkml.kernel.org/r/20170717070024.gc7...@dhcp22.suse.cz
>
> Please note that it is really important to describe _why_ the patch is
> needed. What has changed can be easily read in the diff...
>
I did some tests on an ARM64 platform and found that the patch brings neither
a noticeable improvement nor a regression for vmalloc. On further investigation,
I found that the vmalloc area on a 64-bit arch is so huge that the search never
reaches the end of the vmap_area_list; newly allocated areas just keep growing
upward (so the rb tree seems to get no chance to help). I will try to find a
32-bit platform for more testing.

>> Signed-off-by: Zhaoyang Huang 
>> ---
>>  mm/vmalloc.c | 7 +++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 34a1c3e..f833e07 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>>
>>   while (n) {
>>   struct vmap_area *tmp;
>> + struct vmap_area *tmp_next;
>>   tmp = rb_entry(n, struct vmap_area, rb_node);
>> + tmp_next = list_next_entry(tmp, list);
>>   if (tmp->va_end >= addr) {
>>   first = tmp;
>> + if (ALIGN(tmp->va_end, align) + size
>> + < tmp_next->va_start) {
>> + addr = ALIGN(tmp->va_end, align);
>> + goto found;
>> + }
>>   if (tmp->va_start <= addr)
>>   break;
>>   n = n->rb_left;
>> --
>> 1.9.1
>>
>
> --
> Michal Hocko
> SUSE Labs

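[For reference, a toy user-space model of the two phases the commit message
labels (1) and (2): find a starting area, then walk the sorted list until the
hole in front of the current area fits the request. The kernel does phase (1)
with the rbtree (vmap_area_root) in O(log n); this sketch collapses it into a
linear skip, and every name and address below is made up for illustration.]

#include <stdio.h>

#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

/* stand-in for struct vmap_area plus vmap_area_list (sorted by address) */
struct area {
        unsigned long va_start, va_end;
        struct area *next;
};

static unsigned long find_hole(struct area *head, unsigned long vstart,
                               unsigned long vend, unsigned long size,
                               unsigned long align)
{
        unsigned long addr = ALIGN(vstart, align);
        struct area *a = head;

        while (a && a->va_end < addr)                   /* (1): rbtree walk in the kernel */
                a = a->next;

        while (a && addr + size > a->va_start) {        /* (2): list walk */
                addr = ALIGN(a->va_end, align);
                a = a->next;
        }
        return (addr + size <= vend) ? addr : 0;        /* 0 means no room */
}

int main(void)
{
        /* three busy areas; the first hole that can hold 0x2000 bytes is
         * the one between 0x4000 and 0x9000 */
        struct area c = { 0x9000, 0xa000, NULL };
        struct area b = { 0x3000, 0x4000, &c };
        struct area a = { 0x1000, 0x2000, &b };

        printf("hole at 0x%lx\n",
               find_hole(&a, 0x1000, 0x10000, 0x2000, 0x1000));  /* prints 0x4000 */
        return 0;
}

[On a 64-bit configuration vend is enormous, so phase (2) in practice just runs
past the last area and keeps handing out addresses above it, which seems
consistent with the observation above.]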

Re: [PATCH v2] mm/vmalloc: terminate searching since one node found

2017-07-17 Thread zijun_hu
On 07/17/2017 04:45 PM, zijun_hu wrote:
> On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
>> There is no need to find the very beginning of the free area within
>> alloc_vmap_area; this can be decided by examining each node during the walk.
>>
>> With the current approach, the worst case is that the starting node found
>> for searching the 'vmap_area_list' is close to 'vstart', while the final
>> available area is near the tail of the list (especially for the left branch).
>> This commit makes the list search start at the first available node, which
>> saves the time of walking the rb tree (1) and walking the list (2).
>>
>>         vmap_area_root
>>            /      \
>>       tmp_next     U
>>          /   (1)
>>        tmp
>>         /
>>       ...
>>        /
>>     first (current approach)
>>
>> vmap_area_list->...->first->...->tmp->tmp_next
> 
> the original code ensures the following two points:
> A, the resulting vmap_area has the lowest available address in the range
> [vstart, vend)
> B, it maintains the cached vmap_area node correctly, which speeds up
> subsequent allocations
> I suspect this patch may break both of the above points
>> (2)
>>
>> Signed-off-by: Zhaoyang Huang 
>> ---
>>  mm/vmalloc.c | 7 +++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 34a1c3e..f833e07 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>>
>> while (n) {
>> struct vmap_area *tmp;
>> +   struct vmap_area *tmp_next;
>> tmp = rb_entry(n, struct vmap_area, rb_node);
>> +   tmp_next = list_next_entry(tmp, list);
>> if (tmp->va_end >= addr) {
>> first = tmp;
>> +   if (ALIGN(tmp->va_end, align) + size
>> +   < tmp_next->va_start) {
>> +   addr = ALIGN(tmp->va_end, align);
>> +   goto found;
>> +   }
> is the target vmap_area the lowest available one if the goto occurs?
> if the goto occurs, it may bypass the later update of the cached vmap_area
> info / cached_hole_size
  I also think the target area may possibly not lie within the required range
[vstart, vend).
>> if (tmp->va_start <= addr)
>> break;
>> n = n->rb_left;
>> --
>> 1.9.1
>>
> 

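[To make point A concrete: a small user-space contrast, under invented names
and addresses, between the bottom-up list walk and the proposed early exit
taken during the tree descent. The tree is hand-built and unbalanced; the point
is only that the node where the descent starts can expose a gap high up in the
address space, so returning there is not guaranteed to be the lowest fitting
hole, and nothing checks the gap against [vstart, vend) either.]

#include <stdio.h>

#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

struct area {
        unsigned long va_start, va_end;
        struct area *left, *right;      /* stand-in for the rbtree node */
        struct area *next;              /* stand-in for vmap_area_list */
};

/* original behaviour: walk the sorted list and take the first hole that fits */
static unsigned long lowest_hole(struct area *head, unsigned long vstart,
                                 unsigned long size, unsigned long align)
{
        unsigned long addr = ALIGN(vstart, align);
        struct area *a;

        for (a = head; a; a = a->next) {
                if (addr + size <= a->va_start)
                        return addr;                    /* hole in front of 'a' fits */
                if (a->va_end > addr)
                        addr = ALIGN(a->va_end, align);
        }
        return addr;                                    /* hole above the last area */
}

/* proposed behaviour: during the descent, grab the gap behind the current node
 * as soon as it is big enough (same shape as the posted hunk); unlike the hunk,
 * this sketch skips the shortcut on the last node, where following the list
 * further would not give a valid area */
static unsigned long early_exit_hole(struct area *root, unsigned long vstart,
                                     unsigned long size, unsigned long align)
{
        unsigned long addr = ALIGN(vstart, align);
        struct area *tmp = root;

        while (tmp) {
                if (tmp->va_end >= addr) {
                        if (tmp->next &&
                            ALIGN(tmp->va_end, align) + size < tmp->next->va_start)
                                return ALIGN(tmp->va_end, align);
                        if (tmp->va_start <= addr)
                                break;
                        tmp = tmp->left;
                } else {
                        tmp = tmp->right;
                }
        }
        return 0;       /* not reached with the layout below */
}

int main(void)
{
        /* busy: [0x1000,0x2000) [0x3000,0x4000) [0x8000,0x9000) [0xb000,0xc000)
         * the lowest hole that can hold 0x1000 bytes starts at 0x2000 */
        struct area d = { 0xb000, 0xc000, NULL, NULL, NULL };
        struct area c = { 0x8000, 0x9000, NULL, NULL, &d };
        struct area b = { 0x3000, 0x4000, NULL, NULL, &c };
        struct area a = { 0x1000, 0x2000, NULL, NULL, &b };

        c.left = &b;  c.right = &d;  b.left = &a;       /* descent starts at c */

        printf("list walk : 0x%lx\n", lowest_hole(&a, 0x1000, 0x1000, 0x1000));
        printf("early exit: 0x%lx\n", early_exit_hole(&c, 0x1000, 0x1000, 0x1000));
        return 0;
}

[With this layout the list walk returns 0x2000 while the early exit returns
0x9000, even though both holes are big enough.]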



Re: [PATCH v2] mm/vmalloc: terminate searching since one node found

2017-07-17 Thread zijun_hu
On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
> There is no need to find the very beginning of the free area within
> alloc_vmap_area; this can be decided by examining each node during the walk.
>
> With the current approach, the worst case is that the starting node found
> for searching the 'vmap_area_list' is close to 'vstart', while the final
> available area is near the tail of the list (especially for the left branch).
> This commit makes the list search start at the first available node, which
> saves the time of walking the rb tree (1) and walking the list (2).
>
>         vmap_area_root
>            /      \
>       tmp_next     U
>          /   (1)
>        tmp
>         /
>       ...
>        /
>     first (current approach)
>
> vmap_area_list->...->first->...->tmp->tmp_next

the original code ensures the following two points:
A, the resulting vmap_area has the lowest available address in the range
[vstart, vend)
B, it maintains the cached vmap_area node correctly, which speeds up
subsequent allocations
I suspect this patch may break both of the above points
> (2)
> 
> Signed-off-by: Zhaoyang Huang 
> ---
>  mm/vmalloc.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 34a1c3e..f833e07 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> 
> while (n) {
> struct vmap_area *tmp;
> +   struct vmap_area *tmp_next;
> tmp = rb_entry(n, struct vmap_area, rb_node);
> +   tmp_next = list_next_entry(tmp, list);
> if (tmp->va_end >= addr) {
> first = tmp;
> +   if (ALIGN(tmp->va_end, align) + size
> +   < tmp_next->va_start) {
> +   addr = ALIGN(tmp->va_end, align);
> +   goto found;
> +   }
is the target vmap_area the lowest available one if the goto occurs?
if the goto occurs, it may bypass the later update of the cached vmap_area
info / cached_hole_size
> if (tmp->va_start <= addr)
> break;
> n = n->rb_left;
> --
> 1.9.1
> 

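[For context on point B: the cached_hole_size bookkeeping lives a little
further down in alloc_vmap_area(), in the list walk that the proposed goto
would skip. Roughly, simplified from the vmalloc code of that era, so treat it
as a sketch rather than a quotation:]

        /* from the starting point, walk areas until a suitable hole is found */
        while (addr + size > first->va_start && addr + size <= vend) {
                if (addr + cached_hole_size < first->va_start)
                        cached_hole_size = first->va_start - addr;  /* biggest hole skipped so far */
                addr = ALIGN(first->va_end, align);
                if (addr + size < addr)
                        goto overflow;

                if (list_is_last(&first->list, &vmap_area_list))
                        goto found;

                first = list_next_entry(first, list);
        }

[Jumping straight to 'found' before this loop leaves cached_hole_size as it
was, which is presumably what is meant above by bypassing the cached vmap_area
bookkeeping: the cache-validity check at the top of the function consults that
value on later allocations.]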



Re: [PATCH v2] mm/vmalloc: terminate searching since one node found

2017-07-17 Thread Michal Hocko
On Mon 17-07-17 15:27:31, Zhaoyang Huang wrote:
> From: Zhaoyang Huang 
> 
> There is no need to find the very beginning of the free area within
> alloc_vmap_area; this can be decided by examining each node during the walk.
>
> With the current approach, the worst case is that the starting node found
> for searching the 'vmap_area_list' is close to 'vstart', while the final
> available area is near the tail of the list (especially for the left branch).
> This commit makes the list search start at the first available node, which
> saves the time of walking the rb tree (1) and walking the list (2).
>
>         vmap_area_root
>            /      \
>       tmp_next     U
>          /   (1)
>        tmp
>         /
>       ...
>        /
>     first (current approach)
>
> vmap_area_list->...->first->...->tmp->tmp_next
>                            (2)

This still doesn't answer questions posted for your previous version
http://lkml.kernel.org/r/20170717070024.gc7...@dhcp22.suse.cz

Please note that it is really important to describe _why_ the patch is
needed. What has changed can be easily read in the diff...

> Signed-off-by: Zhaoyang Huang 
> ---
>  mm/vmalloc.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 34a1c3e..f833e07 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  
>   while (n) {
>   struct vmap_area *tmp;
> + struct vmap_area *tmp_next;
>   tmp = rb_entry(n, struct vmap_area, rb_node);
> + tmp_next = list_next_entry(tmp, list);
>   if (tmp->va_end >= addr) {
>   first = tmp;
> + if (ALIGN(tmp->va_end, align) + size
> + < tmp_next->va_start) {
> + addr = ALIGN(tmp->va_end, align);
> + goto found;
> + }
>   if (tmp->va_start <= addr)
>   break;
>   n = n->rb_left;
> -- 
> 1.9.1
> 

-- 
Michal Hocko
SUSE Labs


[PATCH v2] mm/vmalloc: terminate searching since one node found

2017-07-17 Thread Zhaoyang Huang
From: Zhaoyang Huang 

There is no need to find the very beginning of the free area within
alloc_vmap_area; this can be decided by examining each node during the walk.

With the current approach, the worst case is that the starting node found
for searching the 'vmap_area_list' is close to 'vstart', while the final
available area is near the tail of the list (especially for the left branch).
This commit makes the list search start at the first available node, which
saves the time of walking the rb tree (1) and walking the list (2).

        vmap_area_root
           /      \
      tmp_next     U
         /   (1)
       tmp
        /
      ...
       /
    first (current approach)

vmap_area_list->...->first->...->tmp->tmp_next
                           (2)

Signed-off-by: Zhaoyang Huang 
---
 mm/vmalloc.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 34a1c3e..f833e07 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -459,9 +459,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
while (n) {
struct vmap_area *tmp;
+   struct vmap_area *tmp_next;
tmp = rb_entry(n, struct vmap_area, rb_node);
+   tmp_next = list_next_entry(tmp, list);
if (tmp->va_end >= addr) {
first = tmp;
+   if (ALIGN(tmp->va_end, align) + size
+   < tmp_next->va_start) {
+   addr = ALIGN(tmp->va_end, align);
+   goto found;
+   }
if (tmp->va_start <= addr)
break;
n = n->rb_left;
-- 
1.9.1

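[The test added by the hunk only asks whether the gap between tmp and its list
successor can hold an aligned allocation; a stand-alone check of the arithmetic
with made-up numbers:]

#include <stdio.h>

#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned long va_end = 0x2340;          /* end of 'tmp' (invented) */
        unsigned long next_start = 0x4000;      /* start of 'tmp_next' (invented) */
        unsigned long size = 0x1000, align = 0x100;
        unsigned long addr = ALIGN(va_end, align);      /* 0x2400 */

        /* same shape as the test added by the patch */
        if (addr + size < next_start)
                printf("gap fits: would return addr = 0x%lx\n", addr);
        else
                printf("gap too small, keep searching\n");
        return 0;
}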