Re: [PATCH v3 01/11] mm/memory_hotplug: Simplify and fix check_hotplug_memory_range()

2019-07-01 Thread Michal Hocko
[Sorry for a really late response]

On Mon 27-05-19 13:11:42, David Hildenbrand wrote:
> By converting start and size to page granularity, we actually ignore
> unaligned parts within a page instead of properly bailing out with an
> error.

I do not expect any code path would ever provide an unaligned address,
and even if one did, rounding it down to a pfn doesn't sound like a
terrible thing to do. Anyway, this removes a few lines, so why not.
> 
> Cc: Andrew Morton 
> Cc: Oscar Salvador 
> Cc: Michal Hocko 
> Cc: David Hildenbrand 
> Cc: Pavel Tatashin 
> Cc: Qian Cai 
> Cc: Wei Yang 
> Cc: Arun KS 
> Cc: Mathieu Malaterre 
> Reviewed-by: Dan Williams 
> Reviewed-by: Wei Yang 
> Signed-off-by: David Hildenbrand 

Acked-by: Michal Hocko 

> ---
>  mm/memory_hotplug.c | 11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e096c987d261..762887b2358b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1051,16 +1051,11 @@ int try_online_node(int nid)
>  
>  static int check_hotplug_memory_range(u64 start, u64 size)
>  {
> - unsigned long block_sz = memory_block_size_bytes();
> - u64 block_nr_pages = block_sz >> PAGE_SHIFT;
> - u64 nr_pages = size >> PAGE_SHIFT;
> - u64 start_pfn = PFN_DOWN(start);
> -
>   /* memory range must be block size aligned */
> - if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) ||
> - !IS_ALIGNED(nr_pages, block_nr_pages)) {
> + if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
> + !IS_ALIGNED(size, memory_block_size_bytes())) {
>   pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
> -block_sz, start, size);
> +memory_block_size_bytes(), start, size);
>   return -EINVAL;
>   }
>  
> -- 
> 2.20.1
> 

-- 
Michal Hocko
SUSE Labs


Re: [PATCH v3 01/11] mm/memory_hotplug: Simplify and fix check_hotplug_memory_range()

2019-06-10 Thread Oscar Salvador
On Mon, May 27, 2019 at 01:11:42PM +0200, David Hildenbrand wrote:
> By converting start and size to page granularity, we actually ignore
> unaligned parts within a page instead of properly bailing out with an
> error.
> 
> Cc: Andrew Morton 
> Cc: Oscar Salvador 
> Cc: Michal Hocko 
> Cc: David Hildenbrand 
> Cc: Pavel Tatashin 
> Cc: Qian Cai 
> Cc: Wei Yang 
> Cc: Arun KS 
> Cc: Mathieu Malaterre 
> Reviewed-by: Dan Williams 
> Reviewed-by: Wei Yang 
> Signed-off-by: David Hildenbrand 

Reviewed-by: Oscar Salvador 

-- 
Oscar Salvador
SUSE L3


Re: [PATCH v3 01/11] mm/memory_hotplug: Simplify and fix check_hotplug_memory_range()

2019-05-30 Thread Pavel Tatashin
On Mon, May 27, 2019 at 7:12 AM David Hildenbrand  wrote:
>
> By converting start and size to page granularity, we actually ignore
> unaligned parts within a page instead of properly bailing out with an
> error.
>
> Cc: Andrew Morton 
> Cc: Oscar Salvador 
> Cc: Michal Hocko 
> Cc: David Hildenbrand 
> Cc: Pavel Tatashin 
> Cc: Qian Cai 
> Cc: Wei Yang 
> Cc: Arun KS 
> Cc: Mathieu Malaterre 
> Reviewed-by: Dan Williams 
> Reviewed-by: Wei Yang 
> Signed-off-by: David Hildenbrand 

Reviewed-by: Pavel Tatashin 


[PATCH v3 01/11] mm/memory_hotplug: Simplify and fix check_hotplug_memory_range()

2019-05-27 Thread David Hildenbrand
By converting start and size to page granularity, we actually ignore
unaligned parts within a page instead of properly bailing out with an
error.

Cc: Andrew Morton 
Cc: Oscar Salvador 
Cc: Michal Hocko 
Cc: David Hildenbrand 
Cc: Pavel Tatashin 
Cc: Qian Cai 
Cc: Wei Yang 
Cc: Arun KS 
Cc: Mathieu Malaterre 
Reviewed-by: Dan Williams 
Reviewed-by: Wei Yang 
Signed-off-by: David Hildenbrand 
---
 mm/memory_hotplug.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e096c987d261..762887b2358b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1051,16 +1051,11 @@ int try_online_node(int nid)
 
 static int check_hotplug_memory_range(u64 start, u64 size)
 {
-   unsigned long block_sz = memory_block_size_bytes();
-   u64 block_nr_pages = block_sz >> PAGE_SHIFT;
-   u64 nr_pages = size >> PAGE_SHIFT;
-   u64 start_pfn = PFN_DOWN(start);
-
/* memory range must be block size aligned */
-   if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) ||
-   !IS_ALIGNED(nr_pages, block_nr_pages)) {
+   if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
+   !IS_ALIGNED(size, memory_block_size_bytes())) {
	pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx",
-  block_sz, start, size);
+  memory_block_size_bytes(), start, size);
return -EINVAL;
}
 
-- 
2.20.1
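

[Editorial note] For illustration, below is a minimal standalone sketch (not part of the
patch, and written as ordinary userspace C rather than kernel code) contrasting the old
page-granularity check with the new byte-granularity one. PAGE_SHIFT, the PFN_DOWN() and
IS_ALIGNED() helpers, and the 128 MiB memory block size are assumptions made only for
this example.

/*
 * Standalone sketch (not kernel code): shows how converting start/size to
 * page granularity first can silently accept a byte-misaligned range that
 * the byte-granularity check rejects.  PAGE_SHIFT, PFN_DOWN(), IS_ALIGNED()
 * and the 128 MiB block size are assumptions for this illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4 KiB pages */
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)
#define IS_ALIGNED(x, a)	((((uint64_t)(x)) & ((uint64_t)(a) - 1)) == 0)

static const uint64_t block_sz = 128ULL << 20;	/* assume 128 MiB memory blocks */

/* Old logic: convert to pages first, dropping any sub-page offset. */
static bool old_check(uint64_t start, uint64_t size)
{
	uint64_t block_nr_pages = block_sz >> PAGE_SHIFT;
	uint64_t nr_pages = size >> PAGE_SHIFT;
	uint64_t start_pfn = PFN_DOWN(start);

	return nr_pages && IS_ALIGNED(start_pfn, block_nr_pages) &&
	       IS_ALIGNED(nr_pages, block_nr_pages);
}

/* New logic: check the raw byte values directly. */
static bool new_check(uint64_t start, uint64_t size)
{
	return size && IS_ALIGNED(start, block_sz) &&
	       IS_ALIGNED(size, block_sz);
}

int main(void)
{
	/* start is 0x100 bytes into a page; size is exactly one block */
	uint64_t start = (4ULL << 30) + 0x100;
	uint64_t size = block_sz;

	printf("old check accepts: %d\n", old_check(start, size));	/* prints 1 */
	printf("new check accepts: %d\n", new_check(start, size));	/* prints 0 */
	return 0;
}

With the assumed values, the old check rounds the misaligned start down to a
block-aligned pfn and accepts the range, while the new check bails out, which is
the behaviour the patch description calls "properly bailing out with an error".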