[Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{,s}

2015-10-23 Thread Julien Grall
The last parameter of alloc_domheap_page{,s} contains the memory flags,
not the order of the allocation.

Use 0 for the call in p2m_pod_set_cache_target as it was before
1069d63c5ef2510d08b83b2171af660e5bb18c63 "x86/mm/p2m: use defines for
page sizes". Note that PAGE_ORDER_4K is also equal to 0 so the behavior
stays the same.

For the call in p2m_pod_offline_or_broken_replace we want to allocate
the new page on the same NUMA node as the previous page, so retrieve the
NUMA node and pass it in the memory flags.

Signed-off-by: Julien Grall 

---

Note that the patch has only been build-tested.

Cc: George Dunlap 
Cc: Keir Fraser 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Dario Faggioli 

Changes in v2:
- Change the behavior of p2m_pod_offline_or_broken_replace
to allocate the new page on the same numa node as the previous
page.
---
 xen/arch/x86/mm/p2m-pod.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 901da37..acd85ea 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -222,7 +222,7 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
 else
 order = PAGE_ORDER_4K;
 retry:
-page = alloc_domheap_pages(d, order, PAGE_ORDER_4K);
+page = alloc_domheap_pages(d, order, 0);
 if ( unlikely(page == NULL) )
 {
 if ( order == PAGE_ORDER_2M )
@@ -471,13 +471,14 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
 {
 struct domain *d;
 struct p2m_domain *p2m;
+nodeid_t node = phys_to_nid(page_to_maddr(p));
 
 if ( !(d = page_get_owner(p)) || !(p2m = p2m_get_hostp2m(d)) )
 return;
 
 free_domheap_page(p);
 
-p = alloc_domheap_page(d, PAGE_ORDER_4K);
+p = alloc_domheap_page(d, MEMF_node(node));
 if ( unlikely(!p) )
 return;
 
-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}

2015-10-23 Thread Dario Faggioli
On Fri, 2015-10-23 at 11:33 +0100, Julien Grall wrote:
> The last parameter of alloc_domheap_page{s,} contain the memory flags
> and
> not the order of the allocation.
> 
> Use 0 for the call in p2m_pod_set_cache_target as it was before
> 1069d63c5ef2510d08b83b2171af660e5bb18c63 "x86/mm/p2m: use defines for
> page sizes". Note that PAGE_ORDER_4K is also equal to 0 so the
> behavior
> stays the same.
> 
> For the call in p2m_pod_offline_or_broken_replace we want to allocate
> the new page on the same numa node as the previous page. So retrieve
> the
> numa node and pass it in the memory flags.
> 
> Signed-off-by: Julien Grall 
> 
> ---
> 
> Note that the patch has only been build tested.
> 
I've done some basic testing. That means I:
 - created an HVM guest with memory < maxmem
 - played with `xl mem-set' and `xl mem-max' on it
 - local migrated it
 - played with `xl mem-set' and `xl mem-max' on it again
 - shut it down

All done on a NUMA host, with memory dancing (during the 'play' phases)
up and down the amount of RAM present in each NUMA node.

I'm not sure how I should trigger and test memory hotunplug, nor
whether my testbox supports it.

Since it seems that memory hotunplug is what really needed testing, I'm
not sure it's appropriate to add the following tag:

Tested-by: Dario Faggioli 

but I'll let you guys (Jan, mainly, I guess) decide. If the above is
deemed enough, feel free to stick it there, if not, fine anyway. :-)

Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





Re: [Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}

2015-10-26 Thread George Dunlap
On 23/10/15 11:33, Julien Grall wrote:
> The last parameter of alloc_domheap_page{s,} contain the memory flags and
> not the order of the allocation.
> 
> Use 0 for the call in p2m_pod_set_cache_target as it was before
> 1069d63c5ef2510d08b83b2171af660e5bb18c63 "x86/mm/p2m: use defines for
> page sizes". Note that PAGE_ORDER_4K is also equal to 0 so the behavior
> stays the same.
> 
> For the call in p2m_pod_offline_or_broken_replace we want to allocate
> the new page on the same numa node as the previous page. So retrieve the
> numa node and pass it in the memory flags.
> 
> Signed-off-by: Julien Grall 

Acked-by: George Dunlap 

> 
> ---
> 
> Note that the patch has only been build tested.

It would be nice if we could properly test the codepath in question
before checking it in, but we have lots of time before the release for
people to find this sort of thing.

 -George




Re: [Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}

2015-10-26 Thread Jan Beulich
>>> On 26.10.15 at 11:29,  wrote:
> On 23/10/15 11:33, Julien Grall wrote:
>> Note that the patch has only been build tested.
> 
> It would be nice if we could properly test the codepath in question
> before checking it in, but we have lots of time before the release for
> people to find this sort of thing.

To be honest, I'd rather put it in right away. There's no strict need
to backport it, and in case there is a problem we can always fix/revert
in -unstable. Most of the changes to memory hot(un)plug paths go in
that way, due to the rarity of systems on which to test them.

Jan




Re: [Xen-devel] [PATCH v2] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{, s}

2015-10-26 Thread George Dunlap
On 26/10/15 10:40, Jan Beulich wrote:
>>>> On 26.10.15 at 11:29,  wrote:
>> On 23/10/15 11:33, Julien Grall wrote:
>>> Note that the patch has only been build tested.
>>
>> It would be nice if we could properly test the codepath in question
>> before checking it in, but we have lots of time before the release for
>> people to find this sort of thing.
> 
> To be honest, I'd rather put it in right away. There's no strict need
> to backport it, and in case there is a problem we can always fix/revert
> in -unstable. Most of the changes to memory hot (un)plug paths go
> in that way, due to the rareness of systems to test such on.

Indeed, that's what I meant -- we should check it in right away, to
maximize the possibility that if there is a bug, it will be caught in
all the testing (both ad-hoc and explicit) that will happen between now
and the next release.

 -George
