Re: [Qemu-devel] [PATCH for 3.1 v2] spapr: Fix ibm,max-associativity-domains property number of nodes

2018-11-20 Thread David Gibson
On Mon, Nov 19, 2018 at 11:23:05AM -0500, Serhii Popovych wrote:
> Laurent Vivier reported an off-by-one error: the maximum number of
> NUMA nodes provided by qemu-kvm is one less than required by the
> description of the "ibm,max-associativity-domains" property in
> LoPAPR.
> 
> It appears that I misread the LoPAPR description of this property,
> assuming it gives the last valid domain (NUMA node here) rather than
> the maximum number of domains.
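
(For orientation, this is roughly the surrounding code in
spapr_dt_rtas(), as added by the commit named in Fixes: below; the
first cell gives the number of associativity levels, the last the
domain count at the NUMA node level:

    uint32_t maxdomains[] = {
        cpu_to_be32(4),
        cpu_to_be32(0),
        cpu_to_be32(0),
        cpu_to_be32(0),
        cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0), /* last index, not a count */
    };
    _FDT(fdt_setprop(fdt, rtas, "ibm,max-associativity-domains",
                     maxdomains, sizeof(maxdomains)));

With three configured nodes the last cell was 2, so the guest sized
its possible-node map to two nodes.)
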
> 
>   ### Before hot-add
> 
>   (qemu) info numa
>   3 nodes
>   node 0 cpus: 0
>   node 0 size: 0 MB
>   node 0 plugged: 0 MB
>   node 1 cpus:
>   node 1 size: 1024 MB
>   node 1 plugged: 0 MB
>   node 2 cpus:
>   node 2 size: 0 MB
>   node 2 plugged: 0 MB
> 
>   $ numactl -H
>   available: 2 nodes (0-1)
>   node 0 cpus: 0
>   node 0 size: 0 MB
>   node 0 free: 0 MB
>   node 1 cpus:
>   node 1 size: 999 MB
>   node 1 free: 658 MB
>   node distances:
>   node   0   1
> 0:  10  40
> 1:  40  10
> 
>   ### Hot-add
> 
>   (qemu) object_add memory-backend-ram,id=mem0,size=1G
>   (qemu) device_add pc-dimm,id=dimm1,memdev=mem0,node=2
>   (qemu) [   87.704898] pseries-hotplug-mem: Attempting to hot-add 4 ...
>   
>   [   87.705128] lpar: Attempting to resize HPT to shift 21
>   ... 
> 
>   ### After hot-add
> 
>   (qemu) info numa
>   3 nodes
>   node 0 cpus: 0
>   node 0 size: 0 MB
>   node 0 plugged: 0 MB
>   node 1 cpus:
>   node 1 size: 1024 MB
>   node 1 plugged: 0 MB
>   node 2 cpus:
>   node 2 size: 1024 MB
>   node 2 plugged: 1024 MB
> 
>   $ numactl -H
>   available: 2 nodes (0-1)
>   ^^^ Still only two nodes (and memory hot-added to node 0 below)
>   node 0 cpus: 0
>   node 0 size: 1024 MB
>   node 0 free: 1021 MB
>   node 1 cpus:
>   node 1 size: 999 MB
>   node 1 free: 658 MB
>   node distances:
>   node   0   1
> 0:  10  40
> 1:  40  10
> 
> With the fix applied, numactl(8) reports 3 nodes available and memory
> is plugged into node 2 as expected.
> 
> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
> Reported-by: Laurent Vivier 
> Signed-off-by: Serhii Popovych 
> ---
> v2
>   Remove the now-unneeded ?: expression, previously used to avoid
>   presenting -1 as the NUMA node count, which caused Linux guests to
>   hang on boot.
> 
>  hw/ppc/spapr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 7afd1a1..a7171fb 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1033,7 +1033,7 @@ static void spapr_dt_rtas(sPAPRMachineState *spapr, void *fdt)
>  cpu_to_be32(0),
>  cpu_to_be32(0),
>  cpu_to_be32(0),
> -cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
> +cpu_to_be32(nb_numa_nodes),

Sorry, I know this got discussed in the thread on the earlier version,
but I'd prefer we leave the conditional expression in here.

qemu makes a distinction between "non NUMA" (nb_numa_nodes == 0) and
"NUMA with one node" (nb_numa_nodes == 1).  But from a PAPR guest's
point of view these are equivalent.  I don't want to present two
different cases to the guest when we don't need to, so even though the
guest can handle it, I'd prefer we put a '1' here for both the
nb_numa_nodes == 0 and nb_numa_nodes == 1 cases.
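
Concretely, something like this (a sketch of the shape I mean, not a
tested patch):

    cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 1),

i.e. report one domain for both the non-NUMA and the single-node
case, and nb_numa_nodes otherwise.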

>  };
>  
>  _FDT(rtas = fdt_add_subnode(fdt, 0, "rtas"));
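
For completeness: the value the guest actually sees can be sanity
checked from inside it, e.g. with lsprop from powerpc-utils (or any
tool that can dump /proc/device-tree):

    $ lsprop /proc/device-tree/rtas/ibm,max-associativity-domains

With the fix, the last cell should match the number of configured
nodes.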

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson



