On Mon, Jun 29, 2015 at 02:08:20PM +0200, Igor Mammedov wrote:
> On Mon, 29 Jun 2015 13:50:25 +0530
> Bharata B Rao <bhar...@linux.vnet.ibm.com> wrote:
> 
> > Start storing the (start_addr, end_addr) of the pc-dimm memory
> > in corresponding numa_info[node] so that this information can be used
> > to lookup node by address.
> > 
> > Signed-off-by: Bharata B Rao <bhar...@linux.vnet.ibm.com>
> Reviewed-by: Igor Mammedov <imamm...@redhat.com>
> 
> > ---
> >  hw/mem/pc-dimm.c      |  4 ++++
> >  include/sysemu/numa.h | 10 ++++++++++
> >  numa.c                | 26 ++++++++++++++++++++++++++
> >  3 files changed, 40 insertions(+)
> > 
> > diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c
> > index 98971b7..bb04862 100644
> > --- a/hw/mem/pc-dimm.c
> > +++ b/hw/mem/pc-dimm.c
> > @@ -97,6 +97,7 @@ void pc_dimm_memory_plug(DeviceState *dev, MemoryHotplugState *hpms,
> >  
> >      memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);
> >      vmstate_register_ram(mr, dev);
> > +    numa_set_mem_node_id(addr, memory_region_size(mr), dimm->node);
> >  
> >  out:
> >      error_propagate(errp, local_err);
> > @@ -105,6 +106,9 @@ out:
> >  void pc_dimm_memory_unplug(DeviceState *dev, MemoryHotplugState *hpms,
> >                             MemoryRegion *mr)
> >  {
> > +    PCDIMMDevice *dimm = PC_DIMM(dev);
> > +
> > +    numa_unset_mem_node_id(dimm->addr, memory_region_size(mr), dimm->node);
> >      memory_region_del_subregion(&hpms->mr, mr);
> >      vmstate_unregister_ram(mr, dev);
> >  }
> > diff --git a/include/sysemu/numa.h b/include/sysemu/numa.h
> > index 6523b4d..7176364 100644
> > --- a/include/sysemu/numa.h
> > +++ b/include/sysemu/numa.h
> > @@ -10,16 +10,26 @@
> >  
> >  extern int nb_numa_nodes;   /* Number of NUMA nodes */
> >  
> > +struct numa_addr_range {
> > +    ram_addr_t mem_start;
> > +    ram_addr_t mem_end;
> > +    QLIST_ENTRY(numa_addr_range) entry;
> > +};
> > +
> >  typedef struct node_info {
> >      uint64_t node_mem;
> >      DECLARE_BITMAP(node_cpu, MAX_CPUMASK_BITS);
> >      struct HostMemoryBackend *node_memdev;
> >      bool present;
> > +    QLIST_HEAD(, numa_addr_range) addr; /* List to store address ranges */
> >  } NodeInfo;
> > +
> >  extern NodeInfo numa_info[MAX_NODES];
> >  void parse_numa_opts(MachineClass *mc);
> >  void numa_post_machine_init(void);
> >  void query_numa_node_mem(uint64_t node_mem[]);
> >  extern QemuOptsList qemu_numa_opts;
> > +void numa_set_mem_node_id(ram_addr_t addr, uint64_t size, uint32_t node);
> > +void numa_unset_mem_node_id(ram_addr_t addr, uint64_t size, uint32_t node);
> >  
> >  #endif
> > diff --git a/numa.c b/numa.c
> > index 91fc6c1..116d1fb 100644
> > --- a/numa.c
> > +++ b/numa.c
> > @@ -52,6 +52,28 @@ static int max_numa_nodeid; /* Highest specified NUMA node ID, plus one.
> >  int nb_numa_nodes;
> >  NodeInfo numa_info[MAX_NODES];
> >  
> > +void numa_set_mem_node_id(ram_addr_t addr, uint64_t size, uint32_t node)
> > +{
> > +    struct numa_addr_range *range = g_malloc0(sizeof(*range));
> > +
> > +    range->mem_start = addr;
> > +    range->mem_end = addr + size - 1;
> nit:
>  as a patch on top of it, add asserts that check for overflow, pls

You suggested g_assert(size) in the previous version.

However, size can be zero when this API is called for boot-time memory,
and I have taken care of that case in the next patch (5/6).

For pc-dimm memory, the size can never be zero.

So do you still think overflow is possible?
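
For reference, the check Igor is asking for would guard the
`addr + size - 1` computation above. A minimal sketch (this is an
illustration, not the patch; `range_end` is a hypothetical helper and
`ram_addr_t` is typedef'd locally as a stand-in):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t ram_addr_t; /* stand-in for QEMU's ram_addr_t */

/* Compute the inclusive end address of a range, asserting that size is
 * non-zero (otherwise mem_end would underflow) and that addr + size does
 * not wrap around the 64-bit address space. */
static ram_addr_t range_end(ram_addr_t addr, uint64_t size)
{
    assert(size != 0);          /* addr + size - 1 would underflow */
    assert(addr + size > addr); /* addr + size must not wrap */
    return addr + size - 1;
}
```

For pc-dimm memory both asserts should hold trivially; the boot-time
zero-size case mentioned above would have to be handled by the caller.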

Regards,
Bharata.
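
For context, the lookup that motivates storing these ranges (finding a
node by address, per the commit message) could look roughly like the
sketch below. The types are simplified stand-ins for the patch's
QLIST-based `numa_addr_range` list, and `lookup_node_by_addr` is a
hypothetical name, not QEMU's actual API:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t ram_addr_t;

/* Simplified stand-in for the patch's per-node address-range list. */
struct numa_addr_range {
    ram_addr_t mem_start;
    ram_addr_t mem_end;               /* inclusive, as in the patch */
    struct numa_addr_range *next;
};

struct node_info {
    struct numa_addr_range *ranges;   /* head of this node's range list */
};

/* Walk every node's ranges; return the index of the node whose range
 * contains addr, or -1 if no node claims it. */
static int lookup_node_by_addr(struct node_info *nodes, int nb_nodes,
                               ram_addr_t addr)
{
    for (int i = 0; i < nb_nodes; i++) {
        for (struct numa_addr_range *r = nodes[i].ranges; r; r = r->next) {
            if (addr >= r->mem_start && addr <= r->mem_end) {
                return i;
            }
        }
    }
    return -1;
}
```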