> Currently, mempools for vhost ports are assigned before the vhost device
> is added.  In some cases this may just reuse an existing mempool, but
> in others it can require creating a new one.
> 
> For multi-NUMA, the NUMA info of the vhost port is not known until a
> device is added to the port, so on multi-NUMA systems the initial NUMA
> node for the mempool is a best guess based on vswitchd affinity.
> 
> When a device is added to the vhost port, the NUMA info can be checked,
> and if the guess was incorrect, a mempool is created on the correct
> NUMA node.
> 
> For multi-NUMA, the current scheme can create a mempool on a NUMA node
> where it will not be needed, which, at least for a period of time,
> consumes extra memory on that node.
> 
> It also makes it difficult for a user to provision memory on different
> NUMA nodes, since they cannot be sure which NUMA node the initial
> mempool for a vhost port will be on.
> 
> For single NUMA, even though the mempool will be on the correct NUMA
> node, it is assigned ahead of time, and if a vhost device is never
> added, it would be holding unneeded memory.
> 
> This patch delays the creation of the mempool for a vhost port until the
> vhost device is added.
> 
> Signed-off-by: Kevin Traynor <ktray...@redhat.com>
> Reviewed-by: David Marchand <david.march...@redhat.com>
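
For readers skimming the thread, the before/after behaviour described in the
commit message can be sketched roughly as follows. This is an illustrative
Python model, not the actual OVS/DPDK C code; all names (MempoolRegistry,
VhostPort, the pool naming) are hypothetical.

```python
class MempoolRegistry:
    """Creates at most one mempool per NUMA node, on demand."""
    def __init__(self):
        self._pools = {}    # numa_node -> pool name
        self.created = []   # creation order, kept for illustration

    def get_or_create(self, numa_node):
        if numa_node not in self._pools:
            pool = f"ovs_mp_numa{numa_node}"  # hypothetical naming scheme
            self._pools[numa_node] = pool
            self.created.append(pool)
        return self._pools[numa_node]

class VhostPort:
    """Old scheme: guess NUMA at port-add time. New scheme: defer."""
    def __init__(self, registry, defer=True, guessed_numa=0):
        self.registry = registry
        self.pool = None
        if not defer:
            # Old behaviour: a best-guess mempool is created immediately,
            # before any vhost device exists.
            self.pool = self.registry.get_or_create(guessed_numa)

    def device_added(self, real_numa):
        # Only now is the device's real NUMA node known. With deferral,
        # this is the first (and only) mempool creation for the port.
        self.pool = self.registry.get_or_create(real_numa)
        return self.pool

# Old scheme on a 2-NUMA box: guess node 0, device lands on node 1,
# so two mempools end up existing for a while.
old = MempoolRegistry()
p = VhostPort(old, defer=False, guessed_numa=0)
p.device_added(real_numa=1)
print(old.created)   # ['ovs_mp_numa0', 'ovs_mp_numa1']

# New scheme: no mempool until the device arrives, and only the right one.
new = MempoolRegistry()
q = VhostPort(new, defer=True)
q.device_added(real_numa=1)
print(new.created)   # ['ovs_mp_numa1']
```

The point of the patch is the second case: memory is only committed on the
NUMA node the device actually uses, and ports that never receive a device
never allocate a mempool at all.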

Thanks for the patch Kevin, I've pushed this to master.

Thanks
Ian
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev