----- Original Message -----
> From: "Chris Friesen" <chris.frie...@windriver.com>
> To: openstack@lists.openstack.org
> 
> On 02/02/2015 07:18 PM, Li, Chen wrote:
> 
> >          2015-02-03 09:15:43.921 TRACE nova.compute.manager [instance:
> >          833cfac5-4fac-438b-acff-579b41ee5729]   File
> >          "/usr/lib/python2.7/dist-packages/libvirt.py", line 993, in
> >          createWithFlags
> >          2015-02-03 09:15:43.921 TRACE nova.compute.manager [instance:
> >          833cfac5-4fac-438b-acff-579b41ee5729]     if ret == -1: raise
> >          libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> >          2015-02-03 09:15:43.921 TRACE nova.compute.manager [instance:
> >          833cfac5-4fac-438b-acff-579b41ee5729] libvirtError: internal
> >          error: process exited while connecting to monitor:
> >          2015-02-03T01:15:43.331770Z qemu-system-x86_64: -object
> >          
> > memory-backend-ram,size=2048M,id=ram-node0,host-nodes=0,policy=bind:
> >          NUMA node binding are not supported by this QEMU
> 
> 
> Just thought I'd mention that I'm seeing the same behaviour with devstack on
> ubuntu 14.10:
> 
> 2015-02-04 08:27:39.726 TRACE nova.compute.manager [instance:
> a1044d35-71a6-4f49-b08d-e2b0521fb57e] libvirtError: internal error: process
> exited while connecting to monitor: 2015-02-04T08:27:39.168932Z
> qemu-system-x86_64: -object
> memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,size=4096M,id=ram-node0,host-nodes=0,policy=bind:
> NUMA node binding are not supported by this QEMU
> 
> qemu is 2.1, libvirt is 1.2.8
> 
> Looking at the qemu code, it seems that Ubuntu must have compiled it without
> defining CONFIG_NUMA, which just blows my mind.

Weird. I see you raised 
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1417937 to cover this; 
interestingly, it looks like similar issues delayed the eventual enabling of 
NUMA support at the libvirt layer, see 
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/614322

> Setting that aside, there's another topic. I haven't actually specified NUMA
> bindings, only huge pages, so does it really make sense to require a qemu
> with NUMA binding capability?
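
(For context, a huge-pages-only request like the one described is typically
made through a flavor extra spec, with no explicit guest NUMA topology set;
the flavor name below is illustrative:)

```shell
# Request large pages for guests booted from this flavor. No NUMA
# extra specs (hw:numa_nodes etc.) are set; the flavor name is made up.
nova flavor-key m1.hugepages set hw:mem_page_size=large
```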

Dan can probably comment further, but looking at the logic used I believe 
that even without a NUMA topology being specified, it makes use of the host 
topology information to line up the huge pages being allocated with the cell 
the guest is going into. The large pages work was done after the NUMA work 
[1] with, I suspect, an underlying expectation that most users wanting these 
features would be using them all in combination.
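
To illustrate that line-up, here is a toy sketch in Python. This is not 
nova's actual code; the function names and the cell/page bookkeeping are 
invented for illustration only. The point is that backing guest memory with 
huge pages implies picking a host cell with enough free pages, which then 
surfaces to qemu as a bound memory backend, exactly the -object from the 
trace above:

```python
def pick_cell(cells, mem_mb, page_kb=2048):
    """Return the id of the first host NUMA cell with enough free huge
    pages to back mem_mb of guest RAM, or None if no cell fits.

    cells maps cell id -> number of free huge pages of size page_kb."""
    pages_needed = mem_mb * 1024 // page_kb
    for cell_id, free_pages in cells.items():
        if free_pages >= pages_needed:
            return cell_id
    return None  # no fallback path, mirroring the behaviour discussed


def backend_arg(cell_id, mem_mb):
    """Build a qemu -object argument of the shape seen in the log above."""
    return ("memory-backend-file,prealloc=yes,"
            "mem-path=/dev/hugepages/libvirt/qemu,"
            f"size={mem_mb}M,id=ram-node0,"
            f"host-nodes={cell_id},policy=bind")
```

So even with only huge pages requested, the generated command line ends up 
carrying host-nodes/policy=bind, which is where a qemu built without 
CONFIG_NUMA falls over.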

That's not necessarily to say someone couldn't propose updates to the huge 
pages implementation to provide a fallback path in this case, but it doesn't 
appear to be there today.

Thanks,

Steve

[1] https://wiki.openstack.org/wiki/VirtDriverGuestCPUMemoryPlacement

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
