Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Brent Troge
Resending to the group instead of to Steve. OK, looks like my test results were skewed by a strict NUMA placement policy set in the flavor, and my neutron ports being created against PCI devices in a specific NUMA node. For example, if I use a flavor I suspected of mapping to numa1, and if I created V

Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Kaustubh Kelkar
Although it contradicts the idea of a cloud, I believe the CPU mapping between the guest and the host is a valid case for NFV applications. The best that one can do is to ensure vCPU and virtual memory are mapped to single NUMA node within the host and to make sure the CPUs don’t float within th
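The single-NUMA-node, non-floating setup described above maps to standard Nova flavor extra specs. A minimal sketch (the flavor name `nfv.pinned` and sizing are illustrative; the `hw:` keys are the documented Nova extra specs):

```shell
# Hypothetical NFV flavor; adjust vCPU/RAM/disk to your workload.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 nfv.pinned

# Pin each guest vCPU to a dedicated host pCPU (no floating) and
# confine both CPUs and memory to a single host NUMA node.
openstack flavor set nfv.pinned \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1
```

Note that `hw:cpu_policy=dedicated` only takes effect on hosts whose nova.conf reserves pCPUs for pinning (e.g. `vcpu_pin_set` in this era of Nova), typically isolated via a host aggregate.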

Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Arne Wiebalck
We have use cases in our cloud which require vCPU-to-NUMA_node pinning to maximise the CPU performance available in the guests. From what we’ve seen, there was no further improvement when the vCPUs were mapped one-to-one to pCPUs (we did not study this in detail, though, as with the NUMA node pinni

Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Steve Gordon
- Original Message - > From: "Brent Troge" > To: openstack@lists.openstack.org > Sent: Friday, July 8, 2016 9:59:58 AM > Subject: [Openstack] vCPU -> pCPU MAPPING > > context - high performance private cloud with cpu pinning > > Is it possible to map vCPUs to specific pCPUs ? > Currently

Re: [Openstack] LINUX BRIDGE FLAT NETWORK NO DHCP INTO NAMESPACE

2016-07-08 Thread Brent Troge
I disabled use_namespaces and it now works. On Fri, Jul 8, 2016 at 11:06 AM, Turbo Fredriksson wrote: > On Jul 8, 2016, at 2:52 PM, Brent Troge wrote: > > > I think I am missing something simple here. > > Don't count on it! Setting up Neutron networking (if that's > what you're doing and not usin
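The workaround above corresponds to flipping the `use_namespaces` option in the DHCP agent config. A sketch, assuming the default config path and that `crudini` is installed (disabling namespaces runs dnsmasq in the host network stack; the option was later deprecated and removed, so on newer releases the fix is to repair the namespace wiring instead):

```shell
# Assumption: default Neutron DHCP agent config location.
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces False

# Restart the agent so the change takes effect (service name may
# differ by distribution).
systemctl restart neutron-dhcp-agent
```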

Re: [Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Jay Pipes
On 07/08/2016 09:59 AM, Brent Troge wrote: context - high performance private cloud with cpu pinning Is it possible to map vCPUs to specific pCPUs ? Currently I see you can only direct which vCPUs are mapped to a specific NUMA node hw:numa_cpus.0=1,2,3,4 However, to get even more granular, is

Re: [Openstack] LINUX BRIDGE FLAT NETWORK NO DHCP INTO NAMESPACE

2016-07-08 Thread Turbo Fredriksson
On Jul 8, 2016, at 2:52 PM, Brent Troge wrote: > I think I am missing something simple here. Don't count on it! Setting up Neutron networking (if that's what you're doing and not using Nova networking which is the "old" way to do it) is a pain in the royal behind!! > What is needed to direct DHC

[Openstack] vCPU -> pCPU MAPPING

2016-07-08 Thread Brent Troge
context - high performance private cloud with cpu pinning Is it possible to map vCPUs to specific pCPUs ? Currently I see you can only direct which vCPUs are mapped to a specific NUMA node hw:numa_cpus.0=1,2,3,4 However, to get even more granular, is it possible to create a flavor which maps vCP
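The `hw:numa_cpus.N` extra spec mentioned above distributes *guest* vCPUs across *guest* NUMA nodes; it does not select specific host pCPUs. A sketch of a flavor splitting six vCPUs across two guest NUMA nodes (flavor name `m1.numa` is illustrative; the keys are the documented Nova extra specs):

```shell
# hw:numa_cpus.N lists which guest vCPUs belong to guest NUMA node N.
# The actual vCPU -> pCPU assignment is left to the scheduler and
# libvirt once hw:cpu_policy=dedicated is set.
openstack flavor set m1.numa \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0,1,2,3 \
  --property hw:numa_cpus.1=4,5 \
  --property hw:numa_mem.0=4096 \
  --property hw:numa_mem.1=2048
```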

[Openstack] LINUX BRIDGE FLAT NETWORK NO DHCP INTO NAMESPACE

2016-07-08 Thread Brent Troge
I think I am missing something simple here. I can see the DHCP requests coming into my network node, but I don't see that the DHCP requests are being shuttled into the DHCP namespace. What is needed to direct DHCP requests into the DHCP namespace?
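A common way to narrow down where the requests are being dropped is to inspect the qdhcp namespace directly on the network node (the `<network-uuid>` placeholder below stands for the actual Neutron network UUID):

```shell
# List the DHCP namespaces Neutron has created.
ip netns list | grep qdhcp

# Check that dnsmasq's tap interface exists and has an address
# inside the namespace.
ip netns exec qdhcp-<network-uuid> ip addr

# Watch whether DHCP traffic actually reaches the namespace.
ip netns exec qdhcp-<network-uuid> tcpdump -ni any port 67 or port 68
```

If tcpdump on the host-side bridge sees the requests but the in-namespace capture does not, the break is in the bridge/veth plumbing between the two.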

[Openstack] SRIOV and BOND

2016-07-08 Thread Brent Troge
I want to create a bond within my guest VM and trying to understand how to create my neutron ports without allocating an IP for each VF created. I just need one IP allocated instead of each VF(neutron port) being allocated an IP. I do not see anything within the neutron port api which supports som
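For context, SR-IOV (VF) ports are created with `--vnic-type direct`. Whether a port can be created with no fixed IP at all depends on the client and API version; newer openstackclient releases expose a `--no-fixed-ip` flag (an assumption here, not guaranteed for a 2016-era deployment). A sketch with a hypothetical network `sriov-net` and port names:

```shell
# Primary bond member: gets the single allocated IP as usual.
openstack port create --network sriov-net --vnic-type direct bond-port-1

# Secondary bond member: created without a fixed IP, if the
# client/API in your deployment supports --no-fixed-ip.
openstack port create --network sriov-net --vnic-type direct \
  --no-fixed-ip bond-port-2
```

The guest then enslaves both VFs into a bond and the bond interface carries the one allocated address.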