On 06/21/2018 07:50 AM, Mooney, Sean K wrote:
-----Original Message-----
From: Jay Pipes [mailto:[email protected]]

Side question... does either approach touch PCI device management
during live migration?

I ask because the only workloads I've ever seen that pin guest vCPU
threads to specific host processors -- or make use of huge pages
consumed from a specific host NUMA node -- have also made use of SR-IOV
and/or PCI passthrough. [1]

If workloads that use PCI passthrough or SR-IOV VFs cannot be live
migrated (due to existing complications in the lower-level virt layers)
I don't see much of a point spending lots of developer resources trying
to "fix" this situation when in the real world, only a mythical
workload that uses CPU pinning or huge pages but *doesn't* use PCI
passthrough or SR-IOV VFs would be helped by it.
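
(For concreteness, the kind of guest being described here is usually requested
through Nova flavor extra specs along the lines of the sketch below. This is
only an illustration; the "my_nic" PCI alias name is invented and would have
to match an alias actually configured in nova.conf.)

# Illustrative only: flavor extra specs for a pinned, hugepage-backed guest
# that also requests a passthrough PCI device via an alias.
extra_specs = {
    "hw:cpu_policy": "pinned",            # pin guest vCPUs to dedicated host cores
    "hw:mem_page_size": "large",          # back guest RAM with hugepages
    "hw:numa_nodes": "1",                 # confine the guest to a single host NUMA node
    "pci_passthrough:alias": "my_nic:1",  # request one passthrough device ("my_nic" is hypothetical)
}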

[Mooney, Sean K] I would generally agree, but with the extension to include 
DPDK-based vswitches like OVS-DPDK or VPP.
CPU-pinned or hugepage-backed guests generally also have some kind of high 
performance networking solution, or use a hardware accelerator like a GPU, to 
justify the performance assertion that pinning of cores or RAM is required.
A DPDK networking stack would not, however, require the PCI remapping to be 
addressed, though I believe that is planned to be added in Stein.

Jay, you make a good point, but I'll second what Sean says... for the last few years my organization has been using a DPDK-accelerated vswitch, which performs well enough for many high-performance purposes.
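
To make the distinction concrete, the thing that matters at the libvirt level is whether the guest actually has a passthrough device attached, or only vhost-user ports into the DPDK vswitch. A rough sketch of that check (not our actual tooling, purely illustrative):

# Rough sketch: given a libvirt domain XML, report the devices that drag
# PCI passthrough state into the live-migration problem. vhost-user
# interfaces (ovs-dpdk, VPP) do not by themselves block migration.
import xml.etree.ElementTree as ET

def migration_blockers(domain_xml):
    root = ET.fromstring(domain_xml)
    devices = root.find("devices")
    blockers = []
    if devices is not None:
        # Generic PCI passthrough shows up as <hostdev> elements.
        for hostdev in devices.findall("hostdev"):
            blockers.append("hostdev:" + hostdev.get("type", "pci"))
        # SR-IOV VFs attached as network devices appear as
        # <interface type='hostdev'>; vhost-user ports do not.
        for iface in devices.findall("interface"):
            if iface.get("type") == "hostdev":
                blockers.append("interface:hostdev")
    return blockers

example = """
<domain type='kvm'>
  <devices>
    <interface type='vhostuser'/>
    <hostdev mode='subsystem' type='pci'/>
  </devices>
</domain>
"""
print(migration_blockers(example))  # -> ['hostdev:pci']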

In the general case, I think live migration while using physical devices would require coordinating the migration with the guest software.
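
As a sketch of what that coordination usually looks like: the guest bonds the VF with a virtio interface, the VF is hot-unplugged before the migration so traffic fails over to the virtio slave, and an equivalent VF is hot-plugged on the destination afterwards. Strictly illustrative; the two helper functions below are hypothetical placeholders for whatever agent/orchestration mechanism actually does the guest-side handshake and kicks off the migration.

import libvirt

# Example VF address; purely illustrative.
VF_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x10' function='0x0'/>
  </source>
</hostdev>
"""

def wait_for_guest_failover(dom):
    """Hypothetical placeholder: wait until the guest confirms its bond has
    failed over to the virtio slave (e.g. via a guest agent)."""

def trigger_live_migration(domain_name):
    """Hypothetical placeholder: start the live migration itself, e.g.
    through the Nova API."""

def migrate_with_vf(domain_name, src_uri="qemu:///system"):
    conn = libvirt.open(src_uri)
    dom = conn.lookupByName(domain_name)

    # 1. Hot-unplug the VF from the running guest.
    dom.detachDeviceFlags(VF_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    wait_for_guest_failover(dom)

    # 2. Live-migrate the now device-free guest.
    trigger_live_migration(domain_name)

    # 3. On the destination, hot-plug an equivalent VF (attachDeviceFlags)
    #    and let the guest's bond move traffic back onto it.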

Chris

