On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:
Hi Chris,

I note two of your comments --

When we worked on the H release, we targeted basic PCI support
for devices like accelerator cards or encryption cards.

PU> So I note that you are already solving the PCI passthrough
use case somehow? How? If you have already solved this in terms of
architecture, then SRIOV should not be difficult.

Notice the double indent... that was actually Jiang's statement that I
quoted.


Do we run into the same complexity if we have spare physical NICs
on the host that get passed through to the guest?

PU> In part you are correct. However, there is one additional thing:
when we have multiple physical NICs, the Compute Node's Linux is
still in control of them.

<snip>

With SRIOV, you can effectively dice up a single
physical NIC into multiple NICs (virtual functions, or VFs) and expose
one of these VFs to each VM. This means that the VM now
accesses the NIC 'directly', bypassing the hypervisor.
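
For concreteness, here is a minimal sketch of the 'dicing' step on a
Linux host (my illustration, assuming an SRIOV-capable NIC whose driver
supports the standard sriov_numvfs sysfs knob; the interface name
"eth0" and the VF count are placeholders, and it needs root):

    def enable_vfs(iface, num_vfs):
        # Standard sysfs knob for creating SRIOV virtual functions.
        path = "/sys/class/net/%s/device/sriov_numvfs" % iface
        # The kernel rejects changing a nonzero VF count directly,
        # so reset to 0 before writing the new count.
        with open(path, "w") as f:
            f.write("0")
        with open(path, "w") as f:
            f.write(str(num_vfs))

    if __name__ == "__main__":
        enable_vfs("eth0", 4)   # "eth0" and 4 are placeholders

Each VF then appears as its own PCI device on the host, and that PCI
device is what gets passed through to a VM.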

<snip>

But if there are two
physical NICs which were diced up with SRIOV, then the VMs on the VFs
of the first physical NIC cannot easily communicate with the VMs on
the VFs of the second physical NIC. So a native implementation has to
be present on the Compute Node to aid this (this native implementation
will take over the Physical Function, PF, of each NIC) and to 'switch'
the packets between VMs on different physical NICs [if we need that
use case]
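
To make the role of that PF-side component concrete, here is a toy
userspace sketch (my illustration, not how a real product would do it;
real implementations live in the PF driver or the NIC's embedded
switch). It relays raw Ethernet frames between two host interfaces;
the names "pf0" and "pf1" are placeholders, and it needs root on Linux:

    import select
    import socket

    ETH_P_ALL = 0x0003   # from linux/if_ether.h: match every protocol

    def open_raw(iface):
        # AF_PACKET raw sockets see whole Ethernet frames (Linux only).
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                          socket.htons(ETH_P_ALL))
        s.bind((iface, 0))
        return s

    def bridge(if_a, if_b):
        # Naive userspace 'switch': copy every frame seen on one PF
        # netdev to the other, and vice versa.
        a = open_raw(if_a)
        b = open_raw(if_b)
        peer = {a: b, b: a}
        while True:
            readable, _, _ = select.select([a, b], [], [])
            for s in readable:
                frame, addr = s.recvfrom(65535)
                # Skip frames we ourselves just transmitted, which are
                # looped back to the packet socket as 'outgoing'.
                if addr[2] == socket.PACKET_OUTGOING:
                    continue
                peer[s].send(frame)

    if __name__ == "__main__":
        bridge("pf0", "pf1")   # placeholder PF interface names

A userspace relay like this would of course be far too slow for
production; it only illustrates where the switching responsibility
would sit on the Compute Node.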

Is this strictly necessary? It seems like it would be simpler to let the packets go out over the wire and have the switch/router send them back in to the other NIC. Of course this would result in higher use of the physical link, but on the other hand it would mean less work for the CPU on the compute node.

Chris
