Hi Chris,

I'd like to respond to two of your comments:

> > When we worked on the H release, we targeted basic PCI support,
> > such as accelerator cards or encryption cards.

PU> So I note that you are already solving the PCI pass-through use case
somehow? How? If you have already solved this in terms of architecture,
then SR-IOV should not be difficult.

> Do we run into the same complexity if we have spare physical NICs on
> the host that get passed in to the guest?

PU> In part you are correct. However, there is one additional thing. When
we have multiple physical NICs, the Compute Node's Linux is still in
control of them. Data into and out of the VM still travels through all
those tunneling devices and finally leaves through these physical NICs.
The NIC is _not_ exposed directly to the VM; the VM still has an emulated
NIC which interfaces with the tap device and out over the Linux bridge.
With SR-IOV, you can dice up a single physical NIC into multiple
(effective) NICs, called virtual functions (VFs), and expose each VF to
its own VM. The VM then accesses the NIC 'directly', bypassing the
hypervisor. It is similar to PCI pass-through, but now you have one pass
through per VM, each with its own diced-up VF. That is a major
consideration to keep in mind, because it means we bypass all those
tunneling devices in the middle. But since you say you are already
working with PCI pass-through and seem to have solved it, this is a mere
extension of that.
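
As a rough illustration of the dicing itself (a sketch only, not
anything OpenStack does today): on a Linux host the PF driver exposes a
sysfs knob for spawning VFs, and each VF then shows up as its own PCI
device that can be passed through. The interface name "eth2" below is
hypothetical, and the sriov_numvfs file assumes a 3.8+ kernel.

    # Minimal sketch: dice an SR-IOV capable NIC into VFs via sysfs.
    import os

    NIC = "eth2"  # hypothetical PF netdev name
    SYSFS = "/sys/class/net/%s/device" % NIC

    def enable_vfs(num_vfs):
        """Ask the PF driver to spawn num_vfs virtual functions."""
        with open(os.path.join(SYSFS, "sriov_numvfs"), "w") as f:
            f.write(str(num_vfs))

    def list_vfs():
        """Each virtfnN symlink names the PCI address of one VF --
        the address that would be passed through to a VM."""
        return sorted(
            os.readlink(os.path.join(SYSFS, name)).lstrip("./")
            for name in os.listdir(SYSFS)
            if name.startswith("virtfn"))

    enable_vfs(4)
    print(list_vfs())  # e.g. ['0000:03:10.0', '0000:03:10.2', ...]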

Further, when a single physical NIC is diced up and its VFs are connected
to VMs on one Compute Node, the NIC provides an embedded 'switch' through
which those VMs can talk to each other. This can aid us, because we have
bypassed all the tunneling devices.
But if two physical NICs are diced up with SR-IOV, then VMs on the VFs of
the first physical NIC cannot easily communicate with VMs on the VFs of
the second physical NIC.
So there has to be a native implementation on the Compute Node to aid
this (it will take over the Physical Function, PF, of each NIC) and
'switch' packets between VMs on different physical NICs [if we need that
use case].
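
One concrete piece of what that PF-side control looks like (again a
sketch only; the PF name and addresses are hypothetical): the PF driver
already lets the host pin a MAC and VLAN onto each VF through the
standard iproute2 interface before the VF is handed to a VM.

    # Sketch: PF-side configuration of one VF via iproute2.
    # Equivalent to: ip link set eth2 vf 0 mac <mac> vlan <vlan>
    import subprocess

    def configure_vf(pf, vf_index, mac, vlan):
        subprocess.check_call(
            ["ip", "link", "set", pf,
             "vf", str(vf_index),
             "mac", mac,
             "vlan", str(vlan)])

    configure_vf("eth2", 0, "52:54:00:ab:cd:01", 100)  # made-up values

Per-VF MAC/VLAN pinning alone does not give us the cross-NIC switching
described above; that still needs something on the node that owns both
PFs.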

Regards
-Prashant


-----Original Message-----
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, October 10, 2013 12:15 PM
To: Jiang, Yunhong; Chris Friesen; openst...@lists.openstack.org
Cc: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of 
networking devices?

Hi Chris, Jiang,
We are also looking into enhancing basic PCI pass-through to provide
SR-IOV based networking.
To support automatic provisioning, we need awareness of which virtual
network the requested SR-IOV device should connect to.
The scheduler should take this into account in order to run the VM on a
Host that is connected to the right physical network.
Neutron needs to be aware of the allocated PCI pass-through device and to
allocate a port on the virtual network.
It will require some sort of VIF driver to manage the libvirt device
settings.
It may also require a Neutron agent to apply port policy on the device. I
think it makes sense to support this as part of the ML2 Neutron plugin
(via a mechanism driver); a rough skeleton follows below.
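
For what the ML2 piece might look like, here is a rough skeleton only:
the MechanismDriver base class and its precommit/postcommit hooks exist
in Havana's neutron.plugins.ml2.driver_api, but everything inside the
methods below is a hypothetical placeholder, not existing Neutron code.

    from neutron.plugins.ml2 import driver_api as api

    class SriovMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # e.g. load a PCI-device-to-physical-network mapping
            # (an assumed config option, not an existing one)
            pass

        def create_port_precommit(self, context):
            # check that the requested network is reachable from an
            # SR-IOV capable host before the port is committed
            pass

        def create_port_postcommit(self, context):
            # record which VF backs this port and push port policy
            # (MAC/VLAN) down to the PF, e.g. via 'ip link'
            pass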
If you plan to attend the design summit, maybe it is worth collaborating
there and discussing what can be done in the coming Icehouse release?

Regards,
Irena

-----Original Message-----
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Thursday, October 10, 2013 2:26 AM
To: Chris Friesen; openst...@lists.openstack.org
Subject: Re: [Openstack] Neutron support for passthrough of networking devices?

Several things come to mind:
a) The NIC needs more information, like the switch it is attached to, and
this information also needs to be managed by Nova. We have basic support,
but it is not fully implemented.
b) How to set up the device, including the MAC address, 802.1Qbh, etc.
Libvirt has several options for this; more work is needed to support
them, and we also need to consider other virt drivers like xenapi (a
sketch of the libvirt XML follows this list).
c) How to achieve isolation of tenants, and how to set up things like the
router in Neutron. I'm not well versed in Neutron, but I think others may
have more ideas on it.
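
To make (b) concrete: libvirt's <interface type='hostdev'> element
accepts a fixed MAC address and an 802.1Qbh virtualport for a
passed-through VF. The PCI address, MAC, and profileid below are
hypothetical values, not from any real deployment.

    # Sketch of the libvirt device XML a VIF driver might generate
    # for one VF (all concrete values below are made up).
    VF_INTERFACE_XML = """
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x03'
                 slot='0x10' function='0x0'/>
      </source>
      <mac address='52:54:00:ab:cd:01'/>
      <virtualport type='802.1Qbh'>
        <parameters profileid='tenant-net-profile'/>
      </virtualport>
    </interface>
    """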

Thanks
--jyh

> -----Original Message-----
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: Wednesday, October 09, 2013 11:53 AM
> To: openst...@lists.openstack.org
> Subject: Re: [Openstack] Neutron support for passthrough of networking
> devices?
>
> On 10/09/2013 12:31 PM, Jiang, Yunhong wrote:
> > When we worked on the H release, we targeted basic PCI support,
> > such as accelerator cards or encryption cards. I think SR-IOV
> > network support is more complex and requires more effort, on both
> > the Nova side and the Neutron side. We are working on some
> > enhancements on the Nova side now, but the whole picture may need
> > more time and discussion.
>
> Can you elaborate on the complexities?  Assuming you enable SR-IOV on
> the host, and pass it through to the guest using the normal PCI
> passthrough mechanisms, what's the extra complexity?
>
> Do we run into the same complexity if we have spare physical NICs on
> the host that get passed in to the guest?
>
> Thanks,
> Chris

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
