Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-11 Thread Prashant Upadhyaya
 But if there are two
 physical NIC's which were diced up with SRIOV, then VM's on the diced
 parts of the first  physical NIC cannot communicate easily with the
 VM's on the diced parts of the second physical NIC. So a native
 implementation has to be there on the Compute Node which will aid this
 (this native implementation will take over the Physical Function, PF
 of each NIC) and will be able to 'switch' the packets between VM's of
 different physical diced up NIC's [if we need that usecase]

Is this strictly necessary?  It seems like it would be simpler to let the 
packets be sent out over the wire and the switch/router would send them back to 
the other NIC.  Of course this would result in higher use of the physical link, 
but on the other hand it would mean less work for the CPU on the compute node.

PU Not strictly necessary. I am from a data plane background (Intel DPDK + 
SRIOV), and the Intel DPDK guide suggests the above usecase for accelerating 
the data path. I agree, it would be much simpler to go to the switch and back 
into the 2nd NIC; let's solve this first in OpenStack with SRIOV, as that by 
itself will be a major step forward.

Regards
-Prashant

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: Thursday, October 10, 2013 8:21 PM
To: Prashant Upadhyaya
Cc: OpenStack Development Mailing List; Jiang, Yunhong; 
openst...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of 
networking devices?

On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:
 Hi Chris,

 I note two of your comments --

 When we worked on H release, we target for basic PCI support like
 accelerator card or encryption card etc.

 PU So I note that you are already solving the PCI pass through
 usecase somehow ? How ? If you have solved this already in terms of
 architecture then SRIOV should not be difficult.

Notice the double indent...that was actually Jiang's statement that I quoted.


 Do we run into the same complexity if we have spare physical NICs on
 the host that get passed in to the guest?

 PU In part you are correct. However there is one additional thing.
 When we have multiple physical NIC's, the Compute Node's linux is
 still in control over those.

snip

 In case of SRIOV, you can dice up a single physical NIC into multiple
 NIC's (effectively), and expose each of these diced up NIC's to a VM
 each. This means that the VM will now 'directly' access the NIC
 bypassing the Hypervisor.

snip

 But if there are two
 physical NIC's which were diced up with SRIOV, then VM's on the diced
 parts of the first  physical NIC cannot communicate easily with the
 VM's on the diced parts of the second physical NIC. So a native
 implementation has to be there on the Compute Node which will aid this
 (this native implementation will take over the Physical Function, PF
 of each NIC) and will be able to 'switch' the packets between VM's of
 different physical diced up NIC's [if we need that usecase]

Is this strictly necessary?  It seems like it would be simpler to let the 
packets be sent out over the wire and the switch/router would send them back to 
the other NIC.  Of course this would result in higher use of the physical link, 
but on the other hand it would mean less work for the CPU on the compute node.

Chris





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Irena Berezovsky
Hi Chris, Jiang,
We are also looking into enhancement of basic PCI pass-through to provide 
SR-IOV based networking.
In order to support automatic provisioning, it requires awareness of which 
virtual network the requested SR-IOV device should connect to.
This should be considered by the scheduler in order to run the VM on a Host 
that is connected to the physical network.
It requires Neutron to be aware of the PCI pass-through allocated device and 
to allocate a port on the virtual network.
It will require some sort of VIF Driver to manage the libvirt device settings.
It may also require a neutron agent to apply port policy on the device. I 
think it makes sense to support this as part of the ML2 neutron plugin (via a 
mechanism driver).
In case you plan to attend the design summit, maybe it is worth collaborating 
there to discuss what can be done in the coming Icehouse release?

Regards,
Irena

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: Thursday, October 10, 2013 2:26 AM
To: Chris Friesen; openst...@lists.openstack.org
Subject: Re: [Openstack] Neutron support for passthrough of networking devices?

Several things in my mind:
a) The NIC needs more information, like the switch it is attached to, and this 
information needs to be managed by nova as well. We have basic support, but it 
is not fully implemented.
b) How to set up the device, including the MAC address or 802.1Qbh etc. 
Libvirt has several options to support this; more work is needed to support 
them, and other virt drivers like xenapi also need to be considered.
c) How to achieve isolation of tenants, and how to set up things like the 
router in Neutron. I'm not well versed in Neutron, but I think others may have 
more ideas on it.
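For point b), libvirt's domain XML can assign an SR-IOV VF with a fixed MAC 
and an 802.1Qbh port profile; a minimal sketch follows (the PCI address and 
profileid values are illustrative, not from any real deployment):

```xml
<!-- Sketch of a libvirt SR-IOV VF assignment with MAC and 802.1Qbh
     port profile; PCI address and profileid are illustrative. -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
  <mac address='52:54:00:12:34:56'/>
  <virtualport type='802.1Qbh'>
    <parameters profileid='finance'/>
  </virtualport>
</interface>
```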

Thanks
--jyh

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Wednesday, October 09, 2013 11:53 AM
 To: openst...@lists.openstack.org
 Subject: Re: [Openstack] Neutron support for passthrough of networking 
 devices?
 
 On 10/09/2013 12:31 PM, Jiang, Yunhong wrote:
  When we worked on H release, we target for basic PCI support like 
  accelerator card or encryption card etc. I think SR-IOV network 
  support is more complex and requires more effort, in both Nova side 
  and Neutron side. We are working on some enhancement in Nova side 
  now. But the whole picture may need more time/discussion.
 
 Can you elaborate on the complexities?  Assuming you enable SR-IOV on 
 the host, and pass it through to the guest using the normal PCI 
 passthrough mechanisms, what's the extra complexity?
 
 Do we run into the same complexity if we have spare physical NICs on 
 the host that get passed in to the guest?
 
 Thanks,
 Chris
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Prashant Upadhyaya
Hi Chris,

I note two of your comments --

  When we worked on H release, we target for basic PCI support like
  accelerator card or encryption card etc.

PU So I note that you are already solving the PCI pass-through usecase 
somehow? How? If you have solved this already in terms of architecture, then 
SRIOV should not be difficult.

 Do we run into the same complexity if we have spare physical NICs on
 the host that get passed in to the guest?

PU In part you are correct. However there is one additional thing. When we 
have multiple physical NICs, the Compute Node's Linux is still in control of 
those. So the data into and out of the VM still travels through all those 
tunneling devices and finally goes out of these physical NICs. The NIC is 
_not_ exposed directly to the VM. The VM still has the emulated NIC, which 
interfaces out with the tap and over the Linux bridge.
In the case of SRIOV, you can dice up a single physical NIC into multiple 
NICs (effectively), and expose each of these diced-up NICs to a VM. This 
means that the VM will now 'directly' access the NIC, bypassing the 
Hypervisor. Similar to PCI pass-through, but now you have one pass-through 
for each VM with the diced NIC. So that is a major consideration to keep in 
mind, because this means that we will bypass all those tunneling devices in 
the middle. But since you say that you are working with PCI passthrough and 
seem to have solved it, this is a mere extension of that.

Further, for a single physical NIC which is diced up and connected to VMs on 
a single Compute Node, the NIC provides a 'switch' using which these VMs can 
talk to each other. This can aid us because we have bypassed all the 
tunneling devices.
But if there are two physical NICs which were diced up with SRIOV, then VMs 
on the diced parts of the first physical NIC cannot communicate easily with 
the VMs on the diced parts of the second physical NIC.
So a native implementation has to be there on the Compute Node which will aid 
this (this native implementation will take over the Physical Function, PF, of 
each NIC) and will be able to 'switch' the packets between VMs of different 
diced-up physical NICs [if we need that usecase].
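The forwarding distinction described above can be summarized in a toy model: 
VFs on the same PF reach each other via the NIC's embedded switch, while VFs 
on different NICs need either a host-side switch over the PFs or a hairpin 
through the external switch. All names here are invented for illustration; 
this is not any real SR-IOV driver API.

```python
# Toy model of the three forwarding paths discussed in the thread.
# Illustrative only -- not a real SR-IOV or DPDK interface.

def forwarding_path(vf_a, vf_b, host_pf_switching=False):
    """Decide how traffic gets between two VFs on the same Compute Node."""
    if vf_a["pf"] == vf_b["pf"]:
        return "nic-internal-switch"     # embedded switch on the NIC itself
    if host_pf_switching:
        return "host-pf-switch"          # native implementation over the PFs
    return "external-switch-hairpin"     # out over the wire and back in

vf1 = {"name": "vf1", "pf": "eth0"}
vf2 = {"name": "vf2", "pf": "eth0"}
vf3 = {"name": "vf3", "pf": "eth1"}

print(forwarding_path(vf1, vf2))                          # nic-internal-switch
print(forwarding_path(vf1, vf3))                          # external-switch-hairpin
print(forwarding_path(vf1, vf3, host_pf_switching=True))  # host-pf-switch
```

The hairpin path is the one Chris suggests as the simpler option: it costs 
physical link bandwidth but no host CPU.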

Regards
-Prashant


-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Thursday, October 10, 2013 12:15 PM
To: Jiang, Yunhong; Chris Friesen; openst...@lists.openstack.org
Cc: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of 
networking devices?

Hi Chris, Jiang,
We are also looking into enhancement of basic PCI pass-through to provide 
SR-IOV based networking.
In order to support automatic provisioning, it requires the awareness to what 
virtual network to connect the requested SR-IOV device.
This should be considered by the scheduler  in order to run VM on the Host that 
is connected to the physical network.
It requires the Neutron to be aware of PCI pass though allocated device and 
allocate port on the virtual network.
It will require some sort of VIF Driver to manage the libvirt device settings.
It may also require neutron agent to apply port policy on the device. I think 
it makes sense to  support this as part of ML2 neutron plugin (via mechanism 
driver).
In case you plan to attend the design summit, maybe it worth to collaborate 
there and discuss what can be done in the coming  Icehouse release?

Regards,
Irena

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
Sent: Thursday, October 10, 2013 2:26 AM
To: Chris Friesen; openst...@lists.openstack.org
Subject: Re: [Openstack] Neutron support for passthrough of networking devices?

Several things in my mind:
a) The NIC needs more information, like the switch it is attached to, and this 
information needs to be managed by nova as well. We have basic support, but it 
is not fully implemented.
b) How to set up the device, including the MAC address or 802.1Qbh etc. 
Libvirt has several options to support this; more work is needed to support 
them, and other virt drivers like xenapi also need to be considered.
c) How to achieve isolation of tenants, and how to set up things like the 
router in Neutron. I'm not well versed in Neutron, but I think others may have 
more ideas on it.

Thanks
--jyh

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: Wednesday, October 09, 2013 11:53 AM
 To: openst...@lists.openstack.org
 Subject: Re: [Openstack] Neutron support for passthrough of networking
 devices?

 On 10/09/2013 12:31 PM, Jiang, Yunhong wrote:
  When we worked on H release, we target for basic PCI support like
  accelerator card or encryption card etc. I think SR-IOV network
  support is more complex and requires more effort, in both Nova side
  and Neutron side. We are working on some enhancement in Nova side
  now. But the whole picture may need more time/discussion.

 Can you elaborate on the complexities?  Assuming you enable SR-IOV on
 the host, and pass it through to the guest using the normal PCI
 passthrough mechanisms, what's the extra complexity?

Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

2013-10-10 Thread Chris Friesen

On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:

Hi Chris,

I note two of your comments --



When we worked on H release, we target for basic PCI support
like accelerator card or encryption card etc.



PU So I note that you are already solving the PCI pass through
usecase somehow ? How ? If you have solved this already in terms of
architecture then SRIOV should not be difficult.


Notice the double indent...that was actually Jiang's statement that I
quoted.



Do we run into the same complexity if we have spare physical NICs
on the host that get passed in to the guest?



PU In part you are correct. However there is one additional thing.
When we have multiple physical NIC's, the Compute Node's linux is
still in control over those.


snip


In case of SRIOV, you can dice up a single
physical NIC into multiple NIC's (effectively), and expose each of
these diced up NIC's to a VM each. This means that the VM will now
'directly' access the NIC bypassing the Hypervisor.


snip


But if there are two
physical NIC's which were diced up with SRIOV, then VM's on the diced
parts of the first  physical NIC cannot communicate easily with the
VM's on the diced parts of the second physical NIC. So a native
implementation has to be there on the Compute Node which will aid
this (this native implementation will take over the Physical
Function, PF of each NIC) and will be able to 'switch' the packets
between VM's of different physical diced up NIC's [if we need that
usecase]


Is this strictly necessary?  It seems like it would be simpler to let 
the packets be sent out over the wire and the switch/router would send 
them back to the other NIC.  Of course this would result in higher use 
of the physical link, but on the other hand it would mean less work for 
the CPU on the compute node.


Chris
