Sounds like a great initiative.
Let's follow up on the proposal by the kuryr-kubernetes blueprint.
BR,
Irena
On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu wrote:
> Hi Kuryr-kubernetes team,
>
> I'm thinking of proposing a new BP to support Kubernetes Network Custom
> Resource Definition De-facto
+1
On Fri, Jan 19, 2018 at 9:42 PM, Hongbin Lu wrote:
> Hi Kuryr team,
>
> I think Kuryr-libnetwork is ready to move out of beta status. I propose to
> make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
> branch on it. What do you think about this?
Probably https://github.com/openstack/kuryr-kubernetes
On Sun, Sep 10, 2017 at 4:29 PM, Gary Kotton wrote:
> Hi,
>
> I suggest that you take a look at https://wiki.openstack.org/wiki/Kuryr.
> This most probably already has the relevant watchers implemented.
>
> Thanks
>
>
+1
On Wed, Jul 5, 2017 at 4:23 AM, Vikas Choudhary
wrote:
> +1
>
> On Tue, Jul 4, 2017 at 7:59 PM, Antoni Segura Puimedon > wrote:
>
>> On Tue, Jul 4, 2017 at 12:23 PM, Gal Sagie wrote:
>> > +1
>> +1
>> >
>> > On Tue, Jul
On Fri, Jan 13, 2017 at 6:49 PM, Antoni Segura Puimedon
wrote:
> Hi fellow kuryrs!
>
> We are getting close to the end of the Ocata cycle and it is time to look back
> and appreciate the good work all the contributors did. I would like to
> thank you all for the continued
Hi Gideon,
Support for nested containers is not merged into the kuryr repository yet.
You can try to experiment with this patch:
https://review.openstack.org/#/c/402462/
As for the proper devstack settings for such an environment, the 'undercloud'
and 'overcloud' devstack settings will be added to this
Hi,
The case you are describing may be related to the previously discussed RFE
[1].
Having additional networks with a FIP range attached via a router interface
should be allowed from the API point of view, but may need some adaptations
to make it work properly. Please see the details in the discussion
Hi Kuryrs!
>
> On September 5th's weekly IRC meeting Irena Berezovsky suggested that
> we should take a decision regarding the location of specs and devrefs.
>
> Currently we default to putting all the specs and devrefs for:
> - Kuryr
> - Kuryr-libnetwork
> - Kuryr-kubernetes
>
Hi Ivan,
The approach looks very interesting and seems to be a reasonable effort to
make it work with kuryr as an alternative to the 'VLAN aware VM' approach.
Having the container presented as a neutron entity has its value, especially for
visibility/monitoring (i.e., mirroring) and security (i.e., applying
Mike,
As per QoS spec [1], the behavior is:
``QoS policies could be applied:
- Per network: All the ports plugged on the network where the QoS policy
is
applied get the policy applied to them.
- Per port: The specific port gets the policy applied, when the port had
any
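The two scopes quoted from the spec compose naturally. A minimal sketch of the precedence, under my assumption (not stated in the excerpt above) that a port-level policy overrides the network-level one; the helper and field names are illustrative, not neutron code:

```python
def effective_qos_policy(port, network):
    """Return the QoS policy id that applies to a port.

    A policy attached directly to the port takes precedence; otherwise
    the port inherits the policy of the network it is plugged into.
    Hypothetical helper for illustration only.
    """
    return port.get("qos_policy_id") or network.get("qos_policy_id")

network = {"id": "net-1", "qos_policy_id": "net-policy"}
port_with_own = {"id": "p1", "qos_policy_id": "port-policy"}
bare_port = {"id": "p2"}

print(effective_qos_policy(port_with_own, network))  # port-policy
print(effective_qos_policy(bare_port, network))      # net-policy (inherited)
```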
+1
On Wed, Aug 17, 2016 at 12:54 AM, Antoni Segura Puimedon wrote:
> Hi Kuryrs,
>
> I would like to propose Vikas Choudhary for the core team for the
> kuryr-libnetwork subproject. Vikas has kept submitting patches and reviews
> at a very good rhythm in the past cycle and I
Hi Liping Mao,
On Thu, May 26, 2016 at 12:31 PM, Liping Mao (limao)
wrote:
> Hi Vikas, Antoni and Kuryr team,
>
> When I use kuryr, I notice kuryr will fail to add an existing
> network with a gateway interface already created by neutron [1][2].
>
> The bug is because kuryr will
On Wed, Apr 20, 2016 at 4:25 PM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:
> Inline update.
>
> On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
> wrote:
> > On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes wrote:
> >> On 04/08/2016 09:17
Hi Gary,
The new L2GW spec [1] comes to enable an inter-cloud connection to stretch
the network between the local and the remote clouds using tunnels between
border VTEP devices.
The VTEP can be populated manually with remote MAC and (optionally) IP entries.
BGP support is a bit orthogonal or may I say
Hi Andy,
(Adding neutron tag)
Please open an RFE bug under neutron and add qos tag. This will facilitate
the discussion of the use case feasibility.
Please join the QoS IRC meetings
https://wiki.openstack.org/wiki/Meetings/QoS.
BR,
Irena
On Mon, Mar 14, 2016 at 2:05 PM, Andy Wang
Hi Reedip,
Please see my comments inline
On Tue, Mar 8, 2016 at 9:19 AM, reedip banerjee wrote:
> While reading up the specs in [1] and [2], there are certain things which
> we may need to discuss before proceeding forward
>
> a) Reference point for Ingress/Egress traffic:
>
Hi Jason,
According to the L2GW config, it should be set as in this line:
https://github.com/openstack/networking-l2gw/blob/master/etc/l2gw_plugin.ini#L25
I think it should work as the default setting, but maybe you can try to
set this explicitly.
Hope it helps,
Irena
On Tue, Mar 8, 2016 at
On Wed, Nov 18, 2015 at 8:31 AM, Takashi Yamamoto
wrote:
> hi,
>
> On Thu, Nov 12, 2015 at 2:11 AM, Vikram Hosakote (vhosakot)
> wrote:
> > Hi,
> >
> > TAAS looks great for traffic monitoring.
> >
> > Some questions about TAAS.
> >
> > 1) Can TAAS be
+1
On Tue, Oct 13, 2015 at 5:07 PM, Gal Sagie wrote:
> +1
>
> Taku is a great addition to the team and I hope to see him continue
> delivering high quality
> contributions in all aspects of the project.
>
> On Tue, Oct 13, 2015 at 4:52 PM, Antoni Segura Puimedon <
>
I would like to second Kevin. This can be done in a similar way to how the ML2
Plugin passes plugin_context to ML2 Extension Drivers:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L910
.
BR,
Irena
On Fri, Sep 25, 2015 at 11:57 AM, Kevin Benton
I would like to start a discussion regarding the user experience when a certain
level of network QoS is expected to be applied on VM ports. As you may know,
basic networking QoS support was introduced during the Liberty release
following the spec, Ref [1].
As discussed during the last networking-QoS meeting, Ref
Kyle,
Thank you for the hard work you did making the neutron project and the neutron
community better!
You have been open and very supportive as a neutron community lead.
I hope you will stay involved.
On Fri, Sep 11, 2015 at 11:12 PM, Kyle Mestery wrote:
> I'm writing to let
The second or last week of September works for me.
On Thu, Aug 20, 2015 at 3:22 PM, Antoni Segura Puimedon
toni+openstac...@midokura.com wrote:
On Wed, Aug 19, 2015 at 11:50 PM, Salvatore Orlando
salv.orla...@gmail.com wrote:
Hi Gal,
even if I've been a lurker so far, I'm interested in
Current VPNaaS Service Plugin inherits from VpnPluginRpcDbMixin, which is
not required for some vendor solutions, since L3 is implemented without
leveraging L3 Agents to manage router namespaces (ODL, MidoNet, etc).
I guess the Mixin usage could be changed to conditional RPC support based on
drivers
Hi Bob, Miguel
On Tue, Jul 14, 2015 at 5:19 PM, Robert Kukura kuk...@noironetworks.com
wrote:
I haven't had a chance to review this patch in detail yet, but am
wondering if this is being integrated with ML2 as an extension driver? If
so, that should clearly address how dictionaries are
Hi Andreas,
On Fri, Jun 26, 2015 at 4:04 PM, Andreas Scheuring
scheu...@linux.vnet.ibm.com wrote:
Hi together,
for a new ml2 plugin I would like to pass over some data from neutron to
nova on port creation and update (exploiting port binding extension
[1]). For my prototype I thought of
On Mon, Jun 22, 2015 at 7:48 PM, Sean M. Collins s...@coreitpro.com wrote:
On Mon, Jun 22, 2015 at 10:47:39AM EDT, Salvatore Orlando wrote:
I would probably start with something for enabling the L2 agent to
process
features such as QoS and security groups, working on the OVS agent, and
Hi Vikram,
I agree with what you stated. An additional use case can be Tap-as-a-Service to
allow filtering of the mirrored packets.
BR,
Irena
On Fri, Jun 5, 2015 at 11:47 AM, Vikram Choudhary
vikram.choudh...@huawei.com wrote:
Dear All,
There are multiple proposal floating around flow
Hi Ian,
I like your proposal. It sounds very reasonable and makes the separation of
concerns between neutron and nova very clear. I think vif plug script support
[1] will help to decouple neutron from the nova dependency.
Thank you for sharing this,
Irena
[1]
Hi,
This week neutron QoS meeting will take place on Tuesday, April 21 at 14:00
UTC on #openstack-meeting-3.
Next week, the meeting is back to its original slot: Wed at 14:00 UTC on
#openstack-meeting-3.
Please join if you are interested.
Hi Miguel,
Thank you for leading this.
On Tue, Apr 7, 2015 at 8:45 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:
On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando sorla...@nicira.com
wrote:
On 7 April 2015 at 00:33, Armando
Please see inline
On Thu, Feb 19, 2015 at 4:43 PM, Steve Gordon sgor...@redhat.com wrote:
- Original Message -
From: Irena Berezovsky irenab@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
On Thu, Feb 5, 2015
On Thu, Feb 5, 2015 at 9:01 PM, Steve Gordon sgor...@redhat.com wrote:
- Original Message -
From: Przemyslaw Czesnowicz przemyslaw.czesnow...@intel.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Hi
1) If the device is
with pci_slot details
can be very dangerous, since you skip the phase when this pci slot is
reserved by nova. The system may become inconsistent.
Thank you,
Ageeleshwar K
On Thu, Feb 5, 2015 at 12:19 PM, Irena Berezovsky irenab@gmail.com
wrote:
Hi Akilesh,
Please see my responses inline
Hi Akilesh,
please see inline
On Wed, Feb 4, 2015 at 11:32 AM, Akilesh K akilesh1...@gmail.com wrote:
Hi,
Issue 1:
I do not understand what you mean. I did specify the physical_network.
What I am trying to say is some physical networks exists only on the
compute node and not on the network
Hi David,
Your error is not related to the agent.
I would suggest checking:
1. nova.conf at your compute node for the pci whitelist configuration
2. Neutron server configuration for a correct physical_network label
matching the label in the pci whitelist
3. Nova DB tables containing PCI
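The first two checks above amount to making sure the physical_network label matches on both sides. A minimal sketch of that comparison (the JSON entry format follows nova's pci_passthrough_whitelist option; the device name and labels are illustrative, not taken from David's setup):

```python
import json

def physnet_label_matches(whitelist_entries, neutron_physnet):
    """Return True if any nova pci whitelist entry carries the
    physical_network label that the neutron server expects.
    Illustrative helper only, not nova code."""
    return any(
        json.loads(entry).get("physical_network") == neutron_physnet
        for entry in whitelist_entries
    )

# nova.conf: pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet2"}
entries = ['{"devname": "eth3", "physical_network": "physnet2"}']
print(physnet_label_matches(entries, "physnet2"))  # True: labels aligned
print(physnet_label_matches(entries, "physnet1"))  # False: mismatch, port binding will fail
```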
-Original Message-
From: henry hly [mailto:]
Sent: Tuesday, December 16, 2014 3:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron
decomposition change
On Tue, Dec 16, 2014 at 1:53 AM, Neil
Hi David,
One configuration option is missing that you should be aware of:
In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:
In [ml2_sriov] section set PCI Device vendor and product IDs you use, in format
vendor_id:product_id
supported_pci_vendor_devs =
Example:
supported_pci_vendor_devs =
Hi Daniel,
Please see inline
-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com]
Sent: Tuesday, December 09, 2014 4:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech
Hi Murali,
It seems there is a mismatch between the pci_whitelist configuration and the
requested network.
In the table below:
physical_network: physnet2
In the error you sent, there is:
; Russell Bryant; Ian Wells (iawells); Irena
Berezovsky; ba...@cisco.com
Cc: Nikola Đipanov; Russell Bryant; OpenStack Development Mailing List (not for
usage questions)
Subject: [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other
features that can't be tested on current infra.
Hi all
Count me in
From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, November 11, 2014 3:47 PM
To: OpenStack List
Subject: [openstack-dev] [Neutron] Translation technical debt
Hi,
In order to enforce our translation policies -
http://docs.openstack.org/developer/oslo.i18n/guidelines.html -
Hi,
We thought it would be a good idea to have a chat regarding further SR-IOV
enhancements that we want to achieve during Kilo.
If you are interested in discussing it, please join us on Wednesday the 5th, at 13:15 at
the developers lounge.
The list of topics raised till now can be found here:
Hi Sean,
Is there any chance to change this time slot?
Unfortunately, I won't be there on Friday.
BR,
Irena
-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
Sent: Thursday, October 30, 2014 5:50 PM
To: OpenStack Development Mailing List (not for usage
Hi Sean,
Will be great to meet in person and discuss QoS adoption path.
Count me in,
Irena
-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
Sent: Tuesday, October 28, 2014 8:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject:
Hi Simon,
Please check your neutron server configuration.
To support VXLAN networks, you should have the following configuration in the
ml2_conf.ini:
[ml2]
type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 65537:6
For the
Hi Don,
It seems there is a problem on the neutron side: ML2 refuses to bind the
port.
Can you please share the error you get at the neutron server?
I am not sure, but it seems the neutron ml2 configuration is not accurate.
With the commands you shared, I think you should change it as follows:
[ovs]
Hi,
While keeping focused on defining a proper approach to deal with Neutron
third-party vendors' plugins and drivers, we also need to provide a solution for
a complementary critical piece of code maintained in the Nova code base.
Introducing a new vif_type by a neutron L2 Plugin/Driver requires adding vif
Following the last PCI pass-through meeting , we want to start thinking about
features/add-ons that need to be addressed in the Kilo Release.
I created an etherpad (reused Doug's template) for topics related to PCI
pass-through, mostly focused on SR-IOV networking:
blueprint mentioning sriov
macvtap. Do you have any insights into this one, too? What we also would like
to do is to introduce macvtap as network virtualization option. Macvtap also
registers mac addresses to network adapters...
Thanks,
Andreas
On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky
Hi Andreas,
Thank you for this initiative.
We were looking at a similar problem of mixing OVS and SR-IOV on the same
network adapter, which also requires mac address registration of OVS ports.
Please let me know if you would like to collaborate on this effort.
BR,
Irena
-Original Message-
Hi,
As announced in the last neutron meeting [1], the Mellanox plugin is being
deprecated. Juno is the last release to support Mellanox plugin.
The Mellanox ML2 Mechanism Driver is replacing the plugin and introduced since
Icehouse release.
[1]
should be triggered by nova changes in the PCI area.
What do you suggest?
Irena
From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [Nova] PCI support
Hi,
Mellanox CI was also failing due to the same issue,
https://bugs.launchpad.net/neutron/+bug/1355780 (apparently duplicated bug for
https://bugs.launchpad.net/neutron/+bug/1353309)
We currently fixed the issue locally, by patching the server side RPC version
support to 1.3.
BR,
Irena
nova patches.
What tests do you think it should run for nova side?
Thanks,
Irena
From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 13, 2014 10:10 AM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [Nova] PCI
+1
-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, August 13, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] Rotating the weekly Neutron meeting
Per this week's Neutron meeting [1], it
Hi Gary,
Mellanox has already established CI support on Mellanox SR-IOV NICs, as one of
the jobs of the Mellanox External Testing CI
(Check-MLNX-Neutron-ML2-Sriov-driver: http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
It is not voting yet, but will be soon.
BR,
Irena
Hi Chuck,
I'll comment regarding Mellanox Plug-in and Ml2 Mech driver in the review.
BR,
Irena
-Original Message-
From: Carlino, Chuck (OpenStack TripleO, Neutron) [mailto:chuck.carl...@hp.com]
Sent: Wednesday, August 06, 2014 10:42 PM
To: OpenStack Development Mailing List (not for
Hi Robert,
Please see inline
-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Friday, July 25, 2014 12:44 AM
To: mest...@mestery.com; Irena Berezovsky
Cc: Akihiro Motoki; Sandhya Dasu (sadasu); OpenStack Development Mailing List
(not for usage questions)
Subject
similar to bridge_mappings:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_openvswitch.py#43
BR,
Irena
From: Czesnowicz, Przemyslaw [mailto:przemyslaw.czesnow...@intel.com]
Sent: Thursday, July 10, 2014 6:20 PM
To: Irena Berezovsky; OpenStack Development Mailing
Hi,
For passing information from neutron to the nova VIF Driver, you should use the
binding:vif_details dictionary. You may not require a new VIF_TYPE, but can
leverage the existing VIF_TYPE_OVS and add 'use_dpdk' in the vif_details
dictionary. This will require some rework of the existing libvirt
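A sketch of what such a port binding could look like: 'use_dpdk' is the key proposed in the mail, while the helper name and the overall shape of the dict are illustrative, not neutron code.

```python
# Hypothetical sketch: a mechanism driver reusing VIF_TYPE_OVS and
# signalling DPDK to the nova VIF driver through binding:vif_details.
VIF_TYPE_OVS = "ovs"

def build_ovs_binding(use_dpdk=False):
    """Build the binding fields a driver could set on a port."""
    return {
        "binding:vif_type": VIF_TYPE_OVS,
        "binding:vif_details": {
            "use_dpdk": use_dpdk,  # to be consumed by a reworked libvirt VIF driver
        },
    }

binding = build_ovs_binding(use_dpdk=True)
print(binding["binding:vif_details"]["use_dpdk"])  # True
```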
I'll chair this week's PCI SR-IOV pass-through meeting for those who would like
to attend.
BR,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Tuesday, July 01, 2014 5:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][sriov] weekly
Hi Mohammad,
Thank you for sharing the links.
Can you please elaborate on the columns of the table in [1]? Is [R] supposed to
be for spec review and [C] for code review?
If this is correct, would it be possible to add [C] columns for already merged
specs that still have the code under review?
Thanks a
+ 1
Would love to join the gang :)
-Original Message-
From: Assaf Muller [mailto:amul...@redhat.com]
Sent: Friday, June 13, 2014 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova]{neutron] Mid cycle sprints
- Original
Hi Luke,
Please see my comments inline.
BR,
Irena
From: Luke Gorrie [mailto:l...@tail-f.com]
Sent: Monday, June 09, 2014 12:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ml2] Too much shim rest proxy
mechanism drivers in ML2
On 6
://etherpad.openstack.org/p/modular-l2-agent-outline
Best Regards,
Irena
From: luk...@gmail.com [mailto:luk...@gmail.com] On Behalf Of Luke Gorrie
Sent: Tuesday, June 10, 2014 12:48 PM
To: Irena Berezovsky
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev
+1 to attend,
Regards,
Irena
-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
Sent: Wednesday, May 21, 2014 5:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?
Hi,
The
(not for usage questions);
John Garbutt; Russell Bryant; yunhong-jiang; Itzik Brown; Yongli He; Jay Pipes;
Irena Berezovsky
Subject: Re: Informal meeting before SR-IOV summit presentation
the program pods area should be open.
On 5/9/14, 3:33 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote:
I have
Garbutt; Russell Bryant; yunhong-jiang; Itzik Brown; Brent Eagles; Yongli
He; Jay Pipes; Irena Berezovsky
Subject: Re: Informal meeting before SR-IOV summit presentation
It sounds good to me.
Thanks Sandhya for organizing it.
Robert
On 5/9/14, 2:51 PM, Sandhya Dasu (sadasu) sad...@cisco.com wrote
I would like to join this discussion.
Thanks,
Irena
-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
Sent: Tuesday, May 06, 2014 7:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Interest in a
Hi Paul,
Please be aware that there was also a change in nova to support ovs_hybrid_plug:
https://review.openstack.org/#/c/83190/
I am not sure, but it may be worth checking that the nova code and the nova.conf
you are using are aligned with the neutron code.
Hope it helps,
Irena
From: Paul Michali (pcm)
Hi Li Ma,
ML2 binding:profile is accessible to the admin user only.
Currently it can be set via port-create/port-update CLI following this syntax:
'neutron port-create netX --binding:profile type=dict keyX=valX'
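What that CLI syntax produces can be sketched as follows; the request-body shape follows the standard port API, and the netX/keyX/valX values are just the placeholders from the example above:

```python
import json

# The 'type=dict keyX=valX' arguments from the CLI line above end up as
# a JSON object under binding:profile in the port-create request body.
port_request = {
    "port": {
        "network_id": "netX",  # placeholder from the CLI example
        "binding:profile": {"keyX": "valX"},
    }
}
print(json.dumps(port_request["port"]["binding:profile"], sort_keys=True))
# {"keyX": "valX"}
```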
BR,
Irena
-Original Message-
From: Li Ma [mailto:m...@awcloud.com]
Sent:
-Original Message-
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Wednesday, March 05, 2014 9:04 AM
To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development Mailing
List (not for usage questions); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron
review
https://review.openstack.org/#/c/74464/ ?
I think it will be easier to follow up on the comments and decisions.
Thanks,
Irena
-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 6:10 PM
To: Irena Berezovsky; OpenStack Development
-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage
questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova
-
From: yongli he [mailto:yongli...@intel.com]
Sent: Tuesday, March 04, 2014 3:28 AM
To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing List
Subject: PCI SRIOV meeting suspend?
HI, Robert
does it stop for a while?
and if it is convenient for you, please review this patch set
Hi Paul,
I think the tests fail because of a SystemExit exception raised by
service_base.py when the plugin fails to load drivers. It terminates the tests.
BR,
Irena
From: Paul Michali [mailto:p...@cisco.com]
Sent: Tuesday, March 04, 2014 7:34 AM
To: OpenStack Development Mailing List (not
it should work for your case, and if you
need L2 agent for this.
BR,
Irena
-Original Message-
From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Tuesday, February 25, 2014 4:19 PM
To: OpenStack Development Mailing List (not for usage questions); Irena
Berezovsky; Robert Kukura
Hi Nishant,
Following Salvatore's suggestion, I think the best option is to use the ML2
plugin to make several backend technologies available in your setup.
If you are looking to deploy the Mellanox solution alongside another technology,
there is a Mellanox ML2 Mechanism Driver that is currently under
Hi,
As stated below, this work is already happening in both nova and neutron.
Please take a look at the following discussions:
https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting
For neutron part there are two different flavors that are coming as part of
this effort:
1. Cisco SRIOV
Please see inline my understanding
-Original Message-
From: Robert Kukura [mailto:rkuk...@redhat.com]
Sent: Tuesday, February 04, 2014 11:57 PM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage
questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
Seems the openstack-meeting-alt is busy, let's use openstack-meeting
From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Monday, February 03, 2014 8:28 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development
Mailing List (not for usage questions); Brian Bowen
and neutron.
BR,
Irena
From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron
Mech. Drivers.
More comments inline
BR,
IRena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev
Hi Robert,
Please see inline, I'll try to post my understanding.
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack
Development Mailing List (not for usage questions)
Subject: Re
Please see inline
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Thursday, January 30, 2014 1:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th
On 29 January 2014 23:50, Robert Kukura
coding.
thanks,
Robert
On 1/22/14 8:03 AM, Robert Li (baoli)
ba...@cisco.com wrote:
Sounds great! Let's do it on Thursday.
--Robert
On 1/22/14 12:46 AM, Irena Berezovsky
ire...@mellanox.com wrote:
Hi Robert, all,
I would suggest not to delay the SR
.
see inline as well.
thanks,
Robert
On 1/27/14 10:54 AM, Irena Berezovsky
ire...@mellanox.com wrote:
Hi Robert, all,
My comments inline
Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing
. But it may be a good idea to come up with Modular Agent.
BR,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 11:16 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage
questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through
for the VMs, either macvtap or direct assignment. And the PF is used for
the uplink to the linux bridge or OVS!!
My question to the team is whether we consider both of these deployments or not?
Thx,
Nrupal
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
Hi Nishant,
The Mellanox plugin supports two types of VF provisioning: pci passthrough
(hostdev) and macvtap (mlnx_direct) vNIC.
According to the log, you want to use the first flavor (hostdev).
Please follow the instructions in:
Hi Robert, all,
I would suggest not delaying the SR-IOV discussion to next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches
to cover the full
Hi,
Having post PCI meeting discussion with Ian based on his proposal
https://docs.google.com/document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU/edit?pli=1#,
I am not sure that the case most relevant for SR-IOV based networking is
covered well by this proposal. The understanding I got
Hi Robert, Yonhong,
Although the network XML solution (option 1) is very elegant, it has one major
disadvantage. As Robert mentioned, the disadvantage of the network XML is the
inability to know which SR-IOV PCI device was actually allocated. When neutron
is responsible for setting up the networking
Ian,
Thank you for putting in writing the ongoing discussed specification.
I have added few comments on the Google doc [1].
As for live migration support, this can also be done without libvirt network
usage.
Not very elegant, but working: rename the interface of the PCI device to some
logical
Hi,
After having a lot of discussions both on IRC and the mailing list, I would like
to suggest defining basic use cases for PCI pass-through network support with an
agreed list of limitations and assumptions, and implementing it. By doing this
Proof of Concept we will be able to deliver basic PCI
to help you. Conveniently he's
also core. ;)
--
Ian.
On 12 January 2014 22:12, Irena Berezovsky
ire...@mellanox.com wrote:
Hi John,
Thank you for taking the initiative and summing up the work that needs to be done
to provide PCI pass-through network support.
The only item I
Hi John,
Thank you for taking the initiative and summing up the work that needs to be done
to provide PCI pass-through network support.
The only item I think is missing is the neutron support for PCI pass-through.
Currently we have the Mellanox Plugin, which supports PCI pass-through assuming
Mellanox
Please, see inline
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, December 24, 2013 1:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI
pass-through network support
On autodiscovery and
Hi Ian,
My comments are inline
I would like to suggest focusing the next PCI pass-through IRC meeting on:
1. Closing the administration and tenant-that-powers-the-VM use cases.
2. Decoupling the nova and neutron parts to start focusing on the neutron
related details.
BR,
Irena