On 29 May 2018 at 14:53, Jeremy Stanley wrote:
> On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote:
> [...]
> > Maybe it would be different now that I am a Core/PTL but in the past I
> > had been warned to be careful as it could be misinterpreted if I was
> > changing other people's
If your nitpick is a spelling mistake or the need for a comment where
you've pretty much typed the text of the comment in the review comment
itself, then I have personally found it easiest to use the Gerrit online
editor to actually update the patch yourself. There's nothing magical
about the
On 28 December 2017 at 06:57, CARVER, PAUL wrote:
> It was a gating criterion for stadium status. The idea was that for a
> stadium project the neutron team would have review authority over the API
> but wouldn't necessarily review or be overly familiar with the
>
Hey,
Can someone explain how the API definition files for several service
plugins ended up in neutron-lib? I can see that they've been moved there
from the plugins themselves (e.g. networking-bgpvpn has
In conjunction with the release of VPP 17.10, I'd like to invite you all to
try out networking-vpp 17.10(*) for VPP 17.10. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
Since OVS is doing L2 forwarding, you should be fine setting the MTU to as
high as you choose, which would probably be the segment_mtu in the config,
since that's what it defines - the largest MTU that (from the Neutron API
perspective) is usable and (from the OVS perspective) will be used in the
In conjunction with the release of VPP 17.07, I'd like to invite you all to
try out networking-vpp 17.07.1 for VPP 17.07. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
On 7 July 2017 at 12:14, Ihar Hrachyshka wrote:
> > That said: what will you do with existing VMs that have been told the
> > MTU of their network already?
>
> Same as we do right now when modifying configuration options defining
> underlying MTU: change it on API layer,
OK, so I should read before writing...
On 5 July 2017 at 18:11, Ian Wells <ijw.ubu...@cack.org.uk> wrote:
> On 5 July 2017 at 14:14, Ihar Hrachyshka <ihrac...@redhat.com> wrote:
>
>> Heya,
>>
>> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved
On 5 July 2017 at 14:14, Ihar Hrachyshka wrote:
> Heya,
>
> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved for
> Pike that allows setting MTU for network on creation.
This was actually in the very first MTU spec (in case no one looked),
though it never
I'm coming to this cold, so apologies if I put my foot in my mouth. But
I'm trying to understand what you're actually getting at here - other than
helpful simplicity - and I'm not following the detail of your thinking,
so take this as a form of enquiry.
On 14 May 2017 at 10:02, Monty Taylor
There are two steps to how this information is used:
Step 1: create a network - the type driver config on the neutron-server
host will determine which physnet and VLAN ID to use when you create it.
It gets stored in the DB. No networking is actually done, we're just
making a reservation here.
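A rough sketch of that reservation step (all names here are hypothetical, for illustration only; the real ML2 VLAN type driver is considerably more involved):

```python
# Illustrative sketch of "step 1": pick a physnet and VLAN ID from the
# configured ranges and record the reservation in the DB. No actual
# networking is configured at this point.
AVAILABLE = {"physnet1": set(range(100, 103))}  # configured VLAN ranges

def allocate_segment(db):
    for physnet, vlans in AVAILABLE.items():
        used = {s["vlan"] for s in db if s["physnet"] == physnet}
        free = vlans - used
        if free:
            segment = {"physnet": physnet, "vlan": min(free)}
            db.append(segment)  # the "reservation": stored, nothing plumbed
            return segment
    raise RuntimeError("no free segments")

db = []
print(allocate_segment(db))  # {'physnet': 'physnet1', 'vlan': 100}
print(allocate_segment(db))  # {'physnet': 'physnet1', 'vlan': 101}
```

Step 2 (binding on the compute host) is where the dataplane work actually happens; this sketch covers only the bookkeeping.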
In conjunction with the release of VPP 17.04, I'd like to invite you all to
try out networking-vpp for VPP 17.04. VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
+1
On 21 February 2017 at 16:18, Ichihara Hirofumi wrote:
> +1
>
> 2017-02-17 14:18 GMT-05:00 Kevin Benton :
>
>> Hi all,
>>
>> I'm organizing a Neutron social event for Thursday evening in Atlanta
>> somewhere near the venue for dinner/drinks. If
On 25 January 2017 at 18:07, Kevin Benton wrote:
> >Setting aside all the above talk about how we might do things for a
> moment: to take one specific feature example, it actually took several
> /years/ to add VLAN-aware ports to OpenStack. This is an example of a
> feature
In conjunction with the release of VPP 17.01, I'd like to invite you all to
try out networking-vpp for VPP 17.01. VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
On 25 January 2017 at 14:17, Monty Taylor wrote:
> > Adding an additional networking project to try to solve this will only
> > make things worse. We need one API. If it needs to grow features, it
> > needs to grow features - but they should be features that all of
> >
I would certainly be interested in discussing this, though I'm not currently
signed up for the PTG. Obviously this is close to my interests, and I see
Kevin's raised Gluon as the bogeyman (which it isn't trying to be).
Setting aside all the above talk about how we might do things for a moment:
to
I see this changes a function's argument types without changing the
function's name - for instance, in the proposed networking-cisco change,
https://review.openstack.org/#/c/409045/ . This makes it hard to detect
that there's been a change and react accordingly. What's the recommended
way to
+1
On 14 October 2016 at 11:30, Miguel Lavalle wrote:
> Dear Neutrinos,
>
> I am organizing a social event for the team on Thursday 27th at 19:30.
> After doing some Google research, I am proposing Raco de la Vila, which is
> located in Poblenou:
On 6 October 2016 at 10:43, Jay Pipes wrote:
> On 10/06/2016 11:58 AM, Naveen Joy (najoy) wrote:
>
>> It’s primarily because we have seen better stability and scalability
>> with etcd over rabbitmq.
>>
>
> Well, that's kind of comparing apples to oranges. :)
>
> One is a
We'd like to introduce the VPP mechanism driver, networking-vpp[1], to the
developer community.
networking-vpp is an ML2 mechanism driver to control DPDK-based VPP
user-space forwarders on OpenStack compute nodes. The code does what
mechanism drivers do - it connects VMs to each other and to
On 5 September 2016 at 17:08, Flavio Percoco wrote:
> We should probably start by asking ourselves who's really being bitten by
> the
> messaging bus right now? Large (and please, let's not bikeshed on what a
> Large
> Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
>
On 1 September 2016 at 06:52, Ken Giusti <kgiu...@gmail.com> wrote:
> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells <ijw.ubu...@cack.org.uk> wrote:
>
> > I have opinions about other patterns we could use, but I don't want to
push
> > my solutions here, I want to
On 31 August 2016 at 10:12, Clint Byrum wrote:
> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > On 31 August 2016 at 11:57, Bogdan Dobrelya wrote:
> >
> > > I agree that RPC design pattern, as it is implemented now, is a major
On 29 August 2016 at 03:48, Jay Pipes wrote:
> On 08/27/2016 11:16 AM, HU, BIN wrote:
>
>> So telco use cases are not only the innovation built on top of OpenStack.
>> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
>> Cloud, Mobile Edge Cloud, bring
On 11 July 2016 at 12:52, Sam Yaple wrote:
> After lots of fun on IRC I have given up this battle. I am giving up
> quickly because frickler has proposed a workaround (or better solution
> depending on who you ask). So for all of you keeping track at home, if you
> want your
On 11 July 2016 at 11:49, Sean M. Collins wrote:
> Sam Yaple wrote:
> > In this situation, since you are mapping real-ips and the real world runs
> > on 1500 mtu
>
> Don't be so certain about that assumption. The Internet is a very big
> and diverse place
OK, I'll
On 11 July 2016 at 11:12, Chris Friesen wrote:
> On 07/11/2016 10:39 AM, Jay Pipes wrote:
>
>> Out of curiosity, in what scenarios is it better to limit the instance's
>> MTU to a value lower than that of the maximum path MTU of the
>> infrastructure? In other
>>
On 18 April 2016 at 04:33, Ihar Hrachyshka wrote:
> Akihiro Motoki wrote:
>
> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
>>
>>> Sławek Kapłoński wrote:
>>>
>>> Hello,
What MTU have you got configured
In general, while you've applied this to networking (and it's not the first
time I've seen this proposal), the same technique will work with any device
- PF or VF, networking or other:
- notify the VM via an accepted channel that a device is going to be
temporarily removed
- remove the device
-
On 27 January 2016 at 11:06, Flavio Percoco wrote:
> FWIW, the current governance model does not prevent competition. That's
> not to
> be understood as we encourage it but rather than there could be services
> with
> some level of overlap that are still worth being separate.
As I recall, network_device_mtu sets up the MTU on a bunch of structures
independently of whatever the correct value is. It was a bit of a
workaround back in the day and is still a bit of a workaround now. I'd
sooner we actually fix up the new mechanism (which is kind of hard to do
when the
On 25 January 2016 at 07:06, Matt Kassawara wrote:
> Overthinking and corner cases led to the existing implementation which
> doesn't solve the MTU problem and arguably makes the situation worse
> because options in the configuration files give operators the impression
> they can
On 23 January 2016 at 11:27, Adam Lawson wrote:
> For the sake of over-simplification, is there ever a reason to NOT enable
> jumbo frames in a cloud/SDN context where most of the traffic is between
> virtual elements that all support it? I understand that some switches do
>
I wrote the spec for the MTU work that's in the Neutron API today. It
haunts my nightmares. I learned so many nasty corner cases for MTU, and
you're treading that same dark path.
I'd first like to point out a few things that change the implications of
what you're reporting in strange ways. [1]
On 22 January 2016 at 10:35, Neil Jerram wrote:
> * Why change from ML2 to core plugin?
>
> - It could be seen as resolving a conceptual mismatch.
> networking-calico uses
> IP routing to provide L3 connectivity between VMs, whereas ML2 is
> ostensibly
> all about
ironment, at least if everything is working as
intended.
--
Ian.
[1]
https://github.com/openstack/neutron/blob/544ff57bcac00720f54a75eb34916218cb248213/releasenotes/notes/advertise_mtu_by_default-d8b0b056a74517b8.yaml#L5
> On Jan 24, 2016 20:48, "Ian Wells" <ijw.ubu...@cack.org.
t even though it's a
> behavior change considering the current behavior is annoying. :)
> On Jan 24, 2016 23:31, "Ian Wells" <ijw.ubu...@cack.org.uk> wrote:
>
>> On 24 January 2016 at 22:12, Kevin Benton <blak...@gmail.com> wrote:
>>
>>> >The re
50+hacks and other methods of today will find their
system changes behaviour if we started setting that specific default.
Regardless, we need to take that documentation and update it. It was a
nasty hack back in the day and not remotely a good idea now.
> On Jan 24, 2016 23:00, "
Actually, I note that that document is Juno and there doesn't seem to be
anything at all in the Liberty guide now, so the answer is probably to add
settings for path_mtu and segment_mtu in the recommended Neutron
configuration.
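Concretely, that would look something like this in ml2_conf.ini (option names are the ones discussed in the thread; the section placement and 9000-byte values are illustrative, assuming a jumbo-frame-capable fabric):

```ini
[ml2]
# Largest MTU of any underlying L2 segment (illustrative value)
segment_mtu = 9000
# Maximum path MTU across routed/overlay paths (illustrative value)
path_mtu = 9000
```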
On 24 January 2016 at 22:26, Ian Wells <ijw.ubu...@cack.org.uk>
On 12 October 2015 at 21:18, Clint Byrum wrote:
> We _would_ keep a local cache of the information in the schedulers. The
> centralized copy of it is to free the schedulers from the complexity of
> having to keep track of it as state, rather than as a cache. We also don't
>
On 11 October 2015 at 00:23, Clint Byrum wrote:
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its
On 10 October 2015 at 23:47, Clint Byrum wrote:
> > Per before, my suggestion was that every scheduler tries to maintain a
> > copy of the cloud's state in memory (in much the same way, per the
> > previous example, as every router on the internet tries to make a route
> > table
On 9 October 2015 at 18:29, Clint Byrum wrote:
> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes
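The push-stats-and-watch pattern Clint describes can be sketched with an in-memory stand-in for the ZK/Consul store (a real deployment would use a client such as kazoo; every name here is illustrative):

```python
# Toy stand-in for a ZK/Consul-like store: compute nodes push their
# stats, schedulers register watches and keep a local cache fresh.
class StatStore:
    def __init__(self):
        self.stats = {}
        self.watchers = []

    def push(self, node, data):
        """Compute-node side: publish current stats."""
        self.stats[node] = data
        for callback in self.watchers:
            callback(node, data)

    def watch(self, callback):
        """Scheduler side: be notified of every stats change."""
        self.watchers.append(callback)

store = StatStore()
cache = {}  # a scheduler's local view, kept fresh by the watch
store.watch(lambda node, data: cache.__setitem__(node, data))
store.push("compute-1", {"free_ram_mb": 2048})
print(cache)  # {'compute-1': {'free_ram_mb': 2048}}
```

The point of the pattern is that the scheduler's copy is a cache refreshed by notifications, not state it has to reconstruct itself.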
On 9 October 2015 at 12:50, Chris Friesen
wrote:
>> Has anybody looked at why 1 instance is too slow and what it would take to
>> make 1 scheduler instance work fast enough? This does not preclude the
>> use of concurrency for finer grain tasks in the background.
On 8 October 2015 at 13:28, Ed Leafe <e...@leafe.com> wrote:
> On Oct 8, 2015, at 1:38 PM, Ian Wells <ijw.ubu...@cack.org.uk> wrote:
> > Truth be told, storing that data in MySQL is secondary to the correct
> functioning of the scheduler.
>
> I have no problem with
On 7 October 2015 at 22:17, Chris Friesen <chris.frie...@windriver.com>
wrote:
> On 10/07/2015 07:23 PM, Ian Wells wrote:
>
>>
>> The whole process is inherently racy (and this is inevitable, and
>> correct),
>>
>>
> Why is it inevitable?
>
It's in
On 8 October 2015 at 09:10, Ed Leafe wrote:
> You've hit upon the problem with the current design: multiple, and
> potentially out-of-sync copies of the data.
Arguably, this is the *intent* of the current design, not a problem with
it. The data can never be perfect (ever) so
On 7 October 2015 at 16:00, Chris Friesen
wrote:
> 1) Some resources (RAM) only require tracking amounts. Other resources
> (CPUs, PCI devices) require tracking allocation of specific individual host
> resources (for CPU pinning, PCI device allocation, etc.).
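The distinction Chris draws can be sketched as a data structure (hypothetical names, not Nova's actual resource tracker):

```python
# RAM needs only an amount; pinned CPUs (or PCI devices) need identity:
# it matters *which* units were handed out, not just how many.
class HostResources:
    def __init__(self, ram_mb, cpu_ids):
        self.free_ram_mb = ram_mb        # tracked as a quantity
        self.free_cpus = set(cpu_ids)    # tracked as individual units

    def claim(self, ram_mb, ncpus):
        if ram_mb > self.free_ram_mb or ncpus > len(self.free_cpus):
            raise ValueError("insufficient resources")
        self.free_ram_mb -= ram_mb
        picked = {self.free_cpus.pop() for _ in range(ncpus)}
        return picked                    # caller learns exactly which CPUs

host = HostResources(ram_mb=4096, cpu_ids=[0, 1, 2, 3])
pinned = host.claim(ram_mb=1024, ncpus=2)
print(len(pinned), host.free_ram_mb)  # 2 3072
```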
Can I ask a different question - could we reject a few simple-to-check
things on the push, like bad commit messages? For things that take 2
seconds to fix and do make people's lives better, it's not that they're
rejected, it's that the whole rejection cycle via gerrit review (push/wait
for tests
Neutron already offers a DNS server (within the DHCP namespace, I think).
It does forward on non-local queries to an external DNS server, but it
already serves local names for instances; we'd simply have to set one
aside, or perhaps use one in a 'root' but nonlocal domain
(metadata.openstack
On 21 July 2015 at 07:52, Carl Baldwin c...@ecbaldwin.net wrote:
Now, you seem to generally be thinking in terms of the latter model,
particularly since the provider network model you're talking about fits
there. But then you say:
Actually, both. For example, GoDaddy assigns each VM an IP
20, 2015 4:26 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
There are two routed network models:
- I give my VM an address that bears no relation to its location and
ensure the routed fabric routes packets there - this is very much the
routing protocol method for doing things where I have
It is useful, yes; and posting diffs on the mailing list is not the way to
get them reviewed and approved. If you can get this on gerrit it will get
a proper review, and I would certainly like to see something like this
incorporated.
On 21 July 2015 at 15:41, John Nielsen li...@jnielsen.net
On 20 July 2015 at 10:21, Neil Jerram neil.jer...@metaswitch.com wrote:
Hi Ian,
On 20/07/15 18:00, Ian Wells wrote:
On 19 July 2015 at 03:46, Neil Jerram neil.jer...@metaswitch.com
mailto:neil.jer...@metaswitch.com wrote:
The change at [1] creates and describes a new 'routed' value
On 19 July 2015 at 03:46, Neil Jerram neil.jer...@metaswitch.com wrote:
The change at [1] creates and describes a new 'routed' value for
provider:network_type. It means that a compute host handles data
to/from the relevant TAP interfaces by routing it, and specifically
that those TAP
There are two routed network models:
- I give my VM an address that bears no relation to its location and ensure
the routed fabric routes packets there - this is very much the routing
protocol method for doing things where I have injected a route into the
network and it needs to propagate. It's
On 11 June 2015 at 02:37, Andreas Scheuring scheu...@linux.vnet.ibm.com
wrote:
Do you happen to know how data gets routed _to_ a VM, in the
type='network' case?
Neil, sorry no. Haven't played around with that, yet. But from reading
the libvirt man, it looks good. It's saying Guest network
On 11 June 2015 at 15:34, Michael Still mi...@stillhq.com wrote:
On Fri, Jun 12, 2015 at 7:07 AM, Mark Boo mrkzm...@gmail.com wrote:
- What functionality is missing (if any) in config drive / metadata
service
solutions to completely replace file injection?
None that I am aware of. In
On 11 June 2015 at 12:37, Richard Raseley rich...@raseley.com wrote:
Andrew Laski wrote:
There are many reasons a deployer may want to live-migrate instances
around: capacity planning, security patching, noisy neighbors, host
maintenance, etc... and I just don't think the user needs to know
I don't see a problem with this, though I think you do want plug/unplug
calls to be passed on to Neutron so that has the opportunity to set up the
binding from its side (usage 0) and tear it down when you're done with it
(usage 1).
There may be a set of races you need to deal with, too - what
to decouple neutron from nova dependency.
Thank you for sharing this,
Irena
[1] https://review.openstack.org/#/c/162468/
On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this
to a hopefully interested
The fix should work fine. It is technically a workaround for the way
checksums work in virtualised systems, and the unfortunate fact that some
DHCP clients check checksums on packets where the hardware has checksum
offload enabled. (This doesn't work due to an optimisation in the way QEMU
treats
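If memory serves, the workaround in question boils down to a netfilter rule of roughly this shape, shown here as a standalone command for illustration (the DHCP agent installs the equivalent rule itself):

```sh
# Fill in UDP checksums on DHCP responses (client port 68) so guests
# whose DHCP client verifies checksums accept offloaded packets.
iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill
```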
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
a hopefully interested audience.
At the summit, we wrote up a spec we were thinking of doing at [1]. It
actually proposes two things, which is a little naughty really, but hey.
Firstly we propose that we turn binding
On 13 May 2015 at 10:30, Vinod Pandarinathan (vpandari) vpand...@cisco.com
wrote:
- Traditional monitoring tools (Nagios, Zabbix, ...) are necessary anyway
for infrastructure monitoring (CPU, RAM, disks, operating system, RabbitMQ,
databases and more) and diagnostic purposes. Adding OpenStack
On 20 April 2015 at 17:52, David Kranz dkr...@redhat.com wrote:
On 04/20/2015 08:07 PM, Ian Wells wrote:
Whatever your preference might be, I think it's best we lose the
ambiguity. And perhaps advertise that page a little more widely, actually
- I hadn't come across it in my travels
On 20 April 2015 at 07:40, Boris Pavlovic bo...@pavlovic.me wrote:
Dan,
IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is
On 20 April 2015 at 13:02, Kevin L. Mitchell kevin.mitch...@rackspace.com
wrote:
On Mon, 2015-04-20 at 13:57 -0600, Chris Friesen wrote:
However, minor changes like that could still possibly break clients
that are not
expecting them. For example, a client that uses the json response as
On 20 April 2015 at 15:23, Matthew Treinish mtrein...@kortar.org wrote:
On Mon, Apr 20, 2015 at 03:10:40PM -0700, Ian Wells wrote:
It would be nice to have a consistent policy here; it would make future
decision making easier and it would make it easier to write specs if we
knew what
This puts me in mind of a previous proposal, from the Neutron side of
things. Specifically, I would look at Erik Moe's proposal for VM ports
attached to multiple networks:
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms .
I believe that you want logical ports hiding behind a
I am trying to understand how a guest OS uses a trunking network.
If the guest OS uses a bridge like Linux bridge or OVS, how do we launch it
and how does libvirt support it?
Thanks,
-Ruijing
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent:* Wednesday, March 25, 2015 2:18 AM
*To:* OpenStack
On 24 March 2015 at 11:45, Armando M. arma...@gmail.com wrote:
This may be besides the point, but I really clash with the idea that we
provide a reference implementation on something we don't have CI for...
Aside from the unit testing, it is going to get a test for the case we can
test - when
That spec ensures that you can tell what the plugin is doing. You can ask
for a VLAN transparent network, but the cloud may tell you it can't make
one.
The OVS driver in OpenStack drops VLAN-tagged packets, I'm afraid, and the
spec you're referring to doesn't change that. The spec does ensure
On 22 March 2015 at 07:48, Jay Pipes jaypi...@gmail.com wrote:
On 03/20/2015 05:16 PM, Kevin Benton wrote:
To clarify a bit, we obviously divide lots of things by tenant (quotas,
network listing, etc). The difference is that we have nothing right now
that has to be unique within a tenant.
On 20 March 2015 at 15:49, Salvatore Orlando sorla...@nicira.com wrote:
The MTU issue has been a long-standing problem for neutron users. What
this extension is doing is simply, in my opinion, enabling API control over
an aspect users were dealing with previously through custom made scripts.
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
[4]
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
[5] https://review.openstack.org/#/c/136760/
On 19 March 2015 at 14:56, Ian
There are precedents for this. For example, the attributes that currently
exist for IPv6 advertisement are very similar:
- added during the run of a stable Neutron API
- properties added on a Neutron object (MTU and VLAN affect network, but
IPv6 affects subnet - same principle though)
-
Per the other discussion on attributes, I believe the change walks in
historical footsteps and it's a matter of project policy choice. That
aside, you raised a couple of other issues on IRC:
- backward compatibility with plugins that haven't adapted their API - this
is addressed in the spec,
On 19 March 2015 at 11:44, Gary Kotton gkot...@vmware.com wrote:
Hi,
Just the fact that we did this does not make it right. But I guess that we
are starting to bend the rules. I think that we really need to be far more
diligent about this kind of stuff. Having said that we decided the
On 18 March 2015 at 03:33, Duncan Thomas duncan.tho...@gmail.com wrote:
On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core)
amos.steven.da...@hp.com wrote:
Ceph/Cinder:
LVM or other?
SCSI-backed?
Any others?
I'm wondering why any of the above matter to an application.
The Neutron
On 12 March 2015 at 05:33, Fredy Neeser fredy.nee...@solnet.ch wrote:
2. I'm using policy routing on my hosts to steer VXLAN traffic (UDP
dest. port 4789) to interface br-ex.12 -- all other traffic from
192.168.1.14 is source routed from br-ex.1, presumably because br-ex.1 is a
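That kind of steering can be expressed with fwmark-based policy routing, roughly as follows (the mark value, table number, and use of fwmark rules are my assumptions for illustration, not details from the original setup):

```sh
# Mark VXLAN encap traffic (UDP dest 4789) and send it out br-ex.12,
# leaving all other traffic on the main table via br-ex.1.
iptables -t mangle -A OUTPUT -p udp --dport 4789 -j MARK --set-mark 0xc
ip rule add fwmark 0xc lookup 100
ip route add default dev br-ex.12 table 100
```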
On 11 March 2015 at 04:27, Fredy Neeser fredy.nee...@solnet.ch wrote:
7: br-ex.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default
link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
On 11 March 2015 at 10:56, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:
While looking at some other problems yesterday [1][2] I stumbled across
this feature change in Juno [3] which adds a config option
allow_duplicate_networks to the [neutron] group in nova. The default
value is False,
On 6 March 2015 at 13:16, Sławek Kapłoński sla...@kaplonski.pl wrote:
Hello,
Today I found bug https://bugs.launchpad.net/neutron/+bug/1314614 because
I
have such problem on my infra.
(For reference, if you delete a port that Nova is using - it just goes
ahead and deletes the port from
With apologies for derailing the question, but would you care to tell us
what evil you're planning on doing? I find it's always best to be informed
about these things.
--
Ian.
(Why yes, it *is* a Saturday morning.)
On 6 March 2015 at 12:23, Michael Krotscheck krotsch...@gmail.com wrote:
On 2 February 2015 at 09:49, Chris Friesen chris.frie...@windriver.com
wrote:
On 02/02/2015 10:51 AM, Jay Pipes wrote:
This is a bug that I discovered when fixing some of the NUMA related nova
objects. I have a patch that should fix it up shortly.
Any chance you could point me at it or
On 28 January 2015 at 17:32, Robert Collins robe...@robertcollins.net
wrote:
E.g. it's a call (not cast) out to Neutron, and Neutron returns when
the VIF(s) are ready to use, at which point Nova brings the VM up. If
the call times out, we error.
I don't think this model really works with
Lots of open questions in here, because I think we need a long conversation
on the subject.
On 23 January 2015 at 15:51, Kevin Benton blak...@gmail.com wrote:
It seems like a change to using internal RPC interfaces would be pretty
unstable at this point.
Can we start by identifying the
Once more, I'd like to revisit the VIF_VHOSTUSER discussion [1]. I still
think this is worth getting into Nova's libvirt driver - specifically
because there's actually no way to distribute this as an extension; since
we removed the plugin mechanism for VIF drivers, it absolutely requires a
code
Sukhdev,
Since the term is quite broad and has meant many things in the past, can
you define what you're thinking of when you say 'L2 gateway'?
Cheers,
--
Ian.
On 2 January 2015 at 18:28, Sukhdev Kapur sukhdevka...@gmail.com wrote:
Hi all,
HAPPY NEW YEAR.
Starting Monday (Jan 5th, 2015)
Hey Ryota,
A better way of describing it would be that the bridge name is, at present,
generated in *both* Nova *and* Neutron, and the VIF type semantics define
how it's calculated. I think you're right that in both cases it would make
more sense for Neutron to tell Nova what the connection
On 10 December 2014 at 01:31, Daniel P. Berrange berra...@redhat.com
wrote:
So the problem of Nova review bandwidth is a constant problem across all
areas of the code. We need to solve this problem for the team as a whole
in a much broader fashion than just for people writing VIF drivers. The
? Neutron provides low level hooks
and the rest is defined elsewhere. Maybe this could work, but there would
probably be other issues if the actual implementation is not on the edge or
outside Neutron.
/Erik
*From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
*Sent:* den 4 december 2014 20
On 1 December 2014 at 21:26, Mohammad Hanif mha...@brocade.com wrote:
I hope we all understand how edge VPN works and what interactions are
introduced as part of this spec. I see references to neutron-network
> mapping to the tunnel, which is not at all the case, and the edge-VPN spec
doesn’t
On 4 December 2014 at 08:00, Neil Jerram neil.jer...@metaswitch.com wrote:
Kevin Benton blak...@gmail.com writes:
I was actually floating a slightly more radical option than that: the
idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
absolutely _nothing_, not even create the
On 1 December 2014 at 04:43, Mathieu Rohon mathieu.ro...@gmail.com wrote:
This is not entirely true, as soon as a reference implementation,
based on existing Neutron components (L2agent/L3agent...) can exist.
The specific thing I was saying is that that's harder with an edge-id
mechanism than
On 27 November 2014 at 12:11, Mohammad Hanif mha...@brocade.com wrote:
Folks,
Recently, as part of the L2 gateway thread, there was some discussion on
BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron
network. Just to update everyone in the community, Ian and I have
On 19 November 2014 17:19, Sukhdev Kapur sukhdevka...@gmail.com wrote:
Folks,
> Like Ian, I am jumping in this very late as well - as I decided to travel
> around Europe after the summit; I just returned and am catching up :-):-)
I have noticed that this thread has gotten fairly convoluted and