Re: [Openstack-operators] vmware nsx 6

2016-02-17 Thread Ignazio Cassano
Many thanks, Mark.
We are going to test this solution following your instructions, and if we
find any problems we will contact you.
Regards
Ignazio

2016-02-17 20:45 GMT+01:00 Mark Voelker :

> Hi Ignazio,
>
> Sure, NSXv 6.2.1 is usable for a VMware region [1].  Source for the driver
> is here [2]:
>
> http://git.openstack.org/cgit/openstack/vmware-nsx/tree/?h=stable/liberty
>
> The blogs I pointed to earlier should give you a good feel for the basic
> architecture and services.  Configuration-wise, you’ll want to set
> “core_plugin" to “vmware_nsx.neutron.plugins.vmware.plugin.NsxVPlugin” and
> “service_plugins” should include
> “neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,vmware_nsx.neutron.services.l2gateway.plugin.NSxL2GatewayPlugin”
> in your neutron.conf file, and then configure the plugin options.  Most of
> what you’ll need is found here:
>
>
> http://git.openstack.org/cgit/openstack/vmware-nsx/tree/etc/nsx.ini?h=stable/liberty
>
> You’ll want to set “nsx_l2gw_driver" to
> "vmware_nsx.neutron.services.l2gateway/nsx_v_driver.NsxvL2GatewayDriver" in
> line 49 since you’re using NSXv.  Then fill in the NSXv plugin
> configuration options in the [nsxv] section in lines 61-181, and you may
> optionally want to tinker with the [nsx_sync] section in lines 239-276. You
> can ignore most of the rest as it mostly pertains to the NSX-mh and DVS
> plugins.  If it’s useful I can send you some sample configs from one of my
> lab setups; just let me know!
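A minimal sketch of what those settings look like once assembled (all values
in angle brackets are placeholders, and the [nsxv] option names should be
checked against the sample nsx.ini linked above rather than taken from here):

    # neutron.conf
    [DEFAULT]
    core_plugin = vmware_nsx.neutron.plugins.vmware.plugin.NsxVPlugin
    service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin,vmware_nsx.neutron.services.l2gateway.plugin.NSxL2GatewayPlugin

    # nsx.ini
    [nsxv]
    manager_uri = https://<nsx-manager-address>
    user = <nsx-manager-username>
    password = <nsx-manager-password>
    datacenter_moid = <datacenter-moid>
    cluster_moid = <cluster-moid>
    external_network = <external-network-moid>
    vdn_scope_id = <transport-zone-id>
    dvs_id = <dvs-moid>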
>
> [1] For a bit more info on the various plugins available for NSX-mh, NSXv,
> and DVS see https://wiki.openstack.org/wiki/Neutron/VMware_NSX_plugins
>
> [2] More specifically for you since you’re using NSXv, the NSXv plugin
> code is in:
> http://git.openstack.org/cgit/openstack/vmware-nsx/tree/vmware_nsx/plugins/nsx_v?h=stable/liberty
>
> At Your Service,
>
> Mark T. Voelker
>
>
>
> > On Feb 17, 2016, at 10:30 AM, Ignazio Cassano 
> wrote:
> >
> > Hi Mark, many thanks for your help.
> > We are not using vmware VIO, but we are using openstack liberty
> community edition with a Region for vmware nsx.
> > If you read the following link:
> >
> >
> http://docs.openstack.org/admin-guide-cloud/networking_config-agents.html
> >
> > you can see the following instructions:
> >   • Use the NSX Administrator Guide to add the node as a Hypervisor
> by using the NSX Manager GUI. Even if your forwarding node has no VMs and
> is only used for services agents like neutron-dhcp-agent or
> neutron-lbaas-agent, it should still be added to NSX as a Hypervisor.
> > In the NSX 6.2.1 GUI there isn't any section to add the node as a
> Hypervisor, probably because this document relates to the NSX multi-hypervisor
> version.
> >
> > So the question is: must we wait for a new NSX multi-hypervisor version,
> or can we use the current NSX version?
> >
> > Best Regards
> >
> > Ignazio
> >
> >
> >
> >
> > 2016-02-17 14:51 GMT+01:00 Mark Voelker :
> > Hi Ignazio,
> >
> > I have. =)  Drop me a note and let me know what you need; we’ll be happy
> to help.  For a general background, this is a good place to start:
> >
> >
> http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-1/
> >
> >
> http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-2/
> >
> >
> http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-3/
> >
> > There’s also useful information in the config guides:
> >
> >
> http://docs.openstack.org/kilo/config-reference/content/networking-plugin-nsx.html
> >
> > At Your Service,
> >
> > Mark T. Voelker
> >
> >
> >
> > > On Feb 17, 2016, at 2:43 AM, Ignazio Cassano 
> wrote:
> > >
> > > Hi all,
> > > I would like to know if someone has configured openstack neutron with
> vmware
> > > nsx 6. I found old documentation about it.
> > > Regards
> > > Ignazio
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> >
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] VM HA support in trunk

2016-02-17 Thread Affan Syed
Matt/Joe,

I think your points are valid. However, when looking at wooing customers
who are in legacy operations, doing all the changes at once doesn't seem like
a viable value proposition. This first-order transition is important to get
them to see the benefits of cloud. Then we can have their previous ops
people spend spare time on becoming devs and building cloud-native apps!


Affan

On Tue, 16 Feb 2016 at 19:24 Bajin, Joseph  wrote:

> I would have to agree with Matt.  The ability to handle any sort of
> failure should either reside within the application or in tools around the
> application that make it work.  Having the infrastructure handle the
> failures, I believe, is a slippery slope that is starting to appear more
> and more.
>
> I do fear that many people/organizations are starting to look at the cloud
> as a “low cost” or “free” VMWare solution.  They want the same enterprise-based
> availability and support that they get with a vendor-paid solution
> without the cost of the vendor-paid solution.   I have started to see and
> hear more about how vendors are adding “enterprise” solutions to
> OpenStack.  This includes High Availability features that rely on the
> infrastructure instead of the application to manage them.  I fear the direction
> of all the projects will begin migrating this way as more vendors get
> involved and want to figure out business models that they can use around
> “enterprise” feature-sets.
>
> —Joe
>
>
>
> From: Matt Fischer 
> Date: Monday, February 15, 2016 at 10:59 AM
> To: Toshikazu Ichikawa 
> Cc: "openstack-operators@lists.openstack.org" <
> openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] [nova] VM HA support in trunk
>
> I believe that you either have your customers design their apps to handle
> failures, or you have tools that are reactive to failures.
>
> Unfortunately like many other private cloud operators we deal a lot with
> legacy applications that aren't scaled horizontally or fault tolerant and
> so we've built tooling to handle customer notifications (reactive). When we
> lose a compute host we generate a notice to customers and then work on
> evacuating their instances. For the evac portion nova host-evacuate or
> host-evacuate-live work fairly well, although we rarely get a functioning
> floating-IP after host-evacuate without other work.
>
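A rough sketch of the reactive side of that tooling, using python-novaclient
(credentials, endpoint, and host name below are hypothetical, and real tooling
needs error handling plus a check that the host is truly down):

    # Evacuate all instances off a failed hypervisor, roughly what
    # "nova host-evacuate" does.  All values here are placeholders.
    from novaclient import client

    nova = client.Client("2", "admin", "ADMIN_PASS", "admin",
                         "http://controller:5000/v2.0")

    failed_host = "compute-12.example.com"

    # Find every instance on the dead host, across all tenants.
    servers = nova.servers.list(search_opts={"host": failed_host,
                                             "all_tenants": 1})

    for server in servers:
        # Let the scheduler pick a target; assumes non-shared storage,
        # so the instance is rebuilt from its image on the new host.
        print("Evacuating %s (%s)" % (server.name, server.id))
        server.evacuate(on_shared_storage=False)

The floating-IP caveat mentioned above is exactly why extra steps usually
follow the evacuate call.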
> Getting adoption of heat or other automation tooling to educate customers
> is a long process, especially when they're used to VMware where I think
> they get the VM HA stuff for "free".
>
>
> On Mon, Feb 15, 2016 at 8:25 AM, Toshikazu Ichikawa <
> ichikawa.toshik...@lab.ntt.co.jp> wrote:
>
>> Hi Affan,
>>
>>
>>
>>
>>
>> I don’t think any components in Liberty provide HA VM support directly.
>>
>>
>>
>> However, a lot of work has been published and open-sourced here:
>>
>> https://etherpad.openstack.org/p/automatic-evacuation
>>
>> You may find ideas and solutions.
>>
>>
>>
>> Also, discussion on this topic is ongoing at the HA team meeting:
>>
>> https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
>>
>>
>>
>> thanks,
>>
>> Kazu
>>
>>
>>
>> *From:* Affan Syed [mailto:affan.syed@gmail.com]
>> *Sent:* Monday, February 15, 2016 12:51 PM
>> *To:* openstack-operators@lists.openstack.org
>> *Subject:* [Openstack-operators] [nova] VM HA support in trunk
>>
>>
>>
>> reposting with the correct tag, hopefully. Would really appreciate some
>> pointers.
>>
>> -- Forwarded message -
>> From: Affan Syed 
>> Date: Sat, 13 Feb 2016 at 15:13
>> Subject: [nova] VM HA support in trunk
>> To: 
>>
>>
>>
>> Hi all,
>>
>> I have been trying to understand if we currently have some VM HA support
>> as part of Liberty?
>>
>>
>>
>> To be precise, how is a host going down due to power failure handled,
>> specifically in terms of migrating the VMs, and possibly even their
>> networking configs (tunnels etc.)?
>>
>>
>>
>> VM migration solutions like Xen HA or KVM clustering seem to require 1+1 HA. I
>> have read in a few places about ceilometer+heat templates to launch VMs for an
>> N+1 backup scenario, but these all seem like one-off setups.
>>
>>
>>
>>
>>
>> This issue seems to be very important for legacy enterprises to move
>> their "pets" --- not sure if we can simply wish away that mindset!
>>
>>
>>
>> Affan
>>
>>
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

[Openstack-operators] [app-catalog] IRC Meeting Thursday February 18th at 17:00UTC

2016-02-17 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for February 18th
at 17:00UTC in #openstack-meeting-3

The agenda can be found here; please add to it if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Looking forward to seeing all interested parties there!

-Christopher

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [tags] Ops-Tag Meeting

2016-02-17 Thread Shamail
Hi everyone,

The Ops-Tag[1] team will be meeting tomorrow (2/18) at 1400 UTC in
#openstack-meeting.  The agenda is included below; we hope to see you there!

Agenda:
1) Review proposed tags/changes
2) Open

[1] https://wiki.openstack.org/wiki/Operations/Tags

Thanks,
Shamail 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to install magnum?

2016-02-17 Thread Steve Gordon
- Original Message -
> From: "Mike Smith" 
> To: "Hongbin Lu" 
> 
> Thanks Hongbin.  I am also willing to get involved to help write the
> operator/production-oriented documentation for Magnum.  I’d like to see the
> Magnum project work with the RDO folks to get Magnum RPMs into the RDO
> distribution so that it is more accessible to operators of package-based
> distros.
> 
> Mike Smith
> Lead Cloud Systems Architect
> Overstock.com

FWIW, the folks from CERN had been working with us on this; the relevant
tracking bugs are here:

openstack-magnum: https://bugzilla.redhat.com/show_bug.cgi?id=1292794
python-magnumclient: https://bugzilla.redhat.com/show_bug.cgi?id=1286772

I just bumped the first request as I think it is good to go but someone missed 
setting a flag to initiate the next step of the process.

Thanks,

Steve


> On Feb 17, 2016, at 10:10 AM, Hongbin Lu
> > wrote:
> 
> Mike,
> 
> I am sorry that there is currently no installation guide for Magnum. I have
> created a blueprint [1] for creating one. It will be picked up if someone
> is interested in working on it.
> 
> If you need a guide right away, maybe you could reference this one [2]. This
> guide is not targeted at operators, but it might inform the installation
> steps.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-installation-guide
> [2] http://docs.openstack.org/developer/magnum/dev/dev-manual-devstack.html
> 
> Best regards,
> Hongbin
> 
> On Tue, Feb 9, 2016 at 11:30 AM, Mike Perez
> > wrote:
> On 09:48 Nov 03, Mike Perez wrote:
> > On 18:51 Oct 28, Mike Perez wrote:
> > > On 12:35 Oct 28, JJ Asghar wrote:
> > > > On 10/28/15 10:35 AM, Mike Perez wrote:
> > > > > On 14:09 Oct 16, hittang wrote:
> > > > >> Hello, everyone. Can anybody help me with installing magnum? I have an
> > > > >> openstack installation, which has one controller node, one network
> > > > >> node, and several compute nodes. Now I want to install magnum and use
> > > > >> it to manage docker containers.
> > > > >
> > > > > I was not able to find this information in the Magnum wiki [1],
> > > > > except for the
> > > > > developer quick start. Doing a quick search, other related threads
> > > > > point
> > > > > to the dev docs for installation, which is developer centric.
> > > > >
> > > > > Adrian, is this something missing in documentation, or did we miss
> > > > > it?
> > > > >
> > > > > [1] - https://wiki.openstack.org/wiki/Magnum
> > > > >
> > > >
> > > > Yep, this would be awesome. It's neat to see the integrations with
> > > > DevStack, but getting it to work in a "prod" environment seems
> > > > confusing
> > > > at best.
> > > >
> > > > I've attempted a couple times now, and failed each one. I'm more than
> > > > willing to help debug/QA the docs that yall decide to put together.
> > >
> > > Since we're doing so much talk about Magnum in Keynotes for the Tokyo
> > > OpenStack
> > > summit, and there is install confusion, we should probably work on that.
> > > I have raised this in the next meeting [1], which I will bring up this
> > > thread
> > > in.
> > >
> > > [1] -
> > > https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-11-03_1600_UTC
> >
> > During the Magnum meeting today [1] it seems like this was previously
> > discussed
> > at the last Magnum midcycle to improve this documentation in the Mitaka
> > release
> > [2].
> >
> > [1] -
> > http://eavesdrop.openstack.org/meetings/containers/2015/containers.2015-11-03-16.01.log.html#l-150
> > [2] - https://etherpad.openstack.org/p/magnum-mitaka-summit-meetup
> 
> Hey Adrian,
> 
> Can we get an update on the install guide for Magnum? I was looking through
> OpenStack manuals repo and didn't find anything up for review. We ran into
> each
> other recently, and you mentioned someone who was working on this effort, so
> please cc them to this thread. Thanks!
> 
> --
> Mike Perez
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

-- 
Steve Gordon,
Sr. Technical Product Manager,
Red Hat OpenStack Platform

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)




On 2/17/16, 1:31 PM, "Shamail"  wrote:

>Sorry for the top posting...  I wanted to make a suggestion:
>
>
>Would this script be suited for OSOps[1]?  The networking guide could then 
>reference it but we could continue to evolve/maintain it as an operators tool.
>

It could be... The problem is that every deployment is different, so this isn't
so much one-size-fits-all software as it is a good reference. By the time we
got it to the point where it was a generic migration tool, anyone who'd benefit
from it would likely have long since moved away from nova-networking.

At least that's my thought, but maybe I overestimate the effort involved in 
generalizing it.

>
>[1] https://wiki.openstack.org/wiki/Osops
>
>
>Thanks,
>Shamail 
>
>On Feb 17, 2016, at 4:29 PM, Matt Kassawara  wrote:
>
>
>
>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse
>with per-tenant networks (non-overlapping, obviously) and L3 services from
>nova-networking to neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start
> migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets:
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
>. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)"  wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into
> the default_security_group table, which was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:
>>
>>>I did not, but more advanced could mean a lot of things for Neutron. There
>>>are so many possible scenarios that expecting to have a “script” to cover
>>>all of them is a whole new project. Not sure we want to explore that. In the
>>>past we were recommending to
>>> make the migration in multiple steps, maybe we could use this as a good 
>>> step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to same dchp/provider networks in neutron.  Did anyone test to 
>>>see if it also works for doing more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana 
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>>>
>>>Cc: OpenStack Operators 
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat that it is not supported officially
>>>by the OpenStack community. At least the author wants to make a move with 
>>>the Neutron team to make it part of the tree.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: Matt Kassawara
>>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>>To: "Kevin Bringard (kevinbri)"
>>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Anyone think we should make this script a bit more "official" ... perhaps in 
>>>the networking guide?
>>>
>>>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
>>> wrote:
>>>
>>>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>>>give me a good blueprint for what to look for and where to start.
>>>
>>>
>>>
>>>On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>>>
Awesome code! I just did a 

Re: [Openstack-operators] [openstack-ansible]

2016-02-17 Thread Major Hayden
On 02/17/2016 02:00 PM, Wade Holler wrote:
> Well it almost does. Except on my neutron agents container I ended up with an
> eth12
> 
> And I do have a flat network plumbed in to the infrastructure host ( on which 
> the neutron agent container resides ) via br-vlan.
> 
> Thoughts?
> 
> Thank you for the engagement and previous prompt reply! I really appreciate
> the help.

Hmm, I think I'm to the point where I need to know a little more about your 
configuration. ;)

Could you drop as much of your openstack_user_config.yml as you can into a 
pastebin somewhere?  Be sure to obfuscate any information or IP addresses that 
would be problematic if they're made public.

If you want all three network types -- flat, VLAN, and VXLAN -- that's totally 
doable, but a veth might be required on your hypervisors.  Knowing more about 
your configuration and desired state will help.

Feel free to hop into #openstack-ansible on Freenode to talk in real-time.  I'm 
mhayden in the channel.

--
Major Hayden

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-ansible]

2016-02-17 Thread Wade Holler
Hi Major,

Well it almost does. Except on my neutron agents container I ended up with
an eth12

And I do have a flat network plumbed in to the infrastructure host ( on
which the neutron agent container resides ) via br-vlan.

Thoughts?

Thank you for the engagement and previous prompt reply! I really appreciate
the help.

Wade
On Wed, Feb 17, 2016 at 2:33 PM Major Hayden  wrote:

> On 02/17/2016 01:23 PM, Wade Holler wrote:
> > Going to ask this question without much data or background as I hope
> someone very familiar with openstack-ansible will be able to easily answer
> it.
> >
> > I tried to follow the install guide and network config pretty closely.
> >
> > All is well except my physical compute nodes don't have an eth12.
> >
> > What is the best way to change the openstack-ansible config
> (/etc/openstack_deploy/openstack_user_config.yml
> /etc/openstack_deploy/user_variables.yml ) such that this is tolerated.
> i.e., flat:eth12 not placed in the physical_interface_mappings line of
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini ?
>
> Hello Wade,
>
> I ran into this problem as well and it's a bit confusing.  The eth12
> interface is used in the AIO since we need three networks (VLAN, flat, and
> VXLAN), but we have only two bridges (br-vlan and br-vxlan).  You can use
> eth12 in your deployments, but you'll need to create a veth for that.
> There's no requirement to use all three of these network types in your
> production environment.
>
> Which networks do you plan to use?  If you plan to use VLAN and VXLAN
> without flat networking, you can omit the provider network section for the
> flat network.  Here's an excerpt from what I'm using in production:
>
>   https://gist.github.com/major/b8771c99e1274e89bc98
>
> Does that help?
>
> --
> Major Hayden
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-ansible]

2016-02-17 Thread Major Hayden
On 02/17/2016 01:23 PM, Wade Holler wrote:
> Going to ask this question without much data or background as I hope someone 
> very familiar with openstack-ansible will be able to easily answer it.
> 
> I tried to follow the install guide and network config pretty closely. 
> 
> All is well except my physical compute nodes don't have an eth12.  
> 
> What is the best way to change the openstack-ansible config
> (/etc/openstack_deploy/openstack_user_config.yml 
> /etc/openstack_deploy/user_variables.yml ) such that this is tolerated.  
> i.e., flat:eth12 not placed in the physical_interface_mappings line of 
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini ?

Hello Wade,

I ran into this problem as well and it's a bit confusing.  The eth12 interface 
is used in the AIO since we need three networks (VLAN, flat, and VXLAN), but we 
have only two bridges (br-vlan and br-vxlan).  You can use eth12 in your 
deployments, but you'll need to create a veth for that.  There's no requirement 
to use all three of these network types in your production environment.

Which networks do you plan to use?  If you plan to use VLAN and VXLAN without 
flat networking, you can omit the provider network section for the flat 
network.  Here's an excerpt from what I'm using in production:

  https://gist.github.com/major/b8771c99e1274e89bc98
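A provider_networks stanza along those lines (VLAN and VXLAN only, no flat
network) looks roughly like the sketch below; interface names, ranges, and the
tunnel queue name are placeholders, not taken from the gist:

    # openstack_user_config.yml (excerpt, placeholder values)
    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-vxlan"
            container_type: "veth"
            container_interface: "eth10"
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "1:1000"
            net_name: "vxlan"
            group_binds:
              - neutron_linuxbridge_agent
        - network:
            container_bridge: "br-vlan"
            container_type: "veth"
            container_interface: "eth11"
            type: "vlan"
            range: "101:200"
            net_name: "vlan"
            group_binds:
              - neutron_linuxbridge_agent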

Does that help?

--
Major Hayden

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-ansible]

2016-02-17 Thread Wade Holler
Hi All,

Going to ask this question without much data or background as I hope
someone very familiar with openstack-ansible will be able to easily answer
it.

I tried to follow the install guide and network config pretty closely.

All is well except my physical compute nodes don't have an eth12.

What is the best way to change the openstack-ansible config
(/etc/openstack_deploy/openstack_user_config.yml
/etc/openstack_deploy/user_variables.yml ) such that this is tolerated.
 i.e., flat:eth12 not placed in the physical_interface_mappings line of
/etc/neutron/plugins/ml2/linuxbridge_agent.ini ?

Best Regards,
Wade
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
We're in mostly the same boat; using nova-network with VLAN segmentation and 
looking at a Neutron migration (though ours may take a more drastic path and 
take us to Neutron+Calico). One question I have for you: the largest issue and 
conceptual leap we had when initially prototyping Neutron+linuxbridge was that 
our current model only has controllers and work nodes, with no provisions for 
dedicated network nodes to route in/out of the cluster. All our work nodes can 
route by themselves, which would have steered us towards a DVR model, but that 
seems to have its own issues as well as mandating OVS.

Since your branch indicates you're using linuxbridge on Icehouse, are you 
provisioning network nodes as part of your migration, or are you avoiding 
needing to provision network nodes in a different fashion?

From: kevin...@cisco.com 
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Definitely, I can work on that. I need to get the migration done first, but 
once I do I plan to open source our plays and whatever else to help people 
perform the migration themselves. At that point I can work on adding some stuff 
to the networking guide as well. Probably will be a few months from now, though.


On 2/17/16, 9:29 AM, "Matt Kassawara"  wrote:

>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse
>with per-tenant networks (non-overlapping, obviously) and L3 services from
>nova-networking to neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start
> migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets:
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
>. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)"  wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into
> the default_security_group table, which was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:
>>
>>>I did not, but more advanced could mean a lot of things for Neutron. There
>>>are so many possible scenarios that expecting to have a “script” to cover
>>>all of them is a whole new project. Not sure we want to explore that. In the
>>>past we were recommending to
>>> make the migration in multiple steps, maybe we could use this as a good 
>>> step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to same dchp/provider networks in neutron.  Did anyone test to 
>>>see if it also works for doing more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana 
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>>>
>>>Cc: OpenStack Operators 
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat that it is not supported officially
>>>by the OpenStack community. At least the author wants to make a move with 
>>>the Neutron team to make it part of the tree.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: Matt Kassawara
>>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>>To: "Kevin Bringard (kevinbri)"
>>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Anyone think we 

Re: [Openstack-operators] Managing quota for Nova local storage?

2016-02-17 Thread Warren Wang
We are in the same boat. Can't get rid of ephemeral because of its speed and
independence. I get it, but it makes management of all these tiny pools a
scheduling and capacity nightmare.

Warren @ Walmart

On Wed, Feb 17, 2016 at 1:50 PM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
erh...@bloomberg.net> wrote:

> The subject says it all - does anyone know of a method by which quota can
> be enforced on storage provisioned via Nova rather than Cinder? Googling
> around appears to indicate that this is not possible out of the box (e.g.,
> https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).
>
> The rationale is we offer two types of storage, RBD that goes via Cinder
> and LVM that goes directly via the libvirt driver in Nova. Users know they
> can escape the constraints of their volume quotas by using the LVM-backed
> instances, which were designed to provide a fast-but-unreliable RAID
> 0-backed alternative to slower-but-reliable RBD volumes. Eventually users
> will hit their max quota in some other dimension (CPU or memory), but we'd
> like to be able to limit based directly on how much local storage is used
> in a tenancy.
>
> Does anyone have a solution they've already built to handle this scenario?
> We have a few ideas already for things we could do, but maybe somebody's
> already come up with something. (Social engineering on our user base by
> occasionally destroying a random RAID 0 to remind people of their unsafety,
> while tempting, is probably not a viable candidate solution.)
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Managing quota for Nova local storage?

2016-02-17 Thread Ned Rhudy (BLOOMBERG/ 731 LEX)
The subject says it all - does anyone know of a method by which quota can be 
enforced on storage provisioned via Nova rather than Cinder? Googling around 
appears to indicate that this is not possible out of the box (e.g., 
https://ask.openstack.org/en/question/8518/disk-quota-for-projects/).

The rationale is we offer two types of storage, RBD that goes via Cinder and 
LVM that goes directly via the libvirt driver in Nova. Users know they can 
escape the constraints of their volume quotas by using the LVM-backed 
instances, which were designed to provide a fast-but-unreliable RAID 0-backed 
alternative to slower-but-reliable RBD volumes. Eventually users will hit their 
max quota in some other dimension (CPU or memory), but we'd like to be able to 
limit based directly on how much local storage is used in a tenancy.
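In the absence of a real quota, one way to at least report local-disk
consumption per tenant is to sum root and ephemeral disk sizes from the
flavors of running instances. A rough python-novaclient sketch (credentials
are placeholders, and in practice you would filter to the LVM-backed flavors
or host aggregate, since this tally does not distinguish RBD-backed roots):

    # Tally local (root + ephemeral) disk per tenant from instance flavors.
    from collections import defaultdict
    from novaclient import client

    nova = client.Client("2", "admin", "ADMIN_PASS", "admin",
                         "http://controller:5000/v2.0")

    usage_gb = defaultdict(int)
    flavors = {}  # naive flavor cache

    for server in nova.servers.list(search_opts={"all_tenants": 1}):
        flavor_id = server.flavor["id"]
        if flavor_id not in flavors:
            flavors[flavor_id] = nova.flavors.get(flavor_id)
        flavor = flavors[flavor_id]
        usage_gb[server.tenant_id] += flavor.disk + flavor.ephemeral

    for tenant, gb in sorted(usage_gb.items()):
        print("%s: %d GB of local disk" % (tenant, gb))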

Does anyone have a solution they've already built to handle this scenario? We 
have a few ideas already for things we could do, but maybe somebody's already 
come up with something. (Social engineering on our user base by occasionally 
destroying a random RAID 0 to remind people of their unsafety, while tempting, 
is probably not a viable candidate solution.)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] European Operators Meetup

2016-02-17 Thread Salman Toor
Hi Matt,

Thanks to you and your team for organizing this much-needed activity. The sessions
and discussions were very interesting and helpful.

Looking forward to attend more events in Europe!

Regards..
Salman

PhD, Scientific Computing
Researcher, IT Department,
Uppsala University.
Senior Cloud Architect,
SNIC.
Cloud Application Expert,
UPPMAX.
salman.t...@it.uu.se
http://www.it.uu.se/katalog/salto690

From: Matt Jarvis [matt.jar...@datacentred.co.uk]
Sent: Wednesday, February 17, 2016 10:27 AM
To: OpenStack Operators
Subject: [Openstack-operators] European Operators Meetup

I just wanted to say a huge thank you to everyone who attended, moderated and 
sponsored the European Ops Meetup. We had a fantastic two days in Manchester, 
made a lot of new friends and had some incredibly useful discussions. Our goals 
when we put the event together were to engage European operators with the wider 
ops community, and to raise the profile of the OpenStack landscape in Europe 
and why it's different - and I think we exceeded expectations on both of those 
fronts.

See you all in Austin !

Matt

DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to install magnum?

2016-02-17 Thread Hongbin Lu
Mike,

I am sorry that there is currently no installation guide for Magnum. I
have created a blueprint [1] for creating one. It will be picked up if
someone is interested in working on it.

If you need a guide right away, maybe you could reference this one [2].
This guide is not targeted at operators, but it might inform the
installation steps.

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-installation-guide
[2] http://docs.openstack.org/developer/magnum/dev/dev-manual-devstack.html

Best regards,
Hongbin

On Tue, Feb 9, 2016 at 11:30 AM, Mike Perez  wrote:

> On 09:48 Nov 03, Mike Perez wrote:
> > On 18:51 Oct 28, Mike Perez wrote:
> > > On 12:35 Oct 28, JJ Asghar wrote:
> > > > On 10/28/15 10:35 AM, Mike Perez wrote:
> > > > > On 14:09 Oct 16, hittang wrote:
> > > > >> Hello, everyone. Can anybody help me with installing magnum? I have
> > > > >> an openstack installation, which has one controller node, one network
> > > > >> node, and several compute nodes. Now I want to install magnum and use
> > > > >> it to manage docker containers.
> > > > >
> > > > > I was not able to find this information in the Magnum wiki [1],
> except for the
> > > > > developer quick start. Doing a quick search, other related threads
> point
> > > > > to the dev docs for installation, which is developer centric.
> > > > >
> > > > > Adrian, is this something missing in documentation, or did we miss
> it?
> > > > >
> > > > > [1] - https://wiki.openstack.org/wiki/Magnum
> > > > >
> > > >
> > > > Yep, this would be awesome. It's neat to see the integrations with
> > > > DevStack, but getting it to work in a "prod" environment seems
> confusing
> > > > at best.
> > > >
> > > > I've attempted a couple times now, and failed each one. I'm more than
> > > > willing to help debug/QA the docs that yall decide to put together.
> > >
> > > Since we're doing so much talk about Magnum in Keynotes for the Tokyo
> OpenStack
> > > summit, and there is install confusion, we should probably work on
> that.
> > > I have raised this in the next meeting [1], which I will bring up this
> thread
> > > in.
> > >
> > > [1] -
> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-11-03_1600_UTC
> >
> > During the Magnum meeting today [1] it seems like this was previously
> discussed
> > at the last Magnum midcycle to improve this documentation in the Mitaka
> release
> > [2].
> >
> > [1] -
> http://eavesdrop.openstack.org/meetings/containers/2015/containers.2015-11-03-16.01.log.html#l-150
> > [2] - https://etherpad.openstack.org/p/magnum-mitaka-summit-meetup
>
> Hey Adrian,
>
> Can we get an update on the install guide for Magnum? I was looking through
> OpenStack manuals repo and didn't find anything up for review. We ran into
> each
> other recently, and you mentioned someone who was working on this effort,
> so
> please cc them to this thread. Thanks!
>
> --
> Mike Perez
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)
Definitely, I can work on that. I need to get the migration done first, but 
once I do I plan to open source our plays and whatever else to help people 
perform the migration themselves. At that point I can work on adding some stuff 
to the networking guide as well. Probably will be a few months from now, though.




On 2/17/16, 9:29 AM, "Matt Kassawara"  wrote:

>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse
>with per-tenant networks (non-overlapping, obviously) and L3 services from
>nova-networking to neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start
> migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets:
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
>. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)"  wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into
> the default_security_group table, which was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:
>>
>>>I did not, but more advanced could mean a lot of things for Neutron. There
>>>are so many possible scenarios that expecting to have a “script” to cover
>>>all of them is a whole new project. Not sure we want to explore that. In the
>>>past we were recommending to
>>> make the migration in multiple steps, maybe we could use this as a good 
>>> step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to same dchp/provider networks in neutron.  Did anyone test to 
>>>see if it also works for doing more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana 
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>>>
>>>Cc: OpenStack Operators 
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat that it is not supported officially
>>>by the OpenStack community. At least the author wants to make a move with 
>>>the Neutron team to make it part of the tree.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: Matt Kassawara
>>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>>To: "Kevin Bringard (kevinbri)"
>>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Anyone think we should make this script a bit more "official" ... perhaps in 
>>>the networking guide?
>>>
>>>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
>>> wrote:
>>>
>>>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>>>give me a good blueprint for what to look for and where to start.
>>>
>>>
>>>
>>>On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>>>
Awesome code! I just did a small testbed test and it worked nicely!

Edgar




On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:

>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>> Hey fellow oppers!
>>
>> I was wondering if anyone has any experience doing a migration from 
>> nova-network to neutron. We're looking at an in place swap, on an 
>> Icehouse deployment. I don't have parallel
>>
>> I came 

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)
Hey All!

I wanted to follow up on this. We've successfully migrated Icehouse
with per-tenant networks (non-overlapping, obviously) and L3 services from
nova-networking to neutron in the lab. I'm working on the automation bits, but 
once that is done we'll start migrating real workloads.

I forked Sam's stuff and modified it to work in Icehouse with tenant nets:
https://github.com/kevinbringard/novanet2neutron/tree/icehouse. I need to 
update the README to succinctly reflect the steps, but the code is there (I'm 
going to work on the README today).

If this is something folks are interested in I proposed a talk to go over the 
process and our various use cases in Austin: 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045

-- Kevin



On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)"  wrote:

>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>written. Sam pointed out earlier that this was what they'd run it on, but I 
>verified it won't work on earlier versions because, specifically, in the 
>migrate-secgroups.py it inserts into the default_security_group table, which 
>was introduced in Kilo.
>
>I'm working on modifying it. If I manage to get it working properly I'll 
>commit my changes to my fork and send it out.
>
>-- Kevin
>
>
>
>On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:
>
>>I did not, but more advanced could mean a lot of things for Neutron. There are
>>so many possible scenarios that expecting to have a “script” to cover all of
>>them is a whole new project. Not sure we want to explore that. In the past we
>>were recommending to
>> make the migration in multiple steps, maybe we could use this as a good step 
>> 0.
>>
>>
>>Edgar
>>
>>
>>
>>
>>
>>From: "Kris G. Lindgren"
>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>Cc: OpenStack Operators
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>nova-network to same dchp/provider networks in neutron.  Did anyone test to 
>>see if it also works for doing more advanced nova-network configs?
>>
>>
>>___
>>Kris Lindgren
>>Senior Linux Systems Engineer
>>GoDaddy
>>
>>
>>
>>
>>
>>
>>From: Edgar Magana 
>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>>
>>Cc: OpenStack Operators 
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Yes! We should, but with a huge caveat that it is not supported officially by
>>the OpenStack community. At least the author wants to make a move with the 
>>Neutron team to make it part of the tree.
>>
>>
>>Edgar 
>>
>>
>>
>>
>>
>>From: Matt Kassawara
>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>To: "Kevin Bringard (kevinbri)"
>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Anyone think we should make this script a bit more "official" ... perhaps in 
>>the networking guide?
>>
>>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
>> wrote:
>>
>>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>>give me a good blueprint for what to look for and where to start.
>>
>>
>>
>>On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>>
>>>Awesome code! I just did a small testbed test and it worked nicely!
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:
>>>
On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
> Hey fellow oppers!
>
> I was wondering if anyone has any experience doing a migration from 
> nova-network to neutron. We're looking at an in place swap, on an 
> Icehouse deployment. I don't have parallel
>
> I came across a couple of things in my search:
>
> 
>>https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo 
>>
> 
>>http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>> 
>>
>
> But neither of them have much in the way of details.
>
> Looking to disrupt as little as possible, but of course with something 
> like this there's going to be an interruption.
>
> If anyone has any experience, pointers, or thoughts I'd love to hear 
> about it.
>
> Thanks!
>
> -- Kevin

NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
successfully to do a live nova-net to neutron migration on Juno.

Re: [Openstack-operators] vmware nsx 6

2016-02-17 Thread Mark Voelker
Hi Ignazio,

I have. =)  Drop me a note and let me know what you need; we’ll be happy to 
help.  For a general background, this is a good place to start:

http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-1/

http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-2/

http://blogs.vmware.com/openstack/openstack-networking-with-vmware-nsx-part-3/

There’s also useful information in the config guides:

http://docs.openstack.org/kilo/config-reference/content/networking-plugin-nsx.html

At Your Service,

Mark T. Voelker



> On Feb 17, 2016, at 2:43 AM, Ignazio Cassano  wrote:
> 
> Hi all,
> I would like to know if someone has configured openstack neutron with vmware
> nsx 6. I found old documentation about it.
> Regards
> Ignazio
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] European Operators Meetup

2016-02-17 Thread Adam Huffman
Thanks to you and your team for the hard work, Matt.

Hope you were able to enjoy it as well, even though you were hosting...



On Wed, Feb 17, 2016 at 9:27 AM, Matt Jarvis
 wrote:
> I just wanted to say a huge thank you to everyone who attended, moderated
> and sponsored the European Ops Meetup. We had a fantastic two days in
> Manchester, made a lot of new friends and had some incredibly useful
> discussions. Our goals when we put the event together were to engage
> European operators with the wider ops community, and to raise the profile of
> the OpenStack landscape in Europe and why it's different - and I think we
> exceeded expectations on both of those fronts.
>
> See you all in Austin !
>
> Matt
>
> DataCentred Limited registered in England and Wales no. 05611763
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] European Operators Meetup

2016-02-17 Thread Robert Starmer
+1

Really a well put together event.

Robert

On Wed, Feb 17, 2016 at 10:21 AM, Edgar Magana 
wrote:

> Thank you Matt!
>
> Great organization and a very good job putting all this together. Yes, see
> you in Austin.
>
> Edgar
>
> From: Matt Jarvis 
> Date: Wednesday, February 17, 2016 at 1:27 AM
> To: OpenStack Operators 
> Subject: [Openstack-operators] European Operators Meetup
>
> I just wanted to say a huge thank you to everyone who attended, moderated
> and sponsored the European Ops Meetup. We had a fantastic two days in
> Manchester, made a lot of new friends and had some incredibly useful
> discussions. Our goals when we put the event together were to engage
> European operators with the wider ops community, and to raise the profile
> of the OpenStack landscape in Europe and why it's different - and I think
> we exceeded expectations on both of those fronts.
>
> See you all in Austin !
>
> Matt
>
> DataCentred Limited registered in England and Wales no. 05611763
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Neutron][DVR] point of encapsulation/locating routing tables

2016-02-17 Thread Assaf Muller
I wrote a fairly comprehensive blog series about DVR:
assafmuller.com/category/dvr/

On Tue, Feb 16, 2016 at 2:47 PM, Adam Lawson  wrote:
> Hi everyone:
>
> Got a quick question for those of you with some knowledge of OVS and
> namespaces within the context of DVR.
>
> I'm trying to document exactly where namespaces are defined and where
> encapsulation occurs if using (for instance) VxLAN/GRE tunnels.
>
> Is there a quick-and-dirty slideshare, or someone able to explain this
> briefly?
>
> The question that came up today is: if we use tunnels, where does encapsulation
> occur, where are the tables maintained, and what data specifically sits in
> each of those tables?
>
> Is this an easy question to answer?
>
> //adam
>
> Adam Lawson
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] European Operators Meetup

2016-02-17 Thread Edgar Magana
Thank you Matt!

Great organization and a very good job putting all this together. Yes, see you 
in Austin.

Edgar

From: Matt Jarvis 
>
Date: Wednesday, February 17, 2016 at 1:27 AM
To: OpenStack Operators 
>
Subject: [Openstack-operators] European Operators Meetup

I just wanted to say a huge thank you to everyone who attended, moderated and 
sponsored the European Ops Meetup. We had a fantastic two days in Manchester, 
made a lot of new friends and had some incredibly useful discussions. Our goals 
when we put the event together were to engage European operators with the wider 
ops community, and to raise the profile of the OpenStack landscape in Europe 
and why it's different - and I think we exceeded expectations on both of those 
fronts.

See you all in Austin !

Matt

DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] European Operators Meetup

2016-02-17 Thread Matt Jarvis
I just wanted to say a huge thank you to everyone who attended, moderated
and sponsored the European Ops Meetup. We had a fantastic two days in
Manchester, made a lot of new friends and had some incredibly useful
discussions. Our goals when we put the event together were to engage
European operators with the wider ops community, and to raise the profile
of the OpenStack landscape in Europe and why it's different - and I think
we exceeded expectations on both of those fronts.

See you all in Austin !

Matt

-- 
DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators