Re: [Openstack-operators] Cannot launch instances on Ocata.

2017-05-17 Thread Kevin Bringard (kevinbri)
I would check the nova-scheduler log. It should explain why it’s unable to 
schedule the instance.
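
For example, on the controller running nova-scheduler (commands assume Ubuntu 
packaging and an admin openrc; the request ID is the one from the conductor 
traceback below):

  # find the failed request in the scheduler log and see which filter
  # eliminated your hosts
  grep req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 /var/log/nova/nova-scheduler.log
  grep -i 'returned 0 hosts' /var/log/nova/nova-scheduler.log | tail -n 20

  # confirm the computes are up and reporting resources
  openstack compute service list
  openstack hypervisor stats show

  # Ocata's scheduler also depends on the placement service, so this is
  # worth running on the controller as well
  nova-status upgrade check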

-Original Message-
From: Andy Wojnarek 
Date: Wednesday, May 17, 2017 at 3:09 PM
To: "openstack-operators@lists.openstack.org" 

Subject: [Openstack-operators] Cannot launch instances on Ocata.

Hi,
 
I have a new OpenStack cloud running in our lab, but I am unable to launch 
instances. This is Ocata running on Ubuntu 16.04.2.

 
Here are the errors I am getting when trying to launch an instance:
 
On my controller node in log file /var/log/nova/nova-conductor.log
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager 
[req-a9beeb33-9454-47a2-96e2-908d5b1e4c46 b07949d8ae7144049851c7abb39ac6db 
4fd0307bf4b74c5a8718b180c24c7cff - - -] Failed to schedule instances
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most 
recent call last):
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 866, in 
schedule_and_build_instances
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager 
request_specs[0].to_legacy_filter_properties_dict())
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 597, in 
_schedule_instances
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager hosts = 
self.scheduler_client.select_destinations(context, spec_obj)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 371, in wrapped
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
func(*args, **kwargs)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 51, 
in select_destinations
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
self.queryclient.select_destinations(context, spec_obj)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 32, in 
select_destinations
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 129, in 
select_destinations
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
cctxt.call(ctxt, 'select_destinations', **msg_args)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in 
call
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager 
retry=self.retry)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in 
_send
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager 
timeout=timeout, retry=retry)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
458, in send
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager retry=retry)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
449, in _send
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager raise result
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager 
NoValidHost_Remote: No valid host was found. There are not enough hosts 
available.
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager Traceback (most 
recent call last):
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in 
inner
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager return 
func(*args, **kwargs)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager   File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in 
select_destinations
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager dests = 
self.driver.select_destinations(ctxt, spec_obj)
2017-05-17 16:48:33.656 2654 ERROR nova.conductor.manager
2017-05-17 16:48:33.656 

Re: [Openstack-operators] Help: Liberty installation guide (English).

2017-04-11 Thread Kevin Bringard (kevinbri)

-Original Message-
From: Edgar Magana 
Date: Tuesday, April 11, 2017 at 9:30 AM
To: "Kris G. Lindgren" , Gaurav Goyal 
, "openstack-operators@lists.openstack.org" 
, 
"openstack-operators-requ...@lists.openstack.org" 

Subject: Re: [Openstack-operators] Help: Liberty installation guide 
(English).

We should start having some releases known as LTS. It is very hard to keep 
updating the code every six months.


I realize we’re off on a tangent now, but just wanted to say: this, times 
infinity. I’ve been making this case since Diablo, when we stopped building 
packages and started pushing support to vendors.

Chasing trunk just isn’t feasible for larger organizations, and as Edgar 
mentioned, updating the code every 6 months (or even a year) is a really 
difficult proposition.

There’s a reason large vendors do LTS releases; it just makes sense.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Delete cinder service

2016-09-01 Thread Kevin Bringard (kevinbri)



On 9/1/16, 9:51 AM, "Nick Jones"  wrote:


On 1 Sep 2016, at 15:36, Jonathan D. Proulx wrote:

> On Thu, Sep 01, 2016 at 04:25:25PM +0300, Vladimir Prokofev wrote:
> :I've used a direct database update to achieve this in Mitaka:
> :use cinder;
> :update services set deleted = '1' where ;
>
>
> I believe the official way is:
>
> cinder-manage service remove  
>
> Which probably more or less does the same thing...

Yep.  Both options basically require direct interaction with the 
database as opposed to via a Cinder API call, but at least with 
cinder-manage the scope for making a mistake is far more limited than 
missing some qualifying clause off an UPDATE statement (limit 1 is your 
friend!) ;)


Similarly, I do the query as a SELECT first to make sure it’s only returning 
the record(s) I want before issuing the UPDATE.
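
For example, something along these lines (a sketch only; substitute your own 
host and binary, and sanity-check the SELECT output before touching anything):

  # confirm exactly which rows you're about to touch
  mysql cinder -e 'SELECT id, host, `binary`, deleted FROM services WHERE host = "oldhost" AND `binary` = "cinder-volume";'

  # preferred: let cinder-manage do it
  cinder-manage service remove cinder-volume oldhost

  # or, if you go direct, keep the LIMIT as a safety net
  mysql cinder -e 'UPDATE services SET deleted = 1 WHERE host = "oldhost" AND `binary` = "cinder-volume" LIMIT 1;'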

—

-Nick

-- 
DataCentred Limited registered in England and Wales no. 05611763

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






Re: [Openstack-operators] Who's using TripleO in production?

2016-08-03 Thread Kevin Bringard (kevinbri)
>I like the idea of OOO but it takes time to harden that sort of deployment 
>scenario.  And trying to build a generic tool to hit hardware in the wild is 
>an exercise in futility, to a point.  
>Crowbar actually kind of made sense in so far as it was designed
> to let you write the connector bits you'd need to write.  I figure over time 
> OOO will be forced into that sort of pattern as every automated deployment 
> framework has been for the past 20 years or so.  It's amazing how many times 
> i've seen people try to reinvent
> this wheel, and how many times they've outright ignored the lessons of those 
> who went before.

When I worked for a large telecom doing huge OpenStack deploys this was exactly 
the problem we were running into. No two orders came with the hardware info in 
the same format, and when you're deploying hundreds of nodes a month just 
bootstrapping everything becomes a nightmare. We came up with an idea we called 
"Open Manifest" (we even considered trying to make it an extension of the Open 
Compute standards). It is effectively a standard BOM format in JSON. The idea 
being, with enough buying power, we could get hardware makers on board with 
providing information in a standard ingestible format which would make this 
exact problem much easier to deal with. Far less work to be done on the 
connector bits (clamps I think they were called in crowbar nomenclature), 
because you would just ingest a standard JSON blob into your bootstrapping 
process.
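
To give a feel for the shape of the thing, here's a purely illustrative sketch 
of the sort of per-node manifest we had in mind (not the actual Open Manifest 
schema, just made-up field names to show the idea):

  {
    "manifest_version": "0.1",
    "order_id": "PO-12345",
    "nodes": [
      {
        "serial": "NODE-0001",
        "asset_tag": "rack12-u03",
        "bmc": { "mac": "aa:bb:cc:dd:ee:01", "ip": "10.0.0.11" },
        "cpus": { "sockets": 2, "model": "Xeon E5-2680 v3" },
        "memory_gb": 256,
        "nics": [
          { "name": "eth0", "mac": "aa:bb:cc:dd:ee:02", "speed_gbps": 10 },
          { "name": "eth1", "mac": "aa:bb:cc:dd:ee:03", "speed_gbps": 10 }
        ],
        "disks": [
          { "device": "sda", "size_gb": 960, "media": "ssd" }
        ]
      }
    ]
  }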

I submitted a talk on it a few years back (I think at the Essex summit), but 
folks didn't seem interested at the time. If it's something people might be 
interested in now I can probably dig up the notes and work we'd done on it.


-- Kevin
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How are people dealing with API rate limiting?

2016-06-14 Thread Kevin Bringard (kevinbri)


On 6/14/16, 9:44 AM, "Matt Fischer"  wrote:

>On Tue, Jun 14, 2016 at 9:37 AM, Sean Dague 
> wrote:
>
>On 06/14/2016 11:02 AM, Matt Riedemann wrote:
>> A question came up in the nova IRC channel this morning about the
>> api_rate_limit config option in nova which was only for the v2 API.
>>
>> Sean Dague explained that it never really worked because it was per API
>> server so if you had more than one API server it was busted. There is no
>> in-tree replacement in nova.
>>
>> So the open question here is, what are people doing as an alternative?
>
>Just as a clarification. The toy implementation that used to live in
>Nova was even worse then described above. The counters were kept per
>process. So if you had > 1 API worker (default is # of workers per CPU)
>it would be inconsistent.
>
>This is why it was defaulted to false in Havana, and never carried 
>forward to the new API backend.
>
>-Sean
>
>
>
>
>+1 to remove that code! 

+1 to this +1.

As pointed out, it’s never really worked anyway, and I think only serves to 
confuse and frustrate people. API rate limiting should probably be happening 
higher up on the stack where connections are concentrated. 
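
For anyone looking for a starting point, the sort of thing I mean is a 
per-source rate limit at the load balancer, e.g. with HAProxy stick tables 
(sketch only; the names, port, and threshold here are made up and untested):

  frontend nova_api
      bind *:8774
      stick-table type ip size 100k expire 60s store http_req_rate(10s)
      http-request track-sc0 src
      http-request deny if { sc_http_req_rate(0) gt 100 }
      default_backend nova_api_servers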

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems (simple ones) at scale... Invisible VMs.

2016-05-19 Thread Kevin Bringard (kevinbri)
Sorry, my email client didn’t properly thread replies, so I just saw Simon’s 
reply. Seems that setting limit -1 is effectively telling it to paginate, so 
feel free to ignore that suggestion :-D

We should perhaps still consider a better way to let people know that’s 
happening. At the very least I’ll look over the docs and see if we need to open 
a docs bug.

-- Kevin

On 5/19/16, 7:49 AM, "Kevin Bringard (kevinbri)" <kevin...@cisco.com> wrote:

>Given that there’s already the ability to start listing from a specific 
>VM/UUID, it seems like we should add some pagination functionality in there. 
>Maybe we need to add an “--all” flag or something which is non-admin and 
>returns all VMs for the specific tenant. Then if someone really does want all 
>10,000 VMs they’re running in a tenant they can get them, and still honor 
>osapi_max_limit (per subsequent paged call).
>
>Alternatively (or perhaps in addition to?) if there are more results than the 
>osapi_max_limit, or some other limit set, we should tell people “showing 1000 
>of 10,000 results, use --marker  to show the next 1000” or something.
>
>Limits are something people tend to be OK with, but having to dig into 
>un/under-documented issues is what makes them straight up angry. If there’s 
>something we can do to help alleviate that it seems worth investigating.
>
>On 5/18/16, 5:26 PM, "James Downs" <e...@egon.cc> wrote:
>
>>On Wed, May 18, 2016 at 04:37:42PM -0600, David Medberry wrote:
>>
>>> It seems to bypass it... or I'm running into a LOWER limit (undocumented).
>>> So help on  limit -1 says it is still limited by osapi_max_limit
>>
>>You're looking for:
>>
>>--marker  The last server UUID of the previous page;
>>displays list of servers after "marker".
>>
>>This is much faster than increasing the size of results, at least in 
>>sufficiently
>>large environments.
>>
>>Cheers,
>>-j
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Problems (simple ones) at scale... Invisible VMs.

2016-05-19 Thread Kevin Bringard (kevinbri)
Given that there’s already the ability to start listing from a specific 
VM/UUID, it seems like we should add some pagination functionality in there. 
Maybe we need to add an “--all” flag or something which is non-admin and returns 
all VMs for the specific tenant. Then if someone really does want all 10,000 
VMs they’re running in a tenant they can get them, and still honor 
osapi_max_limit (per subsequent paged call).

Alternatively (or perhaps in addition to?) if there are more results than the 
osapi_max_limit, or some other limit set, we should tell people “showing 1000 
of 10,000 results, use --marker  to show the next 1000” or something.

Limits are something people tend to be OK with, but having to dig into 
un/under-documented issues is what makes them straight up angry. If there’s 
something we can do to help alleviate that it seems worth investigating.
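
To make the marker flow concrete (assuming the default osapi_max_limit of 1000; 
the UUID is a placeholder):

  # first page -- the API caps this at osapi_max_limit regardless
  nova list --limit 1000

  # next page -- pass the UUID of the last server returned above
  nova list --limit 1000 --marker <last-uuid-from-previous-page>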

On 5/18/16, 5:26 PM, "James Downs"  wrote:

>On Wed, May 18, 2016 at 04:37:42PM -0600, David Medberry wrote:
>
>> It seems to bypass it... or I'm running into a LOWER limit (undocumented).
>> So help on  limit -1 says it is still limited by osapi_max_limit
>
>You're looking for:
>
>--marker  The last server UUID of the previous page;
>displays list of servers after "marker".
>
>This is much faster than increasing the size of results, at least in 
>sufficiently
>large environments.
>
>Cheers,
>-j
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] Do you, or your users, have input on how get-me-a-network should work in Nova?

2016-02-19 Thread Kevin Bringard (kevinbri)
Sorry for top posting. 

Just wanted to say I agree with Monty (and didn't want you to have to scroll 
way down to read it). When we switched to neutron the thing people said was 
"Why do I have to do all this other stuff now?". So long as the tools exist for 
folks to do more powerful things if they want to, I'm all for making the 
simplest use case just that: simple. I think if there's any issue with this 
approach it'll be with the people who are unlearning their behavior.
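
To make the options discussed below concrete, the CLI difference boils down to 
something like this (illustrative only; flavor/image are placeholders and the 
exact flag spelling depends on the microversion that lands):

  # option 1: auto-allocation is opt-in
  nova boot --flavor m1.small --image cirros --nic=auto myvm

  # option 2: auto-allocation is the default, and opting out looks like
  nova boot --flavor m1.small --image cirros --nic=none myvm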

-- Kevin




On 2/19/16, 1:21 PM, "Monty Taylor"  wrote:

>On 02/19/2016 11:07 AM, Matt Riedemann wrote:
>> There is a long contentious dev thread going on here [1] about how Nova
>> should handle the Neutron auto-allocate-topology API (referred to as the
>> 'get-me-a-network' effort).
>>
>> The point is to reduce the complexity for users to simply boot an
>> instance and be able to ssh into it without having to first setup
>> networks/subnets/routers in neutron and then specify a nic when booting
>> the instance. If the planets are aligned, and no nic is provided (or
>> available to the project), then nova would call the new neutron API to
>> auto-allocate the network and use that to create a port to associate
>> with the instance.
>>
>> There is existing behavior in Nova where you can boot an instance and
>> get no networking with neutron as the backend. You can later add
>> networking by attaching an interface. The nova dev team has no idea how
>> common this use case is though.
>
>Fascinating. I have never wanted to do this. I cannot imagine wanting to 
>do this. I also have never heard anyone express a desire to do this.
>
>> There will be a microversion to the nova API with the get-me-a-network
>> support. The debate is what the default behavior should be when using
>> that microversion. The options are basically:
>
>The command line tool always uses the latest microversion, right?
>
>> 1. If no nic is provided at boot and none are available, don't provide a
>> network (existing behavior). If the user wants a network auto-allocated,
>> they specify something like: --nic=auto
>>
>> In this case the user has to opt into auto-allocating the network.
>
>This would not be horrible, but still requires a user to take an action 
>that they would not expect to need to do just to do the simple thing 
>(boot a vm) So it's my least favorite.
>
>> 2. If no nic is provided at boot and none are available, nova will
>> attempt to auto-allocate the network from neutron. If the user
>> specifically doesn't want networking on instance create (for whatever
>> reason), they have to opt into that behavior with something like:
>> --nic=none
>>
>> This is closer in behavior to how booting an instance works with
>> nova-network, but it is a change in the default behavior for the neutron
>> case, and that is a cause for concern for any users that have written
>> tools to expect that default behavior.
>
>This is my most favorite - because it accomplishes the simplest case 
>"boot me a vm without me requesting anything out of the ordinary about 
>it" in the simplest way "nova boot"
>
>> 3. If no nic is provided at boot and none are available, fail the
>> request and force the request to be explicit, i.e. provide a specific
>> nic, or auto, or none. This is a fail-fast scenario to force users to
>> really state what they want.
>
>I like this better than 1 but less than 2. The nice part is that the 
>error message can at least communicate to the user that they need to say 
>"--nic=$something" - which is at least active communication on our part. 
>But if there is no network available, then the _only_ valid choices are 
>none and auto (a specific nic wouldn't be a result here, because in that 
>case the user currently gets the "I can't figure out which network to 
>use" error) - and again, the "no" nic is a strange case that is the least 
>likely thing a user wants to do.
>
>> --
>>
>> As with any microversion change, we hope that users are reading the docs
>> and aware of the changes in each microversion, but we can't guarantee
>> that, so changing default behavior (case 2) requires discussion and
>> input, especially from outside the dev team.
>
>That is totally fair and I think you're right about that. It is a change 
>- but in this particular case I think we can extrapolate pretty well 
>about what people do and how they use clouds.
>
>Getting an IP address in an OpenStack Cloud is hard already - AND it's 
>very common for clouds to restrict API calls for port/fixed-ip 
>manipulation, so I doubt VERY seriously that anyone is deliberately 
>going through additional needless steps to get a working IP.
>
>> If you or your users have any input on this, please respond in this
>> thread of the one in the -dev list.
>
>I earnestly suggest #2. I believe it is the behavior users are more 
>likely to expect than anything else.
>
>
>___
>OpenStack-operators mailing list

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-19 Thread Kevin Bringard (kevinbri)
Ah, thanks Rob, I misunderstood the purpose for the repo.

Once I get everything tidied up and complete I'll work on getting a PR into 
contrib.




On 2/18/16, 7:42 PM, "Robert Starmer" <rob...@kumul.us> wrote:

>It still probably fits in OSOps, at least in contrib, as the idea is to 
>capture scripts and code that _may_ help someone, rather than necessarily 
>being 100% fit for any production environment.  Of course, any/all 
>documentation available would
> also be useful.
>
>
>Robert
>
>
>On Wed, Feb 17, 2016 at 8:42 PM, Kevin Bringard (kevinbri)
><kevin...@cisco.com> wrote:
>
>
>
>
>
>On 2/17/16, 1:31 PM, "Shamail" <itzsham...@gmail.com> wrote:
>
>>Sorry for the top posting...  I wanted to make a suggestion:
>>
>>
>>Would this script be suited for OSOps[1]?  The networking guide could then 
>>reference it but we could continue to evolve/maintain it as an operators tool.
>>
>
>It could be... The problem is that every deploy is different, so this 
>isn't so much one-size-fits-all software as it is a good reference. By the 
>time we got it to the point where it was a generic migration tool, anyone 
>who'd benefit from it would
> likely have long since moved away from nova-networking.
>
>At least that's my thought, but maybe I overestimate the effort involved in 
>generalizing it.
>
>>
>>[1] 
>https://wiki.openstack.org/wiki/Osops <https://wiki.openstack.org/wiki/Osops>
>>
>>
>>Thanks,
>>Shamail
>>
>>On Feb 17, 2016, at 4:29 PM, Matt Kassawara <mkassaw...@gmail.com> wrote:
>>
>>
>>
>>Cool! I'd like to see this stuff in the networking guide... or at least a 
>>link to it for now.
>>
>>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
>><kevin...@cisco.com> wrote:
>>
>>Hey All!
>>
>>I wanted to follow up on this. We've successfully migrated Icehouse 
>>with per-tenant networks (non-overlapping, obviously) and L3 services from 
>>nova-networking to neutron in the lab. I'm working on the automation bits, 
>>but once that is done we'll start
>> migrating real workloads.
>>
>>I forked Sam's stuff and modified it to work in Icehouse with tenant nets:
>>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
>><https://github.com/kevinbringard/novanet2neutron/tree/icehouse>.
> I need to update the README to succinctly reflect the steps, but the code is 
> there (I'm going to work on the README today).
>>
>>If this is something folks are interested in I proposed a talk to go over the 
>>process and our various use cases in Austin:
>>
>>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
>> 
>><https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045>
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)" <kevin...@cisco.com> wrote:
>>
>>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>>verified it won't work on earlier versions because, specifically, in the 
>>>migrate-secgroups.py it inserts into
>> the default_security_group table, which was introduced in Kilo.
>>>
>>>I'm working on modifying it. If I manage to get it working properly I'll 
>>>commit my changes to my fork and send it out.
>>>
>>>-- Kevin
>>>
>>>
>>>
>>>On 12/9/15, 10:00 AM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>>>
>>>>I did not but more advanced could mean a lot of things for Neutron. There 
>>>>are so many possible scenarios that expecting to have a “script” to cover 
>>>>all of them is a whole new project. Not sure we want to explore that. In 
>>>>the past we were recommending to
>>>> make the migration in multiple steps, maybe we could use this as a good 
>>>> step 0.
>>>>
>>>>
>>>>Edgar
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>From: "Kris G. Lindgren"
>>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>>Cc: OpenStack Operators
>>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>>
>>>>
>>>>
>>>>Doesn't this script only solve the case of going from 

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)




On 2/17/16, 1:31 PM, "Shamail" <itzsham...@gmail.com> wrote:

>Sorry for the top posting...  I wanted to make a suggestion:
>
>
>Would this script be suited for OSOps[1]?  The networking guide could then 
>reference it but we could continue to evolve/maintain it as an operators tool.
>

It could be... The problem is that every deploy is different, so this isn't 
so much one-size-fits-all software as it is a good reference. By the time we 
got it to the point where it was a generic migration tool, anyone who'd benefit 
from it would likely have long since moved away from nova-networking.

At least that's my thought, but maybe I overestimate the effort involved in 
generalizing it.

>
>[1] https://wiki.openstack.org/wiki/Osops
>
>
>Thanks,
>Shamail 
>
>On Feb 17, 2016, at 4:29 PM, Matt Kassawara <mkassaw...@gmail.com> wrote:
>
>
>
>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
><kevin...@cisco.com> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse 
>with per-tenant networks (non-overlapping, obviously) and L3 services from 
>nova-networking to neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start
> migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets: 
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
><https://github.com/kevinbringard/novanet2neutron/tree/icehouse>. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
><https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)" <kevin...@cisco.com> wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into
> the default_security_group table, which was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>>
>>>I did not but more advanced could mean a lot of things for Neutron. There 
>>>are so many possible scenarios that expecting to have a “script” to cover 
>>>all of them is a whole new project. Not sure we want to explore that. In the 
>>>past we were recommending to
>>> make the migration in multiple steps, maybe we could use this as a good 
>>> step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to the same dhcp/provider networks in neutron?  Did anyone test to 
>>>see if it also works for doing more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana <edgar.mag...@workday.com>
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" 
>>><kevin...@cisco.com>
>>>Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat that it is not supported officially 
>>>by the OpenStack community. At least the author wants to make a move with 
>

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)
Definitely, I can work on that. I need to get the migration done first, but 
once I do I plan to open source our plays and whatever else to help people 
perform the migration themselves. At that point I can work on adding some stuff 
to the networking guide as well. Probably will be a few months from now, though.




On 2/17/16, 9:29 AM, "Matt Kassawara" <mkassaw...@gmail.com> wrote:

>Cool! I'd like to see this stuff in the networking guide... or at least a link 
>to it for now.
>
>On Wed, Feb 17, 2016 at 8:14 AM, Kevin Bringard (kevinbri)
><kevin...@cisco.com> wrote:
>
>Hey All!
>
>I wanted to follow up on this. We've successfully migrated Icehouse 
>with per-tenant networks (non-overlapping, obviously) and L3 services from 
>nova-networking to neutron in the lab. I'm working on the automation bits, but 
>once that is done we'll start
> migrating real workloads.
>
>I forked Sam's stuff and modified it to work in Icehouse with tenant nets: 
>https://github.com/kevinbringard/novanet2neutron/tree/icehouse 
><https://github.com/kevinbringard/novanet2neutron/tree/icehouse>. I need to 
>update the README to succinctly reflect the steps, but the code is there (I'm 
>going to work on the README today).
>
>If this is something folks are interested in I proposed a talk to go over the 
>process and our various use cases in Austin:
>
>https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045
> 
><https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045>
>
>-- Kevin
>
>
>
>On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)" <kevin...@cisco.com> wrote:
>
>>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>>written. Sam pointed out earlier that this was what they'd run it on, but I 
>>verified it won't work on earlier versions because, specifically, in the 
>>migrate-secgroups.py it inserts into
> the default_security_group table, which was introduced in Kilo.
>>
>>I'm working on modifying it. If I manage to get it working properly I'll 
>>commit my changes to my fork and send it out.
>>
>>-- Kevin
>>
>>
>>
>>On 12/9/15, 10:00 AM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>>
>>>I did not but more advanced could mean a lot of things for Neutron. There 
>>>are so many possible scenarios that expecting to have a “script” to cover 
>>>all of them is a whole new project. Not sure we want to explore that. In the 
>>>past we were recommending to
>>> make the migration in multiple steps, maybe we could use this as a good 
>>> step 0.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: "Kris G. Lindgren"
>>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>>Cc: OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>>nova-network to the same dhcp/provider networks in neutron?  Did anyone test to 
>>>see if it also works for doing more advanced nova-network configs?
>>>
>>>
>>>___
>>>Kris Lindgren
>>>Senior Linux Systems Engineer
>>>GoDaddy
>>>
>>>
>>>
>>>
>>>
>>>
>>>From: Edgar Magana <edgar.mag...@workday.com>
>>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>>To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" 
>>><kevin...@cisco.com>
>>>Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Yes! We should, but with a huge caveat that it is not supported officially 
>>>by the OpenStack community. At least the author wants to make a move with 
>>>the Neutron team to make it part of the tree.
>>>
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>
>>>From: Matt Kassawara
>>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>>To: "Kevin Bringard (kevinbri)"
>>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>>
>>>
>>>
>>>Anyone think

Re: [Openstack-operators] Nova-network -> Neutron Migration

2016-02-17 Thread Kevin Bringard (kevinbri)
Hey All!

I wanted to follow up on this. We've successfully migrated Icehouse 
with per-tenant networks (non-overlapping, obviously) and L3 services from 
nova-networking to neutron in the lab. I'm working on the automation bits, but 
once that is done we'll start migrating real workloads.

I forked Sam's stuff and modified it to work in Icehouse with tenant nets: 
https://github.com/kevinbringard/novanet2neutron/tree/icehouse. I need to 
update the README to succinctly reflect the steps, but the code is there (I'm 
going to work on the README today).

If this is something folks are interested in I proposed a talk to go over the 
process and our various use cases in Austin: 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7045

-- Kevin



On 12/9/15, 12:49 PM, "Kevin Bringard (kevinbri)" <kevin...@cisco.com> wrote:

>It's worth pointing out, it looks like this only works in Kilo+, as it's 
>written. Sam pointed out earlier that this was what they'd run it on, but I 
>verified it won't work on earlier versions because, specifically, in the 
>migrate-secgroups.py it inserts into the default_security_group table, which 
>was introduced in Kilo.
>
>I'm working on modifying it. If I manage to get it working properly I'll 
>commit my changes to my fork and send it out.
>
>-- Kevin
>
>
>
>On 12/9/15, 10:00 AM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>
>>I did not but more advanced could mean a lot of things for Neutron. There are 
>>so many possible scenarios that expecting to have a “script” to cover all of 
>>them is a whole new project. Not sure we want to explore that. In the past we 
>>were recommending to
>> make the migration in multiple steps, maybe we could use this as a good step 
>> 0.
>>
>>
>>Edgar
>>
>>
>>
>>
>>
>>From: "Kris G. Lindgren"
>>Date: Wednesday, December 9, 2015 at 8:57 AM
>>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>>Cc: OpenStack Operators
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Doesn't this script only solve the case of going from flatdhcp networks in 
>>nova-network to the same dhcp/provider networks in neutron?  Did anyone test to 
>>see if it also works for doing more advanced nova-network configs?
>>
>>
>>_______
>>Kris Lindgren
>>Senior Linux Systems Engineer
>>GoDaddy
>>
>>
>>
>>
>>
>>
>>From: Edgar Magana <edgar.mag...@workday.com>
>>Date: Wednesday, December 9, 2015 at 9:54 AM
>>To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" 
>><kevin...@cisco.com>
>>Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Yes! We should, but with a huge caveat that it is not supported officially by 
>>the OpenStack community. At least the author wants to make a move with the 
>>Neutron team to make it part of the tree.
>>
>>
>>Edgar 
>>
>>
>>
>>
>>
>>From: Matt Kassawara
>>Date: Wednesday, December 9, 2015 at 8:52 AM
>>To: "Kevin Bringard (kevinbri)"
>>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>>
>>
>>
>>Anyone think we should make this script a bit more "official" ... perhaps in 
>>the networking guide?
>>
>>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
>><kevin...@cisco.com> wrote:
>>
>>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>>give me a good blueprint for what to look for and where to start.
>>
>>
>>
>>On 12/8/15, 10:37 PM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>>
>>>Awesome code! I just did a small testbed test and it worked nicely!
>>>
>>>Edgar
>>>
>>>
>>>
>>>
>>>On 12/8/15, 7:16 PM, "Tom Fifield" <t...@openstack.org> wrote:
>>>
>>>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>>>> Hey fellow oppers!
>>>>>
>>>>> I was wondering if anyone has any experience doing a migration from 
>>>>> nova-network to neutron. We're looking at an in place swap, on an 
>>>>> Icehouse deployment. I don't have parallel
>>>>>
>>>>> I came a

Re: [Openstack-operators] [openstack-operators]disable snat for router gateway

2016-01-19 Thread Kevin Bringard (kevinbri)
To expand on Joseph's explanation: when SNAT is enabled, an IP is pulled from 
the floating pool and assigned as a "default SNAT" for the router when its 
gateway is set. Similar to how your home router has a single external IP and 
all your internal devices SNAT out from that IP, all VMs on that network will 
have external access which originates from that IP address.

As Joseph pointed out, if you have this option disabled, unless you explicitly 
assign a floating IP address to a VM (which sets up a 1:1 DNAT/SNAT for the 
internal/floating IP), VMs won't be able to access the outside world because 
there will be no default SNAT rule mapping them to an externally routable IP 
address.
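
In CLI terms it looks roughly like this (router and network names are 
placeholders):

  # gateway with default SNAT: all VMs on the tenant network share one
  # outbound address
  neutron router-gateway-set tenant-router ext-net

  # gateway with SNAT disabled: only VMs that get a floating IP can reach out
  neutron router-gateway-set --disable-snat tenant-router ext-net

  # 1:1 DNAT/SNAT for a specific VM via a floating IP
  neutron floatingip-create ext-net
  neutron floatingip-associate <floating-ip-id> <vm-port-id>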



On 1/15/16, 7:04 PM, "Bajin, Joseph"  wrote:

>The instance would still require a floating IP. That is the only way the host 
>would get outside of the tenant network.  
>
>
>We do this for some of our tenants to ensure that we know that only 
>connections outbound would be controlled by Floating IPs. 
>
>
>
>
>
>On Jan 15, 2016, at 6:55 PM, Akshay Kumar Sanghai 
> wrote:
>
>
>
>Hi, in the CLI of neutron router-gateway-set, there is an option to disable 
>SNAT. 
>http://docs.openstack.org/cli-reference/neutron.html#neutron-router-gateway-set
>
>
>Does that mean I can create a tenant network and the packet will go out with 
>the same fixed IP of the VM? Assume the tenant network created is routable or 
>identifiable in the physical network.
>I tried to disable SNAT for the router gateway, but the packet wasn't going 
>out from the external interface. Do I need to edit some iptables rules, or 
>does the disable-snat option not work?
>
>Thanks,
>Akshay
>
>
>
>
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] kilo keystone with liberty nova

2016-01-06 Thread Kevin Bringard (kevinbri)
We've even done later versions of keystone with older versions of other stuff 
(Specifically Kilo Keystone with Juno Glance/Heat and Icehouse everything 
else)... 

The thing I'd watch out for going the other direction is if the newer version 
of, let's say, nova, requires API calls which aren't present in your older 
version of Keystone. I can't think of any specific calls off the top of my 
head, but that'd be the only major red flag which comes to mind.
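
One cheap sanity check before mixing versions is to ask keystone what it 
actually serves (host/port below are placeholders):

  # list the identity API versions the endpoint advertises
  curl -s http://controller:5000/ | python -m json.tool

  # and make sure a newer client can still walk the catalog
  openstack endpoint list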



On 1/5/16, 10:50 PM, "Abel Lopez"  wrote:

>
>
>
>We've done things like Icehouse keystone and Juno glance/heat/ceilometer. 
>
>
>Seems like having older keystone is somewhat safe. 
>
>On Tuesday, January 5, 2016, Marcus Furlong  wrote:
>
>Hi,
>
>Just wanted to check if it's possible to have a mixed version
>environment with kilo keystone and e.g. liberty nova.
>
>Has anyone done this? Any gotchas?
>
>Regards,
>Marcus.
>--
>Marcus Furlong
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org 
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
give me a good blueprint for what to look for and where to start.



On 12/8/15, 10:37 PM, "Edgar Magana" <edgar.mag...@workday.com> wrote:

>Awesome code! I just did a small testbed test and it worked nicely!
>
>Edgar
>
>
>
>
>On 12/8/15, 7:16 PM, "Tom Fifield" <t...@openstack.org> wrote:
>
>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>> Hey fellow oppers!
>>>
>>> I was wondering if anyone has any experience doing a migration from 
>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>> deployment. I don't have parallel
>>>
>>> I came across a couple of things in my search:
>>>
>>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>>
>>> But neither of them have much in the way of details.
>>>
>>> Looking to disrupt as little as possible, but of course with something like 
>>> this there's going to be an interruption.
>>>
>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>> it.
>>>
>>> Thanks!
>>>
>>> -- Kevin
>>
>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron ) 
>>with success to do a live nova-net to neutron using Juno.
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
Yea, I was considering updating the wiki 
(https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo) to at 
least make mention of it.

If it works (and it sounds like it does, at least for Juno) then I'm all for 
adding it as a potential resource




On 12/9/15, 9:52 AM, "Matt Kassawara" <mkassaw...@gmail.com> wrote:

>Anyone think we should make this script a bit more "official" ... perhaps in 
>the networking guide?
>
>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
><kevin...@cisco.com> wrote:
>
>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>give me a good blueprint for what to look for and where to start.
>
>
>
>On 12/8/15, 10:37 PM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>
>>Awesome code! I just did a small testbed test and it worked nicely!
>>
>>Edgar
>>
>>
>>
>>
>>On 12/8/15, 7:16 PM, "Tom Fifield" <t...@openstack.org> wrote:
>>
>>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>>> Hey fellow oppers!
>>>>
>>>> I was wondering if anyone has any experience doing a migration from 
>>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>>> deployment. I don't have parallel
>>>>
>>>> I came across a couple of things in my search:
>>>>
>>>> 
>https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo 
><https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo>
>>>> 
>http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
> 
><http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html>
>>>>
>>>> But neither of them have much in the way of details.
>>>>
>>>> Looking to disrupt as little as possible, but of course with something 
>>>> like this there's going to be an interruption.
>>>>
>>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>>> it.
>>>>
>>>> Thanks!
>>>>
>>>> -- Kevin
>>>
>>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>>with success to do a live nova-net to neutron using Juno.
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>OpenStack-operators@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
It's worth pointing out, it looks like this only works in Kilo+, as it's 
written. Sam pointed out earlier that this was what they'd run it on, but I 
verified it won't work on earlier versions because, specifically, in the 
migrate-secgroups.py it inserts into the default_security_group table, which 
was introduced in Kilo.

I'm working on modifying it. If I manage to get it working properly I'll commit 
my changes to my fork and send it out.

-- Kevin



On 12/9/15, 10:00 AM, "Edgar Magana" <edgar.mag...@workday.com> wrote:

>I did not but more advanced could mean a lot of things for Neutron. There are 
>so many possible scenarios that expecting to have a “script” to cover all of 
>them is a whole new project. Not sure we want to explore that. In the past we 
>were recommending to
> make the migration in multiple steps, maybe we could use this as a good step 
> 0.
>
>
>Edgar
>
>
>
>
>
>From: "Kris G. Lindgren"
>Date: Wednesday, December 9, 2015 at 8:57 AM
>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>Cc: OpenStack Operators
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Doesn't this script only solve the case of going from flatdhcp networks in 
>nova-network to the same dhcp/provider networks in neutron?  Did anyone test to 
>see if it also works for doing more advanced nova-network configs?
>
>
>___
>Kris Lindgren
>Senior Linux Systems Engineer
>GoDaddy
>
>
>
>
>
>
>From: Edgar Magana <edgar.mag...@workday.com>
>Date: Wednesday, December 9, 2015 at 9:54 AM
>To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" 
><kevin...@cisco.com>
>Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Yes! We should, but with a huge caveat that it is not supported officially by 
>the OpenStack community. At least the author wants to make a move with the 
>Neutron team to make it part of the tree.
>
>
>Edgar 
>
>
>
>
>
>From: Matt Kassawara
>Date: Wednesday, December 9, 2015 at 8:52 AM
>To: "Kevin Bringard (kevinbri)"
>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Anyone think we should make this script a bit more "official" ... perhaps in 
>the networking guide?
>
>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
><kevin...@cisco.com> wrote:
>
>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>give me a good blueprint for what to look for and where to start.
>
>
>
>On 12/8/15, 10:37 PM, "Edgar Magana" <edgar.mag...@workday.com> wrote:
>
>>Awesome code! I just did a small testbed test and it worked nicely!
>>
>>Edgar
>>
>>
>>
>>
>>On 12/8/15, 7:16 PM, "Tom Fifield" <t...@openstack.org> wrote:
>>
>>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>>> Hey fellow oppers!
>>>>
>>>> I was wondering if anyone has any experience doing a migration from 
>>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>>> deployment. I don't have parallel
>>>>
>>>> I came across a couple of things in my search:
>>>>
>>>> 
>https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo 
><https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo>
>>>> 
>http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
> 
><http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html>
>>>>
>>>> But neither of them have much in the way of details.
>>>>
>>>> Looking to disrupt as little as possible, but of course with something 
>>>> like this there's going to be an interruption.
>>>>
>>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>>> it.
>>>>
>>>> Thanks!
>>>>
>>>> -- Kevin
>>>
>>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>>with success to do a live nova-net to neutron using Juno.
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>OpenStack-operators@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
>
>
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nova-network -> Neutron Migration

2015-12-08 Thread Kevin Bringard (kevinbri)
Hey fellow oppers!

I was wondering if anyone has any experience doing a migration from 
nova-network to neutron. We're looking at an in place swap, on an Icehouse 
deployment. I don't have parallel 

I came across a couple of things in my search:

https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html

But neither of them have much in the way of details.

Looking to disrupt as little as possible, but of course with something like 
this there's going to be an interruption.

If anyone has any experience, pointers, or thoughts I'd love to hear about it.

Thanks!

-- Kevin
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] OpenStack Tuning Guide

2015-11-05 Thread Kevin Bringard (kevinbri)
Yep, I 100% agree, and that's my goal. Gather up everyone's info and best 
practices, and then work on distilling it into a new chapter (or perhaps even a 
whole new section) of the operators or administration guide.


Thanks for all the great feedback so far, let's keep it up!
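
Since the RabbitMQ file descriptor limits come up below, here's the sort of 
thing I mean by "basic" tuning, as one common approach on Ubuntu-packaged 
rabbit (adjust for your init system, and verify the result rather than 
trusting the config):

  # raise the open-file limit for the rabbitmq-server process
  echo 'ulimit -n 65536' | sudo tee -a /etc/default/rabbitmq-server
  sudo service rabbitmq-server restart

  # confirm what rabbit actually picked up
  sudo rabbitmqctl status | grep -A 3 file_descriptors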

-- Kevin

On 11/4/15, 4:40 PM, "Matt Kassawara" <mkassaw...@gmail.com> wrote:

>The official documentation would greatly benefit from more practical 
>information such as tuning... most likely in the administration or operations 
>guide, at least for now. Ideally, I would like to see a operators contribute 
>documentation for
> complete production deployments that others can use as a reference for new 
> deployments and avoid reinventing the wheel.
>
>On Wed, Nov 4, 2015 at 4:27 PM, Leslie-Alexandre DENIS 
><cont...@ladenis.fr> wrote:
>
>Hello there,
>
>Very interesting initiative! I'm currently working on the performance side 
>too, on our (French astrophysics cloud) OpenStack deployment, but so far 
>I'm just following what CERN did regarding Nova/CPU configuration.
>It's essentially around KSM, NUMA, CPU pinning, EPT and performance measures 
>are compared through Spec 06 benchmarks, which is standard for HPC/HTC 
>computing.
>
>You can take a look at the very interesting blog from CERN here 
>http://openstack-in-production.blogspot.fr/ 
><http://openstack-in-production.blogspot.fr/> and if you want I have few 
>slides from various meetings like HEPIX on these subjects.
>
>Following the etherpad !
>
>Regards,
>
>
>Le 05/11/2015 00:11, Donald Talton a écrit :
>
>Awesome start. Rabbit fd tweaks are the bane of every install...including some 
>of my own...
>
>-Original Message-
>From: Kevin Bringard (kevinbri) [mailto:kevin...@cisco.com]
>Sent: Wednesday, November 04, 2015 3:56 PM
>To: OpenStack Operators
>Subject: [Openstack-operators] OpenStack Tuning Guide
>
>Hey all!
>
>Something that jumped out at me in Tokyo was how much it seemed that "basic" 
>tuning stuff wasn't common knowledge. This was especially prevalent in the 
>couple of rabbit talks I went to. So, in order to pool our resources, I 
>started an Etherpad titled the "OpenStack
> Tuning Guide" (https://etherpad.openstack.org/p/OpenStack_Tuning_Guide). 
> Eventually I expect this should go into the documentation project, and much 
> of it
> may already exist in the operators manual (or elsewhere), but I thought that 
> getting us all together to drop in our hints, tweaks, and best practices for 
> tuning our systems to run OpenStack well, in real production, would be time 
> well spent.
>
>It's a work in progress at the moment, and we've only just started, but please 
>feel free to check it out. Feedback and community involvement is super 
>welcome, so please don't hesitate to modify it as you see fit.
>
>Finally, I hate diverging resources, so if something like this already exists 
>please speak up so we can focus our efforts on making sure that's up to date 
>and well publicized.
>
>Thanks everyone!
>
>-- Kevin
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>This email and any files transmitted with it are confidential, proprietary and 
>intended solely for the individual or entity to whom they are addressed. If 
>you have received this email in error please delete it immediately.
>
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Any users of Neutron's VPN advanced service?

2015-08-06 Thread Kevin Bringard (kevinbri)
I've got to agree. We don't really use the included VPNaaS for many of the
reasons listed below. Most of our users deploy appliance VMs to establish
tunnels and act as their subnet's router, same as Sam.

On 8/6/15, 7:52 AM, Sam Stoelinga sammiest...@gmail.com wrote:

I'm running VPN servers in VMs. Neutron VPNaaS only supports site-to-site
IPSec based VPNs and it seemed quite troublesome to setup (opinion-based).


Sam Stoelinga


On Thu, Aug 6, 2015 at 2:51 PM, Edgar Magana
edgar.mag...@workday.com wrote:

I know I can't wear both hats, but in this case, as an operator and one of the
constant moderators for the neutron-related sessions, I can say that I
have never received a report about the VPNaaS code from the operators.
This could mean two things: either the code
is terrific and nobody has issues with it, or basically nobody uses it.


Thanks,


Edgar







From: Kyle Mestery
Date: Wednesday, August 5, 2015 at 12:56 PM
To: openstack-operators@lists.openstack.org
Cc: Paul Michali, Doug Wiegley
Subject: [Openstack-operators] [neutron] Any users of Neutron's VPN
advanced service?



Operators:


We (myself, Paul and Doug) are looking to better understand who might be
using Neutron's VPNaaS code. We're looking for what version you're using,
how long you're using it, and if you plan to continue deploying it with
future upgrades. Any information operators
 can provide here would be fantastic!


Thank you!

Kyle







___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators








___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Outbound and inbound external access for projects

2015-07-15 Thread Kevin Bringard (kevinbri)
You don't need per-project VLANs for inbound and outbound access. Public
IPs only need a single VLAN between the logical routers
(net-hosts/l3-agent hosts) and their next hop... It's the internal
networks which require multiple VLANs if you wish to do such a thing, and
those VLANs are only necessary on your internal switches. Alternatively
you can use GRE or STT or some other segregation method and avoid the VLAN
cap altogether (on internal networks).

Basically, the flow looks like so:

Internet -> Floating IP (hosted on your logical router host... all on a
single public VLAN) -> NAT translation to the internal tenant subnet (and
tagged with the internal OVS VLAN) -> VLAN translation flow (if it needs
to go to the wire) tags the packet with the VLAN assigned to the tenant's
subnet (or goes over the requisite GRE tunnel) -> ...
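
If you want to poke at this on a live system, here's a rough sketch (the
namespace and bridge names depend on your deployment, and <router-uuid> is
whatever ID your tenant router has):

# on the network/l3-agent host: floating IP NAT lives in the router namespace
ip netns list
ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep -i float
# the internal/provider VLAN (or tunnel) translation is done by OVS flows
ovs-ofctl dump-flows br-int | head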

It's kind of complicated, I know, but hopefully that helps some? Or
perhaps I just misunderstood your scenario/question, which is also
entirely possible :-D


On 7/15/15, 9:24 AM, Adam Huffman adam.huff...@gmail.com wrote:

Hello

We're at the stage of working out how to integrate our Icehouse system
with the external network, using Neutron.

We have a limited set of public IPs available for inbound access, and
we'd also like to make outbound access optional, in case some projects
want to be completely isolated.

One suggestion is as follows:

- each project is allocated a single /24 VLAN

- within this VLAN, there are 2 subnets

- the first subnet (/25) would be for outbound access, using floating IPs

- the second (/25) subnet would be for inbound access, drawing from
the limited public pool, also with floating IPs

Does that sound sensible/feasible? The Cisco hardware that's providing
the route to the external network has constraints on the number of VLANs
it will support, so we prefer this approach to having separate
per-project VLANs for outbound and inbound access.
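
Just to make that concrete, a rough sketch of what we have in mind (the
network name, VLAN ID, and CIDRs below are only placeholders):

neutron net-create proj-a-ext --router:external=True \
  --provider:network_type vlan --provider:physical_network physnet1 \
  --provider:segmentation_id 201
# first /25: outbound-only floating IPs
neutron subnet-create proj-a-ext 203.0.113.0/25 --name proj-a-outbound
# second /25: inbound-capable floating IPs from the limited public pool
neutron subnet-create proj-a-ext 203.0.113.128/25 --name proj-a-inbound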

If there's a different way of achieving this, I'd be interested to
hear that too.


Cheers,
Adam

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Getting ERROE on compute node

2015-07-08 Thread Kevin Bringard (kevinbri)
I think more than that, conductor may not be running. If you look at the
original error messages, we can see that it's able to connect to the
rabbit server:

2015-07-08 01:21:20.501 49721 INFO oslo.messaging._drivers.impl_rabbit
[req-6f67f40a-53cc-4c95-a431-f98ab296c0c5 ] Connecting to AMQP server on
controller.example.com:5672
2015-07-08 01:21:20.516 49721 INFO oslo.messaging._drivers.impl_rabbit
[req-6f67f40a-53cc-4c95-a431-f98ab296c0c5 ] Connected to AMQP server on
controller.example.com:5672


But then it times out waiting for conductor.

2015-07-08 01:21:30.538 49721 WARNING nova.conductor.api
[req-6f67f40a-53cc-4c95-a431-f98ab296c0c5 None] Timed out waiting for
nova-conductor.  Is it running? Or did this service start before
nova-conductor?  Reattempting establishment of nova-conductor connection...


I'd look on your controller node and make sure the nova-conductor service
is running, and also make sure *it* is able to connect to the message bus.
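
Something quick along these lines usually tells you (service names assume
the Ubuntu packages, so adjust for your distro):

# on the controller
service nova-conductor status
tail -n 50 /var/log/nova/nova-conductor.log
# and make sure that host can actually reach the AMQP port
nc -zv controller.example.com 5672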



On 7/8/15, 8:07 AM, MailingLists - EWS
mailingli...@expresswebsystems.com wrote:

Anwar,
 
It looks like your RabbitMQ isn't running or isn't reachable.
 
There were some issues that we ran into when doing this. Make sure
RabbitMQ is running. Double-check your iptables rules to make sure the
ports are open. I also seem to recall some problem with a certain version
of RabbitMQ that was just broken, and we had to roll back to a previous
version, but that was several months ago and my recollection is a bit
hazy as to the details.
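
For the basic checks, something like this is usually enough (5672 assumes
the default AMQP port):

rabbitmqctl status
rabbitmqctl list_connections | head
# confirm the port is listening and not blocked by iptables
netstat -plnt | grep 5672
iptables -L -n | grep 5672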
 
Tom Walsh
Express Web Systems, Inc.







___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Kevin Bringard (kevinbri)
If you're using Rabbit 3.x you need to enable HA queues via policy on the
rabbit server side.

Something like this:

rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all","ha-sync-mode":"automatic"}'


Obviously, tailor it to your own needs :-)
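
You can sanity check that the policy actually applied with something like:

rabbitmqctl list_policies
rabbitmqctl list_queues name policy | head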

We've also seen issues with TCP_RETRIES2 needing to be turned way down
because when rebooting the rabbit node, it takes quite some time for the
remote host to realize it's gone and tear down the connections.
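
That one is just a kernel sysctl, e.g. (5 is only an example; tune it
until dead connections time out quickly without false positives):

sysctl -w net.ipv4.tcp_retries2=5
# to persist it, assuming /etc/sysctl.conf is in use:
echo 'net.ipv4.tcp_retries2 = 5' >> /etc/sysctl.conf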

On 5/14/15, 9:23 AM, Pedro Sousa pgso...@gmail.com wrote:

Hi all,


I'm using Juno and occasionally see this kind of error when I reboot one
of my rabbit nodes:


MessagingTimeout: Timed out waiting for a reply to message ID
e95d4245da064c779be2648afca8cdc0


I use ha queues in my openstack services:


rabbit_hosts=192.168.113.206:5672,192.168.113.207:5672,192.168.113.208:5672

rabbit_ha_queues=True



Has anyone experienced these issues? Is this an oslo bug or something related?


Regards,
Pedro Sousa









___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Kevin Bringard (kevinbri)


On 5/14/15, 9:45 AM, Pedro Sousa pgso...@gmail.com wrote:

Hi Kevin,


thank you for the reply. I'm using rabbitmqctl set_policy HA '^(?!amq\.).*'
'{"ha-mode": "all"}'


I will test with 'ha-sync-mode: automatic' and net.ipv4.tcp_retries2=5

I don't know that you need to set ha-sync-mode to automatic (I was just
using an example I found quickly on the internets), but I do think the
tcp_retries2 thing will help. I think we may have even set ours to 3...
I'd have to check. But fiddle with it to the point that it times out
connections quickly without having false positives.

There's a doc RH wrote about it. It's specific to Oracle, but should be
portable.

https://www.redhat.com/promo/summit/2010/presentations/summit/decoding-the-code/fri/scott-945-tuning/summit_jbw_2010_presentation.pdf



Regards,
Pedro Sousa












On Thu, May 14, 2015 at 4:29 PM, Kevin Bringard (kevinbri)
kevin...@cisco.com wrote:

If you're using Rabbit 3.x you need to enable HA queues via policy on the
rabbit server side.

Something like this:

rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all","ha-sync-mode":"automatic"}'


Obviously, tailor it to your own needs :-)

We've also seen issues with TCP_RETRIES2 needing to be turned way down
because when rebooting the rabbit node, it takes quite some time for the
remote host to realize it's gone and tear down the connections.

On 5/14/15, 9:23 AM, Pedro Sousa pgso...@gmail.com wrote:

Hi all,


I'm using Juno and occasionally see this kind of error when I reboot one
of my rabbit nodes:


MessagingTimeout: Timed out waiting for a reply to message ID
e95d4245da064c779be2648afca8cdc0


I use ha queues in my openstack services:


rabbit_hosts=192.168.113.206:5672,192.168.113.207:5672,192.168.113.208:5672

rabbit_ha_queues=True



Has anyone experienced these issues? Is this an oslo bug or something related?


Regards,
Pedro Sousa

















___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-13 Thread Kevin Bringard (kevinbri)
Ah, I don't believe nova-network supports EGRESS rules.

On 5/13/15, 3:41 PM, Gustavo Randich gustavo.rand...@gmail.com wrote:

Hi, sorry, I forgot to mention: I'm using nova-network



On Wed, May 13, 2015 at 6:39 PM, Abel Lopez
alopg...@gmail.com wrote:

Yes, you can define egress security group rules.

 On May 13, 2015, at 2:32 PM, Gustavo Randich
gustavo.rand...@gmail.com wrote:

 Hi,

 Is there any way to filter outgoing VM traffic in Icehouse, preferably
using security groups? I.e. deny all traffic except to certain IPs

 Thanks!



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators








___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-13 Thread Kevin Bringard (kevinbri)
Specifically, look at neutron security-group-rule-create:

usage: neutron security-group-rule-create [-h] [-f {shell,table}] [-c COLUMN]
                                          [--variable VARIABLE]
                                          [--prefix PREFIX]
                                          [--request-format {json,xml}]
                                          [--tenant-id TENANT_ID]
                                          [--direction {ingress,egress}]
                                          [--ethertype ETHERTYPE]
                                          [--protocol PROTOCOL]
                                          [--port-range-min PORT_RANGE_MIN]
                                          [--port-range-max PORT_RANGE_MAX]
                                          [--remote-ip-prefix REMOTE_IP_PREFIX]
                                          [--remote-group-id REMOTE_GROUP]
                                          SECURITY_GROUP

The --direction option is what you're looking for. You may need to remove
a default egress rule... I think by default it allows everything.
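
A rough example (the security group name, CIDR, and port are placeholders,
and the default rule IDs will differ in your deployment):

# list the existing rules; new groups usually come with allow-all egress
neutron security-group-rule-list
neutron security-group-rule-delete <rule-id>
# then allow egress only where you want it, e.g. HTTPS to a single host
neutron security-group-rule-create --direction egress --ethertype IPv4 \
  --protocol tcp --port-range-min 443 --port-range-max 443 \
  --remote-ip-prefix 203.0.113.10/32 my-secgroup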


On 5/13/15, 3:39 PM, Abel Lopez alopg...@gmail.com wrote:

Yes, you can define egress security group rules.

 On May 13, 2015, at 2:32 PM, Gustavo Randich
gustavo.rand...@gmail.com wrote:
 
 Hi,
 
 Is there any way to filter outgoing VM traffic in Icehouse, preferably
using security groups? I.e. deny all traffic except to certain IPs
 
 Thanks!
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators