[Openstack-operators] OpenStack Liberty automation script release.

2015-12-09 Thread Hieu LE
Dear stackers,

Vietnam OpenStack Community is very eager to announce the release of the
Liberty automated deployment scripts:

1. Liberty multi-node deployment using OVS in Neutron:
https://github.com/vietstacker/openstack-liberty-multinode
/blob/master/LIBERTY-U14.04-OVS/README-ENGLISH.md
2. Liberty multi-node deployment using Linux Bridge in Neutron:
https://github.com/vietstacker/openstack-liberty-multinode
/blob/master/LIBERTY-U14.04-LB/README-ENGLISH.md
3. Liberty all-in-one (AIO) deployment using Linux Bridge:
https://github.com/vietstacker/openstack-liberty-multinode
/blob/master/LIBERTY-U14.04-AIO/README-ENGLISH.md

Please review and give us feedback.

-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C()$ ULC(++)$ P L++(+++)$ E
!W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
e++(+++) h-- r(++)>+++ y-
--END GEEK CODE BLOCK--
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] RBAC usage at production

2015-12-09 Thread Steve Martinelli

Whether or not a restart is required is actually handled by oslo.policy,
which is only included in Kilo and newer versions of Keystone. The work to
avoid restarting the service went in with commit [0] and was further worked
on in [1].

Juno and older versions use the oslo-incubator code to handle policy
(before it was turned into its own library), and AFAICT they don't have the
check to see if policy.json has been modified.

[0]
https://github.com/openstack/oslo.policy/commit/63d699aff89969fdfc584ce875a23ba0a90e5b51
[1]
https://github.com/openstack/oslo.policy/commit/b5f07dfe4cd4a5d12c7fecbc3954694d934de642
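
For reference, a minimal sketch of how the reload behaves with oslo.policy
(assuming the Kilo-era oslo.policy API; the policy file path and rule name
below are illustrative):

from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
# The Enforcer re-reads the policy file when it changes on disk, so
# edits take effect without restarting the service.
enforcer = policy.Enforcer(CONF, policy_file='policy.json')

def check_get_user(creds, target):
    # Each enforce() call runs load_rules(), which reloads the file
    # if it was modified since the last check.
    return enforcer.enforce('identity:get_user', target, creds)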

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   Timothy Symanczyk 
To: "OpenStack Development Mailing List (not for usage questions)"
, "Kris G. Lindgren"
, Oguz Yarimtepe
,
"openstack-operators@lists.openstack.org"

Date:   2015/12/09 04:40 PM
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] RBAC usage
at production



We are running keystone kilo in production, and I'm actively implementing
RBAC right now. I'm certain that, at least with the version of keystone
we're running, a restart is NOT required when the policy file is modified.

Tim




On 12/9/15, 9:18 AM, "Edgar Magana"  wrote:

>We use RBAC in production, but we basically modify only networking operations and
>some compute ones. In our case we don't need to restart the services if
>we modify the policy.json file. I am surprised that keystone is not
>following the same process.
>
>Edgar
>
>
>
>
>On 12/9/15, 9:06 AM, "Kris G. Lindgren"  wrote:
>
>>In other projects the policy.json file is read on each API request,
>> so changes to the file take effect immediately.  I was 90% sure keystone
>>was the same way?
>>
>>___
>>Kris Lindgren
>>Senior Linux Systems Engineer
>>GoDaddy
>>
>>
>>
>>
>>
>>
>>
>>On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:
>>
>>>Hi,
>>>
>>>I am wondering whether there are people using RBAC in production. The
>>>policy.json file has a structure that requires a restart of the service
>>>each time you edit the file. Is there an on-the-fly solution, or any tips
>>>about it?
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>OpenStack-operators@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
On 12/08/2015 06:39 AM, Jamie Lennox wrote:
> The main problem I see with running Keystone (or any other service) in a
> web server is that *I* (as a package maintainer) will lose control
> over when the service is started. Let me explain why that is important
> for me.
> 
> In Debian, many services/daemons are run, then their API is used by the
> package. In the case of Keystone, for example, it is possible to ask,
> via Debconf, that Keystone registers itself in the service catalog. If
> we get Keystone within Apache, it becomes at least harder to do so.
> 
> I was going to leave this up to others to comment on here, but IMO -
> excellent. Anyone that is doing an even semi-serious deployment of
> OpenStack is going to require puppet/chef/ansible or some form of
> orchestration layer for deployment. Even for test deployments it seems
> to me that it's crazy for this sort of functionality to be handled from
> debconf. The deployers of the system are going to understand if they
> want to use eventlet or apache and should therefore understand what
> restarting apache on a system implies.

That is what people from within the community often say. However,
there are lots of users who hardly do a single deployment, maybe 2. I
don't agree that they should all invest a huge amount of time in
automation tools, and for them, packages should be enough.

Anyway, the debconf handling is completely optional, and most of the
helpers are completely disabled by default. So it is *not* in the way of
using any deployment tool like puppet.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
It's worth pointing out that, as written, this only works on Kilo+. Sam 
pointed out earlier that that was what they'd run it on, but I verified it 
won't work on earlier versions because migrate-secgroups.py inserts into 
the default_security_group table, which was introduced in Kilo.
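
For context, a sketch of the kind of row the script creates (this assumes 
the Kilo-era neutron schema, where default_security_group maps a tenant to 
its default security group; the UUIDs are placeholders):

-- Hypothetical illustration of the row migrate-secgroups.py inserts;
-- the table only exists from Kilo onward, hence the version limit.
INSERT INTO default_security_group (tenant_id, security_group_id)
VALUES ('<tenant-uuid>', '<security-group-uuid>');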

I'm working on modifying it. If I manage to get it working properly I'll commit 
my changes to my fork and send it out.

-- Kevin



On 12/9/15, 10:00 AM, "Edgar Magana"  wrote:

>I did not, but "more advanced" could mean a lot of things for Neutron. There are 
>so many possible scenarios that expecting to have a “script” to cover all of 
>them is a whole new project. Not sure we want to explore that. In the past we 
>were recommending making the migration in multiple steps; maybe we could use 
>this as a good step 0.
>
>
>Edgar
>
>
>
>
>
>From: "Kris G. Lindgren"
>Date: Wednesday, December 9, 2015 at 8:57 AM
>To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
>Cc: OpenStack Operators
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Doesn't this script only solve the case of going from FlatDHCP networks in 
>nova-network to the same DHCP/provider networks in neutron?  Did anyone test to 
>see if it also works for more advanced nova-network configs?
>
>
>___
>Kris Lindgren
>Senior Linux Systems Engineer
>GoDaddy
>
>
>
>
>
>
>From: Edgar Magana 
>Date: Wednesday, December 9, 2015 at 9:54 AM
>To: Matt Kassawara , "Kevin Bringard (kevinbri)" 
>
>Cc: OpenStack Operators 
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Yes! We should, but with a huge caveat: it is not officially supported by 
>the OpenStack community. At least the author wants to work with the 
>Neutron team to make it part of the tree.
>
>
>Edgar 
>
>
>
>
>
>From: Matt Kassawara
>Date: Wednesday, December 9, 2015 at 8:52 AM
>To: "Kevin Bringard (kevinbri)"
>Cc: Edgar Magana, Tom Fifield, OpenStack Operators
>Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration
>
>
>
>Anyone think we should make this script a bit more "official" ... perhaps in 
>the networking guide?
>
>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>give me a good blueprint for what to look for and where to start.
>
>
>
>On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>
>>Awesome code! I just did a small testbed test and it worked nicely!
>>
>>Edgar
>>
>>
>>
>>
>>On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:
>>
>>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
 Hey fellow oppers!

 I was wondering if anyone has any experience doing a migration from 
 nova-network to neutron. We're looking at an in place swap, on an Icehouse 
 deployment. I don't have parallel

 I came across a couple of things in my search:

 
>https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo 
>
 
>http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
> 
>

 But neither of them have much in the way of details.

 Looking to disrupt as little as possible, but of course with something 
 like this there's going to be an interruption.

 If anyone has any experience, pointers, or thoughts I'd love to hear about 
 it.

 Thanks!

 -- Kevin
>>>
>>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>>with success to do a live nova-net to neutron using Juno.
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>OpenStack-operators@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
>
>
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Edgar Magana
We use RBAC in production, but we basically modify only networking operations and some 
compute ones. In our case we don’t need to restart the services if we modify 
the policy.json file. I am surprised that keystone is not following the same 
process. 

Edgar




On 12/9/15, 9:06 AM, "Kris G. Lindgren"  wrote:

>In other projects the policy.json file is read on each API request.  So 
>changes to the file take effect immediately.  I was 90% sure keystone was the 
>same way?
>
>___
>Kris Lindgren
>Senior Linux Systems Engineer
>GoDaddy
>
>
>
>
>
>
>
>On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:
>
>>Hi,
>>
>>I am wondering whether there are people using RBAC in production. The 
>>policy.json file has a structure that requires a restart of the service 
>>each time you edit the file. Is there an on-the-fly solution or any tips 
>>about it?
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Edgar Magana
I did not, but "more advanced" could mean a lot of things for Neutron. There are 
so many possible scenarios that expecting to have a “script” to cover all of 
them is a whole new project. Not sure we want to explore that. In the past we 
were recommending making the migration in multiple steps; maybe we could use 
this as a good step 0.

Edgar

From: "Kris G. Lindgren"
Date: Wednesday, December 9, 2015 at 8:57 AM
To: Edgar Magana, Matt Kassawara, "Kevin Bringard (kevinbri)"
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Doesn't this script only solve the case of going from FlatDHCP networks in 
nova-network to the same DHCP/provider networks in neutron?  Did anyone test to see 
if it also works for more advanced nova-network configs?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Edgar Magana <edgar.mag...@workday.com>
Date: Wednesday, December 9, 2015 at 9:54 AM
To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" <kevin...@cisco.com>
Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Yes! We should, but with a huge caveat: it is not officially supported by 
the OpenStack community. At least the author wants to work with the 
Neutron team to make it part of the tree.

Edgar

From: Matt Kassawara
Date: Wednesday, December 9, 2015 at 8:52 AM
To: "Kevin Bringard (kevinbri)"
Cc: Edgar Magana, Tom Fifield, OpenStack Operators
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Anyone think we should make this script a bit more "official" ... perhaps in 
the networking guide?

On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri) 
<kevin...@cisco.com> wrote:
Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
give me a good blueprint for what to look for and where to start.



On 12/8/15, 10:37 PM, "Edgar Magana" 
<edgar.mag...@workday.com> wrote:

>Awesome code! I just did a small testbed test and it worked nicely!
>
>Edgar
>
>
>
>
>On 12/8/15, 7:16 PM, "Tom Fifield" 
><t...@openstack.org> wrote:
>
>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>> Hey fellow oppers!
>>>
>>> I was wondering if anyone has any experience doing a migration from 
>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>> deployment. I don't have parallel
>>>
>>> I came across a couple of things in my search:
>>>
>>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>>
>>> But neither of them have much in the way of details.
>>>
>>> Looking to disrupt as little as possible, but of course with something like 
>>> this there's going to be an interruption.
>>>
>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>> it.
>>>
>>> Thanks!
>>>
>>> -- Kevin
>>
>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>with success to do a live nova-net to neutron using Juno.
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Kris G. Lindgren
In other projects the policy.json file is read on each API request.  So 
changes to the file take effect immediately.  I was 90% sure keystone was the 
same way?
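
For reference, a minimal policy.json fragment of the kind these services 
read on each request (the rule names here are just illustrative):

{
    "context_is_admin": "role:admin",
    "identity:get_user": "rule:context_is_admin"
}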

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy







On 12/9/15, 1:39 AM, "Oguz Yarimtepe"  wrote:

>Hi,
>
>I am wondering whether there are people using RBAC in production. The 
>policy.json file has a structure that requires a restart of the service 
>each time you edit the file. Is there an on-the-fly solution or any tips 
>about it?
>
>
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kris G. Lindgren
Doesn't this script only solve the case of going from FlatDHCP networks in 
nova-network to the same DHCP/provider networks in neutron?  Did anyone test to see 
if it also works for more advanced nova-network configs?

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Edgar Magana <edgar.mag...@workday.com>
Date: Wednesday, December 9, 2015 at 9:54 AM
To: Matt Kassawara <mkassaw...@gmail.com>, "Kevin Bringard (kevinbri)" <kevin...@cisco.com>
Cc: OpenStack Operators <openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Yes! We should, but with a huge caveat: it is not officially supported by 
the OpenStack community. At least the author wants to work with the 
Neutron team to make it part of the tree.

Edgar

From: Matt Kassawara
Date: Wednesday, December 9, 2015 at 8:52 AM
To: "Kevin Bringard (kevinbri)"
Cc: Edgar Magana, Tom Fifield, OpenStack Operators
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Anyone think we should make this script a bit more "official" ... perhaps in 
the networking guide?

On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri) 
<kevin...@cisco.com> wrote:
Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
give me a good blueprint for what to look for and where to start.



On 12/8/15, 10:37 PM, "Edgar Magana" 
<edgar.mag...@workday.com> wrote:

>Awesome code! I just did a small testbed test and it worked nicely!
>
>Edgar
>
>
>
>
>On 12/8/15, 7:16 PM, "Tom Fifield" 
><t...@openstack.org> wrote:
>
>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>> Hey fellow oppers!
>>>
>>> I was wondering if anyone has any experience doing a migration from 
>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>> deployment. I don't have parallel
>>>
>>> I came across a couple of things in my search:
>>>
>>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>>
>>> But neither of them have much in the way of details.
>>>
>>> Looking to disrupt as little as possible, but of course with something like 
>>> this there's going to be an interruption.
>>>
>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>> it.
>>>
>>> Thanks!
>>>
>>> -- Kevin
>>
>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>with success to do a live nova-net to neutron using Juno.
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Edgar Magana
Yes! We should, but with a huge caveat: it is not officially supported by 
the OpenStack community. At least the author wants to work with the 
Neutron team to make it part of the tree.

Edgar

From: Matt Kassawara
Date: Wednesday, December 9, 2015 at 8:52 AM
To: "Kevin Bringard (kevinbri)"
Cc: Edgar Magana, Tom Fifield, OpenStack Operators
Subject: Re: [Openstack-operators] Nova-network -> Neutron Migration

Anyone think we should make this script a bit more "official" ... perhaps in 
the networking guide?

On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri) 
<kevin...@cisco.com> wrote:
Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
give me a good blueprint for what to look for and where to start.



On 12/8/15, 10:37 PM, "Edgar Magana" 
<edgar.mag...@workday.com> wrote:

>Awesome code! I just did a small testbed test and it worked nicely!
>
>Edgar
>
>
>
>
>On 12/8/15, 7:16 PM, "Tom Fifield" 
><t...@openstack.org> wrote:
>
>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>> Hey fellow oppers!
>>>
>>> I was wondering if anyone has any experience doing a migration from 
>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>> deployment. I don't have parallel
>>>
>>> I came across a couple of things in my search:
>>>
>>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>>
>>> But neither of them have much in the way of details.
>>>
>>> Looking to disrupt as little as possible, but of course with something like 
>>> this there's going to be an interruption.
>>>
>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>> it.
>>>
>>> Thanks!
>>>
>>> -- Kevin
>>
>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>with success to do a live nova-net to neutron using Juno.
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
Yea, I was considering updating the wiki 
(https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo) to at 
least make mention of it.

If it works (and it sounds like it does, at least for Juno), then I'm all for 
adding it as a potential resource.




On 12/9/15, 9:52 AM, "Matt Kassawara"  wrote:

>Anyone think we should make this script a bit more "official" ... perhaps in 
>the networking guide?
>
>On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri)
> wrote:
>
>Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
>give me a good blueprint for what to look for and where to start.
>
>
>
>On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>
>>Awesome code! I just did a small testbed test and it worked nicely!
>>
>>Edgar
>>
>>
>>
>>
>>On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:
>>
>>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
 Hey fellow oppers!

 I was wondering if anyone has any experience doing a migration from 
 nova-network to neutron. We're looking at an in place swap, on an Icehouse 
 deployment. I don't have parallel

 I came across a couple of things in my search:

 
>https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo 
>
 
>http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
> 
>

 But neither of them have much in the way of details.

 Looking to disrupt as little as possible, but of course with something 
 like this there's going to be an interruption.

 If anyone has any experience, pointers, or thoughts I'd love to hear about 
 it.

 Thanks!

 -- Kevin
>>>
>>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
>>>with success to do a live nova-net to neutron using Juno.
>>>
>>>
>>>
>>>___
>>>OpenStack-operators mailing list
>>>OpenStack-operators@lists.openstack.org
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Cinder API with multiple regions not working.

2015-12-09 Thread Joe Topjian
Hi Salman,

Someone mentioned this same issue yesterday in relation to Terraform (maybe
a colleague of yours?), so given the two occurrences, I thought I'd look
into this.

I have a Liberty environment readily available, so I created a second set
of volume and volumev2 endpoints for a fictional region. Everything worked
as expected, so I started reviewing the config files and saw that
/etc/cinder/cinder.conf had an option

[DEFAULT]
os_region_name = RegionOne

I commented that out, but things still worked.

Then in /etc/nova/nova.conf, I saw:

[cinder]
os_region_name = RegionOne

Commenting this out caused volume attachments to hang indefinitely because
nova was trying to contact cinder in RegionTwo (I'm assuming this is the
first catalog entry that was returned).

Given this is a Liberty environment, it's not accurately reproducing your
problem, but could you check and see if you have that option set in
nova.conf?

I have a Kilo environment in the process of building. Once it has finished,
I'll see if I can reproduce your error there.

Thanks,
Joe

On Wed, Dec 9, 2015 at 4:35 AM, Salman Toor  wrote:

> Hi,
>
> I am using the Kilo release on CentOS. We have recently enabled multiple
> regions, and it seems that Cinder has some problems with multiple
> endpoints.
>
> Things are working fine with nova, but cinder is behaving strangely. Here are
> my endpoints:
>
>
> 
> [root@controller: ~] # openstack service list
> +--++--+
> | ID   | Name   | Type |
> +--++--+
> | 0a33e6f259794ff2a99e626be37c0c2b | cinderv2-hpc2n | volumev2 |
> | 1fcae9bd76304853a3168c39c7fe8e6b | nova   | compute  |
> | 2c7828120c294d3f82e3a17835babb85 | neutron| network  |
> | 3804fcd8f9494d30b589b55fe6abb811 | nova-hpc2n | compute  |
> | 478eff4e96464ae8a958ba29f750b14c | glance | image|
> | 4a5a771d915e43c28e66538b8bc6e625 | cinder | volume   |
> | 72d1be82b2e5478dbf0f3fb9e7ba969d | cinderv2   | volumev2 |
> | 97f977f8a7a04bae89da167fd25dc06c | glance-hpc2n   | image|
> | a985795b49e2440db82970b81248c86e | cinder-hpc2n   | volume   |
> | dccd39b92ab547ddaf9047b38620145a | swift  | object-store |
> | ebb1660d1d9746759a48de921521bfad | keystone   | identity |
> +--++--+
>
> [root@controller: ~] # openstack endpoint
> show a985795b49e2440db82970b81248c86e
> +--+--+
> | Field| Value|
> +--+--+
> | adminurl | http://:8776/v1/%(tenant_id)s |
> | enabled  | True |
> | id   | d4003e91ddf24cfb9fa497da81b01a18 |
> | internalurl  | http://:8776/v1/%(tenant_id)s |
> | publicurl| http://:8776/v1/%(tenant_id)s |
> | region   | HPC2N|
> | service_id   | a985795b49e2440db82970b81248c86e |
> | service_name | cinder-hpc2n |
> | service_type | volume   |
> +--+--+
>
> [root@controller: ~] # openstack endpoint
> show 4a5a771d915e43c28e66538b8bc6e625
> +--++
> | Field| Value  |
> +--++
> | adminurl | http://:8776/v1/%(tenant_id)s|
> | enabled  | True   |
> | id   | 5f19c0b535674dbd9e318c7b6d61b3bc   |
> | internalurl  | http://:8776/v1/%(tenant_id)s|
> | publicurl| http://:8776/v1/%(tenant_id)s |
> | region   | regionOne  |
> | service_id   | 4a5a771d915e43c28e66538b8bc6e625   |
> | service_name | cinder |
> | service_type | volume |
> +--++
>
> And same for v2 endpoints
>
> 
>
> ——— nova-api.log ———
>
> achmentController object at 0x598a3d0>>, body: {"volumeAttachment":
> {"device": "", "volumeId": "93d96eab-e3fd-4131-9549-ed51e7299da2"}}
> _process_stack
> /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:780
> 2015-12-08 12:52:05.847 3376 INFO
> nova.api.openstack.compute.contrib.volumes
> [req-d6e1380d-c6bc-4911-b2c6-251bc8b4c62c a62c20fdf99c443a924f0d50a51514b1
> 3c9d997982e04c6db0e02b82fa18fdd8 - - -] Attach volume
> 93d96eab-e3fd-4131-9549-ed51e7299da2 to instance
> 3a4c8722-52a7-48f2-beb7-db8938698a0d at
> 2015-12-08 12:52:

Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Matt Kassawara
Anyone think we should make this script a bit more "official" ... perhaps
in the networking guide?

On Wed, Dec 9, 2015 at 9:01 AM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wrote:

> Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else
> it'll give me a good blueprint for what to look for and where to start.
>
>
>
> On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:
>
> >Awesome code! I just did a small testbed test and it worked nicely!
> >
> >Edgar
> >
> >
> >
> >
> >On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:
> >
> >>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
> >>> Hey fellow oppers!
> >>>
> >>> I was wondering if anyone has any experience doing a migration from
> nova-network to neutron. We're looking at an in place swap, on an Icehouse
> deployment. I don't have parallel
> >>>
> >>> I came across a couple of things in my search:
> >>>
> >>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
> >>>
> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
> >>>
> >>> But neither of them have much in the way of details.
> >>>
> >>> Looking to disrupt as little as possible, but of course with something
> like this there's going to be an interruption.
> >>>
> >>> If anyone has any experience, pointers, or thoughts I'd love to hear
> about it.
> >>>
> >>> Thanks!
> >>>
> >>> -- Kevin
> >>
> >>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron )
> >>with success to do a live nova-net to neutron using Juno.
> >>
> >>
> >>
> >>___
> >>OpenStack-operators mailing list
> >>OpenStack-operators@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >___
> >OpenStack-operators mailing list
> >OpenStack-operators@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Nova-network -> Neutron Migration

2015-12-09 Thread Kevin Bringard (kevinbri)
Thanks, Tom, Sam, and Edgar, that's really good info. If nothing else it'll 
give me a good blueprint for what to look for and where to start.



On 12/8/15, 10:37 PM, "Edgar Magana"  wrote:

>Awesome code! I just did a small testbed test and it worked nicely!
>
>Edgar
>
>
>
>
>On 12/8/15, 7:16 PM, "Tom Fifield"  wrote:
>
>>On 09/12/15 06:32, Kevin Bringard (kevinbri) wrote:
>>> Hey fellow oppers!
>>>
>>> I was wondering if anyone has any experience doing a migration from 
>>> nova-network to neutron. We're looking at an in place swap, on an Icehouse 
>>> deployment. I don't have parallel
>>>
>>> I came across a couple of things in my search:
>>>
>>> https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo
>>> http://docs.openstack.org/networking-guide/migration_nova_network_to_neutron.html
>>>
>>> But neither of them have much in the way of details.
>>>
>>> Looking to disrupt as little as possible, but of course with something like 
>>> this there's going to be an interruption.
>>>
>>> If anyone has any experience, pointers, or thoughts I'd love to hear about 
>>> it.
>>>
>>> Thanks!
>>>
>>> -- Kevin
>>
>>NeCTAR used this script (https://github.com/NeCTAR-RC/novanet2neutron ) 
>>with success to do a live nova-net to neutron using Juno.
>>
>>
>>
>>___
>>OpenStack-operators mailing list
>>OpenStack-operators@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-ansible] Mid Cycle Sprint

2015-12-09 Thread Jesse Pretorius
Hi everyone,

At the Mitaka design summit in Tokyo we had some corridor discussions about
doing a mid-cycle meetup for the purpose of continuing some design
discussions and doing some specific sprint work.

***
I'd like indications of who would like to attend and what
locations/dates/topics/sprints would be of interest to you.
***

For guidance/background I've put some notes together below:

Location

We have contributors, deployers and downstream consumers across the globe
so picking a venue is difficult. Rackspace have facilities in the UK
(Hayes, West London) and in the US (San Antonio) and are happy for us to
make use of them.

Dates
-
Most of the mid-cycles for upstream OpenStack projects are being held in
January. The Operators mid-cycle is on February 15-16.

As I feel that it's important that we're all as involved as possible in
these events, I would suggest that we schedule ours after the Operators
mid-cycle.

It strikes me that it may be useful to do our mid-cycle immediately after
the Ops mid-cycle, and do it in the UK. This may help to optimise travel
for many of us.

Format
--
The format of the meetup is really for us to choose, but typically these
events are structured along the lines of something like this:

Day 1: Big group discussions similar in format to sessions at the design
summit.

Day 2: Collaborative code reviews, usually performed on a projector, where
the goal is to merge things that day (if a review needs more than a single
iteration, we skip it. If a review needs small revisions, we do them on the
spot).

Day 3: Small group / pair programming.

Topics
--
Some topics/sprints that come to mind that we could explore/do are:
 - Install Guide Documentation Improvement [1]
 - Development Documentation Improvement (best practises, testing, how to
develop a new role, etc)
 - Upgrade Framework [2]
 - Multi-OS Support [3]

[1] https://etherpad.openstack.org/p/oa-install-docs
[2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
[3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

-- 
Jesse Pretorius
IRC: odyssey4me
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Cinder API with multiple regions not working.

2015-12-09 Thread Salman Toor
Hi,

I am using the Kilo release on CentOS. We have recently enabled multiple regions, 
and it seems that Cinder has some problems with multiple endpoints.

Things are working fine with nova, but cinder is behaving strangely. Here are my 
endpoints:



[root@controller: ~] # openstack service list
+--++--+
| ID   | Name   | Type |
+--++--+
| 0a33e6f259794ff2a99e626be37c0c2b | cinderv2-hpc2n | volumev2 |
| 1fcae9bd76304853a3168c39c7fe8e6b | nova   | compute  |
| 2c7828120c294d3f82e3a17835babb85 | neutron| network  |
| 3804fcd8f9494d30b589b55fe6abb811 | nova-hpc2n | compute  |
| 478eff4e96464ae8a958ba29f750b14c | glance | image|
| 4a5a771d915e43c28e66538b8bc6e625 | cinder | volume   |
| 72d1be82b2e5478dbf0f3fb9e7ba969d | cinderv2   | volumev2 |
| 97f977f8a7a04bae89da167fd25dc06c | glance-hpc2n   | image|
| a985795b49e2440db82970b81248c86e | cinder-hpc2n   | volume   |
| dccd39b92ab547ddaf9047b38620145a | swift  | object-store |
| ebb1660d1d9746759a48de921521bfad | keystone   | identity |
+--++--+

[root@controller: ~] # openstack endpoint show a985795b49e2440db82970b81248c86e
+--+--+
| Field| Value|
+--+--+
| adminurl | http://:8776/v1/%(tenant_id)s |
| enabled  | True |
| id   | d4003e91ddf24cfb9fa497da81b01a18 |
| internalurl  | http://:8776/v1/%(tenant_id)s |
| publicurl| http://:8776/v1/%(tenant_id)s |
| region   | HPC2N|
| service_id   | a985795b49e2440db82970b81248c86e |
| service_name | cinder-hpc2n |
| service_type | volume   |
+--+--+

[root@controller: ~] # openstack endpoint show 4a5a771d915e43c28e66538b8bc6e625
+--++
| Field| Value  |
+--++
| adminurl | http://:8776/v1/%(tenant_id)s|
| enabled  | True   |
| id   | 5f19c0b535674dbd9e318c7b6d61b3bc   |
| internalurl  | http://:8776/v1/%(tenant_id)s|
| publicurl| http://:8776/v1/%(tenant_id)s |
| region   | regionOne  |
| service_id   | 4a5a771d915e43c28e66538b8bc6e625   |
| service_name | cinder |
| service_type | volume |
+--++

And same for v2 endpoints



——— nova-api.log ———

achmentController object at 0x598a3d0>>, body: {"volumeAttachment": {"device": 
"", "volumeId": "93d96eab-e3fd-4131-9549-ed51e7299da2"}} _process_stack 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:780
2015-12-08 12:52:05.847 3376 INFO nova.api.openstack.compute.contrib.volumes 
[req-d6e1380d-c6bc-4911-b2c6-251bc8b4c62c a62c20fdf99c443a924f0d50a51514b1 
3c9d997982e04c6db0e02b82fa18fdd8 - - -] Attach volume 
93d96eab-e3fd-4131-9549-ed51e7299da2 to instance 
3a4c8722-52a7-48f2-beb7-db8938698a0d at
2015-12-08 12:52:05.847 3376 DEBUG nova.compute.api 
[req-d6e1380d-c6bc-4911-b2c6-251bc8b4c62c a62c20fdf99c443a924f0d50a51514b1 
3c9d997982e04c6db0e02b82fa18fdd8 - - -] [instance: 
3a4c8722-52a7-48f2-beb7-db8938698a0d] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:1911
2015-12-08 12:52:05.988 3376 ERROR nova.api.openstack 
[req-d6e1380d-c6bc-4911-b2c6-251bc8b4c62c a62c20fdf99c443a924f0d50a51514b1 
3c9d997982e04c6db0e02b82fa18fdd8 - - -] Caught error: internalURL endpoint for 
volume service named cinder not found
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack Traceback (most recent 
call last):
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-12-08 12:52:05.988 3376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
201

Re: [Openstack-operators] Cinder V1 endpoints

2015-12-09 Thread Salman Toor
Hi,

Now I can also confirm that using v1 and v2 in the cinder endpoints works fine 
for us in the Kilo release (we are using CentOS 7).

Regards..
Salman.



PhD, Scientific Computing
Researcher, IT Department,
Uppsala University.
Senior Cloud Architect,
SNIC.
Cloud Application Expert,
UPPMAX.
Visiting Researcher,
Helsinki Institute of Physics (HIP).
salman.t...@it.uu.se
http://www.it.uu.se/katalog/salto690

On 07 Dec 2015, at 18:18, Jesse Keating <j...@bluebox.net> wrote:

Reading that linked bug, it does indeed seem like the use of v2 in the path for 
both the volume and volumev2 endpoints was a mistake. This is further supported by 
http://lists.openstack.org/pipermail/openstack-operators/2015-December/009092.html

It may have been intended for the documentation at the time, but it seems later to 
have been a mistake.


- jlk

On Fri, Dec 4, 2015 at 2:40 PM, Anne Gentle 
<annegen...@justwriteclick.com> wrote:


On Thu, Dec 3, 2015 at 9:33 AM, Salman Toor 
<salman.t...@it.uu.se> wrote:
Hi,

In the following link from the Kilo documentation, the cinder v1 endpoints have v2 in them.

http://docs.openstack.org/kilo/install-guide/install/yum/content/cinder-install-controller-node.html


openstack endpoint create \
  --publicurl http://controller:8776/v2/%\(tenant_id\)s \
  --internalurl http://controller:8776/v2/%\(tenant_id\)s \
  --adminurl http://controller:8776/v2/%\(tenant_id\)s \
  --region RegionOne \
  volume

Shouldn’t it be like the following (the changed parts were coloured in red)?


openstack endpoint create \
  --publicurl http://controller:8776/v1/%\(tenant_id\)s \
  --internalurl http://controller:8776/v1/%\(tenant_id\)s \
  --adminurl http://controller:8776/v1/%\(tenant_id\)s \
  --region RegionOne \
  volume

Is that a documentation error? Can someone please confirm?



For the install documentation, especially a guide that has been in use and tested for 
as many months as the Kilo one, you can be assured it is not a documentation error.

It is a specific choice by our install doc team to ensure that all requests are 
redirected to a v2 endpoint. Otherwise your Dashboard will not display the 
Block Storage information.

If you're ever in doubt, feel free to search through fixed documentation bugs 
or read the commentary, such as:
https://bugs.launchpad.net/openstack-manuals/+bug/1508355

Thanks,
Anne

Thanks in advance.

Regards..
Salman.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Liberty cinder ceph architecture

2015-12-09 Thread Arne Wiebalck
We had the Cinder services running on our controllers initially, but split them off 
to a separate set of (virtual) machines in order to allow for independent upgrades. 
Performance-wise that should not make a big difference, unless you expect an 
enormous number of requests. Nothing needs to be installed on the Ceph side.
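
For what it's worth, a minimal cinder.conf sketch for an RBD backend (the 
pool, user, and secret UUID below are the usual conventions, not 
requirements; cinder-volume only needs the Ceph client libraries and a 
keyring, not a full Ceph node):

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret that holds the cinder key on compute nodes
rbd_secret_uuid = <libvirt-secret-uuid>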

HTH,
 Arne


On 08 Dec 2015, at 09:26, Ignazio Cassano <ignaziocass...@gmail.com> wrote:


Hi all, I am going to install openstack liberty and I have already installed two 
ceph nodes. Now I need to know where the cinder components must be installed.
In an nfs scenario I installed some cinder components on the controller node and 
some on the nfs server, but with ceph I would like to avoid installing cinder 
components directly on the ceph nodes.
Any suggestions? My controller environment is made up of a cluster of 
physical nodes.
Computing is made up of two kvm nodes.
Must I install cinder-api, cinder-scheduler, cinder-volume and cinder-backup on 
the controller nodes, or for best performance is it more convenient to split them on 
different nodes?
Another question is related to object storage: is ceph radosgw supported as a 
replacement for swift?
Regards

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Arne Wiebalck
CERN IT

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Liberty cinder ceph architecture

2015-12-09 Thread Ignazio Cassano
Many thanks for your hints.
Ignazio

2015-12-09 9:50 GMT+01:00 Arne Wiebalck :

> We had the Cinder services running on our controllers initially, but split
> them off
> to a separate set of (virtual) machines in order to allow for independent
> upgrades.
> Performance-wise that should not make a big difference, unless you expect
> an enormous number of requests. Nothing needs to be installed on the Ceph
> side.
>
> HTH,
>  Arne
>
>
> On 08 Dec 2015, at 09:26, Ignazio Cassano 
> wrote:
>
> Hi all, I am going to install openstack liberty and I have already installed
> two ceph nodes. Now I need to know where the cinder components must be
> installed.
> In an nfs scenario I installed some cinder components on the controller node
> and some on the nfs server, but with ceph I would like to avoid installing
> cinder components directly on the ceph nodes.
> Any suggestions? My controller environment is made up of a cluster of
> physical nodes.
> Computing is made up of two kvm nodes.
> Must I install cinder-api, cinder-scheduler, cinder-volume and cinder
> backup on the controller nodes, or for best performance is it more convenient
> to split them on different nodes?
> Another question is related to object storage: is ceph radosgw supported
> as a replacement for swift?
> Regards
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> --
> Arne Wiebalck
> CERN IT
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [keystone] RBAC usage at production

2015-12-09 Thread Oguz Yarimtepe

Hi,

I am wondering whether there are people using RBAC in production. The 
policy.json file has a structure that requires a restart of the service 
each time you edit the file. Is there an on-the-fly solution or any tips 
about it?




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-09 Thread Thomas Goirand
On 12/08/2015 04:09 AM, Dolph Mathews wrote:
> In Debian, many services/daemons are run, then their API is used by the
> package. In the case of Keystone, for example, it is possible to ask,
> via Debconf, that Keystone registers itself in the service catalog. If
> we get Keystone within Apache, it becomes at least harder to do so.
> 
> 
> You start the next paragraph with "the other issue," but I'm not clear
> on what the issue is here? This sounds like a bunch of complexity that
> is working as you expect it to.

Ok, let me try again. If the only way to get Keystone up and running is
through a web server like Apache, then starting Keystone and then using
its API is not as easy as if it were a daemon on its own. For example,
there may be some other types of configuration in use in the web server
which the package doesn't control.
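
For concreteness, the kind of vhost involved is roughly what the 
Liberty-era install guides ship for keystone under mod_wsgi (paths and 
process counts vary by distro):

Listen 5000
<VirtualHost *:5000>
    # keystone-wsgi-public is the WSGI script shipped with keystone
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    ErrorLog /var/log/apache2/keystone.log
</VirtualHost>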

> The other issue is that if all services are sharing the same web server,
> restarting the web server restarts all services. Or, said otherwise: if
> I need to change a configuration value of any of the services served by
> Apache, I will need to restart them all, which is very annoying: I very
> much prefer to just restart *ONE* service if I need.
> 
> As a deployer, I'd solve this by running one API service per server. As
> a packager, I don't expect you to solve this outside of AIO
> architectures, in which case, uptime is obviously not a concern.

Let's say there's a security issue in Keystone. One would expect that a
simple "apt-get dist-upgrade" will do it all. If Keystone is installed in a
web server, should the package aggressively attempt to restart it? If
not, what is the proposed solution to have Keystone restarted in this case?

> Also, something which we learned the hard way at Mirantis: it is *very*
> annoying that Apache restarts every Sunday morning by default in
> distributions like Ubuntu and Debian (I'm not sure for the other
> distros). No, the default config of logrotate and Apache can't be
> changed in distros just to satisfy OpenStack users: there's other users
> of Apache in these distros.
> 
> Yikes! Is the debian Apache package intended to be useful in production?
> That doesn't sound like an OpenStack-specific problem at all. How is
> logrotate involved? Link?

It is logrotate which restarts apache. From /etc/logrotate.d/apache2:

/var/log/apache2/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if /etc/init.d/apache2 status > /dev/null ; then \
/etc/init.d/apache2 reload > /dev/null; \
fi;
endscript
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi; \
endscript
}

This is to be considered quite harmful. Yes, I am fully aware that this
file can be tweaked. Though this is the default, and it is always best
to provide a default which works for our users. And in this case, no
OpenStack package maintainer controls it.

> Then, yes, uWSGI becomes a nice option. I used it for the Barbican
> package, and it worked well. Though the uwsgi package in Debian isn't
> very well maintained, and multiple times, Barbican could have been
> removed from Debian testing because of RC bugs against uWSGI.
> 
> uWSGI is a nice option, but no one should be tied to that either-- in
> the debian world or otherwise.

For Debian users, I intend to provide useful configuration so that
everything works by default. OpenStack is complex enough. It is my role,
as a package maintainer, to make it easier to use.

One of the options I have is to create new packages like this:

python-keystone -> Already exists, holds the Python code
keystone -> Currently contains the Eventlet daemon

I could transform the later into a meta package depending on any of the
below options:

keystone-apache -> Default configuration for Apache
keystone-uwsgi -> An already configured startup script using uwsgi

Though I'm not sure the FTP masters will love me if I create so many new
packages just to ship automated configurations... Also, maybe it's
just best to provide *one* implementation which we all consider the
best. I'm just not sure yet which one that is. Right now, I'm leaning
toward uwsgi.
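
As a sketch of what such a keystone-uwsgi package could ship (assuming the
keystone-wsgi-public script that Liberty-era keystone installs; the socket
and worker counts are illustrative):

[uwsgi]
# Serve the public keystone WSGI application
wsgi-file = /usr/bin/keystone-wsgi-public
http-socket = 127.0.0.1:5000
master = true
processes = 4
threads = 2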

> So, all together, I'm a bit reluctant to see the Eventlet based servers
> going away. If it's done, then yes, I'll work around it. Though I'd
> prefer if it didn't.
> 
> Think of it this way: keystone is moving towards using Python
> where Python excels, and is punting up the stack where Python is
> handicapped. Don't think of it as a work around, think of it as having
> the freedom to architect your own deployment.

I'm ok with that, but as per the above, I'd like to provide something
which just works for De