Re: [Openstack-operators] [Keystone][PublicCloud] Introducing Adjutant, an OpenStack service for signups, user invites, password reset and more!

2017-06-01 Thread Andy Botting
We caught up with some of the Catalyst guys at the Melbourne OpenStack
Australia day and they gave us a demo of this. Looks like a really nice
project that I think might replace some of our existing user management
workflows.

Allowing project managers to invite collaborators, self-manage their
permissions, and create nested projects (without requiring admin
intervention) would work well for us on the Nectar cloud.

Thanks for the contribution Adrian. Hoping we can find some time soon to
test this out and contribute.

cheers,
Andy

On 29 May 2017 at 17:01, Adrian Turjak  wrote:

> Hello OpenStack Community,
>
> I'd like to introduce to you all a service we have developed at Catalyst
> and are now ready to release to the OpenStack community in hopes that
> others may find it useful. As a public cloud provider we quickly ran into a
> bunch of little issues around user management, sign-ups, and other pieces
> of business logic that needed to fit into how we administer the cloud but
> didn't entirely make sense as additions to existing services. There were
> also a lot of actions we wanted to delegate to our customers but couldn't
> without giving them too much power in Keystone, or that needed to send
> emails or extend to external non-OpenStack systems.
>
> Enter Adjutant. Adjutant (previously called StackTask) was built as a
> service that lets us create business workflows and expose them over an
> API. It gives us a way to build reusable snippets of code that we can tie
> together, and a flexible, pluggable API layer to expose them on. We needed
> these workflows to be able to talk to our external systems as well as our
> OpenStack services, to run through well-defined steps, and in some cases
> to require approval before an action completes. In many ways Adjutant also
> works as a layer around Keystone that lets us wrap business logic around
> certain things we'd like our customers to be able to do in carefully
> limited ways.
>
> The service itself is an API service built on Django with Django REST
> Framework; the GUI component is built as a UI plugin for Horizon, which
> allows easy integration into an OpenStack dashboard.
>
> Adjutant, as the name implies, is a helper, not a major service, but one
> that smooths over some situations and gives us an easy place to offload
> admin tasks that a customer or non-admin should be able to trigger in a
> more limited way. Not only that, but it stores the history of all these
> tasks, who asked for them, and when they were completed. Anything a user
> does through Adjutant is stored and can be audited, and in future project
> admins will be able to audit their own tasks and see which of their users
> did what.
>
> Out of the box it provides the following functionality (a rough sketch of
> driving this over the API follows the list):
>
>- User invitation by users with the 'project_admin' or 'project_mod'
>   role.
>   - This sends the person you've invited an email with a submission
>   token, lets them set up their password, and then grants them roles on
>   your project. If their user already exists, only a confirmation is
>   required before the roles are granted.
>- As a 'project_admin' or 'project_mod' you can list the users with
>   roles on your project and edit or revoke those roles.
>- Let non-admin users request a password reset.
>   - The user is emailed a token which lets them reset their password.
>- Basic signup.
>   - Let a user request a new project. This requires admin approval and
>   creates a new project and user, granting default roles on the new
>   project. An existing user is reused if present; otherwise an email is
>   sent to the user so they can set up their password.
>- Let a user update their email address.
>   - The old address is notified, and a confirmation token is sent to the
>   new one.
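>
> To give a flavour of the API, here is a minimal sketch of the invitation
> flow using curl. The endpoint paths, headers, and payload fields below are
> illustrative assumptions rather than Adjutant's confirmed API, so check
> the docs for the real routes:
>
>    # invite a user; authenticated with a project-scoped Keystone token
>    # held by a 'project_admin' or 'project_mod' (hypothetical route)
>    curl -X POST https://adjutant.example.com/v1/openstack/users \
>        -H "X-Auth-Token: $OS_TOKEN" \
>        -H "Content-Type: application/json" \
>        -d '{"email": "colleague@example.com", "roles": ["project_mod"]}'
>
>    # the invitee then submits the emailed token to finish the process
>    # (token value and route are again illustrative)
>    curl -X POST https://adjutant.example.com/v1/tokens/EMAILED_TOKEN \
>        -H "Content-Type: application/json" \
>        -d '{"password": "their-new-password"}'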
>
> Features coming in the future (most either almost done, or in prototype
> stages):
>
>- Forced password reset
>   - Users with 'project_admin' or 'project_mod' can force a password
>   reset for a given user in their projects.
>   - Cloud admins can force password resets for any user on their cloud.
>   - This changes the user's password to a randomly generated value and
>   emails the user a password reset token.
>   - The user must reset their password before they can log in again.
>- Quota management for your project
>   - As a 'project_admin' or 'project_mod' you can request a change of
>   quota to one of a set of predefined sizes (as set in the Adjutant conf;
>   a sketch of such a config follows below). Sizes let you increase
>   multiple related quotas at the same time. You can move to adjacent
>   sizes without approval a limited number of times within a configurable
>   window (in days); other changes need an admin to approve them.
>- Hierarchical Multi-Tenancy in a single domain environment
>   - 'project_admin' to be able to create sub-projects off the current
>   
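>
> To illustrate the quota sizes mentioned above, here is a hypothetical
> sketch of what such predefined sizes could look like in the Adjutant
> conf. The key names and structure are illustrative assumptions, not
> Adjutant's actual schema:
>
>    # hypothetical quota "sizes"; adjacent moves (small <-> medium) can
>    # be auto-approved a limited number of times per window
>    quota_sizes:
>      small:
>        nova: {instances: 10, cores: 20, ram: 65536}
>        cinder: {volumes: 10, gigabytes: 1000}
>      medium:
>        nova: {instances: 20, cores: 40, ram: 131072}
>        cinder: {volumes: 20, gigabytes: 2000}
>    quota_change_window: 30  # days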

Re: [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Blair Bethwaite
Hi Alex,

Likewise for option 3. If I recall correctly from the summit session,
that was also the main preference in the room?

On 2 June 2017 at 11:15, George Mihaiescu  wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to the
> OpenStack wiki. I’m not taking silence as compliance. I would really like to
> hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the Administration
> Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then this
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>
>
>
> Personally, I think that option 3 is the most realistic. The idea behind the
> last option is that operators maintain operator-specific documentation and
> update it as they go along, and we're not losing anything by combining or
> deleting. I don't want to lose what we have by going with option 1, and I
> think option 2 is just a workaround without fixing the problem – we are not
> getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc]
> [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot point of
> my email:
>
>
>
> “One of the key takeaways from the summit was the session that I jointly
> moderated with Melvin Hillsman regarding the Operations and Administration
> Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really
> helpful – we were able to discuss with the operators present the current
> situation of the documentation team, and how they could help us maintain the
> two guides, aimed at the same audience. The operators present at the
> session agreed that the Administration Guide was important, and could be
> maintained upstream. However, they voted and agreed that the best course of
> action for the Operations Guide was for it to be pulled down and put into a
> wiki that the operators could manage themselves. We will be looking at
> actioning this item as soon as possible.”
>
>
>
> I would like to go ahead with this, but I would appreciate feedback from
> operators who were not able to attend the summit. In the etherpad you will
> see the three options that the operators in the room recommended as being
> viable, and the voted option being moving the Operations Guide out of
> docs.openstack.org into a wiki. The aim of this was to empower the
> operations community to take more control of the updates in an environment
> they are more familiar with (and available to others).
>
>
>
> What does everyone think of the proposed options? Questions? Other thoughts?
>
>
>
> Alex
>
>
>
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev]
> What's up doc? Summit recap edition
>
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap
> edition
> To: "openstack-d...@lists.openstack.org"
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> Hi everyone,
>
>
> The OpenStack manuals project had a really productive week at the OpenStack
> summit in Boston. You can find a list of all the etherpads and attendees
> here: https://etherpad.openstack.org/p/docs-summit
>
>
>
> As we all know, we are rapidly losing key contributors and core reviewers.
> We are not alone; this is happening across the board. It is making things
> harder, but not impossible. Since our inception in 2010, we’ve been climbing
> higher and higher trying to achieve the best documentation we could, and
> uphold our high standards. This is something to be incredibly proud of.
> However, we now need to take a step back and realise that the amount of work
> we are attempting to maintain is now out of reach for the team size that we
> have. At the moment we have 13 cores, none of whom are full-time
> contributors or reviewers. This includes myself.
>
>
>
> That being said! I have spent the last week at the summit talking to some of
> our leaders, including Doug 

Re: [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Doug Thompson
Hi Alexandra,

I have always been in favour of having operations guides separate from an
administration guide, although there needs to be a lot of cross-referencing.

IMHO, operations guides should have commands and instructions on how to
diagnose and remedy issues of a running installation. Administration guides
should have lots of detail on configuration and how to implement features.
On Thu, 2017-06-01 at 15:06 +, Alexandra Settle wrote:
> Hi everyone,
>  
> I haven’t had any feedback regarding moving the Operations Guide to the 
> OpenStack wiki. I’m not taking silence as compliance. I would really like to 
> hear people’s opinions
>  on this matter.
>  
> To recap:
>  
> 
> Option one: Kill the Operations Guide completely and move the
> Administration Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then this
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>  
> Personally, I think that option 3 is the most realistic. The idea behind the
> last option is that operators maintain operator-specific documentation and
> update it as they go along, and we're not losing anything by combining or
> deleting. I don't want to lose what we have by going with option 1, and I
> think option 2 is just a workaround without fixing the problem – we are not
> getting contributions to the project.
>  
> Thoughts?
>  
> Alex
>  
> 
> From:
> Alexandra Settle 
> 
> Date: Friday, May 19, 2017 at 1:38 PM
> 
> To: Melvin Hillsman , OpenStack Operators 
> 
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
> What's up doc? Summit recap edition
> 
> 
>  
> 
> Hi everyone,
>  
> Adding to this, I would like to draw your attention to the last dot point of 
> my email:
>  
> “One of the key takeaways from the summit was the session that I jointly
> moderated with Melvin Hillsman regarding the Operations and Administration
> Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really
> helpful – we were able to discuss with the operators present the current
> situation of the documentation team, and how they could help us maintain the
> two guides, aimed at the same audience. The operators present at the
> session agreed that the Administration Guide was important, and could be
> maintained upstream. However, they voted and agreed that the best course of
> action for the Operations Guide was for it to be pulled down and put into a
> wiki that the operators could manage themselves. We will be looking at
> actioning this item as soon as possible.”
>  
> I would like to go ahead with this, but I would appreciate feedback from 
> operators who were not able to attend the summit. In the etherpad you will
>  see the three options that the operators in the room recommended as being 
> viable, and the voted option being moving the Operations Guide out of 
> docs.openstack.org into a wiki. The aim of this was to empower the operations 
> community to take more control of
>  the updates in an environment they are more familiar with (and available to 
> others).
>  
> What does everyone think of the proposed options? Questions? Other thoughts?
>  
> Alex
>  
> 
> From:
> Melvin Hillsman 
> 
> Date: Friday, May 19, 2017 at 1:30 PM
> 
> To: OpenStack Operators 
> 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
> What's up doc? Summit recap edition
>
> -- Forwarded message --
> 
> From: Alexandra Settle 
> 
> Date: Fri, May 19, 2017 at 6:12 AM
> 
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap 
> edition
> 
> To: "openstack-d...@lists.openstack.org" 
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> 
>
> Hi everyone,
> 
> 
> 
> The OpenStack manuals project had a really productive week at the OpenStack 
> summit in Boston. You can find a list of all the etherpads and attendees here:
> https://etherpad.openstack.org/p/docs-summit
>
> As we all know, we are rapidly losing key contributors and core reviewers.
> We are not alone; this is happening across the board. It is making things
> harder, but not impossible. Since our inception in 2010, we’ve been climbing
> higher and higher trying to achieve the best documentation we could, and
> uphold our high standards. This is something to be incredibly proud of.
> However, we now need to take a step back and realise that the amount of work
> we are attempting to maintain 

Re: [Openstack-operators] problem with nova placement after update of cloud from Mitaka to Ocata

2017-06-01 Thread federica fanzago

Hi Jay,
thanks for the answer.
Yes, we did all these steps, but we had created the admin and public
endpoints for placement in https. After changing them to the internal
one in http (an IP on the management network), the command 'nova-status
upgrade check' works correctly.
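
In case it helps others, this is roughly what the fix looked like with
the openstack CLI (the IP, port, and region below are placeholders for
our setup):

   # list the current placement endpoints
   openstack endpoint list --service placement

   # delete the https admin/public entries by ID, then recreate them in
   # plain http on the management network
   openstack endpoint delete <endpoint-id>
   openstack endpoint create --region RegionOne placement public http://10.0.0.11:8778
   openstack endpoint create --region RegionOne placement internal http://10.0.0.11:8778
   openstack endpoint create --region RegionOne placement admin http://10.0.0.11:8778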


Cheers,
   Federica



On 05/31/2017 09:50 PM, Jay Pipes wrote:

On 05/31/2017 05:52 AM, federica fanzago wrote:

Hello operators,
we have a problem with placement after updating our cloud from the
Mitaka to the Ocata release.


We started from a Mitaka cloud and followed these steps: updated the
cloud controller from Mitaka to Newton, ran the dbsync, updated from
Newton to Ocata (adding the nova_cell0 database at this step), and ran
the dbsync again. Then we updated the compute nodes directly from
Mitaka to Ocata.


With the update to Ocata we added the placement section to nova.conf,
configured the related endpoint, and installed the package
openstack-nova-placement-api (placement wasn't enabled in Newton).
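
For reference, the section we added looks roughly like this, following
the Ocata install guide (hostname, region, and password are placeholders
for our deployment):

   [placement]
   os_region_name = RegionOne
   auth_type = password
   auth_url = http://controller:35357/v3
   project_domain_name = Default
   project_name = service
   user_domain_name = Default
   username = placement
   password = PLACEMENT_PASS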


When verifying the operation, the command 'nova-status upgrade check'
fails with this error:


Error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 456, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 386, in check
    result = func(self)
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 201, in _check_placement
    versions = self._placement_get("/")
  File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 189, in _placement_get
    return client.get(path, endpoint_filter=ks_filter).json()
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 655, in request
    raise exceptions.from_response(resp, method, url)
ServiceUnavailable: Service Unavailable (HTTP 503)

Do you have suggestions about how to debug the problem?


Did you ensure that you created a service entry, endpoint entry, and 
service user for Placement in Keystone?


See here:

https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/ 
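
In case it helps, the service entry and user are usually created roughly
like this (names and URLs are the install-guide defaults; adjust to your
deployment), with an endpoint create per interface:

   openstack user create --domain default --password-prompt placement
   openstack role add --project service --user placement admin
   openstack service create --name placement \
       --description "Placement API" placement
   openstack endpoint create --region RegionOne placement public http://controller:8778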



Best,
-jay




--
Federica Fanzago
INFN Sezione di Padova
Via Marzolo, 8
35131 Padova - Italy

Tel: +39 049.967.7367
--




Re: [Openstack-operators] [cinder] Thoughts on cinder readiness

2017-06-01 Thread Arne Wiebalck
Joshua,

We introduced Cinder on Ceph in production more than three years ago (when we
were still on Havana, IIRC). Today, we have around 4,000 volumes with a total
size of 1.25PB (half of which is actually filled).

To back up what Mike and Erik already said, Cinder has given us very few
problems during that time. Three things worth mentioning:

- we were running with multiple volume servers using the same ‘host’
identifier for some time; this may have led to some stuck volumes and DB
inconsistencies we encountered; the config has meanwhile been changed to
have only one active c-vol (and we’re of course closely following the
ongoing A/A HA work);

- until recently, we had not explicitly specified the pymysql driver in the
DB connection string; for quite some time this led to RPC timeouts and
volumes stuck in deletion when launching parallel deletions of 20+ volumes
in one go; the config we had been carrying forward since the initial setup
has now been corrected and we can no longer reproduce the problem (a config
sketch follows below);

- depending on your actual setup, Cinder upgrades will require a service
downtime; you may want to check the docs for the recent work on rolling
upgrades to see how you’ll need to set things up in order to minimise the
intervention time (if that is important for your use case).
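
To make the first two points concrete, here is a minimal sketch of the
relevant cinder.conf bits (the host name and credentials are placeholders):

   [DEFAULT]
   # give each c-vol a distinct identifier unless you deliberately want
   # multiple volume servers to act as one logical host
   host = cinder-volume-01

   [database]
   # name pymysql explicitly; the default C-based MySQLdb driver blocks
   # eventlet greenthreads under load, which showed up for us as RPC
   # timeouts and volumes stuck in deletion
   connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder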

Cheers,
 Arne


> On 01 Jun 2017, at 06:06, Joshua Harlow  wrote:
> 
> Erik McCormick wrote:
>> I've been running Ceph-backed Cinder since, I think, Icehouse. It's
>> really more of a function of your backend or the hypervisor than Cinder
>> itself. That being said, it's been probably my smallest OpenStack pain
>> point over the years.
>> 
>> I can't imagine what sort of concurrency issues you'd run into short of
>> a large public cloud given that it really doesn't do much once
>> provisioning a volume is complete. Maybe if you've got people taking a
>> ton of snapshots? What sort of specific issues are you concerned about?
>> 
> 
> Mainly the ones that spawned articles/specs like:
> 
> https://gorka.eguileor.com/a-cinder-road-to-activeactive-ha/
> 
> https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cinder-volume-active-active-support.html
> 
> And a few more like those. I'm especially not going to be a big fan of having
> to (as a person, myself or others on the godaddy team) go in and muck with
> volumes in stuck states and so on (similar issues occur in nova, and they just
> drain the blood out of the humans that have to go fix them).
> 
>> -Erik
>> 
>> On May 31, 2017 8:30 PM, "Mike Lowe" wrote:
>>We have run ceph backed cinder from Liberty through Newton, with the
>>exception of a libvirt 2.x bug that should now be fixed, cinder
>>really hasn't caused us any problems.
>> 
>>Sent from my iPad
>> 
>> > On May 31, 2017, at 6:12 PM, Joshua Harlow wrote:
>> >
>> > Hi folks,
>> >
>> > So I was having some back and forth internally about whether cinder is
>> > ready for usage and wanted to get other operators' thoughts on how
>> > their cinder experiences have been going, any trials and tribulations.
>> >
>> > For context, we are running on liberty (yes I know, working on
>> > getting that to newer versions) and folks in godaddy are starting to
>> > use more and more cinder (backed by ceph) and that got me thinking
>> > about asking operators (and devs) what kind of readiness 'rating' (or
>> > whatever you would want to call it) people would give cinder in
>> > liberty.
>> >
>> > Some things I was thinking about were around concurrency rates,
>> > because I know that's been a common issue that the cinder developers
>> > have been working through (using tooz, and various other lock
>> > mechanisms and such).
>> >
>> > Have other cinder operators seen concurrent operations (or
>> > conflicting operations or ...) work better in newer releases? Is
>> > there any metric/s anyone has gathered about how things have gotten
>> > worse/better under scale for cinder in various releases, particularly
>> > with regard to using ceph?
>> >
>> > Thoughts?
>> >
>> > It'd be interesting to capture this (not just for my own usage), I
>> > think, because such info helps the overall user, operator, and dev
>> > community (and yes, I would expect various etherpads to have parts of
>> > this information, but it'd be nice to have a single place where
>> > other operators can specify how ready they believe a project is for
>> > a given release and a given configuration, and ideally provide
>> > details/comments as to why they believe this).
>> >
>> > -Josh
>> >