Re: [Openstack-operators] [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit

2018-11-10 Thread Ghanshyam Mann
Hello Everyone,

I have created the below etherpads to use during QA Forum sessions:

- Users / Operators adoption of QA tools:  
https://etherpad.openstack.org/p/BER-qa-ops-user-feedback 
- QA Onboarding: https://etherpad.openstack.org/p/BER-qa-onboarding-vancouver

-gmann

  On Fri, 09 Nov 2018 11:02:54 +0900 Ghanshyam Mann 
 wrote  
 > Hello everyone, 
 >  
 > Along with the project update & onboarding sessions, the QA team will host QA 
 > feedback sessions at the Berlin summit. Feel free to catch us next week with 
 > any QA-related questions or if you need help contributing to QA (we are really 
 > looking forward to onboarding new contributors in QA). 
 >  
 > Below are the QA-related sessions; feel free to append to the list if I missed 
 > anything. I am working on the onboarding/forum session etherpads and will send 
 > the links tomorrow. 
 >  
 > Tuesday: 
 >   1. OpenStack QA - Project Update.   [1] 
 >   2. OpenStack QA - Project Onboarding.   [2] 
 >   3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3] 
 >  
 > Wednesday: 
 >   4. Forum: Users / Operators adoption of QA tools / plugins.  [4] 
 >  
 > Thursday: 
 >   5. Using Rally/Tempest for change validation (OPS session) [5] 
 >  
 > [1] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update
 >   
 > [2] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding
 >  
 > [3] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment
 >  
 > [4] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins
 >   
 > [5] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session
 >   
 >  
 > -gmann 
 > 



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at berlin summit

2018-11-08 Thread Ghanshyam Mann
Hello everyone,

Along with the project update & onboarding sessions, the QA team will host QA 
feedback sessions at the Berlin summit. Feel free to catch us next week with any 
QA-related questions or if you need help contributing to QA (we are really 
looking forward to onboarding new contributors in QA). 

Below are the QA-related sessions; feel free to append to the list if I missed 
anything. I am working on the onboarding/forum session etherpads and will send 
the links tomorrow. 

Tuesday:
  1. OpenStack QA - Project Update.   [1]
  2. OpenStack QA - Project Onboarding.   [2]
  3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3]

Wednesday:
  4. Forum: Users / Operators adoption of QA tools / plugins.  [4]

Thursday:
  5. Using Rally/Tempest for change validation (OPS session) [5]

[1] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update
 
[2] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding
[3] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment
[4] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins
 
[5] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session
 

-gmann




Re: [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update

2018-10-16 Thread Ghanshyam Mann
  On Sat, 13 Oct 2018 07:05:53 +0900 Matt Riedemann  
wrote  
 > The big update this week is version 0.1.0 of oslo.upgradecheck was 
 > released. The documentation along with usage examples can be found here 
 > [1]. A big thanks to Ben Nemec for getting that done since a few 
 > projects were waiting for it.
 > 
 > In other updates, some changes were proposed in other projects [2].
 > 
 > And finally, Lance Bragstad and I had a discussion this week [3] about 
 > the validity of upgrade checks looking for deleted configuration 
 > options. The main scenario I'm thinking about here is FFU where someone 
 > is going from Mitaka to Pike. Let's say a config option was deprecated 
 > in Newton and then removed in Ocata. As the operator is rolling through 
 > from Mitaka to Pike, they might have missed the deprecation signal in 
 > Newton and removal in Ocata. Does that mean we should have upgrade 
 > checks that look at the configuration for deleted options, or options 
 > where the deprecated alias is removed? My thought is that if things will 
 > not work once they get to the target release and restart the service 
 > code, which would definitely impact the upgrade, then checking for those 
 > scenarios is probably OK. If on the other hand the removed options were 
 > just tied to functionality that was removed and are otherwise not 
 > causing any harm then I don't think we need a check for that. It was 
 > noted that oslo.config has a new validation tool [4] so that would take 
 > care of some of this same work if run during upgrades. So I think 
 > whether or not an upgrade check should be looking for config option 
 > removal ultimately depends on the severity of what happens if the manual 
 > intervention to handle that removed option is not performed. That's 
 > pretty broad, but these upgrade checks aren't really set in stone for 
 > what is applied to them. I'd like to get input from others on this, 
 > especially operators and if they would find these types of checks useful.
 > 
 > [1] https://docs.openstack.org/oslo.upgradecheck/latest/
 > [2] https://storyboard.openstack.org/#!/story/2003657
 > [3] 
 > http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17
 > [4] 
 > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html
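A removed-option check of the sort described above could be sketched in plain 
Python like this (the option names are hypothetical; a real check would build 
its list from the project's release notes, and oslo.config's validator covers 
much of the same ground):

```python
import configparser

# Hypothetical (group, option) pairs that were removed from the service's
# configuration; a real check would derive this list from release history.
REMOVED_OPTIONS = [
    ("DEFAULT", "verbose"),
    ("api", "vendordata_driver"),
]

def check_removed_options(conf_text):
    """Return the removed options that are still set in a config file."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    found = []
    for group, option in REMOVED_OPTIONS:
        # has_option() raises NoSectionError for unknown non-DEFAULT
        # sections, so skip groups that are absent from the file.
        if group != "DEFAULT" and not parser.has_section(group):
            continue
        if parser.has_option(group, option):
            found.append("%s/%s" % (group, option))
    return found
```

An upgrade check would then emit a warning (rather than a hard failure) per 
entry, in line with the "severity of the manual intervention" reasoning above.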

Another point is about policy changes and how we should accommodate those in 
upgrade-checks.

Policy changes fall into the below categories:
1. A policy rule name has been changed.
Upgrade Impact: If that policy rule is overridden in policy.json then yes, we 
need to report this in the upgrade-check CLI. If it is not overridden, which 
means the operator relies on the policy defaults in code, it will not impact 
their upgrade.
2. A (deprecated) policy rule has been removed.
Upgrade Impact: YES, as it can impact API access after the upgrade. This needs 
to be covered in upgrade-checks.
3. The default value (including scope) of a policy rule has been changed.
Upgrade Impact: YES, this can change the access level of an API after the 
upgrade. This needs to be covered in upgrade-checks.
4. A new policy rule has been introduced.
Upgrade Impact: YES, for the same reason.

I think policy changes can be added to the upgrade checker by checking all of 
the above categories, because each of them can impact an upgrade.

For example, the cinder policy change [1]:

"Add granularity to the volume_extension:volume_type_encryption policy with the 
addition of distinct actions for create, get, update, and delete:

volume_extension:volume_type_encryption:create
volume_extension:volume_type_encryption:get
volume_extension:volume_type_encryption:update
volume_extension:volume_type_encryption:delete
To address backwards compatibility, the new rules added to the volume_type.py 
policy file, default to the existing rule, 
volume_extension:volume_type_encryption, if it is set to a non-default value. "

[1] https://docs.openstack.org/releasenotes/cinder/unreleased.html#upgrade-notes
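Category 1 above (renamed rules overridden in policy.json) lends itself to a 
very simple check. A minimal sketch, with a hypothetical rename map (real names 
would come from each project's policy history):

```python
import json

# Hypothetical old-name -> new-name map; None means the rule was removed.
RENAMED_OR_REMOVED = {
    "os_compute_api:os-flavor-access": "compute:flavor-access:add-tenant-access",
    "os_compute_api:os-cells": None,
}

def check_policy_overrides(policy_json_text):
    """Return warnings for overridden rules that were renamed or removed."""
    overrides = json.loads(policy_json_text)
    warnings = []
    for old, new in RENAMED_OR_REMOVED.items():
        if old not in overrides:
            continue  # operator relies on defaults in code: no upgrade impact
        if new:
            warnings.append("Rule '%s' was renamed to '%s'; update your "
                            "policy file." % (old, new))
        else:
            warnings.append("Rule '%s' was removed; this override has no "
                            "effect." % old)
    return warnings
```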

-gmann

 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 





Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-13 Thread Ghanshyam Mann
  On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad  
wrote  
 > Sending a follow up here quick.
 > The reviewers actively participating in [0] are nearing a conclusion. 
 > Ultimately, the convention is going to be:
 >   
 > :[:][:]:[:]
 > Details about what that actually means can be found in the review [0]. Each 
 > piece is denoted as being required or optional, along with examples. I think 
 > this gives us a pretty good starting place, and the syntax is flexible 
 > enough to support almost every policy naming convention we've stumbled 
 > across.
 > Now is the time if you have any final input or feedback. Thanks for sticking 
 > with the discussion.

Thanks Lance for working on this. The current version lgtm. I would also like 
to see some operator feedback on whether this standard policy name format is 
clear and easy to understand. 

-gmann

 > Lance
 > [0] https://review.openstack.org/#/c/606214/
 > 
 > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad  wrote:
 > 
 > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann  
 > wrote:
 >   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad 
 >  wrote  
 >   > 
 >   > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  
 > wrote:
 >   > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >   >   wrote:
 >   >  >
 >   >  > Ideally I would like to see it in the form of least specific to most 
 > specific. But more importantly in a way that there is no additional 
 > delimiters between the service type and the resource. Finally, I do not like 
 > the change of plurality depending on action type.
 >   >  >
 >   >  > I propose we consider
 >   >  >
 >   >  > ::[:]
 >   >  >
 >   >  > Example for keystone (note, action names below are strictly examples 
 > I am fine with whatever form those actions take):
 >   >  > identity:projects:create
 >   >  > identity:projects:delete
 >   >  > identity:projects:list
 >   >  > identity:projects:get
 >   >  >
 >   >  > It keeps things simple and consistent when you're looking through 
 > overrides / defaults.
 >   >  > --Morgan
 >   >  +1 -- I think the ordering if `resource` comes before
 >   >  `action|subaction` will be more clean.
 >   > 
 >   > ++
 >   > These are excellent points. I especially like being able to omit the 
 > convention about plurality. Furthermore, I'd like to add that I think we 
 > should make the resource singular (e.g., project instead or projects). For 
 > example:
 >   > compute:server:list
 >   > compute:server:update
 >   > compute:server:create
 >   > compute:server:delete
 >   > compute:server:action:reboot
 >   > compute:server:action:confirm_resize (or confirm-resize)
 >  
 >  Do we need the "action" word there? I think the action name itself should 
 > convey the operation. IMO the below notation without the "action" word looks 
 > clear enough. What do you say? 
 >  
 >  compute:server:reboot
 >  compute:server:confirm_resize
 > 
 > I agree. I simplified this in the current version up for review.  
 >  -gmann
 >  
 >   > 
 >   > Otherwise, someone might mistake compute:servers:get as "list". This is 
 > ultra-nit-picky, but something I thought of when seeing the usage of 
 > "get_all" in policy names in favor of "list."
 >   > In summary, the new convention based on the most recent feedback should 
 > be:
 >   > ::[:]
 >   > Rules:service-type is always defined in the service types authority
 >   > resources are always singular
 >   > Thanks to all for sticking through this tedious discussion. I appreciate 
 > it.  
 >   >  /R
 >   >  
 >   >  Harry
 >   >  >
 >   >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  
 > wrote:
 >   >  >>
 >   >  >> Bumping this thread again and proposing two conventions based on the 
 > discussion here. I propose we decide on one of the two following conventions:
 >   >  >>
 >   >  >> <service-type>:<action>:<resource>
 >   >  >>
 >   >  >> or
 >   >  >>
 >   >  >> <service-type>:<action>_<resource>
 >   >  >>
 >   >  >> Where <service-type> is the corresponding service type of the 
 > project [0], and <action> is either create, get, list, update, or delete. I 
 > think decoupling the method from the policy name should aid in consistency, 
 > regardless of the underlying implementation. The HTTP method specifics can 
 > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
 >   >  >>
 >   >  >> I think the plurality of the resource should default to what makes 
 > sense for the operation being carried out (e.g., list:foobars, 
 > create:f

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad  
wrote  
 > 
 > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >   wrote:
 >  >
 >  > Ideally I would like to see it in the form of least specific to most 
 > specific. But more importantly in a way that there is no additional 
 > delimiters between the service type and the resource. Finally, I do not like 
 > the change of plurality depending on action type.
 >  >
 >  > I propose we consider
 >  >
 >  > ::[:]
 >  >
 >  > Example for keystone (note, action names below are strictly examples I am 
 > fine with whatever form those actions take):
 >  > identity:projects:create
 >  > identity:projects:delete
 >  > identity:projects:list
 >  > identity:projects:get
 >  >
 >  > It keeps things simple and consistent when you're looking through 
 > overrides / defaults.
 >  > --Morgan
 >  +1 -- I think the ordering if `resource` comes before
 >  `action|subaction` will be more clean.
 > 
 > ++
 > These are excellent points. I especially like being able to omit the 
 > convention about plurality. Furthermore, I'd like to add that I think we 
 > should make the resource singular (e.g., project instead or projects). For 
 > example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)

Do we need the "action" word there? I think the action name itself should convey 
the operation. IMO the below notation without the "action" word looks clear 
enough. What do you say?

compute:server:reboot
compute:server:confirm_resize
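For what it's worth, a name shape like the above is easy to lint for. A toy 
checker (the exact separator and segment rules here are my assumptions, not the 
agreed convention):

```python
import re

# service type, resource, action, plus an optional sub-action segment;
# lowercase words joined by ':' with '-' or '_' allowed inside a segment.
POLICY_NAME = re.compile(r"^[a-z][a-z0-9_-]*(:[a-z][a-z0-9_-]*){2,3}$")

def is_consistent(name):
    """Check whether a policy name follows the sketched convention."""
    return POLICY_NAME.match(name) is not None
```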

-gmann

 > 
 > Otherwise, someone might mistake compute:servers:get as "list". This is 
 > ultra-nit-picky, but something I thought of when seeing the usage of 
 > "get_all" in policy names in favor of "list."
 > In summary, the new convention based on the most recent feedback should be:
 > ::[:]
 > Rules:service-type is always defined in the service types authority
 > resources are always singular
 > Thanks to all for sticking through this tedious discussion. I appreciate it. 
 >  
 >  /R
 >  
 >  Harry
 >  >
 >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  
 > wrote:
 >  >>
 >  >> Bumping this thread again and proposing two conventions based on the 
 > discussion here. I propose we decide on one of the two following conventions:
 >  >>
 >  >> <service-type>:<action>:<resource>
 >  >>
 >  >> or
 >  >>
 >  >> <service-type>:<action>_<resource>
 >  >>
 >  >> Where <service-type> is the corresponding service type of the project 
 > [0], and <action> is either create, get, list, update, or delete. I think 
 > decoupling the method from the policy name should aid in consistency, 
 > regardless of the underlying implementation. The HTTP method specifics can 
 > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
 >  >>
 >  >> I think the plurality of the resource should default to what makes sense 
 > for the operation being carried out (e.g., list:foobars, create:foobar).
 >  >>
 >  >> I don't mind the first one because it's clear about what the delimiter 
 > is and it doesn't look weird when projects have something like:
 >  >>
 >  >> :::
 >  >>
 >  >> If folks are ok with this, I can start working on some documentation 
 > that explains the motivation for this. Afterward, we can figure out how we 
 > want to track this work.
 >  >>
 >  >> What color do you want the shed to be?
 >  >>
 >  >> [0] https://service-types.openstack.org/service-types.json
 >  >> [1] 
 > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
 >  >>
 >  >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  
 > wrote:
 >  >>>
 >  >>>
 >  >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
 >  wrote:
 >  >>>>
 >  >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt 
 >  wrote 
 >  >>>>  > tl;dr: +1 consistent names
 >  >>>>  > I would make the names mirror the API... because the Operator 
 > setting them knows the API, not the code. Ignore the crazy names in Nova, I 
 > certainly hate them.
 >  >>>>
 >  >>>> Big +1 on consistent naming  which will help operator as well as 
 > developer to maintain those.
 >  >>>>
 >  >>>>  >
 >  >>>>  > Lance Bragstad  wrote:
 >  >>>>  > > I'm curious if anyone has context on

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 07:23:30 +0900 Lance Bragstad  
wrote  
 > Alright - I've worked up the majority of what we have in this thread and 
 > proposed a documentation patch for oslo.policy [0].
 > I think we're at the point where we can finish the rest of this discussion 
 > in gerrit if folks are ok with that.
 > [0] https://review.openstack.org/#/c/606214/

+1, thanks for that. Let's start the discussion there.

-gmann

 > On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:
 > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
 >  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 >  > 
 >  > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >  > >  wrote:
 >  > > >
 >  > > > Ideally I would like to see it in the form of least specific to most
 >  > > specific. But more importantly in a way that there is no additional
 >  > > delimiters between the service type and the resource. Finally, I do not
 >  > > like the change of plurality depending on action type.
 >  > > >
 >  > > > I propose we consider
 >  > > >
 >  > > > ::[:]
 >  > > >
 >  > > > Example for keystone (note, action names below are strictly examples I
 >  > > am fine with whatever form those actions take):
 >  > > > identity:projects:create
 >  > > > identity:projects:delete
 >  > > > identity:projects:list
 >  > > > identity:projects:get
 >  > > >
 >  > > > It keeps things simple and consistent when you're looking through
 >  > > overrides / defaults.
 >  > > > --Morgan
 >  > > +1 -- I think the ordering if `resource` comes before
 >  > > `action|subaction` will be more clean.
 >  > >
 >  > 
 >  
 >  Great idea. This is looking better and better.





Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-21 Thread Ghanshyam Mann
  On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt  
wrote  
 > tl;dr: +1 consistent names
 > I would make the names mirror the API... because the Operator setting them 
 > knows the API, not the code. Ignore the crazy names in Nova, I certainly hate 
 > them.

Big +1 on consistent naming, which will help operators as well as developers 
maintain these. 

 > 
 > Lance Bragstad  wrote:
 > > I'm curious if anyone has context on the "os-" part of the format?
 > 
 > My memory of the Nova policy mess...
 > * Nova's policy rules traditionally followed the patterns of the code
 > ** Yes, horrible, but it happened.
 > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 > * The API used to expand with extensions, so the policy name is often based 
 > on extensions
 > ** note most of the extension code has now gone, including lots of related 
 > policies
 > * Policy in code was focused on getting us to a place where we could rename 
 > policy
 > ** Whoop whoop by the way, it feels like we are really close to something 
 > sensible now!
 > Lance Bragstad  wrote:
 > Thoughts on using create, list, update, and delete as opposed to post, get, 
 > put, patch, and delete in the naming convention?
 > I could go either way as I think about "list servers" in the API. But my 
 > preference is for the URL stub and POST, GET, etc.
 >  On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:
 > If we consider dropping "os", should we entertain dropping "api", too? Do we 
 > have a good reason to keep "api"? I wouldn't be opposed to simple service 
 > types (e.g. "compute" or "loadbalancer").
 > +1
 > The API is known as "compute" in api-ref, so the policy should be for 
 > "compute", etc.

Agree on mapping the policy name to the api-ref as much as possible. Other than 
policy names having 'os-', we have 'os-' in resource names in the nova API URLs 
as well, like /os-agents, /os-aggregates, etc. (almost every resource except 
servers and flavors). As we cannot get rid of those in the API URLs, do we need 
to keep the same in the policy naming too? Or we can have a policy name like 
compute:agents:create/post, but that mismatches the api-ref, where the agents 
resource URL is os-agents.

Also, we have action APIs (I know of nova's, not sure about other services), 
like POST /servers/{server_id}/action {addSecurityGroup}, and their current 
policy names are all inconsistent. A few include their resource name in the 
policy name, like "os_compute_api:os-flavor-access:add_tenant_access", a few 
have 'action' in the policy name, like 
"os_compute_api:os-admin-actions:reset_state", and a few have the direct action 
name, like "os_compute_api:os-console-output".

Maybe we can make them consistent with 
<service_type>:<resource>:<action>, or is there any better opinion? 
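To make the three styles above concrete, here is a purely illustrative rename 
map; the new names are examples of a `<service_type>:<resource>:<action>` 
shape, not an agreed result:

```python
# Illustrative only: possible consistent names for the three inconsistent
# legacy styles quoted above. None of these renames is decided anywhere.
PROPOSED_RENAMES = {
    "os_compute_api:os-flavor-access:add_tenant_access":
        "compute:flavor-access:add-tenant-access",
    "os_compute_api:os-admin-actions:reset_state":
        "compute:server:reset-state",
    "os_compute_api:os-console-output":
        "compute:server:console-output",
}

def proposed_name(old_name):
    """Look up the consistent name for a legacy rule, if one is defined."""
    return PROPOSED_RENAMES.get(old_name, old_name)
```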

 > From: Lance Bragstad > The topic of having consistent 
 > policy names has popped up a few times this week.
 > 
 > I would love to have this nailed down before we go through all the policy 
 > rules again. In my head I hope in Nova we can go through each policy rule 
 > and do the following:
 > * move to new consistent policy name, deprecate existing name
 > * hardcode scope check to project, system or user
 > ** (user, yes... keypairs, yuck, but its how they work)
 > ** deprecate in rule scope checks, which are largely bogus in Nova anyway
 > * make read/write/admin distinction
 > ** therefore adding the "noop" role, amongst other things

+ policy granularity. 

It is a good idea to make the policy improvements all together and for all 
rules, as you mentioned. But my worry is how much load it will put on the 
operator side to migrate all policy rules at the same time, and what the 
deprecation period etc. will be, which I think we can discuss on the proposed 
spec - https://review.openstack.org/#/c/547850

-gmann

 > Thanks,
 > John 





Re: [Openstack-operators] [tc]Global Reachout Proposal

2018-09-17 Thread Ghanshyam Mann
  On Sat, 15 Sep 2018 02:49:40 +0900 Zhipeng Huang  
wrote  
 > Hi all,
 > Follow up the diversity discussion we had in the tc session this morning 
 > [0], I've proposed a resolution on facilitating technical community in large 
 > to engage in global reachout for OpenStack more efficiently. 
 > Your feedbacks are welcomed. Whether this should be a new resolution or not 
 > at the end of the day, this is a conversation worthy to have.
 > [0] https://review.openstack.org/602697

I like that we are discussing the Global Reachout topic, which I personally 
feel is very important. There are many obstacles to having a standard global 
communication channel. Honestly speaking, no single communication channel can 
accommodate every language, culture, and company/government restriction, so the 
best we can do is the least-bad solution.

I can understand that IRC cannot be used in China, which is very painful, and 
WeChat is mostly used there instead. But there are a few key points we need to 
consider for any social app:
- Technical discussions that need many participants and references to links 
etc. cannot be done in a mobile app; you need a desktop version of that app.
- Many social apps have restrictions on the number of participants, 
invitations, or logging.
- The app should not be blocked in other regions.
- It should not split the community members across more than one app or 
existing channel.

With all those points in mind, we need to think about which communication 
channels we really want to promote as a community.

IMO, we should educate and motivate people to participate via the existing 
channels, like IRC and the ML, as much as possible. At least the ML does not 
have any usage restrictions. Ambassadors and local user group people can play a 
critical role here, as can local developers (I saw Alex volunteer for nova 
discussions in China); they can ask people to start the communication on the 
ML, or if they cannot, then they can start the thread and act as a proxy for 
them.

I know Slack is being used by the Japanese community, and most of the 
communication there is in Japanese, so I cannot help there even if I join it. 
Talking to Akira (Japan Ambassador), most of the developers do communicate on 
IRC and the ML, but users hesitate to do so because of culture and language.

So if the proposal is for community members (developers, TC, UC, ambassadors, 
user group members, etc.) to participate in local chat apps and encourage 
people to move to the ML etc., then it is a great idea. But if we want to 
promote all the different chat apps as community practice, it can lead to a lot 
of other problems rather than solving the current one. For example, it will 
divide the technical discussions.

-gmann

 > -- 
 > Zhipeng (Howard) Huang
 > Standard EngineerIT Standard & Patent/IT Product LineHuawei Technologies 
 > Co,. LtdEmail: huangzhipeng@huawei.comOffice: Huawei Industrial Base, 
 > Longgang, Shenzhen
 > (Previous)
 > Research AssistantMobile Ad-Hoc Network Lab, Calit2University of California, 
 > IrvineEmail: zhipengh@uci.eduOffice: Calit2 Building Room 2402
 > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado 





Re: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials)

2018-09-13 Thread Ghanshyam Mann
  On Thu, 13 Sep 2018 00:47:27 +0900 Matt Riedemann  
wrote  
 > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring 
 > this up separately.
 > 
 > Kristi said:
 > 
 > "Ultimately, this list isn’t exclusive and I’d love to hear your and 
 > other people's opinions about what you think the I should focus on."
 > 
 > Well since you asked...
 > 
 > Some feedback I gave to the public cloud work group yesterday was to get 
 > their RFE/bug list ranked from the operator community (because some of 
 > the requests are not exclusive to public cloud), and then put pressure 
 > on the TC to help project manage the delivery of the top issue. I would 
 > like all of the SIGs to do this. The upgrades SIG should rank and 
 > socialize their #1 issue that needs attention from the developer 
 > community - maybe that's better upgrade CI testing for deployment 
 > projects, maybe it's getting the pre-upgrade checks goal done for Stein. 
 > The UC should also be doing this; maybe that's the UC saying, "we need 
 > help on closing feature gaps in openstack client and/or the SDK". I 
 > don't want SIGs to bombard the developers with *all* of their 
 > requirements, but I want to get past *talking* about the *same* issues 
 > *every* time we get together. I want each group to say, "this is our top 
 > issue and we want developers to focus on it." For example, the extended 
 > maintenance resolution [2] was purely birthed from frustration about 
 > talking about LTS and stable branch EOL every time we get together. It's 
 > also the responsibility of the operator and user communities to weigh in 
 > on proposed release goals, but the TC should be actively trying to get 
 > feedback from those communities about proposed goals, because I bet 
 > operators and users don't care about mox removal [3].

I agree on this, and I feel this is real value we can add in the current 
situation, where contributors are scarce in almost all of the projects. When we 
set goals for any cycle, we should give user/operator/SIG input weight in the 
selection checklist and categorize each goal into a respective category/tag, 
something like "user-oriented" or "coding-oriented" (benefiting only 
developers/code maintenance). Then we concentrate more on the first category 
and leave the second one more to the project teams, which can further plan the 
second-category items as per their bandwidth and priority. I am not saying 
code/developer-oriented goals should not be initiated by the TC, but those 
should be on a lower-priority list. 

-gmann

 > 
 > I want to see the TC be more of a cross-project project management 
 > group, like a group of Ildikos and what she did between nova and cinder 
 > to get volume multi-attach done, which took persistent supervision to 
 > herd the cats and get it delivered. Lance is already trying to do this 
 > with unified limits. Doug is doing this with the python3 goal. I want my 
 > elected TC members to be pushing tangible technical deliverables forward.
 > 
 > I don't find any value in the TC debating ad nauseam about visions and 
 > constellations and "what is openstack?". Scope will change over time 
 > depending on who is contributing to openstack, we should just accept 
 > this. And we need to realize that if we are failing to deliver value to 
 > operators and users, they aren't going to use openstack and then "what 
 > is openstack?" won't matter because no one will care.
 > 
 > So I encourage all elected TC members to work directly with the various 
 > SIGs to figure out their top issue and then work on managing those 
 > deliverables across the community because the TC is particularly well 
 > suited to do so given the elected position. I realize political and 
 > bureaucratic "how should openstack deal with x?" things will come up, 
 > but those should not be the priority of the TC. So instead of 
 > philosophizing about things like, "should all compute agents be in a 
 > single service with a REST API" for hours and hours, every few months - 
 > immediately ask, "would doing that get us any closer to achieving top 
 > technical priority x?" Because if not, or it's so fuzzy in scope that no 
 > one sees the way forward, document a decision and then drop it.
 > 
 > [1] 
 > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html
 > [2] 
 > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
 > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 




Re: [Openstack-operators] ocata nova /etc/nova/policy.json

2018-09-06 Thread Ghanshyam Mann

  On Thu, 06 Sep 2018 23:53:10 +0900 Ignazio Cassano 
 wrote  
 > Thanks, but I made a mistake: I forgot to change the user variables 
 > before deleting the instance. A user with the "user" role cannot delete 
 > instances of other projects. Sorry for my mistake.
 > Regards,
 > Ignazio

On the policy side, Nova now defines its policy defaults in code. For showing 
all projects' servers, Nova has the policy rules [1] that control the 
--all-projects parameter. By default they are admin-only, so a demo user cannot 
see other projects' instances unless these rules are overridden in your 
policy.json.

[1]
os_compute_api:servers:index:get_all_tenants
os_compute_api:servers:detail:get_all_tenants
https://docs.openstack.org/nova/latest/configuration/policy.html 
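
For illustration, an operator who wants to state those defaults explicitly 
could carry them in /etc/nova/policy.json. This is only a sketch: the rule 
names come from the Nova policy reference above, while "rule:admin_api" as the 
target reflects the conventional admin-only default, not anything mandated in 
this thread.

```json
{
    "os_compute_api:servers:index:get_all_tenants": "rule:admin_api",
    "os_compute_api:servers:detail:get_all_tenants": "rule:admin_api"
}
```

With these rules in place, `openstack server list --all-projects` should only 
return other projects' servers for users matching the admin rule.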

-gmann

 > 
 > On Thu, 6 Sep 2018 at 16:41, iain MacDonnell 
 >  wrote:
 > 
 >  
 >  On 09/06/2018 06:31 AM, Ignazio Cassano wrote:
 >  > I installed OpenStack Ocata on CentOS and saw that /etc/nova/policy.json 
 >  > contains the following:
 >  > {
 >  > }
 >  > 
 >  > I created an instance in the "admin" project with the admin user, which 
 >  > belongs to the admin project.
 >  > 
 >  > I created a "demo" project with a user "demo" that has the "user" role.
 >  > 
 >  > Using the command line (openstack server list --all-projects), the demo 
 >  > user can list the admin instances and can also delete one of them.
 >  > 
 >  > I think this is a bug, and a nova policy.json with rules preventing the 
 >  > above must be created.
 >  
 >  See 
 >  
 > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
 >  
 >  You have something else going on ...
 >  
 >   ~iain
 >  
 >  
 >  
 >  
 > 





[Openstack-operators] [openstack-dev] [openstack-operator] [qa] [forum] [berlin] QA Brainstorming Topic ideas for Berlin 2018

2018-09-06 Thread Ghanshyam Mann
Hi All,

I have created the etherpad below to collect forum ideas related to QA for the 
Berlin Summit.

Please add your ideas, along with your IRC name, to the etherpad.

https://etherpad.openstack.org/p/berlin-stein-forum-qa-brainstorming 

-gmann







Re: [Openstack-operators] [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann
++ operator ML

  On Wed, 27 Jun 2018 10:17:33 +0900 Ghanshyam Mann 
 wrote  
 >  
 >  
 >  
 >   On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann 
 >  wrote   
 >  > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: 
 >  > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: 
 >  > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: 
 >  > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez,  wrote: 
 >  > > > >  
 >  > > > > > Dmitry Tantsur wrote: 
 >  > > > > > > [...] 
 >  > > > > > > My suggestion: tempest has to be compatible with all supported 
 >  > > > > > > releases (of both services and plugins) OR be branched. 
 >  > > > > > > [...] 
 >  > > > > > I tend to agree with Dmitry... We have a model for things that need 
 >  > > > > > release alignment, and that's the cycle-bound series. The reason 
 >  > > > > > tempest is branchless was because there was no compatibility issue. 
 >  > > > > > If the split of tempest plugins introduces a potential 
 >  > > > > > incompatibility, then I would prefer aligning tempest to the 
 >  > > > > > existing model rather than introduce a parallel tempest-specific 
 >  > > > > > cycle just so that tempest can stay release-independent... 
 >  > > > > > 
 >  > > > > > I seem to remember there were drawbacks in branching tempest, 
 >  > > > > > though... Can someone with functioning memory brain cells summarize 
 >  > > > > > them again ? 
 >  > > > > > 
 >  > > > >  
 >  > > > >  
 >  > > > > Branchless Tempest enforces api stability across branches. 
 >  > > >  
 >  > > > I'm sorry, but I'm having a hard time taking this statement seriously 
 >  > > > when the current source of tension is that the Tempest API itself 
 >  > > > is breaking for its plugins. 
 >  > > >  
 >  > > > Maybe rather than talking about how to release compatible things 
 >  > > > together, we should go back and talk about why Tempest's API is 
 >  > > > changing in a way that can't be made backwards-compatible. Can you 
 >  > > > give some more detail about that? 
 >  > > >  
 >  > >  
 >  > > Well, it's not; if it did, that would violate all the stability 
 >  > > guarantees provided by Tempest's library and plugin interface. I've 
 >  > > never heard of these kinds of backwards incompatibilities in those 
 >  > > interfaces, and we go to great effort to make sure we don't break them. 
 >  > > Where did the idea that backwards-incompatible changes were being 
 >  > > introduced come from? 
 >  >  
 >  > In his original post, gmann said, "There might be some changes in 
 >  > Tempest which might not work with older version of Tempest Plugins." 
 >  > I was surprised to hear that, but I'm not sure how else to interpret 
 >  > that statement. 
 >  
 > I did not mean that Tempest will introduce changes in a backward-incompatible 
 > way that can break plugins. That cannot happen: all plugins and Tempest are 
 > branchless and are tested against master Tempest, so any backward-incompatible 
 > change would break the plugins' gates. Even when we have to remove deprecated 
 > interfaces from Tempest, we fix all the plugins first, for example: 
 > https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 >   
 >  
 > What I meant is that adding a new interface to, or removing a deprecated 
 > interface from, Tempest might not work with every released or unreleased 
 > version of the plugins. That point is from the perspective of using Tempest 
 > and plugins for production cloud testing, not the gate (where we maintain 
 > compatibility). Production cloud users use the cycle-based Tempest version: 
 > a Pike-based cloud will be tested with Tempest 17.0.0, not the latest 
 > version (though the latest version might work).  
 >  
 > This thread is not just about the gate-testing point of view (which is how 
 > it always seems to be interpreted); it is more about users running Tempest 
 > and plugins against their own clouds. I am also looping in the operator 
 > mailing list, which I forgot in the initial post.  
 >  
 > We do not have any tags/releases from the plugins to know which plugin 
 > version works with which Tempest version. For example, if there is a new 
 > interface intro
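
The cycle-based version guidance above (a Pike cloud tested with Tempest 
17.0.0) can be sketched as a small lookup. This is purely illustrative: only 
the Pike entry comes from this thread, and any other series would have to be 
checked against the Tempest release notes.

```python
# Illustrative sketch: map an OpenStack series to the Tempest release used
# to test clouds of that series. Only "pike" -> "17.0.0" is stated in the
# thread above; the table and function names are hypothetical.
SERIES_TO_TEMPEST_TAG = {
    "pike": "17.0.0",
}


def tempest_tag_for(series: str) -> str:
    """Return the Tempest tag to install when testing a cloud of `series`."""
    tag = SERIES_TO_TEMPEST_TAG.get(series.lower())
    if tag is None:
        raise ValueError(f"no recorded Tempest tag for series {series!r}")
    return tag


if __name__ == "__main__":
    print(tempest_tag_for("pike"))  # -> 17.0.0
```

A production tester would then `pip install "tempest==17.0.0"` (or check out 
that tag) rather than running master against an older cloud.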

[Openstack-operators] [openstack-dev] [openstack-operators][qa] Tempest removal of test_get_service_by_service_and_host_name

2018-05-19 Thread Ghanshyam Mann
Hi All,

Patch https://review.openstack.org/#/c/569112/1 removed
test_get_service_by_service_and_host_name from the Tempest tree, which
looks OK per the bug and the commit message.

This satisfies the conditions for test removal per the process [1], and this
mail completes that process by checking for external usage of the test.

The one place this test is referenced is the Trio2o documentation; I have
raised a patch in Trio2o to remove it and avoid any future confusion [2].

If anyone still requires this test, please respond to this mail; otherwise,
we are good here.

[1] https://docs.openstack.org/tempest/latest/test_removal.html
[2] https://review.openstack.org/#/c/569568/

-gmann
