Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
Ok - yeah, I'm not sure what the history behind that is either...

I'm mainly curious if that's something we can/should keep or if we are
opposed to dropping 'os' and 'api' from the convention (e.g.
load-balancer:loadbalancer:post as opposed to
os_load-balancer_api:loadbalancer:post) and just sticking with the
service-type?
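
For concreteness, the two shapes being compared can be sketched as plain
string templates. This is a toy helper for illustration only -- neither
form is an official oslo.policy convention at the time of this thread:

```python
def policy_name(service_type, resource, action, legacy=False):
    """Build a policy name under the two conventions discussed here.

    legacy=True yields the Octavia-style "os_<service>_api:..." form;
    otherwise the bare service-type form being proposed in this thread.
    """
    if legacy:
        return "os_%s_api:%s:%s" % (service_type, resource, action)
    return "%s:%s:%s" % (service_type, resource, action)

print(policy_name("load-balancer", "loadbalancer", "post", legacy=True))
# os_load-balancer_api:loadbalancer:post
print(policy_name("load-balancer", "loadbalancer", "post"))
# load-balancer:loadbalancer:post
```
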

On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson  wrote:

> I don't know for sure, but I assume it is short for "OpenStack" and
> prefixing OpenStack policies vs. third party plugin policies for
> documentation purposes.
>
> I am guilty of borrowing this from existing code examples[0].
>
> [0]
> http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html
>
> Michael
> On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
> wrote:
> >
> >
> >
> > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
> wrote:
> >>
> >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> >> which maps to the "os-<service>-api:<resource>:<method>" format.
> >
> >
> > Thanks for explaining the justification, Michael.
> >
> > I'm curious if anyone has context on the "os-" part of the format? I've
> seen that pattern in a couple different projects. Does anyone know about
> its origin? Was it something we converted to our policy names because of
> API names/paths?
> >
> >>
> >>
> >> I selected it as it uses the service-type[1], references the API
> >> resource, and then the method. So it maps well to the API reference[2]
> >> for the service.
> >>
> >> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> >> [1] https://service-types.openstack.org/
> >> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
> >>
> >> Michael
> >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >> >
> >> > So +1
> >> >
> >> >
> >> >
> >> > Tim
> >> >
> >> >
> >> >
> >> > From: Lance Bragstad 
> >> > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> >> > Date: Wednesday, 12 September 2018 at 20:43
> >> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-...@lists.openstack.org>, OpenStack Operators <
> openstack-operators@lists.openstack.org>
> >> > Subject: [openstack-dev] [all] Consistent policy names
> >> >
> >> >
> >> >
> >> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >> >
> >> >
> >> >
> >> > The idea is to look at what we do today and see what conventions we
> can come up with to move towards, which should also help us determine how
> much each convention is going to impact services (e.g. picking a convention
> that will cause 70% of services to rename policies).
> >> >
> >> >
> >> >
> >> > Please have a look and we can discuss conventions in this thread. If
> we come to agreement, I'll start working on some documentation in
> oslo.policy so that it's somewhat official before we start renaming
> policies.
> >> >
> >> >
> >> >
> >> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >> >
> >> > ___
> >> > OpenStack-operators mailing list
> >> > OpenStack-operators@lists.openstack.org
> >> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [tc][uc]Community Wide Long Term Goals

2018-09-14 Thread Zhipeng Huang
Hi,

Based upon the discussion we had at the TC session this afternoon, I'm
starting to draft a patch to add a long-term goal mechanism to governance.
It is by no means a complete solution at the moment (I still haven't thought
through the execution method to ensure the outcome), but feel free
to provide your feedback at https://review.openstack.org/#/c/602799/ .

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-14 Thread Matt Riedemann
tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.


Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like is 
that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.


The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?


I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:


* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out of 
this.
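
A minimal sketch of that dispatch logic might look like the following.
The names here (the three option strings and a `shelve_on_stop` config
default) are assumptions taken from this proposal, not merged nova code:

```python
VALID_STOP_BEHAVIORS = ("auto", "offload", "retain")

def resolve_stop_behavior(requested, shelve_on_stop_default=False):
    """Map the user-requested stop behavior to a concrete action.

    'auto' defers to operator configuration; the default retains
    resources on the host for backward compatibility.
    """
    if requested not in VALID_STOP_BEHAVIORS:
        raise ValueError("unknown stop behavior: %s" % requested)
    if requested == "auto":
        return "offload" if shelve_on_stop_default else "retain"
    return requested

print(resolve_stop_behavior("auto"))        # retain
print(resolve_stop_behavior("auto", True))  # offload
print(resolve_stop_behavior("offload"))     # offload
```
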


"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.


As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just stopped) 
instances. Trying to hide shelved servers as stopped in the API would be 
overly complex IMO so I don't want to try and mask that.


It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that really 
shouldn't happen in a cloud that's configuring nova to shelve by default 
because if they are doing this, their SLA needs to reflect they have the 
capacity to unshelve the server. If you can't honor that SLA, don't 
shelve by default.


So, what are the general feelings on this before I go off and start 
writing up a spec?


[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop

--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Openstack-sigs][openstack-dev][all]Expose SIGs/WGs as single window for Users/Ops scenario

2018-09-14 Thread Rico Lin
This idea has been raised a few times now (by me and in Matt's ML thread),
so I would like to give people an update on it: what I have been proposing,
what feedback people have given, and what initial ideas I have collected or
plan to take as actions.

*Why are we doing this?*
The basic concept is to give users/ops a single window for turning
important scenarios/use cases or issues (here's an example [1]) into
traceable tasks in a single story/place, and to ask developers to be
responsible (by changing the mission in governance policy) for co-working
on those tasks. SIGs/WGs very much want feedback and use cases, as do
project teams (I won't speak for all projects/SIGs/WGs, but we certainly
would like to collect more ideas). The project teams would also get a
central place to develop for specific user requirements (Edge, NFV,
Self-healing, K8s). One further idea is that we can use SIGs and WGs as a
home for cross-project docs, where those documents give more general
information on how a user can plan for that area (again Edge, NFV,
Self-healing, K8s). Users/Ops also need clear information about the
dependencies across the projects involved. This is also a potential way to
expose more projects. From this step, we can plan a cross-project gating
implementation (in projects' gates or periodic).

*So what's triggering and feedback:*

   - This idea was raised as a topic in the K8s SIG and Self-healing SIG
   sessions. Feedback from k8s-sig and self-healing-sig was generally
   positive; SIGs appear eager to get use cases and user issues. (I haven't
   taken this idea to the rest of the SIGs/WGs yet, but please leave
   feedback if you're in one of those groups.) This is mostly because it can
   add value to SIGs/WGs around what they're interested in.
   - This idea was raised as a topic in the Ops-meetup session.
   Most ops think it would be super if anyone were actually willing to
   handle their issues. The concern is that we need some structure or
   guidelines to avoid a crazy number of useless issues (maybe a template
   for issues). Another operator's concern was that ops should just go
   through everything in detail themselves and contact the teams
   themselves. IMO it is up to the teams to set a template, require specific
   information, or even figure out which project should be in charge of
   which failure.
   - This idea was raised as a topic in the TC session.
   The Public Cloud WG has taken up this idea as well (and they have done a
   good job!); it appears to be a very preferred way of working for them.
   What happened for them is that the WG collected a large number of use
   cases but would like to see immediate action, or a traceable way to keep
   tracking those tasks.
   Doug: It might be hard to push developers into SIGs/WGs, but SIGs/WGs
   can always raise a cross-project forum. Also, it's important to let
   people know who they can talk to.
   Melvin: Make it easier for everyone, and give it visibility. How we can
   actually get one thing done is very important.
   Thierry: Have a way to expose the top priorities that are important for
   OpenStack.

   - I also raised it with some PTLs and UC members. Generally positive;
   Amy (super cute UC member) did raise the concern that there is manual
   work in binding tasks across bug-tracking platforms (e.g. if you create
   a story in the Self-healing SIG and say it relates to Heat and Neutron,
   you create a task for Heat in that story, but you still need to create a
   Launchpad bug and link it to that story). For now that may still need to
   be done manually, but what we might be able to change is to consider
   migrating most of the relevant teams to a single channel in the long
   term. I didn't get the chance to reach most PTLs, but I do hope this is
   a place where PTLs can also share their feedback.
   - There is an ML thread in self-healing-sig [2].
   Not a lot of feedback on that thread yet, but it generally looks good.


*What are the actions we can do right away:*

   - Please give us feedback.
   - Hold a forum session on this topic for everyone to discuss (I already
   added a brainstorm item to the TC etherpad, but it cuts across projects,
   UC, TC, WGs, and SIGs).
   - Set up a cross-committee discussion on restructuring missions, to make
   sure teams are responsible for helping with development, SIGs/WGs are
   responsible for tracking tasks at the story level and triggering
   cross-project discussion, and operators are responsible for following
   the structure when filing issues and providing valuable information.
   - We can also run an experiment with the SIGs/WGs and related projects
   that are willing to join for a while, see the outcomes, and adjust.
   - Can we set cross-project work as a goal for a group of projects
   instead of only a community goal?
   - Also, if this is a good idea, we can write a guideline for SIGs/WGs,
   e.g. suggesting how they can have a cross-project gate and a way to let
   users/ops file 

Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-14 Thread Michael Johnson
I don't know for sure, but I assume it is short for "OpenStack" and
prefixing OpenStack policies vs. third party plugin policies for
documentation purposes.

I am guilty of borrowing this from existing code examples[0].

[0] 
http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html

Michael
On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad  wrote:
>
>
>
> On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:
>>
>> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>> which maps to the "os-<service>-api:<resource>:<method>" format.
>
>
> Thanks for explaining the justification, Michael.
>
> I'm curious if anyone has context on the "os-" part of the format? I've seen 
> that pattern in a couple different projects. Does anyone know about its 
> origin? Was it something we converted to our policy names because of API 
> names/paths?
>
>>
>>
>> I selected it as it uses the service-type[1], references the API
>> resource, and then the method. So it maps well to the API reference[2]
>> for the service.
>>
>> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
>> [1] https://service-types.openstack.org/
>> [2] 
>> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>>
>> Michael
>> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
>> >
>> > So +1
>> >
>> >
>> >
>> > Tim
>> >
>> >
>> >
>> > From: Lance Bragstad 
>> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> > 
>> > Date: Wednesday, 12 September 2018 at 20:43
>> > To: "OpenStack Development Mailing List (not for usage questions)" 
>> > , OpenStack Operators 
>> > 
>> > Subject: [openstack-dev] [all] Consistent policy names
>> >
>> >
>> >
>> > The topic of having consistent policy names has popped up a few times this 
>> > week. Ultimately, if we are to move forward with this, we'll need a 
>> > convention. To help with that a little bit I started an etherpad [0] that 
>> > includes links to policy references, basic conventions *within* that 
>> > service, and some examples of each. I got through quite a few projects 
>> > this morning, but there are still a couple left.
>> >
>> >
>> >
>> > The idea is to look at what we do today and see what conventions we can 
>> > come up with to move towards, which should also help us determine how much 
>> > each convention is going to impact services (e.g. picking a convention 
>> > that will cause 70% of services to rename policies).
>> >
>> >
>> >
>> > Please have a look and we can discuss conventions in this thread. If we 
>> > come to agreement, I'll start working on some documentation in oslo.policy 
>> > so that it's somewhat official before we start renaming policies.
>> >
>> >
>> >
>> > [0] https://etherpad.openstack.org/p/consistent-policy-names

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [tc]Global Reachout Proposal

2018-09-14 Thread Zhipeng Huang
Hi all,

Following up on the diversity discussion we had in the TC session this
morning [0], I've proposed a resolution on helping the technical community
at large engage in global outreach for OpenStack more efficiently.

Your feedback is welcome. Whether or not this ends up as a new resolution
at the end of the day, this is a conversation worth having.

[0] https://review.openstack.org/602697

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?

2018-09-14 Thread Matt Riedemann

On 3/28/2018 4:35 PM, Jay Pipes wrote:

On 03/28/2018 03:35 PM, Matt Riedemann wrote:

On 3/27/2018 10:37 AM, Jay Pipes wrote:


If we want to actually fix the issue once and for all, we need to 
make availability zones a real thing that has a permanent identifier 
(UUID) and store that permanent identifier in the instance (not the 
instance metadata).


Or we can continue to paper over major architectural weaknesses like 
this.


Stepping back a second from the rest of this thread, what if we do the 
hard fail bug fix thing, which could be backported to stable branches, 
and then we have the option of completely re-doing this with aggregate 
UUIDs as the key rather than the aggregate name? Because I think the 
former could get done in Rocky, but the latter probably not.


I'm fine with that (and was fine with it before; I'm just stating that 
solving the problem long-term requires different thinking).


Best,
-jay


Just FYI for anyone that cared about this thread, we agreed at the Stein 
PTG to resolve the immediate bug [1] by blocking AZ renames while the AZ 
has instances in it. There won't be a microversion for that change and 
we'll be able to backport it (with a release note I suppose).


[1] https://bugs.launchpad.net/nova/+bug/1782539
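
As an illustration only, the agreed fix amounts to a guard like the one
below. The function and parameter names here are hypothetical; the real
change lives in nova's aggregate-update API code:

```python
def validate_az_rename(current_az, new_az, instances_in_az):
    """Reject renaming an availability zone that still has instances.

    Mirrors the agreed behavior: a rename request (new_az differs from
    current_az) hard-fails while any instance remains in the zone.
    """
    if new_az != current_az and instances_in_az > 0:
        raise ValueError(
            "Cannot rename availability zone %r: %d instance(s) are "
            "still in it" % (current_az, instances_in_az))
    return new_az

# Renaming an empty AZ is allowed; a populated one is not.
print(validate_az_rename("az1", "az2", 0))  # az2
```
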

--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [TripleO] undercloud sshd config override

2018-09-14 Thread Cody
Hello folks,

I installed the TripleO undercloud on a machine with a pre-existing
sshd_config that disabled root and password login. The file was
rewritten by Puppet during the undercloud installation and was changed
to allow both options. This is not a good default practice. Is there
a way to make the undercloud respect pre-existing sshd_config
settings?

Thank you to all.

Regards,
Cody

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-14 Thread Lance Bragstad
On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson  wrote:

> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
> which maps to the "os-<service>-api:<resource>:<method>" format.
>

Thanks for explaining the justification, Michael.

I'm curious if anyone has context on the "os-" part of the format? I've
seen that pattern in a couple different projects. Does anyone know about
its origin? Was it something we converted to our policy names because of
API names/paths?


>
> I selected it as it uses the service-type[1], references the API
> resource, and then the method. So it maps well to the API reference[2]
> for the service.
>
> [0] https://docs.openstack.org/octavia/latest/configuration/policy.html
> [1] https://service-types.openstack.org/
> [2]
> https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
>
> Michael
> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
> >
> > So +1
> >
> >
> >
> > Tim
> >
> >
> >
> > From: Lance Bragstad 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> > Date: Wednesday, 12 September 2018 at 20:43
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-...@lists.openstack.org>, OpenStack Operators <
> openstack-operators@lists.openstack.org>
> > Subject: [openstack-dev] [all] Consistent policy names
> >
> >
> >
> > The topic of having consistent policy names has popped up a few times
> this week. Ultimately, if we are to move forward with this, we'll need a
> convention. To help with that a little bit I started an etherpad [0] that
> includes links to policy references, basic conventions *within* that
> service, and some examples of each. I got through quite a few projects this
> morning, but there are still a couple left.
> >
> >
> >
> > The idea is to look at what we do today and see what conventions we can
> come up with to move towards, which should also help us determine how much
> each convention is going to impact services (e.g. picking a convention that
> will cause 70% of services to rename policies).
> >
> >
> >
> > Please have a look and we can discuss conventions in this thread. If we
> come to agreement, I'll start working on some documentation in oslo.policy
> so that it's somewhat official before we start renaming policies.
> >
> >
> >
> > [0] https://etherpad.openstack.org/p/consistent-policy-names
> >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials)

2018-09-14 Thread Davanum Srinivas
Folks,

Sorry for the top post - Those of you that are still at PTG, please feel
free to drop in to the Clear Creek room today.

Thanks,
Dims

On Thu, Sep 13, 2018 at 2:44 PM Jeremy Stanley  wrote:

> On 2018-09-12 17:50:30 -0600 (-0600), Matt Riedemann wrote:
> [...]
> > Again, I'm not saying TC members should be doing all of the work
> > themselves. That's not realistic, especially when critical parts
> > of any major effort are going to involve developers from projects
> > on which none of the TC members are active contributors (e.g.
> > nova). I want to see TC members herd cats, for lack of a better
> > analogy, and help out technically (with code) where possible.
>
> I can respect that. I think that OpenStack made a mistake in naming
> its community management governance body the "technical" committee.
> I do agree that having TC members engage in activities with tangible
> outcomes is preferable, and that the needs of the users of its
> software should weigh heavily in prioritization decisions, but those
> are not the only problems our community faces nor is it as if there
> are no other responsibilities associated with being a TC member.
>
> > Given the repeated mention of how the "help wanted" list continues
> > to not draw in contributors, I think the recruiting role of the TC
> > should take a back seat to actually stepping in and helping work
> > on those items directly. For example, Sean McGinnis is taking an
> > active role in the operators guide and other related docs that
> > continue to be discussed at every face to face event since those
> > docs were dropped from openstack-manuals (in Pike).
>
> I completely agree that the help wanted list hasn't worked out well
> in practice. It was based on requests from the board of directors to
> provide some means of communicating to their business-focused
> constituency where resources would be most useful to the project.
> We've had a subsequent request to reorient it to be more like a set
> of job descriptions along with clearer business use cases explaining
> the benefit to them of contributing to these efforts. In my opinion
> it's very much the responsibility of the TC to find ways to
> accomplish these sorts of things as well.
>
> > I think it's fair to say that the people generally elected to the
> > TC are those most visible in the community (it's a popularity
> > contest) and those people are generally the most visible because
> > they have the luxury of working upstream the majority of their
> > time. As such, it's their duty to oversee and spend time working
> > on the hard cross-project technical deliverables that operators
> > and users are asking for, rather than think of an infinite number
> > of ways to try and draw *others* to help work on those gaps.
>
> But not everyone who is funded for full-time involvement with the
> community is necessarily "visible" in ways that make them electable.
> Higher-profile involvement in such activities over time is what gets
> them the visibility to be more easily elected to governance
> positions via "popularity contest" mechanics.
>
> > As I think it's the role of a PTL within a given project to have a
> > finger on the pulse of the technical priorities of that project
> > and manage the developers involved (of which the PTL certainly may
> > be one), it's the role of the TC to do the same across openstack
> > as a whole. If a PTL doesn't have the time or willingness to do
> > that within their project, they shouldn't be the PTL. The same
> > goes for TC members IMO.
>
> Completely agree, I think we might just disagree on where to strike
> the balance of purely technical priorities for the TC (as I
> personally think the TC is somewhat incorrectly named).
> --
> Jeremy Stanley
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Davanum Srinivas :: https://twitter.com/dims
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators