Re: [Openstack-operators] new SIGs to cover use cases

2018-11-13 Thread Arkady.Kanevsky
Good point.
Adding SIG list.

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Monday, November 12, 2018 4:46 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] new SIGs to cover use cases



On 2018-11-12 15:46:38 + (+), arkady.kanev...@dell.com wrote:
[...]
>   1.  Do we have, or want to create, a user community around hybrid cloud?
[...]
>   2.  As we target AI/ML as the 2019 application domain, do we
>   want to create a SIG for it, or do we extend the Scientific
>   SIG to cover it?
[...]

It may also be worthwhile to ask this on the openstack-sigs mailing
list.
-- 
Jeremy Stanley

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] new SIGs to cover use cases

2018-11-12 Thread Arkady.Kanevsky
Team,
At today's Board and joint TC and UC meetings, two questions came up:

  1.  Do we have, or want to create, a user community around hybrid cloud?
This is one of the major pushes of OpenStack for the communities: 70+% of
questionnaire responders said that they deploy and use hybrid cloud. We have
Public and Private Cloud SIGs but no hybrid one, which raises the question of
where we capture and drive hybrid cloud requirements.
  2.  As we target AI/ML as the 2019 application domain, do we want to create
a SIG for it, or do we extend the Scientific SIG to cover it?

I want to start a dialog on this.
Thanks,
Arkady
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

2018-10-18 Thread Arkady.Kanevsky
Love the idea of having clearer terminology.
I suggest we let the telco folks propose the terminology to use.
This is not a 3-level hierarchy but much more: there are several layers of
aggregation from local to metro, to regional, to DC, with potentially multiple
layers in each.

-Original Message-
From: Dmitry Tantsur  
Sent: Thursday, October 18, 2018 9:23 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-s...@lists.openstack.org
Subject: [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" 
and "Far Edge"



Hi all,

Sorry for chiming in really late on this topic, but I think $subj is worth
discussing before we settle on the potentially confusing terminology.

I think the difference between "Edge" and "Far Edge" is too vague to use these
terms in practice. Think about the "edge" metaphor itself: something rarely has
several layers of edges. A knife has an edge; there are no far edges. I can
imagine zooming in and seeing more edges at the edge, which is quite cool
indeed, but is it really a useful metaphor for those who have never used a
strong microscope? :)

I think in the trivial sense "Far Edge" is a tautology and should be avoided.
As weak proof, I already see a lot of smart people confusing the two and
actually using Central/Edge where they mean Edge/Far Edge. I suggest we adopt
different terminology, even if it is less consistent with the typical
marketing terms around the "Edge" movement.

Now, I don't have really great suggestions. Something that came up in TripleO 
discussions [1] is Core/Hub/Edge, which I think reflects the idea better.

I'd be very interested to hear your ideas.

Dmitry

[1] https://etherpad.openstack.org/p/tripleo-edge-mvp

___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][elections] Stein TC Election Results

2018-09-28 Thread Arkady.Kanevsky
Congrats to the newly elected TC members and to everyone who ran.

-Original Message-
From: Doug Hellmann  
Sent: Friday, September 28, 2018 10:29 AM
To: Emmet Hikory; OpenStack Developers
Subject: Re: [openstack-dev] [all][tc][elections] Stein TC Election Results



Emmet Hikory  writes:

> Please join me in congratulating the 6 newly elected members of the
> Technical Committee (TC):
>
>   - Doug Hellmann (dhellmann)
>   - Julia Kreger (TheJulia)
>   - Jeremy Stanley (fungi)
>   - Jean-Philippe Evrard (evrardjp)
>   - Lance Bragstad (lbragstad)
>   - Ghanshyam Mann (gmann)

Congratulations, everyone! I'm looking forward to serving with all of
you for another term.

> Full Results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f773fda2d0695864
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates; having a good group of candidates helps
> engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote.  Voter turnout
> was significantly up from recent cycles.  We need to ensure your voices are
> heard.

It's particularly good to hear that turnout is up, not just in
percentage but in raw numbers, too. Thank you all for voting!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release model for feature-complete OpenStack libraries

2018-09-28 Thread Arkady.Kanevsky
How will we handle which versions of libraries work together?
And which combinations will be run through CI?

-Original Message-
From: Thierry Carrez  
Sent: Friday, September 28, 2018 7:17 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [release] Release model for feature-complete OpenStack 
libraries



Hi everyone,

In OpenStack, libraries have to be released with a
cycle-with-intermediary model, so that (1) they can be released early
and often, (2) services consuming those libraries can take advantage of
their new features, and (3) we detect integration bugs early rather than
late. This works well while libraries see lots of changes; however, it is
a bit heavy-handed for feature-complete, stable libraries: it forces
them to be released multiple times per year even if they have not seen any
change.

For those, we discussed[1] a number of mechanisms in the past, but at
the last PTG we came to the conclusion that those were a bit
complex and not really addressing the issue. Here is a simpler proposal.

Once libraries are deemed feature-complete and stable, they should
be switched to an "independent" release model (like all our third-party
libraries). They would then see releases purely as needed for the occasional
corner-case bugfix: they won't be released early and often, there is no
new feature to take advantage of, and new integration bugs should be
very rare.

This transition should be definitive in most cases. In the rare case where
a library needs large feature development work again, we'd have
two options: develop the new feature in a new library depending on the
stable one, or grant an exception and switch it back to
cycle-with-intermediary.

If one of your libraries should already be considered feature-complete
and stable, please contact the release team to transition it to the
new release model.
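
For illustration, a minimal sketch of what the switch looks like in a
deliverable file under openstack/releases; the library name, version and
hash below are hypothetical, and the field names follow the deliverable
format as I recall it:

    ---
    launchpad: oslo.example
    team: oslo
    release-model: independent   # was: cycle-with-intermediary
    releases:
      - version: 1.2.3
        projects:
          - repo: openstack/oslo.example
            hash: 0123456789abcdef0123456789abcdef01234567

A new release is then tagged only when an actual fix lands, rather than on
each cycle's intermediary deadlines.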

Thanks for reading!

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html

-- 
The Release Team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Arkady.Kanevsky
+1

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Wednesday, September 26, 2018 1:56 PM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting 
goal selection for T series



Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write
extensive end-user-facing documentation which explains how to use OpenStack
along with CERN-specific features (such as workflows for requesting
projects/quotas/etc.).

One regular problem we come across is that the end user experience is
inconsistent. In some cases, we find projects which are not covered by the
unified OpenStack client (e.g. Manila). In other cases, there are subsets of
the functionality which require the native project client.

I would strongly support a goal which targets:

- All new projects should have their end-user-facing functionality fully
exposed via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be
defined)
- Many administrator actions would also benefit from integration (reader roles
are end users too, so list and show need to be covered as well)
- Users should be able to use a single openrc for all interactions with the
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.
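
As a hypothetical illustration of the inconsistency (the commands below are
examples only; Manila is one of the projects not yet covered, as noted above):

    # Covered by the unified client: one tool, one auth setup
    openstack server list
    openstack volume create --size 10 myvol

    # Not covered: a Manila user falls back to the native client,
    # with different syntax and a separate credential setup
    manila create NFS 1 --name myshare
    manila list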

It may be a multi-release goal, but it would really benefit cloud consumers,
and I feel that goals should include this audience as well.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Arkady.Kanevsky
Indeed. Thanks, Alex, for your great leadership of TripleO.

From: Remo Mattei [mailto:r...@rm.ht]
Sent: Wednesday, July 25, 2018 4:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] PTL non-candidacy

I want to publicly say THANK YOU, Alex. You ROCK.

Hopefully we will meet at one of those summits.

Ciao,
Remo


On Jul 25, 2018, at 6:23 AM, Alex Schultz 
mailto:aschu...@redhat.com>> wrote:

Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward.  I look forward to continuing our efforts with everyone.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Arkady.Kanevsky
Is this for ephemeral storage handling?

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, July 16, 2018 8:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] about block device driver

On 07/16/2018 09:32 AM, Sean McGinnis wrote:
> The other option would be to not use Cinder volumes so you just use 
> local storage on your compute nodes.

^^ yes, this.
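
For context, "local storage" here is just nova's default ephemeral-disk
behavior; a minimal nova.conf sketch, assuming the libvirt driver:

    [libvirt]
    # instance ephemeral disks are created as qcow2 files on the
    # compute node's local disk, with no Cinder volume involved
    images_type = qcow2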

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-13 Thread Arkady.Kanevsky
+1

From: John Fulton [mailto:johfu...@redhat.com]
Sent: Wednesday, June 13, 2018 11:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on 
storage bits

On Wed, Jun 13, 2018, 12:04 PM Marios Andreou 
mailto:mar...@redhat.com>> wrote:

On Wed, Jun 13, 2018 at 6:57 PM, Giulio Fidente 
mailto:gfide...@redhat.com>> wrote:
On 06/13/2018 05:50 PM, Emilien Macchi wrote:
> Alan Bishop has been highly involved in the storage backend integration
> in TripleO and the Puppet modules, always here to update them with new features,
> fix (nasty and untestable third-party backend) bugs, and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also very knowledgeable about how TripleO works and how containers are
> integrated, so I would like to propose him as core on TripleO projects for
> patches related to storage (Cinder, Glance, Swift, Manila, and
> backends).
>
> Please vote -1/+1,

+1 :D

+1


+1




--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live

2018-04-30 Thread Arkady.Kanevsky
LOL

From: Jimmy McArthur [mailto:ji...@openstack.org]
Sent: Monday, April 30, 2018 1:22 PM
To: Kanevsky, Arkady
Cc: a...@demarco.com; openstack-...@lists.openstack.org; 
OpenStack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now 
live

We don't support deprecated browsers, I'm afraid.




arkady.kanev...@dell.com
April 30, 2018 at 1:14 PM
Interesting.
It does work on Chrome but not on IE.
Here is an IE screenshot.
Thanks,
Arkady

From: Jimmy McArthur [mailto:ji...@openstack.org]
Sent: Monday, April 30, 2018 11:22 AM
To: Kanevsky, Arkady
Cc: a...@demarco.com; 
openstack-...@lists.openstack.org; 
OpenStack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now 
live

Hmm. I see both populated with all of the relevant sessions.  Can you send me a 
screencap of what you're seeing?




Jimmy McArthur
April 30, 2018 at 11:22 AM
Hmm. I see both populated with all of the relevant sessions.  Can you send me a 
screencap of what you're seeing?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
arkady.kanev...@dell.com
April 30, 2018 at 10:58 AM
Both are currently empty.

From: Jimmy McArthur [mailto:ji...@openstack.org]
Sent: Monday, April 30, 2018 10:48 AM
To: Amy Marrich
Cc: OpenStack Development Mailing List (not for usage questions); 
OpenStack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now 
live

Project Updates are in their own track: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223

As are SIG, BoF and Working Groups: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218




Jimmy McArthur
April 30, 2018 at 10:47 AM
Project Updates are in their own track: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223

As are SIG, BoF and Working Groups: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Amy Marrich
April 30, 2018 at 10:44 AM
Emilien,

I believe that the Project Updates are separate from the Forum? I know I saw 
some in the schedule before the Forum submittals were even closed. Maybe 
contact speaker support or Jimmy will answer here.

Thanks,

Amy (spotz)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live

2018-04-30 Thread Arkady.Kanevsky
Both are currently empty.

From: Jimmy McArthur [mailto:ji...@openstack.org]
Sent: Monday, April 30, 2018 10:48 AM
To: Amy Marrich
Cc: OpenStack Development Mailing List (not for usage questions); 
OpenStack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now 
live

Project Updates are in their own track: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223

As are SIG, BoF and Working Groups: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218


Amy Marrich
April 30, 2018 at 10:44 AM
Emilien,

I believe that the Project Updates are separate from the Forum? I know I saw 
some in the schedule before the Forum submittals were even closed. Maybe 
contact speaker support or Jimmy will answer here.

Thanks,

Amy (spotz)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Emilien Macchi
April 30, 2018 at 10:33 AM


Hello all -

Please take a look here for the posted Forum schedule: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224  You 
should also see it update on your Summit App.

Why doesn't TripleO have a project update?
Maybe we could combine it with TripleO - Project Onboarding if needed, but it
would be great to have it advertised as a project update!

Thanks,
--
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur
April 27, 2018 at 11:04 AM
Hello all -

Please take a look here for the posted Forum schedule: 
https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224  You 
should also see it update on your Summit App.

Thank you and see you in Vancouver!
Jimmy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Arkady.Kanevsky
+1.
It would be good to also identify the use cases.
Surprised that node should be cleaned up automatically.
I would expect that we want it to be a deliberate request from administrator to 
do.
Maybe user when they "return" a node to free pool after baremetal usage.
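
For reference, a minimal ironic.conf sketch of the options under discussion;
the option names are as I recall them, so treat this as an assumption rather
than authoritative:

    [conductor]
    # wipe nodes each time they move to the "available" state
    automated_clean = true

    [deploy]
    # metadata-only erase: clear partition tables and boot records
    # without the time-consuming full-disk shred
    erase_devices_priority = 0
    erase_devices_metadata_priority = 10
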
Thanks,
Arkady

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, April 26, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip the time-consuming wiping of data from
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - a UEFI boot partition left from a previous deployment is likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee that Heat will never delete a node,
> or Heat is entirely out of the picture for Ironic provisioning, then
> I'd prefer that we didn't enable automated cleaning by default.
>
> I believe we had done something with policy.json at one time to
> prevent node delete, but I don't recall if that protected from both
> user initiated actions and Heat actions. And even that was not enabled
> by default.
>
> IMO, we need to keep "safe" defaults. Even if it means manually
> documenting that you should clean to prevent the issues you point out
> above. The alternative is to have no way to recover deleted nodes by
> default.


 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
> 
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
> reason to disable the 'rm' command :)
> 
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> 

Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-02 Thread Arkady.Kanevsky
+1

From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, April 2, 2018 3:57 PM
To: Melvin Hillsman 
Cc: openstack-operators 
Subject: Re: [Openstack-operators] Ops Meetup, Co-Location options, and User 
Feedback

I'm a +1 too, as long as the devs at large are cool with it and won't hate on
us for crashing their party. I also +1 the proposed format. It's basically
what we discussed in Tokyo. Make it so.

Cheers
Erik

PS. Sorry for the radio silence the past couple of weeks. Vacation, kids, etc.

On Apr 2, 2018 4:18 PM, "Melvin Hillsman" 
> wrote:
Unless anyone has any objections, I believe we have quorum, Jimmy.

On Mon, Apr 2, 2018 at 12:53 PM, Melvin Hillsman 
> wrote:
+1

On Mon, Apr 2, 2018 at 11:39 AM, Jimmy McArthur 
> wrote:
Hi all -

I'd like to check in to see if we've come to a consensus on the colocation of 
the Ops Meetup.  Please let us know as soon as possible as we have to alert our 
events team.

Thanks!
Jimmy


Chris Morgan
March 27, 2018 at 11:44 AM
Hello Everyone,
  This proposal looks to have very good backing in the community. There was an 
informal IRC meeting today with the meetups team, some of the foundation folk 
and others and everyone seems to like a proposal put forward as a sample 
definition of the combined event - I certainly do, it looks like we could have 
a really great combined event in September.

I volunteered to share that a bit later today with some other info. In the
meantime, if you have a viewpoint please do chime in here, as we'd like to
declare this agreed by the community ASAP. In particular, IF YOU OBJECT,
please speak up by the end of this week.

Thanks!

Chris




--
Chris Morgan >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Jonathan Proulx
March 23, 2018 at 10:07 AM
On Thu, Mar 22, 2018 at 09:02:48PM -0700, Yih Leong, Sun. wrote:
:I support the idea of trying to colocate the next Ops Midcycle and PTG.
:Although scheduling could be a potential challenge, it is worth giving it a
:try.
:
:Also, having a joint social event in the evening can help Dev/Ops to
:meet and have offline discussions. :)

Agreeing strongly with Matt and Melvin's comments about Forum -vs-
PTG/OpsMidcycle.

PTG/OpsMidcycle (as I see them) are about focusing inside teams to get
work done ("how" is a good one-word summary, I think). The advantage of
colocation is for cross-team questions like "we're thinking of doing
this thing this way; does this have any impacts on your work I might
not have considered?", which can get a quick response in the hall, at lunch,
or over beers, as Yih Leong suggests.

Forum has become about coming together across groups for more
conceptual "what" discussions.

So I also think they are very distinct, and I do see potential benefits
to colocation.

We do need to watch out for downsides. The concerns around colocation
seemed mostly to be about larger events costing more and being generally
harder to organize. If we try, we will find out whether there is merit to
this concern, but (IMO) it is important to keep both
events as cheap and simple as possible.

-Jon

:
:On Thursday, March 22, 2018, Melvin Hillsman 
 wrote:
:
:> Thierry and Matt both hit the nail on the head in terms of the very
:> base/purpose/point of the Forum, PTG, and Ops Midcycles and here is my +2
:> since I have spoke with both and others outside of this thread and agree
:> with them here as I have in individual discussions.
:>
:> If nothing else I agree with Jimmy's original statement of at least giving
:> this a try.
:>
:> On Thu, Mar 22, 2018 at 4:54 PM, Matt Van Winkle 

:> wrote:
:>
:>> Hey folks,
:>> Great discussion! There are a number of points to comment on going back
:>> through the last few emails. I'll try to do so in line with Thierry's
:>> latest below. From a User Committee perspective (and as a member of the
:>> Ops Meetup planning team), I am a convert to the idea of co-location, but
:>> have come to see a lot of value in it. I'll point some of that out as I
:>> respond to specific comments, but first a couple of overarching points.
:>>
:>> In the current model, the Forum sessions are very much about WHAT the
:>> software should do. Keeping the discussions focused on behavior, feature
:>> and function has made it much easier for an operator to participate
:>> effectively in the conversation versus the older, design sessions, that
:>> focused largely on blueprints, coding 

Re: [Openstack] iSCSI multipath

2018-03-23 Thread Arkady.Kanevsky
Ramon,

It is 
volume_driver=cinder.volume.drivers.dell_emc.sc.storagecenter_iscsi.SCISCSIDriver
 these days.

The volume is mapped to all paths whether they will be used or not.
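
To actually consume the paths, multipath also needs to be enabled on the
attach side. A hedged sketch, assuming the usual option names for Queens:

    # cinder.conf, in the backend section
    volume_driver = cinder.volume.drivers.dell_emc.sc.storagecenter_iscsi.SCISCSIDriver
    # use multipath for cinder's own attaches (image transfer, migration)
    use_multipath_for_image_xfer = True

    # nova.conf on the compute nodes: multipath for instance volume attaches
    [libvirt]
    volume_use_multipath = True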

-Original Message-
From: Ramon Orru [mailto:ramon.o...@immobiliare.it] 
Sent: Friday, March 23, 2018 9:07 AM
To: Kanevsky, Arkady ; openstack@lists.openstack.org
Subject: Re: [Openstack] iSCSI multipath

Hi Arkady,

is Dell SC driver (connected to a SC9000), to be clear, we have:

volume_driver=cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver

In our cinder.conf

Ramon


Il 23/03/18 14:52, arkady.kanev...@dell.com ha scritto:
> Ramon,
> Which DellEMC driver is that? VNX?
>
> -Original Message-
> From: Ramon Orru [mailto:ramon.o...@immobiliare.it]
> Sent: Friday, March 23, 2018 6:35 AM
> To: openstack@lists.openstack.org
> Subject: [Openstack] iSCSI multipath
>
> Hi everyone,
>
> I'm using the delliscsi driver as the cinder backend in a new Queens cluster.
>
> Checking all nodes' connections to the storage layer, I figured out that after
> discovering and logging in (successfully) on every path, all nodes set up
> only 4 paths out of 8 total.
>
> Multipath on these 4 paths is working well and I'm not experiencing any issue;
> I just wonder why only 4 paths are used... am I ignoring some configuration
> or default behaviour?
>
> Thanks in advance
>
> Ramon
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] iSCSI multipath

2018-03-23 Thread Arkady.Kanevsky
Ramon,
Which DellEMC driver is that? VNX?

-Original Message-
From: Ramon Orru [mailto:ramon.o...@immobiliare.it] 
Sent: Friday, March 23, 2018 6:35 AM
To: openstack@lists.openstack.org
Subject: [Openstack] iSCSI multipath

Hi everyone,

I'm using the delliscsi driver as the cinder backend in a new Queens cluster.

Checking all nodes' connections to the storage layer, I figured out that after
discovering and logging in (successfully) on every path, all nodes set up only
4 paths out of 8 total.

Multipath on these 4 paths is working well and I'm not experiencing any issue;
I just wonder why only 4 paths are used... am I ignoring some configuration or
default behaviour?

Thanks in advance

Ramon


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-15 Thread Arkady.Kanevsky
Greg,
For compliance it is sufficient to run the tests in the current guideline:
https://refstack.openstack.org/#/guidelines.
But it is good if you can also submit a full Tempest run; that is used
internally by RefStack to identify which tests to include in future
guidelines. This can be submitted anonymously if you like.
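
A sketch of how this is typically run with refstack-client; the flags and
guideline URL are from memory, so double-check them against the RefStack docs:

    # run only the guideline's required tests
    refstack-client test -c ~/tempest.conf -v \
      --test-list "https://refstack.openstack.org/api/v1/guidelines/2018.02/tests?target=platform&type=required"

    # or run the full suite by omitting --test-list, then upload the
    # subunit results (anonymously, if no signing key is configured)
    refstack-client upload <path-to-subunit-results>
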
Thanks,
Arkady

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Thursday, March 15, 2018 9:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus 
'OpenStack Powered' Tests

Re-posting this question to [refstack].
Any guidance on what level of compliance is required to qualify for the
OpenStack logo ( https://www.openstack.org/brand/interop/ )?
See questions below.

Greg.

From: Greg Waines >
Date: Monday, February 26, 2018 at 6:22 PM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack 
Powered' Tests


I have a commercial OpenStack product that I would like to claim compliance
with RefStack.
- Is it sufficient to claim compliance with only the "OpenStack Powered
Platform" tests?
  i.e. https://refstack.openstack.org/#/guidelines
  i.e. the ~350-ish compute + object-storage tests
- Or should I be using the COMPLETE API test set?
  i.e. the > 1,000 tests from various domains that get run if you do not
specify a test-list

Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DriverLog] DriverLog future

2018-03-01 Thread Arkady.Kanevsky
Having 3rd-party CI report results automatically would be helpful.
While it is possible for PTLs to report, per release, which drivers should be
listed in the marketplace, that is extra work PTLs have not signed up for.

Driver owners submitting DriverLog updates per release is not a big deal;
it does mean extra work for Ilya, though.

I think we can define a rule for removal: if a driver entry has not been
updated for 2(?) releases, remove it. We can run a questionnaire to find the
right number of releases.
Thanks,
Arkady

From: Ilya Shakhat [mailto:shak...@gmail.com]
Sent: Thursday, March 1, 2018 4:44 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [DriverLog] DriverLog future

Hi!
For those who do not know, DriverLog is a community registry of 3rd-party 
drivers for OpenStack hosted together with Stackalytics [1]. The project 
started 4 years ago and by now contains information about 220 drivers. The data 
from DriverLog is also consumed by official Marketplace [2].
Here I would like to discuss directions for DriverLog and 3rd-party driver 
registry as general.
1) Being a single community-wide registry was good initially; it allowed us to
quickly collect descriptions for most drivers in a single place. But in the
long term this approach stopped working - not many projects remember to update
information stored in some random place, right?
Mike already pointed to this problem a year ago [3] and the idea was to move
the driver list into the projects (and thus move responsibility to them too)
and have an aggregated list of drivers produced by infra. Do we have any
progress in this direction? Is it time to start deprecating DriverLog and
consider the transition during the Rocky release?
2) As a project with a 4-year history, DriverLog's list has only grown over
time, with quite few removals. It still has drivers whose latest supported
version is Liberty, and drivers for non-maintained projects (e.g. Fuel). While
it may make sense to keep all of them for operators who run older versions, it
can create the impression that the majority of drivers are old. One solution
is to show by default only drivers for active releases (Pike and later). If
done, this would apply to both DriverLog and the Marketplace.

Any other ideas or suggestions?
Thanks,
I

[1] http://stackalytics.com/report/driverlog
[2] https://www.openstack.org/marketplace/drivers/
[3] http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [User-committee] User Committee Election Results - February 2018

2018-02-26 Thread Arkady.Kanevsky
Congrats to the new committee members,
and thanks to the previous ones for a great job.

From: Shilla Saebi [mailto:shilla.sa...@gmail.com]
Sent: Sunday, February 25, 2018 5:52 PM
To: user-committee ; OpenStack Mailing List 
; OpenStack Operators 
; OpenStack Dev 
; commun...@lists.openstack.org
Subject: [User-committee] User Committee Election Results - February 2018

Hello Everyone!

Please join me in congratulating 3 newly elected members of the User Committee 
(UC)! The winners for the 3 seats are:

Melvin Hillsman
Amy Marrich
Yih Leong Sun

Full results can be found here: 
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045

Election details can also be found here: 
https://governance.openstack.org/uc/reference/uc-election-feb2018.html

Thank you to all of the candidates, and to all of you who voted and/or promoted 
the election!

Shilla
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] User Committee Elections

2018-02-19 Thread Arkady.Kanevsky
I saw the election email with the pointer to the poll.
I see no reason to stop it now, but extending the vote for 1 more week makes
sense.
Thanks,
Arkady

From: Melvin Hillsman [mailto:mrhills...@gmail.com]
Sent: Monday, February 19, 2018 11:32 AM
To: user-committee ; OpenStack Mailing List 
; OpenStack Operators 
; OpenStack Dev 
; commun...@lists.openstack.org
Subject: [Openstack-operators] User Committee Elections

Hi everyone,

We had to push the voting back a week if you have been keeping up with the UC 
elections[0]. That being said, election officials have sent out the poll and so 
voting is now open! Be sure to check out the candidates - https://goo.gl/x183he 
- and get your vote in before the poll closes.

[0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html

--
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [User-committee] Stepping aside announcement

2018-01-29 Thread Arkady.Kanevsky
Edgar, thank you for all your hard work and passion.

From: Melvin Hillsman [mailto:mrhills...@gmail.com]
Sent: Monday, January 29, 2018 12:45 PM
To: Amy Marrich 
Cc: openst...@lists.openstack.org; openstack-operators 
; user-committee 

Subject: Re: [User-committee] Stepping aside announcement

Thanks for your service to the community Edgar! Hope to see you at an event 
soon and we can toast to your departure and continued success!

On Mon, Jan 29, 2018 at 11:59 AM, Amy Marrich 
> wrote:
Edgar,

Thank you for all your hard work and contributions!

Amy (spotz)

On Mon, Jan 29, 2018 at 11:12 AM, Edgar Magana 
> wrote:
Dear Community,

This is an overdue announcement, but I was waiting for the right moment, and
today is it, with the opening of the UC election. It has been almost seven
years of full commitment to OpenStack and the entire ecosystem around it.
During the last couple of years, I had the opportunity to serve as Chair of
the User Committee. I have served in this role with nothing but passion and
dedication for the users and operators. OpenStack has been very important to
me, and it will always be the most enjoyable work I have ever done.

It is time to move on. Our team is extending its focus to other cloud domains
and I will be leading one of those. Therefore, I would like to announce that I
am stepping aside from my role as UC Chair. Per our UC election, there will be
not just 2 seats available but three:
https://governance.openstack.org/uc/reference/uc-election-feb2018.html

I want to encourage the whole AUC community to participate; being part of the
User Committee is a very important and gratifying activity. Please go for it!

Thank you all,

Edgar Magana
Sr. Principal Architect
Workday, Inc.




___
User-committee mailing list
user-commit...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee





--
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [cinder] Leaving the Cinder core team

2017-12-20 Thread Arkady.Kanevsky
Good luck.
We will miss you.

From: Patrick East [mailto:patrick.e...@purestorage.com]
Sent: Wednesday, December 20, 2017 6:10 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [cinder] Leaving the Cinder core team

Hi Everyone,

Wanted to let the group know that I'll be stepping away from my position on the 
Cinder Core team.

I've got mixed feelings about it, but with my current responsibilities and 
priorities my focus will be elsewhere. I will be unable to dedicate the level 
of time and energy to Cinder reviews and core feature/bug contributions that 
I've had the privilege of in the past.

My involvement with Cinder won't change much from the last release or so, just 
setting expectations that I don't plan to increase my commitment upstream. I 
will still be on IRC and involved in bugs/reviews/etc where I can, primarily in 
areas I am very familiar with (os-brick, replication, volume drivers, etc). 
Feel free to ping me or cc me on reviews/bugs.

Thanks,

Patrick East
patrick.e...@purestorage.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Removal of tempest plugin code from openstack/ironic & openstack/ironic-inspector

2017-12-18 Thread Arkady.Kanevsky
Thanks for response.
My recommendation is:
1. Only allow patches into openstack/ironic-tempest-plugin.
2. Give Ironic CI owners a time period (3 weeks?) to switch their setups to use 
only openstack/ironic-tempest-plugin, not the in-tree master copy, and report 
back to the Ironic CI team whether it works for them. If yes, go ahead and 
switch; if not, report back.
3. At the end of that time, if the majority of Ironic CI sites have completed 
their transition to ironic-tempest-plugin, we switch.

Thanks,
Arkady


-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org] 
Sent: Monday, December 18, 2017 2:50 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Ironic] Removal of tempest plugin code from 
openstack/ironic & openstack/ironic-inspector

On Mon, Dec 18, 2017 at 01:37:13PM -0700, Julia Kreger wrote:
> > And actually I almost think the holiday time is the best time since 
> > the fewest number of people are going to care. But maybe I'm wrong. 
> > I do wonder if nobody is around to watch a 3rd Party CI for two 
> > weeks, how likely is it to still be working when they get back?
> >
> > I'm not vehemently opposed to delaying, but somewhat opposed.
> >
> > Thoughts?
> 
> I agree and disagree of course. :)  Arkady raises a good point about 
> availability of people, and the simple fact is they will be broken if 
> nobody is around to fix them. That being said, the true measurement is 
> going to be if third party CI shows the commits to remove the folders 
> as passing. If they pass, ideally we should proceed with removing them 
> sooner rather than later to avoid confusion. If they break after the 
> removal of the folders but still ultimately due to the removal of the 
> folders, we have found a bug that will need to be corrected, and we 
> can always temporarily revert to restore the folders in the mean time 
> until people return.
> 

Well it depends, there might not be a failure mode with removing the in-tree 
plugins. It depends on the test selection the 3rd party ci's run. (or if 
they're doing anything extra downstream which has a hard dependency on the 
in-tree stuff, like importing from it directly) If they're running anything 
from tempest itself it's unlikely they'd fail because of the plugin removal. 
The plugins are loaded dynamically during test discovery, and if you remove a 
plugin then it just doesn't get loaded by tempest anymore. So for the normal 
case this would only cause a failure if the only tests being selected were in 
the plugin (and then it fails because no tests were run).

-Matt Treinish
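
For readers unfamiliar with the mechanism described above: tempest plugins are 
advertised through setuptools entry points and loaded dynamically at test 
discovery time. A minimal sketch of that pattern, assuming the 
'tempest.test_plugins' entry-point namespace and the stevedore library (the 
function name and return shape here are illustrative, not tempest's actual 
internals):

    # Minimal sketch: dynamic plugin discovery via entry points.
    # A plugin package that has been uninstalled simply stops being
    # yielded here -- nothing fails unless the resulting test selection
    # ends up empty.
    from stevedore import extension

    def discover_test_plugins():
        mgr = extension.ExtensionManager(
            namespace='tempest.test_plugins',
            invoke_on_load=True)
        return {ext.name: ext.obj for ext in mgr}

This is why removing the in-tree plugin code is safe for jobs that only run 
tests from tempest proper, as Matt notes above.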

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Removal of tempest plugin code from openstack/ironic & openstack/ironic-inspector

2017-12-18 Thread Arkady.Kanevsky
John,
Should we give all Ironic CI maintainers time to do the migration before 
pulling the code from master?
Especially this close to the holiday season, when a lot of folks are out.
We want to avoid Ironic CI not being functional for several weeks.
Thanks,
Arkady

From: John Villalovos [mailto:openstack@sodarock.com]
Sent: Monday, December 18, 2017 10:59 AM
To: openstack-dev 
Subject: Re: [openstack-dev] [Ironic] Removal of tempest plugin code from 
openstack/ironic & openstack/ironic-inspector

To hopefully make things more clear.
All of the ironic related projects that were using the tempest-plugin code from 
either openstack/ironic or openstack/ironic-inspector have been migrated to use 
the tempest-plugin code in openstack/ironic-tempest-plugin. This includes 
master and stable branches. Previously all branches (master and stable) were 
pulling from the master branch of openstack/ironic and/or 
openstack/ironic-inspector to get the tempest-plugin code. Now they all pull 
from the master branch of openstack/ironic-tempest-plugin. Note: 
openstack/ironic-tempest-plugin will NOT have any stable branches, only master.
We will be removing all the tempest-plugin code from the master branch of 
openstack/ironic and openstack/ironic-inspector on Tuesday 19-Dec-2017. We will 
NOT be removing the tempest-plugin code from any stable branches. We (Ironic) 
didn't/don't use that code but since downstream consumers may we will leave it 
in place.
Any 3rd Party CI that are testing using the tempest-plugin code pulled from 
master will need to update their CI to now use openstack/ironic-tempest-plugin
Again we will be removing all the tempest-plugin code from the master branch of 
openstack/ironic and openstack/ironic-inspector on Tuesday 19-Dec-2017. If your 
CI depends on that code, please update to use the new 
openstack/ironic-tempest-plugin repository.


On Mon, Dec 18, 2017 at 8:33 AM, John Villalovos 
> wrote:


On Fri, Dec 15, 2017 at 7:27 AM, John Villalovos 
> wrote:
I wanted to send out a note to any 3rd Party CI or other users of the tempest 
plugin code inside either openstack/ironic or openstack/ironic-inspector. That 
code has been migrated to the openstack/ironic-tempest-plugin repository. We 
have been busily ( https://review.openstack.org/#/q/topic:ironic-tempest-plugin 
) migrating all of the projects to use this new repository.
If you have a 3rd Party CI or something else that is depending on the tempest 
plugin code please migrate it to use openstack/ironic-tempest-plugin.
We plan to remove the tempest plugin code on Tuesday 19-Dec-2017 from 
openstack/ironic and openstack/ironic-inspector, and then after that do 
backports of those changes to the stable branches.

After discussion on IRC in regards to back-porting to the stable branches. We 
will NOT backport the removal of the tempest plugin code as it could break 
distros and other consumers.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Openstack operational configuration improvement.

2017-12-18 Thread Arkady.Kanevsky
Flint,
I do support a UI method for solution management.
My point is that it is not a Horizon one, as it sits at a different layer.

From: Flint WALRUS [mailto:gael.ther...@gmail.com]
Sent: Monday, December 18, 2017 9:40 AM
To: Kanevsky, Arkady 
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Openstack operational configuration 
improvement.

Hi arkady,
Ok understood your point.
However, as an operator and administrator of a really large deployment, I 
disagree with this statement, as a lot of our company's admins and operators 
do rely heavily on the dashboard rather than the CLI for many daily tasks.
Not everyone is willing to go to the CLI each time they need to perform some 
relatively short task.
About the TripleO UI and configuration management, traceability could definitely 
be an addition to the array, with a column summarizing at least the last three 
modifications.
Even if the foundation provides deployment guidelines and reference designs, 
it shouldn't be something enforced, as every user will surely have different 
use cases.

Anyway, thanks everyone for your answers, that's really interesting to get your 
insights.

Le lun. 18 déc. 2017 à 16:20, 
> a écrit :
Flint,
Horizon is targeted at users, not administrators/operators.
The closest we have is the TripleO UI.
Any change to a node's configuration, from OpenStack down to the OS and 
hardware, needs to be recorded in whatever method was used to set up OpenStack, 
in order to be able to handle upgrades.
Thanks,
Arkady

From: Flint WALRUS 
[mailto:gael.ther...@gmail.com]
Sent: Monday, December 18, 2017 7:29 AM
To: 
openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Openstack operational configuration improvement.

Hi everyone, I don't really know if this list is the right one, but I'll place 
my bet on it :D
Here I go.
I have been managing OpenStack platforms for a long time now, and one thing 
that has always amazed me is the lack of comprehensive configuration management 
for services within Horizon.
Indeed, you can set and adapt pretty much everything within Horizon or the CLI 
except for the services' configuration.
So here is a proposal regarding this issue:
I think of it as a rework of the already existing System Information panel in 
the admin dashboard, along these lines:
Within the services tab, each service line would now be a clickable drop-down 
containing an additional sub-array named "configuration" and listing all the 
available configuration options for this service, with information such as:
- Current value: default or value. (Dynamically editable by simply clicking on 
it; writes the new value to the INI file.)
- Default value: the default sane value. (Not editable; the default value of 
the option.)
- Reload / Restart button. (A button telling the service to reload its 
configuration.)
- Description: None or a short excerpt. (Not editable; information about the 
option's meaning.)
- Documentation: None or a link to the option documentation. (Not editable.)
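
A minimal sketch of what one such option row could carry, purely illustrative 
(this data structure is hypothetical, not an existing Horizon API):

    # Hypothetical shape of a single configuration row in the proposed
    # panel; field names mirror the list above.
    option_row = {
        'name': 'debug',
        'current_value': True,    # editable in place, persisted to the INI file
        'default_value': False,   # read-only sane default
        'description': 'Enable verbose logging.',           # read-only
        'documentation': 'https://docs.openstack.org/...',  # read-only link
    }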

What do you think of it?

PS: If this discussion should go to the Horizon team rather than the operators' 
list, could someone help with that? I didn't find any related mailing list 
endpoint.
Thanks a lot.


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Arkady.Kanevsky
How about adding to our biannual questionnaire how often customers upgrade 
their OpenStack environments?

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, December 13, 2017 4:20 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Switching to longer development cycles

Ed Leafe wrote:
> On Dec 13, 2017, at 12:13 PM, Tim Bell  wrote:
> 
>> There is a risk that deployment to production is delayed, and therefore 
>> feedback is delayed and the wait for the ‘initial bug fixes before we deploy 
>> to prod’ gets longer.
> 
> There is always a rush at the Feature Freeze point in a cycle to get things 
> in, or they will be delayed for 6 months. With the year-long cycle, now 
> anything missing Feature Freeze will be delayed by a year. The long cycle 
> also means that a lot more time will be spent backporting things to the 
> current release, since people won’t be able to wait a whole year for some 
> improvements.
> 
> Maybe it’s just the dev in me, but I prefer shorter cycles (CD, anyone?).

Yes, I'll admit I'm struggling with that part of the proposal too. We could use 
intermediary releases but there would always be a "more important" release.

Is the "rush" at the end of the cycle still a thing those days ? From a release 
management perspective it felt like the pressure was reduced in recent cycles, 
with less and less FFEs. But that may be that PTLs have gotten better at 
denying them, not that the pressure is reduced now that we are past the hype 
peak...

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] thierry's longer dev cycle proposal

2017-12-13 Thread Arkady.Kanevsky
It is a sign of the maturity of OpenStack. With lots of deployments, most of 
them in production, the emphasis is shifting from rapid functionality additions 
to stability, manageability, and long-term operability.

-Original Message-
From: Melvin Hillsman [mailto:mrhills...@gmail.com] 
Sent: Wednesday, December 13, 2017 5:29 PM
To: Jeremy Stanley ; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] thierry's longer dev cycle proposal

I think this is a good opportunity to allow some stress relief to the developer 
community and offer space for more discussions with operators where some 
operators do not feel like they are bothering/bugging developers. I believe 
this is the main gain for operators; my personal opinion. In general I think 
the opportunity costs/gains are worth it for this and it is the responsibility 
of the community to make the change useful, as you mentioned in your original 
thread, Thierry. It is not a silver bullet for all of the issues folks have with 
the way things are done but I believe that if it does not hurt things and 
offers even a slight gain in some area it makes sense.

Any change is not going to satisfy/dis-satisfy 100% of the constituents.

-- 
Kind regards,

Melvin Hillsman
mrhills...@gmail.com
mobile: +1 (832) 264-2646
irc: mrhillsman

On 12/13/17, 4:39 PM, "Jeremy Stanley"  wrote:

On 2017-12-13 22:35:41 +0100 (+0100), Thierry Carrez wrote:
[...]
> It's not really fait accompli, it's just a proposal up for discussion at
> this stage. Which is the reason why I started the thread on -dev -- to
> check the sanity of the change from a dev perspective first. If it makes
> things harder and not simpler on that side, I don't expect the TC to
> proceed.
[...]

With my TC hat on, regardless of what impression the developer
community has on this, I plan to take subsequent operator and
end-user/app-dev feedback into account as well before making any
binding decisions (and expect other TC members feel the same).
-- 
Jeremy Stanley
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Arkady.Kanevsky
A lot of great points.
If we are switching to a 1-year cycle, do we also move summits/forums to once a 
year?
That impacts much more than developers.

-Original Message-
From: Matt Riedemann [mailto:mriede...@gmail.com] 
Sent: Wednesday, December 13, 2017 10:52 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Switching to longer development cycles

On 12/13/2017 10:17 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> Over the past year, it has become pretty obvious to me that our 
> self-imposed rhythm no longer matches our natural pace. It feels like 
> we are always running elections, feature freeze is always just around 
> the corner, we lose too much time to events, and generally the 
> impression that there is less time to get things done. Milestones in 
> the development cycles are mostly useless now as they fly past us too fast.
> A lot of other people reported that same feeling.

On the other hand, without community-wide imposed deadlines and milestones, we 
lose some motivation for getting things done by a specific time, which could 
mean the bigger and more complicated things drag on longer because there isn't 
a deadline. One could say that we just need to be more disciplined, but in an 
open source project where there is no boss at the top setting that deadline and 
holding people to it, it's hard to be that disciplined. The PTL can only ask 
people to work on priorities so much.

> 
> As our various components mature, we have less quick-paced feature 
> development going on. As more and more people adopt OpenStack, we are 
> more careful about not breaking them, which takes additional time. The 
> end result is that getting anything done in OpenStack takes more time 
> than it used to, but we have kept our cycle rhythm mostly the same.
> 
> Our development pace was also designed for fast development in a time 
> where most contributors were full time on OpenStack. But fewer and 
> fewer people have 100% of their time to dedicate to OpenStack upstream
> development: a lot of us now have composite jobs or have to 
> participate in multiple communities. This is a good thing, and it will 
> only accelerate as more and more OpenStack development becomes fueled 
> directly by OpenStack operators and users.
> 
> In another thread, John Dickinson suggested that we move to one-year 
> development cycles, and I've been thinking a lot about it. I now think 
> it is actually the right way to reconcile our self-imposed rhythm with 
> the current pace of development, and I would like us to consider 
> switching to year-long development cycles for coordinated releases as 
> soon as possible.
> 
> What it means:
> 
> - We'd only do one *coordinated* release of the OpenStack components 
> per year, and maintain one stable branch per year
> - We'd elect PTLs for one year instead of every 6 months

If we're running elections too often, we can do this without a change to a 
1-year dev cycle.

> - We'd only have one set of community goals per year
> - We'd have only one PTG with all teams each year

This is arguably going to impact productivity, not improve it - because without 
the face time to hash out the complicated things, they drag on longer.

> 
> What it does _not_ mean:
> 
> - It doesn't mean we'd release components less early or less often. 
> Any project that is in feature development or wants to ship changes 
> more often is encouraged to use the cycle-with-intermediary release 
> model and release very early and very often. It just means that the 
> minimum we'd impose for mature components is one release per year 
> instead of one release every 6 months.

I personally don't expect anyone to pick up these intermediate releases. 
I expect most consumers are going to pick up a coordinated release (several 
months or years after it's released), especially if that's what the distro 
vendors are going to be doing. So Nova could release once per quarter but I 
wouldn't expect anyone to pick it up except maybe hosting companies, but not 
even sure about that.

> 
> - It doesn't mean that we encourage slowing down and procrastination.
> Each project would be able to set its own pace. We'd actually 
> encourage teams to set objectives for the various (now longer) 
> milestones in the cycle, and organize virtual sprints to get specific 
> objectives done as a group. Slowing down the time will likely let us 
> do a better job at organizing the work that is happening within a cycle.

As I said above, encouraging teams to do this and teams actually being 
disciplined enough to do it are different things. Maybe if we actually did the 
runways / slots idea from years past but as I've been reminded by people many 
times over the years, you can't force people to work on someone else's 
priorities - people are going to scratch their itch.

> 
> - It doesn't mean that teams can only meet in-person once a year.
> Summits would still provide a venue for team members to have an 
> in-person 

Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Arkady.Kanevsky
Thierry,
Thanks for starting this discussion.
I support the move to a 1-year cycle. With OpenStack's maturity and adoption it 
is a natural transformation.

However, we also need to consider previous releases, support for them, and "." 
(point) releases for them.

Also, projects that are "in early stages" can continue with a faster cadence, 
but they will need to be released in sync with the latest released "core".

Thanks,
Arkady

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, December 13, 2017 10:17 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [all] Switching to longer development cycles

Hi everyone,

Over the past year, it has become pretty obvious to me that our self-imposed 
rhythm no longer matches our natural pace. It feels like we are always running 
elections, feature freeze is always just around the corner, we lose too much 
time to events, and generally the impression that there is less time to get 
things done. Milestones in the development cycles are mostly useless now as 
they fly past us too fast.
A lot of other people reported that same feeling.

As our various components mature, we have less quick-paced feature development 
going on. As more and more people adopt OpenStack, we are more careful about 
not breaking them, which takes additional time. The end result is that getting 
anything done in OpenStack takes more time than it used to, but we have kept 
our cycle rhythm mostly the same.

Our development pace was also designed for fast development in a time where 
most contributors were full time on OpenStack. But fewer and fewer people have 
100% of their time to dedicate to OpenStack upstream
development: a lot of us now have composite jobs or have to participate in 
multiple communities. This is a good thing, and it will only accelerate as more 
and more OpenStack development becomes fueled directly by OpenStack operators 
and users.

In another thread, John Dickinson suggested that we move to one-year 
development cycles, and I've been thinking a lot about it. I now think it is 
actually the right way to reconcile our self-imposed rhythm with the current 
pace of development, and I would like us to consider switching to year-long 
development cycles for coordinated releases as soon as possible.

What it means:

- We'd only do one *coordinated* release of the OpenStack components per year, 
and maintain one stable branch per year
- We'd elect PTLs for one year instead of every 6 months
- We'd only have one set of community goals per year
- We'd have only one PTG with all teams each year

What it does _not_ mean:

- It doesn't mean we'd release components less early or less often. Any project 
that is in feature development or wants to ship changes more often is 
encouraged to use the cycle-with-intermediary release model and release very 
early and very often. It just means that the minimum we'd impose for mature 
components is one release per year instead of one release every 6 months.

- It doesn't mean that we encourage slowing down and procrastination.
Each project would be able to set its own pace. We'd actually encourage teams 
to set objectives for the various (now longer) milestones in the cycle, and 
organize virtual sprints to get specific objectives done as a group. Slowing 
down the time will likely let us do a better job at organizing the work that is 
happening within a cycle.

- It doesn't mean that teams can only meet in-person once a year.
Summits would still provide a venue for team members to have an in-person 
meeting. I also expect a revival of the team-organized midcycles to replace the 
second PTG for teams that need or want to meet more often.

- It doesn't mean less emphasis on common goals. While we'd set goals only once 
per year, I hope that having one full year to complete those will let us tackle 
more ambitious goals, or more of them in parallel.

- It doesn't simplify upgrades. The main issue with the pace of upgrading is 
not the rhythm, it's the imposed timing. Being forced to upgrade every year is 
only incrementally better than being forced to upgrade every 6 months. The real 
solution there is better support for skipping releases that don't matter to 
you, not longer development cycles.

- It doesn't give us LTS. The cost of maintaining branches is not really due to 
the number of them we need to maintain in parallel, it is due to the age of the 
oldest one. We might end up being able to support branches for slightly longer 
as a result of having to maintain less of them in parallel, but we will not 
support our stable branches for a significantly longer time as a direct result 
of this change. The real solution here is being discussed by the (still 
forming) LTS SIG and involves having a group step up to continue to maintain 
some branches past EOL.

Why one year ?

Why not switch to 9 months ? Beyond making the math a lot easier, this has 
mostly to do with 

Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Arkady.Kanevsky
See you there, Eric.

From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, October 30, 2017 10:58 AM
To: Matt Riedemann 
Cc: OpenStack Development Mailing List ; 
openstack-operators 
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary



On Oct 30, 2017 11:53 AM, "Matt Riedemann" 
> wrote:
On 9/20/2017 9:42 AM, arkady.kanev...@dell.com 
wrote:
Lee,
I can chair meeting in Sydney.
Thanks,
Arkady

Arkady,

Are you actually moderating the forum session in Sydney because the session 
says Eric McCormick is the session moderator:

I submitted it so it gets my name on it. I think Arkady and I are going to do 
it together.

https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were told to 
ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't involved 
in fast-forward upgrades, as far as I know anyway.

So whoever is moderating this session, can you please create an etherpad and 
get it linked to the wiki?

https://wiki.openstack.org/wiki/Forum/Sydney2017

I'll have the etherpad up today and pass it along here and on the wiki.



--

Thanks,

Matt


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Arkady.Kanevsky
The second seems better suited for per-driver requirement handling, per HW 
type and per function.
Which option is easier to handle for per-dependency containers in the future?


Thanks,
Arkady

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Monday, October 30, 2017 2:47 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [ironic] [requirements] moving driver dependencies 
to global-requirements?

Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:
> Hi all,
> 
> So far driver requirements [1] have been managed outside of 
> global-requirements. 
> This was mostly necessary because some dependencies were not on PyPI. 
> This is no longer the case, and I'd like to consider managing them 
> just like any other dependencies. Pros:
> 1. making these dependencies (and their versions) more visible for 
> packagers 2. following the same policies for regular and driver 
> dependencies 3. ensuring co-installability of these dependencies with 
> each other and with the remaining openstack 4. potentially using 
> upper-constraints in 3rd party CI to test what packagers will probably 
> package 5. we'll be able to finally create a tox job running unit 
> tests with all these dependencies installed (FYI these often breaks in 
> RDO CI)
> 
> Cons:
> 1. more work for both the requirements team and the vendor teams 2. 
> inability to use ironic release notes to explain driver requirements 
> changes 3. any objections from the requirements team?
> 
> If we make this change, we'll drop driver-requirements.txt, and will 
> use setuptools extras to list then in setup.cfg (this way is supported 
> by g-r) similar to what we do in ironicclient [2].
> 
> We either will have one list:
> 
> [extras]
> drivers =
>sushy>=a.b
>python-dracclient>=x.y
>python-prolianutils>=v.w
>...
> 
> or (and I like this more) we'll have a list per hardware type:
> 
> [extras]
> redfish =
>sushy>=a.b
> idrac =
>python-dracclient>=x.y
> ilo =
>...
> ...
> 
> WDYT?

The second option is what I would expect.

Doug

> 
> [1] https://github.com/openstack/ironic/blob/master/driver-requirements.txt
> [2] https://github.com/openstack/python-ironicclient/blob/master/setup.cfg#L115
> 
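
For what it's worth, the consumption side of the extras approach is a one-liner 
for packagers and operators; a sketch, assuming the per-hardware-type layout 
quoted above (the extra names and the version pins are illustrative):

    # Install ironic together with only the driver dependencies you need:
    pip install ironic[redfish]          # pulls in sushy>=a.b
    pip install ironic[redfish,idrac]    # several extras can be combined

This is also what makes the per-hardware-type split attractive: nothing extra 
is installed for hardware types a deployment does not use.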

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-21 Thread Arkady.Kanevsky
Congrats to all

From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Saturday, October 21, 2017 12:00 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all] [elections] Technical Committee Election 
Results

An awesome team! Thanks for willing to continue the hard work of on the TC.

Cheers,

Edgar Magana

On Oct 21, 2017, at 6:56 AM, Amy Marrich 
> wrote:
Congrats everyone!

On Fri, Oct 20, 2017 at 6:59 PM, Kendall Nelson 
> wrote:
Hello Everyone :)

Please join me in congratulating the 6 newly elected members of the Technical 
Committee (TC)!

Colleen Murphy (cmurphy)
Doug Hellmann (dhellmann)
Emilien Macchi (emilienm)
Jeremy Stanley (fungi)
Julia Kreger (TheJulia)
Paul Belanger (pabelanger)

Full results: 
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_ce86063991ef8aae

Election process details and results are also available here: 
https://governance.openstack.org/election/

Thank you to all of the candidates, having a good group of candidates helps 
engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote. We need to ensure 
your voice is heard.

Thank you for another great round.
-Kendall Nelson (diablo_rojo)

[1] 
https://review.openstack.org/#/c/513881/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] https://www.openstack.org/marketplace/ web changes

2017-10-17 Thread Arkady.Kanevsky
Danny and team,
What is the process for updating the web page template for detailed "distro" 
information to align with the updated project navigator?
For example, to add AODH, PANKO, Tempest, and Rally as projects for a distro 
solution offering?
Just looking for the process of requesting/making changes to the web template.
Then we can talk about any additional info that RefStack collects for it.

In order to be listed in the marketplace you must submit Tempest results, so it 
is certainly part of the distro/appliance.

Thanks,
Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] It's time...

2017-10-04 Thread Arkady.Kanevsky
Tom,
Thank you for everything.
Arkady

From: Rico Lin [mailto:rico.lin.gua...@gmail.com]
Sent: Wednesday, October 04, 2017 9:45 AM
To: Tom Fifield 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] It's time...

Tom,

It has been my pleasure working with you and learning from you.
Thank you for what you have done for the community and open source.
Hopefully we will have chances to work together in the future, and of course, 
see you in Sydney!


2017-10-04 22:34 GMT+08:00 Tim Bell >:
Tom,

All the best for the future. I will happily share a beverage or two in Sydney, 
reflect on the early days and toast the growth of the community that you have 
been a major contributor to.

Tim

-Original Message-
From: Tom Fifield >
Date: Wednesday, 4 October 2017 at 16:25
To: openstack-operators 
>
Subject: [Openstack-operators] It's time...

Hi all,

Tom here, on a personal note.

It's quite fitting that this November our summit is in Australia :)

I'm hoping to see you there because after being part of 15 releases, and
travelling the equivalent of a couple of round trips to the moon to
witness OpenStack grow around the world, the timing is right for me to
step down as your Community Manager.

We've had an incredible journey together, culminating in the healthy
community we have today. Across more than 160 countries, users and
developers collaborate to make clouds better for the work that matters.
The diversity of use is staggering, and the scale of resources being run
is quite significant. We did that :)


Behind the scenes, I've spent the past couple of months preparing to
transition various tasks to other members of the Foundation staff. If
you see a new name behind an openstack.org email 
address, please give
them due attention and care - they're all great people. I'll be around
through to year end to shepherd the process, so please ping me if you
are worried about anything.

Always remember, you are what makes OpenStack. OpenStack changes and
thrives based on how you feel and what work you do. It's been a
privilege to share the journey with you.



So, my plan? After a decade of diligent effort in organisations
euphemistically described as "minimally-staffed", I'm looking forward to
taking a decent holiday. Though, if you have a challenge interesting
enough to wrest someone from a tropical beach or a misty mountain top ... ;)


There are a lot of you out there to whom I remain indebted. Stay in
touch to make sure your owed drinks make it to you!

+886 988 33 1200
t...@tomfifield.net
https://www.linkedin.com/in/tomfifield
https://twitter.com/TomFifield


Regards,



Tom

___
OpenStack-operators mailing list

OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Arkady.Kanevsky
There are some loose ends that Saverio is correctly bringing up.
These are perfect points to discuss at the Forum.
I suggest we start an etherpad to collect an agenda for it.

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Friday, September 29, 2017 7:04 AM
To: Saverio Proto 
Cc: OpenStack Development Mailing List (not for usage questions) 
; openstack-operators@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary

On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
> 
> sorry I could not make it to the PTG.
> 
> I have an idea that I want to share with the community. I hope this is 
> a good place to start the discussion.
> 
> After years of Openstack operations, upgrading releases from Icehouse 
> to Newton, the feeling is that the control plane upgrade is doable.
> 
> But it is also a lot of pain to upgrade all the compute nodes. This 
> really causes downtime to the VMs that are running.
> I can't always make live migrations, sometimes the VMs are just too 
> big or too busy.
> 
> It would be nice to guarantee the ability to run an updated control 
> plane with compute nodes up to N-3 Release.
> 
> This way even if we have to upgrade the control plane every 6 months, 
> we can keep a longer lifetime for compute nodes. Basically we can 
> never upgrade them until we decommission the hardware.
> 
> If there are new features that require updated compute nodes, we can 
> always organize our datacenter in availability zones, not scheduling 
> new VMs to those compute nodes.
> 
> To my understanding this means having compatibility at least for the 
> nova-compute agent and the neutron-agents running on the compute node.
> 
> Is it a very bad idea?
> 
> Do other people feel like me that upgrading all the compute nodes is 
> also a big part of the burden regarding the upgrade ?

Yeah, I don't think the Nova community would ever be able or willing to verify 
and maintain that level of backward compatibility. Ultimately there's nothing 
stopping you from upgrading Nova on the computes while also keeping instance 
running.

You only run into issues with kernel, OVS and QEMU (for n-cpu with
libvirt) etc upgrades that require reboots or instances to be restarted (either 
hard or via live-migration). If you're unable or just unwilling to take 
downtime for instances that can't be moved when these components require an 
update then you have bigger problems IMHO.

Regards,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76
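
The N to N+1 compatibility window Lee refers to is what nova's RPC version 
pinning supports during a rolling upgrade; a minimal nova.conf sketch, assuming 
the [upgrade_levels] options (verify against the documentation for your 
release before relying on this):

    [upgrade_levels]
    # Pin RPC message versions so upgraded controllers keep talking to
    # not-yet-upgraded computes; 'auto' negotiates the lowest deployed
    # service version.
    compute = auto

This is what makes control plane first, computes later workable within one 
release jump, but it does not stretch to the N-3 window proposed above.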
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Arkady.Kanevsky
Erik,
Thanks for setting up a session for it.
Glad it is driven by Operators.
I will be happy to work with you on the session and run it with you.
Thanks,
Arkady

From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Thursday, September 28, 2017 7:40 AM
To: Lee Yarwood 
Cc: OpenStack Development Mailing List ; 
openstack-operators 
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary


On Sep 28, 2017 4:31 AM, "Lee Yarwood" 
> wrote:
On 20-09-17 14:56:20, arkady.kanev...@dell.com 
wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
Thanks Arkady!

FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:

http://forumtopics.openstack.org/cfp/details/19

You might want to reach out to him to help craft the agenda for the
session based on our discussions in Denver.
I just didn't want to risk it not getting in, and it was on our etherpad as 
well. I'm happy to help, but would love for you guys to lead.

Thanks,
Erik


Thanks again,

Lee
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Arkady.Kanevsky
Lee,
I can chair the meeting in Sydney.
Thanks,
Arkady

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Wednesday, September 20, 2017 8:29 AM
To: openstack-...@lists.openstack.org; openstack-operators@lists.openstack.org
Subject: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG 
summary

My thanks again to everyone who attended and contributed to the skip-level 
upgrades track over the first two days of last week's PTG.
I've included a short summary of our discussions below with a list of agreed 
actions for Queens at the end.

tl;dr s/skip-level/fast-forward/g

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Monday - Define and rename
--

During our first session [1] we briefly discussed the history of the skip-level 
upgrades effort within the community and the various misunderstandings that 
have arisen from previous conversations around this topic at past events.

We agreed that at present the only way to perform upgrades between N and
N+>=2 releases of OpenStack was to upgrade linearly through each major
release, without skipping between the starting and target release of the 
upgrade.

This is contrary to previous discussions on the topic where it had been 
suggested that releases could be skipped if DB migrations for these releases 
were applied in bulk later in the process. As projects within the community 
currently offer no such support for this it was agreed to continue to use the 
supported N to N+1 upgrade jumps, albeit in a minimal, offline way.

The name skip-level upgrades has had an obvious role to play in the confusion 
here and as such the renaming of this effort was discussed at length. Various 
suggestions are listed on the pad but for the time being I'm going to stick 
with the basic `fast-forward upgrades` name (FFOU, OFF, BOFF, FFUD etc were all 
close behind). This removes any notion of releases being skipped and should 
hopefully avoid any further confusion in the future.
 
Support by the projects for offline upgrades was then discussed with a recent 
Ironic issue [2] highlighted as an example where projects have required 
services to run before the upgrade could be considered complete. The additional 
requirement of ensuring both workloads and the data plane remain active during 
the upgrade was also then discussed. It was agreed that both the 
`supports-upgrades` [3] and `supports-accessible-upgrades` [4] tags should be 
updated to reflect these requirements for fast-forward upgrades.

Given the above it was agreed that this new definition of what fast-forward 
upgrades are and the best practices associated with them should be clearly 
documented somewhere. Various operators in the room highlighted that they would 
like to see a high-level document outlining the steps required to achieve this, 
hopefully written by someone with past experience of running this type of 
upgrade.

I failed to capture the names of the individuals who were interested in helping 
out here. If anyone is interested in helping out, please feel free to add 
your name to the actions either at the end of this mail or at the bottom of the 
pad.

In the afternoon we reviewed the current efforts within the community to 
implement fast-forward upgrades, covering TripleO, Charms (Juju) and 
openstack-ansible. While this was insightful to many in the room there didn't 
appear to be any obvious areas of collaboration outside of sharing best 
practice and defining the high level flow of a fast-forward upgrade.

Tuesday - NFV, SIG and actions
--

Tuesday started with a discussion around NFV considerations with fast-forward 
upgrades. These ranged from the previously mentioned need for the data plane to 
remain active during the upgrade to the restricted nature of upgrades in NFV 
environments in terms of time and number of reboots.

It was highlighted that there are some serious as yet unresolved bugs in Nova 
regarding the live migration of instances using SR-IOV devices.
This currently makes the moving of workloads either prior to or during the 
upgrade particularly difficult.

Rollbacks were also discussed and the need for any best practice documentation 
around fast-forward upgrades to include steps to allow the recovery of 
environments if things fail was also highlighted.

We then revisited an idea from the first day of finding or creating a SIG for 
this effort to call home. It was highlighted that there was a suggestion in the 
packaging room to create a Deployment / Lifecycle SIG.
After speaking with a few individuals later in the week I've taken the action 
to reach out on the openstack-sigs mailing list for further input.

Finally, during a brief discussion on ways we could collaborate and share 
tooling for fast-forward upgrades a new tool to migrate configuration files 
between N to N+>=2 releases was introduced [5]. While interesting it was seen 
as a more generic utility that could also 

Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Arkady.Kanevsky
Lee,
I can chair meeting in Sydney.
Thanks,
Arkady

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Wednesday, September 20, 2017 8:29 AM
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG 
summary

My thanks again to everyone who attended and contributed to the skip-level 
upgrades track over the first two days of last weeks PTG.
I've included a short summary of our discussions below with a list of agreed 
actions for Queens at the end.

tl;dr s/skip-level/fast-forward/g

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Monday - Define and rename
--

During our first session [1] we briefly discussed the history of the skip-level 
upgrades effort within the community and the various misunderstandings that 
have arisen from previous conversations around this topic at past events.

We agreed that at present the only way to perform upgrades between N and
N+>=2 releases of OpenStack was to upgrade linearly through each major
release, without skipping between the starting and target release of the 
upgrade.

This is contrary to previous discussions on the topic where it had been 
suggested that releases could be skipped if DB migrations for these releases 
were applied in bulk later in the process. As projects within the community 
currently offer no such support for this it was agreed to continue to use the 
supported N to N+1 upgrade jumps, albeit in a minimal, offline way.

The name skip-level upgrades has had an obvious role to play in the confusion 
here and as such the renaming of this effort was discussed at length. Various 
suggestions are listed on the pad but for the time being I'm going to stick 
with the basic `fast-forward upgrades` name (FFOU, OFF, BOFF, FFUD etc were all 
close behind). This removes any notion of releases being skipped and should 
hopefully avoid any further confusion in the future.
 
Support by the projects for offline upgrades was then discussed with a recent 
Ironic issue [2] highlighted as an example where projects have required 
services to run before the upgrade could be considered complete. The additional 
requirement of ensuring both workloads and the data plane remain active during 
the upgrade was also then discussed. It was agreed that both the 
`supports-upgrades` [3] and `supports-accessible-upgrades` [4] tags should be 
updated to reflect these requirements for fast-forward upgrades.

Given the above it was agreed that this new definition of what fast-forward 
upgrades are and the best practices associated with them should be clearly 
documented somewhere. Various operators in the room highlighted that they would 
like to see a high level document outline the steps required to achieve this, 
hopefully written by someone with past experience of running this type of 
upgrade.

I failed to capture the names of the individuals who were interested in helping 
out here. If anyone is interested in helping out here please feel free to add 
your name to the actions either at the end of this mail or at the bottom of the 
pad.

In the afternoon we reviewed the current efforts within the community to 
implement fast-forward upgrades, covering TripleO, Charms (Juju) and 
openstack-ansible. While this was insightful to many in the room there didn't 
appear to be any obvious areas of collaboration outside of sharing best 
practice and defining the high level flow of a fast-forward upgrade.

Tuesday - NFV, SIG and actions
--

Tuesday started with a discussion around NFV considerations with fast-forward 
upgrades. These ranged from the previously mentioned need for the data plane to 
remain active during the upgrade to the restricted nature of upgrades in NFV 
environments in terms of time and number of reboots.

It was highlighted that there are some serious as yet unresolved bugs in Nova 
regarding the live migration of instances using SR-IOV devices.
This currently makes the moving of workloads either prior to or during the 
upgrade particularly difficult.

Rollbacks were also discussed and the need for any best practice documentation 
around fast-forward upgrades to include steps to allow the recovery of 
environments if things fail was also highlighted.

We then revisited an idea from the first day of finding or creating a SIG for 
this effort to call home. It was highlighted that there was a suggestion in the 
packaging room to create a Deployment / Lifecycle SIG.
After speaking with a few individuals later in the week I've taken the action 
to reach out on the openstack-sigs mailing list for further input.

Finally, during a brief discussion on ways we could collaborate and share 
tooling for fast-forward upgrades a new tool to migrate configuration files 
between N to N+>=2 releases was introduced [5]. While interesting it was seen 
as a more generic utility that could also 

Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Arkady.Kanevsky
Lee,
I can chair meeting in Sydney.
Thanks,
Arkady

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Wednesday, September 20, 2017 8:29 AM
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG 
summary

My thanks again to everyone who attended and contributed to the skip-level 
upgrades track over the first two days of last weeks PTG.
I've included a short summary of our discussions below with a list of agreed 
actions for Queens at the end.

tl;dr s/skip-level/fast-forward/g

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Monday - Define and rename
--

During our first session [1] we briefly discussed the history of the skip-level 
upgrades effort within the community and the various misunderstandings that 
have arisen from previous conversations around this topic at past events.

We agreed that at present the only way to perform upgrades between N and
N+>=2 releases of OpenStack was to upgrade linearly through each major
release, without skipping any of the releases between the starting and target 
release of the upgrade.

This is contrary to previous discussions on the topic where it had been 
suggested that releases could be skipped if DB migrations for these releases 
were applied in bulk later in the process. As projects within the community 
currently offer no such support for this it was agreed to continue to use the 
supported N to N+1 upgrade jumps, albeit in a minimal, offline way.
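
As a rough illustration of that agreed flow (and nothing more than an 
illustration), a fast-forward run across a three-release jump could look like 
the sketch below. The two helper commands are hypothetical placeholders for 
whatever a given deployment tool actually provides:

    # Hypothetical sketch only: control-plane services stay down while
    # each intermediate N+1 step is applied offline, and services are
    # only restarted once the target release is reached.
    import subprocess

    RELEASES = ["newton", "ocata", "pike"]  # starting release first

    def run(*cmd):
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("systemctl", "stop", "openstack-*")         # APIs down, data plane up
    for release in RELEASES[1:]:
        run("install-openstack-packages", release)  # hypothetical packaging step
        run("run-db-syncs", release)                # per-project DB migrations
    run("systemctl", "start", "openstack-*")        # back up at the target release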

The name skip-level upgrades has had an obvious role to play in the confusion 
here and as such the renaming of this effort was discussed at length. Various 
suggestions are listed on the pad but for the time being I'm going to stick 
with the basic `fast-forward upgrades` name (FFOU, OFF, BOFF, FFUD etc were all 
close behind). This removes any notion of releases being skipped and should 
hopefully avoid any further confusion in the future.
 
Support by the projects for offline upgrades was then discussed with a recent 
Ironic issue [2] highlighted as an example where projects have required 
services to run before the upgrade could be considered complete. The additional 
requirement of ensuring both workloads and the data plane remain active during 
the upgrade was also then discussed. It was agreed that both the 
`supports-upgrades` [3] and `supports-accessible-upgrades` [4] tags should be 
updated to reflect these requirements for fast-forward upgrades.

Given the above it was agreed that this new definition of what fast-forward 
upgrades are and the best practices associated with them should be clearly 
documented somewhere. Various operators in the room highlighted that they would 
like to see a high level document outlining the steps required to achieve this, 
hopefully written by someone with past experience of running this type of 
upgrade.

I failed to capture the names of the individuals who were interested in helping 
out here. If anyone is interested, please feel free to add 
your name to the actions either at the end of this mail or at the bottom of the 
pad.

In the afternoon we reviewed the current efforts within the community to 
implement fast-forward upgrades, covering TripleO, Charms (Juju) and 
openstack-ansible. While this was insightful to many in the room there didn't 
appear to be any obvious areas of collaboration outside of sharing best 
practice and defining the high level flow of a fast-forward upgrade.

Tuesday - NFV, SIG and actions
--

Tuesday started with a discussion around NFV considerations with fast-forward 
upgrades. These ranged from the previously mentioned need for the data plane to 
remain active during the upgrade to the restricted nature of upgrades in NFV 
environments in terms of time and number of reboots.

It was highlighted that there are some serious as yet unresolved bugs in Nova 
regarding the live migration of instances using SR-IOV devices.
This currently makes the moving of workloads either prior to or during the 
upgrade particularly difficult.

Rollbacks were also discussed and the need for any best practice documentation 
around fast-forward upgrades to include steps to allow the recovery of 
environments if things fail was also highlighted.

We then revisited an idea from the first day of finding or creating a SIG for 
this effort to call home. It was highlighted that there was a suggestion in the 
packaging room to create a Deployment / Lifecycle SIG.
After speaking with a few individuals later in the week I've taken the action 
to reach out on the openstack-sigs mailing list for further input.

Finally, during a brief discussion on ways we could collaborate and share 
tooling for fast-forward upgrades a new tool to migrate configuration files 
between N and N+>=2 releases was introduced [5]. While interesting, it was seen 
as a more generic utility that could also 

Re: [openstack-dev] [interop][refstack] proposal for extending published and submitted info

2017-09-10 Thread Arkady.Kanevsky
Fellow interop members,
I would like to request that we consider adding information on what was tested 
for interop.
Specifically, if you are listed under distros and appliances it would be good to 
be able to specify what underlying HW was used for interop testing.
I do not propose that as a requirement for submission but as an ability for 
submitters to specify it if they want to.
Why?
1. HW vendors have something to list in the submission.
2. If you have OpenStack solutions with multiple distro options, as is common 
for HW vendors, it allows you to provide multiple submissions and entries at 
the marketplace.
3. It provides helpful information for consumers of OpenStack when they use 
marketplace data for their decision making, especially when they are choosing 
both HW and distro.
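
To picture the proposal, an optional hardware section on a submission record 
could look something like the sketch below; every field name here is invented 
for illustration and is not part of any existing RefStack schema:

    # Invented field names, purely to illustrate the proposal; this is
    # not the existing RefStack submission schema.
    submission = {
        "product": "Acme OpenStack Distro 3.0",
        "guideline": "2017.01",
        "target": "platform",
        # the new, optional section proposed above:
        "hardware": {
            "servers": "Acme R740, 2x 20-core CPUs, 256GB RAM",
            "nics": "dual-port 25GbE",
            "storage": "8x 1.6TB NVMe SSD",
        },
    }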

I will be happy to generate a blueprint if the team thinks it is warranted.
Thanks,
Arkady
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Team dinner at the PTG?

2017-08-28 Thread Arkady.Kanevsky
Great. See you all there.

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Monday, August 28, 2017 1:07 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] Team dinner at the PTG?

And the final day is Monday, Sep 11th. See you there! :)

On 08/23/2017 03:46 PM, Dmitry Tantsur wrote:
> Hi folks!
> 
> We're trying to organize an informal team meeting at some place, 
> probably with burgers and beer, in Denver. Note that it won't be sponsored.
> 
> Please vote in the Doodle https://doodle.com/poll/nvavg9ab9ebq2e4v 
> about the days you're available.
> 
> If you're local, we need your help finding a good place to go. Please 
> ping Julia
> (TheJulia) or myself (dtantsur) on IRC if you're willing to help.
> 
> Thanks,
> Dmitry


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Interop-wg] [refstack][interop-wg] Non-candidacy for RefStack PTL

2017-08-02 Thread Arkady.Kanevsky
Hear, hear.
Thanks Catherine for a great job for many years.

From: Egle Sigler [mailto:ushnish...@hotmail.com]
Sent: Wednesday, August 02, 2017 3:54 PM
To: Catherine Cuong Diep ; openstack-dev@lists.openstack.org; 
interop...@lists.openstack.org
Subject: Re: [openstack-dev] [Interop-wg] [refstack][interop-wg] Non-candidacy 
for RefStack PTL


Thank you Catherine for all your outstanding leadership and work on RefStack 
and Interop (defcore).



-Egle


From: Catherine Cuong Diep >
Sent: Wednesday, August 2, 2017 2:15 PM
To: 
openstack-dev@lists.openstack.org; 
interop...@lists.openstack.org
Subject: [Interop-wg] [openstack-dev][refstack][interop-wg] Non-candidacy for 
RefStack PTL


Hi Everyone,

As I had announced in the RefStack IRC meeting a few weeks ago, I will not run 
for RefStack PTL in the upcoming cycle. I have been PTL for the last 2 years 
and it is time to pass the torch to a new leader.

I would like to thank everyone for your support and contribution to make the 
RefStack project and interoperability testing a reality. We would not be where 
we are today without your
commitment and dedication.

I will still be around to help the project and to work with the next PTL for a 
smooth transition.

Catherine Diep
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tripleo] Queens PTL candidacy

2017-08-02 Thread Arkady.Kanevsky
+1

-Original Message-
From: Alex Schultz [mailto:aschu...@redhat.com] 
Sent: Wednesday, August 02, 2017 8:56 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [elections][tripleo] Queens PTL candidacy

I would like to nominate myself for the TripleO PTL role for the Queens cycle.

I have been a contributor to various OpenStack projects since Liberty. I have 
spent most of my time working on the deployment of OpenStack and with the 
engineers who deploy it.  As many of you know, I believe the projects we work 
on should simplify workflows and improve the end user's lives. During my time 
as Puppet OpenStack PTL, I have promoted efforts to simplify and establishing 
reusable patterns and best practices. I feel confident that TripleO is on the 
right path and hope to continue to lead it in the right direction.

For the last few cycles we have moved TripleO forwards and improved not only 
TripleO itself, but have provided additional tooling around deploying and 
managing OpenStack. As we look forward to the Queens cycle, it is important to 
recognize the work we have done and can continue to improve on.

* Improving deployment of containerized services.
  We started the effort to switch over to containerized services being deployed
  with TripleO as part of the Pike cycle and we need to finalize the last few
  services. As we start the transition to including Kubernetes, we need to be
  mindful of the transition and make sure we evaluate and leverage already
  existing solutions.
* Continue making the deployers' lives easier.
  The recent cycles have been full of efforts to allow users to do more with
  TripleO. With the work to expose composable roles, composable networks and
  containerization we have added additional flexibility for the deployment
  engineers to be able to build out architectures needed for the end user.
  That being said, there are still efforts to be done to make the deployment
  process less error prone and more user friendly.
* Continued improvement of CI
  The process to transition over to tripleo-quickstart has made excellent
  progress over the last few cycles. We need to continue to refine the steps
  to ensure that Developers can reuse the work and be able to quickly and
  easily troubleshoot when things break down.  Additionally we need to make
  sure that we can classify repeated failures and work to address them quickly
  so as not to hold up bugs and features.
* Improve visibility of the project status
  As part of the Queens cycle, I would like to devote some time to capturing
  metrics and information about the status of the various projects under the
  TripleO umbrella. We've been doing lots of work, but I think it would be
  beneficial for us to know where this work has been occurring. I'm hoping to
  work on some of the reporting around the status of our CI, bugs and reviews
  to be able to see where we could use some more efforts to hopefully improve
  our development velocities.

Thanks,
Alex Schultz
irc: mwhahaha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Wiki

2017-07-03 Thread Arkady.Kanevsky
Most Google searches will pick up wiki pages, so people will view the wiki as 
the current state of projects.

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Monday, July 03, 2017 9:30 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tc] Wiki

Flavio Percoco wrote:
> On 03/07/17 13:58 +0200, Thierry Carrez wrote:
>> Flavio Percoco wrote:
>>> Sometimes I wonder if we still need to maintain a Wiki. I guess some 
>>> projects still use it but I wonder if the use they make of the Wiki 
>>> could be moved somewhere else.
>>>
>>> For example, in the TC we use it for the Agenda but I think that 
>>> could be moved to an etherpad. Things that should last forever 
>>> should be documented somewhere (project repos, governance repo in 
>>> the TC case) where we can actually monitor what goes in and easily 
>>> clean up.
>>
>> This is a complete tangent, but I'll bite :) We had a thorough 
>> discussion about that last year, summarized at:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-June/096481.h
>> tml
>>
>> TL;DR was that while most authoritative content should (and has been
>> mostly) moved off the wiki, it's still useful as a cheap publication 
>> platform for teams and workgroups, somewhere between a git repository 
>> with a docs job and an etherpad.
>>
>> FWIW the job of migrating authoritative things off the wiki is still 
>> on-going. As an example, Thingee is spearheading the effort to move 
>> the "How to Contribute" page and other first pointers to a reference 
>> website (see recent thread about that).
> 
> I guess the short answer is that we hope one day we won't need it. I 
> certainly do.
> 
> What would happen if we make the wiki read-only? Would that break 
> people's workflows?
> 
> Do we know what teams modify the wiki more often and what it is they 
> do there?

The data is publicly available (see recent changes on the wiki). Most ops 
workgroups heavily rely on the wiki, as well as a significant number of 
upstream project teams and workgroups. Developers are clearly not the main 
target.

You can dive back into the original analysis etherpad if you're interested:

https://etherpad.openstack.org/p/wiki-use-cases

Things that are stroked out are things we moved to reference websites since 
then.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Arkady.Kanevsky
Right on the point, Rocky (no wonder you have a release named after you).

We also need to update https://www.openstack.org/software/project-navigator/
to align with whatever decision is agreed upon.
And we had better make the decision stick and not change it again in a year or 
two.
Arkady

-Original Message-
From: Rochelle Grober [mailto:rochelle.gro...@huawei.com] 
Sent: Thursday, June 15, 2017 3:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

OK.  So, our naming is like branding.  We are techies -- not good at marketing. 
 But, gee, the foundation has a marketing team.  And they end up fielding a lot 
of the confusing questions from companies not deeply entrenched in the 
OpenStack Dev culture.  Perhaps it would be worth explaining to the marketing 
team what we are trying to do and letting them suggest some branding words.

What we would need to do pretty much goes back to Chris and Gord's emails about 
answering the questions of what we mean by "Openstack Project" and  "projects 
we allow to use our infrastructure in pursuit of something that somehow works 
with OpenStack projects".  If we provide marketing with a solid definition of 
what is an OpenStack project (we have that one down fairly well, but they might 
ask about core or other things we haven't debated in a while), and provide them 
with what those other projects have in common besides being hosted by us, they 
might come up with something that works for them and is ok for us.  Remember, 
they get hit with it a lot more than we do at this point.

So what we need is:

* detailed definition of "OpenStack Project" (maybe based on answering those 
questions Chris proposed plus others)
* good definition of what the "others" hosted on our infrastructure are/are 
expected to be
* removal of "big tent" from everywhere (aside/non sequitur -- There was a doge 
of Venice that got deposed for treason and his visage and name were eradicated 
from all buildings, documents, etc.  He also happened to be the inventor of the 
chastity belt)
* introduce the marketing guys to the definitions and the branding issue
* call in OpenStack projects and Others until we have a reasonable brand for 
Others.

--Rocky 

Thierry Carrez Wrote:
> Jeremy Stanley wrote:
> > On 2017-06-15 11:15:36 +0200 (+0200), Thierry Carrez wrote:
> > [...]
> >> I'd like to propose that we introduce a new concept:
> >> "OpenStack-Hosted projects". There would be "OpenStack projects" on 
> >> one side, and "Projects hosted on OpenStack infrastructure" on the 
> >> other side (all still under the openstack/ git repo prefix).
> >
> > I'm still unconvinced a term is needed for this. Can't we just have 
> > "OpenStack Projects" (those under TC governance) and "everything 
> > else?" Why must the existence of any term require a term for its 
> > opposite?
> 
> Well, we tried that for 2.5 years now, and people are still confused 
> about which projects are OpenStack projects and which are not. The 
> confusion led to the perception that everything under openstack/ is an 
> openstack project.
> It led to the perception that "big tent" means "anything goes in" or 
> "flea market".
> 
> Whether we like it or not, giving a name to that category, a name that 
> people can refer to (not "projects under openstack infrastructure that 
> are not officially recognized by the TC"), is I think the only way out of 
> this confusion.
> 
> Obviously we are not the target audience for that term. I think we are 
> deep enough in OpenStack and technically-focused enough to see through that.
> But reality is, the majority of the rest of the world is confused, and 
> needs help figuring it out. Giving the category a name is a way to do that.
> 
> --
> Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] PTG attendance

2017-06-13 Thread Arkady.Kanevsky
Fellow Product WG members,
We are taking an informal poll: how many of us plan to attend the
PTG meeting in Denver?

Second question: should we have the mid-cycle meeting co-located with the PTG 
or with the operator summit in Mexico City?

Please, respond to this email so Shamail and Leong can tally the results.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Arkady.Kanevsky
Option 3 sounds reasonable if the wiki is searchable.

-Original Message-
From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com] 
Sent: Thursday, June 01, 2017 8:44 PM
To: Alexandra Settle 
Cc: OpenStack Operators ; 
openstack-d...@lists.openstack.org; OpenStack Development Mailing List (not for 
usage questions) ; George Mihaiescu 

Subject: Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide 
future

Hi Alex,

Likewise for option 3. If I recall correctly from the summit session that was 
also the main preference in the room?

On 2 June 2017 at 11:15, George Mihaiescu  wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to 
> the OpenStack wiki. I’m not taking silence as compliance. I would 
> really like to hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the 
> Administration Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then 
> this will be moved into the project-specific repos).
> Option three: Move Operations Guide to OpenStack wiki (for ease of 
> operator-specific maintainability) and move the Administration Guide to 
> project repos.
>
>
>
> Personally, I think that option 3 is more realistic. The idea for the 
> last option is that operators are maintaining operator-specific 
> documentation and updating it as they go along and we’re not losing 
> anything by combining or deleting. I don’t want to lose what we have 
> by going with option 1, and I think option 2 is just a workaround 
> without fixing the problem – we are not getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators 
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] 
> [openstack-doc] [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot 
> point of my email:
>
>
>
> “One of the key takeaways from the summit was the session that I joint 
> moderated with Melvin Hillsman regarding the Operations and 
> Administration Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was 
> really helpful – we were able to discuss with the operators present 
> the current situation of the documentation team, and how they could 
> help us maintain the two guides, aimed at the same audience. The 
> operator’s present at the session agreed that the Administration Guide 
> was important, and could be maintained upstream. However, they voted 
> and agreed that the best course of action for the Operations Guide was 
> for it to be pulled down and put into a wiki that the operators could 
> manage themselves. We will be looking at actioning this item as soon as 
> possible.”
>
>
>
> I would like to go ahead with this, but I would appreciate feedback 
> from operators who were not able to attend the summit. In the etherpad 
> you will see the three options that the operators in the room 
> recommended as being viable, and the voted option being moving the 
> Operations Guide out of docs.openstack.org into a wiki. The aim of 
> this was to empower the operations community to take more control of 
> the updates in an environment they are more familiar with (and available to 
> others).
>
>
>
> What does everyone think of the proposed options? Questions? Other thoughts?
>
>
>
> Alex
>
>
>
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] 
> [dev] What's up doc? Summit recap edition
>
>
>
>
>
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit 
> recap edition
> To: "openstack-d...@lists.openstack.org"
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> Hi everyone,
>
>
> The OpenStack manuals project had a really productive week at the 
> OpenStack summit in Boston. You can find a list of all the etherpads 
> and attendees
> here: https://etherpad.openstack.org/p/docs-summit
>
>
>
> As we all know, we are rapidly losing key contributors and core reviewers.
> We are not alone, this is happening across the board. It is making 
> things harder, but not 

[openstack-dev] [User] Achieving Resiliency at Scales of 1000+

2017-05-16 Thread Arkady.Kanevsky
Team,
We managed to have a productive discussion on resiliency for 1000+ nodes.
Many thanks to Adam Spiers on helping with it.
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+
There are several concrete actions, especially for current gate testing.
Will bring these up at the next user committee meeting.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Forum] Moderators needed!

2017-04-29 Thread Arkady.Kanevsky
No problems. All sorted out.

From: Sam P [mailto:sam47pr...@gmail.com]
Sent: Saturday, April 29, 2017 9:15 PM
To: Kanevsky, Arkady 
Cc: Shamail Tahir ; OpenStack Operators 
; OpenStack Development Mailing List 
(not for usage questions) ; 
user-commit...@lists.openstack.org
Subject: Re: [User-committee] [Forum] Moderators needed!

Hi Arkady,

 Thank you.
 I replied to Shamail with "I am available to moderate the session" for 

High Availability in 
OpenStack.
 However, by mistake my reply was only sent to individual members and not to the 
MLs.
 Sorry

--- Regards,
Sampath


On Sat, Apr 29, 2017 at 10:06 AM, 
> wrote:
Shamail,
I can moderate either
Achieving Resiliency at Scales of 
1000+
 or
High Availability in 
OpenStack

Thanks,
Arkady

From: Shamail Tahir [mailto:itzsham...@gmail.com]
Sent: Friday, April 28, 2017 7:23 AM
To: openstack-operators 
>;
 OpenStack Development Mailing List (not for usage questions) 
>; 
user-committee 
>
Subject: [User-committee] [Forum] Moderators needed!

Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators but 
there are six sessions that do not have a confirmed moderator yet. Please look 
at the list below and let us know if you would be willing to help moderate any 
of these sessions.

The topics look really interesting but it will be difficult to keep the 
sessions on the schedule if there is not an assigned moderator. We look forward 
to seeing you at the Summit/Forum in Boston soon!

Achieving Resiliency at Scales of 
1000+

Feedback from users for I18n & translation - important 
part?

Neutron Pain 
Points

Making Neutron easy for people who want basic 
networking

High Availability in 
OpenStack

Cloud-Native Design/Refactoring across 
OpenStack



Thanks,
Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee

___
User-committee mailing list
user-commit...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Forum] Moderators needed!

2017-04-28 Thread Arkady.Kanevsky
Shamail,
I can moderate either
Achieving Resiliency at Scales of 
1000+
 or
High Availability in 
OpenStack

Thanks,
Arkady

From: Shamail Tahir [mailto:itzsham...@gmail.com]
Sent: Friday, April 28, 2017 7:23 AM
To: openstack-operators ; OpenStack 
Development Mailing List (not for usage questions) 
; user-committee 

Subject: [User-committee] [Forum] Moderators needed!

Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators but 
there are six sessions that do not have a confirmed moderator yet. Please look 
at the list below and let us know if you would be willing to help moderate any 
of these sessions.

The topics look really interesting but it will be difficult to keep the 
sessions on the schedule if there is not an assigned moderator. We look forward 
to seeing you at the Summit/Forum in Boston soon!

Achieving Resiliency at Scales of 
1000+

Feedback from users for I18n & translation - important 
part?

Neutron Pain 
Points

Making Neutron easy for people who want basic 
networking

High Availability in 
OpenStack

Cloud-Native Design/Refactoring across 
OpenStack



Thanks,
Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PWG] mid-cycle venue option

2017-02-20 Thread Arkady.Kanevsky
Team,
I had updated venue info at https://etherpad.openstack.org/p/MIL-pwg-meetup.
That includes nearby hotel info.
Rates from Hotels.com, booking.com, expedia.com and so on give me a better 
rate than the corporate one.


Need to cover some logistic issues.

1.   Do we need breakfast, or does everybody have it at the hotel so we can skip it?

2.   Shamail, who is organizing the group dinner? I assume it is Monday night.

3.   Do we want a catered lunch, or will we take a break to go out for it?

Thanks,
Arkady
From: Kanevsky, Arkady
Sent: Tuesday, February 07, 2017 10:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PWG] mid-cycle venue option

Team,
I finally checked on my side for the venue.

The address of my available venue is
Company: Dell
Street: Viale Piero e Alberto Pirelli 6
City: Milano

That is about a 15 min drive or 20 min on public transport from the coworking place.

I reserved 2 conf rooms for mon-tue.
While on Tue you'll benefit from a proper room for a roundtable, on Mon the only 
room available that could accommodate 15 people is a room with a table for 10 
people and chairs all around.

Let me know if we want to follow on it.


Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] call today

2017-02-20 Thread Arkady.Kanevsky
Do we have a PTG call this week?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-17 Thread Arkady.Kanevsky
There is no project that can stand on its own.
Even Swift need some identity management.

Thus, even if you are contributing to only one project, you are still dependent 
on many others, including QA, infrastructure, and so on.

While most customers are looking at a few projects together, and not all 
projects combined, it is still referred to as OpenStack. The release is of 
OpenStack.
There are a lot of features that span many projects, and a feature being done 
in one project alone is not sufficient for customer needs. HA, upgrade, and 
log consistency are all examples of this.

The strength of OpenStack is in the combination of projects working together. 

I will skip the topic of what is core and what is not.
I personally think that we did customers and ourselves a big disservice when we 
abandoned the integrated release concept, for the same reasons I stated above.
Thanks,
Arkady

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Friday, February 17, 2017 6:31 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help 
your team?

On 02/17/2017 12:01 AM, Chris Dent wrote:
> On Thu, 16 Feb 2017, Dan Prince wrote:
>
>> And yes. We are all OpenStack developers in a sense. We want to align 
>> things in the technical arena. But I think you'll also find that most 
>> people more closely associate themselves to a team within OpenStack 
>> than they perhaps do with the larger project. Many of us in TripleO 
>> feel that way I think. This is a healthy thing, being part of a team.
>> Don't make us feel bad because of it by suggesting that uber 
>> OpenStack graphics styling takes precedent.
>
> I'd very much like to have a more clear picture of the number of 
> people who think of themselves primarily as "OpenStack developers"
> or primarily as "$PROJECT developers".
>
> I've always assumed that most people in the community(tm) thought of 
> themselves as the former but I'm realizing (in part because of what 
> Dan's said here) that's bias or solipsism on my part and I really have 
> no clue what the situation is.
>
> Anyone have a clue?

I don't have a clue, and I don't personally think it matters. But I suspect the 
latter is the majority. At least because very few contributors have a chance to 
contribute to something OpenStack-wide, while many people get assigned to work 
on a project or a few of them.

That being said, I don't believe that the "OpenStack vs $PROJECT" question is 
as important as it may seem from this thread :)

>
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Refstack - final mascot

2017-02-08 Thread Arkady.Kanevsky
+1

From: Rochelle Grober [mailto:rochelle.gro...@huawei.com]
Sent: Tuesday, February 07, 2017 8:36 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] Refstack - final mascot

Looks great to me.

--Rocky

From: Catherine Cuong Diep [mailto:cd...@us.ibm.com]
Sent: Monday, February 06, 2017 4:25 PM
To: OpenStack Dev Mailer 
>
Subject: [openstack-dev] Refstack - final mascot


Hello RefStack team,

Please see RefStack mascot in Heidi's note below.

Catherine Diep
- Forwarded by Catherine Cuong Diep/San Jose/IBM on 02/06/2017 04:18 PM 
-

From: Heidi Joy Tretheway 
>
To: Catherine Cuong Diep/San Jose/IBM@IBMUS
Date: 02/02/2017 11:42 AM
Subject: Refstack - final mascot





Hi Catherine,

I have a new revision from our illustration team for your team’s project 
mascot. We’re pushing hard to get all 60 of the mascots finalized by the PTG, 
so I’d love any feedback from your team as swiftly as possible. As a reminder, 
we can’t change the illustration style (since it’s consistent throughout the 
full mascot set) and so we’re just looking for problems with the creatures. 
Could you please let me know if your team has any final concerns?

Thank you!





Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: 
heidi.tretheway



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PWG] mid-cycle venue option

2017-02-07 Thread Arkady.Kanevsky
Team,
I finally checked on my side for the venue.

The address of my available venue is
Company: Dell
Street: Viale Piero e Alberto Pirelli 6
City: Milano

That is about a 15 min drive or 20 min on public transport from the coworking place.

I reserved 2 conf rooms for mon-tue.
While on Tue you'll benefit from a proper room for a roundtable, on Mon the only 
room available that could accommodate 15 people is a room with a table for 10 
people and chairs all around.

Let me know if we want to follow on it.


Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-01-31 Thread Arkady.Kanevsky
I think the Russians already own the bear.

From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Tuesday, January 31, 2017 2:49 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic] New mascot design

Hey ironic-ers,
The foundation has passed along a new version of our mascot (attached) to us, 
and would like your feedback on it.

They're hoping to have all mascot-related things ready in time for the PTG, so 
please do send your thoughts quickly, if you have them. :)

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Release notes in TripleO

2017-01-15 Thread Arkady.Kanevsky
Does that apply to drivers also?

-Original Message-
From: Ben Nemec [mailto:openst...@nemebean.com] 
Sent: Wednesday, January 11, 2017 8:49 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tripleo] Release notes in TripleO



On 01/11/2017 08:24 AM, Emilien Macchi wrote:
> On Wed, Jan 11, 2017 at 9:21 AM, Emilien Macchi  wrote:
>> Greetings,
>>
>> OpenStack has been using reno [1] to manage release notes for a while 
>> now and it has been proven to be super useful.
>> Puppet OpenStack project adopted it in Mitaka and since then we loved it.
>> The path to use reno in a project is not that simple. People need to 
>> get used to adding a release note every time they submit a patch that 
>> fix a bug or add a new feature. This thing takes time and will 
>> require some involvement from the team.
>> Though the benefits are really here:
>> - our users will understand what new features we have developed
>> - our users will learn deprecations.
>> - developers will have a way to communicate with non-devs, expressing 
>> the work done in TripleO (eg: to product managers, etc).
>>
>> This is an example of a release note:
>> https://github.com/openstack/puppet-nova/blob/master/releasenotes/not
>> es/nova-placement-30566167309fd124.yaml
>>
>> And the output:
>> http://docs.openstack.org/releasenotes/puppet-nova/unreleased.html
>>
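As a concrete sketch of what goes into such a note (the slug and note text 
below are made up, and reno must be installed), the workflow looks roughly 
like this:

    # Sketch of the reno workflow referenced above; slug and note text
    # are invented for illustration.
    import subprocess

    # "reno new <slug>" creates releasenotes/notes/<slug>-<hash>.yaml
    # from a template, which is then edited by hand.
    subprocess.run(["reno", "new", "add-placement-service"], check=True)

    # A minimal note body, using reno's standard sections:
    EXAMPLE_NOTE = """\
    ---
    features:
      - The placement service is now deployed by default.
    upgrade:
      - Operators must create the placement database before upgrading.
    """
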
>> So here's a plan proposal:
>> 1) Emilien to add all CI jobs and required bits to have reno in 
>> TripleO (already done for python-tripleoclient). I'm doing the rest 
>> of the projects this week.
>
> I forgot to mention which projects we would target for Ocata:
> - python-tripleoclient
> - puppet-tripleo
> - tripleo-common
> - tripleo-heat-templates
> - tripleo-puppet-elements
> - tripleo-ui
> - tripleo-validations
> - tripleo-quickstart and tripleo-quickstart-extras

+instack-undercloud

Otherwise this all sounds good to me.  Adding reno to more tripleo projects has 
been on my todo list for months.

>
>> 2) Emilien with the team (please ping me if you volunteer to help) to 
>> write Ocata release notes before the release (we have ~ one month).
>> 3) Once 1) is done, I would ask to the team to use it.
>>
>> Regarding 3), here are some thoughts:
>> During pike-1 and pike-2:
>> I wouldn't -1 a patch that doesn't have a release note, but rather 
>> comment and give some guidance to the committer and ask if it's 
>> something doable. Otherwise, proposing a patch on top of it with the 
>> release note. That way, we don't force people to use it immediately, 
>> but instead giving them some guidance on why and how to use it, 
>> directly in the review.
>> During pike-3:
>> Start -1 patches which don't have a release note. I think 3 or 4 
>> months is fair to learn how to use reno (it takes less than 5 min to 
>> create a good release note).
>>
>> Any feedback is highly welcome, let's make TripleO releases better!
>>
>> Thanks,
>>
>> [1] http://docs.openstack.org/developer/reno
>> --
>> Emilien Macchi
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: Do you want to ask a project-specific question on the next User Survey?

2017-01-09 Thread Arkady.Kanevsky
+1

From: Ligong LG1 Duan [mailto:duan...@lenovo.com]
Sent: Sunday, January 08, 2017 7:41 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tripleo] Fwd: Do you want to ask a 
project-specific question on the next User Survey?

I would like to ask:
· Which deployment tool of OpenStack are you using, TripleO or Fuel or 
Kolla or your customized tool?

Regards,
Ligong Duan

From: arkady.kanev...@dell.com 
[mailto:arkady.kanev...@dell.com]
Sent: Monday, January 09, 2017 12:30 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Fwd: Do you want to ask a 
project-specific question on the next User Survey?

Suggest we request a question on life-cycle management tools, including 
TripleO, covering upgrade, patching, etc., not just deployment.
Arkady

From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Tuesday, January 03, 2017 8:08 AM
To: OpenStack Development Mailing List 
>
Subject: [openstack-dev] [tripleo] Fwd: Do you want to ask a project-specific 
question on the next User Survey?

(Happy new year folks!)

Forwarding Heidi's email to TripleO folks, so anyone can contribute to it.

Feel free to propose questions on:
https://etherpad.openstack.org/p/tripleo-user-survey-2017

The question with the most votes will be proposed for the survey.
Please take 2 min and help on $topic, it will be very helpful.

Thanks,

-- Forwarded message --
From: Heidi Joy Tretheway 
>
Date: Thu, Dec 22, 2016 at 4:58 PM
Subject: Do you want to ask a project-specific question on the next User Survey?
To: Jimmy McArthur >, Lauren 
Sell >
Greetings,

I wanted to offer you the opportunity to ask a question on the upcoming User 
Survey, which launches on or before Feb. 1. Each PTL of a project with 
significant adoption can submit one question. You can decide which audience to 
serve the question to - those who are USING, TESTING, or INTERESTED in your 
project (or some combination of these).

My hope is to gather as much information as possible to help you, and send it 
all raw, without commentary, in advance of the Project Team Gathering in late 
February.

The deadline to submit is Jan. 9.

Feel free to drop me a note if I can answer any questions for you!

Best,
Heidi Joy



Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: 
heidi.tretheway







--
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-09 Thread Arkady.Kanevsky
Thanks

-Original Message-
From: Christian Schwede [mailto:cschw...@redhat.com] 
Sent: Thursday, January 05, 2017 1:29 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] Fixing Swift rings when 
upscaling/replacing nodes in TripleO deployments

On 05.01.2017 17:03, Steven Hardy wrote:
> On Thu, Jan 05, 2017 at 02:56:15PM +, arkady.kanev...@dell.com wrote:
>> I have a concern about relying on the undercloud for overcloud Swift.
>> Undercloud is not HA (yet) so it may not be operational when a disk fails or 
>> a Swift overcloud node is added/deleted.
> 
> I think the proposal is only for a deploy-time dependency, after the 
> overcloud is deployed there should be no dependency on the undercloud 
> swift, because the ring data will have been copied to all the nodes.

Yes, exactly - there is no runtime dependency. The overcloud will continue to 
work even if the undercloud is gone.

If you "lose" the undercloud (or more precisely, the overcloud rings that are 
stored on the undercloud Swift) you can copy them from any overcloud node and 
run an update.

Even if one deletes the rings from the undercloud, the deployment will continue 
to work after an update - puppet-swift will simply continue to use the already 
existing .builder files on the nodes.

Only if one deletes the rings on the undercloud and runs an update with 
new/replaced nodes will it fail - the swift-recon check will raise an error in 
step 5 because rings are inconsistent on the new/replaced nodes. But the 
inconsistency is already the case today (in fact it's the same way as it works 
today), except that there is no check and no warning to the operator.

-- Christian

> During create/update operations you need the undercloud operational by 
> definition, so I think this is probably OK?
> 
> Steve
>>
>> -Original Message-
>> From: Christian Schwede [mailto:cschw...@redhat.com]
>> Sent: Thursday, January 05, 2017 6:14 AM
>> To: OpenStack Development Mailing List 
>> 
>> Subject: [openstack-dev] [TripleO] Fixing Swift rings when 
>> upscaling/replacing nodes in TripleO deployments
>>
>> Hello everyone,
>>
>> there was an earlier discussion on $subject last year [1] regarding a bug 
>> when upscaling or replacing nodes in TripleO [2].
>>
>> Shortly summarized: Swift rings are built on each node separately, and 
>> adding or replacing nodes (or disks) will break the rings because they 
>> are no longer consistent across the nodes. What's needed are the previous 
>> ring builder files on each node before changing the rings.
>>
>> My former idea in [1] was to build the rings in advance on the undercloud, 
>> and also using introspection data to gather a set of disks on each node for 
>> the rings.
>>
>> However, this changes the current way of deploying significantly, and also 
>> requires more work in TripleO and Mistral (for example to trigger a ring 
>> build on the undercloud after the nodes have been started, but before the 
>> deployment triggers the Puppet run).
>>
>> I prefer smaller steps to keep everything stable for now, and therefore I 
>> changed my patches quite a bit. This is my updated proposal:
>>
>> 1. Two temporary undercloud Swift URLs (one PUT, one GET) will be computed 
>> before Mistral starts the deployments. A new Mistral action to create such 
>> URLs is required for this [3].
>> 2. Each overcloud node will try to fetch rings from the undercloud Swift 
>> deployment before updating its set of rings locally using the temporary GET 
>> url. This guarantees that each node uses the same source set of builder 
>> files. This happens in step 2. [4]
>> 3. puppet-swift runs like today, updating the rings if required.
>> 4. Finally, at the end of the deployment (in step 5) the nodes will upload 
>> their modified rings to the undercloud using the temporary PUT urls. 
>> swift-recon will run before this, ensuring that all rings across all nodes 
>> are consistent.
>>
>> The two required patches [3][4] are not overly complex IMO, but they solve 
>> the problem of adding or replacing nodes without changing the current 
>> workflow significantly. It should be even easy to backport them if needed.
>>
>> I'll continue working on an improved way of deploying Swift rings (using 
>> introspection data), but using this approach it could be even done using 
>> todays workflow, feeding data into puppet-swift (probably with some updates 
>> to puppet-swift/tripleo-heat-templates to allow support for regions, zones, 
>> different disk layouts and the like). However, all of this could be built on 
>> top of these two patches.
>>
>> I'm curious about your thoughts and welcome any feedback or reviews!
>>
>> Thanks,
>>
>> -- Christian
>>
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720
>> .html [2] https://bugs.launchpad.net/tripleo/+bug/1609421
>> [3] https://review.openstack.org/#/c/413229/
>> [4] 

Re: [openstack-dev] [tripleo] Fwd: Do you want to ask a project-specific question on the next User Survey?

2017-01-08 Thread Arkady.Kanevsky
Suggest we request a question on life-cycle management tools, including 
TripleO, covering upgrade, patching, etc., not just deployment.
Arkady

From: Emilien Macchi [mailto:emil...@redhat.com]
Sent: Tuesday, January 03, 2017 8:08 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [tripleo] Fwd: Do you want to ask a project-specific 
question on the next User Survey?

(Happy new year folks!)

Forwarding Heidi's email to TripleO folks, so anyone can contribute to it.

Feel free to propose questions on:
https://etherpad.openstack.org/p/tripleo-user-survey-2017

The question with the most votes will be proposed for the survey.
Please take 2 min and help on $topic, it will be very helpful.

Thanks,

-- Forwarded message --
From: Heidi Joy Tretheway 
>
Date: Thu, Dec 22, 2016 at 4:58 PM
Subject: Do you want to ask a project-specific question on the next User Survey?
To: Jimmy McArthur >, Lauren 
Sell >

Greetings,

I wanted to offer you the opportunity to ask a question on the upcoming User 
Survey, which launches on or before Feb. 1. Each PTL of a project with 
significant adoption can submit one question. You can decide which audience to 
serve the question to - those who are USING, TESTING, or INTERESTED in your 
project (or some combination of these).

My hope is to gather as much information as possible to help you, and send it 
all raw, without commentary, in advance of the Project Team Gathering in late 
February.

The deadline to submit is Jan. 9.

Feel free to drop me a note if I can answer any questions for you!

Best,
Heidi Joy



Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: 
heidi.tretheway







--
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-05 Thread Arkady.Kanevsky
I have a concern about relying on the undercloud for overcloud Swift.
Undercloud is not HA (yet) so it may not be operational when a disk fails or 
a Swift overcloud node is added/deleted.

-Original Message-
From: Christian Schwede [mailto:cschw...@redhat.com] 
Sent: Thursday, January 05, 2017 6:14 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing 
nodes in TripleO deployments

Hello everyone,

there was an earlier discussion on $subject last year [1] regarding a bug when 
upscaling or replacing nodes in TripleO [2].

Shortly summarized: Swift rings are built on each node separately, and 
adding or replacing nodes (or disks) will break the rings because they are 
no longer consistent across the nodes. What's needed are the previous ring 
builder files on each node before changing the rings.
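
To make that concrete, the builder files are produced by the standard 
swift-ring-builder workflow, roughly as sketched below (the device addresses 
and weights are made up). A node that rebuilds them from scratch ends up with 
different partition assignments, which is exactly the inconsistency described 
above:

    # Standard swift-ring-builder commands; device details are made up.
    import subprocess

    def srb(*args):
        subprocess.run(("swift-ring-builder",) + args, check=True)

    srb("object.builder", "create", "10", "3", "1")  # part_power, replicas, min_part_hours
    srb("object.builder", "add", "r1z1-192.0.2.10:6000/sdb", "100")
    srb("object.builder", "add", "r1z2-192.0.2.11:6000/sdb", "100")
    srb("object.builder", "rebalance")               # writes object.ring.gz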

My former idea in [1] was to build the rings in advance on the undercloud, and 
also using introspection data to gather a set of disks on each node for the 
rings.

However, this changes the current way of deploying significantly, and also 
requires more work in TripleO and Mistral (for example to trigger a ring build 
on the undercloud after the nodes have been started, but before the deployment 
triggers the Puppet run).

I prefer smaller steps to keep everything stable for now, and therefore I 
changed my patches quite a bit. This is my updated proposal:

1. Two temporary undercloud Swift URLs (one PUT, one GET) will be computed 
before Mistral starts the deployments. A new Mistral action to create such URLs 
is required for this [3].
2. Each overcloud node will try to fetch rings from the undercloud Swift 
deployment before updating its set of rings locally using the temporary GET 
url. This guarantees that each node uses the same source set of builder files. 
This happens in step 2. [4]
3. puppet-swift runs like today, updating the rings if required.
4. Finally, at the end of the deployment (in step 5) the nodes will upload 
their modified rings to the undercloud using the temporary PUT urls. 
swift-recon will run before this, ensuring that all rings across all nodes are 
consistent.
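
For reference, the signature scheme behind such Swift temporary URLs looks 
roughly like the sketch below; the account, container, object and key values 
are placeholders for whatever the new Mistral action would actually use:

    # How a Swift tempurl is computed: an HMAC-SHA1 over the method,
    # expiry timestamp and object path, appended as query parameters.
    import hmac
    import time
    from hashlib import sha1

    def temp_url(method, path, key, ttl=3600):
        expires = int(time.time()) + ttl
        body = "%s\n%d\n%s" % (method, expires, path)
        sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
        return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)

    get_url = temp_url("GET", "/v1/AUTH_uc/overcloud-rings/rings.tar.gz", "KEY")
    put_url = temp_url("PUT", "/v1/AUTH_uc/overcloud-rings/rings.tar.gz", "KEY")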

The two required patches [3][4] are not overly complex IMO, but they solve the 
problem of adding or replacing nodes without changing the current workflow 
significantly. It should be even easy to backport them if needed.

I'll continue working on an improved way of deploying Swift rings (using 
introspection data), but using this approach it could be even done using todays 
workflow, feeding data into puppet-swift (probably with some updates to 
puppet-swift/tripleo-heat-templates to allow support for regions, zones, 
different disk layouts and the like). However, all of this could be built on 
top of these two patches.

I'm curious about your thoughts and welcome any feedback or reviews!

Thanks,

-- Christian


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720.html
[2] https://bugs.launchpad.net/tripleo/+bug/1609421
[3] https://review.openstack.org/#/c/413229/
[4] https://review.openstack.org/#/c/414460/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] python-wsmanclient future

2016-11-16 Thread Arkady.Kanevsky
+1

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com] 
Sent: Tuesday, November 15, 2016 10:00 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] python-wsmanclient future

On Mon, Nov 7, 2016 at 8:51 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> In view of the Ironic governance discussion [1] I'd like to talk about 
> wsmanclient [2] future.
>
> This project was created to split away wsman code from 
> python-dracclient to be reused in other drivers (I can only think of 
> AMT right now). This was never finished: dracclient still uses its internal 
> wsman implementation.
>
> To make it worse, the guy behind this effort (ifarkas) has left our 
> team, python-dracclient is likely to leave Ironic governance per [1], 
> and the AMT driver is going to leave the Ironic tree.
>
> At least the majority of the folks currently behind dracclient (Miles, 
> Lucas and myself) do not have resources to continue this wsmanclient effort.
> Unless somebody is ready to take over both wsmanclient itself and the 
> effort to port dracclient, I suggest we abandon wsmanclient.
>
> Any thoughts?

+1. Sounds like nobody objects, I can add retiring this to my todo list.

// jim

>
> [1] https://review.openstack.org/#/c/392685/
> [2] https://github.com/openstack/python-wsmanclient
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Removing agent vendor passthru and unsupported drivers

2016-11-14 Thread Arkady.Kanevsky
Agree with the removal of unsupported drivers.
For supported drivers we will still need to support passthru.
First, because it is currently used by many customers and for many drivers.
Second, we will need to add support in Ironic and expose it in the API for
many features before we can do them without the passthru crutch.
For example, PXE NIC setup, firmware install/update, BIOS version
setup/upgrade, etc.
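
For illustration, this is the kind of call I mean; the method name and
argument below are hypothetical, standing in for a vendor firmware-update
passthru:

  ironic node-vendor-passthru --http-method POST <node> update_firmware \
      firmware_uri=http://images.example.com/bios.bin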

Thanks,
Arkady


-Original Message-
From: Mathieu Mitchell [mailto:mmitch...@internap.com] 
Sent: Monday, November 14, 2016 10:42 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] Removing agent vendor passthru and 
unsupported drivers

Hi Pavlo,

See my reply below.

On 2016-11-14 7:50 AM, Pavlo Shchelokovskyy wrote:
> Hi Ironicers,
>
> currently I'm busy removing the lookup/heartbeats "as vendor passthru"
> from Ironic, which we slated for removal in Ocata, and have the
> following question.
>
> Removing the old agent vendor passthru requires changes to some 
> unsupported drivers whose copies are already in 
> ironic-staging-drivers. The drivers in question are WoL, iBoot and 
> especially AMT (which uses a custom not-so-vendor passthru).
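
For readers following along: the change being discussed replaces these old
per-driver passthru calls with dedicated top-level API endpoints (added around
API microversion 1.22). A rough sketch of the new agent side, with illustrative
host names and addresses:

  # the agent asks which node it is, by MAC address
  curl -s "http://ironic-api:6385/v1/lookup?addresses=52:54:00:aa:bb:cc" \
       -H "X-OpenStack-Ironic-API-Version: 1.22"
  # the agent then heartbeats to that node, advertising its callback URL
  curl -s -X POST "http://ironic-api:6385/v1/heartbeat/<node-uuid>" \
       -H "X-OpenStack-Ironic-API-Version: 1.22" \
       -H "Content-Type: application/json" \
       -d '{"agent_url": "http://192.0.2.10:9999"}'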

The "follows-standard-deprecation" policy states the following "Features, APIs 
or configuration options are marked deprecated in the code. Appropriate 
warnings will be sent to the end user, operator or library user. **Code will be 
frozen and only receive minimal maintenance (just so that it continues to work 
as-is).**" [0] (emphasis mine). My understanding is that your changes would 
fall into the "just so that it continues to work as-is" clause.

>
> AFAIU according to our third-party drivers policy, those unsupported 
> drivers have to be removed from Ironic tree anyway (as there is no 
> plan to test them on third-party CI AFAIK) and this looks like a 
> perfect time to do it.
>
> So ideally I'd like to fix those in ironic-staging-drivers and then 
> remove them from Ironic tree via a depends-on patch.
>
> What do you think on such plan?

The drivers were marked for removal in Ocata [1], so you can already remove 
them from the tree. A simple but relevant thing I note is that it would be 
preferable, from my point of view, to remove them all in a single commit.

Finally, I would add that functional CI coverage for the SNMP driver is well 
under way [2]. We are currently doing the work to keep the SNMP driver in-tree 
(what we are doing is similar to VirtualBMC and the IPMI driver). Going ahead 
with a single commit to remove all the drivers would impact our current work. I 
would therefore suggest doing the required "vendor passthru" changes to the 
different drivers and postpone the commit to delete all unsupported drivers.
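
For anyone unfamiliar with the VirtualBMC approach mentioned above: it fronts
a libvirt VM with an IPMI endpoint so that the in-tree IPMI driver can be
exercised without real hardware. A rough sketch (domain name, port and
credentials are illustrative):

  # expose the libvirt domain "node-0" as an IPMI BMC on port 6230
  vbmc add node-0 --port 6230 --username admin --password password
  vbmc start node-0
  # standard IPMI tooling (and hence Ironic's IPMI driver) now works
  ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power status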

[0]
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html#requirements
[1] http://docs.openstack.org/releasenotes/ironic/current-series.html#id5
[2] https://review.openstack.org/#/q/status:open+topic:bug/1597793

Thank you,
Mathieu Mitchell
Internap

>
> Cheers,
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ironic] When should a project be under Ironic's governance?

2016-11-10 Thread Arkady.Kanevsky
Second try

-Original Message-
From: Kanevsky, Arkady 
Sent: Thursday, November 10, 2016 10:08 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: RE: [openstack-dev] [ironic] When should a project be under Ironic's 
governance?

Fully agree.
How do we propose to handle the dependency of an Ironic version on a specific
version of a driver?
Clearly distros can do it, but we will not have a version that upstream users
can consume without building it themselves.
I am only referring to Ironic drivers that pass CI voting, whose availability
users expect.
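
A minimal sketch of what I mean, in pip requirements syntax (the version
numbers below are made up for illustration): a release consumable by upstream
users would have to pin compatible ranges somewhere, e.g.

  ironic>=7.0.0,<8.0.0             # a hypothetical Ironic series
  python-dracclient>=1.1.0,<2.0.0  # a driver release known to work with it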
Thanks,
Arkady

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Wednesday, November 02, 2016 9:37 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] When should a project be under Ironic's 
governance?

On Mon, Oct 17, 2016 at 4:27 PM, Michael Turek  
wrote:
> Hello ironic!
>
> At today's IRC meeting, the questions "what should and should not be a 
> project under Ironic's governance" and "what does it mean to be 
> under Ironic's governance" were raised. Log here:
>
> http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-10-17-
> 17.00.log.html#l-176
>
> See http://governance.openstack.org/reference/projects/ironic.html for 
> a list of projects currently under Ironic's governance.
>
> Is it as simple as "any project that aids in OpenStack baremetal 
> deployment should be under Ironic's governance"? This is probably too 
> general (nova arguably fits here) but it might be a good starting point.
>
> Another angle to look at might be that a project belongs under 
> Ironic's governance when both Ironic (the main services) and the 
> candidate subproject would benefit from being under the same 
> governance. A hypothetical example of this is when Ironic and the candidate 
> project need to release together.
>
> Just some initial thoughts to get the ball rolling. What does everyone 
> else think?

We discussed this during our contributors' meetup at the summit, and came to 
consensus in the room that, in order for a repository to be under ironic's 
governance:

* it must roughly fall within the TC's rules for a new project:
  http://governance.openstack.org/reference/new-projects-requirements.html
* it must not be intended for use with only a single vendor's hardware (e.g. a 
library
  to handle iLO is not okay, a library to handle IPMI is okay).
* it must align with ironic's mission statement: "To produce an OpenStack 
service
  and associated libraries capable of managing and provisioning physical 
machines,
  and to do this in a security-aware and fault-tolerant manner."
* lack of contributor diversity is a chicken-egg problem, and as such a 
repository
  where only a single company is contributing is okay.

I've proposed this as a docs patch: https://review.openstack.org/392685

We decided we should get consensus from all cores on that patch - meaning 80% 
or more agree, and any that disagree will still agree to live by the decision. 
So, cores, please chime in on gerrit. :)

Once that patch lands, I'll submit a patch to openstack/governance to shuffle 
projects around where they do or don't fit.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
