Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-26 Thread Zane Bitter

On 26/10/18 5:09 AM, Thomas Goirand wrote:

On 10/22/18 9:12 PM, Zane Bitter wrote:

On 22/10/18 10:33 AM, Thomas Goirand wrote:

This can only happen if we have supporting distribution packages for it.
IMO, this is a call for using Debian Testing or even Sid in the gate.


It depends on which versions we choose to support, but if necessary yes.


If what we want is to have early detection of problems with the latest
versions of Python, then there aren't many alternatives.


I think a lot depends on the relative timing of the Python release, the 
various distro release cycles, and the OpenStack release cycle. We 
established that for 3.7 that was the only way we could have done it in 
Rocky; for 3.8, who knows.



I don't really understand why you're writing that it "depends on which
version we choose to support".


The current version of the resolution[1] says that we'll choose the 
latest released version "we can feasibly use for testing", while making 
clear that availability in an Ubuntu LTS release is *not* a requirement 
for feasibility. But it doesn't require the TC to choose the latest 
version available from python.org if we're not able to build an image 
that we can successfully use for testing in time before the beginning of 
the release cycle.


[1] https://review.openstack.org/613145


That's the kind of answer I find
very frustrating when I submit a bug and am told "we don't
support this version". My reasoning is: the earlier we detect and fix
problems, the better, and that's orthogonal to which version of Python
we want to support. Delaying bugfixes and compatibility with the latest
Python version leads nowhere; it's best to test with it if possible
(even in a non-voting mode).


I agree that bugs with future versions of Python are always worth fixing 
ASAP, whether or not we are able to test them in the gate.


cheers,
Zane.



Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-25 Thread Zane Bitter

On 25/10/18 1:38 PM, William M Edmonds wrote:

Zane Bitter  wrote on 10/22/2018 03:12:46 PM:
 > On 22/10/18 10:33 AM, Thomas Goirand wrote:
 > > On 10/19/18 5:17 PM, Zane Bitter wrote:



 > >> Integration Tests
 > >> -----------------
 > >>
 > >> Integration tests do test, amongst other things, integration with
 > >> non-openstack-supplied things in the distro, so it's important that we
 > >> test on the actual distros we have identified as popular.[2] It's also
 > >> important that every project be testing on the same distro at the end of
 > >> a release, so we can be sure they all work together for users.
 > >
 > > I find it very disturbing to see the project leaning toward only
 > > these 2 distributions. Why not SuSE & Debian?
 >
 > The bottom line is it's because targeting those two catches 88% of our
 > users. (For once I did not make this statistic up.)
 >
 > Also note that in practice I believe almost everything is actually
 > tested on Ubuntu LTS, and only TripleO is testing on CentOS. It's
 > difficult to imagine how to slot another distro into the mix without
 > doubling up on jobs.

I think you meant 78%, assuming you were looking at the latest User 
Survey results [1], page 55. Still a hefty number.


I never know how to read those weird 3-way bar charts they have in the 
user survey, but that actually adds up to 91% by the looks of it (I 
believe you forgot to count RHEL). The numbers were actually slightly 
lower in the full-year data for 2017 that I used (from 
https://www.openstack.org/analytics - I can't give you a direct link 
because Javascript).


It is important to note that the User Survey lumps all versions of a 
given OS together, whereas the TC reference [2] only considers the 
latest LTS/stable version. If the User Survey split out latest 
LTS/stable versions vs. others (e.g. Ubuntu 16.04 LTS), I expect we'd 
see Ubuntu 18.04 LTS + CentOS 7 adding up to much less than 78%.


This is true, although we don't know by how much. (FWIW I can almost 
guarantee that virtually all of the CentOS/RHEL users are on 7, but I'm 
sure the same is not the case for Ubuntu 16.04.)



[1] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf
[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions





Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-24 Thread Zane Bitter
There seems to be agreement that this is broadly a good direction to 
pursue, so I proposed a TC resolution. Let's shift discussion to the review:


https://review.openstack.org/613145

cheers,
Zane.

On 19/10/18 11:17 AM, Zane Bitter wrote:
There hasn't been a Python 2 release in 8 years, and during that time 
we've gotten used to the idea that that's the way things go. However, 
with the switch to Python 3 looming (we will drop support for Python 2 
in the U release[1]), history is no longer a good guide: Python 3 
releases drop as often as every year. We are already feeling the pain 
from this, as Linux distros have largely already completed the shift to 
Python 3, and those that have are on versions newer than the py35 we 
currently have in gate jobs.


We have traditionally held to the principle that we want each release to 
support the latest release of CentOS and the latest LTS release of 
Ubuntu, as they existed at the beginning of the release cycle.[2] 
Currently this means in practice one version of py2 and one of py3, but 
in the future it will mean two, usually different, versions of py3.


There are two separate issues that we need to address: unit tests (we'll 
define this as code tested in isolation, within or spawned from within 
the testing process), and integration tests (we'll define this as code 
running in its own process, tested from the outside). I have two 
separate but related proposals for how to handle those.


I'd like to avoid discussing which versions of things we think should be 
supported in Stein in this thread. Let's come up with a process that we 
think is a good one to take into T and beyond, and then retroactively 
apply it to Stein. Competing proposals are of course welcome, in 
addition to feedback on this one.


Unit Tests
----------

For unit tests, the most important thing is to test on the versions of 
Python we target. It's less important to be using the exact distro that 
we want to target, because unit tests generally won't interact with 
stuff outside of Python.


I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.


We'll run the unit tests on any distro we can find that supports the 
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian 
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a 
particular Python version before trying to test it.


Before the start of each cycle, the TC would determine which range of 
versions we want to support, on the basis of the latest one we can find 
in any distro and the earliest one we're likely to need in one of the 
supported Linux distros. There will be a project-wide goal to switch the 
testing template from e.g. openstack-python3-stein-jobs to 
openstack-python3-treasure-jobs for every repo before the end of the 
cycle. We'll have goal champions as usual following up and helping teams 
with the process. We'll know where the problem areas are because we'll 
have added non-voting jobs for any new Python versions to the previous 
release's template.
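
The switch itself would be a small change to each repo's Zuul project 
configuration - something like the following sketch, assuming the 
project consumes the template from its own .zuul.yaml:

  - project:
      templates:
        # was: openstack-python3-stein-jobs
        - openstack-python3-treasure-jobs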


Integration Tests
-----------------

Integration tests do test, amongst other things, integration with 
non-openstack-supplied things in the distro, so it's important that we 
test on the actual distros we have identified as popular.[2] It's also 
important that every project be testing on the same distro at the end of 
a release, so we can be sure they all work together for users.


When a new release of CentOS or a new LTS release of Ubuntu comes out, 
the TC will create a project-wide goal for the *next* release cycle to 
switch all integration tests over to that distro. It's up to individual 
projects to make the switch for the tests that they own (e.g. it'd be 
the QA team for Tempest, but other individual projects for their own 
jobs). Again, there'll be a goal champion to monitor and follow up.
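
In Zuul terms the switch would mostly mean repointing integration jobs 
at a nodeset for the new distro - hypothetically, something like this 
(the job name is illustrative, and this assumes a nodeset for the new 
Ubuntu LTS is available):

  - job:
      name: my-project-devstack-tempest
      parent: devstack-tempest
      nodeset: openstack-single-node-bionic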



[1] 
https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html 

[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions

[openstack-dev] [karbor][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Karbor team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Karbor is one of the most difficult projects when it comes to describing 
where it fits in the design goals, which may be an indication that we're 
missing something from the vision about the role OpenStack has to play 
in data protection. If that's the case, I'd be very interested in 
hearing what you think that should look like. For now perhaps the 
closest match is with the 'Basic Physical Data Center Management' goal, since 
Karbor is an abstraction for its various plugins, some of which must 
interact with the physical data center to accomplish their work.


Of the other sections, the Interoperability one is probably worth paying 
attention to. Any project which provides access to a lot of different 
vendor plugins always has to balance the desire to expose as much 
functionality as possible with the need to ensure that applications can 
be ported between OpenStack clouds running different sets of plugins. 
OpenStack places a high value on interoperability, so this is something 
to keep in mind when designing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [monasca][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Monasca team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Monasca is a project that has both user-facing and operator-facing 
functions, so it straddles the border of the scope of the vision 
document (which, to be clear, is not the same as the scope of OpenStack 
itself). The user-facing part is covered by the vision, and would 
probably fit under the 'Customisable Integration' design goal. I think 
the design principle for Monasca to be aware of here, as I mentioned at 
the PTG, is that alarms should work in such a way that it is up to the 
user where to direct them to - it could be autoscaling in Heat, 
autoscaling in Senlin, or something else that is completely 
application-specific.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [searchlight][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Searchlight team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Searchlight is one of the trickier projects to categorise. It's 
difficult to point to any of the listed 'Design Goals' in the document 
and say that Searchlight is contributing directly, although it does 
contribute a search capability to Horizon so arguably you could say it's 
a part of the GUI goal. But I think it is definitely contributing 
indirectly by helping the projects that do fulfill those design goals to 
better meet the requirements laid out in the preceding sections - in 
particular the one about Application Control. As such, I don't think 
there's any danger of this document appearing to exclude Searchlight 
from OpenStack, but it might be the case that we can learn from 
Searchlight and document more explicitly the things that it brings to 
the table as things that OpenStack should be striving for. I'd be 
interested in your thoughts on whether anything is missing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [barbican][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Barbican team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Barbican provides an abstraction over HSMs and software equivalents 
(like Vault), so the immediate design goal that it meets is the 
'Hardware Virtualisation' one. However, the most interesting part of the 
document for the Barbican team is probably the section on cross-project 
dependencies. In discussions at the PTG, the TC concluded that we 
shouldn't force projects to adopt hard dependencies on other services 
(like Barbican), but recommend that they do so when there is a benefit 
to the user. The challenge here, I think, is that not duplicating 
security-sensitive code such as secret storage is well known to be 
both of great benefit to the user and highly tempting to take a 
shortcut on. Your feedback on whether we have got the balance right 
is important.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [glance][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Glance team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


There's not a lot to say about Glance specifically in the document. 
Obviously a disk image management service is a fairly fundamental 
component of 'Basic Physical Data Center Management', so it certainly 
fits with the vision.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [keystone][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Keystone team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Identity management is specifically called out as a key aspect of the 
'Basic Physical Data Center Management' design goal, so obviously 
Keystone fits in there. However, there are other parts of the document 
that can also help provide guidance. One is the last paragraph of the 
'Customisable Integration' goal, which talks about which combinations of 
interactions need to be possible (needs that are currently met by a 
combination of application credentials and trusts), and the importance 
of least-privilege access and credential rotation. Another is the 
section on 'Application Control'. All of this is stuff we have talked 
about in the past so there should be no surprises, but hopefully this 
helps situate it all in the context of the bigger picture.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [cinder][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Cinder team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Clearly Cinder is an integral part of meeting the 'Basic Physical Data 
Center Management' design goal, and also contributes to the 'Hardware 
Virtualisation' goal.


The last paragraph in the 'Plays Well With Others' goal, about providing 
a standalone backend abstraction layer independently of the higher-level 
API (that might include e.g. scheduling and integration with other 
OpenStack services) was added with Cinder in mind, as I know that this 
is something the Cinder community has discussed, and it might also be 
applicable to other projects. Of course this is by no means mandatory, 
but it might be an interesting area to continue exploring.


The Partitioning section highlights the known mismatch between the 
concept of Availability Zones as borrowed from other clouds and the way 
operators use OpenStack, and offers a long-term design direction that 
Cinder might want to pursue in conjunction with Nova.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [manila][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Manila team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think that, like Cinder, Manila would qualify as contributing to the 
'Basic Physical Data Center Management' goal, since it also allows users 
to access external storage providers through a standardised API.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [swift][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Swift team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The vision puts Swift firmly in-scope as the provider of Infinite, 
Continuous Scaling for data storage. And of course Swift is also part of 
the 'Built-in Reliability and Durability' goal, since it provides 
extremely durable storage and spreads the cost across multiple tenants. 
This is clearly a critical aspect of any cloud, and I'm hopeful this 
exercise will help put to rest a lot of the pointless speculation about 
whether Swift 'really' belongs in OpenStack.


I know y'all have a very data-centric viewpoint on cloud that is 
probably unique in the OpenStack community, so I'm particularly 
interested in any insights you might have to offer on the vision as a 
whole from that perspective.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [cyborg][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Cyborg team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Cyborg is very obviously a major contributor to the 'Hardware 
Virtualisation' design goal. There's no attempt to make an exhaustive 
list of the types of hardware we want to virtualise, but if anything is 
obviously missing then please suggest it.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [ironic][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Ironic team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I'd say that Ironic definitely contributes to the 'Basic Physical Data 
Center Management' goal, since it manages physical resources in the data 
center and allows users to access them.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [designate][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Designate team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote DNS in as a late addition to the list of systems OpenStack needs 
to interface with for the 'Basic Physical Data Center Management' goal, 
because on reflection it seems essential to any basic physical data 
center that things outside the data center have some way of addressing 
resources running within it. If there's a more generic way of expressing 
that, or if you think Designate would be a better fit with some other 
design goal (whether it's already on the list or not), please let us know.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [octavia][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Octavia team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think the main design goal that applies to Octavia is the 'Hardware 
Virtualisation' one, since Octavia provides an API and abstraction layer 
over hardware (and software) load balancers. The 'Customisable 
Integration' goal plays a role too though, because even when a software 
load balancer is used, one advantage of having an OpenStack API for it 
is to allow integration with other OpenStack services (like autoscaling).


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [neutron][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Neutron team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Neutron pretty obviously falls under the goals of 'Basic Physical Data 
Center Management' and 'Hardware Virtualisation'.


The last paragraph of the 'Plays Well With Others' design goal (about 
offering standalone layers) was prompted by discussions in Cinder. My 
sense is that this is less relevant to Neutron because of the existence 
of OpenDaylight, but it might be something to pay particular attention 
to when reviewing the document.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [qinling][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Qinling team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Qinling offers perhaps the ultimate in 'Infinite, Continuous Scaling' 
for compute resources, by offering extremely fine-grained variation in 
the capacity utilized; by not reserving any capacity at all but sharing 
it in real time across tenants; and by at least in principle not having 
an upper bound for how big an application can scale without modifying 
its architecture. It also 'Plays Well With Others' by tightly 
integrating the backend components of a FaaS into OpenStack.


Qinling also has a role to play in the 'Customisable Integration' goal, 
since it offers a way for application developers to deploy some glue 
logic in the cloud itself without needing to either pre-allocate a chunk 
of resources (i.e. a VM) to it or to host it outside of the cloud.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [nova][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Nova team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The 'Basic Physical Data Center Management' goal was written to 
acknowledge Nova's central role in OpenStack, and emphasize that 
OpenStack differs from projects like Kubernetes in that we don't expect 
something else to manage the physical data center for us; we expect 
OpenStack to be the thing that does that for other projects. Obviously 
Nova is also covered by the 'Hardware Virtualisation' design goal.


The last paragraph of the 'Plays Well With Others' design goal was 
prompted by discussions in Cinder. I don't think the topic of other 
systems using parts of Nova standalone has ever really come up, but if 
it did this might be somewhere to look for guidance. (Note that it's 
phrased as completely optional.)


A couple of the other sections are also (I think) worthy of close 
attention. The principles in the 'Application Control' section of the 
cloud pillars remain important. Nova is a bit unusual in that there are 
a number of auxiliary services that provide functionality here (I'm 
thinking of e.g. Masakari) - which is good, but it means more things to 
think about: not only whether any given functionality is needed, but 
whether it is best provided by Nova or some other project, and, if the 
latter, how Nova can provide affordances for that project to integrate 
with it.


The Partitioning section was suggested by Jay. It highlights the known 
mismatch between the concept of Availability Zones as borrowed from 
other clouds and the way operators use OpenStack, and offers a long-term 
design direction without being prescriptive.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zun][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Zun team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Zun seems to fit nicely with the 'Infinite, Continuous Scaling' design 
goal, since it allows users to scale their applications and share 
physical resources at a more fine-grained level than a VM. I'm not 
actually up to date with the details under the hood, but from reading 
the docs it looks like it would also be doing Basic Physical Data Center 
Management - effectively doing what Nova does except with containers 
instead of VMs. And the future plans to integrate with Kubernetes also 
fit with the 'Plays Well With Others' design goal. I'm looking forward 
to your feedback on all of those areas, and I hope that the rest of the 
principles articulated in the vision will prove helpful to you as you 
make design decisions.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Zaqar team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The two design goals that Zaqar contributes to are 'Infinite, Continuous 
Scaling' and 'Built-in Reliability and Durability'. It allows 
application developers to do asynchronous messaging and have the scaling 
handled by the cloud, so they can send as many or as few messages as 
they need without having to scale in VM-sized chunks. And it offers 
reliable at-least-once delivery, so application developers can rely on 
the cloud to provide that, simplifying the fault tolerance requirements 
for the rest of the application.


Of course Zaqar can also fulfill a valuable role carrying messages from 
the OpenStack services to the application. This capability will be 
critical to achieving the ideals outlined in the 'Application Control' 
section, since delivery of event notifications from the cloud services 
to the application should be both asynchronous (the cloud can't wait for 
a user application) and reliable (so some sort of queuing is required).


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Blazar team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Blazar is one of the most interesting projects when it comes to defining 
a vision for OpenStack clouds, because it has a really well-defined set 
of goals around energy efficiency and capacity planning that we've so 
far failed to capture in the document. In the 'Self-Service' section we 
talk about aligning user charges with operators' opportunity costs, 
which hints at the leasing concept but seems incomplete without a 
discussion about capacity planning. Similarly, we talk in various places 
about reducing costs to users by sharing resources across tenants, but 
not about how to physically pack those resources to minimise the costs 
to operators. I would really value the Blazar team's input on where and 
how best to introduce these concepts into the vision.


As far as what we have already goes, I think the compute host 
reservation part of Blazar definitely qualifies as part of 'Basic 
Physical Data Center Management' since it's about optimally managing 
physical resources in the data center. Arguably the VM reservation part 
could too, as something that effectively augments Nova, but it's more of 
a stretch, which makes me wonder if there's something missing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Senlin team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


As with Heat, I think the most important of the design goals for Senlin 
is the Customisable Integration one. Senlin is already designed around 
this concept, with Receivers that have webhook URLs allowing users to 
wire alarms for any source together with autoscaling in whatever way 
they like. However, even more important than that is the way that Senlin 
helps the other services deliver on the 'Application Control' pillar, by 
helping applications manage their own infrastructure from within the 
cloud itself.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Heat team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think the most relevant design goal here for Heat is the one on 
Customisable Integration. This definitely has implications for how Heat 
designs things - for example, Heat follows these guidelines with its 
autoscaling implementation, by providing a webhook URL that can be used 
for scaling up and down and allowing users to wire it to either Aodh, 
Monasca, or some other thing (possibly of their own design). But beyond 
that, Heat is the service that actually provides the wiring, not only 
for itself but for all of OpenStack. When users want to connect 
resources from different services together, much of the time they'll be 
doing so using the declarative model of a Heat template.


The sections on Interoperability and Bidirectional Compatibility should 
also be important considerations when making design decisions, since 
Heat templates should help provide interoperability across clouds. The 
Cross-Project Dependencies section is also likely of interest, since 
several projects rely on Heat, and in fact in the distant past the TC 
used to require this, but that is no longer the case either in practice 
or in the document as proposed. Finally, the section on Application 
Control mentions the importance of allowing applications to authenticate 
securely to the cloud, which is something Heat has put a lot of work 
into and run into a lot of problems with. My hope is that this document 
will help to spread that focus further in other parts of OpenStack so 
that this kind of thing gets easier over time.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][aodh][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Telemetry team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The scope of the document (which doesn't attempt to cover the whole 
scope of OpenStack) is user-facing services, so within the Telemetry 
stable I think that means mostly just Aodh at this point? The most 
relevant design goal is probably 'Customisable Integration'. This 
section emphasises the importance of allowing users to connect alarms to 
whatever they wish - from other OpenStack services to something 
application-specific. With its support for arbitrary webhooks and 
optional trust-token authentication on outgoing alarms, Aodh is already 
doing a very good job with this.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Mistral team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I see Mistral contributing to two of the design goals. First, it helps 
with Customisable Integration by enabling application developers to 
incorporate glue logic between cloud services or between the application 
and cloud services, and host it in the cloud without the need to 
pre-allocate a VM for it. Secondly, it also contributes to the Built-in 
Reliability and Durability goal by providing applications with a 
highly-reliable way of maintaining workflow state without the need for 
the application itself to do it.


The sections on Bidirectional Compatibility and Interoperability will 
probably be relevant to design decisions in Mistral, since workbooks are 
one of the artifact types that I'd expect to help with interoperability 
across clouds. The Cross-Project Dependencies section may also be of 
special interest to review, since Mistral is a service that many other 
OpenStack services could potentially rely on.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Masakari team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


In my view, Masakari's role in terms of the design goals is to augment 
Nova (which obviously fits in the Basic Physical Data Center Management 
and Hardware Virtualisation goals) to improve its compliance with the 
section on Application Control of the infrastructure. Without Masakari 
there's no good way for an application to be notified about events like 
failure of a VM or hypervisor, and no way to perform some of the 
recovery actions.


The section on Customisable Integration states that we place a lot of 
value on allowing users and applications to configure how they want to 
handle events (including events like failures) rather than acting 
automatically, because every application's requirements are unique. This 
is probably going to be a valuable thing to keep in mind when making 
design decisions in Masakari.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [solum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Solum team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


As I understand it, Solum's goal is to provide native OpenStack 
integration for PaaS layers, so it would be covered by the 'Plays Well 
With Others' design goal.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Freezer team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


For the purposes of this document we can largely ignore the Freezer 
guest agent, because we're only looking at cloud services. (To be clear, 
this doesn't mean the guest agent is outside the scope of OpenStack, 
just that it doesn't need to be covered by the vision document.) It 
appears to me that the Freezer API is targeting the 'Built-in 
Reliability and Durability' design goal: it provides a way to e.g. 
reliably trigger and generally manage the backup process, and by making 
it a cloud service the cost of providing that can be spread across 
multiple tenants. But it may be that we should also say something more 
specific about OpenStack's role in data protection. Perhaps y'all could 
work with the Karbor team to figure out what.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Murano team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


To be honest, nothing in the document we have so far really captures the 
scope and ambition of the original vision behind Murano. You could say 
that it fulfils a similar role to Heat in meeting the Customisable 
Integration goal by being one of the components that users can use to 
wire the various services that OpenStack offers together into a coherent 
application, and functionally that would be a pretty accurate 
description. But nothing in there suggests that we want OpenStack to 
produce a standard packaging format for cloud application components or 
a marketplace where they can be published. Is that still part of the 
vision for Murano after the closure of the application catalog? Is it 
something that should be explicitly part of the vision for OpenStack 
clouds? If so, what should that look like?


The sections on Interoperability and Bidirectional Compatibility 
formalise what are already important design considerations for Murano, 
since one goal of its packaging format is obviously to provide 
interoperability across clouds.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Sahara team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote the 'Abstract Specialised Operations' design goal specifically 
to cover Sahara and Trove. (As you can see, I was really 
struggling to find a good, generic name for the principle; better 
suggestions are welcome.) I think this is a decent explanation for why 
Hadoop-as-a-Service should be in OpenStack, but I am by no means an 
expert so I would really like to hear the Sahara team's perspective on it.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Trove team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote the 'Abstract Specialised Operations' design goal specifically 
to cover Trove (and Sahara). (As you can see, I was really 
struggling to find a good, generic name for the principle; better 
suggestions are welcome.) This is the best explanation I could think of 
to explain why it's important to have a DBaaS in OpenStack, even if it 
only scales at a coarse granularity (as opposed to a DynamoDB-style 
service like MagnetoDB was, which would be a natural fit for the 
'Infinite, Continuous Scaling' design goal). However, the Trove team 
might well have a different perspective on why Trove is important to 
OpenStack, so I would very much like to hear your feedback and suggestions.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Horizon team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


We have said that other kinds of user interface, e.g. the CLI, are out 
of scope for the document (though not for OpenStack, of course!). After 
some discussion, we decided that Horizon being a service was more 
important to its categorisation than it being a user interface, so I 
wrote the Graphical User Interface design goal to ensure that it is 
covered. However, I'm sure y'all have spent much more time thinking 
about what Horizon contributes to OpenStack than I, so your feedback and 
suggestions are needed.


That is not the only way in which I think this document is relevant to 
the Horizon team: one of my goals with the exercise is to encourage the 
service projects to make sure their APIs make all of the 
operationally-relevant information available and legible to 
applications. That would include e.g. surfacing events, which I know is 
something that Horizon has wanted for a long time, and hopefully this 
will lead to easier ways to build a GUI without as much polling.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Magnum team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Magnum would fall under the 'Plays Well With Others' design goal, as 
it's one way of integrating OpenStack with Kubernetes, ensuring that 
OpenStack users have access to container orchestration tools. And it's 
also an example (along with Sahara and Trove) of the 'Abstract 
Specialised Operations' goal, since it allows operators to have a 
centralised team of Kubernetes cluster operators to serve multiple tenants.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-22 Thread Zane Bitter

On 22/10/18 10:33 AM, Thomas Goirand wrote:

On 10/19/18 5:17 PM, Zane Bitter wrote:

We have traditionally held to the principle that we want each release to
support the latest release of CentOS and the latest LTS release of
Ubuntu, as they existed at the beginning of the release cycle.[2]
Currently this means in practice one version of py2 and one of py3, but
in the future it will mean two, usually different, versions of py3.


It's not very nice to forget about the Debian case, which usually
closely precedes Ubuntu. If you want to support Ubuntu better, then
supporting Debian better helps. I usually hit these issues before
everyone else, as Sid is the distro that is updated most often.
Therefore, please make sure to include Debian in your proposal.


This is something that needs to be addressed separately I think. It has 
been our long-standing, documented testing policy. If you want to change 
it, make a proposal. For the purposes of this discussion though, the 
main point to take away from the paragraph you quoted is that once 
Python2 is EOL there will rarely be a _single_ version of Python3 that 
is sufficient to support even 2 distros, let alone more.


I haven't forgotten about you, and in fact one of the goals of this 
process is to ensure that we stay up-to-date and not get into situations 
like you had in Rocky where we were two releases behind. Debian will 
definitely benefit from that.



For unit tests, the most important thing is to test on the versions of
Python we target. It's less important to be using the exact distro that
we want to target, because unit tests generally won't interact with
stuff outside of Python.


One of the recurring problems I'm facing in Debian is that not only is
the Python 3 version lagging behind, but OpenStack dependencies are
also lagging behind the distro. Often, the answer is "we don't support
this or that version of X", which of course is very frustrating. One
thing that would be super nice would be a non-voting gate job that
tests with the latest version of every Python dependency as well, so we
get to see breakage early. We've stopped seeing these breakages since
we decided they happen too often, and we now hide the problems behind
the global-requirements mechanism.


I'll leave this to the requirements team, who are more qualified to comment.


And sometimes, we have weird interactions. For example, taskflow was
broken in Python 3.7 before this patch:
https://salsa.debian.org/openstack-team/libs/python-taskflow/commit/6a10261a8a147d901c07a6e7272dc75b9f4d0988

which broke multiple packages using it. Funny thing: it looks like it
wouldn't have happened if we hadn't had a pre-release of Python 3.7.1 in
Sid, apparently. Anyway, this can happen again.
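
To make the class of breakage concrete, here is a minimal hypothetical 
sketch (illustrating the best-known Python 3.7 change, not necessarily 
the exact taskflow bug): 3.7 promoted 'async' and 'await' to reserved 
keywords, so code that used them as ordinary names stopped even parsing:

    # Hypothetical sketch: fine on Python <= 3.6, but a SyntaxError
    # on Python 3.7+, where 'async' became a reserved keyword.
    def notify(event, async=False):
        ...

    # The usual fix was a mechanical rename of the offending names:
    def notify(event, is_async=False):
        ...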


So, for example, (and this is still under active debate) for Stein we
might have gating jobs for py35 and py37, with a periodic job for py36.
The T jobs might only have voting py36 and py37 jobs, but late in the T
cycle we might add a non-voting py38 job on master so that people who
haven't switched to the U template yet can see what, if anything,
they'll need to fix.


This can only happen if we have supporting distribution packages for it.
IMO, this is a call for using Debian Testing or even Sid in the gate.


It depends on which versions we choose to support, but if necessary yes.


We'll run the unit tests on any distro we can find that supports the
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a
particular Python version before trying to test it.


I very much agree with that.


Before the start of each cycle, the TC would determine which range of
versions we want to support, on the basis of the latest one we can find
in any distro and the earliest one we're likely to need in one of the
supported Linux distros.


Releases of Python aren't aligned with OpenStack cycles. Python 3.7
appeared late in the Rocky cycle. Therefore, unfortunately, doing what
you propose above doesn't address the issue.


This is valuable feedback; it's important to know where there are 
real-world cases that we're not addressing.


Python 3.7 was released 3 weeks after rocky-2 and only 4 weeks before 
rocky-3. TBH I find it hard to imagine any process that would have led 
us to attempt to get every OpenStack project supporting 3.7 in Rocky 
without a radical change in our conception of how OpenStack is distributed.


On the bright side, under this process we would have had 3.6 support in 
Ocata and we could have automatically added a non-voting (or periodic) 
3.7 job during Rocky development as soon as a distro was available for 
testing, which would at least have made it easier to locate problems 
earlier even if we didn't get full 3.7 support until the Stein release.
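
As a sketch of what that retroactive step might have looked like in 
openstack-zuul-jobs (assuming the openstack-tox-pyXX unit test job 
names; this is illustrative, not an actual change that was made):

    - project-template:
        name: openstack-python3-rocky-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37:
                voting: false
        gate:
          jobs:
            - openstack-tox-py35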



Integration Tests
-

Integration tests do test, amongst other things, integration with
non-openstack-supplied things in the distro, so it's important th

Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-22 Thread Zane Bitter

On 22/10/18 10:38 AM, Thomas Goirand wrote:

On 10/22/18 12:55 PM, Chris Dent wrote:

My assumption is that it's "something we plan to minimally maintain
because we depend on it". in which case all options would work: the
exact choice depends on whether there is anybody interested in helping
maintaining it, and where those contributors prefer to do the work.


Thus far I'm not hearing any volunteers. If that continues to be the
case, I'll just keep it on bitbucket as that's the minimal change.


Could you please move it to GitHub, so that at least it's easier to
check out? Mercurial is always a pain...


FWIW as one data point I probably would have fixed the py37 pull request 
myself instead of just commenting, had it not involved doing:


* A pull request
* on bitbucket
* with Mercurial

(I used to like the Mercurial UI, but it turns out that after _really_ 
learning Git... my brain is full and I can't remember anything else.)


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter

On 19/10/18 11:17 AM, Zane Bitter wrote:
I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.
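
Concretely, such a template might look something like this - a minimal 
sketch using the existing openstack-tox-pyXX unit test jobs (the real 
definition would live in openstack-zuul-jobs, and the exact job set is 
still under debate, as noted above):

    - project-template:
        name: openstack-python3-stein-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        periodic:
          jobs:
            - openstack-tox-py36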


Just to make it easier to visualise, here is an example for how the Zuul 
config _might_ look now if we had adopted this proposal during Rocky:


https://review.openstack.org/611947

And instead of having a project-wide goal in Stein to add 
`openstack-python36-jobs` to the list that currently includes 
`openstack-python35-jobs` in each project's Zuul config[1], we'd have 
had a goal to change `openstack-python3-rocky-jobs` to 
`openstack-python3-stein-jobs` in each project's Zuul config.
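
In a given project's Zuul config, that per-cycle patch would then be a 
one-line template swap, along these lines (hypothetical project file):

    - project:
        templates:
          # previously: openstack-python3-rocky-jobs
          - openstack-python3-stein-jobs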


- ZB


[1] 
https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter

On 19/10/18 12:30 PM, Clark Boylan wrote:

On Fri, Oct 19, 2018, at 8:17 AM, Zane Bitter wrote:

Unit Tests
--

For unit tests, the most important thing is to test on the versions of
Python we target. It's less important to be using the exact distro that
we want to target, because unit tests generally won't interact with
stuff outside of Python.

I'd like to propose that we handle this by setting up a unit test
template in openstack-zuul-jobs for each release. So for Stein we'd have
openstack-python3-stein-jobs. This template would contain:


Because Zuul config is branch-specific, we could set up every project to use an 
`openstack-python3-jobs` template and then define that template differently on each 
branch. This would mean you only have to update the location where the template 
is defined, rather than updating every other project after cutting a stable 
branch. I would suggest we take advantage of that to reduce churn.


There was a reason I didn't propose that approach: in practice you can't 
add a new gating test to a centralised zuul template definition. If you 
do, many projects will break because the change is not self-testing. At 
best you'll be pitchforked by an angry mob of people who can't get 
anything but py37 fixes through the gate, and at worst they'll all stop 
using the template to get unblocked and then never go back to it.


We don't need everyone to cut over at the same time. We just need them 
to do it in the space of one release cycle. One patch every 6 months is 
not an excessive amount of churn.



* A voting gate job for the highest minor version of py3 we want to
support in that release.
* A voting gate job for the lowest minor version of py3 we want to
support in that release.
* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest
minor version of py3 we want to support in the *next* release (if
different), on the master branch only.

So, for example, (and this is still under active debate) for Stein we
might have gating jobs for py35 and py37, with a periodic job for py36.
The T jobs might only have voting py36 and py37 jobs, but late in the T
cycle we might add a non-voting py38 job on master so that people who
haven't switched to the U template yet can see what, if anything,
they'll need to fix.

We'll run the unit tests on any distro we can find that supports the
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a
particular Python version before trying to test it.

Before the start of each cycle, the TC would determine which range of
versions we want to support, on the basis of the latest one we can find
in any distro and the earliest one we're likely to need in one of the
supported Linux distros. There will be a project-wide goal to switch the
testing template from e.g. openstack-python3-stein-jobs to
openstack-python3-treasure-jobs for every repo before the end of the
cycle. We'll have goal champions as usual following up and helping teams
with the process. We'll know where the problem areas are because we'll
have added non-voting jobs for any new Python versions to the previous
release's template.


I don't know that this needs to be a project wide goal if you can just update 
the template on the master branch where the template is defined. Do that then 
every project is now running with the up to date version of the template. We 
should probably advertise when this is happening with some links to python 
version x.y breakages/features, but the process itself should be quick.


Either way, it'll be project teams themselves fixing any broken tests 
due to a new version being added. So we can either have a formal 
project-wide goal where we project-manage that process across the space 
of a release, or a de-facto project-wide goal where we break everybody 
and then nothing gets merged until they fix it.



As for Python version range selection, I worry that the criteria above rely 
on too much guesswork.


Some guesswork is going to be inevitable, unfortunately, (we have no way 
of knowing what will be in CentOS 8, for example) but I agree that we 
should try to tighten up the criteria as much as possible.



I do think we should do our best to test future incoming versions of python 
even while not officially supporting them. We will have to support them at some 
point, either directly or via some later version that includes the changes from 
that intermediate version.


+1, I think we should try to add support for higher versions as soon as 
possible. It may take a long time to get into an LTS release, but 
there's bound to be _some_ distro out there where people want to use it. 
(Case in point: Debian really wanted py37 support in Rocky, at which 
point a working 3.7 wasn't even available in _any_ Ubuntu release, let 
alone an LTS). That's why I said "the latest one we can find in any 
distro".

[openstack-dev] Proposal for a process to keep up with Python releases

2018-10-19 Thread Zane Bitter
There hasn't been a Python 2 release in 8 years, and during that time 
we've gotten used to the idea that that's the way things go. However, 
with the switch to Python 3 looming (we will drop support for Python 2 
in the U release[1]), history is no longer a good guide: Python 3 
releases drop as often as every year. We are already feeling the pain 
from this, as Linux distros have largely already completed the shift to 
Python 3, and those that have are on versions newer than the py35 we 
currently have in gate jobs.


We have traditionally held to the principle that we want each release to 
support the latest release of CentOS and the latest LTS release of 
Ubuntu, as they existed at the beginning of the release cycle.[2] 
Currently this means in practice one version of py2 and one of py3, but 
in the future it will mean two, usually different, versions of py3.


There are two separate issues that we need to address: unit tests (we'll 
define this as code tested in isolation, within or spawned from within 
the testing process), and integration tests (we'll define this as code 
running in its own process, tested from the outside). I have two 
separate but related proposals for how to handle those.


I'd like to avoid discussing which versions of things we think should be 
supported in Stein in this thread. Let's come up with a process that we 
think is a good one to take into T and beyond, and then retroactively 
apply it to Stein. Competing proposals are of course welcome, in 
addition to feedback on this one.


Unit Tests
--

For unit tests, the most important thing is to test on the versions of 
Python we target. It's less important to be using the exact distro that 
we want to target, because unit tests generally won't interact with 
stuff outside of Python.


I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.


We'll run the unit tests on any distro we can find that supports the 
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian 
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a 
particular Python version before trying to test it.


Before the start of each cycle, the TC would determine which range of 
versions we want to support, on the basis of the latest one we can find 
in any distro and the earliest one we're likely to need in one of the 
supported Linux distros. There will be a project-wide goal to switch the 
testing template from e.g. openstack-python3-stein-jobs to 
openstack-python3-treasure-jobs for every repo before the end of the 
cycle. We'll have goal champions as usual following up and helping teams 
with the process. We'll know where the problem areas are because we'll 
have added non-voting jobs for any new Python versions to the previous 
release's template.


Integration Tests
-

Integration tests do test, amongst other things, integration with 
non-openstack-supplied things in the distro, so it's important that we 
test on the actual distros we have identified as popular.[2] It's also 
important that every project be testing on the same distro at the end of 
a release, so we can be sure they all work together for users.


When a new release of CentOS or a new LTS release of Ubuntu comes out, 
the TC will create a project-wide goal for the *next* release cycle to 
switch all integration tests over to that distro. It's up to individual 
projects to make the switch for the tests that they own (e.g. it'd be 
the QA team for Tempest, but other individual projects for their own 
jobs). Again, there'll be a goal champion to monitor and follow up.



[1] 
https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html
[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-16 Thread Zane Bitter

On 15/10/18 8:00 PM, Monty Taylor wrote:

On 10/15/2018 06:39 PM, Zane Bitter wrote:


In fact, as far as we know the version we have to support in CentOS 
may actually be 3.5, which seems like a good reason to keep it working 
for long enough that we can find out for sure one way or the other.


I certainly hope this is not what ends up happening, but seeing as how I 
actually do not know - I agree, I cannot discount the possibility that 
such a thing would happen.


I'm right there with ya.

That said - until such a time as we get to actually drop python2, I 
don't see it as an actual issue. The reason being - if we test with 2.7 
and 3.7 - the things in 3.6 that would break 3.5 get gated by the 
existence of 2.7 for our codebase.


Case in point- the instant 3.6 is our min, I'm going to start replacing 
every instance of:


   "foo {bar}".format(bar=bar)

in any code I spend time in with:

   f"foo {bar}"

It TOTALLY won't parse on 3.5 ... but it also won't parse on 2.7.

If we decide as a community to shift our testing of python3 to be 3.6 - 
or even 3.7 - as long as we still are testing 2.7, I'd argue we're 
adequately covered for 3.5.


Yeah, that is a good point. There are only a couple of edge-case 
scenarios where that might not prove to be the case. One is where we 
install a different (or a different version of a) 3rd-party library on 
py2 vs. py3. The other would be where you have some code like:


  if six.PY3:
      some_std_lib_function_added_in_3_6()
  else:
      py2_code()

It may well be that we can say this is niche enough that we don't care.

In theory the same thing could happen between versions of python3 (e.g. 
if we only tested on 3.5 & 3.7, and not 3.6). There certainly exist 
places where we check the minor version.* However, that's so much less 
likely again that it definitely seems negligible.


* e.g. 
https://git.openstack.org/cgit/openstack/oslo.service/tree/oslo_service/service.py#n207
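
To illustrate, a minimal sketch of that kind of minor-version check (just 
the general pattern, not the actual oslo.service code linked above):

   import sys

   # Take a different code path depending on the interpreter's minor
   # version; such branches are only exercised if the gate actually
   # runs a job on each side of the boundary.
   if sys.version_info[:2] >= (3, 6):
       code_path = 'py36+'
   else:
       code_path = 'legacy'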


The day we decide we can drop 2.7 - if we've been testing 3.7 for 
python3 and it turns out RHEL/CentOS 8 ship with python 3.5, then 
instead of just deleting all of the openstack-tox-py27 jobs, we'd 
probably just need to replace them with openstack-tox-py35 jobs, as that 
would be our new low-water mark.


Now, maybe we'll get lucky and RHEL/CentOS 8 will be a future-looking 
release and will ship with python 3.7 AND so will the corresponding 
Ubuntu LTS - and we'll get to only care about one release of python for 
a minute. :)


Come on - I can dream, right?


Sure, but let's not get complacent - 3.8 is right around the corner :)

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-15 Thread Zane Bitter

On 15/10/18 4:10 PM, Jeremy Stanley wrote:

On 2018-10-15 15:00:07 -0400 (-0400), Zane Bitter wrote:
[...]

That said, I don't think we should be dropping support/testing for 3.5.
According to:

   https://governance.openstack.org/tc/reference/pti/python.html

3.5 is the only Python3 version that we require all projects to run tests
for.


Until we update it to refer to the version provided by the test
platforms we document at:

https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions


I'm sure we will update it, but as of now we haven't. People shouldn't 
have to guess which TC-maintained documentation is serious and which 
stuff they should just ignore on an ad-hoc basis. If it says 3.5 then 
the answer is 3.5 until somebody submits a patch and the TC approves it.



Our goal is to get everyone running 3.6 unit tests by the end of Stein:

https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs

but we explicitly said there that we were not dropping support for 3.5 as
part of the goal, and should continue to do so until we can effect an
orderly transition later.

[...]

We're not dropping support for 3.5 as part of the python3-first
goal, but would be dropping it as part of the switch from Ubuntu
16.04 LTS (which provides Python 3.5) to 18.04 LTS (which provides
Python 3.6). In the past the OpenStack Infra team has prodded us to
follow our documented testing platform policies as new versions
become available, but now with a move to providing infrastructure
services to other OSF projects as well we're on our own to police
this.

We _could_ decide that we're going to start running tests on
multiple versions of Python 3 indefinitely (rather than as a
transitional state during the switch from Ubuntu Xenial to Bionic)


This is inevitable at some point - we say that we'll support both the 
latest release of Ubuntu LTS *and* CentOS. So far that's been irrelevant 
for Python3 because CentOS has only Python2, but we know that the next 
CentOS release will have Python3 and from that point on we will for sure 
be in a situation where we are supporting multiple Python3 versions, not 
always contiguous, for the indefinite future (because the release cycles 
of Ubuntu & CentOS are not aligned in any way).


In fact, as far as we know the version we have to support in CentOS may 
actually be 3.5, which seems like a good reason to keep it working for 
long enough that we can find out for sure one way or the other.



but that does necessarily mean running more jobs. We could also
decide to start targeting different versions of Python than provided
by the distros on which we run our tests (and build it from source
ourselves or something) but I think that's only reasonable if we're
going to also recommend that users deploy OpenStack on top of
custom-compiled Python interpreters rather than the interpreters
provided by server distros like RHEL and Ubuntu.


I am definitely spoiled by Fedora, where I have every version from 3.3 
to 3.7 installed from the distro packages.



So to sum up the above, it's less a question of whether we're
dropping Python 3.5 testing in Stein, and more a question of whether
we're going to continue requiring OpenStack to also be able to run
on Ubuntu 16.04 LTS (which wasn't the latest LTS even at the start
of the cycle).


There's actually another whole level of discussion we probably need to 
have. So far we've talked about unit tests, but running functional tests 
is a whole other thing, and one where we really do probably want to pick 
a single version of Ubuntu to run on for the sake of the gate (and I'd 
suggest that that version should probably be Bionic, if we can get 
everything working on 3.6 early enough in the cycle).


That process would have been a lot easier if we were earlier on 3.6, so 
I'm grateful to the folks who are already working on 3.7 (which is a 
much more substantial change) to hopefully make this less painful in the 
future.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-10-15 Thread Zane Bitter

On 12/10/18 8:59 AM, Corey Bryant wrote:



On Thu, Oct 11, 2018 at 10:19 AM Andreas Jaeger wrote:


On 10/10/2018 23.10, Jeremy Stanley wrote:
 > I might have only pointed this out on IRC so far, but the
 > expectation is that testing 3.5 and 3.6 at the same time was merely
 > transitional since official OpenStack projects should be moving
 > their testing from Ubuntu Xenial (which provides 3.5) to Ubuntu
 > Bionic (which provides 3.6 and, now, 3.7 as well) during the Stein
 > cycle and so will drop 3.5 testing on master in the process.

Agreed, this needs some larger communication and explanation on what
to do,


The good news is we now have an initial change underway and successful, 
dropping py35 and enabling py37: https://review.openstack.org/#/c/609557/


Hey Corey,
Thanks for getting this underway, it's really important that we keep 
moving forward (we definitely got behind on the 3.6 transition and are 
paying for it now).


That said, I don't think we should be dropping support/testing for 3.5. 
According to:


  https://governance.openstack.org/tc/reference/pti/python.html

3.5 is the only Python3 version that we require all projects to run 
tests for.


Our goal is to get everyone running 3.6 unit tests by the end of Stein:


https://governance.openstack.org/tc/goals/stein/python3-first.html#python-3-6-unit-test-jobs

but we explicitly said there that we were not dropping support for 3.5 
as part of the goal, and should continue to do so until we can effect an 
orderly transition later. Personally, I would see that including waiting 
for all the 3.5-supporting projects to add 3.6 jobs (which has been 
blocked up until ~this point, as we are only just now close to getting 
all of the repos using local Zuul config).


I do agree that anything that works on 3.5 and 3.7 will almost certainly 
work on 3.6, so if you wanted to submit a patch to that goal saying that 
projects could add a unit test job for *either* 3.6 or 3.7 (in addition 
to 3.5) then I would probably support that. We could then switch all the 
3.5 jobs to 3.6 later when we eventually drop 3.5 support. That would 
mean we'd only ever run 3 unit test jobs (and 2 once 2.7 is eventually 
dropped) - for the oldest and newest versions of Python 3 that a project 
supports.


cheers,
Zane.

[This thread was also discussed on IRC starting here: 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-15.log.html#t2018-10-15T18:09:05]


I'm happy to get things moving along and start proposing changes like 
this to other projects and communicating with PTLs along the way. Do you 
think we need more discussion/communication on this or should I get started?


Thanks,
Corey


Andreas
-- 
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
   GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-11 Thread Zane Bitter

On 10/10/18 1:35 PM, Jay Pipes wrote:

+tc topic

On 10/10/2018 11:49 AM, Fox, Kevin M wrote:
Sorry. Couldn't quite think of the name. I was meaning, openstack 
project tags.


I think having a tag that indicates the project is no longer using 
SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an 
excellent idea, Kevin. ++


I would support such a tag, especially if it came with detailed 
instructions on how to audit your code to make sure you are not doing 
this with sqlalchemy. (Bonus points for a flake8 plugin that can be 
enabled in the gate.)
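
To make the bonus points concrete, here's a minimal sketch of what such a 
flake8 plugin could look like. The plugin name and message code are made 
up, and flagging SQLAlchemy's with_for_update() call is just one heuristic 
an audit might use:

   import ast

   class NoForUpdateChecker(object):
       # flake8 discovers AST plugins via a setuptools entry point (the
       # 'flake8.extension' group); the class itself only needs
       # name/version attributes and a run() generator.
       name = 'flake8-no-for-update'
       version = '0.1'

       def __init__(self, tree):
           self.tree = tree

       def run(self):
           for node in ast.walk(self.tree):
               if (isinstance(node, ast.Call)
                       and isinstance(node.func, ast.Attribute)
                       and node.func.attr == 'with_for_update'):
                   yield (node.lineno, node.col_offset,
                          'X001 SELECT FOR UPDATE is unsafe on '
                          'multi-writer Galera', type(self))

Registering it in setup.cfg under the flake8 entry point group is what 
would actually enable it in a gate job.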


(One question for clarification: is this actually _required_ to use 
multi-writer Galera? My previous recollection was that it was possible, 
but inefficient, to use SELECT FOR UPDATE safely as long as you wrote a 
lot of boilerplate to restart the transaction if it failed.)
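
For reference, a self-contained sketch of both patterns in SQLAlchemy. The 
Node model is invented for illustration, and SQLite is only used so the 
example runs anywhere; the Galera concerns apply to MySQL backends:

   import sqlalchemy as sa
   from sqlalchemy.ext.declarative import declarative_base
   from sqlalchemy.orm import sessionmaker

   Base = declarative_base()

   class Node(Base):
       __tablename__ = 'node'
       id = sa.Column(sa.Integer, primary_key=True)
       state = sa.Column(sa.String(16))

   engine = sa.create_engine('sqlite://')
   Base.metadata.create_all(engine)
   session = sessionmaker(bind=engine)()
   session.add(Node(id=1, state='BUILDING'))
   session.commit()

   # The pattern to audit for: pessimistic row locking via
   # SELECT ... FOR UPDATE.
   node = session.query(Node).filter_by(id=1).with_for_update().one()
   node.state = 'ACTIVE'
   session.commit()

   # The optimistic alternative: an atomic compare-and-swap UPDATE.
   # Callers retry when rows == 0 instead of blocking on a row lock.
   rows = session.query(Node).filter_by(id=1, state='ACTIVE').update(
       {'state': 'DELETED'})
   session.commit()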



-jay



From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, October 09, 2018 12:22 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, 
vault, fabio and FQDN endpoints


On 10/09/2018 03:10 PM, Fox, Kevin M wrote:
Oh, this does raise an interesting question... Should such 
information be reported by the projects up to users through labels? 
Something like, "percona_multimaster=safe" Its really difficult for 
folks to know which projects can and can not be used that way currently.


Are you referring to k8s labels/selectors? or are you referring to
project tags (you know, part of that whole Big Tent thing...)?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-04 Thread Zane Bitter

On 4/10/18 1:47 PM, Jeremy Stanley wrote:

On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
[...]

TC members, please reply to this thread and indicate if you would
find meeting at 1300 UTC on the first Thursday of every month
acceptable, and of course include any other comments you might
have (including alternate times).


This time is acceptable to me. As long as we ensure that community
feedback continues more frequently in IRC and on the ML (for example
by making it clear that this meeting is expressly *not* for that)
then I'm fine with resuming formal meetings.


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][python3][heat][stable] how should we proceed with ocata branch

2018-10-03 Thread Zane Bitter

On 3/10/18 9:42 AM, Matt Riedemann wrote:

On 10/3/2018 7:58 AM, Doug Hellmann wrote:

There is one more patch to import the zuul configuration for the
heat-agents repository's stable/ocata branch. That branch is apparently
broken, and Zane suggested on the review [1] that we abandon the patch
and close the branch.

That patch is the only thing blocking the cleanup patch in
project-config, so I would like to get a definitive answer about what to
do. Should we close the branch, or does someone want to try to fix
things up?


I think we agreed on closing the branch, and Rico was looking into the 
procedure for how to actually do that.



Doug

[1]https://review.openstack.org/#/c/597272/


I'm assuming heat-agents is a service, not a library, since it doesn't 
show up in upper-constraints.


It's a guest agent, so neither :)

Based on that, does heat itself plan on 
putting its stable/ocata branch into extended maintenance mode and if 


Wearing my Red Hat hat, I would be happy to EOL it. But wearing my 
upstream hat, I'm happy to keep maintaining it, and I was not proposing 
that we EOL heat's stable/ocata as well.


so, does that mean EOLing the heat-agents stable/ocata branch could 
cause problems for the heat stable/ocata branch? In other words, will it 
be reasonable to run CI for stable/ocata heat changes against a 
heat-agents ocata-eol tag?


I don't think that's a problem. The guest agents rarely change, and I 
don't think there's ever been a patch backported by 4 releases.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-01 Thread Zane Bitter

On 28/09/18 1:19 PM, Chris Dent wrote:
They aren't arbitrary. They are there for a reason: a trait is a 
boolean capability. It describes something that either a provider is 
capable of supporting or it isn't.


This is somewhat (maybe even only slightly) different from what I
think the definition of a trait is, and that nuance may be relevant.

I describe a trait as a "quality that a resource provider has" (the
car is blue). This contrasts with a resource class which is a
"quantity that a resource provider has" (the car has 4 doors).


I'm not sure that quality vs. quantity is actually the right distinction 
here... someone could equally argue that having 4 doors is itself a 
quality[1] of a car, and they could certainly come up with a formulation 
that obscures the role of quantity altogether (the car is a sedan).


I think the actual distinction you're describing is between discrete (or 
perhaps just enumerable) and continuous (or at least innumerable) values.


What that misses is that if the car is blue, it cannot also be green. 
Since placement absolutely should not know anything at all about the 
meaning of traits, this means that clients will be required to implement 
a bunch of business logic to maintain consistency. Furthermore, should 
the colour of the car change from blue to green at some point in the 
future[2], I am assuming that placement will not offer an API that 
allows both traits to be updated atomically. Those are problems that 
key-value solves.
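
To spell out the contrast with a trivial sketch (the CUSTOM_* trait names 
are invented for illustration):

   # As boolean traits, nothing stops a provider from carrying two
   # mutually exclusive values at once:
   traits = {'CUSTOM_COLOR_BLUE', 'CUSTOM_COLOR_GREEN'}

   # As a key-value pair, a key holds exactly one value, and replacing
   # it is naturally atomic:
   properties = {'color': 'blue'}
   properties['color'] = 'green'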


It could be the case that those problems are not considered important in 
this context; if so I'd expect to see the reasons explained as part of 
this discussion.


cheers,
Zane.

[1] Resisting the urge to quote Stalin here.
[2] https://en.wikipedia.org/wiki/New_riddle_of_induction

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Zane Bitter

On 28/09/18 9:25 AM, Eric Fried wrote:

It's time somebody said this.

Every time we turn a corner or look under a rug, we find another use
case for provider traits in placement. But every time we have to have
the argument about whether that use case satisfies the original
"intended purpose" of traits.

That's the only reason I've ever been able to glean: that it (whatever "it"
is) wasn't what the architects had in mind when they came up with the
idea of traits. We're not even talking about anything that would require
changes to the placement API. Just, "Oh, that's not a *capability* -
shut it down."


So I have no idea what traits or capabilities are (in this context), but 
I have a bit of experience with running a busy project where everyone 
wants to get their pet feature in, so I'd like to offer a couple of 
observations if I may:


* Conceptual integrity *is* important.

* 'Everything we could think of before we had a chance to try it' is not 
an especially compelling concept, and using it in place of one will tend 
to result in a lot of repeated arguments.


Both extremes ('that's how we've always done it' vs. 'free-for-all') are 
probably undesirable. I'd recommend trying to document traits in 
conceptual, rather than historical, terms. What are they good at? What 
are they not good at? Is there a limit to how many there can be while 
still remaining manageable? Are there other potential concepts that 
would map better to certain borderline use cases? That won't make the 
arguments go away, but it should help make them easier to resolve.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Zane Bitter

On 27/09/18 7:00 PM, Duc Truong wrote:

On Thu, Sep 27, 2018 at 11:14 AM Zane Bitter  wrote:



and we will gradually fade out the existing 'AutoScalingGroup'
and related resource types in Heat.


That's almost impossible to do without breaking existing users.


One approach would be to switch the underlying Heat AutoScalingGroup
implementation to use Senlin and then deprecate the AutoScalingGroup
resource type in favor of the Senlin resource type over several
cycles. 


The hard part (or one hard part, at least) of that is migrating the 
existing data.



Not saying that this is the definitive solution, but it is
worth discussing as an option since this follows a path other projects
have taken (e.g. nova-volume extraction into cinder).


+1, *definitely* worth discussing.


A prerequisite to this approach would probably require Heat to create
the so-called common library to house the autoscaling code.  Then
Senlin would need to achieve feature parity against this autoscaling
library before the switch could happen.



Clearly there are _some_ parts that could in principle be shared. (I
added some comments to the etherpad to clarify what I think Rico was
referring to.)

It seems to me that there's value in discussing it together rather than
just working completely independently, even if the outcome of that
discussion is that


+1.  The outcome of any discussion will be beneficial not only to the
teams but also the operators and users.

Regards,

Duc (dtruong)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-09-27 Thread Zane Bitter

On 26/09/18 10:27 PM, Qiming Teng wrote:

Hi,

Due to many reasons, I cannot join you on this event, but I do like to
leave some comments here for references.

On Tue, Sep 18, 2018 at 11:27:29AM +0800, Rico Lin wrote:

*TL;DR*
*How about a forum in Berlin for discussing autoscaling integration (as a
long-term goal) in OpenStack?*


First of all, there is nothing called "auto-scaling" in my mind and
"auto" is most of the time a scary word to users. It means the service
or tool is hiding some details from the users when it is doing something
without human intervention. There are cases where this can be useful,
there are also many other cases the service or tool is messing up things
to a state difficult to recover from. What matters most is the usage
scenarios we support. I don't think users care that much how project
teams are organized.


Yeah, I mostly agree with you, and in fact I often use the term 'scaling 
group' to encompass all of the different types of groups in Heat. Our 
job is to provide an API that is legible to external tools to increase 
and decrease the size of the group. The 'auto' part is created by 
connecting it with other services, whether they be OpenStack services 
like Aodh or Monasca, monitoring services provided by the user 
themselves, or just manual invocation.


(BTW people from the HA-clustering world have a _very_ negative reaction 
to Senlin's use of the term 'cluster'... there is no perfect terminology.)



Hi all, we are starting to discuss how we can join development between Heat
and Senlin, as we originally planned when we decided to fork Senlin from
Heat a long time ago.

IMO the biggest issues we have now are that we have users using autoscaling
in both services, there appears to be a lot of duplicated effort, and some
great enhancements exist in one service but not the other.
As a long-term goal (from the beginning), we should try to join development
to sync functionality, and move users to use Senlin for autoscaling. So we
should start to review this goal, or at least try to discuss how we can
help users without breaking or enforcing anything.


The original plan, iirc, was to make sure Senlin resources are supported
in Heat,


This happened.


and we will gradually fade out the existing 'AutoScalingGroup'
and related resource types in Heat.


That's almost impossible to do without breaking existing users.


I have no clue as to when Heat became
interested in "auto-scaling" again.


It's something that Rico and I have been discussing - it turns out that 
Heat still has a *lot* of users running very important stuff on Heat 
scaling group code which, as you know, is burdened by a lot of technical 
debt.



What would be great is if we could build a common library across projects,
use that common library in both projects, make sure we have all
improvements implemented in that library, and finally have the Heat
autoscaling group call Senlin through that library. In the long term, we
would let all users use one general way instead of multiple ways that
generate huge confusion for users.


The so called "auto-scaling" is always a solution, built by
orchestrating many moving parts across the infrastructure. In some
cases, you may have to install agents into VMs for workload metering.


Totally agree, but...


I
am not convinced this can be done using a library approach.


Clearly there are _some_ parts that could in principle be shared. (I 
added some comments to the etherpad to clarify what I think Rico was 
referring to.)


It seems to me that there's value in discussing it together rather than 
just working completely independently, even if the outcome of that 
discussion is that



*As an action, I propose we have a forum in Berlin and sync up all effort
from both teams to plan an ideal scenario design. Forum submissions [1]
close on 9/26.*
It would also benefit both teams to start thinking about how they can
modularize those functionalities for easier integration in the future.

From some Heat PTG sessions, we keep bringing out ideas on how we can
improve the current solutions for autoscaling. We should start to talk
about whether it would make sense to combine all group resources into one
and inherit from it for other resources (ideally deprecating the rest of
the resource types). For example, we can do batch create/delete in
ResourceGroup, but not in ASG. We definitely have some unsynchronized work
within Heat, and across Heat and Senlin.


Totally agree with you on this. We should strive to minimize the
technologies users have to master when they have a need.


+1 - to expand on Rico's example, we have at least 3 completely separate 
implementations of batching, each supporting different actions:


Heat AutoscalingGroup: updates only
Heat ResourceGroup: create or update
Senlin Batch Policy: updates only

and users are asking for batch delete as well. This is clearly an area 
where technical debt from duplicate implementations is making it hard to 
deliver value to users.


cheers,
Zane.


Re: [openstack-dev] [Heat] Bug in documentation?

2018-09-26 Thread Zane Bitter

On 26/09/18 12:02 PM, Postlbauer, Juan wrote:

Hi everyone:

I see that heat doc 
https://docs.openstack.org/heat/rocky/template_guide/openstack.html#OS::Nova::Flavor 
states that


ram
    Memory in MB for the flavor.

disk
    Size of local disk in GB.

That would be 1000*1000 for ram and 1000*1000*1000 for disk.

But Nova doc 
https://developer.openstack.org/api-ref/compute/#create-flavor states that:


ram    (body, integer)    The amount of RAM a flavor has, in MiB.

disk   (body, integer)    The size of the root disk that will be created in GiB.

That would be 1024*1024 for ram and 1024*1024*1024 for disk. Which, at 
least for ram, makes much more sense to me.


Is this a typo in Heat documentation?


No, but it's ambiguous in a way that MiB/GiB would not be. Feel free to 
submit a patch.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists!

2018-09-24 Thread Zane Bitter

On 20/09/18 5:46 PM, Doug Hellmann wrote:

Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +:

tl;dr: The openstack, openstack-dev, openstack-sigs and
openstack-operators mailing lists (to which this is being sent) will
be replaced by a new openstack-disc...@lists.openstack.org mailing
list.


Since last week there was some discussion of including the openstack-tc
mailing list among these lists to eliminate confusion caused by the fact
that the list is not configured to accept messages from all subscribers
(it's meant to be used for us to make sure TC members see meeting
announcements).

I'm inclined to include it and either use a direct mailing or the
[tc] tag on the new discuss list to reach TC members, but I would
like to hear feedback from TC members and other interested parties
before calling that decision made. Please let me know what you think.


+1. I already sort mail to the -tc list and mail to the -dev list with 
the [tc] tag into the same mailbox, so the value to me of having a list 
that only TC members can post to without moderation is zero.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal

2018-09-19 Thread Zane Bitter

On 18/09/18 9:10 PM, Jaesuk Ahn wrote:
On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter wrote:


Restoring the whole quote here because I accidentally sent the original 
to the -sigs list only and not the -dev list.



As others have mentioned, I think this is diving into solutions when we haven't 
defined the problems. I know you mentioned it briefly in the PTG session, but 
that context never made it to the review or the mailing list.

So AIUI the issue you're trying to solve here is that the TC members seem 
distant and inaccessible to Chinese contributors because we're not on the same 
social networks they are?

Perhaps there are others too?

Obvious questions to ask from there would be:

- Whether this is the most important issue facing contributors from the APAC 
region

- To what extent the proposed solution is expected to help



I do agree with Zane on the above point.


For the record, I didn't express an opinion. I'm just pointing out what 
the questions are.


As one of OpenStack participants from Asia region, I will put my 
personal opinion.
IRC and ML has been an unified and standard way of communication in 
OpenStack Community, and that has been a good way to encourage "open 
communication" on a unified method wherever you are from, or whatever 
background you have. If the whole community start recognize some other 
tools (say WeChat) as recommended alternative communication method 
because there are many people there, ironically, it might be a way to 
break "diversity" and "openness" we want to embrace.


Using whatever social media (or tools) in a specific region due to any 
reason is not a problem. Anyone is free to use anything. Only thing we 
need to make sure is, if you want to communicate officially with the 
whole community, there is a very well defined and unified way to do it. 
This is currently IRC and ML. Some of Korean dev has difficulties to use 
IRC. However, there is not a perfect tool out there in this world, and 
we accept all the reason why the community selected IRC as official tool


But, that being said, There are some things I am facing with IRC from 
here in Korea


As a person from Asia, I do have some pain points. Because of time 
differences, I often have to search the archives, since most of the 
conversations happened while I was sleeping. IRC is not a good tool for 
searching the backlog. Although there are message archives you can dig 
through, it is still hard. This is a problem. I would love to see a 
technical solution that lets me efficiently and easily go through the IRC 
backlog, like most modern chat tools.


Secondly, IRC is not popular even in the dev community here in Korea. 
In addition, in order to use IRC properly, you need to do extra work, 
such as setting up a bouncer server. I had to do a Google search to 
figure out how to use it.


I think part of the disconnect here is that people have different ideas 
about what IRC (and chat in general) is for.


For me it's a way to conduct synchronous conversations. These tend to go 
badly on the mailing list (really long threads of 1 sentence per 
message) or on code review (have to keep refreshing), so it's good that 
we have another tool to do this. I answer a lot of user questions, 
clarify comments on patches, and obviously join team meetings in IRC.


The key part is 'synchronous' though. If I'm not there, the conversation 
is not going to be synchronous. I don't run a bouncer, although I 
generally leave my computer running when I'm not working so you'll often 
(but not always) be able to ping me, and I'll usually look back to see 
if it was something important. Otherwise it's 50-50 whether I'll even 
bother to read scrollback, and certainly not for more than a couple of 
channels.


Other people, however, have a completely different perspective: they 
want a place where they are guaranteed to be reachable at any time (even 
if they don't see it until later) and the entire record is always right 
there. I think Slack was built for those kinds of people. You would have 
to drag me kicking and screaming into Slack even if it weren't 
proprietary software.


I don't know where WeChat falls on that spectrum. But maybe part of the 
issue is that we're creating too high an expectation of what it means to 
participate in the community (e.g. if you're not going to set up a 
bouncer and be reachable 24/7 then you might as well not get involved at 
all - this is 100% untrue). I've seen several assertions, including in 
the review, that any decisions must be documented on the mailing list or 
IRC, and I'm not sure I agree. IMHO, any decisions should be documented 
on the mailing list, period.


I'd love to see more participation on the mailing list. Since it is 
asynchronous already it's somewhat friendlier to those in APAC time 
zones (although there are still issues, real or perceived, with 
decisions being reached before anyone

Re: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee

2018-09-18 Thread Zane Bitter

On 17/09/18 5:07 PM, Jay Pipes wrote:
Also, for the record, I actually wasn't referring to Adjutant 
specifically when I referred in my original post to "only tangentially 
related to cloud computing". I was referring to my recollection of 
fairly recent history. I remember the seemingly endless debates about 
whether some applicants "fit" the OpenStack ecosystem or whether the 
applicant was merely trying to jump on a hype bandwagon for marketing 
purposes. Again, I wasn't specifically referring to Adjutant here, so I 
apologize if my words came across that way.


Thanks for the clarification. What you're referring to is also an 
acknowledged problem, which we discussed at the Forum and are attempting 
to address with the Technical Vision (which we need to find a better 
name for). We didn't really discuss that on the Sunday though, because 
it was a topic on the formal agenda for Friday. Sunday's discussion was 
purely a retrospective on the Adjutant application, so you should read 
Doug's summary in that context.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] notes from stein ptg meetings of the technical committee

2018-09-17 Thread Zane Bitter

On 17/09/18 3:06 PM, Jay Pipes wrote:

On 09/17/2018 01:31 PM, Doug Hellmann wrote:

New Project Application Process
===

We wrapped up Sunday with a discussion of of our process for reviewing
new project applications. Zane and Chris in particular felt the
process for Adjutant was too painful for the project team because
there was no way to know how long discussions might go on and no
way for them to anticipate some of the issues they encountered.

We talked about formalizing a "coach" position to have someone from
the TC (or broader community) work with the team to prepare their
application with sufficient detail, seek feedback before voting
starts, etc.

We also talked about adding a time limit to the process, so that
teams at least have a rejection with feedback in a reasonable amount
of time.  Some of the less contentious discussions have averaged
from 1-4 months with a few more contentious cases taking as long
as 10 months. We did not settle on a time frame during the meeting,
so I expect this to be a topic for us to work out during the next
term.


So, to summarize... the TC is back to almost exactly the same point it 
was at right before the Project Structure Reform happened in 2014-2015 
(that whole Big Tent thing).


I wouldn't go that far. There are more easy decisions than there were 
before the reform, but there still exist hard decisions. This is perhaps 
inevitable.


The Project Structure Reform occurred because the TC could not make 
decisions on whether projects should join OpenStack using objective 
criteria, and due to this, new project applicants were forced to endure 
long waits and subjective "graduation" reviews that could change from 
one TC election cycle to the next.


The solution to this was to make an objective set of application 
criteria and remove the TC from the "Supreme Court of OpenStack" role 
that new applicants needed to come before and submit to the court's 
judgment.


Many people complained that the Project Structure Reform was the TC 
simply abrogating responsibility for being a judgmental body.


It seems that although we've now gotten rid of those objective criteria 
for project inclusion and gone back to the TC being a subjective 
judgmental body, that the TC is still not actually willing to pass 
judgment one way or the other on new project applicants.


No criteria have been gotten rid of, but even after the Project 
Structure Reform there existed criteria that were subjective. Here is a 
thread discussing them during the last TC election:


http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html

(I actually think that the perception that the criteria should be 
entirely objective might be a contributor to the problem: when faced 
with a subjective decision and no documentation or precedent to guide 
them, TC members can be reluctant to choose.)


Is this because it is still remarkably unclear what OpenStack actually 
*is* (the whole mission/scope thing)?


Or is this because TC members simply don't want to be the ones to say 
"No" to good-meaning people


I suspect both of those reasons are probably in the mix, along with a 
few others as well.


that may have an idea that is only 
tangentially related to cloud computing?


It should be noted that in this case Adjutant pretty clearly fills an 
essential use case for public clouds. The debate was around whether 
accepting it was likely to lead to the desired standardisation across 
public OpenStack clouds or effectively act as an official endorsement 
for API fragmentation.


It's not clear that any change to the criteria could have made this 
particular decision any easier.


Things did seem to go more smoothly after we nominated a couple of 
people to work directly with the project to polish their application, 
and in retrospect we probably should have treated it with more urgency 
rather than e.g. waiting for a face-to-face discussion at the Forum 
before attempting to make progress. Those are the lessons behind the 
process improvements that we discussed last week that Doug summarised above.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-31 update

2018-09-12 Thread Zane Bitter

On 4/09/18 5:39 PM, Ben Nemec wrote:
Would it be helpful to factor some of the common code out into an Oslo 
library so projects basically just have to subclass, implement check 
functions, and add them to the _upgrade_checks dict? It's not a huge 
amount of code, but a bunch of it seems like it would need to be 
copy-pasted into every project. I have a tentative topic on the Oslo PTG 
schedule for this but figured I should check if it's something we even 
want to pursue.


+1. We started discussing this today and immediately realised it was 
going to result in every project copy/pasting the code to create a 
<project>-status executable and the upgrade-check command itself. It 
would be great if we can avoid this from the start.
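
As a strawman, usage of such a library might look something like this. The 
module, class and result API here are hypothetical (loosely modelled on 
nova's upgrade check code), since the library doesn't exist yet:

   from oslo_upgradecheck import upgradecheck

   class Checks(upgradecheck.UpgradeCommands):
       """Upgrade checks for a hypothetical 'myservice' project."""

       def _check_something(self):
           # Each check inspects the deployment and returns a result.
           return upgradecheck.Result(upgradecheck.Code.SUCCESS)

       # The base class iterates this and prints one table row per check.
       _upgrade_checks = (
           ('Something check', _check_something),
       )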



On 09/04/2018 04:29 PM, Matt Riedemann wrote:

Just a few updates this week.

1. The story is now populated with a task per project that may have 
something to complete for this goal [1]. PTLs, or their liaison(s), 
should assign the task for their project to whomever is going to work 
on the goal. The goal document in governance is being updated with the 
appropriate links to storyboard [2].


2. While populating the story and determining which projects to omit 
(like infra, docs, QA were obvious), I left in the deployment projects 
but those likely can/should opt-out of this goal for Stein since the 
goal is more focused on service projects like keystone/cinder/glance. 
I have pushed a docs updated to the goal with respect to deployment 
projects [3]. For deployment projects that don't plan on doing 
anything with this goal, feel free to just invalidate the task in 
storyboard for your project.


3. I have a developer/contributor reference docs patch up for review 
in nova [4] which is hopefully written generically enough that it can 
be consumed by and used as a guide for other projects implementing 
these upgrade checks.


4. I've proposed an amendment to the completion criteria for the goal 
[5] saying that projects with the "supports-upgrade" tag should 
integrate the checks from their project with their upgrade CI testing 
job. That could be grenade or some other upgrade testing framework, 
but it stands to reason that a project which claims to support 
upgrades and has automated checks for upgrades, should be running 
those in their CI.


Let me know if there are any questions. There will also be some time 
during a PTG lunch-and-learn session where I'll go over this goal at a 
high level, so feel free to ask questions during or after that at the 
PTG as well.


[1] https://storyboard.openstack.org/#!/story/2003657
[2] https://review.openstack.org/#/c/599759/
[3] https://review.openstack.org/#/c/599835/
[4] https://review.openstack.org/#/c/596902/
[5] https://review.openstack.org/#/c/599849/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [OpenStack-Infra] [StoryBoard] Project Update/Some New Things

2018-09-10 Thread Zane Bitter

On 10/09/18 10:34 AM, Jeremy Stanley wrote:

On 2018-09-10 14:43:18 +0100 (+0100), Adam Coldrick wrote:
[...]

# Linking to projects by name

Keen observers might've noticed that StoryBoard recently grew the ability
to link to projects by name, rather than by ID number. All the links to
projects in the UI have been replaced with links in this form, and it's
probably a good idea for folk to start using them in any documentation
they have. These links look like

   https://storyboard.openstack.org/#!/project/openstack-infra/storyboard


Thanks for this!!!


[...]

Worth noting, this has made it harder to find the numeric project ID
without falling back on the API. Change
https://review.openstack.org/600893 merged to the releases
repository yesterday allowing deliverable repositories to be
referenced by their StoryBoard project name rather than only the ID
number. If there are other places in tooling and automation where we
relied on the project ID number, work should be done to update those
similarly.


In the docs configuration we use the ID for generating the bugs 
link. We also rely on it being a numeric ID (as a string - it crashes if 
you use an int) rather than a name to determine whether the target is 
a Launchpad project or a StoryBoard project.
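
For example, something like this in a project's doc/source/conf.py (treat 
the exact openstackdocstheme option names as assumptions from memory; 
'456' is a made-up project ID):

   # openstackdocstheme settings used to build the bug-reporting link
   repository_name = 'openstack/heat'
   use_storyboard = True
   bug_project = '456'   # numeric StoryBoard ID; must be a string, not an int
   bug_tag = 'docs'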



# Finding stories from a task ID

It is now possible to navigate to a story given just a task ID, if for
whatever reason that's all the information you have available. A link like

   https://storyboard.openstack.org/#!/task/12389

will work. This will redirect to the story containing the task, and is the
first part of work to support linking directly to an individual task in a
story.

[...]

As an aside, I think this makes it possible now for us to start
hyperlinking Task footers in commit messages within the Gerrit
change view. I'll try and figure out what we need to adjust in our
Gerrit commentlink and its-storyboard plugin configs to make that
happen.


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goal][python3] week 3 update

2018-08-28 Thread Zane Bitter

On 27/08/18 15:37, Doug Hellmann wrote:

== Next Steps ==

If your team is ready to have your zuul settings migrated, please
let us know by following up to this email. We will start with the
volunteers, and then work our way through the other teams.


Heat is ready. I already did master (and by extension stable/rocky) a 
little while back in openstack/heat, but you should check that it's 
correct :)


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-20 Thread Zane Bitter

On 18/08/18 18:22, Eric Fried wrote:

A year ago we might have developed a feature where one patch would
straddle placement and nova. Six months ago we were developing features
where those patches were separate but in the same series. Today that's
becoming less and less the case: nrp, sharing providers, consumer
generations, and other things mentioned have had their placement side
completed and their nova side - if started at all - done completely
independently. The reshaper series is an exception - but looking back on
its development, Depends-On would have worked just as well.


So you've given a list here of things that you think wouldn't gain any 
particular benefit from being under the same governance. (Or possibly 
this is just an argument for being in a separate repo, which everybody 
already agrees with?) Mel gave a list of things she thinks _would_ 
benefit from shared governance. Was there anything on her list that 
you'd disagree with? Is there anything on your list that Mel or Dan or 
anybody else would disagree with? Why?


(Note: I personally don't even think it matters, but this is how you 
reach consensus.)



Agree the nova project is overloaded and would benefit from having
broader core reviewer coverage over placement code.  The list Chris
gives above includes more than one non-nova core who should be made
placement cores as soon as that's a thing.


I agree with this, but separate governance is not a prerequisite for it. 
Having a different/larger core team for a repo in Gerrit is technically 
very easy, and our governance rules leave it completely up to the 
project team (represented by the PTL) to decide. Mel indicated what I'd 
describe as non-opposition to that on IRC, provided that the nova-core 
team retained core review rights on the placement repo.[1] How does the 
Nova team as a whole feel about that? Would anybody object? Would that 
be sufficient to resolve the placement team's concerns about core 
reviewer coverage?


cheers,
Zane.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-20.log.html#t2018-08-20T17:36:58




Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-20 Thread Zane Bitter

On 20/08/18 14:02, Chris Friesen wrote:
In order to address the "velocity of change in placement" issues, how 
about making the main placement folks members of nova-core with the 
understanding that those powers would only be used in the new placement 
repo?


That kind of 'understanding' is only needed (because of limitations in 
Gerrit) when working in the same repo. Once it's in a separate repo you 
just create a new 'placement-core' group and make nova-core a member of it.
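
For the Gerrit side that's a small ACL file in project-config, plus 
adding nova-core as an included group through the Gerrit group UI. An 
illustrative sketch (the real file would live at something like 
gerrit/acls/openstack/placement.config):

    [access "refs/heads/*"]
        abandon = group placement-core
        label-Code-Review = -2..+2 group placement-core
        label-Workflow = -1..+1 group placement-core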




Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-20 Thread Zane Bitter

On 17/08/18 11:51, Chris Dent wrote:

One of the questions that has come up on the etherpad is about how
placement should be positioned, as a project, after the extraction.
The options are:

* A repo within the compute project
* Its own project, either:
   * working towards being official and governed
   * official and governed from the start


So since this is under heavy discussion in #openstack-tc, and Ed asked 
for folks who are not invested in either side, allow me to offer this 
suggestion:


It just doesn't matter.

The really important thing here, and it sounds like one that everybody 
agrees on, is that placement gets split out into its own repo. That will 
enable things to move forward both technically (helping other projects 
to more easily consume it) and socially (allowing it to use a separate 
Gerrit ACL so it can add additional core reviewers with +2 rights only 
on that repo). So let's focus on getting that done.


It seems unlikely to me that having the placement repo technically under 
the governance of the Nova project will present anywhere near the level 
of obstacle to other projects using it as having it in the same repo as 
Nova currently does, if they are even aware of it at all. Conversely, I 
consider it equally unlikely that placement living outside of the Nova 
umbrella altogether would result in significant divergence between its 
interests and those of Nova.


If you want my personal opinion then I'm a big believer in incremental 
change. So, despite recognising that it is born of long experience of 
which I have been blissfully mostly unaware, I have to disagree with 
Chris's position that if anybody lets you change something then you 
should try to change as much as possible in case they don't let you try 
again. (In fact I'd go so far as to suggest that those kinds of 
speculative changes are a contributing factor in making people reluctant 
to allow anything to happen at all.) So I'd suggest splitting the repo, 
trying things out for a while within Nova's governance, and then 
re-evaluating. If there are at that point specific problems that separate 
governance would appear to address, then it's only a trivial governance 
patch and a PTL election away. It should also be much easier to get 
consensus at that point than it is at this distance where we're only 
speculating what things will be like after the extraction.


I'd like to point out for the record that Mel already said this and said 
it better and is AFAICT pretty much never wrong :)


cheers,
Zane.



[openstack-dev] [all][tc] Technical Vision statement: feedback sought

2018-08-16 Thread Zane Bitter
The TC has undertaken to attempt to write a technical vision statement 
for OpenStack that documents the community's consensus on what we're 
trying to build. To date the only thing we've had to guide us is the 
mission statement[1], which is exactly one sentence long and uses 
undefined terms (like 'cloud'). That can lead to diverging perspectives 
and poor communication.


No group is charged with designing OpenStack at a high level - it is the 
sum of what individual teams produce. So the only way we're going to end 
up with a coherent offering is if we're all moving in the same direction.


The TC has also identified that we're having conversations about whether 
a new project fits with the OpenStack mission too late - only after the 
project applies to become official. We're hoping that updates to this 
document can provide a mechanism to have those conversations earlier.


A first draft review is now available for comment:

https://review.openstack.org/592205

We're soliciting feedback on the review, on the mailing list, on IRC 
during TC office hours or any time that's convenient to you in 
#openstack-tc, and during the PTG in Denver.


If the vision as written broadly matches yours then we'd like to hear 
from you, and if it does not we *need* to hear from you. The goal is to 
have something that the entire community can buy into, and although that 
means not everyone will be able to get their way on every topic we are 
more than willing to make changes in order to find consensus. Everything 
is up for grabs, including the form and structure of the document itself.


cheers,
Zane.

[1] 
https://docs.openstack.org/project-team-guide/introduction.html#the-mission




Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-16 Thread Zane Bitter

On 15/08/18 16:34, Ben Nemec wrote:
Since there were no objections, I've added Zane to the oslo.service core 
team.  Thanks and welcome, Zane!


Thanks team! I'll try not to mess it up :)


On 08/03/2018 11:58 AM, Ben Nemec wrote:

Hi,

Zane has been doing some good work in oslo.service recently and I 
would like to add him to the core team.  I know he's got a lot on his 
plate already, but he has taken the time to propose and review patches 
in oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.

-Ben



Re: [openstack-dev] [oslo][mox][python3][goal] need help with mox3 and python 3.6

2018-08-14 Thread Zane Bitter

On 14/08/18 15:45, Doug Hellmann wrote:

The python 3.6 unit test job has exposed an issue with mox3. It looks
like it might just be in the test suite, but I can't tell.

I'm looking for one of the folks who suggested we should just keep
maintaining mox3 to help fix it. Please go ahead and take over the
relevant patch and include whatever changes are needed.

https://review.openstack.org/#/c/589591/


I'm not one of those people (and I'm not oblivious to the fact that this 
was a not-especially-subtly coded message for mriedem), but I fixed it.


Please don't make me a maintainer now ;)

cheers,
Zane.



Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-07 Thread Zane Bitter

Top posting to avoid getting into the weeds.

* OpenStack is indeed lagging behind
* The road to 3.7 (and eventually 3.8) runs through 3.6
* As part of the project-wide python3-first goal we aim to have 
everything working on 3.6 for Stein, so we are making some progress at least

* As of now we are *not* dropping support for 3.5 in Stein
* No matter what we do, the specific issue you're encountering is 
structural: we don't add support for a Python version in the gate until 
it is available in an Ubuntu LTS release, and that doesn't happen until 
after it is available in Debian, so you will always have the problem 
that new Python versions will be introduced in Debian before we have a 
gate for them
* Structural problems require structural solutions; "everybody work 
harder/pay more attention/prioritise differently" will not do it
* I don't see any evidence that people are refusing to review patches 
that fix 3.7 issues, and I certainly don't think fixing them is 
'controversial'


On 07/08/18 10:11, Thomas Goirand wrote:

On 08/07/2018 03:24 PM, Sean Mooney wrote:

so im not sure pushing for python 3.7 is the right thing to do. also i would not
assume all distros will ship 3.7 in the near term. i have not checked lately but
i believe centos 7 makes 3.4 and 3.6 available in the default repos.
ubuntu 18.04 ships with 3.6 i believe


The current plan for Debian is that we'll be trying to push for Python
3.7 for Buster, which freezes in January. This freeze date means that
it's going to be Rocky that will end up in the next Debian release. If
Python 3.7 is a failure, then late November, we will remove Python 3.7
from Unstable and let Buster release with 3.6.

As for Ubuntu, it is currently unclear if 18.10 will be released with
Python 3.7 or not, but I believe they are trying to do that. If not,
then 19.04 will for sure be released with Python 3.7.


im not sure about other linux distros but since most openstack
deployments are done on LTS releases of operating systems i would
suspect that python 3.6 will be the main python 3 version we see
deployed in production for some time.


In short: that's wrong.


having a 3.7 gate is not a bad idea but priority-wise having a 3.6 gate
would be much higher on my list.


Wrong list. One version behind.


i think we as a community will have to decide on the minimum and
maximum python 3 versions
we support for each release and adjust as we go forward.


Whatever the OpenStack community decides is not going to change what
distributions like Debian will do. This type of reasoning lacks
much-needed humility.


i would suggest a min of 3.5 and max of 3.6 for rocky.


My suggestion is that these bugs are of very high importance and
at least deserve attention. That the gate for Python 3.7 isn't
ready, I can understand, as everyone's time is limited. This
doesn't mean that the OpenStack community at large should just dismiss
patches that are important for downstream.


for stein perhaps bump that to a min of 3.6 and max of 3.7 but i think this is
something that needs to be addressed community-wide
via a governance resolution rather than per project.


At this point, dropping 3.5 isn't a good idea either, even for Stein.


it will also
impact the external python libs we can depend on, which is
another reason i think this needs to be a community-wide discussion and
goal that is informed by what distros are doing but
not mandated by what any one distro is doing.
regards
sean.


Postponing any attempt to support anything current is always a bad idea.
I don't see why there's even a controversy when one attempts to fix bugs
that will, sooner or later, also hit the gate.

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-06 Thread Zane Bitter

On 06/08/18 13:11, Thomas Goirand wrote:

On 08/02/2018 10:43 AM, Andrey Kurilin wrote:

 There are also some "raise StopIteration" issues in:
 - ceilometer
 - cinder
 - designate
 - glance
 - glare
 - heat
 - karbor
 - manila
 - murano
 - networking-ovn
 - neutron-vpnaas
 - nova
 - rally


Can you provide any traceback or steps to reproduce the issue for the
Rally project?


I assume Thomas is only trying to run the unit tests, since that's what 
he has to do to verify the package?



I'm not sure there's any. The only thing I know is that it has
StopIteration stuff, but I'm not sure if they are part of generators, in
which case they should simply be replaced by "return" if you want it to
be py3.7 compatible.


I was about to say nobody is doing 'raise StopIteration' where they mean 
'return' until I saw that the Glance tests apparently were :D


The main issue though is when StopIteration is raised by one thing that 
happens to be called from *another* generator. e.g. many of the Heat 
tests that are failing are because we supplied a too-short list of 
side-effects to a mock and calling next() on them raises StopIteration, 
but because the calls were happening from inside a generator the 
StopIterations previously just got swallowed. If no generator were 
involved the test would have failed with the StopIteration exception. 
(Note: this was a bug - either in the code or more likely the tests. The 
purpose of the change in py37 was to expose this kind of bug wherever it 
exists.)
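
A self-contained sketch of that failure mode (illustrative, not from 
any real test suite):

    import unittest.mock as mock

    def squares(fetch):
        # A generator that pulls values from some callable.
        while True:
            yield fetch() ** 2

    m = mock.Mock(side_effect=[1, 2])  # too-short side_effect list

    for value in squares(m):
        print(value)
    # Python <= 3.6: prints 1 and 4, then the StopIteration raised by
    # the exhausted mock silently terminates the generator, so a test
    # written this way 'passes' without noticing the missing value.
    # Python 3.7 (PEP 479): prints 1 and 4, then raises
    # RuntimeError: generator raised StopIteration - exposing the bug.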



I didn't have time to investigate these, but at least Glance was
affected, and a patch was sent (as well as an async patch). None of them
has been merged yet:

https://review.openstack.org/#/c/586050/
https://review.openstack.org/#/c/586716/

That'd be ok if at least there were some reviews. It looks like nobody
cares but Debian & Ubuntu people... :(

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [OpenStack-dev][heat][keystone][security sig][all] SSL option for keystone session

2018-08-06 Thread Zane Bitter

On 06/08/18 00:46, Rico Lin wrote:

Hi all
I would like to trigger a discussion on providing SSL content directly 
to the Keystone session. Since all teams use SSL, I believe this may 
concern other projects as well.


As we consider implementing a customized SSL option for Heat remote 
stacks [3] (and multicloud support [1]), I'm trying to figure out the 
best solution for this. The current SSL option in the Keystone session 
doesn't allow us to provide the CERT/key content directly; it only 
allows us to provide a CERT/key file path. That is actually a 
limitation of Python versions below 3.7 ([2]). As we're not going to 
get rid of older Python versions easily, we're trying to figure out the 
best solution we can approach here.


One way we can think of is to use a pipe, or to create a file, 
encrypt it, and send the file path to the Keystone session.


I would like to hear advice or suggestions from all on how we can 
approach this.


Create a temporary directory using tempfile.mkdtemp() as shown here:

https://security.openstack.org/guidelines/dg_using-temporary-files-securely.html#correct

This probably only needs to happen once per process. (Also I would pass 
mode=0o600 when creating the file instead of using umask().)


Assuming the data gets read only once, then I'd suggest rather than 
using a tempfile, create a named pipe using os.mkfifo(), open it, and 
write the data. Then pass the filename of the FIFO to the SSL lib. Close 
it again after and remove the pipe.
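
A minimal POSIX-only sketch of that approach, assuming the consumer 
(the SSL lib) opens and reads the path exactly once; the names here 
are illustrative:

    import os
    import tempfile
    import threading

    def expose_secret(secret_bytes):
        # Private temp dir (created with mode 0o700), so the path
        # isn't readable by other users.
        tmpdir = tempfile.mkdtemp()
        path = os.path.join(tmpdir, 'secret.pem')
        os.mkfifo(path, 0o600)

        def _writer():
            # open() on a FIFO blocks until the reader opens the
            # other end, so write from a separate thread.
            with open(path, 'wb') as f:
                f.write(secret_bytes)

        t = threading.Thread(target=_writer)
        t.start()
        return path, t

    # path, t = expose_secret(b'-----BEGIN PRIVATE KEY-----\n...')
    # ...hand `path` to the SSL lib, then clean up:
    # t.join(); os.unlink(path); os.rmdir(os.path.dirname(path))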



[1] https://etherpad.openstack.org/p/ptg-rocky-multi-cloud
[2] https://www.python.org/dev/peps/pep-0543/
[3] https://review.openstack.org/#/c/480923/
--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin







Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Zane Bitter

On 17/07/18 10:44, Thierry Carrez wrote:

Finally found the time to properly read this...


For anybody else who found the wall of text challenging, I distilled the 
longest part into a blog post:


https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations


Zane Bitter wrote:

[...]
We chose to add features to Nova to compete with vCenter/oVirt, and 
not to add features that would have enabled OpenStack as a whole to 
compete with more than just the compute provisioning subset of 
EC2/Azure/GCP.


Could you give an example of an EC2 action that would be beyond the 
"compute provisioning subset" that you think we should have built into 
Nova ?


Automatic provisioning/rotation of application credentials.
Reliable, user-facing event notifications.
Collection of usage data suitable for autoscaling, billing, and whatever 
it is that Watcher does.


Meanwhile, the other projects in OpenStack were working on building 
the other parts of an AWS/Azure/GCP competitor. And our vague 
one-sentence mission statement allowed us all to maintain the delusion 
that we were all working on the same thing and pulling in the same 
direction, when in truth we haven't been at all.


Do you think that organizing (tying) our APIs along [micro]services, 
rather than building a sanely-organized user API on top of a 
sanely-organized set of microservices, played a role in that divide ?


TBH, not really. If I were making a list of contributing factors I would 
probably put 'path dependence' at #1, #2 and #3.


At the start of this discussion, Jay posted on IRC a list of things that 
he thought shouldn't have been in the Nova API[1]:


- flavors
- shelve/unshelve
- instance groups
- boot from volume where nova creates the volume during boot
- create me a network on boot
- num_instances > 1 when launching
- evacuate
- host-evacuate-live
- resize where the user 'confirms' the operation
- force/ignore host
- security groups in the compute API
- force delete server
- restore soft deleted server
- lock server
- create backup

Some of those are trivially composable in higher-level services (e.g. 
boot from volume where nova creates the volume, get me a network, 
security groups). I agree with Jay that in retrospect it would have been 
cleaner to delegate those to some higher level than the Nova API (or, 
equivalently, for some lower-level API to exist within what is now 
Nova). And maybe if we'd had a top-level API like that we'd have been 
more aware of the ways that the lower-level ones lacked legibility for 
orchestration tools (oaktree is effectively an example of a top-level 
API like this, I'm sure Monty can give us a list of complaints ;)


But others on the list involve operations at a low level that don't 
appear to me to be composable out of simpler operations. (Maybe Jay has 
a shorter list of low-level APIs that could be combined to implement all 
of these, I don't know.) Once we decided to add those features, it was 
inevitable that they would reach right the way down through the stack to 
the lowest level.


There's nothing _organisational_ stopping Nova from creating an internal 
API (it need not even be a ReST API) for the 'plumbing' parts, with a 
separate layer that does orchestration-y stuff. That they're not doing 
so suggests to me that they don't think this is the silver bullet for 
managing complexity.


What would have been a silver bullet is saying 'no' to a bunch of those 
features, preferably starting with 'restore soft deleted server'(!!) and 
shelve/unshelve(?!). When AWS got feature requests like that they didn't 
say 'we'll have to add that in a higher-level API', they said 'if your 
application needs that then cloud is not for you'. We were never 
prepared to say that.


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33


We can decide that we want to be one, or the other, or both. But if we 
don't all decide together then a lot of us are going to continue 
wasting our time working at cross-purposes.


If you are saying that we should choose between being vCenter or AWS, I 
would definitely say the latter.


Agreed.

But I'm still not sure I see this issue 
in such a binary manner.


I don't know that it's still a viable option to say 'AWS' now. Given our 
installed base of users and our commitment to not breaking them, our 
practical choices may well be between 'vCenter' or 'both'.


It's painful because had we chosen 'AWS' at the beginning then we could 
have avoided the complexity hit of many of those features listed above, 
and spent our complexity budget on cloud features instead. Now we are 
locked in to supporting that legacy complexity forever, and it has 
reportedly maxed out our complexity budget to the point where people are 
reluctant to implement any cloud features, and unable to refactor to 
make them easier.


Astute observers will note that this is a *textbook* case of the 
Innovator's Dilemma.

Re: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox

2018-07-09 Thread Zane Bitter

On 05/07/18 16:46, Doug Hellmann wrote:

I have a governance patch up [1] to change the project-testing-interface
(PTI) for building documentation to restore the use of tox.

We originally changed away from tox because we wanted to have a
single standard command that anyone could use to build the documentation
for a project. It turns out that is more complicated than just
running sphinx-build in a lot of cases anyway, because of course
you have a bunch of dependencies to install before sphinx-build
will work.


Is this the main reason? If we think we made the wrong call (i.e. 
everyone has to set up a virtualenv and install doc/requirements.txt 
anyway so we should just make them use tox even if they are not Python 
projects), then I agree it makes sense to fix it even though we only 
_just_ finished telling people it would be the opposite way.



Updating the job that uses sphinx directly to run under python 3,
while allowing the transition to be self-testing, was going to
require writing some extra complexity to look at something in the
repository to decide what version of python to use.  Since tox
handles that for us by letting us set basepython in the virtualenv
configuration, it seemed more straightforward to go back to using
tox.
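
For reference, the kind of docs environment being described looks 
roughly like this (a sketch; exact contents vary per project):

    [testenv:docs]
    basepython = python3
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html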


Wouldn't another option be to have separate Zuul jobs for Python 3 and 
Python 2-based sphinx builds? Then the switchover would still be 
self-testing.


I'd rather do that if this is the main problem we're trying to solve, 
rather than reverse course.



So, this new PTI definition restores the use of tox and specifies
a "docs" environment. I have started defining the relevant jobs [2]
and project templates [3], and I will be updating the python3-first
transition plan as well.

Let me know if you have any questions about any of that,
Doug

[1] https://review.openstack.org/#/c/580495/
[2] 
https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:python3-first
[3] 
https://review.openstack.org/#/q/project:openstack-infra/openstack-zuul-jobs+topic:python3-first



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

I'm not Kevin but I think I can clarify some of these.

On 03/07/18 16:04, Jay Pipes wrote:
On 07/03/2018 02:37 PM, Fox, Kevin M wrote: 
 So these days containers are out clouding vms at this use case. So, does Nova continue to be cloudy vm or does it go for the more production vm use case like oVirt and VMware?


"production VM" use case like oVirt or VMWare? I don't know what that means. You mean 
"a GUI-based VM management system"?


Read 'pets'.

While some people only ever consider running Kubernetes on top of a 
cloud, some of us realize that maintaining both a cloud and a Kubernetes 
is unnecessary, and that things can be greatly simplified by running k8s 
on bare metal. This then makes it a competitor to Nova as a platform 
for running workloads on.


What percentage of Kubernetes users deploy on baremetal (and continue to 
deploy on baremetal in production as opposed to just toying around with 
it)?


At Red Hat Summit there was a demo of deploying OpenShift alongside (not 
on top of) OpenStack on bare metal using Director (downstream of TripleO 
- so managed by Ironic in an OpenStack undercloud).


I don't know if people using Kubernetes directly on baremetal in 
production is widespread right now, but it's clear to me that it will be 
just around the corner.


As k8s gains more multitenancy features, this trend will continue to 
grow I think. OpenStack needs to be ready for when that becomes a thing.


OpenStack is already multi-tenant, being designed as such from day one. 
With the exception of Ironic, which uses Nova to enable multi-tenancy.


What specifically are you referring to "OpenStack needs to be ready"? 
Also, what specific parts of OpenStack are you referring to there?


I believe the point was:

* OpenStack supports multitenancy.
* Kubernetes does not support multitenancy.
* Applications that require multitenancy currently require separate 
per-tenant deployments of Kubernetes; deploying on top of a cloud (such 
as OpenStack) makes this easier, so there is demand for OpenStack from 
people who need multitenancy even if they are mainly interacting with 
Kubernetes. Effectively OpenStack is the multitenancy layer for k8s in a 
lot of deployments.

* One day Kubernetes will support multitenancy.
* Then what?


Think of OpenStack like a game console. The moment you make a component 
optional so that it takes extra effort to obtain, few software developers 
target it and rarely does anyone buy the addons, because there isn't 
software for them. Right now, just about everything in OpenStack is an 
addon. That's a problem.


I don't have any game consoles nor do I develop software for them,


Me neither, but much like OpenStack it's a two-sided marketplace 
(developers and users in the console case, operators and end-users in 
the OpenStack case), where you succeed or fail based on how much value 
you can get flowing between the two sides. There's a positive feedback 
loop between supply on one side and demand on the other, so like all 
positive feedback loops it's unstable and you have to find some way to 
bootstrap it in the right direction, which is hard. One way to make it 
much, much harder is to segment your market in such a way that you give 
yourself a second feedback loop that you also have to bootstrap, that 
depends on the first one, and you only get to use a subset of your 
existing market participants to do it.


As an example from my other reply, we're probably going to try to use 
Barbican to help integrate Heat with external tools like k8s and 
Ansible, but for that to have any impact we'll have to convince users 
that they want to do this badly enough that they'll convince their 
operators to deploy Barbican - and we'll likely have to do so before 
they've even tried it. That's even after we've already convinced them to 
use OpenStack and deploy Heat. If Barbican (and Heat) were available as 
part of every OpenStack deployment, then it'd just be a matter of 
convincing people to use the feature, which would already be available 
and which they could try out at any time. That's a much lower bar.


I'm not defending "make it a monolith" as a solution, but Kevin is 
identifying a real problem.


- ZB



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

On 02/07/18 19:13, Jay Pipes wrote:
Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services 
aren't necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]


Thanks. Appreciate it :)

[1] I live in Florida, though, which has no mountains. But, when I 
visit, say, North Carolina, I shall certainly shout it from their 
mountains.


That's where I live, so I'll keep an eye out for you if I hear shouting.

Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


I've said in the past (on Twitter, can't find the link right now, but 
it's out there somewhere) something to the effect of "at some point, 
someone just needs to come out and say that OpenStack is, at its core, 
Nova, Neutron, Keystone, Glance and Cinder".


https://twitter.com/jaypipes/status/875377520224460800 for anyone who 
was curious.


Interestingly, that and my equally off-the-cuff reply 
https://twitter.com/zerobanana/status/875559517731381249 are actually 
pretty close to the minimal descriptions of the two broad camps we were 
talking about in the technical vision etherpad. (Noting for the record 
that cdent disputes that views can be distilled into two camps.)


Perhaps this is what you were recollecting. I would use a different 
phrase nowadays to describe what I was thinking with the above.


I don't think I was recalling anything in particular that *you* had 
said. Complaining about the non-core projects (presumably on the logic 
that if we kicked them out of OpenStack all their developers would 
go to work on radically simplifying the remaining projects 
instead?) was a widespread popular pastime for roughly the four 
years from 2013 to 2016.


I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are 
a definitive lower level of an OpenStack deployment. They represent a 
set of required integrated services that supply the most basic 
infrastructure for datacenter resource management when deploying 
OpenStack."


Note the difference in wording. Instead of saying "OpenStack is X", I'm 
saying "These particular services represent a specific layer of an 
OpenStack deployment".


OK great. So this is wrong :) and I will attempt to explain why I think 
that in a second. But first I want to acknowledge what is attractive 
about this viewpoint (even to me). This is a genuinely useful 
observation that leads to a real insight.


The insight, I think, is the same one we all just agreed on in another 
part of the thread: OpenStack is the only open source project 
concentrating on the gap between a rack full of unconfigured equipment 
and somewhere that you could, say, install Kubernetes. We write the bit 
where the rubber meets the road, and if we don't get it done there's 
nobody else to do it! There's an almost infinite variety of different 
applications and they'll all need different parts of the higher layers, 
but ultimately they'll all need to be reified in a physical data center 
and when they do, we'll be there: that's the core of what we're building.


It's honestly only the tiniest of leaps from seeing that idea as 
attractive, useful, and genuinely insightful to seeing it as correct, 
and I don't really blame anybody who made that leap.


I'm going to gloss over the fact that we punted the actual process of 
setting up the data center to a bunch of what turned out to be 
vendor-specific installer projects that you suggest should be punted out 
of OpenStack altogether, because that isn't the biggest problem I have 
with this view.


Back in the '70s there was this idea about AI: even a 2 year old human 
can e.g. recognise images with a high degree of accuracy, but doing e.g. 
calculus is extremely hard in comparison and takes years of training. 
But computers can already do calculus! Ergo, we've solved the hardest 
part already and building the rest out of that will be trivial, AGI is 
just around the corner,   (I believe I cribbed this explanation 
from an outdated memory of Marvin Minsky's 1982 paper "Why People Think 
Computers Can't" - specifically the section "Could a Computer Have 
Common Sense?" - so that's a better source if you actually want to learn 
something about AI.) The popularity of this idea arguably helped created 
the AI bubble, and the inevitable collision with the reality of its 
fundamental wrongness led to the AI Winter. Because in fact just because 
you can build logic out of many layers of heuristics (as human brains 
do), it absolutely does not follow that it's trivial to build other 
things that also require many layers of heuristics once you have some 
basic logic building blocks. 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Zane Bitter
well. 
Some of those projects only exist at all because of boundaries between 
stuff on the compute node, while others are just unnecessarily 
complicated to add to a deployment because of those boundaries. (See 
https://julien.danjou.info/lessons-from-openstack-telemetry-incubation/ 
for some insightful observations on that topic - note that you don't 
have to agree with all of it to appreciate the point that the 
balkanisation of the compute node architecture leads to bad design 
decisions.)


In theory doing that should make it easier to build e.g. a cut-down 
compute API of the kind that Jay was talking about upthread.


I know that the short-term costs of making a change like this are going 
to be high - we aren't even yet at a point where making a stable API for 
compute drivers has been judged to meet a cost/benefit analysis. But 
maybe if we can do a comprehensive job of articulating the long-term 
benefits, we might find that it's still the right thing to do.



  * focus on the commons first.
  * simplify the architecture for ops:
* make as much as possible stateless and centralize remaining state.
* stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
* improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane. No reason to do nova cells and make ops deal with it except for 
the most huge of clouds
  * consider a reference product (think the vanilla Linux kernel. distros can 
provide their own variants. that's ok)
  * come up with an architecture team for the whole, not the subsystem. The 
whole thing needs to work well.


We probably actually need two groups: one to think about the 
architecture of the user experience of OpenStack, and one to think about 
the internal architecture as a whole.


I'd be very enthusiastic about the TC chartering some group to work on 
this. It has worried me for a long time that there is nobody designing 
OpenStack as an whole; design is done at the level of individual 
projects, and OpenStack is an ad-hoc collection of what they produce. 
Unfortunately we did have an Architecture Working Group for a while (in 
the sense of the second definition above), and it fizzled out because 
there weren't enough people with enough time to work on it. Until we can 
identify at least a theoretical reason why a new effort would be more 
successful, I don't think there is going to be any appetite for trying 
again.


cheers,
Zane.


  * encourage current OpenStack devs to test/deploy Kubernetes. It has some 
very good ideas that OpenStack could benefit from. If you don't know what they 
are, you can't adopt them.

And I know it's hard to talk about, but consider just adopting k8s as the 
commons and building on top of it. OpenStack's APIs are good. The implementations 
right now are very very heavy for ops. You could tie in K8s's pod scheduler 
with vm stuff running in containers and get a vastly simpler architecture for 
operators to deal with. Yes, this would be a major disruptive change to 
OpenStack. But long term, I think it would make for a much healthier OpenStack.

Thanks,
Kevin
________
From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, June 27, 2018 4:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 27/06/18 07:55, Jay Pipes wrote:

WARNING:

Danger, Will Robinson! Strong opinions ahead!


I'd have been disappointed with anything less :)


On 06/26/2018 10:00 PM, Zane Bitter wrote:

On 26/06/18 09:12, Jay Pipes wrote:

Is (one of) the problem(s) with our community that we have too small
of a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked
about this morning on IRC[1]: you say your concern is that the scope
of *Nova* is too big and that you'd be happy to have *more* services
in OpenStack if they took the orchestration load off Nova and left it
just to handle the 'plumbing' part (which I agree with, while noting
that nobody knows how to get there from here); but here you're
implying that Kata Containers (something that will clearly have no
effect either way on the simplicity or otherwise of Nova) shouldn't be
part of the Foundation because it will take focus away from
Nova/OpenStack.


Above, I was saying that the scope of the *OpenStack* community is
already too broad (IMHO). An example of projects that have made the
*OpenStack* community too broad are purpose-built telco applications
like Tacker [1] and Service Function Chaining. [2]

I've also argued in the past that all distro- or vendor-specific
deployment tools (Fuel, Triple-O, etc [3]) should live outside of
OpenStack because these projects are more products and the relentless
drive of vendor product management (rightfully) pushes the scope of
these applications to gobble up more and more feature space that may or
may not have anything to do with the core OpenStack mission (and have 
more to do with those companies' product roadmap).

Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-29 Thread Zane Bitter
Now that the project is set up, let's tag future messages on this topic 
with [service-broker]. Here's one to start with that will help you find 
everything:


http://lists.openstack.org/pipermail/openstack-dev/2018-June/131923.html

cheers,
Zane.

On 05/06/18 12:19, Zane Bitter wrote:
I've been doing some investigation into the Service Catalog in 
Kubernetes and how we can get OpenStack resources to show up in the 
catalog for use by applications running in Kubernetes. (The Big 3 public 
clouds already support this.) The short answer is via an implementation 
of something called the Open Service Broker API, but there are shortcuts 
available to make it easier to do.


I'm convinced that this is readily achievable and something we ought to 
do as a community.


I've put together a (long-winded) FAQ below to answer all of your 
questions about it.


Would you be interested in working on a new project to implement this 
integration? Reply to this thread and let's collect a list of volunteers 
to form the initial core review team.


cheers,
Zane.


What is the Open Service Broker API?
------------------------------------

The Open Service Broker API[1] is a standard way to expose external 
resources to applications running in a PaaS. It was originally developed 
in the context of CloudFoundry, but the same standard was adopted by 
Kubernetes (and hence OpenShift) in the form of the Service Catalog 
extension[2]. (The Service Catalog in Kubernetes is the component that 
calls out to a service broker.) So a single implementation can cover the 
most popular open-source PaaS offerings.


In many cases, the services take the form of simply a pre-packaged 
application that also runs inside the PaaS. But they don't have to be - 
services can be anything. Provisioning via the service broker ensures 
that the services requested are tied in to the PaaS's orchestration of 
the application's lifecycle.


(This is certainly not the be-all and end-all of integration between 
OpenStack and containers - we also need ways to tie PaaS-based 
applications into the OpenStack's orchestration of a larger group of 
resources. Some applications may even use both. But it's an important 
part of the story.)


What sorts of services would OpenStack expose?
----------------------------------------------

Some example use cases might be:

* The application needs a reliable message queue. Rather than spinning 
up multiple storage-backed containers with anti-affinity policies and 
dealing with the overhead of managing e.g. RabbitMQ, the application 
requests a Zaqar queue from an OpenStack cloud. The overhead of running 
the queueing service is amortised across all of the applications in the 
cloud. The queue gets cleaned up correctly when the application is 
removed, since it is tied into the application definition.


* The application needs a database. Rather than spinning one up in a 
storage-backed container and dealing with the overhead of managing it, 
the application requests a Trove DB from an OpenStack cloud.


* The application includes a service that needs to run on bare metal for 
performance reasons (e.g. could also be a database). The application 
requests a bare-metal server from Nova w/ Ironic for the purpose. (The 
same applies to requesting a VM, but there are alternatives like 
KubeVirt - which also operates through the Service Catalog - available 
for getting a VM in Kubernetes. There are no non-proprietary 
alternatives for getting a bare-metal server.)


AWS[3], Azure[4], and GCP[5] all have service brokers available that 
support these and many more services that they provide. I don't know of 
any reason in principle not to expose every type of resource that 
OpenStack provides via a service broker.


How is this different from cloud-provider-openstack?
----------------------------------------------------

The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself 
to access features of the cloud to provide its service. For example, if 
k8s needs persistent storage for a container then it can request that 
from Cinder through cloud-provider-openstack[7]. It can also request a 
load balancer from Octavia instead of having to start a container 
running HAProxy to load balance between multiple instances of an 
application container (thus enabling use of hardware load balancers via 
the cloud's abstraction for them).


In contrast, the Service Catalog interface allows the *application* 
running on Kubernetes to access features of the cloud.


What does a service broker look like?
-------------------------------------

A service broker provides an HTTP API with 5 actions (sketched below):

* List the services provided by the broker
* Create an instance of a resource
* Bind the resource into an instance of the application
* Unbind the resource from an instance of the application
* Delete the resource
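
To make that concrete, here is an illustrative skeleton of those five 
endpoints (using Flask purely as an example; a real broker must 
implement the full OSB spec, including its headers, error bodies and 
async semantics):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/v2/catalog')  # list the services the broker provides
    def catalog():
        return jsonify({'services': []})

    @app.route('/v2/service_instances/<iid>', methods=['PUT'])
    def provision(iid):  # create an instance of a resource
        return jsonify({}), 201

    @app.route('/v2/service_instances/<iid>/service_bindings/<bid>',
               methods=['PUT'])
    def bind(iid, bid):  # bind the resource into an application
        return jsonify({'credentials': {}}), 201

    @app.route('/v2/service_instances/<iid>/service_bindings/<bid>',
               methods=['DELETE'])
    def unbind(iid, bid):  # unbind the resource
        return jsonify({}), 200

    @app.route('/v2/service_instances/<iid>', methods=['DELETE'])
    def deprovision(iid):  # delete the resource
        return jsonify({}), 200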

The binding step is used for things like providing a set of DB 
credentials to a container. You can rotate credentials when

[openstack-dev] [service-broker] openstack-service-broker project update

2018-06-29 Thread Zane Bitter
(This follows on from 
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131183.html 
in case you are confused)


Hi folks,
Now that the project creation process is largely complete, here are some 
housekeeping updates:


* Let's use the [service-broker] tag in emails to openstack-dev 
(starting with this one!)


* By popular demand, I set up an IRC channel too. It's 
#openstack-service-broker on FreeNode.


* The project repo is available here: 
http://git.openstack.org/cgit/openstack/openstack-service-broker


Since there's no code yet, the only Zuul job is the one to build (but 
not publish) the docs, but it is working and patches are merging.


* As a reminder, this is the current core review team: 
https://review.openstack.org/#/admin/groups/1925,members (and more 
volunteers are welcome)


* The project storyboard is available here: 
https://storyboard.openstack.org/#!/project/1038


* I started adding some stories to the storyboard that should get us to 
an initial prototype, and added them to this worklist: 
https://storyboard.openstack.org/#!/worklist/391


Folks from the Automation Broker team volunteered to help out by writing 
some example playbooks that we can start from. So the most important 
thing I think we can work on to start with is to build the tooling that 
will enable us to test them, both locally for devs and folks checking 
out the project and in the gate.


* It would probably be helpful to set up a meeting time - at least for 
an initial kickoff (thanks Artem for the suggestion), although I see 
we've managed to collect people in just about every time zone so it 
might be challenging. Here is a poll we can use to try to pick a time: 
https://framadate.org/xlKuh4vtozew8gL8 (note: assume all the times are 
in UTC). Everyone who is interested please respond to that in the next 
few days. (I chose the date for July 10th to avoid the days right 
before/after the July 4th holiday in the US, although I personally will 
be around on those days.)


cheers,
Zane.



Re: [openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-06-28 Thread Zane Bitter

On 28/06/18 15:00, Douglas Mendizabal wrote:

Replying inline.

[snip]

IIRC, using URIs instead of UUIDs was a federation pre-optimization
done many years ago when Barbican was brand new and we knew we wanted
federation but had no idea how it would work.  The rationale was that
the URI would contain both the ID of the secret as well as the location
of where it was stored.

In retrospect, that was a terrible idea, and using UUIDs for
consistency with the rest of OpenStack would have been a better choice.
  I've added a story to the python-barbicanclient storyboard to enable
usage of UUIDs instead of URLs:

https://storyboard.openstack.org/#!/story/2002754


Cool, thanks for clearing that up. If UUID is going to become the/a 
standard way to reference stuff in the future then we'll just use the 
UUID for the property value.



I'm sure you've noticed, but the URI that identifies the secret
includes the UUID that Barbican uses to identify the secret internally:

http://{barbican-host}:9311/v1/secrets/{UUID}

So you don't actually need to store the URI, since it can be
reconstructed by just saving the UUID and then using whatever URL
Barbican has in the service catalog.



In a tangentially related question, since secrets are immutable once
they've been uploaded, what's the best way to handle a case where you
need to rotate a secret without causing a temporary condition where
there is no version of the secret available? (The fact that there's no
way to do this for Nova keypairs is a perpetual problem for people, and
I'd anticipate similar use cases for Barbican.) I'm going to guess it's:

* Create a new secret with the same name
* GET /v1/secrets/?name={name}&sort=created:desc&limit=1 to find out the
URL for the newest secret with that name
* Use that URL when accessing the secret
* Once the new secret is created, delete the old one

Should this, or whatever the actual recommended way of doing it is, be
baked in to the client somehow so that not every user needs to
reimplement it?



When you store a secret (e.g. using POST /v1/secrets), the response
includes the URI both in the JSON body and in the Location: header.
There is no need for you to mess around with searching by name, since
Barbican does not use the name to identify a secret.  You should just
save the URI (or UUID) from the response, and then update the resource
using the old secret to point to the new secret instead.


Sometimes users will want to be able to rotate secrets without updating 
all of the places that they're referenced from, though.


cheers,
Zane.



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-27 Thread Zane Bitter

On 27/06/18 07:55, Jay Pipes wrote:

WARNING:

Danger, Will Robinson! Strong opinions ahead!


I'd have been disappointed with anything less :)


On 06/26/2018 10:00 PM, Zane Bitter wrote:

On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small 
of a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked 
about this morning on IRC[1]: you say your concern is that the scope 
of *Nova* is too big and that you'd be happy to have *more* services 
in OpenStack if they took the orchestration load off Nova and left it 
just to handle the 'plumbing' part (which I agree with, while noting 
that nobody knows how to get there from here); but here you're 
implying that Kata Containers (something that will clearly have no 
effect either way on the simplicity or otherwise of Nova) shouldn't be 
part of the Foundation because it will take focus away from 
Nova/OpenStack.


Above, I was saying that the scope of the *OpenStack* community is 
already too broad (IMHO). An example of projects that have made the 
*OpenStack* community too broad are purpose-built telco applications 
like Tacker [1] and Service Function Chaining. [2]


I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc [3]) should live outside of 
OpenStack because these projects are more products and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may or 
may not have anything to do with the core OpenStack mission (and have 
more to do with those companies' product roadmap).


I'm still sad that we've never managed to come up with a single way to 
install OpenStack. The amount of duplicated effort expended on that 
problem is mind-boggling. At least we tried though. Excluding those 
projects from the community would have just meant giving up from the 
beginning.


I think Thierry's new map, that collects installer services in a 
separate bucket (that may eventually come with a separate git namespace) 
is a helpful way of communicating to users what's happening without 
forcing those projects outside of the community.


On the other hand, my statement that the OpenStack Foundation having 4 
different focus areas leads to a lack of, well, focus, is a general 
statement on the OpenStack *Foundation* simultaneously expanding its 
sphere of influence while at the same time losing sight of OpenStack 
itself -- and thus the push to create an Open Infrastructure Foundation 
that would be able to compete with the larger mission of the Linux 
Foundation.


[1] This is nothing against Tacker itself. I just don't believe that 
*applications* that are specially built for one particular industry 
belong in the OpenStack set of projects. I had repeatedly stated this on 
Tacker's application to become an OpenStack project, FWIW:


https://review.openstack.org/#/c/276417/

[2] There is also nothing wrong with service function chains. I just 
don't believe they belong in *OpenStack*. They more appropriately belong 
in the (Open)NFV community because they just are not applicable outside 
of that community's scope and mission.


[3] It's interesting to note that Airship was put into its own 
playground outside the bounds of the OpenStack community (but inside the 
bounds of the OpenStack Foundation).


I wouldn't say it's inside the bounds of the Foundation, and in fact 
confusion about that is a large part of why I wrote the blog post. It is 
a 100% unofficial project that just happens to be hosted on our infra. 
Saying it's inside the bounds of the Foundation is like saying 
Kubernetes is inside the bounds of GitHub.


Airship is AT&T's specific 
deployment tooling for "the edge!". I actually think this was the 
correct move for this vendor-opinionated deployment tool.



So to answer your question:

<jaypipes> zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer services.

<jaypipes> zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services aren't 
necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack. 
Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


The reason I haven't dropped this discussion is because I really want to 
know if _all_ of those people were actually talking about something else 
(e.g. a smaller scope for Nova), or if it's just you. Because you and I 
are in complete agreement that Nova has grown too big.

[openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-06-27 Thread Zane Bitter
We're looking at using Barbican to implement a feature in Heat[1] and 
ran into some questions about how secrets are identified in the client.


With most openstack clients, resources are identified by a UUID. You 
pass the UUID on the command line (or via the Python API or whatever) 
and the client combines that with the endpoint of the service obtained 
from the service catalog and a path to the resource type to generate the 
URL used to access the resource.


While there appears to be no technical reason that barbicanclient 
couldn't also do this, instead of just the UUID it uses the full URL as 
the identifier for the resource. This is extremely cumbersome for the 
user, and invites confused-deputy attacks where if the attacker can 
control the URL, they can get barbicanclient to send a token to an 
arbitrary URL. What is the rationale for doing it this way?



In a tangentially related question, since secrets are immutable once 
they've been uploaded, what's the best way to handle a case where you 
need to rotate a secret without causing a temporary condition where 
there is no version of the secret available? (The fact that there's no 
way to do this for Nova keypairs is a perpetual problem for people, and 
I'd anticipate similar use cases for Barbican.) I'm going to guess it's:


* Create a new secret with the same name
* GET /v1/secrets/?name={name}&sort=created:desc&limit=1 to find out the 
URL for the newest secret with that name

* Use that URL when accessing the secret
* Once the new secret is created, delete the old one

Should this, or whatever the actual recommended way of doing it is, be 
baked in to the client somehow so that not every user needs to 
reimplement it?
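For illustration, here is a rough sketch of those steps using
python-barbicanclient (assuming an existing keystoneauth session; the
calls are from my reading of the client library, so treat this as
indicative rather than definitive):

  from barbicanclient import client as barbican_client

  barbican = barbican_client.Client(session=keystone_session)

  # 1. Create a new secret with the same name
  new = barbican.secrets.create(name='db-password', payload='s3cr3t-v2')
  new_ref = new.store()

  # 2. Find the newest secret with that name; the client doesn't
  # obviously expose sort/limit, so sort on 'created' client-side
  secrets = barbican.secrets.list(name='db-password')
  newest = max(secrets, key=lambda s: s.created)

  # 3. ... use newest.payload / newest.secret_ref in the consumer ...

  # 4. Once nothing references the old version, delete it
  for s in secrets:
      if s.secret_ref != new_ref:
          barbican.secrets.delete(s.secret_ref)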



Bottom line: how should Heat expect/require a user to refer to a 
Barbican secret in a property of a Heat resource, given that:

- We don't want Heat to become the deputy in a "confused deputy" attack.
- We shouldn't do things radically differently to the way Barbican does 
them, because users will need to interact with Barbican first to store 
the secret.
- Many services will likely end up implementing integration with 
Barbican and we'd like them all to have similar user interfaces.

- Users will need to rotate credentials without downtime.

cheers,
Zane.

BTW the user documentation for Barbican is really hard to find. Y'all 
might want to look in to cross-linking all of the docs you have 
together. e.g. there is no link from the Barbican docs to the 
python-barbicanclient docs or vice-versa.


[1] https://storyboard.openstack.org/#!/story/2002126

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Zane Bitter

On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small of 
a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked 
about this morning on IRC[1]: you say your concern is that the scope of 
*Nova* is too big and that you'd be happy to have *more* services in 
OpenStack if they took the orchestration load off Nova and left it just 
to handle the 'plumbing' part (which I agree with, while noting that 
nobody knows how to get there from here); but here you're implying that 
Kata Containers (something that will clearly have no effect either way 
on the simplicity or otherwise of Nova) shouldn't be part of the 
Foundation because it will take focus away from Nova/OpenStack.


So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer services.

 zaneb: I'm not entirely sure where you got that idea from.

I guess from all the people who keep saying it ;)

Apparently somebody was saying it a year ago too :D
https://twitter.com/zerobanana/status/883052105791156225

cheers,
Zane.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Zane Bitter

On 26/06/18 09:12, Jay Pipes wrote:

On 06/26/2018 08:41 AM, Chris Dent wrote:

Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
the TC's role as listener, mediator, and influencer lacks
definition.

Zane wrote up a blog post explaining the various ways in which the
OpenStack Foundation is 
[expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).


One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] 
whether there is any actual expectation that there will be any focus at 
all any more.


Are CI/CD and secure containers important? [2] Yes, absolutely.

Is (one of) the problem(s) with our community that we have too small of 
a scope/footprint? No. Not in the slightest.


IMHO, what we need is focus. And having 4 different focus areas doesn't 
help focus things.


One of the upshots of this change is that when discussing stuff we now 
need to be more explicit about who 'we' are.


We, the OpenStack project, will have less stuff to focus on as a result 
of this change (no Zuul, for example, and if that doesn't make you happy 
then perhaps no 'edge' stuff will ;).


We, the OpenStack Foundation, will unquestionably have more stuff.

I keep waiting for people to say "no, that isn't part of our scope". But 
all I see is people saying "yes, we will expand our scope to these new 
sets of things


Arguably we're saying both of these things, but for different 
definitions of 'our'.


(otherwise *gasp* the Linux Foundation will gobble up all 
the hype)".


I could also speculate on what the board was hoping to achieve when it 
made this move, but it would be much better if they were to communicate 
that clearly to the membership themselves. One thing we did at the joint 
leadership meeting was essentially brainstorming for a new mission 
statement for the Foundation, and this very much seemed like a post-hoc 
exercise - we (the Foundation) are operating outside the current mission 
of record, but nobody has yet articulated what our new mission is.



Just my two cents and sorry for being opinionated,


Hey, feel free to complain to the TC on openstack-dev any time. But also 
be aware that if you actually want anything to happen, you also need to 
complain to your Individual Directors of the Foundation and/or on the 
foundation list.


cheers,
Zane.


-jay

[1] https://www.openstack.org/foundation/strategic-focus-areas/

[2] I don't include "edge" in my list of things that are important 
considering nobody even knows what "edge" is yet. I fail to see how 
people can possibly "focus" on something that isn't defined.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library

2018-06-22 Thread Zane Bitter

On 21/06/18 16:37, Sean McGinnis wrote:


Apparently heat-translator has a healthy ecosystem of contributors and
users, but not of maintainers, and it remaining a deliverable of the Heat
project is doing nothing to alleviate the latter problem. I'd like to find
it a home that _would_ help.



I'd be interested to hear thoughts if this is somewhere where we (the TC)
should step in and make a few people cores on this project?


 Let's save that remedy for projects that are unresponsive.


Or are the existing
contributors a healthy amount but not involved enough to trust to be cores?


 heat-translator cores are aware of the problem and 
are theoretically on the lookout for new cores, but I presume there's 
nobody with the track record of reviews to nominate yet.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-06-21 Thread Zane Bitter

On 20/06/18 17:59, Adam Harwell wrote:

Looks like I missed this so I'm late to the party, but:

Ade is technically correct, Octavia doesn't explicitly depend on 
Barbican, as we do support castellan generically.


*HOWEVER*: we don't just store and retrieve our own secrets -- we rely 
on loading up user created secrets. This means that for Octavia to work, 
even if we use castellan, we still need some way for users to interact 
with the secret store via an API, and what that means in openstack in 
still Barbican. So I would still say that Barbican is a dependency for 
us logistically, if not technically.


Right, yeah, I'd call that a dependency on Barbican.

There are reportedly, however, other use cases where the keys are 
generated internally that don't depend on Barbican but can benefit from 
Castellan.
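For instance, a service that generates its own key material can go
through Castellan without the user ever touching the key store; a
minimal sketch (configuration and request-context handling omitted):

  from castellan.common.objects import passphrase
  from castellan import key_manager

  # The backend (Barbican, Vault, ...) is chosen by configuration
  km = key_manager.API()

  # Store an internally-generated secret; get back an opaque reference
  ref = km.store(context, passphrase.Passphrase('s3cr3t'))

  # Later, retrieve it through the same indirection layer
  secret = km.get(context, ref)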


It might be a worthwhile exercise to make a list of all of the proposed 
features that have been blocked on this and whether they require user 
interaction with the key store.


For example, internally at GoDaddy we were investigating deploying Vault 
with a custom user-facing API/UI for allowing users to store secrets 
that could be consumed by Octavia with castellan (don't get me started 
on how dumb that is, but it's what we were investigating).
The correct way to do this in an openstack environment is the openstack 
secret store API, which is Barbican.


This is the correct answer, and thank you for being awesome :)

So, while I am personally dealing 
with an example of very painfully avoiding Barbican (which may have been 
a non-issue if Barbican were a base service), I have a tough time 
reconciling not including Barbican itself as a requirement.


On the bright side, getting everyone to deploy either Barbican or Vault 
makes it easier even for the folks who chose Vault to deploy Barbican later.


I don't think we've given up on making Barbican a base service, just 
recognised that it's a longer-term effort whereas this is something we 
can do to start down the path right now.


cheers,
Zane.


    --Adam (rm_work)

On Wed, Jun 20, 2018, 11:37 Jeremy Stanley wrote:


On 2018-06-06 01:29:49 +0000 (+0000), Jeremy Stanley wrote:
[...]
 > Seeing no further objections, I give you
 > https://review.openstack.org/572656 for the next step.

That change merged just a few minutes ago, and

https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services
now includes:

     A Castellan-compatible key store

     OpenStack components may keep secrets in a key store, using
     Oslo’s Castellan library as an indirection layer. While
     OpenStack provides a Castellan-compatible key store service,
     Barbican, other key store backends are also available for
     Castellan. Note that in the context of the base services set
     Castellan is intended only to provide an interface for services
     to interact with a key store, and it should not be treated as a
     means to proxy API calls from users to that key store. In order
     to reduce unnecessary exposure risks, any user interaction with
     secret material should be left to a dedicated API instead
     (preferably as provided by Barbican).

Thanks to everyone who helped brainstorming/polishing, and here's
looking forward to a ubiquity of default security features and
functionality in future OpenStack releases!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library

2018-06-21 Thread Zane Bitter

On 20/06/18 18:59, Doug Hellmann wrote:

According to
https://governance.openstack.org/tc/reference/projects/heat.html  the
Heat PTL*is*  the PTL for heat-translators. Any internal team structure
that implies otherwise is just that, an internal team structure.


Yes, correct.


I'm really unclear on what the problem is here.


From my perspective (wearing my Heat hat), the problem is that the 
official team structure no longer represents reality. The folks who were 
working on both heat and heat-translator are long gone. Bob is 
responsive to direct email, but heat-translator is effectively in 
maintenance mode at best.


A few weeks back I made the mistake of reviewing a patch (Gerrit 
confirms that it was literally the first patch I have ever reviewed in 
heat-translator) to update the docs PTI since (a) I know a bit about 
that, and (b) I technically have +2 rights. Immediately people started 
pinging me every day for reviews and adding stuff to my review queue, 
some of which was labelled 'trivial' right there in the patch headline, 
until I asked them to knock it off. That's how much demand there is for 
maintainers.


Apparently heat-translator has a healthy ecosystem of contributors and 
users, but not of maintainers, and it remaining a deliverable of the 
Heat project is doing nothing to alleviate the latter problem. I'd like 
to find it a home that _would_ help.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][heat-templates] Creating a role with no domain

2018-06-21 Thread Zane Bitter

On 21/06/18 07:39, Rabi Mishra wrote:
Looks like that's a bug where we create a domain specific role for 
'default' domain[1], when domain is not specified.


[1] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54


You can _probably_ pass

  domain: null

in your template. Worth a try, anyway.
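i.e. something like (untested):

  heat_template_version: 2015-04-30
  description: Creates a role with no associated domain
  resources:
    role_resource:
      type: OS::Keystone::Role
      properties:
        name: SimpleRole
        domain: null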

- ZB

You're welcome to raise a bug and propose a fix where we should be just 
removing the default.


On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) wrote:


Hi!
There was a new ’domain’ property added to OS::Keystone::Role
(https://storyboard.openstack.org/#!/story/1684558,
https://review.openstack.org/#/c/459033/).
With the “openstack role create” CLI command it is still possible to
create roles with no associated domain; but it seems that the same
cannot be done with heat templates.
An example: if I create two roles, CliRole (with the “openstack role
create CliRole” command) and SimpleRole with the following heat
template:
heat_template_version: 2015-04-30
description: Creates a role
resources:
  role_resource:
    type: OS::Keystone::Role
    properties:
      name: SimpleRole
the result in the keystone database will be:

MariaDB [keystone]> select * from role;
+----------------------------------+------------------+-------+-----------+
| id                               | name             | extra | domain_id |
+----------------------------------+------------------+-------+-----------+
| 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | default   |
| 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | <<null>>  |
| 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | <<null>>  |
| 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | <<null>>  |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | <<null>>  |
| e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | <<null>>  |
+----------------------------------+------------------+-------+-----------+
Should it be possible to create a role without associated domain
with a heat template?
-V.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Rabi Mishra



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tacker][heat-translator] deliverables of heat-translator library

2018-06-20 Thread Zane Bitter

On 20/06/18 12:38, HADDLETON, Robert W (Bob) wrote:
The Tacker team is dependent on tosca-parser and heat-translator but 
they are not the only consumers.


I agree the structure is odd, and Sahdev has more of the history than I do.


History lesson:

At the time (2014) OpenStack was organised into 'Programs', that could 
contain multiple projects. It seemed to make sense to bring 
heat-translator into the Orchestration Program. It had its own PTL 
(Sahdev) and its own core team (although Heat cores from the time still 
have +2 rights on it), and operated essentially independently.


There were discussions about eventually combining it with heatclient or 
even Heat itself once it was mature, but that hasn't come up in quite a 
while and there are no resources available to work on it now anyway.


When we scrapped 'Programs', it just got converted to a deliverable of 
the Heat project, instead of its own separate project. In practice, 
however, nothing actually changed and it kept its own (technically 
unofficial, I think) PTL and operated independently. That's the source 
of the weirdness.


Since then the number of core reviewers has dropped considerably and 
people have difficulty getting patches in and releases made. Most of the 
people bugging me about that have been from Tacker, hence the suggestion 
to move the project over there: since they are the biggest users they 
could help maintain it.


In the past the requests from the Tacker team have come to Sahdev/me and 
we have created
releases as needed.  For some reason this time a request went to the 
Heat ML, in addition to

a separate request to me directly.

I'm open to changes in the structure but I don't think Tacker is the 
right place to put the

deliverables.


What would you suggest?

cheers,
Zane.


Bob

On 6/20/2018 11:31 AM, Rico Lin wrote:
To continue the discussion in 
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131681.html


Adding Tacker and heat-translator to make sure everyone is aware of this
discussion.

On Thu, Jun 21, 2018 at 12:28 AM Doug Hellmann wrote:


Excerpts from Zane Bitter's message of 2018-06-20 12:07:49 -0400:
> On 20/06/18 11:40, Rico Lin wrote:
> > I sent a release patch now:
https://review.openstack.org/#/c/576895/
> > Also, adding Bob Haddleton to this ML, who is considered the PTL for
> > the heat-translator team
>
> Is it time to consider moving the heat-translator and
tosca-parser repos
> from being deliverables of Heat to deliverables of Tacker? The
current
> weird structure dates from the days of the experiment with
OpenStack
> 'Programs' (vs. Projects).
>
> Heat cores don't really have time to be engaging with
heat-translator,
> and Tacker is clearly the major user and the thing that keeps
getting
> blocked on needing patches merged and releases made.

It sounds like it. I had no idea there was any reason to look to
anyone
other than the Heat PTL or liaison for approval of that release. A WIP
on the patch would have been OK, too, but if the Tacker team is really
the one responsible we should update the repo governance.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
May The Force of OpenStack Be With You,

Rico Lin
irc: ricolin









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Need new release of heat-translator library

2018-06-20 Thread Zane Bitter

On 20/06/18 11:40, Rico Lin wrote:

I sent a release patch now: https://review.openstack.org/#/c/576895/
Also, adding Bob Haddleton to this ML, who is considered the PTL for the
heat-translator team.


Is it time to consider moving the heat-translator and tosca-parser repos 
from being deliverables of Heat to deliverables of Tacker? The current 
weird structure dates from the days of the experiment with OpenStack 
'Programs' (vs. Projects).


Heat cores don't really have time to be engaging with heat-translator, 
and Tacker is clearly the major user and the thing that keeps getting 
blocked on needing patches merged and releases made.


Ben Nemec wrote on Wednesday, 20 June 2018 at 10:26 PM:




On 06/20/2018 02:58 AM, Patil, Tushar wrote:
 > Hi,
 >
 > Few weeks back, we had proposed a patch [1] to add support for
translation of placement policies and that patch got merged.
 >
 > This feature will be consumed by tacker specs [2] which we are
planning to implement in Rocky release and it's implementation is
uploaded in patch [3]. Presently, the tests are failing on patch [3]
becoz it requires newer version of heat-translator library.
 >
 > Could you please release a new version of heat-translator library
so that we can complete specs[2] in Rocky timeframe.

Note that you can propose a release to the releases repo[1] and then
you
just need the PTL or release liaison to sign off on it.

1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Non-OpenStack projects under the Foundation umbrella

2018-06-20 Thread Zane Bitter
You may or may not be aware that the Foundation is in the process of 
expanding its mission to support projects other than OpenStack. It's a 
confusing topic and it's hard to find information about it all in one 
place, so I collected everything I was able to piece together during the 
Summit into a blog post that I hope will help to clarify the current status:


https://www.zerobanana.com/archive/2018/06/14#osf-expansion

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db] Configure maximum number of db connections

2018-06-19 Thread Zane Bitter

On 18/06/18 13:39, Jay Pipes wrote:

+openstack-dev since I believe this is an issue with the Heat source code.

On 06/18/2018 11:19 AM, Spyros Trigazis wrote:

Hello list,

I'm hitting this [1] exception with heat quite easily. The db server is
configured to have 1000 max_connections and 1000 max_user_connections,
and in the database section of heat.conf I have these values set:
max_pool_size = 22
max_overflow = 0
Full config attached.

I ended up with this configuration based on this formula:

num_heat_hosts = 4
heat_api_workers = 2
heat_api_cfn_workers = 2
num_engine_workers = 4
max_pool_size = 22
max_overflow = 0

num_heat_hosts * (max_pool_size + max_overflow)
    * (heat_api_workers + num_engine_workers + heat_api_cfn_workers)
  = 4 * (22 + 0) * (2 + 4 + 2)
  = 704

What I have noticed is that the number of connections I expected from
the above formula is not respected. Based on this formula, each node
(every node runs heat-api, heat-api-cfn and heat-engine) should use at
most 176 connections, but they reach as many as 400.

Has anyone noticed a similar behavior?


Looking through the Heat code, I see that there are many methods in the 
/heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but 
never actually call session.close() [1] which means that the session 
will not be released back to the connection pool, which might be the 
reason why connections keep piling up.


Thanks for looking at this Jay! Maybe I can try to explain our strategy 
(such as it is) here and you can tell us what we should be doing instead :)


Essentially we have one session per 'task', that is used for the 
duration of the task. Back in the day a 'task' was the processing of an 
entire stack from start to finish, but with our new distributed 
architecture it's much more granular - either it's just the initial 
setup of a change to a stack, or it's the processing of a single 
resource. (This was a major design change, and it's quite possible that 
the assumptions we made at the beginning - and tbh I don't think we 
really knew what we were doing then either - are no longer valid.)


So, for example, Heat sees an RPC request come in to update a resource, 
it starts a greenthread to handle it, that creates a database session 
that is stored in the request context. At the beginning of the request 
we load the data needed and update the status of the resource in the DB 
to IN_PROGRESS. Then we do whatever we need to do to update the resource 
(mostly this doesn't involve writing to the DB, but there are 
exceptions). Then we update the status to COMPLETE/FAILED, do some 
housekeeping stuff in the DB and send out RPC messages for any other 
work that needs to be done. IIUC that all uses the same session, 
although I don't know if it gets opened and closed multiple times in the 
process, and presumably the same object cache.


Crucially, we *don't* have a way to retry if we're unable to connect to 
the database in any of those operations. If we can't connect at the 
beginning that'd be manageable, because we could (but currently don't) 
just send out a copy of the incoming RPC message to try again later. But 
once we've changed something about the resource, we *must* record that 
in the DB or Bad Stuff(TM) will happen.


The way we handled that, as Spyros pointed out, was to adjust the size 
of the overflow pool to match the size of the greenthread pool. This 
ensures that every 'task' is able to connect to the DB, because  we 
won't take the message out of the RPC queue until there is a 
greenthread, and by extension a DB connection, available. This is 
infinitely preferable to finding out there are no connections available 
after you've already accepted the message (and oslo_messaging has an 
annoying 'feature' of acknowledging the message before it has even 
passed it to the application). It means stuff that we aren't able to 
handle yet queues up in the message queue, where it belongs, instead of 
in memory.


History: https://bugs.launchpad.net/heat/+bug/1491185

Unfortunately now you have to tune the size of the threadpool to trade 
off not utilising too little of your CPU against not opening too many DB 
connections. Nobody knows what the 'correct' tradeoff is, and even if we 
did Heat can't really tune it automatically by default because at 
startup it only knows the number of worker processes on the local node; 
it can't tell how many other nodes are [going to be] running and opening 
connections to the same database. Plus the number of allowed DB 
connections becomes the bottleneck to how much you can scale out the 
service horizontally.


What is the canonical way of handling this kind of situation? Retry any 
DB operation where we can't get a connection, and close the session 
after every transaction?
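(For the avoidance of doubt, by "close the session after every
transaction" I mean the generic SQLAlchemy pattern below; a sketch, not
Heat's actual code.)

  from sqlalchemy import create_engine
  from sqlalchemy.orm import sessionmaker

  engine = create_engine('mysql+pymysql://heat:pw@db/heat',
                         pool_size=22, max_overflow=0)
  Session = sessionmaker(bind=engine)

  def update_resource_status(resource_id, status):
      session = Session()
      try:
          # ... load and mutate ORM objects tied to this session ...
          session.commit()
      finally:
          # always return the connection to the pool
          session.close()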


Not sure if there's any setting in Heat that will fix this problem. 
Disabling connection pooling will likely not help since connections are 
not properly being closed and returned to the connection pool to begin 
with.


Best,
-jay

[1] Heat 

Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-18 Thread Zane Bitter

Replying to myself one more time...

On 12/06/18 17:35, Zane Bitter wrote:

On 11/06/18 18:49, Zane Bitter wrote:
It's had a week to percolate (and I've seen quite a few people viewing 
the etherpad), so here is the review:


https://review.openstack.org/574479


In response to comments, I moved the change to the Project Team Guide
instead of the Contributor Guide (since the latter is aimed only at new 
contributors, but this is for everyone). The new review is here:


https://review.openstack.org/574888

The first review is still up, but it's now just adding links from the 
Contributor Guide to this new doc.


This is now live:

https://docs.openstack.org/project-team-guide/review-the-openstack-way.html

Thanks to everyone who contributed/reviewed/commented. Let's also 
remember to make this a living document, so we all keep learning from 
each other :)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [release] How to handle "stable" deliverables releases

2018-06-12 Thread Zane Bitter

On 12/06/18 11:41, Michael Johnson wrote:

I think we should continue with option 1.

It is an indicator that a project is active in OpenStack and is
explicit about which code should be used together.

Both of those statements hold no technical water, but address the
"human" factor of "What is OpenStack?", "What do I need to deploy?",
"What is an active project and what is not?", and "How do we advertise
what OpenStack can provide?".


There's a strong argument that that makes sense for services. Although 
in practice I'm doubtful that very many services could get through a 
whole cycle without _any_ patches and still be working at the end of it. 
(Incidentally, does the release tooling check that the gate still passes 
at the time of release, even if it has been months since the last patch 
merged?)


It's not clear that it still makes sense for libraries though, and in 
practice that's what this process will mostly apply to. (I tend to agree 
with others in favouring 2, although the release numbering required to 
account for possible future backports does leave something to be desired.)



One caveat with this model is that we need to be careful with version
numbers. Imagine a library that did a 1.18.0 release for queens (which
stable/queens is created from). Nothing happens in Rocky, so we create
stable/rocky from the same 1.18.0 release. Same in Stein, so we create
stable/stein from the same 1.18.0 release. During the Telluride[1] cycle
some patches land and we want to release that. In order to leave room
for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0,
and go directly to 1.20.0. I think we can build release checks to ensure
that, but that's something to keep in mind.


Would another option be to release T as 1.19.0 and use 1.18.1.0 and 
1.18.2.0 for stable/rocky and stable/stein, respectively? There's no 
*law* that says version numbers can only have 3 components, right? ;)
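(PEP 440 is happy with four-component versions, for what it's worth:

  >>> from packaging.version import Version
  >>> Version('1.18.1.0') < Version('1.18.2.0') < Version('1.19.0')
  True

though whether the release tooling is equally happy is another question.)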


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-12 Thread Zane Bitter

On 11/06/18 18:49, Zane Bitter wrote:
It's had a week to percolate (and I've seen quite a few people viewing 
the etherpad), so here is the review:


https://review.openstack.org/574479


In response to comments, I moved the change to the Project Team Guide
instead of the Contributor Guide (since the latter is aimed only at new 
contributors, but this is for everyone). The new review is here:


https://review.openstack.org/574888

The first review is still up, but it's now just adding links from the 
Contributor Guide to this new doc.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-11 Thread Zane Bitter

On 04/06/18 10:13, Zane Bitter wrote:

On 31/05/18 14:35, Julia Kreger wrote:

Back to the topic of nitpicking!

I virtually sat down with Doug today and we hammered out the positive
aspects that we feel like are the things that we as a community want
to see as part of reviews coming out of this effort. The principles
change[1] in governance has been updated as a result.

I think we are at a point where we have to state high level
principles, and then also update guidelines or other context providing
documentation to re-enforce some of items covered in this
discussion... not just to educate new contributors, but to serve as a
checkpoint for existing reviewers when making the decision as to how
to vote change set. The question then becomes where would such
guidelines or documentation best fit?


I think the contributor guide is the logical place for it. Kendall 
pointed out this existing section:


https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes 



It could go in there, or perhaps we separate out the parts about when to 
use which review scores into a separate page from the mechanics of how 
to use Gerrit.



Should we explicitly detail the
cause/effect that occurs? Should we convey contributor perceptions, or
maybe even just link to this thread as there has been a massive amount
of feedback raising valid cases, points, and frustrations.

Personally, I'd lean towards a blended approach, but the question of
where is one I'm unsure of. Thoughts?


Let's crowdsource a set of heuristics that reviewers and contributors 
should keep in mind when they're reviewing or having their changes 
reviewed. I made a start on collecting ideas from this and past threads, 
as well as my own reviewing experience, into a document that I've 
presumptuously titled "How to Review Changes the OpenStack Way" (but 
might be more accurately called "The Frank Sinatra Guide to Code Review" 
at the moment):


https://etherpad.openstack.org/p/review-the-openstack-way

It's in an etherpad to make it easier for everyone to add their 
suggestions and comments (folks in #openstack-tc have made some tweaks 
already). After a suitable interval has passed to collect feedback, I'll 
turn this into a contributor guide change.


It's had a week to percolate (and I've seen quite a few people viewing 
the etherpad), so here is the review:


https://review.openstack.org/574479

- ZB


Have at it!

cheers,
Zane.


-Julia

[1]: https://review.openstack.org/#/c/570940/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker

2018-06-11 Thread Zane Bitter

On 08/06/18 22:28, Rico Lin wrote:



Zane Bitter wrote on Saturday, 9 June 2018 at 9:20 AM:

 >
 > IIUC you're talking about a Heat resource that calls out to a service
 > broker using the Open Service Broker API? (Basically acting like the
 > Kubernetes Service Catalog.) That would be cool, as it would allow us to
 > orchestrate services written for Kubernetes/CloudFoundry using Heat.
 > Although probably not as easy as it sounds at first glance ;)
At first glance, I thought our new service would also wrap the API up
in Ansible playbooks: a playbook to create a resource, and another
playbook to drive the Service Broker API. So we could directly use
those playbooks instead of calling the service broker APIs. No? :)


Oh, call the playbooks directly from Heat? That would work for anything 
else that uses Automation Broker (e.g. the AWS playbook bundles), but 
not for stuff that has its own service broker implementation (e.g. Azure).


That said, it would also be interesting for other reasons if Heat had a 
way to run Ansible playbooks either directly or via AWX, but now we're 
getting even further off-topic ;)


I think we can start trying to build playbooks before we start planning 
on crazy ideas:)

 >
 > It wouldn't rely on _this_ set of playbook bundles though, because this
 > one is only going to expose OpenStack resources, which are already
 > exposed in Heat. (Unless you're suggesting we replace all of the current
 > resource plugins in Heat with Ansible playbooks via the service broker?
 > In which case... that's not gonna happen ;)
Right, we should use OS::Heat::Stack to expose resources from other
OpenStack clouds, not with this.


+1


 > So Heat could adopt this at any time to add support for resources
 > exposed by _other_ service brokers, such as the AWS/Azure/GCE service
 > brokers or other playbooks exposed through Automation Broker.
 >

I like the idea to add support for resources exposed by other service 
borkers


I can already see that I'm going to make this same typo at least 3 times 
a week.


https://www.youtube.com/watch?v=sY_Yf4zz-yo

- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread Zane Bitter

On 11/06/18 10:23, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2018-06-11 16:00:41 +0200:

Hi,

On 06/11/2018 03:53 PM, Ruby Loo wrote:

Hi,

I don't want to hijack the initial thread, but am now feeling somewhat guilty
about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the
beginning of this cycle. To date, I have not been pleased with replacing
Launchpad with Storyboard. I believe that Storyboard is somewhat
still-in-progress, and that there were/are some features (stories) that are
outstanding that would make its use better.

  From my point of view (as a developer and core, not a project manager or PTL)
using Storyboard has made my day-to-day work worse. Granted, any migration is
without headaches. But some of the main things, like searching for our RFEs
(that we had tagged in Launchpad) wasn't possible. I haven't yet figured out how
to limit a search to only the 'ironic' project using that 'search' like GUI, so
I have been frustrated trying to find particular bugs that I *knew* existed but
had not memorized the bug number.


Yeah, I cannot fully understand the search. I would expect something explicit
like Launchpad or better something command-based like "project:openstack/ironic
pxe". This does not seem to work, so I also wonder how to filter all stories
affecting a project.



Searching tripped me up for the first couple of weeks, too.
Storyboard's search field is a lot "smarter" than expected. Or maybe
you'd call it "magic". Either way, it was confusing, but you don't have
to use any special syntax in the UI.

To search for a project, type the name of the project in the search
field and then *wait* for the list of drop-down options to appear.
The first item in the list will be a "raw" search for the term. The
others will have little icons indicating their type. The project
icon looks like a little cube, for example.  If I go to
https://storyboard.openstack.org/#!/search and type "openstack/ironic"
I get a list that includes openstack/ironic, openstack/ironic-inspector,
etc.

Select the project you want from the list and hit enter, and you'll
get a list of all of the stories with tasks attached to the project.


Yeah, it's actually pretty powerful, but the UX is a pain. For a 
workflow as common as searching within a project, there should never be 
a step that involves *waiting*. This could be easily fixed: if the user 
is on the page for a project (or project group) and clicks on the search 
tab, the search field should be autopopulated with the project so they 
only have to opt out when they want to search something else, rather 
than opt in every time by typing the project's name again... waiting... 
clicking on one of the inscrutable icons. (Prepopulating in this way 
would also help teach people how the search field works and what the 
little icons mean, so that it wouldn't take weeks to figure out how to 
search within a project even when you have to start from scratch.)


There are a lot of rough edges like this. An issue tracker is an 
incredibly complicated class of application, and projects like Launchpad 
have literally millions of issues tracked, so basically everything that 
could come up has. Storyboard is not at that stage yet.


Some other bugbears:

* There's no help link anywhere. (This appears to be because there's 
nothing to link to.)


* There's no way to mark a story as a duplicate of another.

* Numeric IDs in URLs instead of project names are a serious barrier to 
usability.


* Default query size of 10 unless you (a) are logged in, and (b) 
increased it to something sane in your Profile (pro tip: do this now!) 
makes it really painful to use, especially since the full text search is 
not very accurate, the prev/next arrows appear to be part of a
competition to make UI elements as tiny as possible (4 pixels wide, and
even the click target is only 16), and moving between pages is kinda 
slow. Also I changed the setting in my profile the other day, and when I 
logged in again today it had been reset to 10.


* Actually, I just tried scrolling through the project list after 
setting the query size back to 100, and the ranges I got were:

  - 1 to 100 of 344 ok so far
  - 101 to 200 of 344 good good
  - 100101 to 344 of 344 wat

* Actually, *is* there any full-text search? The search page says only 
that you can search for "Keyword in a story or task title". That would 
explain why it's impossible to find most things you're looking for.


* You can't even use Google to search it, I suspect because only issues 
that are linked to from other sites are visible to the crawler due to 
there being no sitemap.xml.


* Showing project groups in reverse chronological order of their 
creation instead of alphabetical order is bizarre.


* Launchpad fields always display in fixed-width fonts with linebreaks 
preserved. Storyboard uses Markdown. The migration process makes no 
attempt to preserve the formatting, so a lot of the 

Re: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker

2018-06-08 Thread Zane Bitter

On 08/06/18 02:40, Rico Lin wrote:

Thanks, Zane for putting this up.
It's a great service to expose infrastructure to application, and a 
potential cross-community works as well.

 >
 > Would you be interested in working on a new project to implement this
 > integration? Reply to this thread and let's collect a list of volunteers
 > to form the initial core review team.
 >
Glad to help

 > I'd prefer to go with the pure-Ansible autogenerated way so we can have
 > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3]
 > brokers they have 10, 11 and 17 services respectively, so arguably we
 > could get a comparable number of features exposed without investing
 > crazy amounts of time if we had to write templates explicitly.
 >
If we're going to create another project to provide this service, I
believe pure Ansible will indeed be the better option.


TBH I don't think we can know for sure until we've tried building a few 
playbooks by hand and figured out whether they're similar enough that we 
can autogenerate them all, or if they need so much hand-tuning that it 
isn't feasible. But I'm a big fan of autogeneration if it works.


Once the service gets stable, it's actually quite easy (at first
glance) for Heat to adopt this (just create a service broker with our
new service while creating a resource, I believe?).


IIUC you're talking about a Heat resource that calls out to a service 
broker using the Open Service Broker API? (Basically acting like the 
Kubernetes Service Catalog.) That would be cool, as it would allow us to 
orchestrate services written for Kubernetes/CloudFoundry using Heat. 
Although probably not as easy as it sounds at first glance ;)


It wouldn't rely on _this_ set of playbook bundles though, because this 
one is only going to expose OpenStack resources, which are already 
exposed in Heat. (Unless you're suggesting we replace all of the current 
resource plugins in Heat with Ansible playbooks via the service broker? 
In which case... that's not gonna happen ;)


So Heat could adopt this at any time to add support for resources 
exposed by _other_ service brokers, such as the AWS/Azure/GCE service 
brokers or other playbooks exposed through Automation Broker.


Sounds like the use case for the service broker might be when an
application requests a single resource exposed by the broker, and the
resource dependencies are relatively simple. We should keep it simple
and not start thinking about who created that application and how, and
keep the application out of the dependency graph. (I mean, if users
want to manage the full set of dependencies, they can consider using
Heat with the service broker once we've integrated it.)


--
May The Force of OpenStack Be With You,

Rico Lin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC] [Infra] Terms of service for hosted projects

2018-06-06 Thread Zane Bitter

On 29/05/18 13:37, Jeremy Stanley wrote:

On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote:

We allow various open source projects that are not an official
part of OpenStack or necessarily used by OpenStack to be hosted on
OpenStack infrastructure - previously under the 'StackForge'
branding, but now without separate branding. Do we document
anywhere the terms of service under which we offer such hosting?


We do so minimally here:

https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html

It's linked from this section of the Project Creator’s Guide in the
Infra Manual:

https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project

But yes, we should probably add some clarity to that document and
see about making sure it's linked more prominently. We also maintain
some guidelines for reviewers of changes to the
openstack-infra/project-config repository, which has a bit to say
about new repository creation changes:

https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst


It is my understanding that the infra team will enforce the
following conditions when a repo import request is received:

* The repo must be licensed under an OSI-approved open source
license.


That has been our custom, but we should add a statement to this
effect in the aforementioned document.


* If the repo is a fork of another project, there must be (public)
evidence of an attempt to co-ordinate with the upstream first.


I don't recall this ever being mandated, though the project-config
reviewers do often provide suggestions to project creators such as
places in the existing community with which they might consider
cooperating/collaborating.


We're mandating it for StarlingX, aren't we?

AIUI we haven't otherwise forked anything that was still maintained 
(although we've forked plenty of libraries after establishing that the 
upstream was moribund).



Neither of those appears to be documented (specifically,
https://governance.openstack.org/tc/reference/licensing.html only
specifies licensing requirements for official projects, libraries
imported by official projects, and software used by the Infra
team).


The Infrastructure team has been granted a fair amount of autonomy
to determine its operating guidelines, and future plans to separate
project hosting further from the OpenStack name (in an attempt to
make it more clear that hosting your project in the infrastructure
is not an endorsement by OpenStack and doesn't make it "part of
OpenStack") make the OpenStack TC governance site a particularly
poor choice of venue to document such things.


So clearly in the future this will be the responsibility of the 
Winterscale Infrastructure Council assuming that proposal goes ahead.


For now, would it be valuable for the TC to develop some guidelines that 
will provide the WIC with a solid base it can evolve from once it takes 
them over, or should we just leave it up to infra's discretion?



In addition, I think we should require projects hosted on our
infrastructure to agree to other policies:

* Adhere to the OpenStack Foundation Code of Conduct.


This seems like a reasonable addition to our hosting requirements.


* Not misrepresent their relationship to the official OpenStack
project or the Foundation. Ideally we'd come up with language that
they *can* use to describe their status, such as "hosted on the
OpenStack infrastructure".


Also a great suggestion. We sort of say that in the "what being an
unoffocial project is not" bullet list, but it could use some
fleshing out.


If we don't have place where this kind of thing is documented
already, I'll submit a review adding one. Does anybody have any
ideas about a process for ensuring that projects have read and
agreed to the terms when we add them?


Adding process forcing active confirmation of such rules seems like
a lot of unnecessary overhead/red tape/bureaucracy. As it stands,
we're working to get rid of active agreement to the ICLA in favor of
simply asserting the DCO in commit messages, so I'm not a fan of
adding some new agreement people have to directly acknowledge along
with associated automation and policing.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-06 Thread Zane Bitter

On 06/06/18 11:18, Chris Hoge wrote:

Hi Zane,

Do you think this effort would make sense as a subproject within the Cloud
Provider OpenStack repository hosted within the Kubernetes org? We have
a solid group of people working on the cloud provider, and while it’s not
the same code, it’s a collection of the same expertise and test resources.


TBH, I think it makes more sense as part of the OpenStack community. If 
you look at how the components interact, it goes:


Kubernetes Service Catalog -> Automation Broker -> [this] -> OpenStack

So the interfaces with k8s are already well-defined and owned by other 
teams. It's the interface with OpenStack that requires the closest 
co-ordination. (Particularly if we end up autogenerating the playbooks 
from introspection on shade.) If you look at where the other clouds host 
their service brokers or Ansible Playbook Bundles, they're not part of 
the equivalent Kubernetes Cloud Providers either.


We'll definitely want testing though. Given that this is effectively 
another user interface to OpenStack, do you think this is an area that 
OpenLab could help out with?



Even if it's hosted as an OpenStack project, we should still make sure
we have documentation and pointers from the kubernetes/cloud-provider-openstack
to guide users in the right direction.


Sure, that makes sense to cross-advertise it to people we know are using 
k8s on top of OpenStack already. (Although note that k8s does not have 
to be running on top of OpenStack for the service broker to be useful, 
unlike the cloud provider.)



While I'm not in a position to directly contribute, I'm happy to offer
any support I can through the SIG-OpenStack and SIG-Cloud-Provider
roles I have in the K8s community.


Thanks!

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Zane Bitter

On 05/06/18 16:38, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400:

We've talked a bit about migrating to Python 3, but (unless I missed it)
not a lot about which version of Python 3. Currently all projects that
support Python 3 are gating against 3.5. However, Ubuntu Artful and
Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have
been released since then.) The one time it did come up in a thread, we
decided it was blocked on the availability of 3.6 in Ubuntu to run on
the test nodes, so it's time to discuss it again.

AIUI we're planning to switch the test nodes to Bionic, since it's the
latest LTS release, so I'd assume that means that when we talk about
running docs jobs, pep8 etc. with Python3 (under the python3-first
project-wide goal) that means 3.6. And while 3.5 jobs should continue to
work, it seems like we ought to start testing ASAP with the version that
users are going to get by default if they choose to use our Python3
packages.

The list of breaking changes in 3.6 is quite short (although not zero),
so I wouldn't expect too many roadblocks:
https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6

I think we can split the problem into two parts:

* How can we detect any issues ASAP.

Would it be sane to give all projects with a py35 unit tests job a
non-voting py36 job so that they can start fixing any issues right away?
Like this: https://review.openstack.org/572535


That seems like a good way to start.

Maybe we want to rename that project template to openstack-python3-jobs
to keep it version-agnostic?


You mean the 35_36 one? Actually, let's discuss this on the review.



* How can we ensure every project fixes any issues and migrates to
voting gates, including for functional test jobs?

Would it make sense to make this part of the 'python3-first'
project-wide goal?


Yes, that seems like a good idea. We can be specific about the version
of python 3 to be used to achieve that goal (assuming it is selected as
a goal).

The instructions I've been putting together are based on just using
"python3" in the tox.ini file because I didn't want to have to update
that every time we update to a new version of python. Do you think we
should be more specific there, too?


That's probably fine IMHO. We should just be aware that e.g. when 
distros start switching to 3.7 then people's local jobs will start 
running in 3.7.


For me, at least, this has already been the case with 3.6 - tox is now 
python3 by default in Fedora, so e.g. pep8 jobs have been running under 
3.6 for a while now. There were a *lot* of deprecation warnings at first.
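For reference, the version-agnostic stanza under discussion looks
roughly like this (a sketch, not any particular project's tox.ini):

  [testenv:pep8]
  basepython = python3
  deps = flake8
  commands = flake8

so the interpreter used is whatever 'python3' resolves to on the node
running the job.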



Doug



cheers,
Zane.


(Disclaimer for the conspiracy-minded: you might assume that I'm
cleverly concealing inside knowledge of which version of Python 3 will
replace Python 2 in the next major release of RHEL/CentOS, but in fact
you would be mistaken. The truth is I've been too lazy to find out, so
I'm as much in the dark as anybody. Really. Anyway, this isn't about
that, it's about testing within upstream OpenStack.)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][python3][tc][infra] Python 3.6

2018-06-05 Thread Zane Bitter
We've talked a bit about migrating to Python 3, but (unless I missed it) 
not a lot about which version of Python 3. Currently all projects that 
support Python 3 are gating against 3.5. However, Ubuntu Artful and 
Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have 
been released since then.) The one time it did come up in a thread, we 
decided it was blocked on the availability of 3.6 in Ubuntu to run on 
the test nodes, so it's time to discuss it again.


AIUI we're planning to switch the test nodes to Bionic, since it's the 
latest LTS release, so I'd assume that means that when we talk about 
running docs jobs, pep8 etc. with Python3 (under the python3-first
project-wide goal) that means 3.6. And while 3.5 jobs should continue to 
work, it seems like we ought to start testing ASAP with the version that 
users are going to get by default if they choose to use our Python3 
packages.


The list of breaking changes in 3.6 is quite short (although not zero), 
so I wouldn't expect too many roadblocks:

https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6
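
To pick one concrete example of the kind of change listed there (a 
sketch, not quoted from the porting guide - run it under 3.6 to see the 
behaviour):

  import re

  # Deprecated since 3.5, an error on 3.6: unknown escapes consisting
  # of '\' and an ASCII letter in a regular expression pattern.
  re.compile(r'\p')   # 3.5: DeprecationWarning; 3.6: error: bad escape \p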

I think we can split the problem into two parts:

* How can we detect any issues ASAP.

Would it be sane to give all projects with a py35 unit tests job a 
non-voting py36 job so that they can start fixing any issues right away? 
Like this: https://review.openstack.org/572535
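
For a project consuming it, the net effect would be roughly the 
following in the project's Zuul configuration (a sketch of the pattern 
rather than the contents of that review; the job name here just follows 
the existing openstack-tox-py35 naming convention):

  - project:
      check:
        jobs:
          - openstack-tox-py36:
              voting: false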


* How can we ensure every project fixes any issues and migrates to 
voting gates, including for functional test jobs?


Would it make sense to make this part of the 'python3-first' 
project-wide goal?


cheers,
Zane.


(Disclaimer for the conspiracy-minded: you might assume that I'm 
cleverly concealing inside knowledge of which version of Python 3 will 
replace Python 2 in the next major release of RHEL/CentOS, but in fact 
you would be mistaken. The truth is I've been too lazy to find out, so 
I'm as much in the dark as anybody. Really. Anyway, this isn't about 
that, it's about testing within upstream OpenStack.)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-05 Thread Zane Bitter
I've been doing some investigation into the Service Catalog in 
Kubernetes and how we can get OpenStack resources to show up in the 
catalog for use by applications running in Kubernetes. (The Big 3 public 
clouds already support this.) The short answer is via an implementation 
of something called the Open Service Broker API, but there are shortcuts 
available to make it easier to do.


I'm convinced that this is readily achievable and something we ought to 
do as a community.


I've put together a (long-winded) FAQ below to answer all of your 
questions about it.


Would you be interested in working on a new project to implement this 
integration? Reply to this thread and let's collect a list of volunteers 
to form the initial core review team.


cheers,
Zane.


What is the Open Service Broker API?
------------------------------------

The Open Service Broker API[1] is a standard way to expose external 
resources to applications running in a PaaS. It was originally developed 
in the context of CloudFoundry, but the same standard was adopted by 
Kubernetes (and hence OpenShift) in the form of the Service Catalog 
extension[2]. (The Service Catalog in Kubernetes is the component that 
calls out to a service broker.) So a single implementation can cover the 
most popular open-source PaaS offerings.


In many cases, a service is simply a pre-packaged application that also 
runs inside the PaaS. But it doesn't have to be - services can be 
anything. Provisioning via the service broker ensures that the services 
requested are tied in to the PaaS's orchestration of the application's 
lifecycle.


(This is certainly not the be-all and end-all of integration between 
OpenStack and containers - we also need ways to tie PaaS-based 
applications into OpenStack's orchestration of a larger group of 
resources. Some applications may even use both. But it's an important 
part of the story.)


What sorts of services would OpenStack expose?
----------------------------------------------

Some example use cases might be:

* The application needs a reliable message queue. Rather than spinning 
up multiple storage-backed containers with anti-affinity policies and 
dealing with the overhead of managing e.g. RabbitMQ, the application 
requests a Zaqar queue from an OpenStack cloud. The overhead of running 
the queueing service is amortised across all of the applications in the 
cloud. The queue gets cleaned up correctly when the application is 
removed, since it is tied into the application definition.


* The application needs a database. Rather than spinning one up in a 
storage-backed container and dealing with the overhead of managing it, 
the application requests a Trove DB from an OpenStack cloud.


* The application includes a service that needs to run on bare metal for 
performance reasons (e.g. could also be a database). The application 
requests a bare-metal server from Nova w/ Ironic for the purpose. (The 
same applies to requesting a VM, but there are alternatives like 
KubeVirt - which also operates through the Service Catalog - available 
for getting a VM in Kubernetes. There are no non-proprietary 
alternatives for getting a bare-metal server.)


AWS[3], Azure[4], and GCP[5] all have service brokers available that 
support these and many more services that they provide. I don't know of 
any reason in principle not to expose every type of resource that 
OpenStack provides via a service broker.


How is this different from cloud-provider-openstack?
----------------------------------------------------

The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself 
to access features of the cloud to provide its service. For example, if 
k8s needs persistent storage for a container then it can request that 
from Cinder through cloud-provider-openstack[7]. It can also request a 
load balancer from Octavia instead of having to start a container 
running HAProxy to load balance between multiple instances of an 
application container (thus enabling use of hardware load balancers via 
the cloud's abstraction for them).


In contrast, the Service Catalog interface allows the *application* 
running on Kubernetes to access features of the cloud.
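
To make the contrast concrete: with the Service Catalog the application 
author declares the resource and its binding in Kubernetes, something 
like this (a sketch using the v1beta1 Service Catalog types; the class 
and plan names here are hypothetical):

  apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceInstance
  metadata:
    name: app-queue
  spec:
    clusterServiceClassExternalName: zaqar-queue   # hypothetical
    clusterServicePlanExternalName: default        # hypothetical
  ---
  apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceBinding
  metadata:
    name: app-queue-binding
  spec:
    instanceRef:
      name: app-queue

The binding materialises as an ordinary Secret containing the 
credentials/connection details, which the application's pods can then 
consume.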


What does a service broker look like?
-------------------------------------

A service broker provides an HTTP API with 5 actions:

* List the services provided by the broker
* Create an instance of a resource
* Bind the resource into an instance of the application
* Unbind the resource from an instance of the application
* Delete the resource

The binding step is used for things like providing a set of DB 
credentials to a container. You can rotate credentials when replacing a 
container by revoking the existing credentials on unbind and creating a 
new set on bind, without replacing the entire resource.
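
For reference, in the v2 spec[1] those five actions map onto REST 
endpoints roughly as follows (paraphrased, not exhaustive):

  GET    /v2/catalog
  PUT    /v2/service_instances/:instance_id
  PUT    /v2/service_instances/:instance_id/service_bindings/:binding_id
  DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id
  DELETE /v2/service_instances/:instance_id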


Is there an easier way?
-----------------------

Yes! Folks from OpenShift came up with a project called the Automation 
Broker[8]. To add 

Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Zane Bitter

On 04/06/18 17:52, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400:

On 02/06/18 13:23, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:

On 01/06/18 12:18, Doug Hellmann wrote:


[snip]


Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?


Yeah, this part I am pretty unsure about too. For some projects it
probably is. For others it may just be an unnecessary obstacle, although
I don't think it'd actually be *un*healthy for any project, assuming a
big enough and diverse enough team (which should be a goal for the whole
community).


It feels like we would be saying that we don't trust 2 core reviewers
from the same company to put the project's goals or priorities over
their employer's.  And that doesn't feel like an assumption I would
want us to encourage through a tag meant to show the health of the
project.


Another way to look at it would be that the perception of a conflict of
interest can be just as damaging to a community as somebody actually
acting on a conflict of interest, and thus having clearly-defined rules
to manage conflicts of interest helps protect everybody (and especially
the people who could be perceived to have a conflict of interest but
aren't, in fact, acting on it).


That's a reasonable perspective. Thanks for expanding on your original
statement.


Apparently enough people see it the way you described that this is
probably not something we want to actively spread to other projects at
the moment.


I am still curious to know which teams have the policy. If it is more
widespread than I realized, maybe it's reasonable to extend it and use
it as the basis for a health check after all.


At least Nova still does, judging by this comment from Matt Riedemann in 
January:


"For the record, it's not cool for two cores from the same company to be 
the sole +2s on a change contributed by the same company. Pretty 
standard operating procedure."


(on https://review.openstack.org/#/c/523958/18)

When this thread started I looked for somewhere that was documented more 
permanently, but I didn't find it.



The appealing part of the idea to me was that we could stop pretending
that the results of our mindless script are objective - despite the fact
that both the subset of information to rely on and the limits in the
script were chosen by someone, in an essentially arbitrary way - and let
the decision rest on the expertise of those who are closest to the
project (and therefore have the most information), while aligning their
incentives with the needs of users so that they're not being asked to
keep their own score. I'm always on the lookout for opportunities to do
that, so I felt like I had to at least float it.

The alignment goes both ways though, and if we'd be creating an
incentive to extend the coverage of a policy that is already
controversial then this is not the way forward.

cheers,
Zane.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-04 Thread Zane Bitter

On 02/06/18 13:23, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400:

On 01/06/18 12:18, Doug Hellmann wrote:


[snip]


Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?


Yeah, this part I am pretty unsure about too. For some projects it
probably is. For others it may just be an unnecessary obstacle, although
I don't think it'd actually be *un*healthy for any project, assuming a
big enough and diverse enough team (which should be a goal for the whole
community).


It feels like we would be saying that we don't trust 2 core reviewers
from the same company to put the project's goals or priorities over
their employer's.  And that doesn't feel like an assumption I would
want us to encourage through a tag meant to show the health of the
project.


Another way to look at it would be that the perception of a conflict of 
interest can be just as damaging to a community as somebody actually 
acting on a conflict of interest, and thus having clearly-defined rules 
to manage conflicts of interest helps protect everybody (and especially 
the people who could be perceived to have a conflict of interest but 
aren't, in fact, acting on it).


Apparently enough people see it the way you described that this is 
probably not something we want to actively spread to other projects at 
the moment.


The appealing part of the idea to me was that we could stop pretending 
that the results of our mindless script are objective - despite the fact 
that both the subset of information to rely on and the limits in the 
script were chosen by someone, in an essentially arbitrary way - and let 
the decision rest on the expertise of those who are closest to the 
project (and therefore have the most information), while aligning their 
incentives with the needs of users so that they're not being asked to 
keep their own score. I'm always on the lookout for opportunities to do 
that, so I felt like I had to at least float it.


The alignment goes both ways though, and if we'd be creating an 
incentive to extend the coverage of a policy that is already 
controversial then this is not the way forward.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

