Re: [openstack-dev] [charms] PTL non-candidacy for Stein cycle

2018-07-27 Thread Jean-Philippe Evrard


On July 27, 2018 4:09:04 PM UTC, James Page  wrote:
>Hi All
>
>I won't be standing for PTL of OpenStack Charms for this upcoming
>cycle.
>
>It's been my pleasure to have been PTL since the project was accepted
>into OpenStack, but it's time to let someone else take the helm. I'm
>not going anywhere, but expect to have a bit of a different focus for
>this cycle (at least).
>
>Cheers
>
>James

Thanks for the work done at a cross-project level, and for your communication!

JP (evrardjp)
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-27 Thread Jeremy Stanley
On 2018-07-27 14:20:01 -0500 (-0500), Matt Riedemann wrote:
[...]
> Note the entries in there about how several deployments don't rely
> on nova's keypair interface because of its clunky nature, and
> other ideas about getting nova out of the keypair business
> altogether and instead let barbican manage that and nova just
> references a key resource in barbican. Before we'd consider making
> incremental changes to nova's keypair interface and user/project
> scoping, I think we would need to think through that barbican
> route and what it could look like and how it might benefit
> everyone.

If the Nova team is interested in taking it in that direction, I'll
gladly lobby to convert the "A Castellan-compatible key store" entry
at
https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services
to a full-on "Barbican" entry (similar to the "Keystone" entry). The
only thing previously standing in the way was a use case for a
fundamental feature from the trademark programs' interoperability
set.
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-27 Thread Jay Pipes

On 07/27/2018 03:21 PM, Matt Riedemann wrote:

On 7/27/2018 2:14 PM, Matt Riedemann wrote:
 From checking the history and review discussion on [3], it seems 
that it was like that from the start: the key_pair quota is counted 
when actually creating the keypair, but it is not shown in the API 
'in_use' field.


Just so I'm clear which API we're talking about, you mean there is no 
totalKeypairsUsed entry in 
https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits 
correct?


Nevermind I see it now:

https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota

We have too many quota-related APIs.


Yes. Yes we do.

-jay



Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-27 Thread Matt Riedemann

On 7/27/2018 2:14 PM, Matt Riedemann wrote:
 From checking the history and review discussion on [3], it seems that 
it was like that from the start: the key_pair quota is counted when 
actually creating the keypair, but it is not shown in the API 'in_use' field.


Just so I'm clear which API we're talking about, you mean there is no 
totalKeypairsUsed entry in 
https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits 
correct?


Nevermind I see it now:

https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota

We have too many quota-related APIs.
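To make the distinction concrete, here is a minimal sketch of where keypair usage does and does not appear in the two compute API responses discussed above. The field names follow the api-ref pages linked in this thread; the sample payloads and values are made up for illustration.

```python
# Illustrative sketch of the two quota-related responses discussed above.
# Field names follow the compute api-ref; the payloads are invented samples.
import json

# GET /os-quota-sets/{project_id}/detail  ("show the detail of quota"):
# keypair usage shows up under key_pairs -> in_use here.
quota_detail = json.loads("""
{
  "quota_set": {
    "key_pairs": {"in_use": 0, "limit": 100, "reserved": 0}
  }
}
""")

# GET /limits ("show rate and absolute limits"): there is a
# maxTotalKeypairs limit, but no totalKeypairsUsed counter, which is
# the gap the thread is pointing out.
limits = json.loads("""
{
  "limits": {
    "absolute": {
      "maxTotalKeypairs": 100,
      "totalRAMUsed": 2048
    }
  }
}
""")

keypair_quota = quota_detail["quota_set"]["key_pairs"]
print(keypair_quota["in_use"])                              # usage is exposed here
print("totalKeypairsUsed" in limits["limits"]["absolute"])  # but not here
```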

--

Thanks,

Matt



Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-27 Thread Matt Riedemann

On 7/25/2018 12:43 PM, Chris Friesen wrote:
Keypairs are weird in that they're owned by users, not projects.  This 
is arguably wrong, since it can cause problems if a user boots an 
instance with their keypair and then gets removed from a project.


Nova microversion 2.54 added support for modifying the keypair 
associated with an instance when doing a rebuild.  Before that there was 
no clean way to do it.
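A sketch of what such a rebuild request looks like at the API level. The body and header shapes follow the compute rebuild action as of microversion 2.54; the server id, image ref, and key name below are placeholders.

```python
# Sketch of a rebuild request that swaps an instance's keypair, per
# microversion 2.54. Image ref and key name are placeholder values.
import json

def rebuild_request(image_ref, key_name):
    """Return (headers, body) for POST /servers/{server_id}/action."""
    headers = {
        "Content-Type": "application/json",
        # 2.54 is the first microversion that accepts key_name on rebuild
        "OpenStack-API-Version": "compute 2.54",
    }
    body = {"rebuild": {"imageRef": image_ref, "key_name": key_name}}
    return headers, json.dumps(body)

headers, body = rebuild_request("70a599e0-31e7-49b7-b260-868f441e862b",
                                "new-keypair")
print(body)
```

Passing `"key_name": None` in the rebuild body unsets the keypair; before 2.54 the field is rejected entirely.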


While discussing what eventually became microversion 2.54, sdague sent a 
nice summary of several discussions related to this:


http://lists.openstack.org/pipermail/openstack-dev/2017-October/123071.html

Note the entries in there about how several deployments don't rely on 
nova's keypair interface because of its clunky nature, and other ideas 
about getting nova out of the keypair business altogether and instead 
let barbican manage that and nova just references a key resource in 
barbican. Before we'd consider making incremental changes to nova's 
keypair interface and user/project scoping, I think we would need to 
think through that barbican route and what it could look like and how it 
might benefit everyone.


--

Thanks,

Matt



Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-27 Thread Matt Riedemann

On 7/25/2018 4:44 AM, Ghanshyam Mann wrote:

 From checking the history and review discussion on [3], it seems that it was 
like that from the start: the key_pair quota is counted when actually creating 
the keypair, but it is not shown in the API 'in_use' field.


Just so I'm clear which API we're talking about, you mean there is no 
totalKeypairsUsed entry in 
https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits 
correct?


--

Thanks,

Matt



Re: [openstack-dev] [nova] [placement] placement update 18-30

2018-07-27 Thread Matt Riedemann

On 7/27/2018 8:07 AM, Chris Dent wrote:

# Questions

I wrote up some analysis of the way the [resource tracker talks to
placement](https://anticdent.org/novas-use-of-placement.html). It
identifies some redundancies. Actually it reinforces that some
redundancies we've known about are still there. Fixing some of these
things might count as bug fixes. What do you think?


Performance issues are definitely bugs, so I think that's fair. How big 
an impact the fix has is another question.




* "How to deploy / model shared disk. Seems fairly straight-forward,
     and we could even maybe create a multi-node ceph job that does
     this - wouldn't that be awesome?!?!", says an enthusiastic Matt
     Riedemann.


Two updates here:

1. We've effectively disabled the shared storage provider stuff in the 
libvirt driver:


https://bugs.launchpad.net/nova/+bug/1784020

That's because of the reasons listed in the bug. Fully supporting shared 
storage providers is going to require a spec in Stein, and the work items 
from that bug would be a good start for one.


2. Coincidentally, I *just* got a ceph (single-node) CI job run working 
with a shared storage provider providing DISK_GB for the single compute 
node provider:


https://review.openstack.org/#/c/586363/

Fleshing that out for a multi-node job shouldn't be too hard.

All of that is now entered in the Stein PTG etherpad for discussion in 
Denver.




* The whens and wheres of re-shaping and VGPUs.


I'm not sure anything about this has to be documented for Rocky since we 
didn't get /reshaper done so nothing regarding VGPUs in nova changed, 
right? Except I think Sylvain fixed one VGPU gap in the libvirt driver 
which was updated in the docs, but unrelated to /reshaper.


--

Thanks,

Matt



[openstack-dev] [chef] PTL candidacy for Stein

2018-07-27 Thread Samuel Cassiba
Howdy!

I am submitting my name to continue as PTL for Chef OpenStack. If you
don't know me, I am scas on Freenode. I work for Workday, where I am an
active operator and upstream developer. I have contributed to OpenStack
since 2014, and joined the Chef core team in early 2015. Since then, I have
served as PTL for four cycles. I am also an active member of the
Sous-Chefs organization, which fosters maintainership of community Chef
cookbooks that could no longer be maintained by their author(s). My life
as a triple threat, as well as being largely in the deploy automation
space, gives me a unique perspective on the use cases for Chef
OpenStack.

Development continues to run about a release behind the coordinated
release, stabilizing as contributor availability allows. In that time,
testing has improved, raising confidence in landing more aggressive
changes. Local testing infrastructure tends to run closer to trunk to
keep a pulse on how upstream changes will affect the cookbooks by
review time. This, in turn, influences which changes pass the sniff
test.

For Stein, I would like to focus on some of the efforts started during
Rocky.

* Awareness and Community

  Chef OpenStack is extremely powerful and flexible, but it is not easy
  for new contributors to get involved. That is, if they can find it,
  down the dark alley, through the barber shop, and behind the door with
  a secret knock. Documentation has been a handful of terse Markdown
  docs and READMEs that do not evolve as fast as the code, which I think
  impacts visibility and artificially creates a barrier to entry. I
  would like to place more emphasis on providing a more well-lit
  entry point for new and existing users alike.

* Consistency and HA

  Stability is never a given, but it is pretty close with Chef
  OpenStack. Each change runs through multiple, iterative tests before
  it hits Gerrit. However, not every change runs through those same
  tests in the gate due to the gap between local and integration. This
  natural gap has resulted in multiple chef-client versions and
  OpenStack configurations testing each change.  There have existed HA
  primitives in the cookbooks for years, but there are no published
  working examples. I am aiming to continue this effort to further
  reduce the human element in executing the tests.

* Continued work on containerization

  With efforts to deploy OpenStack in the context of containers, Chef
  OpenStack has not shared in the fanfare. I shipped very shaky dokken
  support out of a hack day at the 2017 Chef Community Summit in
  Seattle, and have refined it over time to where it's consistently
  Doing A Thing. I have found regressions upstream (e.g. packaging), and
  have conservatively implemented workarounds to coax things into
  submission when the actual fix would take more months to land.  I wish
  to continue that effort, and expand to other Ansible-based and
  Kitchen-based integration scenarios to provide examples of how to get
  to OpenStack using Chef.

These are but some of my personal goals and aspirations. I hope to be
able to make progress on them all, but reality may temper those
aspirations.

I would love to connect with more new users and contributors. You can
reach out to me directly, or find me in #openstack-chef.

Thanks!

-scas



Re: [openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-27 Thread James Slagle
On Thu, Jul 26, 2018 at 4:58 AM, Samuel Monderer
 wrote:
> Hi James,
>
> I understand the network-environment.yaml will also be generated.
> What do you mean by rendered path? Will it be
> "usr/share/openstack-tripleo-heat-templates/network/ports/"?

Yes, the rendered path is the path that the jinja2 templating process creates.

> By the way I didn't find any other place in my templates where I refer to
> these files?
> What about custom nic configs is there also a jinja2 process to create them?

No. Custom nic configs are, by definition, custom to the environment
you are deploying. Only you know how to properly define which network
configurations need applying.

Our sample nic configs are generated from jinja2 now. For example:
tripleo-heat-templates/network/config/single-nic-vlans/role.role.j2.yaml

If you wanted to follow that pattern such that your custom nic config
templates were generated, you could do that.



[openstack-dev] [charms] PTL non-candidacy for Stein cycle

2018-07-27 Thread James Page
Hi All

I won't be standing for PTL of OpenStack Charms for this upcoming cycle.

It's been my pleasure to have been PTL since the project was accepted into
OpenStack, but it's time to let someone else take the helm. I'm not going
anywhere, but expect to have a bit of a different focus for this cycle (at
least).

Cheers

James


[openstack-dev] [keystone] Keystone Team Update - Week of 23 July 2018

2018-07-27 Thread Lance Bragstad
# Keystone Team Update - Week of 23 July 2018

## News

This week wrapped up rocky-3, but the majority of the things working
through review are refactors that aren't necessarily subject to the
deadline.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 32 changes this week, including the remaining patches for
implementing strict two-level hierarchical limits (server and client
support), Flask work, and a security fix.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 47 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

There are still a lot of patches that need attention, specifically the
work to start converting keystone APIs to consume Flask. These changes
should be transparent to end users, but if you have questions about the
approach or specific reviews, please come ask in #openstack-keystone.
Kristi also has a patch up to implement the mutable config goal for
keystone [0]. This work was dependent on Flask bits that merged earlier
this week, but based on a discussion with the TC we've already missed
the deadline [1]. Reviews here would still be appreciated because it
should help us merge the implementation early in Stein.

[0] https://review.openstack.org/#/c/585417/
[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-27.log.html#t2018-07-27T15:03:49

## Bugs

This week we opened 6 new bugs and fixed 2.

The highlight here is a security bug that was fixed and backported to
all supported releases [0].

[0] https://bugs.launchpad.net/keystone/+bug/1779205

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

At this point we're past the third milestone, meaning requirements are
frozen and we're in a soft string freeze. Please be aware of those
things when reviewing patch sets. The next deadline for us is RC target
on August 10th.

## Help with this newsletter

Help contribute to this newsletter by editing the
etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator
and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67




Re: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions

2018-07-27 Thread Sam Doran
> so if, for convenience, we do this:
> vars:
>  a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}"
> 
> That's completely acceptable and correct, and won't create any security
> issue, right?


Yes, that will work, but you don't need to use the hostvars dict. You can 
simply use ansible_facts.mounts.

Using facts in no way creates security issues. The attack vector is a managed 
node setting local facts, or a malicious playbook author setting a fact that 
contains executable and malicious code. Ansible uses an UnsafeProxy class to 
ensure text from untrusted sources is properly handled to defend against this.
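To illustrate the two equivalent lookups side by side, here is a minimal sketch extending the snippet quoted above; the task names are illustrative.

```yaml
# Equivalent ways to read the mounts fact; the second avoids the
# hostvars indirection discussed above.
- name: Read mounts via hostvars (works, but indirect)
  debug:
    var: hostvars[inventory_hostname].ansible_facts.mounts

- name: Read mounts directly from ansible_facts (preferred)
  debug:
    var: ansible_facts.mounts
```

Both resolve to the same data for the current host; `ansible_facts` is simply the facts namespace without relying on top-level injected variables.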

> I think the last thing we want is to break TripleO + Ceph integration so we 
> will maintain Ansible 2.5.x in TripleO Rocky and upgrade to 2.6.x in Stein 
> when ceph-ansible 3.2 is used and working well.

This sounds like a good plan.

---

Respectfully,

Sam Doran
Senior Software Engineer
Ansible by Red Hat
sdo...@redhat.com 



Re: [openstack-dev] [cinder] about block device driver

2018-07-27 Thread Matt Riedemann

On 7/16/2018 4:20 AM, Gorka Eguileor wrote:

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.


Would this really require 3rd party CI if it's just local block storage 
on the compute node (in devstack)? We could do that with an upstream CI 
job right? We already have upstream CI jobs for things like rbd and nfs. 
The 3rd party CI requirements generally are for proprietary storage 
backends.


I'm only asking about the CI side of this, the other notes from Sean 
about tweaking the LVM volume backend and feature parity are good 
reasons for removal of the unmaintained driver.


Another option is using the nova + libvirt + lvm image backend for local 
(to the VM) ephemeral disk:


https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653

--

Thanks,

Matt



Re: [openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread T. Nichole Williams
+1, you’ve got my vote :D

T. Nichole Williams
tribe...@tribecc.us



> On Jul 27, 2018, at 6:35 AM, Spyros Trigazis  wrote:
> 
> Hello OpenStack community!
> 
> I would like to nominate myself as PTL for the Magnum project for the
> Stein cycle.
> 
> In the last cycle magnum became more stable and is reaching the point
> of becoming a feature complete solution for providing managed container
> clusters for private or public OpenStack clouds. Also during this cycle
> the community around the project became healthier and more sustainable.
> 
> My goals for Stein are to:
> - complete the work in cluster upgrades and cluster healing
> - keep up with the latest release of Kubernetes and Docker in stable
>   branches and improve their release process
> - improve documentation for cloud operators
> - continue on building the community which supports the project
> 
> Thanks for your time,
> Spyros
> 
> strigazi on Freenode
> 
> [0] https://review.openstack.org/#/c/586516/ 


[openstack-dev] [nova] [placement] placement update 18-30

2018-07-27 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-30.html

This is placement update 18-30, a weekly update of ongoing development 
related to the [OpenStack](https://www.openstack.org/) [placement 
service](https://developer.openstack.org/api-ref/placement/).


# Most Important

This week is feature freeze for the Rocky cycle, so the important
stuff is watching already approved code to make sure it actually
merges, bug fixes and testing.

# What's Changed

At yesterday's meeting it was decided the pending work on the
/reshaper will be punted to early Stein. Though the API level is
nearly ready, the code that exercises it from the nova side is very
new and the calculus of confidence, review bandwidth and gate
slowness works against doing an FFE. Some references:

* 

* 


Meanwhile, pending work to get the report client using consumer
generations is also on hold:

* 

As far as I understand it no progress has been made on "Effectively
managing nested and shared resource providers when managing
allocations (such as in migrations)."

Some functionality has merged recently:

* Several changes to make the placement functional tests more
  placement oriented (use placement context, not be based on
  nova.test.TestCase).
* Add 'nova-manage placement sync_aggregates'
* Consumer generation is being used in heal allocations CLI
* Allocations schema no longer allows extra fields
* The report client is more robust about checking and retrying
  provider generations.
* If force_hosts or force_nodes is being used, don't set a limit
  when requesting allocation candidates.

# Questions

I wrote up some analysis of the way the [resource tracker talks to
placement](https://anticdent.org/novas-use-of-placement.html). It
identifies some redundancies. Actually it reinforces that some
redundancies we've known about are still there. Fixing some of these
things might count as bug fixes. What do you think?

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb):
   14, -1 from last week.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 13, -2 on last
   week.

# Main Themes

## Documentation

Now that we are feature frozen we better document all the stuff. And
more than likely we'll find some bugs while doing that documenting.

This is a section for reminding us to document all the fun stuff we
are enabling. Open areas include:

* "How to deploy / model shared disk. Seems fairly straight-forward,
and we could even maybe create a multi-node ceph job that does
this - wouldn't that be awesome?!?!", says an enthusiastic Matt
Riedemann.

* The whens and wheres of re-shaping and VGPUs.

* Please add more here by responding to this email.

## Consumer Generations

These are in place on the placement side. There's pending work on
the client side, and a semantic fix on the server side, but neither
are going to merge this cycle.

* 
   return 404 when no consumer found in allocs

* 
   Use placement 1.28 in scheduler report client
   (1.28 is consumer gens)

## Reshape Provider Trees

On hold, but still in progress as we hope to get it merged as soon
as there is an opportunity to do so:

It's all at: 

## Mirror Host Aggregates

The command line tool merged, so this is done. It allows
aggregate-based limitation of allocation candidates, a nice little
feature that will speed things up for people.
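A sketch of how an aggregate-limited allocation-candidates query might look once aggregates are mirrored into placement. The `member_of` parameter exists in the placement API (a 1.21+ microversion, if memory serves); the resource classes are standard, but the aggregate UUID below is a placeholder.

```python
# Sketch of an aggregate-limited GET /allocation_candidates query.
# The aggregate uuid is a placeholder value.
from urllib.parse import urlencode

params = {
    "resources": "VCPU:1,MEMORY_MB:2048,DISK_GB:20",
    # restrict candidates to providers that are members of this aggregate
    "member_of": "in:0fd8a0f0-bd89-4e0c-9e1c-23a1f5a7dd1e",
    # cap the number of candidates returned
    "limit": 1000,
}
url = "/allocation_candidates?" + urlencode(params)
print(url)
```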

## Extraction

I wrote up a second [blog
post](https://anticdent.org/placement-extraction-2.html) on some of
the issues associated with placement extraction. There are several
topics on the [PTG
etherpad](https://etherpad.openstack.org/p/nova-ptg-stein) related
to extraction.

# Other

Since we're at feature freeze I'm going to only include things in
the list that were already there and that might count as bug fixes
or potentially relevant for near term review.

So: 11, down from 29.

* 
Add unit test for non-placement resize

* 
Use placement.inventory.inuse in report client

* 
[placement] api-ref: add traits parameter

* 
Convert 'placement_api_docs' into a Sphinx extension

* 
   Add placement.concurrent_update to generation pre-checks

* 
   Delete allocations when it is re-allocated
   (This is addressing a TODO in the report client)

* 
   local disk inventory reporting related

* 

Re: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE)

2018-07-27 Thread Alex Schultz
On Fri, Jul 27, 2018 at 5:48 AM, Emilien Macchi  wrote:
>
>
> On Fri, Jul 27, 2018 at 3:58 AM Jiří Stránský  wrote:
>>
>> I'd call this a semi-FFE, as a few of the patches have characteristics of
>> feature work,
>> but at the same time i don't believe we can afford having Ceph
>> unupgradable in Rocky, so it has characteristics of a regression bug
>> too. I reported a bug [2] and tagged the patches in case we end up
>> having to do backports.
>
>
> Right, let's consider it as a bug and not a feature. Also, it's upgrade
> related so it's top-priority as we did in prior cycles. Therefore I think
> it's fine.

I second this.  We must be able to upgrade so this needs to be addressed.

> --
> Emilien Macchi
>


Re: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE)

2018-07-27 Thread Emilien Macchi
On Fri, Jul 27, 2018 at 3:58 AM Jiří Stránský  wrote:

> I'd call this a semi-FFE, as a few of the patches have characteristics of
> feature work,
> but at the same time i don't believe we can afford having Ceph
> unupgradable in Rocky, so it has characteristics of a regression bug
> too. I reported a bug [2] and tagged the patches in case we end up
> having to do backports.
>

Right, let's consider it as a bug and not a feature. Also, it's upgrade
related so it's top-priority as we did in prior cycles. Therefore I think
it's fine.
-- 
Emilien Macchi


[openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread Spyros Trigazis
Hello OpenStack community!

I would like to nominate myself as PTL for the Magnum project for the
Stein cycle.

In the last cycle magnum became more stable and is reaching the point
of becoming a feature complete solution for providing managed container
clusters for private or public OpenStack clouds. Also during this cycle
the community around the project became healthier and more sustainable.

My goals for Stein are to:
- complete the work in cluster upgrades and cluster healing
- keep up with the latest release of Kubernetes and Docker in stable
  branches and improve their release process
- improve documentation for cloud operators
- continue on building the community which supports the project

Thanks for your time,
Spyros

strigazi on Freenode

[0] https://review.openstack.org/#/c/586516/


Re: [openstack-dev] [glance] FFE for multi-backend

2018-07-27 Thread Erno Kuvaja
On Thu, Jul 26, 2018 at 3:35 PM, Abhishek Kekane  wrote:
> I'm asking for a Feature Freeze Exception for Multiple backend support
> (multi-store)
> feature [0].  The only remaining work is a versioning patch to flag this
> feature as
>  experimental and should be completed early next week.
>
> [0]
> https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multi-store.html
>
> Patches open for review:
>
> https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:bp/multi-store
>
>
>
> Thanks & Best Regards,
>
> Abhishek Kekane
>

As agreed at the weekly meeting yesterday, this change is just pending
prerequisites to merge so it can be released as an EXPERIMENTAL API;
approved for FFE.

Thanks,
Erno jokke Kuvaja



Re: [openstack-dev] [glance] FFE for multihash

2018-07-27 Thread Erno Kuvaja
On Thu, Jul 26, 2018 at 3:28 PM, Brian Rosmaita
 wrote:
> I'm asking for a Feature Freeze Exception for the glance-side work for
> the Secure Hash Algorithm Support (multihash) feature [0].  The work
> is underway and should be completed early next week.
>
> cheers,
> brian
>
> [0] 
> https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multihash.html
>

As agreed at the weekly meeting yesterday: this work is well on its
way, the glance_store and python-glanceclient bits have been merged
and released, and this change was agreed for FFE.

Thanks,
Erno jokke Kuvaja



[openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE)

2018-07-27 Thread Jiří Stránský

Hi folks,

i want to raise attention to the remaining patches that are needed to 
prevent losing Ceph updates/upgrades in Rocky [1], basically making the 
Ceph upgrade mechanism compatible with config-download. I'd call this a 
semi-FFE, as a few of the patches have characteristics of feature work, 
but at the same time i don't believe we can afford having Ceph 
unupgradable in Rocky, so it has characteristics of a regression bug 
too. I reported a bug [2] and tagged the patches in case we end up 
having to do backports.


Please help with reviews and landing the patches if possible.


It would have been better to focus on this earlier in the cycle, but 
the majority of Upgrades squad work is exactly this kind of semi-FFE -- 
nontrivial in terms of effort required, but at the same time it's not 
something we can realistically slip into the next release, because it 
would be a regression. This sort of work tends to steal some of our 
focus in N cycle and direct it towards N-1 release (or even older). 
However, i think we've been gradually catching up with the release cycle 
lately, and increased focus on keeping update/upgrade CI green helps us 
catch breakages before they land and saves some person-hours, so i'm 
hoping the future is bright(er) on this.



Thanks and have a good day,

Jirka

[1] https://review.openstack.org/#/q/topic:external-update-upgrade
[2] https://bugs.launchpad.net/tripleo/+bug/1783949

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-27 Thread Dariusz Krol


Hi Sean,

That is a good point. It would be great to have some help, especially at the start. We have some experience with contributing to OpenStack and we work with Gerrit on a daily basis, so the technical aspects are not a problem. However, it takes some time for changes to be reviewed and merged, and we would like to change that.

On the other hand, I can also understand the lack of time to be a PTL, since coordinating all the work probably requires a lot of time.
Let's wait for Chao Zhao to give his opinion on the topic :)

Best,
Dariusz Krol

  
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [edge][glance]: Image handling in edge environment

2018-07-27 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

The meeting will take place on 2018.08.01 18-19h CET.
Here I attach the invitation.

Br,
Gerg0



From: Csatari, Gergely (Nokia - HU/Budapest)
Sent: Friday, July 20, 2018 1:32 PM
To: 'edge-computing'; 'OpenStack Development Mailing List (not for usage questions)'
Cc: 'jokke'
Subject: RE: [edge][glance]: Image handling in edge environment

Hi,

We figured out with Jokke two timeslots that would work for both of us for 
this common meeting.

Please, other interested parties give your votes to here: 
https://doodle.com/poll/9rfcb8aavsmybzfu

I will evaluate the results and fix the time on 25.07.2018 12h CET.

Br,
Gerg0

From: Csatari, Gergely (Nokia - HU/Budapest)
Sent: Wednesday, July 18, 2018 10:02 AM
To: 'edge-computing' <edge-comput...@lists.openstack.org>; OpenStack 
Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [edge][glance]: Image handling in edge environment

Hi,

We had a great Forum session about image handling in edge environments in 
Vancouver [1]. As one outcome of the session I've created a wiki page with 
the mentioned architecture options [2]. During the Edge Working Group [3] 
discussions we identified some questions (some of them are in the wiki, some 
of them are in mails [4]), and I would also like to get some feedback on the 
analysis in the wiki from people who know Glance.

I think the best would be to have some kind of meeting and I see two options to 
organize this:

  *   Organize a dedicated meeting for this
  *   Add this topic as an agenda point to the Glance weekly meeting

Please share your preference and/or opinion.

Thanks,
Gerg0

[1]: https://etherpad.openstack.org/p/yvr-edge-cloud-images
[2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment
[3]: https://wiki.openstack.org/wiki/Edge_Computing_Group
[4]: http://lists.openstack.org/pipermail/edge-computing/2018-June/000239.html

--- Begin Message ---
[Calendar invitation (text/calendar attachment); recoverable details:]

Summary: Image handling in edge environment
When: Wednesday 2018-08-01, 18:00-19:00 (Central Europe Standard Time)
Location: webex / #edge-computing-group
Organizer: Csatari, Gergely (Nokia - HU/Budapest)
Attendee: edge-computing <edge-comput...@lists.openstack.org>

Description:
Let's spend this time to discuss the alternatives for image handling in edge
environment listed in here:
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment

  * Check if the alternatives were captured correctly
  * Check the pros and cons
  * Check the concerns and questions
  * Decide if an alternative is a dead end

For some strange reason I do not receive mails from the OpenStack mailing
list servers anymore, so if you have anything to discuss about this please
use #edge-computing-group or add me directly to the mails.

Webex: https://nokiameetings.webex.com/meet/gergely.csatari
Telco if you need it: Internal: 8200300; Hungary: +3614088997
Global numbers access code: 957 007 218
--- End Message ---