[openstack-dev] [cinder][ptl] Rocky PTL Candidacy ...

2018-02-05 Thread Jay S Bryant

All,

This note is to declare my candidacy for the Cinder, Rocky PTL position.

I can't believe that Queens is already drawing to a close and that I 
have been PTL for a whole release already.  I have enjoyed this new 
challenge and learned so much as a result of serving as PTL.  It has 
deepened my understanding not only of Cinder but of OpenStack as a 
whole and what it has to offer our end users.


I feel that the Queens release has gone smoothly, and looking back at 
the notes from the Queens PTG, I think we have been successful at 
addressing many of the goals the team set back in Denver.  We have 
gotten the development team to take ownership of our documentation.  We 
have focused on fixing bugs in Cinder and improving our existing 
functions, as I had hoped we would be able to do.  We have even seen 
some return to growth in the development team.  This is all momentum 
that I would like to see us maintain.


So, I hope that you all will give me a chance to apply what I have 
learned during the last 6 months by supporting me in another term as 
Cinder's PTL.


Sincerely,

Jay Bryant (jungleboyj)





Re: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata

2018-02-05 Thread Matt Riedemann
Given the size and detail of this thread, I've tried to summarize the 
problems and possible solutions/workarounds in this etherpad:


https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu

For those working on this, please check that what I have written down is 
correct and then we can try to make some kind of plan for resolving this.


On 1/16/2018 3:24 PM, melanie witt wrote:

Hello Stackers,

This is a heads up to any of you using the AggregateCoreFilter, 
AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. 
These filters have effectively allowed operators to set overcommit 
ratios per aggregate rather than per compute node in <= Newton.


Beginning in Ocata, there is a behavior change where aggregate-based 
overcommit ratios will no longer be honored during scheduling. Instead, 
overcommit values must be set on a per compute node basis in nova.conf.


Details: as of Ocata, instead of considering all compute nodes at the 
start of scheduler filtering, an optimization has been added to query 
resource capacity from placement and prune the compute node list with 
the result *before* any filters are applied. Placement tracks resource 
capacity and usage and does *not* track aggregate metadata [1]. Because 
of this, placement cannot consider aggregate-based overcommit and will 
exclude compute nodes that do not have capacity based on per compute 
node overcommit.
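
Concretely, the pre-filtering step boils down to a placement API 
request along these lines (the resource amounts here are illustrative):

    GET /resource_providers?resources=VCPU:2,MEMORY_MB:4096,DISK_GB:20

Placement computes each node's capacity from its inventory as 
(total - reserved) * allocation_ratio, so only the per compute node 
ratios are visible to this query.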


How to prepare: if you have been relying on per aggregate overcommit, 
during your upgrade to Ocata, you must change to using per compute node 
overcommit ratios in order for your scheduling behavior to stay 
consistent. Otherwise, you may notice increased NoValidHost scheduling 
failures as the aggregate-based overcommit is no longer being 
considered. You can safely remove the AggregateCoreFilter, 
AggregateRamFilter, and AggregateDiskFilter from your enabled_filters 
and you do not need to replace them with any other core/ram/disk 
filters. The placement query takes care of the core/ram/disk filtering 
instead, so CoreFilter, RamFilter, and DiskFilter are redundant.
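
As a rough sketch, an aggregate CPU overcommit of 16.0 would instead be 
expressed in each affected compute node's nova.conf (standard option 
names; the values are examples):

    [DEFAULT]
    # formerly set as aggregate metadata consumed by the
    # Aggregate{Core,Ram,Disk}Filter scheduler filters
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5
    disk_allocation_ratio = 1.0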


Thanks,
-melanie

[1] Placement has been a clean slate for resource management. Prior to 
placement, there were conflicts between the different methods for 
setting overcommit ratios that were never addressed, such as: "which 
value to take if a compute node has overcommit set AND the aggregate has 
it set? Which takes precedence?" And, "if a compute node is in more than 
one aggregate, which overcommit value should be taken?" So, these 
ambiguities were not something we wanted to bring forward into 
placement.





--

Thanks,

Matt



Re: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator

2018-02-05 Thread Jay Pipes

Goutham, comments inline...

Also, FYI, using HTML email with different color fonts to indicate 
different people talking is not particularly mailing list-friendly. For 
reasons why, just check out your last post:


http://lists.openstack.org/pipermail/openstack-dev/2018-January/126842.html

You can't tell who is saying what in the mailing list post...

Much better to use non-HTML email and demarcate responses with the 
traditional > marker. :)


OK, comments inline below.

On 01/31/2018 01:17 PM, Goutham Pratapa wrote:

Hi Jay,

Thanks for the questions.. :)

What precisely do you mean by "resources" above ??

Resources, as in the resources required to boot up a VM (keypair, 
image, flavor).


Gotcha. Thanks for the answer.

Also, by "syncing", do you mean "replicating"? The reason I ask is 
because in the case of, say, VM "resources", you can't "sync" a VM 
across regions. You can replicate its bootable image, but you can't 
"sync" a VM's state across multiple OpenStack deployments.


Yes as you said syncing as-in replicating only.


Gotcha. You could, of course, actually use synchronous (or semi-sync) 
replication for various databases, including Glance and Keystone's 
identity/assignment information, but yes, async replication is just as good.


and yes, we cannot sync VMs across regions, but our idea is to 
sync/replicate all the parameters required to boot a VM


OK, sounds good.

(viz. *image, keypair, flavor*) which are originally there in the source 
region to the target regions in a single-go.


Gotcha.

Some questions on scope that piqued my interest while reading your 
response...


Is Kingbird predominantly designed to be the multi-region orchestrator 
for OpenStack deployments that are all owned/operated by the same 
deployer? Or does Kingbird have intentions of providing glue services 
between multiple fully-independent OpenStack deployments (possibly 
operated by different deployers)?


Further, does Kingbird intend to get into the multi-cloud (as in AWS, 
OpenStack, Azure, etc) orchestration game?


I'm curious what you mean by "resource management". Could you elaborate 
a bit on this?


Resource management, as in managing the resources: i.e., say a user has 
a glance image (*qcow2 or ami format*), or a flavor (*works only if 
admin*) with some properties, or a keypair present in one source 
region, and he wants the same image, the same flavor with the same 
properties, or the same keypair in another set of regions; the user may 
have to recreate them in all target regions.


But with the help of kingbird you can do all the operations in a single go.

--> If the user wants to sync a resource of type keypair, he can 
replicate the keypair into multiple target regions in a single go 
(similarly for glance images and flavors).
--> If the user wants different types of resources (keypair, image and 
flavor) in a single go, then the user can give a yaml file as input and 
kingbird replicates all resources in all target regions (a hypothetical 
sketch of such a file follows below).
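
A hypothetical sketch of such an input file (the actual Kingbird input 
schema may differ; the names and layout here are purely illustrative):

    source: RegionOne
    targets:
      - RegionTwo
      - RegionThree
    resources:
      keypair:
        - my-keypair
      image:
        - cirros-0.4.0
      flavor:
        - m1.small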


OK, I understand your use case here, thanks.

It does seem to me, however, that if the intention is *not* to get into 
the multi-cloud orchestration game, a simpler solution to this 
multi-region OpenStack deployment use case would be to simply have a 
global Glance and Keystone infrastructure that can seamlessly scale to 
multiple regions.


That way, there'd be no need for replicating anything.

I suppose what I'm recommending is that, instead of the concept of a 
region (or availability zone in Nova, for that matter) being a 
mostly-configuration-option thing, the OpenStack contributor 
community actually work to make regions (the concept that Keystone 
labels a region, which is just a grouping of service endpoints) the one 
and only concept of a user-facing "partition" throughout OpenStack.


That way we would have OpenStack services like Glance, Nova, Cinder, 
Neutron, etc just *natively* understand which region they are in and 
how/if they can communicate with other regions.


Sometimes it seems we (as a community) jump through lots of hoops 
working around fundamental architectural problems in OpenStack instead 
of just fixing those problems to begin with. See: Nova cellsv1 (and some 
of cellsv2), Keystone federation, the lack of a real availability zone 
concept anywhere, Nova shelve/unshelve (partly developed because VMs and 
IPs were too closely coupled at the time), the list goes on and on...


Anyway, mostly just rambling/ranting... just food for thought.

Best,
-jay


Thanks
Goutham.

On Wed, Jan 31, 2018 at 9:25 PM, Jay Pipes wrote:


On 01/31/2018 01:49 AM, Goutham Pratapa wrote:

*Kingbird (The Multi Region orchestrator):*

We are proud to announce kingbird is not only a centralized
quota and resource-manager but also a Multi-region Orchestrator.

*Use-cases covered:*

- Admin can synchronize and periodically balance quotas across
regions and can have a global view of quotas of all the tenants
across regions.

[openstack-dev] [ironic] team dinner at Dublin PTG?

2018-02-05 Thread Loo, Ruby
Hi ironic-ers,

Planning for the Dublin PTG has started. And what's the most important thing 
(and most fun event) to plan for? You got it, the team dinner! We'd like to get 
an idea of who is interested and what evening works for all or most of us.

Please indicate which evenings you are available, at this doodle: 
https://doodle.com/poll/d4ff6m9hxg887n9q

If you're shy or don't want to use doodle, send me an email.

Please respond by Friday, Feb 16 (same deadline as PTG topics-for-discussion), 
so we can find a place and reserve it.

Thanks!
--ruby



Re: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

2018-02-05 Thread Loo, Ruby
+1 from me. He's been really helpful with the boot-from-volume and rescue work. 
Looking forward to Hironori joining us :)

Thanks Julia, for bringing this up!

--ruby

On 2018-02-05, 1:12 PM, "Julia Kreger"  wrote:

I would like to nominate Hironori Shiina to ironic-core. He has been
working in the ironic community for some time, and has been helping
over the past several cycles with more complex features. He has
demonstrated an understanding of Ironic's code base, mechanics, and
overall community style. His review statistics are also extremely
solid. I personally have a great deal of trust in his reviews.

I believe he would make a great addition to our team.

Thanks,

-Julia



Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-05 Thread Alex Schultz
On Thu, Feb 1, 2018 at 11:55 AM, James E. Blair  wrote:
> Zane Bitter  writes:
>
>> Yeah, it's definitely nice to have that flexibility. e.g. here is a
>> patch that wouldn't merge for 3 months because the thing it was
>> dependent on also got proposed as a backport:
>>
>> https://review.openstack.org/#/c/514761/1
>>
>> From an OpenStack perspective, it would be nice if a Gerrit ID implied
>> a change from the same Gerrit instance as the current repo and the
>> same branch as the current patch if it exists (otherwise any branch),
>> and we could optionally use a URL instead to select a particular
>> change.
>
> Yeah, that's reasonable, and it is similar to things Zuul does in other
> areas, but I think one of the things we want to do with Depends-On is
> consider that Zuul isn't the only audience.  It's there just as much for
> the reviewers, and other folks.  So when it comes to Gerrit change ids,
> I feel we had to constrain it to Gerrit's own behavior.  When you click
> on one of those in Gerrit, it shows you all of the changes across all of
> the repos and branches with that change-id.  So that result list is what
> Zuul should work with.  Otherwise there's a discontinuity between what a
> user sees when they click the hyperlink under the change-id and what
> Zuul does.
>
> Similarly, in the new system, you click the URL and you see what Zuul is
> going to use.
>
> And that leads into the reason we want to drop the old syntax: to make
> it seamless for a GitHub user to know how to Depends-On a Gerrit change,
> and vice versa, with neither requiring domain-specific knowledge about
> the system.
>

While I can appreciate that, having to manage URLs for backports in
commit messages will lead to missing patches and other PEBKAC-related
problems. Perhaps rather than throwing out this functionality we can
push for improvements in the gerrit interaction itself?  I'm really -1
on removing the change-id syntax just for this reason. The UX of
having to manage complex Depends-On URLs for things like backports
makes switching to URLs a non-starter unless I have a bunch of
external system deps (and I generally don't).
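
For reference, the two footer forms under discussion look like this in 
a commit message (the change-id is made up; the change number is 
borrowed from Zane's example earlier in the thread):

    Old (Gerrit change-id) form -- matches every change carrying that
    ID, across repos and branches, so a backport is picked up without
    editing the commit message:

        Depends-On: I8e09867d4a4f45d1a9024b9d7a9b6c0a12345678

    New (URL) form -- pins exactly one change on one branch:

        Depends-On: https://review.openstack.org/514761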

Thanks,
-Alex

> -Jim
>


Re: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

2018-02-05 Thread Vladyslav Drok
+1

On Mon, Feb 5, 2018 at 10:12 AM, Julia Kreger 
wrote:

> I would like to nominate Hironori Shiina to ironic-core. He has been
> working in the ironic community for some time, and has been helping
> over the past several cycles with more complex features. He has
> demonstrated an understanding of Ironic's code base, mechanics, and
> overall community style. His review statistics are also extremely
> solid. I personally have a great deal of trust in his reviews.
>
> I believe he would make a great addition to our team.
>
> Thanks,
>
> -Julia
>


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-05 Thread Lance Bragstad


On 02/02/2018 11:56 AM, Lance Bragstad wrote:
> I apologize for using the "baremetal/VM" name, but I wanted to get an
> etherpad rolling sooner rather than later [0], since we're likely going
> to have to decide on a new name in person. I ported the initial ideas
> Colleen mentioned when she started this thread, added links to previous
> etherpads from Boston and Denver, and ported some topics from the Boston
> etherpads.
>
> Please feel free to add ideas to the list or elaborate on existing ones.
> Next week we'll start working through them and figure out what we want
> to accomplish for the session. Once we have an official room for the
> discussion, I'll add the etherpad to the list in the wiki.
Based on some discussions in #openstack-dev this morning [0], I took a
stab at working out a rough schedule for Monday and Tuesday [1]. Let me
know if you notice conflicts or want to re-propose a session/topic.

[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-05.log.html#t2018-02-05T15:45:57
[1] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg
>
> [0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg
>
>
> On 02/02/2018 11:10 AM, Zane Bitter wrote:
>> On 30/01/18 10:33, Colleen Murphy wrote:
>>> At the last PTG we had some time on Monday and Tuesday for
>>> cross-project discussions related to baremetal and VM management. We
>>> don't currently have that on the schedule for this PTG. There is still
>>> some free time available that we can ask for[1]. Should we try to
>>> schedule some time for this?
>> +1, I would definitely attend this too.
>>
>> - ZB
>>
>>>  From a keystone perspective, some things we'd like to talk about with
>>> the BM/VM teams are:
>>>
>>> - Unified limits[2]: we now have a basic REST API for registering
>>> limits in keystone. Next steps are building out libraries that can
>>> consume this API and calculate quota usage and limit allocation, and
>>> developing models for quotas in project hierarchies. Input from other
>>> projects is essential here.
>>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness
>>> problem, and we'd like to guide other projects through the migration.
>>> - Application credentials[4]: the main part of this work is largely
>>> done, next steps are implementing better access control for it, which
>>> is largely just a keystone team problem but we could also use this
>>> time for feedback on the implementation so far
>>>
>>> There's likely some non-keystone-related things that might be at home
>>> in a dedicated BM/VM room too. Do we want to have a dedicated day or
>>> two for these projects? Or perhaps not dedicated days, but
>>> planned-in-advance meeting time? Or should we wait and schedule it
>>> ad-hoc if we feel like we need it?
>>>
>>> Colleen
>>>
>>> [1]
>>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true
>>> [2]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html
>>> [3]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
>>> [4]
>>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html
>>>






Re: [openstack-dev] [keystone] [ptg] Rocky PTG planning

2018-02-05 Thread Lance Bragstad
I've started working the topics we had into a rough schedule [0], and
it's wide open for criticism and feedback. If you notice a conflict with
another session, please leave a comment on the schedule or ping me.
Also, if you think of something else that we should cover, we have
several open slots and we can be flexible in shuffling things around.

Thanks for taking a look!

[0] https://etherpad.openstack.org/p/keystone-rocky-ptg

On 01/03/2018 03:43 PM, Lance Bragstad wrote:
> Hey all,
>
> It's about that time to start our pre-PTG planning activities. I've
> started an etherpad and bootstrapped it with some basic content [0].
> Please take the opportunity to add topics to the schedule. It doesn't
> matter if it is cross-project or keystone specific. The sooner we get
> ideas flowing the easier it will be to coordinate cross-project tracks
> with other groups. We'll organize the content into a schedule after a
> couple of weeks. Let me know if you have any questions.
>
> Thanks,
>
> Lance
>
> [0] https://etherpad.openstack.org/p/keystone-rocky-ptg
>
>






[openstack-dev] [barbican] candidacy for PTL

2018-02-05 Thread Ade Lee
Fellow Barbicaneers,

I'd like to nominate myself to serve as Barbican PTL through the
Rocky cycle.

Dave has done a great job at keeping the project growing and I'd
like to continue his good work.

This is an exciting time for Barbican.  With more distributions
and installers incorporating Barbican, and a renewed focus on 
meeting security and compliance requirements, deployers will be
relying on Barbican to securely implement some of the use cases
that we've been working on for the past few years (volume encryption,
image signing, swift object encryption etc.).

Moreover, work has been progressing in having castellan adopted as
a base service for OpenStack applications - hopefully increasing 
the deployment of secure secret management across the board.

In particular, for the Rocky cycle, I'd like to continue the progress
made in Queens to:

1) Grow the Barbican team of contributors and core reviewers.
2) Help drive further collaboration with other Openstack projects
   with joint blueprints.
3) Help ensure that deployments are successful by keeping up on
   bugs fixes and backports.
4) Help develop new secret store plugins, in particular:
   -- a castellan secret store that will allow us to use vault and
   custodia backends.
   -- SGX?
5) Continue the stability and maturity enhancements.
 
Thank you in advance for this opportunity to serve.

--Ade Lee (alee)




Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-05 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> The reason is that, contrary to earlier replies in this thread, the
> /#/c/ version of the change URL does not work.

The /#/c/ form of Gerrit URLs should work now; if it doesn't, please let
me know.

I would still recommend (and personally plan to use) the other form --
it's very easy to end up with a URL in Gerrit which includes the
patchset, or even a set of patchset diffs.  Zuul will ignore this
information and select the latest patchset of the change as its
dependency.  If a user clicks on a URL with an embedded patchset though,
they may end up looking at an old version, and not the version that Zuul
will use.

At any rate, the /#/c/ form should work.  I'd recommend trying to trim
off anything past the change number, if you do use it, to avoid
ambiguity.
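
For example (the change number is illustrative):

    Depends-On: https://review.openstack.org/#/c/514761/    <- works
    Depends-On: https://review.openstack.org/#/c/514761/3   <- avoid: Zuul
        ignores the embedded patchset, but readers may not
    Depends-On: https://review.openstack.org/514761         <- preferred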

-Jim



[openstack-dev] [neutron] PTL candidacy for Rocky

2018-02-05 Thread Miguel Lavalle
Hello OpenStack Community,

I write this to submit my candidacy for the Neutron PTL position during the
Rocky cycle. I had the privilege of being the project's PTL for most of the
Queens release series and want to have another opportunity helping the team
and the community to deliver more and better networking  functionality.

I have worked in the technology industry for 37+ years. After many years in
management, I decided to return to the "Light Side of the Force", the
technical path, and during the San Diego Summit in 2012 told the Neutron
(Quantum at the time) PTL that one day I wanted to be a member of the core
team. He and the team welcomed me and that started the best period of my
career, not only for the never ending learning experiences, but more
importantly, for the many talented women and men that I have met along the
way. Over these past few years I worked for Rackspace, helping them to
deploy and operate Neutron in their public cloud, IBM in their Linux
Technology Center, and currently for Huawei, as their Neutron upstream
development lead.

During the Queens release the team made significant progress in the
following fronts:

   - Continued with the adoption of Oslo Versioned Objects in the DB layer
   - Implemented QoS rate limits for floating IPs
   - Delivered the FWaaS V2.0 API
   - Concluded the implementation of the logging API for security groups,
   which implements a way to capture and store events related to security
   groups.
   - Continued moving externally referenced items to neutron-lib and
   adopting them in Neutron and the Stadium projects
   - Welcomed VPNaaS back into the Stadium after the team put it back in
   shape
   - Improved team processes, such as adopting a pre-defined weekly schedule
   for team members to act as bug triagers, giving W+ to additional core
   members in neutron-lib, and re-scheduling the Neutron drivers meeting on
   alternate days and hours to enable attendance of more people across
   different time zones

Some of the goals that I propose for the team to pursue during the Rocky
cycle are:

   - Finish the implementation of multiple port binding to solve the
   migration between VIF types in a generic way so operators can switch easily
   between backends. This is a joint effort with the Nova team
   - Implement QoS minimum bandwidth allocation in the Placement API to
   support scheduling of instances based on the network bandwidth available in
   hosts. This is another joint effort with the Nova team
   - Synchronize the adoption of the DB layer engine facade with the
   adoption of Oslo Versioned Objects to avoid situations where they don't
   cooperate nicely
   - Implement port forwarding based on floating IPs
   - Continue moving externally referenced items to neutron-lib and
   adopting them in Neutron and the Stadium projects. Finish documenting
   extensions in the API reference. Start the move of generic DB functionality
   to the library
   - Expand the work done with the logging API in security groups to FWaaS
   v2.0
   - Continue efforts in expanding our team and making its work easier.
   While we had some success during Queens, this is an area where we need to
   maintain our focus

Thank you for your consideration and for taking the time to read this

Miguel Lavalle (mlavalle)


[openstack-dev] [ironic] Rocky PTL candidacy

2018-02-05 Thread Julia Kreger
Hi Everybody!

I am hereby announcing my candidacy and self nomination for the
Rocky cycle ironic PTL position.

I'm fairly certain most of you know me by this point and know how
much I care about the community as well as our efforts to automate
the deployment and configuration of baremetal infrastructure.

For those of you who do not yet know me, I've been involved in
OpenStack since the beginning of the Juno cycle, and have been working
with the ironic community since the beginning of the Kilo cycle.

I am very passionate about ironic, but I recognize that there is more
work to be done, new directions to head in, and challenges to conquer.

My vision is for ironic to be utilized in more use cases outside of
what we have typically seen as our primary user. It is necessary to
expand on existing relationships and to build new relationships going
forward.

My hope is for us to continue to grow as a community. While we have
had setbacks like all projects, we still have massive potential.

Thank you for your consideration,

Julia Kreger (TheJulia)



[openstack-dev] [ironic] this week's priorities and subteam reports

2018-02-05 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

- Fix the multitenant grenade
- Fix the ironic-tempest-plugin CI https://review.openstack.org/#/c/540355/
- CI and docs work for classic drivers deprecation (see status below)
- Ansible deploy docs https://review.openstack.org/#/c/525501/
- Fix as many bugs as possible

Bugs that we want to land in this release:
1. ironic - Don't try to lock upfront for vif removal: 
https://review.openstack.org/#/c/534441/
2. handle glance images without data https://review.openstack.org/531180
3. rework exception handling on deploy https://review.openstack.org/531120
4. n-g-s: fix bind_port error https://review.openstack.org/#/c/540295/

Vendor priorities
-
cisco-ucs:
Patches in works for SDK update, but not posted yet, currently rebuilding 
third party CI infra after a disaster...
idrac:
RFE and first several patches for adding UEFI support will be posted by 
Tuesday, 1/9
ilo:
https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5
irmc:
None

oneview:


Subproject priorities
-
bifrost:
(TheJulia): Fedora support fixes -  https://review.openstack.org/#/c/471750/
ironic-inspector (or its client):

networking-baremetal:

networking-generic-switch:
- initial release note https://review.openstack.org/#/c/534201/

sushy and the redfish driver:


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between  15 Jan 2018 and 5 Feb 2018)
- Ironic: 222 bugs (+6) + 247 wishlist items (-13). 1 new, 161 in progress 
(+5), 1 critical (+1), 34 high (+1) and 25 incomplete (-2)
- Inspector: 14 bugs + 25 wishlist items (-3). 0 new, 12 in progress (+2), 0 
critical, 2 high and 4 incomplete (-2)
- Nova bugs with Ironic tag: 14 (+1). 1 new, 0 critical, 0 high
- via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/
- the dashboard was abruptly deleted and needs a new home :(
- use it locally with `tox -erun` if you need to
- HIGH bugs with patches to review:
- Clean steps are not tested in gate 
https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic 
standalone test https://review.openstack.org/#/c/429770/15
- Needs to be reproposed to the ironic tempest plugin repository.
- prepare_instance() is not called for whole disk images with 'agent' deploy 
interface https://bugs.launchpad.net/ironic/+bug/1713916:
- Fix ``agent`` deploy interface to call ``boot.prepare_instance`` 
https://review.openstack.org/#/c/499050/
- (TheJulia) Currently WF-1, as revision is required for deprecation.
- If provisioning network is changed, Ironic conductor does not behave 
correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor 
works correctly on changes of networks: https://review.openstack.org/#/c/462931/
- (rloo) needs some direction
- may be fixed as part of https://review.openstack.org/#/c/460564/

CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- localboot with partitioned image patches:
- Ironic - add localboot partitioned image test: 
https://review.openstack.org/#/c/502886/
- when previous are merged TODO (vsaienko)
- Upload tinycore partitioned image to tarballs.openstack.org
- Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO
- node take over
- resource classes integration tests: 
https://review.openstack.org/#/c/443628/
- radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Essential Priorities


Ironic client API version negotiation (TheJulia, dtantsur)
--
- RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145
- Nova bug https://bugs.launchpad.net/nova/+bug/1739440
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145
- status as of 5 Feb 2018:
- TODO:
- API-SIG guideline on consuming versions in SDKs 
https://review.openstack.org/532814 on review
- establish foundation for using version negotiation in nova
- nothing more for Queens. Stay tuned...
- need to make sure that we discuss/agree with nova about how to do this

Classic drivers deprecation (dtantsur)
--
- spec: 
http://specs.openstac

[openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

2018-02-05 Thread Julia Kreger
I would like to nominate Hironori Shiina to ironic-core. He has been
working in the ironic community for some time, and has been helping
over the past several cycles with more complex features. He has
demonstrated an understanding of Ironic's code base, mechanics, and
overall community style. His review statistics are also extremely
solid. I personally have a great deal of trust in his reviews.

I believe he would make a great addition to our team.

Thanks,

-Julia



[openstack-dev] [Neutron] Bug deputy report

2018-02-05 Thread Jakub Libosvar
Hi all,

I was the bug deputy for the last week and I won't be attending today's
team meeting, so here comes my report:

It was very calm, there were no critical bugs reported, some bugs were
already fixed and other got attention and have patches up for review.
Some bugs were also triaged and some closed as they were duplicates.

The only one left is https://bugs.launchpad.net/neutron/+bug/1746707
where I'm not sure whether that's valid for the reference implementation.
It says there are inconsistency issues in the NSX Neutron plugin and hence
there *might* be issues in other plugins too.

AFAIK ml2 has BEFORE_ and AFTER_ callbacks in combination with retry
mechanisms performed over the database. But I'm not brave enough to judge
whether this is sufficient to be considered safe. Hence I marked the bug
as incomplete, but I think it deserves some discussion at the meeting.

Thanks,
Kuba



Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Lance Bragstad


On 02/05/2018 09:34 AM, Thierry Carrez wrote:
> Lance Bragstad wrote:
>> Colleen started a thread asking if there was a need for a baremetal/vm
>> group session [0], which generated quite a bit of positive response. Is
>> there still a possibility of fitting that in on either Monday or
>> Tuesday? The group is usually pretty large.
>>
>> [0]
>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html
> Yes, we can still allocate a 80-people room or a 30-people one. Let me
> know if you prefer Monday, Tuesday or both.
Awesome - we're collecting topics in an etherpad, but we're likely only
going to get to three or four of them [0] [1]. We can work those topics
into two sessions, one on Monday and one on Tuesday, just to break
things up in case other things are happening those days that people want
to get to.


[0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-05.log.html#t2018-02-05T15:45:57
>
>
>





[openstack-dev] [glance] glance-manage db check feature needs reviews

2018-02-05 Thread Brian Rosmaita
Hello Glancers,

Please take a look at Bhagyashri's patch, which was given a FFE.

There's a slight deviation from the spec, so I need feedback about
whether this is acceptable (spoiler alert: I think it's OK).  So
please comment on that aspect of the patch even if you don't have time
at the moment to review the code thoroughly.  See my comment on PS11
for details.

thanks,
brian



Re: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days

2018-02-05 Thread Swapnil Kulkarni
Hi David,

Count me in.

~coolsvap

On Mon, Feb 5, 2018 at 9:01 PM, David Moreau Simard wrote:

> Hi everyone,
>
> We've started planning the deployment with the Kolla team, you can see
> the etherpad from the "operator" perspective here:
> https://etherpad.openstack.org/p/kolla-rdo-m3
>
> We'll advertise the test days and how users can participate soon.
>
> Thanks,
>
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
>
> On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard wrote:
> > [snip]
>


[openstack-dev] [ironic] driver composition: help needed from vendors

2018-02-05 Thread Dmitry Tantsur

Hi everyone,

We have landed changes deprecating classic drivers, and we may remove classic 
drivers as early as end of Rocky. I would like to ask those who maintain drivers 
for ironic a few favors:


1. We have landed a database migration [1] to change nodes from classic drivers 
to hardware types automatically. Please check the mapping [2] for your drivers 
for correctness.


2. Please update your documentation pages to primarily use hardware types. 
You're free to still mention classic drivers or remove the information about 
them completely.


3. Please update your CI to use hardware types on master (queens and newer). 
Please make sure that the coverage does not suffer. For example, if you used to 
test pxe_foo and agent_foo, the updated CI should test the "foo" hardware type 
with the "iscsi" and "direct" deploy interfaces.

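As a sketch, the CI-side change amounts to something like this in 
ironic.conf (using the hypothetical "foo" type from the example above; 
the enabled_* option names are the standard ones):

    [DEFAULT]
    # replaces: enabled_drivers = pxe_foo,agent_foo
    enabled_hardware_types = foo
    enabled_deploy_interfaces = iscsi,direct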

Please let us know if you have any concerns.

Thanks,
Dmitry

[1] https://review.openstack.org/534373
[2] https://review.openstack.org/539589



Re: [openstack-dev] [api] Openstack API and HTTP caching

2018-02-05 Thread Chris Dent

On Mon, 5 Feb 2018, Fred De Backer wrote:


Therefore it is my opinion that the OpenStack API (Nova in this case, but
equally valid for all other APIs) should be responsible for including proper
HTTP headers in its responses to either disallow caching of the response
or at least limit its validity.


Yeah, that is what should happen. We recently did it (disallow
caching) for placement
(http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/placement-cache-headers.html)
but it probably needs to be done just about everywhere else.

I'd suggest you create a bug (probably just a nova one for now, but
make it general enough that it is easy to add other projects) and
perhaps that will help get some traction.
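
As an illustration (a minimal sketch, not the placement implementation), 
disallowing caching can be as small as a WSGI middleware that stamps 
every response:

    def no_cache(app):
        # Wrap a WSGI app so each response carries Cache-Control: no-cache.
        def middleware(environ, start_response):
            def _start_response(status, headers, exc_info=None):
                headers.append(('Cache-Control', 'no-cache'))
                return start_response(status, headers, exc_info)
            return app(environ, _start_response)
        return middleware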

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Thierry Carrez
Lance Bragstad wrote:
> Colleen started a thread asking if there was a need for a baremetal/vm
> group session [0], which generated quite a bit of positive response. Is
> there still a possibility of fitting that in on either Monday or
> Tuesday? The group is usually pretty large.
> 
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html

Yes, we can still allocate a 80-people room or a 30-people one. Let me
know if you prefer Monday, Tuesday or both.

-- 
Thierry Carrez (ttx)





[openstack-dev] [nova] Notification update week 6

2018-02-05 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w6.

Bugs


No new bugs and the below bug status is the same as last week.

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
We need to understand first how this can happen. Based on the comments
from the bug it seems it happens after upgrading an old deployment. So
it might be some problem with the online data migration that moves the
flavor into the instance.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
The rocky bp has been created 
https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-rocky
Every open patch needs to be reproposed to this bp as soon as master 
opens for Rocky.


Introduce instance.lock and instance.unlock notifications
-
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user initiated the instance
action to the notification
-
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.

Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches. We can expect some as soon as master opens for Rocky.

Weekly meeting
--
The next meeting will be held on 6th of February on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180206T17

Cheers,
gibi







Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Thierry Carrez
Luke Hinds wrote:
> On Mon, Feb 5, 2018 at 3:07 PM, Thierry Carrez wrote:
> 
> Luke Hinds wrote:
> > I had been monitoring for PTG room allocations, but I missed this email
> > which was the important one.
> >
> > The security SIG plans to meet at the PTG to discuss several topics. Am
> > I too late to get us included?
> 
> Not too late, but obviously less choice... Would you be interested in a
> full day on Monday ? What room size do you need ?
> 
> --
> Thierry Carrez (ttx)
> 
> 
> A full day would be great, and the room does not need to be large - I
> expect between 5 and 10 people.

OK, done!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days

2018-02-05 Thread David Moreau Simard
Hi everyone,

We've started planning the deployment with the Kolla team, you can see
the etherpad from the "operator" perspective here:
https://etherpad.openstack.org/p/kolla-rdo-m3

We'll advertise the test days and how users can participate soon.

Thanks,


David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]


On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard wrote:
> Hi !
>
> For those who might be unfamiliar with the RDO [1] community project:
> we hang out in #rdo, we don't bite and we build vanilla OpenStack
> packages.
>
> These packages are what allows you to leverage one of the deployment
> projects such as TripleO, PackStack or Kolla to deploy on CentOS or
> RHEL.
> The RDO community collaborates with these deployment projects by
> providing trunk and stable packages in order to let them develop and
> test against the latest and the greatest of OpenStack.
>
> RDO test days typically happen around a week after an upstream
> milestone has been reached [2].
> The purpose is to get everyone together in #rdo: developers, users,
> operators, maintainers -- and test not just RDO but OpenStack itself
> as installed by the different deployment projects.
>
> We tried something new at our last test day [3] and it worked out great.
> Instead of encouraging participants to install their own cloud for
> testing things, we supplied a cloud of our own... a bit like a limited
> duration TryStack [4].
> This lets users without the operational knowledge, time or hardware to
> install an OpenStack environment to see what's coming in the upcoming
> release of OpenStack and get the feedback loop going ahead of the
> release.
>
> We used Packstack for the last deployment and invited Packstack cores
> to deploy, operate and troubleshoot the installation for the duration
> of the test days.
> The idea is to rotate between the different deployment projects to
> give every interested project a chance to participate.
>
> Last week, we reached out to Kolla to see if they would be interested
> in participating in our next RDO test days [5] around February 8th.
> We supply the bare metal hardware and their core contributors get to
> deploy and operate a cloud with real users and developers poking
> around.
> All around, this is a great opportunity to get feedback for RDO, Kolla
> and OpenStack.
>
> We'll be advertising the event a bit more as the test days draw closer
> but until then, I thought it was worthwhile to share some context for
> this new thing we're doing.
>
> Let me know if you have any questions !
>
> Thanks,
>
> [1]: https://www.rdoproject.org/
> [2]: https://www.rdoproject.org/testday/
> [3]: 
> https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
> [4]: http://trystack.org/
> [5]: 
> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]



Re: [openstack-dev] [mistral] Proposing time slots for Mistral office hours

2018-02-05 Thread Dougal Matthews
On 5 February 2018 at 07:48, Renat Akhmerov 
wrote:

> Hi,
>
> Not so long ago we decided to stop holding weekly meetings in one of the
> general IRC channels (it was #openstack-meeting-3 for the last several
> months). The main reason was that we usually didn’t have a good
> representation of the team there because the team is distributed across the
> world. We tried to find a time slot several times that would work well for
> all the team members but failed to. Another reason is that we didn’t always
> have a clear reason to gather because everyone was just focused on their
> tasks and a discussion wasn’t much needed so a meeting was even a
> distraction.
>
> However, despite all this we still would like channels to communicate, the
> team members and people who have user questions and/or would like to start
> contributing.
>
> Similarly to other teams in OpenStack we’d like to try the “Office hours”
> concept. If we follow it we’re supposed to have team members, for whom the
> time slot is OK, available in our channel #openstack-mistral during certain
> hours. These hours can be used for discussing our development stuff between
> team members from different time zones and people outside the team would
> know when they can come and talk to us.
>
> Just to start the discussion on what the office hours time slots could be
> I’m proposing the following time slots:
>
>1. Mon 16.00 UTC (it used to be our time of weekly meetings)
>2. Wed 3.00 UTC
>3. Fri 8.00 UTC
>
>
These sound good to me. I should be able to regularly attend the Monday
and Friday slots.

I think we should ask Mistral cores to try and attend at least one of these
a week.



>
>
> Each slot is one hour.
>
> Presumably, #1 would be suitable for people in Europe and America, #2 for
> people in Asia and America, and #3 for people living in Europe and Asia. At
> least that was my thinking when I was wondering what the time slots should
> be.
>
> Please share your thoughts on this. The idea itself and whether the time
> slots look ok.
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Luke Hinds
On Mon, Feb 5, 2018 at 3:07 PM, Thierry Carrez 
wrote:

> Luke Hinds wrote:
> > I had been monitoring for PTG room allocations, but I missed this email
> > which was the important one.
> >
> > The security SIG plans to meet at the PTG to discuss several topics. Am
> > I too late to get us included?
>
> Not too late, but obviously less choice... Would you be interested in a
> full day on Monday ? What room size do you need ?
>
> --
> Thierry Carrez (ttx)
>

A full day would be great, and the room does not need to be large - I
expect between 5 and 10 people.


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Lance Bragstad
Colleen started a thread asking if there was a need for a baremetal/vm
group session [0], which generated quite a bit of positive response. Is
there still a possibility of fitting that in on either Monday or
Tuesday? The group is usually pretty large.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html

On 02/05/2018 09:07 AM, Thierry Carrez wrote:
> Luke Hinds wrote:
>> I had been monitoring for PTG room allocations, but I missed this email
>> which was the important one.
>>
>> The security SIG plans to meet at the PTG to discuss several topics. Am
>> I too late to get us included?
> Not too late, but obviously less choice... Would you be interested in a
> full day on Monday ? What room size do you need ?
>






Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Thierry Carrez
Luke Hinds wrote:
> I had been monitoring for PTG room allocations, but I missed this email
> which was the important one.
> 
> The security SIG plans to meet at the PTG to discuss several topics. Am
> I too late to get us included?

Not too late, but obviously less choice... Would you be interested in a
full day on Monday ? What room size do you need ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [OpenStack-I18n] [I18n][PTL] PTL nomination for I18n

2018-02-05 Thread SungJin Kang
+1

lol

2018-01-31 4:07 GMT+09:00 Frank Kloeker :

> This is my announcement for re-candidacy as I18n PTL in Rocky Cycle.
>
> The last cycle passed very quickly. I had to manage all the things that
> are expected of a PTL. But we documented everything very well and I
> always had the full support of the team. I asked the team and it would
> continue to support me, which is why I am taking the chance again.
> This is the point to say thank you to you all: we have achieved many
> things and we are a great community!
>
> Now it's time to finish things:
>
> 1. Zanata upgrade. We are in the middle of the upgrade process. The dev
> server is successfully upgraded and the new Zanata version fits all our
> requirements to automate things more and more.
> Now we are in the hot release phase and when it's over, the live
> upgrade can start.
>
> 2. Translation check site. A little bit out of scope in the Queens release
> because of a lack of resources. We'll try this again in Rocky.
>
> 3. Acquire more people for the team. That will be the main part of my work
> as PTL in Rocky. We've won 3 new language teams in the last cycle, and
> OpenStack can now be served in Indian, Turkish and Esperanto. There is even
> more potential for strengthening existing teams or creating new ones.
> For this we have great OpenStack events in Europe this year, not least
> the Fall Summit in Berlin. We plan workshops and presentations.
>
> The work of the translation team is also becoming more colorful. We have
> project documentation translation on the order books, the translation of
> the user survey, and white papers for working groups.
>
> We are well prepared, but we also look to the future, for example at how
> AI programming can support us in the translation work.
>
> If the plan suits you, I look forward to your vote.
>
> Frank
>
> Email: eu...@arcor.de
> IRC: eumel8
> Twitter: eumel_8
>
> OpenStack Profile:
> https://www.openstack.org/community/members/profile/45058/frank-kloeker
>
> ___
> OpenStack-I18n mailing list
> openstack-i...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI][Keystone][Requirements][Release] What happened to the gate on Feb 4th?

2018-02-05 Thread Lance Bragstad


On 02/04/2018 10:44 PM, Qiming Teng wrote:
> Starting about 24 hours ago, we have been notified of CI gate failures
> although we haven't changed anything in our project locally. Before that
> we had spent quite some time making the out-of-tree tempest plugins
> work on the gate.
>
> After checking the log again and again ... we found the following logs
> from Keystone:
>
> Feb 05 03:31:12.609492 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-dfcbf106-fbf5-41bd-9012-3c65d1de5f9a None admin] Could not find
> project: service.: ProjectNotFound: Could not find project: service.
>
> Feb 05 03:31:13.845694 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-50feed46-7c15-425d-bec7-1b4a7ccf6859 None admin] Could not find
> service: clustering.: ServiceNotFound: Could not find service:
> clustering.
>
> Feb 05 03:31:12.552647 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-0a5e660f-dad6-4779-aea4-dd6969c728e6 None admin] Could not find
> domain: Default.: DomainNotFound: Could not find domain: Default.
>
> Feb 05 03:31:12.441128 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-7eb9ed90-28fc-40aa-8a41-d560f7a156c9 None admin] Could not find
> user: senlin.: UserNotFound: Could not find user: senlin.
>
> Feb 05 03:31:12.336572 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-19e52d02-5471-49a2-8acd-360199d8c6e0 None admin] Could not find
> role: admin.: RoleNotFound: Could not find role: admin.
>
> Feb 05 03:28:33.797665 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-544cd822-18a4-4f7b-913d-297716418239 None admin] Could not find
> user: glance.: UserNotFound: Could not find user: glance.
>
> Feb 05 03:28:29.993214 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING py.warnings [None
> req-dc411d9c-6ab9-44e3-9afb-20e5e7034f12 None admin]
> /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:865:
> UserWarning: Policy identity:create_endpoint failed scope check. The
> token used to make the request was project scoped but the policy
> requires ['system'] scope. This behavior may change in the future where
> using the intended scope is required
>
> Feb 05 03:28:29.920892 ubuntu-xenial-ovh-gra1-0002362092
> devstack@keystone.service[24845]: WARNING keystone.common.wsgi [None
> req-32a4a378-d6d3-411e-9842-2178e577af27 None admin] Could not find
> service: compute.: ServiceNotFound: Could not find service: compute.
These are all WARNING messages. If this is a tempest run, these are
probably from negative testing [0], in which case keystone is doing the
correct thing. The warnings you've pasted are also present in successful
tempest runs [1]. Can you provide a link to a patch that's failing? What
project do you work on?

[0]
https://github.com/openstack/tempest/blob/master/tempest/api/identity/admin/v3/test_projects_negative.py
[1]
http://logs.openstack.org/57/540557/2/check/tempest-full/bbd7cdd/controller/logs/screen-keystone.txt.gz?level=WARNING

>
> 
>
> --
>
> So I'm wondering what the heck happened? Keystone version bump?
> Devstack changed? Tempest settings changed?
> Why are we merging these changes near the end of a cycle when people are
> focusing on stabilizing things?
The original feature freeze date was 10 days ago [2] and with the
condition of the gate during that time, there were several projects
trailing with merging features. Keystone was one of them, and we issued
feature freeze exceptions for those efforts [3] [4] [5]. Based on the
warnings you've reported, I'm not convinced any of those efforts are
affecting CI in a negative way, especially since we're still getting
support into tempest to test those features.

[2] https://releases.openstack.org/queens/schedule.html#q-keystone-ffreeze
[3]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126587.html
[4]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126588.html
[5]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126589.html
> Any hints on these are highly appreciated.
>
> - Qiming
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [api] Openstack API and HTTP caching

2018-02-05 Thread Fred De Backer
Hi there,

I recently hit an issue where I was using Terraform through an HTTP proxy
(enforced by my company IT) to provision some resources in an OpenStack
cloud. Since creating the resources took some time, the initial response
from OpenStack was "still creating...". Further polling of the resource
status resulted in receiving *cached* copies of "still creating..." from
the proxy until time-out.

RFC 7234, which describes HTTP caching, states that in the absence of any
headers describing the lifetime/validity of the response, heuristic
algorithms may be applied by caches to guesstimate an appropriate validity
for the response... (Who knows what is implemented out there...) See the
HTTP caching RFC, section 4.2.2.

The API responses describe the current state of an object, which isn't
permanent but has a limited validity. In fact a very limited one, as the
state of an object might change at any moment.

Therefore it is my opinion that the OpenStack APIs (Nova in this case, but
equally valid for all other APIs) should be responsible for including proper
HTTP headers in their responses to either disallow caching of the response
or at least limit its validity.

See the HTTP caching RFC, section 5, for headers that could be used to
accomplish that.
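
To make this concrete, a minimal WSGI middleware along those lines could
look like the sketch below. This is an illustration only (the class name
and wiring are hypothetical, not Nova's actual code), but the header
semantics follow RFC 7234:

    class NoCacheMiddleware(object):
        """Mark every API response as non-cacheable per RFC 7234.

        Sketch only: the middleware name and wiring are hypothetical,
        not Nova's actual code.
        """

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def _start_response(status, headers, exc_info=None):
                # Tell caches not to reuse the response without
                # revalidating it with the origin server first.
                headers.append(('Cache-Control', 'no-cache'))
                return start_response(status, headers, exc_info)
            return self.app(environ, _start_response)

Wiring it in would then be a one-liner, e.g. app = NoCacheMiddleware(app).
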
For the sake of completeness, also see
https://github.com/gophercloud/gophercloud/issues/727 for my initial
client-side fix and the related discussion with the client-side project
owners...

Regards,
Fred
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][release] FFE for sushy bug-fix release

2018-02-05 Thread Matthew Thode
On 18-02-05 15:15:18, Dmitry Tantsur wrote:
> Hi all,
> 
> I'm requesting an exception to proceed with the release of the sushy
> library. To the best of my knowledge, the library is only consumed by ironic
> and at least one other vendor support library which is outside of the
> official governance. The release request is [1]. It addresses a last-minute
> bug in the authentication code; without it, authentication will not work in
> some cases.
> 
> Thanks,
> Dmitry
> 
> [1] https://review.openstack.org/540824
> 
> P.S.
> We really need a feature freeze period for libraries to avoid this... But it
> cannot be introduced with the current library release freeze. Another PTG
> topic? :)
> 

As discussed on IRC you have my ack

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] FFE request for --check feature

2018-02-05 Thread Brian Rosmaita
Thanks for following up on this, Abhishek.  After our discussion
approving this at the weekly meeting, I completely forgot to send out
an update.  As Abhishek indicated, the discussion was positive, and
this FFE is APPROVED.

cheers,
brian


On Mon, Feb 5, 2018 at 3:56 AM, Abhishek Kekane  wrote:
> Sorry, I forgot to add the meeting logs link in the previous mail.
>
> Here it is;
> http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-02-01-14.01.log.html#l-164
>
> Thank you,
>
> Abhishek Kekane
>
> On Mon, Feb 5, 2018 at 12:30 PM, Abhishek Kekane  wrote:
>>
> We have discussed this in the glance weekly meeting [1] and most of the core
>> reviewers are inclined towards accepting this FFE.
>>
>> +1 from my side as this --check command will be very helpful for
>> operators.
>>
>> Thank you Bhagyashri for working on this.
>>
>> Abhishek Kekane
>>
>> On Wed, Jan 31, 2018 at 7:29 PM, Shewale, Bhagyashri
>>  wrote:
>>>
>>> Hi Glance Folks,
>>>
>>> I'm requesting a Feature Freeze Exception for the lite-spec
>>> http://specs.openstack.org/openstack/glance-specs/specs/untargeted/glance/lite-spec-db-sync-check.html
>>> which is implemented by https://review.openstack.org/#/c/455837/8/
>>>
>>> Regards,
>>> Bhagyashri Shewale
>>>
>>> __
>>> Disclaimer: This email and any attachments are sent in strictest
>>> confidence
>>> for the sole use of the addressee and may contain legally privileged,
>>> confidential, and proprietary data. If you are not the intended
>>> recipient,
>>> please advise the sender by replying promptly to this email and then
>>> delete
>>> and destroy this email and any attachments without any further use,
>>> copying
>>> or forwarding.
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][release] FFE for sushy bug-fix release

2018-02-05 Thread Dmitry Tantsur

Hi all,

I'm requesting an exception to proceed with the release of the sushy library. To
the best of my knowledge, the library is only consumed by ironic and at least one
other vendor support library which is outside of the official governance. The
release request is [1]. It addresses a last-minute bug in the authentication
code; without it, authentication will not work in some cases.


Thanks,
Dmitry

[1] https://review.openstack.org/540824

P.S.
We really need a feature freeze period for libraries to avoid this... But it
cannot be introduced with the current library release freeze. Another PTG topic? :)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] collectstatic with custom theme is broken at least since Ocata

2018-02-05 Thread Mateusz Kowalski
Hi,

We are running Horizon on Pike and cannot confirm the same problem as you
describe. We are using a custom theme; however, the folder structure is a bit
different from the one you presented in the bug report.
In our case we have

- /usr/share/openstack-dashboard/openstack_dashboard/themes
|-- cern
|-- default
|-- material

which means we do not modify any files inside "default" at all. Let me know if
you want to compare our changes more deeply to see where the problem comes
from, as for us "theme_file.split('/templates/')" does not cause any trouble.
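
For reference, str.split() simply returns the whole string unchanged when the
separator is absent, so any code that assumes two parts will break for theme
files living outside a templates/ directory (the paths below are made up):

    >>> 'mytheme/templates/base.html'.split('/templates/')
    ['mytheme', 'base.html']
    >>> 'mytheme/static/img/logo.svg'.split('/templates/')
    ['mytheme/static/img/logo.svg']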

Cheers,
Mateusz

> On 5 Feb 2018, at 14:44, Saverio Proto  wrote:
> 
> Hello,
> 
> I have tried to find a fix to this:
> 
> https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/
> https://bugs.launchpad.net/horizon/+bug/1744239
> https://review.openstack.org/#/c/536039/
> 
> I am upgrading from Newton to Pike.
> 
> The real question here is: how is it possible that this bug was found so
> late???
> 
> There is at least another operator that documented hitting this bug in
> Ocata.
> 
> Probably this bug went unnoticed because you only hit it if you have
> customizations for Horizon. None of the automatic testing notices
> this bug.
> 
> What I cannot understand is:
> - are we two operators hitting a corner case?
> - does no one else use Horizon with custom themes in production with a
> version newer than Newton?
> 
> This is all food for your brainstorming about LTS, bugfix branches, and
> release cycle changes.
> 
> Cheers,
> 
> Saverio
> 
> 
> -- 
> SWITCH
> Saverio Proto, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 1573
> saverio.pr...@switch.ch, http://www.switch.ch
> 
> http://www.switch.ch/stories
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] collectstatic with custom theme is broken at least since Ocata

2018-02-05 Thread Saverio Proto
Hello,

I have tried to find a fix to this:

https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/
https://bugs.launchpad.net/horizon/+bug/1744239
https://review.openstack.org/#/c/536039/

I am upgrading from Newton to Pike.

The real question here is: how is it possible that this bug was found so
late???

There is at least another operator that documented hitting this bug in
Ocata.

Probably this bug went unnoticed because you only hit it if you have
customizations for Horizon. None of the automatic testing notices
this bug.

What I cannot understand is:
 - are we two operators hitting a corner case?
 - does no one else use Horizon with custom themes in production with a
version newer than Newton?

This is all food for your brainstorming about LTS, bugfix branches, and
release cycle changes.

Cheers,

Saverio


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Rocky PTL candidacy

2018-02-05 Thread Alex Schultz
I would like to nominate myself for the TripleO PTL role for the Rocky cycle.

As PTL of TripleO for the Queens cycle, I focused on improving containerized
services, improving the deployment process and CI, and improving the
visibility of the project's status. I personally believe we've made great
strides on all these fronts over the last cycle.  For Rocky, I would like
to continue to focus on:

* Reducing duplication and tech debt
  When we switched over to containerization, we had to implement some items
  in multiple places to support backwards compatibility. I believe it's time
  to spend some effort reducing duplication of code and processes and focus
  on simplifying actions for the end user.  An example of this will be efforts
  to align the undercloud and overcloud deployment processes.

* Simplifying the deployment process
  Additionally with the containerization switch, we've added new requirements
  for actions that must be performed by the end user to deploy OpenStack.
  I believe we should spend time looking at what actions we can remove or reduce
  by automating them as part of the deployment process.  An example of this
  will be efforts to enable autodiscovery for the nodes on the undercloud
  as well as switching to the config-download by default.

* Continued efforts around CI
  We've made great strides in stabilizing the CI as well as implementing Zuul
  v3. We need to continue to move our CI to fully native Zuul v3 actions and
  focus on developers' ability to reproduce CI outside of the upstream.

Thanks,
Alex Schultz
irc: mwhahaha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true

2018-02-05 Thread Rikimaru Honjo

I tried to replace pyinotify with inotify, but the same error occurred.
I'm asking the developer of inotify about its behavior.

I wrote the details of my status on Launchpad:
https://bugs.launchpad.net/masakari/+bug/1740111/comments/4
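
For reference, the minimal watch loop I'm testing against looks roughly like
this (a sketch only; it assumes the PyPI "inotify" package, 1.0 or newer, and
the path is just an example):

    import inotify.adapters

    def watch_log(path):
        # Sketch: watch a single log file with the PyPI "inotify"
        # package (not the defunct pyinotify).
        i = inotify.adapters.Inotify()
        i.add_watch(path)
        for _header, type_names, watch_path, _filename in i.event_gen(
                yield_nones=False):
            if 'IN_MODIFY' in type_names:
                print('%s was modified' % watch_path)

    # e.g. watch_log('/var/log/masakari/masakari-api.log')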


On 2018/01/31 20:03, Rikimaru Honjo wrote:

Hello,

Sorry for the very late reply...

On 2018/01/10 1:11, Doug Hellmann wrote:

Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900:

Hello,

On 2018/01/04 23:12, Doug Hellmann wrote:

Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900:

Hello,

The below bug was reported in Masakari's Launchpad.
I think that this bug was caused by oslo.log.
(And the root cause is a bug in pyinotify, which is used by oslo.log. The
details are written in the bug report.)

* masakari-api failed to launch due to setting of watch_log_file and log_file
 https://bugs.launchpad.net/masakari/+bug/1740111

There is a possibility that this bug affects all OpenStack components
using oslo.log.
(But the processes working with uwsgi [1] weren't affected when I tried to
reproduce it.
I haven't figured out the reason for this yet...)

Could you help us?
And, what should we do...?

[1]
e.g. nova-api, cinder-api, keystone...

Best regards,


The bug is in pyinotify. According to the git repo [1] that project
was last updated in June of 2015.  I recommend we move off of
pyinotify entirely, since it appears to be unmaintained.

If there is another library to do the same thing we should switch
to it (there seem to be lots of options [2]). If there is no viable
replacement or fork, we should deprecate that log watching feature
(and anything else for which we use pyinotify) and remove it ASAP.

We'll need a volunteer to do the evaluation and update oslo.log.

Doug

[1] https://github.com/seb-m/pyinotify
[2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search

Thank you for replying.

I haven't researched deeply, but inotify looks good, because the "weight" of
inotify is the largest and the following text appears in its description.

https://pypi.python.org/pypi/inotify/0.2.9

This project is unrelated to the *PyInotify* project that existed prior to this 
one (this project began in 2015). That project is defunct and no longer 
available.

PyInotify is defunct and no longer available...



The inotify package seems like a good candidate to replace pyinotify.

Have you looked at how hard it would be to change oslo.log? If so, does
using the newer library eliminate the bug you had?

I am researching it now. (But I think it is not easy.)
I'll create a patch if inotify can eliminate the bug.



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Rocky PTL candidacy

2018-02-05 Thread Paul Bourke

On 05/02/18 10:25, Christian Berendt wrote:

Hello Paul.


On 5. Feb 2018, at 11:12, Paul Bourke  wrote:

This does not mean I don't have a vision for Kolla - no project is perfect



Regardless of that, I would be interested in your visions. What specifically do 
you want to tackle in the next cycle in kolla? What should be the focus?

Christian.



Hi Christian,

Sure thing :) To sum it up, I would like to see us focus on Kolla in 
production environments. This is the mission of the project, and we 
still have a way to go. Specifically:


* Improving our tooling. Currently kolla-ansible (as in the shell 
script) is very simplistic and requires operators to go under the 
hood for basic things such as checking the health of their cloud, 
diffing configs, viewing logs, etc. [0]


* Related to the above, we need improved monitoring in Kolla.

* Finish the zero downtime upgrade work.

* Resolving issues around configuration [1]. We need to decide how much 
we want to provide and make it as straightforward as possible for 
operators to override.


* Documentation should continue to be a priority.

* Finally, I would like to start the discussion of moving each element 
of Kolla out into separate projects. In particular I think this needs to 
happen with kolla-kubernetes but potentially the images also.


Each of these is an area that I've heard about directly from real-world 
operators, and that I also think is key to the future and overall health of 
the project. If you'd like to discuss any of them in more detail, please 
give me a shout at any time.


-Paul

[0] https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126663.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-05 Thread Luke Hinds
On Tue, Jan 30, 2018 at 2:11 PM, Thierry Carrez 
wrote:

> Thierry Carrez wrote:
> > Here is the proposed pre-allocated track schedule for the Dublin PTG:
> >
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true
>
> Following feedback I made small adjustments to Kuryr and
> OpenStack-Charms allocations. The track schedule is about to be
> published on the event website, so now is your last chance to signal
> critical issues with it!
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Hi Thierry,

I had been monitoring for PTG room allocations, but I missed this email
which was the important one.

The security SIG plans to meet at the PTG to discuss several topics. Am I
too late to get our inclusion?

Luke
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] [ptl] Rocky PTL candidacy

2018-02-05 Thread duon...@vn.fujitsu.com
Hello everybody,

Kolla is already a production-grade deployment system.  In my country, I
helped one media company use Kolla to deploy an OpenStack cluster in
production, and I'll have the chance to help another company in Vietnam use
Kolla in production in the near future.

I joined the Kolla team in Newton, and I always remember how much help I got
from the Kolla PTL, the core reviewers and all the other members. I have been
serving as a core reviewer since the Pike cycle, and I have contributed many
blueprints, bug reports and fixes since the first days I joined [1][2][3][4].

From the first day, I have been impressed by the diversity of the Kolla team,
which gives us many ideas for new features, bug fixes and code review.

For the Rocky cycle, I would like to focus on the following goals:

* Focus on feedback from Kolla users, their needs and also their pain points.
* Improve Kolla documentation, keeping it up to date with the code.
* Encourage diversity in our community.
* Improve cross-community communication.
* Implement the upgrade procedure for OpenStack services [5]
* Reduce upgrade time, moving toward zero-downtime upgrades for OpenStack services.
* Start fast-forward upgrade support (the 7th point in [6])
* Bring upgrade tests to our CI and improve the existing ones.
* Implement a node-change feature for Kolla (starting with node removal).
* Bring kolla-kubernetes to a 1.0 release.

Last but not least, I want to introduce Kolla to more users and companies,
encourage core reviewer membership, prioritize pending features, and take on
the many other activities that are PTL responsibilities.

Thank you for reading this long email, and please consider it my PTL
candidacy. I hope you will give me the chance to serve as your PTL for the
Rocky cycle.


[1] https://blueprints.launchpad.net/kolla
[2] https://blueprints.launchpad.net/kolla-ansible
[3] https://bugs.launchpad.net/kolla/
[4] https://bugs.launchpad.net/kolla-ansible/
[5] 
https://blueprints.launchpad.net/kolla-ansible/+spec/apply-service-upgrade-procedure
[6] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html

Best regards,

Ha Quang Duong (Mr.)
PODC - Fujitsu Vietnam Ltd.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Rocky PTL candidacy

2018-02-05 Thread Christian Berendt
Hello Paul.

> On 5. Feb 2018, at 11:12, Paul Bourke  wrote:
> 
> This does not mean I don't have a vision for Kolla - no project is perfect


Regardless of that, I would be interested in your visions. What specifically do 
you want to tackle in the next cycle in kolla? What should be the focus?

Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] PTL candidacy

2018-02-05 Thread Bedyk, Witold
Hello everyone,

I would like to announce my candidacy to continue as PTL of Monasca for the
Rocky release.

I have worked on the project as a core reviewer since 2015, acted as Release
Management Liaison in Ocata and Pike, and had the privilege of being PTL in
the Queens release cycle. I have learnt a lot in this new role and it's a
real pleasure to work with this great team and improve the project. Thank you
for all your support.

In the next release I would like to focus on the following topics:

* continue the work on Cassandra support
* strengthen the community and improve active participation and contribution
* improve tenant monitoring
* accomplish Python 3 migration

Apart from that I'll do my best to promote Monasca, coordinate community work
and interact with other OpenStack teams.

Thank you for considering my candidacy and I'm looking forward to another very
productive cycle.

Best greetings

Witek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] New meeting time Tue 1000UTC

2018-02-05 Thread Spyros Trigazis
Hello,

Heads up, the containers team meeting has changed from 1600UTC to 1000UTC.

See you there tomorrow at #openstack-meeting-alt !
Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Rocky PTL candidacy

2018-02-05 Thread Paul Bourke

Hello all,

I've been involved with Kolla since its early stages around Liberty, 
where we saw it evolve through multiple iterations of image formats, 
orchestration methods and patterns into the project we know and love.


From my perspective the community is one of the best things about 
Kolla. You are the ones that keep config files up to date as OpenStack 
evolves, continue to implement new roles and images, keep the gates up 
and running; the list goes on. With this in mind, I won't list a bunch of 
features that I'd like to accomplish for Rocky. Rather, I would like to 
spend time listening to what you as users would like to see in the 
project, and doing whatever I possibly can to help you achieve that. 
This does not mean I don't have a vision for Kolla - no project is 
perfect, and there are plenty of areas I think could use some 
refinement. My hope is through discussion and collaboration we can 
continue to iterate to ensure this project is as useful as possible to 
our users.


I hope you will consider me to serve you as your PTL for the coming cycle.

Thank you!

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL candidacy

2018-02-05 Thread Thierry Carrez
Ben Nemec wrote:
> I am submitting my candidacy for Oslo PTL.

Thanks Ben for stepping up !

-- 
Thierry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] FFE request for --check feature

2018-02-05 Thread Abhishek Kekane
Sorry, I forgot to add the meeting logs link in the previous mail.

Here it is;
http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-02-01-14.01.log.html#l-164

Thank you,

Abhishek Kekane

On Mon, Feb 5, 2018 at 12:30 PM, Abhishek Kekane  wrote:

> We have discussed this in the glance weekly meeting [1] and most of the core
> reviewers are inclined towards accepting this FFE.
>
> +1 from my side as this --check command will be very helpful for operators.
>
> Thank you Bhagyashri for working on this.
>
> Abhishek Kekane
>
> On Wed, Jan 31, 2018 at 7:29 PM, Shewale, Bhagyashri <
> bhagyashri.shew...@nttdata.com> wrote:
>
>> Hi Glance Folks,
>>
>> I'm requesting a Feature Freeze Exception for the lite-spec
>> http://specs.openstack.org/openstack/glance-specs/specs/untargeted/glance/lite-spec-db-sync-check.html
>> which is implemented by https://review.openstack.org/#/c/455837/8/
>>
>> Regards,
>> Bhagyashri Shewale
>>
>> __
>> Disclaimer: This email and any attachments are sent in strictest
>> confidence
>> for the sole use of the addressee and may contain legally privileged,
>> confidential, and proprietary data. If you are not the intended recipient,
>> please advise the sender by replying promptly to this email and then
>> delete
>> and destroy this email and any attachments without any further use,
>> copying
>> or forwarding.
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Automatically generated Zuul changes (topic: zuulv3-projects)

2018-02-05 Thread Andreas Jaeger
Please accept these changes so that they don't have to be created for
the stable/queens branch.
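
For anyone wondering what the generated diff looks like: it is typically just
dropping the "name" line from the repo's own project stanza, e.g. (project
and job names below are only illustrative):

    - project:
        name: openstack/example
        check:
          jobs:
            - openstack-tox-py27

becomes

    - project:
        check:
          jobs:
            - openstack-tox-py27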

Andreas

On 2018-01-31 18:59, James E. Blair wrote:
> Hi,
> 
> Occasionally we will make changes to the Zuul configuration language.
> Usually these changes will be backwards compatible, but whether they are
> or not, we still want to move things forward.
> 
> Because Zuul's configuration is now spread across many repositories, it
> may take many changes to do this.  I'm in the process of making one such
> change now.
> 
> Zuul no longer requires the project name in the "project:" stanza for
> in-repo configuration.  Removing it makes it easier to fork or rename a
> project.
> 
> I am using a script to create and upload these changes.  Because changes
> to Zuul's configuration use more resources, I, and the rest of the infra
> team, are carefully monitoring this and pacing changes so as not to
> overwhelm the system.  This is a limitation we'd like to address in the
> future, but we have to live with now.
> 
> So if you see such a change to your project (the topic will be
> "zuulv3-projects"), please observe the following:
> 
> * Go ahead and approve it as soon as possible.
> 
> * Don't be strict about backported change ids.  These changes are only
>   to Zuul config files; the stable backport policy was not intended to
>   apply to things like this.
> 
> * Don't create your own versions of these changes.  My script will
>   eventually upload changes to all affected project-branches.  It's
>   intentionally a slow process, and attempting to speed it up won't
>   help.  But if there's something wrong with the change I propose, feel
>   free to push an update to correct it.


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev