Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-04-03 Thread Akihiro Motoki
Hi Xinni,

There is no need to push a tag manually for official deliverables.
You can propose a patch to the openstack/releases repository.
The Horizon PTL or release liaison (currently both roles are held by Ivan) can
confirm it and the release team will approve it.
Once it is approved, a release tag will be added and the deliverable will be
published automatically by the infra scripts (provided you have set up
project-config appropriately).
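
For reference, a release request of this kind is just a small patch to a
deliverable file in openstack/releases. A rough sketch of what it might look
like for one of the xstatic packages (the file path, version and commit hash
below are placeholders, not the real values):

    # deliverables/_independent/xstatic-angular-material.yaml (illustrative)
    launchpad: xstatic
    release-model: independent
    team: horizon
    type: library
    releases:
      - version: 1.1.5.0                  # placeholder version
        projects:
          - repo: openstack/xstatic-angular-material
            hash: 0123456789abcdef0123456789abcdef01234567  # placeholder SHA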

Akihiro

2018-04-04 14:34 GMT+09:00 Xinni Ge :

> Hi Ivan and other Horizon team member,
>
> Thanks for adding us into xstatic-core group.
> But I still need your opinion and help to release the newly-added xstatic
> packages to the PyPI index.
>
> The current `xstatic-core` group doesn't have the PUSH SIGNED TAG
> permission, so I cannot release the first non-trivial version.
>
> If I (or maybe Kaz) could be added to the xstatic-release group, we could
> release all 8 packages by ourselves.
>
> Otherwise, we would really appreciate it if any member of xstatic-release
> could help to do it.
>
> For quick access, here is the link to the access permission page of
> one xstatic package.
> https://review.openstack.org/#/admin/projects/openstack/
> xstatic-angular-material,access
>
> --
> Best Regards,
> Xinni
>
> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
> wrote:
>
>> Hi Ivan,
>>
>>
>> Thank you very much.
>> I've confirmed that all of us have been added to xstatic-core.
>>
>> As discussed, we will focus on the following repos, which we added for
>> heat-dashboard, and will not touch the other xstatic repos as cores.
>>
>> xstatic-angular-material
>> xstatic-angular-notify
>> xstatic-angular-uuid
>> xstatic-angular-vis
>> xstatic-filesaver
>> xstatic-js-yaml
>> xstatic-json2yaml
>> xstatic-vis
>>
>> Regards,
>> Kaz
>>
>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
>> > Hi Kaz,
>> >
>> > Don't worry, we're on the same page with you. I added you, Xinni,
>> and
>> > Keiichi to the xstatic-core group. Thank you for your contributions!
>> >
>> > Regards,
>> > Ivan Kolodyazhny,
>> > http://blog.e0ne.info/
>> >
>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
>> wrote:
>> >>
>> >> Hi Ivan & Horizon folks
>> >>
>> >>
>> >> AFAIK, the Horizon team concluded that you will add the specific
>> >> members to xstatic-core, correct?
>> >> Can I ask you to add the following members?
>> >> # All three of them are heat-dashboard cores.
>> >>
>> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
>> >> Xinni Ge / xinni.ge1...@gmail.com
>> >> Keiichi Hikita / keiichi.hik...@gmail.com
>> >>
>> >> Please give me a shout if we are not on the same page or if you have any concerns.
>> >>
>> >> Regards,
>> >> Kaz
>> >>
>> >>
>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>> >> > Hi Ivan, Akihiro,
>> >> >
>> >> >
>> >> > Thanks for your kind arrangement.
>> >> > Looking forward to hearing your decision soon.
>> >> >
>> >> > Regards,
>> >> > Kaz
>> >> >
>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> >> >> Hi Team,
>> >> >>
>> >> >> From my perspective, I'm OK with both options #2 and #3. I agree
>> that
>> >> >> #4
>> >> >> could be too complicated for us. Anyway, we've got this topic on the
>> >> >> meeting
>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision
>> after
>> >> >> the
>> >> >> meeting.
>> >> >>
>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>> >> >>
>> >> >>
>> >> >>
>> >> >> Regards,
>> >> >> Ivan Kolodyazhny,
>> >> >> http://blog.e0ne.info/
>> >> >>
>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > >
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi Kaz and Ivan,
>> >> >>>
>> >> >>> Yeah, it is worth discussing officially in the horizon team meeting
>> or
>> >> >>> the
>> >> >>> mailing list thread to get a consensus.
>> >> >>> Hopefully you can add this topic to the horizon meeting agenda.
>> >> >>>
>> >> >>> After sending the previous mail, I noticed another option. I see
>> there
>> >> >>> are
>> >> >>> several options now.
>> >> >>> (1) Keep xstatic-core and horizon-core same.
>> >> >>> (2) Add specific members to xstatic-core
>> >> >>> (3) Add specific horizon-plugin core to xstatic-core
>> >> >>> (4) Split core membership into per-repo basis (perhaps too
>> >> >>> complicated!!)
>> >> >>>
>> >> >>> My current vote is (2) as xstatic-core needs to understand what is
>> >> >>> xstatic
>> >> >>> and how it is maintained.
>> >> >>>
>> >> >>> Thanks,
>> >> >>> Akihiro
>> >> >>>
>> >> >>>
>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>> >> 
>> >>  Hi Akihiro,
>> >> 
>> >> 
>> >>  Thanks for your comment.
>> >>  The background of my request to add us to xstatic-core comes from
>> >>  Ivan's comment in last PTG's etherpad for heat-dashboard
>> discussion.
>> >> 
>> >>  https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-
>> discussion
>> >>  Line135, "we can share ownership if needed - e0ne"

Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-04-03 Thread Ivan Kolodyazhny
Hi Xinni,

Please send me a list of the packages which should be released.

In general, release-* groups are different from core-* groups. We should
discuss how to go forward with it.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Apr 4, 2018 at 8:34 AM, Xinni Ge  wrote:

> Hi Ivan and other Horizon team member,
>
> Thanks for adding us into xstatic-core group.
> But I still need your opinion and help to release the newly-added xstatic
> packages to the PyPI index.
>
> The current `xstatic-core` group doesn't have the PUSH SIGNED TAG
> permission, so I cannot release the first non-trivial version.
>
> If I (or maybe Kaz) could be added to the xstatic-release group, we could
> release all 8 packages by ourselves.
>
> Otherwise, we would really appreciate it if any member of xstatic-release
> could help to do it.
>
> For quick access, here is the link to the access permission page of
> one xstatic package.
> https://review.openstack.org/#/admin/projects/openstack/
> xstatic-angular-material,access
>
> --
> Best Regards,
> Xinni
>
> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara 
> wrote:
>
>> Hi Ivan,
>>
>>
>> Thank you very much.
>> I've confirmed that all of us have been added to xstatic-core.
>>
>> As discussed, we will focus on the following repos, which we added for
>> heat-dashboard, and will not touch the other xstatic repos as cores.
>>
>> xstatic-angular-material
>> xstatic-angular-notify
>> xstatic-angular-uuid
>> xstatic-angular-vis
>> xstatic-filesaver
>> xstatic-js-yaml
>> xstatic-json2yaml
>> xstatic-vis
>>
>> Regards,
>> Kaz
>>
>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
>> > Hi Kaz,
>> >
>> > Don't worry, we're on the same page with you. I added you, Xinni,
>> and
>> > Keiichi to the xstatic-core group. Thank you for your contributions!
>> >
>> > Regards,
>> > Ivan Kolodyazhny,
>> > http://blog.e0ne.info/
>> >
>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
>> wrote:
>> >>
>> >> Hi Ivan & Horizon folks
>> >>
>> >>
>> >> AFAIK, the Horizon team concluded that you will add the specific
>> >> members to xstatic-core, correct?
>> >> Can I ask you to add the following members?
>> >> # All three of them are heat-dashboard cores.
>> >>
>> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
>> >> Xinni Ge / xinni.ge1...@gmail.com
>> >> Keiichi Hikita / keiichi.hik...@gmail.com
>> >>
>> >> Please give me a shout if we are not on the same page or if you have any concerns.
>> >>
>> >> Regards,
>> >> Kaz
>> >>
>> >>
>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>> >> > Hi Ivan, Akihiro,
>> >> >
>> >> >
>> >> > Thanks for your kind arrangement.
>> >> > Looking forward to hearing your decision soon.
>> >> >
>> >> > Regards,
>> >> > Kaz
>> >> >
>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> >> >> Hi Team,
>> >> >>
>> >> >> From my perspective, I'm OK with both options #2 and #3. I agree
>> that
>> >> >> #4
>> >> >> could be too complicated for us. Anyway, we've got this topic on the
>> >> >> meeting
>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision
>> after
>> >> >> the
>> >> >> meeting.
>> >> >>
>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>> >> >>
>> >> >>
>> >> >>
>> >> >> Regards,
>> >> >> Ivan Kolodyazhny,
>> >> >> http://blog.e0ne.info/
>> >> >>
>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > >
>> >> >> wrote:
>> >> >>>
>> >> >>> Hi Kaz and Ivan,
>> >> >>>
>> >> >>> Yeah, it is worth discussing officially in the horizon team meeting
>> or
>> >> >>> the
>> >> >>> mailing list thread to get a consensus.
>> >> >>> Hopefully you can add this topic to the horizon meeting agenda.
>> >> >>>
>> >> >>> After sending the previous mail, I noticed another option. I see
>> there
>> >> >>> are
>> >> >>> several options now.
>> >> >>> (1) Keep xstatic-core and horizon-core same.
>> >> >>> (2) Add specific members to xstatic-core
>> >> >>> (3) Add specific horizon-plugin core to xstatic-core
>> >> >>> (4) Split core membership into per-repo basis (perhaps too
>> >> >>> complicated!!)
>> >> >>>
>> >> >>> My current vote is (2) as xstatic-core needs to understand what is
>> >> >>> xstatic
>> >> >>> and how it is maintained.
>> >> >>>
>> >> >>> Thanks,
>> >> >>> Akihiro
>> >> >>>
>> >> >>>
>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>> >> 
>> >>  Hi Akihiro,
>> >> 
>> >> 
>> >>  Thanks for your comment.
>> >>  The background of my request to add us to xstatic-core comes from
>> >>  Ivan's comment in last PTG's etherpad for heat-dashboard
>> discussion.
>> >> 
>> >>  https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-
>> discussion
>> >>  Line135, "we can share ownership if needed - e0ne"
>> >> 
>> >>  Just in case, could you guys confirm a unified opinion on this
>> matter
>> >>  as
>> >>  the Horizon team?
>> >> 
>> >>  Frankly speaking I'm feeling 

Re: [OpenStack-Infra] Problems setting up my own OpenStack Infrastructure

2018-04-03 Thread Bernd Bausch
Lenny,

thanks, these instructions are a bit more robust and easier to
understand than [2].

One detail stands out for me: they make it clear that Ubuntu 14 is
required. A few Puppet modules, in particular Etherpad used as an
example in [2], assume Upstart. I don't know if Upstart is available in
Xenial or recent non-Ubuntu distros, but it's definitely not there by
default.

I did find a few places that could be improved or may even be incorrect.
How can I formally submit suggestions and bugs in the OpenStack-Infra
documentation?

Here they are:

- First, install_puppet.sh is downloaded and executed, then system-config is
  cloned. Since system-config contains install_puppet.sh, it would be more
  efficient to clone first, then install Puppet.
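  A minimal sketch of that order, assuming the upstream system-config repo
  location and that install_puppet.sh is run as root:

    git clone https://git.openstack.org/openstack-infra/system-config
    cd system-config
    sudo ./install_puppet.sh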

- Configuration of /etc/puppet/environments/common.yaml is not quite trivial.
  Perhaps a few examples would help people like me.

- The instructions first install the log server, then the CI server. The log
  server is tested by uploading a file to Jenkins, which runs on the CI server
  and is not yet available at that point.

- The Jenkins installation fails since a prerequisite can't be found:

   The following packages have unmet dependencies:
    jenkins : Depends: default-jre-headless (>= 2:1.8) but it is not
going to be installed or
  java8-runtime-headless but it is not installable
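
  One possible workaround, assuming Ubuntu 14.04 and that pulling Java 8 from
  a PPA is acceptable in your environment (not something the docs prescribe):

    sudo add-apt-repository -y ppa:openjdk-r/ppa
    sudo apt-get update
    sudo apt-get install -y openjdk-8-jre-headless
    # then re-run the Jenkins installation step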

- I was unable to start nodepool-builder with "service nodepool-builder start".
  First, nodepool-builder aborted since it is configured to log to a file under
  /var/log/nodepool/images/, which doesn't exist.
  After fixing this manually, the service command is successful, but no
  nodepool-builder process is running. I didn't find out why and just started
  the daemon manually.
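  The manual fix amounted to something like the following; the nodepool
  owner/group is an assumption and may differ in other deployments:

    sudo mkdir -p /var/log/nodepool/images
    sudo chown nodepool:nodepool /var/log/nodepool/images
    sudo service nodepool-builder start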

- Attempting an image build fails with a stacktrace containing:

    diskimage_builder.element_dependencies.MissingElementException:
    Element 'openstack-repos' not found
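
  My understanding (an assumption, not something I verified) is that the
  'openstack-repos' element is not shipped with diskimage-builder itself but
  comes from the project-config repo's nodepool/elements directory, so the
  builder's elements path has to include it, e.g. via the elements-dir setting
  in nodepool.yaml or, for a manual diskimage-builder run:

    # path is illustrative; point it at your project-config checkout
    export ELEMENTS_PATH=/opt/project-config/nodepool/elements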

This is how far I got for the moment.

Bernd

On 4/1/2018 2:21 PM, Lenny Berkhovsky wrote:
> Hello Bernd,
> There is also a Third Party CI page[1] that may assist you
>
> [1] https://docs.openstack.org/infra/openstackci/third_party_ci.html
>
>




___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules

2018-04-03 Thread Xinni Ge
Hi Ivan and other Horizon team member,

Thanks for adding us into xstatic-core group.
But I still need your opinion and help to release the newly-added xstatic
packages to the PyPI index.

The current `xstatic-core` group doesn't have the PUSH SIGNED TAG
permission, so I cannot release the first non-trivial version.

If I (or maybe Kaz) could be added to the xstatic-release group, we could
release all 8 packages by ourselves.

Otherwise, we would really appreciate it if any member of xstatic-release
could help to do it.

For quick access, here is the link to the access permission page of
one xstatic package.
https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access


--
Best Regards,
Xinni

On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara  wrote:

> Hi Ivan,
>
>
> Thank you very much.
> I've confirmed that all of us have been added to xstatic-core.
>
> As discussed, we will focus on the following repos, which we added for
> heat-dashboard, and will not touch the other xstatic repos as cores.
>
> xstatic-angular-material
> xstatic-angular-notify
> xstatic-angular-uuid
> xstatic-angular-vis
> xstatic-filesaver
> xstatic-js-yaml
> xstatic-json2yaml
> xstatic-vis
>
> Regards,
> Kaz
>
> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
> > Hi Kaz,
> >
> > Don't worry, we're on the same page with you. I added you, Xinni, and
> > Keiichi to the xstatic-core group. Thank you for your contributions!
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
> >
> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara 
> wrote:
> >>
> >> Hi Ivan & Horizon folks
> >>
> >>
> >> AFAIK, the Horizon team concluded that you will add the specific
> >> members to xstatic-core, correct?
> >> Can I ask you to add the following members?
> >> # All three of them are heat-dashboard cores.
> >>
> >> Kazunori Shinohara / ksnhr.t...@gmail.com #myself
> >> Xinni Ge / xinni.ge1...@gmail.com
> >> Keiichi Hikita / keiichi.hik...@gmail.com
> >>
> >> Please give me a shout if we are not on the same page or if you have any concerns.
> >>
> >> Regards,
> >> Kaz
> >>
> >>
> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
> >> > Hi Ivan, Akihiro,
> >> >
> >> >
> >> > Thanks for your kind arrangement.
> >> > Looking forward to hearing your decision soon.
> >> >
> >> > Regards,
> >> > Kaz
> >> >
> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
> >> >> Hi Team,
> >> >>
> >> >> From my perspective, I'm OK with both options #2 and #3. I agree that
> >> >> #4
> >> >> could be too complicated for us. Anyway, we've got this topic on the
> >> >> meeting
> >> >> agenda [1] so we'll discuss it there too. I'll share our decision
> after
> >> >> the
> >> >> meeting.
> >> >>
> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
> >> >>
> >> >>
> >> >>
> >> >> Regards,
> >> >> Ivan Kolodyazhny,
> >> >> http://blog.e0ne.info/
> >> >>
> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki 
> >> >> wrote:
> >> >>>
> >> >>> Hi Kaz and Ivan,
> >> >>>
> >> >>> Yeah, it is worth discussing officially in the horizon team meeting
> or
> >> >>> the
> >> >>> mailing list thread to get a consensus.
> >> >>> Hopefully you can add this topic to the horizon meeting agenda.
> >> >>>
> >> >>> After sending the previous mail, I noticed another option. I see
> there
> >> >>> are
> >> >>> several options now.
> >> >>> (1) Keep xstatic-core and horizon-core same.
> >> >>> (2) Add specific members to xstatic-core
> >> >>> (3) Add specific horizon-plugin core to xstatic-core
> >> >>> (4) Split core membership into per-repo basis (perhaps too
> >> >>> complicated!!)
> >> >>>
> >> >>> My current vote is (2) as xstatic-core needs to understand what is
> >> >>> xstatic
> >> >>> and how it is maintained.
> >> >>>
> >> >>> Thanks,
> >> >>> Akihiro
> >> >>>
> >> >>>
> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
> >> 
> >>  Hi Akihiro,
> >> 
> >> 
> >>  Thanks for your comment.
> >>  The background of my request to add us to xstatic-core comes from
> >>  Ivan's comment in last PTG's etherpad for heat-dashboard
> discussion.
> >> 
> >>  https://etherpad.openstack.org/p/heat-dashboard-ptg-
> rocky-discussion
> >>  Line135, "we can share ownership if needed - e0ne"
> >> 
> >>  Just in case, could you guys confirm a unified opinion on this matter
> >>  as
> >>  the Horizon team?
> >> 
> >>  Frankly speaking, I feel the benefit of making us xstatic-core is
> >>  that it's easier and smoother to manage what we are taking care of for
> >>  heat-dashboard.
> >>  On the other hand, I can understand what you are saying, Akihiro:
> the
> >>  newly added repos belong to the Horizon project, and having them
> >>  managed by non-Horizon cores is not consistent.
> >>  Also, having an exception might cause unexpected confusion in the near
> future.
> >> 
> >>  Eventually we will follow your opinion, let me hear 

[Openstack] API endpoint naming in Keystone

2018-04-03 Thread Andrew Bogott
I just now upgraded my test install (nova, keystone and glance) from 
Liberty to Mitaka.  Immediately after the upgrade, every compute query 
in the openstack client or Horizon started returning a 404.


I resolved this problem by changing all of my nova endpoints in Keystone 
that looked like this:


   http://labtestnet2001.codfw.wmnet:8774/v2/$(tenant_id)s

so that they now look like this:

   http://labtestnet2001.codfw.wmnet:8774/v2

I can't find any online documentation to support this change. Every 
how-to guide includes the $(tenant_id)s component of the endpoint for 
nova, although other services (e.g. glance) seem not to recommend it.  
Can anyone help me understand what's going on here?  Are the docs just 
out of date, or do I have some subtle breakage in my install that this 
is revealing?
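
For anyone who wants to compare notes, the change amounted to recreating the
compute endpoints without the $(tenant_id)s suffix, roughly like this
(illustrative commands assuming the Identity v3 API and the RegionOne region;
repeat for the internal/admin interfaces as needed):

    openstack endpoint list --service compute
    openstack endpoint create --region RegionOne \
        compute public http://labtestnet2001.codfw.wmnet:8774/v2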


Thanks!

-Andrew


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [cyborg] High Precision Time Synchronization Card Use Case Summary

2018-04-03 Thread yumeng bao
Hi team,
In our last weekly meeting, High Precision Time Synchronization Card Use Case 
was firstly introduced. In the following link is a summary/description about 
this use case. Please take a look and don't hesitate to ask any question.  :)
https://etherpad.openstack.org/p/clock-driver

Regards,Yumeng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Selecting New Priority Effort(s)

2018-04-03 Thread David Moreau Simard
It won't be very exciting but we really need to do one of the
following two things soon:

1) Ansiblify control plane [1]
2) Update our puppet things to puppet 4 (or 5?)

Puppet 3 has been end of life since Dec 31, 2016. [2]

The longer we draw this out, the more work it'll be :(

[1]: https://review.openstack.org/#/c/469983/
[2]: https://groups.google.com/forum/#!topic/puppet-users/IdutL5FTW7w


David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]


On Tue, Apr 3, 2018 at 4:23 PM, Clark Boylan  wrote:
> Hello everyone,
>
> I just approved the change to mark the Zuul v3 priority effort as completed 
> in the infra-specs repo. Thank you to everyone that made that possible. With 
> Zuul v3 work largely done we can now look forward to our next priority 
> efforts.
>
> Currently the only task marked as a priority is the task-tracker spec which 
> at this point is migrating projects into storyboard. I think we can likely 
> add one or two new priority efforts to this list.
>
> After some quick initial brainstorming these were the ideas I had for getting 
> onto that list (note some may require we actually write a spec):
>
> * Gerrit upgrade to 2.14/2.15
> * Control Plane operating system upgrades to Xenial
> * Bringing wiki under config management
>
> My bias here is I've personally been working to try and pay down some of this 
> tech debt we've built up simply due to bit rot, but I know we have other 
> specs and I'm sure we can make good arguments for why other efforts should be 
> made a priority. I'd love to get feedback on what others think would make 
> good priority efforts.
>
> Let's use this thread to identify candidates then whittle the list down to 
> one or two to focus on for the next little while.
>
> Thank you,
> Clark
>
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread melanie witt

On Tue, 03 Apr 2018 18:53:33 -0400, Doug Hellmann wrote:

Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700:

On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:

On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:

Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
I'd decided they were specific to my local dev environment given no one
else was seeing them.

As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. There is
nothing in the patch which stops us from ending up here again, and no
real explanation for what caused the issue in the first place.

Discuss.

Michael


1: https://review.openstack.org/#/c/557633


I think by default, infra runs jobs with python2. This is the job
definition for openstack-tox-pep8 [0] which says it "Uses tox with the
``pep8`` environment." And in our tox.ini [1], we don't specify the
basepython version. I contrasted the openstack-tox-pep8 job definition
with the tempest-full-py3 job definition [2] and it sets the
USE_PYTHON3=True variable for devstack.


Re-reading this after I sent it (of course), I realize USE_PYTHON3 in
devstack isn't relevant to the pep8 run since devstack isn't used. So,
I'm not sure what we can do to run both python2 and python3 versions of
the pep8 check considering that the openstack-tox-pep8 job runs tox with
the "pep8" environment only (and we can't just add another "pep8-py3"
environment and have it run it).


The python3 settings are more strict, and all of our code should be at
least importable under python3 now, so I think if we just convert those
jobs to run under 3 we should be good to go.


Thanks Michael and Doug for suggesting we convert to running the pep8 
tox env with python3, I've proposed a change here:


https://review.openstack.org/#/c/558648

Best,
-melanie








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 15th Edition

2018-04-03 Thread Emilien Macchi
Note: this is the fifteenth edition of a weekly update of what happens in
TripleO.
The goal is to provide a short reading (less than 5 minutes) to learn where
we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128784.html

+-+
| General announcements |
+-+

+--> Deadline for Rocky blueprints submission was today. From now, new
blueprints should target Stein.
+--> Migration to Storyboard made progress (See UI updates).
+--> Rocky milestone 1 is in 2 weeks!

+--+
| Continuous Integration |
+--+

+--> We're currently having serious issues with OVB CI jobs, see
https://bugs.launchpad.net/tripleo/+bug/1757556
+--> Rover is Arx and Ruck is Rafael. Please let them know any new CI issue.
+--> Master promotion is 5 days, Queens is 5 days, Pike is 10 days and
Ocata is 10 days.
+--> team is working on helping the upgrade squad with upstream upgrade ci
and logging
+--> tempest squad is still working on containerizing tempest
https://trello.com/c/066JFJjf/537-epic-containerize-tempest
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and
https://goo.gl/D4WuBP

+-+
| Upgrades |
+-+

+--> Progress on FFU CLI in tripleoclient
+--> Work on CI jobs for undercloud upgrades
+--> Need reviews, see etherpad
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> Working on cleaning up some technical debt with masquerading
+--> Still working on OVB fs001 switch to containerized undercloud, slowed
down by CI issues
+--> fs010 was switched to deploy a containerized undercloud
(multinode-containers)
+--> Investigations around an All-In-One installer, see mailing-list.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> Prototyping dedicated roles with unique repositories for Ansible tasks
in TripleO (see mailing-list)
+--> Migrating ceph & octavia to use external_deploy_tasks
+--> Work in progress for inventory improvements
+--> UI support is still work in progress, see etherpad.
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> All bugs tagged with "ui" and "ux" are now part of Storyboard:
https://storyboard.openstack.org/#!/project/964
+--> UI developers should now use Storyboard instead of Launchpad. A guide
is provided here:
https://docs.openstack.org/infra/storyboard/gui/manual.html
+--> The team is focused on config-download integration
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> Evaluating OpenShift on OpenStack validations
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> Routed networks can now be configured when the undercloud is
containerized.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> Rocky planning is still in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Discussions around Public TLS by default and Secret Management Audit.
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++

The weekly owl fact is sponsored by Wes: the smallest owl is named the
"Elf" owl.
The mean body weight of this species is 40 g (1.4 oz). These tiny owls are
12.5 to 14.5 cm (4.9 to 5.7 in) long and have a wingspan of about 27 cm
(10.5 in).
Source: https://en.wikipedia.org/wiki/Elf_owl
It was brought up during the All-In-One installer discussion, where this name
could be used since we're looking for something tiny and lightweight.

Thanks all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Doug Hellmann
Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700:
> On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:
> > On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
> >> Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
> >> I'd decided they were specific to my local dev environment given no one
> >> else was seeing them.
> >>
> >> As I said in the patch that fixed the issue [1], I think it's worth
> >> exploring how these got through the gate in the first place. There is
> >> nothing in the patch which stops us from ending up here again, and no
> >> real explanation for what caused the issue in the first place.
> >>
> >> Discuss.
> >>
> >> Michael
> >>
> >>
> >> 1: https://review.openstack.org/#/c/557633
> > 
> > I think by default, infra runs jobs with python2. This is the job
> > definition for openstack-tox-pep8 [0] which says it "Uses tox with the
> > ``pep8`` environment." And in our tox.ini [1], we don't specify the
> > basepython version. I contrasted the openstack-tox-pep8 job definition
> > with the tempest-full-py3 job definition [2] and it sets the
> > USE_PYTHON3=True variable for devstack.
> 
> Re-reading this after I sent it (of course), I realize USE_PYTHON3 in 
> devstack isn't relevant to the pep8 run since devstack isn't used. So, 
> I'm not sure what we can do to run both python2 and python3 versions of 
> the pep8 check considering that the openstack-tox-pep8 job runs tox with 
> the "pep8" environment only (and we can't just add another "pep8-py3" 
> environment and have it run it).

The python3 settings are more strict, and all of our code should be at
least importable under python3 now, so I think if we just convert those
jobs to run under 3 we should be good to go.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Doug Hellmann
Excerpts from Michael Still's message of 2018-04-03 22:23:10 +:
> I think the bit I am lost on is the concept of running pep8 "under" a
> version of python. Is this an artifact of what version of pep8 I have
> installed somehow?
> 
> If the py3 pep8 is stricter, couldn't we just move to only that one?

It's the same code, but that code is installed into the python3
interpreter's site-packages directory, the console script indicates
that it should execute the python3 interpreter to run the script,
and some checks are added or changed under python3.

Tox assumes if you don't specify otherwise that it should use the
interpreter it's running under to create any virtualenvs used for
tests. On most systems that default is still python2, but it is
possible to install tox under python3 and then the default is
python3.  You can set basepython=python3 in tox.ini under the pep8
section to force the use of python3 [1] and remove the ambiguity.
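
As a minimal sketch of that tox.ini change (the deps and commands lines are
placeholders standing in for whatever the project already defines):

    [testenv:pep8]
    basepython = python3
    deps = {[testenv]deps}
    commands = flake8 {posargs}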

That's something we're going to need to do as we transition to
python 3 anyway, because at some point the "default" python in CI
will be python 3 and we're going to want to ensure that developers
working on their local system see the same behavior.

Doug

[1] 
https://tox.readthedocs.io/en/latest/config.html#confval-basepython=NAME-OR-PATH

> 
> Michael
> 
> On Wed., 4 Apr. 2018, 8:19 am Kevin L. Mitchell,  wrote:
> 
> > On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote:
> > > Thanks to jichenjc for fixing the pep8 failures I was seeing on
> > > master. I'd decided they were specific to my local dev environment
> > > given no one else was seeing them.
> > >
> > > As I said in the patch that fixed the issue [1], I think it's worth
> > > exploring how these got through the gate in the first place. There is
> > > nothing in the patch which stops us from ending up here again, and no
> > > real explanation for what caused the issue in the first place.
> >
> > While there was no discussion in the patch, the topic of the patch
> > hints at the cause: "fix_pep8_py3".  These were probably pep8 errors
> > that would only occur if pep8 was running under Python 3 and not Python
> > 2.  The first error was fixed by removing a debugging print that was
> > formatted as "print (…)", which would satisfy pep8 under Python 2—since
> > 'print' is a statement—but not under Python 3, where it's a function.
> > The second error was in a clause protected by six.PY2, and was caused
> > by "unicode" being missing in Python 3; the solution jichenjc chose
> > there was to disable the pep8 check for that line.
> >
> > The only way I can imagine stopping these errors in the future would be
> > to double-up on the pep8 check: have the gate run pep8 under both
> > Python 2 and Python 3.
> > --
> > Kevin L. Mitchell
> >__
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [neutron] Integration SDN controller

2018-04-03 Thread Matheus Wagner
Hi colleagues,

I want to do the Neutron integration (liberation version) with an SDN Ryu
controller. But so far I have not found much that could help me how to do
this. Anyone here already made this integration? Do you know how to
proceed? Please help me :)

-- 
​Thanks,
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread melanie witt

On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:

On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:

Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
I'd decided they were specific to my local dev environment given no one
else was seeing them.

As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. There is
nothing in the patch which stops us from ending up here again, and no
real explanation for what caused the issue in the first place.

Discuss.

Michael


1: https://review.openstack.org/#/c/557633


I think by default, infra runs jobs with python2. This is the job
definition for openstack-tox-pep8 [0] which says it "Uses tox with the
``pep8`` environment." And in our tox.ini [1], we don't specify the
basepython version. I contrasted the openstack-tox-pep8 job definition
with the tempest-full-py3 job definition [2] and it sets the
USE_PYTHON3=True variable for devstack.


Re-reading this after I sent it (of course), I realize USE_PYTHON3 in 
devstack isn't relevant to the pep8 run since devstack isn't used. So, 
I'm not sure what we can do to run both python2 and python3 versions of 
the pep8 check considering that the openstack-tox-pep8 job runs tox with 
the "pep8" environment only (and we can't just add another "pep8-py3" 
environment and have it run it).



So, I think we're not gating the pep8 job for python3, only python2, and
that's how the problems got through the gate in the first place. I'm not
sure what the best way is to fix it -- whether we should be looking at
adding a base openstack-tox-pep8-py3 job to openstack-zuul-jobs that
sets USE_PYTHON3=True or if we need to instead change something in our
tox.ini or what.

-melanie

[0]
https://github.com/openstack-infra/openstack-zuul-jobs/blob/6a48004/zuul.d/jobs.yaml#L399
[1] https://github.com/openstack/nova/blob/master/tox.ini#L47
[2] https://github.com/openstack/tempest/blob/master/.zuul.yaml#L61-L74






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread melanie witt

On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
Thanks to jichenjc for fixing the pep8 failures I was seeing on master. 
I'd decided they were specific to my local dev environment given no one 
else was seeing them.


As I said in the patch that fixed the issue [1], I think it's worth 
exploring how these got through the gate in the first place. There is 
nothing in the patch which stops us from ending up here again, and no 
real explanation for what caused the issue in the first place.


Discuss.

Michael


1: https://review.openstack.org/#/c/557633


I think by default, infra runs jobs with python2. This is the job 
definition for openstack-tox-pep8 [0] which says it "Uses tox with the 
``pep8`` environment." And in our tox.ini [1], we don't specify the 
basepython version. I contrasted the openstack-tox-pep8 job definition 
with the tempest-full-py3 job definition [2] and it sets the 
USE_PYTHON3=True variable for devstack.


So, I think we're not gating the pep8 job for python3, only python2, and 
that's how the problems got through the gate in the first place. I'm not 
sure what the best way is to fix it -- whether we should be looking at 
adding a base openstack-tox-pep8-py3 job to openstack-zuul-jobs that 
sets USE_PYTHON3=True or if we need to instead change something in our 
tox.ini or what.


-melanie

[0] 
https://github.com/openstack-infra/openstack-zuul-jobs/blob/6a48004/zuul.d/jobs.yaml#L399

[1] https://github.com/openstack/nova/blob/master/tox.ini#L47
[2] https://github.com/openstack/tempest/blob/master/.zuul.yaml#L61-L74



















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Michael Still
I think the bit I am lost on is the concept of running pep8 "under" a
version of python. Is this an artifact of what version of pep8 I have
installed somehow?

If the py3 pep8 is stricter, couldn't we just move to only that one?

Michael

On Wed., 4 Apr. 2018, 8:19 am Kevin L. Mitchell,  wrote:

> On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote:
> > Thanks to jichenjc for fixing the pep8 failures I was seeing on
> > master. I'd decided they were specific to my local dev environment
> > given no one else was seeing them.
> >
> > As I said in the patch that fixed the issue [1], I think it's worth
> > exploring how these got through the gate in the first place. There is
> > nothing in the patch which stops us from ending up here again, and no
> > real explanation for what caused the issue in the first place.
>
> While there was no discussion in the patch, the topic of the patch
> hints at the cause: "fix_pep8_py3".  These were probably pep8 errors
> that would only occur if pep8 was running under Python 3 and not Python
> 2.  The first error was fixed by removing a debugging print that was
> formatted as "print (…)", which would satisfy pep8 under Python 2—since
> 'print' is a statement—but not under Python 3, where it's a function.
> The second error was in a clause protected by six.PY2, and was caused
> by "unicode" being missing in Python 3; the solution jichenjc chose
> there was to disable the pep8 check for that line.
>
> The only way I can imagine stopping these errors in the future would be
> to double-up on the pep8 check: have the gate run pep8 under both
> Python 2 and Python 3.
> --
> Kevin L. Mitchell
>__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Kevin L. Mitchell
On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote:
> Thanks to jichenjc for fixing the pep8 failures I was seeing on
> master. I'd decided they were specific to my local dev environment
> given no one else was seeing them.
> 
> As I said in the patch that fixed the issue [1], I think it's worth
> exploring how these got through the gate in the first place. There is
> nothing in the patch which stops us from ending up here again, and no
> real explanation for what caused the issue in the first place.

While there was no discussion in the patch, the topic of the patch
hints at the cause: "fix_pep8_py3".  These were probably pep8 errors
that would only occur if pep8 was running under Python 3 and not Python
2.  The first error was fixed by removing a debugging print that was
formatted as "print (…)", which would satisfy pep8 under Python 2—since
'print' is a statement—but not under Python 3, where it's a function. 
The second error was in a clause protected by six.PY2, and was caused
by "unicode" being missing in Python 3; the solution jichenjc chose
there was to disable the pep8 check for that line.

The only way I can imagine stopping these errors in the future would be
to double-up on the pep8 check: have the gate run pep8 under both
Python 2 and Python 3.
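
As a small illustration of the kind of code being described here (a hedged
sketch, not the actual nova hunks), the following passes the pep8 tox env when
the checker runs under Python 2 but is flagged when it runs under Python 3:

    import six

    def debug_dump(value):
        # Under a Python 2 checker 'print' is a statement, so the space before
        # the parenthesis is tolerated; a Python 3 checker sees a function call
        # with whitespace before the parenthesis and complains.
        print (value)

        if six.PY2:
            # 'unicode' is a builtin only on Python 2, so a Python 3 run of the
            # checker reports it as an undefined name even though this branch
            # is guarded by six.PY2.
            return unicode(value)
        return value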
-- 
Kevin L. Mitchell 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] python-glanceclient release status

2018-04-03 Thread Brian Rosmaita
On Mon, Apr 2, 2018 at 6:28 PM, Brian Rosmaita
 wrote:
> These need to be reviewed in master:
> - https://review.openstack.org/#/c/50/
> - https://review.openstack.org/#/c/556292/

Thanks for the reviews.  The requested changes have been made and Zuul
has given a +1, so ready for reviews again!

> Backports needing review:
> - https://review.openstack.org/#/c/555436/

This has a +2 from Sean; it's up to Erno now.

cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Doug Hellmann
Excerpts from Michael Still's message of 2018-04-04 07:54:59 +1000:
> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
> decided they were specific to my local dev environment given no one else
> was seeing them.
> 
> As I said in the patch that fixed the issue [1], I think it's worth
> exploring how these got through the gate in the first place. There is
> nothing in the patch which stops us from ending up here again, and no real
> explanation for what caused the issue in the first place.
> 
> Discuss.
> 
> Michael
> 
> 
> 1: https://review.openstack.org/#/c/557633

Were you running pep8 with python 3 locally (that might happen if
tox is installed under python 3 so the default base-python is python3
instead of just python)?

There are some different defaults in flake8 based on the version
of Python, but I don't know if those 2 specific errors are among
that set.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [First Contact] Meeting tonight/tomorrow/today (Depends on your perspective)

2018-04-03 Thread Kendall Nelson
Hello!

Another meeting tonight late/tomorrow depending on where in the world you
live :) 0800 UTC Wednesday.

Here is the agenda if you have anything to add [1]. Or if you want to add
your name to the ping list it is there as well!

See you all soon!

-Kendall (diablo_rojo)

[1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] pep8 failures on master

2018-04-03 Thread Michael Still
Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
decided they were specific to my local dev environment given no one else
was seeing them.

As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. There is
nothing in the patch which stops us from ending up here again, and no real
explanation for what caused the issue in the first place.

Discuss.

Michael


1: https://review.openstack.org/#/c/557633
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [neutron] Advice on replacing (non-openstack) existing IPv6 setup

2018-04-03 Thread Erik Huelsmann
Hi,

I'm seeking some advice on replacing a libvirt/manual setup with an
openstack/VM based one. Most of the work has been done and seems to work;
however, the existing setup has working IPv6 on the host as well as the
guests -- something that I have failed to achieve so far with the OpenStack
replacement.

My situation is a single host with a /64 subnet assigned. The guests and
the host have been assigned an IP from the available /64 subnet. All
traffic from the host and the guests needs to be routed upstream through
fe80::1.

Everything works as long as I don't set up any IPv6 at all. But when I set
up the external interface (enp4s0) with an IPv6 address (no matter which
one), the linux bridge receives "File exists" errors from RTNETLINK.

Can anyone point me to configuration examples or installation documentation
for the case I'm trying to configure? (Note that I have looked at this
page: https://docs.openstack.org/mitaka/networking-guide/config-ipv6.html
but the fact that it talks a lot about prefix delegation makes it very
confusing, tbh...)

Thanks in advance for any advice you can provide!

-- 
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [all] A quick note on recent IRC trolling/vandalism

2018-04-03 Thread Clark Boylan
Hello everyone,

During the recent holiday weekend some of our channels experienced some IRC 
trolling/vandalism. In particular the meetbot was used to start meetings titled 
'maintenance' which updated the channel topic to 'maintenance'. The individual 
or bot doing this then used this as the pretense for claiming the channel was 
to undergo maintenance and everyone should leave. This is one of the risks of 
using public communications channels, anyone can show up and abuse them.

In an effort to make it clearer what is trolling and what isn't, here 
are the bots we currently operate:
  - Meetbot ("openstack") to handle IRC meetings and log channels on 
eavesdrop.openstack.org
  - Statusbot ("openstackstatus") to notify channels about service outages and 
update topic accordingly
  - Gerritbot ("openstackgerrit") to notify channels about code review updates

Should the Infra team need to notify of pending maintenance work, that 
notification will come via the statusbot and not the meetbot. The number of 
individuals that can set topics via statusbot is limited to a small number of 
IRC operators.

If you have any questions you can reach out either in the #openstack-infra 
channel or to any channel operator directly and ask them. To get a list of 
channel operators run `/msg chanserv access #channel-name list`. Finally any 
user can end a meeting that meetbot started after one hour (by issuing a 
#endmeeting command). So you should feel free to clean those up yourself if you 
are able.

If the Freenode staff needs to perform maintenance or otherwise make 
announcements,  they tend to send special messages directly to clients  so you 
will see messages from them in your IRC client's status channel. Should you 
have any questions for Freenode you can find freenode operators in the 
#freenode channel.

As a final note the infra team has an approved spec for improving our IRC bot 
tooling, http://specs.openstack.org/openstack-infra/infra-specs/specs/irc.html. 
Implementing this spec is going to be a prerequisite for implementing smarter 
automated responses to problems like this and it needs volunteers. If you think 
this might be interesting to you definitely reach out.

Thank you for your patience,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg] Cyborg/Nova scheduling spec

2018-04-03 Thread Nadathur, Sundar
Thanks to everybody who has commented on the Cyborg/Nova scheduling spec 
(https://review.openstack.org/#/c/554717/).


As you may have noted, some issues were raised (*1), discussed (*2) and 
a potential solution was offered (*3). I have tried to synthesize the 
new solution from Nova team here:

 https://etherpad.openstack.org/p/Cyborg-Nova-Multifunction

This simplifies Cyborg design/implementation, by having the weigher use 
Placement info (no queries or extra info in Cyborg DB), and by opening 
the possibility of removing the weigher altogether if/when Nova supports 
preferred traits.


Please review it. Once that is done, I'll post an update that includes 
the new scheme and addresses any applicable comments in the current spec.


Thank you very much!

(*1) 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128685.html
(*2) 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128840.html, 
128889.html, etc.
(*3) 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/12.html


Regards,
Sundar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [kolla] kolla-cli master pointer change

2018-04-03 Thread Clark Boylan
On Wed, Mar 28, 2018, at 11:14 AM, Borne Mace wrote:
> Hi All,
> 
> I brought up my issue in #openstack-infra and it was suggested that I 
> send an email to this list.
> 
> The kolla-cli repository was recently created, from existing sources.  
> There was an issue with the source repo where the master branch was 
> sorely out of date, but there is tagged source which is up to date.  My 
> hope is that someone can force-push the tag as master so that the master 
> branch can be fixed / updated.
> 
> I tried to solve this process through the normal merge process, but 
> since I was not the only committer to that repository gerrit refused to 
> post my review.  I will add the full output of that attempt at the end 
> so folks can see what I'm talking about.  If there is some other process 
> that is more appropriate for me to follow here let me know and I'm happy 
> to go through it.
> 
> The latest / optimal code is tagged as o3l_4.0.1.
> 
> Thanks much for your help!
> 
> -- Borne Mace

Responding to the list to make sure we properly record the steps that were 
taken here. I checked out o3l_4.0.1 in kolla-cli locally then pushed it to 
Gerrit as an admin using `git push gerrit local-branch:master`. Because this 
was a fast forward I didn't even need to force push it. This also means local 
clients should update cleanly to the new master commit as well.
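
For the record, the push amounted to roughly the following (the remote name
"gerrit" and the local branch name are illustrative, and admin credentials on
the Gerrit server were required):

    git clone https://git.openstack.org/openstack/kolla-cli
    cd kolla-cli
    git checkout -b o3l-master o3l_4.0.1
    git push gerrit o3l-master:master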

I have since received confirmation from Borne that all looks good.

Thank you for your patience,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Selecting New Priority Effort(s)

2018-04-03 Thread Clark Boylan
Hello everyone,

I just approved the change to mark the Zuul v3 priority effort as completed in 
the infra-specs repo. Thank you to everyone that made that possible. With Zuul 
v3 work largely done we can now look forward to our next priority efforts.

Currently the only task marked as a priority is the task-tracker spec which at 
this point is migrating projects into storyboard. I think we can likely add one 
or two new priority efforts to this list.

After some quick initial brainstorming these were the ideas I had for getting 
onto that list (note some may require we actually write a spec):

* Gerrit upgrade to 2.14/2.15
* Control Plane operating system upgrades to Xenial
* Bringing wiki under config management

My bias here is I've personally been working to try and pay down some of this 
tech debt we've built up simply due to bit rot, but I know we have other specs 
and I'm sure we can make good arguments for why other efforts should be made a 
priority. I'd love to get feedback on what others think would make good 
priority efforts.

Let's use this thread to identify candidates then whittle the list down to one 
or two to focus on for the next little while.

Thank you,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] Gerrit server replacement scheduled for May 2nd 2018

2018-04-03 Thread Paul Belanger
Hello from Infra.

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.
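
Purely as an illustration of the kind of update meant here, an iptables-based
egress policy might add rules like the following for the new addresses (your
tooling and rule layout will almost certainly differ):

    iptables  -A OUTPUT -p tcp -d 104.130.246.32 --dport 29418 -j ACCEPT
    ip6tables -A OUTPUT -p tcp -d 2001:4800:7819:103:be76:4eff:fe04:9229 \
        --dport 29418 -j ACCEPT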

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Wesley Hayutin
On Tue, 3 Apr 2018 at 13:53 Dan Prince  wrote:

> On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena  wrote:
> >
> >> Greeting folks,
> >>
> >> During the last PTG we spent time discussing some ideas around an
> All-In-One
> >> installer, using 100% of the TripleO bits to deploy a single node
> OpenStack
> >> very similar with what we have today with the containerized undercloud
> and
> >> what we also have with other tools like Packstack or Devstack.
> >>
> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >>
> >
> > I'm really +1 to this. And as a Packstack developer, I'd love to see
> this as a
> > mid-term Packstack replacement. So let's dive into the details.
>
> Curious on this one actually, do you see a need for continued
> baremetal support? Today we support both baremetal and containers.
> Perhaps "support" is a strong word. We support both in terms of
> installation but only containers now have fully supported upgrades.
>
> The interfaces we have today still support baremetal and containers
> but there were some suggestions about getting rid of baremetal support
> and only having containers. If we were to remove baremetal support
> though, Could we keep the Packstack case intact by just using
> containers instead?
>
> Dan
>

Hey, a couple of thoughts:
1.  I've added this topic to the RDO meeting tomorrow.
2.  Just a thought: the "elf owl" is the world's smallest owl, at least
according to the internets [1].  Maybe the all-in-one could be nicknamed
tripleo elf?  Talon is cool too.
3.  From a CI perspective, I see this being very helpful with:
  a: faster run times generally, but especially for upgrade tests.  It
may be possible to have upgrades gating tripleo projects again.
  b: enabling more packaging tests to be done with TripleO
  c: if developers dig it, we have a better chance at getting TripleO into
other projects' check jobs / third-party jobs, where current requirements
and run times are prohibitive.
  d: generally speaking, replacing packstack / devstack in devel and CI
workflows where they still exist.
  e: improved utilization of our resources in RDO-Cloud

It would be interesting to me to see more design and a little more thought
put into the potential use cases before we get too far along.  Looks like there
is a good start to that here [2].
I'll add some comments with the potential use cases for CI.

/me is very happy to see this moving! Thanks all

[1] https://en.wikipedia.org/wiki/Elf_owl
[2]
https://review.openstack.org/#/c/547038/1/doc/source/install/advanced_deployment/all_in_one.rst



>
> >
> >> One of the problems that we're trying to solve here is to give a simple
> tool
> >> for developers so they can both easily and quickly deploy an OpenStack
> for
> >> their needs.
> >>
> >> "As a developer, I need to deploy OpenStack in a VM on my laptop,
> quickly and
> >> without complexity, reproducing the same exact same tooling as TripleO
> is
> >> using."
> >> "As a Neutron developer, I need to develop a feature in Neutron and
> test it
> >> with TripleO in my local env."
> >> "As a TripleO dev, I need to implement a new service and test its
> deployment
> >> in my local env."
> >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> >> production chain, quickly and simply."
> >>
> >
> > "As a packager, I want an easy/low overhead way to test updated packages
> with TripleO bits, so I can make sure they will not break any automation".
> >
> >> Probably more use cases, but to me that's what came into my mind now.
> >>
> >> Dan kicked-off a doc patch a month ago:
> >> https://review.openstack.org/#/c/547038/
> >> And I just went ahead and proposed a blueprint:
> >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> >> So hopefully we can start prototyping something during Rocky.
> >>
> >> Before talking about the actual implementation, I would like to gather
> >> feedback from people interested by the use-cases. If you recognize
> yourself
> >> in these use-cases and you're not using TripleO today to test your
> things
> >> because it's too complex to deploy, we want to hear from you.
> >> I want to see feedback (positive or negative) about this idea. We need
> to
> >> gather ideas, use cases, needs, before we go design a prototype in
> Rocky.
> >>
> >
> > I would like to offer help with initial testing once there is something
> in the repos, so count me in!
> >
> > Regards,
> > Javier
> >
> >> Thanks everyone who'll be involved,
> >> --
> >> Emilien Macchi
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> 

Re: [openstack-dev] [nova] Proposing Eric Fried for nova-core

2018-04-03 Thread Eric Fried
Thank you Melanie for the complimentary nomination, to the cores for
welcoming me into the fold, and especially to all (cores and non, Nova
and otherwise) who have mentored me along the way thus far.  I hope to
live up to your example and continue to pay it forward.

-efried

On 04/03/2018 02:20 PM, melanie witt wrote:
> On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote:
>> Howdy everyone,
>>
>> I'd like to propose that we add Eric Fried to the nova-core team.
>>
>> Eric has been instrumental to the placement effort with his work on
>> nested resource providers and has been actively contributing to many
>> other areas of openstack [0] like project-config, gerritbot,
>> keystoneauth, devstack, os-loganalyze, and so on.
>>
>> He's an active reviewer in nova [1] and elsewhere in openstack and
>> reviews in-depth, asking questions and catching issues in patches and
>> working with authors to help get code into merge-ready state. These are
>> qualities I look for in a potential core reviewer.
>>
>> In addition to all that, Eric is an active participant in the project in
>> general, helping people with questions in the #openstack-nova IRC
>> channel, contributing to design discussions, helping to write up
>> outcomes of discussions, reporting bugs, fixing bugs, and writing tests.
>> His contributions help to maintain and increase the health of our
>> project.
>>
>> To the existing core team members, please respond with your comments,
>> +1s, or objections within one week.
>>
>> Cheers,
>> -melanie
>>
>> [0] https://review.openstack.org/#/q/owner:efried
>> [1] http://stackalytics.com/report/contribution/nova/90
> 
> Thanks to everyone who responded with their feedback. It's been one week
> and we have had more than enough +1s, so I've added Eric to the team.
> 
> Welcome Eric!
> 
> Best,
> -melanie
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Eric Fried for nova-core

2018-04-03 Thread melanie witt

On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote:

Howdy everyone,

I'd like to propose that we add Eric Fried to the nova-core team.

Eric has been instrumental to the placement effort with his work on
nested resource providers and has been actively contributing to many
other areas of openstack [0] like project-config, gerritbot,
keystoneauth, devstack, os-loganalyze, and so on.

He's an active reviewer in nova [1] and elsewhere in openstack and
reviews in-depth, asking questions and catching issues in patches and
working with authors to help get code into merge-ready state. These are
qualities I look for in a potential core reviewer.

In addition to all that, Eric is an active participant in the project in
general, helping people with questions in the #openstack-nova IRC
channel, contributing to design discussions, helping to write up
outcomes of discussions, reporting bugs, fixing bugs, and writing tests.
His contributions help to maintain and increase the health of our project.

To the existing core team members, please respond with your comments,
+1s, or objections within one week.

Cheers,
-melanie

[0] https://review.openstack.org/#/q/owner:efried
[1] http://stackalytics.com/report/contribution/nova/90


Thanks to everyone who responded with their feedback. It's been one week 
and we have had more than enough +1s, so I've added Eric to the team.


Welcome Eric!

Best,
-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Dan Prince
On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena  wrote:
>
>> Greeting folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very similar with what we have today with the containerized undercloud and
>> what we also have with other tools like Packstack or Devstack.
>>
>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>
>
> I'm really +1 to this. And as a Packstack developer, I'd love to see this as a
> mid-term Packstack replacement. So let's dive into the details.

Curious on this one actually, do you see a need for continued
baremetal support? Today we support both baremetal and containers.
Perhaps "support" is a strong word. We support both in terms of
installation but only containers now have fully supported upgrades.

The interfaces we have today still support baremetal and containers
but there were some suggestions about getting rid of baremetal support
and only having containers. If we were to remove baremetal support
though, could we keep the Packstack case intact by just using
containers instead?

Dan

>
>> One of the problems that we're trying to solve here is to give a simple tool
>> for developers so they can both easily and quickly deploy an OpenStack for
>> their needs.
>>
>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and
>> without complexity, reproducing the same exact same tooling as TripleO is
>> using."
>> "As a Neutron developer, I need to develop a feature in Neutron and test it
>> with TripleO in my local env."
>> "As a TripleO dev, I need to implement a new service and test its deployment
>> in my local env."
>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>> production chain, quickly and simply."
>>
>
> "As a packager, I want an easy/low overhead way to test updated packages with 
> TripleO bits, so I can make sure they will not break any automation".
>
>> Probably more use cases, but to me that's what came into my mind now.
>>
>> Dan kicked-off a doc patch a month ago:
>> https://review.openstack.org/#/c/547038/
>> And I just went ahead and proposed a blueprint:
>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>> So hopefully we can start prototyping something during Rocky.
>>
>> Before talking about the actual implementation, I would like to gather
>> feedback from people interested by the use-cases. If you recognize yourself
>> in these use-cases and you're not using TripleO today to test your things
>> because it's too complex to deploy, we want to hear from you.
>> I want to see feedback (positive or negative) about this idea. We need to
>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>>
>
> I would like to offer help with initial testing once there is something in 
> the repos, so count me in!
>
> Regards,
> Javier
>
>> Thanks everyone who'll be involved,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests

2018-04-03 Thread Doug Hellmann
Excerpts from Eric Fried's message of 2018-03-31 16:12:22 -0500:
> Hi Doug, I made this [2] for you.  I tested it locally with oslo.config
> master, and whereas I started off with a slightly different set of
> errors than you show at [1], they were in the same suites.  Since I
> didn't want to tox the world locally, I went ahead and added a
> Depends-On from [3].  Let's see how it plays out.
> 
> >> [1]
> http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881
> [2] https://review.openstack.org/#/c/558084/
> [3] https://review.openstack.org/#/c/557012/
> 
> -efried

Thanks, Eric! That looks like it should do the trick. I'll give it
a try.
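
For anyone who hits the same pattern in other suites, the general shape
of the cleanup is to narrow the mock's scope so that only the application
call under test sees the fake. A toy illustration (not the actual nova
patch referenced above):

import os
import unittest
from unittest import mock


def app_code_reads_config(path):
    # Stand-in for the application code actually under test (hypothetical).
    return os.path.exists(path)


class TestNarrowMocking(unittest.TestCase):
    def test_missing_config(self):
        # Keep the patch active only around the call under test, rather than
        # decorating the whole test method. Anything else that runs during
        # this test (e.g. a library inspecting config files) sees the real
        # os.path.exists outside this block.
        with mock.patch('os.path.exists', return_value=False):
            self.assertFalse(app_code_reads_config('/etc/nova/nova.conf'))
        # Outside the block the real function is back.
        self.assertTrue(os.path.exists(os.sep))


if __name__ == '__main__':
    unittest.main()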

Doug

> 
> On 03/30/2018 06:35 AM, Doug Hellmann wrote:
> > Anyone?
> > 
> >> On Mar 28, 2018, at 1:26 PM, Doug Hellmann  wrote:
> >>
> >> In the course of preparing the next release of oslo.config, Ben noticed
> >> that nova's unit tests fail with oslo.config master [1].
> >>
> >> The underlying issue is that the tests mock things that oslo.config
> >> is now calling as part of determining where options are being set
> >> in code. This isn't an API change in oslo.config, and it is all
> >> transparent for normal uses of the library. But the mocks replace
> >> os.path.exists() and open() for the entire duration of a test
> >> function (not just for the isolated application code being tested),
> >> and so the library behavior change surfaces as a test error.
> >>
> >> I'm not really in a position to go through and clean up the use of
> >> mocks in those (and other?) tests myself, and I would like to not
> >> have to revert the feature work in oslo.config, especially since
> >> we did it for the placement API stuff for the nova team.
> >>
> >> I'm looking for ideas about what to do.
> >>
> >> Doug
> >>
> >> [1] 
> >> http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] New PBR release coming soon

2018-04-03 Thread Ben Nemec
The new pbr version is now in upper-constraints, so it should be getting 
exercised in CI going forward.  Please report any issues to #openstack-oslo.


On 03/26/2018 11:56 AM, Ben Nemec wrote:

Hi,

Since this will potentially affect the majority of OpenStack projects, I 
wanted to give everyone some advance notice.  PBR[1] hasn't been 
released since last summer, and as a result none of the bug fixes or new 
features that have gone in since then are available to users.  Because 
of some feature removals that have happened, this will be a major 
release and due to the number of changes since the last release there's 
a higher probability of issues.


We want to get this potentially painful release out of the way early in 
the cycle and then resume regular releases going forward.  If you know 
of any reason we shouldn't do this right now please respond ASAP.


Thanks.

-Ben

1: https://docs.openstack.org/pbr/latest/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Dan Prince
On Tue, Apr 3, 2018 at 9:23 AM, James Slagle  wrote:
> On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince  wrote:
>> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi  wrote:
>>> Greeting folks,
>>>
>>> During the last PTG we spent time discussing some ideas around an All-In-One
>>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>>> very similar with what we have today with the containerized undercloud and
>>> what we also have with other tools like Packstack or Devstack.
>>>
>>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>>
>>> One of the problems that we're trying to solve here is to give a simple tool
>>> for developers so they can both easily and quickly deploy an OpenStack for
>>> their needs.
>>>
>>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
>>> and without complexity, reproducing the same exact same tooling as TripleO
>>> is using."
>>> "As a Neutron developer, I need to develop a feature in Neutron and test it
>>> with TripleO in my local env."
>>> "As a TripleO dev, I need to implement a new service and test its deployment
>>> in my local env."
>>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>>> production chain, quickly and simply."
>>>
>>> Probably more use cases, but to me that's what came into my mind now.
>>>
>>> Dan kicked-off a doc patch a month ago:
>>> https://review.openstack.org/#/c/547038/
>>> And I just went ahead and proposed a blueprint:
>>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>>> So hopefully we can start prototyping something during Rocky.
>>
>> I've actually started hacking a bit here:
>>
>> https://github.com/dprince/talon
>>
>> Very early and I haven't committed everything yet. (Probably wouldn't
>> have announced it to the list yet but it might help some understand
>> the use case).
>>
>> I'm running this on my laptop to develop TripleO containers with no
>> extra VM involved.
>>
>> P.S. We should call it Talon!
>>
>> Dan
>>
>>>
>>> Before talking about the actual implementation, I would like to gather
>>> feedback from people interested by the use-cases. If you recognize yourself
>>> in these use-cases and you're not using TripleO today to test your things
>>> because it's too complex to deploy, we want to hear from you.
>>> I want to see feedback (positive or negative) about this idea. We need to
>>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>>
>> Sorry dude. Already prototyping :)
>
> A related use case to all this work that takes it a step further:
>
> I think it would be useful if we could eventually further break down
> "openstack undercloud deploy" into just the pieces needed to:
>
> - start an ephemeral Heat container
> - create the Heat stack passing all requested -e's
> - run config-download and save the output

Yes! This is pretty similar to what we outlined at the PTG here [1] (lines 21-23):

The high-level workflow here is already possible now if you use the
new --output-only option to config-download [2], and it is exactly what I
was doing with the Talon prototype: essentially trying to take it as
far as possible with our existing commands and then bringing that to the
group as a "how do we want to package this better?" discussion.

One difference is that instead of using a Heat container I
use a python-tripleoclient container (which I aim to push to
Kolla if I can whittle it down). This has the benefit of letting you
do everything in a single container. I also needed a few other
cherry-picks [3] to pull it off, to do things like making
docker-puppet.py consume puppet-tripleo from within the container
instead of bind-mounting it from the host, and disabling puppet from
running on the host machine entirely (something I do not want on my
laptop).

The nice thing about all of this is that you end up with a self-contained
'Heat template -> Ansible' generator that can translate a set of Heat
templates into Ansible playbooks which you then just run. What it does
highlight, however, is that there are still some dependencies that
must be on each host in order for our Ansible playbooks to work.
Things like paunch and most of the heat-agent hooks still need to be
on each host OS or the resulting playbooks won't work. Continuing the
work to convert things to pure Ansible without requiring any
heat-agents to be installed would make things even nicer, I think. But
as it stands today it is already a nice way to hack on
tripleo-heat-templates in a very tight loop. No VMs or quickstart
required.

Dan

[1] https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
[2] 
http://git.openstack.org/cgit/openstack/python-tripleoclient/commit/?id=50a093247742be896bbbeb91408eeaf0362b5085
[3] 
https://github.com/dprince/talon/blob/master/containers/tripleoclient/tripleoclient.sh#L31

>
> Essentially removing the undercloud specific logic (or all-in-one
> specific logic in this 

Re: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-04-03 Thread David Moreau Simard
On Thu, Mar 29, 2018 at 9:05 PM, Jeffrey Zhang  wrote:
> cool. kolla will try to implement it.

Cool!
For reference, openstack-ansible already retooled their log collection
to copy the database instead of generating the report [1].

[1]: https://review.openstack.org/#/c/557921/

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-03 Thread Jimmy McArthur
Thanks to everyone that weighed in! We'll be working on some updated 
language around the event to clarify the inclusion of the Ops 
community.  We'll plan to float that to both operators and dev lists 
when we're a little further along.  Meantime, if you have any questions 
or concerns, don't hesitate to reach out.


Thanks all!
Jimmy


Matt Van Winkle 
April 3, 2018 at 11:43 AM
Looks like we can move forward with co-location! Jimmy, let us know
when we need to work time in for you or other Foundation folks to 
discuss more details in the UC meeting and/or Ops Meetup Team meetings.


Thanks!
VW

On 4/3/18, 3:49 AM, "Shintaro Mizuno"  
wrote:


I'm also +1 on this.

I've circulated to the Japanese Ops group and heard no objection so
would be more +1s from our community.

Shintaro
--
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Shintaro Mizuno 
April 3, 2018 at 3:47 AM
I'm also +1 on this.

I've circulated to the Japanese Ops group and heard no objection so 
would be more +1s from our community.


Shintaro
Thierry Carrez 
April 3, 2018 at 3:33 AM

As a data point, in a recent survey 89% of surveyed developers supported
that the Ops meetup should happen at the same time and place. Amongst
past PTG attendees, that support raises to 92%. Furthermore I only heard
good things about the Public Cloud WG participating to the Dublin PTG.

So I don't think anyone views it as "their party" -- just as an event
where we all get stuff done.

Erik McCormick 
April 2, 2018 at 3:57 PM
I'm a +1 too as long as the devs at large are cool with it and won't 
hate on us for crashing their party. I also +1 the proposed format.  
It's basically what we're discussed in Tokyo. Make it so.


Cheers
Erik

PS. Sorry for the radio silence the past couple weeks. Vacation,  
kids,  etc.



Melvin Hillsman 
April 2, 2018 at 12:53 PM
+1




--
Kind regards,

Melvin Hillsman
mrhills...@gmail.com 
mobile: (832) 264-2646


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-03 Thread Matt Van Winkle
Looks like we can move forward with co-location! Jimmy, let us know when we
need to work time in for you or other Foundation folks to discuss more details 
in the UC meeting and/or Ops Meetup Team meetings.

Thanks!
VW

On 4/3/18, 3:49 AM, "Shintaro Mizuno"  wrote:

I'm also +1 on this.

I've circulated to the Japanese Ops group and heard no objection so 
would be more +1s from our community.

Shintaro
-- 
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tc] [all] TC Report 18-14

2018-04-03 Thread Chris Dent


html: https://anticdent.org/tc-report-18-14.html

If the [logs of 
#openstack-tc](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/index.html)
are any indicator of reality (they are not), then the only things
that happened in the past week are that the next OpenStack release
got a name, and the TC talked about how to evaluate projects
applying to be official.

# Stein

Yes, the people have spoken and their voices were almost heard. The
first choice for the name of the "S" release of OpenStack, "Solar",
foundered at the desk of legal and "Stein" won the day and there was
much emojifying.


From "Rocky" comes...another rock. Not ein Maß. Presumably such

details will not limit the rejoicing.

Associated
[chatter](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-29.log.html#t2018-03-29T19:10:52).

# Official Projects

The [application of
Adjutant](https://review.openstack.org/#/c/553643/) continues to
drive some discussion, both on the review and in IRC. On
[Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-28.log.html#t2018-03-28T12:04:06)
I dropped a wall of text on the review, expressing my doubt and
confusion over what rules we are supposed to be using when
evaluating applicants.

Then at [Thursday's office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-29.log.html#t2018-03-29T15:04:56)
the discussion picked up with a larger group. There were at least
three different threads of conversation happening at once:

* comments related to the general topics I raised
* evaluating Adjutant itself in terms of its impact on OpenStack
* trying to get (and encourage the getting of) input from real
  operators about their thoughts on the usefulness of Adjutant (or
  something like it)

The last was an effort to stop speculating, which is something we do
too much.

The second was an effort to not be moving the goalposts in the middle
of an application, despite the confusion.

The first had a lot of ideas, but none were resolved (and there's a
pattern there) so there's a plan to have a session about it at the
Forum. If you look at the [planning
etherpad](https://etherpad.openstack.org/p/YVR-forum-TC-sessions)
you'll see that there are two different topics related to project
applications: one is for Adjutant specifically, in case things aren't
resolved by then (we hope they will be); the other is a general
session on really trying to dig deep into the questions and figure out
what we're trying to do and be when we say "official". These are
separate sessions very much on purpose.

The questions reach into the core of what OpenStack is, so it
ought to be an interesting session.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Alex Schultz
On Tue, Apr 3, 2018 at 4:48 AM, Chris Dent  wrote:
> On Mon, 2 Apr 2018, Alex Schultz wrote:
>
>> So this is/was valid. A few years back there was some perf tests done
>> with various combinations of process/threads and for Keystone it was
>> determined that threads should be 1 while you should adjust the
>> process count (hence the bug). Now I guess the question is for every
>> service what is the optimal configuration but I'm not sure there's
>> anyone who's looking at this in the upstream for all the services.  In
>> the puppet modules for consistency we applied a similar concept for
>> all the services when they are deployed under apache.  It can be tuned
>> as needed for each service but I don't think we have any great
>> examples of perf numbers. It's really a YMMV thing. We ship a basic
>> default that isn't crazy, but it's probably not optimal either.
>
>
> Do you happen to recall if the trouble with keystone and threaded
> web servers had anything to do with eventlet? Support for the
> eventlet-based server was removed from keystone in Newton.
>

It was running under httpd I believe.

> I've been doing some experiments with placement using multiple uwsgi
> processes, each with multiple threads and it appears to be working
> very well. Ideally all the OpenStack HTTP-based services would be
> able to run effectively in that kind of setup. If they can't I'd
> like to help make it possible.
>
> In any case: processes 3, threads 1 for WSGIDaemonProcess for the
> placement service for a deployment of any real size errs on the
> side of too conservative and I hope we can make some adjustments
> there.
>

You'd say that until you realize that the deployment may also be
sharing the box with every other service API.  Imagine keystone,
glance, nova, cinder, gnocchi, etc. all running on the same
machine. Then 3 isn't so conservative. They start adding up and
exhausting resources (CPU cores/memory) really quickly.  In a perfect
world, yes, each API service would get its own system with processes
== processor count, but in most cases the cores end up getting split
between the number of services running on the box.  In puppet we did a
sliding scale and have several facts [0] that can be used if a person
doesn't want to switch to $::processorcount.  If you're rolling your own
you can tune it more easily, but when you have to come up with something
that might be co-located with a bunch of other services you have to hedge
your bets to make sure it works most of the time.
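
To make the sliding-scale idea concrete, a toy calculation along these
lines (explicitly not the formula from the os_workers fact in [0], just
the general shape of it):

import multiprocessing


def api_workers(num_colocated_services, cap=12):
    # Split the available cores across the co-located API services, keep
    # at least 2 workers each, and cap the result so a single service
    # can't exhaust memory on a very large box. Numbers are placeholders.
    cores = multiprocessing.cpu_count()
    return max(2, min(cap, cores // max(1, num_colocated_services)))


# e.g. 32 cores shared by 8 co-located API services -> 4 workers each
print(api_workers(8))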

Thanks,
-Alex

[0] 
http://git.openstack.org/cgit/openstack/puppet-openstacklib/tree/lib/facter/os_workers.rb

>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc

2018-04-03 Thread Stephen Finucane
On Tue, 2018-04-03 at 12:04 -0400, Zane Bitter wrote:
> On 03/04/18 06:28, Stephen Finucane wrote:
> > On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
> > > On 28/03/18 10:31, Stephen Finucane wrote:
> > > > As noted last week [1], we're trying to move away from pbr's autodoc
> > > > feature as part of the new docs PTI. To that end, I've created
> > > > sphinxcontrib-apidoc, which should do what pbr was previously doing for
> > > > us by via a Sphinx extension.
> > > > 
> > > > https://pypi.org/project/sphinxcontrib-apidoc/
> > > > 
> > > > This works by reading some configuration from your documentation's
> > > > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
> > > > longer need pbr to do this for.
> > > > 
> > > > I have pushed version 0.1.0 to PyPi already but before I add this to
> > > > global requirements, I'd like to ensure things are working as expected.
> > > > smcginnis was kind enough to test this out on glance and it seemed to
> > > > work for him but I'd appreciate additional data points. The
> > > > configuration steps for this extension are provided in the above link.
> > > > To test this yourself, you simply need to do the following:
> > > > 
> > > >  1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
> > > > doc/requirements.txt file
> > > >  2. Configure as noted above and remove the '[pbr]' and 
> > > > '[build_sphinx]'
> > > > configuration from 'setup.cfg'
> > > >  3. Replace 'python setup.py build_sphinx' with a call to 
> > > > 'sphinx-build'
> > > >  4. Run 'tox -e docs'
> > > >  5. Profit?
> > > > 
> > > > Be sure to let me know if anyone encounters issues. If not, I'll be
> > > > pushing for this to be included in global requirements so we can start
> > > > the migration.
> > > 
> > > Thanks Stephen! I tried it out with no problems:
> > > 
> > > https://review.openstack.org/558262
> > > 
> > > However, there are a couple of differences compared to how pbr did things.
> > > 
> > > 1) pbr can generate an 'autoindex' file with a flat list of modules
> > > (this appears to be configurable with the autodoc_index_modules option),
> > > but apidoc only generates a 'modules' file with a hierarchical list of
> > > modules. This is easy to work around, but I guess it needs to be added
> > > to the instructions to check that you're not relying on it.
> > 
> > Yup, smcginnis and I discussed this at some point. PBR has two
> > different ways of generating API documentation: 'autodoc_tree', which
> > is based on 'sphinx-apidoc', and 'autodoc', which is custom (and
> > presumably legacy). This extension replaces the former of those but, as
> > you note below, it seems 'sphinx-apidoc' can be wrangled into
> > generating something approaching the latter.
> 
> That explains quite a lot that was confusing me :D
> 
> > > 2) pbr generates a page per module; this plugin generates a page per
> > > package. This results in wy too much information on a page to be
> > > able to navigate it comfortably IMHO. To the point where it's easier to
> > > read the code. (It also breaks existing links, if you care about that
> > > kind of thing.) I sent you a PR to add an option to pass --separate:
> > > 
> > > https://github.com/sphinx-contrib/apidoc/pull/1
> > 
> > Thanks for that. I've merged it and will use it as the basis of a 0.2.0
> > release assuming nothing else pops up in the next day or two.
> 
> Thanks!
> 
> > I'm not sure what we can do about the broken links though - maybe use the
> > redirect infrastructure to just send everyone to the new place? I guess
> > I can add this to the guide I'm adding to the README on migrating from
> > pbr.
> 
> No links break if you use the apidoc_separate_modules=True option, so I 
> would recommend any projects currently generating a page per module 
> (i.e. using 'autodoc' instead of 'autodoc_tree') should enable that 
> option to keep continuity.

Fancy taking a look at [1], in that case? This should clarify
everything.

[1] https://github.com/sphinx-contrib/apidoc/pull/3

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-03 Thread Jay Pipes

On 04/03/2018 11:51 AM, Michael Bayer wrote:

On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes  wrote:

On 04/03/2018 11:07 AM, Michael Bayer wrote:


a. OPTIMIZE, yes or no?

Yes.


b. oslo.db script to run generically, yes or no?



No. Just have Triple-O install galera_innoptimizer and run it in a cron job.


OK, here are the issues I have with galera_innoptimizer:

1. only runs on Galera. This should work on a non-Galera DB as well


To recap what we just discussed on IRC... it's not necessary to do this 
for non-galera DBs because non-galera DBs don't use manual locking for 
OPTIMIZE TABLE (MySQL 5.7 online DDL changes ensure OPTIMIZE TABLE for 
InnoDB is a non-locking operation).


Galera enforces a strict ordering with its total order isolation mode by 
default for DDL operations, which is what the galera_innoptimizer thing 
is doing: turning off that total order isolation temporarily and 
executing optimize table, then turning on total order isolation again.



2. hardcoded to MySQLdb / mysqlclient.   We don't install that driver anymore.

3. is just running OPTIMIZE on every table across the board, and at
best you can give it a list of tables.  I was hoping to not add more
hardcoded cross-dependencies to tripleo, as this means individual
projects would need to affect how the script is run which means we
have to again start shipping individual per-app crons that require
eternal babysitting.


I have no issues with you creating a better tool :) Just not in oslo.db...


What failures do you foresee if I tried to make it compare the logical
data size to the physical file size?  since I'm going here for file
size optimization only.   or just too complicated / brittle ?


Yeah, you are prematurely optimizing (pun intended). No need. Just run 
OPTIMIZE TABLE every day on all tables in a cron job. With modern MySQL, 
there's really not an issue with that.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc

2018-04-03 Thread Zane Bitter

On 03/04/18 06:28, Stephen Finucane wrote:

On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:

On 28/03/18 10:31, Stephen Finucane wrote:

As noted last week [1], we're trying to move away from pbr's autodoc
feature as part of the new docs PTI. To that end, I've created
sphinxcontrib-apidoc, which should do what pbr was previously doing for
us by via a Sphinx extension.

https://pypi.org/project/sphinxcontrib-apidoc/

This works by reading some configuration from your documentation's
'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
longer need pbr to do this for.

I have pushed version 0.1.0 to PyPi already but before I add this to
global requirements, I'd like to ensure things are working as expected.
smcginnis was kind enough to test this out on glance and it seemed to
work for him but I'd appreciate additional data points. The
configuration steps for this extension are provided in the above link.
To test this yourself, you simply need to do the following:

 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
doc/requirements.txt file
 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]'
configuration from 'setup.cfg'
 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build'
 4. Run 'tox -e docs'
 5. Profit?

Be sure to let me know if anyone encounters issues. If not, I'll be
pushing for this to be included in global requirements so we can start
the migration.


Thanks Stephen! I tried it out with no problems:

https://review.openstack.org/558262

However, there are a couple of differences compared to how pbr did things.

1) pbr can generate an 'autoindex' file with a flat list of modules
(this appears to be configurable with the autodoc_index_modules option),
but apidoc only generates a 'modules' file with a hierarchical list of
modules. This is easy to work around, but I guess it needs to be added
to the instructions to check that you're not relying on it.


Yup, smcginnis and I discussed this at some point. PBR has two
different ways of generating API documentation: 'autodoc_tree', which
is based on 'sphinx-apidoc', and 'autodoc', which is custom (and
presumably legacy). This extension replaces the former of those but, as
you note below, it seems 'sphinx-apidoc' can be wrangled into
generating something approaching the latter.


That explains quite a lot that was confusing me :D


2) pbr generates a page per module; this plugin generates a page per
package. This results in wy too much information on a page to be
able to navigate it comfortably IMHO. To the point where it's easier to
read the code. (It also breaks existing links, if you care about that
kind of thing.) I sent you a PR to add an option to pass --separate:

https://github.com/sphinx-contrib/apidoc/pull/1


Thanks for that. I've merged it and will use it as the basis of a 0.2.0
release assuming nothing else pops up in the next day or two.


Thanks!


I'm not
sure what we can do about the broken links though - maybe use the
redirect infrastructure to just send everyone to the new place? I guess
I can add this to the guide I'm adding to the README on migrating from
pbr.


No links break if you use the apidoc_separate_modules=True option, so I
would recommend that any projects currently generating a page per module
(i.e. using 'autodoc' instead of 'autodoc_tree') enable that option to
keep continuity.
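
For reference, a minimal conf.py sketch with that option turned on might
look roughly like this (option names follow my reading of the
sphinxcontrib-apidoc README; the module path is just a placeholder):

# doc/source/conf.py -- illustrative only, adjust paths for your project
extensions = [
    'sphinxcontrib.apidoc',
    'sphinx.ext.autodoc',
]

# Where the package source lives, relative to this conf.py (placeholder).
apidoc_module_dir = '../../myproject'
# Where the generated .rst stubs are written.
apidoc_output_dir = 'reference/api'
# Paths (relative to apidoc_module_dir) to skip.
apidoc_excluded_paths = ['tests']
# One page per module, matching pbr's old 'autodoc' behaviour.
apidoc_separate_modules = True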


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-03 Thread Michael Bayer
On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes  wrote:
> On 04/03/2018 11:07 AM, Michael Bayer wrote:
>>
>
> Yes.
>
>> b. oslo.db script to run generically, yes or no?
>
>
> No. Just have Triple-O install galera_innoptimizer and run it in a cron job.

OK, here are the issues I have with galera_innoptimizer:

1. only runs on Galera. This should work on a non-Galera DB as well

2. hardcoded to MySQLdb / mysqlclient.   We don't install that driver anymore.

3. it just runs OPTIMIZE on every table across the board, and at
best you can give it a list of tables.  I was hoping not to add more
hardcoded cross-dependencies to tripleo, as this means individual
projects would need to affect how the script is run, which means we
again have to start shipping individual per-app crons that require
eternal babysitting.

What failures do you foresee if I tried to make it compare the logical
data size to the physical file size, since I'm going for file size
optimization only?  Or is it just too complicated / brittle?

>
> Best,
> -jay
>
>> thanks for your thoughts!
>>
>>
>>
>> [1] https://github.com/deimosfr/galera_innoptimizer
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [novaclient] invoking methods on the same client object in different theads caused malformed requests

2018-04-03 Thread Chris Friesen

On 04/03/2018 04:25 AM, Xiong, Huan wrote:

Hi,

I'm using a cloud benchmarking tool [1], which creates a *single* nova
client object in main thread and invoke methods on that object in different
worker threads. I find it generated malformed requests at random (my
system has python-novaclient 10.1.0 installed). The root cause was because
some methods in novaclient (e.g., those in images.py and networks.py)
changed client object's service_type. Since all threads shared a single
client object, the change caused other threads generated malformed requests
and hence the failure.

I wonder if this is a known issue for novaclient, or the above approach is
not supported?


In general, unless something says it is thread-safe you should assume it is not.
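
One pattern that avoids the problem is to give each worker thread its own
client object. A rough sketch (credentials and URLs are placeholders, and
sharing a single keystoneauth session across threads is an assumption --
recreate that per thread too if in doubt):

import concurrent.futures

from keystoneauth1 import loading
from keystoneauth1 import session as ks_session
from novaclient import client as nova_client


def make_session():
    # Placeholder auth details -- fill in real credentials/URLs.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                    username='demo', password='secret',
                                    project_name='demo',
                                    user_domain_id='default',
                                    project_domain_id='default')
    return ks_session.Session(auth=auth)


def list_servers(sess):
    # A fresh client per call/thread, so per-request state such as
    # service_type is never shared between threads.
    nova = nova_client.Client('2.1', session=sess)
    return nova.servers.list()


sess = make_session()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda _: list_servers(sess), range(4)))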

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-03 Thread Jay Pipes

On 04/03/2018 11:07 AM, Michael Bayer wrote:

The MySQL / MariaDB variants we use nowadays default to
innodb_file_per_table=ON and we also set this flag to ON in installer
tools like TripleO. The reason we like file per table is so that
we don't grow an enormous ibdata file that can't be shrunk without
rebuilding the database.  Instead, we have lots of little .ibd
datafiles for each table throughout each openstack database.

But now we have the issue that these files also can benefit from
periodic optimization which can shrink them and also have a beneficial
effect on performance.   The OPTIMIZE TABLE statement achieves this,
but as would be expected it itself can lock tables for potentially a
long time.   Googling around reveals a lot of controversy, as various
users and publications suggest that OPTIMIZE is never needed and would
have only a negligible effect on performance.   However here we seek
to use OPTIMIZE so that we can reclaim disk space on tables that have
lots of DELETE activity, such as keystone "token" and ceilometer
"sample".

Questions for the group:

1. is OPTIMIZE table worthwhile to be run for tables where the
datafile has grown much larger than the number of rows we have in the
table?


Possibly, though it's questionable to use MySQL/InnoDB for storing 
transient data that is deleted often like ceilometer samples and 
keystone tokens. A much better solution is to use RDBMS partitioning so 
you can simply ALTER TABLE .. DROP PARTITION those partitions that are 
no longer relevant (and don't even bother DELETEing individual rows) or, 
in the case of Ceilometer samples, don't use a traditional RDBMS for 
timeseries data at all...


But since that is unfortunately already the case, yes it is probably a 
good idea to OPTIMIZE TABLE on those tables.



2. from people's production experience how safe is it to run OPTIMIZE,
e.g. how long is it locking tables, etc.


Is it safe? Yes.

Does it lock the entire table for the duration of the operation? No. It 
uses online DDL operations:


https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html

Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for 
InnoDB tables.



3. is there a heuristic we can use to measure when we might run this
- e.g. my plan is we measure the size in bytes of each row in a table
and then compare that in some ratio to the size of the corresponding
.ibd file, if the .ibd file is N times larger than the logical data
size we run OPTIMIZE ?


I don't believe so, no. Most recommendations I see are to simply run
OPTIMIZE TABLE periodically in a cron job on each table.



4. I'd like to propose this job of scanning table datafile sizes in
ratio to logical data sizes, then running OPTIMIZE, be a utility
script that is delivered via oslo.db, and would run for all innodb
tables within a target MySQL/ MariaDB server generically.  That is, I
really *dont* want this to be a script that Keystone, Nova, Ceilometer
etc. are all maintaining delivering themselves.   this should be done
as a generic pass on a whole database (noting, again, we are only
running it for very specific InnoDB tables that we observe have a poor
logical/physical size ratio).


I don't believe this should be in oslo.db. This is strictly the purview 
of deployment tools and should stay there, IMHO.



5. for Galera this gets more tricky, as we might want to run OPTIMIZE
on individual nodes directly.  The script at [1] illustrates how to
run this on individual nodes one at a time.

More succinctly, the Q is:

a. OPTIMIZE, yes or no?


Yes.


b. oslo.db script to run generically, yes or no?


No. Just have Triple-O install galera_innoptimizer and run it in a cron job.

Best,
-jay


thanks for your thoughts!



[1] https://github.com/deimosfr/galera_innoptimizer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Javier Pena
- Original Message -

> On Tue, 3 Apr 2018 at 10:00 Javier Pena < jp...@redhat.com > wrote:

> > > Greeting folks,
> 
> > >
> 
> > > During the last PTG we spent time discussing some ideas around an
> > > All-In-One
> 
> > > installer, using 100% of the TripleO bits to deploy a single node
> > > OpenStack
> 
> > > very similar with what we have today with the containerized undercloud
> > > and
> 
> > > what we also have with other tools like Packstack or Devstack.
> 
> > >
> 
> > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> 
> > >
> 

> > I'm really +1 to this. And as a Packstack developer, I'd love to see this
> > as
> > a
> 
> > mid-term Packstack replacement. So let's dive into the details.
> 

> > > One of the problems that we're trying to solve here is to give a simple
> > > tool
> 
> > > for developers so they can both easily and quickly deploy an OpenStack
> > > for
> 
> > > their needs.
> 
> > >
> 
> > > "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
> > > and
> 
> > > without complexity, reproducing the same exact same tooling as TripleO is
> 
> > > using."
> 
> > > "As a Neutron developer, I need to develop a feature in Neutron and test
> > > it
> 
> > > with TripleO in my local env."
> 
> > > "As a TripleO dev, I need to implement a new service and test its
> > > deployment
> 
> > > in my local env."
> 
> > > "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> 
> > > production chain, quickly and simply."
> 
> > >
> 

> > "As a packager, I want an easy/low overhead way to test updated packages
> > with
> > TripleO bits, so I can make sure they will not break any automation".
> 

> I suspect we need to not only update packages, but also update containers,
> wdyt?

I'm being implementation-agnostic in my requirement on purpose :). It could be 
either a new container including the updates, or updating the existing 
container with the new packages. 

> > > Probably more use cases, but to me that's what came into my mind now.
> 
> > >
> 
> > > Dan kicked-off a doc patch a month ago:
> 
> > > https://review.openstack.org/#/c/547038/
> 
> > > And I just went ahead and proposed a blueprint:
> 
> > > https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> 
> > > So hopefully we can start prototyping something during Rocky.
> 
> > >
> 
> > > Before talking about the actual implementation, I would like to gather
> 
> > > feedback from people interested by the use-cases. If you recognize
> > > yourself
> 
> > > in these use-cases and you're not using TripleO today to test your things
> 
> > > because it's too complex to deploy, we want to hear from you.
> 
> > > I want to see feedback (positive or negative) about this idea. We need to
> 
> > > gather ideas, use cases, needs, before we go design a prototype in Rocky.
> 
> > >
> 

> > I would like to offer help with initial testing once there is something in
> > the repos, so count me in!
> 

> > Regards,
> 
> > Javier
> 

> > > Thanks everyone who'll be involved,
> 
> > > --
> 
> > > Emilien Macchi
> 
> > >
> 
> > > __
> 
> > > OpenStack Development Mailing List (not for usage questions)
> 
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> > __
> 
> > OpenStack Development Mailing List (not for usage questions)
> 
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [HEAT] order in attributes list

2018-04-03 Thread Volodymyr Litovka

Hi colleagues,

I have the following HOT configuration of a port:

  n1-wan:
    type: OS::Neutron::Port
    properties:
      fixed_ips:
        - { subnet: e-subnet1, ip_address: 51.x.x.x }
        - { subnet: e-subnet2, ip_address: 25.x.x.x }

When I try to extract these values in the template using {get_attr}, then
regardless of the fixed_ips order in the port definition (either "subnet1,
subnet2" or "subnet2, subnet1"), the value of { get_attr: [n1-wan,
fixed_ips] } always gives the following result:


output_value:
  - ip_address: 25.x.x.x
    subnet_id: ...
  - ip_address: 51.x.x.x
    subnet_id: ...

and, thus, { get_attr: [n1-wan, fixed_ips, 1, ip_address ] } gives me the
51.x.x.x value.


So, the question is: how is the list of fixed_ips ordered? Is there a way
to know for sure the index of the entry I'm interested in?


Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [nova][placement] Consumer generations (allowing multiple clients to allocate for an instance)

2018-04-03 Thread Jay Pipes

Stackers,

Today, a few of us had a chat to discuss changes to the Placement REST 
API [1] that will allow multiple clients to safely update a single 
consumer's set of resource allocations. This email is to summarize the 
decisions coming out of that chat.


Note that Ed is currently updating the following nova-spec:

https://review.openstack.org/#/c/556971/

The decisions made were as follows:

1) The GET /allocations/{consumer_uuid} REST API endpoint will now have 
a required consumer_generation field in the response. This will be an 
integer value.


2) The PUT /allocations/{consumer_uuid} REST API endpoint will have a 
new consumer_generation required field in the request payload.


3) Callers to PUT /allocations/{consumer_uuid} that believe they are the 
first caller to set allocations for the consumer will set 
consumer_generation to None.


4) If consumer_generation is None in the request to PUT 
/allocations/{consumer_uuid} and the placement service notes that 
allocations already exist for that consumer, a 409 conflict will be 
returned. The caller will need to then GET /allocations/{consumer_uuid} 
to retrieve the consumer's current generation and allocations, merge its 
new resources into those allocations and retry PUT 
/allocations/{consumer_uuid}, passing the merged allocation set and 
consumer generation.


5) The POST /allocations REST API endpoint is currently only used by 
nova when performing migrate or resize operations for a virtual machine. 
The POST /allocations REST API request payload will contain a new 
required consumer_generation field in each top-level dict element 
corresponding to the allocations to overwrite for one or more consumers. 
(the migrate/resize code paths use multiple consumer UUIDs to identify 
the resources that are allocated to the source and destination hosts)


6) The HTTP response codes for both PUT /allocations/{consumer_uuid} and 
POST /allocations will continue to be 204 No Content.
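
For illustration, a client-side sketch of the flow described in (3) and
(4). Header names, the microversion, and the payload shape are simplified
or assumed here; the placement API reference is authoritative:

import requests  # sketch only; real callers would go through keystoneauth1


def put_allocations(base_url, token, consumer_uuid, new_allocations):
    # 'new_allocations' is the resource-provider -> resources mapping;
    # other required payload fields (project_id, user_id, etc.) are
    # omitted in this sketch.
    headers = {'X-Auth-Token': token,
               'OpenStack-API-Version': 'placement 1.28'}  # version assumed
    url = '%s/allocations/%s' % (base_url, consumer_uuid)

    # First attempt: claim to be the first writer for this consumer.
    body = {'allocations': new_allocations, 'consumer_generation': None}
    resp = requests.put(url, json=body, headers=headers)
    if resp.status_code != 409:
        return resp

    # 409 conflict: allocations already exist. Re-read, merge, and retry
    # with the generation we were given (a real client would loop, since
    # this retry can race again).
    current = requests.get(url, headers=headers).json()
    merged = dict(current['allocations'])
    merged.update(new_allocations)
    body = {'allocations': merged,
            'consumer_generation': current['consumer_generation']}
    return requests.put(url, json=body, headers=headers)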


Thanks,
-jay

[1] https://docs.openstack.org/nova/latest/user/placement.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-03 Thread Michael Bayer
The MySQL / MariaDB variants we use nowadays default to
innodb_file_per_table=ON and we also set this flag to ON in installer
tools like TripleO. The reason we like file per table is so that
we don't grow an enormous ibdata file that can't be shrunk without
rebuilding the database.  Instead, we have lots of little .ibd
datafiles for each table throughout each openstack database.

But now we have the issue that these files also can benefit from
periodic optimization which can shrink them and also have a beneficial
effect on performance.   The OPTIMIZE TABLE statement achieves this,
but as would be expected it itself can lock tables for potentially a
long time.   Googling around reveals a lot of controversy, as various
users and publications suggest that OPTIMIZE is never needed and would
have only a negligible effect on performance.   However here we seek
to use OPTIMIZE so that we can reclaim disk space on tables that have
lots of DELETE activity, such as keystone "token" and ceilometer
"sample".

Questions for the group:

1. Is OPTIMIZE TABLE worth running for tables where the datafile has
grown much larger than the data actually stored in the table?

2. From people's production experience, how safe is it to run OPTIMIZE,
e.g. how long does it lock tables, etc.?

3. Is there a heuristic we can use to decide when to run this? E.g. my
plan is to measure the size in bytes of each row in a table, compare
that in some ratio to the size of the corresponding .ibd file, and run
OPTIMIZE if the .ibd file is N times larger than the logical data size.
(A rough sketch of such a scan follows the questions below.)

4. I'd like to propose that this job of scanning table datafile sizes in
ratio to logical data sizes, then running OPTIMIZE, be a utility
script delivered via oslo.db, which would run generically for all InnoDB
tables within a target MySQL / MariaDB server.  That is, I really
*don't* want this to be a script that Keystone, Nova, Ceilometer
etc. are all maintaining and delivering themselves.   This should be done
as a generic pass on a whole database (noting, again, we would only be
running it for very specific InnoDB tables that we observe have a poor
logical/physical size ratio).

5. for Galera this gets more tricky, as we might want to run OPTIMIZE
on individual nodes directly.  The script at [1] illustrates how to
run this on individual nodes one at a time.
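
As a strawman for questions 3 and 4, the scan could look roughly like the
sketch below. It uses the free-space figures from information_schema rather
than stat'ing the .ibd files directly; the ratio threshold and the connection
URL are placeholders, and a real oslo.db script would also need the
Galera-aware handling from question 5:

    # Rough sketch only -- the threshold, the use of DATA_FREE as a proxy
    # for "the .ibd file is much larger than the logical data", and the
    # connection URL are placeholders for discussion.
    from sqlalchemy import create_engine, text

    RATIO = 0.25  # rebuild if more than 25% of the tablespace looks reclaimable

    engine = create_engine("mysql+pymysql://root:secret@localhost/")

    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT table_schema, table_name, data_length, index_length, "
            "data_free FROM information_schema.tables "
            "WHERE engine = 'InnoDB'"))
        for schema, name, data_len, index_len, free in rows:
            logical = (data_len or 0) + (index_len or 0)
            if logical and free and float(free) / logical > RATIO:
                # OPTIMIZE on InnoDB maps to ALTER TABLE ... FORCE, which
                # rebuilds the table and can lock writes for a while.
                conn.execute(text(
                    "OPTIMIZE TABLE `%s`.`%s`" % (schema, name)))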

More succinctly, the Q is:

a. OPTIMIZE, yes or no?
b. oslo.db script to run generically, yes or no?

thanks for your thoughts!



[1] https://github.com/deimosfr/galera_innoptimizer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Wesley Hayutin
On Tue, 3 Apr 2018 at 10:00 Javier Pena  wrote:

>
> > Greeting folks,
> >
> > During the last PTG we spent time discussing some ideas around an
> All-In-One
> > installer, using 100% of the TripleO bits to deploy a single node
> OpenStack
> > very similar with what we have today with the containerized undercloud
> and
> > what we also have with other tools like Packstack or Devstack.
> >
> > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >
>
> I'm really +1 to this. And as a Packstack developer, I'd love to see this
> as a
> mid-term Packstack replacement. So let's dive into the details.
>
> > One of the problems that we're trying to solve here is to give a simple
> tool
> > for developers so they can both easily and quickly deploy an OpenStack
> for
> > their needs.
> >
> > "As a developer, I need to deploy OpenStack in a VM on my laptop,
> quickly and
> > without complexity, reproducing the same exact same tooling as TripleO is
> > using."
> > "As a Neutron developer, I need to develop a feature in Neutron and test
> it
> > with TripleO in my local env."
> > "As a TripleO dev, I need to implement a new service and test its
> deployment
> > in my local env."
> > "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> > production chain, quickly and simply."
> >
>
> "As a packager, I want an easy/low overhead way to test updated packages
> with TripleO bits, so I can make sure they will not break any automation".
>

I suspect we need to not only update packages, but also update containers,
wdyt?


>
> > Probably more use cases, but to me that's what came into my mind now.
> >
> > Dan kicked-off a doc patch a month ago:
> > https://review.openstack.org/#/c/547038/
> > And I just went ahead and proposed a blueprint:
> > https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> > So hopefully we can start prototyping something during Rocky.
> >
> > Before talking about the actual implementation, I would like to gather
> > feedback from people interested by the use-cases. If you recognize
> yourself
> > in these use-cases and you're not using TripleO today to test your things
> > because it's too complex to deploy, we want to hear from you.
> > I want to see feedback (positive or negative) about this idea. We need to
> > gather ideas, use cases, needs, before we go design a prototype in Rocky.
> >
>
> I would like to offer help with initial testing once there is something in
> the repos, so count me in!
>
> Regards,
> Javier
>
> > Thanks everyone who'll be involved,
> > --
> > Emilien Macchi
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Telles Nobrega
I'd really love to see this going forward. I fit perfectly into the category
of people who usually don't test stuff on TripleO because it can get too
complex and take a lot of time to deploy, so this seems like a perfect
solution for that.

Thanks for putting this forward.

On Tue, Apr 3, 2018 at 11:00 AM Javier Pena  wrote:

>
> > Greeting folks,
> >
> > During the last PTG we spent time discussing some ideas around an
> All-In-One
> > installer, using 100% of the TripleO bits to deploy a single node
> OpenStack
> > very similar with what we have today with the containerized undercloud
> and
> > what we also have with other tools like Packstack or Devstack.
> >
> > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >
>
> I'm really +1 to this. And as a Packstack developer, I'd love to see this
> as a
> mid-term Packstack replacement. So let's dive into the details.
>
> > One of the problems that we're trying to solve here is to give a simple
> tool
> > for developers so they can both easily and quickly deploy an OpenStack
> for
> > their needs.
> >
> > "As a developer, I need to deploy OpenStack in a VM on my laptop,
> quickly and
> > without complexity, reproducing the same exact same tooling as TripleO is
> > using."
> > "As a Neutron developer, I need to develop a feature in Neutron and test
> it
> > with TripleO in my local env."
> > "As a TripleO dev, I need to implement a new service and test its
> deployment
> > in my local env."
> > "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> > production chain, quickly and simply."
> >
>
> "As a packager, I want an easy/low overhead way to test updated packages
> with TripleO bits, so I can make sure they will not break any automation".
>
> > Probably more use cases, but to me that's what came into my mind now.
> >
> > Dan kicked-off a doc patch a month ago:
> > https://review.openstack.org/#/c/547038/
> > And I just went ahead and proposed a blueprint:
> > https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> > So hopefully we can start prototyping something during Rocky.
> >
> > Before talking about the actual implementation, I would like to gather
> > feedback from people interested by the use-cases. If you recognize
> yourself
> > in these use-cases and you're not using TripleO today to test your things
> > because it's too complex to deploy, we want to hear from you.
> > I want to see feedback (positive or negative) about this idea. We need to
> > gather ideas, use cases, needs, before we go design a prototype in Rocky.
> >
>
> I would like to offer help with initial testing once there is something in
> the repos, so count me in!
>
> Regards,
> Javier
>
> > Thanks everyone who'll be involved,
> > --
> > Emilien Macchi
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil  

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
 Red Hat is recognized as one of the best companies to work for in Brazil
by Great Place to Work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Javier Pena

> Greeting folks,
>
> During the last PTG we spent time discussing some ideas around an All-In-One
> installer, using 100% of the TripleO bits to deploy a single node OpenStack
> very similar with what we have today with the containerized undercloud and
> what we also have with other tools like Packstack or Devstack.
>
> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>

I'm really +1 to this. And as a Packstack developer, I'd love to see this as a
mid-term Packstack replacement. So let's dive into the details.

> One of the problems that we're trying to solve here is to give a simple tool
> for developers so they can both easily and quickly deploy an OpenStack for
> their needs.
>
> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and
> without complexity, reproducing the same exact same tooling as TripleO is
> using."
> "As a Neutron developer, I need to develop a feature in Neutron and test it
> with TripleO in my local env."
> "As a TripleO dev, I need to implement a new service and test its deployment
> in my local env."
> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> production chain, quickly and simply."
>

"As a packager, I want an easy/low overhead way to test updated packages with 
TripleO bits, so I can make sure they will not break any automation".

> Probably more use cases, but to me that's what came into my mind now.
>
> Dan kicked-off a doc patch a month ago:
> https://review.openstack.org/#/c/547038/
> And I just went ahead and proposed a blueprint:
> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> So hopefully we can start prototyping something during Rocky.
>
> Before talking about the actual implementation, I would like to gather
> feedback from people interested by the use-cases. If you recognize yourself
> in these use-cases and you're not using TripleO today to test your things
> because it's too complex to deploy, we want to hear from you.
> I want to see feedback (positive or negative) about this idea. We need to
> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>

I would like to offer help with initial testing once there is something in the 
repos, so count me in!

Regards,
Javier

> Thanks everyone who'll be involved,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Jay Pipes

On 04/03/2018 06:48 AM, Chris Dent wrote:

On Mon, 2 Apr 2018, Alex Schultz wrote:


So this is/was valid. A few years back there was some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process count (hence the bug). Now I guess the question is for every
service what is the optimal configuration but I'm not sure there's
anyone who's looking at this in the upstream for all the services.  In
the puppet modules for consistency we applied a similar concept for
all the services when they are deployed under apache.  It can be tuned
as needed for each service but I don't think we have any great
examples of perf numbers. It's really a YMMV thing. We ship a basic
default that isn't crazy, but it's probably not optimal either.


Do you happen to recall if the trouble with keystone and threaded
web servers had anything to do with eventlet? Support for the
eventlet-based server was removed from keystone in Newton.


IIRC, it had something to do with the way the keystoneauth middleware 
interacted with memcache... not sure if this is still valid any more 
though. Probably worth re-checking the performance.


-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread James Slagle
On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince  wrote:
> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi  wrote:
>> Greeting folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very similar with what we have today with the containerized undercloud and
>> what we also have with other tools like Packstack or Devstack.
>>
>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>
>> One of the problems that we're trying to solve here is to give a simple tool
>> for developers so they can both easily and quickly deploy an OpenStack for
>> their needs.
>>
>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
>> and without complexity, reproducing the same exact same tooling as TripleO
>> is using."
>> "As a Neutron developer, I need to develop a feature in Neutron and test it
>> with TripleO in my local env."
>> "As a TripleO dev, I need to implement a new service and test its deployment
>> in my local env."
>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>> production chain, quickly and simply."
>>
>> Probably more use cases, but to me that's what came into my mind now.
>>
>> Dan kicked-off a doc patch a month ago:
>> https://review.openstack.org/#/c/547038/
>> And I just went ahead and proposed a blueprint:
>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>> So hopefully we can start prototyping something during Rocky.
>
> I've actually started hacking a bit here:
>
> https://github.com/dprince/talon
>
> Very early and I haven't committed everything yet. (Probably wouldn't
> have announced it to the list yet but it might help some understand
> the use case).
>
> I'm running this on my laptop to develop TripleO containers with no
> extra VM involved.
>
> P.S. We should call it Talon!
>
> Dan
>
>>
>> Before talking about the actual implementation, I would like to gather
>> feedback from people interested by the use-cases. If you recognize yourself
>> in these use-cases and you're not using TripleO today to test your things
>> because it's too complex to deploy, we want to hear from you.
>> I want to see feedback (positive or negative) about this idea. We need to
>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>
> Sorry dude. Already prototyping :)

A related use case to all this work that takes it a step further:

I think it would be useful if we could eventually further break down
"openstack undercloud deploy" into just the pieces needed to:

- start an ephemeral Heat container
- create the Heat stack passing all requested -e's
- run config-download and save the output

Essentially removing the undercloud specific logic (or all-in-one
specific logic in this case) from "openstack undercloud deploy" and
resulting in a generic way to create the config-download playbooks for
any given TripleO stack (openstack tripleo deploy?). This would be
possible when using deployed-server, noop'ing Neutron networks, and
using fixed IPs, as those are the only OpenStack resources actually
created by Heat when using a full undercloud.

This would allow one to consume the ansible playbooks for a multinode
overcloud using an ephemeral Heat.

The same generic tooling could then be used to deploy an actual
undercloud, any all-in-one configuration, or any overcloud
configuration.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed

2018-04-03 Thread Elõd Illés

Hi,

These patches probably solve the issue, if someone could review them:

https://review.openstack.org/#/c/557005/

and

https://review.openstack.org/#/c/557006/

Thanks,

Előd


On 2018-04-01 05:55, Tony Breeds wrote:

On Sat, Mar 31, 2018 at 06:17:41AM +, A mailing list for the OpenStack 
Stable Branch test reports. wrote:

Build failed.

- build-openstack-sphinx-docs 
http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/build-openstack-sphinx-docs/b20c665/html/
 : SUCCESS in 5m 48s
- openstack-tox-py27 
http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/75db3fe/
 : FAILURE in 11m 49s
  


I'm not sure what's going on here but as with stable/ocata the
networking-midonet periodic-stable jobs have been failing like this for
close to a week.

Can someone from that team take a look

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-04-03 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Minwook,

Thanks for the explanation. I understand the reasons for not running these 
checks on a regular basis in Zabbix or other monitoring tools. It makes sense. 
However, I don’t want to re-invent the wheel and add to Vitrage functionality 
that already exists in other projects.

How about using Mistral for the purpose of manually running these extra checks? 
If you prepare the script/agent in advance, as well as the Mistral workflow, I 
believe that Mistral can successfully execute the check and return the results. 
I’m not so sure about the UI part; we will have to figure out how and where the 
user can see the output. But it will save a lot of effort around managing the 
checks, running a new service, supporting a new API, etc.

What do you think?
Ifat


From: MinWookKim 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 3 April 2018 at 5:36
To: "'OpenStack Development Mailing List (not for usage questions)'" 

Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

I also thought about several scenarios that use monitoring tools like Zabbix, 
Nagios, and Prometheus.

But there are some limitations, so we have to think about it.

We also need to think about targets, scope, and so on.

The reason I do not think of tools like Zabbix, Nagios, and Prometheus as the 
way to run these checks is that we need to configure an agent or an exporter.

I think it is not hard to configure an agent for monitoring objects such as a 
physical host.

But the scope of the idea, I think, includes the VM's interior.

Therefore, configuring the agent automatically inside the VM may not be easy. 
(although we can use parameters like user-data)

If we exclude VM internal checks from scope, we can simply perform a check via 
Zabbix. (Like Zabbix's remote command, history)

On the other hand, if we include the inside of a VM in the scope and configure 
an agent for each of them, we have a rather constant overhead.

The check service may incur temporary overhead, but the agent configuration can 
cause constant overhead.

And Zabbix history can be another task for Vitrage.

If we configure the agents themselves and exclude the VM's internal checks, we 
can provide functionality with simple code.

What do you think?

Thank you.

Best regards,
Minwook.
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Monday, April 2, 2018 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thinking about it again, writing a new service for these checks might be an 
unnecessary overhead. Have you considered using an existing tool, like Zabbix, 
for running such checks? If you use Zabbix, you can define new triggers that 
run the new checks, and whenever needed the user can ask to open Zabbix and 
show the relevant metrics. The format will not be exactly the same as in your 
example, but it will save a lot of work and spare you the need to write and 
manage a new service.

Some technical details:

* The current information that Vitrage stores is not enough for opening
the right Zabbix page. We will need to keep a little more data, like the item
id, on the alarm vertex. But this can be done easily.

* A relevant Zabbix API is history.get [1] (a minimal call sketch follows
below)

* If you are not using Zabbix, I assume that other monitoring tools
have similar capabilities

What do you think? Do you think it can work with your scenario?
Or do you see a benefit to the user in viewing the data in the format that you 
suggested?


[1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get
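
For illustration, a minimal sketch of such a history.get call (the endpoint
URL, auth token and item id are placeholders, and the exact parameters depend
on the item's value type):

    # Placeholder values throughout -- the URL, token and itemids would come
    # from the deployment and from the data Vitrage keeps on the alarm vertex.
    import requests

    ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"

    payload = {
        "jsonrpc": "2.0",
        "method": "history.get",
        "params": {
            "itemids": "23296",   # item id stored on the alarm vertex
            "history": 0,         # 0 = numeric float; depends on the item
            "sortfield": "clock",
            "sortorder": "DESC",
            "limit": 10,
        },
        "auth": "<auth token from user.login>",
        "id": 1,
    }

    history = requests.post(ZABBIX_URL, json=payload).json()["result"]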

Thanks,
Ifat


From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 2 April 2018 at 4:51
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thank you for the reply. :)

It is my opinion only, so if I'm wrong, we can change the implementation part 
at any time. (Even if it differs from my initial intention)

The same security issues arise as you say. But now Vitrage does not call 
external APIs.

The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and 
RCA requests to Vitrage.

So if we add an API, it will have the following flow.

Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage 
receives the API call.

-> api / controllers / v1 / checks.py is called. -> checks service is called.

In accordance with the above flow, passing through the Vitrage API is the 
purpose of data passing and function calls.

I think 

Re: [Openstack-operators] nova-placement-api tuning

2018-04-03 Thread Chris Dent

On Mon, 2 Apr 2018, Alex Schultz wrote:


So this is/was valid. A few years back there was some perf tests done
with various combinations of process/threads and for Keystone it was
determined that threads should be 1 while you should adjust the
process count (hence the bug). Now I guess the question is for every
service what is the optimal configuration but I'm not sure there's
anyone who's looking at this in the upstream for all the services.  In
the puppet modules for consistency we applied a similar concept for
all the services when they are deployed under apache.  It can be tuned
as needed for each service but I don't think we have any great
examples of perf numbers. It's really a YMMV thing. We ship a basic
default that isn't crazy, but it's probably not optimal either.


Do you happen to recall if the trouble with keystone and threaded
web servers had anything to do with eventlet? Support for the
eventlet-based server was removed from keystone in Newton.

I've been doing some experiments with placement using multiple uwsgi
processes, each with multiple threads and it appears to be working
very well. Ideally all the OpenStack HTTP-based services would be
able to run effectively in that kind of setup. If they can't I'd
like to help make it possible.

In any case: processes 3, threads 1 for WSGIDaemonProcess for the
placement service for a deployment of any real size errs on the
side of too conservative and I hope we can make some adjustments
there.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc

2018-04-03 Thread Stephen Finucane
On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
> On 28/03/18 10:31, Stephen Finucane wrote:
> > As noted last week [1], we're trying to move away from pbr's autodoc
> > feature as part of the new docs PTI. To that end, I've created
> > sphinxcontrib-apidoc, which should do what pbr was previously doing for
> > us via a Sphinx extension.
> > 
> >https://pypi.org/project/sphinxcontrib-apidoc/
> > 
> > This works by reading some configuration from your documentation's
> > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
> > longer need pbr to do this for us.
> > 
> > I have pushed version 0.1.0 to PyPi already but before I add this to
> > global requirements, I'd like to ensure things are working as expected.
> > smcginnis was kind enough to test this out on glance and it seemed to
> > work for him but I'd appreciate additional data points. The
> > configuration steps for this extension are provided in the above link.
> > To test this yourself, you simply need to do the following:
> > 
> > 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
> >doc/requirements.txt file
> > 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]'
> >configuration from 'setup.cfg'
> > 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build'
> > 4. Run 'tox -e docs'
> > 5. Profit?
> > 
> > Be sure to let me know if anyone encounters issues. If not, I'll be
> > pushing for this to be included in global requirements so we can start
> > the migration.
> 
> Thanks Stephen! I tried it out with no problems:
> 
> https://review.openstack.org/558262
> 
> However, there are a couple of differences compared to how pbr did things.
> 
> 1) pbr can generate an 'autoindex' file with a flat list of modules 
> (this appears to be configurable with the autodoc_index_modules option), 
> but apidoc only generates a 'modules' file with a hierarchical list of 
> modules. This is easy to work around, but I guess it needs to be added 
> to the instructions to check that you're not relying on it.

Yup, smcginnis and I discussed this at some point. PBR has two
different ways of generating API documentation: 'autodoc_tree', which
is based on 'sphinx-apidoc', and 'autodoc', which is custom (and
presumably legacy). This extension replaces the former of those but, as
you note below, it seems 'sphinx-apidoc' can be wrangled into
generating something approaching the latter.

> 2) pbr generates a page per module; this plugin generates a page per 
> package. This results in way too much information on a page to be 
> able to navigate it comfortably IMHO. To the point where it's easier to 
> read the code. (It also breaks existing links, if you care about that 
> kind of thing.) I sent you a PR to add an option to pass --separate:
> 
> https://github.com/sphinx-contrib/apidoc/pull/1

Thanks for that. I've merged it and will use it as the basis of a 0.2.0
release assuming nothing else pops up in the next day or two. I'm not
sure what we can do about the broken links though - maybe use the
redirect infrastructure to just send everyone to the new place? I guess
I can add this to the guide I'm adding to the README on migrating from
pbr.
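
For anyone following along, the conf.py side of the migration ends up looking
roughly like this (module paths are project-specific placeholders; the option
names are taken from the project README, and apidoc_separate_modules
corresponds to the --separate support from Zane's PR, so it will only be
available from 0.2.0 on):

    # doc/source/conf.py -- illustrative values; adjust paths per project
    extensions = [
        'sphinx.ext.autodoc',
        'sphinxcontrib.apidoc',
    ]

    apidoc_module_dir = '../../myproject'   # the code to document
    apidoc_output_dir = 'reference/api'     # where the generated .rst files go
    apidoc_excluded_paths = ['tests']
    apidoc_separate_modules = True          # one page per module (0.2.0+)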

Cheers,
Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [novaclient] invoking methods on the same client object in different theads caused malformed requests

2018-04-03 Thread Xiong, Huan
Hi,

I'm using a cloud benchmarking tool [1], which creates a *single* nova
client object in the main thread and invokes methods on that object from
different worker threads. I found it generated malformed requests at random
(my system has python-novaclient 10.1.0 installed). The root cause was that
some methods in novaclient (e.g., those in images.py and networks.py)
changed the client object's service_type. Since all threads shared a single
client object, the change caused other threads to generate malformed
requests and hence the failure.

I wonder if this is a known issue for novaclient, or whether the above
approach is simply not supported?

Thanks,
rayx

[1] https://github.com/ibmcb/cbtool
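
One possible workaround is to create a separate client in each worker thread
on top of a shared keystoneauth1 session. A minimal sketch (the auth details
are placeholders, and it assumes the shared session itself is safe to use
across threads):

    import threading

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    # Placeholder credentials -- replace with the deployment's own.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone.example.com/identity/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    def worker():
        # One novaclient instance per thread, so no thread can observe
        # another thread's service_type change.
        nova = nova_client.Client('2.1', session=sess)
        nova.servers.list()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()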

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Solar" release

2018-04-03 Thread Thomas Bechtold

Hey,

On 30.03.2018 16:26, Kashyap Chamarthy wrote:
[...]

Taking the DistroSupportMatrix into picture, for the sake of discussion,
how about the following NEXT_MIN versions for "Solar" release:

(a) libvirt: 3.2.0 (released on 23-Feb-2017)


[...]


(b) QEMU: 2.9.0 (released on 20-Apr-2017)


[...]

Works both for openSUSE and SLES.

Best,

Tom

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Solar" release

2018-04-03 Thread Thomas Bechtold

Hey,

On 30.03.2018 16:26, Kashyap Chamarthy wrote:
[...]

Taking the DistroSupportMatrix into picture, for the sake of discussion,
how about the following NEXT_MIN versions for "Solar" release:

(a) libvirt: 3.2.0 (released on 23-Feb-2017)


[...]


(b) QEMU: 2.9.0 (released on 20-Apr-2017)


[...]

Works both for openSUSE and SLES.

Best,

Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Call before the Vancouver training - ACTION NEEDED

2018-04-03 Thread Ildiko Vancsa
Hi Training Team,

Our next training in Vancouver[1] is quickly approaching and we still have a 
lot of work to do.

In order to sync up I created a Doodle poll[2] with hours that are somewhat 
inconvenient, but can work around the globe. Please respond to the poll so we 
can setup a call to check on where we are and do last minute changes if needed.

In the meantime we are moving content over from the training-guides slides to 
the Contributor Guide[3], please pick a task and help out!

We also need to work on the exercises to keep the training interactive and 
hands on. If you have ideas please respond to this thread, jump on our IRC 
channel (#openstack-upstream-institute) or propose a patch to the training 
guides repository. :)

Let me know if you have any questions.

Thanks,
Ildikó (IRC: ildikov)

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Upstream+Institute
[2] https://doodle.com/poll/i894hhd7bfukmm7p
[3] https://storyboard.openstack.org/#!/project/913



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-03 Thread Shintaro Mizuno

I'm also +1 on this.

I've circulated this to the Japanese Ops group and heard no objections, so 
there would be more +1s from our community.


Shintaro
--
Shintaro MIZUNO (水野伸太郎)
NTT Software Innovation Center
TEL: 0422-59-4977
E-mail: mizuno.shint...@lab.ntt.co.jp


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-04-03 Thread Thierry Carrez
Erik McCormick wrote:
> I'm a +1 too as long as the devs at large are cool with it and won't
> hate on us for crashing their party.

As a data point, in a recent survey 89% of surveyed developers supported
the Ops meetup happening at the same time and place. Amongst
past PTG attendees, that support rises to 92%. Furthermore, I only heard
good things about the Public Cloud WG participating in the Dublin PTG.

So I don't think anyone views it as "their party" -- just as an event
where we all get stuff done.

-- 
Thierry

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [k8s] OpenStack and Containers White Paper

2018-04-03 Thread Jaesuk Ahn
Hi Chris,

I can probably help with proof-reading and writing some content for the
openstack-helm part.
As Pete pointed out, LOCI and OpenStack-Helm (OSH) are agnostic to each
other. OSH works well with both kolla and loci images.

IMHO, the following categorization might better capture the nature of
these projects. Just a suggestion.

* OpenStack Containerization tools
   * Kolla
   * Loci
* Container-based deployment tools for installing and managing OpenStack
   * Kolla-Ansible
   * OpenStack Helm


On Tue, Apr 3, 2018 at 10:08 AM Pete Birley  wrote:

> Chris,
>
> I'd be happy to help out where I can, mostly related to OSH and LOCI. One
> thing we should make clear is that both of these projects are agnostic to
> each other: we gate OSH with both LOCI and kolla images, and conversely
> LOCI has uses far beyond just OSH.
>
> Pete
>
> On Monday, April 2, 2018, Chris Hoge  wrote:
>
>> Hi everyone,
>>
>> In advance of the Vancouver Summit, I'm leading an effort to publish a
>> community produced white-paper on OpenStack and container integrations.
>> This has come out of a need to develop materials, both short and long
>> form, to help explain how OpenStack interacts with container
>> technologies across the entire stack, from infrastructure to
>> application. The rough outline of the white-paper proposes an entire
>> technology stack and discuss deployment and usage strategies at every
>> level. The white-paper will focus on existing technologies, and how they
>> are being used in production today across our community. Beginning at
>> the hardware layer, we have the following outline (which may be inverted
>> for clarity):
>>
>> * OpenStack Ironic for managing bare metal deployments.
>> * Container-based deployment tools for installing and managing OpenStack
>>* Kolla containers and Kolla-Ansible
>>* Loci containers and OpenStack Helm
>> * OpenStack-hosted APIs for managing container application
>>   infrastructure.
>>* Magnum
>>* Zun
>> * Community-driven integration of Kubernetes and OpenStack with K8s
>>   Cloud Provider OpenStack
>> * Projects that can stand alone in integrations with Kubernetes and
>>   other cloud technology
>>* Cinder
>>* Neutron with Kuryr and Calico integrations
>>* Keystone authentication and authorization
>>
>> I'm looking for volunteers to help produce the content for these sections
>> (and any others we may uncover to be useful) for presenting a complete
>> picture of OpenStack and container integrations. If you're involved with
>> one of these projects, or are using any of these tools in
>> production, it would be fantastic to get your input in producing the
>> appropriate section. We especially want real-world deployments to use as
>> small case studies to inform the work.
>>
>> During the process of creating the white-paper, we will be working with a
>> technical writer and the Foundation design team to produce a document that
>> is consistent in voice, has accurate and informative graphics that
>> can be used to illustrate the major points and themes of the white-paper,
>> and that can be used as stand-alone media for conferences and
>> presentations.
>>
>> Over the next week, I'll be reaching out to individuals and inviting them
>> to collaborate. This is also a general invitation to collaborate, and if
>> you'd like to help out with a section please reach out to me here, on the
>> K8s #sig-openstack Slack channel, or at my work e-mail,
>> ch...@openstack.org.
>> Starting next week, we'll work out a schedule for producing and delivering
>> the white-paper by the Vancouver Summit. We are very short on time, so
>> we will have to be focused to quickly produce high-quality content.
>>
>> Thanks in advance to everyone who participates in writing this
>> document. I'm looking forward to working with you in the coming weeks to
>> publish this important resource for clearly describing the multitude of
>> interactions between these complementary technologies.
>>
>> -Chris Hoge
>> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
>
>
> Pete Birley / Director
> pete@port.direct / +447446862551
>
> *PORT.*DIRECT
> United Kingdom
> https://port.direct
>
> This e-mail message may contain confidential or legally privileged
> information and is intended only for the use of the intended recipient(s).
> Any unauthorized disclosure, dissemination, distribution, copying or the
> taking of any action in reliance on the information herein is prohibited.
> E-mails are not secure and cannot be guaranteed to be error free as they
> can be intercepted,