Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Andrea Frittoli
On Thu, May 4, 2017 at 7:13 PM Emilien Macchi  wrote:

> On Thu, May 4, 2017 at 9:41 AM, Dan Prince  wrote:
> > On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> >> - Original Message -
> >> > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> >> > > (cross-posting)
> >> >
> >> > > Instead of running the Pingtest, we would execute a Tempest
> >> > > Scenario
> >> > > that boots an instance from a volume (like Pingtest is already
> >> > > doing)
> >> > > and see how it goes (in terms of coverage and runtime).
> >> > > I volunteer to kick-off the work with someone more expert than I
> >> > > am
> >> > > with quickstart (Arx maybe?).
> >> > >
> >> > > Another iteration could be to start building an easy interface to
> >> > > select which Tempest tests we want a TripleO CI job to run and
> >> > > plug
> >> > > it
> >> > > to our CI tooling (tripleo-quickstart I presume).
> >> >
> >> > Running a subset of Tempest tests isn't the same thing as designing
> >> > (and owning) your own test suite that targets the things that mean
> >> > the
> >> > most to our community (namely speed and coverage). Even giving up
> >> > 5-10
> >> > minutes of runtime...just to be able to run Tempest isn't something
> >> > that some of us would be willing to do.
> >>
> >> As I mentioned, you can do it with Tempest (the library). You can
> >> have your own test suite that does exactly what you are asking
> >> (namely, a set of scenario tests based on Heat which targets the
> >> TripleO use case) in a Tempest plugin and there is no absolute reason
> >> that those tests should add 5-10 minutes of runtime compared to
> >> pingtest.
> >>
> >> It/they would be exactly pingtest, only implemented using a different
> >> library and running with a different runner, with the *exact* same
> >> run time.
> >>
> >> Obvious advantages: only one technology used to run tests, so if
> >> anyone else wants to run additional tests, there is no need to
> >> maintain two code paths; reuse of a big and proven library of tests
> >> and test runner tools.
> >
> > I like the idea of getting pingtest out of tripleo.sh as more of a
> > stand alone tool. I would support an effort that re-implemented it...
> > and using tempest-lib would be totally fine. And as you point out one
> > could even combine these tests with a more common "Tempest" run that
> > incorporates the scenarios, etc.
>
> I don't understand why we would re-implement the pingtest in a tempest
> plugin.
> Could you please tell us what the technical difference is between what
> this scenario does:
>
> https://github.com/openstack/tempest/blob/master/tempest/scenario/test_volume_boot_pattern.py
>
> And this pingtest:
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pingtests/tenantvm_floatingip.yaml
>
> They both create a Cinder volume, snapshot it in Glance, and spawn
> a Nova server from the volume.
>
> What does one do that the other one doesn't?
>
> > To me the message is clear that we DO NOT want to consume the normal
> > Tempest scenarios in TripleO upstream CI at this point. Sure there is
> > overlap there, but the focus of those tests is just plain different...
>
> I haven't seen strong pushback in this thread except from you.
> I'm against overlap in general and this one is pretty obvious. Why
> would we maintain a tripleo-specific Tempest scenario when existing
> ones would work for us? Please give me a technical reason why the
> existing scenarios are not good enough.
>
> > speed isn't a primary concern there as it is for us, so I don't think we
> > should do it now. And probably not ever unless the CI job time is less
> > than an hour. Even if we were able to tune a set of stock Tempest
> > smoke tests today to our liking, unless TripleO proper gates on the
> > runtime of those not increasing we'd be at risk of breaking our CI
> > queues as the wall time would potentially get too long. In this regard
> > this entire thread is poorly named, I think, in that we are no longer
> > talking about 'pingtest vs. tempest' but rather the implementation
> > details of how we reimplement our existing pingtest to better suit the
> > community.
>
> What I would like to see, if we're going to use Tempest in our gate, is
> to run at least one TripleO job as voting in Tempest.
>
> Tempest folks: I need your support here. We have been running Puppet
> jobs as non-voting and we have seen quite a number of patches that
> broke us because folks were ignoring the jobs. If we switch TripleO to
> use more Tempest, being in your gate might be required. We'll run the
> fastest and most stable job that we have to make sure the impact on
> you is minimal.
>
>
For TripleO I guess the risk of failures might be about deploying tempest
tests and their dependencies more than anything else.

We're increasing our stable API surface constantly, and I believe the number
of breakages has decreased over time - though I don't have data to support this.

Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Andrea Frittoli
On Thu, May 4, 2017 at 11:11 PM Dan Prince  wrote:

> On Thu, 2017-05-04 at 14:11 -0400, Emilien Macchi wrote:
> > On Thu, May 4, 2017 at 9:41 AM, Dan Prince 
> > wrote:
> > > On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> > > > - Original Message -
> > > > > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > > > > > (cross-posting)
> > > > > > Instead of running the Pingtest, we would execute a Tempest
> > > > > > Scenario
> > > > > > that boots an instance from a volume (like Pingtest is already
> > > > > > doing)
> > > > > > and see how it goes (in terms of coverage and runtime).
> > > > > > I volunteer to kick-off the work with someone more expert
> > > > > > than I
> > > > > > am
> > > > > > with quickstart (Arx maybe?).
> > > > > >
> > > > > > Another iteration could be to start building an easy
> > > > > > interface to
> > > > > > select which Tempest tests we want a TripleO CI job to run
> > > > > > and
> > > > > > plug
> > > > > > it
> > > > > > to our CI tooling (tripleo-quickstart I presume).
> > > > >
> > > > > Running a subset of Tempest tests isn't the same thing as
> > > > > designing
> > > > > (and owning) your own test suite that targets the things that
> > > > > mean
> > > > > the
> > > > > most to our community (namely speed and coverage). Even giving
> > > > > up
> > > > > 5-10
> > > > > minutes of runtime...just to be able to run Tempest isn't
> > > > > something
> > > > > that some of us would be willing to do.
> > > >
> > > > As I mentioned, you can do it with Tempest (the library). You can
> > > > have your own test suite that does exactly what you are asking
> > > > (namely, a set of scenario tests based on Heat which targets the
> > > > TripleO use case) in a Tempest plugin and there is no absolute
> > > > reason
> > > > that those tests should add 5-10 minutes of runtime compared to
> > > > pingtest.
> > > >
> > > > It/they would be exactly pingtest, only implemented using a
> > > > different
> > > > library and running with a different runner, with the *exact*
> > > > same
> > > > run time.
> > > >
> > > > Obvious advantages: only one technology used to run tests, so if
> > > > anyone else wants to run additional tests, there is no need to
> > > > maintain two code paths; reuse of a big and proven library of
> > > > tests
> > > > and test runner tools.
> > >
> > > I like the idea of getting pingtest out of tripleo.sh as more of a
> > > stand alone tool. I would support an effort that re-implemented
> > > it...
> > > and using tempest-lib would be totally fine. And as you point out
> > > one
> > > could even combine these tests with a more common "Tempest" run
> > > that
> > > incorporates the scenarios, etc.
> >
> > I don't understand why we would re-implement the pingtest in a
> > tempest plugin.
> > Could you please tell us what the technical difference is between
> > what
> > this scenario does:
> > https://github.com/openstack/tempest/blob/master/tempest/scenario/tes
> > t_volume_boot_pattern.py
> >
> > And this pingtest:
> > https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pi
> > ngtests/tenantvm_floatingip.yaml
> >
> > They both create a Cinder volume, snapshot it in Glance, and spawn
> > a Nova server from the volume.
> >
> > What does one do that the other one doesn't?
>
> I don't think these are the same things. Does the Tempest test even
> create a floating IP? And in the case of pingtest we also cover the Heat
> API in the overcloud (also valuable coverage). And even if they could
> be made to match today, is there any guarantee that they wouldn't diverge
> in the future, or would maintain the same speed goals, as that test lives in
> Tempest (and most TripleO cores don't review there)?
>
> The main difference that I care about is that it is easier for us to
> maintain and fix the pingtest variant at this point. We care a lot
> about our CI, and like I said before, increasing the runtime isn't
> something we could easily tolerate. I'm willing to entertain reuse so
> long as it also allows us the speed and control we desire.
>
> >
> > > To me the message is clear that we DO NOT want to consume the
> > > normal
> > > Tempest scenarios in TripleO upstream CI at this point. Sure there
> > > is
> > > overlap there, but the focus of those tests is just plain
> > > different...
> >
> > I haven't seen strong pushback in this thread except from you.
>
> Perhaps most cores haven't weighed in on this issue because moving to
> Tempest(-lib) isn't the most pressing issue ATM. We have a lot of
> architectural changes happening at the moment for example and that is
> why I only replied to this thread this week.
>
> > I'm against overlap in general and this one is pretty obvious. Why
> > would we maintain a tripleo-specific Tempest scenario when existing
> > ones would work for us? Please give me a technical reason why the
> > existing scenarios are not good enough.
>
> We maintain it because we care about speed I think.

Re: [Openstack] there is no resource usage panel after I installed telemetry service

2017-05-04 Thread Shake Chen
Because Horizon has removed the Ceilometer panels.

The Telemetry service now has no Horizon plugin.

On Fri, May 5, 2017 at 12:28 PM, Cheung 楊禮銓  wrote:

>
> I installed Ocata's Telemetry service.
>
> When I log in to Horizon, I do not find the Resource Usage panel.
>
> Did I miss something?
>
> My system is Ubuntu 16.04.
> The OpenStack version is Ocata.
>
> Installation Tutorials
>
> https://docs.openstack.org/project-install-guide/ocata/
> ubuntu-services.html
>
>
>
>
>


-- 
Shake Chen


[Openstack] there is no resource usage panel after I installed telemetry service

2017-05-04 Thread Cheung 楊禮銓

I installed Ocata's Telemetry service.

When I log in to Horizon, I do not find the Resource Usage panel.

Did I miss something?

My system is Ubuntu 16.04.
The OpenStack version is Ocata.

Installation Tutorials

https://docs.openstack.org/project-install-guide/ocata/ubuntu-services.html






[openstack-dev] [keystone][nova][policy] policy goals and roadmap

2017-05-04 Thread Lance Bragstad
Hi all,

I spent some time today summarizing a discussion [0] about global roles. I
figured it would help build some context for next week, as there are a
couple of cross-project policy/RBAC sessions at the Forum.

The first patch is a very general document trying to nail down our policy
goals [1]. The second is a proposed roadmap (given the existing patches and
direction) of how we can mitigate several of the security issues we face
today with policy across OpenStack [2].

Feel free to poke holes as it will hopefully lead to productive discussions
next week.

Thanks!


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-05-04.log.html#t2017-05-04T15:00:41
[1] https://review.openstack.org/#/c/460344/7
[2] https://review.openstack.org/#/c/462733/3
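For readers who have not followed the policy-in-code work, the kind of default
rule these documents build on is registered with oslo.policy roughly as in the
sketch below; the rule name and check string are made up for illustration and
are not taken from the linked reviews.

# Illustration of how a service registers and enforces a default policy rule
# with oslo.policy. The rule name and check string are made-up examples; they
# are not taken from the linked reviews.
from oslo_config import cfg
from oslo_policy import policy

rules = [
    policy.RuleDefault(
        name='example:get_widget',
        check_str='role:admin or project_id:%(project_id)s',
        description='Show a widget.',
    ),
]

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults(rules)

# At request time the service checks the caller's credentials against the rule.
allowed = enforcer.enforce(
    'example:get_widget',
    target={'project_id': 'tenant-a'},
    creds={'roles': ['member'], 'project_id': 'tenant-a'},
)
print(allowed)  # True here, because the project_id check matches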


[openstack-dev] [senlin] weekly meeting cancelled for next two weeks

2017-05-04 Thread Qiming Teng
Folks, the team will be in Boston attending the summit next week, so
we won't be able to hold the meeting on May 9th. We will also skip the
May 16th meeting because most of us will have just returned home from
the trip.

- Qiming




Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Edgar Magana
What??? Are you OK?
Your email surprised me.

Sent from my iPhone

> On May 4, 2017, at 6:55 AM, Rossella Sblendido  wrote:
> 
> Hi all,
> 
> I've moved to a new position recently and despite my best intentions I
> was not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
> 
> It's been a great experience working with you all, I learned a lot both
> on the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to
> contact me if you have any question or would like my feedback on something.
> 
> ciao,
> 
> Rossella
> 


Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun] Consolidate docker installation

2017-05-04 Thread Hongbin Lu
Hi all,

Just want to give a little bit of an update on this. After discussing with the QA
team, we agreed to create a dedicated repo for this purpose:
https://github.com/openstack/devstack-plugin-container . In addition, a few
patches [1][2][3] were proposed to different projects for switching to this
common devstack plugin. I hope more teams will be interested in using this plugin
and will help out to improve and maintain it.

[1] https://review.openstack.org/#/c/457348/
[2] https://review.openstack.org/#/c/461210/
[3] https://review.openstack.org/#/c/461212/

Best regards,
Hongbin

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: April-02-17 8:17 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun]
> Consolidate docker installation
> 
> Hongbin,
> 
> Nice. +1 in theory :) For the etcd one, I have a WIP for the etcd/DLM;
> please see here: https://review.openstack.org/#/c/445432/
> 
> -- Dims
> 
> On Sun, Apr 2, 2017 at 8:13 PM, Hongbin Lu 
> wrote:
> > Hi devstack team,
> >
> >
> >
> > Please find my proposal about consolidating docker installation into
> > one place that is devstack tree:
> >
> >
> >
> > https://review.openstack.org/#/c/452575/
> >
> >
> >
> > Currently, there are several projects that install docker in their
> > devstack plugins in various different ways. This potentially
> > introduces issues if more than one such service is enabled in devstack,
> > because the same software package will be installed and configured
> > multiple times. To resolve the problem, an option is to consolidate the
> > docker installation script into one place so that all projects will
> > leverage it. Before continuing this effort, I wanted to get early
> > feedback to confirm whether this kind of work will be accepted. BTW,
> > etcd installation might have a similar problem and I would be happy to
> > contribute another patch to consolidate it if that is accepted as well.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread a...@vn.fujitsu.com
Rossella,
I'm so sad to hear that!

It's been my pleasure to work with you. I appreciated your great guidance and I have
learned a lot.
Thank you so much, and all the best for your future!

Will miss you,
An

> -Original Message-
> From: Rossella Sblendido [mailto:rsblend...@suse.com] 
> Sent: Thursday, May 04, 2017 8:52 PM
> To: openstack-dev@lists.openstack.org
> Cc: Kevin Benton
> Subject: [openstack-dev] [Neutron] stepping down from core

> Hi all,

> I've moved to a new position recently and despite my best intentions I was 
> not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.

> It's been a great experience working with you all, I learned a lot both on 
> the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to 
> contact me if you have any question or would like my feedback on something.

> ciao,

> Rossella
>


Re: [Openstack] cloud-init not start ini ubuntu 17.04

2017-05-04 Thread Adhi Priharmanto
Hi Bob,

Yes, I'm following that tutorial, creating a Glance image from an existing
XenServer VM:

   - build the VM from scratch using the "16.04 template" and "other
   installation media"
   - update & upgrade the VM OS
   - install the cloud-init package, with no change to the cloud-init
   configuration (using the default settings)
   - reboot the VM to test cloud-init; there is no output showing
   cloud-init activity, and there is no process associated with cloud-init in
   "/var/log/syslog"
   - export the VDI, compress the VHD, upload to Glance (a rough upload
   sketch follows below)
   - start an instance using the custom image; it only gets the IP address.
   To gather instance metadata, "cloud-init init" must be executed manually
   after the instance has completely booted.
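A rough sketch of that upload step, using the openstacksdk cloud layer. The
cloud entry, file name, and the disk-format/vm_mode values are assumptions to
double-check against the image-generation guide Bob linked, not settings
confirmed in this thread.

# Rough sketch of the "upload to Glance" step for a XenServer VHD image.
# The cloud entry, file name and property values are assumptions to verify
# against the image-generation guide, not confirmed settings.
import openstack

conn = openstack.connect(cloud='mycloud')   # placeholder clouds.yaml entry

image = conn.create_image(
    name='ubuntu-17.04-cloud-init',
    filename='ubuntu-17.04.vhd.tgz',        # the compressed VHD export
    disk_format='vhd',
    container_format='ovf',
    vm_mode='xen',                          # extra property; verify for your setup
    wait=True,
)
print(image.id)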


On Thu, May 4, 2017 at 11:32 PM, Bob Ball  wrote:

> Hi Adhi,
>
>
>
> Did you follow a guide, such as http://citrix-openstack.
> siteleaf.net/posts/generating-images-for-xenserver-in-openstack/ for
> generating the image?  If not, how was the image generated?
>
>
>
> What exactly is the output from the 17.04 image you’re using?
>
>
>
> Thanks,
>
>
>
> Bob
>
>
>
> *From:* Adhi Priharmanto [mailto:adhi@gmail.com]
> *Sent:* 03 May 2017 16:36
> *To:* openstack 
> *Subject:* [Openstack] cloud-init not start ini ubuntu 17.04
>
>
>
> hi all,
>
> I just created an Ubuntu 17.04 custom image for working with OpenStack on
> XenServer. After installing & updating+upgrading the Ubuntu 17.04 base OS, I
> installed cloud-init, then rebooted it to test cloud-init, but I can't see a
> cloud-init process during the Ubuntu 17.04 OS boot.
>
> Can anyone help or give me a suggestion?
>
>
> --
>
> Cheers,
>
>
>
>
>
> *Adhi Priharmanto*
>
> about.me/a_dhi
>
>
>
>
> +62-812-82121584
>
>
>



-- 
Cheers,

Adhi Priharmanto
about.me/a_dhi
+62-812-82121584


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Zane Bitter

On 04/05/17 10:14, Thierry Carrez wrote:

Chris Dent wrote:

On Wed, 3 May 2017, Drew Fisher wrote:

"Most large customers move slowly and thus are running older versions,
which are EOL upstream sometimes before they even deploy them."


Can someone with more of the history give more detail on where the
expectation arose that upstream ought to be responsible for things like
long-term support? I had always understood that such features were
part of the way in which the corporately available products added
value?


We started with no stable branches, we were just producing releases and
ensuring that updates vaguely worked from N-1 to N. There were a lot of
distributions, and they all maintained their own stable branches,
handling backport of critical fixes. That is a pretty classic upstream /
downstream model.

Some of us (including me) spotted the obvious duplication of effort
there, and encouraged distributions to share that stable branch
maintenance work rather than duplicate it. Here the stable branches were
born, mostly through a collaboration between Red Hat developers and
Canonical developers. All was well. Nobody was saying LTS back then
because OpenStack was barely usable so nobody wanted to stay on any
given version for too long.


Heh, if you go back _that_ far then upgrades between versions basically 
weren't feasible, so everybody stayed on a given version for too long. 
It's true that nobody *wanted* to though :D



Maintaining stable branches has a cost. Keeping the infrastructure that
ensures that stable branches are actually working is a complex endeavor
that requires people to constantly pay attention. As time passed, we saw
the involvement of distro packagers become more limited. We therefore
limited the number of stable branches (and the length of time we
maintained them) to match the staffing of that team.


I wonder if this is one that needs revisiting. There was certainly a 
time when closing a branch came with a strong sense of relief that you 
could stop nursing the gate. I personally haven't felt that way in a 
couple of years, thanks to a lot of *very* hard work done by the folks 
looking after the gate to systematically solve a lot of those recurring 
issues (e.g. by introducing upper constraints). We're still assuming 
that stable branches are expensive, but what if they aren't any more?



Fast-forward to
today: the stable team is mostly one person, who is now out of his job
and seeking employment.

In parallel, OpenStack became more stable, so the demand for longer-term
maintenance is stronger. People still expect "upstream" to provide it,
not realizing upstream is made of people employed by various
organizations, and that apparently their interest in funding work in
that area is pretty dead.

I agree that our current stable branch model is inappropriate:
maintaining stable branches for one year only is a bit useless. But I
only see two outcomes:

1/ The OpenStack community still thinks there is a lot of value in doing
this work upstream, in which case organizations should invest resources
in making that happen (starting with giving the Stable branch
maintenance PTL a job), and then, yes, we should definitely consider
things like LTS or longer periods of support for stable branches, to
match the evolving usage of OpenStack.


Speaking as a downstream maintainer, it sucks that backports I'm still 
doing to, say, Liberty don't benefit anybody but Red Hat customers, 
because there's nowhere upstream that I can share them. I want everyone 
in the community to benefit. Even if I could only upload patches to 
Gerrit and not merge them, that would at least be something.


(In a related bugbear, why must we delete the branch at EOL? This is 
pure evil for consumers of the code. It breaks existing git checkouts 
and thousands of web links in bug reports, review comments, IRC logs...)



2/ The OpenStack community thinks this is better handled downstream, and
we should just get rid of them completely. This is a valid approach, and
a lot of other open source communities just do that.


Maybe we need a 5th 'Open', because to me the idea that the software 
isn't so much 'released' as 'abandoned' is problematic in many of the 
same ways that Open Core and code dumps are.


cheers,
Zane.


The current reality in terms of invested resources points to (2). I
personally would prefer (1), because that lets us address security
issues more efficiently and avoids duplicating effort downstream. But
unfortunately I don't control where development resources are posted.






Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-05-04 Thread Rochelle Grober
I arrive Sunday afternoon and leave Friday morning, so you can bracket and 
schedule your drink buying ;-)

My view of Rocky is that of a *solid* base to build on.  One that withstands 
the ravages of storms and squalls.  So, I look forward to helping to reinforce 
and expand the stability of OpenStack projects in the Rocky release.  And if 
you like, I've got a cute picture of our dog, Rocky (Secundus -- I existed 
first) as an informal mascot for the release.

--Rocky

> -Original Message-
> From: Tom Barron [mailto:t...@dyncloud.net]
> Sent: Friday, April 28, 2017 2:58 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] All Hail our Newest Release Name - OpenStack
> Rocky
> 
> 
> 
> On 04/28/2017 05:54 PM, Monty Taylor wrote:
> > Hey everybody!
> >
> > There isn't a ton more to say past the subject. The "R" release of
> > OpenStack shall henceforth be known as "Rocky".
> >
> > I believe it's the first time we've managed to name a release after a
> > community member - so please everyone buy RockyG a drink if you see
> > her in Boston.
> 
> Deal!
> 
> 
> >
> > For those of you who remember the actual election results, you may
> > recall that "Radium" was the top choice. Radium was judged to have
> > legal risk, so as per our name selection process, we moved to the next
> > name on the list.
> >
> > Monty
> >
> >


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-04 Thread Matthew Treinish
On Fri, May 05, 2017 at 09:29:40AM +1200, Steve Baker wrote:
> On Thu, May 4, 2017 at 3:56 PM, Matthew Treinish 
> wrote:
> 
> > On Wed, May 03, 2017 at 11:51:13AM +, Andrea Frittoli wrote:
> > > On Tue, May 2, 2017 at 5:33 PM Matthew Treinish 
> > > wrote:
> > >
> > > > On Tue, May 02, 2017 at 09:49:14AM +0530, Rabi Mishra wrote:
> > > > > On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli <
> > > > andrea.fritt...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> > > > wrote:
> > > > > >
> > > > > >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
> > > > > >> andrea.fritt...@gmail.com> wrote:
> > > > > >>
> > > > > >>> Dear stackers,
> > > > > >>>
> > > > > >>> starting in the Liberty cycle Tempest has defined a set of
> > projects
> > > > > >>> which are in scope for direct
> > > > > >>> testing in Tempest [0]. The current list includes keystone, nova,
> > > > > >>> glance, swift, cinder and neutron.
> > > > > >>> All other projects can use the same Tempest testing
> > infrastructure
> > > > (or
> > > > > >>> parts of it) by taking advantage
> > > > > >>> the Tempest plugin and stable interfaces.
> > > > > >>>
> > > > > >>> Tempest currently hosts a set of API tests as well as a service
> > > > client
> > > > > >>> for the Heat project.
> > > > > >>> The Heat service client is used by the tests in Tempest, which
> > run in
> > > > > >>> Heat gate as part of the grenade
> > > > > >>> job, as well as in the Tempest gate (check pipeline) as part of
> > the
> > > > > >>> layer4 job.
> > > > > >>> According to code search [3] the Heat service client is also
> > used by
> > > > > >>> Murano and Daisycore.
> > > > > >>>
> > > > > >>
> > > > > >> For the heat grenade job, I've proposed two patches.
> > > > > >>
> > > > > >> 1. To run heat tree gabbi api tests as part of grenade
> > 'post-upgrade'
> > > > > >> phase
> > > > > >>
> > > > > >> https://review.openstack.org/#/c/460542/
> > > > > >>
> > > > > >> 2. To remove tempest tests from the grenade job
> > > > > >>
> > > > > >> https://review.openstack.org/#/c/460810/
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >>> I proposed a patch to Tempest to start the deprecation counter
> > for
> > > > Heat
> > > > > >>> / orchestration related
> > > > > >>> configuration items in Tempest [4], and I would like to make sure
> > > > that
> > > > > >>> all tests and the service client
> > > > > >>> either find a new home outside of Tempest, or are removed, by
> > the end
> > > > > >>> the Pike cycle at the latest.
> > > > > >>>
> > > > > >>> Heat has in-tree integration tests and Gabbi based API tests,
> > but I
> > > > > >>> don't know if those provide
> > > > > >>> enough coverage to replace the tests on Tempest side.
> > > > > >>>
> > > > > >>>
> > > > > >> Yes, the heat gabbi api tests do not yet have the same coverage
> > as the
> > > > > >> tempest tree api tests (lacks tests using nova, neutron and swift
> > > > > >> resources),  but I think that should not stop us from *not*
> > running
> > > > the
> > > > > >> tempest tests in the grenade job.
> > > > > >>
> > > > > >> I also don't know if the tempest tree heat tests are used by any
> > other
> > > > > >> upstream/downstream jobs. We could surely add more tests to bridge
> > > > the gap.
> > > > > >>
> > > > > >> Also, It's possible to run the heat integration tests (we've
> > enough
> > > > > >> coverage there) with tempest plugin after doing some initial
> > setup,
> > > > as we
> > > > > >> do in all our dsvm gate jobs.
> > > > > >>
> > > > > >> It would propose to move tests and client to a Tempest plugin
> > owned /
> > > > > >>> maintained by
> > > > > >>> the Heat team, so that the Heat team can have full flexibility in
> > > > > >>> consolidating their integration
> > > > > >>> tests. For Murano and Daisycloud - and any other team that may
> > want
> > > > to
> > > > > >>> use the Heat service
> > > > > >>> client in their tests, even if the client is removed from
> > Tempest, it
> > > > > >>> would still be available via
> > > > > >>> the Heat Tempest plugin. As long as the plugin implements the
> > service
> > > > > >>> client interface,
> > > > > >>> the Heat service client will register automatically in the
> > service
> > > > > >>> client manager and be available
> > > > > >>> for use as today.
> > > > > >>>
> > > > > >>>
> > > > > >> if I understand correctly, you're proposing moving the existing
> > > > tempest
> > > > > >> tests and service clients to a separate repo managed by heat team.
> > > > Though
> > > > > >> that would be collective decision, I'm not sure that's something I
> > > > would
> > > > > >> like to do. To start with we may look at adding some of the
> > missing
> > > > pieces
> > > > > >> in heat tree itself.
> > > > > >>
> > > > > >
> > > > > > I'm proposing to move tests and the service client outside of
> > tempest
> > > > to a
> > 

Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Dan Prince
On Thu, 2017-05-04 at 14:11 -0400, Emilien Macchi wrote:
> On Thu, May 4, 2017 at 9:41 AM, Dan Prince 
> wrote:
> > On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> > > - Original Message -
> > > > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > > > > (cross-posting)
> > > > > Instead of running the Pingtest, we would execute a Tempest
> > > > > Scenario
> > > > > that boots an instance from a volume (like Pingtest is already
> > > > > doing)
> > > > > and see how it goes (in terms of coverage and runtime).
> > > > > I volunteer to kick-off the work with someone more expert
> > > > > than I
> > > > > am
> > > > > with quickstart (Arx maybe?).
> > > > > 
> > > > > Another iteration could be to start building an easy
> > > > > interface to
> > > > > select which Tempest tests we want a TripleO CI job to run
> > > > > and
> > > > > plug
> > > > > it
> > > > > to our CI tooling (tripleo-quickstart I presume).
> > > > 
> > > > Running a subset of Tempest tests isn't the same thing as
> > > > designing
> > > > (and owning) your own test suite that targets the things that
> > > > mean
> > > > the
> > > > most to our community (namely speed and coverage). Even giving
> > > > up
> > > > 5-10
> > > > minutes of runtime...just to be able to run Tempest isn't
> > > > something
> > > > that some of us would be willing to do.
> > > 
> > > As I mentioned, you can do it with Tempest (the library). You can
> > > have your own test suite that does exactly what you are asking
> > > (namely, a set of scenario tests based on Heat which targets the
> > > TripleO use case) in a Tempest plugin and there is no absolute
> > > reason
> > > that those tests should add 5-10 minutes of runtime compared to
> > > pingtest.
> > > 
> > > It/they would be exactly pingtest, only implemented using a
> > > different
> > > library and running with a different runner, with the *exact*
> > > same
> > > run time.
> > > 
> > > Obvious advantages: only one technology used to run tests, so if
> > > anyone else wants to run additional tests, there is no need to
> > > maintain two code paths; reuse of a big and proven library of
> > > tests
> > > and test runner tools.
> > 
> > I like the idea of getting pingtest out of tripleo.sh as more of a
> > stand alone tool. I would support an effort that re-implemented
> > it...
> > and using tempest-lib would be totally fine. And as you point out
> > one
> > could even combine these tests with a more common "Tempest" run
> > that
> > incorporates the scenarios, etc.
> 
> I don't understand why we would re-implement the pingtest in a
> tempest plugin.
> Could you please tell us what the technical difference is between
> what
> this scenario does:
> https://github.com/openstack/tempest/blob/master/tempest/scenario/tes
> t_volume_boot_pattern.py
> 
> And this pingtest:
> https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pi
> ngtests/tenantvm_floatingip.yaml
> 
> They both create a Cinder volume, snapshot it in Glance, and spawn
> a Nova server from the volume.
>
> What does one do that the other one doesn't?

I don't think these are the same things. Does the Tempest test even
create a floating IP? And in the case of pingtest we also cover the Heat
API in the overcloud (also valuable coverage). And even if they could
be made to match today, is there any guarantee that they wouldn't diverge
in the future, or would maintain the same speed goals, as that test lives in
Tempest (and most TripleO cores don't review there)?

The main difference that I care about is that it is easier for us to
maintain and fix the pingtest variant at this point. We care a lot
about our CI, and like I said before, increasing the runtime isn't
something we could easily tolerate. I'm willing to entertain reuse so
long as it also allows us the speed and control we desire.
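For what it's worth, the Heat API coverage mentioned here comes from the fact
that pingtest is driven entirely through Heat: roughly the sketch below, with
the auth values and template path as placeholders. This is an illustration of
the flow, not the actual CI script, which also passes stack parameters.

# Illustration of why pingtest exercises the overcloud Heat API: it creates a
# stack from the ci/pingtests template and waits for CREATE_COMPLETE. The auth
# values and template path are placeholders, not the actual CI configuration.
import time

from heatclient import client as heat_client
from keystoneauth1 import loading
from keystoneauth1 import session

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://192.0.2.10:5000/v3',   # placeholder overcloud endpoint
    username='admin', password='secret', project_name='admin',
    user_domain_name='Default', project_domain_name='Default',
)
heat = heat_client.Client('1', session=session.Session(auth=auth))

with open('tenantvm_floatingip.yaml') as f:
    template = f.read()

created = heat.stacks.create(stack_name='tenant-stack', template=template)
stack_id = created['stack']['id']

# Poll until the stack settles; the floating IP to ping is a stack output.
while heat.stacks.get(stack_id).stack_status == 'CREATE_IN_PROGRESS':
    time.sleep(10)
stack = heat.stacks.get(stack_id)
print(stack.stack_status)   # expect CREATE_COMPLETE
print(stack.outputs)        # includes the address used for the ping check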

> 
> > To me the message is clear that we DO NOT want to consume the
> > normal
> > Tempest scenarios in TripleO upstream CI at this point. Sure there
> > is
> > overlap there, but the focus of those tests is just plain
> > different...
> 
> I haven't seen strong pushback in this thread except from you.

Perhaps most cores haven't weighed in on this issue because moving to
Tempest(-lib) isn't the most pressing issue ATM. We have a lot of
architectural changes happening at the moment for example and that is
why I only replied to this thread this week.

> I'm against overlap in general and this one is pretty obvious. Why
> would we maintain a tripleo-specific Tempest scenario when existing
> ones would work for us? Please give me a technical reason why the
> existing scenarios are not good enough.

We maintain it because we care about speed, I think. Also, the re-use
you talk about here has a fairly large cost to us in that we'd be
introducing new dependencies and code reviews for TripleO. All of this
has a cost too... when it really comes down to it I think the simpler
implementation of ping test suits what we do and need in 

Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-04 Thread Steve Baker
On Thu, May 4, 2017 at 3:56 PM, Matthew Treinish 
wrote:

> On Wed, May 03, 2017 at 11:51:13AM +, Andrea Frittoli wrote:
> > On Tue, May 2, 2017 at 5:33 PM Matthew Treinish 
> > wrote:
> >
> > > On Tue, May 02, 2017 at 09:49:14AM +0530, Rabi Mishra wrote:
> > > > On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli <
> > > andrea.fritt...@gmail.com>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> > > wrote:
> > > > >
> > > > >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
> > > > >> andrea.fritt...@gmail.com> wrote:
> > > > >>
> > > > >>> Dear stackers,
> > > > >>>
> > > > >>> starting in the Liberty cycle Tempest has defined a set of
> projects
> > > > >>> which are in scope for direct
> > > > >>> testing in Tempest [0]. The current list includes keystone, nova,
> > > > >>> glance, swift, cinder and neutron.
> > > > >>> All other projects can use the same Tempest testing
> infrastructure
> > > (or
> > > > >>> parts of it) by taking advantage
> > > > >>> the Tempest plugin and stable interfaces.
> > > > >>>
> > > > >>> Tempest currently hosts a set of API tests as well as a service
> > > client
> > > > >>> for the Heat project.
> > > > >>> The Heat service client is used by the tests in Tempest, which
> run in
> > > > >>> Heat gate as part of the grenade
> > > > >>> job, as well as in the Tempest gate (check pipeline) as part of
> the
> > > > >>> layer4 job.
> > > > >>> According to code search [3] the Heat service client is also
> used by
> > > > >>> Murano and Daisycore.
> > > > >>>
> > > > >>
> > > > >> For the heat grenade job, I've proposed two patches.
> > > > >>
> > > > >> 1. To run heat tree gabbi api tests as part of grenade
> 'post-upgrade'
> > > > >> phase
> > > > >>
> > > > >> https://review.openstack.org/#/c/460542/
> > > > >>
> > > > >> 2. To remove tempest tests from the grenade job
> > > > >>
> > > > >> https://review.openstack.org/#/c/460810/
> > > > >>
> > > > >>
> > > > >>
> > > > >>> I proposed a patch to Tempest to start the deprecation counter
> for
> > > Heat
> > > > >>> / orchestration related
> > > > >>> configuration items in Tempest [4], and I would like to make sure
> > > that
> > > > >>> all tests and the service client
> > > > >>> either find a new home outside of Tempest, or are removed, by
> the end
> > > > >>> the Pike cycle at the latest.
> > > > >>>
> > > > >>> Heat has in-tree integration tests and Gabbi based API tests,
> but I
> > > > >>> don't know if those provide
> > > > >>> enough coverage to replace the tests on Tempest side.
> > > > >>>
> > > > >>>
> > > > >> Yes, the heat gabbi api tests do not yet have the same coverage
> as the
> > > > >> tempest tree api tests (lacks tests using nova, neutron and swift
> > > > >> resources),  but I think that should not stop us from *not*
> running
> > > the
> > > > >> tempest tests in the grenade job.
> > > > >>
> > > > >> I also don't know if the tempest tree heat tests are used by any
> other
> > > > >> upstream/downstream jobs. We could surely add more tests to bridge
> > > the gap.
> > > > >>
> > > > >> Also, It's possible to run the heat integration tests (we've
> enough
> > > > >> coverage there) with tempest plugin after doing some initial
> setup,
> > > as we
> > > > >> do in all our dsvm gate jobs.
> > > > >>
> > > > >> It would propose to move tests and client to a Tempest plugin
> owned /
> > > > >>> maintained by
> > > > >>> the Heat team, so that the Heat team can have full flexibility in
> > > > >>> consolidating their integration
> > > > >>> tests. For Murano and Daisycloud - and any other team that may
> want
> > > to
> > > > >>> use the Heat service
> > > > >>> client in their tests, even if the client is removed from
> Tempest, it
> > > > >>> would still be available via
> > > > >>> the Heat Tempest plugin. As long as the plugin implements the
> service
> > > > >>> client interface,
> > > > >>> the Heat service client will register automatically in the
> service
> > > > >>> client manager and be available
> > > > >>> for use as today.
> > > > >>>
> > > > >>>
> > > > >> if I understand correctly, you're proposing moving the existing
> > > tempest
> > > > >> tests and service clients to a separate repo managed by heat team.
> > > Though
> > > > >> that would be collective decision, I'm not sure that's something I
> > > would
> > > > >> like to do. To start with we may look at adding some of the
> missing
> > > pieces
> > > > >> in heat tree itself.
> > > > >>
> > > > >
> > > > > I'm proposing to move tests and the service client outside of
> tempest
> > > to a
> > > > > new home.
> > > > >
> > > > > I also suggested that the new home could be a dedicate repo, since
> that
> > > > > would allow you to maintain the
> > > > > current branchless nature of those tests. A more detailed
> discussion
> > > about
> > > > > the topic can be found
> > > > > in the corresponding proposed queens 

[openstack-dev] [refstack] No RefStack IRC meeting next week (May 9, 2017)

2017-05-04 Thread Catherine Cuong Diep

Hi Everyone,

There will be no RefStack IRC meeting next week. We will resume the meeting on
May 16, 2017.

Catherine Diep


[openstack-dev] [neutron] neutron-lib impact: ml2 MechanismDriver and constants are now in neutron-lib

2017-05-04 Thread Boden Russell
If your project uses the MechanismDriver class or associated constants
in neutron.plugins.ml2.driver_api, please read on. If not, it's probably
safe to delete this message.

For details on what's been rehomed, please see [1].

Suggested actions:
- If you're a stadium project, you should already be covered by [2].
- If not, and your project uses any of the rehomed code [1], please
update your imports to use the neutron-lib version (see the sketch below).

We can discuss when to land the neutron patch that removes the rehomed
code [3] in our weekly neutron meeting.
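As a concrete example of the import switch, here is a made-up minimal driver;
only the import line is the point, the driver itself is illustrative.

# Made-up minimal mechanism driver showing the import switch: subclass the
# neutron-lib copy of MechanismDriver instead of the one being removed from
# neutron.plugins.ml2.driver_api.
from neutron_lib.plugins.ml2 import api
# previously: from neutron.plugins.ml2 import driver_api as api


class NoopMechanismDriver(api.MechanismDriver):
    """Do-nothing driver, used only to illustrate the new import path."""

    def initialize(self):
        pass

    def create_port_postcommit(self, context):
        # 'context' is a PortContext; a real driver would program its
        # backend here.
        pass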

Thanks


[1] https://review.openstack.org/#/c/428997/
[2]
https://review.openstack.org/#/q/message:%22+use+MechanismDriver+from+neutron-lib%22
[3] https://review.openstack.org/#/c/462731



Re: [Openstack-operators] How do you determine remaining (cloud) capacity?

2017-05-04 Thread Joshua Harlow

Blair Bethwaite wrote:

On 5 May 2017 at 03:26, Joshua Harlow  wrote:

Though technically not horrible, it does seem like the various openstack
project APIs should provide their own projections of this same data (without
needing to scrape it, send it to hadoop and then do various projections
there).


Consistent APIs to query this data (just Nova and Cinder would do),
from the same perspective as the respective schedulers see it as
opposed to some kind of error prone scraping, would be brilliant. The
cell capacity API has some flavor capacity information (we save that
for display: https://status.rc.nectar.org.au/capacity/) but, at least
in cellsv1, the capacity information is useless if there are any
non-toy nova-scheduler constraints in place, e.g., it doesn't know if
a host cannot launch a certain flavor.



Neat, yours at https://status.rc.nectar.org.au/ looks better than ours.

Is that whole code and data pipeline anywhere in the open?

Maybe it would be useful to work on a common project(?) while at the same time 
putting pressure on the projects themselves to get the data out of their 
schedulers (in a queryable manner, one that does not result in spinning 
up *actual* instances, volumes..., but instead tells you how many 'spin 
ups' could be possible or whether the spin up you requested could be satisfied).


-Josh
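For anyone who wants a starting point, the "scraping" most of us do today
boils down to something like the sketch below against Nova's hypervisor
statistics API; it knows nothing about scheduler filters, host aggregates or
overcommit ratios, which is exactly the gap being discussed. Auth values are
placeholders.

# Naive capacity estimate from Nova's aggregate hypervisor statistics. It
# ignores scheduler filters, aggregates and overcommit, which is exactly the
# gap discussed above. Auth values are placeholders.
from keystoneauth1 import loading
from keystoneauth1 import session
from novaclient import client as nova_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://192.0.2.10:5000/v3', username='admin', password='secret',
    project_name='admin', user_domain_name='Default',
    project_domain_name='Default',
)
nova = nova_client.Client('2.1', session=session.Session(auth=auth))

stats = nova.hypervisors.statistics()
free_vcpus = stats.vcpus - stats.vcpus_used
free_ram_mb = stats.memory_mb - stats.memory_mb_used

for flavor in nova.flavors.list():
    # How many more of this flavor would fit, pretending the cloud is a
    # single bucket of CPU and RAM.
    fit = min(free_vcpus // flavor.vcpus, free_ram_mb // flavor.ram)
    print("%s: roughly %d more could fit" % (flavor.name, fit))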



Re: [Openstack] Fwd: Openstack Architecture help

2017-05-04 Thread Konstantin Raskoshnyi
Hmm... Is your switch layer 3 or layer 2?

On Thu, May 4, 2017 at 12:29 PM, Muhammad Asif  wrote:

> Hi,
>
> Please find the attached pictures. I have mentioned some of the main issues.
>
> Thanks
>
>
>


[openstack-dev] [Neutron][L3-subteam] Weekly IRC meeting canceled on May 11th

2017-05-04 Thread Miguel Lavalle
Dear L3-subteam,

Due to the OpenStack Summit next week in Boston, we will cancel our weekly
meeting on May 11th. We will resume normally on May 18th.

See you in Boston!


[Openstack] Fwd: Openstack Architecture help

2017-05-04 Thread Muhammad Asif
Hi,

Please find the attached pictures. I have mentioned some of the main issues.

Thanks


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Miguel Lavalle
Good luck, Rossella! We are going to miss you

Miguel

On Thu, May 4, 2017 at 8:52 AM, Rossella Sblendido 
wrote:

> Hi all,
>
> I've moved to a new position recently and despite my best intentions I
> was not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
>
> It's been a great experience working with you all, I learned a lot both
> on the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to
> contact me if you have any question or would like my feedback on something.
>
> ciao,
>
> Rossella
>


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Thierry Carrez
Flavio Percoco wrote:
> On 04/05/17 11:18 -0400, Jonathan Proulx wrote:
>> On Thu, May 04, 2017 at 04:14:07PM +0200, Thierry Carrez wrote:
>> :I agree that our current stable branch model is inappropriate:
>> :maintaining stable branches for one year only is a bit useless. But I
>> :only see two outcomes:
>> :
>> :1/ The OpenStack community still thinks there is a lot of value in doing
>> :this work upstream, in which case organizations should invest resources
>> :in making that happen (starting with giving the Stable branch
>> :maintenance PTL a job), and then, yes, we should definitely consider
>> :things like LTS or longer periods of support for stable branches, to
>> :match the evolving usage of OpenStack.
>> :
>> :2/ The OpenStack community thinks this is better handled downstream, and
>> :we should just get rid of them completely. This is a valid approach, and
>> :a lot of other open source communities just do that.
>> :
>> :The current reality in terms of invested resources points to (2). I
>> :personally would prefer (1), because that lets us address security
>> :issues more efficiently and avoids duplicating effort downstream. But
>> :unfortunately I don't control where development resources are posted.
> 
> Have there been issues with downstream distros not addressing security
> fixes properly?

No, not at all -- but usually they package upstream vulnerability fixes,
which are produced on stable branches. In mode #2 we would only patch
master, forcing downstream to do backports for more branches. That is
what I meant by "more efficiently".

Sorry for being unclear.

-- 
Thierry Carrez (ttx)



[openstack-dev] [nova] Scheduler meeting canceled for next Monday

2017-05-04 Thread Edward Leafe
Due to most participants being at the Forum this coming week, we will not hold 
our weekly Scheduler sub team meeting on Monday, May 8. Please join us the 
following Monday (May 15) in #openstack-meeting-alt at 1400 UTC.


-- Ed Leafe








Re: [Openstack] Deployment for production

2017-05-04 Thread Haïkel
2017-05-03 10:41 GMT+02:00 Fawaz Mohammed :
> Hi Satish,
>
> I believe RDO is not meant for production. I prefer to use the
> original upstream project "TripleO", as it has better documentation.
>

It is meant for production, but it is community-supported.
I won't comment further, but many people are working on RDO to make it
usable, either as full-time RH employees (such as myself) or as community
contributors (as I did previously).

Regards,
H.

> Other production grade deployment tools are:
> Fuel:
> https://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide.html
> Support CentOS and Ubuntu as hosts.
>
> Charm:
> https://docs.openstack.org/developer/charm-guide/
> Support Ubuntu only.
>
>
> On May 3, 2017 11:00 AM, "Satish Patel"  wrote:
>>
>> We did a POC on RDO and we are happy with the product, but now the question
>> is: should we use RDO for production deployment, or another open source
>> flavor available to deploy on prod? Not sure what the best method of
>> production deployment is.
>>
>> Sent from my iPhone


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Emilien Macchi
On Thu, May 4, 2017 at 9:41 AM, Dan Prince  wrote:
> On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
>> - Original Message -
>> > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
>> > > (cross-posting)
>> >
>> > > Instead of running the Pingtest, we would execute a Tempest
>> > > Scenario
>> > > that boots an instance from a volume (like Pingtest is already
>> > > doing)
>> > > and see how it goes (in terms of coverage and runtime).
>> > > I volunteer to kick-off the work with someone more expert than I
>> > > am
>> > > with quickstart (Arx maybe?).
>> > >
>> > > Another iteration could be to start building an easy interface to
>> > > select which Tempest tests we want a TripleO CI job to run and
>> > > plug
>> > > it
>> > > to our CI tooling (tripleo-quickstart I presume).
>> >
>> > Running a subset of Tempest tests isn't the same thing as designing
>> > (and owning) your own test suite that targets the things that mean
>> > the
>> > most to our community (namely speed and coverage). Even giving up
>> > 5-10
>> > minutes of runtime...just to be able to run Tempest isn't something
>> > that some of us would be willing to do.
>>
>> As I mentioned, you can do it with Tempest (the library). You can
>> have your own test suite that does exactly what you are asking
>> (namely, a set of scenario tests based on Heat which targets the
>> TripleO use case) in a Tempest plugin and there is no absolute reason
>> that those tests should add 5-10 minutes of runtime compared to
>> pingtest.
>>
>> It/they would be exactly pingtest, only implemented using a different
>> library and running with a different runner, with the *exact* same
>> run time.
>>
>> Obvious advantages: only one technology used to run tests, so if
>> anyone else want to run additional tests, there is no need to
>> maintain two code paths; reuse on a big and proven library of test
>> and test runner tools.
>
> I like the idea of getting pingtest out of tripleo.sh as more of a
> stand alone tool. I would support an effort that re-implemented it...
> and using tempest-lib would be totally fine. And as you point out one
> could even combine these tests with a more common "Tempest" run that
> incorporates the scenarios, etc.

I don't understand why we would re-implement the pingtest in a tempest plugin.
Could you please tell us what the technical difference is between what
this scenario does:
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_volume_boot_pattern.py

And this pingtest:
https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pingtests/tenantvm_floatingip.yaml

They both create a Cinder volume, snapshot it in Glance, and spawn
a Nova server from the volume.

What does one do that the other doesn't?

> To me the message is clear that we DO NOT want to consume the normal
> Tempest scenarios in TripleO upstream CI at this point. Sure there is
> overlap there, but the focus of those tests is just plain different...

I haven't seen strong pushback in this thread except from you.
I'm against overlap in general, and this one is pretty obvious. Why
would we maintain a TripleO-specific Tempest scenario when existing
ones would work for us? Please give me a technical reason why the
existing scenarios are not good enough.
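
For what it's worth, selecting just that scenario is roughly a one-liner
with the Tempest runner (a sketch only, assuming a tempest.conf already
configured for the overcloud):

    # create a workspace, then drop the overcloud tempest.conf into etc/
    tempest init overcloud-validate
    cd overcloud-validate
    # run only the volume-boot-pattern scenario
    tempest run --regex tempest.scenario.test_volume_boot_pattern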

> speed isn't a primary concern there as it is for us so I don't think we
> should do it now. And probably not ever unless the CI job time is less
> than an hour. Like even if we were able to tune a set of stock Tempest
> smoke tests today to our liking unless TripleO proper gates on the
> runtime of those not increasing we'd be at risk of breaking our CI
> queues as the wall time would potentially get too long. In this regard
> this entire thread is poorly named I think in that we are no longer
> talking about 'pingtest vs. tempest' but rather the implementation
> details of how we reimplement our existing pingtest to better suite the
> community.

What I would like to see, if we're going to use Tempest in our gate, is
at least one TripleO job running as voting in the Tempest gate.

Tempest folks: I need your support here. We have been running Puppet
jobs as non-voting and we have seen quite a number of patches that
broke us because folks were ignoring the jobs. If we switch TripleO to
use more Tempest, being in your gate might be required. We'll run the
fastest and most stable job that we have to make sure the impact for
you is minimal.

> So ++ for the idea of experimenting with the use of tempest.lib. But
> stay away from the idea of using Tempest smoke tests and the like for
> TripleO I think ATM.
>
> Its also worth noting there is some risk when maintaining your own in-
> tree Tempest tests [1]. If I understood that thread correctly that
> breakage wouldn't have occurred if the stable branch tests were gating
> Tempest proper... which is a very hard thing to do if we have our own
> in-tree stuff. So there is a cost to doing what you suggest here, but
> probably one that we'd be willing to accept.

Re: [openstack-dev] [TripleO] Deep dive Thursday May 4th 1400 UTC on deployed-server

2017-05-04 Thread James Slagle
On Wed, May 3, 2017 at 8:34 AM, James Slagle  wrote:
> I saw there was no deep dive scheduled for tomorrow, so I decided I'd
> go ahead and plan one.
>
> It will be recorded if you can't make it on the short notice.
>
> I plan to cover the "deployed-server" feature in TripleO. We have been
> using this feature since Newton to drive our multinode CI. I'll go
> over how the multinode CI uses this feature to test TripleO on
> pre-provisioned nodes.
>
> I'll also discuss the improvements that were done in Ocata to make
> this feature ready for end user consumption. Finally, I'll cover
> what's being done in Pike around using this feature more fully for an
> end to end "split-stack".
>
> Thursday May 4th at 1400 UTC at https://bluejeans.com/176756457/
>
> You don't want to miss it! (or maybe you do). Go Owls!
>
> --
> -- James Slagle
> --

Hi, here is the recording link for this deep dive:
https://www.youtube.com/watch?v=s8Hm4n9IjYg

Also, here is an etherpad of links I used during the deep dive:
https://etherpad.openstack.org/p/tripleo-deep-dive-deployed-server


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-04 Thread Sean McGinnis
On Thu, May 04, 2017 at 10:10:41AM -0700, Flavio Percoco wrote:
> Greetings,
> 
> In the last Technical Committee meeting, we discussed the idea of dropping the
> Technical Committee meeting entirely[0][1] in favor of a more asynchronous
> communication. Here's a brief summary of the problems this is trying to solve
> (most taken from the proposal):
> 
> [snip]
> 
> Regardless of whether we make this change in one shot or in multiple steps (or
> don't do it at all), I believe it requires changing the way TC activities are done:
> 
> * It requires folks (especially TC members) to be more active on reviewing
>  governance patches
> * It requires folks to engage more on the mailing list and start more
>  discussions there.
> 
> Sending this out to kick off a broader discussion on these topics. Thoughts?
> Opinions? Objections?
> 

I'll start off saying I'm fine trying this for a cycle or two to see if we can
make it work. There's nothing saying we can't reinstate the meeting if we
realize it was giving us something that we are not able to get (or adjust to
getting) through ad hoc IRC chats and mailing list discussions.

But part of my concern about getting rid of the meeting is that I do find it
valuable. The arguments against having it are some of the same ones I've heard
about our in-person events. It's hard for some to travel to the PTG. There's a
lot of active discussion at the PTG that is definitely a challenge for
non-native speakers to keep up with. But I think we all recognize the value
that events like the PTG provide, or the Summit/Design Summit/Forum/Midcycle/
pick-your-favorite.

Just airing my concerns though. I do think we can make an effort and still get
things accomplished without the weekly meeting. There are some intangible
benefits to having everyone together in one place in real time that will need
to be evaluated as we go, to make sure we are not losing out on something we
have now.

Sean

> [0] 
> http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-05-02-20.01.log.html
> [1] https://review.openstack.org/#/c/459848/
> 
> -- 
> @flaper87
> Flavio Percoco



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Flavio Percoco

On 04/05/17 11:18 -0400, Jonathan Proulx wrote:

On Thu, May 04, 2017 at 04:14:07PM +0200, Thierry Carrez wrote:
:I agree that our current stable branch model is inappropriate:
:maintaining stable branches for one year only is a bit useless. But I
:only see two outcomes:
:
:1/ The OpenStack community still thinks there is a lot of value in doing
:this work upstream, in which case organizations should invest resources
:in making that happen (starting with giving the Stable branch
:maintenance PTL a job), and then, yes, we should definitely consider
:things like LTS or longer periods of support for stable branches, to
:match the evolving usage of OpenStack.
:
:2/ The OpenStack community thinks this is better handled downstream, and
:we should just get rid of them completely. This is a valid approach, and
:a lot of other open source communities just do that.
:
:The current reality in terms of invested resources points to (2). I
:personally would prefer (1), because that lets us address security
:issues more efficiently and avoids duplicating effort downstream. But
:unfortunately I don't control where development resources are posted.


Have there been issues with downstream distros not addressing security fixes
properly?


Yes it seems that way to me as well.

just killing the stable branch model without some plan either
internally or externally to provide a better stability story seems
like it would send the wrong signal.  So I'd much prefer the distro
people to either back option 1) with significant resources so it can
really work or make public commitments to handle option 2) in a
reasonable way.


I think downstream distros are already doing #2, unless I'm missing something.
How public/vocal they are about it might be a different discussion.

I'd prefer #1 too because I'd rather have everything upstream. However, with the
current flux of people, the current roadmaps and the current status of the
community, it's unrealistic for us to expect #1 to happen. So, I'd rather
dedicate time documenting/communicating #2 properly.

Now, one big problem with LTS releases of OpenStack (regardless of whether they
happen upstream or downstream) is the upgrade path, which is one of the problems
Drew raised.

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18

2017-05-04 Thread Flavio Percoco

On 04/05/17 08:18 -0400, Davanum Srinivas wrote:

On Thu, May 4, 2017 at 3:49 AM, Thierry Carrez  wrote:

Jeremy Stanley wrote:

On 2017-05-03 14:04:40 -0400 (-0400), Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2017-05-03 13:23:11 -0400:

On 05/03/2017 01:02 PM, Doug Hellmann wrote:

Excerpts from Thierry Carrez's message of 2017-05-03 18:16:29 +0200:

[...]

Knowing what will be discussed in advanced also helps everyone
collect their thoughts and be ready to contribute.


What about ensuring that every agenda topic is more than a line,
but includes a full paragraph about what the agenda topic
proposer expects it will cover. A lot of times the agenda items
are cryptic enough unless you are knee deep in things.

That would help people collect their thoughts even more and
break away from the few minutes of delay in introducing the
subject (the introduction of the subject would be in the
agenda).


If the goal is to move most of the discussion onto the mailing
list, we could link to the thread(s) there, too.


This seems like a great idea to me. Granted in many cases we already
have a change proposed in Gerrit containing a (potentially) lengthy
explanation, but duplicating some of that on the agenda can't hurt.


I like the idea. One issue is the timing.

I prepare and post the meeting agenda on the Monday (in time for
everyone to read it and decide if they want to attend). However I
prepare the "introduction of the subject" shortly before the meeting on
Tuesday, so that it takes into account the recent changes and is up to
date with the status of the review. Some people post reviews/comments 10
minutes before meeting, so it will be very hard to account for those
comments or objections in the "introduction" posted the day before...


Right Thierry, we all have to adjust the way we work somewhat.


As mentioned in the meeting, I've started a thread to discuss the topic about
dropping the meetings:

http://lists.openstack.org/pipermail/openstack-dev/2017-May/116375.html

Thanks,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-04 Thread Flavio Percoco

Greetings,

In the last Technical Committee meeting, we discussed the idea of dropping the
Technical Committee meeting entirely[0][1] in favor of a more asynchronous
communication. Here's a brief summary of the problems this is trying to solve
(most taken from the proposal):

* It takes place at a specific time of day; even if we have rotating time slots,
 we are always excluding someone.

* The fast-paced nature of the IRC meetings can exclude many from the
 conversation. Many native English speakers struggle to keep track of the
 conversation and get their point across. It is even worse for non-native
 English speakers.

* It feels like many conversations happen outside the meeting in ways that are
 not open enough; we should make it easier to have more open conversations.

* Reduce the number of places where topics are discussed and, instead, improve
 the way we use the other ones we have, which favors a more distributed
 community.

The discussion in the meeting started from what problems this proposal is trying
to solve and evolved into whether we should go all-in on this or take baby steps
towards dropping the meeting and see how things evolve.

Some of the current TC activities depend on the meeting to some extent:

* We use the meeting to give the final ack on some of the formal-vote reviews.
* Some folks (tc members and not) use the meeting agenda to know what they
 should be reviewing.
* Some folks (tc members and not) use the meeting as a way to review or
 participate in active discussions.
* Some folks use the meeting logs to catch up on what's going on in the TC

In the resolution that has been proposed[1], we've listed possible solutions for
some of these issues and others:

* Having office hours
* Sending weekly updates (pulse) on the current reviews and TC discussions

Regardless of whether we make this change in one shot or in multiple steps (or
don't do it at all), I believe it requires changing the way TC activities are done:

* It requires folks (especially TC members) to be more active on reviewing
 governance patches
* It requires folks to engage more on the mailing list and start more
 discussions there.

Sending this out to kick off a broader discussion on these topics. Thoughts?
Opinions? Objections?

[0] http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-05-02-20.01.log.html
[1] https://review.openstack.org/#/c/459848/

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-05-04 Thread Chris Dent


Greetings OpenStack community,

Small group today, mostly edleafe and I (cdent) speaking to one another, with 
the ghostly presence of dtantsur lurking nearby. Main action today is that we 
finally merged the Interoperability Guideline (links below). It took months of 
sometimes heated discussion to reach a consensus, but we have one now. The 
other big change in play is a series related to version discovery and effective 
use of service types.

elmiko and I will be at the summit, hosting an API-WG BOF session [4]. Please
attend if you have the time and interest.

There will be no API-WG meeting on the 11th of May. We'll return to normal
business on the 18th.

# Newly Published Guidelines

* Create a set of api interoperability guidelines from 
https://review.openstack.org/#/c/421846/ to be published at 
http://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html
 eventually.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at this time but please check out the review below.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of five documents about version discovery.
  Start at https://review.openstack.org/#/c/459405/

* Support for historical service type aliases
  https://review.openstack.org/#/c/460654/3

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18679/api-working-group-update-and-bof

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Kevin Benton
Thanks for all of your contributions. Good luck in your new role!

Cheers

On Thu, May 4, 2017 at 9:52 AM, Rossella Sblendido 
wrote:

> Hi all,
>
> I've moved to a new position recently and despite my best intentions I
> was not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
>
> It's been a great experience working with you all, I learned a lot both
> on the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to
> contact me if you have any question or would like my feedback on something.
>
> ciao,
>
> Rossella
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] no ironic weekly meetings next week

2017-05-04 Thread Julia Kreger
All,

Following up for our fearless PTL, the Ironic meetings next week are
cancelled due to the Summit and the schedule of various participants.
This comprises the Team meeting that would be on May 8th, the UI
meeting on May 9th, and the Boot from Volume meeting on May 11th.

As always, you can find us in #openstack-ironic if there are any
questions or concerns.

Thank you, and have a wonderful week everyone!

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc] No networking-sfc meeting today

2017-05-04 Thread Henry Fourie
All,
   There will be no networking-sfc meetings for the next two weeks.
Will resume on May 18.
- Louis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Jay Pipes

On 05/04/2017 10:57 AM, Doug Hellmann wrote:

Excerpts from Drew Fisher's message of 2017-05-03 14:00:53 -0600:

These come up time and time again
How is the TC working with the dev teams to address these critical issues?

I asked this because on page 18 is this comment:

"Most large customers move slowly and thus are running older versions,
which are EOL upstream sometimes before they even deploy them."

This is exactly what we're seeing with some of our customers and I
wanted to ask the TC about it.


The contributors to OpenStack are not a free labor pool for the
consumers of the project.


1000 times THIS.

You generally get out of the open source projects what you put into them 
-- either time, money, or both.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Andrew Mcleod
I'll be there too! :)


Andrew

On Thu, May 4, 2017 at 5:34 PM, Alex Kavanagh 
wrote:

> I will be there too.  Looking forward to catching up with existing and new
> people.
>
> Cheers
> Alex.
>
> On Tue, May 2, 2017 at 11:36 AM, James Page  wrote:
>
>> Hi All
>>
>> The OpenStack summit is nearly upon us and for this summit we're running
>> a project onboarding session on Monday at 4.40pm in MR-105 (see [0] for
>> full details) for anyone who wants to get started either using the
>> OpenStack Charms or contributing to the development of the Charms,
>>
>> The majority of the core development team will be present so its a great
>> opportunity to learn more about our project from a use and development
>> perspective!
>>
>> I've created an etherpad at [1] so if you're intending on coming along,
>> please put your name down with some details on what you would like to get
>> out of the session.
>>
>> Cheers
>>
>> James
>>
>> [0] http://tiny.cc/onhwky
>> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Alex Kavanagh - Software Engineer
> Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Pete Vander Giessen
I will be there. It will kind of be an on-boarding experience for me, too
:-)

On Thu, May 4, 2017 at 11:37 AM Alex Kavanagh 
wrote:

> I will be there too.  Looking forward to catching up with existing and new
> people.
>
> Cheers
> Alex.
>
> On Tue, May 2, 2017 at 11:36 AM, James Page  wrote:
>
>> Hi All
>>
>> The OpenStack summit is nearly upon us and for this summit we're running
>> a project onboarding session on Monday at 4.40pm in MR-105 (see [0] for
>> full details) for anyone who wants to get started either using the
>> OpenStack Charms or contributing to the development of the Charms,
>>
>> The majority of the core development team will be present so its a great
>> opportunity to learn more about our project from a use and development
>> perspective!
>>
>> I've created an etherpad at [1] so if you're intending on coming along,
>> please put your name down with some details on what you would like to get
>> out of the session.
>>
>> Cheers
>>
>> James
>>
>> [0] http://tiny.cc/onhwky
>> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Alex Kavanagh - Software Engineer
> Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Ricardo Carrillo Cruz
Monday works better for me, thanks Paul!

2017-05-04 17:30 GMT+02:00 Paul Belanger :

> On Thu, May 04, 2017 at 10:45:36AM -0400, Paul Belanger wrote:
> > On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> > > Greetings!
> > >
> > > Its that time where we all try to figure out when and where to meet up
> for some
> > > dinner and drinks in Boston. While I haven't figure out a place to eat
> > > (suggestion most welcome), maybe we can decide which night to go out.
> > >
> > > As a reminder, the summit schedule has 2 events this year that people
> may also
> > > be attending:
> > >
> > >   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
> > >   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> > >
> > > Please take a moment to reply, and which day may be better for you.
> > >
> > >   Sunday: Yes
> > >   Monday: Yes
> > >   Tuesday: No
> > >   Wednesday: Yes
> > >   Thursday: No
> > >
> > > And, if you have a resturant in mind, please share.
> > >
> > Looks like Sunday might be our best day? Is there any objection on maybe
> having
> > some early dinner and drinks that day?
> >
> > Since nobody has suggested a location, I am going to attempt
> reservations at
> > http://thesaltypig.com/ @ 5pm.
> >
> Okay, some changes. I had a few people reach out to me, new date and time
> is
> 8:00pm on Monday for http://thesaltypig.com/.
>
> I suggest maybe we meet at the summit mixer and walk over to the restaurant
> together.
>
> Expect an email on Monday for an exact location to meet.
>
> -PB
>
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Alex Kavanagh
I will be there too.  Looking forward to catching up with existing and new
people.

Cheers
Alex.

On Tue, May 2, 2017 at 11:36 AM, James Page  wrote:

> Hi All
>
> The OpenStack summit is nearly upon us and for this summit we're running a
> project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
> details) for anyone who wants to get started either using the OpenStack
> Charms or contributing to the development of the Charms,
>
> The majority of the core development team will be present so its a great
> opportunity to learn more about our project from a use and development
> perspective!
>
> I've created an etherpad at [1] so if you're intending on coming along,
> please put your name down with some details on what you would like to get
> out of the session.
>
> Cheers
>
> James
>
> [0] http://tiny.cc/onhwky
> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Paul Belanger
On Thu, May 04, 2017 at 10:45:36AM -0400, Paul Belanger wrote:
> On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> > Greetings!
> > 
> > Its that time where we all try to figure out when and where to meet up for 
> > some
> > dinner and drinks in Boston. While I haven't figure out a place to eat
> > (suggestion most welcome), maybe we can decide which night to go out.
> > 
> > As a reminder, the summit schedule has 2 events this year that people may 
> > also
> > be attending:
> > 
> >   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
> >   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> > 
> > Please take a moment to reply, and which day may be better for you.
> > 
> >   Sunday: Yes
> >   Monday: Yes
> >   Tuesday: No
> >   Wednesday: Yes
> >   Thursday: No
> > 
> > And, if you have a resturant in mind, please share.
> > 
> Looks like Sunday might be our best day? Is there any objection on maybe 
> having
> some early dinner and drinks that day?
> 
> Since nobody has suggested a location, I am going to attempt reservations at
> http://thesaltypig.com/ @ 5pm.
> 
Okay, some changes. I had a few people reach out to me, new date and time is
8:00pm on Monday for http://thesaltypig.com/.

I suggest maybe we meet at the summit mixer and walk over to the restaurant
together.

Expect an email on Monday for an exact location to meet.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread David Ames
I'll be there.


--
David Ames

On Tue, May 2, 2017 at 3:36 AM, James Page  wrote:
> Hi All
>
> The OpenStack summit is nearly upon us and for this summit we're running a
> project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
> details) for anyone who wants to get started either using the OpenStack
> Charms or contributing to the development of the Charms,
>
> The majority of the core development team will be present so its a great
> opportunity to learn more about our project from a use and development
> perspective!
>
> I've created an etherpad at [1] so if you're intending on coming along,
> please put your name down with some details on what you would like to get
> out of the session.
>
> Cheers
>
> James
>
> [0] http://tiny.cc/onhwky
> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Corey Bryant
I'll be there as well. Looking forward to seeing everyone and meeting some
new folks.

Corey

On Tue, May 2, 2017 at 6:36 AM, James Page  wrote:

> Hi All
>
> The OpenStack summit is nearly upon us and for this summit we're running a
> project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
> details) for anyone who wants to get started either using the OpenStack
> Charms or contributing to the development of the Charms,
>
> The majority of the core development team will be present so its a great
> opportunity to learn more about our project from a use and development
> perspective!
>
> I've created an etherpad at [1] so if you're intending on coming along,
> please put your name down with some details on what you would like to get
> out of the session.
>
> Cheers
>
> James
>
> [0] http://tiny.cc/onhwky
> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Corey Bryant
I'll be there as well. Looking forward to seeing everyone and meeting some
new folks.

Corey

On Tue, May 2, 2017 at 6:36 AM, James Page  wrote:

> Hi All
>
> The OpenStack summit is nearly upon us and for this summit we're running a
> project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
> details) for anyone who wants to get started either using the OpenStack
> Charms or contributing to the development of the Charms,
>
> The majority of the core development team will be present so its a great
> opportunity to learn more about our project from a use and development
> perspective!
>
> I've created an etherpad at [1] so if you're intending on coming along,
> please put your name down with some details on what you would like to get
> out of the session.
>
> Cheers
>
> James
>
> [0] http://tiny.cc/onhwky
> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Andreas Scheuring
Rossella, we will miss you! All the best for your future engagements!


-- 
-
Andreas 
IRC: andreas_s



On Do, 2017-05-04 at 15:52 +0200, Rossella Sblendido wrote:
> Hi all,
> 
> I've moved to a new position recently and despite my best intentions I
> was not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
> 
> It's been a great experience working with you all, I learned a lot both
> on the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to
> contact me if you have any question or would like my feedback on something.
> 
> ciao,
> 
> Rossella
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Jonathan Proulx
On Thu, May 04, 2017 at 04:14:07PM +0200, Thierry Carrez wrote:
:Chris Dent wrote:
:> On Wed, 3 May 2017, Drew Fisher wrote:
:>> "Most large customers move slowly and thus are running older versions,
:>> which are EOL upstream sometimes before they even deploy them."
:> 
:> Can someone with more of the history give more detail on where the
:> expectation arose that upstream ought to be responsible things like
:> long term support? I had always understood that such features were
:> part of the way in which the corporately avaialable products added
:> value?

:In parallel, OpenStack became more stable, so the demand for longer-term
:maintenance is stronger. People still expect "upstream" to provide it,
:not realizing upstream is made of people employed by various
:organizations, and that apparently their interest in funding work in
:that area is pretty dead.

Wearing my Operator hat, I don't really care if "LTS" comes from
upstream or downstream.  I think the upstream expectation has
developed because there have been some upstream efforts and, as far as I
can see, no recent downstream efforts in support of stable releases,
though obviously I mostly pay attention to "my" distro so I may be
missing things in this space.

Having watched this for some time I agree with everything Thierry has
said.

The increasing demand for "LTS"-like releases is definitely a tribute
to the overall maturity of the core services.  I used to be desperate for
the next release, backporting patches into custom packages just to
keep things working.

Now, if I believed Ubuntu (which my world, OpenStack and otherwise,
happens to be built on) would provide a direct upgrade path from their
16.04 released OpenStack to whatever lands in their next LTS, I'd
probably sit rather happily on that.  Which is a hugely positive shift.

:I agree that our current stable branch model is inappropriate:
:maintaining stable branches for one year only is a bit useless. But I
:only see two outcomes:
:
:1/ The OpenStack community still thinks there is a lot of value in doing
:this work upstream, in which case organizations should invest resources
:in making that happen (starting with giving the Stable branch
:maintenance PTL a job), and then, yes, we should definitely consider
:things like LTS or longer periods of support for stable branches, to
:match the evolving usage of OpenStack.
:
:2/ The OpenStack community thinks this is better handled downstream, and
:we should just get rid of them completely. This is a valid approach, and
:a lot of other open source communities just do that.
:
:The current reality in terms of invested resources points to (2). I
:personally would prefer (1), because that lets us address security
:issues more efficiently and avoids duplicating effort downstream. But
:unfortunately I don't control where development resources are posted.

Yes it seems that way to me as well.

just killing the stable branch model without some plan either
internally or externally to provide a better stability story seems
like it would send the wrong signal.  So I'd much prefer the distro
people to either back option 1) with significant resources so it can
really work or make public commitments to handle option 2) in a
reasonable way.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [openstack-infra] [jenkins-job-builder] run `jenkins-jobs test` failed

2017-05-04 Thread Mikhail Medvedev
On Thu, May 4, 2017 at 4:56 AM,   wrote:
>
> Hi folks,
>
> I use "puppet-jenkins" to set up a Jenkins node, and then run the command
> `jenkins-jobs test /etc/jenkins_jobs/config/`
>
> to parse the jobs, which come from "project-config/jenkins/jobs".
>
> But it raised a error, here is the log:
>
> Does anyone know how to resolve the issue? Do I need to install some tools?
>
> Thanks for help~
>
>
> INFO:jenkins_jobs.local_yaml:Including file 'include/run-project-guide.sh'
> from path '.'
>
> WARNING:root:logrotate is deprecated on jenkins>=1.637, use the property
> build-discarder on newer jenkins instead
>
> Traceback (most recent call last):
>
>   File "/usr/local/bin/jenkins-jobs", line 10, in <module>
>
> sys.exit(main())
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line
> 191, in main
>
> execute(options, config)
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line
> 380, in execute
>
> n_workers=1)
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/builder.py",
> line 350, in update_jobs
>
> self.parser.generateXML()
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line
> 342, in generateXML
>
> self.xml_jobs.append(self.getXMLForJob(job))
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line
> 352, in getXMLForJob
>
> self.gen_xml(xml, data)
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line
> 359, in gen_xml
>
> module.gen_xml(self, xml, data)
>
>   File
> "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/modules/publishers.py",
> line 6158, in gen_xml
>
> self.registry.dispatch('publisher', parser, publishers, action)
>
>   File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/registry.py",
> line 249, in dispatch
>
> format(name, component_type))
>
> jenkins_jobs.errors.JenkinsJobsException: Unknown entry point or macro 'afs'
> for component type: 'publisher'.
>

The problem is exactly what it says it is: JJB cannot find a
publisher macro named 'afs'. If you look into project-config/tox.ini's
testenv:jjb section, you'll see that it does 'pip install -U
jenkins/modules/jjb_afs', which I assume installs the missing macro.
So you can either install that missing macro, or you can remove all
uses of the macro from your configuration.
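
If you go the first route, something along these lines should work (an
untested sketch; adjust the path to wherever your project-config checkout
lives):

    git clone https://git.openstack.org/openstack-infra/project-config
    # provides the 'afs' publisher macro used by the upstream job definitions
    pip install -U project-config/jenkins/modules/jjb_afs
    jenkins-jobs test /etc/jenkins_jobs/config/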

A more important question is why you want to use the
project-config/jenkins jobs as-is for your deployment of Jenkins. That
configuration is specific to OpenStack Infra. You would be better
off starting with a small subset of jobs you are interested in using.

---
Mikhail Medvedev (mmedvede)
IBM

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [charms] Onboarding session at next weeks summit

2017-05-04 Thread Ryan Beisner
I plan to be there.  Looking forward to it!
-Ryan

On Tue, May 2, 2017 at 6:36 AM, James Page  wrote:

> Hi All
>
> The OpenStack summit is nearly upon us and for this summit we're running a
> project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
> details) for anyone who wants to get started either using the OpenStack
> Charms or contributing to the development of the Charms,
>
> The majority of the core development team will be present so its a great
> opportunity to learn more about our project from a use and development
> perspective!
>
> I've created an etherpad at [1] so if you're intending on coming along,
> please put your name down with some details on what you would like to get
> out of the session.
>
> Cheers
>
> James
>
> [0] http://tiny.cc/onhwky
> [1] https://etherpad.openstack.org/p/BOS-forum-charms-onboarding
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Alex Schultz
On Thu, May 4, 2017 at 5:32 AM, Chris Dent  wrote:
> On Wed, 3 May 2017, Drew Fisher wrote:
>
>> This email is meant to be the ML discussion of a question I brought up
>> during the TC meeting on April 25th.; [1]
>
>
> Thanks for starting this Drew, I hope my mentioning it in my tc
> report email wasn't too much of a nag.
>
> I've added [tc] and [all] tags to the subject in case people are
> filtering. More within.
>
>> The TL;DR version is:
>>
>> Reading the user survey [2], I see the same issues time and time again.
>> Pages 18-19 of the survey are especially common points.
>> Things move too fast, no LTS release, upgrades are terrifying for
>> anything that isn't N-1 -> N.
>> These come up time and time again
>> How is the TC working with the dev teams to address these critical issues?
>
>
> As I recall the "OpenStack-wide Goals"[a] are supposed to help address
> some of this sort of thing but it of course relies on people first
> proposing and detailing goals and then there actually being people
> to act on them. The first part was happening at[b] but it's not
> clear if that's the current way.
>
> Having people is the hard part. Given the current contribution
> model[c], that pretty much means enterprises ponying up the people to do
> the work. If they don't do that then the work won't get done, and
> people won't buy the products they are supporting, I guess? Seems a
> sad state of affairs.
>
> There's also an issue where we seem to have decided that it is only
> appropriate to demand a very small number of goals per cycle
> (because each project already has too much on their plate, or too big
> a backlog, relative to resources). It might be that as the
> _Technical_ Committee it could be legitimate to make a larger demand.
> (Or it could be completely crazy.)
>
>> I asked this because on page 18 is this comment:
>>
>> "Most large customers move slowly and thus are running older versions,
>> which are EOL upstream sometimes before they even deploy them."
>
>
> Can someone with more of the history give more detail on where the
> expectation arose that upstream ought to be responsible for things like
> long term support? I had always understood that such features were
> part of the way in which the corporately available products added
> value?
>
>> This is exactly what we're seeing with some of our customers and I
>> wanted to ask the TC about it.
>
>
> I know you're not speaking as the voice of your employer when making
> this message, so this is not directed at you, but from what I can
> tell, Oracle's presence upstream (both reviews and commits) in Ocata
> and thus far in Pike has not been huge. Maybe that's something that
> needs to change to keep the customers happy? Or at all.
>

Probably because they are still on Kilo. I'm not sure how much they could
be contributing to current development when their customers are demanding
something rock solid, which by now looks nothing like current upstream.
I think this is part of the problem, as upstream tends to outpace everyone
else in terms of features or anything else.  The bigger question could be:
what's the benefit of continuing to press forward and add yet more features
when consumers cannot keep up with consuming them?  Personally I think
usability (and some stability) sometimes takes a backseat to features
upstream, which is unfortunate because it makes these problems worse.

Thanks,
-Alex

> [a]: https://governance.openstack.org/tc/goals/index.html
> [b]: https://etherpad.openstack.org/p/community-goals
> [c]: There's talk that the current model will change from devs hired
> to do OpenStack development being the main engine of contribution to
> users of OpenStack, who happen to be devs, being the main engine. Do
> we know the slope on that trend?
>
>
>> Thanks,
>>
>> -Drew
>>
>> [1]
>>
>> http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-04-25-20.00.log.html#l-177
>> [2] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf
>
>
> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Doug Hellmann
Excerpts from Drew Fisher's message of 2017-05-03 14:00:53 -0600:
> This email is meant to be the ML discussion of a question I brought up
> during the TC meeting on April 25th.; [1]

Thanks for starting this thread, Drew. I'll try to respond, but I
know a lot of folks are preparing for the summit next week, so it
may be a little quiet around here until after everyone is home.

> 
> The TL;DR version is:
> 
> 
> Reading the user survey [2], I see the same issues time and time again.
> Pages 18-19 of the survey are especially common points.

I was also interested in those comments and noticed that, as you
say, some are recurring themes. That reinforces in my mind that we
haven't adequately communicated the background behind some decisions
we've made in the past, or what we would need to do to make progress
on stalled initiatives.  I've started trying to address some of
those issues [1], and I'll be continuing that work after the summit.

[1] 
https://doughellmann.com/blog/2017/04/20/lessons-learned-from-working-on-large-scale-cross-project-initiatives-in-openstack/

> Things move too fast,

I have to say, after so many years of hearing that we weren't moving
fast enough this one was a big surprise. :-) I'm not sure if that's
good or bad, or if it just means we now have a completely different
set of people responding to the user survey.

> no LTS release,

Over the past couple of years we have shifted the majority of the
backport review work off of a centralized team so that the individual
project teams are responsible for establishing their own stable
review groups. We've also changed the way we handle stable releases,
so that we now encourage projects to tag a release when they need
it instead of waiting and trying to tag all of the projects together
at the same time. As a result of these changes, we've been seeing
more stable releases for the branches we do maintain, giving users
more actual bug fix releases for those series.

That said, there are two main reasons we are unlikely to add more
stable releases or maintain any releases for longer: we need more
people to do the work, and we need to find a way to do that work
that doesn't hurt our ability to work on master.

We do still have a stable team responsible for ensuring that projects
are following the policies for stable releases, and that team needs
more participation. I'm sure the project teams would appreciate
having more help with backports and reviews on their stable branches,
too. Getting contributors to work on those tasks has been difficult
since the very beginning of the project.

It has been difficult to attract contributors to this area in part
due to the scope of work that is necessary to say that the community
supports those releases. We need the older versions of the deployment
platforms available in our CI systems to run the automated tests.
We need supported versions of the development tools (setuptools and
pip are especially problematic).  We need supported versions of
the various libraries and system-level dependencies like libvirt.
I'm sure the stable maintenance team could add to that list, but
the point is that it's not just a matter of saying we want to do
it, or even that we *will* do it.

> upgrades are terrifying for anything that isn't N-1 -> N.

The OpenStack community has a strong culture of testing.  We have
reasonable testing in place to balance our ability to ensure that
N-1 -> N upgrades work and as a result upgrades are easier than
ever. It seems quite a few users are still on the older versions
of the software that don't have some of those improvements.  It's
not the ideal answer, but their experience will continue to improve
as they move forward onto newer releases.

Meanwhile, adding more combinations of upgrades to handle N-M -> N
changes our ability to simplify the applications by removing technical
debt and by deprecating configuration options (reducing complexity
by cutting the number of configuration options has also been a
long-standing request from users). It also means more people are
needed to keep those older releases running in CI, so that the
upgrade jobs are reliable (see the discussion above about why that
is an issue).

> These come up time and time again
> How is the TC working with the dev teams to address these critical issues?
> 
> I asked this because on page 18 is this comment:
> 
> "Most large customers move slowly and thus are running older versions,
> which are EOL upstream sometimes before they even deploy them."
> 
> This is exactly what we're seeing with some of our customers and I
> wanted to ask the TC about it.

The contributors to OpenStack are not a free labor pool for the
consumers of the project. Just like with any other open source
project, the work is done by the people who show up, and we're all
motivated to work on different things.  Many (most?) of us are paid
by companies selling products or services based on OpenStack. Those
companies apply resources, in the form of 

Re: [openstack-dev] Gnocchi

2017-05-04 Thread simona marinova
Hello Julien,


Sorry for the late reply.

We uninstalled Gnocchi because we tried to follow the latest update of the
OpenStack Newton documentation, which includes Ceilometer with MongoDB and
Alarming with MySQL.


Ceilometer now works, but only the commands which do not involve Alarming give
the correct output, for example "ceilometer meter-list", "ceilometer
resource-list", etc.


The Alarming service doesn't work at this point. For example, the command
"ceilometer alarm-list" gives the error:


HTTPConnectionPool(host='controller', port=8042): Max retries exceeded with 
url: /v2/alarms (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
Connection refused',))

Now our biggest concern is that the Alarming service database (MySQL-based) and
the Telemetry service database (MongoDB) are not communicating properly. Is it
possible for Aodh to access the data from MongoDB?

Additionally, aodh-dbsync gives an error because it cannot find the
gnocchiclient module. There aren't any Gnocchi modules involved in this version.

What kind of configuration needs to be done in order for Telemetry and Alarming 
to work properly?
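
For reference, this is roughly what we are checking on the controller (a sketch
only; the service names assume RDO-style packaging and may differ on other
distributions):

    # is the Alarming API actually running and listening on 8042?
    systemctl status openstack-aodh-api openstack-aodh-evaluator \
        openstack-aodh-notifier openstack-aodh-listener
    curl http://controller:8042/        # currently: connection refused
    # the [database] connection in /etc/aodh/aodh.conf should point at MySQL
    grep -A 2 '^\[database\]' /etc/aodh/aodh.conf
    aodh-dbsync                         # currently fails importing gnocchiclient
    # possible workaround for the import error, even without Gnocchi deployed
    pip install gnocchiclient && aodh-dbsync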

Best regards,
Simona




From: Julien Danjou 
Sent: Wednesday, April 26, 2017 3:15 PM
To: simona marinova
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Gnocchi

On Wed, Apr 26 2017, simona marinova wrote:

Hi Simona,

> I am a student working on a project that involves an OpenStack Newton 
> platform.
>
> Currently, we are trying to implement the Data Collection service. We saw that
> Gnocchi is recommended for this purpose, and we installed it.
>
> Now we have problems with the configuration.
>
> I have tried to configure the basic parameters, but the same errors appear 
> over and over.
>
> Until this point, every installation and configuration of the services in
> OpenStack is done exactly the same as shown in the official OpenStack
> documentation.
>
>  I am sending you a screenshot of the output when I try to run gnocchi.
>
>
> Can you help me with a basic configuration or some advice?

It looks like you set your Swift URL to a Keystone URL something like
that. Could you join your gnocchi.conf file?

--
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Paul Belanger
On Thu, Apr 27, 2017 at 08:47:58PM -0400, Paul Belanger wrote:
> Greetings!
> 
> Its that time where we all try to figure out when and where to meet up for 
> some
> dinner and drinks in Boston. While I haven't figure out a place to eat
> (suggestion most welcome), maybe we can decide which night to go out.
> 
> As a reminder, the summit schedule has 2 events this year that people may also
> be attending:
> 
>   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
>   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
> 
> Please take a moment to reply, and which day may be better for you.
> 
>   Sunday: Yes
>   Monday: Yes
>   Tuesday: No
>   Wednesday: Yes
>   Thursday: No
> 
> And, if you have a resturant in mind, please share.
> 
Looks like Sunday might be our best day? Is there any objection on maybe having
some early dinner and drinks that day?

Since nobody has suggested a location, I am going to attempt reservations at
http://thesaltypig.com/ @ 5pm.

-PB

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Luigi Toscano
On Thursday, 4 May 2017 15:41:04 CEST Dan Prince wrote:
> On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> > - Original Message -
> > 

> > > 
> > > Running a subset of Tempest tests isn't the same thing as designing
> > > (and owning) your own test suite that targets the things that mean
> > > the
> > > most to our community (namely speed and coverage). Even giving up
> > > 5-10
> > > minutes of runtime...just to be able to run Tempest isn't something
> > > that some of us would be willing to do.
> > 
> > As I mentioned, you can do it with Tempest (the library). You can
> > have your own test suite that does exactly what you are asking
> > (namely, a set of scenario tests based on Heat which targets the
> > TripleO use case) in a Tempest plugin and there is no absolute reason
> > that those tests should add 5-10 minutes of runtime compared to
> > pingtest. 
> > 
> > It/they would be exactly pingtest, only implemented using a different
> > library and running with a different runner, with the *exact* same
> > run time. 
> > 
> > Obvious advantages: only one technology used to run tests, so if
> > anyone else want to run additional tests, there is no need to
> > maintain two code paths; reuse on a big and proven library of test
> > and test runner tools.
> 
> I like the idea of getting pingtest out of tripleo.sh as more of a
> stand alone tool. I would support an effort that re-implemented it...
> and using tempest-lib would be totally fine. And as you point out one
> could even combine these tests with a more common "Tempest" run that
> incorporates the scenarios, etc.

That's the idea, yes: anyone would be able to consume it easily with the other 
tests; just a regexp away.
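
For example, once such a plugin is installed next to Tempest, selecting only
the TripleO scenario would look roughly like this (plugin and test names below
are made up for illustration):

  tempest init ~/tripleo-sanity && cd ~/tripleo-sanity
  # run only the pingtest-equivalent scenario, nothing else
  tempest run --regex '^tripleo_tempest_plugin\.scenario\.test_minimal_boot'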


> To me the message is clear that we DO NOT want to consume the normal
> Tempest scenarios in TripleO upstream CI at this point. Sure there is
> overlap there, but the focus of those tests is just plain different...
> speed isn't a primary concern there as it is for us so I don't think we
> should do it now. And probably not ever unless the CI job time is less
> than an hour. Like even if we were able to tune a set of stock Tempest
> smoke tests today to our liking unless TripleO proper gates on the
> runtime of those not increasing we'd be at risk of breaking our CI
> queues as the wall time would potentially get too long. In this regard
> this entire thread is poorly named I think in that we are no longer
> talking about 'pingtest vs. tempest' but rather the implementation
> details of how we reimplement our existing pingtest to better suit the
> community.
> 
> So ++ for the idea of experimenting with the use of tempest.lib. But
> stay away from the idea of using Tempest smoke tests and the like for
> TripleO I think ATM.

That would be good!

> 
> Its also worth noting there is some risk when maintaining your own in-
> tree Tempest tests [1]. If I understood that thread correctly that
> breakage wouldn't have occurred if the stable branch tests were gating
> Tempest proper... which is a very hard thing to do if we have our own
> in-tree stuff. So there is a cost to doing what you suggest here, but
> probably one that we'd be willing to accept.

About this, the idea is not to put the Tempest plugin in-tree, but to keep it in 
a separate repository (and keep it branchless like Tempest, as you test the 
API). We have done this for the Sahara tests since Liberty, with good results. 
Moreover, there is a proposed community-wide goal for Queens to decouple Tempest 
plugins into separate repositories:
https://review.openstack.org/#/c/369749/

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Openstack-Ansible (Ocata branch) deployment failing - "No matching distribution found for mysql-python"

2017-05-04 Thread Jean-Philippe Evrard
Hello,

I forgot to do a “reply all” this morning. Here was the gist:

Don’t hesitate to give us the logs you can find in 
/openstack/logs/dc2-controller-01_repo_container-7ce807b6/repo (particularly 
the repo_venv_builder.log). It could help us debug requirements issues.

Best regards,
JP

From: Andy McCrae 
Date: Thursday, 4 May 2017 at 12:10
To: Eugene Duvenage 
Cc: "openstack-operators@lists.openstack.org" 

Subject: Re: [Openstack-operators] Openstack-Ansible (Ocata branch) deployment 
failing - "No matching distribution found for mysql-python"

Hi Eugene,

You're right that error doesn't give us much.
My best advice for a next step would be to manually run "bash 
/opt/op-venv-script.sh" from inside the repo container: 
dc2-controller-01_repo_container-7ce807b6. I've had similar issues; it is 
usually a constraints problem (we're working on ways to improve this too).

The output from the script is often more useful - additionally there are logs 
in the repo container for the venv build process, so check those out too.
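
Concretely, something along these lines from the host that runs the repo
container (a rough sketch, reusing the container name and log path already
mentioned in this thread):

  # attach to the repo container and re-run the venv build by hand
  sudo lxc-attach -n dc2-controller-01_repo_container-7ce807b6
  bash /opt/op-venv-script.sh

  # back on the host, the build logs JP pointed at are the next stop
  less /openstack/logs/dc2-controller-01_repo_container-7ce807b6/repo/repo_venv_builder.log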

On a plus side - which doesn't help you right now but may make this easier in 
future - we're looking to potentially move away from that bash script and 
manage the venv process via ansible tasks. This will hopefully make the issue 
more clear when it fails, and improve debugging.

Hope that helps!
Andy



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova-scheduler] Get scheduler hint

2017-05-04 Thread Jay Pipes

On 05/04/2017 04:59 AM, Giuseppe Di Lena wrote:

Hi Chris,


I'm pretty sure a regular user can create a server group and specify the 
anti-affinity filter.


Yes, but we want the user to specify just the Robustness; the way in which we 
assign the instances to the compute nodes should be a black box for the 
regular user (and also for the admin).


Server groups *are* a black box though. You create a server group and 
set the policy of the group to "anti-affinity" and that's it. There's no 
need for the user or admin to know anything else...
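
For reference, the whole flow is two CLI calls (a minimal sketch; group name,
image, flavor and counts are placeholders):

  # create the group once, with the anti-affinity policy
  openstack server group create --policy anti-affinity my-ha-group

  # boot N copies into it; the scheduler keeps them on distinct hosts
  openstack server create --image my-image --flavor m1.small \
      --hint group=<GROUP_UUID> --min 3 --max 3 my-instance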



Why do you need to track which compute nodes the instances are on?


Because putting the instances on the correct compute nodes is just the first 
step of the algorithm that we are implementing; for the next steps we need to 
know where each instance is.


In a cloud, it shouldn't matter which specific compute node an instance 
is on -- in fact, in clouds, an instance (workload) may not even know 
it's on a hypervisor vs. a baremetal machine vs. a privileged container.


What is important for the user in a cloud to specify is the amount of 
resources the workload will consume (this is the flavor in Nova) and a 
set of characteristics (traits) that the eventual host system should have.


I think it would help if you describe in a little more detail what is 
the eventual outcome you are trying to achieve and what use case that 
outcome serves. Then we can assist you in showing you how to get to that 
outcome.


Best,
-jay


Thank you for the question.

Best regards Giuseppe


Il giorno 03 mag 2017, alle ore 21:01, Chris Friesen 
 ha scritto:

On 05/03/2017 03:08 AM, Giuseppe Di Lena wrote:

Thank you a lot for the help!

I think that the problem can be solved using the anti-affinity filter, but we want 
a regular user to be able to choose an instance, set its properties (image, flavour, 
network, etc.) and a parameter Robustness >= 1 (that is, the number of copies of 
this particular instance).


I'm pretty sure a regular user can create a server group and specify the 
anti-affinity filter.  And a regular user can certainly specify --min-count and 
--max-count to specify the number of copies.


After that, we put every copy of this instance in a different compute, but we 
need to track where we put every copy of the instance (we need to know it for 
the algorithm that we would implement);


Normally only admin-level users are allowed to know which compute nodes a given 
instance is placed on.  Why do you need to track which compute nodes the 
instances are on?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Today's IRC meeting.

2017-05-04 Thread Luke Hinds
On Thu, May 4, 2017 at 12:37 PM, Rob C  wrote:

> Hi All,
>
> I won't be able to make today's meeting as I'm travelling.
>
> I've not found a chair to cover the meeting, please decide if you have a
> quorum and either proceed or go back to "real life" as you see fit.
>
> Cheers
> -Rob
>

I am out this week too, so might find it a challenge to get on IRC, but
will do my best.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][horizon] weekly meeting

2017-05-04 Thread Lance Bragstad
I've proposed a patch to update the week meeting schedule [0].


[0] https://review.openstack.org/#/c/462569/

On Thu, Apr 20, 2017 at 2:49 PM, Steve Martinelli 
wrote:

> As someone who helped orchestrate the weekly sync-ups, I'll chime in. I
> always intended for these meetings to end once we accomplished most of the
> goals [1] we identified last summit. With most of the goals accomplished,
> scaling back or ending them entirely seems appropriate. We can always start
> them up again if our backlog grows again.
>
> [1] https://etherpad.openstack.org/p/ocata-keystone-horizon
>
> On Thu, Apr 20, 2017 at 3:46 PM, Lance Bragstad 
> wrote:
>
>> I wonder if the meeting tooling supports a monthly cadence?
>>
>> On Thu, Apr 20, 2017 at 2:42 PM, Rob Cresswell <
>> robert.cressw...@outlook.com> wrote:
>>
>>> It's been a week since the original email; I think we should scale back
>>> to a monthly sync up. No preference on which week of the month it falls in.
>>> Thanks!
>>>
>>> Rob
>>>
>>> On 13 April 2017 at 22:03, Lance Bragstad  wrote:
>>>
 Happy Thursday folks,

 Rob and I have noticed that the weekly attendance for the
 Keystone/Horizon [0] meeting has dropped significantly in the last month or
 two. We contemplated changing the frequency of this meeting to be monthly
 instead of weekly. We still think it is important to have a sync point
 between the two projects, but maybe it doesn't need to be as often as we
 were expecting.

 Does anyone have any objections to making this a monthly meeting?

 Does anyone have a preference on the week or day of the month (i.e. 3rd
 Thursday of the month)?

 Once we have consensus on a time, I'll submit a patch for the meeting
 agenda.

 Thanks and have a great weekend!

 [0] http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting

>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Thierry Carrez
Chris Dent wrote:
> On Wed, 3 May 2017, Drew Fisher wrote:
>> "Most large customers move slowly and thus are running older versions,
>> which are EOL upstream sometimes before they even deploy them."
> 
> Can someone with more of the history give more detail on where the
> expectation arose that upstream ought to be responsible for things like
> long term support? I had always understood that such features were
> part of the way in which the corporately available products added
> value?

We started with no stable branches, we were just producing releases and
ensuring that updates vaguely worked from N-1 to N. There were a lot of
distributions, and they all maintained their own stable branches,
handling backport of critical fixes. That is a pretty classic upstream /
downstream model.

Some of us (including me) spotted the obvious duplication of effort
there, and encouraged distributions to share that stable branch
maintenance work rather than duplicate it. Here the stable branches were
born, mostly through a collaboration between Red Hat developers and
Canonical developers. All was well. Nobody was saying LTS back then
because OpenStack was barely usable so nobody wanted to stay on any
given version for too long.

Maintaining stable branches has a cost. Keeping the infrastructure that
ensures that stable branches are actually working is a complex endeavor
that requires people to constantly pay attention. As time passed, we saw
the involvement of distro packagers become more limited. We therefore
limited the number of stable branches (and the length of time we
maintained them) to match the staffing of that team. Fast-forward to
today: the stable team is mostly one person, who is now out of his job
and seeking employment.

In parallel, OpenStack became more stable, so the demand for longer-term
maintenance is stronger. People still expect "upstream" to provide it,
not realizing upstream is made of people employed by various
organizations, and that apparently their interest in funding work in
that area is pretty dead.

I agree that our current stable branch model is inappropriate:
maintaining stable branches for one year only is a bit useless. But I
only see two outcomes:

1/ The OpenStack community still thinks there is a lot of value in doing
this work upstream, in which case organizations should invest resources
in making that happen (starting with giving the Stable branch
maintenance PTL a job), and then, yes, we should definitely consider
things like LTS or longer periods of support for stable branches, to
match the evolving usage of OpenStack.

2/ The OpenStack community thinks this is better handled downstream, and
we should just get rid of them completely. This is a valid approach, and
a lot of other open source communities just do that.

The current reality in terms of invested resources points to (2). I
personally would prefer (1), because that lets us address security
issues more efficiently and avoids duplicating effort downstream. But
unfortunately I don't control where development resources are posted.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Daniel Mellado
El 04/05/17 a las 15:52, Rossella Sblendido escribió:
> Hi all,
> 
> I've moved to a new position recently and despite my best intentions I
> was not able to devote to Neutron as much time and energy as I wanted.
> It's time for me to move on and to leave room for new core reviewers.
> 
> It's been a great experience working with you all, I learned a lot both
> on the technical and on the human side.
> I won't disappear, you will see me around in IRC, etc, don't hesitate to
> contact me if you have any question or would like my feedback on something.
> 
> ciao,
> 
> Rossella
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Thanks Rossella! Best of luck! ;)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] stepping down from core

2017-05-04 Thread Rossella Sblendido
Hi all,

I've moved to a new position recently and despite my best intentions I
was not able to devote to Neutron as much time and energy as I wanted.
It's time for me to move on and to leave room for new core reviewers.

It's been a great experience working with you all, I learned a lot both
on the technical and on the human side.
I won't disappear, you will see me around in IRC, etc, don't hesitate to
contact me if you have any question or would like my feedback on something.

ciao,

Rossella

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Mitaka and gnocchi

2017-05-04 Thread mate200
Hi everyone! Am I understanding right that it is possible to install Gnocchi 2.0 
into a Mitaka release?

Thanks
-- 
Mate200___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Dan Prince
On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> - Original Message -
> > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > > (cross-posting)
> > 
> > > Instead of running the Pingtest, we would execute a Tempest
> > > Scenario
> > > that boot an instance from volume (like Pingstest is already
> > > doing)
> > > and see how it goes (in term of coverage and runtime).
> > > I volunteer to kick-off the work with someone more expert than I
> > > am
> > > with quickstart (Arx maybe?).
> > > 
> > > Another iteration could be to start building an easy interface to
> > > select which Tempest tests we want a TripleO CI job to run and
> > > plug
> > > it
> > > to our CI tooling (tripleo-quickstart I presume).
> > 
> > Running a subset of Tempest tests isn't the same thing as designing
> > (and owning) your own test suite that targets the things that mean
> > the
> > most to our community (namely speed and coverage). Even giving up
> > 5-10
> > minutes of runtime...just to be able to run Tempest isn't something
> > that some of us would be willing to do.
> 
> As I mentioned, you can do it with Tempest (the library). You can
> have your own test suite that does exactly what you are asking
> (namely, a set of scenario tests based on Heat which targets the
> TripleO use case) in a Tempest plugin and there is no absolute reason
> that those tests should add 5-10 minutes of runtime compared to
> pingtest. 
> 
> It/they would be exactly pingtest, only implemented using a different
> library and running with a different runner, with the *exact* same
> run time. 
> 
> Obvious advantages: only one technology used to run tests, so if
> anyone else want to run additional tests, there is no need to
> maintain two code paths; reuse on a big and proven library of test
> and test runner tools.

I like the idea of getting pingtest out of tripleo.sh as more of a
stand alone tool. I would support an effort that re-implemented it...
and using tempest-lib would be totally fine. And as you point out one
could even combine these tests with a more common "Tempest" run that
incorporates the scenarios, etc.

To me the message is clear that we DO NOT want to consume the normal
Tempest scenarios in TripleO upstream CI at this point. Sure there is
overlap there, but the focus of those tests is just plain different...
speed isn't a primary concern there as it is for us so I don't think we
should do it now. And probably not ever unless the CI job time is less
than an hour. Like even if we were able to tune a set of stock Tempest
smoke tests today to our liking unless TripleO proper gates on the
runtime of those not increasing we'd be at risk of breaking our CI
queues as the wall time would potentially get too long. In this regard
this entire thread is poorly named I think in that we are no longer
talking about 'pingtest vs. tempest' but rather the implementation
details of how we reimplement our existing pingtest to better suit the
community.

So ++ for the idea of experimenting with the use of tempest.lib. But
stay away from the idea of using Tempest smoke tests and the like for
TripleO I think ATM.

Its also worth noting there is some risk when maintaining your own in-
tree Tempest tests [1]. If I understood that thread correctly that
breakage wouldn't have occurred if the stable branch tests were gating
Tempest proper... which is a very hard thing to do if we have our own
in-tree stuff. So there is a cost to doing what you suggest here, but
probably one that we'd be willing to accept.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.
html

Dan

> 
> Ciao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser][heat-translator] Next two IRC meetings canceled

2017-05-04 Thread HADDLETON, Robert W (Bob)
The IRC meetings for this week (today) and next week are canceled due to 
travel and the Boston Summit.


We will resume on May 18.

Thanks

Bob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Chandan kumar
On Thu, May 4, 2017 at 5:34 PM, Arx Cruz  wrote:
>
>
> On Wed, May 3, 2017 at 11:53 PM, Emilien Macchi  wrote:
>>
>> (cross-posting)
>>
>> I've seen a bunch of interesting thoughts here.
>> The most relevant feedback I've seen so far:
>>
>> - TripleO folks want to keep testing fast and efficient.
>> - Tempest folks understand this problematic and is willing to collaborate.
>>
>> I propose that we move forward and experiment the usage of Tempest in
>> TripleO CI for one job that could be experimental or non-voting to
>> start.
>> Instead of running the Pingtest, we would execute a Tempest Scenario
>> that boot an instance from volume (like Pingstest is already doing)
>> and see how it goes (in term of coverage and runtime).
>> I volunteer to kick-off the work with someone more expert than I am
>> with quickstart (Arx maybe?).
>>
>
> Sure, let's work on that :)

@Arx, @EmilienM, if you need a helping hand with this, please let me know.

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Anne Gentle
On Wed, May 3, 2017 at 6:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)


Another set of docs that helped me figure out screen in DevStack are in the
Ops Guide [1][2]. Low-hanging fruit, the way I see it, so I've also logged
a doc bug[3].

Anne

1.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-objectstorage.rst

2.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-compute.rst

3. https://bugs.launchpad.net/openstack-manuals/+bug/1688245


>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] [openstack-infra] [jenkins-job-builder]  run `jenkins-jobs test` failed

2017-05-04 Thread Trinath Somanchi
Do you want to try, 
https://lists.opendaylight.org/pipermail/integration-dev/2015-June/003397.html

Also, check the job configuration.
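
For the 'afs' error quoted below, one quick way to narrow it down is to check
whether the parsed job definitions actually define an 'afs' publisher macro and
whether the local jenkins-job-builder install registers one as an entry point
(a diagnostic sketch only, not a fix):

  pip show jenkins-job-builder
  # is a publisher macro named 'afs' present in the YAML being parsed?
  grep -rn "name: afs" /etc/jenkins_jobs/config/
  # which publishers does the local JJB install register as entry points?
  python -c "import pkg_resources as p; print(sorted(e.name for e in p.iter_entry_points('jenkins_jobs.publishers')))"

If neither turns up, the jobs reference a publisher that this particular
checkout/install does not provide, which is where I would look next.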


Thanks,
Trinath Somanchi.

Digital Networking | NXP – Hyderabad – INDIA.
Email: trinath.soman...@nxp.com
Mobile: +91 9866235130 | Off: +91 4033504051


From: dong.wenj...@zte.com.cn [mailto:dong.wenj...@zte.com.cn]
Sent: Thursday, May 04, 2017 3:27 PM
To: openstack-infra@lists.openstack.org
Subject: [OpenStack-Infra] [openstack-infra] [jenkins-job-builder]  run 
`jenkins-jobs test` failed




Hi folks,

I use "puppet-jenkins" to setuped a jenkins node, and then run the command 
`jenkins-jobs test /etc/jenkins_jobs/config/`

to parser the jobs which are from the "project-config/jenkins/jobs".

But it raised a error, here is the log:

Does anyone know how to resolve the issue? Do I need to install some tools?

Thanks for help~



INFO:jenkins_jobs.local_yaml:Including file 'include/run-project-guide.sh' from path '.'

WARNING:root:logrotate is deprecated on jenkins>=1.637, use the property build-discarder on newer jenkins instead

Traceback (most recent call last):
  File "/usr/local/bin/jenkins-jobs", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line 191, in main
    execute(options, config)
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line 380, in execute
    n_workers=1)
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/builder.py", line 350, in update_jobs
    self.parser.generateXML()
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 342, in generateXML
    self.xml_jobs.append(self.getXMLForJob(job))
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 352, in getXMLForJob
    self.gen_xml(xml, data)
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 359, in gen_xml
    module.gen_xml(self, xml, data)
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/modules/publishers.py", line 6158, in gen_xml
    self.registry.dispatch('publisher', parser, publishers, action)
  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/registry.py", line 249, in dispatch
    format(name, component_type))
jenkins_jobs.errors.JenkinsJobsException: Unknown entry point or macro 'afs' for component type: 'publisher'.



BR,

dwj










___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [Openstack] [designate] Synchronize bind9 backend

2017-05-04 Thread Graham Hayes
On 03/05/17 10:51, Lars-Erik Helander wrote:
> We are using designate with a bind9 backend in a newton based Openstack
> system.
> 
> When the designate processes and the bind9 process are restarted they
> get out of synch. The zones in designate are no longer in bind9. How can
> I get the bind9 backend to get synchronized after a restart?
> 
>  
> 
> /Lars

Designate should run a periodic sync to check that all the zones are on
the bind9 server - what version of Designate are you using?

- Graham

> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] How can we use Orchestration service efficiently

2017-05-04 Thread Amit Uniyal
Hi all,

Please help with the Heat service; the only example I could find is how to create
a new stack. But how do you use an existing stack template which is already stored
by the Orchestration service?

My understanding is that we need to write a .yaml template and create a new
stack from it. Heat saves this template and runs it; according to the template,
VMs get launched (or other tasks run) and then the stack should be finished. So
what is the meaning of the options [ suspend, resume, check stack, change stack
template ]?

How can we rerun the same template without writing/creating a new stack in
OpenStack?
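
In case it helps, a short sketch of that lifecycle with the unified CLI (stack
and file names are placeholders); the stored template can be pulled back out of
Heat and reused, and the lifecycle actions operate on the stack that already
exists rather than creating a new one:

  # create a stack from a local template
  openstack stack create -t my_template.yaml my-stack

  # fetch the template Heat stored for it, and reuse it for another stack
  openstack stack template show my-stack > stored.yaml
  openstack stack create -t stored.yaml my-stack-2

  # the lifecycle options act on the existing stack
  openstack stack suspend my-stack   # suspend its resources (e.g. the VMs) without deleting them
  openstack stack resume my-stack
  openstack stack check my-stack     # verify the real resources still match the stack's records
  openstack stack update -t my_template_v2.yaml my-stack   # change the running stack in place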


Thanks and Regards
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [magnum][containers] Size of userdata in drivers

2017-05-04 Thread Ricardo Rocha
Hi Kevin.

We've hit this locally in the past, and after adding core-dns I see the same
for kubernetes atomic.

Spyros is dropping some fragments that are not needed, to temporarily get
around the issue. Is there any trick in Heat we can use? Zipping the fragments
should give some gain - is this possible?
Cheers,
  Ricardo

On Mon, Apr 24, 2017 at 11:56 PM, Kevin Lefevre  wrote:
> Hi, I recently stumbled on this bug 
> https://bugs.launchpad.net/magnum/+bug/1680900 in which Spyros says we are 
> about to hit the 64k limit for Nova user-data.
>
> One way to prevent this is to reduce the size of software config. But there 
> is still many things to be added to templates.
>
> I’m talking only about Kubernetes for now :
>
> I know some other Kubernetes projects (on AWS for example with kube-aws) are 
> using object storage (AWS S3) to bypass the limit of AWS Cloudformation and 
> store stack-templates and user-data but I don’t think it is possible on 
> OpenStack with Nova/Swift
>
> Since we rely on an internet connection anyway (except when running a local 
> copy of the hypercube image) for the majority of deployments when pulling 
> hypercube and other Kubernetes components, maybe we could rely on upstream 
> for some user-data and save some space.
>
> A lot of driver maintenance involves syncing Kubernetes manifests with upstream 
> changes and bumping versions; this is fine for the core components for now (api, 
> proxy, controller, scheduler) but is a bit more tricky when we start adding the 
> addons (which are bigger and take a lot more space).
>
> Kubernetes official salt base deployment already provides templating (sed) 
> for commons addons, e.g.:
>
> https://github.com/kubernetes/kubernetes/blob/release-1.6/cluster/addons/dns/kubedns-controller.yaml.sed
>
> These templates are already versioned and maintained by upstream. Depending on 
> the Kubernetes branch used, we could get the right addons directly from 
> upstream. This prevents errors and having to sync and upgrade the addons.
>
> This is just a thought and of course there are downsides to this, and maybe it 
> goes against the project goal because we would require internet access, but we 
> could for example offer a way to pull addons or other config manifests from 
> local object storage.
>
> I know this also causes problems for idempotence and gate testing because we 
> cannot vouch for upstream changes but in theory Kubernetes releases and 
> addons are already tested against a specific version by their CI.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How can we use Orchestration service efficiently

2017-05-04 Thread Amit Uniyal
Hi all,

Please help with the Heat service; the only example I could find is how to create
a new stack. But how do you use an existing stack template which is already stored
by the Orchestration service?

My understanding is that we need to write a .yaml template and create a new
stack from it. Heat saves this template and runs it; according to the template,
VMs get launched (or other tasks run) and then the stack should be finished. So
what is the meaning of the options [ suspend, resume, check stack, change stack
template ]?

How can we rerun the same template without writing/creating a new stack in
OpenStack?


Thanks and Regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18

2017-05-04 Thread Davanum Srinivas
On Thu, May 4, 2017 at 3:49 AM, Thierry Carrez  wrote:
> Jeremy Stanley wrote:
>> On 2017-05-03 14:04:40 -0400 (-0400), Doug Hellmann wrote:
>>> Excerpts from Sean Dague's message of 2017-05-03 13:23:11 -0400:
 On 05/03/2017 01:02 PM, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2017-05-03 18:16:29 +0200:
>> [...]
> Knowing what will be discussed in advanced also helps everyone
> collect their thoughts and be ready to contribute.

 What about ensuring that every agenda topic is more than a line,
 but includes a full paragraph about what the agenda topic
 proposer expects it will cover. A lot of times the agenda items
 are cryptic enough unless you are knee deep in things.

 That would help people collect their thoughts even more and
 break away from the few minutes of delay in introducing the
 subject (the introduction of the subject would be in the
 agenda).
>>>
>>> If the goal is to move most of the discussion onto the mailing
>>> list, we could link to the thread(s) there, too.
>>
>> This seems like a great idea to me. Granted in many cases we already
>> have a change proposed in Gerrit containing a (potentially) lengthy
>> explanation, but duplicating some of that on the agenda can't hurt.
>
> I like the idea. One issue is the timing.
>
> I prepare and post the meeting agenda on the Monday (in time for
> everyone to read it and decide if they want to attend). However I
> prepare the "introduction of the subject" shortly before the meeting on
> Tuesday, so that it takes into account the recent changes and is up to
> date with the status of the review. Some people post reviews/comments 10
> minutes before meeting, so it will be very hard to account for those
> comments or objections in the "introduction" posted the day before...

Right Thierry, we all have to adjust the way we work somewhat.

> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Arx Cruz
On Wed, May 3, 2017 at 11:53 PM, Emilien Macchi  wrote:

> (cross-posting)
>
> I've seen a bunch of interesting thoughts here.
> The most relevant feedback I've seen so far:
>
> - TripleO folks want to keep testing fast and efficient.
> - Tempest folks understand this problematic and is willing to collaborate.
>
> I propose that we move forward and experiment the usage of Tempest in
> TripleO CI for one job that could be experimental or non-voting to
> start.
> Instead of running the Pingtest, we would execute a Tempest Scenario
> that boot an instance from volume (like Pingstest is already doing)
> and see how it goes (in term of coverage and runtime).
> I volunteer to kick-off the work with someone more expert than I am
> with quickstart (Arx maybe?).
>
>
Sure, let's work on that :)


> Another iteration could be to start building an easy interface to
> select which Tempest tests we want a TripleO CI job to run and plug it
> to our CI tooling (tripleo-quickstart I presume).
> I also hear some feedback about keeping the pingtest alive for some
> uses cases, and I agree we could keep some CI jobs to run the pingtest
> when it makes more sense (when we want to test Heat for example, or
> just maintain it for developers who used it).
>
> How does it sounds? Please bring feedback.
>
>
> On Tue, Apr 18, 2017 at 7:41 AM, Attila Fazekas 
> wrote:
> >
> >
> > On Tue, Apr 18, 2017 at 11:04 AM, Arx Cruz  wrote:
> >>
> >>
> >>
> >> On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy 
> wrote:
> >>>
> >>> On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
> >>> > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec 
> >>> > wrote:
> >>> > > Tempest isn't really either of those things.  According to another
> >>> > > message
> >>> > > in this thread it takes around 15 minutes to run just the smoke
> >>> > > tests.
> >>> > > That's unacceptable for a lot of our CI jobs.
> >>> >
> >>
> >>
> >> I'd rather spend 15 minutes running tempest than add a regression or a new
> >> bug, which has already happened in the past.
> >>
> > The smoke tests might not be the best test selection anyway; you should
> > pick some scenarios which do, for example, snapshots of images and
> > volumes. Yes, these are the slow ones, but they can run in parallel.
> >
> > Very likely you do not really want to run all Tempest tests, but a 10-20
> > minute run time sounds reasonable for a sanity test.
> >
> > The tempest config utility should also be extended with some parallel
> > capability, and should be able to use already-downloaded resources (part
> > of the image).
> >
> > The Tempest/testr/subunit worker balance is not always the best;
> > technically it would be possible to do dynamic balancing, but it would
> > require a lot of work.
> > Let me know when it becomes the main concern and I can check what
> > can/cannot be done.
> >
> >
> >>
> >>>
> >>> > Ben, is the issue merely the time it takes? Is it the affect that
> time
> >>> > taken has on hardware availability?
> >>>
> >>> It's both, but the main constraint is the infra job timeout, which is
> >>> about
> >>> 2.5hrs - if you look at our current jobs many regularly get close to
> (and
> >>> sometimes exceed this), so we just don't have the time budget available
> >>> to
> >>> run exhasutive tests every commit.
> >>
> >>
> >> We have green light from infra to increase the job timeout to 5 hours,
> we
> >> do that in our periodic full tempest job.
> >
> >
> > Sounds good, but I am afraid it could hurt more than help; it could delay
> > other things getting fixed by a lot, especially if we get some extra
> > flakiness because of foobar.
> >
> > You cannot have all possible tripleo configs on the gate anyway,
> > so something will pass which will require a quick fix.
> >
> > IMHO the only real solution is making the before-test-run steps faster or
> > shorter.
> >
> > Do you have any option to start the tempest-running jobs in a more
> > developed state?
> > I mean, having more things already done at start time (images/snapshots)
> > and just doing a fast upgrade at the beginning of the job.
> >
> > OpenStack installation can be completed in a `fast` way (~a minute) on
> > RHEL/Fedora systems after the yum steps; also, if you are able to
> > aggregate all yum steps into a single command execution (transaction),
> > you are generally able to save a lot of time.
> >
> > There are plenty of things which can be made more efficient before the
> > test run, once you start considering everything which accounts for more
> > than 30 sec of time as evil; this can happen soon.
> >
> > For example, just executing the cpython interpreter for the openstack
> > commands takes more than 30 sec; the work they are doing can be done in a
> > much, much faster way.
> >
> > A lot of install steps actually do not depend on each other, which allows
> > more things to be done in parallel; we generally can have 

Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-04 Thread Colleen Murphy
On Fri, Apr 28, 2017 at 2:47 AM, Paul Belanger 
wrote:

> Greetings!
>
> It's that time where we all try to figure out when and where to meet up for
> some dinner and drinks in Boston. While I haven't figured out a place to eat
> (suggestions most welcome), maybe we can decide which night to go out.
>
> As a reminder, the summit schedule has 2 events this year that people may
> also
> be attending:
>
>   Mon 8, 6:00pm - 7:30pm - Marketplace Mixer
>   Tue 9, 7:00pm - 10:00pm - StackCity Boston at Fenway Park
>
> Please take a moment to reply, and which day may be better for you.
>
> Would love to attend this, thanks for organizing it.

Sunday: Yes
Monday: maybe (maybe after the mixer?)
Tuesday: Yes-ish
Wednesday: No
Thursday: Yes

Colleen
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [Security] Today's IRC meeting.

2017-05-04 Thread Rob C
Hi All,

I won't be able to make today's meeting as I'm travelling.

I've not found a chair to cover the meeting, please decide if you have a
quorum and either proceed or go back to "real life" as you see fit.

Cheers
-Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread David Shrewsbury
These docs are great. As someone who has avoided learning systemd, I really
appreciate
the time folks put into making these docs. Well done.

-Dave

On Wed, May 3, 2017 at 7:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
David Shrewsbury (Shrews)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Chris Dent

On Wed, 3 May 2017, Drew Fisher wrote:


This email is meant to be the ML discussion of a question I brought up
during the TC meeting on April 25th. [1]


Thanks for starting this Drew, I hope my mentioning it in my tc
report email wasn't too much of a nag.

I've added [tc] and [all] tags to the subject in case people are
filtering. More within.


The TL;DR version is:

Reading the user survey [2], I see the same issues time and time again.
Pages 18-19 of the survey are especially common points.
Things move too fast, no LTS release, upgrades are terrifying for
anything that isn't N-1 -> N.
These come up time and time again
How is the TC working with the dev teams to address these critical issues?


As I recall the "OpenStack-wide Goals"[a] are supposed to help address
some of this sort of thing but it of course relies on people first
proposing and detailing goals and then there actually being people
to act on them. The first part was happening at [b] but it's not
clear if that's the current way.

Having people is the hard part. Given the current contribution
model[c] that pretty much means enterprises ponying up the people to do
the work. If they don't do that then the work won't get done, and
people won't buy the products they are supporting, I guess? Seems a
sad state of affairs.

There's also an issue where we seem to have decided that it is only
appropriate to demand a very small number of goals per cycle
(because each project already has too much on their plate, or too big
a backlog, relative to resources). It might be that, as the
_Technical_ Committee, it could be legitimate to make a larger demand.
(Or it could be completely crazy.)


I asked this because on page 18 is this comment:

"Most large customers move slowly and thus are running older versions,
which are EOL upstream sometimes before they even deploy them."


Can someone with more of the history give more detail on where the
expectation arose that upstream ought to be responsible things like
long term support? I had always understood that such features were
part of the way in which the corporately avaialable products added
value?


This is exactly what we're seeing with some of our customers and I
wanted to ask the TC about it.


I know you're not speaking as the voice of your employer when making
this message, so this is not directed at you, but from what I can
tell Oracle's presence upstream (both reviews and commits) in Ocata
and thus far in Pike has not been huge. Maybe that's something that
needs to change to keep the customers happy? Or at all.

[a]: https://governance.openstack.org/tc/goals/index.html
[b]: https://etherpad.openstack.org/p/community-goals
[c]: There's talk that the current model will change from devs hired
to do OpenStack development being the main engine of contribution to
users of OpenStack, who happen to be devs, being the main engine. Do
we know the slope on that trend?


Thanks,

-Drew

[1]
http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-04-25-20.00.log.html#l-177
[2] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-04 Thread Monty Taylor

On 05/04/2017 06:58 AM, Sean Dague wrote:

On 05/03/2017 11:56 PM, Monty Taylor wrote:

On 05/03/2017 03:47 AM, Thierry Carrez wrote:

Monty Taylor wrote:

On 05/01/2017 10:44 AM, Ben Swartzlander wrote:

On 04/28/2017 06:26 PM, Monty Taylor wrote:

[...]
Thoughts? Anyone violently opposed?


I don't have any problems with this idea. My main concern would be for
backwards-compatibility and it sounds like that's pretty well sorted
out.

I do think it's important that if we make this improvement that all the
projects really do get it done at around the same time, because if we
only implement it 80% of projects, it will look pretty weird.


I could not possibly agree more strongly with both points.


"All the projects [should] really [...] get it done at around the same
time, because if we only implement it 80% of projects, it will look
pretty weird" sounds pretty much like the definition of a good
cross-community goal. Can we afford to wait for Queens to implement this
? If yes it feels like this would make a great goal.



We could - and I agree with you ... but there is actually not work that
needs to be done in all of the projects. To support this from the
openstack side - we mostly need to land a patch to keystoneauth. (patch
already written) I will go check the other clientlibs, but I'm pretty
sure everyone has been updated to use keystoneauth at this point- except
swiftclient, but there is a patch up already to handle that. (also, nova
is working on consuming services via the catalog, but that patch is also
in flight and that work already has a local version of this done)

We also want to add support both for consuming this and testing it in
tempest - but that probably wants a deeper conversation with the tempest
team about the right way to do it.

In any case - I think the hardest part is ensuring consensus that it's a
good path forward, and a few logistical concerns Sean and Morgan
brought up over in the service-types-authority and keystoneauth repos.
Once we find agreement, I can basically have this implemented on the
consume side in OpenStack in a few days.


On the aliases front... I'm actually a little concerned about putting
that into keystoneauth1 at all unless it's easy to globally disable,
because it glosses over the transition in a way that people may make
changes that assume differences in the service catalog that aren't actually
there.

There was an equivalent change when keystoneauth1 put in the magic that
allows OS_AUTH_URL to not have a version in it (which only works with
keystoneauth1 based clients). That meant that people started being told
that the keystone endpoint didn't need a version marker in the url.
Except, that kind of config would actually not work with every other
client out there. I actually wanted to revert that special work around,
but was told that basically lots of code now depends on it, so it would
break the world. :(


Totally. This is why a large part of this plan involves both 
documentation and not _only_ putting it in keystoneauth, but everywhere 
else too.


The thing is - the world is already broken because we have special 
snowflake workarounds in various of our python libs (python-cinderclient 
has a volume/volumev2/volumev3 workaround, btw). We're also embracing 
microversions ... except that consuming microversions is impossible as 
an API consumer right now because it's unpossible to get the 
discovery document as an API consumer. Well, unless you use 
python-novaclient which does magic URL inference to find the unversioned 
doc so nobody notices that a normal user has no access to the otherwise 
quite excellent mechanism.
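
(For anyone who wants to see the shape of the problem on their own cloud, the
catalog is easy to inspect from the CLI; the Type column is exactly what this
proposal wants to make consistent. The sample output below is illustrative,
not taken from a real cloud.)

  openstack catalog list -c Name -c Type
  # e.g. today:  cinderv2 -> volumev2, cinderv3 -> volumev3
  # vs. the clean service type the proposal prefers: block-storage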


This is why the summary of the plan is "define what things should look 
like in an area that is currently undefined, work to ensure backwards 
compatible consumption support for all of the consumers, encourage 
deployment adoption"



So I feel like we probably could use a powow in Boston to figure out the
concerns here. Because, honestly we can get compatibility without
keystoneauth1 by going wide here, and just asking folks to add all the
new entries (and making some validation system for people's service
catalog so they can see any changes that might be suggested).


Sure. We could only document and then ask all of the operators of all of 
the clouds out there to add new entries to the catalog. But they all 
won't - which means that client consumers _still_ won't be able to 
express "I want to connect to block-storage" and know that it'll just be 
a thing they can do.


I agree about a pow wow in Boston. We don't have to go with my proposed 
plan - I'll happily work to implement alternate plans as well ... but 
I'm very against continuing to spin our wheels in this area and continue 
to leave the problem to our API consumers with no help or guidance.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [Openstack-operators] Openstack-Ansible (Ocata branch) deployment failing - "No matching distribution found for mysql-python"

2017-05-04 Thread Andy McCrae
Hi Eugene,

You're right that error doesn't give us much.
My best advice for a next step would be to manually run "bash
/opt/op-venv-script.sh" from inside the repo container:
dc2-controller-01_repo_container-7ce807b6.
I've had similar issues; it is usually a constraints problem (we're
working on ways to improve this too).

The output from the script is often more useful - additionally there are
logs in the repo container for the venv build process, so check those out
too.

On a plus side - which doesn't help you right now but may make this easier
in future - we're looking to potentially move away from that bash script
and manage the venv process via ansible tasks. This will hopefully make the
issue more clear when it fails, and improve debugging.

Hope that helps!
Andy
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-04 Thread Sean Dague
On 05/03/2017 11:56 PM, Monty Taylor wrote:
> On 05/03/2017 03:47 AM, Thierry Carrez wrote:
>> Monty Taylor wrote:
>>> On 05/01/2017 10:44 AM, Ben Swartzlander wrote:
 On 04/28/2017 06:26 PM, Monty Taylor wrote:
> [...]
> Thoughts? Anyone violently opposed?

 I don't have any problems with this idea. My main concern would be for
 backwards-compatibility and it sounds like that's pretty well sorted
 out.

 I do think it's important that if we make this improvement that all the
 projects really do get it done at around the same time, because if we
 only implement it 80% of projects, it will look pretty weird.
>>>
>>> I could not possibly agree more strongly with both points.
>>
>> "All the projects [should] really [...] get it done at around the same
>> time, because if we only implement it 80% of projects, it will look
>> pretty weird" sounds pretty much like the definition of a good
>> cross-community goal. Can we afford to wait for Queens to implement this
>> ? If yes it feels like this would make a great goal.
>>
> 
> We could - and I agree with you ... but there is actually not work that
> needs to be done in all of the projects. To support this from the
> openstack side - we mostly need to land a patch to keystoneauth. (patch
> already written) I will go check the other clientlibs, but I'm pretty
> sure everyone has been updated to use keystoneauth at this point- except
> swiftclient, but there is a patch up already to handle that. (also, nova
> is working on consuming services via the catalog, but that patch is also
> in flight and that work already has a local version of this done)
> 
> We also want to add support both for consuming this and testing it in
> tempest - but that probably wants a deeper conversation with the tempest
> team about the right way to do it.
> 
> In any case - I think the hardest part is ensuring consensus that it's a
> good path forward, and a few logistical concerns Sean and Morgan
> brought up over in the service-types-authority and keystoneauth repos.
> Once we find agreement, I can basically have this implemented on the
> consume side in OpenStack in a few days.

On the aliases front... I'm actually a little concerned about putting
that into keystoneauth1 at all unless it's easy to globally disable,
because it glosses over the transition in a way that lets people make
changes based on assumptions about the service catalog that aren't
actually true.

There was an equivalent change when keystoneauth1 put in the magic that
allows OS_AUTH_URL to not have a version in it (which only works with
keystoneauth1-based clients). That meant that people started being told
that the keystone endpoint didn't need a version marker in the URL.
Except that kind of config doesn't actually work with every other
client out there. I actually wanted to revert that special workaround,
but was told that basically lots of code now depends on it, so it would
break the world. :(

So I feel like we probably could use a powwow in Boston to figure out the
concerns here. Because, honestly, we can get compatibility without
keystoneauth1 by going wide here and just asking folks to add all the
new entries (and making some validation system for people's service
catalog so they can see any changes that might be suggested).
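
(For what it's worth, that validation system could start as something as
small as this hedged sketch - it assumes OS_* environment variables for auth
and an abbreviated list of official type names from the service-types-authority,
and just reports which of them a given cloud's catalog can already resolve:)

  # Hedged sketch of a catalog checker: report which official service types
  # (abbreviated, assumed list) resolve in this cloud's catalog as-is.
  import os

  from keystoneauth1 import exceptions
  from keystoneauth1.identity import v3
  from keystoneauth1 import session

  OFFICIAL_TYPES = ['compute', 'image', 'identity', 'network',
                    'block-storage', 'object-store', 'orchestration']

  auth = v3.Password(
      auth_url=os.environ['OS_AUTH_URL'],
      username=os.environ['OS_USERNAME'],
      password=os.environ['OS_PASSWORD'],
      project_name=os.environ['OS_PROJECT_NAME'],
      user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
      project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
  sess = session.Session(auth=auth)

  for service_type in OFFICIAL_TYPES:
      try:
          sess.get_endpoint(service_type=service_type, interface='public')
          print('%-15s OK' % service_type)
      except exceptions.EndpointNotFound:
          print('%-15s missing from catalog (old name or alias needed?)' %
                service_type)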

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Sean Dague
This is the cantrip in devstack-gate that's collecting the logs into the
compat format:

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L794-L797

It's also probably worth dumping the whole journal in native format for
people to download and query later if they want (I expect that will
become more of a thing):

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L802-L803


If you are using devstack-gate already, this should be happening for
you. If things are running differently, those are probably the missing
bits you need.
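
For anyone collecting logs outside devstack-gate, here is a hedged sketch of
the per-service text dump, assuming devstack's devstack@<service>.service unit
naming (the unit list is only an example - adjust it to the services you run,
and run it as a user that can read the system journal):

  # Hedged sketch: dump a plain-text log per devstack systemd unit, roughly
  # what the devstack-gate cantrip above produces.
  import subprocess

  UNITS = ['devstack@n-api', 'devstack@n-cpu', 'devstack@g-api']  # example

  for unit in UNITS:
      text = subprocess.check_output(
          ['journalctl', '-o', 'short-precise', '--no-pager', '-u', unit])
      with open('%s.txt' % unit, 'wb') as f:
          f.write(text)
  # (journalctl -o export -u <unit> gives the native export format instead)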

-Sean



On 05/04/2017 03:09 AM, Guy Rozendorn wrote:
> In regards to 3rd party CIs:
> Before this change, the screen logs were saved under $LOGDIR and copied
> to the log servers, and it was pretty much under the same location for
> all the jobs/projects.
> 
> What’s the convention now with the switch to systemd?
> * should the logs be collected in journal exported format? or dump to
> simple text files so they could be viewed in the browser? or in journal
> json format?
> * is there a utility function in devstack/devstack-gate that takes care
> of the log collection so it’ll be the same for all jobs/projects?
> 
> 
> 
> On 3 May 2017 at 13:17:14, Sean Dague (s...@dague.net
> ) wrote:
> 
>> As a follow up, there are definitely a few edge conditions we've hit
>> with some jobs, so the following is provided as information in case you
>> have a job that seems to fail in one of these ways.


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] [openstack-infra] [jenkins-job-builder]  run `jenkins-jobs test` failed

2017-05-04 Thread dong.wenjuan
Hi folks,

I used "puppet-jenkins" to set up a jenkins node, and then ran the command 
`jenkins-jobs test /etc/jenkins_jobs/config/`

to parse the jobs, which come from "project-config/jenkins/jobs".

But it raised an error; here is the log:

Does anyone know how to resolve the issue? Do I need to install some tools?

Thanks for the help~




INFO:jenkins_jobs.local_yaml:Including file 'include/run-project-guide.sh' from 
path '.'

WARNING:root:logrotate is deprecated on jenkins>=1.637, use the property 
build-discarder on newer jenkins instead

Traceback (most recent call last):

  File "/usr/local/bin/jenkins-jobs", line 10, in <module>

sys.exit(main())

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line 191, 
in main

execute(options, config)

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/cmd.py", line 380, 
in execute

n_workers=1)

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/builder.py", line 
350, in update_jobs

self.parser.generateXML()

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 
342, in generateXML

self.xml_jobs.append(self.getXMLForJob(job))

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 
352, in getXMLForJob

self.gen_xml(xml, data)

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/parser.py", line 
359, in gen_xml

module.gen_xml(self, xml, data)

  File 
"/usr/local/lib/python2.7/dist-packages/jenkins_jobs/modules/publishers.py", 
line 6158, in gen_xml

self.registry.dispatch('publisher', parser, publishers, action)

  File "/usr/local/lib/python2.7/dist-packages/jenkins_jobs/registry.py", line 
249, in dispatch

format(name, component_type))

jenkins_jobs.errors.JenkinsJobsException: Unknown entry point or macro 'afs' 
for component type: 'publisher'.
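
For anyone hitting the same error: 'afs' doesn't appear to be one of
jenkins-job-builder's built-in publishers, so it presumably comes either from
a macro in the job definitions or from an extra plugin package that needs to
be installed alongside JJB. A quick, hedged way to list the publisher plugins
your installation actually knows about (the entry point group name is assumed
from JJB's plugin mechanism):

  # Hedged sketch: list the publisher plugins jenkins-job-builder can see.
  # 'jenkins_jobs.publishers' is assumed to be the entry point group JJB
  # uses for publisher plugins.
  import pkg_resources

  names = sorted(ep.name for ep in
                 pkg_resources.iter_entry_points('jenkins_jobs.publishers'))
  print('\n'.join(names))
  print("'afs' registered: %s" % ('afs' in names))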




BR,

dwj
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-05-04 Thread Steven Hardy
On Wed, May 03, 2017 at 11:56:56PM -0400, Matthew Treinish wrote:
> On Wed, May 03, 2017 at 11:51:13AM +, Andrea Frittoli wrote:
> > On Tue, May 2, 2017 at 5:33 PM Matthew Treinish 
> > wrote:
> > 
> > > On Tue, May 02, 2017 at 09:49:14AM +0530, Rabi Mishra wrote:
> > > > On Fri, Apr 28, 2017 at 2:17 PM, Andrea Frittoli <
> > > andrea.fritt...@gmail.com>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra 
> > > wrote:
> > > > >
> > > > >> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
> > > > >> andrea.fritt...@gmail.com> wrote:
> > > > >>
> > > > >>> Dear stackers,
> > > > >>>
> > > > >>> starting in the Liberty cycle Tempest has defined a set of projects
> > > > >>> which are in scope for direct
> > > > >>> testing in Tempest [0]. The current list includes keystone, nova,
> > > > >>> glance, swift, cinder and neutron.
> > > > >>> All other projects can use the same Tempest testing infrastructure
> > > (or
> > > > >>> parts of it) by taking advantage
> > > > >>> the Tempest plugin and stable interfaces.
> > > > >>>
> > > > >>> Tempest currently hosts a set of API tests as well as a service
> > > client
> > > > >>> for the Heat project.
> > > > >>> The Heat service client is used by the tests in Tempest, which run 
> > > > >>> in
> > > > >>> Heat gate as part of the grenade
> > > > >>> job, as well as in the Tempest gate (check pipeline) as part of the
> > > > >>> layer4 job.
> > > > >>> According to code search [3] the Heat service client is also used by
> > > > >>> Murano and Daisycore.
> > > > >>>
> > > > >>
> > > > >> For the heat grenade job, I've proposed two patches.
> > > > >>
> > > > >> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade'
> > > > >> phase
> > > > >>
> > > > >> https://review.openstack.org/#/c/460542/
> > > > >>
> > > > >> 2. To remove tempest tests from the grenade job
> > > > >>
> > > > >> https://review.openstack.org/#/c/460810/
> > > > >>
> > > > >>
> > > > >>
> > > > >>> I proposed a patch to Tempest to start the deprecation counter for
> > > Heat
> > > > >>> / orchestration related
> > > > >>> configuration items in Tempest [4], and I would like to make sure
> > > that
> > > > >>> all tests and the service client
> > > > >>> either find a new home outside of Tempest, or are removed, by the 
> > > > >>> end
> > > > >>> the Pike cycle at the latest.
> > > > >>>
> > > > >>> Heat has in-tree integration tests and Gabbi based API tests, but I
> > > > >>> don't know if those provide
> > > > >>> enough coverage to replace the tests on Tempest side.
> > > > >>>
> > > > >>>
> > > > >> Yes, the heat gabbi api tests do not yet have the same coverage as 
> > > > >> the
> > > > >> tempest tree api tests (lacks tests using nova, neutron and swift
> > > > >> resources),  but I think that should not stop us from *not* running
> > > the
> > > > >> tempest tests in the grenade job.
> > > > >>
> > > > >> I also don't know if the tempest tree heat tests are used by any 
> > > > >> other
> > > > >> upstream/downstream jobs. We could surely add more tests to bridge
> > > the gap.
> > > > >>
> > > > >> Also, It's possible to run the heat integration tests (we've enough
> > > > >> coverage there) with tempest plugin after doing some initial setup,
> > > as we
> > > > >> do in all our dsvm gate jobs.
> > > > >>
> > > > >> I would propose to move tests and client to a Tempest plugin owned /
> > > > >>> maintained by
> > > > >>> the Heat team, so that the Heat team can have full flexibility in
> > > > >>> consolidating their integration
> > > > >>> tests. For Murano and Daisycloud - and any other team that may want
> > > to
> > > > >>> use the Heat service
> > > > >>> client in their tests, even if the client is removed from Tempest, 
> > > > >>> it
> > > > >>> would still be available via
> > > > >>> the Heat Tempest plugin. As long as the plugin implements the 
> > > > >>> service
> > > > >>> client interface,
> > > > >>> the Heat service client will register automatically in the service
> > > > >>> client manager and be available
> > > > >>> for use as today.
> > > > >>>
> > > > >>>
> > > > >> if I understand correctly, you're proposing moving the existing
> > > tempest
> > > > >> tests and service clients to a separate repo managed by heat team.
> > > Though
> > > > >> that would be collective decision, I'm not sure that's something I
> > > would
> > > > >> like to do. To start with we may look at adding some of the missing
> > > pieces
> > > > >> in heat tree itself.
> > > > >>
> > > > >
> > > > > I'm proposing to move tests and the service client outside of tempest
> > > to a
> > > > > new home.
> > > > >
> > > > > I also suggested that the new home could be a dedicate repo, since 
> > > > > that
> > > > > would allow you to maintain the
> > > > > current branchless nature of those tests. A more detailed discussion
> > > about
> > > > > the topic can be found
> > > > > in the 

[openstack-dev] [openstack-doc] [dev] Docs team meeting today

2017-05-04 Thread Alexandra Settle
Hey everyone,

The docs meeting will continue today in #openstack-meeting-alt as scheduled 
(Thursday at 21:00 UTC). For more details, and the agenda, see the meeting 
page: - 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

Last meeting before the summit ☺

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-scheduler] Get scheduler hint

2017-05-04 Thread Giuseppe Di Lena
Hi Chris,

> I'm pretty sure a regular user can create a server group and specify the 
> anti-affinity filter. 

yes, but we want the user to specify just the Robustness; the way in which 
we assign the instances to the compute nodes should be a black box for the 
regular user (and also for the admin).

> Why do you need to track which compute nodes the instances are on?

Because putting the instances on the correct compute nodes is just the first 
step of the algorithm that we are implementing; for the next steps we need to 
know where each instance is.

Thank you for the question.

Best regards Giuseppe 
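
(For reference, a minimal sketch of the server-group approach Chris describes
below, using python-novaclient - the image, flavor and network IDs are
placeholders, OS_* environment variables are assumed for authentication, and
the microversion is just an example:)

  # Hedged sketch: anti-affinity server group plus N copies via min/max count.
  import os

  from keystoneauth1.identity import v3
  from keystoneauth1 import session
  from novaclient import client

  auth = v3.Password(
      auth_url=os.environ['OS_AUTH_URL'],
      username=os.environ['OS_USERNAME'],
      password=os.environ['OS_PASSWORD'],
      project_name=os.environ['OS_PROJECT_NAME'],
      user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
      project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
  nova = client.Client('2.1', session=session.Session(auth=auth))

  # One server group with the anti-affinity policy, then N identical servers
  # scheduled into it via the "group" hint.
  group = nova.server_groups.create(name='robust-app',
                                    policies=['anti-affinity'])
  nova.servers.create(name='app', image='<image-id>', flavor='<flavor-id>',
                      min_count=3, max_count=3,
                      scheduler_hints={'group': group.id},
                      nics=[{'net-id': '<network-id>'}])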

> Il giorno 03 mag 2017, alle ore 21:01, Chris Friesen 
>  ha scritto:
> 
> On 05/03/2017 03:08 AM, Giuseppe Di Lena wrote:
>> Thank you a lot for the help!
>> 
>> I think that the problem can be solved using the anti-affinity filter, but 
>> we want a regular user to be able to choose an instance and set the properties (image, 
>> flavour, network, etc.) and a parameter Robustness >= 1 (that is, the number 
>> of copies of this particular instance).
> 
> I'm pretty sure a regular user can create a server group and specify the 
> anti-affinity filter.  And a regular user can certainly specify --min-count 
> and --max-count to specify the number of copies.
> 
>> After that, we put every copy of this instance in a different compute, but 
>> we need to track where we put every copy of the instance (we need to know it 
>> for the algorithm that we would implement);
> 
> Normally only admin-level users are allowed to know which compute nodes a 
> given instance is placed on.  Why do you need to track which compute nodes 
> the instances are on?
> 
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-05-04 Thread Thierry Carrez
Monty Taylor wrote:
> On 05/03/2017 03:47 AM, Thierry Carrez wrote:
>> Monty Taylor wrote:
>>> On 05/01/2017 10:44 AM, Ben Swartzlander wrote:
 On 04/28/2017 06:26 PM, Monty Taylor wrote:
> [...]
> Thoughts? Anyone violently opposed?

 I don't have any problems with this idea. My main concern would be for
 backwards-compatibility and it sounds like that's pretty well sorted
 out.

 I do think it's important that if we make this improvement that all the
 projects really do get it done at around the same time, because if we
 only implement it 80% of projects, it will look pretty weird.
>>>
>>> I could not possibly agree more strongly with both points.
>>
>> "All the projects [should] really [...] get it done at around the same
>> time, because if we only implement it 80% of projects, it will look
>> pretty weird" sounds pretty much like the definition of a good
>> cross-community goal. Can we afford to wait for Queens to implement this
>> ? If yes it feels like this would make a great goal.
>>
> 
> We could - and I agree with you ... but there is actually no work that
> needs to be done in all of the projects. To support this from the
> openstack side - we mostly need to land a patch to keystoneauth. (patch
> already written) I will go check the other clientlibs, but I'm pretty
> sure everyone has been updated to use keystoneauth at this point- except
> swiftclient, but there is a patch up already to handle that. (also, nova
> is working on consuming services via the catalog, but that patch is also
> in flight and that work already has a local version of this done)
> 
> We also want to add support both for consuming this and testing it in
> tempest - but that probably wants a deeper conversation with the tempest
> team about the right way to do it.
> 
> In any case - I think the hardest part is ensuring consensus that it's a
> good path forward, and a few logistical concerns Sean and Morgan
> brought up over in the service-types-authority and keystoneauth repos.
> Once we find agreement, I can basically have this implemented on the
> consume side in OpenStack in a few days.
> 
> That's a super long response - sorry - I ramble. I'd be more than happy
> to make it a cross-project goal if we think that's the right way to get
> it done - but I worry that if we do it'll steal a valuable slot since
> there's not much of an ask from the projects on this one.

If it can be easily achieved, yes, just run for it !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18

2017-05-04 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2017-05-03 14:04:40 -0400 (-0400), Doug Hellmann wrote:
>> Excerpts from Sean Dague's message of 2017-05-03 13:23:11 -0400:
>>> On 05/03/2017 01:02 PM, Doug Hellmann wrote:
 Excerpts from Thierry Carrez's message of 2017-05-03 18:16:29 +0200:
> [...]
 Knowing what will be discussed in advanced also helps everyone
 collect their thoughts and be ready to contribute.
>>>
>>> What about ensuring that every agenda topic is more than a line,
>>> but includes a full paragraph about what the agenda topic
>>> proposer expects it will cover. A lot of times the agenda items
>>> are cryptic enough unless you are knee deep in things.
>>>
>>> That would help people collect their thoughts even more and
>>> break away from the few minutes of delay in introducing the
>>> subject (the introduction of the subject would be in the
>>> agenda).
>>
>> If the goal is to move most of the discussion onto the mailing
>> list, we could link to the thread(s) there, too.
> 
> This seems like a great idea to me. Granted in many cases we already
> have a change proposed in Gerrit containing a (potentially) lengthy
> explanation, but duplicating some of that on the agenda can't hurt.

I like the idea. One issue is the timing.

I prepare and post the meeting agenda on the Monday (in time for
everyone to read it and decide if they want to attend). However I
prepare the "introduction of the subject" shortly before the meeting on
Tuesday, so that it takes into account the recent changes and is up to
date with the status of the review. Some people post reviews/comments 10
minutes before the meeting, so it will be very hard to account for those
comments or objections in the "introduction" posted the day before...

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Luigi Toscano
- Original Message -
> On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > (cross-posting)

> 
> > Instead of running the Pingtest, we would execute a Tempest Scenario
> > that boots an instance from a volume (like Pingtest is already doing)
> > and see how it goes (in terms of coverage and runtime).
> > I volunteer to kick-off the work with someone more expert than I am
> > with quickstart (Arx maybe?).
> > 
> > Another iteration could be to start building an easy interface to
> > select which Tempest tests we want a TripleO CI job to run and plug
> > it
> > to our CI tooling (tripleo-quickstart I presume).
> 
> Running a subset of Tempest tests isn't the same thing as designing
> (and owning) your own test suite that targets the things that mean the
> most to our community (namely speed and coverage). Even giving up 5-10
> minutes of runtime...just to be able to run Tempest isn't something
> that some of us would be willing to do.

As I mentioned, you can do it with Tempest (the library). You can have your own 
test suite that does exactly what you are asking (namely, a set of scenario 
tests based on Heat which targets the TripleO use case) in a Tempest plugin, and 
there is absolutely no reason that those tests should add 5-10 minutes of runtime 
compared to pingtest. 

It/they would be exactly pingtest, only implemented using a different library 
and running with a different runner, with the *exact* same run time. 

Obvious advantages: only one technology used to run tests, so if anyone else 
wants to run additional tests, there is no need to maintain two code paths; 
and reuse of a big, proven library of tests and test runner tools.

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators] project Tricircle onboarding in Boston

2017-05-04 Thread joehuang
Hello,

If you are interested in learning more about Tricircle - networking automation 
across OpenStack clouds - you are welcome to join the on-boarding session in 
Boston on Tuesday, May 9, 4:40pm-5:25pm, Level One - MR 101.

Please feel free to add topics to the etherpad, 
https://etherpad.openstack.org/p/BOS-forum-tricircle-onboarding , and use +1 to 
vote for your preferred topics.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] trouble installing Nagios on devstack on ubuntu 16.04 ...

2017-05-04 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Greg,

Sorry for not responding earlier about your Nagios problem; most of the Vitrage 
team is busy preparing for Boston.

We have already heard about the Ubuntu 16.04 issue but haven’t investigated it 
yet, so unfortunately I don’t have a solution for you at the moment. If you are 
only interested in having alarms in Vitrage, there are several options to 
achieve this.


1.   Use Yujun’s suggestion



2.   Raise a “compute down” alarm and let the Doctor datasource handle it. 
You need to:

· Make sure ‘doctor’ is defined in the list of ‘types’ in 
/etc/vitrage/vitrage.conf (if not, add it and restart vitrage-graph)

· Send an event to Vitrage using the CLI:

vitrage event post --type="compute.host.down" 
--details='{"hostname":"","source":"sample_monitor","cause":"link-down","severity":"critical","status":"down","monitor_id":"monitor-1","monitor_event_id":"123"}'



3.   Raise an Aodh alarm with constant state ‘alarm’

· Make sure ‘aodh’ is defined in the list of ‘types’ in 
/etc/vitrage/vitrage.conf (if not, add it and restart vitrage-graph)

· Call aodh CLI:

aodh alarm create --type threshold --name 'cpu_alarm' --state alarm 
--description 'CPU utilization is above 1%' -m 'cpu_util' --period 60 
--threshold 0.01 --comparison-operator gt --query 'resource_id=< instance 
uuid>' --enabled False

Hope this helps.

Best Regards,
Ifat.

From: "Yujun Zhang (ZTE)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 4 May 2017 at 1:56
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vitrage] trouble installing Nagios on devstack on 
ubuntu 16.04 ...

One easy way could be writing a scenario to raise a deduced alarm based on a 
simple rule, e.g. when a host is discovered, raise an alarm saying the host is up.
Waines, Greg > wrote on Thursday, 4 May 2017 at 04:35:
I don’t think I saw any responses to this.

Alternative question ... so I’ve got vitrage up and running fine ...

What’s the easiest way to generate an alarm against a host ?   ( OTHER than 
NAGIOS, due to problem in original email ) ???

let me know any ideas,
Greg.


From: Greg Waines >
Date: Tuesday, May 2, 2017 at 9:03 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [vitrage] trouble installing Nagios on devstack on 
ubuntu 16.04 ...

Hey ... I’m working thru the ‘Vitrage - Getting Started Guide’

https://docs.openstack.org/developer/vitrage/vitrage-first_steps.html

Was able to get vitrage up and running and enabled in horizon ... on ubuntu 
16.04 .
 ( I tried on ubuntu 14.04 and ‘./stack.sh’ warned that it had not been 
tested on trusty (14.04), I FORCE=yes it ... but it failed. )


Now trying to install Nagios in devstack
https://docs.openstack.org/developer/vitrage/nagios-devstack-installation.html

BUT it doesn’t seem like there is an OMD package available for ubuntu 16.04 ... 
and the trusty (14.04) package won’t install due to dependency issues.



Any suggestions ?

Greg.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Matthew Thode
On 05/03/2017 09:30 PM, Chris Friesen wrote:
> On 05/03/2017 02:00 PM, Drew Fisher wrote:
>> I asked this because on page 18 is this comment:
>>
>> "Most large customers move slowly and thus are running older versions,
>> which are EOL upstream sometimes before they even deploy them."
>>
>> This is exactly what we're seeing with some of our customers and I
>> wanted to ask the TC about it.
> 
> Us too.  I'm not sure there is a simple solution.  To some extent I
> suppose that's what distro folks get paid for...to do stuff that
> upstream can't (or won't) do.
> 
> Chris

Us distro folks don't like it either :P

-- 
Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - no drivers meeting today or next week (May 4th and May 11th)

2017-05-04 Thread Kevin Benton
Hi all,

I'm canceling the drivers meeting May 4th and 11th to avoid discussion of
new features until after the summit when we have collected user/operator
feedback.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev