[openstack-dev] [fuel] New gate jobs for 'fuel-agent' and 'python-fuelclient' packages

2016-01-21 Thread Dmitry Kaiharodsev
Hi all,

Please be informed that, starting today, we are launching additional
gating jobs [1] [2]:

- for 'fuel-agent' package [3]
- for 'python-fuelclient' package [4]

These jobs will run on each commit and will perform the following steps:
- build packages from the commit
- run the system test scenarios [5] [6] using the built packages
- vote on the patchset

Job duration:
- for 'fuel-agent' package - 20 min [7]
- for 'python-fuelclient' package - 45 min [8]

For any additional questions, please use our #fuel-infra IRC channel.

[1]
https://ci.fuel-infra.org/job/master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision/
[2]
https://ci.fuel-infra.org/job/master.python-fuelclient.pkgs.ubuntu.review_fuel_client/
[3] https://github.com/openstack/fuel-agent
[4] https://github.com/openstack/python-fuelclient
[5] for 'fuel-agent' package
https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_review_in_fuel_agent.py#L41-L48
[6] for 'python-fuelclient' package
https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_review_in_fuel_client.py#L102-L113
[7]
https://ci.fuel-infra.org/job/master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision/buildTimeTrend
[8]
https://ci.fuel-infra.org/job/master.python-fuelclient.pkgs.ubuntu.review_fuel_client/buildTimeTrend
-- 

Dmitry Kaigarodtsev

IRC: dkaiharodsev


Re: [openstack-dev] [fuel] New gate jobs for 'fuel-agent' and 'python-fuelclient' packages

2016-01-21 Thread Igor Kalnitsky
Hey Dmitry -

That's cool, thank you. I wonder, do you build RPM, DEB, or both?

- Igor

On Thu, Jan 21, 2016 at 12:48 PM, Dmitry Kaiharodsev wrote:

> […]



Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Daniel P. Berrange
On Wed, Jan 20, 2016 at 01:23:02PM -0430, Flavio Percoco wrote:
> Greetings,
> 
> At the Tokyo summit, we discussed OpenStack's development themes in a
> cross-project session. In this session a group of folks started discussing 
> what
> topics the overall community could focus on as a shared effort. One of the
> things that was raised during this session is the need of having cycles to
> stabilize projects. This was brought up by Robert Collins again in a 
> meeting[0]
> the TC had right after the summit, and not much has been done since.
> 
> Now, "stabilization Cycles" are easy to dream about but really hard to do and
> enforce. Nonetheless, they are still worth a try or, at the very least, a
> thought. I'll try to go through some of the issues and benefits a
> stabilization cycle could bring, but bear in mind that the lists below are
> not exhaustive. In fact, I'd love for other folks to chime in and help
> build a case in favor of or against this.
> 
> Negative(?) effects
> ===
> 
> - Project won't get new features for a period of time
> - Economic impact on developers (?)
> - It was mentioned that some folks receive bonuses for landed features
> - Economic impact on companies/market because no new features were added (?)
> - (?)

It will push more development into non-upstream vendor private
branches.

> 
> Positive effects
> 
> 
> - Focus on bug fixing
> - Reduce review backlog
> - Refactor *existing* code/features with cleanups
> - Focus on multi-cycle features (if any) and complete those
> - (?)

I don't think the idea of stabilization cycles would really have
such a positive effect, certainly not while our release cycle is
6 months in length.

If you say the next cycle is primarily stabilization, then what
you are in effect saying is that people have to wait 12 months
for their desired new feature.  In the fast moving world of
cloud, I don't think that is a very credible approach. Even
with our current workflow, where we selectively approve features
for cycles, we have this impact of forcing people to wait 12
months, or more, for their features.

In the non-stabilization cycle, we're not going to be able to
merge a larger number of features than we already do today.
So in effect we'll have 2 cycles' worth of features being
proposed for 1 cycle. When we inevitably reject many of
those features they'll have to wait for the next non-stabilization
cycle, which means an 18-24 month delay.

Of course in reality this kind of delay won't happen. What will
instead happen is that various vendors will get pressure from
their customers/partners and their local branches of openstack
packages will fork & diverge even further from upstream than
they already do today.

So while the upstream branch will be "stabilized", most users will
probably get a *less* stable release because they'll be using
a branch from vendors with a tonne of non-upstream stuff added.


In addition, having a stabilization cycle will give the impression
that the following cycle is a non-stable one and will likely cause
more disruption by pushing lots of features in at one time.
Instead of having a master branch which has an approximately
constant level of stability, you'll create a situation
where it fluctuates significantly, which is clearly worse for
people doing continuous deployment.

I think it is important to have the mindset that master should
*always* be considered stable - we already have this in general
and it is one of the success points of openstack's development
model IMHO. The idea of stabilization cycles is a step backwards.

I still believe that if you want to improve the stability of the
codebase, we'd be better off moving to a shorter development
cycle. Even the 6 month cycle we have today is quite "lumpy"
in terms of what kind of work happens from month to month. If
we moved to a 2 month cycle, I think it would relieve pressure
to push in features quickly before freeze, because people would
know they'd have another opportunity very soon, instead of having
to wait 6+ months. I've previously suggested that here:

  http://lists.openstack.org/pipermail/openstack-dev/2015-February/057614.html

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [glance][drivers] Re-think the Glance Driver's team

2016-01-21 Thread Julien Danjou
On Wed, Jan 20 2016, Flavio Percoco wrote:

Hi,

[…]

> Thoughts? Critics? Improvements?

I'm a small contributor on Glance, so take my opinion with the
appropriate weight :-), but I think this would be a really good set of
improvements. This is going in the right direction.

Go team!

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




[openstack-dev] [Openstack-Dev][neutron][TaaS] Suggestion and review of TaaS Specification

2016-01-21 Thread reedip banerjee
Dear Neutron Members,
As you may know, Tap-as-a-service was first demonstrated in the Vancouver
summit[1].
Since then it has progressed with the work currently going on in the
specification, and its induction as an extension of the Neutron project.

With this in mind, we would like to invite you to review and provide your
invaluable comments to the specification at [2].

You are also invited to join the Weekly Meeting [3] to share your thoughts
on the further development of Tap-as-a-service.

Looking forward to seeing you there.

[1]
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tap-as-a-service-taas-port-monitoring-for-neutron-networks
[2] https://review.openstack.org/#/c/256210/
[3] http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting


-- 
Thanks and Regards,
Reedip Banerjee


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Chris Dent

On Wed, 20 Jan 2016, Flavio Percoco wrote:


- It was mentioned that some folks receive bonuses for landed features


In this thread we've had people recoil in shock at this ^ one...


- Economic impact on companies/market because no new features were added (?)


...but I have to say it was this ^ one that gave me the most concern.

At the open-source project level, I really don't think this should be
something we're actively worrying about. What we should be worrying
about is whether OpenStack is any good. Often "good" will include features,
but not all the time.

Let the people doing the selling worry about the market, if they
want. That stuff is, or at least should be, on the other side of a
boundary.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [fuel] New gate jobs for 'fuel-agent' and 'python-fuelclient' packages

2016-01-21 Thread Dmitry Kaiharodsev
Hi Igor,

according to the script [1], by default we build an RPM package, and if a
'debian' folder exists in the package repository, we try to build a DEB as
well.

[1]
https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/builders/build-pkgs.sh
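
For readers who don't want to open the job definition, the logic amounts to
the following sketch (a Python paraphrase of the shell script; the
'build-rpm'/'build-deb' helper commands are illustrative, not the real CI
tooling):

    import os
    import subprocess

    def build_packages(repo_path):
        # An RPM package is built unconditionally for every review.
        subprocess.check_call(['build-rpm', repo_path])
        # A DEB build is attempted only when the repository ships
        # Debian packaging metadata.
        if os.path.isdir(os.path.join(repo_path, 'debian')):
            subprocess.check_call(['build-deb', repo_path])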

On Thu, Jan 21, 2016 at 12:51 PM, Igor Kalnitsky wrote:

> Hey Dmitry -
>
> That's cool, thank you. I wonder, do you build RPM, DEB, or both?
>
> - Igor
>
> On Thu, Jan 21, 2016 at 12:48 PM, Dmitry Kaiharodsev wrote:
> > […]



-- 
Kind Regards,
Dmitry Kaigarodtsev
Fuel Ci Engineer

IRC: dkaiharodsev


[openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Davanum Srinivas
Hi,

Failures for this job have been trending up and are contributing to the
large gate queue as well. I've logged a bug:
https://bugs.launchpad.net/openstack-gate/+bug/1536622

and am requesting switching the voting to off for this job:
https://review.openstack.org/#/c/270788/

We need to find and fix the underlying issue, which will help us determine
when to switch this back to voting; otherwise we should remove this job
from all the gate queues and move it to the check queues (I have a TODO
for this in the review).

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Monasca] collectd-Monasca Python plugin

2016-01-21 Thread Jaesuk Ahn
We are looking into a similar plan to have a collectd plugin for Monasca.

There are some environments where we cannot deploy the Monasca agent but
still want to put data into Monasca. In addition, we wanted to use the
widely adopted collectd for gathering data from legacy environments.

It will be interesting to see more details about your plan.
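
For what it's worth, a minimal sketch of such a write plugin, assuming
collectd's Python plugin interface and a generic statsd client (the statsd
endpoint and metric naming here are assumptions, not part of the plan):

    import collectd  # provided by collectd's python plugin at runtime
    import statsd    # generic statsd client, assumed to be installed

    client = statsd.StatsClient('monasca-statsd-host', 8125)

    def write_callback(vl):
        # 'vl' is a collectd Values object; forward each sample as a
        # gauge named after the collectd identifier parts.
        name = '.'.join(p for p in (vl.plugin, vl.plugin_instance,
                                    vl.type, vl.type_instance) if p)
        for value in vl.values:
            client.gauge(name, value)

    collectd.register_write(write_callback)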

Cheers,


---
Jaesuk Ahn
SDI Tech. Lab, SKT


On Thu, Jan 21, 2016 at 19:11, Alonso Hernandez, Rodolfo <
rodolfo.alonso.hernan...@intel.com> wrote:

> Hello:
>
> We are doing (or at least planning) a collectd-Monasca Python plugin. This
> plugin will receive data via RPC calls from collectd and will write this
> data to Monasca, using the statsd API.
>
> My question is: do you think this development could be useful? Is it
> worthwhile? Any comments?
>
> Thank you in advance. Regards.
>
> Rodolfo Alonso.


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Flavio Percoco

On 21/01/16 12:00 +0100, Julien Danjou wrote:

On Wed, Jan 20 2016, Flavio Percoco wrote:

Hi fellows,


Now, "stabilization Cycles" are easy to dream about but really hard to do and
enforce. Nonetheless, they are still worth a try or, at the very least, a
thought. I'll try to go through some of the issues and benefits a stabilization
cycle could bring but bear in mind that the lists below are not exhaustive. In
fact, I'd love for other folks to chime in and help building a case in favor or
against this.


[…]

I don't think this is a bad idea per se – obviously, who would think
it's a bad idea to fix bugs. But I'm still concerned. Isn't this in some
way just a band-aid?

If a project needs to spend an entire cycle (6 months) doing
stabilization, this tells me that its development model and operation have
some problems. What about talking about those and trying to fix
them? Maybe we should try to fix (or at least enhance) the root cause(s)
rather than just the symptoms?

So can someone enlighten me on why some projects need an entire cycle to
work fixing bugs? :)


So, I don't think it has to be the entire cycle. It could also be a couple of
milestones (or even just one). The thing is, I believe this has to be
communicated, and I want teams to know this is fine and that they are
encouraged to do so.

TL;DR: It's fine to tell folks that no new features will land in this and the
upcoming milestone because they'll be used to stabilize the project.

Unfortunately, just talking and proposing to fix them doesn't help. We don't
control contributors' management and we can't make calls for them other than
proposing things. I'm not saying this will fix that issue but at least it'll
communicate properly that that will be the only way to contribute to project X
in that period of time.

Flavio


Best,
--
Julien Danjou
/* Free Software hacker
  https://julien.danjou.info */




--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Julien Danjou
On Thu, Jan 21 2016, Flavio Percoco wrote:

> So, I don't think it has to be the entire cycle. It could also be a couple
> of milestones (or even just one). The thing is, I believe this has to be
> communicated, and I want teams to know this is fine and that they are
> encouraged to do so.
>
> TL;DR: It's fine to tell folks that no new features will land in this and
> the upcoming milestone because they'll be used to stabilize the project.

I can understand that, though I think it's a very naive approach. If
your project built technical debt for the last N cycles, unfortunately I
doubt that stating you're gonna work for ⅓ of a cycle on reducing it is
going to improve your project in the long run – that's why I was saying
"band-aid".

I'd be more inclined to spend time trying to fix the root cause that
pushes projects on the slope of the technical debt rate increase.

> Unfortunately, just talking and proposing to fix them doesn't help. We don't
> control contributors' management and we can't make calls for them other than
> proposing things. I'm not saying this will fix that issue but at least it'll
> communicate properly that that will be the only way to contribute to project X
> in that period of time.

Yes, exactly. So it's my view¹ that people will just do something else
for 1.5 months (e.g. work downstream, take vacation…), and then come back
knocking at your door for their feature to be merged, now that this
stabilization period is over. And even in the best-case scenario, you'll
merge some fixes and improvements, and that's it: in the end you'll end
up with the same problems in N cycles, and you'll have to redo that
again.

That's why I'm talking about fixing the root causes. :-)

Cheers,

¹  pessimistic or realistic, YMMV :-)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-21 Thread Doug Hellmann
Excerpts from Cody Herriges's message of 2016-01-19 15:50:05 -0800:
> Colleen Murphy wrote:
> > On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu wrote:
> > 
> > Hi, Emilien:
> > 
> >  Thanks for your efforts on this topic. I didn't attend the V
> > release summit and missed the related discussion about puppet-oslo.
> > 
> >  The reason for not using a unified way to manage oslo_*
> > parameters is that different oslo_* versions may exist between
> > OpenStack projects.
> > 
> >  I have an idea to solve this potential problem: we can maintain
> > several versions of puppet-oslo, and each module can map to a different
> > version of puppet-oslo.
> > 
> > It would be something like the following (the mapping info is not real,
> > just an example):
> > 
> > In Mitaka release
> > puppet-nova maps to puppet-oslo with 8.0.0
> > puppet-designate maps to puppet-oslo with 7.0.0
> > puppet-murano maps to puppet-oslo with 6.0.0
> > 
> > In Newton release
> > puppet-nova maps to puppet-oslo with 9.0.0
> > puppet-designate maps to puppet-oslo with 9.0.0
> > puppet-murano maps to puppet-oslo with 7.0.0
> > 
> > For the simplest case of puppet infrastructure configuration, which is a
> > single puppetmaster with one environment, you cannot have multiple
> > versions of a single puppet module installed. This means you absolutely
> > cannot have an openstack infrastructure depend on having different
> > versions of a single module installed. In your example, a user would not
> >  be able to use both puppet-nova and puppet-designate since they are
> > using different versions of the puppet-oslo module.
> > 
> > When we put out puppet modules, we guarantee that version X.x.x of a
> > given module works with the same version of every other module, and this
> > proposal would totally break that guarantee. 
> > 
> 
> How does OpenStack solve this issue?
> 
> * Do they literally install several different versions of the same
> python library?
> * Does every project vendor oslo?
> * Is the oslo library itself API compatible with older versions?

Each Oslo library has its own version. Only one version of each
library is installed at a time. We use the global requirements list
to sync compatible requirements specifications across all OpenStack
projects to make them co-installable. And we try hard to maintain
API compatibility, using SemVer versioning to indicate when that
was not possible.

If you want to have a single puppet module install all of the Oslo
libraries, you could pull the right versions from the upper-constraints.txt
file in the openstack/requirements repository. That file lists the
versions that were actually tested in the gate.
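
For illustration, pulling those pins out of the file is straightforward,
since each entry has the form 'name===version' (a rough sketch; the raw URL
and the environment-marker handling are assumptions):

    import urllib2  # Python 2 era; use urllib.request on Python 3

    URL = ('https://raw.githubusercontent.com/openstack/requirements/'
           'master/upper-constraints.txt')

    def oslo_pins():
        # Map each oslo.* library to the version tested in the gate.
        pins = {}
        for line in urllib2.urlopen(URL).read().splitlines():
            if line.startswith('oslo'):
                name, _, version = line.partition('===')
                pins[name] = version.split(';')[0].strip()
        return pins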

Doug



Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Flavio Percoco

On 21/01/16 11:55 +0100, Thierry Carrez wrote:

Flavio Percoco wrote:

[...]
So, the above still sounds quite vague, but that's the idea. This email
is not a formal proposal but a starting point to move this conversation forward.
Is this something other teams would be interested in? Is this something some
teams would be entirely against? Why?

From a governance perspective, projects are already empowered to do this and
they don't (and won't) need to be granted permission to have stabilization
cycles. However, the TC could work on formalizing this process so that teams
have a reference to follow when they want to have one.


I think "stabilization cycles" will come in all shapes and form, so 
it's hard to standardize them (and provides little value). They will 
mean different things and happen at different times for every project, 
and that is fine.


As you said, projects can already decide to restrict feature 
development in a given cycle, so this is nothing new. We only need to 
communicate more aggressively that it is perfectly fine (and even 
encouraged) to define the amount of feature work that is acceptable 
for a project for a given cycle.


++

Precisely my point. If there's a way, from a governance perspective, to help
communicate and encourage folks to do this, I want to take it. It was mentioned
that some teams didn't know this was possible; others felt it was going to
be really hard without any support from the governance team, hence this email
and effort.


For example, we would
have to formalize how projects announce they want to have a
stabilization cycle
(I believe it should be done before the mid-term of the ongoing cycle).


While I understand the value of announcing this "cycle feature 
strategy" beforehand, this comes slightly at odds with our PTL 
rotation every cycle: one PTL would announce a more stable cycle and 
then the next one would have to execute on a choice that may not be 
his.


I actually wouldn't mind if that feature strategy decision was part of 
the PTL election platform -- it sounds like that would trigger 
interesting discussions.



I wouldn't mind it either. However, I'd like us to stop having such a hard
separation between the current PTL's plans and the next PTL's. As a community,
I'd like us to work harder on building new leaders and a better community. I'd
like current PTLs to identify people in the community who would like to run for
the PTL role (regardless of whether the current PTL plans to run for
re-election) and work with them towards a better long-term plan for the
project. We should really stop thinking that our responsibilities as PTLs start
when the cycle begins and end when our term ends. At least, I don't believe
that.

That is to say, I'd like these plans to be discussed with the community in
advance because I believe the project will benefit from proper communication.

If I run for PTL and win because I proposed a stabilization cycle, I might end
up with a good plan and no people, due to their tasks being re-scheduled because
their management doesn't want them to spend so much time "just fixing bugs".


Thoughts? Feedback?


Just an additional thought. It is not entirely impossible that due to 
events organization we'll accidentally have a shorter cycle (say, 4 
months instead of 6) in the future here and there. I could totally see 
projects take advantage of such a short cycle to place a 
"stabilization cycle" or another limited-feature-addition period.


++

As I mentioned in a previous reply, I think projects could also have a
stabilization milestone rather than a full cycle.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [TripleO] Deploy Overcloud Keystone in HTTPD

2016-01-21 Thread Emilien Macchi


On 01/19/2016 10:07 AM, Adam Young wrote:
> On 01/19/2016 07:54 AM, Emilien Macchi wrote:
>>
>> On 01/18/2016 09:59 PM, Adam Young wrote:
>>> I have a review here for switching Keystone to HTTPD
>>>
>>> https://review.openstack.org/#/c/269377/
>> Adam, I think your patch overlaps with my patch:
>> https://review.openstack.org/#/c/269377
> 
> Yep.  I wanted to test out just the Overcloud subset.  I'll abandon
> mine; CI ran.

Thanks.

I already submitted a patch for undercloud:
https://review.openstack.org/#/c/270477/

CI passing, can we approve it?

Thanks,

> 
>>
>> Feel free to take over it if you feel like it miss something.
>> I haven't worked on it since lot of time now, and it will need to be
>> rebased.
>>
>> Thanks,
>>
>>> But I have no idea how to kick off the CI to really test it.  The check
>>> came back way too quick for it to have done a full install; less than 3
>>> minutes.  I think it was little more than a lint check.
>>>
>>> How can I get a real sense of if it is this easy or if there is
>>> something more that needs to be done?

-- 
Emilien Macchi





Re: [openstack-dev] [fuel][plugin] node_role only need when attribute false - where is the fuel plugin parser code?

2016-01-21 Thread Aleksandr Didenko
Whoops, forgot to add the link, sorry. Here it is:

[0] http://paste.openstack.org/show/484552/

On Thu, Jan 21, 2016 at 1:24 PM, Aleksandr Didenko wrote:

> […]


Re: [openstack-dev] [fuel][plugin] node_role only need when attribute false - where is the fuel plugin parser code?

2016-01-21 Thread Aleksandr Didenko
Hi,

I'm working on a plugin for 8.0 at the moment, and this is how it worked for
me [0]. It's a restriction not for a node role but for another plugin setting;
still, I suppose it should work in your case as well.
In general, it should look like:

condition: "settings:plugin_name.attribute_name.value == false"

Regards,
Alex

On Wed, Jan 20, 2016 at 8:07 PM, Matthew Mosesohn wrote:

> Hi Nikolas,
>
> I'm not exactly sure about your case, but you should try something like
> this:
> https://github.com/openstack/fuel-plugin-detach-keystone/blob/master/node_roles.yaml#L14-L15
> restrictions:
>   - condition: "settings:opendaylight_plugin:use_external_odl == false"
>     message: "OpenDaylight role can only be used without external ODL"
>
> On Wed, Jan 20, 2016 at 9:41 PM, Nikolas Hermanns <
> nikolas.herma...@ericsson.com> wrote:
>
>> Hey,
>>
>> I am developing a fuel plugin at the moment (fuel-plugin-opendaylight).
>> In node_roles.yaml I would like to define something similar to:
>> opendaylight:
>>   limits:
>>     max: 1
>>     min: if "attributes:use_external_odl == true" then 0 else 1
>>
>> attributes:use_external_odl comes from the environment_config. Is such a
>> thing possible? And where is the code where that logic is actually built?
>> Which repo has it?
>>
>> BR Nikolas


Re: [openstack-dev] [fuel] New gate jobs for 'fuel-agent' and 'python-fuelclient' packages

2016-01-21 Thread Igor Kalnitsky
Thanks. Any plans to add a similar job for fuel-web?

On Thu, Jan 21, 2016 at 1:34 PM, Dmitry Kaiharodsev wrote:
> Hi Igor,
>
> according to the script [1], by default we build an RPM package, and if a
> 'debian' folder exists in the package repository, we try to build a DEB as
> well.
>
> [1]
> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/builders/build-pkgs.sh
>
> […]



Re: [openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Sean Dague
On 01/21/2016 08:18 AM, Davanum Srinivas wrote:
> Hi,
> 
> Failures for this job have been trending up and are contributing to the
> large gate queue as well. I've logged a bug:
> https://bugs.launchpad.net/openstack-gate/+bug/1536622
> 
> and am requesting switching the voting to off for this job:
> https://review.openstack.org/#/c/270788/
> 
> We need to find and fix the underlying issue, which will help us determine
> when to switch this back to voting; otherwise we should remove this job
> from all the gate queues and move it to the check queues (I have a TODO
> for this in the review).

By trending up, we mean above a 75% failure rate: http://tinyurl.com/zrq35e8

All the spot checking I've done shows the job dying in the liberty-side
validation with test_volume_boot_pattern, which means we've never
even gotten to any of the real grenade logic.

+2 on non-voting.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Julien Danjou
On Wed, Jan 20 2016, Flavio Percoco wrote:

Hi fellows,

> Now, "stabilization Cycles" are easy to dream about but really hard to do and
> enforce. Nonetheless, they are still worth a try or, at the very least, a
> thought. I'll try to go through some of the issues and benefits a 
> stabilization
> cycle could bring but bear in mind that the lists below are not exhaustive. In
> fact, I'd love for other folks to chime in and help building a case in favor 
> or
> against this.

[…]

I don't think this is a bad idea per se – obviously, who would think
it's a bad idea to fix bugs. But I'm still concerned. Isn't this in some
way just a band-aid?

If a project needs to spend an entire cycle (6 months) doing
stabilization, this tells me that its development model and operation have
some problems. What about talking about those and trying to fix
them? Maybe we should try to fix (or at least enhance) the root cause(s)
rather than just the symptoms?

So can someone enlighten me on why some projects need an entire cycle to
work fixing bugs? :)

Best,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Flavio Percoco

On 21/01/16 11:22 +, Daniel P. Berrange wrote:

On Wed, Jan 20, 2016 at 01:23:02PM -0430, Flavio Percoco wrote:

Greetings,

At the Tokyo summit, we discussed OpenStack's development themes in a
cross-project session. In this session a group of folks started discussing what
topics the overall community could focus on as a shared effort. One of the
things that was raised during this session is the need of having cycles to
stabilize projects. This was brought up by Robert Collins again in a meeting[0]
the TC had right after the summit, and not much has been done since.

Now, "stabilization Cycles" are easy to dream about but really hard to do and
enforce. Nonetheless, they are still worth a try or, at the very least, a
thought. I'll try to go through some of the issues and benefits a stabilization
cycle could bring but bear in mind that the lists below are not exhaustive. In
fact, I'd love for other folks to chime in and help build a case in favor of
or against this.

Negative(?) effects
===

- Project won't get new features for a period of time
- Economic impact on developers (?)
- It was mentioned that some folks receive bonuses for landed features
- Economic impact on companies/market because no new features were added (?)
- (?)


It will push more development into non-upstream vendor private
branches.



Positive effects


- Focus on bug fixing
- Reduce review backlog
- Refactor *existing* code/features with cleanups
- Focus on multi-cycle features (if any) and complete those
- (?)


I don't think the idea of stabilization cycles would really have
such a positive effect, certainly not while our release cycle is
6 months in length.

If you say the next cycle is primarily stabilization, then what
you are in effect saying is that people have to wait 12 months
for their desired new feature.  In the fast moving world of
cloud, I don't think that is a very credible approach. Even
with our current workflow, where we selectively approve features
for cycles, we have this impact of forcing people to wait 12
months, or more, for their features.


++

This is one of the main concerns and perhaps the reason why I don't think it
should be all-or-nothing. It should be perfectly fine for teams to have
stabilization milestones, FWIW.


In the non-stabilization cycle, we're not going to be able to
merge a larger number of features than we already do today.
So in effect we'll have 2 cycles' worth of features being
proposed for 1 cycle. When we inevitably reject many of
those features they'll have to wait for the next non-stabilization
cycle, which means an 18-24 month delay.

Of course in reality this kind of delay won't happen. What will
instead happen is that various vendors will get pressure from
their customers/partners and their local branches of openstack
packages will fork & diverge even further from upstream than
they already do today.

So while the upstream branch will be "stabilized", most users will
probably get a *less* stable release because they'll be using
a branch from vendors with a tonne of non-upstream stuff added.



I would expect these vendors to (slowly?) push their changes upstream. It'd take
time but it should certainly happen.


In addition, having a stabilization cycle will give the impression
that the following cycle is a non-stable one and will likely cause
more disruption by pushing lots of features in at one time.
Instead of having a master branch which has an approximately
constant level of stability, you'll create a situation
where it fluctuates significantly, which is clearly worse for
people doing continuous deployment.

I think it is important to have the mindset that master should
*always* be considered stable - we already have this in general
and it is one of the success points of openstack's development
model IMHO. The idea of stabilization cycles is a step backwards.


Perhaps it is being presented the wrong way. I guess the main point here is
how can we communicate that we'd like to take some time to clean up the mess
we have in some projects. How can projects ask their teams to put more effort
into tackling technical debt rather than pushing the new sexy thing?

I could consider Mitaka as a stabilization cycle for Glance (except for the
upload path refactor spec). The team has spent quite some time on working out a
way to improve that workflow. A few other specs have been implemented, but
nothing major, TBH (talking about Glance here, not the other components).

What I mean is that I don't consider a stabilization cycle a full heads-down
bug-fixing cycle, but rather a cycle where no major features are approved. What
unfortunately happens when these kinds of cycles are announced or planned is
that contributions vanish and are routed to places where new features land.
That should perhaps be an indicator of how good/bad these cycles are. *shrugs*


I still believe that if you want to improve the stability of the
codebase, we'd be better off moving to a shorter development cycle. […]

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Thierry Carrez

Flavio Percoco wrote:

[...]
So, the above still sounds quite vague, but that's the idea. This email
is not a formal proposal but a starting point to move this conversation forward.
Is this something other teams would be interested in? Is this something some
teams would be entirely against? Why?

From a governance perspective, projects are already empowered to do this and
they don't (and won't) need to be granted permission to have stabilization
cycles. However, the TC could work on formalizing this process so that teams
have a reference to follow when they want to have one.


I think "stabilization cycles" will come in all shapes and form, so it's 
hard to standardize them (and provides little value). They will mean 
different things and happen at different times for every project, and 
that is fine.


As you said, projects can already decide to restrict feature development 
in a given cycle, so this is nothing new. We only need to communicate 
more aggressively that it is perfectly fine (and even encouraged) to 
define the amount of feature work that is acceptable for a project for a 
given cycle.



For example, we would
have to formalize how projects announce they want to have a
stabilization cycle
(I believe it should be done before the mid-term of the ongoing cycle).


While I understand the value of announcing this "cycle feature strategy" 
beforehand, this comes slightly at odds with our PTL rotation every 
cycle: one PTL would announce a more stable cycle and then the next one 
would have to execute on a choice that may not be his.


I actually wouldn't mind if that feature strategy decision was part of 
the PTL election platform -- it sounds like that would trigger 
interesting discussions.



Thoughts? Feedback?


Just an additional thought. It is not entirely impossible that due to 
events organization we'll accidentally have a shorter cycle (say, 4 
months instead of 6) in the future here and there. I could totally see 
projects take advantage of such a short cycle to place a "stabilization 
cycle" or another limited-feature-addition period.


Regards,

--
Thierry Carrez (ttx)



Re: [openstack-dev] [gate] large-ops failure spike

2016-01-21 Thread Roman Vasilets
>I also suggest their time has probably come and gone. There is no one
>active on them, and the Rally team is.

Fully agree with the idea to use Rally for large-ops. It was built for
testing OpenStack at scale =)



On Wed, Jan 20, 2016 at 9:08 PM, Tony Breeds wrote:

> On Wed, Jan 20, 2016 at 07:45:16AM -0500, Sean Dague wrote:
> > The large-ops jobs jumped to a 50% fail in check, 25% fail in gate in
> > the last 24 hours.
> >
> > http://tinyurl.com/j5u4nf5
> >
> > There isn't an obvious culprit at this point. I spent some time this
> > morning digging into it a bit. Possibly each individual instance build
> > got slower, possibly some other timeout is getting hit.
> >
> > The large-ops jobs were largely maintained by Joe Gordon, who dug into
> > them when there were issues. He's not part of the community any more,
> > and I don't think there is currently a point person.
> >
> > With no current maintainer, I'd suggest we make the jobs non-voting -
> > https://review.openstack.org/#/c/270141/
> >
> I think that non-voting makes sense in the short term.
>
> > I also suggest their time has probably come and gone. There is no one
> > active on them, and the Rally team is.
> >
> > A pre-gating test job is only useful if someone is actively addressing
> > systematic fails. This job class no longer has it. We should thus retire
> it.
>
> If this still adds value (and I think it does) then I think we should try
> hard
> to keep this job.
>
> (Once the gate gets back to normal.) 25 hours to gate is nuts.
>
> Yes I'm volunteering to climb under that bus.
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

2016-01-21 Thread Boris Pavlovic
Hi,


By the way, the OSprofiler trace shows how this regression impacts the number
of DB queries done by Keystone (during the boot of a VM):
http://boris-42.github.io/b2.html
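
As a side note for anyone wanting to restore caching after this change,
Morgan's explanation below boils down to pointing auth_token at memcached.
A minimal sketch, assuming a memcached instance on localhost (check the
keystonemiddleware docs for the authoritative option names):

    from keystonemiddleware import auth_token

    def app(environ, start_response):
        # Placeholder WSGI app standing in for a real service.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['ok']

    # Token validations are cached in memcached, shared across all
    # service workers, instead of unbounded per-worker process memory.
    wrapped = auth_token.AuthProtocol(app, {
        'auth_uri': 'http://keystone:5000',
        'memcached_servers': 'localhost:11211',
    })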


Best regards,
Boris Pavlovic

On Wed, Jan 20, 2016 at 3:30 PM, Morgan Fainberg wrote:

> As promised here are the fixes:
>
>
> https://review.openstack.org/#/q/Ifc17c27744dac5ad55e84752ca6f68169c2f5a86,n,z
>
> Proposed to both master and liberty.
>
> On Wed, Jan 20, 2016 at 12:15 PM, Sean Dague  wrote:
>
>> On 01/20/2016 02:59 PM, Morgan Fainberg wrote:
>> > So this was due to a change in keystonemiddleware. We stopped doing
>> > in-memory caching of tokens per process, per worker by default [1].
>> > There are a couple of reasons:
>> >
>> > 1) in-memory caching produced unreliable validation because some
>> > processes may have a cache and some may not
>> > 2) in-memory caching was unbounded memory wise per worker.
>> >
>> > I'll spin up a devstack change to enable memcache and use the memcache
>> > caching for keystonemiddleware today. This will benefit things in a
>> > couple ways
>> >
>> > * All services and all service's workers will share the offload of the
>> > validation, likely producing a real speedup even over the old in-memory
>> > caching
>> > * There will no longer be inconsistent validation offload/responses
>> > based upon which worker you happen to hit for a given service.
>> >
>> > I'll post to the ML here with the proposed change later today.
>> >
>> > [1]
>> >
>> https://github.com/openstack/keystonemiddleware/commit/f27d7f776e8556d976f75d07c99373455106de52
>>
>> This seems like a pretty substantial performance impact. Was there a
>> reno associated with this?
>>
>> I think that we should still probably:
>>
>> * != the keystone middleware version, it's impacting the ability to land
>> fixes in the gate
>> * add devstack memcache code
>> * find some way to WARN if we are running without memcache config, so
>> people realize they are in a regressed state
>> * add back keystone middleware at that version
>>
>> -Sean
>>
>> >
>> > Cheers,
>> > --Morgan
>> >
>> > On Tue, Jan 19, 2016 at 10:57 PM, Armando M. wrote:
>> >
>> >
>> >
>> > On 19 January 2016 at 22:46, Kevin Benton wrote:
>> >
>> > Hi all,
>> >
>> > We noticed a major jump in the neutron tempest and API test run
>> > times recently in Neutron. After digging through logstash I
>> > found out that it first occurred on the requirements bump here:
>> > https://review.openstack.org/#/c/265697/
>> >
>> > After locally testing each requirements change individually, I
>> > found that the keystonemiddleware change seems to be the
>> > culprit. It almost doubles the time it takes to fulfill simple
>> > port-list requests in Neutron.
>> >
>> > Armando pushed up a patch here to
>> > confirm: https://review.openstack.org/#/c/270024/
>> > Once that's verified, we should probably put a cap on the
>> > middleware because it's causing the tests to run up close to
>> > their time limits.
>> >
>> >
>> > Kevin,
>> >
>> > As usual your analytical skills are to be praised.
>> >
>> > I wonder if anyone else is aware of the issue/s, because during the
>> > usual hunting I could see other projects being affected and showing
>> > abnormally high run times of the dsvm jobs.
>> >
>> > I am not sure that [1] is the right approach, but it should give us
>> > some data points if executed successfully.
>> >
>> > Cheers,
>> > Armando
>> >
>> > [1]  https://review.openstack.org/#/c/270024/
>> >
>> >
>> > --
>> > Kevin Benton
>> >

Re: [openstack-dev] [fuel][plugins] Detached components plugin update requirement

2016-01-21 Thread Swann Croiset
Sergii,
I'm also curious: what about plugins which intend to be compatible with
both MOS 7 and MOS 8? I have in mind the LMA plugins stable/0.8.

BR

--
Swann

On Wed, Jan 20, 2016 at 8:34 PM, Sergii Golovatiuk  wrote:

> The plugin's master branch won't be compatible with older versions, though
> the plugin developer may create a stable branch to keep compatibility with
> older versions.
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Wed, Jan 20, 2016 at 6:41 PM, Dmitry Mescheryakov <
> dmescherya...@mirantis.com> wrote:
>
>> Sergii,
>>
>> I am curious - does it mean that the plugins will stop working with older
>> versions of Fuel?
>>
>> Thanks,
>>
>> Dmitry
>>
>> 2016-01-20 19:58 GMT+03:00 Sergii Golovatiuk :
>>
>>> Hi,
>>>
>>> Recently I merged the change to master and 8.0 that moves one task from
>>> Nailgun to Library [1]. Actually, it replaces [2] to allow operators more
>>> flexibility with repository management. However, it affects the detached
>>> components, as they will require one more task to be added, as written at [3].
>>> Please adapt your plugin accordingly.
>>>
>>> [1]
>>> https://review.openstack.org/#/q/I1b83e3bfaebecdb8455d5697e320f24fb4941536
>>> [2]
>>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L149-L190
>>> [3] https://review.openstack.org/#/c/270232/1/deployment_tasks.yaml
>>>
>>> --
>>> Best regards,
>>> Sergii Golovatiuk,
>>> Skype #golserge
>>> IRC #holser
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-01-21 Thread Yuriy Taraday
By the way, it would be very helpful for testing external tools if we had a
7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
with a "stable/7.0.1" branch instead of a "7.0.1" tag.

On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko  wrote:

> Releasing a beta version sounds like a good plan but does OpenStack Infra
> actually support this?
>
> > 20 січ. 2016 р. о 12:05 Oleg Gelbukh 
> написав(ла):
> >
> > Hi,
> >
> > Currently we're experiencing issues with Python dependencies of our
> package (fuel-octane), specifically between fuelclient's dependencies and
> keystoneclient dependencies.
> >
> > New keystoneclient is required to work with the new version of Nailgun
> due to introduction of SSL in the latter. On the other hand, fuelclient is
> released along with the main release of Fuel, and the latest version
> available from PyPI is 7.0.0, and it has very old dependencies (based on
> packages available in centos6/python26).
> >
> > The solution I'd like to propose is to release a beta version of
> fuelclient (8.0.0b1) with updated requirements ASAP. With the --pre flag to
> pip/tox, this will allow running unit tests against the proper set of
> requirements. On the other hand, it will not break the users consuming the
> latest stable (7.0.0) version with old requirements from PyPI.
> >
> > Please, share your thoughts and considerations. If no objections, I will
> create a corresponding bug/blueprint against fuelclient to be fixed in the
> current release cycle.
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> > Mirantis
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Removing ml2_type_vxlan.vxlan_group parameter?

2016-01-21 Thread Andreas Scheuring
Convinced :)
-- 
-
Andreas (IRC: scheuran) 

On Mi, 2016-01-20 at 19:58 +, Sean M. Collins wrote:
> On Wed, Jan 20, 2016 at 02:24:35PM EST, Sourabh Patwardhan wrote:
> > This option is used to configure a global multicast group IP for VXLAN
> > networks.
> > The Cisco Nexus1000V mech driver uses the IP address from this config
> > option when creating a VXLAN network in multicast mode.
> 
> Right - and I think that really this shows the reference implementation
> should probably consult the ML2 configuration for vxlan groups instead
> of having it configured in each agent. I mean, it is an ML2 driver now.
> I believe the agent configuration for vxlan group is a carry over from
> when Linux Bridge was a separate plugin, before it became an ML2
> mechanism driver.
> 
> So let's make LB more like the ML2 mechanism driver it's supposed to be.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Chen CH Ji
Guess it's image-list instead of image list, right? Maybe you can check with
nova --debug image-list and see the API request which was sent to the
nova-api server, then analyze the nova-api log to know what exactly the
error is?

-"Bob Hansen" wrote: -
To: openstack-dev@lists.openstack.org
From: "Bob Hansen"
Date: 01/20/2016 10:31PM
Subject: [openstack-dev] nova cli commands fail with 404. devstack
installation from today

Installed devstack today, this morning actually, and most everything works
except simple nova cli commands (nova image list, list, flavor-list all
fail); glance ok, neutron ok.

As an example, nova image list returns:

devstack$ nova image list
ERROR (NotFound): The resource could not be found. (HTTP 404)

However the command "openstack image list" returns the correct list of
cirros images, plus one I have already imported.

key.log has:

127.0.0.1 - - [20/Jan/2016:21:10:49 +] "POST /tokens HTTP/1.1" 404 93 "-"
"keystoneauth1/2.2.0 python-requests/2.9.1 CPython/2.7.6" 2270(us)

Clearly an authentication thing. Since other commands work, e.g. neutron
subnet-list, I concluded keystone auth is just fine.

I suspect it is something in nova.conf. [keystone_authtoken] has this in it,
which stack.sh built:

[keystone_authtoken]
signing_dir = /var/cache/nova
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://127.0.0.1:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = secretservice
username = nova
auth_url = http://127.0.0.1:35357
auth_type = password

Any suggestions on where else to look?

Bob Hansen
z/VM OpenStack Enablement

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugins] Detached components plugin update requirement

2016-01-21 Thread Bartlomiej Piotrowski
Breakage of anything is probably the last thing I intended to achieve with
that patch. Maybe I misunderstand how task dependencies work; let me
describe the *explicit* dependencies I declared in tasks.yaml:

hiera requires deploy_start
hiera is required for setup_repositories
setup_repositories is required for fuel_pkgs
setup_repositories requires hiera
fuel_pkgs requires setup_repositories
fuel_pkgs is required for globals

Coming from the packaging realm, there is a clear transitive dependency for
anything that pulls in the globals task, i.e. if task foo depends on globals,
the latter pulls in fuel_pkgs, which brings setup_repositories in. I'm in
favor of reverting both patches (master and stable/8.0) if it's going to
break backwards compatibility, but I really see a bigger problem in the way
we handle task dependencies.
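
To make the transitivity concrete, a tiny illustrative Python sketch (the
graph below only encodes the chain described above; the helper is mine, not
Fuel code):

# Dependency graph from tasks.yaml, as described above ('foo' stands in
# for any plugin task that depends on globals).
deps = {
    'setup_repositories': ['hiera'],
    'fuel_pkgs': ['setup_repositories'],
    'globals': ['fuel_pkgs'],
    'foo': ['globals'],
}

def transitive(task, graph=deps):
    """Return every task pulled in, directly or indirectly, by `task`."""
    seen = set()
    stack = list(graph.get(task, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

print(transitive('foo'))
# {'globals', 'fuel_pkgs', 'setup_repositories', 'hiera'}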

Bartłomiej

On Thu, Jan 21, 2016 at 9:51 AM, Swann Croiset 
wrote:

> Sergii,
> I'm also curious: what about plugins which intend to be compatible with
> both MOS 7 and MOS 8?
> I have in mind the LMA plugins stable/0.8.
>
> BR
>
> --
> Swann
>
> On Wed, Jan 20, 2016 at 8:34 PM, Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> The plugin's master branch won't be compatible with older versions, though
>> the plugin developer may create a stable branch to keep compatibility with
>> older versions.
>>
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Wed, Jan 20, 2016 at 6:41 PM, Dmitry Mescheryakov <
>> dmescherya...@mirantis.com> wrote:
>>
>>> Sergii,
>>>
>>> I am curious - does it mean that the plugins will stop working with
>>> older versions of Fuel?
>>>
>>> Thanks,
>>>
>>> Dmitry
>>>
>>> 2016-01-20 19:58 GMT+03:00 Sergii Golovatiuk :
>>>
 Hi,

 Recently I merged the change to master and 8.0 that moves one task from
 Nailgun to Library [1]. Actually, it replaces [2] to allow operator more
 flexibility with repository management.  However, it affects the detached
 components as they will require one more task to add as written at [3].
 Please adapt your plugin accordingly.

 [1]
 https://review.openstack.org/#/q/I1b83e3bfaebecdb8455d5697e320f24fb4941536
 [2]
 https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L149-L190
 [3] https://review.openstack.org/#/c/270232/1/deployment_tasks.yaml

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Relieving CI/gate jenkins bottleneck

2016-01-21 Thread Bartlomiej Piotrowski
Let's drop 3.3 as well. 3.4 is oldschool enough for vintage lovers.

BP

On Thu, Jan 21, 2016 at 11:03 AM, Aleksandr Didenko 
wrote:

> Hi,
>
> > I also think 3.3 is the version that ships with 14.04.
>
> 3.4.3 is shipped with Ubuntu-14.04. I think 3.4, 3.8 and 4 should be
> enough.
>
> Regards,
> Alex
>
> On Wed, Jan 20, 2016 at 6:38 PM, Sergii Golovatiuk <
> sgolovat...@mirantis.com> wrote:
>
>> +1 for 3.3, 3.4, 3.8 and 4
>>
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Wed, Jan 20, 2016 at 6:12 PM, Alex Schultz 
>> wrote:
>>
>>> On Wed, Jan 20, 2016 at 9:02 AM, Matthew Mosesohn
>>>  wrote:
>>> > Hi all,
>>> >
>>> > Unit tests on CI and gate bottleneck are really slowing down commit
>>> > progress. We recently had a meeting to discuss possible ways to improve
>>> > this, including symlinks, caching git repositories, etc, but one thing
>>> we
>>> > can do much faster is to simply disable 3.3-3.7 puppet jobs. We don't
>>> deploy
>>> > Fuel 9.0 (or 8.0) on earlier Puppet versions, so what value is there
>>> to the
>>> > checks? I propose we remove these tests, and hopefully we will see some
>>> > immediate relief.
>>> >
>>>
>>> How about we reduce to 3.3, 3.4, 3.8 and 4? We would remove 3.6 and
>>> 3.7, which would reduce the number of jobs by a third. The goal of
>>> keeping the others was to ensure that if/when we are able to install
>>> fuel-library without our version of puppet that a user could use
>>> whatever version their environment has. There were some changes
>>> between 3.3 and 3.4 (if I remember correctly) so we should keep
>>> checking that as it's also the oldest version supported by the
>>> upstream puppet openstack modules.  I also think 3.3 is the version
>>> that ships with 14.04.  Additionally we used 3.4 in fuel 7 and below
>>> so we should keep those around.
>>>
>>> -Alex
>>>
>>> > Best Regards,
>>> > Matthew Mosesohn
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] L3-HA - Solution for GW not pingable issue bug/1365461

2016-01-21 Thread Lubosz Kosnik

Hey neutrinos,

Currently I'm working on this bug [1]. Almost one year ago Yoni Shafrir 
prepared a patch to fix this issue, but he was told in review that the 
solution had to be changed, because it used only one script to check the 
GW availability and therefore could not be used in a multi-tenant 
environment.
I took his code and tried to upgrade it to support multiple scripts, but 
ended up designing a separate solution instead.

I would like to know what you think about this solution:

1. Add a bash script generator to neutron/agent/linux/keepalived.py
2. There will be one script per keepalived instance per node
3. There are two possible ways for the script to check that everything is 
working OK. It will verify:
    a. That all interfaces are up - internal router interfaces in the 
namespace, the external interface taken from the neutron configuration 
file, and also the br-tun/br-int interfaces.
    b. That the GW is pingable from the router namespace - the one problem 
here is: what if the GW is not yet configured on the router? As a fallback 
we could ping another network node or some other server whose IP is 
specified in the configuration. (A rough sketch of the generator idea 
follows below.)

That solution will also fix this issue [2].
I would like to hear what you think about these two possible checks, and 
what you think about the solution as a whole.
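
For illustration only, a rough sketch of the kind of per-router check-script
generator meant in (1); every name here is hypothetical, not actual Neutron
code:

# Hypothetical sketch: render one health-check script per keepalived
# instance. keepalived would run it via a vrrp_script block; a non-zero
# exit marks the router as failed on this node.
import os

CHECK_SCRIPT = """#!/bin/sh
# Generated health check for router %(router_id)s.
ip netns exec qrouter-%(router_id)s ping -c 1 -W 1 %(gateway_ip)s
"""

def write_check_script(path, router_id, gateway_ip):
    script = CHECK_SCRIPT % {'router_id': router_id,
                             'gateway_ip': gateway_ip}
    with open(path, 'w') as f:
        f.write(script)
    os.chmod(path, 0o755)  # keepalived needs the script to be executable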


Cheers,
Lubosz (diltram) Kosnik

[1] https://bugs.launchpad.net/neutron/+bug/1365461
[2] https://bugs.launchpad.net/neutron/+bug/1375625

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Relieving CI/gate jenkins bottleneck

2016-01-21 Thread Aleksandr Didenko
Hi,

> I also think 3.3 is the version that ships with 14.04.

3.4.3 is shipped with Ubuntu-14.04. I think 3.4, 3.8 and 4 should be enough.

Regards,
Alex

On Wed, Jan 20, 2016 at 6:38 PM, Sergii Golovatiuk  wrote:

> +1 for 3.3, 3.4, 3.8 and 4
>
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Wed, Jan 20, 2016 at 6:12 PM, Alex Schultz 
> wrote:
>
>> On Wed, Jan 20, 2016 at 9:02 AM, Matthew Mosesohn
>>  wrote:
>> > Hi all,
>> >
>> > Unit tests on CI and gate bottleneck are really slowing down commit
>> > progress. We recently had a meeting to discuss possible ways to improve
>> > this, including symlinks, caching git repositories, etc, but one thing
>> we
>> > can do much faster is to simply disable 3.3-3.7 puppet jobs. We don't
>> deploy
>> > Fuel 9.0 (or 8.0) on earlier Puppet versions, so what value is there to
>> the
>> > checks? I propose we remove these tests, and hopefully we will see some
>> > immediate relief.
>> >
>>
>> How about we reduce to 3.3, 3.4, 3.8 and 4? We would remove 3.6 and
>> 3.7, which would reduce the number of jobs by a third. The goal of
>> keeping the others was to ensure that if/when we are able to install
>> fuel-library without our version of puppet that a user could use
>> whatever version their environment has. There were some changes
>> between 3.3 and 3.4 (if I remember correctly) so we should keep
>> checking that as it's also the oldest version supported by the
>> upstream puppet openstack modules.  I also think 3.3 is the version
>> that ships with 14.04.  Additionally we used 3.4 in fuel 7 and below
>> so we should keep those around.
>>
>> -Alex
>>
>> > Best Regards,
>> > Matthew Mosesohn
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] collectd-Monasca Python plugin

2016-01-21 Thread Alonso Hernandez, Rodolfo
Hello:

We are doing (or at least planning) a collectd-Monasca Python plugin. This
plugin will receive the data from RPC calls from collectd and will write this
data into Monasca, using the statsd API.

My question is: do you think this development could be useful? Is it worth
doing? Any comments?
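
To make the shape of the plugin concrete, a provisional sketch - assuming
collectd's Python plugin API and a plain statsd UDP listener on the Monasca
side (such as monasca-statsd); the address and naming scheme are assumptions:

import socket

import collectd  # provided by collectd's Python plugin host

STATSD_ADDR = ('127.0.0.1', 8125)  # assumed monasca-statsd listener
_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def write_callback(vl):
    # vl is a collectd.Values instance holding one or more samples.
    for i, value in enumerate(vl.values):
        name = '.'.join(p for p in (vl.plugin, vl.plugin_instance,
                                    vl.type, vl.type_instance) if p)
        # statsd gauge wire format: <name>:<value>|g
        msg = '%s.%d:%f|g' % (name, i, value)
        _sock.sendto(msg.encode('ascii'), STATSD_ADDR)

collectd.register_write(write_callback)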

Thank you in advance. Regards.

Rodolfo Alonso.
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] New gate jobs for 'fuel-agent' and 'python-fuelclient' packages

2016-01-21 Thread Dmitry Kaiharodsev
Yes,
we have a bug [1] for making a similar [2] gate job for 'fuel-web',
and we got a successful run of the job [3] today.

Expecting approval from the 'fuel-qa' team before moving forward.

[1] https://bugs.launchpad.net/fuel/+bug/1532129
[2]
https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_review_fuel_web.py#L38-L49
[3]
https://ci.fuel-infra.org/job/master.fuel-web.pkgs.ubuntu.review_fuel_web_deploy/4/

On Thu, Jan 21, 2016 at 3:33 PM, Igor Kalnitsky 
wrote:

> Thanks. Any plans to add similar job for fuel-web?
>
> On Thu, Jan 21, 2016 at 1:34 PM, Dmitry Kaiharodsev
>  wrote:
> > Hi Igor,
> >
> > according to the script [1] - by default we're building RPM package,
> > and if in a package repository exists 'debian' folder - trying to build
> DEB
> > as well
> >
> > [1]
> >
> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/builders/build-pkgs.sh
> >
> > On Thu, Jan 21, 2016 at 12:51 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> > wrote:
> >>
> >> Hey Dmitry -
> >>
> >> That's cool, thank you. I wonder you build RPM or DEB or both?
> >>
> >> - Igor
> >>
> >> On Thu, Jan 21, 2016 at 12:48 PM, Dmitry Kaiharodsev
> >>  wrote:
> >> > Hi to all,
> >> >
> >> > please be informed that starting from today we're launching additional
> >> > gating jobs [1] [2]:
> >> >
> >> > - for 'fuel-agent' package [3]
> >> > - for 'python-fuelclient' package [4]
> >> >
> >> > Mentioned jobs will be started on each commit and will do following
> >> > steps:
> >> > - build packages from the commit
> >> > - run system tests scenario [5] [6] with using created packages
> >> > - vote in a patchset
> >> >
> >> > Job duration:
> >> > - for 'fuel-agent' package - 20 min [7]
> >> > - for 'python-fuelclient' package - 45 min [8]
> >> >
> >> > For any additional questions please use our #fuel-infra IRC channel
> >> >
> >> > [1]
> >> >
> >> >
> https://ci.fuel-infra.org/job/master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision/
> >> > [2]
> >> >
> >> >
> https://ci.fuel-infra.org/job/master.python-fuelclient.pkgs.ubuntu.review_fuel_client/
> >> > [3] https://github.com/openstack/fuel-agent
> >> > [4] https://github.com/openstack/python-fuelclient
> >> > [5] for 'fuel-agent' package
> >> >
> >> >
> https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_review_in_fuel_agent.py#L41-L48
> >> > [6] for 'python-fuelclient' package
> >> >
> >> >
> https://github.com/openstack/fuel-qa/blob/master/gates_tests/tests/test_review_in_fuel_client.py#L102-L113
> >> > [7]
> >> >
> >> >
> https://ci.fuel-infra.org/job/master.fuel-agent.pkgs.ubuntu.review_fuel_agent_one_node_provision/buildTimeTrend
> >> > [8]
> >> >
> >> >
> https://ci.fuel-infra.org/job/master.python-fuelclient.pkgs.ubuntu.review_fuel_client/buildTimeTrend
> >> > --
> >> >
> >> > Dmitry Kaigarodtsev
> >> >
> >> > IRC: dkaiharodsev
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> > Kind Regards,
> > Dmitry Kaigarodtsev
> > Fuel Ci Engineer
> >
> > IRC: dkaiharodsev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards,
Dmitry Kaigarodtsev
Fuel Ci Engineer

IRC: dkaiharodsev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-21 Thread Dougal Matthews
On 21 January 2016 at 14:46, Dougal Matthews  wrote:

>
>
> On 20 January 2016 at 20:05, Tzu-Mainn Chen  wrote:
>
>> - Original Message -
>> > On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
>> > > - Original Message -
>> > >> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
>> > >>>
>> > >>> - Original Message -
>> >  On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
>> > > Hey all,
>> > >
>> > > I realize now from the title of the other TripleO/Mistral thread
>> > > [1] that
>> > > the discussion there may have gotten confused.  I think using
>> > > Mistral for
>> > > TripleO processes that are obviously workflows - stack
>> > > deployment, node
>> > > registration - makes perfect sense.  That thread is exploring
>> > > practicalities
>> > > for doing that, and I think that's great work.
>> > >
>> > > What I inappropriately started to address in that thread was a
>> > > somewhat
>> > > orthogonal point that Dan asked in his original email, namely:
>> > >
>> > > "what it might look like if we were to use Mistral as a
>> > > replacement for the
>> > > TripleO API entirely"
>> > >
>> > > I'd like to create this thread to talk about that; more of a
>> > > 'should we'
>> > > than 'can we'.  And to do that, I want to indulge in a thought
>> > > exercise
>> > > stemming from an IRC discussion with Dan and others.  All, please
>> > > correct
>> > > me
>> > > if I've misstated anything.
>> > >
>> > > The IRC discussion revolved around one use case: deploying a Heat
>> > > stack
>> > > directly from a Swift container.  With an updated patch, the Heat
>> > > CLI can
>> > > support this functionality natively.  Then we don't need a
>> > > TripleO API; we
>> > > can use Mistral to access that functionality, and we're done,
>> > > with no need
>> > > for additional code within TripleO.  And, as I understand it,
>> > > that's the
>> > > true motivation for using Mistral instead of a TripleO API:
>> > > avoiding custom
>> > > code within TripleO.
>> > >
>> > > That's definitely a worthy goal... except from my perspective,
>> > > the story
>> > > doesn't quite end there.  A GUI needs additional functionality,
>> > > which boils
>> > > down to: understanding the Heat deployment templates in order to
>> > > provide
>> > > options for a user; and persisting those options within a Heat
>> > > environment
>> > > file.
>> > >
>> > > Right away I think we hit a problem.  Where does the code for
>> > > 'understanding
>> > > options' go?  Much of that understanding comes from the
>> > > capabilities map
>> > > in tripleo-heat-templates [2]; it would make sense to me that
>> > > responsibility
>> > > for that would fall to a TripleO library.
>> > >
>> > > Still, perhaps we can limit the amount of TripleO code.  So to
>> > > give API
>> > > access to 'getDeploymentOptions', we can create a Mistral
>> > > workflow.
>> > >
>> > >Retrieve Heat templates from Swift -> Parse capabilities map
>> > >
>> > > Which is fine-ish, except from an architectural perspective
>> > > 'getDeploymentOptions' violates the abstraction layer between
>> > > storage and
>> > > business logic, a problem that is compounded because
>> > > 'getDeploymentOptions'
>> > > is not the only functionality that accesses the Heat templates
>> > > and needs
>> > > exposure through an API.  And, as has been discussed on a
>> > > separate TripleO
>> > > thread, we're not even sure Swift is sufficient for our needs;
>> > > one possible
>> > > consideration right now is allowing deployment from templates
>> > > stored in
>> > > multiple places, such as the file system or git.
>> > 
>> >  Actually, that whole capabilities map thing is a workaround for a
>> >  missing
>> >  feature in Heat, which I have proposed, but am having a hard time
>> >  reaching
>> >  consensus on within the Heat community:
>> > 
>> >  https://review.openstack.org/#/c/196656/
>> > 
>> >  Given that is a large part of what's anticipated to be provided by
>> >  the
>> >  proposed TripleO API, I'd welcome feedback and collaboration so we
>> >  can move
>> >  that forward, vs solving only for TripleO.
>> > 
>> > > Are we going to have duplicate 'getDeploymentOptions' workflows
>> > > for each
>> > > storage mechanism?  If we consolidate the storage code within a
>> > > TripleO
>> > > library, do we really need a *workflow* to call a single
>> > > function?  Is a
>> > > thin TripleO API that contains no additional business logic
>> > > really so bad
>> > > at that point?
>> > 
>> >  Actually, 

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-21 Thread Tzu-Mainn Chen
- Original Message -

> On 21 January 2016 at 14:46, Dougal Matthews <dou...@redhat.com> wrote:
>
> > On 20 January 2016 at 20:05, Tzu-Mainn Chen <tzuma...@redhat.com> wrote:
> >
> > > - Original Message -
> > > > On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
> > > > > - Original Message -
> > > > > > On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> > > > > > >
> > > > > > > - Original Message -
> > > > > > > > On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > > > > > > > Hey all,
> > > > > > > > >
> > > > > > > > > I realize now from the title of the other TripleO/Mistral
> > > > > > > > > thread [1] that the discussion there may have gotten
> > > > > > > > > confused. I think using Mistral for TripleO processes that
> > > > > > > > > are obviously workflows - stack deployment, node
> > > > > > > > > registration - makes perfect sense. That thread is exploring
> > > > > > > > > practicalities for doing that, and I think that's great work.
> > > > > > > > >
> > > > > > > > > What I inappropriately started to address in that thread was
> > > > > > > > > a somewhat orthogonal point that Dan asked in his original
> > > > > > > > > email, namely:
> > > > > > > > >
> > > > > > > > > "what it might look like if we were to use Mistral as a
> > > > > > > > > replacement for the TripleO API entirely"
> > > > > > > > >
> > > > > > > > > I'd like to create this thread to talk about that; more of a
> > > > > > > > > 'should we' than 'can we'. And to do that, I want to indulge
> > > > > > > > > in a thought exercise stemming from an IRC discussion with
> > > > > > > > > Dan and others. All, please correct me if I've misstated
> > > > > > > > > anything.
> > > > > > > > >
> > > > > > > > > The IRC discussion revolved around one use case: deploying a
> > > > > > > > > Heat stack directly from a Swift container. With an updated
> > > > > > > > > patch, the Heat CLI can support this functionality natively.
> > > > > > > > > Then we don't need a TripleO API; we can use Mistral to
> > > > > > > > > access that functionality, and we're done, with no need for
> > > > > > > > > additional code within TripleO. And, as I understand it,
> > > > > > > > > that's the true motivation for using Mistral instead of a
> > > > > > > > > TripleO API: avoiding custom code within TripleO.
> > > > > > > > >
> > > > > > > > > That's definitely a worthy goal... except from my
> > > > > > > > > perspective, the story doesn't quite end there. A GUI needs
> > > > > > > > > additional functionality, which boils down to: understanding
> > > > > > > > > the Heat deployment templates in order to provide options
> > > > > > > > > for a user; and persisting those options within a Heat
> > > > > > > > > environment file.
> > > > > > > > >
> > > > > > > > > Right away I think we hit a problem. Where does the code for
> > > > > > > > > 'understanding options' go? Much of that understanding comes
> > > > > > > > > from the capabilities map in tripleo-heat-templates [2]; it
> > > > > > > > > would make sense to me that responsibility for that would
> > > > > > > > > fall to a TripleO library.
> > > > > > > > >
> > > > > > > > > Still, perhaps we can limit the amount of TripleO code. So
> > > > > > > > > to give API access to 'getDeploymentOptions', we can create
> > > > > > > > > a Mistral workflow.
> > > > > > > > >
> > > > > > > > >    Retrieve Heat templates from Swift -> Parse capabilities map
> > > > > > > > >
> > > > > > > > > Which is fine-ish, except from an architectural perspective
> > > > > > > > > 'getDeploymentOptions' violates the abstraction layer
> > > > > > > > > between storage and business logic, a problem that is
> > > > > > > > > compounded because 'getDeploymentOptions' is not the only
> > > > > > > > > functionality that accesses the Heat templates and needs
> > > > > > > > > exposure through an API. And, as has been discussed on a
> > > > > > > > > separate TripleO thread, we're not even sure Swift is
> > > > > > > > > sufficient for our needs; one possible consideration right
> > > > > > > > > now is allowing deployment from templates stored in multiple
> > > > > > > > > places, such as the file system or git.

Re: [openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Matt Riedemann



On 1/21/2016 7:33 AM, Sean Dague wrote:

On 01/21/2016 08:18 AM, Davanum Srinivas wrote:

Hi,

Failures for this job have been trending up and are contributing to the large
gate queue as well. I've logged a bug:
https://bugs.launchpad.net/openstack-gate/+bug/1536622

and am requesting switching the voting to off for this job:
https://review.openstack.org/#/c/270788/

We need to find and fix the underlying issue, which can help us
determine when to switch this back to voting, or we clean up this job
from all the gate queues and move it to the check queues (I have a TODO
for this in the review)


By trending up we mean above 75% failure rate - http://tinyurl.com/zrq35e8

All the spot checking of jobs I've done shows the job dying on the liberty
side validation with test_volume_boot_pattern, which means we've never
even gotten to any of the real grenade logic.

+2 on non-voting.

-Sean



clarkb was looking into this yesterday, see the IRC logs starting here:

http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-01-20.log.html#t2016-01-20T22:44:24

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-21 Thread Dougal Matthews
On 20 January 2016 at 20:05, Tzu-Mainn Chen  wrote:

> - Original Message -
> > On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
> > > - Original Message -
> > >> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> > >>>
> > >>> - Original Message -
> >  On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > Hey all,
> > >
> > > I realize now from the title of the other TripleO/Mistral thread
> > > [1] that
> > > the discussion there may have gotten confused.  I think using
> > > Mistral for
> > > TripleO processes that are obviously workflows - stack
> > > deployment, node
> > > registration - makes perfect sense.  That thread is exploring
> > > practicalities
> > > for doing that, and I think that's great work.
> > >
> > > What I inappropriately started to address in that thread was a
> > > somewhat
> > > orthogonal point that Dan asked in his original email, namely:
> > >
> > > "what it might look like if we were to use Mistral as a
> > > replacement for the
> > > TripleO API entirely"
> > >
> > > I'd like to create this thread to talk about that; more of a
> > > 'should we'
> > > than 'can we'.  And to do that, I want to indulge in a thought
> > > exercise
> > > stemming from an IRC discussion with Dan and others.  All, please
> > > correct
> > > me
> > > if I've misstated anything.
> > >
> > > The IRC discussion revolved around one use case: deploying a Heat
> > > stack
> > > directly from a Swift container.  With an updated patch, the Heat
> > > CLI can
> > > support this functionality natively.  Then we don't need a
> > > TripleO API; we
> > > can use Mistral to access that functionality, and we're done,
> > > with no need
> > > for additional code within TripleO.  And, as I understand it,
> > > that's the
> > > true motivation for using Mistral instead of a TripleO API:
> > > avoiding custom
> > > code within TripleO.
> > >
> > > That's definitely a worthy goal... except from my perspective,
> > > the story
> > > doesn't quite end there.  A GUI needs additional functionality,
> > > which boils
> > > down to: understanding the Heat deployment templates in order to
> > > provide
> > > options for a user; and persisting those options within a Heat
> > > environment
> > > file.
> > >
> > > Right away I think we hit a problem.  Where does the code for
> > > 'understanding
> > > options' go?  Much of that understanding comes from the
> > > capabilities map
> > > in tripleo-heat-templates [2]; it would make sense to me that
> > > responsibility
> > > for that would fall to a TripleO library.
> > >
> > > Still, perhaps we can limit the amount of TripleO code.  So to
> > > give API
> > > access to 'getDeploymentOptions', we can create a Mistral
> > > workflow.
> > >
> > >Retrieve Heat templates from Swift -> Parse capabilities map
> > >
> > > Which is fine-ish, except from an architectural perspective
> > > 'getDeploymentOptions' violates the abstraction layer between
> > > storage and
> > > business logic, a problem that is compounded because
> > > 'getDeploymentOptions'
> > > is not the only functionality that accesses the Heat templates
> > > and needs
> > > exposure through an API.  And, as has been discussed on a
> > > separate TripleO
> > > thread, we're not even sure Swift is sufficient for our needs;
> > > one possible
> > > consideration right now is allowing deployment from templates
> > > stored in
> > > multiple places, such as the file system or git.
> > 
> >  Actually, that whole capabilities map thing is a workaround for a
> >  missing
> >  feature in Heat, which I have proposed, but am having a hard time
> >  reaching
> >  consensus on within the Heat community:
> > 
> >  https://review.openstack.org/#/c/196656/
> > 
> >  Given that is a large part of what's anticipated to be provided by
> >  the
> >  proposed TripleO API, I'd welcome feedback and collaboration so we
> >  can move
> >  that forward, vs solving only for TripleO.
> > 
> > > Are we going to have duplicate 'getDeploymentOptions' workflows
> > > for each
> > > storage mechanism?  If we consolidate the storage code within a
> > > TripleO
> > > library, do we really need a *workflow* to call a single
> > > function?  Is a
> > > thin TripleO API that contains no additional business logic
> > > really so bad
> > > at that point?
> > 
> >  Actually, this is an argument for making the validation part of the
> >  deployment a workflow - then the interface with the storage
> >  mechanism
> >  becomes more easily pluggable vs baked 

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Ryan Brown

On 01/21/2016 06:23 AM, Chris Dent wrote:

On Wed, 20 Jan 2016, Flavio Percoco wrote:


- It was mentioned that some folks receive bonuses for landed features


In this thread we've had people recoil in shock at this ^ one...


- Economic impact on companies/market because no new features were
added (?)


...but I have to say it was this ^ one that gave me the most concern.

At the opensource project level I really don't think this should be
something we're actively worrying about. What we should be worrying
about is if OpenStack is any good. Often "good" will include features,
but not all the time.

Let the people doing the selling worry about the market, if they
want. That stuff is, or at least should be, on the other side of a
boundary.


I'm certain that they will worry about the market.

But look at where contributions come from. A glance at stackalytics says 
that only 11% of contributors are independent, meaning companies account 
for 89% of the contributions. Whether we acknowledge it at the project 
level or not, features and "the OpenStack market" are going to be a 
priority for some portion of those 89% of contributions.


Those contributors also want openstack to be "good" but they also have 
deadlines to meet internally. Having a freeze upstream for stabilization 
is going to put downstream development into overdrive, no doubt. That 
would be a poor precedent to have set given where the bulk of 
contributions come from.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Removing the Tuskar repos

2016-01-21 Thread Dougal Matthews
On 22 December 2015 at 11:38, Dougal Matthews  wrote:

> Hi all,
>
> I mentioned this at the meeting last week, but wanted to get wider input.
> As far as I can tell from the current activity, there is no work going into
> Tuskar and it isn't being tested with CI. This means the code is becoming
> more stale quickly and likely won't work soon (if not already).
>
> TripleO common is working towards solving the same problems that Tuskar
> attempted and can be seen as the replacement for Tuskar. [1][2]
>
> Are there any objections to its removal? This would include the tuskar,
> python-tuskarclient and tuskar-ui repos. We would also need to remove it
> from instack-undercloud and tripleo-image-elements.
>
> I'll start the cleanup process sometime in mid/late January
> if there are no objections.
>

It seems about time to kick this off! So, for anyone that was following
this thread: I have started the process for removing the Tuskar repos with
this review:
- https://review.openstack.org/#/c/270850/

Followed by these three dependent reviews:
- https://review.openstack.org/270855
- https://review.openstack.org/270854
- https://review.openstack.org/270851

Then finally, dependent on all of the above:
- https://review.openstack.org/#/c/270869/


> Cheers,
> Dougal
>
>
> [1]:
> https://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-overcloud-deployment-library.html
> [2]: https://review.openstack.org/230432
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage Demo - Get Topology Use Case

2016-01-21 Thread Weyl, Alexey (Nokia - IL)
Hi,

We are happy to share with you all our first Vitrage demo, showing the "get 
topology" API and Horizon UI. Here it is:

https://www.youtube.com/watch?v=GyTnMw8stXQ

For more details about Vitrage, please visit our wiki here:

https://wiki.openstack.org/wiki/Vitrage 

Thanks,
Alexey



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Bob Hansen

Yes, it is image-list, not image list. I don't seem to be able to find any
other hints in any of the nova logs.

nova --debug image-list shows this:

DEBUG (extension:157) found extension EntryPoint.parse('token =
keystoneauth1.loading._plugins.identity.generic:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3token =
keystoneauth1.loading._plugins.identity.v3:Token')
DEBUG (extension:157) found extension EntryPoint.parse('password =
keystoneauth1.loading._plugins.identity.generic:Password')
DEBUG (v2:62) Making authentication request to
http://127.0.0.1:35357/tokens
INFO (connectionpool:207) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:387) "POST /tokens HTTP/1.1" 404 93
DEBUG (session:439) Request returned failure status: 404
DEBUG (shell:894) The resource could not be found. (HTTP 404)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
892, in main
OpenStackComputeShell().main(argv)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
726, in main
api_version = api_versions.discover_version(self.cs, api_version)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 267, in discover_version
client)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 248, in _get_server_version_range
version = client.versions.get_current()
  File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 83, in get_current
return self._get_current()
  File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 56, in _get_current
url = "%s" % self.api.client.get_endpoint()
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",
line 132, in get_endpoint
return self.session.get_endpoint(auth or self.auth, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 634, in get_endpoint
return auth.get_endpoint(self, **kwargs)
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 209, in get_endpoint
service_catalog = self.get_access(session).service_catalog
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 135, in get_access
self.auth_ref = self.get_auth_ref(session)
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v2.py", line
64, in get_auth_ref
authenticated=False, log=False)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 545, in post
return self.request(url, 'POST', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/_utils.py",
line 180, in inner
return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 440, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404)



Bob Hansen
z/VM OpenStack Enablement




From:   "Chen CH Ji" 
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   01/21/2016 04:25 AM
Subject:Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today




Guess it's image-list instead of image list, right? Maybe you can check
with nova --debug image-list and see the API request which was
sent to the nova-api server, then analyze the nova-api log to know what
exactly the error is?

-"Bob Hansen"  wrote: -
To: openstack-dev@lists.openstack.org
From: "Bob Hansen" 
Date: 01/20/2016 10:31PM
Subject: [openstack-dev] nova cli commands fail with 404. devstack
installation from today



Installed devstack today, this morning actually, and most everything
works except simple nova cli commands (nova image list, list,
flavor-list all fail); glance ok, neutron ok.

As an example, nova image list returns:

devstack$ nova image list
ERROR (NotFound): The resource could not be found. (HTTP 404)

However the command; openstack image list returns the correct list of
cirros images, plus one I have already imported.

key.log has:

127.0.0.1 - - [20/Jan/2016:21:10:49 +] "POST /tokens HTTP/1.1" 404 93
"-" "keystoneauth1/2.2.0 python-requests/2.9.1 CPython/2.7.6" 2270(us)

Clearly an authentication thing. Since other commands work, e.g. neutron
subnet-list, I concluded keystone auth is just fine.

I suspect it is something in nova.conf. [keystone_auth] has this in it,
which stack.sh built

[keystone_authtoken]
signing_dir = /var/cache/nova
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://127.0.0.1:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = secretservice
username = nova
auth_url = http://127.0.0.1:35357
auth_type = password

Any suggestions on where else to look?

Bob Hansen
z/VM OpenStack Enablement




Re: [openstack-dev] [nova] Feature suggestion - API for creating VM without powering it up

2016-01-21 Thread Matt Riedemann



On 1/20/2016 10:57 AM, Shoham Peller wrote:

Hi,

I would like to suggest a feature in nova to allow creating a VM,
without powering it up.

If the user will be able to create a stopped VM, it will allow for
better flexibility and user automation.

I can personally say such a feature would greatly improve the convenience
of my work with nova - currently we shut down each VM manually as we're
creating it, as sketched below.
What do you think?
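
(For illustration, the manual flow today looks roughly like this with
python-novaclient - a sketch with placeholder credentials and the 2.x-era
client signature, not tested:)

# Boot, wait for the build to finish, then immediately power off.
import time
from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone:5000/v2.0')
server = nova.servers.create('my-vm', 'image-id', 'flavor-id')
while nova.servers.get(server.id).status == 'BUILD':
    time.sleep(2)  # wait for the VM to come up...
nova.servers.stop(server)  # ...only to shut it down right away

A "create stopped" option would collapse this into a single call.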

Regards,
Shoham Peller


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What is your use case?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Matt Riedemann



On 1/21/2016 7:33 AM, Sean Dague wrote:

On 01/21/2016 08:18 AM, Davanum Srinivas wrote:

Hi,

Failures for this job have been trending up and are contributing to the large
gate queue as well. I've logged a bug:
https://bugs.launchpad.net/openstack-gate/+bug/1536622

and am requesting switching the voting to off for this job:
https://review.openstack.org/#/c/270788/

We need to find and fix the underlying issue, which can help us
determine when to switch this back to voting, or we clean up this job
from all the gate queues and move it to the check queues (I have a TODO
for this in the review)


By trending up we mean above 75% failure rate - http://tinyurl.com/zrq35e8

All the spot checking of jobs I've done shows the job dying on the liberty
side validation with test_volume_boot_pattern, which means we've never
even gotten to any of the real grenade logic.

+2 on non-voting.

-Sean



Potential fix here:

https://review.openstack.org/#/c/270857/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Thierry Carrez

Flavio Percoco wrote:

On 21/01/16 11:55 +0100, Thierry Carrez wrote:

As you said, projects can already decide to restrict feature
development in a given cycle, so this is nothing new. We only need to
communicate more aggressively that it is perfectly fine (and even
encouraged) to define the amount of feature work that is acceptable
for a project for a given cycle.


++

Precisely my point. If there's a way, from a governance perspective, to help
communicate and encourage folks to do this, I want to take it. It was mentioned
that some teams didn't know this was possible, others that felt it was going to
be really hard w/o any support from the governance team, hence this email and
effort.


The light approach would be to document the possibility in the project 
team guide. The heavy approach would be to take a TC resolution. Both 
solutions would have to be mentioned on openstack-dev and the weekly 
digest to get extra publicity.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Tip: jsonformatter site for parsing/debugging logs

2016-01-21 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2016-01-21 10:09:32 -0600:
> Are you tired of trying to strain your eyes to parse something like this 
> in the logs [1]?
> 
> vif=VIF({'profile': {}, 'ovs_interfaceid': 
> u'ac3ca8e7-c22d-4f63-9620-ce031bf3eaac', 'preserve_on_delete': False, 
> 'network': Network({'bridge': u'br-int', 'subnets': [Subnet({'ips': 
> [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': 
> [], 'address': u'10.100.0.18'})], 'version': 4, 'meta': {u'dhcp_server': 
> u'10.100.0.17'}, 'dns': [], 'routes': [], 'cidr': u'10.100.0.16/28', 
> 'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 
> 'address': None})})], 'meta': {u'injected': False, u'tenant_id': 
> u'1d760ac487e24e06add18dacefa221a1'}, 'id': 
> u'b13e9828-2bd9-4fb4-a20d-a92e2a8c1a77', 'label': 
> u'tempest-network-smoke--1979535575'}), 'devname': u'tapac3ca8e7-c2', 
> 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': 
> {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': 
> u'fa:16:3e:0c:d3:95', 'active': False, 'type': u'ovs', 'id': 
> u'ac3ca8e7-c22d-4f63-9620-ce031bf3eaac', 'qbg_params': None}
> 
> I found https://jsonformatter.curiousconcept.com/ which is nice since 
> you can just copy that json from the logs and paste it into the text 
> area and format it (I disable validation).

You can also do this using Python's json module from the command line:

$ echo '{"json":"obj"}' | python -m json.tool
{
  "json": "obj"
}
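
(And for non-JSON object reprs like the VIF example above - single quotes,
u'' prefixes, Class({...}) wrappers - a quick-and-dirty Python sketch; it
assumes you feed it just the VIF({...}) part of the log line, and that every
')' in the repr closes one of those wrappers:)

import ast
import json
import re
import sys

text = sys.stdin.read()
# Drop wrappers such as VIF({...}), Network({...}), IP({...}) ...
text = re.sub(r"\b\w+\(\{", "{", text)
# ... and their matching closing parens (each ')' follows a '}').
text = text.replace("})", "}")
print(json.dumps(ast.literal_eval(text), indent=2))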

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Flavio Percoco

On 21/01/16 16:50 +0100, Thierry Carrez wrote:

Flavio Percoco wrote:

On 21/01/16 11:55 +0100, Thierry Carrez wrote:

As you said, projects can already decide to restrict feature
development in a given cycle, so this is nothing new. We only need to
communicate more aggressively that it is perfectly fine (and even
encouraged) to define the amount of feature work that is acceptable
for a project for a given cycle.


++

Precisely my point. If there's a way, from a governance perspective, to help
communicate and encourage folks to do this, I want to take it. It was mentioned
that some teams didn't know this was possible, others that felt it was going to
be really hard w/o any support from the governance team, hence this email and
effort.


The light approach would be to document the possibility in the project 
team guide. The heavy approach would be to take a TC resolution. Both 
solutions would have to be mentioned on openstack-dev and the weekly 
digest to get extra publicity.


I'm aiming for the heavy approach here and, if it is not redundant, the light
one as well.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][drivers] Re-think the Glance Driver's team

2016-01-21 Thread Flavio Percoco

On 20/01/16 13:16 -0430, Flavio Percoco wrote:

Yo! Glancers,

Gonna cut to the chase: I think we would do a better job on the specs (and light
specs) side if we get rid of the Glance Drivers team and encourage everyone
(especially from the core team) to weigh in.



Thoughts? Critics? Improvements?



For ppl following at home! This was brought up during today's Glance meeting and
there seemed to be agreement towards doing it. I'll work on a formal proposal
and bring it up at the drivers meeting next Tuesday.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-21 Thread Emilien Macchi


On 01/21/2016 08:15 AM, Doug Hellmann wrote:
> Excerpts from Cody Herriges's message of 2016-01-19 15:50:05 -0800:
>> Colleen Murphy wrote:
>>> On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu wrote:
>>>
>>> Hi, Emilien:
>>>
>>>  Thanks for your efforts on this topic. I didn't attend the V
>>> release summit and missed the related discussion about puppet-oslo.
>>>
>>>  As I understand it, the reason for not using a unified way to manage
>>> oslo_* parameters is that different oslo_* versions may exist between
>>> OpenStack projects.
>>> 
>>>  I have an idea to solve this potential problem: we can maintain
>>> several versions of puppet-oslo, and each module can map to a different
>>> version of puppet-oslo.
>>>
>>> It would be something like follows: (the map info is not true,
>>> just for example)
>>>
>>> In Mitaka release
>>> puppet-nova maps to puppet-oslo with 8.0.0
>>> puppet-designate maps to puppet-oslo with 7.0.0
>>> puppet-murano maps to puppet-oslo with 6.0.0
>>>
>>> In Newton release
>>> puppet-nova maps to puppet-oslo with 9.0.0
>>> puppet-designate maps to puppet-oslo with 9.0.0
>>> puppet-murano maps to puppet-oslo with 7.0.0
>>>
>>> For the simplest case of puppet infrastructure configuration, which is a
>>> single puppetmaster with one environment, you cannot have multiple
>>> versions of a single puppet module installed. This means you absolutely
>>> cannot have an openstack infrastructure depend on having different
>>> versions of a single module installed. In your example, a user would not
>>>  be able to use both puppet-nova and puppet-designate since they are
>>> using different versions of the puppet-oslo module.
>>>
>>> When we put out puppet modules, we guarantee that version X.x.x of a
>>> given module works with the same version of every other module, and this
>>> proposal would totally break that guarantee. 
>>>
>>
>> How does OpenStack solve this issue?
>>
>> * Do they literally install several different versions of the same
>> python library?
>> * Does every project vendor oslo?
>> * Is the oslo library itself API compatible with older versions?
> 
> Each Oslo library has its own version. Only one version of each
> library is installed at a time. We use the global requirements list
> to sync compatible requirements specifications across all OpenStack
> projects to make them co-installable. And we try hard to maintain
> API compatibility, using SemVer versioning to indicate when that
> was not possible.
> 
> If you want to have a single puppet module install all of the Oslo
> libraries, you could pull the right versions from the upper-constraints.txt
> file in the openstack/requirements repository. That file lists the
> versions that were actually tested in the gate.

Thanks for this feedback Doug!
So I propose we create the module in openstack namespace, please vote for:
https://review.openstack.org/#/c/270872/

I talked with xingchao on IRC #puppet-openstack and he's doing
project-config patch today.
Maybe could we start with Nova, Neutron, Cinder, Glance, Keystone, see
how it works and iterate later with other modules.

Thoughts are welcome,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-21 Thread Salvatore Orlando
More inline,
Salvatore

On 20 January 2016 at 16:51, Shraddha Pandhe 
wrote:

> Thank you all for the comments.
>
> The client that we expect to call this API with thousands of network-ids
> is nova-scheduler.
>
> Since this call is happening in the middle of scheduling, we don't want to
> spend time in paginating or sending multiple requests. I have tens of
> thousands of networks and subnets in my test cluster right now and with
> that scale, the extension takes more than 2 seconds to return.
>

What percentage of this time is spent in the GET /v2.0/networks call?


> With multiple calls, scheduler will become very slow.
>

If the calls are serialized that is surely correct. As most production
neutron servers employ multiple workers, the overhead of doing multiple
calls in parallel might however be tolerable.
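
For illustration, the client side of that could look roughly like this
(chunk size, worker count, and the list_networks filter call are assumptions
about what fits the deployment):

from concurrent.futures import ThreadPoolExecutor

def chunks(ids, size=100):
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def list_networks_chunked(client, network_ids):
    # each request repeats id=... a bounded number of times, keeping URIs short
    results = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        for batch in pool.map(lambda c: client.list_networks(id=c)['networks'],
                              chunks(network_ids)):
            results.extend(batch)
    return results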
I'd like to understand more about your use case. Here are some additional
questions

Is network-id the only attribute you can filter on?
Assuming Neutron provided tags in the API, could you leverage those?
Why is tenant-id not a viable alternative?


>
> I agree that sending payload with GET is not recommended and most
> libraries just drop the payload for such cases.
>

Nevertheless, we're pretty much in control of that. We've already discussed
this, and doing so does not violate RFC7231, so it's ok from a protocol
perspective.
If needed, we can tweak the API request processing workflow for allowing
this.


>
>
>
> On Wed, Jan 20, 2016 at 2:27 PM, Salvatore Orlando wrote:
>
>> I tend to agree with Doug and Ryan's stance. If you need to pass 1000s of
>> network-id on a single request you're probably not doing things right on
>> the client side.
>> As Ryan suggested you can try and split the request in multiple requests
>> with acceptable URI lenght and send them in parallel; this will add some
>> overhead, but should work flawlessly.
>>
>> Once tags will be implemented you will be able to leverage those to
>> simplify your queries.
>>
>> Regarding GET requests with plenty of parameters, this discussion came up
>> on the mailing list a while ago [1]. A good proposal was made in that
>> thread but never formalised as an API-wg guideline; you could consider submitting
>> a patch to the API-wg too.
>> Note however that Neutron won't be able to support it out of the box
>> considering its WSGI framework completely ignores request bodies on GET
>> requests.
>>
>> Salvatore
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078243.html
>>
>> On 20 January 2016 at 12:33, Ryan Brown  wrote:
>>
>>> So having a URI too long error is, in this case, likely an indication
>>> that you're requesting too many things at once.
>>>
>>> You could:
>>> 1. Request 100 at a time in parallel
>>> 2. Find a query that would give you all those networks & page through
>>> the reply
>>> 3. Page through all the user's networks and filter client-side
>>>
>>> How is the user supposed to be assembling this giant UUID list? I'd
>>> think it would be easier for them to specify a query (e.g. "get usage data
>>> for all my production subnets" or something).
>>>
>>>
>>> On 01/19/2016 06:59 PM, Shraddha Pandhe wrote:
>>>
 Hi folks,


 I am writing a Neutron extension which needs to take 1000s of
 network-ids as argument for filtering. The CURL call is as follows:

 curl -i -X GET
 'http://hostname:port
 /neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
 "X-Auth-Token: "


 The list of net-ids can go up to 1000s. The problem is, with such large
 url, I get the "Request URI too long" error. I don't want to update this
 limit as proxies can have their own limits.

 What options do I have to send 1000s of network IDs?

 1. -d '{}' is not a recommended option for GET call and wsgi Controller
 drops the data part when routing the request.

 2. Use POST instead of GET? I will need to write the get_
 logic inside create_resource logic for this to work. It's a hack, but
 complies with HTTP standard.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> --
>>> Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Markus Zoeller
Flavio Percoco  wrote on 01/21/2016 09:13:02 AM:

> From: Flavio Percoco 
> To: "Daniel P. Berrange" 
> Cc: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> Date: 01/21/2016 01:47 PM
> Subject: Re: [openstack-dev] [all][tc] Stabilization cycles: 
> Elaborating on the idea to move it forward
> 
> On 21/01/16 11:22 +, Daniel P. Berrange wrote:
> >On Wed, Jan 20, 2016 at 01:23:02PM -0430, Flavio Percoco wrote:
> >> Greetings,
> >>
> >> At the Tokyo summit, we discussed OpenStack's development themes in a
> >> cross-project session. In this session a group of folks started 
> discussing what
> >> topics the overall community could focus on as a shared effort. One 
of the
> >> things that was raised during this session is the need of having 
cycles to
> >> stabilize projects. This was brought up by Robert Collins again in 
> a meeting[0]
> >> the TC had right after the summit and not much has been done ever 
since.
> >>
> >> Now, "stabilization Cycles" are easy to dream about but really hard to 
do and
> >> enforce. Nonetheless, they are still worth a try or, at the very 
least, a
> >> thought. I'll try to go through some of the issues and benefits a 
> stabilization
> >> cycle could bring but bear in mind that the lists below are not 
> exhaustive. In
> >> fact, I'd love for other folks to chime in and help build a case for or
> >> against this.
> >>
> >> Negative(?) effects
> >> ===
> >>
> >> - Project won't get new features for a period of time
> >> - Economic impact on developers(?)
> >> - It was mentioned that some folks receive bonuses for landed 
features
> >> - Economic impact on companies/market because no new features were 
added (?)
> >> - (?)
> >
> >It will push more development into non-upstream vendor private
> >branches.
> >
> >>
> >> Positive effects
> >> 
> >>
> >> - Focus on bug fixing
> >> - Reduce review backlog
> >> - Refactor *existing* code/features with cleanups
> >> - Focus on multi-cycle features (if any) and complete those
> >> - (?)
> >
> >I don't think the idea of stabilization cycles would really have
> >such a positive effect, certainly not while our release cycle is
> >6 months in length.
> >
> >If you say the next cycle is primarily stabilization, then what
> >you are in effect saying is that people have to wait 12 months
> >for their desired new feature.  In the fast moving world of
> >cloud, I don't think that is a very credible approach. Even
> >with our current workflow, where we selectively approve features
> >for cycles, we have this impact of forcing people to wait 12
> >months, or more, for their features.
> 
> ++
> 
> This is one of the main concerns and perhaps the reason why I don't 
think it
> should be all-or-nothing. It should be perfectly fine for teams to have
> stabilization milestones, FWIW.
> 
> >In the non-stabilization cycle, we're not going to be able to
> >merge a larger number of features than we already do today.
> >So in effect we'll have 2 cycles worth of features being
> >proposed for 1 cycle. When we inevitably reject many of
> >those features they'll have to wait for the next non-stabilization
> >cycle, which means 18-24 months delay.
> >
> >Of course in reality this kind of delay won't happen. What will
> >instead happen is that various vendors will get pressure from
> >their customers/partners and their local branches of openstack
> >packages will fork & diverge even further from upstream than
> >they already do today.
> >
> >So while upstream branch will be "stabilized", most users will
> >probably get a *less* stable release because they'll be using
> >a branch from vendors with a tonne of non-upstream stuff added.
> >
> 
> I would expect these vendors to (slowly?) push their changes 
> upstream. It'd take
> time but it should certainly happen.
> 
> >In addition having a stabilization cycle will give the impression
> >that the following cycle is a non-stable one and likely cause
> >more disruption by pushing lots of features in at one time.
> >Instead of having a master branch which has an approximately
> >constant level of stabilization, you'll create a situation
> >where it fluctuates significantly, which is clearly worse for
> >people doing continuous deployment.
> >
> >I think it is important to have the mindset that master should
> >*always* be considered stable - we already have this in general
> >and it is one of the success points of openstack's development
> >model IMHO. The idea of stabilization cycles is a step backwards
> 
> Perhaps, it is being presented the wrong way. I guess the main point 
here is how
> can we communicate that we'd like to take some time to clean up the mess 
we have
> in some projects. How can projects ask their team to put more effort on
> tackling technical debt rather than pushing the new sexy thing?
> 
> I could consider Mitaka as a stabilization cycle 

Re: [openstack-dev] [magnum] Planning Magnum Midcycle

2016-01-21 Thread Adrian Otto
Team,

We have selected Feb 18-19 for the Midcycle, which will be hosted by HPE. Please 
save the date. The exact location is forthcoming, but is expected to be in 
Sunnyvale.

Thanks,

Adrian

> On Jan 11, 2016, at 11:29 AM, Adrian Otto  wrote:
> 
> Team,
> 
> We are planning a mid cycle meetup for the Magnum team to be held in the San 
> Francisco Bay area. If you would like to attend, please take a moment to 
> respond to this poll to select the date:
> 
> http://doodle.com/poll/k8iidtamnkwqe3hd
> 
> Thanks,
> 
> Adrian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2016-01-21 18:37:00 +0100:
> Flavio Percoco  wrote on 01/21/2016 09:13:02 AM:
> [...]

Re: [openstack-dev] Tip: jsonformatter site for parsing/debugging logs

2016-01-21 Thread gord chung



On 21/01/2016 11:27 AM, Doug Hellmann wrote:

You can also do this using Python's json module from the command line:

$ echo '{"json":"obj"}' | python -m json.tool
{
  "json": "obj"
}

Doug

very useful... if you want a site, I use http://pro.jsonlint.com/
(vladikr told me about it... he will be happy I mentioned his name here.)


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature suggestion - API for creating VM without powering it up

2016-01-21 Thread Fox, Kevin M
The nova instance user spec has a use case.
https://review.openstack.org/#/c/93/

Thanks,
Kevin

From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
Sent: Thursday, January 21, 2016 7:32 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Feature suggestion - API for creating VM 
without powering it up

On 1/20/2016 10:57 AM, Shoham Peller wrote:
> Hi,
>
> I would like to suggest a feature in nova to allow creating a VM,
> without powering it up.
>
> If the user will be able to create a stopped VM, it will allow for
> better flexibility and user automation.
>
> I can personally say such a feature would greatly improve the comfort
> of my work with nova - currently we shut down each VM manually as we're
> creating it.
> What do you think?
>
> Regards,
> Shoham Peller
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

What is your use case?

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Bob Hansen

Found it. The contents of the admin file
(e.g. ../devstack/accrc/admin/admin) that I sourced for the admin
credentials do not work with the nova cli. This combination of OS_*
variables produced the error.

export OS_PROJECT_NAME="admin"
export OS_AUTH_URL="http://127.0.0.1:35357"
export OS_CACERT=""
export OS_AUTH_TYPE=v2password
export OS_PASSWORD="secretadmin"
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default

this combination works with nova, glance and neutron.

export OS_PROJECT_NAME=admin
export OS_PASSWORD=secretadmin
export OS_AUTH_URL=http://127.0.0.1:35357
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_CACERT=

To be honest, I've seen so many examples of the 'correct' set of
environment variables with different AUTH_TYPES, it's very hard to tell
which variable 'set' is appropriate for which AUTH_TYPE and version of the
keystone API.

A pointer to this sort of information is appreciated.

Bob Hansen
z/VM OpenStack Enablement




From:   Bob Hansen/Endicott/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   01/21/2016 10:46 AM
Subject:Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today



Yes, it is image-list not image list. I don't seem to be able to find any
other hints in any of the nova logs.

nova --debug image-list shows this:

DEBUG (extension:157) found extension EntryPoint.parse('token =
keystoneauth1.loading._plugins.identity.generic:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3token =
keystoneauth1.loading._plugins.identity.v3:Token')
DEBUG (extension:157) found extension EntryPoint.parse('password =
keystoneauth1.loading._plugins.identity.generic:Password')
DEBUG (v2:62) Making authentication request to
http://127.0.0.1:35357/tokens
INFO (connectionpool:207) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:387) "POST /tokens HTTP/1.1" 404 93
DEBUG (session:439) Request returned failure status: 404
DEBUG (shell:894) The resource could not be found. (HTTP 404)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
892, in main
OpenStackComputeShell().main(argv)
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
726, in main
api_version = api_versions.discover_version(self.cs, api_version)
File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 267, in discover_version
client)
File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 248, in _get_server_version_range
version = client.versions.get_current()
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 83, in get_current
return self._get_current()
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 56, in _get_current
url = "%s" % self.api.client.get_endpoint()
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",
line 132, in get_endpoint
return self.session.get_endpoint(auth or self.auth, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 634, in get_endpoint
return auth.get_endpoint(self, **kwargs)
File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 209, in get_endpoint
service_catalog = self.get_access(session).service_catalog
File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 135, in get_access
self.auth_ref = self.get_auth_ref(session)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v2.py",
line 64, in get_auth_ref
authenticated=False, log=False)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 545, in post
return self.request(url, 'POST', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/_utils.py", line
180, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 440, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404)



Bob Hansen
z/VM OpenStack Enablement



From: "Chen CH Ji" 
To: "OpenStack Development Mailing List \(not for usage questions\)"

Date: 01/21/2016 04:25 AM
Subject: Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today




Guess it's image-list instead of image list, right? Maybe you can check with
nova --debug image-list and see the API request which was
sent to the nova-api server, then analyze the nova-api log to know what
exactly the error is.

-"Bob Hansen"  wrote: 

[openstack-dev] [release][documentation] fairy-slipper release HEAD (independent)

2016-01-21 Thread doug
We are delighted to announce the release of:

fairy-slipper HEAD: A project to make OpenStack APIs self
documenting.

This release is part of the independent release series.

With source available at:

https://git.openstack.org/cgit/openstack/fairy-slipper

With package available at:

https://pypi.python.org/pypi/fairy-slipper

Please report issues through launchpad:

https://bugs.launchpad.net/openstack-doc-tools

For more details, please see below.


Changes in fairy-slipper 0.1.0..HEAD



Diffstat (except docs and test files)
-




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Matthew Treinish
On Thu, Jan 21, 2016 at 08:18:14AM -0500, Davanum Srinivas wrote:
> Hi,
> 
> Failures for this job has been trending up and is causing the large
> gate queue as well. I've logged a bug:
> https://bugs.launchpad.net/openstack-gate/+bug/1536622
> 
> and am requesting switching the voting to off for this job:
> https://review.openstack.org/#/c/270788/

I think this was premature; we were actually looking at the problem last
night. If you look at:

http://status.openstack.org/openstack-health/#/g/node_provider/internap-nyj01

and

http://status.openstack.org/openstack-health/#/g/node_provider/bluebox-sjc1

grenade-multinode is at 100% failure on both providers. The working hypothesis is
that it's because tempest is trying to log in to the guest over the "private"
network, which isn't set up to be accessible from outside. You can see the
discussion on this starting here:

http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-01-20.log.html#t2016-01-20T22:44:24

> 
> We need to find and fix the underlying issue which can help us
> determine when to switch this back on to voting or we cleanup this job
> from all the gate queues and move them to check queues (i have a TODO
> for this in this review)

TBH, there is always this push to remove jobs or testing whenever there is
release pressure and a gate backup. No one seems to notice whenever anything
isn't working and recheck grinds patches through. (Well, maybe not you Dims,
because you're more on top of it than almost everyone.) I know that I get
complacent when there isn't a gate backup. The problem is that when things
like our categorization rate on:

http://status.openstack.org/elastic-recheck/data/uncategorized.html

routinely has been at or below 50% this cycle, it's not really a surprise we
have gate backups like this. More people need to be actively debugging these
problems as they come up; it can't just be the same handful of us. I don't
think making things non-voting is the trend we want to set, because then
what's gonna be the motivation to get others to help on this?

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] gate-grenade-dsvm-multinode intermittent failures

2016-01-21 Thread Sean Dague
On 01/21/2016 11:00 AM, Matthew Treinish wrote:
> On Thu, Jan 21, 2016 at 08:18:14AM -0500, Davanum Srinivas wrote:
>> Hi,
>>
>> Failures for this job has been trending up and is causing the large
>> gate queue as well. I've logged a bug:
>> https://bugs.launchpad.net/openstack-gate/+bug/1536622
>>
>> and am requesting switching the voting to off for this job:
>> https://review.openstack.org/#/c/270788/
> 
> I think this was premature, we were actually looking at the problem last 
> night. If
> you look at:
> 
> http://status.openstack.org/openstack-health/#/g/node_provider/internap-nyj01
> 
> and
> 
> http://status.openstack.org/openstack-health/#/g/node_provider/bluebox-sjc1
> 
> grenade-multinode is 100% failure on both providers. The working hypothesis is
> that it's because tempest is trying to login to the guest over the "private"
> network which isn't setup as accessible outside. You can see the discussion on
> this starting here:
> 
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-01-20.log.html#t2016-01-20T22:44:24
> 
>>
>> We need to find and fix the underlying issue which can help us
>> determine when to switch this back on to voting or we cleanup this job
>> from all the gate queues and move them to check queues (i have a TODO
>> for this in this review)
> 
> TBH, there is always this push to remove jobs or testing whenever there is
> release pressure and a gate backup. No one seems to notice whenever anything 
> isn't
> working and recheck grinds patches through. (well maybe not you Dims, because
> you're more on top of it than almost everyone) I know that I get complacent 
> when
> there isn't a gate backup. The problem is when things like our categorization 
> rate
> on:
> 
> http://status.openstack.org/elastic-recheck/data/uncategorized.html
> 
> routinely has been at or below 50% this cycle it's not really a surprise we 
> have
> gate backups like this. More people need to be actively debugging these 
> problems
> as they come up, it can't just be the same handful of us. I don't think making
> things non-voting is the trend we want to set because then what's gonna be the
> motivation to get others to help on this.

Deciding to stop everyone else's work while a key infrastructure / test
setup bug is being sorted isn't really an option.

It's an OpenStack global lock on all productivity.

Making jobs non-voting means that it's a local lock instead of a global
one. That *has* to be the model for fixing things like this. We need to
get some agreement on that fact, otherwise there will never be more
volunteers to help fix things. Not everyone in the community can drop
all the work and context they have for solving hard problems because a
new cloud was added / upgraded / acts differently.

When your bus lights on fire you don't just keep driving with the bus
full of passengers. You pull over, let them get off, and deal with the
fire separately from the passengers.

If there is in-flight work by a set of people who are all going to
bed, handing that off with an email needs to happen, especially if we
are expecting them to not just start over from scratch.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-21 Thread Ryan Brady
On Thu, Jan 21, 2016 at 10:29 AM, Tzu-Mainn Chen 
wrote:
> 
>
>
>
> On 21 January 2016 at 14:46, Dougal Matthews  wrote:
>>
>>
>>
>> On 20 January 2016 at 20:05, Tzu-Mainn Chen  wrote:
>>>
>>> - Original Message -
>>> > On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
>>> > > - Original Message -
>>> > >> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
>>> > >>>
>>> > >>> - Original Message -
>>> >  On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
>>> > > Hey all,
>>> > >
>>> > > I realize now from the title of the other TripleO/Mistral thread
>>> > > [1] that
>>> > > the discussion there may have gotten confused.  I think using
>>> > > Mistral for
>>> > > TripleO processes that are obviously workflows - stack
>>> > > deployment, node
>>> > > registration - makes perfect sense.  That thread is exploring
>>> > > practicalities
>>> > > for doing that, and I think that's great work.
>>> > >
>>> > > What I inappropriately started to address in that thread was a
>>> > > somewhat
>>> > > orthogonal point that Dan asked in his original email, namely:
>>> > >
>>> > > "what it might look like if we were to use Mistral as a
>>> > > replacement for the
>>> > > TripleO API entirely"
>>> > >
>>> > > I'd like to create this thread to talk about that; more of a
>>> > > 'should we'
>>> > > than 'can we'.  And to do that, I want to indulge in a thought
>>> > > exercise
>>> > > stemming from an IRC discussion with Dan and others.  All,
please
>>> > > correct
>>> > > me
>>> > > if I've misstated anything.
>>> > >
>>> > > The IRC discussion revolved around one use case: deploying a
Heat
>>> > > stack
>>> > > directly from a Swift container.  With an updated patch, the
Heat
>>> > > CLI can
>>> > > support this functionality natively.  Then we don't need a
>>> > > TripleO API; we
>>> > > can use Mistral to access that functionality, and we're done,
>>> > > with no need
>>> > > for additional code within TripleO.  And, as I understand it,
>>> > > that's the
>>> > > true motivation for using Mistral instead of a TripleO API:
>>> > > avoiding custom
>>> > > code within TripleO.
>>> > >
>>> > > That's definitely a worthy goal... except from my perspective,
>>> > > the story
>>> > > doesn't quite end there.  A GUI needs additional functionality,
>>> > > which boils
>>> > > down to: understanding the Heat deployment templates in order to
>>> > > provide
>>> > > options for a user; and persisting those options within a Heat
>>> > > environment
>>> > > file.
>>> > >
>>> > > Right away I think we hit a problem.  Where does the code for
>>> > > 'understanding
>>> > > options' go?  Much of that understanding comes from the
>>> > > capabilities map
>>> > > in tripleo-heat-templates [2]; it would make sense to me that
>>> > > responsibility
>>> > > for that would fall to a TripleO library.
>>> > >
>>> > > Still, perhaps we can limit the amount of TripleO code.  So to
>>> > > give API
>>> > > access to 'getDeploymentOptions', we can create a Mistral
>>> > > workflow.
>>> > >
>>> > >Retrieve Heat templates from Swift -> Parse capabilities map
>>> > >
>>> > > Which is fine-ish, except from an architectural perspective
>>> > > 'getDeploymentOptions' violates the abstraction layer between
>>> > > storage and
>>> > > business logic, a problem that is compounded because
>>> > > 'getDeploymentOptions'
>>> > > is not the only functionality that accesses the Heat templates
>>> > > and needs
>>> > > exposure through an API.  And, as has been discussed on a
>>> > > separate TripleO
>>> > > thread, we're not even sure Swift is sufficient for our needs;
>>> > > one possible
>>> > > consideration right now is allowing deployment from templates
>>> > > stored in
>>> > > multiple places, such as the file system or git.
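
For concreteness, a rough sketch of that 'getDeploymentOptions' step
(capabilities-map.yaml layout as in tripleo-heat-templates; the function and
the Swift object name are illustrative):

import yaml

def get_deployment_options(swift_conn, container):
    # swift_conn: a swiftclient Connection holding the plan's templates
    _, body = swift_conn.get_object(container, 'capabilities-map.yaml')
    cap_map = yaml.safe_load(body)
    # flatten topics -> environment_groups -> environments into user choices
    return [(env.get('title'), env['file'])
            for topic in cap_map.get('topics', [])
            for group in topic.get('environment_groups', [])
            for env in group.get('environments', [])]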
>>> > 
>>> >  Actually, that whole capabilities map thing is a workaround for a
>>> >  missing
>>> >  feature in Heat, which I have proposed, but am having a hard time
>>> >  reaching
>>> >  consensus on within the Heat community:
>>> > 
>>> >  https://review.openstack.org/#/c/196656/
>>> > 
>>> >  Given that is a large part of what's anticipated to be provided
by
>>> >  the
>>> >  proposed TripleO API, I'd welcome feedback and collaboration so
we
>>> >  can move
>>> >  that forward, vs solving only for TripleO.
>>> > 
>>> > > Are we going to have duplicate 'getDeploymentOptions' workflows
>>> > > for each
>>> > > storage mechanism?  If we consolidate the storage code within a
>>> > > TripleO
>>> > > 

[openstack-dev] [release][documentation] fairy-slipper release 0.1.0 (independent)

2016-01-21 Thread doug
We are chuffed to announce the release of:

fairy-slipper 0.1.0: A project to make OpenStack APIs self
documenting.

This release is part of the independent release series.

With source available at:

https://git.openstack.org/cgit/openstack/fairy-slipper

With package available at:

https://pypi.python.org/pypi/fairy-slipper

Please report issues through launchpad:

https://bugs.launchpad.net/openstack-doc-tools

For more details, please see below.

0.1.0
^^^^^

Initial release with validated Swagger files for many of the WADL
conversions.


Other Notes
***********

* Use reno for release note management.


Changes in fairy-slipper ..0.1.0



Diffstat (except docs and test files)
-




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Zane Bitter

On 20/01/16 12:53, Flavio Percoco wrote:

Greetings,

At the Tokyo summit, we discussed OpenStack's development themes in a
cross-project session. In this session a group of folks started
discussing what
topics the overall community could focus on as a shared effort. One of the
things that was raised during this session is the need of having cycles to
stabilize projects. This was brought up by Robert Collins again in a
meeting[0]
the TC had right after the summit and not much has been done ever since.

Now, "stabilization Cycles" are easy to dream about but really hard to
do and
enforce. Nonetheless, they are still worth a try or, at the very least, a
thought. I'll try to go through some of the issues and benefits a
stabilization
cycle could bring but bear in mind that the lists below are not
exhaustive. In
fact, I'd love for other folks to chime in and help build a case for or
against this.

Negative(?) effects
===

- Project won't get new features for a period of time
- Economic impact on developers(?)
- It was mentioned that some folks receive bonuses for landed features


o.O

Is this real life???


- Economic impact on companies/market because no new features were added
(?)
- (?)

Positive effects


- Focus on bug fixing


Or maybe just a focus on anything but upstream OpenStack work


- Reduce review backlog


Or increase the review backlog.

Or leave it about the same. It'll definitely be one of those.


- Refactor *existing* code/features with cleanups
- Focus on multi-cycle features (if any) and complete those
- (?)

A stabilization cycle, as it was also discussed in the aforementioned
meeting[0], doesn't need to be all or nothing. For instance, it should be
perfectly fine for a project to say that a project would dedicate 50% of
the
cycle to stabilization and the rest to complete some pending features.


I guess not being all-or-nothing is a good thing, but in that case what 
does this even mean in practice? If there's a review up for a feature 
what would you do differently under this policy? Merge half of it? Flip 
a coin and only review if it comes up heads?



Moreover,
each project is free to choose when/if a stabilization cycle would be
good for
it or not.

For example, the Glance team is currently working on refactoring the image
import workflow. This is a long term effort that will require at least 2
cycles
to be completed. Furthermore, it's very likely these changes will
introduce bugs
and that will require further work. If the Glance team would decide
(this is not
an actual proposal... yet :) to use Newton as a stabilization cycle, the
team
would be able to focus all its forces on fixing those bugs, completing the
feature and tackling other, long-term, pending issues. In the case of
Glance,
this would impact *only glance* and not other projects under the Glance
team
umbrella like glanceclient and glance_store. In fact, this would be a
perfect
time for the glance team to dedicate time to improving glanceclient and
catch up
with the server side latest changes.

So, the above sounds quite vague, still but that's the idea. This email
is not a
formal proposal but a starting point to move this conversation forward.
Is this
something other teams would be interested in? Is this something some
teams would
be entirely against? Why?


I actually hate this idea really quite a lot, largely for the same 
reasons that Julien and Dan have already articulated. Honestly, it 
sounds like the kind of thing you come up with when you've given up.


Instead a project could develop a long-term architecture plan that makes 
the features on its roadmap easier to implement in a robust way. Or 
introduce new features that simplify the code base and reduce the 
prevalence of existing bugs. Or demand working, tested, incremental 
changes instead of accepting unreviewable 5k line feature patches. Or 
invest in improving testing. Or break the project up into smaller units 
with clear API boundaries and give them specialist review teams. Or get 
a bunch of specialist exploratory testers to find bugs instead of 
waiting for them to affect developers somehow. Or... YMMV for any given 
idea on any given project, but the point is that saying "ok, no more 
features" is what you do as a last resort when you have literally zero 
ideas.


I guess it bugs me because I think it's an instance of a larger class of 
problem, which is characterised by the notion that one's future, better 
informed self will somehow make worse decisions than one's current self. 
i.e. you assume that you're getting stupider over time, so you decide to 
ignore the merits of any individual decision and substitute a default 
answer ("no") that you've formulated a priori. In a way it's the 
opposite of engineering.





 From a governance perspective, projects are already empowered to do
this and
they don't (and won't) need to be granted permission to have stabilization
cycles. However, the TC could work on 

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Joshua Harlow

Julien Danjou wrote:

On Thu, Jan 21 2016, Flavio Percoco wrote:


So, I don't think it has to be the entire cycle. It could also be a couple of
milestones (or even just 1). Thing is, I believe this has to be communicated and
I want teams to know this is fine and they are encouraged to do so.

Tl;DR: It's fine to tell folks no new features will land on this and the
upcoming milestone because they'll be used to stabilize the project.


I can understand that, though I think it's a very naive approach. If
your project built technical debt for the last N cycles, unfortunately I
doubt that stating you're gonna work for ⅓ of a cycle on reducing it is
going to improve your project in the long run – that's why I was saying
"band-aid".

I'd be more inclined to spend time trying to fix the root cause that
pushes projects on the slope of the technical debt rate increase.


Unfortunately, just talking and proposing to fix them doesn't help. We don't
control contributors' management and we can't make calls for them other than
proposing things. I'm not saying this will fix that issue but at least it'll
communicate properly that that will be the only way to contribute to project X
in that period of time.


Yes, exactly. So it's my view¹ that people will just do something else
for 1.5 month (e.g. work downstream, take vacation…), and then come back
knocking at your door for their feature to be merged, now that this
stabilization period is over. And even in the best case scenario, you'll
merge some fixes and improvement, and that's it: in the end you'll end
up with the same problems in N cycle, and you'll have to redo that
again.

That's why I'm talking about fixing the root causes. :-)

Cheers,

¹  pessimistic or realistic, YMMV :-)


IMHO realistic, there are root causes that we need to dig out and fully 
expose to really deal with the issue of why stabilization cycles are 
needed in the first place (and said issues exposed there are going to be 
painful, and controversial and such, that's just how it is going to be).


Overall though, I'm glad we are talking about this and starting to think 
about what we as a community can do to start talking and thinking about 
these issues (and hopefully figuring out a plan to resolve some or all 
of those issues).


In all honesty it's likely going to require a carrot and a stick (in some 
cases more of a carrot or more of a stick) to get companies that want to 
focus on features to think about focusing on stability. If done 
incorrectly this will have a real impact on some people's lives and 
businesses (for better or worse this is the reality of the world, sorry 
for people that have just realized this, time to get some coffee for 
u...) so we really need to be thoughtful about how to go about this.


Anyways, enough rant/comments/thoughts from me, +1 for the general idea, 
and +1 for starting to think about the root causes of requiring 
this kind of cycle in the first place (too large a project? too many 
features? not enough contributors? too much code? too much junk/debt in 
your project that never got cleaned up/removed? ...)


-Josh




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] collectd-Monasca Python plugin

2016-01-21 Thread Hochmuth, Roland M
Hi Rodolfo, I think this would be useful work. Collectd has a lot of metrics 
that aren't supported in Monasca yet.

How would you map the metric names and fields in collectd to a Monasca name and
dimensions?
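
One hypothetical shape for that mapping, sketched as a collectd Python write
callback feeding monasca-statsd (the monascastatsd calls and the naming
scheme are assumptions, not a tested plugin):

import collectd
import monascastatsd

client = None

def init():
    global client
    # assumes a monasca-statsd daemon listening on the default UDP port
    conn = monascastatsd.Connection(host='localhost', port=8125)
    client = monascastatsd.Client(name='collectd', connection=conn)

def write(vl, data=None):
    # flatten collectd's plugin/type fields into one dotted metric name...
    name = '.'.join(p for p in (vl.plugin, vl.plugin_instance,
                                vl.type, vl.type_instance) if p)
    # ...and carry the originating host as a Monasca dimension
    gauge = client.get_gauge('collectd', dimensions={'hostname': vl.host})
    for value in vl.values:
        gauge.send(name, value)

collectd.register_init(init)
collectd.register_write(write)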

Regards --Roland

From: Jaesuk Ahn
Reply-To: OpenStack List
Date: Thursday, January 21, 2016 at 4:49 AM
To: OpenStack List
Subject: Re: [openstack-dev] [Monasca] collectd-Monasca Python plugin

We are looking into a similar plan to have a collectd plugin for Monasca.

There are some environments where we cannot deploy the Monasca agent but still
want to put data into Monasca. In addition, we wanted to use the widely
accepted collectd for gathering data from legacy environments.

It will be interesting to see more detail about your plan.

Cheers,


---
Jaesuk Ahn
SDI Tech. Lab, SKT


On Thursday, January 21, 2016 at 19:11, Alonso Hernandez, Rodolfo wrote:
Hello:

We are doing (or at least planning) a collectd-Monasca Python plugin. This
plugin will receive the data from RPC calls from collectd and will write this
data into Monasca using the statsd API.

My question is: do you think this development could be useful? Is it worth
doing? Any comments?

Thank you in advance. Regards.

Rodolfo Alonso.
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] add support for other types of virtual switches in os_net_config

2016-01-21 Thread Xin Wu
Hi, all

 This is Xin from Big Switch. I'm proposing to make os_net_config
support more types of virtual switches.
 os_net_config currently supports ovs_bridge, linux_bridge and the
interface types associated with OVS and Linux bridge. Its abstraction is
well suited to supporting other types of virtual switches as well; the
examples off the top of my head are Indigo Virtual Switch and Cisco N1k.
 If this proposal makes sense, I would love to start by adding support
for Indigo Virtual Switch first.

Xin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] alarms based on events

2016-01-21 Thread Hochmuth, Roland M
Hi Prema, SNMP isn't handled in Monasca and I have little experience in
that area. This would be new development.

It is possible to map binary data, such as health/status of a system or
component. The usual way is to use the value 0 for up/OK and 1 for
down/NOT_OK. A component would need to be developed to handle SNMP traps,
then translate and send them to the Monasca API as binary data. Possibly,
this component could be added to the Agent.

Using the Monasca Alarm API, an alarm could be defined, such as
max(snmp{}) > 0.

The latency for a min/max alarm expression in Monasca is very low.
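
A minimal sketch of that translate-and-send step, using python-monascaclient
(the client call names here are assumptions; the Monasca API takes timestamps
in epoch milliseconds):

import time

def on_snmp_trap(mon_client, hostname, is_failure):
    # 0 = up/OK, 1 = down/NOT_OK, matching the convention above
    mon_client.metrics.create(
        name='snmp',
        timestamp=int(time.time() * 1000),
        value=1 if is_failure else 0,
        dimensions={'hostname': hostname})

With that in place, an alarm defined as max(snmp{hostname=box1}) > 0 goes
ALARM on the failure trap and transitions back to OK once a clearing trap
posts a 0.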

Regards --Roland


On 1/18/16, 9:07 AM, "Premysl Kouril"  wrote:

>Hello,
>
>we are just evaluating Monasca for our new cloud infrastructure and I
>would like to ask if there are any possibilities in current Monasca or
>some development plans to address following use case:
>
>We have a box which we need to monitor and when something goes wrong
>with the box, it sends out an SNMP trap indicating that it is in bad
>condition and when the box is fixed it sends out SNMP trap indicating
>that it is OK and operational again (in other words: the box is
>indicating health state transitions by sending events - in this case
>SNMP traps).
>
>Is it possible in Monasca to define such alarm which would work on top
>of such events? In other words - Is it possible to have a Monasca
>alarm which would go red on some external event go back green on some
>other external event? By alarm I really mean a stateful entity in
>monasca database not some notification to administrator.
>
>Best regards.
>Prema
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Kashyap Chamarthy
On Thu, Jan 21, 2016 at 06:37:00PM +0100, Markus Zoeller wrote:
> Flavio Percoco  wrote on 01/21/2016 09:13:02 AM:

[...]

First, positive remark(s): 

Thanks for writing this up.  FWIW, I support the notion of having
milestones focusing on stability, as opposed to explicitly declaring a
whole cycle as 'stable' as I agree (as do you) with Dan Berrange's
reasoning.

(Also see Doug's comment in the thread: "projects the option of choosing
a release model that allows for multiple releases per 6 month cycle,
while still maintaining a single stable release after the cycle.")

> > >> Negative(?) effects
> > >> ===
> > >>
> > >> - Project won't get new features for a period of time
> > >> - Economic impact on developers(?)
> > >> - It was mentioned that some folks receive bonuses for landed 
> > >>   features

This (non-point) reminds me of a recent comment I've read elsewhere[1]
about why websites have been becoming bloated, and how people are
(dis)incentivized.  [NB: This was written in the context of websites;
we're talking about an Infra project, so adjust the view accordingly]:

   "[...] People (designers, coders, etc) get bonuses and paychecks for
   creating stuff more than tearing down stuff.

   Put this on your resume -- "Implemented feature x, designed y, added
   z" vs "Cut out 10k lines worth of crap only 10% of customers
   [operators] used, stripped away stupid 1Mb worth for js that displays
   animated snowflakes, etc".  You'd produce a better perception by
   claiming you added / created / built, rather than deleted.

   So it is not surprising that more stuff gets built, more code added
   to the pile, more features implemented. Heck, even GMail keeps
   changing every 6 months for apparently no reason. But in reality
   there is a reason -- Google has full time designers on the GMail
   team. There is probably no way they'd end the year with "Yap, site
   worked great, we did a nice job 2 years ago, so I didn't touch it
   this year."

[...]
 
> I try to handle in one post the different aspects which came up so
> far:
> 
> wrt dedicated stabilization cycles|milestones:
> 
> Piled up (=older) bugs are harder to solve than fresh ones.  I've
> seen next to no bug report in Nova which has all the necessary
> data to do a proper analysis. There are usually 1-3 requests to
> the bug reporter necessary to get enough data.  This makes me
> believe that stabilization should be a continuous effort.

Whole-heartedly agree with this.  It just ought to be a _continuous_
effort.

While we're at it, thanks Markus for your patient (and productive)
efforts on bug triaging in Nova!

[...]

[1] https://news.ycombinator.com/item?id=10820716

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] DevStack plugin

2016-01-21 Thread Taylor D Peoples

Hi all,

Watcher now has a DevStack plugin that can be used to easily stand up
the Watcher services for development.  Documentation can be found at
[0] and example local.conf files are provided for both controller and
compute nodes.

If you're interested in working on Watcher, then please try out the
DevStack plugin.  If you run into any issues or have any questions
please feel free to ask here or on IRC in #openstack-watcher.

[0]
https://github.com/openstack/watcher/blob/master/doc/source/dev/devstack-plugin.rst

Thanks.

Taylor Peoples
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] mid cycle details

2016-01-21 Thread Doug Wiegley
Hi all,

Where: Minnesota (great planning for winter!)
When: Feb 23-26

Details:
https://etherpad.openstack.org/p/neutron-mitaka-midcycle

Please RSVP. And yell at Kyle for using that shade of red.

Thanks,
doug



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Rochelle Grober


Devananda van der Veen, on January 21, 2016 5:14 PM wrote:

On Wed, Jan 20, 2016 at 9:53 AM, Flavio Percoco wrote:
Greetings,

At the Tokyo summit, we discussed OpenStack's development themes in a
cross-project session. In this session a group of folks started discussing what
topics the overall community could focus on as a shared effort. One of the
things that was raised during this session is the need of having cycles to
stabilize projects. This was brought up by Robert Collins again in a meeting[0]
the TC had right after the summit and no much has been done ever since.

Now, "stabilization Cycles" are easy to dream about but really hard to do and
enforce. Nonetheless, they are still worth a try or, at the very least, a
thought. I'll try to go through some of the issues and benefits a stabilization
cycle could bring but bear in mind that the lists below are not exhaustive. In
fact, I'd love for other folks to chime in and help build a case for or
against this.

Negative(?) effects
===

- Project won't get new features for a period of time
- Economic impact on developers (?)
- It was mentioned that some folks receive bonuses for landed features
- Economic impact on companies/market because no new features were added (?)
- (?)

Positive effects
================

- Focus on bug fixing
- Reduce review backlog
- Refactor *existing* code/features with cleanups
- Focus on multi-cycle features (if any) and complete those
- (?)

A stabilization cycle, as it was also discussed in the aforementioned
meeting[0], doesn't need to be all or nothing. For instance, it should be
perfectly fine for a project to say that it will dedicate 50% of the
cycle to stabilization and the rest to complete some pending features. Moreover,
each project is free to choose when/if a stabilization cycle would be good for
it or not.

For example, the Glance team is currently working on refactoring the image
import workflow. This is a long term effort that will require at least 2 cycles
to be completed. Furthermore, it's very likely these changes will introduce bugs
and that will require further work. If the Glance team would decide (this is not
an actual proposal... yet :) to use Newton as a stabilization cycle, the team
would be able to focus all its forces on fixing those bugs, completing the
feature and tackling other, long-term, pending issues. In the case of Glance,
this would impact *only glance* and not other projects under the Glance team
umbrella like glanceclient and glance_store. In fact, this would be a perfect
time for the glance team to dedicate time to improving glanceclient and catch up
with the latest server-side changes.

So, the above still sounds quite vague, but that's the idea. This email is not a
formal proposal but a starting point to move this conversation forward. Is this
something other teams would be interested in? Is this something some teams would
be entirely against? Why?

From a governance perspective, projects are already empowered to do this and
they don't (and won't) need to be granted permission to have stabilization
cycles. However, the TC could work on formalizing this process so that teams
have a reference to follow when they want to have one. For example, we would
have to formalize how projects announce they want to have a stabilization cycle
(I believe it should be done before the mid-term of the ongoing cycle).

Thoughts? Feedback?
Flavio


Thanks for writing this up, Flavio.

The topic's come up in smaller discussion groups several times over the last 
few years, mostly with a nod to "that would be great, except the corporations 
won't let it happen".

To everyone who's replied with shock to this thread, the reality is that nearly 
all of the developer-hours which fuel OpenStack's progress are funded directly 
by corporations, whether big or small. Even those folks who have worked in open 
source for a long time, and are working on OpenStack by choice, are being paid 
by companies deeply invested in the success of this project. Some developers 
are adept at separating the demands of their employer from the best interests 
of the community. Some are not. I don't have hard data, but I suspect that most 
of the nearly-2000 developers who have contributed to OpenStack during the 
Mitaka cycle are working on what ever they're working on BECAUSE IT MATTERS TO 
THEIR EMPLOYER.

Every project experiences pressure from companies who are trying to land very 
specific features. Why? Because they're all chasing first-leader advantage in 
the market. Lots of features are in flight right now, and, even though it 
sounds unbelievable, many companies announce those features in their products 
BEFORE they actually land upstream. Crazy, right? Except... IT WORKS. Other 
companies buy their product because they are buying a PRODUCT from some 
company. It happens to contain OpenStack. And it has a bunch of unmerged 
features.

With my 

[openstack-dev] What's Up, Doc? 22 January 2016

2016-01-21 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

We seem to be more or less back into the swing of things here in docs land this 
week, with our meeting schedule back up and running, and now our very first 
full-length newsletter for 2016! We only have two and a half months until 
Mitaka goes out, but we're in great shape so far, with 345 bugs dealt with, and 
all the planned RST conversions complete. Most of the discussion this week has 
been around API documentation and Fairy Slipper, Docs Tools and some site theme 
changes, and a little back and forth on meeting times.

== Progress towards Mitaka ==

75 days to go!

345 bugs closed so far for this release.

RST Conversions
* All RST conversions are now complete! Well done to all the contributors and 
reviewers who made this happen so quickly this time around. We are now, with 
only a couple of exceptions, completely converted. Great job :) 

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

DocImpact
* After some discussion on the dev list, we're adjusting our approach to this 
problem. Watch this space.

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
Meeting moved 
http://lists.openstack.org/pipermail/openstack-docs/2016-January/008209.html. 
Kenneth finished his Galera patch https://review.openstack.org/#/c/263075/, 
there were also several patches from different contributors, so things are 
improving.

'''Installation Guide - Christian Berendt'''
AODH instructions near completion, EU/US meeting moved (every two weeks (on 
even weeks) on Tuesday at 1600 UTC), APAC meeting will be dropped in the next 
weeks if no more participants join.

'''Networking Guide - Edgar Magana'''
No update this week.

'''Security Guide - Nathaniel Dillon'''
No update this week.

'''User Guides - Joseph Robinson'''
One patch combining Images and Instances content merged, and a patch on Shared 
File Systems is now available for review. User Guide team meetings are set to 
begin next Thursday, with guidelines for combining admin and cloud admin files 
a topic for discussion.

'''Ops and Arch Guides - Shilla Saebi'''
No update this week.

'''API Docs - Anne Gentle'''
Anne imported and triaged over 60 GitHub Issues into Launchpad bugs. Anne 
working on build jobs for making API docs with fairy-slipper: 
https://review.openstack.org/#/c/269809. Anne emailed over 25 individuals about 
the plans for API docs and answered follow-up questions. Karen worked on 
validated Swagger: https://review.openstack.org/#/c/266527/. Need to release 
fairy-slipper this week or next.

'''Config Ref - Gauvain Pocentek'''
No update this week.

'''Training labs - Roger Luethi'''
Liberty is making good progress, trying to get Liberty patches working. Early 
beginnings of Fedora support switched to CentOS. Copy-on-write disks for KVM. 
Bug fixing and refactoring, as usual. Early work on python port. Plan to host 
training labs on a web page.

'''Training Guides - Matjaz Pancur'''
German translation of the Upstream training 
(http://docs.openstack.org/de/upstream-training/) - this makes 3 full 
translations (en, de, ko_kr), ja is at 80%. A patch that enables direct bug 
reporting on the slides (https://review.openstack.org/#/c/259654/), several 
other patches.

'''Hypervisor Tuning Guide - Joe Topjian'''
No update this week.

== Core Team Changes ==

The next core team review will occur on 1 February. In the meantime, it's 
probably a good time to go over what being a core means, and how the review 
system works. For more detail, see the Contributor Guide: 
http://docs.openstack.org/contributor-guide/docs-review.html

Docs cores have the ability to +2 docs reviews, and they also have the ability 
to merge reviews. Reviews are merged by cores only once there have been *two* +2 
votes, and at least one +1. There are very few exceptions to this rule, for 
changes that need to go through very quickly to resolve gate breakages or 
similar, but this rule holds in just about every situation. Cores are assigned 
based on merit, usually through the core review process that we do every month. 
During a core review, the PTL gathers statistics on reviews made, and patches 
created and merged, and uses that to determine who should be added to (or 
removed from) the core list. That then goes to a vote of the existing core team 
before being actioned and announced. Core team members also have the ability to 
nominate (and vote on) exceptional team members, where the statistics might not 
reflect the amount of effort they have put in.

If you ever have any questions about becoming a docs core, or about getting 
your docs patch merged, you can contact the 

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Devananda van der Veen
On Wed, Jan 20, 2016 at 9:53 AM, Flavio Percoco  wrote:

> Greetings,
>
> At the Tokyo summit, we discussed OpenStack's development themes in a
> cross-project session. In this session a group of folks started discussing
> what
> topics the overall community could focus on as a shared effort. One of the
> things that was raised during this session is the need of having cycles to
> stabilize projects. This was brought up by Robert Collins again in a
> meeting[0]
> the TC had right after the summit, and not much has been done ever since.
>
> Now, "stabilization Cycles" are easy to dream about but really hard to do
> and
> enforce. Nonetheless, they are still worth a try or, at the very least, a
> thought. I'll try to go through some of the issues and benefits a
> stabilization
> cycle could bring but bear in mind that the lists below are not
> exhaustive. In
> fact, I'd love for other folks to chime in and help build a case for or
> against this.
>
> Negative(?) effects
> ===
>
> - Project won't get new features for a period of time
> - Economic impact on developers (?)
> - It was mentioned that some folks receive bonuses for landed features
> - Economic impact on companies/market because no new features were added
> (?)
> - (?)
>
> Positive effects
> ================
>
> - Focus on bug fixing
> - Reduce review backlog
> - Refactor *existing* code/features with cleanups
> - Focus on multi-cycle features (if any) and complete those
> - (?)
>
> A stabilization cycle, as it was also discussed in the aforementioned
> meeting[0], doesn't need to be all or nothing. For instance, it should be
> perfectly fine for a project to say that it will dedicate 50% of the
> cycle to stabilization and the rest to complete some pending features.
> Moreover,
> each project is free to choose when/if a stabilization cycle would be good
> for
> it or not.
>
> For example, the Glance team is currently working on refactoring the image
> import workflow. This is a long term effort that will require at least 2
> cycles
> to be completed. Furthermore, it's very likely these changes will
> introduce bugs
> and that will require further work. If the Glance team would decide (this
> is not
> an actual proposal... yet :) to use Newton as a stabilization cycle, the
> team
> would be able to focus all its forces on fixing those bugs, completing the
> feature and tackling other, long-term, pending issues. In the case of
> Glance,
> this would impact *only glance* and not other projects under the Glance
> team
> umbrella like glanceclient and glance_store. In fact, this would be a
> perfect
> time for the glance team to dedicate time to improving glanceclient and
> catch up
> with the latest server-side changes.
>
> So, the above still sounds quite vague, but that's the idea. This email is
> not a
> formal proposal but a starting point to move this conversation forward. Is
> this
> something other teams would be interested in? Is this something some teams
> would
> be entirely against? Why?
>
> From a governance perspective, projects are already empowered to do this
> and
> they don't (and won't) need to be granted permission to have stabilization
> cycles. However, the TC could work on formalizing this process so that
> teams
> have a reference to follow when they want to have one. For example, we
> would
> have to formalize how projects announce they want to have a stabilization
> cycle
> (I believe it should be done before the mid-term of the ongoing cycle).
>
> Thoughts? Feedback?
> Flavio
>
>

Thanks for writing this up, Flavio.

The topic's come up in smaller discussion groups several times over the
last few years, mostly with a nod to "that would be great, except the
corporations won't let it happen".

To everyone who's replied with shock to this thread, the reality is that
nearly all of the developer-hours which fuel OpenStack's progress are
funded directly by corporations, whether big or small. Even those folks who
have worked in open source for a long time, and are working on OpenStack by
choice, are being paid by companies deeply invested in the success of this
project. Some developers are adept at separating the demands of their
employer from the best interests of the community. Some are not. I don't
have hard data, but I suspect that most of the nearly-2000 developers who
have contributed to OpenStack during the Mitaka cycle are working on
whatever they're working on BECAUSE IT MATTERS TO THEIR EMPLOYER.

Every project experiences pressure from companies who are trying to land
very specific features. Why? Because they're all chasing first-leader
advantage in the market. Lots of features are in flight right now, and,
even though it sounds unbelievable, many companies announce those features
in their products BEFORE they actually land upstream. Crazy, right?
Except... IT WORKS. Other companies buy their product because they are
buying a PRODUCT from some company. It happens to 

[openstack-dev] [release] release countdown for week R-10, Jan 25-29

2016-01-21 Thread Doug Hellmann
Focus
-

With the second milestone behind us, project teams should be focusing
on wrapping up new feature work and stabilizing recent additions.

Release Actions
---

We will be more strictly enforcing the library release freeze before
M3 in 5 weeks. Please review client libraries, integration libraries,
and any other libraries managed by your team and ensure that recent
changes have been released and the global requirements and constraints
lists are up to date with accurate minimum versions and exclusions.
Keep in mind our policy about not releasing new libraries into the
CI system late in the week.

We have quite a few projects with unreleased changes on the
stable/liberty branch. Please check http://paste.openstack.org/show/484431/
for info about your project, and propose appropriate releases.

Important Dates
---

Final release for non-client libraries: Feb 24
Final release for client libraries: Mar 2
Mitaka 3: Feb 29-Mar 4 (includes feature freeze and soft string freeze)

Mitaka release schedule: 
http://docs.openstack.org/releases/schedules/mitaka.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Mikhail Medvedev
On Thu, Jan 21, 2016 at 1:48 PM, Bob Hansen  wrote:

> Found it. The contents of the admin file (e.g.
> ../devstack/accrc/admin/admin) that I sourced for the admin credentials do
> not work with the nova cli. This combination of OS_* variables produced the
> error.
>
> export OS_PROJECT_NAME="admin"
> export OS_AUTH_URL="http://127.0.0.1:35357"
> export OS_CACERT=""
> export OS_AUTH_TYPE=v2password
> export OS_PASSWORD="secretadmin"
> export OS_USER_DOMAIN_ID=default
> export OS_PROJECT_DOMAIN_ID=default
>
> this combination works with nova, glance and neutron.
>
> export OS_PROJECT_NAME=admin
> export OS_PASSWORD=secretadmin
> export OS_AUTH_URL=http://127.0.0.1:35357
> export OS_USERNAME=admin
> export OS_TENANT_NAME=admin
> export OS_CACERT=
>
> To be honest, I've seen so many examples of the 'correct' set of
> environment variables with different AUTH_TYPES, it's very hard to tell
> which variable 'set' is appropriate for which AUTH_TYPE and version of the
> keystone API.
>
> A pointer to this sort of information is appreciated.
>
AFAIK, OpenStackClient (OSC) is making it a bit saner. See the docs
http://docs.openstack.org/developer/python-openstackclient/, and
"authentication" section in particular.

>
> Bob Hansen
> z/VM OpenStack Enablement
>
>
> Bob Hansen---01/21/2016 10:46:15 AM---Yes, it is image-list not image list. I
> don't seem to be able to find any other hints in any of the
>
> From: Bob Hansen/Endicott/IBM@IBMUS
> To: "OpenStack Development Mailing List \(not for usage questions\)" <
> openstack-dev@lists.openstack.org>
> Date: 01/21/2016 10:46 AM
>
> Subject: Re: [openstack-dev] nova cli commands fail with 404. devstack
> installation from today
> --
>
>
>
> Yes, it is image-list not image list. I don't seem to be able to find any
> other hints in any of the nova logs.
>
> nova --debug image-list shows this:
>
> DEBUG (extension:157) found extension EntryPoint.parse('token =
> keystoneauth1.loading._plugins.identity.generic:Token')
> DEBUG (extension:157) found extension EntryPoint.parse('v3token =
> keystoneauth1.loading._plugins.identity.v3:Token')
> DEBUG (extension:157) found extension EntryPoint.parse('password =
> keystoneauth1.loading._plugins.identity.generic:Password')
> DEBUG (v2:62) Making authentication request to
> http://127.0.0.1:35357/tokens
> INFO (connectionpool:207) Starting new HTTP connection (1): 127.0.0.1
> DEBUG (connectionpool:387) "POST /tokens HTTP/1.1" 404 93
> DEBUG (session:439) Request returned failure status: 404
> DEBUG (shell:894) The resource could not be found. (HTTP 404)
> Traceback (most recent call last):
> File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
> 892, in main
> OpenStackComputeShell().main(argv)
> File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
> 726, in main
> api_version = api_versions.discover_version(self.cs, api_version)
> File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
> line 267, in discover_version
> client)
> File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
> line 248, in _get_server_version_range
> version = client.versions.get_current()
> File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
> line 83, in get_current
> return self._get_current()
> File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
> line 56, in _get_current
> url = "%s" % self.api.client.get_endpoint()
> File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",
> line 132, in get_endpoint
> return self.session.get_endpoint(auth or self.auth, **kwargs)
> File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
> line 634, in get_endpoint
> return auth.get_endpoint(self, **kwargs)
> File
> "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
> line 209, in get_endpoint
> service_catalog = self.get_access(session).service_catalog
> File
> "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
> line 135, in get_access
> self.auth_ref = self.get_auth_ref(session)
> File
> "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v2.py", line
> 64, in get_auth_ref
> authenticated=False, log=False)
> File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
> line 545, in post
> return self.request(url, 'POST', **kwargs)
> File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/_utils.py",
> line 180, in inner
> return func(*args, **kwargs)
> File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
> line 440, in request
> raise exceptions.from_response(resp, method, url)
> NotFound: The resource could not be found. (HTTP 404)
>
>
>
> Bob Hansen
> z/VM OpenStack Enablement
>
>

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Robert Collins
On 21 January 2016 at 07:38, Ian Cordasco  wrote:
>
> I think this is a solid proposal but I'm not sure what (if anything) the TC 
> needs to do about this. This is something most non-corporate open source 
> projects do (and even some corporate open source projects). It's the natural 
> life-cycle of any software project (that we ship a bunch of things and then 
> focus on stability). Granted, I haven't seen much of a focus on it in 
> OpenStack but that's a different story.
>
> That said, I'd like to see a different release cadence for cycles that are 
> "stabilization cycles". We, as a community, are not using minor version 
> numbers. During a stabilization cycle, I would like to see master be released 
> around the 3 milestones as X.1.0, X.2.0, X.3.0. If we work that way, then 
> we'll be able to avoid having to backport a lot of work to the X.0 series and 
> while we could support X.0 series with specific backports, it would avoid 
> stressing our already small stable teams. My release strategy, however, may 
> cause more stress for downstream packages though. It'll cause them to have to 
> decide what and when to package and to be far more aware of each project's 
> current development cycle. I'm not sure that's positive.

So the reason this was on my todo this cycle - and I'm so glad Flavio
has picked it up (point 9 of
https://rbtcollins.wordpress.com/2015/11/02/openstack-mitaka-debrief/)
- was that during the Tokyo summit, in multiple sessions, folk were
saying that they wanted space from features, to consolidate already
added things, and to clean up accrued debt, and that without TC
support, they couldn't sell it back to their companies.

Essentially, if the TC provides some leadership here: maybe as little as:
 - it's ok to do it [we think it will benefit our users]
 - sets some basic expectations

And then individual projects decide to do it (whether that's a PTL
call, a vote, core consensus, whatever) - then developers have a
platform to say to their organisation that the focus is X, don't
expect features to land - and that they are *expected* to help with
the cycle.

Without some framework, we're leaving those developers out in the cold
trying to explain what-and-why-and-how all by themselves.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Dynamically adding Extra Specs

2016-01-21 Thread Dhvanan Shah
Hi,

I had a few queries regarding adding extra specs for VM requests.

According to my understanding, if I want to add extra specs to requests,
then I need to set those capabilities in different flavors. But if the
requests that I get have varying values for those extra capabilities, this
seems to create an issue, as the values in the flavors are static. Please
correct me if I'm wrong.

So I wanted to know how I could dynamically add the extra specs best
suited to each request. Is there a way of specifying the extra specs every
time I spawn a VM through the nova CLI? Setting and unsetting the extra
specs every time I spawn VMs according to my need would be quite
inefficient, as it makes changes to the database (see the sketch below).
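
To illustrate the churn I mean, the flavor-level workaround looks roughly
like this (a sketch; the flavor name, image name, and extra spec key are
made up for illustration):

# Create a throwaway flavor (name, id, ram MB, disk GB, vcpus), set the
# per-request extra spec, boot, then clean up -- every step hits the DB:
nova flavor-create tmp-flavor auto 2048 20 2
nova flavor-key tmp-flavor set hw:cpu_policy=dedicated
nova boot --flavor tmp-flavor --image cirros tmp-vm
nova flavor-delete tmp-flavor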


Thanks,
Dhvanan Shah
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project]How to make use of x-openstack-request-id

2016-01-21 Thread Tan, Lin
Thanks Kekane, I tested glance/neutron/keystone with ``x-openstack-request-id`` 
and found something interesting.

I am able to pass ``x-openstack-request-id`` to glance and it will use the 
UUID as its request-id. But it failed with neutron and keystone.
Here is my test:
http://paste.openstack.org/show/484644/

It looks like this is because keystone and neutron are using 
oslo_middleware:RequestId.factory, and in this part:
https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35
it will always generate a UUID and append it to the response as the 
``x-openstack-request-id`` header.

My question is: should we accept an externally passed request-id as the 
project's own request-id, or should each project have its own unique request-id?
In other words, which way is correct, glance's or neutron/keystone's? There 
must be something wrong with one of them.
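
(For anyone who wants to reproduce this quickly, a check along the lines of
my paste -- the token and the glance endpoint are placeholders:)

REQ_ID="req-$(uuidgen)"
# Glance echoes the supplied id back; neutron/keystone return their own:
curl -si -H "X-Auth-Token: $TOKEN" \
     -H "X-Openstack-Request-Id: $REQ_ID" \
     http://127.0.0.1:9292/v2/images | grep -i x-openstack-request-id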

Thanks

B.R

Tan

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: Wednesday, December 2, 2015 2:24 PM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of 
x-openstack-request-id


Hi Tan,



Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in the API 
response header, but this request id is not available to the caller from the 
python client.

When you use the --debug option from the command prompt with the client, 
you can see `X-Openstack-Request-Id` on the console, but it is not logged 
anywhere.



Currently a cross-project spec [1] has been submitted and approved for returning 
X-Openstack-Request-Id to the caller, and the implementation for the same is in 
progress.

Please go through the spec for detailed information, which will help you 
understand more about request-ids and the current work on the same.



Please feel free to reach out anytime with any doubts.



[1] 
https://github.com/openstack/openstack-specs/blob/master/specs/return-request-id.rst



Thanks,



Abhishek Kekane









Hi guys

I recently played around with the 'x-openstack-request-id' header but have a 
dumb question about how it works. At the beginning, I thought an action across 
different services should use the same request-id, but it looks like this is 
not true.



First I read the spec: 
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id which said 
"This ID and the request ID of the other service will be logged at service 
boundaries". and I see cinder/neutron/glance will attach its context's 
request-id as the value of "x-openstack-request-id" header to its response 
while nova use X-Compute-Request-Id. This is easy to understand. So It looks 
like each service should generate its own request-id and attach to its 
response, that's all.



But then I see glance reads 'X-Openstack-Request-ID' to generate the request-id, 
while cinder/neutron/nova read 'openstack.request_id' when used with keystone. 
That is, they try to reuse the request-id from keystone.



This totally confused me. It would be great if you could correct me or point me 
to some reference. Thanks a lot



Best Regards,



Tan


__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Failure on installing openstack with latest devstack.

2016-01-21 Thread Vikram Choudhary
Hi There,

Can someone please help me resolve the devstack error below?

2016-01-22 07:00:29.802 | 6134 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2016-01-22 07:00:29.952 | 6134 INFO keystone.cmd.cli [-] Created project
admin
2016-01-22 07:00:29.959 | 6134 DEBUG passlib.registry [-] registered
'sha512_crypt' handler: 
register_crypt_handler
/usr/local/lib/python2.7/dist-packages/passlib/registry.py:284
2016-01-22 07:00:30.032 | 6134 INFO keystone.cmd.cli [-] Created user admin
2016-01-22 07:00:30.088 | 6134 INFO keystone.cmd.cli [-] Created Role admin
2016-01-22 07:00:30.168 | 6134 INFO keystone.cmd.cli [-] Granted admin on
admin to user admin.
2016-01-22 07:00:30.247 | + local token_id
2016-01-22 07:00:30.248 | ++ openstack token issue -c id -f value
--os-username admin --os-project-name admin --os-user-domain-id default
--os-project-domain-id default --os-identity-api-version 3 --os-auth-url
http://192.168.2.5:35357 --os-password openstack
2016-01-22 07:00:31.284 | Discovering versions from the identity service
failed when creating the password plugin. Attempting to determine version
from URL.
2016-01-22 07:00:31.284 | Could not determine a suitable URL for the plugin
2016-01-22 07:00:31.307 | + token_id=
2016-01-22 07:00:31.307 | + exit_trap
2016-01-22 07:00:31.307 | + local r=1
2016-01-22 07:00:31.308 | ++ jobs -p
2016-01-22 07:00:31.308 | + jobs=
2016-01-22 07:00:31.308 | + [[ -n '' ]]
2016-01-22 07:00:31.308 | + kill_spinner
2016-01-22 07:00:31.308 | + '[' '!' -z '' ']'
2016-01-22 07:00:31.308 | + [[ 1 -ne 0 ]]
2016-01-22 07:00:31.308 | + echo 'Error on exit'
2016-01-22 07:00:31.308 | Error on exit
2016-01-22 07:00:31.308 | + [[ -z /opt/stack/logs/ ]]
2016-01-22 07:00:31.308 | + /home/openstack/devstack/tools/worlddump.py -d
/opt/stack/logs/
2016-01-22 07:00:31.583 | + exit 1


Keystone logs:
sudo tail -f /var/log/apache2/keystone.log
2016-01-22 11:35:24.804405 mod_wsgi (pid=19720): Target WSGI script
'/usr/local/bin/keystone-wsgi-admin' cannot be loaded as Python module.
2016-01-22 11:35:24.804452 mod_wsgi (pid=19720): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-admin'.
2016-01-22 11:35:24.804481 Traceback (most recent call last):
2016-01-22 11:35:24.804499   File "/usr/local/bin/keystone-wsgi-admin",
line 6, in 
2016-01-22 11:35:24.804554 from keystone.server.wsgi import
initialize_admin_application
2016-01-22 11:35:24.804567   File
"/opt/stack/keystone/keystone/server/wsgi.py", line 31, in 
2016-01-22 11:35:24.804608 from keystone.version import service as
keystone_service
2016-01-22 11:35:24.804619   File
"/opt/stack/keystone/keystone/version/service.py", line 20, in 
2016-01-22 11:35:24.804676 from paste import deploy
2016-01-22 11:35:24.804694 ImportError: cannot import name deploy
2016-01-22 12:30:29.030111 mod_wsgi (pid=5922): Target WSGI script
'/usr/local/bin/keystone-wsgi-public' cannot be loaded as Python module.
2016-01-22 12:30:29.030151 mod_wsgi (pid=5922): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-public'.
2016-01-22 12:30:29.030171 Traceback (most recent call last):
2016-01-22 12:30:29.030190   File "/usr/local/bin/keystone-wsgi-public",
line 6, in 
2016-01-22 12:30:29.030248 from keystone.server.wsgi import
initialize_public_application
2016-01-22 12:30:29.030261   File
"/opt/stack/keystone/keystone/server/wsgi.py", line 31, in 
2016-01-22 12:30:29.030303 from keystone.version import service as
keystone_service
2016-01-22 12:30:29.030324   File
"/opt/stack/keystone/keystone/version/service.py", line 20, in 
2016-01-22 12:30:29.030390 from paste import deploy
2016-01-22 12:30:29.030410 ImportError: cannot import name deploy
2016-01-22 12:30:31.282643 mod_wsgi (pid=5923): Target WSGI script
'/usr/local/bin/keystone-wsgi-admin' cannot be loaded as Python module.
2016-01-22 12:30:31.282693 mod_wsgi (pid=5923): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-admin'.
2016-01-22 12:30:31.282716 Traceback (most recent call last):
2016-01-22 12:30:31.282734   File "/usr/local/bin/keystone-wsgi-admin",
line 6, in 
2016-01-22 12:30:31.282816 from keystone.server.wsgi import
initialize_admin_application
2016-01-22 12:30:31.282838   File
"/opt/stack/keystone/keystone/server/wsgi.py", line 31, in 
2016-01-22 12:30:31.282900 from keystone.version import service as
keystone_service
2016-01-22 12:30:31.282921   File
"/opt/stack/keystone/keystone/version/service.py", line 20, in 
2016-01-22 12:30:31.283000 from paste import deploy
2016-01-22 12:30:31.283027 ImportError: cannot import name deploy
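
The traceback points at the paste/PasteDeploy installation rather than at
keystone itself; a quick diagnostic sketch (the reinstall step is an
assumption, not a verified fix for this environment):

# If this import also fails outside Apache, PasteDeploy itself is broken:
python -c "from paste import deploy; print(deploy.__file__)"
# A commonly reported remedy (assumption, not verified here):
sudo pip install --force-reinstall PasteDeploy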

Thanks
Vikram
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][heat-client] Question about heat command: deployment-list and deployment-metadata-show

2016-01-21 Thread 邸小丽 Di XiaoLi
Hi:

When using the heat commands heat deployment-list and heat deployment-metadata-show 
like this:
# heat deployment-list -s non-exist-server-id
++---+---+++---+---+
| id | config_id | server_id | action | status | creation_time | status_reason |
++---+---+++---+---+
++---+---+++---+---+
# heat deployment-metadata-show non-exist-server-id
[]
Here, I give an invalid server_id, but the heat client did not show me that the 
server_id does not exist.
I think this may be a bug, as an invalid server_id and a valid server_id 
with no deployments both just return the same empty output.
So, My questions are:
1) Is this a bug or consistent with design ?
2) If this is a bug, we should do the validation on the server_id and return a 
Not Found message as appropriate.
I would like to know whether we should do the validation in the heat client or 
in heat?



Best Regards,
Di XiaoLi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][l2-gateway] Kindly ask for review for this RFE

2016-01-21 Thread joehuang
Hello,

This RFE[1] “extend l2gw api to support multi-site connectivity” still needs 
review; it was registered several weeks ago.

There are several projects looking forward to the feature of extending an L2 
network across Neutron deployments, or, say, across sites.
For example: OPNFV multisite[2]: VNF (telecom application) high availability 
across OpenStack (VIM in NFV terms). Another way of implementing this was 
proposed in Neutron[3], but Neutron suggested moving it to the L2GW sub-project. 
Tricircle also expects cross-OpenStack networking via VxLAN [4].

[1] RFE: https://bugs.launchpad.net/networking-l2gw/+bug/1529863
[2] OPNFV multisite requirement  
https://git.opnfv.org/cgit/multisite/tree/docs/requirements/VNF_high_availability_across_VIM.rst
[3] RFE in Neutron https://bugs.launchpad.net/neutron/+bug/1484005
[4] Tricircle requirement: 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g

Best Regards
Chaoyi Huang ( Joe Huang )

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Thursday, January 14, 2016 3:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron]{l2-gateway] is this project alive



From: "Armando M." >
Reply-To: OpenStack List 
>
Date: Tuesday, January 12, 2016 at 8:57 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [Neutron]{l2-gateway] is this project alive



On 12 January 2016 at 09:21, Gary Kotton wrote:
Here is an example of a patch that was posted 5 weeks ago. - 
https://review.openstack.org/#/c/254816/
That is a pretty long time for something that is trivial

5 weeks is bad but not the end of the world. We all have patches sitting idle 
in the queues of various projects.

[Gary] I have patches in other projects for over 10 months … Thanks for 
addressing the concerns and reviewing the patches


From: "Vasudevan, Swaminathan (PNB Roseville)" 
>
Reply-To: OpenStack List 
>
Date: Tuesday, January 12, 2016 at 6:11 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [Neutron]{l2-gateway] is this project alive

Hi Gary,
I think it is still active.
What are your concerns, I can talk to the team.
Thanks
Swami

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, January 12, 2016 6:14 AM
To: OpenStack List
Subject: [openstack-dev] [Neutron]{l2-gateway] is this project alive

It's like a desert out here trying to get a review…

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2016-01-21 Thread Lance Bragstad
Hi all,

We've been consistently chipping away at keystone bugs for a while now
every Friday. We've also refactored some of the information around the bug
day [0] and built a couple dashboards to help people get started with
reviews [1] [2].

I wanted to send out this note just to update everyone in case the
information was missed.

Thanks and see you tomorrow!

[0] https://etherpad.openstack.org/p/keystone-office-hours
[1] http://bit.ly/keystone-bug-reviews
[2] https://goo.gl/T8jo7S

On Sun, Oct 11, 2015 at 10:11 AM, Henrique Truta <
henriquecostatr...@gmail.com> wrote:

> I'm in! And hope I can put some other folks in too.
>
> On Sat, Oct 10, 2015 at 12:03, Lance Bragstad
> wrote:
>
>> On Sat, Oct 10, 2015 at 8:07 AM, Boris Bobrov 
>> wrote:
>>
>>> On Saturday 10 October 2015 08:42:10 Shinobu Kinjo wrote:
>>> > So what's the procedure?
>>>
>>> You go to #openstack-keystone on Friday, choose a bug, talk to someone
>>> of the
>>> core reviewers. After talking to them fix the bug.
>>>
>>
>> Wash, rinse, repeat? ;)
>>
>> Looking forward to it, I think this is a much needed pattern!
>>
>>>
>>> > Shinobu
>>> >
>>> > - Original Message -
>>> > From: "Adam Young" 
>>> > To: openstack-dev@lists.openstack.org
>>> > Sent: Saturday, October 10, 2015 12:11:35 PM
>>> > Subject: Re: [openstack-dev] [keystone] Let's get together and fix all
>>> the
>>> > bugs
>>> >
>>> > On 10/09/2015 11:04 PM, Chen, Wei D wrote:
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Great idea! core reviewer’s advice is definitely much important and
>>> valuable
>>> > before proposing a fixing. I was always thinking it will help save us
>>> if we
>>> > can get some agreement at some point.
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Best Regards,
>>> >
>>> > Dave Chen
>>> >
>>> >
>>> >
>>> >
>>> > From: David Stanek [ mailto:dsta...@dstanek.com ]
>>> > Sent: Saturday, October 10, 2015 3:54 AM
>>> > To: OpenStack Development Mailing List
>>> > Subject: [openstack-dev] [keystone] Let's get together and fix all the
>>> bugs
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > I would like to start running a recurring bug squashing day. The
>>> general
>>> > idea is to get more focus on bugs and stability. You can find the
>>> details
>>> > here: https://etherpad.openstack.org/p/keystone-office-hours Can we
>>> start
>>> > with Bug 968696?
>>>
>>> --
>>> With best regards,
>>> Boris
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev