Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

2016-03-04 Thread Wang, Shane
Typo: view -> review:)

From: Wang, Shane
Sent: Saturday, March 05, 2016 2:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [bug-smash] Global OpenStack Bug Smash Mitaka

A reminder for the community: the bug smashes are going to start next Monday. If you can't join one of those 11 sites, you can do a virtual bug smash remotely by fixing bugs from your office.
Also, please help do remote view. Thanks.

Best Regards.
--
Shane
From: Wang, Shane [mailto:shane.w...@intel.com]
Sent: Friday, February 05, 2016 11:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka
Importance: High

Hi all,

After discussing with TC members and other community members, we thought March 2-4 might not be good timing for the bug smash, so we decided to change the dates to March 7-9 (Monday - Wednesday) in R4.
Please join our efforts to fix bugs for OpenStack.

Thanks.
--
Shane
From: Wang, Shane [mailto:shane.w...@intel.com]
Sent: Thursday, January 28, 2016 5:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

Save the Date:
Global OpenStack Bug Smash
Monday-Wednesday, March 7-9, 2016
RSVP by Friday, February 24

How can you help make the OpenStack Mitaka release stable and bug-free while 
having fun with your peers? Join Intel, Rackspace, Mirantis, IBM, HP, Huawei, 
CESI and others in a global bug smash across four continents as we work 
together. Then, join us later in April in Austin, Texas, U.S.A. at the 
OpenStack Summit to get re-acquainted & celebrate our accomplishments!

OUR GOAL
Our key goal is to collaborate round-the-clock and around the world to fix as 
many bugs as possible across the wide range of OpenStack projects. In the 
process, we'll also help onboard and grow the number of OpenStack developers, 
and increase our collective knowledge of OpenStack tools and processes. To ease 
collaboration among all of the participants and ensure that core reviews can be 
conducted promptly, we will use the IRC channel, the mailing list, and Gerrit 
and enlist core reviewers in the event.

GET INVOLVED
Simply choose a place near you and register by Friday, February 24.
Registration is free, and we encourage you to invite others who may be 
interested.

* Australia
* China
* India

* Russia
* United Kingdom
* United States


Visit the link below for additional details:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

Come make the Mitaka release a grand success through your contributions, and 
ease the journey for newcomers!

Regards.
--
OpenStack Bug Smash team

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

2016-03-04 Thread Wang, Shane
Yes, sure, Markus. We are using https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka-Bugs to track them; the etherpad is edited by the participants to share bug lists between the different geos. Feel free to let the community know on the communication channels, the IRC channels (e.g. nova) or the mailing list.
Let's use those to coordinate and concentrate on fixing from next Monday to next Wednesday.

Thanks.
--
Shane

-Original Message-
From: Markus Zoeller [mailto:mzoel...@de.ibm.com] 
Sent: Thursday, March 03, 2016 2:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

"Wang, Shane"  wrote on 02/05/2016 04:42:21 AM:

> From: "Wang, Shane" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 02/05/2016 04:43 AM
> Subject: Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash
Mitaka
> 
> Hi all,
> 
> After discussing with TC members and other community members, we thought
> March 2-4 might not be good timing for the bug smash, so we decided to
> change the dates to March 7-9 (Monday - Wednesday) in R4.
> Please join our efforts to fix bugs for OpenStack.
> 
> Thanks.

Hi Shane,

I'm the bug list maintainer of Nova; is it possible for me to propose a list of bugs which need fixes?
Nova (and surely other projects too) would also benefit from:
* a cleanup of inconsistencies in bug reports in Launchpad
* triaging new bugs in Launchpad
* reviews of pushed bug fixes in Gerrit
Basically the steps from [1]. As we're heading to the RC phase in a few weeks, it would be beneficial to have a lot of eyes on that.

References:
[1] https://wiki.openstack.org/wiki/BugTriage

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 6:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) wrote:

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for,
This is easy. Once we build comprehensive tests for the first OS, just re-run 
it for other OS(s).

and the implications that has on our pace of feature development. My guidance 
here is that we resist the temptation to create a system with more permutations 
than we can possibly support. The relations between bay node OS, Heat Template, Heat Template parameters, COE, and COE dependencies (cloud-init, docker, flannel, etcd, etc.) are multiplicative in nature. From the mid cycle, it was
clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn't necessarily have a default that locks out other choices.  Magnum devs are the experts in how these systems operate, and as such need to take on the responsibility of implementing multi-OS support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That's exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 
realistic.
You have my promise to support an additional OS for 1 or 2 popular COEs.

Change velocity among all the components we rely on has been very high. We see 
some of our best contributors frequently sidetracked in the details of the 
distros releasing versions of code that won't work with ours. We want to 
upgrade a component to add a new feature, but struggle to because the new 
release of the distro that offers that component is otherwise incompatible. 
Multiply this by more distros, and we expect a real problem.
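As a rough illustration of how that multiplies (the component lists below are hypothetical examples, not Magnum's actual support matrix):

import itertools

# Hypothetical component choices, purely to illustrate how the test matrix
# grows multiplicatively; these are not Magnum's actual supported options.
operating_systems = ['fedora-atomic', 'coreos', 'ubuntu']
coes = ['kubernetes', 'swarm', 'mesos']
network_drivers = ['flannel', 'calico']
storage_drivers = ['devicemapper', 'overlay']

matrix = list(itertools.product(operating_systems, coes,
                                network_drivers, storage_drivers))
print(len(matrix))  # 3 * 3 * 2 * 2 = 36 end-to-end gate jobs to keep green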
At Magnum upstream, the overhead doesn't seem to come from the OS. Perhaps, 
that is specific to your downstream?


There is no harm if you have 30 gates running the various combinations.  
Infrastructure can handle the load.  Whether devs have the cycles to make a 
fully bulletproof gate is the question I think you answered with the word 
intractable.

Actually, our existing gate tests are really stressing out our CI infra. At least one of the new infrastructure providers that replaced HP has equipment that runs considerably slower. For example, our swarm functional gate now frequently fails because it can't finish within the allowed time limit of 2 hours, whereas it used to finish substantially faster. If we expanded the
workload considerably, we might quickly work to the detriment of other projects 
by perpetually clogging the CI pipelines. We want to be a good citizen of the 
openstack CI community. Testing configuration of third party software should be 
done with third party CI setups. That's one of the reasons those exist. 
Ideally, each would be maintained by those who have a strategic (commercial?) 
interest in support for that particular OS.

I can tell you that in Kolla we spend a lot of cycles just getting basic gating going for building containers and then deploying them.  We have even made
inroads into testing the deployment.  We do CentOS, Ubuntu, and soon Oracle 
Linux, for both source and binary and build and deploy.  Lots of gates and if 
they aren't green we know the patch is wrong.

Remember that COEs are tested on nova instances within heat stacks. Starting
lots of nova instances within devstack in the gates is problematic. We are 
looking into using a libvirt-lxc instance type from nova instead of a 
libvirt-kvm instance to help alleviate this. Until then, limiting the scope 

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-04 Thread Chris Friesen

On 03/04/2016 03:42 PM, Matt Riedemann wrote:



On 3/3/2016 9:14 PM, Zhenyu Zheng wrote:

Hm, I found out the reason:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1139-L1145

here we filtered out parameters like "deleted", and that's why the API behaves as described above.

So should we simply add "deleted" to the tuple, or is a microversion needed?



Nice find. This is basically the same as the ip6 case which required
microversion 2.5 [1]. So I think this is going to require a microversion in
Newton to fix it (which means a blueprint and a spec since it's an API change).
But it's pretty trivial; the paperwork is the majority of the work.

[1] https://review.openstack.org/#/c/179569/
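For anyone following along, here is a minimal sketch of the whitelist pattern being discussed, with illustrative names only (it is not the exact nova source; the real code is at the servers.py link above):

# Illustrative sketch: the API layer keeps only known-good search options,
# so anything not listed (e.g. 'deleted') is silently dropped before it
# reaches the DB filters.
ALLOWED_FILTERS = ('status', 'name', 'host', 'changes-since')  # illustrative subset

def remove_invalid_options(search_opts, allowed=ALLOWED_FILTERS):
    return {k: v for k, v in search_opts.items() if k in allowed}

# The proposed fix is to add 'deleted' to the allowed tuple; because that
# changes observable API behaviour, it would be gated behind a microversion:
ALLOWED_FILTERS_NEW = ALLOWED_FILTERS + ('deleted',)  # hypothetical new tuple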


Does it really need a spec given that microversions are documented in the 
codebase?

That almost seems like paperwork for the sake of following the rules rather than serving any useful purpose.


Is anyone really going to argue about the details?

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-04 Thread Adam Young

On 03/04/2016 09:23 AM, Emilien Macchi wrote:

That's not the name of any Summit's talk, it's just an e-mail I wanted
to write for a long time.

It is an attempt to expose facts or things I've heard a lot; and bring
constructive thoughts about why it's challenging to contribute in
TripleO project.


1/ "I don't review this patch, we don't have CI coverage."

One thing I've noticed in TripleO is that very few people are involved in CI work.
In my opinion, the CI system is more critical than any feature in a product.
Developing software without tests is a bit like http://goo.gl/OlgFRc
All people - especially core - in the project should be involved in CI work. If you are TripleO core and you don't contribute to CI, you might ask yourself why.


OK... so what is the state of TripleO CI?  My experience with TripleO has shown that it is quite resource intensive, far more so than, say, Keystone, and so I could see that being the gating factor.


In order for me to be able to get into TripleO coding, I needed a new machine with 32 GB of RAM, separate from my everyday work machine.  Not a killer outlay, but enough to hold me up until I got the HW allocated.


If we could split up the testing of the undercloud vs. the overcloud, it might be more feasible.  I see no fundamental reason that the majority of the Overcloud development and testing could not be done on top of a non-Ironic-based OpenStack deployment.


That leaves just the undercloud, which could, possibly, also run on top of an existing OpenStack deployment for much of the development.


A true end-to-end run of TripleO with HA requires a lot: 3 physical machines plus a little overhead for the Overcloud.  But this is what is really needed.  Ideally, on multiple vendors' systems, so that we can identify some aspects of hardware variation.






2/ "I don't review this patch, CI is broken."

Another thing I've noticed in TripleO is that when CI is broken, again,
very few people are actually working on fixing failures.
My experience over the last few years has taught me to stop my daily work when CI is broken and fix it ASAP.


Puppet and Heat are black boxes to me still.  I don't clearly understand 
how they fit together.


I think we need to start de-puppetifying TripleO. I know we have a lot of sunk cost in it, but we went with Puppet because it was all we had, not because it matched the problem set well.


I'd recommend a freeze on all new Puppet development, and start doing all new features in Ansible. Fully acknowledging the havoc this will wreak, I think it is important strategically.  It is really hard to swap between two languages, and the rest of OpenStack is in Python.  Switching to Ruby is hard.


All of our Client support is in Python.

The number of people that know Puppet that actively contribute to 
OpenStack is small. The number of real Ruby experts is smaller.





3/ "I don't review it, because this feature / code is not my area".

My first thought is "Aren't we supposed to be engineers and learn new areas?"
My second thought is that I think we have a problem with TripleO Heat Templates.
THT, or TripleO Heat Templates, is 80% Puppet / Hiera code. If TripleO cores say "I'm not familiar with Puppet", we have a problem here, don't we?
Maybe we should split this repository? Or revisit the list of people who can +2 patches on THT.
I am more than happy to review anything Keystone related, but again, I 
struggle with Puppet.


Not really knowing Heat as well makes it even tougher. We need a better 
overall orientation guide if people are going to come up to speed quicker.






4/ Patches are stalled. Most of the time.

Over the last 12 months, I've pushed a lot of patches in TripleO and one
thing I've noticed is that if I don't ping people, my patch gets no review. And I have to rebase it every week, because the interface changed. I got a +2, cool! Oh, merge conflict. Rebasing. Waiting for +2 again... and so on.
The same is true of Keystone.  There is just a lot to get done on this project. On all of these projects.




I personally spend 20% of my time reviewing code, every day.
I wrote a blog post about how I do reviews, with Gertty:
http://my1.fr/blog/reviewing-puppet-openstack-patches/
I suggest TripleO folks spend more time on reviews, for several reasons:


Nice of you to write that up.


* decreasing frustration from contributors
* accelerate development process
* teach new contributors to work on TripleO, and eventually scale-up the
core team. It's a time investment, but worth it.

In Puppet team, we have weekly triage sessions and it's pretty helpful.


5/ Most of the tests are run... manually.

How many times have I heard "I've tested this patch locally, and it does not work, so -1".

The only test we do in current CI is a ping to an instance. Seriously? Most OpenStack CIs (Fuel included) run Tempest, for testing APIs and real scenarios. And we run a ping.
That's similar to 1/ but I wanted to raise it too.
Again, testing is expensive; if I 

Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-04 Thread Bradley Jones (bradjone)
+1 

Shu has done some great work in magnum-ui and will be a welcome addition to the 
core team.

Thanks,
Brad

> On 5 Mar 2016, at 00:29, Adrian Otto  wrote:
> 
> Magnum UI Cores,
> 
> I propose the following changes to the magnum-ui core group [1]:
> 
> + Shu Muto
> - Dims (Davanum Srinivas), by request - justified by reduced activity level.
> 
> Please respond with your +1 votes to approve this change or -1 votes to 
> oppose.
> 
> Thanks,
> 
> Adrian
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-04 Thread Adrian Otto
Magnum UI Cores,

I propose the following changes to the magnum-ui core group [1]:

+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.

Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

2016-03-04 Thread Adrian Otto
Kato,

I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for 
Magnum until further notice. Thanks for raising this important request.

Regards,

Adrian

> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki  wrote:
> 
> I added Magnum to the list... Feel free to add your name and IRC nick, Shu.
> 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
> 
>> One thing to note.
>> 
>> The role of i18n liaison is not to keep it well translated.
>> The main role is on the project side,
>> for example, to encourage i18n related reviews and fixes, or
>> to suggest what kind of coding is recommended from i18n point of view.
> 
> Yep, that is a reason why a core reviewer is preferred for liaison.
> We sometimes have various requirements:
> word ordering (block trans), n-plural form, and so on.
> Some of them may not be important for Japanese.
> 
> Regards,
> KATO Tomoyuki
> 
>> 
>> Akihiro
>> 
>> 2016-03-02 12:17 GMT+09:00 Shuu Mutou :
>>> Hi Hongbin, Yuanying and team,
>>> 
>>> Thank you for your recommendation.
>>> I'm keeping 100% of EN to JP translation of Magnum-UI everyday.
>>> I'll do my best, if I become a liaison.
>>> 
>>> Since translation has become another point of review for Magnum-UI, I hope
>>> that members will translate Magnum-UI into their native languages.
>>> 
>>> Best regards,
>>> Shu Muto
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] IPv4 network isolation testing update

2016-03-04 Thread Dan Prince
On Thu, 2016-03-03 at 21:38 -0500, Dan Prince wrote:
> Some progress today:
> 
> After rebuilding the test-env workers nodes in our CI rack to support
> multiple-nics we got a stack to go into CREATE_COMPLETE with network
> isolation enabled in CI.
> 
> https://review.openstack.org/#/c/288163/
> 
> Have a look at the Ceph result here:
> 
> http://logs.openstack.org/63/288163/1/check-tripleo/gate-tripleo-ci-f22-ceph/a284105/console.html
> 
> After the stack goes to CREATE_COMPLETE it then failed promptly with
> a
> HTTP 503 error. I think this might have been from python-
> keystoneclient's init code (perhaps trying to get keystone endpoints
> or
> something). I haven't quite tracked this down yet.
> 
> Anyways, this is progress. The stack completed, validations passed,
> but
> it failed during post deployment configuration somewhere. If anyone
> has
> ideas in the meantime please feel free to comment here or on the
> patches.

Ben Nemec pointed out that the http_proxy environment variable was the cause of our keystone initialization failures. With a simple extra 'unset http_proxy' it finally started passing today:

https://review.openstack.org/#/c/288163/
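For reference, a minimal Python-side equivalent of that workaround, assuming the 503 really did come from the client session honouring the proxy environment variables:

import os

# Clear proxy settings before the keystone client builds its HTTP session,
# so requests stops routing API calls through the unreachable proxy.
for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
    os.environ.pop(var, None)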

Thanks gfidente and derekh for all the help and debugging on this.

Dan

> 
> Dan
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Team meeting this Monday at 2100 UTC

2016-03-04 Thread Armando M.
Hi neutrinos,

A kind reminder for next week's meeting. More on the agenda [1].

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Adrian Otto
Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) wrote:

From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relations between bay node OS, Heat Template, Heat Template parameters, COE, and COE dependencies (cloud-init, docker, flannel, etcd, etc.) are multiplicative in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t necessarily have a default that locks out other choices. Magnum devs are the experts in how these systems operate, and as such need to take on the responsibility of implementing multi-OS support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 
realistic. Change velocity among all the components we rely on has been very 
high. We see some of our best contributors frequently sidetracked in the 
details of the distros releasing versions of code that won’t work with ours. We 
want to upgrade a component to add a new feature, but struggle to because the 
new release of the distro that offers that component is otherwise incompatible. 
Multiply this by more distros, and we expect a real problem.

There is no harm if you have 30 gates running the various combinations.  
Infrastructure can handle the load.  Whether devs have the cycles to make a 
fully bulletproof gate is the question I think you answered with the word 
intractable.

Actually, our existing gate tests are really stressing out our CI infra. At least one of the new infrastructure providers that replaced HP has equipment that runs considerably slower. For example, our swarm functional gate now frequently fails because it can’t finish within the allowed time limit of 2 hours, whereas it used to finish substantially faster. If we expanded the
workload considerably, we might quickly work to the detriment of other projects 
by perpetually clogging the CI pipelines. We want to be a good citizen of the 
openstack CI community. Testing configuration of third party software should be 
done with third party CI setups. That’s one of the reasons those exist. 
Ideally, each would be maintained by those who have a strategic (commercial?) 
interest in support for that particular OS.

I can tell you that in Kolla we spend a lot of cycles just getting basic gating going for building containers and then deploying them.  We have even made
inroads into testing the deployment.  We do CentOS, Ubuntu, and soon Oracle 
Linux, for both source and binary and build and deploy.  Lots of gates and if 
they aren't green we know the patch is wrong.

Remember that COEs are tested on nova instances within heat stacks. Starting
lots of nova instances within devstack in the gates is problematic. We are 
looking into using a libvirt-lxc instance type from nova instead of a 
libvirt-kvm instance to help alleviate this. Until then, limiting the scope of 
our gate tests is appropriate. We will continue our efforts to make them 
reasonably efficient.

Thanks,

Adrian


Regards
-steve


Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interest above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu wrote:

I don’t think there is any consensus on supporting single distro. There are 

Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-04 Thread Giulio Fidente

On 03/04/2016 03:23 PM, Emilien Macchi wrote:

That's not the name of any Summit's talk, it's just an e-mail I wanted
to write for a long time.

It is an attempt to expose facts or things I've heard a lot; and bring
constructive thoughts about why it's challenging to contribute in
TripleO project.


hi Emilien,

thanks for bringing this up; it's not an easy topic, and yet a most crucial one. As a core contributor I feel, to some extent, responsible for the current status of things, and I think it's time for us to reflect more about what we can, individually, do.


I have some ideas but I want to start by commenting to your points.


1/ "I don't review this patch, we don't have CI coverage."

One thing I've noticed in TripleO is that very few people are involved in CI work.
In my opinion, the CI system is more critical than any feature in a product.
Developing software without tests is a bit like http://goo.gl/OlgFRc
All people - especially core - in the project should be involved in CI work. If you are TripleO core and you don't contribute to CI, you might ask yourself why.


Agreed, we need more 'eyes' on our CI to cope with both the infra issues and the unavoidable failures due to changes/bugs in the puppet modules or openstack itself.


But there is more hiding behind this problem ... we already have quite a 
number of optional and even pluggable features in TripleO and we're even 
designing an interface to make this easier; testing them all isn't going 
to happen. So we'll always hit something we don't have coverage for.


Let's have a conversation on how we can improve coverage at the summit! Maybe we can simply make our CI scenarios more varied/complex in an attempt to touch more features?



2/ "I don't review this patch, CI is broken."

Another thing I've noticed in TripleO is that when CI is broken, again,
very few people are actually working on fixing failures.
My experience over the last few years has taught me to stop my daily work when CI is broken and fix it ASAP.


Agreed. More eyes and more coverage to increase its dependability.


3/ "I don't review it, because this feature / code is not my area".

My first thought is "Aren't we supposed to be engineers and learn new areas?"
My second thought is that I think we have a problem with TripleO Heat Templates.
THT, or TripleO Heat Templates, is 80% Puppet / Hiera code. If TripleO cores say "I'm not familiar with Puppet", we have a problem here, don't we?
Maybe we should split this repository? Or revisit the list of people who can +2 patches on THT.


Not sure here; I find that manifests and templates are pretty much "meant to go together", so I am worried that a split could solve some problems but also cause others.


This said, let's be honest: an effective patch for THT requires a good understanding of many different problems, which can be TripleO specific (e.g. implications on upgrades), tooling specific (e.g. Heat/Puppet), or OpenStack specific (e.g. cooperation with other, optional, features), so I have myself skipped changes when I didn't feel comfortable with them.


But one problem which I think has more recently been slowing reviews, and which is partly a cause of 3), is that we're not dealing too well with code duplication in the yamls and with conditional logic in the manifests.


Maybe we could stop and think together about new HOT functionalities which could help us? Interesting for the summit as well?



4/ Patches are stalled. Most of the time.

Over the last 12 months, I've pushed a lot of patches in TripleO and one
thing I've noticed is that if I don't ping people, my patch gets no review. And I have to rebase it every week, because the interface changed. I got a +2, cool! Oh, merge conflict. Rebasing. Waiting for +2 again... and so on.

I personally spend 20% of my time reviewing code, every day.
I wrote a blog post about how I do reviews, with Gertty:
http://my1.fr/blog/reviewing-puppet-openstack-patches/
I suggest TripleO folks spend more time on reviews, for several reasons:

* decreasing frustration from contributors
* accelerate development process
* teach new contributors to work on TripleO, and eventually scale-up the
core team. It's a time investment, but worth it.


I'm inclined to think that this is a bit of a consequence of 1), 2) and 
3) together.



In Puppet team, we have weekly triage sessions and it's pretty helpful.


Right. I think we experimented with something like this before, but it was probably perceived as an emergency measure, so we set it aside after a while.


I remember we had a list of 'hot reviews' which we would review during the weekly meetings. But it isn't trivial to understand which type of review is considered hot. What is the purpose of the puppet team's triaging? To find old reviews? Mergeable reviews? To drop stale reviews? To speed up bug fixes? To get attention on features?



5/ Most of the tests are run... manually.

How many times have I heard "I've tested this patch locally, and 

[openstack-dev] [neutron][taas] tap-service-list / tap-flow-list failure when list is empty

2016-03-04 Thread Anil Rao
Hi,

Here is some additional information pertaining to the failures I am seeing when invoking the tap-service-list and tap-flow-list commands. This is on a multi-node DevStack environment (1 controller node, 1 network node and 2 compute nodes).



1.   The tap-service-list command returns a failure when there are no 
tap-services.

2.   The tap-flow-list command returns a failure when there are no 
tap-flows.

3.   Both commands work as expected when the respective objects are present.

See example output (for tap-services) below.

osadmin@ds-ctl:~$ neutron tap-service-list
list index out of range
osadmin@ds-ctl:~$ neutron tap-service-create --name TS1 --description 
"tap-service-1" --port 2100906e-cb1a-4ab4-b50f-77f55a3f0793 --network 
cfb88d7c-8e9e-4954-a923-2f9cac3b4ebe
Created a new tap_service:
+-+--+
| Field   | Value|
+-+--+
| description | tap-service-1|
| id  | 1086170e-a9cd-41bd-a5df-7ad4782da337 |
| name| TS1  |
| port_id | 2100906e-cb1a-4ab4-b50f-77f55a3f0793 |
| tenant_id   | 93c1c68f06e843938159329bfdbed384 |
+-+--+
osadmin@ds-ctl:~$ neutron tap-service-list
+--+--+
| id   | name |
+--+--+
| 1086170e-a9cd-41bd-a5df-7ad4782da337 | TS1  |
+--+--+
osadmin@ds-ctl:~$ neutron tap-service-delete TS1
Deleted tap_service: TS1
osadmin@ds-ctl:~$ neutron tap-service-list
list index out of range

Here is the output of tap-service-list with the "--debug" flag. The error is being reported by neutronclient.shell.


DEBUG: keystoneauth.session RESP: [200] Date: Fri, 04 Mar 2016 22:50:16 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 20 X-Openstack-Request-Id: 
req-641ba1a0-7f49-4460-b720-313d92009b87
RESP BODY: {"tap_services": []}

ERROR: neutronclient.shell list index out of range
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
819, in run_subcommand
return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
105, in run_command
return cmd.run(known_args)
  File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/common/command.py", line 
29, in run
return super(OpenStackCommand, self).run(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 88, in 
run
self.produce_output(parsed_args, column_names, data)
  File "/usr/local/lib/python2.7/dist-packages/cliff/lister.py", line 51, in 
produce_output
parsed_args,
  File "/usr/local/lib/python2.7/dist-packages/cliff/formatters/table.py", line 
64, in emit_list
stdout, x, int(parsed_args.max_width), min_width)
  File "/usr/local/lib/python2.7/dist-packages/cliff/formatters/table.py", line 
148, in _assign_max_widths
first_line = x.get_string().splitlines()[0]
IndexError: list index out of range
list index out of range
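For what it's worth, a minimal sketch of the failure mode the traceback points at and the kind of guard a fix would need (the print_empty assumption about how cliff configures its PrettyTable is mine; the actual upstream patch may differ):

import prettytable

x = prettytable.PrettyTable(('id', 'name'))
x.print_empty = False          # assumption: cliff renders empty tables this way
s = x.get_string()             # '' when there are no rows and print_empty is False
lines = s.splitlines()         # [] -> lines[0] raises IndexError, as seen above
first_line = lines[0] if lines else ''   # the guard a fix in cliff would need
print(repr(first_line))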

It appears that other list commands associated with the neutron client also 
show the same type of failure when their lists are empty.

osadmin@ds-ctl:~$ neutron agent-list
list index out of range
osadmin@ds-ctl:~$ neutron address-scope-list
list index out of range

Thanks,
Anil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] Fuel CI issues

2016-03-04 Thread Aleksandra Fedorova
I think we need to address two separate concerns here:

1) smooth integration of puppet-openstack with Fuel master;
2)  tests for changes in fuel-library.

For 1) we need to build and test Fuel against master of
puppet-openstack and treat any integration issues as critical/blocker.
And this is what regular ISO builds and BVT tests are for.

But for 2) we need to test new commit to fuel-library against fixed
baseline - which means stable everything including Ubuntu upstream
mirror, QA framework, fuel code, mos packages or puppet-openstack
modules.

So while I think we should build the ISO from the current latest HEAD, debug BVT failures and address them ASAP, we also need to pin the puppet modules used for fuel-library tests to provide targeted feedback on fuel-library reviews.

To do so we can rely on the same process as we use now for fixing
ubuntu upstream repo:

We have a jenkins job which defines environment for deployment tests:

  https://ci.fuel-infra.org/view/devops/job/devops.master.env/

And in addition to the fuel-qa commit, Ubuntu mirror id and ISO magnet link, we can also provide there the versions list or the entire tarball of the puppet modules which were tested in the latest BVT.

Then we will update the environment as a whole via the same process as we have now for ISO images.
(Actually, I hope it will become a much better process soon, as we plan to speed up and fully automate environment updates in the near future.)
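To make the pinning idea concrete, here is a hypothetical sketch of what recording and replaying the puppet-module baseline from the last green BVT could look like (the file name and format are assumptions, not an existing Fuel artifact):

import json

# Hypothetical pin list recorded by the last green BVT run: module name -> git SHA.
PINS_FILE = 'bvt-tested-puppet-modules.json'

def load_pins(path=PINS_FILE):
    with open(path) as f:
        return json.load(f)

def print_checkout_commands(pins):
    # The git commands a fuel-library test job could run to reproduce the
    # exact baseline that passed BVT, instead of tracking upstream HEAD.
    for module, sha in sorted(pins.items()):
        print('git -C modules/{0} checkout {1}'.format(module, sha))

print_checkout_commands({'puppet-keystone': '9f8e7d6', 'puppet-nova': 'a1b2c3d'})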

On Tue, Mar 1, 2016 at 11:36 PM, Sergey Kolekonov
 wrote:
> I think we should also look at the root cause of these CI failures.
> They are connected with difference in packages and not with manifests or
> deployment process.
> So another possible solution is to stay as close as possible to the package
> sources used by OpenStack Puppet modules CI.
> For example, we have a BP [0] that adds an ability to deploy UCA packages
> with Fuel.
> Current package sources used by openstack-modules CI can be found here [1]
>
> Just my 2c.
> Thanks.
>
> [0] https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages
> [1]
> https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L4
>
> On Tue, Mar 1, 2016 at 2:21 PM, Vladimir Kuklin 
> wrote:
>>
>> Dmitry
>>
>> >I don't think "hurried" is applicable here: the only way to become more
>> >ready to track upstream than we already are is to *start* tracking
>> >upstream. Postponing that leaves us in a Catch-22 situation where we
>> >can't stay in sync with upstream because we're not continuously catching
>> >up with upstream.
>>
>> First of all, if you read my email again, you will see that I propose a
>> way of tracking upstream in less continuous mode with nightly testing and
>> switching to it based on automated integration testing which will leave us 0
>> opportunity to face the aforementioned issues.
>>
>> >That would lock us into that Catch-22 situation where we can't allow
>> >Fuel CI to vote on puppet-openstack commits because fuel-library is
>> >always too far behind puppet-openstack for its votes to mean anything
>> >useful.
>>
>> This is not true. We can run FUEL CI against any set of commits.
>>
>> > We have to approach this from the opposite direction: make Fuel CI
>> > stable and meaningful enough so that, 9 times out of 10, Fuel CI failure
>> > indicates a real problem with the code, and the remaining cases can be
>> > quickly unblocked by pushing a catch-up commit to fuel-library (ideally
>> > with a Depends-On tag).
>>
>> Dmitry, could you please point me at the person who will be strictly
>> responsible for creating this 'ketchup' commit? Do you know that this may
>> take up the whole day (couple of hours to do RCA, couple of hours on writing
>> and debugging and couple of hours for FUEL CI tests run) and block the
>> entire Fuel project from having ANY code merged? Taking into consideration
>> that openstack infra is currently under really high load it may take even
>> several days for the fix to land into master. How do you expect us to have
>> any feature merged prior to FF?
>>
>> > It is a matter of trust between projects: do we trust Puppet OpenStack
>> > project to take Fuel's problems seriously and to avoid breaking our CI
>> > more often than necessary? Can Puppet OpenStack project trust us with
>> > the same? So far, our collaboration track record has been pretty good
>> > bordering on examplary, and I think we should build on that and move
>> > forward instead of clinging to the old ways.
>> > The problem with moving only one piece at a time is that you end up so
>> > far behind that you're slowing everyone down. BKL and GIL are not the
>> > only way to deal with concurrency, we can do better than that.
>>
>> I have always thought that building software is about verification being
>> more important than 'trust'. There should not be any humanitarian stuff
>> invloved - we are not in a relationship with Puppet-OpenStack folks,
>> although I really 

[openstack-dev] [all][release] Release countdown for week R-4, Mar 7 - 11

2016-03-04 Thread Doug Hellmann
The Mitaka 3 milestone has passed, and we're on our way to preparing
release candidates. See Thierry's email [1] for details about the
artifacts produced for the milestone.

[1] 
http://lists.openstack.org/pipermail/openstack-announce/2016-March/001002.html

Focus
-

Project teams should be concentrating on finishing work for which
a feature freeze exception (FFE) was granted, and fixing release-critical
bugs before preparing the release candidates during week R-3. Any
FFE work not completed this week should be postponed to the next
cycle.

General Notes
-

The global requirements list is frozen. If you need to change a
dependency, for example to include a bug fix in one of our libraries
or an upstream library, please provide enough detail in the change
request to allow the requirements review team to evaluate the change.

User-facing strings are frozen to allow the translation team time
to finish their work.

Release Actions
---

The stable/mitaka branches for libraries will be created early
during R-4. After the branch is created, a patch will be submitted
to update the .gitreview file. If the project uses reno, another
patch will be submitted on master to add a mitaka-specific page to
the reno build.  Please watch for those patches and prioritize
reviewing them.

Important Dates
---

RC Target Week: R-3, Mar 14-18

Mitaka release schedule: http://releases.openstack.org/mitaka/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Steven Dake (stdake)


From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relations between bay node OS, Heat Template, Heat Template parameters, COE, and COE dependencies (cloud-init, docker, flannel, etcd, etc.) are multiplicative in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t necessarily have a default that locks out other choices. Magnum devs are the experts in how these systems operate, and as such need to take on the responsibility of implementing multi-OS support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)  There is no harm if you have 30 gates running the various combinations.  
Infrastructure can handle the load.  Whether devs have the cycles to make a 
fully bulletproof gate is the question I think you answered with the word 
intractable.

I can tell you that in Kolla we spend a lot of cycles just getting basic gating going for building containers and then deploying them.  We have even made
inroads into testing the deployment.  We do CentOS, Ubuntu, and soon Oracle 
Linux, for both source and binary and build and deploy.  Lots of gates and if 
they aren't green we know the patch is wrong.

Regards
-steve


Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interest above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu wrote:

I don’t think there is any consensus on supporting single distro. There are 
multiple disagreements on this thread, including several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at midcycle was about what we should gate
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you learn it inside and out.  This means you don't want to relearn a new distro, especially if you're an RPM user going to DEB or a DEB user going to RPM.  These are non-starter options for operators, and as a result, mean that distro choice
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros running in VMs if they match the host kernel, which makes total sense to me.  This means on an Ubuntu host, if I want support, I need to run Ubuntu VMs; on a RHEL host I want to run RHEL VMs, 

Re: [openstack-dev] [fuel] Fuel 9.0/Mitaka is now in Feature Freeze

2016-03-04 Thread Dmitry Borodaenko
Based on the list of approved exceptions, we're going to be merging some
feature changes until March 24. It doesn't make sense to have Soft Code
Freeze until a couple of weeks after that, so I propose to shift the
release dates by 3 weeks:

9.0 Soft Code Freeze: April 6
9.0 Release: April 20

Updated schedule:
https://wiki.openstack.org/wiki/Fuel/9.0_Release_Schedule

-- 
Dmitry Borodaenko


On Thu, Mar 03, 2016 at 04:31:56PM -0800, Dmitry Borodaenko wrote:
> Following feature freeze exceptions were granted, ordered by their merge
> deadline. See linked emails for additonal conditions attached to some of
> these exceptions.
> 
> UCA, 3/10:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088309.html
> 
> Multipath disks, 3/10:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088282.html
> 
> LCM readyness for all deployment tasks, 3/15:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088310.html
> 
> HugePages, 3/16:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088292.html
> 
> Numa, 3/16:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088292.html
> 
> SR-IOV, 3/16:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088307.html
> 
> Decouple Fuel and OpenStack tasks, 3/20:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088297.html
> 
> Remove conflicting openstack module parts, 3/20:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088298.html
> 
> DPDK, 3/24:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088291.html
> 
> Unlock "Settings" Tab, 3/24:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088305.html
> 
> ConfigDB, 3/24:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088279.html
> 
> Osnailyfacter refactoring for Puppet Master compatibility, 3/24:
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088308.html
> 
> All other feature changes will have to wait until Soft Code Freeze.
> 
> See IRC meeting minutes and log from #fuel-dev for more details:
> http://eavesdrop.openstack.org/meetings/fuel/2016/fuel.2016-03-03-16.00.html
> http://irclog.perlgeek.de/fuel-dev/2016-03-03#i_12133112
> 
> -- 
> Dmitry Borodaenko
> 
> 
> On Wed, Mar 02, 2016 at 10:31:09PM -0800, Dmitry Borodaenko wrote:
> > Feature Freeze [0] for Fuel 9.0/Mitaka is now in effect. From this
> > moment and until stable/mitaka branch is created at Soft Code Freeze,
> > please do not merge feature related changes that have not received a
> > feature freeze exception.
> > 
> > [0] https://wiki.openstack.org/wiki/FeatureFreeze
> > 
> > We will discuss all outstanding feature freeze exception requests in our
> > weekly IRC meeting tomorrow [1]. If that discussion takes longer than
> > the 1 hour time slot we have booked on #openstack-meeting-alt, we'll
> > move the discussion to #fuel-dev and finish it there.
> > 
> > [1] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
> > 
> > The list of exceptions requested so far is exceedingly long and it is
> > likely that most of these exceptions will be rejected. If you want your
> > exception to be approved, please have the following information ready
> > for the meeting:
> > 
> > 1) Link to design spec in fuel-specs, spec review status;
> > 
> > 2) Links to all outstanding commits for the feature;
> > 
> > 3) Dependencies between your change and other features: what will be
> > broken or useless if your change is not merged, what else has to be
> > merged for your change to work;
> > 
> > 4) Analysis of impact and risks mitigation plan: which components are
> > affected by the change, what can break, how can impact be verified, how
> > can the change be isolated;
> > 
> > 5) Status of test coverage: what can be tested, what's covered by
> > automated tests, what's been tested so far (with links to test results).
> > 
> > -- 
> > Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] customizing nova scheduler using metrics from ceilometer

2016-03-04 Thread Kapil
Hi

I would like to implement my own scheduling algorithm based on the samples collected in Ceilometer for a custom meter.
I see there is a MetricsFilter, but I think it uses nova-internal metrics and not Ceilometer metrics.

Please point me to some documentation on how I can install such a plug-in into the nova scheduler.
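For context, a minimal sketch of what such a filter could look like, under stated assumptions (the meter name, threshold and credentials are placeholders, and depending on the Nova release the second argument to host_passes is a RequestSpec object or a filter_properties dict):

from ceilometerclient import client as ceilo_client
from nova.scheduler import filters

CUSTOM_METER = 'my.custom.meter'   # placeholder meter name
THRESHOLD = 80.0                   # placeholder threshold

class CeilometerMeterFilter(filters.BaseHostFilter):
    """Reject hosts whose latest sample of CUSTOM_METER exceeds THRESHOLD."""

    def host_passes(self, host_state, spec_obj):
        # Placeholder credentials; a real filter would read these from config.
        cc = ceilo_client.get_client(
            2, os_username='admin', os_password='secret',
            os_tenant_name='admin',
            os_auth_url='http://controller:5000/v2.0')
        query = [{'field': 'resource_id', 'op': 'eq', 'value': host_state.host}]
        samples = cc.samples.list(meter_name=CUSTOM_METER, q=query, limit=1)
        if not samples:
            return True   # no data for this host: do not filter it out
        return samples[0].counter_volume < THRESHOLD

The filter class would then be made importable by the scheduler (e.g. via the scheduler_available_filters option) and enabled by adding it to scheduler_default_filters in nova.conf.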

Thanks
Kapil Agarwal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-04 Thread Eric LEMOINE
+1

Looking forward to more collaboration on Diagnostics and other stuff :)

On 4 March 2016 at 17:58, "Steven Dake (stdake)" wrote:
>
> Core Reviewers,
>
> Alicja has been instrumental in our work around jinja2 Dockerfile
creation, removing our symlink madness.  She has also been instrumental in
actually getting Diagnostics implemented in a sane fashion.  She has
also done a bunch of other work that folks in the community already know
about that I won't repeat here.
>
> I had always hoped she would start reviewing so we could invite her to
the core review team, and over the last several months she has reviewed
quite a bit!  Her 90 day stats[1] place her at #9 with a solid ratio of
72%.  Her 30 day stats[2] are even better and place her at #6 with an
improving ratio of 67%.  She also doesn't just rubber-stamp reviews or jump
into reviews at the end; she sticks with them from beginning to end and finds
real problems, not trivial things.  Finally Alicja is full time on Kolla as
funded by her employer so she will be around for the long haul and always
available.
>
> Please consider my proposal to be a +1 vote.
>
> To be approved for the core reviewer team, Alicja requires a majority
vote of 6 total votes with no veto within the one-week period beginning now
and ending Friday March 11th.  If you're on the fence, you can always
abstain.  If the vote is unanimous before the voting ends, I will make the
appropriate changes to Gerrit's ACLs.  If there is a veto vote, voting will
close prior to March 11th.
>
> Regards,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
> [2] http://stackalytics.com/report/contribution/kolla-group/30
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-04 Thread Matt Riedemann



On 3/3/2016 9:14 PM, Zhenyu Zheng wrote:

Hm, I found out the reason:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L1139-L1145
here we filtered out parameters like "deleted", and that's why the API
behavior is like above mentioned.

So should we simply add "deleted" to the tuple, or is a microversion needed?
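
(Illustrative only - a minimal sketch of the two options being discussed; the
option list below is a stand-in, not the actual contents of servers.py, and
the microversion number is made up:)

# Sketch, not the actual nova code.
# Option 1: add the parameter to the whitelist used for non-admin requests:
opt_list = ('reservation_id', 'name', 'status', 'image', 'flavor',
            'ip', 'changes-since', 'all_tenants',
            'deleted')  # hypothetical addition

# Option 2: only expose it behind a new microversion (number is hypothetical):
from nova.api.openstack import api_version_request
if api_version_request.is_supported(req, min_version='2.26'):
    opt_list += ('deleted',)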

On Fri, Mar 4, 2016 at 10:27 AM, Zhenyu Zheng > wrote:

Anyway, I updated the bug report:
https://bugs.launchpad.net/nova/+bug/1552071

and I will start to working on the bug first.

On Fri, Mar 4, 2016 at 9:29 AM, Zhenyu Zheng
> wrote:

Yes, so you are suggesting fixing the return data when a non-admin
user runs 'nova list --deleted' but leaving non-admin use of 'nova list
--status=deleted' as is. Or would it be better to also submit a
BP for the next cycle to add support for non-admin use of
'--status=deleted' with microversions? Because in my opinion, if
we allow non-admins to use "nova list --deleted", there will be no
reason for us to limit the use of "--status=deleted".

On Fri, Mar 4, 2016 at 12:37 AM, Matt Riedemann
>
wrote:



On 3/3/2016 10:02 AM, Matt Riedemann wrote:



On 3/3/2016 2:55 AM, Zhenyu Zheng wrote:

Yes, I agree with you guys, I'm also OK for
non-admin users to list
their own instances no matter what status they are.

My question is this:
I have done some tests, yet we have 2 different ways
to list deleted
instances (not counting using changes-since):

1.
"GET

/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?status=deleted
HTTP/1.1"
(nova list --status deleted in CLI)
2. REQ: curl -g -i -X GET

http://10.229.45.17:8774/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
(nova
list --deleted in CLI)

for admin user, we can all get deleted
instances(after the fix of Matt's
patch).

But for non-admin users, #1 is restricted here:

https://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n350

and it will return 403 error:
RESP BODY: {"forbidden": {"message": "Only
administrators may list
deleted instances", "code": 403}}


This is part of the API so if we were going to allow
non-admins to query
for deleted servers using status=deleted, it would have
to be a
microversion change. [1] I could also see that being
policy-driven.

It does seem odd and inconsistent though that non-admins
can't query
with status=deleted but they can query with deleted=True
in the query
options.


and for #2 it will strangely return servers that are
not in deleted
status:


This seems like a bug. I tried looking for something
obvious in the code
but I'm not seeing the issue, I'd suspect something down
in the DB API
code that's doing the filtering.


DEBUG (connectionpool:387) "GET

/v2.1/62bfb653eb0d4d5cabdf635dd8181313/servers/detail?deleted=True
HTTP/1.1" 200 3361
DEBUG (session:235) RESP: [200] Content-Length: 3361
X-Compute-Request-Id:
req-bd073750-982a-4ef7-864a-a5db03e59a68 Vary:
X-OpenStack-Nova-API-Version Connection: keep-alive
X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 03 Mar
2016 08:43:17 GMT
Content-Type: application/json
RESP BODY: {"servers": [{"status": "ACTIVE", "updated":
"2016-02-29T06:24:16Z", "hostId":
"56b12284bb4d1da6cbd066d15e17df252dac1f0dc6c81a74bf0634b7",
"addresses":
{"private": [{"OS-EXT-IPS-MAC:mac_addr":
"fa:16:3e:4f:1b:32", "version":
4, "addr": "10.0.0.14", "OS-EXT-IPS:type": "fixed"},
{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:1b:32",
"version": 6, "addr":
"fdb7:5d7b:6dcd:0:f816:3eff:fe4f:1b32",
   

Re: [openstack-dev] [fuel] Fuel 8.0 is released

2016-03-04 Thread Dmitry Borodaenko
No, we have no plans for CentOS 7.x on controllers in Fuel 9.0/Mitaka,
and our plans for Newton are still under discussion.

I strongly suspect that life cycle management related features will
consume most of the Fuel team's attention; we've found that problems with
the numerous LCM use cases cause a lot more pain to operators than the
limited choice of base operating system distributions.

Still, if you or someone else is willing to dedicate some time to adding
support for CentOS 7.x based controllers to Fuel, we would gladly
welcome it. Mind that it's not going to be easy; see the amount of work
that went into adding Ubuntu 12.04 support in Fuel 4.0 and later
updating that to Ubuntu 14.04 in Fuel 6.1.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 07:40:36PM +0800, Shake Chen wrote:
> Hi
> 
> any plan to support installing the OpenStack controller on CentOS 7.x?
> 
> On Wed, Mar 2, 2016 at 10:20 AM, Dmitry Borodaenko  > wrote:
> 
> > We are proud to announce the release of Fuel 8.0, deployment and
> > management tool for OpenStack.
> >
> > This release introduces support for OpenStack Liberty, adds a number of
> > exciting new features and enhancements, fixes over 1600 bugs, and
> > eliminates a great deal of technical debt.
> >
> > Some highlights:
> >
> > - Support for multi-rack deployments with L3 routing between racks that
> >   was first introduced in Fuel 6.0 was expanded with more automation and
> >   validation; some key limitations of the previous implementation, such
> >   as placing all VIPs, floating IPs, and controllers in a single rack,
> >   have been relaxed (although controller services failover across racks
> >   still needs extra work); node groups can now be managed via Fuel UI.
> >
> > - Fuel master node now runs on CentOS 7 with Python 2.7.
> >
> > - The bootstrap image used for node discovery and provisioning is now
> >   generated when the Fuel node is set up, and can be dynamically rebuilt to
> >   include additional drivers. This unifies the kernel version from
> >   discovery to a working install, removing a whole host of possible
> >   compatibility issues.
> >
> > - As another small step towards enabling life cycle management, a
> >   limited set of cloud configuration parameters can now be changed after
> >   deployment. This includes changing configuration of OpenStack services
> >   and installation of additional software via plugins.
> >
> > Learn more about Fuel:
> > https://wiki.openstack.org/wiki/Fuel
> >
> > How we work:
> > https://wiki.openstack.org/wiki/Fuel/How_to_contribute
> >
> > Specs for features in 8.0 and other Fuel releases:
> > http://specs.openstack.org/openstack/fuel-specs/
> >
> > ISO image:
> >
> > http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
> >
> > RPM packages:
> > http://mirror.fuel-infra.org/mos-repos/centos/mos8.0-centos7-fuel/
> >
> > Great work Fuel team, thanks to everyone who contributed to this awesome
> > release!
> >
> > --
> > Dmitry Borodaenko
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Shake Chen

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-04 Thread Paul Belanger
On Fri, Mar 04, 2016 at 09:23:19AM -0500, Emilien Macchi wrote:
> That's not the name of any Summit's talk, it's just an e-mail I wanted
> to write for a long time.
> 
> It is an attempt to expose facts or things I've heard a lot; and bring
> constructive thoughts about why it's challenging to contribute in
> TripleO project.
> 
> 
> 1/ "I don't review this patch, we don't have CI coverage."
> 
> One thing I've noticed in TripleO is that a very few people are involved
> in CI work.
> In my opinion, CI system is more critical than any feature in a product.
> Developing Software without tests is a bit like http://goo.gl/OlgFRc
> All people - specially core - in the project should be involved in CI
> work. If you are TripleO core and you don't contribute on CI, you might
> ask yourself why.
> 
As somebody who contributes to openstack-infra and knows most of the ins and outs
of OpenStack CI, I often wish the TripleO CI would be more in line with
openstack-infra.  Right now, TripleO CI is a black hole to me.  I understand
there are some reasons to have separate CI (e.g. baremetal provisioning) but it
would be nice to revisit the current setup and see if we can move more in line
with openstack-infra.

For the simple reason that having common tooling means I can contribute to TripleO
CI if needed.

> 
> 2/ "I don't review this patch, CI is broken."
> 
> Another thing I've noticed in TripleO is that when CI is broken, again,
> a very few people are actually working on fixing failures.
> My experience over the last years taught me to stop my daily work when
> CI is broken and fix it asap.
> 
See my above comment. I think this would go a long way toward helping the team.
> 
> 3/ "I don't review it, because this feature / code is not my area".
> 
> My first thought is "Aren't we supposed to be engineers and learn new areas?"
> My second thought is that I think we have a problem with TripleO Heat
> Templates.
> THT or TripleO Heat Templates' code is 80% Puppet / Hiera. If
> TripleO cores say "I'm not familiar with Puppet", we have a problem here,
> don't we?
> Maybe we should split this repository? Or revisit the list of people who
> can +2 patches on THT.
> 
> 
> 4/ Patches are stalled. Most of the time.
> 
> Over the last 12 months, I've pushed a lot of patches in TripleO and one
> thing I've noticed is that if I don't ping people, my patch got no
> review. And I have to rebase it, every week, because the interface
> changed. I got +2, cool ! Oh, merge conflict. Rebasing. Waiting for +2
> again... and so on..
> 
> I personally spent 20% of my time to review code, every day.
> I wrote a blog post about how I'm doing review, with Gertty:
> http://my1.fr/blog/reviewing-puppet-openstack-patches/
> I suggest TripleO folks to spend more time on reviews, for some reasons:
> 
> * decreasing frustration from contributors
> * accelerate development process
> * teach new contributors to work on TripleO, and eventually scale-up the
> core team. It's a time investment, but worth it.
> 
> In Puppet team, we have weekly triage sessions and it's pretty helpful.
> 
> 
> 5/ Most of the tests are run... manually.
> 
> How many times I've heard "I've tested this patch locally, and it does
> not work so -1".
> 
> The only test we do in current CI is a ping to an instance. Seriously?
> Most of OpenStack CIs (Fuel included), run Tempest, for testing APIs and
> real scenarios. And we run a ping.
> That's similar to 1/ but I wanted to raise it too.
> 
> 
> 
> If we don't change our way to work on TripleO, people will be more
> frustrated and reduce contributions at some point.
> I hope from here we can have a open and constructive discussion to try
> to improve the TripleO project.
> 
> Thank you for reading so far.
> -- 
> Emilien Macchi
> 
So for me, I'd love to help more but having to context shift into TripleO CI is
a deal breaker for me (and more of -infra if I was a betting man).  So, anything
I can do to help move things like base images or AFS mirrors into TripleO,
I am happy to help with.  However, having the TripleO team maintain CI themselves
doesn't seem to be the best case scenario.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Daneyon Hansen (danehans)

+1 on the points Adrian makes below.

On Mar 4, 2016, at 12:52 PM, Adrian Otto 
> wrote:

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relation between bay node OS, Heat Template, Heat Template parameters, COE, and 
COE dependencies (cloud-init, docker, flannel, etcd, etc.) are multiplicative 
in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.
3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu 
> wrote:

I don’t think there is any consensus on supporting single distro. There are 
multiple disagreements on this thread, including several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at midcycle was about what we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu vms, on a 
RHEL host I want to run RHEL vms, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested 
having Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion here to get a broader set of inputs.

Corey O'Brien
>From the midcycle, we decided we weren't going to continue to support 2 
>different versions of the k8s template. Instead, we were going to maintain the 
>Fedora Atomic version of k8s and 

Re: [openstack-dev] {openstack-dev][tc] Leadership training proposal/info

2016-03-04 Thread Colette Alexander
On Fri, Mar 4, 2016 at 11:22 AM, Sean Dague  wrote:

> On 03/04/2016 02:00 PM, Anne Gentle wrote:
> >
> >
> > On Tue, Mar 1, 2016 at 4:41 PM, Colette Alexander
> > > wrote:
>
> > tl;dr - If you're a member of the TC or the Board and would like to
> > attend Leadership Training at ZingTrain in Ann Arbor, please get
> > back to me ASAP with your contact info and preferred timing/dates
> > for this two-day training - also, please let me know whether April
> > 20/21 (or April 21/22) would specifically work for you or not.
> >
> >
> >
> > I'd like to, but curious about the possibility of deferring until after
> > the new TC is elected? I know the seats tend to stay static, but with
> > possibly half the TC changing it would offer a nice opportunity to get
> > to know any new members.
>
> The whole thing sounds great, however the timing of that week in April
> is problematic, for the same reason that back to back summit /
> conference weeks would be. 2 weeks away from family is really tough.
>
> Hopefully there will be some later opportunity.
>

I hope so, too!

Current status, btw is 5 TC members as a 'yes' for going on April 21/22  -
the Thurs/Fri before the summit. 4 additional people have mentioned they're
interested, but unable to make the scheduling work for this particular
time.

I also have heard from some core reviewers and PTLs who are eager to
participate, and who've asked to attend if it's possible.

My initial thoughts were that this might ideally be located as far away
from mid-cycles or summits as possible, to give those attending a lot of
time and space to soak it in. I backed away from that stance by
suggesting the week before the summit only because I realized that a) This
is meant to be more of a pilot/test balloon than a final version of some
kind of training, and b) getting everyone to be in one place physically is
going to be difficult, no matter how we approach scheduling. (For example,
Thierry is basically unable to attend May/June/July for anything at this
point, as his schedule is booked up)

I'm absolutely okay with pushing this off to another time, if the TC feels
very strongly about waiting until more TC members can attend (I expect,
even with the newly elected TC, we'll get no more than 8 TC members
attending for the April dates). However, if you all are comfortable, adding
in a few PTLs/core reviewers who are interested and able to attend might be
a good addition to the group in terms of evaluating the potential of the
training. The expectation would be that those who attend would be able to
report back to the rest of the TC, the Board, and the community at large,
and talk about what they thought worked/what didn't, and make
recommendations for future leadership work in the community based on that.

Let me know what you're thinking - I'd love to have the April date nailed
down by early next week (Mon/Tues) *before* the TC meeting so I can allow
people to book their travel.

Thanks everyone!

-colette
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.cache 1.5.0 release (mitaka)

2016-03-04 Thread no-reply
We are stoked to announce the release of:

oslo.cache 1.5.0: Cache storage for Openstack projects.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.4.0..1.5.0
--

efc1a96 Updated from global requirements
754551e Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9867a50..c005c80 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ Babel>=1.3 # BSD
-dogpile.cache>=0.5.4 # BSD
+dogpile.cache>=0.5.7 # BSD
@@ -8 +8 @@ six>=1.9.0 # MIT
-oslo.config>=3.4.0 # Apache-2.0
+oslo.config>=3.7.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Maksim Malchuk
Samer, please address my recommendations.


On Fri, Mar 4, 2016 at 7:49 PM, Samer Machara <
samer.mach...@telecom-sudparis.eu> wrote:

> Hi, Igor
>   Thanks for answering so quickly.
>
> I waited until the following message appeared:
> Installation timed out! (3000 seconds)
> No virtual machines were created.
>
> I updated to VirtualBox version 5.0, and now I get the following message:
>
> VBoxManage: error: Machine 'fuel-master' is not currently running
>  Waiting for product VM to download files. Please do NOT abort the
> script...
>
> I'm still waiting
>
> --
> *De: *"Maksim Malchuk" 
> *À: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Envoyé: *Vendredi 4 Mars 2016 15:19:54
> *Objet: *Re: [openstack-dev] [Fuel] [Openstack] Instalation
> Problem:VBoxManage: error: Guest not running [ubuntu14.04]
>
>
> Igor,
>
> Some information about my system:
> OS: ubuntu 14.04 LTS
> Memory: 3,8GiB
>
> Samer can't run many guests I think.
>
>
> On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat  wrote:
>
>> Samer, Maksim,
>> I'd rather say that script started fuel-master already (VM "fuel-master"
>> has been successfully started.), didn't find running guests, (VBoxManage:
>> error: Guest not running) but it can try to start them afterwards.
>>
>> Samer,
>> - how many VMs are there running besides fuel-master?
>> - is it still showing "Waiting for product VM to download files. Please
>> do NOT abort the script..." ?
>> - for how long did you wait since the message above?
>>
>>
>> Regards,
>> Igor Marnat
>>
>> On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk 
>> wrote:
>>
>>> Hi Samer,
>>>
>>> *VBoxManage: error: Guest not running*
>>>
>>> looks like a problem with VirtualBox itself or the settings for the
>>> 'fuel-master' VM; it can't boot it.
>>> Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start
>>> it manually - it should show you what exactly happens.
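
(For reference, the same check can be done from the shell with a few standard
VBoxManage commands; 'fuel-master' is the VM name used by the quickstart
scripts:)

# List which VMs VirtualBox actually considers running
VBoxManage list runningvms

# Show the current state of the fuel-master VM
VBoxManage showvminfo fuel-master | grep -i state

# Start it with a visible console window to watch it boot
VBoxManage startvm fuel-master --type gui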
>>>
>>>
>>> On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
>>> samer.mach...@telecom-sudparis.eu> wrote:
>>>
 Hello, everyone.
 I'm new with Fuel. I'm trying to follow the QuickStart Guide (
 https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
 but I have the following Error:


 *Waiting for VM "fuel-master" to power on...*
 *VM "fuel-master" has been successfully started.*
 *VBoxManage: error: Guest not running*
 *VBoxManage: error: Guest not running*
 ...
 *VBoxManage: error: Guest not running*
 *Waiting for product VM to download files. Please do NOT abort the
 script...*



 I hope you can help me.

 Thanks in advance


 Some information about my system:
 OS: ubuntu 14.04 LTS
 Memory: 3,8GiB
 Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4
 OS type: 64-bit
 Disk 140,2GB
 VirtualBox Version: 4.3.36_Ubuntu
 Checking for 'expect'... OK
 Checking for 'xxd'... OK
 Checking for "VBoxManage"... OK
 Checking for VirtualBox Extension Pack... OK
 Checking if SSH client installed... OK
 Checking if ipconfig or ifconfig installed... OK


 I modify the config.sh to adapt my hardware configuration
 ...
 # Master node settings
 if [ "$CONFIG_FOR" = "4GB" ]; then
 vm_master_memory_mb=1024
 vm_master_disk_mb=2
 ...
 # The number of nodes for installing OpenStack on
 elif [ "$CONFIG_FOR" = "4GB" ]; then
 cluster_size=3
 ...
 # Slave node settings. This section allows you to define CPU count for
 each slave node.
 elif [ "$CONFIG_FOR" = "4GB" ]; then
 vm_slave_cpu_default=1
 vm_slave_cpu[1]=1
 vm_slave_cpu[2]=1
 vm_slave_cpu[3]=1
 ...
 # This section allows you to define RAM size in MB for each slave node.
 elif [ "$CONFIG_FOR" = "4GB" ]; then
 vm_slave_memory_default=1024

 vm_slave_memory_mb[1]=512
 vm_slave_memory_mb[2]=512
 vm_slave_memory_mb[3]=512
 ...
 # Nodes with combined roles may require more disk space.
 if [ "$CONFIG_FOR" = "4GB" ]; then
 vm_slave_first_disk_mb=2
 vm_slave_second_disk_mb=2
 vm_slave_third_disk_mb=2
 ...

 I found someone that had a similar problem (
 https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html),
 he had a corrupted ISO file and solved the problem by downloading it again. I
 downloaded the .iso file from
 http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
 . I checked the size: 3.1 GB. However, I still have the problem.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [Nova] Live Migration post feature freeze update

2016-03-04 Thread Sean Dague
On 03/04/2016 02:54 PM, Matt Riedemann wrote:
> 
> 
> On 3/4/2016 10:34 AM, Murray, Paul (HP Cloud) wrote:
>> Hi All,
>>
>> Now that we have passed the feature freeze I thought it was worth giving
>> a quick update
>>
>> on where we are with the live migration priority.
>>
>> The following is a list of work items that have been merged in this
>> cycle ( for the live migration
>>
>> sub-team’s working page see
>> https://etherpad.openstack.org/p/mitaka-live-migration ). There
>>
>> is also a number of merged and on-going bug fixes that are not listed
>> here.
>>
>> _Progress reporting_
>>
>> Provide progress reporting information for on-going live migrations.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/live-migration-progress-report
>>
>>
>>   *
>> https://review.openstack.org/#/q/topic:bp/live-migration-progress-report
>>
>> __
>>
>> _Force complete_
>>
>> Force an on-going live migration to complete by pausing the virtual
>> machine for the
>>
>> duration of the migration.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/pause-vm-during-live-migration
>>
>>
>> ·https://review.openstack.org/#/q/topic:bp/pause-vm-during-live-migration
>>
>> __
>>
>> _Cancel_
>>
>> Cancel an on-going live migration.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/abort-live-migration
>>
>>   * https://review.openstack.org/#/q/topic:bp/abort-live-migration
>>
>> __
>>
>> _Block live migration with attached volumes_
>>
>> Enable live migration of VMs with a combination of local and shared
>> storage.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/block-live-migrate-with-attached-volumes
>>
>>
>>
>> ·https://review.openstack.org/#/c/227278
>>
>> __
>>
>> _Split networking_
>>
>> Send live migration traffic over a specified network.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/split-network-plane-for-live-migration
>>
>>
>>
>> ·https://review.openstack.org/#/q/topic:bp/split-network-plane-for-live-migration
>>
>>
>>
>> __
>>
>> _Make live migration api friendly_
>>
>> Remove –disk_over_commit flag and add –block_migration=auto (let nova
>> determine
>>
>> how to migrate the disks)
>>
>> ·https://blueprints.launchpad.net/nova/+spec/making-live-migration-api-friendly
>>
>>
>>
>>   *
>> https://review.openstack.org/#/q/topic:bp/making-live-migration-api-friendly
>>
>>
>> __
>>
>> _Use request spec_
>>
>> Add scheduling to live migration and evacuate using original request
>> spec (includes all
>>
>> original scheduling properties)
>>
>> ·https://blueprints.launchpad.net/nova/+spec/check-destination-on-migrations
>>
>>
>> ·https://review.openstack.org/#/c/277800/
>>
>> ·https://review.openstack.org/#/c/273104/
>>
>> _Deprecate migration flags_
>>
>> Replace the combination of migration configuration flags with a single
>> tunneled flag.
>>
>> ·(no blueprint)
>>
>> ·https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:deprecate-migration-flags-config
>>
>>
>> __
>>
>> _Objectify live migrate data_
>>
>> Use the migrate object instead of a dictionary in migration code.
>>
>> ·https://blueprints.launchpad.net/nova/+spec/objectify-live-migrate-data
>>
>> ·https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/objectify-live-migrate-data
>>
>>
>>
>> Next steps…
>>
>> Now we have passed the feature freeze we will be turning attention to
>> the following
>>
>> three tasks:
>>
>> 1.Documenting the new features
>>
>> 2.Expanding the CI coverage
>>
>> 3.Fixing bugs
>>
>> The CI job gate-tempest-dsvm-multinode-live-migration was added to the
>> experimental
>>
>> queue earlier in the cycle. We now need to add tests to this job to
>> increase coverage. If
>>
>> you have any suggestions for CI improvements please contribute them on
>> this page:
>>
>> https://etherpad.openstack.org/p/nova-live-migration-CI-ideas
>>
>> If you can contribute to live migration bug fixing you can look for
>> things to do here:
>>
>> https://bugs.launchpad.net/nova/+bugs?field.tag=live-migration
>>
>> For priority reviews see the live migration section here:
>>
>> https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
>>
>> The live migration sub-team has an IRC meeting on Tuesdays at 14:00
>> UTC on
>>
>> #openstack-meeting-3:
>>
>> https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration
>>
>> Best regards,
>>
>> Paul
>>
>> Paul Murray
>>
>> Technical Lead, HPE Cloud
>>
>> Hewlett Packard Enterprise
>>
>> +44 117 316 2527
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> The gate-tempest-dsvm-multinode-full job, which runs live migration tests
> on nova patches, has been non-voting for a while now. There are at least
> two known tracked bugs so we can keep an eye on failure rates.
> 
> 1. Volume based live migration 

Re: [openstack-dev] [ceilometer] Unable to get IPMI meter readings

2016-03-04 Thread Kapil
Yes, I had to look through the source code of the ipmi pollster class to
figure out why the error was being raised. Apparently, I don't have Intel
Node Manager installed, so the power plugin was not being loaded.
I had to write my own plugin to get that data using the ipmi-dcmi command,
which is not specific to Intel, I guess.
Is there any plan to add DCMI support to ceilometer?
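
(For context, a rough sketch of the kind of pollster described above, shelling
out to ipmitool's DCMI power reading; the class name, meter name and parsing
are assumptions, not existing Ceilometer code:)

import re
import subprocess

from oslo_utils import timeutils

from ceilometer.agent import plugin_base
from ceilometer import sample


class DCMIPowerPollster(plugin_base.PollsterBase):
    """Hypothetical pollster publishing a DCMI-based power meter."""

    @property
    def default_discovery(self):
        return 'local_node'

    def get_samples(self, manager, cache, resources):
        # 'ipmitool dcmi power reading' prints a line such as:
        #   Instantaneous power reading:               123 Watts
        out = subprocess.check_output(
            ['ipmitool', 'dcmi', 'power', 'reading'],
            universal_newlines=True)
        match = re.search(r'Instantaneous power reading:\s+(\d+)\s+Watts', out)
        if not match:
            return
        yield sample.Sample(
            name='hardware.dcmi.node.power',
            type=sample.TYPE_GAUGE,
            unit='W',
            volume=int(match.group(1)),
            user_id=None,
            project_id=None,
            resource_id='dcmi://localhost',
            timestamp=timeutils.utcnow().isoformat(),
            resource_metadata={})

Such a class would still need to be registered as a setuptools entry point
(e.g. in the ceilometer.poll.ipmi namespace) and referenced in pipeline.yaml
before the agent would load it.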
On Mar 3, 2016 7:47 PM, "Lu, Lianhao"  wrote:

> Hi Kapil,
>
> Currently, the ipmi pollsters can only get the ipmi data from the system bus
> due to security concerns. So you have to make sure the
> ceilometer-agent-ipmi is running on the same machine you want to get the
> hardware.ipmi.node.power metric from. Also you should make sure your
> machine has the Node Manager feature and that it is enabled in your BIOS settings,
> otherwise the hardware.ipmi.node.power pollster won't be loaded, because
> it checks whether your machine supports that at load time.
>
> -Lianhao Lu
>
> > -Original Message-
> > From: Kapil [mailto:kapil6...@gmail.com]
> > Sent: Friday, March 04, 2016 2:34 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [ceilometer] Unable to get IPMI meter
> > readings
> >
> > So, we upgraded our openstack install from Juno to Kilo 2015.1.1
> >
> > Not sure if this fixed some stuff, but I can now get samples for
> > hardware.ipmi.(fan|temperature). However, I want to get
> > hardware.ipmi.node.power samples and I get the following error in
> > the ceilometer log-
> >
> > ERROR ceilometer.agent.base [-] Skip loading extension for
> > hardware.ipmi.node.power
> >
> >
> > I edited pipeline.yaml as follows-
> > sources:
> > - name: meter_ipmi
> >   interval: 10
> >   resources:
> >   - "ipmi://"
> >   meters:
> >   - "hardware.ipmi.node.power"
> >   sinks:
> >   - ipmi_sink
> > sinks:
> >  - name: ipmi_sink
> >   transformers:
> >   publishers:
> >   - notifier://?per_meter_topic=1
> >
> >
> > I also checked "rabbitmqctl list_queues | grep metering" and all the
> > queues are empty.
> >
> >
> > Do I need to change anything in ceilometer.conf or on the controller
> > nodes ? Currently, I am working only with the compute node and only
> > running ceilometer queries from controller node.
> >
> >
> > Thanks
> >
> >
> > Regards,
> > Kapil Agarwal
> >
> > On Thu, Feb 25, 2016 at 12:20 PM, gordon chung  wrote:
> >
> >
> >   at quick glance, it seems like data is being generated[1]. if you
> > check
> >   your queues (rabbitmqctl list_queues for rabbit), do you see
> > any items
> >   sitting on notification.sample queue or metering.sample
> > queue? do you
> >   receive other meters fine? maybe you can query db directly to
> > verify
> >   it's not a permission issue.
> >
> >   [1] see: 2016-02-25 13:36:58.909 21226 DEBUG
> > ceilometer.pipeline [-]
> >   Pipeline meter_sink: Transform sample
> >  >   at 0x7f6b3630ae50> from 0 transformer _publish_samples
> >   /usr/lib/python2.7/dist-packages/ceilometer/pipeline.py:296
> >
> >   On 25/02/2016 8:43 AM, Kapil wrote:
> >   > Below is the output of ceilometer-agent-ipmi in debug mode
> >   >
> >   > http://paste.openstack.org/show/488180/
> >   > ᐧ
> >   >
> >   > Regards,
> >   > Kapil Agarwal
> >   >
> >   > On Wed, Feb 24, 2016 at 8:18 PM, Lu, Lianhao
> >  >
> >   > > wrote:
> >   >
> >   > On Feb 25, 2016 06:18, Kapil wrote:
> >   >  > Hi
> >   >  >
> >   >  >
> >   >  > I discussed this problem with gordc on the telemetry IRC
> > channel
> >   > but I
> >   >  > am still facing issues.
> >   >  >
> >   >  > I am running the ceilometer-agent-ipmi on the compute
> > nodes, I
> >   > changed
> >   >  > pipeline.yaml of the compute node to include the ipmi
> > meters and
> >   >  > resource as "ipmi://localhost".
> >   >  >
> >   >  > - name: meter_ipmi
> >   >  >   interval: 60
> >   >  >   resources:
> >   >  >   - ipmi://localhost meters: -
> "hardware.ipmi.node*"
> > -
> >   >  >   "hardware.ipmi*" - "hardware.degree*" sinks: -
> > meter_sink I
> >   >  > have ipmitool installed on the compute nodes and
> > restarted the
> >   >  > ceilometer services on compute and controller nodes.
> > Yet, I am not
> >   >  > receiving any ipmi meters when I run "ceilometer meter-
> > list". I also
> >   >  > tried passing the hypervisor IP address and the ipmi
> > address I get
> >   >  > when I run "ipmitool lan print" to resources but to no
> > avail.
> >   >  >
> >   >  >
> >   >  > Please help in this regard.
> >   >  >
> >   >  >
> >   >  > Thanks
> >  

[openstack-dev] OpenStack Developer Mailing List Digest Feb 27 – March 4

2016-03-04 Thread Mike Perez
HTML version: 
http://www.openstack.org/blog/2016/03/openstack-developer-mailing-list-digest-20160304/


SuccessBot Says
===
* Ttx: Mitaka-3 is done.
* Odyssey4me: OpenStack-Ansible Liberty 12.0.7 is released [1].
* johnthetubaguy: Nova is down to four pending blueprints for feature freeze
  now [2], sort of one day left. Better than it was this morning at least.
* Russellb: Got a set of OVS flows working in OVN that applies security group
  changes immediately to existing connections.
* Tell us yours via IRC with a message “#success [insert success]”.
* All: https://wiki.openstack.org/wiki/Successes


Cross-Project
=
* Quotas and Nested Quotas Working group
  - Meeting [3]
  - Spec [4]


Outreachy May-Aug 2016: Call For Funding and Mentors

* Outreachy [5] helps people from groups underrepresented in free and open
  source software get involved by matching interns with established mentors in
  the upstream community.
* We have 10 volunteer mentors for OpenStack this next cycle (May 23-August 23
  2016).
  - Learn more and apply to be a mentor [6]
* Potential sponsors have reached out, but we need more due to the increase in
  applicants.
  - Each intern is $6,500 for the three-month program.
  - The OpenStack Foundation has confirmed participation.
  - Learn more and apply to be a sponsor [7].
* Regardless, help spread the word!
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087459.html


Changing Microversion Headers

* The API working group would like to change the format of headers used for
  microversions to make them more future proof before too many projects are
  using them.
  - Proposed guideline [8].
* This came up in another guide for header non-proliferation [9].  After plenty
  of discussions, and with projects already deploying microversions (Nova,
  Ironic, Manila), the proposal is basically to change from:
  - X-OpenStack-Nova-API-Version: 2.11
  - OpenStack-Compute-API-Version: 2.11
* To:
  - OpenStack-API-Version: compute 2.11
* This allows us to use one header name for multiple services and avoids some
  of the problems described in the header non-proliferation guideline [9].
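* For illustration, a request using the proposed header might look like the
  following (endpoint and token are placeholders; the microversion is just an
  example):
  - curl -s -H "X-Auth-Token: $TOKEN" -H "OpenStack-API-Version: compute 2.11"
    http://controller:8774/v2.1/servers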
Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/087928.html


OpenStack Contributor Awards

* The Foundation would like to introduce some informal quirky awards to recognize
  the extremely valuable work that we all do to make OpenStack excel.
* With many different areas to celebrate, there are a few main chunks of the
  community that need a little love:
  - Those who might not be aware that they are valued, particularly new
contributors
  - Those who are the active glue that binds the community together
  - Those who share their hard-earned knowledge with others and mentor
  - Those who challenge assumptions, and make us think
* Nominate someone who you think is deserving of an awards [10]!
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/thread.html#87459


Status Of Python 3 In OpenStack Mitaka
==
* 13 services were ported to Python 3 during the Mitaka cycle: Cinder, Glance,
  Heat, Horizon, etc.
* 9 services still need to be ported
* Next Milestone: Functional and integration tests
* “Ported to Python 3” means that all unit tests pass on Python 3.4 which is
  verified by a voting gate job. It is not enough to run applications in
  production with Python 3. Integration and functional tests are not run on
  Python 3 yet.
* Read the full status post [11] by Victor Stinner.
* Join Freenode channel #openstack-python3 to discuss and help out!
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088389.html



[1] - https://launchpad.net/openstack-ansible/+milestone/12.0.7
[2] -  https://blueprints.launchpad.net/nova/mitaka
[3] - 
http://eavesdrop.openstack.org/#Cross-project_Quotas_and_Nested_Quotas_Working_Group_Virtual_Standup
[4] - https://review.openstack.org/284454
[5] - https://www.gnome.org/outreachy/
[6] - https://wiki.openstack.org/wiki/Outreachy/Mentors
[7] - https://wiki.gnome.org/Outreachy/Admin/InfoForOrgs#Action
[8] - https://review.openstack.org/#/c/243414/
[9] - https://review.openstack.org/#/c/280381/
[10] - 
https://docs.google.com/forms/d/1HP1jAobT-s4hlqZpmxoGIGTxZmY6lCWolS3zOq8miDk/viewform
[11] - http://blogs.rdoproject.org/7894/status-of-python-3-in-openstack-mitaka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Live Migration post feature freeze update

2016-03-04 Thread Matt Riedemann



On 3/4/2016 10:34 AM, Murray, Paul (HP Cloud) wrote:

Hi All,

Now that we have passed the feature freeze I thought it was worth giving
a quick update

on where we are with the live migration priority.

The following is a list of work items that have been merged in this
cycle ( for the live migration

sub-team’s working page see
https://etherpad.openstack.org/p/mitaka-live-migration ). There

is also a number of merged and on-going bug fixes that are not listed here.

_Progress reporting_

Provide progress reporting information for on-going live migrations.

·https://blueprints.launchpad.net/nova/+spec/live-migration-progress-report

  * https://review.openstack.org/#/q/topic:bp/live-migration-progress-report

__

_Force complete_

Force an on-going live migration to complete by pausing the virtual
machine for the

duration of the migration.

·https://blueprints.launchpad.net/nova/+spec/pause-vm-during-live-migration

·https://review.openstack.org/#/q/topic:bp/pause-vm-during-live-migration

__

_Cancel_

Cancel an on-going live migration.

·https://blueprints.launchpad.net/nova/+spec/abort-live-migration

  * https://review.openstack.org/#/q/topic:bp/abort-live-migration

__

_Block live migration with attached volumes_

Enable live migration of VMs with a combination of local and shared storage.

·https://blueprints.launchpad.net/nova/+spec/block-live-migrate-with-attached-volumes


·https://review.openstack.org/#/c/227278

__

_Split networking_

Send live migration traffic over a specified network.

·https://blueprints.launchpad.net/nova/+spec/split-network-plane-for-live-migration


·https://review.openstack.org/#/q/topic:bp/split-network-plane-for-live-migration


__

_Make live migration api friendly_

Remove –disk_over_commit flag and add –block_migration=auto (let nova
determine

how to migrate the disks)

·https://blueprints.launchpad.net/nova/+spec/making-live-migration-api-friendly


  * https://review.openstack.org/#/q/topic:bp/making-live-migration-api-friendly

__

_Use request spec_

Add scheduling to live migration and evacuate using original request
spec (includes all

original scheduling properties)

·https://blueprints.launchpad.net/nova/+spec/check-destination-on-migrations

·https://review.openstack.org/#/c/277800/

·https://review.openstack.org/#/c/273104/

_Deprecate migration flags_

Replace the combination of migration configuration flags with a single
tunneled flag.

·(no blueprint)

·https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:deprecate-migration-flags-config

__

_Objectify live migrate data_

Use the migrate object instead of a dictionary in migration code.

·https://blueprints.launchpad.net/nova/+spec/objectify-live-migrate-data

·https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/objectify-live-migrate-data


Next steps…

Now we have passed the feature freeze we will be turning attention to
the following

three tasks:

1.Documenting the new features

2.Expanding the CI coverage

3.Fixing bugs

The CI job gate-tempest-dsvm-multinode-live-migration was added to the
experimental

queue earlier in the cycle. We now need to add tests to this job to
increase coverage. If

you have any suggestions for CI improvements please contribute them on
this page:

https://etherpad.openstack.org/p/nova-live-migration-CI-ideas

If you can contribute to live migration bug fixing you can look for
things to do here:

https://bugs.launchpad.net/nova/+bugs?field.tag=live-migration

For priority reviews see the live migration section here:

https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

The live migration sub-team has an IRC meeting on Tuesdays at 14:00 UTC on

#openstack-meeting-3:

https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Best regards,

Paul

Paul Murray

Technical Lead, HPE Cloud

Hewlett Packard Enterprise

+44 117 316 2527



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The gate-tempest-dsvm-multinode-full job, which runs live migration tests 
on nova patches, has been non-voting for a while now. There are at least 
two known tracked bugs so we can keep an eye on failure rates.


1. Volume based live migration aborted unexpectedly:

http://status.openstack.org/elastic-recheck/index.html#1524898

2. Libvirt live block migration migration stalls

http://status.openstack.org/elastic-recheck/index.html#1539271

Those are actually the top two failures in the check queue.

The job is bouncing between 25% and ~80% failure rates:

http://tinyurl.com/gvt5h56

At one point that job was relatively stable; it had to have been, because 
it was voting.


So I'm not sure what's going on, but those should probably be the top 
priority bugs for live migration. The problem, 

Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Nikhil Komawar
I meant exactly that: if you're not using a venv for glance testing and
want to use the shipped testing tools that have the wrappings of run_tests or
even tox, then asking packagers to write new scripts for doing the same
thing seems odd.

My question still stands: who is still using run_tests for this
purpose? I am aware that some ops do prefer it.

On 3/4/16 2:20 PM, Flavio Percoco wrote:
> On 04/03/16 14:12 -0500, Nikhil Komawar wrote:
>> Surely you can directly use the standard libraries to test systemwide
>> but I am more curious to know if there are some who are still using
>> run_tests wrappings that exist to ease the pain a bit.
>
> Oh, sorry if I came off wrong. What I meant is that if you have glance
> installed
> systemwide, you'd be better off running the command found in tox.ini
> rather than
> running `run_tests.sh`. One reason is that one point in favor to `tox`
> and even
> `run_tests.sh` itself is isolating test environments. Other than that,
> I don't
> think they provide much other benefits over just running testr
> directly (which I
> sometimes do).
>
> Hope that's clearer,
> Flavio
>
>> On 3/4/16 12:41 PM, Flavio Percoco wrote:
>>> On 04/03/16 11:59 -0500, Nikhil Komawar wrote:
 I think the hard question to me here is:

 Do people care about testing code on system installs vs. virtual env?
 run_tests does that and for some cases when you want to be extra sure
 about your CICD nodes, packaging and upgrades, the problem is solved.

 Are packagers using tox to this purpose?
>>>
>>> TBH, if you're testing things without venvs and systemwide, I think
>>> it'd be far
>>> easier to just call nosetests/testr directly in your system than
>>> calling the
>>> run_tests script.
>>>
>>> Some packages don't even ship tests.
>>>
>>> Cheers,
>>> Flavio
>>>
 On 3/4/16 11:16 AM, Steve Martinelli wrote:
>
> The keystone team did the same during Liberty while we were moving
> towards using oslo.* projects instead of oslo-incubator [0]. We also
> noticed that they were rarely used, and we did not go through a
> deprecation process since these are developer tools. We're still
> finding a few spots in our docs that need updating, but overall it
> was
> an easy transition.
>
> [0]
> https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870
>
>
>
> stevemar
>
>
> From: Flavio Percoco 
> To: openstack-dev@lists.openstack.org
> Cc: openstack-operat...@lists.openstack.org
> Date: 2016/03/04 06:51 AM
> Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
> `tools/*`
>
> 
>
>
>
>
>
> Hey Folks,
>
> I'm looking at doing some cleanups in our repo and I would like to
> start by
> deprecating the `run_tests` script and the contents in the `tools/`
> dir.
>
> As far as I can tell, no one is using this code - we're not even
> using
> it in the
> gate - as it was broken until recently, I believe. The recommended
> way
> to run
> tests is using `tox` and I believe having this script in the code
> base
> misleads
> new contributors and other users.
>
> So, before we do this. I wanted to get feedback from a broader
> audience and give
> a heads up to folks that might be using this code.
>
> Any objections? Something I'm missing?
>
> Flavio
>
> -- 
> @flaper87
> Flavio Percoco
> [attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
>
> __
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 -- 

 Thanks,
 Nikhil

>>>
>>
>> -- 
>>
>> Thanks,
>> Nikhil
>>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Adrian Otto
Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for, and the implications that has on our pace 
of feature development. My guidance here is that we resist the temptation to 
create a system with more permutations than we can possibly support. The 
relation between bay node OS, Heat Template, Heat Template parameters, COE, and 
COE dependencies (cloud-init, docker, flannel, etcd, etc.) are multiplicative 
in nature. From the mid cycle, it was clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.
3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu 
> wrote:

I don’t think there is any consensus on supporting single distro. There are 
multiple disagreements on this thread, including several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at midcycle was about what we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu vms, on a 
RHEL host I want to run RHEL vms, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested 
having Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think 

Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Flavio Percoco

On 04/03/16 14:12 -0500, Nikhil Komawar wrote:

Surely you can directly use the standard libraries to test systemwide
but I am more curious to know if there are some who are still using
run_tests wrappings that exist to ease the pain a bit.


Oh, sorry if I came off wrong. What I meant is that if you have glance installed
systemwide, you'd be better off running the command found in tox.ini rather than
running `run_tests.sh`. One reason is that one point in favor of `tox` and even
`run_tests.sh` itself is isolating test environments. Other than that, I don't
think they provide much other benefits over just running testr directly (which I
sometimes do).
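
(For reference, a minimal example of that direct invocation - assuming
testrepository and the test dependencies are already installed system-wide:)

testr init                # one-time, creates the .testrepository directory
testr run --parallel      # roughly what tox -e py27 runs inside its venv

# or, with environment isolation, the usual route:
tox -e py27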

Hope that's clearer,
Flavio


On 3/4/16 12:41 PM, Flavio Percoco wrote:

On 04/03/16 11:59 -0500, Nikhil Komawar wrote:

I think the hard question to me here is:

Do people care about testing code on system installs vs. virtual env?
run_tests does that and for some cases when you want to be extra sure
about your CICD nodes, packaging and upgrades, the problem is solved.

Are packagers using tox to this purpose?


TBH, if you're testing things without venvs and systemwide, I think
it'd be far
easier to just call nosetests/testr directly in your system than
calling the
run_tests script.

Some packages don't even ship tests.

Cheers,
Flavio


On 3/4/16 11:16 AM, Steve Martinelli wrote:


The keystone team did the same during Liberty while we were moving
towards using oslo.* projects instead of oslo-incubator [0]. We also
noticed that they were rarely used, and we did not go through a
deprecation process since these are developer tools. We're still
finding a few spots in our docs that need updating, but overall it was
an easy transition.

[0]
https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870


stevemar


From: Flavio Percoco 
To: openstack-dev@lists.openstack.org
Cc: openstack-operat...@lists.openstack.org
Date: 2016/03/04 06:51 AM
Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
`tools/*`






Hey Folks,

I'm looking at doing some cleanups in our repo and I would like to
start by
deprecating the `run_tests` script and the contents in the `tools/`
dir.

As far as I can tell, no one is using this code - we're not even using
it in the
gate - as it was broken until recently, I believe. The recommended way
to run
tests is using `tox` and I believe having this script in the code base
misleads
new contributors and other users.

So, before we do this. I wanted to get feedback from a broader
audience and give
a heads up to folks that might be using this code.

Any objections? Something I'm missing?

Flavio

--
@flaper87
Flavio Percoco
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Thanks,
Nikhil





--

Thanks,
Nikhil



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] {openstack-dev][tc] Leadership training proposal/info

2016-03-04 Thread Sean Dague
On 03/04/2016 02:00 PM, Anne Gentle wrote:
> 
> 
> On Tue, Mar 1, 2016 at 4:41 PM, Colette Alexander
> > wrote:
> 
> Hello Stackers,
> 
> This is the continuation of an ongoing conversation within the TC
> about encouraging the growth of leadership skills within the
> community that began just after the Mitaka summit last year[1].
> After being asked by lifeless to do a bit of research and discussing
> needs/wants re: leadership directly with TC members, I made some
> suggestions on an etherpad[2], and was then asked to go find out
> about funding possibilities.
> 
> tl;dr - If you're a member of the TC or the Board and would like to
> attend Leadership Training at ZingTrain in Ann Arbor, please get
> back to me ASAP with your contact info and preferred timing/dates
> for this two-day training - also, please let me know whether April
> 20/21 (or April 21/22) would specifically work for you or not.
> 
> 
> 
> I'd like to, but curious about the possibility of deferring until after
> the new TC is elected? I know the seats tend to stay static, but with
> possibly half the TC changing it would offer a nice opportunity to get
> to know any new members. 

The whole thing sounds great; however, the timing of that week in April
is problematic, for the same reason that back to back summit /
conference weeks would be. 2 weeks away from family is really tough.

Hopefully there will be some later opportunity.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Newton PTL and CL elections

2016-03-04 Thread Dmitry Borodaenko
On Thu, Mar 03, 2016 at 10:34:30AM +0100, Thierry Carrez wrote:
> Dmitry Borodaenko wrote:
> >Team,
> >
> >We're only two weeks away from the beginning of the Newton elections
> >period. Based on the Fuel 9.0/Mitaka release schedule [0], I propose the
> >following dates for PTL and CL self-nomination and election periods:
> >
> >PTL self-nomination: March 13-20
> >PTL election: March 21-27
> >CL self-nomination: March 28-April 3
> >CL election: April 4-10
> 
> Note that since Fuel is now an official project, the Fuel PTL election will
> be organized by the election officials (under the Technical Committee
> oversight).
> 
> Tentative dates have been posted here:
> http://git.openstack.org/cgit/openstack/election/tree/events.yaml

My apologies, I should have done my homework better... For reference,
here's the wiki page for PTL elections that I should have read:
https://wiki.openstack.org/wiki/PTL_Elections_March_2016

Updated dates based on openstack/election events.yaml:

PTL self-nomination: March 11-17
PTL election: March 18-24
CL self-nomination: March 25-31
CL election: April 1-7

Can we fit the component leads election into the same process (i.e.
component lead candidates would self-nominate by submitting
candidates///.txt files to
openstack/election)?

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] constrained tox targets

2016-03-04 Thread Armando M.
On 4 March 2016 at 11:12, Ihar Hrachyshka  wrote:

> Armando M.  wrote:
>
>
>>
>> On 4 March 2016 at 08:50, Ihar Hrachyshka  wrote:
>> Hi all,
>>
>> currently we have both py27 and py27-constraints tox targets in neutron
>> repos. For some repos (neutron) they are even executed in both master and
>> stable/liberty gates. TC lately decided that instead of having separate
>> targets for constrained requirements, we want to have constraints applied
>> to default targets (py27, docs, …), unconditionally; we also want to use
>> those ‘default’ targets in gate; and we also want to eventually get rid of
>> those -constraints tox targets.
>>
>> To achieve that, I sent a set of patches spanning neutron, neutron-*aas,
>> and project-config repos:
>>
>>
>> https://review.openstack.org/#/q/status:open+branch:master+topic:neutron-constraints
>>
>> For the very least, we want to get our mitaka gate switched to ‘default’
>> (but constrained) tox targets before final release, so that we have a solid
>> foundation in the stable/mitaka branch that would reflect TC desires.
>>
>> Those important patches are (in order of merge):
>>
>> for mitaka:
>> - https://review.openstack.org/286778: makes ‘default’ tox targets
>> constrained;
>> - https://review.openstack.org/286777: switches mitaka gate to using
>> ‘default’ targets;
>> - https://review.openstack.org/288516: cleans up -constraints targets;
>>
>> for liberty:
>> - [not proposed yet; waiting for 286778]: makes ‘default’ tox targets
>> constrained;
>> - https://review.openstack.org/288506: switches branch back to ‘default’
>> targets;
>> * we probably don’t want to drop old targets since some external users
>> may already rely on them
>>
>> There are also patches to constrain remaining gate jobs (releasenotes,
>> cover) too:
>> - https://review.openstack.org/288517: neutron
>> - https://review.openstack.org/288472: lbaas
>> - https://review.openstack.org/288470: fwaas
>> - https://review.openstack.org/288443: vpnaas
>>
>> ...though those depend on some project-config work:
>> - https://review.openstack.org/288451: releasenotes
>> - https://review.openstack.org/288455: coverage
>> * note those also depend on another patch for zuul-cloner
>>
>> Thanks for attention and reviews,
>>
>> This is mainly a question of timing: when shall we pull the trigger on
>> all of these? I am happy to do it today, but it's already Friday afternoon
>> in some parts of the world and changes span multiple projects…
>>
>
> I would not advise pulling it till Monday. I will revise the patches,
> including gate votes, early on Monday; then once everyone from US timezones
> is online, we may push first pieces in.
>
> In the meantime, it would be great to see it validated by reviewers
> nevertheless.


Ack.


>
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Nikhil Komawar
Surely you can directly use the standard libraries to test systemwide,
but I am more curious to know whether there are some who are still using
the run_tests wrappings that exist to ease the pain a bit.

On 3/4/16 12:41 PM, Flavio Percoco wrote:
> On 04/03/16 11:59 -0500, Nikhil Komawar wrote:
>> I think the hard question to me here is:
>>
>> Do people care about testing code on system installs vs. virtual env?
>> run_tests does that and for some cases when you want to be extra sure
>> about your CICD nodes, packaging and upgrades, the problem is solved.
>>
>> Are packagers using tox to this purpose?
>
> TBH, if you're testing things without venvs and systemwide, I think
> it'd be far
> easier to just call nosetests/testr directly in your system than
> calling the
> run_tests script.
>
> Some packages don't even ship tests.
>
> Cheers,
> Flavio
>
>> On 3/4/16 11:16 AM, Steve Martinelli wrote:
>>>
>>> The keystone team did the same during Liberty while we were moving
>>> towards using oslo.* projects instead of oslo-incubator [0]. We also
>>> noticed that they were rarely used, and we did not go through a
>>> deprecation process since these are developer tools. We're still
>>> finding a few spots in our docs that need updating, but overall it was
>>> an easy transition.
>>>
>>> [0]
>>> https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870
>>>
>>>
>>> stevemar
>>>
>>>
>>> From: Flavio Percoco 
>>> To: openstack-dev@lists.openstack.org
>>> Cc: openstack-operat...@lists.openstack.org
>>> Date: 2016/03/04 06:51 AM
>>> Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
>>> `tools/*`
>>>
>>> 
>>>
>>>
>>>
>>>
>>> Hey Folks,
>>>
>>> I'm looking at doing some cleanups in our repo and I would like to
>>> start by
>>> deprecating the `run_tests` script and the contents in the `tools/`
>>> dir.
>>>
>>> As far as I can tell, no one is using this code - we're not even using
>>> it in the
>>> gate - as it was broken until recently, I believe. The recommended way
>>> to run
>>> tests is using `tox` and I believe having this script in the code base
>>> misleads
>>> new contributors and other users.
>>>
>>> So, before we do this. I wanted to get feedback from a broader
>>> audience and give
>>> a heads up to folks that might be using this code.
>>>
>>> Any objections? Something I'm missing?
>>>
>>> Flavio
>>>
>>> -- 
>>> @flaper87
>>> Flavio Percoco
>>> ___
>>> OpenStack-operators mailing list
>>> openstack-operat...@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> -- 
>>
>> Thanks,
>> Nikhil
>>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] constrained tox targets

2016-03-04 Thread Ihar Hrachyshka

Armando M.  wrote:




On 4 March 2016 at 08:50, Ihar Hrachyshka  wrote:
Hi all,

currently we have both py27 and py27-constraints tox targets in neutron  
repos. For some repos (neutron) they are even executed in both master and  
stable/liberty gates. TC lately decided that instead of having separate  
targets for constrained requirements, we want to have constraints applied  
to default targets (py27, docs, …), unconditionally; we also want to use  
those ‘default’ targets in gate; and we also want to eventually get rid  
of those -constraints tox targets.
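
For anyone not following the constraints work closely: "constrained" in
practice just means pip gets handed the global upper-constraints file during
the tox install step. A rough sketch of what that amounts to (the URL is
illustrative; the real wiring lives in tox.ini's install_command):

    pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r requirements.txt
    tox -e py27    # once the default target is constrained, no -constraints suffix is needed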


To achieve that, I sent a set of patches spanning neutron, neutron-*aas,  
and project-config repos:


https://review.openstack.org/#/q/status:open+branch:master+topic:neutron-constraints

For the very least, we want to get our mitaka gate switched to ‘default’  
(but constrained) tox targets before final release, so that we have a  
solid foundation in the stable/mitaka branch that would reflect TC  
desires.


Those important patches are (in order of merge):

for mitaka:
- https://review.openstack.org/286778: makes ‘default’ tox targets  
constrained;
- https://review.openstack.org/286777: switches mitaka gate to using  
‘default’ targets;

- https://review.openstack.org/288516: cleans up -constraints targets;

for liberty:
- [not proposed yet; waiting for 286778]: makes ‘default’ tox targets  
constrained;
- https://review.openstack.org/288506: switches branch back to ‘default’  
targets;
* we probably don’t want to drop old targets since some external users  
may already rely on them


There are also patches to constrain remaining gate jobs (releasenotes,  
cover) too:

- https://review.openstack.org/288517: neutron
- https://review.openstack.org/288472: lbaas
- https://review.openstack.org/288470: fwaas
- https://review.openstack.org/288443: vpnaas

...though those depend on some project-config work:
- https://review.openstack.org/288451: releasenotes
- https://review.openstack.org/288455: coverage
* note those also depend on another patch for zuul-cloner

Thanks for attention and reviews,

This is mainly a question of timing: when shall we pull the trigger on  
all of these? I am happy to do it today, but it's already Friday  
afternoon in some parts of the world and changes span multiple projects…


I would not advise pulling it till Monday. I will revise the patches,
including gate votes, early on Monday; then once everyone from US timezones  
is online, we may push first pieces in.


In the meantime, it would be great to see it validated by reviewers  
nevertheless.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] {openstack-dev][tc] Leadership training proposal/info

2016-03-04 Thread Anne Gentle
On Tue, Mar 1, 2016 at 4:41 PM, Colette Alexander <
colettealexan...@gmail.com> wrote:

> Hello Stackers,
>
> This is the continuation of an ongoing conversation within the TC about
> encouraging the growth of leadership skills within the community that began
> just after the Mitaka summit last year[1]. After being asked by lifeless to
> do a bit of research and discussing needs/wants re: leadership directly
> with TC members, I made some suggestions on an etherpad[2], and was then
> asked to go find out about funding possibilities.
>
> tl;dr - If you're a member of the TC or the Board and would like to attend
> Leadership Training at ZingTrain in Ann Arbor, please get back to me ASAP
> with your contact info and preferred timing/dates for this two-day training
> - also, please let me know whether April 20/21 (or April 21/22) would
> specifically work for you or not.
>


I'd like to, but curious about the possibility of deferring until after the
new TC is elected? I know the seats tend to stay static, but with possibly
half the TC changing it would offer a nice opportunity to get to know any
new members.

Thanks for working on this effort!
Anne


>
>
> Longer version:
> Mark Collier and the Foundation have graciously offered to cover the costs
> of training for a two-day session at ZingTrain in Ann Arbor  - this
> includes the cost of breakfast/lunch for two days as well as two full
> working-days of seminars. Attendees would be responsible for their own
> travel, lodging, and incidental expenses beyond that (hopefully picked up
> by your employer who sees this as an amazing opportunity for your career
> growth). Currently, I've heard the week before the Austin Summit suggested
> by more than one person coming in from out of the country as preferred
> dates, but we've not committed to anything yet, so here might be a great
> time and place to hash that out among interested parties. ZingTrain has
> suggested a cap of ~20 people on the course, but that's not totally firm,
> so it's possible to add more if more are interested, or we could hold two
> separate two-day sessions to accommodate overflow. My ideal mix of people
> include those who are really excited by the idea of training, and those who
> are seriously skeptical of any leadership training at all. In fact, if
> you've been to leadership training before and have found it to be terrible
> and awful, I think your input would be most valuable on this one. My
> summary of reasoning behind the 'why' of ZingTrain can be found on the
> etherpad I already mentioned[2]. Also, did I mention, the food will be
> amazing? It will be[3].
>
> Some complications: the week before the Newton Summit there will be a set
> of incoming TC members (elected in early April) and likely some TC members
> who will be outgoing. Some possible solutions: we can certainly push back
> training til post-Summit when we can have a set of the 'new' TC, or we can
> sign up anyone interested currently, and allow a limited number of newly
> elected folks who are interested sign on as the election is finished. I
> certainly welcome any thoughts on that.
>
> A note about starting out with the TC/Board for training: this initiative
> began as a set of conversations about leadership as a whole in the entire
> OpenStack community, so the intent with limiting to TC/Board here is not
> exclusion, merely finding the right place to start. My proposal with the TC
> begins with them, because the leadership conversation within OpenStack
> began with them, and the goals of training are really to help them talk
> about defining the issue/problem collectively, within a space designed to
> help people do that.
>
> If you have any questions at all, please feel free to ping me on IRC
> (gothicmindfood) or ask them here.
>
> Thanks everyone!
>
> -colette
>
>
> [1]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-11-03-20.07.log.html
> [2] https://etherpad.openstack.org/p/Leadershiptraining
> [3] http://www.zingermansdeli.com/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-03-04 Thread Michael Krotscheck
All patches have been proposed, with the exception of ironic, which
implemented its own config generator which does not support defaults. The
patch list is available here:

https://review.openstack.org/#/q/status:open+branch:master+(topic:bug/1551836)
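
For projects that do use oslo.config's generator, one piece of the work is
regenerating sample configs so that the oslo.middleware.cors options (and any
project-specific defaults) show up at all; a rough sketch of the invocation
(namespace names are illustrative, check each project's config-generator file):

    oslo-config-generator --namespace keystone \
        --namespace oslo.middleware.cors \
        --output-file etc/keystone.conf.sample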


On Tue, Mar 1, 2016 at 8:59 AM Michael Krotscheck 
wrote:

> The keystone patch has landed. I've gone ahead and filed the appropriate
> launchpad bug to address this issue:
>
> https://bugs.launchpad.net/oslo.config/+bug/1551836
>
> Note: Using latent configuration imposes a forward-going maintenance
> burden on all projects impacted, if released in mitaka. As such I recommend
> that all PTL's mark this as a release-blocking bug, in order to buy us more
> time to get these patches landed. I am working as best I can, however I
> cannot guarantee that I'll be able to land all these patches in time.
>
> Additionally, I will not be able to address issues caused by projects that
> have not adopted oslo.config's generate-config. I hope those teams will be
> able to find their own paths forward.
>
> Who is willing to help?
>
> Michael
>
> On Fri, Feb 26, 2016 at 6:09 AM Michael Krotscheck 
> wrote:
>
>> Alright, I have a first sample patch up for what was discussed in this
>> thread here:
>>
>> (Keystone) https://review.openstack.org/#/c/285308/
>>
>> The noted TODO on that is the cors middleware should (eventually) provide
>> its own set_defaults method, so that CORS_OPTS isn't exposed publicly.
>> However, dhellmann doesn't believe we have time for that in Mitaka, since
>> oslo_middleware is already frozen for the release. I'll mark it as a todo
>> item for myself, as the next cycle will contain a good amount of additional
>> work on this portion of openstack.
>>
>> Given the time constraints, I'll wait until Tuesday for everyone to weigh
>> in on the implementation. After that I will start converting the other
>> projects over as best I can and as I have time. Who is willing to help?
>>
>> Michael
>>
>> On Thu, Feb 25, 2016 at 9:05 AM Michael Krotscheck 
>> wrote:
>>
>>> On Thu, Feb 18, 2016 at 10:18 AM Morgan Fainberg <
>>> morgan.fainb...@gmail.com> wrote:
>>>

 I am against "option 1". This could be a case where we classify it as a
 release blocking bug for Mitaka final (is that reasonable to have m3 with
 the current scenario and final to be fixed?), which opens the timeline a
 bit rather than hard against feature-freeze.

>>>
>>> This sounds like a really good way to get us more time, so I'm in favor
>>> of this. However, even with the additional time I will not be able to land
>>> all these patches on my own.
>>>
>>> Who is willing to help?
>>>
>>> Michael
>>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packstack] Update packstack core list

2016-03-04 Thread Flavio Percoco

On 04/03/16 10:09 -0500, Emilien Macchi wrote:

Hi,

[post originally sent on RDO-list but I've been told I should use this
channel]

I've looked at the packstack core list [1] and I suggest we revisit it to keep
only active contributors [2] in the core members list.

The list seems super big compared to who is actually active on the
project; in a meritocracy world it would make sense to revisit that list.


++

I agree with Emilien.

It's been a long time since I last contributed to packstack, and I think it
makes sense for me to drop off the team. I'm not focused on packstack
anymore and I'd be of more help as a user than as a reviewer.

It was a pleasure to contribute to the project.
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Alex Schultz
+1

On Fri, Mar 4, 2016 at 10:07 AM, Matt Fischer  wrote:

> +1 from me!
>
> gmail/openstack-dev is doing its thing where I see your email 4 hours
> before Emilien's original, so apologies for the reply ordering
>
> On Fri, Mar 4, 2016 at 8:49 AM, Cody Herriges  wrote:
>
>> Emilien Macchi wrote:
>> > Hi,
>> >
>> > To scale-up our review process, we created pupept-keystone-core and it
>> > worked pretty well until now.
>> >
>> > I propose that we continue this model and create puppet-neutron-core.
>> >
>> > I also propose to add Sergey Kolekonov in this group.
>> > He's done a great job helping us to bring puppet-neutron rock-solid for
>> > deploying OpenStack networking.
>> >
>> > http://stackalytics.com/?module=puppet-neutron=marks
>> > http://stackalytics.com/?module=puppet-neutron=commits
>> > 14 commits and 47 reviews, present on IRC during meetings & bug triage,
>> > he's always helpful. He has a very good understanding of Neutron &
>> > Puppet so I'm quite sure he would be a great addition.
>> >
>> > As usual, please vote!
>>
>> +1 from me.  Excited to continue seeing neutron get better.
>>
>> --
>> Cody
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When to revert a patch?

2016-03-04 Thread Flavio Percoco

On 04/03/16 10:24 -0500, Morgan Fainberg wrote:


On Mar 4, 2016 10:16, "Monty Taylor"  wrote:


On 03/04/2016 08:37 AM, Ruby Loo wrote:


Hijacked from ' [openstack-dev] [ironic] Remember to follow RFE process'
thread:

        > Should we revert the patch [1] for now? (Disclaimer. I haven't

looked at the

        > patch itself. But I don't think I should have to, to know what the

API

        > change is.)
        >

        Thanks for calling it out Ruby, that's unfortunate that the
        patch was
        merged without the RFE being approved. About reverting the patch I
        think we shouldn't do that now because the patch is touching the API
        and introducing a new microversion to it.


    Exactly. I've -2'ed the revert, as removing API version is even
    worse than landing a change without an RFE approved. Let us make
    sure to approve RFE asap, and then adjust the code according to it.


This brings up another issue, which I recall discussing before. Did we
decide that we'd never revert something that touches the
API/microversion? It might be good to have guidelines on this if we
don't already. If the API is incorrect? If the API could be improved? If
the API was only in master for, e.g., 48 hours?



I believe you need to treat master as if it's deployed to production. So once

an API change is released, 'fixing' it needs to be done like any other API
change - with a microversion bump and appropriate backwards compat.


(For instance, I have a CI/CD pipeline merging from master every hour and

doing a deploy - so 48 hours is a long time ago)


Monty


So let me jump in here and add that a direct revert should only happen in
extreme circumstances: i.e. a change that breaks behavior without a
microversion bump, or something causing a break that cannot easily be fixed
by rolling forward (being unable to land code in the gate at all, for example,
including roll-forward fixes).

In general (and especially with microversions), failing and fixing forward is
much better for the end users/deployers, especially since folks are doing CD
more aggressively now.

There are other considerations, but a revert really is one of the most extreme
responses and should be used sparingly.
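
When a revert really is warranted, the mechanics are the same as for any other
change: it goes through Gerrit review rather than any history rewriting. A
minimal sketch (the SHA is a placeholder; assumes git-review is configured for
the repository):

    git checkout -b revert-broken-change origin/master
    git revert <sha-of-the-offending-commit>
    git review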



Just want to +1 the above. Master should be considered as deployed and we
shouldn't assume things. So, I'd advise a proper fix that is also backwards
compatible.

*cough* Doing a change on the API and then a revert feels like releasing on pypi
and then deleting the release. *cough*

Cheers,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Flavio Percoco

On 04/03/16 11:59 -0500, Nikhil Komawar wrote:

I think the hard question to me here is:

Do people care about testing code on system installs vs. a virtual env?
run_tests does that, and in cases where you want to be extra sure about
your CI/CD nodes, packaging, and upgrades, it solves the problem.

Are packagers using tox for this purpose?


TBH, if you're testing things without venvs and systemwide, I think it'd be far
easier to just call nosetests/testr directly in your system than calling the
run_tests script.

Some packages don't even ship tests.

Cheers,
Flavio


On 3/4/16 11:16 AM, Steve Martinelli wrote:


The keystone team did the same during Liberty while we were moving
towards using oslo.* projects instead of oslo-incubator [0]. We also
noticed that they were rarely used, and we did not go through a
deprecation process since these are developer tools. We're still
finding a few spots in our docs that need updating, but overall it was
an easy transition.

[0]
https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870

stevemar


From: Flavio Percoco 
To: openstack-dev@lists.openstack.org
Cc: openstack-operat...@lists.openstack.org
Date: 2016/03/04 06:51 AM
Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
`tools/*`





Hey Folks,

I'm looking at doing some cleanups in our repo and I would like to
start by
deprecating the `run_tests` script and the contents in the `tools/` dir.

As far as I can tell, no one is using this code - we're not even using
it in the
gate - as it was broken until recently, I believe. The recommended way
to run
tests is using `tox` and I believe having this script in the code base
misleads
new contributors and other users.

So, before we do this. I wanted to get feedback from a broader
audience and give
a heads up to folks that might be using this code.

Any objections? Something I'm missing?

Flavio

--
@flaper87
Flavio Percoco
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Thanks,
Nikhil



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Many same "region_name" configuration really meaingful for Multi-region customers?

2016-03-04 Thread Adam Young

On 03/03/2016 11:43 PM, Dolph Mathews wrote:
Unless someone on the operations side wants to speak up and defend 
cross-region nova-cinder or nova-neutron interactions as being a 
legitimate use case, I'd be in favor of a single region identifier.


MOC use case I think depends on this.  I'll see if I can get someone 
from there to respond.




However, both of these configuration blocks should ultimately be used 
to configure keystoneauth, so I would be in favor of whatever solution 
simplifies configuration for keystoneauth.


On Tue, Mar 1, 2016 at 10:01 PM, Kai Qiang Wu wrote:


Hi All,


Right now, we found that nova.conf has many places for
region_name configuration. Check below:

nova.conf

***
[cinder]
os_region_name = ***

[neutron]
region_name= ***



***


From observation of some multi-region environments, those two options
are always configured with the same value.
*Question 1: Does nova support configuring different regions in
nova.conf? Like below*

[cinder]

os_region_name = RegionOne

[neutron]
region_name=RegionTwo


From Keystone's point of view, I suspect those regions can access each
other.


*Question 2: If they all need to be configured with the same value, why
not use a single region_name in nova.conf?* (instead of creating many
region_name options in the same file)

Is it just for code maintenance, or is there some other consideration?
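
If the answer ends up being "yes, they should always match", then today
operators simply keep the two options in sync by hand or via configuration
management, e.g. (a sketch only; assumes crudini is available and reuses the
option names from the snippet above):

    crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne
    crudini --set /etc/nova/nova.conf neutron region_name RegionOne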



Could nova and keystone community members help with this question?


Thanks


Best Wishes,


Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com 
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-04 Thread Paul Bourke
I've been impressed with Alicja's contributions and had suggested this 
nomination. So +1 a valuable addition.


Cheers,
-Paul

On 04/03/16 17:27, Swapnil Kulkarni wrote:

On Fri, Mar 4, 2016 at 10:25 PM, Steven Dake (stdake)  wrote:

Core Reviewers,

Alicja has been instrumental in our work around jinja2 docker file creation,
removing our symlink madness.  She has also been instrumental in actually
getting Diagnostics implemented in a sanitary fashion.  She has also done a
bunch of other work that folks in the community already know about that I
won't repeat here.

I had always hoped she would start reviewing so we could invite her to the
core review team, and over the last several months she has reviewed quite a
bit!  Her 90 day stats[1] place her at #9 with a solid ratio of 72%.  Her 30
day stats[2] are even better and place her at #6 with an improving ratio of
67%.  She also just doesn't rubber stamp reviews or jump in reviews at the
end; she sticks with them from beginning to end and finds real problems, not
trivial things.  Finally Alicja is full time on Kolla as funded by her
employer so she will be around for the long haul and always available.

Please consider my proposal to be a +1 vote.

To be approved for the core reviewer team, Alicja requires a majority vote
of 6 total votes with no veto within the one week period beginning now and
ending Friday March 11th.  If you're on the fence, you can always abstain.  If
the vote is unanimous before the voting ends, I will make appropriate
changes to gerrit's acls.  If there is a veto vote, voting will close prior
to March 11th.

Regards,
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/90
[2] http://stackalytics.com/report/contribution/kolla-group/30

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-04 Thread Swapnil Kulkarni
On Fri, Mar 4, 2016 at 10:25 PM, Steven Dake (stdake)  wrote:
> Core Reviewers,
>
> Alicja has been instrumental in our work around jinja2 docker file creation,
> removing our symlink madness.  She has also been instrumental in actually
> getting Diagnostics implemented in a sanitary fashion.  She has also done a
> bunch of other work that folks in the community already know about that I
> won't repeat here.
>
> I had always hoped she would start reviewing so we could invite her to the
> core review team, and over the last several months she has reviewed quite a
> bit!  Her 90 day stats[1] place her at #9 with a solid ratio of 72%.  Her 30
> day stats[2] are even better and place her at #6 with an improving ratio of
> 67%.  She also just doesn't rubber stamp reviews or jump in reviews at the
> end; she sticks with them from beginning to end and finds real problems, not
> trivial things.  Finally Alicja is full time on Kolla as funded by her
> employer so she will be around for the long haul and always available.
>
> Please consider my proposal to be a +1 vote.
>
> To be approved for the core reviewer team, Alicja requires a majority vote
> of 6 total votes with no veto within the one week period beginning now and
> ending Friday March 11th.  If you're on the fence, you can always abstain.  If
> the vote is unanimous before the voting ends, I will make appropriate
> changes to gerrit's acls.  If there is a veto vote, voting will close prior
> to March 11th.
>
> Regards,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
> [2] http://stackalytics.com/report/contribution/kolla-group/30
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][calico] networking-calico 1.1.3 release

2016-03-04 Thread Neil Jerram
I'm happy to announce the release of networking-calico 1.1.3. 
networking-calico is a Neutron driver that connects instances using IP 
routing instead of layer 2 bridging and tunneling.

networking-calico's server-side code works with Liberty and previous 
OpenStack releases back to Icehouse, and the release includes a DevStack 
plugin that makes it easy to try out with Liberty.  There are also 
various platform-packaged ways that you can use networking-calico [1], 
together with the 1.3.0 release of the common Calico code [2].

[1] http://docs.projectcalico.org/en/1.3.0/openstack.html
[2] 
http://lists.projectcalico.org/pipermail/calico-tech_lists.projectcalico.org/2016-February/95.html
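
For a quick look on Liberty, the DevStack plugin mentioned above can be enabled
with the usual one-liner in the localrc section of local.conf (a sketch only;
the URL follows the cgit link below, and branch selection is up to you):

    enable_plugin networking-calico https://git.openstack.org/openstack/networking-calico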

networking-calico docs and source are available at:

 http://docs.openstack.org/developer/networking-calico/
 http://git.openstack.org/cgit/openstack/networking-calico/

Please report issues through launchpad:

 https://bugs.launchpad.net/networking-calico

Below is the list of changes since the previous 1.0.0 release.

Many thanks to everyone who has contributed to this release!

Neil


Changes in networking-calico 1.0.0..1.1.3
-

fabfb01 Doc: explain networking-calico, to an OpenStack-savvy audience
52e582b Doc: add some implementation notes
9803fab Move Calico's mechanism driver to networking-calico
a246a50 devstack/bootstrap.sh: Don't set SERVICE_HOST
4203acf Various leader election improvements:
35ffc79 Remove 'sqlalchemy' from requirements.txt
7b4b9a7 Handle EtcdKeyNotFound in addition to EtcdCompareFailed.
d089633 Reduce election refresh interval, handle EtcdEventIndexCleared.
ede8ef2 Fix deadlock in status reporting.
6dce2e4 Adjust tox and testr config to print coverage.
b95c8ab Add TLS support to the Neutron driver's etcd connection.
59db532 Skip all ports in DHCP agents on different hosts
ef5df2a Use standard logging in test code, instead of print
eecdc0f Decouple status reporting from etcd polling.
c6aebff Prevent concurrent initialisation of the mechanism driver.
fe5a2dd Update pbr requirement to match global-requirements
9bd6055 New DHCP agent driven by etcd data instead of by Neutron RPC
b7ff0e8 Pass a string to delete_onlink_route instead of an IPNetwork
b085fb0 Fix handling of endpoint directory deletion
8d740be Update test-requirements.txt to fix CI.
27ea7ca Add service framework around Calico DHCP agent
94bf75c Don't automatically install and use Calico DHCP agent
2f2a227 Debian and RPM packaging for release
6495f68 Improve workaround for requests/urllib3 vendoring issue
eda6447 Debian and RPM packaging for release
ec7a01c Make networking-calico RPM package depend on python-pbr
6047195 Change default host for etcd connections from localhost to 127.0.0.1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu
I don't think there is any consensus on supporting a single distro. There are 
multiple disagreements in this thread, including from several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at midcycle was about what the we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu VMs, and on a 
RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu
Reply-To: "openstack-dev@lists.openstack.org"
Date: Monday, February 29, 2016 at 9:40 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested having 
Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please note 
that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future. Who knows. Why make Magnum take this huge risk? 
Again, the decision of supporting a single distro is a very big decision. Please 
bring it up to the team and have it discussed thoughtfully before making any 
decision. Also, Magnum doesn't have to support every distro and every version 
for every coe, but should support *more than one* popular distro for 

Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Matt Fischer
+1 from me!

gmail/openstack-dev is doing its thing where I see your email 4 hours
before Emilien's original, so apologies for the reply ordering

On Fri, Mar 4, 2016 at 8:49 AM, Cody Herriges  wrote:

> Emilien Macchi wrote:
> > Hi,
> >
> > To scale-up our review process, we created pupept-keystone-core and it
> > worked pretty well until now.
> >
> > I propose that we continue this model and create puppet-neutron-core.
> >
> > I also propose to add Sergey Kolekonov in this group.
> > He's done a great job helping us to bring puppet-neutron rock-solid for
> > deploying OpenStack networking.
> >
> > http://stackalytics.com/?module=puppet-neutron=marks
> > http://stackalytics.com/?module=puppet-neutron=commits
> > 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> > he's always helpful. He has a very good understanding of Neutron &
> > Puppet so I'm quite sure he would be a great addition.
> >
> > As usual, please vote!
>
> +1 from me.  Excited to continue seeing neutron get better.
>
> --
> Cody
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] constrained tox targets

2016-03-04 Thread Armando M.
On 4 March 2016 at 08:50, Ihar Hrachyshka  wrote:

> Hi all,
>
> currently we have both py27 and py27-constraints tox targets in neutron
> repos. For some repos (neutron) they are even executed in both master and
> stable/liberty gates. TC lately decided that instead of having separate
> targets for constrained requirements, we want to have constraints applied
> to default targets (py27, docs, …), unconditionally; we also want to use
> those ‘default’ targets in gate; and we also want to eventually get rid of
> those -constraints tox targets.
>
> To achieve that, I sent a set of patches spanning neutron, neutron-*aas,
> and project-config repos:
>
>
> https://review.openstack.org/#/q/status:open+branch:master+topic:neutron-constraints
>
> For the very least, we want to get our mitaka gate switched to ‘default’
> (but constrained) tox targets before final release, so that we have a solid
> foundation in the stable/mitaka branch that would reflect TC desires.
>
> Those important patches are (in order of merge):
>
> for mitaka:
> - https://review.openstack.org/286778: makes ‘default’ tox targets
> constrained;
> - https://review.openstack.org/286777: switches mitaka gate to using
> ‘default’ targets;
> - https://review.openstack.org/288516: cleans up -constraints targets;
>
> for liberty:
> - [not proposed yet; waiting for 286778]: makes ‘default’ tox targets
> constrained;
> - https://review.openstack.org/288506: switches branch back to ‘default’
> targets;
> * we probably don’t want to drop old targets since some external users may
> already rely on them
>
> There are also patches to constrain remaining gate jobs (releasenotes,
> cover) too:
> - https://review.openstack.org/288517: neutron
> - https://review.openstack.org/288472: lbaas
> - https://review.openstack.org/288470: fwaas
> - https://review.openstack.org/288443: vpnaas
>
> ...though those depend on some project-config work:
> - https://review.openstack.org/288451: releasenotes
> - https://review.openstack.org/288455: coverage
> * note those also depend on another patch for zuul-cloner
>
> Thanks for attention and reviews,
>

This is mainly a question of timing: when shall we pull the trigger on all
of these? I am happy to do it today, but it's already Friday afternoon in
some parts of the world and changes span multiple projects...


> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Nikhil Komawar
I think the hard question to me here is:

Do people care about testing code on system installs vs. a virtual env?
run_tests does that, and in cases where you want to be extra sure about
your CI/CD nodes, packaging, and upgrades, it solves the problem.

Are packagers using tox for this purpose?

On 3/4/16 11:16 AM, Steve Martinelli wrote:
>
> The keystone team did the same during Liberty while we were moving
> towards using oslo.* projects instead of oslo-incubator [0]. We also
> noticed that they were rarely used, and we did not go through a
> deprecation process since these are developer tools. We're still
> finding a few spots in our docs that need updating, but overall it was
> an easy transition.
>
> [0]
> https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870
>
> stevemar
>
>
> From: Flavio Percoco 
> To: openstack-dev@lists.openstack.org
> Cc: openstack-operat...@lists.openstack.org
> Date: 2016/03/04 06:51 AM
> Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
> `tools/*`
>
> 
>
>
>
> Hey Folks,
>
> I'm looking at doing some cleanups in our repo and I would like to
> start by
> deprecating the `run_tests` script and the contents in the `tools/` dir.
>
> As far as I can tell, no one is using this code - we're not even using
> it in the
> gate - as it was broken until recently, I believe. The recommended way
> to run
> tests is using `tox` and I believe having this script in the code base
> misleads
> new contributors and other users.
>
> So, before we do this. I wanted to get feedback from a broader
> audience and give
> a heads up to folks that might be using this code.
>
> Any objections? Something I'm missing?
>
> Flavio
>
> -- 
> @flaper87
> Flavio Percoco
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-04 Thread Michał Jastrzębski
+1!

Congrats Ala! I'm super proud!
For others, I had the pleasure of being on the same team as Ala and I vouch for
her with all the credibility I have :) She will be a wonderful addition
to the core team!

On 4 March 2016 at 10:55, Steven Dake (stdake)  wrote:
> Core Reviewers,
>
> Alicja has been instrumental in our work around jinja2 docker file creation,
> removing our symlink madness.  She has also been instrumental in actually
> getting Diagnostics implemented in a sanitary fashion.  She has also done a
> bunch of other work that folks in the community already know about that I
> won't repeat here.
>
> I had always hoped she would start reviewing so we could invite her to the
> core review team, and over the last several months she has reviewed quite a
> bit!  Her 90 day stats[1] place her at #9 with a solid ratio of 72%.  Her 30
> day stats[2] are even better and place her at #6 with an improving ratio of
> 67%.  She also just doesn't rubber stamp reviews or jump in reviews at the
> end; she sticks with them from beginning to end and finds real problems, not
> trivial things.  Finally Alicja is full time on Kolla as funded by her
> employer so she will be around for the long haul and always available.
>
> Please consider my proposal to be a +1 vote.
>
> To be approved for the core reviewer team, Alicja requires a majority vote
> of 6 total votes with no veto within the one week period beginning now and
> ending Friday March 11th.  If you're on the fence, you can always abstain.  If
> the vote is unanimous before the voting ends, I will make appropriate
> changes to gerrit's acls.  If there is a veto vote, voting will close prior
> to March 11th.
>
> Regards,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla-group/90
> [2] http://stackalytics.com/report/contribution/kolla-group/30
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote] Proposing Alicja Kwasniewska for core reviewer

2016-03-04 Thread Steven Dake (stdake)
Core Reviewers,

Alicja has been instrumental in our work around jinja2 docker file creation, 
removing our symlink madness.  She has also been instrumental in actually 
getting Diagnostics implemented in a sanitary fashion.  She has also done a 
bunch of other work that folks in the community already know about that I won't 
repeat here.

I had always hoped she would start reviewing so we could invite her to the core 
review team, and over the last several months she has reviewed quite a bit!  
Her 90 day stats[1] place her at #9 with a solid ratio of 72%.  Her 30 day 
stats[2] are even better and place her at #6 with an improving ratio of 67%.  
She also doesn't just rubber-stamp reviews or jump into reviews at the end; she 
sticks with them from beginning to end and finds real problems, not trivial 
things.  Finally Alicja is full time on Kolla as funded by her employer so she 
will be around for the long haul and always available.

Please consider my proposal to be a +1 vote.

To be approved for the core reviewer team, Alicja requires a majority vote of 6 
total votes with no veto within the one week period beginning now and ending 
Friday March 11th.  If you're on the fence, you can always abstain.  If the vote 
is unanimous before the voting ends, I will make appropriate changes to 
gerrit's acls.  If there is a veto vote, voting will close prior to March 11th.

Regards,
-steve

[1] http://stackalytics.com/report/contribution/kolla-group/90
[2] http://stackalytics.com/report/contribution/kolla-group/30
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Samer Machara
Hi, Igor 
Thanks for answering so quickly. 

I waited until the following message appeared: 
Installation timed out! (3000 seconds) 
I don't have any virtual machines created. 

I updated to VirtualBox version 5.0. Now I get the following message: 

VBoxManage: error: Machine 'fuel-master' is not currently running 
Waiting for product VM to download files. Please do NOT abort the script... 

I'm still waiting 

- Mail original -

De: "Maksim Malchuk"  
À: "OpenStack Development Mailing List (not for usage questions)" 
 
Envoyé: Vendredi 4 Mars 2016 15:19:54 
Objet: Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: 
error: Guest not running [ubuntu14.04] 

Igor, 



Some information about my system: 
OS: ubuntu 14.04 LTS 
Memory: 3,8GiB 

Samer can't run many guests I think. 


On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat < imar...@mirantis.com > wrote: 



Samer, Maksim, 
I'd rather say that the script already started fuel-master (VM "fuel-master" has 
been successfully started.), didn't find a running guest (VBoxManage: error: 
Guest not running), but it may try to start it again afterwards. 

Samer, 
- how many VMs are there running besides fuel-master? 
- is it still showing "Waiting for product VM to download files. Please do NOT 
abort the script..." ? 
- for how long did you wait since the message above? 


Regards, 
Igor Marnat 

On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk < mmalc...@mirantis.com > wrote: 



Hi Samer, 

VBoxManage: error: Guest not running 

This looks like a problem with VirtualBox itself or with the settings for the 'fuel-master' 
VM; it can't boot it. 
Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start it 
manually - it should show you what exactly happens. 
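
If you prefer the command line, a rough equivalent of that check is the 
following (just a sketch, assuming the VM is registered under the name 
'fuel-master'; these are standard VBoxManage subcommands): 

VBoxManage list vms                          # confirm 'fuel-master' is registered 
VBoxManage startvm fuel-master --type gui    # boot it with a visible console window 
VBoxManage showvminfo fuel-master --machinereadable | grep -i vmstate 

If the VM powers off again right away, the console window (or the VM log, 
usually under ~/VirtualBox VMs/fuel-master/Logs/VBox.log) should show the 
actual boot error. 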


On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara < 
samer.mach...@telecom-sudparis.eu > wrote: 





Hello, everyone. 
I'm new to Fuel. I'm trying to follow the QuickStart Guide ( 
https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html ), 
but I have the following Error: 


Waiting for VM "fuel-master" to power on... 
VM "fuel-master" has been successfully started. 
VBoxManage: error: Guest not running 
VBoxManage: error: Guest not running 
... 
VBoxManage: error: Guest not running 
Waiting for product VM to download files. Please do NOT abort the script... 


I hope you can help me. 

Thanks in advance 




Some information about my system: 
OS: ubuntu 14.04 LTS 
Memory: 3,8GiB 
Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4 
OS type: 64-bit 
Disk 140,2GB 
VirtualBox Version: 4.3.36_Ubuntu 
Checking for 'expect'... OK 
Checking for 'xxd'... OK 
Checking for "VBoxManage"... OK 
Checking for VirtualBox Extension Pack... OK 
Checking if SSH client installed... OK 
Checking if ipconfig or ifconfig installed... OK 





I modified config.sh to adapt it to my hardware configuration 
... 
# Master node settings 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_master_memory_mb=1024 
vm_master_disk_mb=2 
... 
# The number of nodes for installing OpenStack on 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
cluster_size=3 
... 
# Slave node settings. This section allows you to define CPU count for each 
slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_cpu_default=1 
vm_slave_cpu[1]=1 
vm_slave_cpu[2]=1 
vm_slave_cpu[3]=1 
... 
# This section allows you to define RAM size in MB for each slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_memory_default=1024 


vm_slave_memory_mb[1]=512 
vm_slave_memory_mb[2]=512 
vm_slave_memory_mb[3]=512 
... 
# Nodes with combined roles may require more disk space. 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_first_disk_mb=2 
vm_slave_second_disk_mb=2 
vm_slave_third_disk_mb=2 
... 


I found someone who had a similar problem ( 
https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html ); he 
had a corrupted iso file and solved the problem by downloading it again. I 
downloaded the .iso file from 
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
 . I checked the size: 3.1 GB. However, I still have the problem. 

__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 







-- 
Best Regards, 
Maksim Malchuk, 
Senior DevOps Engineer , 
MOS: Product Engineering, 
Mirantis, Inc 


__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: 

[openstack-dev] [openstack-ansible][security] Security hardening backport to Liberty desirable?

2016-03-04 Thread Major Hayden
Hey folks,

I have proposed a review[1] which adds the openstack-ansible-security[2] role 
to OpenStack-Ansible's Liberty release.  I would really appreciate some 
feedback from deployers on whether this change is desirable in Liberty.

The role applies cleanly to Liberty on Ubuntu 14.04 and the role already has 
some fairly basic gating.
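
For deployers who want to try it against an existing Liberty AIO before 
weighing in, the workflow is roughly the following sketch (the variable and 
playbook names are taken from the master-branch docs and are assumptions 
here, so please double-check them against the actual backport):

# opt the deployment in to the hardening role (assumed variable name)
echo "apply_security_hardening: true" >> /etc/openstack_deploy/user_variables.yml

# dry-run the playbook first, then apply it
cd /opt/openstack-ansible/playbooks
openstack-ansible security-hardening.yml --check
openstack-ansible security-hardening.yml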

The two main questions are:

  1) Does it make sense to backport the openstack-ansible-security
 role/playbook to Liberty?
  2) Should it be applied by default on AIO/gate builds as it is
 in Mitaka (master)?

Thanks!

[1] https://review.openstack.org/#/c/273257/
[2] http://docs.openstack.org/developer/openstack-ansible-security/

--
Major Hayden



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] constrained tox targets

2016-03-04 Thread Ihar Hrachyshka

Hi all,

currently we have both py27 and py27-constraints tox targets in neutron  
repos. For some repos (neutron) they are even executed in both master and  
stable/liberty gates. The TC recently decided that instead of having separate  
targets for constrained requirements, we want to have constraints applied  
to default targets (py27, docs, …), unconditionally; we also want to use  
those ‘default’ targets in gate; and we also want to eventually get rid of  
those -constraints tox targets.
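
For anyone who has not followed the constraints work, the practical effect on 
a developer machine is small: the default envs simply install packages pinned 
to upper-constraints.txt instead of whatever pip resolves that day. A rough 
sketch of the difference (assuming a local checkout of the openstack/requirements 
repo for the constraints file):

# today: constrained runs need a dedicated env
tox -e py27-constraints

# after the switch: the plain env itself is constrained, roughly equivalent to
pip install -c /path/to/requirements/upper-constraints.txt \
    -r requirements.txt -r test-requirements.txt
tox -e py27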


To achieve that, I sent a set of patches spanning neutron, neutron-*aas,  
and project-config repos:


https://review.openstack.org/#/q/status:open+branch:master+topic:neutron-constraints

For the very least, we want to get our mitaka gate switched to ‘default’  
(but constrained) tox targets before final release, so that we have a solid  
foundation in the stable/mitaka branch that would reflect TC desires.


Those important patches are (in order of merge):

for mitaka:
- https://review.openstack.org/286778: makes ‘default’ tox targets  
constrained;
- https://review.openstack.org/286777: switches mitaka gate to using  
‘default’ targets;

- https://review.openstack.org/288516: cleans up -constraints targets;

for liberty:
- [not proposed yet; waiting for 286778]: makes ‘default’ tox targets  
constrained;
- https://review.openstack.org/288506: switches branch back to ‘default’  
targets;
* we probably don’t want to drop old targets since some external users may  
already rely on them


There are also patches to constrain remaining gate jobs (releasenotes,  
cover) too:

- https://review.openstack.org/288517: neutron
- https://review.openstack.org/288472: lbaas
- https://review.openstack.org/288470: fwaas
- https://review.openstack.org/288443: vpnaas

...though those depend on some project-config work:
- https://review.openstack.org/288451: releasenotes
- https://review.openstack.org/288455: coverage
* note those also depend on another patch for zuul-cloner

Thanks for attention and reviews,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] mitaka-3 development milestone

2016-03-04 Thread Thierry Carrez

Hello everyone,

The last milestone of the Mitaka development cycle, "mitaka-3", has now been 
reached. Some OpenStack projects following the milestone-based release 
schedule took the opportunity to publish a development artifact, which 
contains all the new features and bugfixes that have been added since 
mitaka-2, 6 weeks ago:


aodh 2.0.0.0b3:
https://tarballs.openstack.org/aodh/aodh-2.0.0.0b3.tar.gz

barbican 2.0.0.0b3:
https://tarballs.openstack.org/barbican/barbican-2.0.0.0b3.tar.gz

ceilometer 6.0.0.0b3:
https://tarballs.openstack.org/ceilometer/ceilometer-6.0.0.0b3.tar.gz

cinder 8.0.0.0b3:
https://tarballs.openstack.org/cinder/cinder-8.0.0.0b3.tar.gz

congress 3.0.0.0b3:
https://tarballs.openstack.org/congress/congress-3.0.0.0b3.tar.gz

designate 2.0.0.0b3:
https://tarballs.openstack.org/designate/designate-2.0.0.0b3.tar.gz
https://tarballs.openstack.org/designate-dashboard/designate-dashboard-2.0.0.0b3.tar.gz

glance 12.0.0.0b3:
https://tarballs.openstack.org/glance/glance-12.0.0.0b3.tar.gz

heat 6.0.0.0b3:
https://tarballs.openstack.org/heat/heat-6.0.0.0b3.tar.gz

horizon 9.0.0.0b3:
https://tarballs.openstack.org/horizon/horizon-9.0.0.0b3.tar.gz

keystone 9.0.0.0b3:
https://tarballs.openstack.org/keystone/keystone-9.0.0.0b3.tar.gz

manila 2.0.0.0b3:
https://tarballs.openstack.org/manila/manila-2.0.0.0b3.tar.gz

mistral 2.0.0.0b3:
https://tarballs.openstack.org/mistral/mistral-2.0.0.0b3.tar.gz
https://tarballs.openstack.org/mistral-dashboard/mistral-dashboard-2.0.0.0b3.tar.gz
https://tarballs.openstack.org/mistral-extra/mistral-extra-2.0.0.0b3.tar.gz

murano 2.0.0.0b3:
https://tarballs.openstack.org/murano/murano-2.0.0.0b3.tar.gz

neutron 8.0.0.0b3:
https://tarballs.openstack.org/neutron/neutron-8.0.0.0b3.tar.gz
https://tarballs.openstack.org/neutron-fwaas/neutron-fwaas-8.0.0.0b3.tar.gz
https://tarballs.openstack.org/neutron-lbaas/neutron-lbaas-8.0.0.0b3.tar.gz
https://tarballs.openstack.org/neutron-vpnaas/neutron-vpnaas-8.0.0.0b3.tar.gz

nova 13.0.0.0b3:
https://tarballs.openstack.org/nova/nova-13.0.0.0b3.tar.gz

sahara 4.0.0.0b3:
https://tarballs.openstack.org/sahara/sahara-4.0.0.0b3.tar.gz
https://tarballs.openstack.org/sahara-dashboard/sahara-dashboard-4.0.0.0b3.tar.gz
https://tarballs.openstack.org/sahara-extra/sahara-extra-4.0.0.0b3.tar.gz
https://tarballs.openstack.org/sahara-image-elements/sahara-image-elements-4.0.0.0b3.tar.gz

searchlight 0.2.0.0b3:
https://tarballs.openstack.org/searchlight/searchlight-0.2.0.0b3.tar.gz

senlin 1.0.0.0b3:
https://tarballs.openstack.org/senlin/senlin-1.0.0.0b3.tar.gz

trove 5.0.0.0b3:
https://tarballs.openstack.org/trove/trove-5.0.0.0b3.tar.gz

zaqar 2.0.0.0b3:
https://tarballs.openstack.org/zaqar/zaqar-2.0.0.0b3.tar.gz

You can also find all those links (and all other mitaka intermediary
releases) at:
http://docs.openstack.org/releases/releases/mitaka.html

For those projects, mitaka-3 marks the end of the feature addition 
period and the start of the stabilization / bugfixing period before 
final release. Those projects are now under Feature Freeze: feature 
addition, new configuration options (or other significant behavioral 
changes) should not be merged without being approved by your project PTL 
as exceptions. The goal is to facilitate testing and focus on bugfixing 
and quality until the project can produce its first Mitaka release 
candidate.


We are also under requirements freeze (exceptions needed to bump 
requirements or introduce a last-minute dependency) and Soft String 
Freeze (limit the number of string changes to facilitate the work of 
translators).


For more information on the release schedule and the freezes, please 
see: http://releases.openstack.org/mitaka/schedule.html


And yes, there are only 5 weeks before the end of the Mitaka cycle!
Cheers!

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][sahara-tests] Sahara-tests release and launchpad project

2016-03-04 Thread Evgeny Sikachev
Also, a new Launchpad project would add the ability to create blueprints for the
sahara-tests project, which is good for release notes.

On Fri, Mar 4, 2016 at 7:14 PM, michael mccune  wrote:

> On Fri, Mar 4, 2016 at 12:29 AM, Evgeny Sikachev > > wrote:
>>
>> Hi, sahara folks!
>>
>> I would like to propose releasing sahara-tests. All steps from the spec are
>> implemented except releases and packaging. [0]
>>
>> Release criteria: framework ready for testing a new release of Sahara.
>>
>> Next step: build a package and publish to PyPI.
>>
>
> no objection from me on creating a release, i am curious how we will plan
> future releases though.
>
>
>> Also, I think we need to create a separate Launchpad project (like
>> python-saharaclient[1]) for a correct bug-tracking process. This adds the
>> ability to nominate bugs to releases and will not cause confusion with
>> Sahara bugs.
>>
>>
> i don't have strong opinions about creating a new launchpad. on one hand,
> i can see the value of keeping these bugs separate and having a specific
> location for the project. on the other hand, i generally like being able to
> see all the sahara projects in one place.
>
> thanks for bringing this up Evgeny
>
> regards,
> mike
>
>
>> [0]
>>
>> https://github.com/openstack/sahara-specs/blob/master/specs/mitaka/move-scenario-tests-to-separate-repository.rst
>> [1] https://bugs.launchpad.net/python-saharaclient
>>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-
Best Regards,

Evgeny Sikachev
QA Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Ronald Bradford
+1 from me.   I am all for standardizing this.

Personally, when I started looking at OpenStack code as a new contributor, this
was very confusing: the online docs listed run_tests.sh, but in the first
project I looked at it didn't exist and everything was tox based.

I went on to blog about my first experience with tox, as this was lacking in
the OpenStack docs.

Ronald



On Fri, Mar 4, 2016 at 11:16 AM, Steve Martinelli 
wrote:

> The keystone team did the same during Liberty while we were moving towards
> using oslo.* projects instead of oslo-incubator [0]. We also noticed that
> they were rarely used, and we did not go through a deprecation process
> since these are developer tools. We're still finding a few spots in our
> docs that need updating, but overall it was an easy transition.
>
> [0]
> https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870
>
> stevemar
>
>
> From: Flavio Percoco 
> To: openstack-dev@lists.openstack.org
> Cc: openstack-operat...@lists.openstack.org
> Date: 2016/03/04 06:51 AM
> Subject: [Openstack-operators] [glance] Remove `run_tests.sh` and
> `tools/*`
> --
>
>
>
> Hey Folks,
>
> I'm looking at doing some cleanups in our repo and I would like to start by
> deprecating the `run_tests` script and the contents in the `tools/` dir.
>
> As far as I can tell, no one is using this code - we're not even using it
> in the
> gate - as it was broken until recently, I believe. The recommended way to
> run
> tests is using `tox` and I believe having this script in the code base
> misleads
> new contributors and other users.
>
> So, before we do this, I wanted to get feedback from a broader audience
> and give
> a heads up to folks that might be using this code.
>
> Any objections? Something I'm missing?
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
> [attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Live Migration post feature freeze update

2016-03-04 Thread Murray, Paul (HP Cloud)
Hi All,

Now that we have passed the feature freeze I thought it was worth giving a 
quick update
on where we are with the live migration priority.

The following is a list of work items that have been merged in this cycle (for 
the live migration sub-team's working page see 
https://etherpad.openstack.org/p/mitaka-live-migration ). There 
are also a number of merged and ongoing bug fixes that are not listed here.

Progress reporting
Provide progress reporting information for on-going live migrations.

* 
https://blueprints.launchpad.net/nova/+spec/live-migration-progress-report

  *   https://review.openstack.org/#/q/topic:bp/live-migration-progress-report

Force complete
Force an on-going live migration to complete by pausing the virtual machine for 
the
duration of the migration.

* 
https://blueprints.launchpad.net/nova/+spec/pause-vm-during-live-migration

* 
https://review.openstack.org/#/q/topic:bp/pause-vm-during-live-migration

Cancel
Cancel an on-going live migration.

* https://blueprints.launchpad.net/nova/+spec/abort-live-migration

  *   https://review.openstack.org/#/q/topic:bp/abort-live-migration
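
As a quick usage illustration for the force-complete and cancel items above 
(a sketch only -- treat the exact client syntax as an assumption until the 
python-novaclient documentation for the new microversions lands):

nova server-migration-list <server>                       # find the id of the running migration
nova live-migration-force-complete <server> <migration>   # pause the VM so the migration finishes
nova live-migration-abort <server> <migration>            # or cancel it instead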

Block live migration with attached volumes
Enable live migration of VMs with a combination of local and shared storage.

* 
https://blueprints.launchpad.net/nova/+spec/block-live-migrate-with-attached-volumes

* https://review.openstack.org/#/c/227278

Split networking
Send live migration traffic over a specified network.

* 
https://blueprints.launchpad.net/nova/+spec/split-network-plane-for-live-migration

* 
https://review.openstack.org/#/q/topic:bp/split-network-plane-for-live-migration

Make live migration api friendly
Remove -disk_over_commit flag and add -block_migration=auto (let nova determine
how to migrate the disks)

* 
https://blueprints.launchpad.net/nova/+spec/making-live-migration-api-friendly

  *   
https://review.openstack.org/#/q/topic:bp/making-live-migration-api-friendly

Use request spec
Add scheduling to live migration and evacuate using original request spec 
(includes all
original scheduling properties)

* 
https://blueprints.launchpad.net/nova/+spec/check-destination-on-migrations

* https://review.openstack.org/#/c/277800/

* https://review.openstack.org/#/c/273104/

Deprecate migration flags
Replace the combination of migration configuration flags with a single tunneled 
flag.

* (no blueprint)

* 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:deprecate-migration-flags-config

Objectify live migrate data
Use the migrate object instead of a dictionary in migration code.

* 
https://blueprints.launchpad.net/nova/+spec/objectify-live-migrate-data

* 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/objectify-live-migrate-data

Next steps...

Now that we have passed the feature freeze, we will be turning our attention to 
the following three tasks:

1.   Documenting the new features

2.   Expanding the CI coverage

3.   Fixing bugs

The CI job gate-tempest-dsvm-multinode-live-migration was added to the 
experimental
queue earlier in the cycle. We now need to add tests to this job to increase 
coverage. If
you have any suggestions for CI improvements please contribute them on this 
page:

https://etherpad.openstack.org/p/nova-live-migration-CI-ideas

If you can contribute to live migration bug fixing, you can look for things to 
do here:

https://bugs.launchpad.net/nova/+bugs?field.tag=live-migration

For priority reviews see the live migration section here:

https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

The live migration sub-team has an IRC meeting on Tuesdays at 14:00 UTC on
#openstack-meeting-3:

https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Best regards,
Paul

Paul Murray
Technical Lead, HPE Cloud
Hewlett Packard Enterprise
+44 117 316 2527



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Corey O'Brien
I don't think anyone is saying that code should somehow block support for
multiple distros. The discussion at midcycle was about what the we should
gate on and ensure feature parity for as a team. Ideally, we'd like to get
support for every distro, I think, but no one wants to have that many
gates. Instead, the consensus at the midcycle was to have 1 reference
distro for each COE, gate on those and develop features there, and then
have any other distros be maintained by those in the community that are
passionate about them.

The issue also isn't about how difficult or not it is. The problem we want
to avoid is spending precious time guaranteeing that new features and bug
fixes make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
wrote:

> My position on this is simple.
>
> Operators are used to using specific distros because that is what they
> used in the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro,
> and you learn it inside and out.  This means you don't want to relearn a
> new distro, especially if you're an RPM user going to DEB or a DEB user going
> to RPM.  These are non-starter options for operators, and as a result, mean
> that distro choice is a must.  Since CoreOS is a new OS in the marketplace,
> it may make sense to consider placing it in "third" position in terms of
> support.
>
> Besides that problem, various distribution companies will only support
> distros running in VMs if it matches the host kernel, which makes total
> sense to me.  This means on an Ubuntu host if I want support I need to run
> Ubuntu VMs, on a RHEL host I want to run RHEL VMs, because, hey, I want my
> issues supported.
>
> For these reasons and these reasons alone, there is no good rationale to
> remove multi-distro support from Magnum.  All I've heard in this thread so
> far is "it's too hard".  It's not too hard, especially with Heat conditionals
> making their way into Mitaka.
>
> Regards
> -steve
>
> From: Hongbin Lu 
> Reply-To: "openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> Date: Monday, February 29, 2016 at 9:40 AM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
> Hi team,
>
>
>
> This is a continued discussion from a review [1]. Corey O'Brien suggested
> to have Magnum support a single OS distro (Atomic). I disagreed. I think we
> should bring the discussion to here to get broader set of inputs.
>
>
>
> *Corey O'Brien*
>
> *From the midcycle, we decided we weren't going to continue to support 2
> different versions of the k8s template. Instead, we were going to maintain
> the Fedora Atomic version of k8s and remove the coreos templates from the
> tree. I don't think we should continue to develop features for coreos k8s
> if that is true.*
>
> *In addition, I don't think we should break the coreos template by adding
> the trust token as a heat parameter.*
>
>
>
> *Hongbin Lu*
>
> *I was on the midcycle and I don't remember any decision to remove CoreOS
> support. Why do you want to remove the CoreOS templates from the tree? Please note
> that this is a very big decision; please discuss it with the team
> thoughtfully and make sure everyone agrees.*
>
>
>
> *Corey O'Brien*
>
> *Removing the coreos templates was a part of the COE drivers decision.
> Since each COE driver will only support 1 distro+version+coe we discussed
> which ones to support in tree. The decision was that instead of trying to
> support every distro and every version for every coe, the magnum tree would
> only have support for 1 version of 1 distro for each of the 3 COEs
> (swarm/docker/mesos). Since we already are going to support Atomic for
> swarm, removing coreos and keeping Atomic for kubernetes was the favored
> choice.*
>
>
>
> *Hongbin Lu*
>
> *Strongly disagree. It is a huge risk to support a single distro. The
> selected distro could die in the future. Who knows. Why make Magnum take
> this huge risk? Again, the decision of supporting a single distro is a very
> big decision. Please bring it up to the team and have it discussed
> thoughtfully before making any decision. Also, Magnum doesn't have to
> support every distro and every version for every coe, but should support
> *more than one* popular distro for some COEs (especially for the popular
> COEs).*
>
>
>
> *Corey O'Brien*
>
> *The discussion at the midcycle started from the idea of adding support
> for RHEL and CentOS. We all discussed and decided that we wouldn't try to
> support everything in tree. Magnum would provide support in-tree for 1 per
> COE and the COE driver interface would allow others to add support for
> their preferred distro out of tree.*
>
>
>
> *Hongbin Lu*
>
> *I agreed with the part that "we wouldn't try to support everything in tree".
> That doesn't imply the decision to support single distro. Again, support
> single 

Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-04 Thread Ed Leafe
On 03/03/2016 09:09 AM, Sam Matzek wrote:

>> > So, don't deprecate until you have a solution.  All you will be doing is
>> > putting people in a tight spot where they will have to fork the code base,
>> > and that is downright antisocial.
>> >
>> > Let's plan this out in the Newton Summit and have a plan moving forward.
> Deprecate isn't the same as remove unless I'm missing something on how
> this works.  I think we want to deprecate it to discourage further
> use, to gather current use cases, and to drive approved specs for
> those use cases.   Hooks should not be removed from tree until we have
> the replacements in tree.

Deprecate ideally means "Don't use this anymore, as it is not the
recommended approach, and will not be supported/available in the future.
Instead, use XXX."

In other words, don't tell people not to use something unless you can
point them to a better way to accomplish what they need to do.

-- 

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Samer Machara
Hi, Igor 
Thanks for answering so quickly. 

I waited until the following message appeared: 
Installation timed out! (3000 seconds) 
I don't have any virtual machines created. 

I updated to VirtualBox version 5.0. Now I get the following message: 

VBoxManage: error: Machine 'fuel-master' is not currently running 
Waiting for product VM to download files. Please do NOT abort the script... 



- Mail original -

De: "Maksim Malchuk"  
À: "OpenStack Development Mailing List (not for usage questions)" 
 
Envoyé: Vendredi 4 Mars 2016 15:19:54 
Objet: Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: 
error: Guest not running [ubuntu14.04] 

Igor, 



Some information about my system: 
OS: ubuntu 14.04 LTS 
Memory: 3,8GiB 

Samer can't run many guests I think. 


On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat < imar...@mirantis.com > wrote: 



Samer, Maksim, 
I'd rather say that the script already started fuel-master (VM "fuel-master" has 
been successfully started.), didn't find a running guest (VBoxManage: error: 
Guest not running), but it may try to start it again afterwards. 

Samer, 
- how many VMs are there running besides fuel-master? 
- is it still showing "Waiting for product VM to download files. Please do NOT 
abort the script..." ? 
- for how long did you wait since the message above? 


Regards, 
Igor Marnat 

On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk < mmalc...@mirantis.com > wrote: 



Hi Samer, 

VBoxManage: error: Guest not running 

This looks like a problem with VirtualBox itself or with the settings for the 'fuel-master' 
VM; it can't boot it. 
Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start it 
manually - it should show you what exactly happens. 


On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara < 
samer.mach...@telecom-sudparis.eu > wrote: 





Hello, everyone. 
I'm new to Fuel. I'm trying to follow the QuickStart Guide ( 
https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html ), 
but I have the following Error: 


Waiting for VM "fuel-master" to power on... 
VM "fuel-master" has been successfully started. 
VBoxManage: error: Guest not running 
VBoxManage: error: Guest not running 
... 
VBoxManage: error: Guest not running 
Waiting for product VM to download files. Please do NOT abort the script... 


I hope you can help me. 

Thanks in advance 




Some information about my system: 
OS: ubuntu 14.04 LTS 
Memory: 3,8GiB 
Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4 
OS type: 64-bit 
Disk 140,2GB 
VirtualBox Version: 4.3.36_Ubuntu 
Checking for 'expect'... OK 
Checking for 'xxd'... OK 
Checking for "VBoxManage"... OK 
Checking for VirtualBox Extension Pack... OK 
Checking if SSH client installed... OK 
Checking if ipconfig or ifconfig installed... OK 





I modified config.sh to adapt it to my hardware configuration 
... 
# Master node settings 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_master_memory_mb=1024 
vm_master_disk_mb=2 
... 
# The number of nodes for installing OpenStack on 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
cluster_size=3 
... 
# Slave node settings. This section allows you to define CPU count for each 
slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_cpu_default=1 
vm_slave_cpu[1]=1 
vm_slave_cpu[2]=1 
vm_slave_cpu[3]=1 
... 
# This section allows you to define RAM size in MB for each slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_memory_default=1024 


vm_slave_memory_mb[1]=512 
vm_slave_memory_mb[2]=512 
vm_slave_memory_mb[3]=512 
... 
# Nodes with combined roles may require more disk space. 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_first_disk_mb=2 
vm_slave_second_disk_mb=2 
vm_slave_third_disk_mb=2 
... 


I found someone who had a similar problem ( 
https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html ); he 
had a corrupted iso file and solved the problem by downloading it again. I 
downloaded the .iso file from 
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
 . I checked the size: 3.1 GB. However, I still have the problem. 

__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 







-- 
Best Regards, 
Maksim Malchuk, 
Senior DevOps Engineer , 
MOS: Product Engineering, 
Mirantis, Inc 


__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 






__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: 

Re: [openstack-dev] [Openstack-operators] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Steve Martinelli

The keystone team did the same during Liberty while we were moving towards
using oslo.* projects instead of oslo-incubator [0]. We also noticed that
they were rarely used, and we did not go through a deprecation process
since these are developer tools. We're still finding a few spots in our
docs that need updating, but overall it was an easy transition.

[0]
https://github.com/openstack/keystone/commit/55e9514cbd4e712e2c317335294355cf1596d870

stevemar



From:   Flavio Percoco 
To: openstack-dev@lists.openstack.org
Cc: openstack-operat...@lists.openstack.org
Date:   2016/03/04 06:51 AM
Subject:[Openstack-operators] [glance] Remove `run_tests.sh` and
`tools/*`



Hey Folks,

I'm looking at doing some cleanups in our repo and I would like to start by
deprecating the `run_tests` script and the contents in the `tools/` dir.

As far as I can tell, no one is using this code - we're not even using it
in the
gate - as it was broken until recently, I believe. The recommended way to
run
tests is using `tox` and I believe having this script in the code base
misleads
new contributors and other users.
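
For reference, the tox-based equivalent of what run_tests.sh used to do is 
simply (a minimal sketch; the exact env names are whatever the project's 
tox.ini defines, py27/pep8/cover being the usual ones):

tox -e py27     # unit tests
tox -e pep8     # style checks
tox -e cover    # coverage report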

So, before we do this, I wanted to get feedback from a broader audience and
give
a heads up to folks that might be using this code.

Any objections? Something I'm missing?

Flavio

--
@flaper87
Flavio Percoco
[attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Steven Dake (stdake)
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if it matches the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu VMs, on a 
RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu >
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
>From the midcycle, we decided we weren't going to continue to support 2 
>different versions of the k8s template. Instead, we were going to maintain the 
>Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
>don't think we should continue to develop features for coreos k8s if that is 
>true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was on the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please note 
that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future. Who knows. Why make Magnum take this huge risk? 
Again, the decision of supporting a single distro is a very big decision. Please 
bring it up to the team and have it discussed thoughtfully before making any 
decision. Also, Magnum doesn't have to support every distro and every version 
for every coe, but should support *more than one* popular distro for some COEs 
(especially for the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agreed with the part that "we wouldn't try to support everything in tree". That 
doesn't imply a decision to support a single distro. Again, supporting a single 
distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][sahara-tests] Sahara-tests release and launchpad project

2016-03-04 Thread michael mccune

On Fri, Mar 4, 2016 at 12:29 AM, Evgeny Sikachev > wrote:

Hi, sahara folks!

I would like to propose releasing sahara-tests. All steps from the spec are
implemented except releases and packaging. [0]

Release criteria: framework ready for testing a new release of Sahara.

Next step: build a package and publish to PyPI.


no objection from me on creating a release, i am curious how we will 
plan future releases though.




Also, I think we need to create a separate Launchpad project (like
python-saharaclient[1]) for a correct bug-tracking process. This adds the
ability to nominate bugs to releases and will not cause confusion with
Sahara bugs.



i don't have strong opinions about creating a new launchpad. on one 
hand, i can see the value of keeping these bugs separate and having a 
specific location for the project. on the other hand, i generally like 
being able to see all the sahara projects in one place.


thanks for bringing this up Evgeny

regards,
mike



[0]

https://github.com/openstack/sahara-specs/blob/master/specs/mitaka/move-scenario-tests-to-separate-repository.rst
[1] https://bugs.launchpad.net/python-saharaclient





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements] global requirements update squash for milestone

2016-03-04 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-03-04 00:00:43 -0500:
> We have a handful of requirements changes for community releases
> that we need to land this week before fully freezing the repo. We're
> starting to see merge conflicts, so I've combined them all into one
> commit to make it easier to land the changes quickly.
> 
> https://review.openstack.org/288249 Updates for Mitaka 3 releases
> 
> replaces:
> 
> https://review.openstack.org/288220 - zaqar client
> https://review.openstack.org/287751 - glance client
> https://review.openstack.org/287963 - swift client
> https://review.openstack.org/288219 - ironic client
> https://review.openstack.org/263598 - senlin client
> 
> If I missed any, please follow up with other suggestions. We should
> review and land these before approving any other changes.
> 
> Doug
> 

I've added a procedural -2 to all of the other open requirements
changes in master. There's still a bit of time for us to address
critical ones, so please request an exception in #openstack-release
if you need one.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Matthew Mosesohn
I'm not core, but I would like to say his contributions for Mitaka
were invaluable and I've greatly benefited from his efforts :)

On Fri, Mar 4, 2016 at 6:49 PM, Cody Herriges  wrote:
> Emilien Macchi wrote:
>> Hi,
>>
>> To scale up our review process, we created puppet-keystone-core and it
>> worked pretty well until now.
>>
>> I propose that we continue this model and create puppet-neutron-core.
>>
>> I also propose to add Sergey Kolekonov in this group.
>> He's done a great job helping us to bring puppet-neutron rock-solid for
>> deploying OpenStack networking.
>>
>> http://stackalytics.com/?module=puppet-neutron=marks
>> http://stackalytics.com/?module=puppet-neutron=commits
>> 14 commits and 47 reviews, present on IRC during meetings & bug triage,
>> he's always helpful. He has a very good understanding of Neutron &
>> Puppet so I'm quite sure he would be a great addition.
>>
>> As usual, please vote!
>
> +1 from me.  Excited to continue seeing neutron get better.
>
> --
> Cody
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Cody Herriges
Emilien Macchi wrote:
> Hi,
> 
> To scale up our review process, we created puppet-keystone-core and it
> worked pretty well until now.
> 
> I propose that we continue this model and create puppet-neutron-core.
> 
> I also propose to add Sergey Kolekonov in this group.
> He's done a great job helping us to bring puppet-neutron rock-solid for
> deploying OpenStack networking.
> 
> http://stackalytics.com/?module=puppet-neutron=marks
> http://stackalytics.com/?module=puppet-neutron=commits
> 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> he's always helpful. He has a very good understanding of Neutron &
> Puppet so I'm quite sure he would be a great addition.
> 
> As usual, please vote!

+1 from me.  Excited to continue seeing neutron get better.

-- 
Cody



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Denis Egorenko
+1

2016-03-04 18:40 GMT+03:00 Emilien Macchi :

> Hi,
>
> To scale up our review process, we created puppet-keystone-core and it
> worked pretty well until now.
>
> I propose that we continue this model and create puppet-neutron-core.
>
> I also propose to add Sergey Kolekonov in this group.
> He's done a great job helping us to bring puppet-neutron rock-solid for
> deploying OpenStack networking.
>
> http://stackalytics.com/?module=puppet-neutron=marks
> http://stackalytics.com/?module=puppet-neutron=commits
> 14 commits and 47 reviews, present on IRC during meetings & bug triage,
> he's always helpful. He has a very good understanding of Neutron &
> Puppet so I'm quite sure he would be a great addition.
>
> As usual, please vote!
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] proposal to create puppet-neutron-core and add Sergey Kolekonov

2016-03-04 Thread Emilien Macchi
Hi,

To scale up our review process, we created puppet-keystone-core and it
worked pretty well until now.

I propose that we continue this model and create puppet-neutron-core.

I also propose to add Sergey Kolekonov in this group.
He's done a great job helping us to bring puppet-neutron rock-solid for
deploying OpenStack networking.

http://stackalytics.com/?module=puppet-neutron=marks
http://stackalytics.com/?module=puppet-neutron=commits
14 commits and 47 reviews, present on IRC during meetings & bug triage,
he's always helpful. He has a very good understanding of Neutron &
Puppet so I'm quite sure he would be a great addition.

As usual, please vote!
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug reports and stable branches: tags or series?

2016-03-04 Thread Matt Riedemann



On 3/4/2016 6:27 AM, Markus Zoeller wrote:

What's the story behind having the tags "in-stable-liberty" and
"liberty-backport-potential" and also having the series target "liberty"?

I didn't give much TLC to the backports in the past, but I'd like to
change that, that's why I'm asking. Please let me know the history
behind it.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The 'liberty-backport-potential' tag can be set by anyone if the bug fix 
is a potential backport for the stable/liberty branch. This is generally 
good to set if the bug was reported against liberty itself, a release 
before liberty but not fixed in liberty, or reported after liberty but 
was a latent bug in liberty. Basically, if the bug is also in liberty, 
it's backport potential to also fix it there.


The 'in-stable-liberty' tag is applied by infra when a stable branch 
patch is merged. I haven't actually been seeing this happening as 
automatically as before with the in-stable-kilo tag. I'm not sure if 
that's a bug in infra or if it's by design (or maybe it doesn't happen 
if the bug is nominated (series target) for the liberty release).


The series target is really just for tracking the progress of a 
backport, like on master. It sets the status/importance/owner, which is 
useful. The thing with the series target though is anyone (at least 
anyone part of the nova bug team) can nominate a bug for a series, but 
only drivers [1] can accept it.


So, my general workflow is to both tag and nominate for backports. We 
really want people to use the tag because then the bug team can come 
along later and do the series nominations if those haven't happened yet. 
But I use the tag to query launchpad for fixed bugs that have a tag like 
liberty-backport-potential, and then I dig into those bugs to see if a 
fix has been proposed as a backport to stable/liberty yet. Sometimes 
that's already done and we can just remove the tag (or replace it with 
in-stable-liberty), or we can work on backporting the fix (if it fits 
the stable branch policy for appropriate fixes [2]).
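
For anyone who wants to do the same triage pass, the query itself is nothing 
fancy -- it is just the tag filter on the project's bug list, further narrowed 
by status in the UI (shown here for nova; swap the tag for whichever stable 
branch you are triaging):

# fixed-on-master bugs that are still flagged as liberty backport candidates
https://bugs.launchpad.net/nova/+bugs?field.tag=liberty-backport-potential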


As I mentioned in the nova team meeting yesterday it'd be helpful to the 
stable maintenance team if people doing bug triage can apply the 
*-backport-potential tags depending on what release the bug was reported 
against. You don't have to dig into the details to figure out if it's 
actually a latent bug or not, just if someone says they hit a bug on 
liberty, we can add kilo-backport-potential so we can look into it when 
that time comes (after it's fixed on trunk).


[1] https://launchpad.net/~nova-drivers/+members#active
[2] 
http://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When to revert a patch?

2016-03-04 Thread Morgan Fainberg
On Mar 4, 2016 10:16, "Monty Taylor"  wrote:
>
> On 03/04/2016 08:37 AM, Ruby Loo wrote:
>>
>> Hijacked from ' [openstack-dev] [ironic] Remember to follow RFE process'
>> thread:
>>
>> > Should we revert the patch [1] for now? (Disclaimer. I haven't looked at the
>> > patch itself. But I don't think I should have to, to know what the API
>> > change is.)
>> >
>>
>> Thanks for calling it out Ruby, that's unfortunate that the patch was
>> merged without the RFE being approved. About reverting the patch I
>> think we shouldn't do that now because the patch is touching the API
>> and introducing a new microversion to it.
>>
>>
>> Exactly. I've -2'ed the revert, as removing API version is even
>> worse than landing a change without an RFE approved. Let us make
>> sure to approve RFE asap, and then adjust the code according to it.
>>
>>
>> This brings up another issue, which I recall discussing before. Did we
>> decide that we'd never revert something that touches the
>> API/microversion? It might be good to have guidelines on this if we
>> don't already. If the API is incorrect? If the API could be improved? If
>> the API was only in master for eg 48 hours?
>
>
> I believe you need to treat master as if it's deployed to production. So
> once an API change is released, 'fixing' it needs to be done like any other
> API change - with a microversion bump and appropriate backwards compat.
>
> (For instance, I have a CI/CD pipeline merging from master every hour and
> doing a deploy - so 48 hours is a long time ago)
>
> Monty

So let me jump in here and add that a direct revert should only happen
in extreme circumstances: i.e., a change that breaks behavior without a
microversion bump, or something that is causing a break that cannot be fixed
easily rolling forward (being unable to land code in the gate at all, for
example, including roll-forward fixes).

In general (and especially with microversions), failing and fixing moving forward
is much better for end users/deployers, especially since folks are doing
CD more aggressively now.

There are other considerations but a revert really is one of the most
extreme responses and should be used sparingly.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When to revert a patch?

2016-03-04 Thread Monty Taylor

On 03/04/2016 08:37 AM, Ruby Loo wrote:

Hijacked from ' [openstack-dev] [ironic] Remember to follow RFE process'
thread:

> Should we revert the patch [1] for now? (Disclaimer. I haven't looked at the
> patch itself. But I don't think I should have to, to know what the API
> change is.)
>

Thanks for calling it out Ruby, that's unfortunate that the patch was
merged without the RFE being approved. About reverting the patch I
think we shouldn't do that now because the patch is touching the API
and introducing a new microversion to it.


Exactly. I've -2'ed the revert, as removing API version is even
worse than landing a change without an RFE approved. Let us make
sure to approve RFE asap, and then adjust the code according to it.


This brings up another issue, which I recall discussing before. Did we
decide that we'd never revert something that touches the
API/microversion? It might be good to have guidelines on this if we
don't already. If the API is incorrect? If the API could be improved? If
the API was only in master for eg 48 hours?


I believe you need to treat master as if it's deployed to production. So 
once an API change is released, 'fixing' it needs to be done like any 
other API change - with a microversion bump and appropriate backwards 
compat.
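
To make that concrete with a rough sketch (the endpoint URL and the version 
numbers below are made up for illustration; the header shown is the one ironic 
uses for microversion negotiation):

# existing callers keep pinning the old version and keep the old behaviour
curl -H "X-OpenStack-Ironic-API-Version: 1.14" http://ironic.example.com:6385/v1/nodes

# only callers that opt in to the newer version see the corrected behaviour
curl -H "X-OpenStack-Ironic-API-Version: 1.15" http://ironic.example.com:6385/v1/nodes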


(For instance, I have a CI/CD pipeline merging from master every hour 
and doing a deploy - so 48 hours is a long time ago)


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packstack] Update packstack core list

2016-03-04 Thread Emilien Macchi
Hi,

[post originally sent on RDO-list but I've been told I should use this
channel]

I've looked at packstack core-list [1] and I suggest we revisit to keep
only active contributors [2] in the core members list.

The list seems super big comparing to who is actually active on the
project; in a meritocracy world it would make sense to revisit that list.

Thanks,

[1] https://review.openstack.org/#/admin/groups/124,members
[2] http://stackalytics.com/report/contribution/packstack/90

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Status of Python 3 in OpenStack Mitaka

2016-03-04 Thread Victor Stinner

Hi,

I just wrote an article "Status of Python 3 in OpenStack Mitaka":
http://blogs.rdoproject.org/7894/status-of-python-3-in-openstack-mitaka

Summary:

* 13 services were ported to Python 3 during the Mitaka cycle: Cinder, 
Glance, Heat, Horizon, etc.

* 9 services still need to be ported
* Next Milestone: Functional and integration tests

“Ported to Python 3” means that all unit tests pass on Python 3.4, which 
is verified by a voting gate job. That alone is not enough to run applications 
in production with Python 3: integration and functional tests are not 
run on Python 3 yet.
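
For project teams wondering where they stand, the voting job is essentially 
the same as running the unit tests locally under 3.4 (a sketch; it assumes a 
py34 env is defined in the project's tox.ini):

tox -e py34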


Join us in the #openstack-python3 IRC channel on Freenode to discuss 
Python 3! ;-)


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][sahara-tests] Sahara-tests release and launchpad project

2016-03-04 Thread Vitaly Gridnev
+ added sahara tag

On Fri, Mar 4, 2016 at 12:29 AM, Evgeny Sikachev 
wrote:

> Hi, sahara folks!
>
> I would like to propose releasing sahara-tests. All steps from the spec are
> implemented except releases and packaging. [0]
>
> Release criteria: framework ready for testing a new release of Sahara.
>
> Next step: build a package and publish to PyPI.
>
> Also, I think we need to create a separate Launchpad project (like
> python-saharaclient [1]) for a correct bug tracking process. This adds the
> ability to nominate bugs to releases and avoids confusion with Sahara
> bugs.
>
>
> [0]
> https://github.com/openstack/sahara-specs/blob/master/specs/mitaka/move-scenario-tests-to-separate-repository.rst
> [1] https://bugs.launchpad.net/python-saharaclient
> -
> Best Regards,
>
> Evgeny Sikachev
> QA Engineer
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Vitaly Gridnev
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Remember to follow RFE process

2016-03-04 Thread Ruby Loo
> Hi,
>>
>> > Ironic'ers, please remember to follow the RFE process; especially the
>> cores.
>> >
>> > I noticed that a patch [1] got merged yesterday. The patch was
>> associated
>> > with an RFE [2] that hadn't been approved yet :-( What caught my eye was
>> > that the commit message didn't describe the actual API change so I took
>> a
>> > quick look at the (RFE) bug and it wasn't documented there either.
>>
>
Thanks everyone! I see that the RFE has been approved, although I don't see
what the CLI/API change is. I'm guessing it was to add --driver or
something like that to 'ironic node-list' and a similar thing for the
openstack client plugin and API :)

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] When to revert a patch?

2016-03-04 Thread Ruby Loo
Hijacked from ' [openstack-dev] [ironic] Remember to follow RFE process'
thread:

> Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
>> the
>> > patch itself. But I don't think I should have to, to know what the API
>> > change is.)
>> >
>>
>> Thanks for calling it out Ruby, that's unfortunate that the patch was
>> merged without the RFE being approved. As for reverting the patch, I
>> think we shouldn't do that now, because the patch is touching the API
>> and introducing a new microversion to it.
>>
>
> Exactly. I've -2'ed the revert, as removing an API version is even worse than
> landing a change without an approved RFE. Let us make sure to approve the RFE
> asap, and then adjust the code accordingly.
>
>

This brings up another issue, which I recall discussing before. Did we
decide that we'd never revert something that touches the API/microversion?
It might be good to have guidelines on this if we don't already. If the API
is incorrect? If the API could be improved? If the API was only in master
for e.g. 48 hours?

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Grant FFE to "Host-state level locking" BP

2016-03-04 Thread Nikola Đipanov
On 03/04/2016 02:06 PM, John Garbutt wrote:
> tl;dr
> As on IRC, I don't think this should get an FFE this cycle.
> 
> On 4 March 2016 at 10:56, Nikola Đipanov  wrote:
>> Hi,
>>
>> The actual BP that links to the approved spec is here: [1] and 2
>> outstanding patches are [2][3].
>>
>> Apart from the usual empathy-inspired reasons to allow this (code's been
>> up for a while, yet only had real review on the last day etc.) which are
>> not related to the technical merit of the work, there is also the fact
>> that two initial patches that add locking around updates of the
>> in-memory host map ([4] and [5]) have already been merged.
>>
>> They add the overhead of locking to the scheduler, but without the final
>> work they don't provide any benefits (races will not be detected,
>> without [2]).
> 
> We could land a patch to drop the synchronized decorators, but it
> seemed like it might still help with the (possibly theoretical?) issue of
> two greenlets competing to decrement the same resource counts.
> 
>> I don't have any numbers on this but the result is likely that we made
>> things worse, for the sake of adhering to random and made-up dates.
> 
> For details on the reasons behind our process, please see:
> http://docs.openstack.org/developer/nova/process.html
> 
>> With
>> this in mind I think it only makes sense to do our best to merge the 2
>> outstanding patches.
> 
> Looking at the feature freeze exception criteria:
> https://wiki.openstack.org/wiki/FeatureFreeze
> 
> The code is not ready to merge right now, so it's hard to assess the
> risk of merging it, and hard to assess how long it will take to merge.
> It seems medium-ish risk, given the existing patches.
> 
> We have had 2 FFEs, just for things that were +W'd but not yet merged when we
> cut mitaka-3. They are all merged now.
> 
> Time is much tighter this cycle than usual. We also seem to have fewer
> reviewers doing reviews than normal for this point in the cycle, and a
> much bigger backlog of bug fixes to review. We only have about 7 more
> working days between now and tagging RC1, at which point master opens
> for Newton, and these patches are free to merge again.
> 
> While this is useful, it's not a regression. It would help us detect
> races in the scheduler sooner. It does not feel release critical.
> 

Thanks for the response John,

If we take "release critical" to mean "Nova not able to start VMs if we
don't have this", then no - it's not release critical.

But it does mean that people consuming releases will not get to use this
and consequently find and report bugs for another 6 months.

On a more personal note - this is the second thing that I was involved
with this cycle that got accepted, only to get half merged over a random
deadline. The other one being [1], which was just integration work that
would make a lot of other work that went in this cycle (in both Nova and
Neutron) usable. Again - the result is, we have the code in tree, but no
one can use it and test it.

Even if I try to keep my personal feelings out of this - I still feel
that this is a massive waste we are happy to accept for practically 0 gain.

N.

[1]
https://blueprints.launchpad.net/openstack/?searchtext=sriov-pf-passthrough-neutron-port

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Contributing to TripleO is challenging

2016-03-04 Thread Emilien Macchi
That's not the title of any Summit talk; it's just an e-mail I've wanted
to write for a long time.

It is an attempt to expose facts or things I've heard a lot, and to bring
constructive thoughts about why it's challenging to contribute to the
TripleO project.


1/ "I don't review this patch, we don't have CI coverage."

One thing I've noticed in TripleO is that very few people are involved
in CI work.
In my opinion, the CI system is more critical than any feature in a product.
Developing software without tests is a bit like http://goo.gl/OlgFRc
Everyone in the project - especially cores - should be involved in CI
work. If you are a TripleO core and you don't contribute to CI, you might
ask yourself why.


2/ "I don't review this patch, CI is broken."

Another thing I've noticed in TripleO is that when CI is broken, again,
very few people actually work on fixing the failures.
My experience over the last few years has taught me to stop my daily work
when CI is broken and fix it asap.


3/ "I don't review it, because this feature / code is not my area".

My first thought is: "Aren't we supposed to be engineers and learn new areas?"
My second thought is that we have a problem with TripleO Heat
Templates.
THT (TripleO Heat Templates) code is 80% Puppet / Hiera. If TripleO
cores say "I'm not familiar with Puppet", we have a problem here,
don't we?
Maybe we should split this repository? Or revisit the list of people who
can +2 patches on THT.


4/ Patches are stalled. Most of the time.

Over the last 12 months, I've pushed a lot of patches in TripleO and one
thing I've noticed is that if I don't ping people, my patches get no
reviews. And I have to rebase them every week, because the interfaces
have changed. I get a +2, cool! Oh, merge conflict. Rebasing. Waiting for
the +2 again... and so on.

I personally spend 20% of my time, every day, reviewing code.
I wrote a blog post about how I do reviews, with Gertty:
http://my1.fr/blog/reviewing-puppet-openstack-patches/
I suggest TripleO folks spend more time on reviews, for several reasons:

* decrease frustration among contributors
* accelerate the development process
* teach new contributors how to work on TripleO, and eventually scale up the
core team. It's a time investment, but worth it.

In the Puppet team, we have weekly triage sessions and they're pretty helpful.


5/ Most of the tests are run... manually.

How many times have I heard "I've tested this patch locally, and it does
not work, so -1".

The only test we do in the current CI is a ping to an instance. Seriously?
Most OpenStack CIs (Fuel included) run Tempest to test APIs and
real scenarios. And we run a ping.
That's similar to 1/, but I wanted to raise it too.



If we don't change the way we work on TripleO, people will become more
frustrated and reduce their contributions at some point.
I hope from here we can have an open and constructive discussion to try
to improve the TripleO project.

Thank you for reading so far.
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Maksim Malchuk
Igor,

Some information about my system:
OS: ubuntu 14.04 LTS
Memory: 3,8GiB

Samer can't run many guests I think.


On Fri, Mar 4, 2016 at 5:12 PM, Igor Marnat  wrote:

> Samer, Maksim,
> I'd rather say that the script already started fuel-master (VM "fuel-master"
> has been successfully started), didn't find running guests (VBoxManage:
> error: Guest not running), but it may try to start them afterwards.
>
> Samer,
> - how many VMs are there running besides fuel-master?
> - is it still showing "Waiting for product VM to download files. Please do
> NOT abort the script..." ?
> - for how long did you wait since the message above?
>
>
> Regards,
> Igor Marnat
>
> On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk 
> wrote:
>
>> Hi Samer,
>>
>> *VBoxManage: error: Guest not running*
>>
>> looks like a problem with VirtualBox itself or with the settings for the
>> 'fuel-master' VM; it can't boot it.
>> Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start
>> it manually - it should show you what exactly happens.
>>
>>
>> On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
>> samer.mach...@telecom-sudparis.eu> wrote:
>>
>>> Hello, everyone.
>>> I'm new with Fuel. I'm trying to follow the QuickStart Guide (
>>> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
>>> but I have the following Error:
>>>
>>>
>>> *Waiting for VM "fuel-master" to power on...*
>>> *VM "fuel-master" has been successfully started.*
>>> *VBoxManage: error: Guest not running*
>>> *VBoxManage: error: Guest not running*
>>> ...
>>> *VBoxManage: error: Guest not running*
>>> *Waiting for product VM to download files. Please do NOT abort the
>>> script...*
>>>
>>>
>>>
>>> I hope you can help me.
>>>
>>> Thanks in advance
>>>
>>>
>>> Some information about my system:
>>> OS: ubuntu 14.04 LTS
>>> Memory: 3,8GiB
>>> Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4
>>> OS type: 64-bit
>>> Disk 140,2GB
>>> VirtualBox Version: 4.3.36_Ubuntu
>>> Checking for 'expect'... OK
>>> Checking for 'xxd'... OK
>>> Checking for "VBoxManage"... OK
>>> Checking for VirtualBox Extension Pack... OK
>>> Checking if SSH client installed... OK
>>> Checking if ipconfig or ifconfig installed... OK
>>>
>>>
>>> I modify the config.sh to adapt my hardware configuration
>>> ...
>>> # Master node settings
>>> if [ "$CONFIG_FOR" = "4GB" ]; then
>>> vm_master_memory_mb=1024
>>> vm_master_disk_mb=2
>>> ...
>>> # The number of nodes for installing OpenStack on
>>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>>> cluster_size=3
>>> ...
>>> # Slave node settings. This section allows you to define CPU count for
>>> each slave node.
>>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>>> vm_slave_cpu_default=1
>>> vm_slave_cpu[1]=1
>>> vm_slave_cpu[2]=1
>>> vm_slave_cpu[3]=1
>>> ...
>>> # This section allows you to define RAM size in MB for each slave node.
>>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>>> vm_slave_memory_default=1024
>>>
>>> vm_slave_memory_mb[1]=512
>>> vm_slave_memory_mb[2]=512
>>> vm_slave_memory_mb[3]=512
>>> ...
>>> # Nodes with combined roles may require more disk space.
>>> if [ "$CONFIG_FOR" = "4GB" ]; then
>>> vm_slave_first_disk_mb=2
>>> vm_slave_second_disk_mb=2
>>> vm_slave_third_disk_mb=2
>>> ...
>>>
>>> I found someone who had a similar problem (
>>> https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html);
>>> he had a corrupted iso file and solved the problem by downloading it again. I
>>> downloaded the .iso file from
>>> http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
>>> . I checked the size: 3.1 GB. However, I still have the problem.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Maksim Malchuk,
>> Senior DevOps Engineer,
>> MOS: Product Engineering,
>> Mirantis, Inc
>> 
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Igor Marnat
Samer, Maksim,
I'd rather say that the script already started fuel-master (VM "fuel-master"
has been successfully started), didn't find running guests (VBoxManage:
error: Guest not running), but it may try to start them afterwards.

Samer,
- how many VMs are there running besides fuel-master?
- is it still showing "Waiting for product VM to download files. Please do
NOT abort the script..." ?
- for how long did you wait since the message above?


Regards,
Igor Marnat

On Fri, Mar 4, 2016 at 5:04 PM, Maksim Malchuk 
wrote:

> Hi Samer,
>
> *VBoxManage: error: Guest not running*
>
> looks like a problem with VirtualBox itself or with the settings for the
> 'fuel-master' VM; it can't boot it.
> Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start it
> manually - it should show you what exactly happens.
>
>
> On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
> samer.mach...@telecom-sudparis.eu> wrote:
>
>> Hello, everyone.
>> I'm new with Fuel. I'm trying to follow the QuickStart Guide (
>> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
>> but I have the following Error:
>>
>>
>> *Waiting for VM "fuel-master" to power on...*
>> *VM "fuel-master" has been successfully started.*
>> *VBoxManage: error: Guest not running*
>> *VBoxManage: error: Guest not running*
>> ...
>> *VBoxManage: error: Guest not running*
>> *Waiting for product VM to download files. Please do NOT abort the
>> script...*
>>
>>
>>
>> I hope you can help me.
>>
>> Thanks in advance
>>
>>
>> Some information about my system:
>> OS: ubuntu 14.04 LTS
>> Memory: 3,8GiB
>> Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4
>> OS type: 64-bit
>> Disk 140,2GB
>> VirtualBox Version: 4.3.36_Ubuntu
>> Checking for 'expect'... OK
>> Checking for 'xxd'... OK
>> Checking for "VBoxManage"... OK
>> Checking for VirtualBox Extension Pack... OK
>> Checking if SSH client installed... OK
>> Checking if ipconfig or ifconfig installed... OK
>>
>>
>> I modify the config.sh to adapt my hardware configuration
>> ...
>> # Master node settings
>> if [ "$CONFIG_FOR" = "4GB" ]; then
>> vm_master_memory_mb=1024
>> vm_master_disk_mb=2
>> ...
>> # The number of nodes for installing OpenStack on
>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>> cluster_size=3
>> ...
>> # Slave node settings. This section allows you to define CPU count for
>> each slave node.
>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>> vm_slave_cpu_default=1
>> vm_slave_cpu[1]=1
>> vm_slave_cpu[2]=1
>> vm_slave_cpu[3]=1
>> ...
>> # This section allows you to define RAM size in MB for each slave node.
>> elif [ "$CONFIG_FOR" = "4GB" ]; then
>> vm_slave_memory_default=1024
>>
>> vm_slave_memory_mb[1]=512
>> vm_slave_memory_mb[2]=512
>> vm_slave_memory_mb[3]=512
>> ...
>> # Nodes with combined roles may require more disk space.
>> if [ "$CONFIG_FOR" = "4GB" ]; then
>> vm_slave_first_disk_mb=2
>> vm_slave_second_disk_mb=2
>> vm_slave_third_disk_mb=2
>> ...
>>
>> I found someone who had a similar problem (
>> https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html);
>> he had a corrupted iso file and solved the problem by downloading it again. I
>> downloaded the .iso file from
>> http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
>> . I checked the size: 3.1 GB. However, I still have the problem.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Maksim Malchuk,
> Senior DevOps Engineer,
> MOS: Product Engineering,
> Mirantis, Inc
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] "DocImpact" => bug report

2016-03-04 Thread Markus Zoeller
For the past 1-2 weeks, each merged change which has "DocImpact" in its
commit message has opened a new bug report in Nova, for example [1].

Those bug reports will be forwarded to the "manuals" project *IF*
they provide enough information for the manuals team to work with.
If they do not, they will cause extra work for the nova bugs team.
This is avoidable by stating in the commit message *what* should be
documented (which manual, which section, example phrasing, anything
that helps the manuals team).
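
For illustration only, here is a made-up example of a commit message
footer that would give the manuals team enough to work with (the option
name and target section are hypothetical):

    DocImpact: new config option [scheduler]/example_option (default
    False) needs to be added to the Configuration Reference, compute
    scheduler section, including a short note on when to enable it.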

If you have a reno file in the commit and the manuals *don't* need
an update, "DocImpact" is not necessary. IOW, saying "DocImpact"
is shorthand for "the manuals need to be updated".

@(core-)reviewers: Please also consider this when reviewing patches.

References:
[1] https://bugs.launchpad.net/nova/+bug/1551782

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Grant FFE to "Host-state level locking" BP

2016-03-04 Thread John Garbutt
tl;dr
As on IRC, I don't think this should get an FFE this cycle.

On 4 March 2016 at 10:56, Nikola Đipanov  wrote:
> Hi,
>
> The actual BP that links to the approved spec is here: [1] and 2
> outstanding patches are [2][3].
>
> Apart from the usual empathy-inspired reasons to allow this (code's been
> up for a while, yet only had real review on the last day etc.) which are
> not related to the technical merit of the work, there is also the fact
> that two initial patches that add locking around updates of the
> in-memory host map ([4] and [5]) have already been merged.
>
> They add the overhead of locking to the scheduler, but without the final
> work they don't provide any benefits (races will not be detected,
> without [2]).

We could land a patch to drop the synchronized decorators, but it
seemed like it might still help with the (possibly theoretical?) issue of
two greenlets competing to decrement the same resource counts.
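
For illustration only, a minimal, self-contained sketch of that
lost-update race (toy code, not Nova's; the class and lock names are
made up) and of how a synchronized decorator avoids it:

    # Toy example: two greenthreads decrement the same in-memory count,
    # with a yield point between the read and the write.
    import eventlet
    eventlet.monkey_patch()

    from oslo_concurrency import lockutils


    class FakeHostState(object):
        def __init__(self):
            self.free_ram_mb = 1024

        def consume(self, amount):
            current = self.free_ram_mb
            eventlet.sleep(0)        # greenthread switch (real code yields on I/O)
            self.free_ram_mb = current - amount

        @lockutils.synchronized('fake-host-state')
        def consume_locked(self, amount):
            current = self.free_ram_mb
            eventlet.sleep(0)
            self.free_ram_mb = current - amount


    for method in ('consume', 'consume_locked'):
        state = FakeHostState()
        threads = [eventlet.spawn(getattr(state, method), 512) for _ in range(2)]
        for t in threads:
            t.wait()
        # expect 512 for 'consume' (one update lost) and 0 for 'consume_locked'
        print("%s: %d" % (method, state.free_ram_mb))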

> I don't have any numbers on this but the result is likely that we made
> things worse, for the sake of adhering to random and made-up dates.

For details on the reasons behind our process, please see:
http://docs.openstack.org/developer/nova/process.html

> With
> this in mind I think it only makes sense to do our best to merge the 2
> outstanding patches.

Looking at the feature freeze exception criteria:
https://wiki.openstack.org/wiki/FeatureFreeze

The code is not ready to merge right now, so it's hard to assess the
risk of merging it, and hard to assess how long it will take to merge.
It seems medium-ish risk, given the existing patches.

We have had 2 FFEs, just for things that were +W'd but not yet merged when we
cut mitaka-3. They are all merged now.

Time is much tighter this cycle than usual. We also seem to have fewer
reviewers doing reviews than normal for this point in the cycle, and a
much bigger backlog of bug fixes to review. We only have about 7 more
working days between now and tagging RC1, at which point master opens
for Newton, and these patches are free to merge again.

While this is useful, it's not a regression. It would help us detect
races in the scheduler sooner. It does not feel release critical.

As such, I don't think it should get an exception. Let's keep
our focus on the lower-risk, high-value bug fixes sitting in our review
backlog.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread Valeriy Ponomaryov
>
> Thanks - so if I understand you correctly, each share instance is
> uniquely associated with a single instance of the driver at one time,
> right?  So while I might have two concurrent calls to ensure_share,
> they are guaranteed to be for different shares?
>
Yes.

> Is this true for the whole driver interface?

Yes.


> Two instances of the
> driver will never both be asked to do operations on the same share at
> the same time?


Yes.

Each instance of a driver will have its own unique list of shares to be
'ensure'd.
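
For illustration only, a minimal sketch of the defensive pattern a driver
could still use on top of that guarantee, serializing per-share metadata
updates inside its own process (toy code, not Manila's; the class and
helper names are made up, only the ensure_share signature follows the
driver interface):

    from oslo_concurrency import lockutils


    class ExampleDriver(object):    # hypothetical driver

        def ensure_share(self, context, share, share_server=None):
            # Across processes, the hostname@driver_config_group_name
            # mapping described above already guarantees a single owner;
            # this lock only serializes work on this share within this
            # process.
            with lockutils.lock('ensure-share-%s' % share['id']):
                self._update_backend_metadata(share)

        def _update_backend_metadata(self, share):
            pass    # read-modify-write of the driver's own metadata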

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Maksim Malchuk
Hi Samer,

*VBoxManage: error: Guest not running*

looks like a problem with VirtualBox itself or with the settings for the
'fuel-master' VM; it can't boot it.
Open the VirtualBox Manager (GUI), select the 'fuel-master' VM and start it
manually - it should show you what exactly happens.


On Fri, Mar 4, 2016 at 4:41 PM, Samer Machara <
samer.mach...@telecom-sudparis.eu> wrote:

> Hello, everyone.
> I'm new with Fuel. I'm trying to follow the QuickStart Guide (
> https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html),
> but I have the following Error:
>
>
> *Waiting for VM "fuel-master" to power on...*
> *VM "fuel-master" has been successfully started.*
> *VBoxManage: error: Guest not running*
> *VBoxManage: error: Guest not running*
> ...
> *VBoxManage: error: Guest not running*
> *Waiting for product VM to download files. Please do NOT abort the
> script...*
>
>
>
> I hope you can help me.
>
> Thanks in advance
>
>
> Some information about my system:
> OS: ubuntu 14.04 LTS
> Memory: 3,8GiB
> Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4
> OS type: 64-bit
> Disk 140,2GB
> VirtualBox Version: 4.3.36_Ubuntu
> Checking for 'expect'... OK
> Checking for 'xxd'... OK
> Checking for "VBoxManage"... OK
> Checking for VirtualBox Extension Pack... OK
> Checking if SSH client installed... OK
> Checking if ipconfig or ifconfig installed... OK
>
>
> I modify the config.sh to adapt my hardware configuration
> ...
> # Master node settings
> if [ "$CONFIG_FOR" = "4GB" ]; then
> vm_master_memory_mb=1024
> vm_master_disk_mb=2
> ...
> # The number of nodes for installing OpenStack on
> elif [ "$CONFIG_FOR" = "4GB" ]; then
> cluster_size=3
> ...
> # Slave node settings. This section allows you to define CPU count for
> each slave node.
> elif [ "$CONFIG_FOR" = "4GB" ]; then
> vm_slave_cpu_default=1
> vm_slave_cpu[1]=1
> vm_slave_cpu[2]=1
> vm_slave_cpu[3]=1
> ...
> # This section allows you to define RAM size in MB for each slave node.
> elif [ "$CONFIG_FOR" = "4GB" ]; then
> vm_slave_memory_default=1024
>
> vm_slave_memory_mb[1]=512
> vm_slave_memory_mb[2]=512
> vm_slave_memory_mb[3]=512
> ...
> # Nodes with combined roles may require more disk space.
> if [ "$CONFIG_FOR" = "4GB" ]; then
> vm_slave_first_disk_mb=2
> vm_slave_second_disk_mb=2
> vm_slave_third_disk_mb=2
> ...
>
> I found someone who had a similar problem (
> https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html);
> he had a corrupted iso file and solved the problem by downloading it again. I
> downloaded the .iso file from
> http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
> . I checked the size: 3.1 GB. However, I still have the problem.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Grant FFE to "Host-state level locking" BP

2016-03-04 Thread Cheng, Yingxin
I'll make sure to deliver the patches if FFE is granted.

Regards,
-Yingxin

On Friday, March 4, 2016 6:56 PM Nikola Đipanov wrote:
> 
> Hi,
> 
> The actual BP that links to the approved spec is here: [1] and 2 outstanding
> patches are [2][3].
> 
> Apart from the usual empathy-inspired reasons to allow this (code's been up
> for a while, yet only had real review on the last day etc.) which are not
> related to the technical merit of the work, there is also the fact that two
> initial patches that add locking around updates of the in-memory host map
> ([4] and [5]) have already been merged.
> 
> They add the overhead of locking to the scheduler, but without the final work
> they don't provide any benefits (races will not be detected, without [2]).
> 
> I don't have any numbers on this but the result is likely that we made things
> worse, for the sake of adhering to random and made-up dates. With this in mind
> I think it only makes sense to do our best to merge the 2 outstanding patches.
> 
> Cheers,
> N.
> 
> [1] https://blueprints.launchpad.net/openstack/?searchtext=host-state-level-locking
> [2] https://review.openstack.org/#/c/262938/
> [3] https://review.openstack.org/#/c/262939/
> 
> [4] https://review.openstack.org/#/c/259891/
> [5] https://review.openstack.org/#/c/259892/
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread John Spray
On Fri, Mar 4, 2016 at 1:34 PM, Valeriy Ponomaryov
 wrote:
> John,
>
> each instance of manila-share service will perform "ensure_share" operation
> for each "share instance" that is located at
> "hostname@driver_config_group_name".
> So, only one driver is expected to run "ensure_share" for some "share
> instance", because each instance of a driver will have its own unique value
> of "hostname@driver_config_group_name".

Thanks - so if I understand you correctly, each share instance is
uniquely associated with a single instance of the driver at one time,
right?  So while I might have two concurrent calls to ensure_share,
they are guaranteed to be for different shares?

Is this true for the whole driver interface?  Two instances of the
driver will never both be asked to do operations on the same share at
the same time?

John



> Valeriy
>
> On Fri, Mar 4, 2016 at 3:15 PM, John Spray  wrote:
>>
>> On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo 
>> wrote:
>> > What are you facing?
>>
>> In this particular instance, I'm dealing with a case where we may add
>> some metadata in ceph that will get updated by the driver, and I need
>> to know how I'm going to be called.  I need to know whether e.g. I can
>> expect that ensure_share will only be called once at a time per share,
>> or whether it might be called multiple times in parallel, resulting in
>> a need for me to do more synchronisation at a lower level.
>>
>> This is more complicated than locking, because where we update more
>> than one thing at a time we also have to deal with recovery (e.g.
>> manila crashed halfway through updating something in ceph and now I'm
>> recovering it), especially whether the places we do recovery will be
>> called concurrently or not.
>>
>> My very favourite answer here would be a pointer to some
>> documentation, but I'm guessing much of this stuff is still at a "word of
>> mouth" stage.
>>
>> John
>>
>> > On Fri, Mar 4, 2016 at 9:06 PM, John Spray  wrote:
>> >> Hi,
>> >>
>> >> What expectations should driver authors have about multiple instances
>> >> of the driver being instantiated within different instances of
>> >> manila-share?
>> >>
>> >> For example, should I assume that when one instance of a driver is
>> >> having ensure_share called during startup, another instance of the
>> >> driver might be going through the same process on the same share at
>> >> the same time?  Are there any rules at all?
>> >>
>> >> Thanks,
>> >> John
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > --
>> > Email:
>> > shin...@linux.com
>> > GitHub:
>> > shinobu-x
>> > Blog:
>> > Life with Distributed Computational System based on OpenSource
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Openstack] Instalation Problem:VBoxManage: error: Guest not running [ubuntu14.04]

2016-03-04 Thread Samer Machara


Hello, everyone.
I'm new to Fuel. I'm trying to follow the QuickStart Guide (
https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/quickstart-guide.html ),
but I get the following error:


Waiting for VM "fuel-master" to power on... 
VM "fuel-master" has been successfully started. 
VBoxManage: error: Guest not running 
VBoxManage: error: Guest not running 
... 
VBoxManage: error: Guest not running 
Waiting for product VM to download files. Please do NOT abort the script... 


I hope you can help me. 

Thanks in advance 




Some information about my system: 
OS: ubuntu 14.04 LTS 
Memory: 3,8GiB 
Processor: Intel® Core™2 Quad CPU Q9550 @ 2.83GHz × 4 
OS type: 64-bit 
Disk 140,2GB 
VirtualBox Version: 4.3.36_Ubuntu 
Checking for 'expect'... OK 
Checking for 'xxd'... OK 
Checking for "VBoxManage"... OK 
Checking for VirtualBox Extension Pack... OK 
Checking if SSH client installed... OK 
Checking if ipconfig or ifconfig installed... OK 





I modified the config.sh to adapt it to my hardware configuration
... 
# Master node settings 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_master_memory_mb=1024 
vm_master_disk_mb=2 
... 
# The number of nodes for installing OpenStack on 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
cluster_size=3 
... 
# Slave node settings. This section allows you to define CPU count for each 
slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_cpu_default=1 
vm_slave_cpu[1]=1 
vm_slave_cpu[2]=1 
vm_slave_cpu[3]=1 
... 
# This section allows you to define RAM size in MB for each slave node. 
elif [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_memory_default=1024 


vm_slave_memory_mb[1]=512 
vm_slave_memory_mb[2]=512 
vm_slave_memory_mb[3]=512 
... 
# Nodes with combined roles may require more disk space. 
if [ "$CONFIG_FOR" = "4GB" ]; then 
vm_slave_first_disk_mb=2 
vm_slave_second_disk_mb=2 
vm_slave_third_disk_mb=2 
... 


I found someone who had a similar problem (
https://www.mail-archive.com/fuel-dev@lists.launchpad.net/msg01084.html ); he
had a corrupted iso file and solved the problem by downloading it again. I
downloaded the .iso file from
http://seed.fuel-infra.org/fuelweb-community-release/fuel-community-8.0.iso.torrent
 . I checked the size: 3.1 GB. However, I still have the problem.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] PTL for Newton and beyond

2016-03-04 Thread Mehdi Abaakouk

Hi,

Thanks for all the great work you have done, I have appreciated your
leadership on Oslo,

and a special thanks for bringing new people into oslo.messaging ;)

On 2016-03-03 11:32, Davanum Srinivas wrote:

Team,

It has been great working with you all as PTL for Oslo. Looks like the
nominations open up next week for elections and I am hoping more than
one of you will step up for the next cycle(s). I can show you the
ropes and help smooth the transition process if you let me know
about your interest in being the next PTL. With the move to more
automated testing in our CI (periodic jobs running against oslo.*
master) and the adoption of the release process (logging reviews in
the /releases repo), the load on you should be considerably less. I am
especially proud of all the new people joining as both oslo cores and
project cores and hitting the ground running. Big shout out to Doug
Hellmann for his help and guidance when I transitioned into the PTL
role.

The main challenges will be to win back the confidence of all the projects
that use the oslo libraries, to NOT be the first thing they look at when
things break (better backward compat, better test matrix), and to
evangelize that Oslo is still the common playground for *all*
projects and not just the headache of some nut jobs who are willing to
take up the impossible task of defining and nurturing these libraries.
There's a lot of great work ahead of us and I am looking forward to
continuing to work with you all.

Thanks,
Dims


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread Valeriy Ponomaryov
John,

each instance of the manila-share service will perform the "ensure_share"
operation for each "share instance" that is located at its
"hostname@driver_config_group_name".
So, only one driver is expected to run "ensure_share" for a given "share
instance", because each instance of a driver has its own unique value
of "hostname@driver_config_group_name".

Valeriy

On Fri, Mar 4, 2016 at 3:15 PM, John Spray  wrote:

> On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo 
> wrote:
> > What are you facing?
>
> In this particular instance, I'm dealing with a case where we may add
> some metadata in ceph that will get updated by the driver, and I need
> to know how I'm going to be called.  I need to know whether e.g. I can
> expect that ensure_share will only be called once at a time per share,
> or whether it might be called multiple times in parallel, resulting in
> a need for me to do more synchronisation at a lower level.
>
> This is more complicated than locking, because where we update more
> than one thing at a time we also have to deal with recovery (e.g.
> manila crashed halfway through updating something in ceph and now I'm
> recovering it), especially whether the places we do recovery will be
> called concurrently or not.
>
> My very favourite answer here would be a pointer to some
> documentation, but I'm guessing much of this stuff is still at a "word of
> mouth" stage.
>
> John
>
> > On Fri, Mar 4, 2016 at 9:06 PM, John Spray  wrote:
> >> Hi,
> >>
> >> What expectations should driver authors have about multiple instances
> >> of the driver being instantiated within different instances of
> >> manila-share?
> >>
> >> For example, should I assume that when one instance of a driver is
> >> having ensure_share called during startup, another instance of the
> >> driver might be going through the same process on the same share at
> >> the same time?  Are there any rules at all?
> >>
> >> Thanks,
> >> John
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > --
> > Email:
> > shin...@linux.com
> > GitHub:
> > shinobu-x
> > Blog:
> > Life with Distributed Computational System based on OpenSource
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][logging] log rotation

2016-03-04 Thread Eric LEMOINE
Hi Kolla devs

So with the Heka work, services write their logs to files (in the
"kolla_logs" volume). This means that we need log rotation, which was
mentioned in the "logging-with-heka" spec [*].

I've just created a change request [**] that adds a "cron" Dockerfile
and Ansible tasks/plays to deploy a "cron" container on every node.
I've opened this mainly as a base for discussion.

So the "cron" container runs the crond daemon, which runs logrotate
daily for rotating the log files of kolla services. In the future the
"cron" container could be used for other cron-type tasks.

Thoughts? Feedback?

Thanks.

[*] 

[**] 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Concurrent execution of drivers

2016-03-04 Thread John Spray
On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo  wrote:
> What are you facing?

In this particular instance, I'm dealing with a case where we may add
some metadata in ceph that will get updated by the driver, and I need
to know how I'm going to be called.  I need to know whether e.g. I can
expect that ensure_share will only be called once at a time per share,
or whether it might be called multiple times in parallel, resulting in
a need for me to do more synchronisation at a lower level.

This is more complicated than locking, because where we update more
than one thing at a time we also have to deal with recovery (e.g.
manila crashed halfway through updating something in ceph and now I'm
recovering it), especially whether the places we do recovery will be
called concurrently or not.

My very favourite answer here would be a pointer to some
documentation, but I'm guessing much of this stuff is still at a "word of
mouth" stage.

John

> On Fri, Mar 4, 2016 at 9:06 PM, John Spray  wrote:
>> Hi,
>>
>> What expectations should driver authors have about multiple instances
>> of the driver being instantiated within different instances of
>> manila-share?
>>
>> For example, should I assume that when one instance of a driver is
>> having ensure_share called during startup, another instance of the
>> driver might be going through the same process on the same share at
>> the same time?  Are there any rules at all?
>>
>> Thanks,
>> John
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Email:
> shin...@linux.com
> GitHub:
> shinobu-x
> Blog:
> Life with Distributed Computational System based on OpenSource
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][i18n] Liaisons for I18n

2016-03-04 Thread Ying Chun Guo
hmm...

It's true that a core CPL can help to get translation patches merged in
time more conveniently than a non-core CPL.

But if there is a non-core i18n CPL who can
- understand the project release schedule very well
- understand the project's i18n status and technologies
- manage to get translation patches merged in time
then I agree that being core is not a required condition.

Best regards
Ying Chun Guo (Daisy)


Sylvain Bauza  wrote on 03/04/2016 07:36:50 PM:

> From: Sylvain Bauza 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 03/04/2016 07:40 PM
> Subject: Re: [openstack-dev] [all][i18n] Liaisons for I18n
> 
> 

> Le 04/03/2016 12:24, Ihar Hrachyshka a écrit :
> Tony Breeds  wrote: 

> On Mon, Feb 29, 2016 at 05:26:44PM +0800, Ying Chun Guo wrote: 

> If you are interested to be a liaison and help translators, 
> input your information here: 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n . 
> 
> So https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n 
> 
> says the CPL needs to be a core.  That reduces the potential pool of
> people to those that are already busy.  Is there a good reason for that?
> 
> I'd suspect all that's required is a good working relationship with 
> the project 
> cores. 
> 
> Yes. I believe it’s fair to say that most liaison positions do not 
> require coreship (which is strictly about review participation and 
> not the only indicator of a person’s involvement in the project). 

> 
> I don't see much interest in asking people to be cores, and that's
> even counter-productive.
> We should rather welcome any contributor willing to help the
> projects so that they can flourish nicely.
> 
> From a practical PoV, only the Zanata translation patches really
> need approval rights, but that's something a non-core i18n CPL
> could manage just by raising the priority with the core team whenever
> needed.
> 
> -Sylvain

> Ihar
> 

> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2016-03-04 Thread Markus Zoeller
The weekly bug report is dead. Long live the 10-minute bug report!

At least that's the interval I gave the script to update the data of
the Grafana dashboard of this PoC:
http://45.55.105.55:3000/dashboard/db/openstack-bugs

I'd like to include that PoC in the official OpenStack grafana dashboard,
but that effort [1] has little focus right now. I don't know the status
of the migration from launchpad to another bug tracker, which is a
precondition for giving that PoC a direction. Maybe the next summit will
give some insights.

References:
[1] https://review.openstack.org/#/c/250903/

Regards, Markus Zoeller (markus_z)


Markus Zoeller/Germany/IBM@IBMDE wrote on 11/06/2015 05:54:59 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List" 

> Date: 11/06/2015 05:56 PM
> Subject: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> Hey folks,
> 
> below is the first report of bug stats I intend to post weekly.
> We discussed it shortly during the Mitaka summit that this report
> could be useful to keep the attention of the open bugs at a certain
> level. Let me know if you think it's missing something.
> 
> Stats
> =
> 
> New bugs which are *not* assigned to any subteam
> 
> count: 19
> query: http://bit.ly/1WF68Iu
> 
> 
> New bugs which are *not* triaged
> 
> subteam: libvirt 
> count: 14 
> query: http://bit.ly/1Hx3RrL
> subteam: volumes 
> count: 11
> query: http://bit.ly/1NU2DM0
> subteam: network : 
> count: 4
> query: http://bit.ly/1LVAQdq
> subteam: db : 
> count: 4
> query: http://bit.ly/1LVATWG
> subteam: 
> count: 67
> query: http://bit.ly/1RBVZLn
> 
> 
> High prio bugs which are *not* in progress
> --
> count: 39
> query: http://bit.ly/1MCKoHA
> 
> 
> Critical bugs which are *not* in progress
> -
> count: 0
> query: http://bit.ly/1kfntfk
> 
> 
> Readings
> 
> * https://wiki.openstack.org/wiki/BugTriage
> * https://wiki.openstack.org/wiki/Nova/BugTriage
> * 
> 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html

> 
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Remove `run_tests.sh` and `tools/*`

2016-03-04 Thread Flavio Percoco

On 04/03/16 12:04 +, Bunting, Niall wrote:

Hey Folks,

I'm looking at doing some cleanups in our repo and I would like to start by
deprecating the `run_tests` script and the contents in the `tools/` dir.

As far as I can tell, no one is using this code - we're not even using it in the
gate - as it was broken until recently, I believe. The recommended way to run
tests is using `tox` and I believe having this script in the code base misleads
new contributors and other users.

So, before we do this, I wanted to get feedback from a broader audience and give
a heads-up to folks that might be using this code.

Any objections? Something I'm missing?

Flavio


This is not strictly related; however, it might be worth having some
documentation or some links to info, so that new contributors have some
information about how to run the various tox tests and utilities such as the
config regen.



Absolutely!

As part of this change we'll be documenting the testing tools and 
recommendations for Glance.
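
For example, such documentation would point new contributors at the
usual tox entry points; the exact environment names depend on Glance's
tox.ini, so treat the following as an illustrative assumption:

    tox -e py27       # unit tests under Python 2.7
    tox -e pep8       # style checks
    tox -e genconfig  # regenerate the sample configuration files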

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bug reports and stable branches: tags or series?

2016-03-04 Thread Markus Zoeller
What's the story behind having the tags "in-stable-liberty" and 
"liberty-backport-potential" and also having the series target "liberty"?

I didn't gave much TCL to the backports in the past but I'd like to
change that, that's why I'm asking. Please let me know the history
behind it.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

