[openstack-dev] [neutron] [ovs] How to update flows in br-tun proactively

2016-02-17 Thread 康敬亭
Hi guys:
   
The bug has been reported at https://bugs.launchpad.net/neutron/+bug/1541738

The flow in br-tun shown below is generated by a learn action, and is not updated 
immediately after VM live migration.


Original flow:
cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24
  
Updated flow:
cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:26


Does anyone have an idea of how to update this flow proactively? Thanks!
  
jingting
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread Gyorgy Szombathelyi
Hi!

Excuse me if the following question/problem is a basic one, an already known 
problem, or even a bad setup on my side.

I just noticed that the most CPU-consuming process in an idle 
OpenStack cluster is ceilometer-collector. When there are only 
10-15 samples/minute, it constantly eats about 15-20% CPU. 

I started to debug, and noticed that it epoll()s constantly with a zero 
timeout, so it seems it just polls for events in a tight loop. 
I found that _maybe_ the Python side of the problem is 
oslo_messaging.get_notification_listener() with the eventlet executor.
A quick search showed that this function is only used in aodh_listener and
ceilometer_collector, and both use relatively high CPU even if they're
just 'listening'. 
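
For reference, the pattern I'm talking about is roughly the following (just a 
sketch; the transport URL and topic are placeholders for my setup, the real 
values come from the service config):

from oslo_config import cfg
import oslo_messaging

# Placeholder transport URL; ceilometer reads the real one from its config.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')


class DummyEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        pass  # receive and drop, just to observe the idle behaviour


listener = oslo_messaging.get_notification_listener(
    transport,
    [oslo_messaging.Target(topic='notifications')],
    [DummyEndpoint()],
    executor='eventlet')
listener.start()
listener.wait()  # even while idling here, the process keeps calling epoll()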

My skills for further debugging are limited, but I'm just curious why this 
listener uses so much CPU, while other executors which use eventlet are not that
bad. Excuse me if it was a basic question, an already known problem, or even a bad
setup on my side.

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting minutes

2016-02-17 Thread Afek, Ifat (Nokia - IL)
Hi,

You can find the minutes of the Vitrage meeting at: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-17-09.00.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-17-09.00.log.html

See you next week,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible]Review of Bug "1535536"

2016-02-17 Thread Sirisha Guduru
Hi All,

I recently committed code as a fix for the bug 
https://bugs.launchpad.net/openstack-ansible/+bug/1535536.
Jenkins gave a '-1' during the review. Going through the logs, I found that the 
errors are not in the code I committed but come from other containers and the 
original code in openstack-ansible.
Because of that, there has been no actual review of the code I committed.

Kindly let me know how to get this fixed. Or if anyone can review the code, that 
would be great.

Regards,
Sirisha G.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cinder] Projects acting as a domain at the top of the project hierarchy

2016-02-17 Thread Henry Nash
Michal & Raildo,

So the keystone patch (https://review.openstack.org/#/c/270057/) is now merged.  
Do you perhaps have a cinder patch that I could review so we can make sure that 
this is likely to work with the new projects acting as domains? Currently it is 
the cinder tempest tests that are failing.

Thanks

Henry


> On 2 Feb 2016, at 13:30, Raildo Mascena  wrote:
> 
> See responses inline.
> 
> On Mon, Feb 1, 2016 at 6:25 PM Michał Dulko wrote:
> On 01/30/2016 07:02 PM, Henry Nash wrote:
> > Hi
> >
> > One of the things the keystone team was planning to merge ahead of 
> > milestone-3 of Mitaka, was “projects acting as a domain”. Up until now, 
> > domains in keystone have been stored totally separately from projects, even 
> > though all projects must be owned by a domain (even tenants created via the 
> > keystone v2 APIs will be owned by a domain, in this case the ‘default’ 
> > domain). All projects in a project hierarchy are always owned by the same 
> > domain. Keystone supports a number of duplicate concepts (e.g. domain 
> > assignments, domain tokens) similar to their project equivalents.
> >
> > 
> >
> > I’ve got a couple of questions about the impact of the above:
> >
> > 1) I already know that if we do exactly as described above, the cinder gets 
> > confused with how it does quotas today - since suddenly there is a new 
> > parent to what it thought was a top level project (and the permission rules 
> > it encodes requires the caller to be cloud admin, or admin of the root 
> > project of a hierarchy).
> 
> These problems are there because our nested quotas code is really buggy
> right now. Once Keystone merges a fix allowing non-admin users to fetch
> their own project hierarchy, we should be able to fix it.
> 
> ++ The patch to fix this problem is close to being merged; there are just minor 
> comments to address: https://review.openstack.org/#/c/270057/
> So I believe that we can fix this bug in cinder in the next few days.
> 
> > 2) I’m not sure of the state of nova quotas - and whether it would suffer a 
> > similar problem?
> 
> As far as I know Nova hasn't merged the nested quotas code and will not
> do that in Mitaka due to feature freeze. 
> The nested quotas code in Nova is very similar to the Cinder code, and we are 
> already fixing the bugs that we found in Cinder. Agreed that it will not be 
> merged in Mitaka due to feature freeze. 
> 
> > 3) Will Horizon get confused by this at all?
> >
> > Depending on the answers to the above, we can go in a couple of directions. 
> > The cinder issues looks easy to fix (having had a quick look at the code) - 
> > and if that was the only issue, then that may be fine. If we think there 
> > may be problems in multiple services, we could, for Mitaka, still create 
> > the projects acting as domains, but not set the parent_id of the current 
> > top level projects to point at the new project acting as a domain - that 
> > way those projects acting as domains remain isolated from the hierarchy for 
> > now (and essentially invisible to any calling service). Then as part of 
> > Newton we can provide patches to those services that need changing, and 
> > then wire up the projects acting as a domain to their children.
> >
> > Interested in feedback to the questions above.
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ovs] How to update flows in br-tun proactively

2016-02-17 Thread Xiao Ma (xima2)
Hi, JingTing

The flow should be updated after the RARP broadcast packet is sent by QEMU.
So I think you should check whether the broadcast packet has been sent and 
received by the host.
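
If you need to refresh the entry without waiting for the RARP, one rough 
workaround (just a sketch; it assumes ovs-ofctl is available on the host and 
that table 20 holds the learned unicast entries, as in your dump) is to delete 
the stale flow so it is re-learned with the new output port:

import subprocess

# MAC of the migrated instance, taken from the flow dump.
mac = '5a:c6:4f:34:61:06'

# Delete the learned entry in table 20 of br-tun on the host that still has
# the stale flow; traffic then falls back to the flood rule and the entry is
# re-learned once a packet arrives from the VM's new location.
subprocess.check_call(
    ['ovs-ofctl', 'del-flows', 'br-tun', 'table=20,dl_dst=%s' % mac])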


Best regards,

Xiao Ma (xi...@cisco.com)
马啸
SDN Architect & OpenStack specialist
Hybrid Cloud
Cisco System (China)
Mobile: (+86) 18911219332




On 17 Feb 2016, at 15:57, 康敬亭 <jingt...@unitedstack.com> wrote:

Hi guys:

The bug has been reported at https://bugs.launchpad.net/neutron/+bug/1541738

The flow in br-tun shown below is generated by a learn action, and is not updated 
immediately after VM live migration.

Original flow:
cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24

Updated flow:
cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:26

Does anyone have an idea of how to update this flow proactively? Thanks!

jingting

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Mike Perez

On 02/16/2016 11:30 AM, Doug Hellmann wrote:

So I think the project team is doing everything we've asked.  We
changed our policies around new projects to emphasize the social
aspects of projects, and community interactions. Telling a bunch
of folks that they "are not OpenStack" even though they follow those
policies is rather distressing.  I think we should be looking for
ways to say "yes" to new projects, rather than "no."


My disagreement with accepting Poppy has been around testing, so let me
reiterate what I've already said in this thread.

The governance currently states that under Open Development "The project 
has core reviewers and adopts a test-driven gate in the OpenStack 
infrastructure for changes" [1].


If we don't have a solution like OpenCDN, Poppy has to adopt a reference
implementation that is a commercial entity, and infra has to also be 
dependent on it. I get Infra is already dependent on public cloud 
donations, but if we start opening the door to allow projects to bring 
in those commercial dependencies, that's not good.


[1] - 
http://governance.openstack.org/reference/new-projects-requirements.html


--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

Hi everyone!

Yesterday at the Ironic midcycle we agreed that we would like to remove 
support for the old bash ramdisk from our code and gate. This, however, 
poses a problem, since we still support Kilo and Liberty. Meaning:


1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have 
stable branches.
3. Then we can't remove support from Ironic master as well, as it would 
break DIB job :(


I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code. 
This means that the old ramdisk will essentially be supported in Mitaka, 
but we'll remove gating on stable/liberty and stable/mitaka very soon. 
Pros: it will happen soon. Cons: in theory we do support the old ramdisk 
on Liberty, so removing gates will end this support prematurely.


2. Wait for Liberty end-of-life. This means that the old ramdisk will 
essentially be supported in Mitaka and Newton. We should somehow 
communicate that it's not official and can be dropped at any moment 
during stable branches life time. Pros: we don't drop support of the 
bash ramdisk on any branch where we promised to support it. Cons: people 
might assume we still support the old ramdisk on Mitaka/Newton; it will 
also take a lot of time.


3. Do it now, and recommend Kilo users switch to IPA too. Pros: it 
happens now, no confusion around old ramdisk support in Mitaka and 
later. Cons: probably most Kilo users (us included) are using the bash 
ramdisk, meaning we can potentially break them when landing changes on 
stable/kilo.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then 
remove gates from Ironic master and DIB, leaving them on Kilo and 
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB 
bug fixes won't affect kilo and liberty any more.


5. The same as #4, but only on Kilo.

As gate on stable/kilo is not working right now, and end-of-life is 
quickly approaching, I see number 3 as a pretty viable option anyway. We 
probably won't land any more changes on Kilo, so no use in keeping gates 
on it. Liberty is still a concern though, as the old ramdisk was only 
deprecated in Liberty.


What do you all think? Did I miss any options?

Cheers,
Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Chris Dent

On Tue, 16 Feb 2016, Doug Hellmann wrote:


If we want to do that, we should change the rules because we put
the current set of rules in place specifically to encourage more
project teams to join officially. We can do that, but that discussion
deserves its own thread.


(Yeah, that's why I changed the subject header: Indicate change of
subject, but maintain references.)

I'm not sure what the right thing to do is, but I do think there's a
good opportunity to review what various initiatives (big tent, death
to stackforge, tags, governance changes, cross-project work) are trying
to accomplish, whether they are succeeding, what the unintended
consequences have been.


For the example of Poppy, there is nothing that requires it be a part
of OpenStack for it to be useful to OpenStack nor for it to exist as
a valuable part of the open source world.


Nor is there for lots of our existing official projects. Which ones
should we remove?


The heartless rationalist in me says "most of them". The nicer guy
says "this set is grandfathered, henceforth we're more strict".

A reason _I_[1] think we need to limit things is because from the
outside OpenStack doesn't really look like anything that you can put
a short description on. It's more murky than that and it is hard to
experience positive progress in a fog. Many people react to this fog
by focusing on their specific project rather than OpenStack at
large: At least there they can see their impact.

This results in increasing the fog because cross-project concerns (which
help unify the vision and actuality that is OpenStack) get less
attention and the cycle deepens.

[1] Other people, some reasonable, some not, will have different
opinions. Yay!
--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Stefano Maffulli
On 02/05/2016 07:17 PM, Doug Hellmann wrote:
> So, is Poppy "open core"?

I think it's a simple answer: no, Poppy is not open core.

Poppy is not open core... Is Linux open core because you have to buy a
processor and RAM to run it?

Or is Firefox open core because I have to buy service from a bank before
I can use an online banking system?

A better question to ask is whether it fits in OpenStack given that
Poppy's open source code can only be tested by the community if the
community buys some/all those CDNs.

/stef

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Recovering from Instance from a failed 'resize' Operation

2016-02-17 Thread Sudhakar Gariganti
Hi,

We have an OpenStack installation based on Kilo *(with KVM compute)*. One
of the users tried to resize his instance. The operation failed midway
because of an Auth exception from Neutron, probably during a port binding
update. The instance was actually being rescheduled to a new host
(say B) from its parent host (say A).

We see that nova proactively updates the host name to 'B' without
confirmation that the operation succeeded. And since Neutron did not finish
the port update, the port is still seen as bound to host A. After some
code walkthrough/reading, we were able to bring the instance back online
by playing around with the /var/lib/instances/_resize folder on
host A, and then using the virsh utility.

How can I update the OS-EXT-SRV-ATTR:host, OS-EXT-SRV-ATTR:hypervisor_hostname,
and OS-EXT-STS:power_state attributes for the instance, so that I
can manage the instance again via horizon/CLI? Currently the API calls are
directed to host B, because of the DB update.

Is it feasible to update at all, or am I being too greedy here?
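
For clarity, what I think is effectively needed is to point the instance record
back at host A in the nova database. A rough sketch of that is below (the
connection string, hostnames and UUID are placeholders, and the table/column
names are my reading of the nova schema), if a direct update like this is even
advisable:

from sqlalchemy import create_engine, text

# Placeholder connection string; the real one is in nova.conf [database].
engine = create_engine('mysql+pymysql://nova:password@controller/nova')

with engine.begin() as conn:
    # Point the instance back at host A and clear the stuck task state.
    conn.execute(
        text("UPDATE instances "
             "SET host = :host, node = :node, "
             "    task_state = NULL, vm_state = 'active' "
             "WHERE uuid = :uuid AND deleted = 0"),
        {'host': 'hostA', 'node': 'hostA', 'uuid': '<instance-uuid>'})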


Thanks,
Sudhakar.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Sam Betts (sambetts)
My preference is option 4, however with a slight difference, and that is
that we only apply the DIB cap to the job that's testing the bash ramdisk.
We can say that the bash ramdisk is deprecated in liberty and will not be
receiving any further updates, so we're capping the DIB version, but
because the IPA ramdisks are LTS we will keep testing the latest DIB for
those. WDYT?

Sam

On 17/02/2016 11:27, "Dmitry Tantsur"  wrote:

>Hi everyone!
>
>Yesterday on the Ironic midcycle we agreed that we would like to remove
>support for the old bash ramdisk from our code and gate. This, however,
>pose a problem, since we still support Kilo and Liberty. Meaning:
>
>1. We can't remove gate jobs completely, as they still run on
>Kilo/Liberty.
>2. Then we should continue to run our job on DIB, as DIB does not have
>stable branches.
>3. Then we can't remove support from Ironic master as well, as it would
>break DIB job :(
>
>I see the following options:
>
>1. Wait for Kilo end-of-life (April?) before removing jobs and code.
>This means that the old ramdisk will essentially be supported in Mitaka,
>but we'll remove gating on stable/liberty and stable/mitaka very soon.
>Pros: it will happen soon. Cons: in theory we do support the old ramdisk
>on Liberty, so removing gates will end this support prematurely.
>
>2. Wait for Liberty end-of-life. This means that the old ramdisk will
>essentially be supported in Mitaka and Newton. We should somehow
>communicate that it's not official and can be dropped at any moment
>during stable branches life time. Pros: we don't drop support of the
>bash ramdisk on any branch where we promised to support it. Cons: people
>might assume we still support the old ramdisk on Mitaka/Newton; it will
>also take a lot of time.
>
>3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
>happens now, no confusing around old ramdisk support in Mitaka and
>later. Cons: probably most Kilo users (us included) are using the bash
>ramdisk, meaning we can potentially break them when landing changes on
>stable/kilo.
>
>4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
>remove gates from Ironic master and DIB, leaving them on Kilo and
>Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
>bug fixes won't affect kilo and liberty any more.
>
>5. The same as #4, but only on Kilo.
>
>As gate on stable/kilo is not working right now, and end-of-life is
>quickly approaching, I see number 3 as a pretty viable option anyway. We
>probably won't land any more changes on Kilo, so no use in keeping gates
>on it. Liberty is still a concern though, as the old ramdisk was only
>deprecated in Liberty.
>
>What do you all think? Did I miss any options?
>
>Cheers,
>Dmitry
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Chris Dent

On Wed, 17 Feb 2016, Cheng, Yingxin wrote:


To better illustrate the differences between shared-state, resource-
provider and legacy scheduler, I've drew 3 simplified pictures [1] in
emphasizing the location of resource view, the location of claim and
resource consumption, and the resource update/refresh pattern in three
kinds of schedulers. Hoping I'm correct in the "resource-provider
scheduler" part.


That's a useful visual aid, thank you. It aligns pretty well with my
understanding of each idea.

A thing that may be missing, which may help in exploring the usefulness
of each idea, is a representation of resources which are separate
from compute nodes and shared by them, such as shared disk or pools
of network addresses. In addition some would argue that we need to
see bare-metal nodes for a complete picture.

One of the driving motivations of the resource-provider work is to
make it possible to adequately and accurately track and consume the
shared resources. The legacy scheduler currently fails to do that
well. As you correctly point out, it does this by having "strict
centralized consistency" as a design goal.


As can be seen in the illustrations [1], the main compatibility issue
between shared-state and resource-provider scheduler is caused by the
different location of claim/consumption and the assumed consistent
resource view. IMO unless the claims are allowed to happen in both
places(resource tracker and resource-provider db), it seems difficult
to make shared-state and resource-provider scheduler work together.


Yes, but doing claims twice feels intuitively redundant.

As I've explored this space I've often wondered why we feel it is
necessary to persist the resource data at all. Your shared-state
model is appealing because it lets the concrete resource(-provider)
be the authority about its own resources. That is information which
it can broadcast as it changes or on intervals (or both) to other
things which need that information. That feels like the correct
architecture in a massively distributed system, especially one where
resources are not scarce.

The advantage of a centralized datastore for that information is
that it provides administrative control (e.g. reserving resources for
other needs) and visibility. That level of command and control seems
to be something people really want (unfortunately).

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Call for papers talk gone astray?

2016-02-17 Thread Bailey, Darragh

In case anyone else is looking at this:

On 16/02/16 17:43, Bailey, Darragh wrote:
> Anyone able to help out locate where the talk disappeared to as well as
> fixing my profile before the voting closes?

Jimmy McArthur has been kind enough to step in and take care of the
problem, much appreciated.


It appears this happens to a small number of talks each cycle, and in the
future I will be taking care to ensure that I both have exact copies of
what I've submitted saved somewhere I can easily find them, and have
access to check, as soon as they are available for voting, that they
have made it.

Lesson learnt... ;-)


Guess many have already voted and may not be planning to check again, so
if the title "Practical Ansible hacks used to deploy an OpenStack
solution" piques your interest ...


--

Regards,
Darragh Bailey
IRC: electrofelix
"Nothing is foolproof to a sufficiently talented fool" - Unknown



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Henry Gessau
And it looks like eventlet 0.18.3 breaks neutron:
https://bugs.launchpad.net/neutron/+bug/1546506

Victor Stinner  wrote:
> Hi,
> 
> I asked the eventlet devs to *not* remove a release from PyPI before they did 
> it, but they ignored me and removed the 0.18.0 and 0.18.1 releases from PyPI :-(
> 
> 0.18.0 fixed a bug in Python 3:
> https://github.com/eventlet/eventlet/issues/274
> 
> But 0.18.0 introduced a regression on Python 3 in WSGI:
> https://github.com/eventlet/eventlet/issues/295
> 
> 0.18.2 was supposed to fix the WSGI bug, but introduced a different bug 
> in Keystone:
> https://github.com/eventlet/eventlet/issues/296
> 
> Yeah, it's funny to work on eventlet :-) A new bug everyday :-D
> 
> At least, the eventlet test suite gets more complete with each bugfix.
> 
> Victor
> 
> Le 09/02/2016 17:44, Markus Zoeller a écrit :
>> For the sake of completeness: The eventlet package version 0.18.1
>> seems to have disappeared from the PyPI servers, which is a bad thing,
>> as we use that version in the "upper-constraints.txt" of the
>> requirements project. There is a patch [1] in the queue which solves that.
>> Until this is merged, there is a chance that our CI (and your third-party
>> CI) will break after the locally cached version in the CI vanishes.
>>
>> References:
>> [1] https://review.openstack.org/#/c/277912/
>>
>> Regards, Markus Zoeller (markus_z)
>>
>>
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-17 Thread Sean Dague
A set of CORS patches came out recently that add a ton of content to
paste.ini for every project (much of it the same between projects) -
https://review.openstack.org/#/c/265415/1

paste.ini is in a really weird space because it's config that ops can change,
so putting large amounts of complicated content in there which may change in
future releases is really problematic. Deprecating content out of there
turns into a giant challenge because of this. So do changes to code
which make any assumption whatsoever about other content in that file.

Why weren't these options included as sane defaults in the base CORS
middleware?
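
For example, something along these lines in each project's config setup, rather
than dumping it all into paste.ini (just a sketch; I'm assuming a set_defaults()
style hook on the CORS middleware, which may not exist in exactly this form):

from oslo_middleware import cors


def set_cors_defaults():
    # NOTE: set_defaults() is an assumed hook here. Project-wide defaults
    # would live in code; operators only override what they actually need
    # (e.g. allowed origins) in their own config.
    cors.set_defaults(
        allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
        expose_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
        allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])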

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread gordon chung
hi,

this seems to be similar to a bug we were tracking earlier [1]. 
basically, any service with a listener never seemed to idle properly.

based on earlier investigation, we found it relates to the heartbeat 
functionality in oslo.messaging. i'm not entirely sure if it's because 
of it or some combination of things including it. the short answer is 
to disable the heartbeat by setting heartbeat_timeout_threshold = 0 and see 
if it fixes your cpu usage. you can track the comments in the bug.
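
for reference, the snippet in the service's config file would be:

[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 0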

[1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135

On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
> Hi!
>
> Excuse me, if the following question/problem is a basic one, already known 
> problem,
> or even a bad setup on my side.
>
> I just noticed that the most CPU consuming process in an idle
> OpenStack cluster is ceilometer-collector. When there are only
> 10-15 samples/minute, it just constantly eats about 15-20% CPU.
>
> I started to debug, and noticed that it epoll()s constantly with a zero
> timeout, so it seems it just polls for events in a tight loop.
> I found out that the _maybe_ the python side of the problem is
> oslo_messaging.get_notification_listener() with the eventlet executor.
> A quick search showed that this function is only used in aodh_listener and
> ceilometer_collector, and both are using relatively high CPU even if they're
> just 'listening'.
>
> My skills for further debugging is limited, but I'm just curious why this 
> listener
> uses so much CPU, while other executors, which are using eventlet, are not 
> that
> bad. Excuse me, if it was a basic question, already known problem, or even a 
> bad
> setup on my side.
>
> Br,
> György
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Decision of how to manage stable/liberty from Kolla Midcycle

2016-02-17 Thread Martin André
On Wed, Feb 17, 2016 at 3:15 AM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
> We held a midcycle Feb 9th and 10th.  The full notes of the midcycle are
> here:
> https://etherpad.openstack.org/p/kolla-mitaka-midcycle
>
> We had 3 separate ~40 minute sessions on making stable stable.  The reason
> for so many sessions on this topic was that it took a long time to come to
> an agreement about the problem and solution.
>
> There are two major problems with stable:
> Stable is hard-pinned to Docker 1.8.2.  Ansible 1.9.4 is the last
> version of Ansible in the 1.x series from Ansible.  The Ansible 1.9.4
> docker module is totally busted with Docker 1.8.3 and later.
>
> Stable uses data containers.  Data containers used with Ansible can
> result, in some very limited instances, such as an upgrade of the data
> container image, *data loss*.  We didn't really recognize this until
> recently.  We can't really fix Ansible to behave correctly with the data
> containers.
>
> The solution:
> Use the kolla-docker.py module to replace Ansible's built-in docker
> module.  This is not a fork of that module from Ansible's upstream, so it
> has no GPLv3 licensing concerns.  Instead it is freshly written code in
> master.  This allows the Kolla upstream to implement support for any
> version of docker we prefer.
>
> We will be making 1.9, and possibly 1.10 depending on the outcome of a thin
> containers vote, the minimum version of docker required to run
> stable/liberty.
>
> We will be replacing the data containers with named volumes.  Named
> volumes offer a similar functionality (persistent data containment) in a
> different implementation way.  They were introduced in Docker 1.9, because
> data containers have many shortcomings.
>
> This will require some rework of the playbooks.  Rather than backport the
> 900+ patches that have entered master since liberty, we are going to
> surgically correct the problems with named volumes.  We suspect this work
> will take 4-6 weeks to complete and will be fewer than 15 patches on top of
> stable/liberty.  The numbers here are just estimates, it could be more or
> less, but on that order of magnitude.
>
> The above solution is what we decided we would go with, after nearly 3
> hours of debate ;)  If I got any of that wrong, please feel free to chime
> in for folks that were there.  Note there was a majority of core reviewers
> present, and nobody raised objection to this plan of activity, so I'd
> consider it voted and approved :)  There was not a majority approval for
> another proposal to backport thin containers for neutron which I will
> handle in a separate email.
>

As one of the core reviewers that couldn't make it to the mid-cycle I want
to say that I fully agree with this plan.


> Going forward, my personal preference is that we make stable branches
> low-rate-of-change branches, rather than how the name is misread to imply a
> high rate of backports to fix problems.  We will have further design
> sessions about stable branch maintenance at the Austin ODS.
>

In all fairness, we're still pretty early in the life of Kolla; I expect
the rate of backports to slow down naturally over time.

Martin

Regards
> -steve
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Victor Stinner

Le 17/02/2016 13:43, Henry Gessau a écrit :

And it looks like eventlet 0.18.3 breaks neutron:
https://bugs.launchpad.net/neutron/+bug/1546506


2 releases, 2 regressions in OpenStack. Should we cap the eventlet version? 
The requirements bot can produce patches to update eventlet, patches 
which would run integration tests using Nova, Keystone and Neutron on the 
new eventlet version.


eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
https://github.com/eventlet/eventlet/issues/296
https://github.com/eventlet/eventlet/issues/299
https://review.openstack.org/#/c/278147/
https://bugs.launchpad.net/nova/+bug/1544801

eventlet 0.18.3 broke OpenStack Neutron
https://github.com/eventlet/eventlet/issues/301
https://bugs.launchpad.net/neutron/+bug/1546506

FYI eventlet 0.18.0 broke WSGI servers:
https://github.com/eventlet/eventlet/issues/295

It was followed quickly by eventlet 0.18.2 to fix this issue.

Sadly, it looks like bugfix releases of eventlet don't include just a single 
bugfix, but also include other changes. For example, 0.18.3 fixed 
bug #296 but introduced the "wsgi: TCP_NODELAY enabled by default" optimization.


IMHO the problem is not the release manager of eventlet, but rather the 
lack of tests on eventlet, especially against OpenStack services.


Current "Continious Delivery"-like with gates do detect bugs, yeah, but 
also block a lot of developers when the gates are broken. It doesn't 
seem trivial to investigate and fix eventlet issues.


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread John Trowbridge


On 02/17/2016 06:27 AM, Dmitry Tantsur wrote:
> Hi everyone!
> 
> Yesterday on the Ironic midcycle we agreed that we would like to remove
> support for the old bash ramdisk from our code and gate. This, however,
> pose a problem, since we still support Kilo and Liberty. Meaning:
> 
> 1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
> 2. Then we should continue to run our job on DIB, as DIB does not have
> stable branches.
> 3. Then we can't remove support from Ironic master as well, as it would
> break DIB job :(
> 
> I see the following options:
> 
> 1. Wait for Kilo end-of-life (April?) before removing jobs and code.
> This means that the old ramdisk will essentially be supported in Mitaka,
> but we'll remove gating on stable/liberty and stable/mitaka very soon.
> Pros: it will happen soon. Cons: in theory we do support the old ramdisk
> on Liberty, so removing gates will end this support prematurely.
> 
> 2. Wait for Liberty end-of-life. This means that the old ramdisk will
> essentially be supported in Mitaka and Newton. We should somehow
> communicate that it's not official and can be dropped at any moment
> during stable branches life time. Pros: we don't drop support of the
> bash ramdisk on any branch where we promised to support it. Cons: people
> might assume we still support the old ramdisk on Mitaka/Newton; it will
> also take a lot of time.
> 
> 3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
> happens now, no confusing around old ramdisk support in Mitaka and
> later. Cons: probably most Kilo users (us included) are using the bash
> ramdisk, meaning we can potentially break them when landing changes on
> stable/kilo.
> 

I think if we were to do this, then we need to backport LIO support in
IPA to liberty and kilo. While the bash ramdisk is not awesome to
troubleshoot, tgtd is not great either, and the bash ramdisk has
supported LIO since Kilo. However, there is no stable/kilo branch in
IPA, so that backport is impossible. I have not looked at how hard the
stable/liberty backport would be, but I imagine not very.

> 4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
> remove gates from Ironic master and DIB, leaving them on Kilo and
> Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
> bug fixes won't affect kilo and liberty any more.
> 
> 5. The same as #4, but only on Kilo.
> 
> As gate on stable/kilo is not working right now, and end-of-life is
> quickly approaching, I see number 3 as a pretty viable option anyway. We
> probably won't land any more changes on Kilo, so no use in keeping gates
> on it. Liberty is still a concern though, as the old ramdisk was only
> deprecated in Liberty.
> 
> What do you all think? Did I miss any options?

My favorite option would be 5 with backport of LIO support to liberty
(since backport to kilo is not possible). That is the only benefit of
the current bash ramdisk over the liberty/kilo IPA ramdisk. This is not
just for RHEL, but RHEL derivatives like CentOS which the RDO distro is
based on. (technically tgt can still be installed from EPEL, but there
is a reason it is not included in the base repos)

Other than that, I think 4 is the next best option.
> 
> Cheers,
> Dmitry
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread Dmitry Tantsur

On 02/17/2016 02:22 PM, John Trowbridge wrote:



On 02/17/2016 06:27 AM, Dmitry Tantsur wrote:

Hi everyone!

Yesterday on the Ironic midcycle we agreed that we would like to remove
support for the old bash ramdisk from our code and gate. This, however,
pose a problem, since we still support Kilo and Liberty. Meaning:

1. We can't remove gate jobs completely, as they still run on Kilo/Liberty.
2. Then we should continue to run our job on DIB, as DIB does not have
stable branches.
3. Then we can't remove support from Ironic master as well, as it would
break DIB job :(

I see the following options:

1. Wait for Kilo end-of-life (April?) before removing jobs and code.
This means that the old ramdisk will essentially be supported in Mitaka,
but we'll remove gating on stable/liberty and stable/mitaka very soon.
Pros: it will happen soon. Cons: in theory we do support the old ramdisk
on Liberty, so removing gates will end this support prematurely.

2. Wait for Liberty end-of-life. This means that the old ramdisk will
essentially be supported in Mitaka and Newton. We should somehow
communicate that it's not official and can be dropped at any moment
during stable branches life time. Pros: we don't drop support of the
bash ramdisk on any branch where we promised to support it. Cons: people
might assume we still support the old ramdisk on Mitaka/Newton; it will
also take a lot of time.

3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
happens now, no confusing around old ramdisk support in Mitaka and
later. Cons: probably most Kilo users (us included) are using the bash
ramdisk, meaning we can potentially break them when landing changes on
stable/kilo.



I think if we were to do this, then we need to backport LIO support in
IPA to liberty and kilo. While the bash ramdisk is not awesome to
troubleshoot, tgtd is not great either, and the bash ramdisk has
supported LIO since Kilo. However, there is not stable/kilo branch in
IPA, so that backport is impossible. I have not looked at how hard the
stable/liberty backport would be, but I imagine not very.


4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
remove gates from Ironic master and DIB, leaving them on Kilo and
Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
bug fixes won't affect kilo and liberty any more.

5. The same as #4, but only on Kilo.

As gate on stable/kilo is not working right now, and end-of-life is
quickly approaching, I see number 3 as a pretty viable option anyway. We
probably won't land any more changes on Kilo, so no use in keeping gates
on it. Liberty is still a concern though, as the old ramdisk was only
deprecated in Liberty.

What do you all think? Did I miss any options?


My favorite option would be 5 with backport of LIO support to liberty
(since backport to kilo is not possible). That is the only benefit of
the current bash ramdisk over the liberty/kilo IPA ramdisk. This is not
just for RHEL, but RHEL derivatives like CentOS which the RDO distro is
based on. (technically tgt can still be installed from EPEL, but there
is a reason it is not included in the base repos)


Oh, that's a good catch, IPA is usable on RHEL starting with Mitaka... I 
wonder if having stable branches for IPA was a good idea at all, 
especially given that our gate uses git master on all branches.




Other than that, I think 4 is the next best option.


Cheers,
Dmitry





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible]Review of Bug "1535536"

2016-02-17 Thread Major Hayden
On 02/17/2016 04:33 AM, Sirisha Guduru wrote:
> I recently committed code as a fix for the bug 
> "https://bugs.launchpad.net/openstack-ansible/+bug/1535536”.
> Jenkins gave a ‘-1’ during the review. Going through the logs I found that 
> the errors are not in the code I committed but from other containers and the 
> original code in openstack-ansible.
> Due to that, there is no actual review of the code committed.
> 
> Kindly let me know, how to get it fixed? Or if anyone can review the code, 
> that would be great.

Hello Sirisha,

It looks like Andy has given you some feedback there in the review that should 
help.  If not, feel free to make additional comments in that review and we will 
have a look. ;)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Doug Hellmann
Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> Le 17/02/2016 13:43, Henry Gessau a écrit :
> > And it looks like eventlet 0.18.3 breaks neutron:
> > https://bugs.launchpad.net/neutron/+bug/1546506
> 
> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version? 
> The requirement bot can produce patches to update eventlet, patches 
> which would run integration tests using Nova, Keystone, Neutron on the 
> new eventlet version.
> 
> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> https://github.com/eventlet/eventlet/issues/296
> https://github.com/eventlet/eventlet/issues/299
> https://review.openstack.org/#/c/278147/
> https://bugs.launchpad.net/nova/+bug/1544801
> 
> eventlet 0.18.3 broke OpenStack Neutron
> https://github.com/eventlet/eventlet/issues/301
> https://bugs.launchpad.net/neutron/+bug/1546506
> 
> FYI eventlet 0.18.0 broke WSGI servers:
> https://github.com/eventlet/eventlet/issues/295
> 
> It was followed quickly by eventlet 0.18.2 to fix this issue.
> 
> Sadly, it looks like bugfix releases of eventlet don't include a single 
> bugfix, but include also other changes. For example, 0.18.3 fixed the 
> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default" optimization.
> 
> IMHO the problem is not the release manager of eventlet, but more the 
> lack of tests on eventlet, especially on OpenStack services.
> 
> Current "Continious Delivery"-like with gates do detect bugs, yeah, but 
> also block a lot of developers when the gates are broken. It doesn't 
> seem trivial to investigate and fix eventlet issues.
> 
> Victor
> 

Whether we cap or not, we should exclude the known broken versions.
It looks like getting back to a good version will also require
lowering the minimum version we support, since we have >=0.18.2
now.
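
(Illustratively, that would turn the global-requirements entry into
something like the line below, with the minimum filled in once we know
the last good release and the exclusions confirmed against the bugs above.)

  eventlet>=<last-known-good>,!=0.18.0,!=0.18.2,!=0.18.3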

What was the last version of eventlet known to work?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Davanum Srinivas
I'd support this.

The last known good version is https://pypi.python.org/pypi/eventlet/0.17.4

-- Dims

On Wed, Feb 17, 2016 at 8:42 AM, Doug Hellmann  wrote:
> Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
>> Le 17/02/2016 13:43, Henry Gessau a écrit :
>> > And it looks like eventlet 0.18.3 breaks neutron:
>> > https://bugs.launchpad.net/neutron/+bug/1546506
>>
>> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
>> The requirement bot can produce patches to update eventlet, patches
>> which would run integration tests using Nova, Keystone, Neutron on the
>> new eventlet version.
>>
>> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
>> https://github.com/eventlet/eventlet/issues/296
>> https://github.com/eventlet/eventlet/issues/299
>> https://review.openstack.org/#/c/278147/
>> https://bugs.launchpad.net/nova/+bug/1544801
>>
>> eventlet 0.18.3 broke OpenStack Neutron
>> https://github.com/eventlet/eventlet/issues/301
>> https://bugs.launchpad.net/neutron/+bug/1546506
>>
>> FYI eventlet 0.18.0 broke WSGI servers:
>> https://github.com/eventlet/eventlet/issues/295
>>
>> It was followed quickly by eventlet 0.18.2 to fix this issue.
>>
>> Sadly, it looks like bugfix releases of eventlet don't include a single
>> bugfix, but include also other changes. For example, 0.18.3 fixed the
>> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default" optimization.
>>
>> IMHO the problem is not the release manager of eventlet, but more the
>> lack of tests on eventlet, especially on OpenStack services.
>>
>> Current "Continious Delivery"-like with gates do detect bugs, yeah, but
>> also block a lot of developers when the gates are broken. It doesn't
>> seem trivial to investigate and fix eventlet issues.
>>
>> Victor
>>
>
> Whether we cap or not, we should exclude the known broken versions.
> It looks like getting back to a good version will also require
> lowering the minimum version we support, since we have >=0.18.2
> now.
>
> What was the last version of eventlet known to work?
>
> Doug
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread Gyorgy Szombathelyi
> 
> hi,
Hi Gordon,

> 
> this seems to be similar to a bug we were tracking in earlier[1].
> basically, any service with a listener never seemed to idle properly.
> 
> based on earlier investigation, we found it relates to the heartbeat
> functionality in oslo.messaging. i'm not entirely sure if it's because of it 
> or
> some combination of things including it. the short answer, is to disable
> heartbeat by setting heartbeat_timeout_threshold = 0 and see if it fixes your
> cpu usage. you can track the comments in bug.

As I see in the bug report, you mention that the problem is only with the 
notification agent, and the collector is fine. I'm in the entirely opposite 
situation.

starce-ing the two processes:

Notification agent:
--
epoll_wait(4, {}, 1023, 43) = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_ctl(4, EPOLL_CTL_DEL, 8, 
{EPOLLWRNORM|EPOLLMSG|EPOLLERR|EPOLLHUP|EPOLLRDHUP|EPOLLONESHOT|EPOLLET|0x1ec88000,
 {u32=32738, u64=24336577484324834}}) = 0
recvfrom(8, 0x7fe2da3a4084, 7, 0, 0, 0) = -1 EAGAIN (Resource temporarily 
unavailable)
epoll_ctl(4, EPOLL_CTL_ADD, 8, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=8, 
u64=40046962262671368}}) = 0
epoll_wait(4, {}, 1023, 1)  = 0
epoll_ctl(4, EPOLL_CTL_DEL, 24, 
{EPOLLWRNORM|EPOLLMSG|EPOLLERR|EPOLLHUP|EPOLLRDHUP|EPOLLONESHOT|EPOLLET|0x1ec88000,
 {u32=32738, u64=24336577484324834}}) = 0
recvfrom(24, 0x7fe2da3a4084, 7, 0, 0, 0) = -1 EAGAIN (Resource temporarily 
unavailable)
epoll_ctl(4, EPOLL_CTL_ADD, 24, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=24, 
u64=40046962262671384}}) = 0
epoll_wait(4, {}, 1023, 0)  = 0

ceilometer-collector:
-
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0
epoll_wait(4, {}, 1023, 0)  = 0

So the notification agent does something at least between the crazy epoll()s.

It is the same with or without heartbeat_timeout_threshold = 0 in 
[oslo_messaging_rabbit].
Then something must still be wrong with the listeners; the bug [1] should not be 
closed, I think.

Br,
György

> 
> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135
> 
> On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
> > Hi!
> >
> > Excuse me, if the following question/problem is a basic one, already
> > known problem, or even a bad setup on my side.
> >
> > I just noticed that the most CPU consuming process in an idle
> > OpenStack cluster is ceilometer-collector. When there are only
> > 10-15 samples/minute, it just constantly eats about 15-20% CPU.
> >
> > I started to debug, and noticed that it epoll()s constantly with a
> > zero timeout, so it seems it just polls for events in a tight loop.
> > I found out that the _maybe_ the python side of the problem is
> > oslo_messaging.get_notification_listener() with the eventlet executor.
> > A quick search showed that this function is only used in aodh_listener
> > and ceilometer_collector, and both are using relatively high CPU even
> > if they're just 'listening'.
> >
> > My skills for further debugging is limited, but I'm just curious why
> > this listener uses so much CPU, while other executors, which are using
> > eventlet, are not that bad. Excuse me, if it was a basic question,
> > already known problem, or even a bad setup on my side.
> >
> > Br,
> > György
> >
> >
> 
> --
> gord
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Sean Dague
On 02/17/2016 08:42 AM, Doug Hellmann wrote:
> Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
>> Le 17/02/2016 13:43, Henry Gessau a écrit :
>>> And it looks like eventlet 0.18.3 breaks neutron:
>>> https://bugs.launchpad.net/neutron/+bug/1546506
>>
>> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version? 
>> The requirement bot can produce patches to update eventlet, patches 
>> which would run integration tests using Nova, Keystone, Neutron on the 
>> new eventlet version.
>>
>> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
>> https://github.com/eventlet/eventlet/issues/296
>> https://github.com/eventlet/eventlet/issues/299
>> https://review.openstack.org/#/c/278147/
>> https://bugs.launchpad.net/nova/+bug/1544801
>>
>> eventlet 0.18.3 broke OpenStack Neutron
>> https://github.com/eventlet/eventlet/issues/301
>> https://bugs.launchpad.net/neutron/+bug/1546506
>>
>> FYI eventlet 0.18.0 broke WSGI servers:
>> https://github.com/eventlet/eventlet/issues/295
>>
>> It was followed quickly by eventlet 0.18.2 to fix this issue.
>>
>> Sadly, it looks like bugfix releases of eventlet don't include a single 
>> bugfix, but include also other changes. For example, 0.18.3 fixed the 
>> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default" optimization.
>>
>> IMHO the problem is not the release manager of eventlet, but more the 
>> lack of tests on eventlet, especially on OpenStack services.
>>
>> Current "Continious Delivery"-like with gates do detect bugs, yeah, but 
>> also block a lot of developers when the gates are broken. It doesn't 
>> seem trivial to investigate and fix eventlet issues.
>>
>> Victor
>>
> 
> Whether we cap or not, we should exclude the known broken versions.
> It looks like getting back to a good version will also require
> lowering the minimum version we support, since we have >=0.18.2
> now.
> 
> What was the last version of eventlet known to work?

0.18.2 works. On the Nova side we had a failure around unit tests which
was quite synthetic and that we fixed. I don't know what the keystone issue
turned out to be.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Sylvain Bauza



Le 17/02/2016 12:59, Chris Dent a écrit :

On Wed, 17 Feb 2016, Cheng, Yingxin wrote:


To better illustrate the differences between shared-state, resource-
provider and legacy schedulers, I've drawn 3 simplified pictures [1]
emphasizing the location of resource view, the location of claim and
resource consumption, and the resource update/refresh pattern in three
kinds of schedulers. Hoping I'm correct in the "resource-provider
scheduler" part.


That's a useful visual aid, thank you. It aligns pretty well with my
understanding of each idea.

A thing that may be missing, which may help in exploring the usefulness
of each idea, is a representation of resources which are separate
from compute nodes and shared by them, such as shared disk or pools
of network addresses. In addition some would argue that we need to
see bare-metal nodes for a complete picture.

One of the driving motivations of the resource-provider work is to
make it possible to adequately and accurately track and consume the
shared resources. The legacy scheduler currently fails to do that
well. As you correctly point out, it does this by having "strict
centralized consistency" as a design goal.



So, to be clear, I'm really happy to see the resource-providers series 
for many reasons:
 - it will help us get a nice facade for getting the resources and 
attributing them
 - it will help a shared-storage deployment by making sure that we 
don't have some resource problems when the resource is shared
 - it will create a possibility for external resource providers to 
provide some resource types to Nova so the Nova scheduler could use them 
(like Neutron related resources)


I really want to have that implemented in Mitaka and Newton, and I'm 
totally on board and supporting it.


TBC, the only problem I see with the series is [2], not the whole series.




As can be seen in the illustrations [1], the main compatibility issue
between shared-state and resource-provider scheduler is caused by the
different location of claim/consumption and the assumed consistent
resource view. IMO unless the claims are allowed to happen in both
places(resource tracker and resource-provider db), it seems difficult
to make shared-state and resource-provider scheduler work together.


Yes, but doing claims twice feels intuitively redundant.

As I've explored this space I've often wondered why we feel it is
necessary to persist the resource data at all. Your shared-state
model is appealing because it lets the concrete resource(-provider)
be the authority about its own resources. That is information which
it can broadcast as it changes or on intervals (or both) to other
things which need that information. That feels like the correct
architecture in a massively distributed system, especially one where
resources are not scarce.


So, IMHO, we should only have the compute nodes being the authority for 
allocating resources. There are many reasons for that, which I provided in the 
spec review, but I can reply again:


 * #1 If we consider that an external system, as a resource provider,
   will provide a single resource class usage (like network segment
   availability), it will still require the instance to be spawned
   *for* consuming that resource class, even if the scheduler accounts
   for it. That would mean that the scheduler would have to manage a
   list of allocations with TTL, and periodically verify that the
   allocation succeeded by asking the external system (or getting
   feedback from the external system). See, that's racy.
 * #2 the scheduler is just a decision maker; in any case it doesn't
   account for the real instance creation (it doesn't hold the
   ownership of the instance). Having it be accountable for the
   instances' usage is very difficult. Take for example a request for
   CPU pinning or NUMA affinity. The user can't really express which
   pin of the pCPU he will get; it's the compute node which will do
   that for him. Of course, the scheduler will help pick a host
   that can fit the request, but the real pinning will happen in the
   compute node.


Also, I'm very interested in keeping an optimistic scheduler which 
wouldn't lock the entire view of the world anytime a request comes in. 
There are many papers showing different architectures and benchmarks 
against different possibilities and TBH, I'm very concerned by the 
scaling effect.
Also, we should keep in mind our new paradigm called Cells V2, which 
implies a global distributed scheduler for handling all requests. Having 
it follow the same design tenets of OpenStack [3] by having an 
"eventually consistent shared-state" makes my gut say that I'd love 
to see that.






The advantage of a centralized datastore for that information is
that it provides administrative control (e.g. reserving resources for
other needs) and visibility. That level of command and control seems
to be something people really want (unfortunately).




My point is that while I truly u

Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread Roman Podoliaka
Hi all,

Based on my investigation [1], I believe this is a combined effect of
using eventlet and condition variables on Python 2.x. When heartbeats
are enabled in oslo.messaging, you'll see polling with very small
timeout values. This should not waste a lot of CPU time, but it is still
kind of annoying.

Thanks,
Roman

[1] https://bugs.launchpad.net/mos/+bug/1380220
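
For anyone who wants to try the workaround gord describes below, here is a
minimal sketch of the config change (assuming the option lives in the rabbit
driver section of the affected service's configuration file; adjust to your
setup):

    [oslo_messaging_rabbit]
    # 0 disables the AMQP heartbeat, and with it the tight-loop polling
    heartbeat_timeout_threshold = 0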

On Wed, Feb 17, 2016 at 3:06 PM, gordon chung  wrote:
> hi,
>
> this seems to be similar to a bug we were tracking in earlier[1].
> basically, any service with a listener never seemed to idle properly.
>
> based on earlier investigation, we found it relates to the heartbeat
> functionality in oslo.messaging. i'm not entirely sure if it's because
> of it or some combination of things including it. the short answer is
> to disable heartbeat by setting heartbeat_timeout_threshold = 0 and see
> if it fixes your cpu usage. you can track the comments in the bug.
>
> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135
>
> On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
>> Hi!
>>
>> Excuse me, if the following question/problem is a basic one, already known 
>> problem,
>> or even a bad setup on my side.
>>
>> I just noticed that the most CPU consuming process in an idle
>> OpenStack cluster is ceilometer-collector. When there are only
>> 10-15 samples/minute, it just constantly eats about 15-20% CPU.
>>
>> I started to debug, and noticed that it epoll()s constantly with a zero
>> timeout, so it seems it just polls for events in a tight loop.
>> I found out that the _maybe_ the python side of the problem is
>> oslo_messaging.get_notification_listener() with the eventlet executor.
>> A quick search showed that this function is only used in aodh_listener and
>> ceilometer_collector, and both are using relatively high CPU even if they're
>> just 'listening'.
>>
>> My skills for further debugging is limited, but I'm just curious why this 
>> listener
>> uses so much CPU, while other executors, which are using eventlet, are not 
>> that
>> bad. Excuse me, if it was a basic question, already known problem, or even a 
>> bad
>> setup on my side.
>>
>> Br,
>> György
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> gord
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-17 Thread John Trowbridge


On 02/17/2016 08:30 AM, Dmitry Tantsur wrote:
> On 02/17/2016 02:22 PM, John Trowbridge wrote:
>>
>>
>> On 02/17/2016 06:27 AM, Dmitry Tantsur wrote:
>>> Hi everyone!
>>>
>>> Yesterday on the Ironic midcycle we agreed that we would like to remove
>>> support for the old bash ramdisk from our code and gate. This, however,
>>> pose a problem, since we still support Kilo and Liberty. Meaning:
>>>
>>> 1. We can't remove gate jobs completely, as they still run on
>>> Kilo/Liberty.
>>> 2. Then we should continue to run our job on DIB, as DIB does not have
>>> stable branches.
>>> 3. Then we can't remove support from Ironic master as well, as it would
>>> break DIB job :(
>>>
>>> I see the following options:
>>>
>>> 1. Wait for Kilo end-of-life (April?) before removing jobs and code.
>>> This means that the old ramdisk will essentially be supported in Mitaka,
>>> but we'll remove gating on stable/liberty and stable/mitaka very soon.
>>> Pros: it will happen soon. Cons: in theory we do support the old ramdisk
>>> on Liberty, so removing gates will end this support prematurely.
>>>
>>> 2. Wait for Liberty end-of-life. This means that the old ramdisk will
>>> essentially be supported in Mitaka and Newton. We should somehow
>>> communicate that it's not official and can be dropped at any moment
>>> during stable branches life time. Pros: we don't drop support of the
>>> bash ramdisk on any branch where we promised to support it. Cons: people
>>> might assume we still support the old ramdisk on Mitaka/Newton; it will
>>> also take a lot of time.
>>>
>>> 3. Do it now, recommend Kilo users to switch to IPA too. Pros: it
>>> happens now, no confusing around old ramdisk support in Mitaka and
>>> later. Cons: probably most Kilo users (us included) are using the bash
>>> ramdisk, meaning we can potentially break them when landing changes on
>>> stable/kilo.
>>>
>>
>> I think if we were to do this, then we need to backport LIO support in
>> IPA to liberty and kilo. While the bash ramdisk is not awesome to
>> troubleshoot, tgtd is not great either, and the bash ramdisk has
>> supported LIO since Kilo. However, there is no stable/kilo branch in
>> IPA, so that backport is impossible. I have not looked at how hard the
>> stable/liberty backport would be, but I imagine not very.
>>
>>> 4. Upper-cap DIB in stable/{kilo,liberty} to the current release, then
>>> remove gates from Ironic master and DIB, leaving them on Kilo and
>>> Liberty. Pros: we can remove old ramdisk support right now. Cons: DIB
>>> bug fixes won't affect kilo and liberty any more.
>>>
>>> 5. The same as #4, but only on Kilo.
>>>
>>> As gate on stable/kilo is not working right now, and end-of-life is
>>> quickly approaching, I see number 3 as a pretty viable option anyway. We
>>> probably won't land any more changes on Kilo, so no use in keeping gates
>>> on it. Liberty is still a concern though, as the old ramdisk was only
>>> deprecated in Liberty.
>>>
>>> What do you all think? Did I miss any options?
>>
>> My favorite option would be 5 with backport of LIO support to liberty
>> (since backport to kilo is not possible). That is the only benefit of
>> the current bash ramdisk over the liberty/kilo IPA ramdisk. This is not
>> just for RHEL, but RHEL derivatives like CentOS which the RDO distro is
>> based on. (technically tgt can still be installed from EPEL, but there
>> is a reason it is not included in the base repos)
> 
> Oh, that's a good catch, IPA is usable on RHEL starting with Mitaka... I
> wonder if having stable branches for IPA was a good idea at all,
> especially provided that our gate is using git master on all branches.
> 

Interesting, I did not know that master is used for all gates. Maybe RDO
should just build liberty IPA from master. That would solve my only
concern for 3.

>>
>> Other than that, I think 4 is the next best option.
>>>
>>> Cheers,
>>> Dmitry
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-17 Thread Joe Cropper
+1

> On Feb 17, 2016, at 8:05 AM, David TARDIVEL  wrote:
> 
> Team,
> 
> I’d like to promote Vincent Francoise to the core team. Vincent's done a 
> great work
> on code reviewing and has proposed a lot of patchsets. He is currently the 
> most active 
> non-core reviewer on Watcher project, and today, he has a very good vision of 
> Watcher. 
> I think he would make an excellent addition to the team.
> 
> Please vote
> 
> David TARDIVEL
> b<>COM
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-17 Thread David TARDIVEL
Team,

I'd like to promote Vincent Francoise to the core team. Vincent has done great 
work on code reviewing and has proposed a lot of patchsets. He is currently the 
most active non-core reviewer on the Watcher project, and today he has a very 
good vision of Watcher.

I think he would make an excellent addition to the team.

Please vote

David TARDIVEL
b<>COM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cinder] Projects acting as a domain at the top of the project hierarchy

2016-02-17 Thread Raildo Mascena
Henry,

I know about two related patches:
Fixes cinder quota mgmt for keystone v3 -
https://review.openstack.org/#/c/253759
Split out NestedQuotas into a separate driver -
https://review.openstack.org/#/c/274825

The first one was abandoned, so I think the second patch is enough to fix
this issue.

Cheers,

Raildo

On Wed, Feb 17, 2016 at 8:07 AM Henry Nash  wrote:

> Michal & Raildo,
>
> So the keystone patch (https://review.openstack.org/#/c/270057/) is now
> merged.  Do you perhaps have a cinder patch that I could review so we can
> make sure that this is likely to work with the new projects acting as
> domains? Currently it is the cinder tempest tests that are failing.
>
> Thanks
>
> Henry
>
>
> On 2 Feb 2016, at 13:30, Raildo Mascena  wrote:
>
> See responses inline.
>
> On Mon, Feb 1, 2016 at 6:25 PM Michał Dulko 
> wrote:
>
>> On 01/30/2016 07:02 PM, Henry Nash wrote:
>> > Hi
>> >
>> > One of the things the keystone team was planning to merge ahead of
>> milestone-3 of Mitaka, was “projects acting as a domain”. Up until now,
>> domains in keystone have been stored totally separately from projects, even
>> though all projects must be owned by a domain (even tenants created via the
>> keystone v2 APIs will be owned by a domain, in this case the ‘default’
>> domain). All projects in a project hierarchy are always owned by the same
>> domain. Keystone supports a number of duplicate concepts (e.g. domain
>> assignments, domain tokens) similar to their project equivalents.
>> >
>> > 
>> >
>> > I’ve got a couple of questions about the impact of the above:
>> >
>> > 1) I already know that if we do exactly as described above, the cinder
>> gets confused with how it does quotas today - since suddenly there is a new
>> parent to what it thought was a top level project (and the permission rules
>> it encodes requires the caller to be cloud admin, or admin of the root
>> project of a hierarchy).
>>
>> These problems are there because our nested quotas code is really buggy
>> right now. Once Keystone merges a fix allowing non-admin users to fetch
>> their own project hierarchy, we should be able to fix it.
>>
>
> ++ The patch to fix this problem is close to being merged; there are just
> minor comments to fix: https://review.openstack.org/#/c/270057/  So I
> believe that we can fix this bug in cinder in the next few days.
>
>>
>> > 2) I’m not sure of the state of nova quotas - and whether it would
>> suffer a similar problem?
>>
>> As far as I know Nova hasn't merged the nested quotas code and will not
>> do that in Mitaka due to feature freeze.
>
> The nested quotas code on Nova is very similar to the Cinder code, and we are
> already fixing the bugs that we found on Cinder. Agreed that it will not be
> merged in Mitaka due to feature freeze.
>
>>
>> > 3) Will Horizon get confused by this at all?
>> >
>> > Depending on the answers to the above, we can go in a couple of
>> directions. The cinder issues looks easy to fix (having had a quick look at
>> the code) - and if that was the only issue, then that may be fine. If we
>> think there may be problems in multiple services, we could, for Mitaka,
>> still create the projects acting as domains, but not set the parent_id of
>> the current top level projects to point at the new project acting as a
>> domain - that way those projects acting as domains remain isolated from the
>> hierarchy for now (and essentially invisible to any calling service). Then
>> as part of Newton we can provide patches to those services that need
>> changing, and then wire up the projects acting as a domain to their
>> children.
>> >
>> > Interested in feedback to the questions above.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-17 Thread Jean-Émile DARTOIS
+2


Jean-Emile
DARTOIS

{P} Software Engineer
Cloud Computing

{T} +33 (0) 2 56 35 8260
{W} www.b-com.com

From: Joe Cropper 
Sent: Wednesday, 17 February 2016 15:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher 
Core

+1

On Feb 17, 2016, at 8:05 AM, David TARDIVEL 
mailto:david.tardi...@b-com.com>> wrote:


Team,

I'd like to promote Vincent Francoise to the core team. Vincent's done a great 
work
on code reviewing and has proposed a lot of patchsets. He is currently the most 
active
non-core reviewer on Watcher project, and today, he has a very good vision of 
Watcher.

I think he would make an excellent addition to the team.

Please vote

David TARDIVEL
b<>COM


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] git-upstream 0.11.0 release

2016-02-17 Thread Bailey, Darragh


Pleased to announce the 0.11.0 release of git-upstream.


With source available at:

http://git.openstack.org/cgit/openstack/git-upstream

Please report any issues through launchpad:

https://bugs.launchpad.net/git-upstream


git-upstream is an open source Python application that can be used to
keep in sync with upstream open source projects, mainly OpenStack.

For more info on what git-upstream is for:

https://pypi.python.org/pypi/git-upstream



For more details see below.

Changes in git-upstream 0.10.1..0.11.0
--

b271750 Changelog for 0.11.0 Release
227be23 Ask rev-parse for shortest unique SHA1
a4342f4 Remove upstream branch requirement from '--finish'
394a9b7 Move logging setup earlier
f0f03b4 Support a finalize method for argument parsing
f41f3f9 Fix order of commits from previous imports
3d641bc Convert strategy tests to use scenarios
60aac04 Have pbr update AUTHORS but not changelog
ba47733 Fix manpage building
126feca Remove '\' from multiline strings
c13695d Improve detection of the previous import merge
5c523d6 Begin conversion to testscenarios
dcffac3 Include node 'name' in commit subject
0414e5f Update typos
2e45b55 Add more complex usage summary for import command
1621091 Tidy up fixture usage
7158716 Use standard library for text generation
49e87a2 Update ChangeLog with missing releases
795abb7 tests: Switch to use list as stack
ea9b4af Capture log messages for test failures
2631be6 Add option to perform finish only
33007ad Grammar fixes
be2bb18 Allow direct execution of main module
8f7f2f9 Change repository from stackforge to openstack
ada3917 Update .gitreview for new namespace
dc9567b Update the read-tree command in USAGE.md
3fb410c Re-factor and split code
ca9eefd Restructure subcommands parser creation
7e9436f Mask broken versions of mock
3384eb5 Sample jobs for mirroring upstream repositories
66004d0 Find additional missing commit scenarios
be0d8d6 Use DFS reverse topological sort to allow unordered inputs
0e6416d Make function private and replace boolean with function result
dfe55f6 Update hacking, enable some checks and fix style issues
964d5c7 Catch BadName exceptions from newer GitPython
e9bf6db Additional scenarios that result in missed commits
77e560f Add test for obsolete approach to track upstream
d4eeebc Refactor code used to build tree into helpers
589948b Add test support for creating carried changes
14122e2 Include graph of git log and node info on error
ed82333 Fix typo
80e1741 Workflow documentation is now in infra-manual

Diffstat (except docs and test files)
-

.gitchangelog.rc | 104 
.gitreview   |   2 +-
.mailmap |   2 +
AUTHORS  |  15 +-
ChangeLog| 281 +++
DESCRIPTION  |  12 +-
README.md|   4 +-
USAGE.md |   6 +-
build_manpage.py |   4 +-
contrib/jjb/defaults.yaml|   5 +
contrib/jjb/macros.yaml  |  74 +++
contrib/jjb/mirror.yaml  |  57 +++
contrib/jjb/projects.yaml|  11 +
contrib/jjb/scripts/mirror-upstream.bash |  64 +++
git_upstream/commands/__init__.py|  79 ++-
git_upstream/commands/drop.py| 130 +
git_upstream/commands/help.py|  40 ++
git_upstream/commands/import.py  | 796
+++
git_upstream/commands/supersede.py   | 215 ++---
git_upstream/lib/drop.py | 122 +
git_upstream/lib/importupstream.py   | 406 
git_upstream/lib/note.py |   3 +-
git_upstream/lib/pygitcompat.py  |   9 +-
git_upstream/lib/rebaseeditor.py |  27 +-
git_upstream/lib/searchers.py| 277 +++
git_upstream/lib/strategies.py   | 128 +
git_upstream/lib/supersede.py| 167 +++
git_upstream/log.py  |   6 +-
git_upstream/main.py |  64 +--
git_upstream/rebase_editor.py|  15 +-
git_upstream/subcommand.py   |  26 -
setup.cfg|   3 +
test-requirements.txt|   7 +-
tox.ini  |   3 +-
34 files changed, 1914 insertions(+), 1250 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index f2efc91..6db383b 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,2 +1,3 @@
-hacking>=0.5.6,<0.8
-mock
+hacking>=0.9,<=0.10.0
+loremipsum
+mock!=1.1.1,<=1.3.0
@@ -7,0 +9 @@ testrepository>=0.0.17
+testscenarios>=0.4
@@ -9,0 +12 @@ sphinxcontrib-programoutput
+PyYAML>=3.1.0

-- 
Regards,
Darragh Bailey
IRC: electrofelix
"Nothing is foolproof to a sufficiently talented fool" - Unknown
___

Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Doug Hellmann
Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:
> On 02/16/2016 11:30 AM, Doug Hellmann wrote:
> > So I think the project team is doing everything we've asked.  We
> > changed our policies around new projects to emphasize the social
> > aspects of projects, and community interactions. Telling a bunch
> > of folks that they "are not OpenStack" even though they follow those
> > policies is rather distressing.  I think we should be looking for
> > ways to say "yes" to new projects, rather than "no."
> 
> My disagreements with accepting Poppy has been around testing, so let me
> reiterate what I've already said in this thread.
> 
> The governance currently states that under Open Development "The project 
> has core reviewers and adopts a test-driven gate in the OpenStack 
> infrastructure for changes" [1].
> 
> If we don't have a solution like OpenCDN, Poppy has to adopt a reference
> implementation that is a commercial entity, and infra has to also be 
> dependent on it. I get Infra is already dependent on public cloud 
> donations, but if we start opening the door to allow projects to bring 
> in those commercial dependencies, that's not good.

Only Poppy's test suite would rely on that, though, right? And other
projects can choose whether to co-gate with Poppy or not. So I don't see
how this limitation has an effect on anyone other than the Poppy team.

Doug

> 
> [1] - 
> http://governance.openstack.org/reference/new-projects-requirements.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:
> On Tue, 16 Feb 2016, Doug Hellmann wrote:
> 
> > If we want to do that, we should change the rules because we put
> > the current set of rules in place specifically to encourage more
> > project teams to join officially. We can do that, but that discussion
> > deserves its own thread.
> 
> (Yeah, that's why I changed the subject header: Indicate change of
> subject, but maintain references.)

Ah, my mailer continued to thread it together with the other messages.

> I'm not sure what the right thing to do is, but I do think there's a
> good opportunity to review what various initiatives (big tent, death
> to stackforge, tags, governance changes, cross-project work) are trying
> to accomplish, whether they are succeeding, what the unintended
> consequences have been.
> 
> >> For the example of Poppy, there is nothing that requires it be a part
> >> of OpenStack for it to be useful to OpenStack nor for it to exist as
> >> a valuable part of the open source world.
> >
> > Nor is there for lots of our existing official projects. Which ones
> > should we remove?
> 
> The heartless rationalist in me says "most of them". The nicer guy
> says "this set is grandfathered, henceforth we're more strict".

Right. Poppy has been around longer than some of those, so it hardly
seems fair to them to do that.

> A reason _I_[1] think we need to limit things is because from the
> outside OpenStack doesn't really look like anything that you can put
> a short description on. It's more murky than that and it is hard to
> experience positive progress in a fog. Many people react to this fog
> by focusing on their specific project rather than OpenStack at
> large: At least there they can see their impact.

I've never understood this argument. OpenStack is a community
creating a collection of tools for building clouds. Each part
implements a different set of features, and you only need the parts
for the features you want.  In that respect, it's no different from
a Linux distro. You need a few core pieces (kernel, init, etc.),
and you install the other parts based on your use case (hardware
drivers, $SHELL, $GUI, etc.).

Are people confused about what OpenStack is because they're looking
for a single turn-key system from a vendor? Because they don't know
what features they want/need? Or are we just doing a bad job of
communicating the product vs. kit nature of the project?

> This results in increasing the fog because cross-project concerns (which
> help unify the vision and actuality that is OpenStack) get less
> attention and the cycle deepens.

I'm not sure cross-project issues are really any worse today than
when I started working on OpenStack a few years ago. In fact, I think
they're significantly better.

At the time, there were only the integrated projects and no real
notion that we would add a lot of new ones. We still had a hard
time recruiting folks to participate in release management, docs,
Oslo, infra, etc. The larger community and liaison system has
improved the situation. There's more work, because there are more
projects, but by restructuring the relationship of the vertical and
horizontal teams to require project teams to participate explicitly
we've reduced some of the pressure on the teams doing the coordination.

Architecturally and technically, project teams have always wanted
to go their own way to some degree. Experimentation with different
approaches and tools to address similar problems like that is good,
and success has resulted in the adoption of more common tools like
third-party WSGI frameworks, test tools, and patterns like the specs
review process and multiple teams managing non-client libraries.
So on a technical front we're doing better than the days where we
all just copied code out of nova and modified it for our own purposes
without looking back.

We also have several new cross-project "policy" initiatives like
the API working group, the new naming standards thing, and cross-project
spec liaisons. These teams are a new, more structured way to
collaborate to solve some of the issues we dealt with in the early
days through force of personality, or by leaving it up to whoever
was doing the implementation.  All of those efforts are seeing more
success because people showed up to collaborate and reach consensus,
and stuck through the hard parts of actually documenting the decision
and then doing the work agreed to. Again, we could always use more
help, but I see the trend as improving.

We've had to change our approaches to dealing with the growth,
and we still have a ways to go (much of it uphill), but I'm not
prepared to say that we've failed to meet the challenge.

Doug

> 
> [1] Other people, some reasonable, some not, will have different
> opinions. Yay!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Fuel][library] Switching to external fixtures for integration Noop tests

2016-02-17 Thread Bogdan Dobrelya
Hello,
an update inline!

On 27.01.2016 17:37, Bogdan Dobrelya wrote:
> On 26.01.2016 22:18, Kyrylo Galanov wrote:
>> Hello Bogdan,
>>
>> I hope I am not the one of the context. Why do we separate fixtures for
>> Noop tests from the repo?
>> I can understand if while noop test block was carried out to a separate
>> repo.
>>
> 
> I believe fixtures are normally downloaded by rake spec_prep.
> Developers avoid shipping fixtures with tests.
> 
> The astute.yaml data fixtures are supposed to be external to the
> fuel-library as that data comes from the Nailgun backend and corresponds
> to all known deploy paths.
> 
> Later, the generated puppet catalogs (see [0]) shall be put into the
> fixtures repo as well, as they will contain hundreds of thousands of
> auto-generated lines and are tightly related to the astute.yaml fixtures.
> 
> While the Noop tests framework itself indeed may be moved to another
> separate repo (later), we should keep our integration tests [1] in the
> fuel-library repository, which is "under test" by those tests.

Dmitry Ilyin did a great job and reworked the Fuel-library Noop Tests
Framework. He also provided docs to describe changes for developers.
There is a patch [0] to move the astute.yaml fixtures, noop tests docs
and the framework itself from the fuel-library to the fuel-noop-fixtures
repo [1].

With the patch, a full run of the Noop tests job shortens from 40 minutes
to 5 (8 times faster!) as it supports multiple rspec processes running in
parallel. It also provides advanced test reports. Please see the details in
the docs [2]. You can read them as-is or build them locally with tox. Later, the
docs will go to readthedocs.org as well.

Note, there is no impact for developers, and all changes are backwards
compatible with existing noop tests and Fuel jenkins CI jobs. Later we may
start to add new features from the reworked framework to make things
even better. So please take a look at the patch and the new docs.

PS. The Noop tests gate passed for the patch, though there is a CI -1 as we
disabled unrelated deployment gates with the "Fuel-CI: disable" tag.

[0] https://review.openstack.org/#/c/276816/
[1] https://git.openstack.org/cgit/openstack/fuel-noop-fixtures
[2] https://git.openstack.org/cgit/openstack/fuel-noop-fixtures/tree/doc

> 
> [0] https://blueprints.launchpad.net/fuel/+spec/deployment-data-dryrun
> [1]
> https://git.openstack.org/cgit/openstack/fuel-library/tree/tests/noop/spec/hosts
> 
>> On Tue, Jan 26, 2016 at 1:54 PM, Bogdan Dobrelya > > wrote:
>>
>> We are going to switch [0] to external astute.yaml fixtures for Noop
>> tests and remove them from the fuel-library repo as well.
>> Please make sure all new changes to astute.yaml fixtures will be
>> submitted now to the new location. Related mail thread [1].
>>
>> [0]
>> 
>> https://review.openstack.org/#/c/272480/1/doc/noop-guide/source/noop_fixtures.rst
>> [1]
>> 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082888.html
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Move virtualbox scripts to a separate directory

2016-02-17 Thread Fabrizio Soppelsa
Vladimir,
a dedicated repo - good to hear.
Do you have a rough estimate for how long this directory will be in a frozen 
state?

Thanks,
Fabrizio


> On Feb 15, 2016, at 5:16 PM, Vladimir Kozhukalov  
> wrote:
> 
> Dear colleagues,
> 
> I'd like to announce that we are next to moving fuel-main/virtualbox 
> directory to a separate git repository. This directory contains a set of bash 
> scripts that could be used to easily deploy Fuel environment and try to 
> deploy OpenStack cluster using Fuel. Virtualbox is used as a virtualization 
> layer.
> 
> Checklist for this change is as follows:
> Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1544271 
> 
> project-config patch https://review.openstack.org/#/c/279074/2 
>  (ON REVIEW)
> prepare upstream (DONE) https://github.com/kozhukalov/fuel-virtualbox 
> 
> .gitreview file (TODO)
> .gitignore file (TODO)
> MAINTAINERS file (TODO)
> remove old files from fuel-main (TODO)
> The virtualbox directory is not actively changed, so freezing this directory for 
> a while is not going to affect the development process significantly. 
> From this moment the virtualbox directory is declared frozen, and all changes in 
> this directory that are currently in progress should later be backported to the 
> new git repository (fuel-virtualbox).  
> 
> Vladimir Kozhukalov
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id parem as DB index?

2016-02-17 Thread Sean Dague
I did push a speculative patch which would address this by not exposing
the lookup by int id backdoor - https://review.openstack.org/#/c/281277/
- the results were better than I expected.

Andrey, is this going to negatively impact the openstack/ec2 project at
all if we do it?

-Sean


On 02/16/2016 06:49 AM, Sean Dague wrote:
> This was needed originally for ec2 support (which requires an integer
> id). It's not really the db index per se, just another id value which
> is valid (though hidden) for the server.
> 
> Before unwinding this issue we *must* make sure that the openstack/ec2
> project does not need access to it.
> 
> On 02/15/2016 09:36 PM, Alex Xu wrote:
>> I don't think our API supporting getting servers by DB index is a good idea. So
>> I prefer we remove it in the future with microversions. But for now,
>> yes, it is here.
>>
>> 2016-02-16 8:03 GMT+08:00 少合冯 > >:
>>
>> I guess others may ask the same questions. 
>>
>> I read the nova API doc: 
>> such as this API: 
>> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>>
>> GET /v2.1/​{tenant_id}​/servers/​{server_id}​
>> *Show server details*
>>
>>
>> *Request parameters*
>> Parameter   Style   Type         Description
>> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
>> server_id   URI     csapi:UUID   The UUID of the server.
>>
>>
>> But I can get the server by DB index: 
>>
>> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
>> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
>> {
>> "server": {
>> "OS-DCF:diskConfig": "MANUAL",
>> "OS-EXT-AZ:availability_zone": "nova",
>> "OS-EXT-SRV-ATTR:host": "shaohe1",
>> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
>> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
>> "OS-EXT-STS:power_state": 1,
>> "OS-EXT-STS:task_state": "migrating",
>> "OS-EXT-STS:vm_state": "error",
>> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
>> "OS-SRV-USG:terminated_at": null,
>> ..
>> }
>> }
>>
>> and the code really allow it use  DB index
>> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] network question and documentation

2016-02-17 Thread Fabrice Grelaud
Hi,

After a first test architecture of OpenStack (Juno, then upgraded to Kilo), 
installed from scratch, and because we use Ansible in our organization, we 
decided to deploy our next-generation OpenStack architecture from the 
openstack-ansible project.

I studied your documentation (very good work, and much appreciated: 
http://docs.openstack.org/developer/openstack-ansible/[kilo|liberty]/install-guide/index.html)
 and I need some more clarification regarding the network architecture.

I'm not sure this is the right mailing list because it's dev-oriented here; 
even so, I fear my request would get lost in the general openstack list, 
because it's very specific to the architecture proposed by your project (bond0 
(br-mngt, br-storage), bond1 (br-vxlan, br-vlan)).

I'm sorry if that is the case...

So, I would like to know if I'm going in the right direction.
We want to use both existing VLANs from our physical architecture 
inside OpenStack (provider VLANs) and "private tenant networks" with floating 
IPs offered from a flat network.

My question is about switch configuration:

On bond0:
the switch port connected to bond0 needs to be configured as a trunk with:
- the host management network (VLAN untagged, but can it be tagged?)
- the container (mngt) network (vlan-container)
- the storage network (vlan-storage)

On bond1:
the switch port connected to bond1 needs to be configured as a trunk with:
- the vxlan network (vlan-vxlan)
- VLAN X (an existing VLAN in our network infra)
- VLAN Y (an existing VLAN in our network infra)

Is that right?

And do I have to define a new network (a new VLAN, flat network) that offers 
floating IPs for private tenants (not using existing VLAN X or Y)? Does that new 
VLAN have to be connected to bond1 and/or bond0?
Could the host management network play this role?

Thank you for considering my request.
Regards

PS: Also, about the documentation, for better understanding and perhaps 
consistency:
On GitHub (https://github.com/openstack/openstack-ansible), in the file 
openstack_interface.cfg.example, you point out that for br-vxlan and 
br-storage, "only compute node have an IP on this bridge. When used by infra 
nodes, IPs exist in the containers and inet should be set to manual".

I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the 
install guide ("configuring the network on target hosts"), you proposed the 
/etc/network/interfaces for both the controller node (br-vxlan, br-storage: manual, 
without IP) and the compute node (br-vxlan, br-storage: static, with IP), along 
the lines of the sketch below.
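
Something like the following sketch is what I mean (a rough illustration only; 
bridge names follow the guide, the address is just a placeholder):

    # compute node: the host itself carries an IP on the bridge
    auto br-storage
    iface br-storage inet static
        address 172.29.244.20
        netmask 255.255.252.0

    # controller/infra node: the IPs live in the containers, so no host IP
    auto br-storage
    iface br-storage inet manual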


Fabrice GRELAUD
Université de Bordeaux

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] midcycle voice channel is 7778

2016-02-17 Thread Jim Rollenhagen
Hi,

We've moved the midcycle to channel 7778 on the infra conferencing
system - something is wrong with  (no audio coming through).

/me lets infra know as well

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Morgan Fainberg
On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:

> On 02/17/2016 08:42 AM, Doug Hellmann wrote:
> > Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> >> Le 17/02/2016 13:43, Henry Gessau a écrit :
> >>> And it looks like eventlet 0.18.3 breaks neutron:
> >>> https://bugs.launchpad.net/neutron/+bug/1546506
> >>
> >> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
> >> The requirement bot can produce patches to update eventlet, patches
> >> which would run integration tests using Nova, Keystone, Neutron on the
> >> new eventlet version.
> >>
> >> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> >> https://github.com/eventlet/eventlet/issues/296
> >> https://github.com/eventlet/eventlet/issues/299
> >> https://review.openstack.org/#/c/278147/
> >> https://bugs.launchpad.net/nova/+bug/1544801
> >>
> >> eventlet 0.18.3 broke OpenStack Neutron
> >> https://github.com/eventlet/eventlet/issues/301
> >> https://bugs.launchpad.net/neutron/+bug/1546506
> >>
> >> FYI eventlet 0.18.0 broke WSGI servers:
> >> https://github.com/eventlet/eventlet/issues/295
> >>
> >> It was followed quickly by eventlet 0.18.2 to fix this issue.
> >>
> >> Sadly, it looks like bugfix releases of eventlet don't include a single
> >> bugfix, but include also other changes. For example, 0.18.3 fixed the
> >> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default"
> optimization.
> >>
> >> IMHO the problem is not the release manager of eventlet, but more the
> >> lack of tests on eventlet, especially on OpenStack services.
> >>
> >> Current "Continious Delivery"-like with gates do detect bugs, yeah, but
> >> also block a lot of developers when the gates are broken. It doesn't
> >> seem trivial to investigate and fix eventlet issues.
> >>
> >> Victor
> >>
> >
> > Whether we cap or not, we should exclude the known broken versions.
> > It looks like getting back to a good version will also require
> > lowering the minimum version we support, since we have >=0.18.2
> > now.
> >
> > What was the last version of eventlet known to work?
>
> 0.18.2 works. On the Nova side we had a unit test failure which
> was quite synthetic, and we fixed it. I don't know what the keystone issue
> turned out to be.
>

I believe the keystone issue was a test specific issue, not a runtime
issue. We disabled the test.
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] midcycle voice channel is 7779

2016-02-17 Thread Jim Rollenhagen
So, someone has injected their hold music into 7778. We've now moved to
7779, sorry for the trouble :(

// jim

On Wed, Feb 17, 2016 at 07:01:22AM -0800, Jim Rollenhagen wrote:
> Hi,
> 
> We've moved the midcycle to channel 7778 on the infra conferencing
> system - something is wrong with  (no audio coming through).
> 
> /me lets infra know as well
> 
> // jim
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Heads up: Third Party CI systems

2016-02-17 Thread Lucas Alvares Gomes
Hi,

This email is just a heads up to the people working on 3rd Party CI
systems for Ironic.

There's a patch in the review queue now [0] that may break you guys
(the fix is simple). The patch is adding the ability to deploy nodes
using the {pxe, agent}_ipmitool drivers with VMs. But the problem is
that prior to that patch the Ironic DevStack
module assumes that if the node is not using an "ssh" driver (e.g.
pxe_ssh, agent_ssh) it's going to be deployed onto bare metal instead
of VMs.

The patch kills that assumption and adds a configuration variable
that needs to be set in the local.conf file if you want to use bare
metal. So please, if you can pull down that patch [0], add
"IRONIC_IS_HARDWARE=True" to your devstack local.conf and comment on
the patch, that would be great.
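
For clarity, a minimal sketch of what that local.conf change looks like (the
rest of the file is whatever your CI already uses; only the last line comes
from the patch):

    [[local|localrc]]
    # ... your existing settings ...
    IRONIC_IS_HARDWARE=True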

For context, the "ssh" drivers was just a workaround that are using to
be able to mock/replace the IPMI power interfaces so we could run
tests in gate with VMs. Now, with the help of some utilities
(virtualbmc, pyghmi, libvirt) we are able to test IPMI drivers in gate
and that's the main goal of the patch [0].

[0] https://review.openstack.org/#/c/280267/

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Puppet] New Rspec Noop Tests Matcher to Ensure Transitive Dependencies

2016-02-17 Thread Vladimir Kuklin
Fuelers

It seems that this change [0] to Fuel went unnoticed, but it may help you
in testing your puppet catalogues.

I was refactoring our code pieces that wait for the Load Balancer to
be ready to serve requests. I ended up putting things into a special define
called `wait_for_backend`. This broke direct dependencies between resources,
as there was only a transitive order between them (e.g. A->B->C) instead of
a direct one (A->C). And this broke our noop tests, as they were expecting to see
the latter ordering. This in turn required me to write an additional
matcher based on the puppet source code.

[0] https://review.openstack.org/#/c/272702/

Here is an example of the code you might want to use in your rspecs:

  expect(graph).to ensure_transitive_dependency(
    "Class[openstack::galera::status]",
    "Haproxy_backend_status[mysql]")

The `graph` object is lazily calculated within the noop tests shared_examples.

So, if you just need to check that one resource will execute after
another, you do not need 'that_comes_[before|after]' anymore.

Puppeters

Folks, this may be also interesting for you.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Doug Hellmann
Excerpts from Morgan Fainberg's message of 2016-02-17 07:10:34 -0800:
> On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:
> 
> > On 02/17/2016 08:42 AM, Doug Hellmann wrote:
> > > Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> > >> Le 17/02/2016 13:43, Henry Gessau a écrit :
> > >>> And it looks like eventlet 0.18.3 breaks neutron:
> > >>> https://bugs.launchpad.net/neutron/+bug/1546506
> > >>
> > >> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
> > >> The requirement bot can produce patches to update eventlet, patches
> > >> which would run integration tests using Nova, Keystone, Neutron on the
> > >> new eventlet version.
> > >>
> > >> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> > >> https://github.com/eventlet/eventlet/issues/296
> > >> https://github.com/eventlet/eventlet/issues/299
> > >> https://review.openstack.org/#/c/278147/
> > >> https://bugs.launchpad.net/nova/+bug/1544801
> > >>
> > >> eventlet 0.18.3 broke OpenStack Neutron
> > >> https://github.com/eventlet/eventlet/issues/301
> > >> https://bugs.launchpad.net/neutron/+bug/1546506
> > >>
> > >> FYI eventlet 0.18.0 broke WSGI servers:
> > >> https://github.com/eventlet/eventlet/issues/295
> > >>
> > >> It was followed quickly by eventlet 0.18.2 to fix this issue.
> > >>
> > >> Sadly, it looks like bugfix releases of eventlet don't include a single
> > >> bugfix, but include also other changes. For example, 0.18.3 fixed the
> > >> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default"
> > optimization.
> > >>
> > >> IMHO the problem is not the release manager of eventlet, but more the
> > >> lack of tests on eventlet, especially on OpenStack services.
> > >>
> > >> Current "Continious Delivery"-like with gates do detect bugs, yeah, but
> > >> also block a lot of developers when the gates are broken. It doesn't
> > >> seem trivial to investigate and fix eventlet issues.
> > >>
> > >> Victor
> > >>
> > >
> > > Whether we cap or not, we should exclude the known broken versions.
> > > It looks like getting back to a good version will also require
> > > lowering the minimum version we support, since we have >=0.18.2
> > > now.
> > >
> > > What was the last version of eventlet known to work?
> >
> > 0.18.2 works. On the Nova side we had a unit test failure which
> > was quite synthetic, and we fixed it. I don't know what the keystone issue
> > turned out to be.
> >
> 
> I believe the keystone issue was a test specific issue, not a runtime
> issue. We disabled the test.
> --Morgan

OK. Can someone from the neutron team verify that 0.18.2 works? If so,
we can just exclude 0.18.3 and reset the constraint.
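
For illustration only, the exclusion could look something like this in
global-requirements.txt (a sketch, assuming 0.18.2 is confirmed good and
remains the floor):

    eventlet>=0.18.2,!=0.18.3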

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Cheng, Yingxin
On Wed, 17 February 2016, Sylvain Bauza wrote
On 17/02/2016 12:59, Chris Dent wrote:
On Wed, 17 Feb 2016, Cheng, Yingxin wrote:


To better illustrate the differences between shared-state, resource-
provider and legacy schedulers, I've drawn 3 simplified pictures [1]
emphasizing the location of resource view, the location of claim and
resource consumption, and the resource update/refresh pattern in three
kinds of schedulers. Hoping I'm correct in the "resource-provider
scheduler" part.

That's a useful visual aid, thank you. It aligns pretty well with my
understanding of each idea.

A thing that may be missing, which may help in exploring the usefulness
of each idea, is a representation of resources which are separate
from compute nodes and shared by them, such as shared disk or pools
of network addresses. In addition some would argue that we need to
see bare-metal nodes for a complete picture.

One of the driving motivations of the resource-provider work is to
make it possible to adequately and accurately track and consume the
shared resources. The legacy scheduler currently fails to do that
well. As you correctly point out, it does this by having "strict 
centralized consistency" as a design goal.

So, to be clear, I'm really happy to see the resource-providers series for many 
reasons:
 - it will help us get a nice facade for getting the resources and 
attributing them
 - it will help a shared-storage deployment by making sure that we don't have 
some resource problems when the resource is shared
 - it will create a possibility for external resource providers to provide some 
resource types to Nova so the Nova scheduler could use them (like Neutron 
related resources)

I really want to have that implemented in Mitaka and Newton, and I'm totally 
on board and supporting it.

TBC, the only problem I see with the series is [2], not the whole series.

@cdent:
As far as I know, some resources are defined as "shared" simply because they are 
not resources of the compute node service. In other words, the compute node 
resource tracker does not have authority over those "shared" resources. For 
example, the "shared" storage resources are actually managed by the storage 
service, and the "shared" network resource "IP pool" is actually owned by the 
network service. If all the resources labeled "shared" are so only because they 
are not owned by compute node services, the 
shared-resource-tracking/consumption problem can be solved by implementing 
resource trackers in all the authorized services. Those resource trackers would 
constantly provide incremental updates to schedulers, and have the 
responsibility to reserve and consume resources independently and in a 
distributed manner, no matter where they come from: compute service, storage 
service, network service, etc.

As can be seen in the illustrations [1], the main compatibility issue
between shared-state and resource-provider scheduler is caused by the
different location of claim/consumption and the assumed consistent
resource view. IMO unless the claims are allowed to happen in both
places(resource tracker and resource-provider db), it seems difficult
to make shared-state and resource-provider scheduler work together.

Yes, but doing claims twice feels intuitively redundant.

As I've explored this space I've often wondered why we feel it is
necessary to persist the resource data at all. Your shared-state
model is appealing because it lets the concrete resource(-provider)
be the authority about its own resources. That is information which
it can broadcast as it changes or on intervals (or both) to other
things which need that information. That feels like the correct
architecture in a massively distributed system, especially one where
resources are not scarce.

So, IMHO, we should only have the compute nodes being the authority for 
allocating resources. There are many reasons for that, which I provided in the 
spec review, but I can repeat them here:
#1 If we consider that an external system, as a resource provider, 
will provide a single resource class usage (like network segment availability), 
it will still require the instance to be spawned *for* consuming that resource 
class, even if the scheduler accounts for it. That would mean that the 
scheduler would have to manage a list of allocations with TTL, and periodically 
verify that the allocation succeeded by asking the external system (or getting 
feedback from the external system). See, that's racy.
#2 The scheduler is just a decision maker; in any case it doesn't 
account for the real instance creation (it doesn't hold ownership of the 
instance). Making it accountable for instance usage is heavily 
difficult. Take for example a request for CPU pinning or NUMA affinity. The 
user can't really express which pin of the pCPU he will get, that's the compute 
node which will do that for him. Of course, the scheduler will help pick a 
host that can fit the request, but the real pinning 

Re: [openstack-dev] [Fuel][Puppet] New Rspec Noop Tests Matcher to Ensure Transitive Dependencies

2016-02-17 Thread Bogdan Dobrelya
On 17.02.2016 16:23, Vladimir Kuklin wrote:
> Fuelers
> 
> It seems that this change [0] to Fuel went unnoticed, but it may help
> you in testing your puppet catalogues. 
> 
> I was refactoring the pieces of our code that actually wait for the Load Balancer
> to be ready to serve requests. I ended up putting things into a special
> define called `wait_for_backend`. This broke direct dependencies between
> resources as there was only a transitive order between them (e.g. A->B->C)
> instead of a direct one (A->C). And this broke our noop tests as they were
> expecting to see the latter ordering. This in turn required me to write
> an additional matcher based on the puppet source code.
> 
> [0] https://review.openstack.org/#/c/272702/
> 
> Here is an example of the code you might want to use in your rspecs:
> 
> `expect(graph).to
> ensure_transitive_dependency("Class[openstack::galera::status]",
> "Haproxy_backend_status[mysql]")`
> graph is lazily calculated within the noop tests' shared_examples.
> 
> So, if you need to just check if one resource will execute after
> another, you do not need 'that_comes_[before|after]' anymore.

Thank you Vladimir for this improvement. Please also do not forget to
address the changes in the docs. Note, a new location is expected to be
in the fuel-noop-fixtures repo [0], so you may want to submit it
directly there.

[0] https://git.openstack.org/cgit/openstack/fuel-noop-fixtures/tree/doc

> 
> Puppeters
> 
> Folks, this may be also interesting for you.
> 
> -- 
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru 
> vkuk...@mirantis.com 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread Gyorgy Szombathelyi
Hi all,

I did some more debugging with pdb, and the problem seems to be somehow 
connected to this eventlet issue:
https://github.com/eventlet/eventlet/issues/30

I don't have a clue if it has any connection to the Rabbit heartbeat thing, 
but if I change the self.wait(0)
to self.wait(0.1) in eventlet/hubs/hub.py, then the CPU usage drops 
significantly.
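
For reference, here is a minimal sketch (not taken from the ceilometer code; 
the broker URL and topic below are assumptions) of an oslo.messaging 
notification listener using the eventlet executor - running something like 
this with no traffic and watching it with strace/top should be enough to 
observe the idle busy-polling described above:

from oslo_config import cfg
import oslo_messaging


class DummyEndpoint(object):
    # Called for notifications sent with 'info' priority; just print them.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)


def main():
    # Assumption: a local RabbitMQ broker with default credentials.
    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [DummyEndpoint()], executor='eventlet')
    listener.start()
    # With no traffic at all this process should sit near 0% CPU; the
    # behaviour reported here is a tight epoll_wait(..., 0) loop instead.
    listener.wait()


if __name__ == '__main__':
    main()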

Br,
György

> -Original Message-
> From: Gyorgy Szombathelyi
> [mailto:gyorgy.szombathe...@doclerholding.com]
> Sent: Wednesday, 17 February 2016 14:47
> To: 'openstack-dev@lists.openstack.org'  d...@lists.openstack.org>
> Subject: Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU
> usage
> 
> >
> > hi,
> Hi Gordon,
> 
> >
> > this seems to be similar to a bug we were tracking in earlier[1].
> > basically, any service with a listener never seemed to idle properly.
> >
> > based on earlier investigation, we found it relates to the heartbeat
> > functionality in oslo.messaging. i'm not entirely sure if it's because
> > of it or some combination of things including it. the short answer, is
> > to disable heartbeat by setting heartbeat_timeout_threshold = 0 and
> > see if it fixes your cpu usage. you can track the comments in bug.
> 
> As I see in the bug report, you mention that the problem is only with the
> notification agent, and the collector is fine. I'm in the entirely
> opposite situation.
> 
> strace-ing the two processes:
> 
> Notification agent:
> --
> epoll_wait(4, {}, 1023, 43) = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_ctl(4, EPOLL_CTL_DEL, 8,
> {EPOLLWRNORM|EPOLLMSG|EPOLLERR|EPOLLHUP|EPOLLRDHUP|EPOLLON
> ESHOT|EPOLLET|0x1ec88000, {u32=32738, u64=24336577484324834}}) = 0
> recvfrom(8, 0x7fe2da3a4084, 7, 0, 0, 0) = -1 EAGAIN (Resource temporarily
> unavailable) epoll_ctl(4, EPOLL_CTL_ADD, 8,
> {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=8,
> u64=40046962262671368}}) = 0
> epoll_wait(4, {}, 1023, 1)  = 0
> epoll_ctl(4, EPOLL_CTL_DEL, 24,
> {EPOLLWRNORM|EPOLLMSG|EPOLLERR|EPOLLHUP|EPOLLRDHUP|EPOLLON
> ESHOT|EPOLLET|0x1ec88000, {u32=32738, u64=24336577484324834}}) = 0
> recvfrom(24, 0x7fe2da3a4084, 7, 0, 0, 0) = -1 EAGAIN (Resource temporarily
> unavailable) epoll_ctl(4, EPOLL_CTL_ADD, 24,
> {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP, {u32=24,
> u64=40046962262671384}}) = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> 
> ceilometer-collector:
> -
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> epoll_wait(4, {}, 1023, 0)  = 0
> 
> So the notification agent does something at least between the crazy epoll()s.
> 
> It is the same with or without the heartbeat_timeout_threshold = 0 in
> [oslo_messaging_rabbit].
> Then something must still be wrong with the listeners; the bug [1] should not
> be closed, I think.
> 
> Br,
> György
> 
> >
> > [1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135
> >
> > On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
> > > Hi!
> > >
> > > Excuse me, if the following question/problem is a basic one, already
> > > known problem, or even a bad setup on my side.
> > >
> > > I just noticed that the most CPU consuming process in an idle
> > > OpenStack cluster is ceilometer-collector. When there are only
> > > 10-15 samples/minute, it just constantly eats about 15-20% CPU.
> > >
> > > I started to debug, and noticed that it epoll()s constantly with a
> > > zero timeout, so it seems it just polls for events in a tight loop.
> > > I found out that the _maybe_ the python side of the problem is
> > > oslo_messaging.get_notification_listener() with the eventlet executor.
> > > A quick search showed that this function is only used in
> > > aodh_listener and ceilometer_collector, and both are using
> > > relatively high CPU even if they're just 'listening'.
> > >
> > > My skills for further debugging is limited, but I'm just curious why
> > > this listener uses so much CPU, while other executors, which are
> > > using eventlet, are not that bad. Excuse me, if it was a basic
> > > question, already known problem, or even a bad setup on my side.
> > >
> > > Br,
> > > György
> > >
> > >
> >
> __
> > 
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > --
> > gord
> >
> >
> __
> > 
> > OpenStack Development Mailing List (not for usage qu

Re: [openstack-dev] [Nova][API] Does nova API allow the server_id parem as DB index?

2016-02-17 Thread Anne Gentle
On Wed, Feb 17, 2016 at 8:49 AM, Sean Dague  wrote:

> I did push a speculative patch which would address this by not exposing
> the lookup by int id backdoor - https://review.openstack.org/#/c/281277/
> - the results were better than I expected.
>
> Andrey, is this going to negatively impact the openstack/ec2 project at
> all if we do it?
>

Bug tracking here: https://bugs.launchpad.net/nova/+bug/1545922
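
(For context, a minimal sketch - not the actual nova code - of the two lookup 
paths being discussed; get_instance and db_api are illustrative names:)

from oslo_utils import uuidutils


def get_instance(context, db_api, instance_id):
    # Normal path: look the server up by its UUID.
    if uuidutils.is_uuid_like(instance_id):
        return db_api.instance_get_by_uuid(context, instance_id)
    # EC2-era backdoor: a plain integer is treated as the internal DB index,
    # which is what the patch above proposes to stop exposing.
    return db_api.instance_get(context, int(instance_id))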


>
> -Sean
>
>
> On 02/16/2016 06:49 AM, Sean Dague wrote:
> > This was needed originally for ec2 support (which requires an integer
> > id). It's not really the db index per se, just another id value which
> > is valid (though hidden) for the server.
> >
> > Before unwinding this issue we *must* make sure that the openstack/ec2
> > project does not need access to it.
> >
> > On 02/15/2016 09:36 PM, Alex Xu wrote:
> >> I don't think having our API support getting servers by DB index is a good idea. So
> >> I'd prefer we remove it in the future with microversions. But for now,
> >> yes, it is here.
> >>
> >> 2016-02-16 8:03 GMT+08:00 少合冯  >> >:
> >>
> >> I guess others may ask the same questions.
> >>
> >> I read the nova API doc:
> >> such as this API:
> >> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
> >>
> >> GET /v2.1/​{tenant_id}​/servers/​{server_id}​
> >> *Show server details*
> >>
> >>
> >> *Request parameters*
> >> Parameter   Style   Type         Description
> >> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
> >> server_id   URI     csapi:UUID   The UUID of the server.
> >>
> >>
> >> But I can get the server by DB index:
> >>
> >> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
> >>
> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
> >> {
> >> "server": {
> >> "OS-DCF:diskConfig": "MANUAL",
> >> "OS-EXT-AZ:availability_zone": "nova",
> >> "OS-EXT-SRV-ATTR:host": "shaohe1",
> >> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
> >> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
> >> "OS-EXT-STS:power_state": 1,
> >> "OS-EXT-STS:task_state": "migrating",
> >> "OS-EXT-STS:vm_state": "error",
> >> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
> >> "OS-SRV-USG:terminated_at": null,
> >> ..
> >> }
> >> }
> >>
> >> and the code really allows using the DB index:
> >>
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
> >>
> >>
>  __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
>
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-17 Thread Antoine Cabot
+1

On Wed, Feb 17, 2016 at 3:07 PM, Jean-Émile DARTOIS <
jean-emile.dart...@b-com.com> wrote:

> +2
>
>
> Jean-Emile
> DARTOIS
>
> {P} Software Engineer
> Cloud Computing
> {T} +33 (0) 2 56 35 8260
> {W} www.b-com.com
> --
> *From:* Joe Cropper 
> *Sent:* Wednesday, 17 February 2016 15:06
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to
> Watcher Core
>
> +1
>
> On Feb 17, 2016, at 8:05 AM, David TARDIVEL 
> wrote:
>
> Team,
>
>
> I’d like to promote Vincent Francoise to the core team. Vincent has done
> great work
> on code reviews and has proposed a lot of patchsets. He is currently the
> most active
> non-core reviewer on the Watcher project, and today he has a very good vision of
> Watcher.
>
> I think he would make an excellent addition to the team.
>
> Please vote
>
>
> David TARDIVEL
> b<>COM
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Sylvain Bauza
(sorry, quoting off-context, but I feel it's a side point, not the main 
discussion)



On 17/02/2016 16:40, Cheng, Yingxin wrote:


IMHO, the authority to allocate resources is not limited to compute 
nodes, but also includes the network service, storage service and all other 
services which have the authority to manage their own resources. Those 
“shared” resources come from external services (i.e. systems) 
which are not the compute service. They all have the responsibility to push 
their own resource updates to schedulers, and to make resource reservations 
and consumptions. The resource provider series provides a flexible 
representation of all kinds of resources, so that the scheduler can handle 
them without specific knowledge of each resource.




No, IMHO, the authority has to stay with the entity which physically creates 
the instance and owns its lifecycle. What the user wants when booting is 
an instance, not something else. He can express some SLA by providing 
more context, implicitly (thru aggregates or flavors) or explicitly (thru 
hints or AZs), that could be not compute-related (say a network segment 
locality or a volume-related thing), but at the end it will create an 
instance on a compute node that matches the requirements.


Cinder and Neutron shouldn't manage which instances are on which hosts; 
they just have to provide the resource types and possible allocations 
(like a taken port).


-Sylvain

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Henry Gessau
Doug Hellmann  wrote:
> Excerpts from Morgan Fainberg's message of 2016-02-17 07:10:34 -0800:
>> On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:
>>
>>> On 02/17/2016 08:42 AM, Doug Hellmann wrote:
 Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> On 17/02/2016 13:43, Henry Gessau wrote:
>> And it looks like eventlet 0.18.3 breaks neutron:
>> https://bugs.launchpad.net/neutron/+bug/1546506
>
> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
> The requirement bot can produce patches to update eventlet, patches
> which would run integration tests using Nova, Keystone, Neutron on the
> new eventlet version.
>
> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> https://github.com/eventlet/eventlet/issues/296
> https://github.com/eventlet/eventlet/issues/299
> https://review.openstack.org/#/c/278147/
> https://bugs.launchpad.net/nova/+bug/1544801
>
> eventlet 0.18.3 broke OpenStack Neutron
> https://github.com/eventlet/eventlet/issues/301
> https://bugs.launchpad.net/neutron/+bug/1546506
>
> FYI eventlet 0.18.0 broke WSGI servers:
> https://github.com/eventlet/eventlet/issues/295
>
> It was followed quickly by eventlet 0.18.2 to fix this issue.
>
> Sadly, it looks like bugfix releases of eventlet don't include only a single
> bugfix, but also other changes. For example, 0.18.3 fixed
> bug #296 but introduced the "wsgi: TCP_NODELAY enabled by default"
>>> optimization.
>
> IMHO the problem is not the release manager of eventlet, but more the
> lack of tests on eventlet, especially on OpenStack services.
>
> Current "Continious Delivery"-like with gates do detect bugs, yeah, but
> also block a lot of developers when the gates are broken. It doesn't
> seem trivial to investigate and fix eventlet issues.
>
> Victor
>

 Whether we cap or not, we should exclude the known broken versions.
 It looks like getting back to a good version will also require
 lowering the minimum version we support, since we have >=0.18.2
 now.

 What was the last version of eventlet known to work?
>>>
>>> 0.18.2 works. On the Nova side we had a failure around unit tests which
>>> was quite synthetic, and we fixed it. I don't know what the keystone issue
>>> turned out to be.
>>>
>>
>> I believe the keystone issue was a test specific issue, not a runtime
>> issue. We disabled the test.
>> --Morgan
> 
> OK. Can someone from the neutron team verify that 0.18.2 works? If so,
> we can just exclude 0.18.3 and reset the constraint.

I can confirm that neutron works with 0.18.2 as far as we know.
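
(For reference, the resulting pin in global-requirements would presumably look 
something like the line below; the exact form is up to the requirements team:)

eventlet>=0.18.2,!=0.18.3  # 0.18.3 breaks neutron (bug 1546506)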


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Cheng, Yingxin

On Wed, 17 February 2016, Sylvain Bauza wrote

(sorry, quoting off-context, but I feel it's a side point, not the main 
discussion)
On 17/02/2016 16:40, Cheng, Yingxin wrote:
IMHO, the authority to allocate resources is not limited to compute nodes, but 
also includes the network service, storage service and all other services which 
have the authority to manage their own resources. Those "shared" resources 
come from external services (i.e. systems) which are not the compute service. They 
all have the responsibility to push their own resource updates to schedulers, 
and to make resource reservations and consumptions. The resource provider series 
provides a flexible representation of all kinds of resources, so that the scheduler 
can handle them without specific knowledge of each resource.

No, IMHO, the authority has to stay with the entity which physically creates the 
instance and owns its lifecycle. What the user wants when booting is an 
instance, not something else. He can express some SLA by providing more context, 
implicitly (thru aggregates or flavors) or explicitly (thru hints or AZs), that 
could be not compute-related (say a network segment locality or a 
volume-related thing), but at the end it will create an instance on a compute 
node that matches the requirements.

Cinder and Neutron shouldn't manage which instances are on which hosts; they 
just have to provide the resource types and possible allocations (like a taken 
port).

-Sylvain

Yes, on second thought: the Cinder project also has its own scheduler, so it is not 
the responsibility of nova-scheduler to schedule all pieces of resources. 
Nova-scheduler is responsible for booting instances; its scope is limited to 
compute services.
-Yingxin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread David Moreau Simard
Greetings,

(Note: cross-posted between rdo-list and openstack-dev to reach a
larger audience)

Today, because of the branding and the name "RDO Manager", you might
think that it's something other than TripleO - either something
entirely different or perhaps with downstream patches baked in.
You would not be the only one because the community, the users and the
developers alike have shared their confusion on that topic.

The truth is, as it stands right now, "RDO Manager" really is "TripleO".
There is no code or documentation differences.

I feel the only thing that is different is the strategy around how we
test TripleO to ensure the stability of RDO packages but it's already
in the process of being sent upstream [1] because we're convinced it's
the best way forward.

Historically, RDO Manager and TripleO were different things.
Today this is no longer the case and we plan on keeping it that way.

With this in mind, we would like to drop the RDO manager branding and
use TripleO instead.
Not only would we clear the confusion on the topic of what RDO Manager
really is but it would also strengthen the TripleO name.

We would love the RDO community to chime in on this and give their
feedback as to whether or not this is a good initiative.
We will proceed to a formal vote on $subject at the next RDO meeting
on Wednesday, 24th Feb, 2016 1500 UTC [2]. Feel free to join us on
#rdo on freenode.

Thanks,

[1]: https://review.openstack.org/#/c/276810/
[2]: https://etherpad.openstack.org/p/RDO-Meeting

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStackClient] New core team members

2016-02-17 Thread Dean Troyer
I would like to announce the addition of Richard Theis and Tang Chen to the
OpenStackClient core team.  They both have been contributing quality
reviews and code for some time now, particularly in the areas of SDK
integration and new Network commands.

Thank you Richard and Tang for your work and welcome to the core team.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]Network software verification survey

2016-02-17 Thread Arseniy Zaostrovnykh

Hello,

I am a PhD student, and I research the possibilities of verification of 
software dataplane applications. In order to maximize the utility of 
our efforts for the networking community, our team asks you to answer a 
couple of questions about your preferences regarding network 
application verification:


http://goo.gl/cRE0Gy

Please ignore this letter if you already participated in the survey in 
December.


This short form should take no more than 2-3 minutes of your time. If 
you have any questions, please, e-mail me personally 
(arseniy.zaostrovn...@epfl.ch).


--
Respectfully,
Arseniy.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-17 Thread Aleksandr Didenko
> This requires the loss of all of the features in the newer version of
fuel since it relies on the older version of the serialized data from
nailgun.

Yes. But isn't that how "stable" branches are supposed to work? Introducing
new features into "stable" branches will make them not so "stable", right?
Even if these new features are introduced in the composition layer or
configuration data. Just an example: network transformations in astute.yaml
that are being translated into actual network configuration.

> Yes, this is, in part,  about taking advantage of new fuel features on
stable openstack releases, we are almost always behind and the previous
release(s) supported this already.

Introducing new features to stable releases will require a full cycle of
testing. So, basically, it will affect the whole development process.

> In addition we currently don't allow for new clusters to be deployed this
way.

We can remove this restriction. Nailgun is able to serialize data for
previous releases because that's how it supports adding new nodes to older
environments after upgrade, so it should not be a problem.

Regards,
Alex

On Fri, Feb 12, 2016 at 10:19 PM, Andrew Woodward  wrote:

>
>
> On Thu, Feb 11, 2016 at 1:03 AM Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>>
>> > So what is open? The composition layer.
>>
>> We can have different composition layers for every release and it's
>> already implemented in releases - separate puppet modules/manifests dir for
>> every release.
>>
>
> This requires the loss of all of the features in the newer version of fuel
> since it relies on the older version of the serialized data from nailgun.
> In addition we currently don't allow for new clusters to be deployed this
> way.
>
>
>>
>> > Currently, we just abandon support for previous versions in the
>> composition layer and leave them to only be monuments in the
>> stable/ series branches for maintenance. If we instead started
>> making changes (forwards or backwards that) change the calls based on the
>> openstack version [5] then we would be able to change the calls based on
>> then needs of that release, and the puppet-openstack modules we are working
>> with.
>>
>> So we'll have tons of conditionals in composition layer, right? Even if
>> some puppet-openstack class have just one new parameter in new release,
>> then we'll have to write a conditional and duplicate class declaration. Or
>> write complex parameters hash definitions/merges and use
>> create_resources(). The more releases we want to support the more
>> complicated composition layer will become. That won't make contribution to
>> fuel-library easier and even can greatly reduce development speed. Also are
>> we going to add new features to stable releases using this workflow with
>> single composition layer?
>>
>> Yes, we need conditionals in the composition layer, we already need these
> to not jam the gate when we switch between stable and master, we might as
> well maintain them properly so that we can start running multiple versions
>
> Yes, this is, in part,  about taking advantage of new fuel features on
> stable openstack releases, we are almost always behind and the previous
> release(s) supported this already.
>
> If it's only supported in the newer version, then we would have a similar
> problem with enabling the feature anyways as our current process results in
> us developing on stable openstack with the newer fuel until late in the
> cycle, when we switch packaging over.
>
>>
>> > Testing master while keeping stable. Given the ability to conditional
>> what source of openstack bits, which versions of manifests we can start
>> testing both master and keep health on stable. This would help accelerate
>> both fuel development and deploying and testing development versions of
>> openstack
>>
>> I'm sorry, but I don't see how we can accelerate things by making
>> composition layer more and more complicated. If we're going to run CI and
>> swarm for all of the supported releases on the ISO, that would rather
>> decrease speed of development and testing drastically. Also aren't we
>> "testing both master and keep health on stable" right now by running tests
>> for master and stable versions of Fuel?
>>
>> No, this is about deploying stable and master from the same version of
> Fuel, with the new features from fuel. As we develop new features in fuel
> we frequently run into problems simply because the openstack version we are
> deploying is broken; this would allow for gating on stable and edge testing
> master until it can become the new stable.
>
>>
>> > Deploying stable and upgrading later. Again given the ability to deploy
>> multiple OpenStack versions within the same Fuel version, teams focused on
>> upgrades can take advantage of the latest enhancements in fuel to work the
>> upgrade process more easily, as an added benefit this would eventually lead
>> to better support for end user upgrades too.
>>
>> Using the same composition layers is not required for this. A

Re: [openstack-dev] [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread Fox, Kevin M
+1. There are already arguably too many names involved in OpenStack, let alone 
having multiple names for the same thing. :)

Thanks,
Kevin

From: rdo-list-boun...@redhat.com [rdo-list-boun...@redhat.com] on behalf of 
David Moreau Simard [d...@redhat.com]
Sent: Wednesday, February 17, 2016 8:27 AM
To: OpenStack Development Mailing List (not for usage questions); rdo-list
Subject: [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

Greetings,

(Note: cross-posted between rdo-list and openstack-dev to reach a
larger audience)

Today, because of the branding and the name "RDO Manager", you might
think that it's something other than TripleO - either something
entirely different or perhaps with downstream patches baked in.
You would not be the only one because the community, the users and the
developers alike have shared their confusion on that topic.

The truth is, as it stands right now, "RDO Manager" really is "TripleO".
There is no code or documentation differences.

I feel the only thing that is different is the strategy around how we
test TripleO to ensure the stability of RDO packages but it's already
in the process of being sent upstream [1] because we're convinced it's
the best way forward.

Historically, RDO Manager and TripleO were different things.
Today this is no longer the case and we plan on keeping it that way.

With this in mind, we would like to drop the RDO manager branding and
use TripleO instead.
Not only would we clear the confusion on the topic of what RDO Manager
really is but it would also strengthen the TripleO name.

We would love the RDO community to chime in on this and give their
feedback as to whether or not this is a good initiative.
We will proceed to a formal vote on $subject at the next RDO meeting
on Wednesday, 24th Feb, 2016 1500 UTC [2]. Feel free to join us on
#rdo on freenode.

Thanks,

[1]: https://review.openstack.org/#/c/276810/
[2]: https://etherpad.openstack.org/p/RDO-Meeting

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

___
Rdo-list mailing list
rdo-l...@redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscr...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Chris Dent

On Wed, 17 Feb 2016, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:

A reason _I_[1] think we need to limit things is because from the
outside OpenStack doesn't really look like anything that you can put
a short description on. It's more murky than that and it is hard to
experience positive progress in a fog. Many people react to this fog
by focusing on their specific project rather than OpenStack at
large: At least there they can see their impact.


I've never understood this argument. OpenStack is a community
creating a collection of tools for building clouds. Each part
implements a different set of features, and you only need the parts
for the features you want.  In that respect, it's no different from
a Linux distro. You need a few core pieces (kernel, init, etc.),
and you install the other parts based on your use case (hardware
drivers, $SHELL, $GUI, etc.).


Ah. I think this gets to the heart of the matter. "OpenStack is a
[...] collection of tools for building clouds" is not really how I
think about it, so perhaps that's where I experience a problem. I
wonder how many people feel the way you do and how many people feel
more like I do, which is: I want OpenStack to be a thing that I, as
an individual without the help of a "vendor", can use to deploy a
cloud (that is easy for me and my colleagues to use) if I happen to
have >1 (or even just 1) pieces of baremetal lying around.

It's that "vendor" part that is the rub and to me starts bringing us
back into the spirit of "open core" that started the original
thread. If I need a _vendor_ to make use of the main features of
OpenStack then good golly that makes me want to cry, and want to fix
it.

To fix it, you're right, it does need a greater sense of "product"
"instead" of kit and the injection of opinions about reasonable
defaults and expectations of some reasonable degree of sameness
between different deployments of OpenStack. This is, in fact, what
much of the cross-project work that is happening now is trying to
accomplish.


This results in increasing the fog because cross-project concerns (which
help unify the vision and actuality that is OpenStack) get less
attention and the cycle deepens.


I'm not sure cross-project issues are really any worse today than
when I started working on OpenStack a few years ago. In fact, I think
they're significantly better.


I agree it is much better but it can be better still with some
reasonable sense of us all working in a similar direction. The
addition of "users" to the mission is helpful.


Architecturally and technically, project teams have always wanted
to go their own way to some degree. Experimentation with different
approaches and tools to address similar problems like that is good,
and success has resulted in the adoption of more common tools like
third-party WSGI frameworks, test tools, and patterns like the specs
review process and multiple teams managing non-client libraries.
So on a technical front we're doing better than the days where we
all just copied code out of nova and modified it for our own purposes
without looking back.


History is always full of weird stuff.


We've had to change our approaches to dealing with the growth,
and we still have a ways to go (much of it uphill), but I'm not
prepared to say that we've failed to meet the challenge.


I fear that I gave you the wrong impression. I wasn't trying to imply
that we are doing poorly at cross project things, rather that if we had
fewer projects we could do even better at cross project things (as a
result of fewer combinations).

Also that growth should not be considered a good thing in and of itself.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread John Trowbridge
+1. I will also add, for reference, that there is no other project in RDO
that is renamed/rebranded. In fact, even the TripleO packages in RDO
have the same naming as the upstream projects.

On 02/17/2016 11:27 AM, David Moreau Simard wrote:
> Greetings,
> 
> (Note: cross-posted between rdo-list and openstack-dev to reach a
> larger audience)
> 
> Today, because of the branding and the name "RDO Manager", you might
> think that it's something other than TripleO - either something
> entirely different or perhaps with downstream patches baked in.
> You would not be the only one because the community, the users and the
> developers alike have shared their confusion on that topic.
> 
> The truth is, as it stands right now, "RDO Manager" really is "TripleO".
> There is no code or documentation differences.
> 
> I feel the only thing that is different is the strategy around how we
> test TripleO to ensure the stability of RDO packages but it's already
> in the process of being sent upstream [1] because we're convinced it's
> the best way forward.
> 
> Historically, RDO Manager and TripleO were different things.
> Today this is no longer the case and we plan on keeping it that way.
> 
> With this in mind, we would like to drop the RDO manager branding and
> use TripleO instead.
> Not only would we clear the confusion on the topic of what RDO Manager
> really is but it would also strengthen the TripleO name.
> 
> We would love the RDO community to chime in on this and give their
> feedback as to whether or not this is a good initiative.
> We will proceed to a formal vote on $subject at the next RDO meeting
> on Wednesday, 24th Feb, 2016 1500 UTC [2]. Feel free to join us on
> #rdo on freenode.
> 
> Thanks,
> 
> [1]: https://review.openstack.org/#/c/276810/
> [2]: https://etherpad.openstack.org/p/RDO-Meeting
> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-17 Thread Bogdan Dobrelya
> So we'll have tons of conditionals in composition layer, right? Even if
> some puppet-openstack class have just one new parameter in new release,
> then we'll have to write a conditional and duplicate class declaration. Or
> write complex parameters hash definitions/merges and use
> create_resources(). The more releases we want to support the more
> complicated composition layer will become. That won't make contribution to
> fuel-library easier and even can greatly reduce development speed. Also are
> we going to add new features to stable releases using this workflow with
> single composition layer?

As I can see from an example composition [0], such code would be an
unmaintainable burden for the development and QA process. Next, imagine a
case of incompatible *providers*, like network transformations - shall
we put multiple if/case statements into the ruby providers as well?

That is not the way to go for a composition, sorry. The idea may be
doable, I agree, but perhaps in another way.

(tl;dr)
By the way, this reminded me of the "The wrong abstraction" [1] article and
discussion. I agree with the author and believe one should not group
code (here it is versioned puppet modules & compositions) in a way which
introduces abstractions (here a super-composition) with multiple
if/else/case statements and hardcoded things to switch the execution flow based
on the version of things. Just keep the code as is - partially duplicated by
different releases in separate directories with separate modules and
composition layers - and please think of better solutions.

There is also a nice comment: "...try to optimize my code around
reducing state, coupling, complexity and code, in that order". I
understood that as a set of "golden rules":
- Make it more tightly coupled to decrease (shared) state
- Make it more complex to decrease coupling
- Make it duplicated to decrease complexity (e.g. abstractions)

(tl;dr, I mean it)
So, bringing those here.
- The shared state is perhaps Nailgun's world view of all data and the
versioned serializers for supported releases, which know how to convert
the single latest data to any of the supported previous versions.
- Decoupling we do by putting modules with their compositions into different
versioned /etc/puppet subdirectories. I'm not sure how we decouple the
Nailgun serializers though.
- Complexity is how we compose those modules / write the logic of the serializers.
- Duplication is puppet classes (and providers) with slightly different
call parameters from version to version. Sometimes not even backwards
compatible. Probably the same for the serializers?

So, we're going to *increase complexity* by introducing
super-compositions for multiple OpenStack releases. Not sure what will
happen to the serializers - any volunteers to clarify the impact? And the
Rules "allow" us to do so only in order to decrease either coupling or
shared state, which is not the case, AFAICT. Modules with compositions
are separated well by OpenStack versions, so there is nothing to decrease. Might
that change decrease shared state? I'm not sure if it even applies
here. Puppet versioning shares nothing. Only Nailgun folks may know the
answer.

[0]
https://review.openstack.org/#/c/281084/1/deployment/puppet/ceph/manifests/nova_compute.pp
[1] https://news.ycombinator.com/item?id=11032296

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-17 Thread Michael Krotscheck
We (that is, the cores, contributors, and consumers that I've been
collaborating with over the past year on this) came to the consensus that
leaving the cors middleware as generic & configurable as possible was
preferable, and that an openstack-specific version that automatically
initializes itself from keystone's service catalog and trusted dashboards
might be a next step. I'm sure the oslo core team can chime in, but the
arguments that I recall were:

1- Maintaining a list of headers in a repository separate from projects
that use them is undesirable and likely to bitrot.
2- Oslo's backwards-compatibility policy means that new and/or modified
headers could only be added, not removed. This would result in a long list
of no-longer used headers (https://tools.ietf.org/html/rfc6648 was
mentioned).
3- We could not guarantee that downstream consumers of the CORS middleware
don't already exist, and they should not be subject to having
suddenly-approved headers.

We'd like to move forward with this approach at this time, to ensure that
CORS is consistently enabled in Mitaka before the freeze. I'd be happy to
work with you on alternative approaches moving forwards; for example, I
think teaching oslo's genconfig to permit project-specific default
overrides would solve this problem in a way that would make both of us, and
other users of genconfig, very happy.
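
As a rough illustration of that last point, here is a minimal sketch (the
option list name cors.CORS_OPTS and the header values are assumptions, not an
agreed design) of a service overriding the generic middleware's defaults in
code via oslo.config, rather than carrying them in paste.ini:

from oslo_config import cfg
from oslo_middleware import cors


def set_cors_middleware_defaults():
    # Operators can still override these in the [cors] config section;
    # this only changes the defaults shipped by this particular service.
    cfg.set_defaults(
        cors.CORS_OPTS,
        allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
        expose_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
        allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])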

Michael

On Wed, Feb 17, 2016 at 5:08 AM Sean Dague  wrote:

> A set of CORS patches came out recently that add a ton of content to
> paste.ini for every project (much of it the same between projects) -
> https://review.openstack.org/#/c/265415/1
>
> paste.ini is in a really weird space because it's config, ops can change
> it, so large amounts of complicated things in there which may change in
> future releases is really problematic. Deprecating content out of there
> turns into a giant challenge because of this. As does changes to code
> which make any assumption what so ever about other content in that file.
>
> Why weren't these options included as sane defaults in the base cors
> middleware?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-17 Thread John Wood
+1

On 2/16/16, 12:52 PM, "Ade Lee"  wrote:

>+1
>
>On Mon, 2016-02-15 at 11:45 -0600, Douglas Mendizábal wrote:
>> Hi All,
>> 
>> I would like to nominate Fernando Diaz for the Barbican Core team.
>> Fernando has been an enthusiastic contributor since joining the
>> Barbican team.  He is currently the most active non-core reviewer on
>> Barbican projects for the last 90 days. [1]  He's got an excellent
>> eye
>> for review and I think he would make an excellent addition to the
>> team.
>> 
>> As a reminder to our current core reviewers, our Core Team policy is
>> documented in the wiki. [2]  So please reply to this thread with your
>> votes.
>> 
>> Thanks,
>> - Douglas Mendizábal
>> 
>> [1] http://stackalytics.com/report/contribution/barbican-group/90
>> [2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
>> _
>> _
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
>> cribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2016-02-17 17:00:00 +:
> On Wed, 17 Feb 2016, Doug Hellmann wrote:
> > Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:
> >> A reason _I_[1] think we need to limit things is because from the
> >> outside OpenStack doesn't really look like anything that you can put
> >> a short description on. It's more murky than that and it is hard to
> >> experience positive progress in a fog. Many people react to this fog
> >> by focusing on their specific project rather than OpenStack at
> >> large: At least there they can see their impact.
> >
> > I've never understood this argument. OpenStack is a community
> > creating a collection of tools for building clouds. Each part
> > implements a different set of features, and you only need the parts
> > for the features you want.  In that respect, it's no different from
> > a Linux distro. You need a few core pieces (kernel, init, etc.),
> > and you install the other parts based on your use case (hardware
> > drivers, $SHELL, $GUI, etc.).
> 
> Ah. I think this gets to the heart of the matter. "OpenStack is a
> [...] collection of tools for building clouds" is not really how I
> think about it, so perhaps that's where I experience a problem. I
> wonder how many people feel the way you do and how many people feel
> more like I do, which is: I want OpenStack to be a thing that I, as
> an individual without the help of a "vendor", can use to deploy a
> cloud (that is easy for me and my colleagues to use) if I happen to
> have >1 (or even just 1) pieces of baremetal lying around.
> 
> It's that "vendor" part that is the rub and to me starts bringing us
> back into the spirit of "open core" that started the original
> thread. If I need a _vendor_ to make use of the main features of
> OpenStack then good golly that makes me want to cry, and want to fix
> it.
> 
> To fix it, you're right, it does need a greater sense of "product"
> "instead" of kit and the injection of opinions about reasonable
> defaults and expectations of some reasonable degree of sameness
> between different deployments of OpenStack. This is, in fact, what
> much of the cross-project work that is happening now is trying to
> accomplish.

You don't need a vendor to use OpenStack. The community has deployment
stories for, I think, every possible automation framework.  Packages
are available for distros that don't have license fees. It is
entirely possible to deploy a cloud using these tools.

The challenge with deployment is that everyone wants to make their
own choices about the cloud they're building. If we were going to
give everyone the same sort of cloud, all of OpenStack would be a
lot simpler and no one would want to use it because it wouldn't
meet their needs.

If some of the existing installation mechanisms don't meet simplicity
requirements, we should figure out why, specifically. It's quite
likely there's room for a "fewer choices needed" deployment tool
that expresses more opinions than the existing tools, and is useful
for some simpler cases by removing some of the flexibility.

> 
> >> This results in increasing the fog because cross-project concerns (which
> >> help unify the vision and actuality that is OpenStack) get less
> >> attention and the cycle deepens.
> >
> > I'm not sure cross-project issues are really any worse today than
> > when I started working on OpenStack a few years ago. In fact, I think
> > they're significantly better.
> 
> I agree it is much better but it can be better still with some
> reasonable sense of us all working in a similar direction. The
> addition of "users" to the mission is helpful.
> 
> > Architecturally and technically, project teams have always wanted
> > to go their own way to some degree. Experimentation with different
> > approaches and tools to address similar problems like that is good,
> > and success has resulted in the adoption of more common tools like
> > third-party WSGI frameworks, test tools, and patterns like the specs
> > review process and multiple teams managing non-client libraries.
> > So on a technical front we're doing better than the days where we
> > all just copied code out of nova and modified it for our own purposes
> > without looking back.
> 
> History is always full of weird stuff.
> 
> > We've had to change our approaches to dealing with the growth,
> > and we still have a ways to go (much of it uphill), but I'm not
> > prepared to say that we've failed to meet the challenge.
> 
> I fear that I gave you the wrong impression. I wasn't trying to imply
> that we are doing poorly at cross project things, rather that if we had
> fewer projects we could do even better at cross project things (as a
> result of fewer combinations).

OK. Speaking as someone heavily involved in "cross project things"
when we had only a very few projects, I can report that at that
time we did not do as well at cooperation even as we are doing
today. That's not to say you're wrong about the future, sin

Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-17 Thread Doug Hellmann
Excerpts from Michael Krotscheck's message of 2016-02-17 17:26:57 +:
> We (that is, the cores, contributors, and consumers that I've been
> collaborating with over the past year on this) came to the consensus that
> leaving the cors middleware as generic & configurable as possible was
> preferable, and that an openstack-specific version that automatically
> initializes itself from keystone's service catalog and trusted dashboards
> might be a next step. I'm sure the oslo core team can chime in, but the
> arguments that I recall were:
> 
> 1- Maintaining a list of headers in a repository separate from projects
> that use them is undesirable and likely to bitrot.
> 2- Oslo's backwards-compatibility policy means that new and/or modified
> headers could only be added, not removed. This would result in a long list
> of no-longer used headers (https://tools.ietf.org/html/rfc6648 was
> mentioned).
> 3- We could not guarantee that downstream consumers of the CORS middleware
> don't already exist, and they should not be subject to having
> suddenly-approved headers.
> 
> We'd like to move forward with this approach at this time, to ensure that
> CORS is consistently enabled in Mitaka before the freeze. I'd be happy to
> work with you on alternative approaches moving forwards; for example, I
> think teaching oslo's genconfig to permit project-specific default
> overrides would solve this problem in a way that would make both of us, and
> other users of genconfig, very happy.

The next release of oslo.config will have this.
https://review.openstack.org/#/c/278604/

Doug

> 
> Michael
> 
> On Wed, Feb 17, 2016 at 5:08 AM Sean Dague  wrote:
> 
> > A set of CORS patches came out recently that add a ton of content to
> > paste.ini for every project (much of it the same between projects) -
> > https://review.openstack.org/#/c/265415/1
> >
> > paste.ini is in a really weird space because it's config, ops can change
> > it, so large amounts of complicated things in there which may change in
> > future releases is really problematic. Deprecating content out of there
> > turns into a giant challenge because of this. As does changes to code
> > which make any assumption what so ever about other content in that file.
> >
> > Why weren't these options included as sane defaults in the base cors
> > middleware?
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Merge freeze for CI switch to Mitaka

2016-02-17 Thread Dmitry Borodaenko
Fuel core reviewers,

Fuel CI is being migrated to an ISO image with Mitaka packages, please
don't merge any commits to any Fuel repositories without coordination
with Aleksandra until further notice.

This merge freeze is expected to last a few hours.

-- 
Dmitry Borodaenko

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Merge freeze for CI switch to Mitaka

2016-02-17 Thread Vladimir Kuklin
Fuelers

I have a strong opinion against this merge freeze right now. We have critical
bugs blocking bvt and we do not have enough info on mitaka readiness for
scenarios other than bvt.
On 17 Feb 2016 at 20:45, "Dmitry Borodaenko" <
dborodae...@mirantis.com> wrote:

> Fuel core reviewers,
>
> Fuel CI is being migrated to an ISO image with Mitaka packages, please
> don't merge any commits to any Fuel repositories without coordination
> with Aleksandra until further notice.
>
> This merge freeze is expected to last a few hours.
>
> --
> Dmitry Borodaenko
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread Haïkel
+1. It fuels the confusion that RDO Manager has downstream-only patches,
which is not the case anymore.

And I'll bite anyone who tries to sneak downstream-only patches into the
RDO packages of TripleO.

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-17 Thread Andrew Woodward
On Wed, Feb 17, 2016 at 8:38 AM Aleksandr Didenko 
wrote:

> > This requires the loss of all of the features in the newer version of
> fuel since it relies on the older version of the serialized data from
> nailgun.
>
> Yes. But isn't it how "stable" branches are supposed to work? Introducing
> new features into "stable" branches will make them not so "stable", right?
> Even if these new features are introduced in composition layer or
> configuration data. just an example: network transformations in astute.yaml
> that are being translated into actual network configuration.
>

I think you may be confusing the OpenStack version fuel deploys and fuel
itself.
This is about keeping around older version(s) of OpenStack (most often the
most recent from master) and not dropping them during the development
process. This would allow for better switching during development (i.e. when
we need to move the default forward) and, if we don't drop them until after the
release, would allow for the usage of multiple versions of openstack.

> Yes, this is, in part,  about taking advantage of new fuel features on
> stable openstack releases, we are almost always behind and the previous
> release(s) supported this already.
>
> Introducing new features to stable releases will require full cycle of
> testing. So, basically, it will affect the whole development process.
>

This is not about introducing features to a stable Fuel release; it's about
taking advantage of Fuel features with an older OpenStack release. The only
cycle we are working on is Fuel.

We are almost always one or more releases behind OpenStack in supporting its
features, so this would rarely create an issue where the version of OpenStack
doesn't support what Fuel is doing.

Our development process already moves us through this: when we develop the
next version of Fuel, we stay on the same version of OpenStack as the
previous Fuel release. We only cut forward very late in the cycle. So we can
simply support both through the end of the release cycle and then decide
whether we are dropping it at the beginning of the next cycle.



> > In addtion we currently don't allow for new clusters to be deployed this
> way.
>
> We can remove this restriction. Nailgun is able to serialize data for
> previous releases because that's how it supports adding new nodes to older
> environments after upgrade, so it should not be a problem.
>
> Regards,
> Alex
>
> On Fri, Feb 12, 2016 at 10:19 PM, Andrew Woodward 
> wrote:
>
>>
>>
>> On Thu, Feb 11, 2016 at 1:03 AM Aleksandr Didenko 
>> wrote:
>>
>>> Hi,
>>>
>>>
>>> > So what is open? The composition layer.
>>>
>>> We can have different composition layers for every release and it's
>>> already implemented in releases - separate puppet modules/manifests dir for
>>> every release.
>>>
>>
>> This requires the loss of all of the features in the newer version of
>> fuel since it relies on the older version of the serialized data from
>> nailgun. In addtion we currently don't allow for new clusters to be
>> deployed this way.
>>
>>
>>>
>>> > Currently, we just abandon support for previous versions in the
>>> composition layer and leave them to only be monuments in the
>>> stable/ series branches for maintenance. If we instead started
>>> making changes (forwards or backwards that) change the calls based on the
>>> openstack version [5] then we would be able to change the calls based on
>>> then needs of that release, and the puppet-openstack modules we are working
>>> with.
>>>
>>> So we'll have tons of conditionals in composition layer, right? Even if
>>> some puppet-openstack class have just one new parameter in new release,
>>> then we'll have to write a conditional and duplicate class declaration. Or
>>> write complex parameters hash definitions/merges and use
>>> create_resources(). The more releases we want to support the more
>>> complicated composition layer will become. That won't make contribution to
>>> fuel-library easier and even can greatly reduce development speed. Also are
>>> we going to add new features to stable releases using this workflow with
>>> single composition layer?
>>>
>>> Yes, we need conditionals in the composition layer, we already need
>> these to not jam the gate when we switch between stable and master, we might
>> as well maintain them properly so that we can start running multiple
>> versions
>>
>> Yes, this is, in part,  about taking advantage of new fuel features on
>> stable openstack releases, we are almost always behind and the previous
>> release(s) supported this already.
>>
>> If its only supported in the newer version, then we would have a similar
>> problem with enabling the feature anyways as our current process results in
>> us developing on stable openstack with the newer fuel until late in the
>> cycle, when we switch packaging over.
>>
>>>
>>> > Testing master while keeping stable. Given the ability to conditional
>>> what source of openstack bits, which versions of manifests we can start
>>> testi

Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Doug Hellmann
Excerpts from Henry Gessau's message of 2016-02-17 11:00:53 -0500:
> Doug Hellmann  wrote:
> > Excerpts from Morgan Fainberg's message of 2016-02-17 07:10:34 -0800:
> >> On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:
> >>
> >>> On 02/17/2016 08:42 AM, Doug Hellmann wrote:
>  Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> > Le 17/02/2016 13:43, Henry Gessau a écrit :
> >> And it looks like eventlet 0.18.3 breaks neutron:
> >> https://bugs.launchpad.net/neutron/+bug/1546506
> >
> > 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
> > The requirement bot can produce patches to update eventlet, patches
> > which would run integration tests using Nova, Keystone, Neutron on the
> > new eventlet version.
> >
> > eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> > https://github.com/eventlet/eventlet/issues/296
> > https://github.com/eventlet/eventlet/issues/299
> > https://review.openstack.org/#/c/278147/
> > https://bugs.launchpad.net/nova/+bug/1544801
> >
> > eventlet 0.18.3 broke OpenStack Neutron
> > https://github.com/eventlet/eventlet/issues/301
> > https://bugs.launchpad.net/neutron/+bug/1546506
> >
> > FYI eventlet 0.18.0 broke WSGI servers:
> > https://github.com/eventlet/eventlet/issues/295
> >
> > It was followed quickly by eventlet 0.18.2 to fix this issue.
> >
> > Sadly, it looks like bugfix releases of eventlet don't include a single
> > bugfix, but include also other changes. For example, 0.18.3 fixed the
> > bug #296 but introduced "wsgi: TCP_NODELAY enabled by default"
> >>> optimization.
> >
> > IMHO the problem is not the release manager of eventlet, but more the
> > lack of tests on eventlet, especially on OpenStack services.
> >
> > Current "Continious Delivery"-like with gates do detect bugs, yeah, but
> > also block a lot of developers when the gates are broken. It doesn't
> > seem trivial to investigate and fix eventlet issues.
> >
> > Victor
> >
> 
>  Whether we cap or not, we should exclude the known broken versions.
>  It looks like getting back to a good version will also require
>  lowering the minimum version we support, since we have >=0.18.2
>  now.
> 
>  What was the last version of eventlet known to work?
> >>>
> >>> 0.18.2 works. On the Nova side we had a failure around unit tests which
> >>> was quite synthetic that we fixed. I don' know what the keystone issue
> >>> turned out to be.
> >>>
> >>
> >> I believe the keystone issue was a test specific issue, not a runtime
> >> issue. We disabled the test.
> >> --Morgan
> > 
> > OK. Can someone from the neutron team verify that 0.18.2 works? If so,
> > we can just exclude 0.18.3 and reset the constraint.
> 
> I can confirm that neutron works with 0.18.2 as far as we know.
> 

Great. If you (or someone else) wants to submit a requirements update, I
can approve it. Ping me in #openstack-release.
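
For reference, the resulting global-requirements entry would look something
like the line below (exact pins illustrative; the point is to exclude the
known-broken release while keeping 0.18.2 allowed):

    eventlet!=0.18.3,>=0.18.2  # exclude the release that breaks neutron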

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][cinder] Projects acting as a domain at the top of the project hierarchy

2016-02-17 Thread Samuel de Medeiros Queiroz
Hi all,

I discussed the change with other cores in -keystone and, looking at the
API change guidelines, it should be an allowed API change.

I had a doubt about whether the rule "Changing or removing a property in a
resource representation" makes it a forbidden API change. However, since it
does not change the returned attribute itself (its type), only the data
returned in it, it should be okay.

[1]
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html

Regards,
Samuel

On Wed, Feb 17, 2016 at 11:06 AM, Raildo Mascena  wrote:

> Henry,
>
> I know about two patches related:
> Fixes cinder quota mgmt for keystone v3 -
> https://review.openstack.org/#/c/253759
> Split out NestedQuotas into a separate driver -
> https://review.openstack.org/#/c/274825
>
> The first one was abandoned, so I think the second patch is enough to fix
> this issue.
>
> Cheers,
>
> Raildo
>
> On Wed, Feb 17, 2016 at 8:07 AM Henry Nash  wrote:
>
>> Michal & Raildo,
>>
>> So the keystone patch (https://review.openstack.org/#/c/270057/) is now
>> merged.  Do you perhaps have a cinder patch that I could review so we can
>> make sure that this is likely to work with the new projects acting as
>> domains? Currently it is the cinder tempest tests that are failing.
>>
>> Thanks
>>
>> Henry
>>
>>
>> On 2 Feb 2016, at 13:30, Raildo Mascena  wrote:
>>
>> See responses inline.
>>
>> On Mon, Feb 1, 2016 at 6:25 PM Michał Dulko 
>> wrote:
>>
>>> On 01/30/2016 07:02 PM, Henry Nash wrote:
>>> > Hi
>>> >
>>> > One of the things the keystone team was planning to merge ahead of
>>> milestone-3 of Mitaka, was “projects acting as a domain”. Up until now,
>>> domains in keystone have been stored totally separately from projects, even
>>> though all projects must be owned by a domain (even tenants created via the
>>> keystone v2 APIs will be owned by a domain, in this case the ‘default’
>>> domain). All projects in a project hierarchy are always owned by the same
>>> domain. Keystone supports a number of duplicate concepts (e.g. domain
>>> assignments, domain tokens) similar to their project equivalents.
>>> >
>>> > 
>>> >
>>> > I’ve got a couple of questions about the impact of the above:
>>> >
>>> > 1) I already know that if we do exactly as described above, the cinder
>>> gets confused with how it does quotas today - since suddenly there is a new
>>> parent to what it thought was a top level project (and the permission rules
>>> it encodes requires the caller to be cloud admin, or admin of the root
>>> project of a hierarchy).
>>>
>>> These problems are there because our nested quotas code is really buggy
>>> right now. Once Keystone merges a fix allowing non-admin users to fetch
>>> his own project hierarchy - we should be able to fix it.
>>>
>>
>> ++ The patch to fix this problem is close to being merged; there are just
>> minor comments to fix: https://review.openstack.org/#/c/270057/  So I
>> believe that we can fix this bug in cinder in the next few days.
>>
>>>
>>> > 2) I’m not sure of the state of nova quotas - and whether it would
>>> suffer a similar problem?
>>>
>>> As far as I know Nova haven't had merged nested quotas code and will not
>>> do that in Mitaka due to feature freeze.
>>
>> Nested quotas code on Nova is very similar with the Cinder code and we
>> are already fixing the bugs that we found on Cinder. Agreed that It will
>> not be merged in Mitaka due to feature freeze.
>>
>>>
>>> > 3) Will Horizon get confused by this at all?
>>> >
>>> > Depending on the answers to the above, we can go in a couple of
>>> directions. The cinder issues looks easy to fix (having had a quick look at
>>> the code) - and if that was the only issue, then that may be fine. If we
>>> think there may be problems in multiple services, we could, for Mitaka,
>>> still create the projects acting as domains, but not set the parent_id of
>>> the current top level projects to point at the new project acting as a
>>> domain - that way those projects acting as domains remain isolated from the
>>> hierarchy for now (and essentially invisible to any calling service). Then
>>> as part of Newton we can provide patches to those services that need
>>> changing, and then wire up the projects acting as a domain to their
>>> children.
>>> >
>>> > Interested in feedback to the questions above.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/op

[openstack-dev] [fuel][nailgun][volume-manager][fuel-agent] lvm metadata size value. why was it set to 64M?

2016-02-17 Thread Alexander Gordeev
Hi,

Apparently, nailgun assumes that lvm metadata size is always set to 64M [1]

It seems it has been defined this way since the very beginning of nailgun as
a project, so it's impossible to figure out what purpose it served, as the
early commit messages are not very informative.

According to the documentation (man lvm.conf):

  pvmetadatasize — Approximate number of sectors to set aside
for each copy of the metadata. Volume groups with large numbers  of
physical  or  logical  volumes,  or  volumes groups containing complex
logical volume structures will need additional space for their metadata.
The metadata areas are treated as circular buffers, so unused space becomes
filled with an archive of the most recent previous versions of the metadata.


The default value is 255 sectors (roughly 128 KiB).

Quotation from particular lvm.conf sample:
# Approximate default size of on-disk metadata areas in sectors.
# You should increase this if you have large volume groups or
# you want to retain a large on-disk history of your metadata changes.

# pvmetadatasize = 255


nailgun's volume manager calculates the sizes of logical volumes within one
physical volume group and takes the size of the LVM metadata into account [2].

However, because logical volume sizes get rounded up to a multiple of the PE
size (which is usually 4M), fuel-agent always runs out of free space when
creating logical volumes exactly in accordance with the partitioning scheme
generated by the volume manager.
Thus, tricky logic was added to fuel-agent [3] to bypass that flaw. Since 64M
is a much bigger value than the typical one, fuel-agent silently reduces the
LVM metadata size by 8M, and then partitioning always goes smoothly.

Consequently, almost every physical volume group is left with only 4M of
free space. That worked fine on good old HDDs.

But when it comes to FC/HBA/HW RAID block storage devices, which occasionally
report relatively large values for the minimal and optimal I/O sizes exposed
in sysfs, fuel-agent may run out of free space once again due to logical
volume alignment within the physical volume group [4]. That alignment is done
automatically by LVM with respect to those values [5].
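
To illustrate the rounding problem, here is a minimal sketch (not the actual
volume manager code; the 4M PE size is the one mentioned above and the other
numbers are made up):

    def round_up_to_pe(size_mib, pe_size_mib=4):
        # LVM allocates LVs in whole physical extents, so every LV size
        # is effectively rounded up to a multiple of the PE size.
        return ((size_mib + pe_size_mib - 1) // pe_size_mib) * pe_size_mib

    vg_free = 1000                        # MiB left in the VG after metadata
    requested = [333, 333, 334]           # LV sizes produced by volume manager
    allocated = sum(round_up_to_pe(s) for s in requested)
    print(allocated, vg_free)             # 1008 > 1000: the last LV won't fit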

As I'm going to trade off part of the disk space reserved for LVM metadata
for the sake of logical volume alignment, here are the questions:

* Why was the LVM metadata size set to 64M?
* Could someone shed more light on any obvious reasons/needs hidden behind
that?
* What is the minimal size of LVM metadata we'll be happy with?
* The same question for the optimal size.


[1]
https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L824
[2]
https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L867-L875
[3]
https://github.com/openstack/fuel-agent/commit/c473202d4db774b0075b8d9c25f217068f7c1727
[4] https://bugs.launchpad.net/fuel/+bug/1546049
[5] http://people.redhat.com/msnitzer/docs/io-limits.txt


Thanks,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStackClient] New core team members

2016-02-17 Thread Steve Martinelli

Congrats to both Richard Theis and Tang Chen -- very well deserved!!! Thank
you for guarding the gate!

stevemar



From:   Dean Troyer 
To: OpenStack Development Mailing List

Date:   2016/02/17 11:34 AM
Subject:[openstack-dev] [OpenStackClient] New core team members



I would like to announce the addition of Richard Theis and Tang Chen to the
OpenStackClient core team.  They both have been contributing quality
reviews and code for some time now, particularly in the areas of SDK
integration and new Network commands.

Thank you Richard and Tang for your work and welcome to the core team.

dt

--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-17 Thread Clint Byrum
Excerpts from Cheng, Yingxin's message of 2016-02-14 21:21:28 -0800:
> Hi,
> 
> I've uploaded a prototype https://review.openstack.org/#/c/280047/ to testify 
> its design goals in accuracy, performance, reliability and compatibility 
> improvements. It will also be an Austin Summit Session if elected: 
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316
> 
> I want to gather opinions about this idea:
> 1. Is this feature possible to be accepted in the Newton release?
> 2. Suggestions to improve its design and compatibility.
> 3. Possibilities to integrate with resource-provider bp series: I know 
> resource-provider is the major direction of Nova scheduler, and there will be 
> fundamental changes in the future, especially according to the bp 
> https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
>  However, this prototype proposes a much faster and compatible way to make 
> schedule decisions based on scheduler caches. The in-memory decisions are 
> made at the same speed with the caching scheduler, but the caches are kept 
> consistent with compute nodes as quickly as possible without db refreshing.
> 
> Here is the detailed design of the mentioned prototype:
> 
> >>
> Background:
> The host state cache maintained by host manager is the scheduler resource 
> view during schedule decision making. It is updated whenever a request is 
> received[1], and all the compute node records are retrieved from db every 
> time. There are several problems in this update model, proven in 
> experiments[3]:
> 1. Performance: The scheduler performance is largely affected by db access in 
> retrieving compute node records. The db block time of a single request is 
> 355ms in average in the deployment of 3 compute nodes, compared with only 3ms 
> in in-memory decision-making. Imagine there could be at most 1k nodes, even 
> 10k nodes in the future.
> 2. Race conditions: This is not only a parallel-scheduler problem, but also a 
> problem using only one scheduler. The detailed analysis of 
> one-scheduler-problem is located in bug analysis[2]. In short, there is a gap 
> between the scheduler makes a decision in host state cache and the
> compute node updates its in-db resource record according to that decision in 
> resource tracker. A recent scheduler resource consumption in cache can be 
> lost and overwritten by compute node data because of it, result in cache 
> inconsistency and unexpected retries. In a one-scheduler experiment using 
> 3-node deployment, there are 7 retries out of 31 concurrent schedule requests 
> recorded, results in 22.6% extra performance overhead.
> 3. Parallel scheduler support: The design of filter scheduler leads to an 
> "even worse" performance result using parallel schedulers. In the same 
> experiment with 4 schedulers on separate machines, the average db block time 
> is increased to 697ms per request and there are 16 retries out of 31 schedule 
> requests, namely 51.6% extra overhead.


This mostly agrees with recent tests I've been doing simulating 1000
compute nodes with the fake virt driver. My retry rate is much lower,
because there's less window for race conditions since there is no latency
for the time between nova-compute getting the message that the VM is
scheduled to it, and responding with a host update. Note that your
database latency numbers seem much higher, we see about 200ms, and I
wonder if you are running in a very resource constrained database
instance.

> 
> Improvements:
> This prototype solved the mentioned issues above by implementing a new update 
> model to scheduler host state cache. Instead of refreshing caches from db, 
> every compute node maintains its accurate version of host state cache updated 
> by the resource tracker, and sends incremental updates directly to 
> schedulers. So the scheduler cache are synchronized to the correct state as 
> soon as possible with the lowest overhead. Also, scheduler will send resource 
> claim with its decision to the target compute node. The compute node can 
> decide whether the resource claim is successful immediately by its local host 
> state cache and send responds back ASAP. With all the claims are tracked from 
> schedulers to compute nodes, no false overwrites will happen, and thus the 
> gaps between scheduler cache and real compute node states are minimized. The 
> benefits are obvious with recorded experiments[3] compared with caching 
> scheduler and filter scheduler:

You don't mention this, but I'm assuming this is true: At startup of a
new shared state scheduler, it fills its host state cache from the
database.
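
As I read the proposal, the claim/update flow is roughly the following
(a minimal sketch; the class and method names are illustrative, not taken
from the prototype):

    class SchedulerCache(object):
        # Per-scheduler, in-memory view of every compute node.
        def __init__(self):
            self.hosts = {}                      # host name -> free RAM (MB)

        def apply_update(self, host, free_ram_mb):
            # Incremental update pushed by a compute node's resource tracker.
            self.hosts[host] = free_ram_mb

        def pick_host(self, ram_mb):
            # Pure in-memory decision: no database access on the hot path.
            for host, free in self.hosts.items():
                if free >= ram_mb:
                    return host
            raise RuntimeError('NoValidHost')

    class ComputeNode(object):
        # Authoritative local host state; confirms or rejects claims.
        def __init__(self, name, free_ram_mb, schedulers):
            self.name = name
            self.free_ram_mb = free_ram_mb
            self.schedulers = schedulers

        def claim(self, ram_mb):
            if self.free_ram_mb < ram_mb:
                return False                     # scheduler retries elsewhere
            self.free_ram_mb -= ram_mb
            for sched in self.schedulers:        # push the incremental update
                sched.apply_update(self.name, self.free_ram_mb)
            return True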

> 1. There is no db block time during scheduler decision making, the average 
> decision time per request is about 3ms in both single and multiple scheduler 
> scenarios, which is equal to the in-memory decision time of filter scheduler 
> and caching scheduler.
> 2. Since the scheduler claims

Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-17 Thread Andrew Woodward
On Wed, Feb 17, 2016 at 9:29 AM Bogdan Dobrelya 
wrote:

> > So we'll have tons of conditionals in composition layer, right? Even if
> > some puppet-openstack class have just one new parameter in new release,
> > then we'll have to write a conditional and duplicate class declaration.
> Or
> > write complex parameters hash definitions/merges and use
> > create_resources(). The more releases we want to support the more
> > complicated composition layer will become. That won't make contribution
> to
> > fuel-library easier and even can greatly reduce development speed. Also
> are
> > we going to add new features to stable releases using this workflow with
> > single composition layer?
>
> As I can see from an example composition [0], such code would be an
> unmaintainable burden for development and QA process. Next imagine a
> case for incompatible *providers* like network transformations - shall
> we put multiple if/case to the ruby providers as well?..
>

No, part of the point of reusing the current serializers from nailgun and
the current composition layer / fuel-library is exactly to avoid this kind
of issue. The other point is to take advantage of new features in the new
version of Fuel.

The conditionals needed in the composition layer apply only to the
underlying puppet-openstack modules, which would be rolled back to the
version that matches the OpenStack version [a].

[a] https://github.com/xarses/fuel-library/blob/9-Kilo/deployment/Puppetfile

>
> That is not a way to go for a composition, sorry. While the idea may be
> doable, I agree, but perhaps another way.
>

Given the requirement to be able to use new Fuel features with an older
version of OpenStack, what alternative would you propose?

>
> (tl;dr)
> By the way, this reminded me "The wrong abstraction" [1] article and
> discussion. I agree with the author and believe one should not group
> code (here it is versioned puppet modules & compositions) in a way which
> introduces abstractions (here a super-composition) with multiple
> if/else/case and hardcoded things to switch the execution flow based on
> version of things. Just keep code as is - partially duplicated by
> different releases in separate directories with separate modules and
> composition layers and think of better solutions please.
>
> There is also a nice comment: "...try to optimize my code around
> reducing state, coupling, complexity and code, in that order". I
> understood that like a set of "golden rules":
> - Make it coupled more tight to decrease (shared) state
> - Make it more complex to decrease coupling
> - Make it duplicated to decrease complexity (e.g. abstractions)
>
> (tl;dr, I mean it)
> So, bringing those here.
> - The shared state is perhaps the Nailgun's world view of all data and
> versioned serializers for supported releases, which know how to convert
> the only latest existing data to any of its supported previous versions.
> - Decoupling we do by putting modules with its compositions to different
> versioned /etc/puppet subdirectories. I'm not sure how do we decouple
> Nailgun serializers though.
> - Complexity is how we compose those modules / write logic of serializers.
> - Duplication is puppet classes (and providers) with slightly different
> call parameters from a version to version. Sometimes even not backwards
> compatible. Probably same to the serializers?
>
> So, we're going to *increase complexity* by introducing
> super-compositions for multi OpenStack releases. Not sure about what to
> happen to the serializers, any volunteers to clarify an impact?. And the
> Rules "allow" us to do so only in order to decrease either coupling or
> shared state, which is not the case, AFAICT. Modules with compositions
> are separated well by OpenStack versions, nothing to decrease. Might
> that change to decrease a shared state? I'm not sure if it even applies
> here. Puppet versioning shares nothing. Only Nailgun folks may know the
> answer.
>
> [0]
>
> https://review.openstack.org/#/c/281084/1/deployment/puppet/ceph/manifests/nova_compute.pp
> [1] https://news.ycombinator.com/item?id=11032296
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Henry Gessau
Doug Hellmann  wrote:
> Excerpts from Henry Gessau's message of 2016-02-17 11:00:53 -0500:
>> Doug Hellmann  wrote:
>>> Excerpts from Morgan Fainberg's message of 2016-02-17 07:10:34 -0800:
 On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:

> On 02/17/2016 08:42 AM, Doug Hellmann wrote:
>> Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
>>> Le 17/02/2016 13:43, Henry Gessau a écrit :
 And it looks like eventlet 0.18.3 breaks neutron:
 https://bugs.launchpad.net/neutron/+bug/1546506
>>>
>>> 2 releases, 2 regressions in OpenStack. Should we cap eventlet version?
>>> The requirement bot can produce patches to update eventlet, patches
>>> which would run integration tests using Nova, Keystone, Neutron on the
>>> new eventlet version.
>>>
>>> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
>>> https://github.com/eventlet/eventlet/issues/296
>>> https://github.com/eventlet/eventlet/issues/299
>>> https://review.openstack.org/#/c/278147/
>>> https://bugs.launchpad.net/nova/+bug/1544801
>>>
>>> eventlet 0.18.3 broke OpenStack Neutron
>>> https://github.com/eventlet/eventlet/issues/301
>>> https://bugs.launchpad.net/neutron/+bug/1546506
>>>
>>> FYI eventlet 0.18.0 broke WSGI servers:
>>> https://github.com/eventlet/eventlet/issues/295
>>>
>>> It was followed quickly by eventlet 0.18.2 to fix this issue.
>>>
>>> Sadly, it looks like bugfix releases of eventlet don't include a single
>>> bugfix, but include also other changes. For example, 0.18.3 fixed the
>>> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default"
> optimization.
>>>
>>> IMHO the problem is not the release manager of eventlet, but more the
>>> lack of tests on eventlet, especially on OpenStack services.
>>>
>>> Current "Continious Delivery"-like with gates do detect bugs, yeah, but
>>> also block a lot of developers when the gates are broken. It doesn't
>>> seem trivial to investigate and fix eventlet issues.
>>>
>>> Victor
>>>
>>
>> Whether we cap or not, we should exclude the known broken versions.
>> It looks like getting back to a good version will also require
>> lowering the minimum version we support, since we have >=0.18.2
>> now.
>>
>> What was the last version of eventlet known to work?
>
> 0.18.2 works. On the Nova side we had a failure around unit tests which
> was quite synthetic that we fixed. I don' know what the keystone issue
> turned out to be.
>

 I believe the keystone issue was a test specific issue, not a runtime
 issue. We disabled the test.
 --Morgan
>>>
>>> OK. Can someone from the neutron team verify that 0.18.2 works? If so,
>>> we can just exclude 0.18.3 and reset the constraint.
>>
>> I can confirm that neutron works with 0.18.2 as far as we know.
>>
> 
> Great. If you (or someone else) wants to submit a requirements update, I
> can approve it. Ping me in #openstack-release.

If it's only neutron that is affected by 0.18.3 then we already have our
workaround in place [1]. Additionally, eventlet 0.18.4 will replace the
breaking change with a different approach [2].

[1] https://review.openstack.org/281278
[2] https://github.com/eventlet/eventlet/issues/301


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Merge freeze for CI switch to Mitaka

2016-02-17 Thread Dmitry Borodaenko
BVT for master was unblocked earlier today, and a custom ISO with Mitaka
packages is passing BVT, so switching to Mitaka will not regress Fuel CI
deployment tests. Let's not make this process more complicated than it has
to be: non-BVT swarm regressions will have to be fixed either way, and it
will be much easier to address them with an unfrozen Mitaka packages repo
than with the frozen snapshot of Liberty that we've been using so far.

-- 
Dmitry Borodaenko


On Wed, Feb 17, 2016 at 08:48:50PM +0300, Vladimir Kuklin wrote:
> Fuelers
> 
> I have a strong opinion against this merge freeze right now. We have critical
> bugs blocking BVT, and we do not have enough info on Mitaka readiness for
> scenarios other than BVT.
> On 17 Feb 2016 at 20:45, "Dmitry Borodaenko" <
> dborodae...@mirantis.com> wrote:
> 
> > Fuel core reviewers,
> >
> > Fuel CI is being migrated to an ISO image with Mitaka packages, please
> > don't merge any commits to any Fuel repositories without coordination
> > with Aleksandra until further notice.
> >
> > This merge freeze is expected to last a few hours.
> >
> > --
> > Dmitry Borodaenko
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][keystone] pycadf 2.1.0 release (mitaka)

2016-02-17 Thread no-reply
We are satisfied to announce the release of:

pycadf 2.1.0: CADF Library

This release is part of the mitaka release series.

With source available at:

https://git.openstack.org/cgit/openstack/pycadf

With package available at:

https://pypi.python.org/pypi/pycadf

Please report issues through launchpad:

https://bugs.launchpad.net/pycadf

For more details, please see below.

Changes in pycadf 2.0.1..2.1.0
--

fb81d12 Updated from global requirements
9c4100b Add docstring validation
dc55313 Adding ironic api specific audit map configuration
3c5f795 Updated from global requirements
9410f1c Updated from global requirements
d9340d8 Enable cadf support for Heat
49bff50 Fix wrong use of comma
bb54738 Updated from global requirements
db43b4c remove suport for py33
043b209 Put py34 first in the env order of tox

Diffstat (except docs and test files)
-

etc/pycadf/heat_api_audit_map.conf   | 32 
etc/pycadf/ironic_api_audit_map.conf | 25 +
pycadf/eventfactory.py   |  2 +-
pycadf/identifier.py |  8 +++-
requirements.txt |  8 
setup.cfg|  1 -
test-requirements.txt| 15 ---
tox.ini  | 18 --
9 files changed, 93 insertions(+), 20 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 40dfd57..15ad583 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4 +4 @@
-oslo.config>=2.7.0 # Apache-2.0
+oslo.config>=3.4.0 # Apache-2.0
@@ -6,3 +6,3 @@ oslo.serialization>=1.10.0 # Apache-2.0
-pytz>=2013.6
-six>=1.9.0
-debtcollector>=0.3.0 # Apache-2.0
+pytz>=2013.6 # MIT
+six>=1.9.0 # MIT
+debtcollector>=1.2.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 7d1893f..bc262b6 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,0 +6 @@ hacking<0.11,>=0.10.0
+flake8-docstrings==0.2.1.post1 # MIT
@@ -7,3 +8,3 @@ hacking<0.11,>=0.10.0
-coverage>=3.6
-discover
-fixtures>=1.3.1
+coverage>=3.6 # Apache-2.0
+discover # BSD
+fixtures>=1.3.1 # Apache-2.0/BSD
@@ -11,3 +12,3 @@ oslotest>=1.10.0 # Apache-2.0
-python-subunit>=0.0.18
-testrepository>=0.0.18
-testtools>=1.4.0
+python-subunit>=0.0.18 # Apache-2.0/BSD
+testrepository>=0.0.18 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
@@ -17 +18 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Merge freeze for CI switch to Mitaka

2016-02-17 Thread Igor Kalnitsky
Vladimir,

Obviously, there will be regressions in other scenarios. However, it's
better to catch them now. We do not have much time before FF, and it'd be
better to merge such features as early as possible rather than wait for
merge hell a day before FF.

The thing we need to know is that BVT is green, and that means most
developers aren't blocked.

Thanks,
Igor

On Wed, Feb 17, 2016 at 7:48 PM, Vladimir Kuklin  wrote:
> Fuelers
>
> I have a strong opinion against this merge freeze right now. We have critical
> bugs blocking BVT, and we do not have enough info on Mitaka readiness for
> scenarios other than BVT.
>
> On 17 Feb 2016 at 20:45, "Dmitry Borodaenko"
>  wrote:
>
>> Fuel core reviewers,
>>
>> Fuel CI is being migrated to an ISO image with Mitaka packages, please
>> don't merge any commits to any Fuel repositories without coordination
>> with Aleksandra until further notice.
>>
>> This merge freeze is expected to last a few hours.
>>
>> --
>> Dmitry Borodaenko
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Fox, Kevin M
To use the parallel of the Linux OS again, what Linux user doesn't use a vendor
(distro) to deploy their machine? Sure, you can Linux From Scratch it, but who
does, except for education/entertainment purposes?

Yes, it's important to be able to do it without a vendor, the same way it's
important to be able to do Linux From Scratch without a vendor. But "easy" is
not a requirement to be open.

Easy, though, would be nice. I'm just saying it isn't a requirement. Flexibility
in Open Source has often trumped ease. :)

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Wednesday, February 17, 2016 9:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] [tc] unconstrained growth, why?

Excerpts from Chris Dent's message of 2016-02-17 17:00:00 +:
> On Wed, 17 Feb 2016, Doug Hellmann wrote:
> > Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:
> >> A reason _I_[1] think we need to limit things is because from the
> >> outside OpenStack doesn't really look like anything that you can put
> >> a short description on. It's more murky than that and it is hard to
> >> experience positive progress in a fog. Many people react to this fog
> >> by focusing on their specific project rather than OpenStack at
> >> large: At least there they can see their impact.
> >
> > I've never understood this argument. OpenStack is a community
> > creating a collection of tools for building clouds. Each part
> > implements a different set of features, and you only need the parts
> > for the features you want.  In that respect, it's no different from
> > a Linux distro. You need a few core pieces (kernel, init, etc.),
> > and you install the other parts based on your use case (hardware
> > drivers, $SHELL, $GUI, etc.).
>
> Ah. I think this gets to the heart of the matter. "OpenStack is a
> [...] collection of tools for building clouds" is not really how I
> think about it, so perhaps that's where I experience a problem. I
> wonder how many people feel the way you do and how many people feel
> more like I do, which is: I want OpenStack to be a thing that I, as
> an individual without the help of a "vendor", can use to deploy a
> cloud (that is easy for me and my colleagues to use) if I happen to
> have >1 (or even just 1) pieces of baremetal lying around.
>
> It's that "vendor" part that is the rub and to me starts bringing us
> back into the spirit of "open core" that started the original
> thread. If I need a _vendor_ to make use of the main features of
> OpenStack then good golly that makes me want to cry, and want to fix
> it.
>
> To fix it, you're right, it does need a greater sense of "product"
> "instead" of kit and the injection of opinions about reasonable
> defaults and expectations of some reasonable degree of sameness
> between different deployments of OpenStack. This is, in fact, what
> much of the cross-project work that is happening now is trying to
> accomplish.

You don't need a vendor to use OpenStack. The community has deployment
stories for, I think, every possible automation framework.  Packages
are available for distros that don't have license fees. It is
entirely possible to deploy a cloud using these tools.

The challenge with deployment is that everyone wants to make their
own choices about the cloud they're building. If we were going to
give everyone the same sort of cloud, all of OpenStack would be a
lot simpler and no one would want to use it because it wouldn't
meet their needs.

If some of the existing installation mechanisms don't meet simplicity
requirements, we should figure out why, specifically. It's quite
likely there's room for a "fewer choices needed" deployment tool
that expresses more opinions than the existing tools, and is useful
for some simpler cases by removing some of the flexibility.

>
> >> This results in increasing the fog because cross-project concerns (which
> >> help unify the vision and actuality that is OpenStack) get less
> >> attention and the cycle deepens.
> >
> > I'm not sure cross-project issues are really any worse today than
> > when I started working on OpenStack a few years ago. In fact, I think
> > they're significantly better.
>
> I agree it is much better but it can be better still with some
> reasonable sense of us all working in a similar direction. The
> addition of "users" to the mission is helpful.
>
> > Architecturally and technically, project teams have always wanted
> > to go their own way to some degree. Experimentation with different
> > approaches and tools to address similar problems like that is good,
> > and success has resulted in the adoption of more common tools like
> > third-party WSGI frameworks, test tools, and patterns like the specs
> > review process and multiple teams managing non-client libraries.
> > So on a technical front we're doing better than the days where we
> > all just copied code out of nova and modified it for our own purposes
> > without looking back.
>
> History is

Re: [openstack-dev] [fuel] Move virtualbox scripts to a separate directory

2016-02-17 Thread Maksim Malchuk
Hi Fabrizio,

The project-config patch is up for review now, waiting for core reviewers
to merge the change.


On Wed, Feb 17, 2016 at 5:47 PM, Fabrizio Soppelsa 
wrote:

> Vladimir,
> a dedicated repo - good to hear.
> Do you have a rough estimate of how long this directory will be in a frozen
> state?
>
> Thanks,
> Fabrizio
>
>
> On Feb 15, 2016, at 5:16 PM, Vladimir Kozhukalov 
> wrote:
>
> Dear colleagues,
>
> I'd like to announce that we are next to moving fuel-main/virtualbox
> directory to a separate git repository. This directory contains a set of
> bash scripts that could be used to easily deploy Fuel environment and try
> to deploy OpenStack cluster using Fuel. Virtualbox is used as a
> virtualization layer.
>
> Checklist for this change is as follows:
>
>1. Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1544271
>2. project-config patch https://review.openstack.org/#/c/279074/2 (ON
>REVIEW)
>3. prepare upstream (DONE)
>https://github.com/kozhukalov/fuel-virtualbox
>4. .gitreview file (TODO)
>5. .gitignore file (TODO)
>6. MAINTAINERS file (TODO)
>7. remove old files from fuel-main (TODO)
>
> The virtualbox directory is not actively changed, so freezing this directory
> for a while is not going to affect the development process significantly.
> From this moment the virtualbox directory is declared frozen, and all changes
> to this directory that are currently in progress should later be backported to
> the new git repository (fuel-virtualbox).
>
> Vladimir Kozhukalov
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Thomas Goirand
On 02/17/2016 03:10 AM, Sean M. Collins wrote:
> Thomas Goirand wrote:
>> s/I dislike/is not free software/ [*]
>>
>> It's not a mater of taste. Having Poppy requiring a non-free component,
>> even indirectly (ie: the Oracle JVM that CassandraDB needs), makes it
>> non-free.
> 
> Your definition of non-free versus free, if I am not mistaken, is
> based on GPLv3. OpenStack is not GPL licensed

This has nothing to do with the GPL license.
Where did you get this from?
IMO, you really are mistaken.

> I understand and respect the point of view of the Debian project on
> this, however OpenStack is an Apache licensed project. So, this is
> entirely your bikeshed.
> 
>> Ensuring we really only accept free software is not a bikeshed color
>> discussion, it is really important. And that's the same topic as using
>> non-free CDN solution (see below).
> 
> It is a bikeshed, because you are injecting a debate over the freedoms of
> Apache license vs. GPLv3 into this discussion.

I have no idea where this comes from.

> Which you do, on many
> occasions. I respect this, but at some point it does hijack the original
> intent of the thread. Which is now happening.

I have *never* discussed Apache vs GPLv3, either on this list or
elsewhere. So I really don't understand why you're writing this.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][keystone] keystoneauth1 2.3.0 release (mitaka)

2016-02-17 Thread no-reply
We are satisfied to announce the release of:

keystoneauth1 2.3.0: Authentication Library for OpenStack Identity

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

Changes in keystoneauth1 2.2.0..2.3.0
-

787c4d1 Cleanup test-requirements.txt
46b7be3 Updated from global requirements
585c525 Allow parameter expansion in endpoint_override
e8c3a2e Updated from global requirements
834a96f Updated from global requirements
d04320a Updated from global requirements
f21def7 Use positional library instead of our own copy
11503c3 Remove argparse from requirements
37548ee HTTPError should contain 'retry_after' parameter
f33cb0e Updated from global requirements
2db23d2 Remove keyring as a test-requiremnet
fcd9538 Mark password/secret options as secret
627fdd0 Replace deprecated library function os.popen() with subprocess

Diffstat (except docs and test files)
-

keystoneauth1/_utils.py| 157 -
keystoneauth1/access/access.py |   4 +-
keystoneauth1/access/service_catalog.py|  12 +-
keystoneauth1/adapter.py   |   4 +-
keystoneauth1/discover.py  |   8 +-
keystoneauth1/exceptions/http.py   |   4 +-
keystoneauth1/extras/_saml2/_loading.py|   4 +-
keystoneauth1/fixture/discovery.py |  14 +-
keystoneauth1/identity/access.py   |   5 +-
keystoneauth1/identity/base.py |   3 +-
keystoneauth1/identity/generic/password.py |   7 +-
keystoneauth1/identity/v2.py   |   5 +-
keystoneauth1/identity/v3/base.py  |   3 +-
keystoneauth1/identity/v3/oidc.py  |   7 +-
keystoneauth1/loading/_plugins/identity/generic.py |   5 +-
keystoneauth1/loading/_plugins/identity/v3.py  |   7 +-
keystoneauth1/loading/cli.py   |   5 +-
keystoneauth1/loading/opts.py  |   6 +-
keystoneauth1/loading/session.py   |   5 +-
keystoneauth1/session.py   |  39 -
requirements.txt   |  10 +-
setup.cfg  |   6 +-
test-requirements.txt  |  29 ++--
25 files changed, 201 insertions(+), 235 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 01bdcda..4504520 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,5 +5,5 @@
-pbr>=1.6
-argparse
-iso8601>=0.1.9
-requests!=2.9.0,>=2.8.1
-six>=1.9.0
+pbr>=1.6 # Apache-2.0
+iso8601>=0.1.9 # MIT
+positional>=1.0.1 # Apache-2.0
+requests!=2.9.0,>=2.8.1 # Apache-2.0
+six>=1.9.0 # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index 7be56d0..ba8eec3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6 +6 @@ hacking<0.11,>=0.10.0
-flake8-docstrings==0.2.1.post1
+flake8-docstrings==0.2.1.post1 # MIT
@@ -8,7 +8,5 @@ flake8-docstrings==0.2.1.post1
-coverage>=3.6
-discover
-fixtures>=1.3.1
-keyring>=5.5.1
-mock>=1.2
-oauthlib>=0.6
-oslo.config>=3.2.0 # Apache-2.0
+coverage>=3.6 # Apache-2.0
+discover # BSD
+fixtures>=1.3.1 # Apache-2.0/BSD
+mock>=1.2 # BSD
+oslo.config>=3.4.0 # Apache-2.0
@@ -15,0 +14 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+oslo.utils>=3.4.0 # Apache-2.0
@@ -17 +16 @@ oslotest>=1.10.0 # Apache-2.0
-os-testr>=0.4.1
+os-testr>=0.4.1 # Apache-2.0
@@ -19 +18 @@ betamax>=0.5.1 # Apache-2.0
-pycrypto>=2.6
+pycrypto>=2.6 # Public Domain
@@ -22,6 +21,4 @@ requests-mock>=0.7.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-tempest-lib>=0.13.0
-testrepository>=0.0.18
-testresources>=0.2.4
-testtools>=1.4.0
-WebOb>=1.2.3
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+testrepository>=0.0.18 # Apache-2.0/BSD
+testresources>=0.2.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Jay Pipes

On 02/17/2016 09:30 AM, Doug Hellmann wrote:

Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:

On 02/16/2016 11:30 AM, Doug Hellmann wrote:

So I think the project team is doing everything we've asked.  We
changed our policies around new projects to emphasize the social
aspects of projects, and community interactions. Telling a bunch
of folks that they "are not OpenStack" even though they follow those
policies is rather distressing.  I think we should be looking for
ways to say "yes" to new projects, rather than "no."


My disagreements with accepting Poppy has been around testing, so let me
reiterate what I've already said in this thread.

The governance currently states that under Open Development "The project
has core reviewers and adopts a test-driven gate in the OpenStack
infrastructure for changes" [1].

If we don't have a solution like OpenCDN, Poppy has to adopt a reference
implementation that is a commercial entity, and infra has to also be
dependent on it. I get Infra is already dependent on public cloud
donations, but if we start opening the door to allow projects to bring
in those commercial dependencies, that's not good.


Only Poppy's test suite would rely on that, though, right? And other
projects can choose whether to co-gate with Poppy or not. So I don't see
how this limitation has an effect on anyone other than the Poppy team.


But what would really be tested in Poppy without any commercial CDN 
vendor? Nothing functional, right? I believe the fact that Poppy cannot 
be functionally tested in the OpenStack CI gate basically disqualifies 
it from being "in OpenStack".


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Nominating Masahito for core

2016-02-17 Thread Masahito MUROI
Thank you, folks. I'm glad to be a part of this team and community, and
appreciate all the support from you.


On 2016/02/17 12:10, Anusha Ramineni wrote:

+1

Best Regards,
Anusha

On 17 February 2016 at 00:59, Peter Balland <pball...@vmware.com> wrote:

+1

From: Tim Hinrichs <t...@styra.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, February 16, 2016 at 11:15 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Congress] Nominating Masahito for core

Hi all,

I'm writing to nominate Masahito Muroi for the Congress core
team.  He's been a consistent contributor for the entirety of
Liberty and Mitaka, both in terms of code contributions and
reviews.  In addition to volunteering for bug fixes and
blueprints, he initiated and carried out the design and
implementation of a new class of datasource driver that allows
external datasources to push data into Congress.  He has also
been instrumental in migrating Congress to its new distributed
architecture.

Tim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] publish and update Gerrit dashboard link automatically

2016-02-17 Thread Doug Wiegley
Hi all,

I automated a non-dashboard version of Rossella’s script.

The tweaked script that gets run:

https://review.openstack.org/#/c/281446/

Results, updated hourly (bookmarkable, will redirect to gerrit):

http://104.236.79.17/
http://104.236.79.17/current
http://104.236.79.17/current-min
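
For the curious, the redirect itself needs almost nothing; something along
these lines would do (a sketch only, not the actual setup behind the URLs
above, and the file path is made up):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def latest_dashboard_url():
        # The generator script is assumed to write its latest gerrit URL here.
        with open('/var/lib/dashboards/current-url.txt') as f:
            return f.read().strip()

    class Redirect(BaseHTTPRequestHandler):
        def do_GET(self):
            # 307 Temporary Redirect: the bookmarked URL stays stable while
            # the target dashboard link changes from day to day.
            self.send_response(307)
            self.send_header('Location', latest_dashboard_url())
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('', 8080), Redirect).serve_forever()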

Thanks,
doug


> On Feb 16, 2016, at 2:52 PM, Carl Baldwin  wrote:
> 
> Could this be done by creating a project dashboard [1]?  I think the
> one thing that prevents using such a dashboard is that your script
> generates a dashboard that crosses multiple projects.  So, we'd be
> stuck with multiple dashboards, one per project.
> 
> The nature of your script is to create a new URL reflecting the
> current state of things each time it is run.  But, it would be nice if
> it were bookmark-able.  These seem to conflict.
> 
> Would it be possible have a way to create a URL which would return a
> "307 Temporary Redirect" to the URL of the day?  It could be
> bookmarked and redirect to the latest URL of the day.
> 
> Another idea is a page with a frame or something so that the permanent
> URL stays in the browser bar.  I think I've seen web pages redirect
> this way before.
> 
> Or, we could not do the fancy stuff and just have a link on a wiki or 
> something.
> 
> No matter how it is done, there is the problem of where to host such a
> page which can be automatically updated daily (or more often) by this
> script.
> 
> Any thoughts from infra on this?
> 
> Carl
> 
> [1] 
> https://gerrit-review.googlesource.com/Documentation/user-dashboards.html#project-dashboards
> 
> On Fri, Feb 12, 2016 at 10:12 AM, Rossella Sblendido
>  wrote:
>> 
>> 
>> On 02/12/2016 12:25 PM, Rossella Sblendido wrote:
>>> 
>>> Hi all,
>>> 
>>> it's hard sometimes for reviewers to filter reviews that are high
>>> priority. In Neutron in this mail thread [1] we had the idea to create a
>>> script for that. The script is now available in the Neutron repository
>>> [2].
>>> The script queries Launchpad and creates a file that can be used by
>>> gerrit-dash-creator to display a dashboard listing patches that fix
>>> critical/high bugs, that implement approved blueprint or feature
>>> requests. This is how it looks like today [3].
>>> For it to be really useful the dashboard link needs to be updated once a
>>> day at least. Here I need your help. I'd like to publish the URL in a
>>> public place and update it every day in an automated way. How can I do
>>> that?
>>> 
>>> thanks,
>>> 
>>> Rossella
>>> 
>>> [1]
>>> 
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079816.html
>>> 
>>> [2]
>>> 
>>> https://github.com/openstack/neutron/blob/master/tools/milestone-review-dash.py
>>> 
>>> [3] https://goo.gl/FSKTj9
>> 
>> 
>> This last link is wrong, this is the right one [1] sorry.
>> 
>> [1] https://goo.gl/Hb3vKu
>> 
>> 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][keystone] keystonemiddleware 4.3.0 release (mitaka)

2016-02-17 Thread no-reply
We are eager to announce the release of:

keystonemiddleware 4.3.0: Middleware for OpenStack Identity

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystonemiddleware

With package available at:

https://pypi.python.org/pypi/keystonemiddleware

Please report issues through launchpad:

http://bugs.launchpad.net/keystonemiddleware

For more details, please see below.

4.3.0
^^^^^

New Features

* [bug 1540022
  (https://bugs.launchpad.net/keystonemiddleware/+bug/1540022)] The
  auth_token middleware will now accept a conf setting named
  "oslo_config_config". If this is set its value must be an existing
  oslo_config *ConfigOpts*. "olso_config_config" takes precedence over
  "oslo_config_project". This feature is useful to applications that
  are instantiating the auth_token middleware themselves and wish to
  use an existing configuration.
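
A rough illustration of how an embedding application might use the new
setting; the WSGI app and option loading below are placeholders, not part
of the release:

    from keystonemiddleware import auth_token
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # ... the application registers and loads its own options into CONF ...

    def my_wsgi_app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello']

    # Hand the already-initialised ConfigOpts straight to auth_token instead
    # of pointing it at a config file via oslo_config_project.
    app = auth_token.AuthProtocol(my_wsgi_app, {'oslo_config_config': CONF})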

Changes in keystonemiddleware 4.2.0..4.3.0
------------------------------------------

6806d14 argparse expects a list not a dictionary
c531b87 update deprecation message to indicate when deprecations were made
184fbff Updated from global requirements
f0965c9 Split oslo_config and list all opts
9600119 Updated from global requirements
4c0c5ce Make pep8 *the* linting interface
dc22e9f Remove clobbering of passed oslo_config_config
7f5db94 Updated from global requirements
2e12c05 Use positional instead of keystoneclient version
4b6da68 Updated from global requirements
e2a5f9a Remove Babel from requirements.txt
9ea26df Remove bandit tox environment
4315ea4 Remove unnecessary _reject_request function
e8ca927 Group common PKI validation code - Refactor
808c922 Group common PKI validation code - Tests
4008e75 Remove except Exception handler
ee5e6cc Use load_from_options_getter for auth plugins

Diffstat (except docs and test files)
-------------------------------------

keystonemiddleware/auth_token/__init__.py  | 116 ---
keystonemiddleware/auth_token/_auth.py |  21 ++--
keystonemiddleware/auth_token/_opts.py |  55 +
keystonemiddleware/fixture.py  |   4 +-
keystonemiddleware/opts.py |  13 ++-
.../unit/auth_token/test_auth_token_middleware.py  | 125 ++---
...ccepts-oslo-config-config-a37212b60f58e154.yaml |  10 ++
requirements.txt   |   4 +-
setup.cfg  |   2 +-
test-requirements.txt  |   2 +-
tox.ini|  15 +--
13 files changed, 280 insertions(+), 149 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 3152749..733e4a9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +4,0 @@
-Babel>=1.3 # BSD
@@ -7 +6 @@ keystoneauth1>=2.1.0 # Apache-2.0
-oslo.config>=3.2.0 # Apache-2.0
+oslo.config>=3.4.0 # Apache-2.0
@@ -12,0 +12 @@ pbr>=1.6 # Apache-2.0
+positional>=1.0.1 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 682422d..357d49e 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -24 +24 @@ python-memcached>=1.56 # PSF
-bandit>=0.13.2 # Apache-2.0
+bandit>=0.17.3 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Jay Pipes

On 02/17/2016 09:28 AM, Doug Hellmann wrote:
> Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:
>> A reason _I_[1] think we need to limit things is because from the
>> outside OpenStack doesn't really look like anything that you can put
>> a short description on. It's more murky than that and it is hard to
>> experience positive progress in a fog. Many people react to this fog
>> by focusing on their specific project rather than OpenStack at
>> large: At least there they can see their impact.
>
> I've never understood this argument. OpenStack is a community
> creating a collection of tools for building clouds. Each part
> implements a different set of features, and you only need the parts
> for the features you want.  In that respect, it's no different from
> a Linux distro. You need a few core pieces (kernel, init, etc.),
> and you install the other parts based on your use case (hardware
> drivers, $SHELL, $GUI, etc.).

Yes. This.

> Are people confused about what OpenStack is because they're looking
> for a single turn-key system from a vendor? Because they don't know
> what features they want/need? Or are we just doing a bad job of
> communicating the product vs. kit nature of the project?

I think we are doing a bad job of communicating the product vs. kit
nature of OpenStack.

>> This results in increasing the fog because cross-project concerns (which
>> help unify the vision and actuality that is OpenStack) get less
>> attention and the cycle deepens.
>
> I'm not sure cross-project issues are really any worse today than
> when I started working on OpenStack a few years ago. In fact, I think
> they're significantly better.
>
> At the time, there were only the integrated projects and no real
> notion that we would add a lot of new ones. We still had a hard
> time recruiting folks to participate in release management, docs,
> Oslo, infra, etc. The larger community and liaison system has
> improved the situation. There's more work, because there are more
> projects, but by restructuring the relationship of the vertical and
> horizontal teams to require project teams to participate explicitly
> we've reduced some of the pressure on the teams doing the coordination.
>
> Architecturally and technically, project teams have always wanted
> to go their own way to some degree. Experimentation with different
> approaches and tools to address similar problems like that is good,
> and success has resulted in the adoption of more common tools like
> third-party WSGI frameworks, test tools, and patterns like the specs
> review process and multiple teams managing non-client libraries.
> So on a technical front we're doing better than the days where we
> all just copied code out of nova and modified it for our own purposes
> without looking back.
>
> We also have several new cross-project "policy" initiatives like
> the API working group, the new naming standards thing, and cross-project
> spec liaisons. These teams are a new, more structured way to
> collaborate to solve some of the issues we dealt with in the early
> days through force of personality, or by leaving it up to whoever
> was doing the implementation.  All of those efforts are seeing more
> success because people showed up to collaborate and reach consensus,
> and stuck through the hard parts of actually documenting the decision
> and then doing the work agreed to. Again, we could always use more
> help, but I see the trend as improving.
>
> We've had to change our approaches to dealing with the growth,
> and we still have a ways to go (much of it uphill), but I'm not
> prepared to say that we've failed to meet the challenge.

Agreed on your points above, Doug.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Anne Gentle
On Wed, Feb 17, 2016 at 12:20 PM, Jay Pipes  wrote:

> On 02/17/2016 09:30 AM, Doug Hellmann wrote:
>
>> Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:
>>
>>> On 02/16/2016 11:30 AM, Doug Hellmann wrote:
>>>
 So I think the project team is doing everything we've asked.  We
 changed our policies around new projects to emphasize the social
 aspects of projects, and community interactions. Telling a bunch
 of folks that they "are not OpenStack" even though they follow those
 policies is rather distressing.  I think we should be looking for
 ways to say "yes" to new projects, rather than "no."

>>>
>>> My disagreements with accepting Poppy has been around testing, so let me
>>> reiterate what I've already said in this thread.
>>>
>>> The governance currently states that under Open Development "The project
>>> has core reviewers and adopts a test-driven gate in the OpenStack
>>> infrastructure for changes" [1].
>>>
>>> If we don't have a solution like OpenCDN, Poppy has to adopt a reference
>>> implementation that is a commercial entity, and infra has to also be
>>> dependent on it. I get Infra is already dependent on public cloud
>>> donations, but if we start opening the door to allow projects to bring
>>> in those commercial dependencies, that's not good.
>>>
>>
>> Only Poppy's test suite would rely on that, though, right? And other
>> projects can choose whether to co-gate with Poppy or not. So I don't see
>> how this limitation has an effect on anyone other than the Poppy team.
>>
>
> But what would really be tested in Poppy without any commercial CDN
> vendor? Nothing functional, right? I believe the fact that Poppy cannot be
> functionally tested in the OpenStack CI gate basically disqualifies it from
> being "in OpenStack".
>

I do want end-users to have CDN, I do. And I'm a pragmatist as well so the
"open core" arguments aren't as important to me.

That said, for me, since poppy itself doesn't offer/run/maintain the
service but instead simply offers an API on top of CDN provider's APIs, I
don't think it's necessary to govern it in OpenStack.

Thanks,
Anne


>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable][oslo] oslo.service 0.9.1 release (liberty)

2016-02-17 Thread no-reply
We are chuffed to announce the release of:

oslo.service 0.9.1: oslo.service library

This release is part of the liberty stable release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 0.9.0..0.9.1
------------------------------------

8b6e2f6 Fix race condition on handling signals
eb1a4aa Fix a race condition in signal handlers
0caf56e Updated from global requirements
56b3ff0 ThreadGroup's stop didn't recognise the current thread correctly
9bee390 doing monkey_patch for unittest.
5a20d1f Updated from global requirements
d81e4f2 Update .gitreview for stable/liberty

Diffstat (except docs and test files)
-------------------------------------

.gitreview |  1 +
oslo_service/service.py| 24 
oslo_service/threadgroup.py|  9 +++--
requirements.txt   |  2 +-
setup.py   |  2 +-
8 files changed, 84 insertions(+), 11 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 54c954a..2dc2018 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ monotonic>=0.3 # Apache-2.0
-oslo.utils>=2.0.0 # Apache-2.0
+oslo.utils!=2.6.0,>=2.0.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Doug Hellmann
Excerpts from Anne Gentle's message of 2016-02-17 12:28:42 -0600:
> On Wed, Feb 17, 2016 at 12:20 PM, Jay Pipes  wrote:
> 
> > On 02/17/2016 09:30 AM, Doug Hellmann wrote:
> >
> >> Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:
> >>
> >>> On 02/16/2016 11:30 AM, Doug Hellmann wrote:
> >>>
>  So I think the project team is doing everything we've asked.  We
>  changed our policies around new projects to emphasize the social
>  aspects of projects, and community interactions. Telling a bunch
>  of folks that they "are not OpenStack" even though they follow those
>  policies is rather distressing.  I think we should be looking for
>  ways to say "yes" to new projects, rather than "no."
> 
> >>>
> >>> My disagreements with accepting Poppy has been around testing, so let me
> >>> reiterate what I've already said in this thread.
> >>>
> >>> The governance currently states that under Open Development "The project
> >>> has core reviewers and adopts a test-driven gate in the OpenStack
> >>> infrastructure for changes" [1].
> >>>
> >>> If we don't have a solution like OpenCDN, Poppy has to adopt a reference
> >>> implementation that is a commercial entity, and infra has to also be
> >>> dependent on it. I get Infra is already dependent on public cloud
> >>> donations, but if we start opening the door to allow projects to bring
> >>> in those commercial dependencies, that's not good.
> >>>
> >>
> >> Only Poppy's test suite would rely on that, though, right? And other
> >> projects can choose whether to co-gate with Poppy or not. So I don't see
> >> how this limitation has an effect on anyone other than the Poppy team.
> >>
> >
> > But what would really be tested in Poppy without any commercial CDN
> > vendor? Nothing functional, right? I believe the fact that Poppy cannot be
> > functionally tested in the OpenStack CI gate basically disqualifies it from
> > being "in OpenStack".
> >
> 
> I do want end-users to have CDN, I do. And I'm a pragmatist as well so the
> "open core" arguments aren't as important to me.
> 
> That said, for me, since poppy itself doesn't offer/run/maintain the
> service but instead simply offers an API on top of CDN provider's APIs, I
> don't think it's necessary to govern it in OpenStack.

Most of our successful services do the same thing. They abstract
another service, but don't replicate its features in their code
base. Nova isn't a hypervisor. Cinder isn't a block device. Trove
isn't a database.  Neutron isn't an SDN.

The *only* difference is that because of the nature of a CDN, running
one yourself isn't practical and so there's no significant (or
viable) open source implementation.

Doug

> 
> Thanks,
> Anne
> 
> >
> > Best,
> > -jay
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] unconstrained growth, why?

2016-02-17 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2016-02-17 13:25:58 -0500:
> On 02/17/2016 09:28 AM, Doug Hellmann wrote:
> > Excerpts from Chris Dent's message of 2016-02-17 11:30:29 +:
> >> A reason _I_[1] think we need to limit things is because from the
> >> outside OpenStack doesn't really look like anything that you can put
> >> a short description on. It's more murky than that and it is hard to
> >> experience positive progress in a fog. Many people react to this fog
> >> by focusing on their specific project rather than OpenStack at
> >> large: At least there they can see their impact.
> >
> > I've never understood this argument. OpenStack is a community
> > creating a collection of tools for building clouds. Each part
> > implements a different set of features, and you only need the parts
> > for the features you want.  In that respect, it's no different from
> > a Linux distro. You need a few core pieces (kernel, init, etc.),
> > and you install the other parts based on your use case (hardware
> > drivers, $SHELL, $GUI, etc.).
> 
> Yes. This.
> 
> > Are people confused about what OpenStack is because they're looking
> > for a single turn-key system from a vendor? Because they don't know
> > what features they want/need? Or are we just doing a bad job of
> > communicating the product vs. kit nature of the project?
> 
> I think we are doing a bad job of communicating the product vs. kit 
> nature of OpenStack.

Yeah, I tend to think that's it, too.

> 
> >> This results in increasing the fog because cross-project concerns (which
> >> help unify the vision and actuality that is OpenStack) get less
> >> attention and the cycle deepens.
> >
> > I'm not sure cross-project issues are really any worse today than
> > when I started working on OpenStack a few years ago. In fact, I think
> > they're significantly better.
> >
> > At the time, there were only the integrated projects and no real
> > notion that we would add a lot of new ones. We still had a hard
> > time recruiting folks to participate in release management, docs,
> > Oslo, infra, etc. The larger community and liaison system has
> > improved the situation. There's more work, because there are more
> > projects, but by restructuring the relationship of the vertical and
> > horizontal teams to require project teams to participate explicitly
> > we've reduced some of the pressure on the teams doing the coordination.
> >
> > Architecturally and technically, project teams have always wanted
> > to go their own way to some degree. Experimentation with different
> > approaches and tools to address similar problems like that is good,
> > and success has resulted in the adoption of more common tools like
> > third-party WSGI frameworks, test tools, and patterns like the specs
> > review process and multiple teams managing non-client libraries.
> > So on a technical front we're doing better than the days where we
> > all just copied code out of nova and modified it for our own purposes
> > without looking back.
> >
> > We also have several new cross-project "policy" initiatives like
> > the API working group, the new naming standards thing, and cross-project
> > spec liaisons. These teams are a new, more structured way to
> > collaborate to solve some of the issues we dealt with in the early
> > days through force of personality, or by leaving it up to whoever
> > was doing the implementation.  All of those efforts are seeing more
> > success because people showed up to collaborate and reach consensus,
> > and stuck through the hard parts of actually documenting the decision
> > and then doing the work agreed to. Again, we could always use more
> > help, but I see the trend as improving.
> >
> > We've had to change our approaches to dealing with the growth,
> > and we still have a ways to go (much of it uphill), but I'm not
> > prepared to say that we've failed to meet the challenge.
> 
> Agreed on your points above, Doug.
> 
> -jay
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-17 Thread Fox, Kevin M
You should be able to test the functionality of using the API, and verify that
the appropriate plugin call gets made, without a proprietary back end. Then it's
up to each plugin to test its own compliance with the reference.

Another approach for testing: maybe you could create a "dead simple CDN" that
basically just uploads stuff to the local Swift? It's not much of a CDN, but you
may not need more than that. For my cloud, that amount of "CDN" would be enough
in a lot of cases.
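
To make the first idea concrete, here is a rough sketch of such a test; the
controller and driver below are invented stand-ins, not Poppy's real classes:

    import unittest
    from unittest import mock

    class FakePoppyController(object):
        """Stand-in for the API layer that delegates to a provider driver."""
        def __init__(self, driver):
            self.driver = driver

        def create_service(self, name):
            return self.driver.create_service(name)

    class TestWithoutProprietaryBackend(unittest.TestCase):
        def test_create_service_calls_driver(self):
            driver = mock.Mock()
            controller = FakePoppyController(driver)
            controller.create_service('my-cdn-service')
            # The gate only needs to see the plugin call happen; each vendor
            # driver then tests its own compliance against the reference.
            driver.create_service.assert_called_once_with('my-cdn-service')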

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, February 17, 2016 10:20 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

On 02/17/2016 09:30 AM, Doug Hellmann wrote:
> Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:
>> On 02/16/2016 11:30 AM, Doug Hellmann wrote:
>>> So I think the project team is doing everything we've asked.  We
>>> changed our policies around new projects to emphasize the social
>>> aspects of projects, and community interactions. Telling a bunch
>>> of folks that they "are not OpenStack" even though they follow those
>>> policies is rather distressing.  I think we should be looking for
>>> ways to say "yes" to new projects, rather than "no."
>>
>> My disagreements with accepting Poppy has been around testing, so let me
>> reiterate what I've already said in this thread.
>>
>> The governance currently states that under Open Development "The project
>> has core reviewers and adopts a test-driven gate in the OpenStack
>> infrastructure for changes" [1].
>>
>> If we don't have a solution like OpenCDN, Poppy has to adopt a reference
>> implementation that is a commercial entity, and infra has to also be
>> dependent on it. I get Infra is already dependent on public cloud
>> donations, but if we start opening the door to allow projects to bring
>> in those commercial dependencies, that's not good.
>
> Only Poppy's test suite would rely on that, though, right? And other
> projects can choose whether to co-gate with Poppy or not. So I don't see
> how this limitation has an effect on anyone other than the Poppy team.

But what would really be tested in Poppy without any commercial CDN
vendor? Nothing functional, right? I believe the fact that Poppy cannot
be functionally tested in the OpenStack CI gate basically disqualifies
it from being "in OpenStack".

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-17 Thread Michael Krotscheck
On Wed, Feb 17, 2016 at 9:41 AM Doug Hellmann  wrote:

>
> The next release of oslo.config will have this.
> https://review.openstack.org/#/c/278604/


http://stjent.pinnaclecart.com/images/products/preview/55008.jpg

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-17 Thread Morgan Fainberg
I am very much against adding extra data to paste.ini, especially config
data that is consumed by the applications. I generally understand why it
was implemented the way it was. The oslo_config change that Doug linked
will make this need mostly go away, however. I would like to move us towards
not needing the extra config data in paste.ini files and instead relying on
the oslo_config options where possible. There will be exceptions (such as
Swift, as it doesn't use oslo.config).
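
For illustration, a toy filter in that style; the option name, group and
middleware below are made up:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('banner', default='hello')],
                       group='example_middleware')

    class ExampleMiddleware(object):
        def __init__(self, app):
            self.app = app
            # Configuration comes from oslo.config, not paste.ini local_conf.
            self.banner = CONF.example_middleware.banner

        def __call__(self, environ, start_response):
            environ['example.banner'] = self.banner
            return self.app(environ, start_response)

    def filter_factory(global_conf, **local_conf):
        # local_conf is deliberately ignored so paste.ini stays free of
        # application configuration.
        return lambda app: ExampleMiddleware(app)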

On Wed, Feb 17, 2016 at 10:40 AM, Michael Krotscheck 
wrote:

> On Wed, Feb 17, 2016 at 9:41 AM Doug Hellmann 
> wrote:
>
>>
>> The next release of oslo.config will have this.
>> https://review.openstack.org/#/c/278604/
>
>
> http://stjent.pinnaclecart.com/images/products/preview/55008.jpg
>
> Michael
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] eventlet 0.18.1 not on PyPi anymore

2016-02-17 Thread Doug Hellmann
Excerpts from Henry Gessau's message of 2016-02-17 13:00:03 -0500:
> Doug Hellmann  wrote:
> > Excerpts from Henry Gessau's message of 2016-02-17 11:00:53 -0500:
> >> Doug Hellmann  wrote:
> >>> Excerpts from Morgan Fainberg's message of 2016-02-17 07:10:34 -0800:
>  On Wed, Feb 17, 2016 at 5:55 AM, Sean Dague  wrote:
> 
> > On 02/17/2016 08:42 AM, Doug Hellmann wrote:
> >> Excerpts from Victor Stinner's message of 2016-02-17 14:14:18 +0100:
> >>> On 17/02/2016 13:43, Henry Gessau wrote:
>  And it looks like eventlet 0.18.3 breaks neutron:
>  https://bugs.launchpad.net/neutron/+bug/1546506
> >>>
> >>> 2 releases, 2 regressions in OpenStack. Should we cap eventlet 
> >>> version?
> >>> The requirement bot can produce patches to update eventlet, patches
> >>> which would run integration tests using Nova, Keystone, Neutron on the
> >>> new eventlet version.
> >>>
> >>> eventlet 0.18.2 broke OpenStack Keystone and OpenStack Nova
> >>> https://github.com/eventlet/eventlet/issues/296
> >>> https://github.com/eventlet/eventlet/issues/299
> >>> https://review.openstack.org/#/c/278147/
> >>> https://bugs.launchpad.net/nova/+bug/1544801
> >>>
> >>> eventlet 0.18.3 broke OpenStack Neutron
> >>> https://github.com/eventlet/eventlet/issues/301
> >>> https://bugs.launchpad.net/neutron/+bug/1546506
> >>>
> >>> FYI eventlet 0.18.0 broke WSGI servers:
> >>> https://github.com/eventlet/eventlet/issues/295
> >>>
> >>> It was followed quickly by eventlet 0.18.2 to fix this issue.
> >>>
> >>> Sadly, it looks like bugfix releases of eventlet don't include a 
> >>> single
> >>> bugfix, but include also other changes. For example, 0.18.3 fixed the
> >>> bug #296 but introduced "wsgi: TCP_NODELAY enabled by default"
> >>> optimization.
> >>>
> >>> IMHO the problem is not the release manager of eventlet, but more the
> >>> lack of tests on eventlet, especially on OpenStack services.
> >>>
> >>> Current "Continious Delivery"-like with gates do detect bugs, yeah, 
> >>> but
> >>> also block a lot of developers when the gates are broken. It doesn't
> >>> seem trivial to investigate and fix eventlet issues.
> >>>
> >>> Victor
> >>>
> >>
> >> Whether we cap or not, we should exclude the known broken versions.
> >> It looks like getting back to a good version will also require
> >> lowering the minimum version we support, since we have >=0.18.2
> >> now.
> >>
> >> What was the last version of eventlet known to work?
> >
> > 0.18.2 works. On the Nova side we had a failure around unit tests which
> > was quite synthetic that we fixed. I don't know what the keystone issue
> > turned out to be.
> >
> 
>  I believe the keystone issue was a test specific issue, not a runtime
>  issue. We disabled the test.
>  --Morgan
> >>>
> >>> OK. Can someone from the neutron team verify that 0.18.2 works? If so,
> >>> we can just exclude 0.18.3 and reset the constraint.
> >>
> >> I can confirm that neutron works with 0.18.2 as far as we know.
> >>
> > 
> > Great. If you (or someone else) wants to submit a requirements update, I
> > can approve it. Ping me in #openstack-release.
> 
> If it's only neutron that is affected by 0.18.3 then we already have our
> workaround in place [1]. Additionally, eventlet 0.18.4 will replace the
> breaking change with a different approach [2].

I suspect, given the phase of our cycle we're in and the nature of the
past couple of eventlet releases, we're going to want to do more
extensive testing before taking a new release. We're approaching a
requirements freeze *anyway* so it may be moot, depending on when they
get 0.18.4 out.

Doug

> 
> [1] https://review.openstack.org/281278
> [2] https://github.com/eventlet/eventlet/issues/301
> 
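
For readers following along, the exclusion being discussed amounts to a
one-line requirements change, roughly like this (the bounds and license tag
shown here are illustrative only):

    eventlet!=0.18.3,>=0.18.2  # MIT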

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] intrinsic function bugfixes and hot versioning

2016-02-17 Thread Steven Hardy
Hi all,

So, Zane and I have discussed $subject and it was suggested I take this to
the list to reach consensus.

Recently, I've run into a couple of small but inconvenient limitations in
our intrinsic function implementations, specifically for str_replace and
repeat, both of which did not behave the way I expected when referencing
things via get_param/get_attr:

https://bugs.launchpad.net/heat/+bug/1539737
https://bugs.launchpad.net/heat/+bug/1546684

A patch fixing one has merged, another patch is under review.

I'm viewing both issues as bugs in our existing implementation, but Zane's
comment here https://review.openstack.org/#/c/275602/ prompted some
discussion about whether we should bump the version of the functions (like we
did recently for e.g. JSON serialization via str_replace).

I guess it's arguable, but in these cases, I'm thinking they are bugfixes,
and not new features. But what do folks think? Where do we draw the line
with intrinsic functions in terms of what is considered a fix or a
version-worthy change in behavior?

The real disadvantage of requiring a version bump for bug fixes is that
folks have to wait longer to consume the fixed version (because we can't
backport a new HOT version).  The advantage is that there's less chance
someone will write a template on a newly updated Heat install, then find it
doesn't work on an older Heat environment (containing the bug) - is this
any different to any other user-visible bug though?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-17 Thread Juan Antonio Osorio
+1 This has been very confusing and it would be nice to finally clear that
up.

On Wed, Feb 17, 2016 at 7:49 PM, Haïkel  wrote:

> +1 it fuels the confusion that RDO Manager has downstream-only patches
> which is not the case anymore.
>
> And I'll bite anyone who will try to sneak downstream-only patches in
> RDO package of tripleO.
>
> Regards,
> H.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][doc][all] Removal of deprecated oslo.log log_format configuration option

2016-02-17 Thread Ronald Bradford
The deprecated log_format option is being removed [1].

This option can be found across many projects, generally in sample
configuration files [2].  These entries should be removed automatically where
the files are auto-generated via oslo-config-generator or produced as
Sphinx-generated documentation.  In other projects using legacy Oslo Incubator
code, this option is still in place.  Those projects should consider migrating
to the oslo.log library.
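
For projects doing that migration, the basic oslo.log setup looks roughly
like the sketch below (the project name is a placeholder):

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    logging.register_options(CONF)    # registers the oslo.log options
    CONF([], project='exampleproject')
    logging.setup(CONF, 'exampleproject')

    LOG = logging.getLogger(__name__)
    LOG.info('logging configured via oslo.log')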


[1] https://review.openstack.org/#/c/263903/
[2] http://codesearch.openstack.org/?q=log_format&i=nope&files=&repos=
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

