Re: [openstack-dev] [tacker] Proposing Lin Cheng for tacker-horizon core team

2016-05-30 Thread Sridhar Ramaswamy
I think we have enough votes to proceed...

Lin,

Welcome to Tacker Horizon core team!

- Sridhar

On Thu, May 26, 2016 at 9:13 PM, Janki Chhatbar  wrote:

> +1
>
> Thanking you
> Janki Chhatbar
> OpenStack | SDN | Docker
> simplyexplainedblog.wordpress.com
> On May 26, 2016 11:16 PM, "Sridhar Ramaswamy"  wrote:
>
>> Tackers,
>>
>> I'd like to propose Lin Cheng to join as a tacker-horizon core team
>> member. Lin has been our go-to person for all guidance related to UI
>> enhancements for Tacker. He has been actively reviewing patchsets in
>> this area [1] and has also contributed to setting up the unit test
>> framework for the tacker-horizon repo.
>>
>> Please provide your +1/-1 votes.
>>
>> - Sridhar
>>
>> [1]
>> http://stackalytics.com/?project_type=all&metric=marks&release=all&module=tacker-group&user_id=lin-hua-cheng
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>


Re: [openstack-dev] [neutron] work on Common Flow Classifier

2016-05-30 Thread Cathy Zhang
Hi everyone,

Thanks to YAMAMOTO Takashi for reserving a new meeting channel, 
"openstack-meeting", for the discussion.
We will have biweekly (odd-week) meetings on this channel at UTC 1700 on 
Tuesdays. Due to the long weekend in the US and some people taking extra days 
off, let's start the meetings on this new channel on June 14. 

Thanks,
Cathy

-Original Message-
From: Cathy Zhang 
Sent: Thursday, May 05, 2016 3:35 PM
To: Cathy Zhang; OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent 
extension for Newton cycle

Hi everyone,

We had a discussion on the two topics during the summit. Here is the etherpad 
link for the discussion. 
https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit

We agreed to continue the discussion on the Neutron channel on a weekly basis. 
It seems UTC 1700 ~ UTC 1800 on Tuesday works for most people. 
Another option is UTC 1700 ~ UTC 1800 on Friday. 

I will tentatively set the meeting time to UTC 1700 ~ UTC 1800 on Tuesday. I 
hope this time works for everyone who is interested in and would like to 
contribute to this work. We plan to hold the first meeting on May 17. 

Thanks,
Cathy


-Original Message-
From: Cathy Zhang
Sent: Thursday, April 21, 2016 11:43 AM
To: Cathy Zhang; OpenStack Development Mailing List (not for usage questions); 
Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim Daniel; Mathieu Rohon; 
Shaughnessy, David; Eichberger, German; Henry Fourie; arma...@gmail.com; Miguel 
Angel Ajo; Reedip; Thierry Carrez
Cc: Cathy Zhang
Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Hi everyone,

We have Room 400 available at 3:10pm on Thursday for discussing the two 
topics. 
Another option is to use the common room with round tables in "Salon C" during 
Monday or Wednesday lunchtime.

Room 400 at 3:10pm is a closed room, while Salon C is a big open room that can 
host 500 people.

I am OK with either option. Let me know if anyone has a strong preference. 

Thanks,
Cathy


-Original Message-
From: Cathy Zhang
Sent: Thursday, April 14, 2016 1:23 PM
To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry Fourie; 
'arma...@gmail.com'
Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Thanks for everyone's reply! 

Here is the summary based on the replies I received: 

1.  We should have a meet-up for these two topics. The "to" list contains the 
people who have expressed interest in these topics. 
    I am thinking of around lunchtime on Tuesday or Wednesday, since some of 
us will fly back on Friday morning/noon. 
    If this time is OK with everyone, I will find a place and let you know 
where and what time to meet. 

2.  There is a bug open for the QoS flow classifier: 
https://bugs.launchpad.net/neutron/+bug/1527671
We can either change the bug title and modify the bug details, or start a new 
one for the common FC which captures the requirements of all relevant use 
cases. There is also a bug open for the OVS agent extension: 
https://bugs.launchpad.net/neutron/+bug/1517903

3.  There is some very rough ("ugly", as Sean put it :-) ) and preliminary 
work on a common FC at https://github.com/openstack/neutron-classifier which 
we can see how to leverage. There is also an SFC API spec which covers the FC 
API for SFC usage: 
https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
The following is the CLI version of the flow classifier for your reference:

neutron flow-classifier-create [-h]
[--description <description>]
[--protocol <protocol>]
[--ethertype <ethertype>]
[--source-port <min-port>:<max-port>]
[--destination-port <min-port>:<max-port>]
[--source-ip-prefix <cidr>]
[--destination-ip-prefix <cidr>]
[--logical-source-port <port-id>]
[--logical-destination-port <port-id>]
[--l7-parameters <l7-parameters>] FLOW-CLASSIFIER-NAME

The corresponding code is here 
https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions

4.  We should come up with a formal Neutron spec for the FC and another one 
for the OVS agent extension, and get everyone's review and approval. Here is 
the etherpad capturing our previous requirement discussion on the OVS agent 
(thanks, David, for the link! I remember we had this discussion before): 
https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion


More inline. 

Thanks,
Cathy


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Thursday, April 14, 2016 3:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Cathy Zhang  wrote:

> Hi everyone,
> Per Armando’s request, 

[openstack-dev] [Infra] Meeting Tuesday May 31st at 19:00 UTC

2016-05-30 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday May 31st, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items, and everyone interested in
the project infrastructure and the process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-24-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-24-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-24-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-30 Thread Cheng, Yingxin
Hi, cdent:

This problem arises because the RT (resource tracker) only knows to consume the 
DISK resource on its host, but it still doesn't know exactly on which resource 
provider to place the consumption. That is to say, the RT still needs to *find* 
the correct resource provider in step 4. Step 4 ultimately causes the explicit 
problem you've encountered: "the RT can find two resource providers providing 
DISK_GB, but it doesn't know which is right".

The problem is that the RT needs to choose a resource provider when it finds 
multiple of them in step 4. However, the scheduler should already know which 
resource provider to choose when it makes its placement decision, yet it 
doesn't send this information to the compute nodes. That is also to say, there 
is a missing step in the g-r-p blueprint: we should "improve the filter 
scheduler so that it can make correct decisions with generic resource pools". 
The scheduler should tell the compute-node RT not only about the resource 
consumption on the compute-node resource provider, but also where to consume 
shared resources, i.e. the related resource provider IDs.

Hope it can help you.

-- 
Regards
Yingxin


On 5/30/16, 06:19, "Chris Dent"  wrote:
>
>I'm currently doing some thinking on step 4 ("Modify resource tracker
>to pull information on aggregates the compute node is associated with
>and the resource pools available for those aggregates.") of the
>work items for the generic resource pools spec[1] and I've run into
>a brain teaser that I need some help working out.
>
>I'm not sure if I've run into an issue, or am just being ignorant. The
>latter is quite likely.
>
>This gets a bit complex (to me) but: The idea for step 4 is that the
>resource tracker will be modified such that:
>
>* if the compute node being claimed by an instance is a member of some
>   aggregates
>* and one of those aggregates is associated with a resource provider 
>* and the resource provider has inventory of resource class DISK_GB
>
>then rather than claiming disk on the compute node, claim it on the
>resource provider.
>
>The first hurdle to overcome when doing this is to trace the path
>from compute node, through aggregates, to a resource provider. We
>can get a list of aggregates by host, and then we can use those
>aggregates to get a list of resource providers by joining across
>ResourceProviderAggregates, and we can join further to get just
>those ResourceProviders which have Inventory of resource class
>DISK_GB.
>
>The issue here is that the result is a list. As far as I can tell
>we can end up with >1 ResourceProviders providing DISK_GB for this
>host because it is possible for a host to be in more than one
>aggregate and it is necessary for an aggregate to be able to associate
>with more than one resource provider.
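To make the ambiguity concrete, here is a toy in-memory sketch of the host -> aggregates -> providers traversal described above. The dict-based "tables" are illustrative stand-ins only, not Nova's actual models or schema.

```python
# Toy stand-ins for the real tables: host aggregate membership, the
# ResourceProviderAggregates association, and per-provider inventory.
HOST_AGGREGATES = {"compute1": ["agg-a", "agg-b"]}
AGGREGATE_PROVIDERS = {"agg-a": ["rp-shared-1"], "agg-b": ["rp-shared-2"]}
PROVIDER_INVENTORY = {
    "rp-shared-1": {"DISK_GB": 1000},
    "rp-shared-2": {"DISK_GB": 500},
}

def disk_providers_for_host(host):
    """Walk host -> aggregates -> providers, keeping those with DISK_GB."""
    found = []
    for agg in HOST_AGGREGATES.get(host, []):
        for rp in AGGREGATE_PROVIDERS.get(agg, []):
            if "DISK_GB" in PROVIDER_INVENTORY.get(rp, {}):
                found.append(rp)
    return found

# A host in two aggregates, each tied to a DISK_GB provider, yields two
# candidates -- exactly the ">1 ResourceProviders" problem described above.
print(disk_providers_for_host("compute1"))  # ['rp-shared-1', 'rp-shared-2']
```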
>
>If the above is true and we can find two resource providers providing
>DISK_GB how does:
>
>* the resource tracker know where (to which provider) to write its
>   disk claim?
>* the scheduler (the next step in the work items) make choices and
>   declarations amongst providers? (Yes, place on that node, but use disk
>   provider X, not Y)
>
>If the above is not true, why is it not true? (show me the code
>please)
>
>If the above is an issue, but we'd like to prevent it, how do we fix it?
>Do we need to make it so that when we associate an aggregate with a
>resource provider we check to see that it is not already associated with
>some other provider of the same resource class? This would be a
>troubling approach because as things currently stand we can add Inventory
>of any class and aggregates to a provider at any time and the amount of
>checking that would need to happen is at least bi-directional if not multi
>and that level of complexity is not a great direction to be going.
>
>So, yeah, if someone could help me tease this out, that would be
>great, thanks.
>
>
>[1] 
>http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#work-items
>
>-- 
>Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
>freenode: cdent tw: @anticdent



Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-30 Thread Paul Carver

On 5/25/2016 13:24, Tim Rozet wrote:

In my opinion, it is a better approach to break this down into plugin vs driver 
support.  There should be no problem adding support into networking-sfc plugin 
for NSH today.  The OVS driver however, depends on OVS as the dataplane - which 
I can see a solid argument for only supporting an official version with a 
non-NSH solution.  The plugin side should have no dependency on OVS.  Therefore 
if we add NSH SFC support to an ODL driver in networking-odl, and use that as 
our networking-sfc driver, the argument about OVS goes away (since 
neutron/networking-sfc is totally unaware of the dataplane at this point).  We 
would just need to ensure that API calls to networking-sfc specifying NSH port 
pairs return an error if the enabled driver is OVS (until an official OVS 
release with NSH support is available).



Does ODL have a dataplane? I thought it used OvS. Is the ODL project 
supporting its own fork of OvS that has NSH support, or is ODL expecting 
the user to patch OvS themselves?


I don't know the details of why OvS hasn't added NSH support so I can't 
judge the validity of the concerns, but one way or another there has to 
be a production-quality dataplane for networking-sfc to front-end.


If ODL has forked OvS, or is in some other manner supporting its own 
NSH-capable dataplane, then it's reasonable to consider that the ODL driver 
could be the first networking-sfc driver to support NSH. However, we 
still need to make sure that the API is an abstraction, not 
implementation-specific.


But if ODL is not supporting its own NSH-capable dataplane, and instead 
expects the user to run a patched OvS that lacks upstream 
acceptance, then I think we would be building a rickety tower by piling 
networking-sfc on top of that unstable base.
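The plugin-level guard suggested earlier in the thread (reject NSH port pairs while the enabled backend cannot realize them) could be sketched roughly as follows. The driver names, capability sets, and validation hook are all hypothetical, not the actual networking-sfc API.

```python
# Hypothetical capability table: which correlation types each backend
# driver can actually realize in its dataplane today.
DRIVER_CORRELATIONS = {
    "ovs": {"mpls"},          # no NSH until an official OVS release supports it
    "odl": {"mpls", "nsh"},   # assuming the ODL backend provides NSH
}

def validate_port_pair(enabled_driver, correlation):
    """Fail at the API layer when the requested correlation isn't supported."""
    supported = DRIVER_CORRELATIONS.get(enabled_driver, set())
    if correlation not in supported:
        raise ValueError("driver %r cannot realize correlation %r"
                         % (enabled_driver, correlation))

validate_port_pair("odl", "nsh")       # accepted
try:
    validate_port_pair("ovs", "nsh")   # rejected until OVS gains NSH support
except ValueError as exc:
    print(exc)
```

This keeps the plugin dataplane-agnostic, as Tim argues, while still giving users an immediate error instead of a silently unrealizable chain.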






Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-30 Thread Zhenyu Zheng
I think it is good to share code, and a single microversion can make life
easier during coding.
Can we approve those specs first and then decide on the details in IRC and
patch review? The non-priority spec deadline is so close.

Thanks

On Tue, May 31, 2016 at 1:09 AM, Ken'ichi Ohmichi wrote:

> 2016-05-29 19:25 GMT-07:00 Alex Xu :
> >
> >
> > 2016-05-20 20:05 GMT+08:00 Sean Dague :
> >>
> >> There are a number of changes up for spec reviews that add parameters to
> >> LIST interfaces in Newton:
> >>
> >> * keypairs-pagination (MERGED) -
> >>
> >>
> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
> >> * os-instances-actions - https://review.openstack.org/#/c/240401/
> >> * hypervisors - https://review.openstack.org/#/c/240401/
> >> * os-migrations - https://review.openstack.org/#/c/239869/
> >>
> >> I think that limit / marker is always a legit thing to add, and I almost
> >> wish we just had a single spec which is "add limit / marker to the
> >> following APIs in Newton"
> >>
> >
> > Are you looking for code sharing or one microversion? For code sharing,
> > it sounds OK if people do some co-work. We probably need a common
> > pagination-supporting model_query function for all of those. For one
> > microversion, I'm a little hesitant: should we keep each change small,
> > or enable everything in one microversion? But if we have some base code
> > for pagination support, could we make pagination a default supported
> > feature for every list method?
>
> It would be nice to share some common code for this; that would also help
> when writing the API docs to know which APIs support them.
> It would also be nice to do it with a single microversion for the above
> resources, because we can avoid microversion-bump conflicts and none of
> them seems like a big change.
>
> Thanks
>
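For reference, the shared limit/marker handling discussed above could be factored into a single helper reused by every LIST API. This is only a rough in-memory sketch; the real Nova implementation would live in a common model_query/pagination layer and the names here are illustrative.

```python
def paginate(items, key, limit=None, marker=None):
    """Return up to `limit` items that sort after the item whose key == marker."""
    ordered = sorted(items, key=key)
    if marker is not None:
        keys = [key(item) for item in ordered]
        if marker not in keys:
            raise ValueError("marker %r not found" % (marker,))
        ordered = ordered[keys.index(marker) + 1:]
    return ordered[:limit] if limit is not None else ordered

# Walking a collection page by page, passing the last seen id as the marker.
migrations = [{"id": 3}, {"id": 1}, {"id": 2}, {"id": 4}]
page1 = paginate(migrations, key=lambda m: m["id"], limit=2)
page2 = paginate(migrations, key=lambda m: m["id"], limit=2, marker=page1[-1]["id"])
print([m["id"] for m in page1], [m["id"] for m in page2])  # [1, 2] [3, 4]
```

A shared helper like this is what makes exposing limit/marker on several resources in one microversion cheap: each API only wires in its own sort key.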


[openstack-dev] Call for Committers & Contributors for the daisycloud-core project

2016-05-30 Thread hu . zhijiang
Hi All,

I would like to introduce to you a new OpenStack installer project, 
Daisy (project name: daisycloud-core). Daisy used to be a closed-source 
project mainly developed by ZTE, but we have now made it an OpenStack-related 
project (http://www.daisycloud.org, 
https://github.com/openstack/daisycloud-core). 

Although it is not mature and still under development, Daisy concentrates 
on deploying OpenStack quickly and efficiently for large data centers with 
hundreds of nodes. To reach that goal, Daisy focuses on features that may not 
be suitable for small clusters but are definitely conducive to the deployment 
of big clusters. Those features include, but are not limited to, the 
following: 

1. Containerized OpenStack Services 
To speed up installation and upgrades as a whole, Daisy has decided to use 
Kolla as its underlying deployment module to support containerized OpenStack 
services. 

2. Multicast 
Daisy utilizes multicast as much as possible to speed up the imaging workflow 
during installation. For example, instead of using a centralized Docker 
registry when adopting Kolla, Daisy multicasts all Docker images to each node 
of the cluster, then creates and uses local registries on each node during 
the Kolla deployment process. The same can be done for OS imaging too. 

3. Automatic Deployment 
Instead of letting users decide whether a node can be provisioned and 
deserves to join the cluster, Daisy provides a characteristics-matching 
mechanism to recognize whether a new node has the same capabilities as the 
current working compute nodes. If it does, Daisy will start deployment on 
that node right after it is discovered, and make it a compute node with the 
same configuration as the current working compute nodes. 

4. Configuration Template 
Using a precise configuration file to describe a big dynamic cluster is not 
practical, and such a file cannot be reused when moving to another similar 
environment either. Daisy's configuration template describes only the common 
part of the cluster and a representative of the controller/compute nodes. It 
can be seen as a semi-finished configuration file which can be used in any 
similar environment. During deployment, users only have to fill in a few 
specific parameters to turn the configuration template into a final 
configuration file. 

5. Anything else that can bring unique value to large data center 
deployments? Your comments are welcome. 

As the project lead, I would like to get feedback from you about this new 
project. You are more than welcome to join this project! 

Thank you 
Zhijiang




Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-05-30 Thread Martinx - ジェームズ
On 30 May 2016 at 11:59, Jaesuk Ahn  wrote:

> Hi, Monasca developers and users,
>
>
> https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
> "For our current and future customers, we’ll be offering clustering and
> high availability through Influx Cloud, our managed hosting offering, and
> Influx Enterprise, our on-premise offering, in the coming months.”
>
>
> It seems that “clustering” and “high availability” of InfluxDB will be
> available only in the commercial version.
> Monasca currently leverages InfluxDB as its metrics and alarm database.
> Besides Vertica, InfluxDB is currently the only open source option to use.
>
> With this update stating that the InfluxDB open source version will not
> have the clustering / HA features,
> I would like to know whether there has been any discussion in the Monasca
> community about adding more database backends besides InfluxDB, especially
> OpenTSDB.
>
>
> Thank you.
>
>
>
>
>
> --
> Jaesuk Ahn, Ph.D.
> Software Defined Infra Tech. Lab.
> SKT
>


What about Prometheus?

https://prometheus.io/

https://prometheus.io/docs/introduction/comparison/

Cheers!
Thiago


Re: [openstack-dev] [oslo] [keystone] rolling dogpile.core into dogpile.cache, removing namespace packaging (PLEASE REVIEW)

2016-05-30 Thread Monty Taylor
On 05/30/2016 06:17 PM, Mike Bayer wrote:
> Hi all -
> 
> Just a heads-up on what's happening with dogpile.cache: in version 0.6.0 we
> are rolling the functionality of the dogpile.core package into
> dogpile.cache itself, and retiring the use of namespace package naming
> for dogpile.cache.
> 
> Towards retiring the use of namespace packaging, the magic
> "declare_namespace() / extend_path()" logic is being removed from the
> file dogpile/__init__.py from dogpile.cache, and the "namespace_package"
> directive being removed from setup.py.
> 
> However, currently, the plan is to leave alone entirely the
> "dogpile.core" package as is, and to no longer use the name
> "dogpile.core" within dogpile.cache at all; the constructs that it
> previously imported from "dogpile.core" it now just imports from
> "dogpile" and "dogpile.util" from within the dogpile.cache package.
> 
> The caveat here is that Python environments that have dogpile.cache
> 0.5.7 or earlier installed will also have dogpile.core 0.4.1 installed
> as well, and dogpile.core *does* still contain the namespace package
> verbiage as before.   From our testing, we don't see there being any
> problem with this, however, I know there are people on this list who are
> vastly more familiar than I am with namespace packaging and I would
> invite them to comment on this as well as on the gerrit review [1] (the
> gerrit invites anyone with a Github account to register and comment).
> 
> Note that outside of the OpenStack world, there are a very small number
> of applications that make use of dogpile.core directly.  From our
> grepping we can find no mentions of "dogpile.core" in any OpenStack
> requirements files.  For these applications, if a Python environment
> already has dogpile.core installed, this would continue to be used;
> however dogpile.cache also includes a file dogpile/core.py which sets up
> a compatible namespace, so that applications which list only
> dogpile.cache in their requirements but make use of "dogpile.core"
> constructs will continue to work as before.
> 
> I would ask that anyone reading this to please alert me to anyone, any
> project, or any announcement medium which may be necessary in order to
> ensure that anyone who needs to be made aware of these changes are aware
> of them and have vetted them ahead of time.   I would like to release
> dogpile.cache 0.6.0 by the end of the week if possible.  I will send
> this email a few more times to the list to make sure that it is seen.

This seems perfectly reasonable to me. I'm sure that somewhere in the
dark reaches of the night there will be some edge case that this trips,
but I cannot currently think of what it would be.




[openstack-dev] [oslo] [keystone] rolling dogpile.core into dogpile.cache, removing namespace packaging (PLEASE REVIEW)

2016-05-30 Thread Mike Bayer

Hi all -

Just a heads-up on what's happening with dogpile.cache: in version 0.6.0 we 
are rolling the functionality of the dogpile.core package into 
dogpile.cache itself, and retiring the use of namespace package naming 
for dogpile.cache.


Towards retiring the use of namespace packaging, the magic 
"declare_namespace() / extend_path()" logic is being removed from the 
file dogpile/__init__.py from dogpile.cache, and the "namespace_package" 
directive being removed from setup.py.


However, currently, the plan is to leave alone entirely the 
"dogpile.core" package as is, and to no longer use the name 
"dogpile.core" within dogpile.cache at all; the constructs that it 
previously imported from "dogpile.core" it now just imports from 
"dogpile" and "dogpile.util" from within the dogpile.cache package.


The caveat here is that Python environments that have dogpile.cache 
0.5.7 or earlier installed will also have dogpile.core 0.4.1 installed 
as well, and dogpile.core *does* still contain the namespace package 
verbiage as before.   From our testing, we don't see there being any 
problem with this, however, I know there are people on this list who are 
vastly more familiar than I am with namespace packaging and I would 
invite them to comment on this as well as on the gerrit review [1] (the 
gerrit invites anyone with a Github account to register and comment).


Note that outside of the OpenStack world, there are a very small number 
of applications that make use of dogpile.core directly.  From our 
grepping we can find no mentions of "dogpile.core" in any OpenStack 
requirements files.  For these applications, if a Python environment 
already has dogpile.core installed, this would continue to be used; 
however dogpile.cache also includes a file dogpile/core.py which sets up 
a compatible namespace, so that applications which list only 
dogpile.cache in their requirements but make use of "dogpile.core" 
constructs will continue to work as before.
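The compatibility trick described above, keeping old `dogpile.core` imports working by publishing a module that re-exports the relocated names, can be demonstrated generically. The module names below (`legacy`, `newhome`) are made up for the demo; dogpile.cache's real dogpile/core.py performs the equivalent re-export.

```python
import sys
import types

# Stand-in for the new home of the relocated constructs.
newhome = types.ModuleType("newhome")
newhome.Lock = type("Lock", (), {})  # placeholder for a relocated class

# Publish a legacy-named alias package/module so old import paths resolve.
legacy_pkg = types.ModuleType("legacy")
legacy_core = types.ModuleType("legacy.core")
legacy_core.Lock = newhome.Lock      # re-export under the old name
legacy_pkg.core = legacy_core
sys.modules["legacy"] = legacy_pkg
sys.modules["legacy.core"] = legacy_core

# An old-style import keeps working and resolves to the relocated object.
from legacy.core import Lock
print(Lock is newhome.Lock)  # True
```

Because the alias module holds the very same objects, code importing through the old path and code importing through the new path interoperate transparently.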


I would ask that anyone reading this to please alert me to anyone, any 
project, or any announcement medium which may be necessary in order to 
ensure that anyone who needs to be made aware of these changes are aware 
of them and have vetted them ahead of time.   I would like to release 
dogpile.cache 0.6.0 by the end of the week if possible.  I will send 
this email a few more times to the list to make sure that it is seen.



[1] https://gerrit.sqlalchemy.org/#/c/89/




Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-30 Thread Steve Baker

On 28/05/16 06:03, James Slagle wrote:

I've been working on various patches to TripleO to make it possible
for the baremetal provisioning part of the workflow to be optional. In
such a scenario, TripleO wouldn't use Nova or Ironic to boot any
baremetal nodes. Instead it would rely on the nodes to be already
installed with an OS and powered on. We then use Heat to drive the
deployment of OpenStack on those nodes...that part of the process is
largely unchanged.

One of the things this would allow TripleO to do is make use of CI
jobs using nodes just from the regular cloud providers in nodepool
instead of having to use our own TripleO cloud
(tripleo-test-cloud-rh1) to run all our jobs.

I'm at a point where I can start working on patches to try and set
this up, but I wanted to provide this context so folks were aware of
the background.

We'd probably start with our simplest configuration of a job with at
least 3 nodes (undercloud/controller/compute), and using CentOS
images. It looks like right now all multinode jobs are 2 nodes only
and use Ubuntu. My hope is that I/we can make some progress in
different multinode configurations and collaborate on any setup
scripts or ansible playbooks in a generally useful way. I know there
was interest in different multinode setups from the various deployment
teams at the cross project session in Austin.

If there are any pitfalls or if there are any concerns about TripleO
going in this direction, I thought we could discuss those here. Thanks
for any feedback.

This raises the possibility of an alternative to OVB for 
trying/developing TripleO on a host cloud.


If a VM version of the overcloud-full image is also generated, then the 
host cloud can boot these directly. The approach above can then be used 
to treat these nodes as pre-existing nodes to adopt.


I did this for a while by configuring the undercloud nova to use the fake 
virt driver, but it sounds like the approach above doesn't interact with 
nova at all.


So I'm +1 on this approach for *some* development environments too. Can 
you provide a list of the changes?




Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-30 Thread Dave Walker
On 24 May 2016 at 18:05, Doug Hellmann  wrote:

> Excerpts from Markus Zoeller's message of 2016-05-24 11:00:35 +0200:
> > On 24.05.2016 09:34, Duncan Thomas wrote:
> > > Cinder bugs list was far more manageable once this had been done.
> > >
> > > Is it worth sharing the tool for this? I realise it's fairly trivial to
> > > write one, but some standardisation on the comment format etc. seems
> > > valuable, particularly for QA folks who work between different
> > > projects.
> >
> > A first draft (without the actual expiring) is at [1]. I'm going to
> > finish it this week. If there is a place in an OpenStack repo, just give
> > me a pointer and I'll push a change.
> >
> > > On 23 May 2016 at 14:02, Markus Zoeller 
> wrote:
> > >
> > >> TL;DR: Automatic closing of 185 bug reports which are older than 18
> > >> months in the week R-13. Skipping specific bug reports is possible. A
> > >> bug report comment explains the reasons.
> > >> [...]
> >
> > References:
> > [1]
> >
> https://github.com/markuszoeller/openstack/blob/master/scripts/launchpad/expire_old_bug_reports.py
> >
>
> Feel free to submit that to the openstack-infra/release-tools repo. We
> have some other tools in that repo for managing launchpad bugs.
>
> Doug


Rather than blanket-expiring old bugs, it might be better to expire bugs
which are still in a non-triaged state and have had no activity for >18
months. This would be a less aggressive approach, closing only issues which
haven't managed to reach triaged state and are otherwise stale.

This information is available in the API, and I'd be happy to propose a
review to change to this model if it is agreed.
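The narrower criterion proposed here, expiring only bugs that are both untriaged and inactive, could be sketched as a simple filter. The bug records below are illustrative stand-ins for Launchpad bug tasks (a real script would fetch them via launchpadlib's searchTasks); the field names and the second and third bug IDs are hypothetical.

```python
from datetime import datetime, timedelta

CUTOFF = datetime(2016, 5, 30) - timedelta(days=548)  # roughly 18 months back

# Illustrative stand-ins for Launchpad bug tasks (fields are hypothetical).
bugs = [
    {"id": 1298075, "status": "Confirmed",
     "last_activity": datetime(2016, 5, 30)},   # recently active -> keep
    {"id": 1111111, "status": "New",
     "last_activity": datetime(2014, 1, 1)},    # untriaged and stale -> expire
    {"id": 2222222, "status": "Triaged",
     "last_activity": datetime(2014, 1, 1)},    # stale but triaged -> keep
]

UNTRIAGED = {"New", "Incomplete"}

def to_expire(bugs):
    """Expire only untriaged bugs with no activity since the cutoff."""
    return [b["id"] for b in bugs
            if b["status"] in UNTRIAGED and b["last_activity"] < CUTOFF]

print(to_expire(bugs))  # [1111111]
```

Requiring both conditions addresses Clint's and Shoham's concern: a bug that users are still confirming (like 1298075 above) stays open, while only reports that never reached triage and show no recent activity are expired.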

Thanks

--
Kind Regards,
Dave Walker


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-30 Thread Shoham Peller
I support Clint's comment, and as an example, only today I was able to
search for a bug and see that it was reported 2 years ago and hasn't been solved since.
I've commented on the bug saying it happened to me in an up-to-date nova.
I'm talking about a bug which is on your list -
https://bugs.launchpad.net/nova/+bug/1298075

I guess I wouldn't have been able to do so if the bug was closed.

On Mon, May 30, 2016 at 9:37 PM, Clint Byrum  wrote:

> (Top posting as a general reply to the thread)
>
> Bugs are precious data. As much as it feels like the bug list is full of
> cruft that won't ever get touched, one thing that we might be missing in
> doing this is that the user who encounters the bug and takes the time
> to actually find the bug tracker and report a bug, may be best served
> by finding that somebody else has experienced something similar. If you
> close this bug, that user is now going to be presented with the "I may
> be the first person to report this" flow instead of "yeah I've seen that
> error too!". The former can be a daunting task, but the latter provides
> extra incentive to press forward, since clearly there are others who
> need this, and more data is helpful to triagers and fixers.
>
> I 100% support those who are managing bugs doing whatever they need
> to do to make sure users' issues are being addressed as well as can be
> done with the resources available. However, I would also urge everyone
> to remember that the bug tracker is not only a way for developers to
> manage the bugs, it is also a way for the community of dedicated users
> to interact with the project as a whole.
>
> Excerpts from Markus Zoeller's message of 2016-05-23 13:02:29 +0200:
> > TL;DR: Automatic closing of 185 bug reports which are older than 18
> > months in the week R-13. Skipping specific bug reports is possible. A
> > bug report comment explains the reasons.
> >
> >
> > I'd like to get rid of more clutter in our bug list to make it more
> > comprehensible by a human being. For this, I'm targeting our ~185 bug
> > reports which were reported 18 months ago and still aren't in progress.
> > That's around 37% of open bug reports which aren't in progress. This
> > post is about *how* and *when* I do it. If you have very strong reasons
> > to *not* do it, let me hear them.
> >
> > When
> > 
> > I plan to do it in the week after the non-priority feature freeze.
> > That's week R-13, at the beginning of July. Until this date you can
> > comment on bug reports so they get spared from this cleanup (see below).
> > Beginning from R-13 until R-5 (Newton-3 milestone), we should have
> > enough time to gain some overview of the rest.
> >
> > I also think it makes sense to make this a repeated effort, maybe after
> > each milestone/release or monthly or daily.
> >
> > How
> > ---
> > The bug reports which will be affected are:
> > * in status: [new, confirmed, triaged]
> > * AND without assignee
> > * AND created at: > 18 months
> > A preview of them can be found at [1].
> >
> > You can spare bug reports if you leave a comment there which says
> > one of these (case-sensitive flags):
> > * CONFIRMED FOR: NEWTON
> > * CONFIRMED FOR: MITAKA
> > * CONFIRMED FOR: LIBERTY
> >
> > The expired bug report will have:
> > * status: won't fix
> > * assignee: none
> > * importance: undecided
> > * a new comment which explains *why* this was done
> >
> > The comment the expired bug reports will get:
> > This is an automated cleanup. This bug report got closed because
> > it is older than 18 months and there is no open code change to
> > fix this. After this time it is unlikely that the circumstances
> > which lead to the observed issue can be reproduced.
> > If you can reproduce it, please:
> > * reopen the bug report
> > * AND leave a comment "CONFIRMED FOR: "
> >   Only still supported release names are valid.
> >   valid example: CONFIRMED FOR: LIBERTY
> >   invalid example: CONFIRMED FOR: KILO
> > * AND add the steps to reproduce the issue (if applicable)
> >
> >
> > Let me know if you think this comment gives enough information how to
> > handle this situation.
> >
> >
> > References:
> > [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-30 Thread Clint Byrum
(Top posting as a general reply to the thread)

Bugs are precious data. As much as it feels like the bug list is full of
cruft that won't ever get touched, one thing that we might be missing in
doing this is that the user who encounters the bug and takes the time
to actually find the bug tracker and report a bug, may be best served
by finding that somebody else has experienced something similar. If you
close this bug, that user is now going to be presented with the "I may
be the first person to report this" flow instead of "yeah I've seen that
error too!". The former can be a daunting task, but the latter provides
extra incentive to press forward, since clearly there are others who
need this, and more data is helpful to triagers and fixers.

I 100% support those who are managing bugs doing whatever they need
to do to make sure users' issues are being addressed as well as can be
done with the resources available. However, I would also urge everyone
to remember that the bug tracker is not only a way for developers to
manage the bugs, it is also a way for the community of dedicated users
to interact with the project as a whole.

Excerpts from Markus Zoeller's message of 2016-05-23 13:02:29 +0200:
> TL;DR: Automatic closing of 185 bug reports which are older than 18
> months in the week R-13. Skipping specific bug reports is possible. A
> bug report comment explains the reasons.
> 
> 
> I'd like to get rid of more clutter in our bug list to make it more
> comprehensible by a human being. For this, I'm targeting our ~185 bug
> reports which were reported 18 months ago and still aren't in progress.
> That's around 37% of open bug reports which aren't in progress. This
> post is about *how* and *when* I do it. If you have very strong reasons
> to *not* do it, let me hear them.
> 
> When
> 
> I plan to do it in the week after the non-priority feature freeze.
> That's week R-13, at the beginning of July. Until this date you can
> comment on bug reports so they get spared from this cleanup (see below).
> Beginning from R-13 until R-5 (Newton-3 milestone), we should have
> enough time to gain some overview of the rest.
> 
> I also think it makes sense to make this a repeated effort, maybe after
> each milestone/release or monthly or daily.
> 
> How
> ---
> The bug reports which will be affected are:
> * in status: [new, confirmed, triaged]
> * AND without assignee
> * AND created at: > 18 months
> A preview of them can be found at [1].
> 
> You can spare bug reports if you leave a comment there which says
> one of these (case-sensitive flags):
> * CONFIRMED FOR: NEWTON
> * CONFIRMED FOR: MITAKA
> * CONFIRMED FOR: LIBERTY
> 
> The expired bug report will have:
> * status: won't fix
> * assignee: none
> * importance: undecided
> * a new comment which explains *why* this was done
> 
> The comment the expired bug reports will get:
> This is an automated cleanup. This bug report got closed because
> it is older than 18 months and there is no open code change to
> fix this. After this time it is unlikely that the circumstances
> which lead to the observed issue can be reproduced.
> If you can reproduce it, please:
> * reopen the bug report
> * AND leave a comment "CONFIRMED FOR: "
>   Only still supported release names are valid.
>   valid example: CONFIRMED FOR: LIBERTY
>   invalid example: CONFIRMED FOR: KILO
> * AND add the steps to reproduce the issue (if applicable)
> 
> 
> Let me know if you think this comment gives enough information how to
> handle this situation.
> 
> 
> References:
> [1] http://45.55.105.55:8082/bugs-dashboard.html#tabExpired
> 
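
As a rough illustration of the spare-flag check described in the quoted plan
(the flag text, the case-sensitivity, and the supported release names are
taken from the proposal; everything else here is a sketch):

```python
import re

# Case-sensitive flags that spare a bug report from expiry; only
# currently supported release names are valid.
SUPPORTED_RELEASES = ("NEWTON", "MITAKA", "LIBERTY")
SPARE_RE = re.compile(r"CONFIRMED FOR: (%s)" % "|".join(SUPPORTED_RELEASES))

def is_spared(comments):
    """Return True if any bug comment carries a valid CONFIRMED FOR: flag."""
    return any(SPARE_RE.search(comment) for comment in comments)
```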

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [Tempest] Abandoning old code reviews

2016-05-30 Thread Ken'ichi Ohmichi
Hi,

There are many patches in the Tempest review queue which have not been
updated even after getting negative feedback from reviewers or Jenkins.
The Nova team is abandoning such patches [1].
I feel it would be nice to abandon patches which have not been updated
since the end of 2015.
Any thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2016-May/096112.html

Thanks
Ken Ohmichi
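
For illustration, a stale-change filter over `gerrit query --format=JSON`
output might look like the sketch below. The `lastUpdated` epoch field is
what the ssh query interface emits; actually abandoning each result
(e.g. with `gerrit review --abandon`) is deliberately left out, and the
query itself (say, `status:open project:openstack/tempest`) is assumed
to have been run separately:

```python
import json
from datetime import datetime

def stale_changes(query_output, cutoff):
    """Filter `gerrit query --format=JSON` output down to changes whose
    last update is older than `cutoff` (a datetime).

    Each line of the output is one JSON object; the trailing stats
    record has no `lastUpdated` field and is skipped.
    """
    stale = []
    for line in query_output.splitlines():
        change = json.loads(line)
        if "lastUpdated" not in change:
            continue  # row-count/stats record, not a change
        if datetime.utcfromtimestamp(change["lastUpdated"]) < cutoff:
            stale.append(change["number"])
    return stale
```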

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-30 Thread Ken'ichi Ohmichi
2016-05-29 19:25 GMT-07:00 Alex Xu :
>
>
> 2016-05-20 20:05 GMT+08:00 Sean Dague :
>>
>> There are a number of changes up for spec reviews that add parameters to
>> LIST interfaces in Newton:
>>
>> * keypairs-pagination (MERGED) -
>>
>> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
>> * os-instances-actions - https://review.openstack.org/#/c/240401/
>> * hypervisors - https://review.openstack.org/#/c/240401/
>> * os-migrations - https://review.openstack.org/#/c/239869/
>>
>> I think that limit / marker is always a legit thing to add, and I almost
>> wish we just had a single spec which is "add limit / marker to the
>> following APIs in Newton"
>>
>
> Are you looking for code sharing or one microversion? For code sharing, it
> sounds ok if people have some co-work. Probably we need a common pagination
> supported model_query function for all of those. For one microversion, I'm a
> little hesitant: should we keep each change small, or enable all in one
> microversion. But if we have some base code for pagination support, we
> probably can make the pagination as default thing support for all list
> method?

It would be nice to share some common code for this; that would also
help when writing the API docs to show which APIs support these parameters.
A single microversion for all of the above resources would also be nice,
because we can avoid microversion bump conflicts and none of them
seems like a big change.
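
The limit/marker semantics under discussion can be sketched in a few lines;
this is a toy in-memory version (a real implementation would live at the DB
layer, e.g. in a shared model_query helper), shown only to pin down the
behaviour: results start strictly after the marker item, and an unknown
marker is an error, mirroring the 400 a REST API would return:

```python
def paginate(items, limit=None, marker=None):
    """Marker-based pagination over a sorted list of dicts with an 'id' key.

    `marker` is the id of the last item on the previous page; results
    start strictly after it. Raises ValueError for an unknown marker.
    """
    start = 0
    if marker is not None:
        ids = [item["id"] for item in items]
        if marker not in ids:
            raise ValueError("marker %r not found" % marker)
        start = ids.index(marker) + 1
    end = None if limit is None else start + limit
    return items[start:end]
```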

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] nova's broken py27/py33 and functional tests

2016-05-30 Thread Sean McGinnis
On Mon, May 30, 2016 at 05:15:56PM +0200, Sylvain Bauza wrote:
> (titling now accordingly)
> 
> On 30/05/2016 14:01, Sylvain Bauza wrote:
> >Hi,
> >
> >For the moment, we have a non-resolved critical bug hitting our
> >UTs and functional tests :
> >https://bugs.launchpad.net/nova/+bug/1586976
> >
> >Markus Z. and I are currently investigating the root cause, but in
> >the meantime, please refrain asking for rechecks unless you're
> >sure why you're asking it.
> >Be gentle, save electricity by not wasting our precious CI resources :-)
> >
> 
> 
> A bit of progress here. We were lucky enough to bisect the problem
> up to cinderclient-1.7.1 and finally identifying the culprit:
> https://review.openstack.org/#/c/294751/2
> 
> Victor Stinner is proposing https://review.openstack.org/#/c/322878/1

That fix is making its way through the gate queue right now. Looks like
we should have a fix in ~40 minutes.

I'll request a new python-cinderclient release as soon as that goes in.
I'm in and out today though, so if someone sees it and there's no update
from me, feel free to request the release and I'll just put my vote on
there.

Sean (smcginnis)

> 
> Cinder folks, please treat this change with all the love you can
> give; it does seem to unblock Nova jobs.
> 
> -Sylvain
> 
> >-Sylvain
> >
> >
> >__
> >
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Focus for week R-18 May 30-June 03

2016-05-30 Thread Nikhil Komawar

Hi all,


Thanks for the feedback offline indicating that I need to clarify my
intent in sending this email.


This email was intended primarily to let the Glance team know more about
what the release plans look like, as not everyone is able to attend the
meetings and it's nearly impossible to keep everyone in the loop,
ad hoc, on when I intend to work on release stuff.

I guess I should have mentioned that I am putting my release-liaison
hat on, to establish the awareness tone of this email. It wasn't a
directive to the team but rather a call for partnership to adhere to the
proposed timeline. The purpose is: people are welcome to contribute
but are not forced to do so; at the same time people may want to know
about the dates to target their bugs/reviews accordingly and help
prioritize the work. Also, as reviews are merged the gate may sometimes
become unstable, and the team will benefit from knowing that such a
possibility is at hand and is something to watch out for.


Part of this email specific to the releases was from 'a release liaison
role'  and I do apologize for missing out on adding that info in the
email. It is more of a plan for me that others are welcome to join, and
all non-mandatory help on this front is greatly appreciated.


Maybe the title can also be changed a bit to indicate that it is an
awareness-and-focus email reflecting the focus of a subset of individuals
(of course, OpenStack is a community, so a great number of things are
non-mandatory :-) ).


I am sorry if this has been hurtful to some.


On 5/29/16 9:50 PM, Nikhil Komawar wrote:
> Hi team,
>
>
> The focus for week R-18 is as follows:
>
>
> * Mon, part of Tues: Reviews that look good enough to be merged and
> suitable for newton-1 release, any reviews that help fixing important
> bugs or those that possibly need further testing later in the release.
> Please keep any eye on the gate to ensure things are in place. I plan to
> propose a review for newton-1 release tag (a bit early) on Tuesday May
> 31st. Last few days, I have pushed quite a few reviews that looked clear
> enough to be merged. Now, we need to merge reviews (if there are any)
> that people are motivated enough to get them ready for newton-1.
>
> * rest of Tues, Wed: Reviews that help us get a good store and client
> release. As discussed in the meeting [1], I plan to 'propose' a store
> and a client release tag later in the week (possibly on late Wednesday
> or on Thursday). I want Glance team to be ready from the release
> perspective, the actual release will happen whenever release team gets a
> chance; given R-18 is a general release week, that looks less likely then.
>
> * Fri: Reviews on glance-specs, see if the author has followed up on
> your comments, see which reviews look important to give a early
> indication to the author that the spec is not likely to be accepted for
> Newton if we do not deem it fit during our discussions at
> spec-soft-freeze of June 17, R-16 [2].
>
>
> Please be mindful that Monday May 30th is a federal holiday [3] in the US so 
> you may see folks (across OpenStack) OOO. However, I will be available on 
> Monday.
>
>
> Reference for the week numbers:
> http://releases.openstack.org/newton/schedule.html
>
>
> (unlike some of my emails meant to be notices, this email is okay for 
> discussion)
>
>
> As always, feel free to reach out for any questions or concerns.
>
>
> [1] 
> http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-05-26-14.03.log.html#l-88
> [2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094780.html
> [3] https://en.wikipedia.org/wiki/Memorial_Day
>
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [stable] Proposal to add Ian Cordasco to glance-stable-maint

2016-05-30 Thread Nikhil Komawar
This has been done.


Thanks!


On 5/25/16 5:12 PM, Nikhil Komawar wrote:
> Hi all,
>
>
> I would like to propose adding Ian to glance-stable-maint team. The
> interest is coming from him and I've already asked for feedback from the
> current glance-stable-maint folks, which has been in Ian's favor. Also,
> as Ian mentions the current global stable team isn't going to subsume
> the per-project teams anytime soon.
>
>
> Ian is willing to shoulder the responsibility of stable liaison for
> Glance [1] which is great news. If no objections are raised by Friday
> May 27th 2359UTC, we will go ahead and do the respective changes.
>
>
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch
>
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-05-30 Thread Jaesuk Ahn
Hi, Monasca developers and users,

https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
"For our current and future customers, we’ll be offering clustering and
high availability through Influx Cloud, our managed hosting offering, and
Influx Enterprise, our on-premise offering, in the coming months.”


It seems like “clustering” and “high availability” of InfluxDB will be
available only in the commercial version.
Monasca is currently leveraging InfluxDB as its metrics and alarm database.
Besides Vertica, InfluxDB is currently the only open source option to use.

With this update stating that the InfluxDB open source version will not
have the clustering/HA features,
I would like to know whether there has been any discussion in the Monasca
community about adding more database backends besides InfluxDB, especially
OpenTSDB.


Thank you.





-- 
Jaesuk Ahn, Ph.D.
Software Defined Infra Tech. Lab.
SKT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] nova's broken py27/py33 and functional tests

2016-05-30 Thread Sylvain Bauza



On 30/05/2016 17:33, Davanum Srinivas wrote:

Sylvain,

I don't see any reverts proposed in requirements/ repo... yet


https://review.openstack.org/#/c/322899/ is on its way.
Folks, you can monitor this change, once it's merged, it should fix Nova 
jobs since we don't need to sync our local requirements for UTs 
(instead, we directly download the upper-constraints file)


Thanks,
-Sylvain


-- Dims

On Mon, May 30, 2016 at 11:15 AM, Sylvain Bauza  wrote:

(titling now accordingly)

On 30/05/2016 14:01, Sylvain Bauza wrote:

Hi,

For the moment, we have a non-resolved critical bug hitting our UTs and
functional tests :
https://bugs.launchpad.net/nova/+bug/1586976

Markus Z. and I are currently investigating the root cause, but in the
meantime, please refrain asking for rechecks unless you're sure why you're
asking it.
Be gentle, save electricity by not wasting our precious CI resources :-)



A bit of progress here. We were lucky enough to bisect the problem up to
cinderclient-1.7.1 and finally identifying the culprit:
https://review.openstack.org/#/c/294751/2

Victor Stinner is proposing https://review.openstack.org/#/c/322878/1

Cinder folks, please treat this change with all the love you can give; it
does seem to unblock Nova jobs.

-Sylvain


-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] next notification subteam meeting

2016-05-30 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.05.31 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160531T17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] nova's broken py27/py33 and functional tests

2016-05-30 Thread Davanum Srinivas
Sylvain,

I don't see any reverts proposed in requirements/ repo... yet

-- Dims

On Mon, May 30, 2016 at 11:15 AM, Sylvain Bauza  wrote:
> (titling now accordingly)
>
> On 30/05/2016 14:01, Sylvain Bauza wrote:
>>
>> Hi,
>>
>> For the moment, we have a non-resolved critical bug hitting our UTs and
>> functional tests :
>> https://bugs.launchpad.net/nova/+bug/1586976
>>
>> Markus Z. and I are currently investigating the root cause, but in the
>> meantime, please refrain asking for rechecks unless you're sure why you're
>> asking it.
>> Be gentle, save electricity by not wasting our precious CI resources :-)
>>
>
>
> A bit of progress here. We were lucky enough to bisect the problem up to
> cinderclient-1.7.1 and finally identifying the culprit:
> https://review.openstack.org/#/c/294751/2
>
> Victor Stinner is proposing https://review.openstack.org/#/c/322878/1
>
> Cinder folks, please treat this change with all the love you can give; it
> does seem to unblock Nova jobs.
>
> -Sylvain
>
>> -Sylvain
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] nova's broken py27/py33 and functional tests

2016-05-30 Thread Sylvain Bauza

(titling now accordingly)

On 30/05/2016 14:01, Sylvain Bauza wrote:

Hi,

For the moment, we have a non-resolved critical bug hitting our UTs 
and functional tests :

https://bugs.launchpad.net/nova/+bug/1586976

Markus Z. and I are currently investigating the root cause, but in the 
meantime, please refrain asking for rechecks unless you're sure why 
you're asking it.

Be gentle, save electricity by not wasting our precious CI resources :-)




A bit of progress here. We were lucky enough to bisect the problem up to 
cinderclient-1.7.1 and finally identifying the culprit:
https://review.openstack.org/#/c/294751/2


Victor Stinner is proposing https://review.openstack.org/#/c/322878/1

Cinder folks, please treat this change with all the love you can give; it
does seem to unblock Nova jobs.


-Sylvain


-Sylvain


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-30 Thread Loo, Ruby
Hi,

>But the issue here is just capacity. Whether or not we keep an instance
>in a deleting state, or when we release quota, doesn't change the
>Tempest failures from what I can tell. The suggestions below address
>that.
>
>
>> 
>> > > >
>> > > > I think we should go with #1, but instead of erasing the whole disk
>> > > > for real maybe we should have a "fake" clean step that runs quickly
>> > > > for tests purposes only?
>> > > >
>> 
>> Disabling the cleaning step (or having a fake one that does nothing) for
>> the
>> gate would get around the failures at least. It would make things work
>> again
>> because the nodes would be available right after Nova deletes them.

I lost track of what we are trying to test. If we want to test that an ironic
node gets cleaned, then add fake cleaning. If we don't care that the node gets
cleaned (because e.g. we have a different test that will test for that), then
disable the cleaning. [And if we don't care either way, but one is harder to do
than the other, go with the easier ;)]

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] weekly meeting

2016-05-30 Thread Emilien Macchi
Hi!

We'll have our weekly meeting tomorrow at 2pm UTC on
#openstack-meeting-alt.

Here's a first agenda:
https://wiki.openstack.org/wiki/Meetings/TripleO#Agenda_for_next_meeting

Feel free to add more topics, and any outstanding bug and patch.

See you tomorrow!
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #83

2016-05-30 Thread Emilien Macchi
Hi Puppeteers!

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting-4.

Here's a first agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160531

Feel free to add more topics, and any outstanding bug and patch.

See you tomorrow!
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Broken py27/py33 and functional tests

2016-05-30 Thread Sylvain Bauza

Hi,

For the moment, we have a non-resolved critical bug hitting our UTs and 
functional tests :

https://bugs.launchpad.net/nova/+bug/1586976

Markus Z. and I are currently investigating the root cause, but in the 
meantime, please refrain asking for rechecks unless you're sure why 
you're asking it.

Be gentle, save electricity by not wasting our precious CI resources :-)

-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][horizon] django_openstack_auth 2.3.0 release (newton)

2016-05-30 Thread no-reply
We are jubilant to announce the release of:

django_openstack_auth 2.3.0: Django authentication backend for use
with OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/django_openstack_auth/

With package available at:

https://pypi.python.org/pypi/django_openstack_auth

Please report issues through launchpad:

https://bugs.launchpad.net/django-openstack-auth

For more details, please see below.

Changes in django_openstack_auth 2.2.0..2.3.0
-

dd56c4e Clarify the confusing warning in case of Keystone v2.0 vs v3 conflict
405cb08 Fix Keystone url version suffix when webpath is present
086fc27 Use login endpoint as key for AVAILABLE_REGIONS
313827c Updated from global requirements
5f9fbac Imported Translations from Zanata
62e079e Updated from global requirements
7f2f632 Updated from global requirements
0d69d13 Imported Translations from Zanata
489bedf Updated from global requirements
d7a2dce When calculating session_time, use the actual token life
9545694 Imported Translations from Zanata
1961c1a Imported Translations from Zanata
e9e09a4 Imported Translations from Zanata
67ce03e Fix token hashing with python 3
75a6b97 Don't call the Keystone client if the token is None

Diffstat (except docs and test files)
-

openstack_auth/backend.py |  2 +-
openstack_auth/locale/ar/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/ca/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/cs/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/de/LC_MESSAGES/django.po| 17 +++--
openstack_auth/locale/django.pot  | 88 ---
openstack_auth/locale/en_AU/LC_MESSAGES/django.po | 11 ++-
openstack_auth/locale/en_GB/LC_MESSAGES/django.po | 10 +--
openstack_auth/locale/es/LC_MESSAGES/django.po| 17 +++--
openstack_auth/locale/es_MX/LC_MESSAGES/django.po | 11 ++-
openstack_auth/locale/fi_FI/LC_MESSAGES/django.po | 10 +--
openstack_auth/locale/fr/LC_MESSAGES/django.po| 25 +++
openstack_auth/locale/hi/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/it/LC_MESSAGES/django.po| 43 +--
openstack_auth/locale/ja/LC_MESSAGES/django.po| 18 ++---
openstack_auth/locale/ko_KR/LC_MESSAGES/django.po | 33 +
openstack_auth/locale/ne/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/nl_NL/LC_MESSAGES/django.po | 10 +--
openstack_auth/locale/pa_IN/LC_MESSAGES/django.po | 10 ++-
openstack_auth/locale/pl_PL/LC_MESSAGES/django.po | 16 +++--
openstack_auth/locale/pt/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/pt_BR/LC_MESSAGES/django.po | 17 +++--
openstack_auth/locale/ru/LC_MESSAGES/django.po|  9 ++-
openstack_auth/locale/sl_SI/LC_MESSAGES/django.po | 10 +--
openstack_auth/locale/sr/LC_MESSAGES/django.po| 10 +--
openstack_auth/locale/tr_TR/LC_MESSAGES/django.po | 27 +++
openstack_auth/locale/zh_CN/LC_MESSAGES/django.po | 31 
openstack_auth/locale/zh_TW/LC_MESSAGES/django.po | 28 ++--
openstack_auth/user.py|  6 +-
openstack_auth/utils.py   | 27 ---
openstack_auth/views.py   |  3 +-
requirements.txt  |  4 +-
test-requirements.txt |  2 +-
35 files changed, 300 insertions(+), 287 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3d62f8b..6d19807 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ Django<1.9,>=1.8 # BSD
-oslo.config>=3.7.0 # Apache-2.0
+oslo.config>=3.9.0 # Apache-2.0
@@ -8 +8 @@ oslo.policy>=0.5.0 # Apache-2.0
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
+python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 133922c..282368c 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@ hacking<0.11,>=0.10.0
-Babel>=1.3 # BSD
+Babel>=2.3.4 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to list all the servers of a user across projects

2016-05-30 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/86042/?show=86042#q86042
From: imocha 

How can I list all the servers owned by a user across projects? I tried creating a token scoped to a project and was able to list the servers in that project using the compute endpoint http://localhost:8774/v2.1/servers.

However, I want to get the list of servers for all the projects a user has created. Initially I wanted to change the policy.json under nova, but could not get it working. The rule in the policy is:
"is_admin:True or project_id:%(project_id)s"

Is there a way to get all the servers for the user in the domain by modifying the above rule?

Also, how can I access the attributes of the token that is passed? I looked at the content of the token in the token table in the keystone database. The token's "extra" field is a JSON document containing the token details. Can we access this token information from policy.json so that we can write an appropriate rule to evaluate?

I tried syntax like "token.domain.id", but it does not work.
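For the first question, the usual approach is the Nova API's all_tenants query parameter on the servers listing rather than a policy.json change. A minimal sketch (the endpoint and token values are placeholders; the caller's token must satisfy the corresponding compute policy, which is admin-only by default):

```python
import json
import urllib.parse
import urllib.request

NOVA_ENDPOINT = "http://localhost:8774/v2.1"  # placeholder compute endpoint


def all_servers_request(endpoint, token):
    """Build the URL and headers for an all-projects server listing.

    all_tenants=1 asks Nova to return servers from every project; the
    token must pass the corresponding compute policy (admin-only by
    default), otherwise Nova silently falls back to the caller's project.
    """
    query = urllib.parse.urlencode({"all_tenants": 1})
    return "%s/servers/detail?%s" % (endpoint, query), {"X-Auth-Token": token}


def list_all_servers(endpoint, token):
    url, headers = all_servers_request(endpoint, token)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Each server record carries its owning project in tenant_id.
    return [(s["tenant_id"], s["id"], s["name"]) for s in body["servers"]]
```

A per-user (rather than per-project) listing is not something the default policy rule expresses, so filtering the all_tenants result by the user_id field is the pragmatic workaround.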





Re: [openstack-dev] [gate] [nova] live migration, libvirt 1.3, and the gate

2016-05-30 Thread Kashyap Chamarthy
On Thu, May 26, 2016 at 10:55:47AM -0400, Sean Dague wrote:
> On 05/26/2016 05:38 AM, Kashyap Chamarthy wrote:
> > On Wed, May 25, 2016 at 05:42:04PM +0200, Kashyap Chamarthy wrote:
> > 
> > [...]
> > 
> >> So, in short, the central issue seems to be this: the custom 'gate64'
> >> model is not being translated by libvirt into a model that QEMU can
> >> recognize.
> > 
> > An update:
> > 
> > Upstream libvirt points out that this turns out to be a regression, and
> > bisected it to commit (in libvirt Git): 1.2.9-31-g445a09b -- "qemu:
> > Don't compare CPU against host for TCG".
> > 
> > So, I expect there's going to be a fix pretty soon in upstream libvirt.
> 
> Which is good... I wonder how long we'll be waiting for that back in our
> distro packages though.

Yeah, until the fix lands, our current options seem to be:

  (a) Revert to a known good version of libvirt
  
  (b) Use nested virt (i.e. ) -- I doubt that is possible
  in the RAX environment, which was using Xen, last I knew.
  
  (c) Or a different CPU model


-- 
/kashyap



Re: [openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-05-30 Thread Rodrigo Barbieri
Hi Marios,

I am ok with landing manila in tripleo with manila-data pending as you are
suggesting. Manila-data can be added later, thus not blocking your current
efforts.

Regards,
--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Contributor
Federal University of São Carlos

Re: [openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-05-30 Thread Marios Andreou
On 27/05/16 22:46, Rodrigo Barbieri wrote:
> Hello Marios,
> 

Hi Rodrigo, thanks very much for taking the time, indeed that clarifies
quite a lot:

> The Data Service is needed for Share Migration feature in manila since the
> Mitaka release.
> 
> There has not been any work done yet towards adding it to puppet. Since its
> introduction in Mitaka, it has been made compatible only with devstack so
> far.

I see, so at least that confirms I didn't just miss it in puppet-manila
... so that really is a prerequisite to us being able to configure and
enable manila-data in tripleo, and someone will have to look at doing that.

> 
> I have not invested time thinking about how it should fit in a HA
> environment at this stage, this is a service that currently sports a single
> instance, but we have plans to make it more scalable in the future.
> What I have briefly thought about is the idea where there would be a
> scheduler that decides whether to send the data job to m-dat1, m-dat2 or
> m-dat3 and so on, based on information that indicates how busy each Data
> Service instance is.
> 
> For this moment, active/passive makes sense in the context that manila
> expects only a single instance of m-dat. But active/active would allow the
> service to be load balanced through HAProxy and could partially accomplish
> what we have plans to achieve in the future.

OK thanks, so we can proceed with a/p for manila-share and manila-data
(one thought below) for now and revisit once you've worked out the
details there.

> 
> I hope I have addressed your question. The absence of m-dat means the
> Share Migration feature will not work.
> 

thanks for the clarification. So then I wonder if this is a feature we
can live w/out for now, especially if this is an obstacle to landing
manila-anything in tripleo. I mean, if we can live w/out the Share
Migration Feature, until we get proper support for configuring
manila-data landed, then lets land w/out manila data and just be really
clear about what is going on, manila-data pending etc.

thanks again, marios



> 
> Regards,
> 
> On Fri, May 27, 2016 at 10:10 AM, Marios Andreou  wrote:
> 
>> Hi all, I explicitly cc'd a few folks I thought might be interested for
>> visibility, sorry for spam if you're not. This email is about getting
>> manila landed into tripleo asap, and the current obstacles to that (at
>> least those visible to me):
>>
>> The current review [1] isn't going to land as is, regardless of the
>> outcome/discussion of any of the following points because all the
>> services are going to "composable controller services". How do people
>> feel about me merging my review at [2] into its parent review (which is
>> the current manila review at [1]). My review just takes what is in [1]
>> (caveats below) and makes it 'composable', and includes a dependency on
>> [3] which is the puppet-tripleo side for the 'composable manila'.
>>
>>---> Proposal merge the 'composable manila' tripleo-heat-templates
>> review @ [2] into the parent review @ [1]. The review at [2] will be
>> abandoned. We will continue to try and land [1] in its new 'composable
>> manila' form.
>>
>> WRT the 'caveats' mentioned above and why I haven't just ported
>> what is in the current manila review @ [1] into the composable one @
>> [2]... there are two main things I've changed, both of which on
>> guidance/discussion on the reviews.
>>
>> The first is addition of manila-data (wasn't in the original/current
>> review at [1]). The second a change to the pacemaker constraints, which
>> I've corrected to make manila-data and manila-share pacemaker a/p but
>> everything else systemd managed, based on ongoing discussion at [3].
>>
>> So IMO to move forward I need clarity on both those points. For
>> manila-data my concerns are is it already available where we need it. I
>> looked at puppet-manila [4] and couldn't quickly find much (any) mention
>> of manila-data. We need it there if we are to configure anything for it
>> via puppet. The other unknown/concern here is whether manila-data gets
>> delivered with the manila package (I recall manila-share possibly, at
>> least one of them, had a stand-alone package) otherwise we'll need to
>> add it to the image. But mainly my question here is, can we live without
>> it? I mean can we deploy sans manila-data or does it just not make sense
>> (sorry for silly question). The motivation is if we can let's land and
>> iterate to add it.
>>
>>Q. Can we live w/out manila-data so we can land and iterate (esp. if
>> we need to land things into puppet-manila or anywhere else it is yet to
>> be landed)
>>
>> For the pacemaker constraints I'm mainly just waiting for confirmation
>> of our current understanding.. manila-share and manila-data are a/p
>> pacemaker managed, everything else systemd.
>>
>> thanks for any info, I will follow up and update the reviews accordingly
>> based on any comments,
>>
>> marios
>>
>> [1] "Enable Manila integration" 

Re: [openstack-dev] [congress] Spec for congress.conf

2016-05-30 Thread Masahito MUROI

Hi Bryan,


On 2016/05/28 2:52, Bryan Sullivan wrote:

Masahito,

Sorry, I'm not quite clear on the guidance. Sounds like you're saying
all options will be defaulted by Oslo.config if not set in the
congress.conf file. That's OK, if I understood.

You're right.



It's clear to me that some will be deployment-specific.

But what I am asking is where is the spec for:
- what congress.conf fields are supported i.e. defined for possible
setting in a release

Your generated congress.conf has a list of all supported config fields.


- which fields are mandatory to be set (or Congress will simply not work)
- which fields are not mandatory, but must be set for some specific
purpose, which right now is unclear
Aside from deployment-specific configs, IIRC the only field you need to
change from its default to run Congress with the default settings is the
"drivers" field.




I'm hoping the answer isn't "go look at the code"! That won't work for
end-users, who are looking to use Congress but not decipher the
meaning/importance of specific fields from the code.

I guess your generated config describes the purpose of each config field.

If by "spec" you mean documents like [1], unfortunately Congress
doesn't have that kind of document yet.


[1] http://docs.openstack.org/mitaka/config-reference/

best regards,
Masahito



Thanks,
Bryan Sullivan


From: muroi.masah...@lab.ntt.co.jp
Date: Fri, 27 May 2016 15:40:31 +0900
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [congress] Spec for congress.conf

Hi Bryan,

Oslo.config, which Congress uses to manage configuration, sets each
field to its default value if you don't specify a value in
congress.conf. In that sense, every config option is optional rather than required.

In my experience, config values that differ per deployment, like IP
addresses and so on, have to be set, while the others only need to be
changed when you want Congress to behave differently from the defaults.
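As a concrete sketch of that split (the section and field names follow the generated sample config, but the driver path and connection string below are illustrative placeholders, not a canonical list):

```ini
[DEFAULT]
# Deployment-independent options can simply be omitted; oslo.config
# falls back to their built-in defaults. The datasource driver list is
# one of the few fields commonly changed (module path is illustrative).
drivers = congress.datasources.neutronv2_driver.NeutronV2Driver

[database]
# Deployment-specific: must point at this deployment's database.
connection = mysql+pymysql://congress:CONGRESS_DBPASS@CONTROLLER_IP/congress
```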

best regards,
Masahito

On 2016/05/27 3:36, SULLIVAN, BRYAN L wrote:
> Hi Congress team,
>
>
>
> Quick question for anyone. Is there a spec for the fields in the
> congress.conf file? As of Liberty this has to be tox-generated, but I need to know
> which conf values are required vs optional. The generated sample output
> doesn't clarify that. This is for the Puppet Module and JuJu Charm I am
> developing with the help of RedHat and Canonical in OPNFV. I should have
> Congress installed by default (for the RDO and JuJu installers) in the
> OPNFV Colorado release in the next couple of weeks, and the
> congress.conf file settings are an open question. The Puppet module will
> also be used to create a Fuel plugin for installation.
>
>
>
> Thanks,
>
> Bryan Sullivan | AT&T
>
>
>
>
>
>



--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539







--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539





[openstack-dev] [mistral] Team meeting reminder - 05/30/2016

2016-05-30 Thread Renat Akhmerov
Hi,

This is a reminder about the team meeting that we’ll have today at 16.00 UTC at 
#openstack-meeting.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Newton-2 scope
Open discussion

As usually, feel free to bring your own topics.

Renat Akhmerov
@Nokia



[openstack-dev] [vitrage] vitrage - how much links model in API is permanent?

2016-05-30 Thread Malin, Eylon (Nokia - IL)
Hi,

While calling /v1/topology/, the response has a links part, which is a
list of dicts. Each dict has the following properties:

is_deleted : Boolean
key : string
relationship_type : string
source : int
target : int

How permanent is that structure?
Can I assume these keys will stay around for the long term?
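For reference, a consumer that depends on exactly those five keys might look like this (the payload below is invented for illustration; only the key names come from the /v1/topology description above):

```python
# Sample links payload matching the documented fields: is_deleted (bool),
# key (str), relationship_type (str), source (int), target (int).
sample_links = [
    {"is_deleted": False, "key": "contains", "relationship_type": "contains",
     "source": 0, "target": 1},
    {"is_deleted": True, "key": "attached", "relationship_type": "attached",
     "source": 1, "target": 2},
]


def active_edges(links):
    """Return (source, target, relationship_type) for non-deleted links."""
    return [
        (link["source"], link["target"], link["relationship_type"])
        for link in links
        if not link["is_deleted"]
    ]


print(active_edges(sample_links))  # → [(0, 1, 'contains')]
```

Code like this is exactly what breaks if the key names change, hence the question about API stability.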

Thank you

Eylon



[openstack-dev] [neutron] [dns-name] Is there some use case which users manually set dns-name?

2016-05-30 Thread zhang bailin
If there is no such case, it's better to deny PUT on dns-name.

Detail info:
https://bugs.launchpad.net/neutron/+bug/1583739
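In the meantime, operators could restrict the attribute through policy rather than an API change. A hypothetical policy.json entry (check that your Neutron version actually exposes a per-attribute policy target for dns_name before relying on it):

```json
{
    "update_port:dns_name": "rule:admin_only"
}
```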
