[openstack-dev] [barbican][castellan] How to share secrets in barbican

2017-03-27 Thread yanxin...@cmss.chinamobile.com

 Hello, folks:
As I understand it, secrets are saved in a user's project/domain, and other
projects/users cannot retrieve them.
   But I have a situation where many users need to retrieve the same secret.

   After looking into castellan's usage, I see the approach of saving the
credentials in the configuration file,
so that all operators use this pre-created user to create/retrieve secrets.
I want to know: is this approach typical and generally accepted? Do other
projects face this issue?
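For reference, the pre-created-user approach usually means putting service credentials into the [key_manager] section that castellan reads. The snippet below is only a hedged illustration; the exact option names vary by castellan release and backend, so treat every value here as a placeholder rather than a verified configuration:

```ini
# Illustrative castellan configuration for a shared pre-created user.
# Option names are approximate; check your castellan release's docs.
[key_manager]
backend = barbican
auth_type = password
username = shared-secret-user          ; placeholder: the pre-created user
password = CHANGE_ME
project_name = shared-secret-project   ; all consumers act as this project
user_domain_name = Default
project_domain_name = Default

[barbican]
auth_endpoint = http://controller:5000/v3   ; placeholder endpoint
```

Since barbican scopes secrets to the project that created them, every service authenticating as this one project can read the same secrets, which is the sharing behavior asked about (at the cost of per-user audit granularity).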

Thanks.
Yan Xing'an
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-27 Thread Rui Chen
Thank you Matt, the background information is important. It seems nobody
knows how the add-fixed-ip API works, and there is no concrete use case
for it. The neutron port-update API now also supports setting multiple
fixed IPs on a port, and the fixed-IP update syncs to the nova side
automatically (I verified it in my latest devstack). In the multiple-NICs
case, updating the fixed IPs of a specific port is easier for me to
understand than the nova add-fixed-ip API.

So if anyone knows the original API design, or has used the nova add/remove
fixed-ip API and would like to share their use cases, it would help us
understand how the API works and when we should use it. Then we can update
the api-ref with concrete usage and avoid confusing users. Feel free to
reply, thank you.
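The ordering dependence being discussed can be illustrated with a tiny, self-contained sketch (this is not the actual nova code; all names and data here are made up) that mirrors the loop Rui Chen describes: try each (port, subnet) pair and stop at the first successful update.

```python
def add_fixed_ip_sketch(ports, subnets, update_port):
    """Try each (port, subnet) pair; stop at the first successful update.

    Mirrors the described behavior: the outcome depends entirely on the
    order of the lists neutron happens to return.
    """
    for port in ports:
        for subnet in subnets:
            fixed_ips = port["fixed_ips"] + [{"subnet_id": subnet["id"]}]
            if update_port(port["id"], fixed_ips):
                return port["id"], subnet["id"]
    return None

# Fake data: one port, two subnets on the same network.
ports = [{"id": "port-1", "fixed_ips": [{"subnet_id": "subnet-a"}]}]
subnets = [{"id": "subnet-a"}, {"id": "subnet-b"}]

# An update that always succeeds: the first subnet in the list wins.
print(add_fixed_ip_sketch(ports, subnets, lambda pid, ips: True))
# ('port-1', 'subnet-a')

# Reverse the subnet order and the result changes.
print(add_fixed_ip_sketch(ports, list(reversed(subnets)), lambda pid, ips: True))
# ('port-1', 'subnet-b')
```

Because the function returns on the first success, whichever subnet is listed first wins; nothing in the algorithm prefers one subnet over another.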

2017-03-27 23:36 GMT+08:00 Matt Riedemann :

> On 3/27/2017 7:23 AM, Rui Chen wrote:
>
>> Hi:
>>
>> A question about the nova AddFixedIp API. The nova api-ref[1] describes the
>> API as "Adds a fixed IP address to a server instance, which associates
>> that address with the server." The argument of the API is a network id, so if
>> there are two or more subnets in a network, which one gets to
>> associate an IP address to the instance? And is the API behavior always
>> consistent? I'm not sure.
>> The latest code[2] gets all of the instance's ports and the subnets of
>> the specified network, then loops over them, but it returns when the first
>> update_port succeeds, so the API behavior depends on the order of the subnet
>> and port lists returned by the neutron API. I have no idea what
>> scenario we should use the API in, or about the original design; does
>> anyone know?
>>
>> [1]: https://developer.openstack.org/api-ref/compute/#add-associate-fixed-ip-addfixedip-action
>> [2]: https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1366
>>
>>
>>
>>
> I wondered about this API implementation myself awhile ago, see this bug
> report for details:
>
> https://bugs.launchpad.net/nova/+bug/1430512
>
> There was a related change for this from garyk:
>
> https://review.openstack.org/#/c/163864/
>
> But that was abandoned.
>
> I'm honestly not really sure what the direction is here. From what I
> remember when I reported that bug, this was basically a feature-parity
> implementation in the compute API for the multinic API with nova-network.
> However, I'm not sure it's very usable. There is a Tempest test for this
> API, but I think all it does is attach an interface and make sure that does
> not blow up; it does not try to use the interface to ssh into the guest,
> for example.
>
> --
>
> Thanks,
>
> Matt
>
>


[openstack-dev] [neutron][networking-l2gw] Project update

2017-03-27 Thread Gary Kotton
Hi,
Please see below for a current update on the status of the project.

1. stable/ocata:

   a. A tag 10.0.0 has been created.

   b. The code has been updated to pass unit tests (there was a breakage as it
was pulling master neutron).

2. master:

   a. Because of the tag above, we created a dummy tag to ensure that later
versions will be used.

   b. Thanks to xuqihou for kicking the wheels on the log translations
(https://review.openstack.org/#/c/447949/).

   c. There are a few other patches that need some review cycles
(https://review.openstack.org/#/q/project:openstack/networking-l2gw).
Thanks
Gary


Re: [openstack-dev] [mogan][valence] Valence integration

2017-03-27 Thread Zhenguo Niu
OK, thanks Yang, Lin, I will prepare a spec for the new flavor soon!

On Tue, Mar 28, 2017 at 1:32 AM, Yang, Lin A  wrote:

> Hi Zhenguo,
>
>
>
> The spec looks perfect to me, thanks a lot for doing that.
>
>
>
> The python-valenceclient is a high priority for the valence Pike release, and
> is still in progress right now. We plan to release the python binding library
> first. For now, you will need a simple wrapper if you start coding right away.
>
>
>
> Regards,
>
> Lin.
>
>
>
> *From:* Zhenguo Niu [mailto:niu.zgli...@gmail.com]
> *Sent:* Friday, March 24, 2017 7:23 PM
> *To:* Yang, Lin A 
> *Cc:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [mogan][valence] Valence integration
>
>
>
> Thanks Yang, Lin for the explanation!
>
>
>
> The Valence flavor is a good example for baremetal instances; we do
> need such a flavor :D
>
>
>
> I will draft a spec for this; you can help review it later. Another
> question: is the Valence client ready to use, or do we need to wrap
> the REST API ourselves?
>
>
>
>
>
>
>
> On Sat, Mar 25, 2017 at 9:37 AM, Yang, Lin A  wrote:
>
> Hi Zhenguo,
>
>
>
> Please check out the latest valence api spec[0]; currently it supports two
> ways to specify the arguments when composing a node via the valence api.
>
> 1. Specify a flavor for composition: specify the flavor uuid in the
> 'flavor_id' field ({'flavor_id': flavor_uuid}) besides the name and
> description fields. An example request body is shown below.
>
>   {'name': 'new_node',
>    'description': 'test composition',
>    'flavor_id': 'fake_uuid'}
>
>
>
> 2. Specify every hardware detail, like cpu, memory, local/remote drive,
> nic, in the 'properties' field.
>
>   {'name': 'new_node',
>    'description': 'test composition',
>    'properties': {'processor': {'total_cores': 8,
>                                 'model': 'fake_model'},
>                   'memory': {'capacity_mib': 4096,
>                              'type': 'DDR3'}}}
>
> We will update the user document to list all available parameters for node
> composition soon.
>
>
>
> [0] https://github.com/openstack/valence/blob/0db8a8e186e25ded2b17460f5ae2ce9abf576851/api-ref/source/valence-api-v1-nodes.inc
>
>
>
> Thanks,
>
> Lin.
>
> *From:* Zhenguo Niu [mailto:niu.zgli...@gmail.com]
> *Sent:* Tuesday, March 21, 2017 4:20 AM
> *To:* OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [mogan][valence] Valence integration
>
>
>
> hi guys,
>
>
>
> Here is a spec about Mogan and Valence integration[1], but before this
> happens, I would like to know what information is needed when requesting to
> compose a node through Valence. From the API doc[2], I can only find the
> name and description parameters, but that seems incomplete; I suppose it
> should at least include cpus, ram, disk, or maybe cpuinfo. We need to
> align on this before introducing a new flavor for both RSD nodes and
> generic nodes.
>
>
>
>
>
> [1] https://review.openstack.org/#/c/441790/
>
> [2] https://github.com/openstack/valence/blob/master/api-ref/source/valence-api-v1-nodes.inc#request
>
>
>
> --
>
> Best Regards,
>
> Zhenguo Niu
>
>
>
>
>
> --
>
> Best Regards,
>
> Zhenguo Niu
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [tripleo][ui] [heat] i18n proposal for heat templates 'description' help strings

2017-03-27 Thread Peng Wu
On Thu, 2017-03-23 at 10:07 +0100, Thomas Herve wrote:
> From the Heat side of things, that sounds like a big no-no to me.
> While we've done many things to cater to TripleO, this is way too
> specific of a use case. It doesn't even make sense for the general
> use
> case of passing user templates to Heat.

Thanks, I see.
I will try to find a workaround that avoids changing the heat project, if
possible.

Regards,
  Peng



Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-27 Thread Michael Johnson
I have a few comments on the updated Project Navigator.

 

1.  I hope this is mostly automated at this point?  The current content for 
Project Navigator is very out of date (Mitaka?) and folks have asked why 
projects are not listed there.
2.  What is the policy around the tags?  For octavia I see that standard 
deprecation isn’t listed there even though our neutron-lbaas repository does 
have the tag.  Granted, I need to update the octavia repository to also have 
the tag, but with projects that have multiple sub-projects, how is this listing 
determined?
3.  How is the project age determined?  I see that octavia shows one year, 
but it has been an active project since 2014.  2012 if you count neutron-lbaas 
(now part of octavia).  This could be confusing for folks that have attended 
summit sessions in the past or downloaded the packages previously.
4.  API version history is another item where I am curious how it is
calculated.  It seems confusing relative to actual project API
versions/microversions when it links to the releases page.  API version history 
is not a one-to-one relationship with project releases.
5.  The “About this project” seems to come from the developer 
documentation.  Is this something the PTL can update?
6.  Is there a way to highlight that a blank adoption is because a project 
was not included in the survey?  This can also be deceiving and lead someone to 
think that a project is unused.  (Looking at page 54 of the April survey from 
2016 I expect load balancing is widely used)
7.  Finally, from reading my above questions/comments, it would be nice to 
have a “PTL guide to project navigator”.

 

Thank you for updating this; folks have asked us why octavia was not listed.

 

Michael

 

 

From: Lauren Sell [mailto:lau...@openstack.org] 
Sent: Friday, March 24, 2017 9:58 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] Project Navigator Updates - Feedback Request

 

Hi everyone,

 

We’ve been talking for some time about updating the project navigator, and we 
have a draft ready to share for community feedback before we launch and 
publicize it. One of the big goals coming out of the joint TC/UC/Board meeting 
a few weeks ago[1] was to help better communicate ‘what is openstack?’ and this 
is one step in that direction.

A few goals in mind for the redesign:
- Represent all official, user-facing projects and deployment services in the 
navigator
- Better categorize the projects by function in a way that makes sense to 
prospective users (this may evolve over time as we work on mapping the 
OpenStack landscape)
- Help users understand which projects are mature and stable vs emerging
- Highlight popular project sets and sample configurations based on different 
use cases to help users get started

For a bit of context, we’re working to give each OpenStack official project a 
stronger platform as we think of OpenStack as a framework of composable 
infrastructure services that can be used individually or together as a powerful 
system. This includes the project mascots (so we in effect have logos to 
promote each component separately), updates to the project navigator, and 
bringing back the “project updates” track at the Summit to give each PTL/core 
team a chance to provide an update on their project roadmap (to be recorded and 
promoted in the project navigator among other places!). 

We want your feedback on the project navigator v2 before it launches. Please 
take a look at the current version on the staging site and provide feedback on 
this thread.

http://devbranch.openstack.org/software/project-navigator/

Please review the overall concept and the data and description for your project 
specifically. The data is primarily pulled from TC tags[2] and Ops tags[3]. 
You’ll notice some projects have more information available than others for 
various reasons. That’s one reason we decided to downplay the maturity metric 
for now and the data on some pages is hidden. If you think your project is 
missing data, please check out the repositories and submit changes or again 
respond to this thread.

Also know this will continue to evolve and we are open to feedback. As I 
mentioned, a team that formed at the joint strategy session a few weeks ago is 
tackling how we map OpenStack projects, which may be reflected in the 
categories. And I suspect we’ll continue to build out additional tags and 
better data sources to be incorporated.

Thanks for your feedback and help.

Best,
Lauren

[1] 
http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
[2] https://governance.openstack.org/tc/reference/tags/
[3] https://wiki.openstack.org/wiki/Operations/Tags

 


Re: [openstack-dev] [monasca] Grafana "app" for Monasca

2017-03-27 Thread Hochmuth, Roland M
Hi Steve, This is awesome. We are very interested in this work! I was just 
talking to Grafana Labs about this.

I should also mention that we are in the process of getting Keystone 
authentication built-in to Grafana so that we don't have to maintain a separate 
fork. I'm assuming that work will proceed, but it is contingent on a contract 
that I'm working through. Grafana Labs needed to be involved on the 
authentication plugin that is being added to Grafana.

How much work do you believe is remaining to complete this? I would also be 
very interested in reviewing this and helping out where I can on code. We could 
potentially create an upstream repo in the openstack org.

Regards --Roland




On 3/27/17, 4:40 PM, "Brandt, Ryan"  wrote:

>
>
>On 3/27/17, 10:01 AM, "Steve Simpson"  wrote:
>
>>Hi,
>>
>>We have been working on prototyping an "app" for Grafana which can be
>>used to view/configure alarm definitions, notifications and alarms.
>>This is still work-in-progress (insert normal disclaimer here), but is
>>usable enough to get a feel for what we would like to achieve. We
>>would like some feedback on whether this is something Monasca would be
>>interested in collaborating on or adopting upstream. If so, we may be
>>able to commit more development resource to get it polished.
>>
>>https://github.com/stackhpc/monasca-grafana-app
>>
>>In particular what spurred this was a discussion at the mid-cycle
>>around using Monasca outside an OpenStack environment or for
>>monitoring bare-metal. As it happens this aligns with our
>>requirements; in the environment we will be deploying in, we will
>>likely not be able to deploy the Horizon UI component.
>>
>>Cheers,
>>Steve
>>
>


[openstack-dev] Your next semi weekly gate status report

2017-03-27 Thread Clark Boylan
Hello,

Previously we saw that libvirt crashes, OOMs, and Tempest SSH banner
failures were a problem. The SSH banner failures have since been sorted
out; thank you to everyone who helped work that out. For details please
see https://review.openstack.org/#/c/439638/. There is also a fix for a
race that was causing ssh to fail in test_attach_detach_volume:
https://review.openstack.org/#/c/449661/ (this change is not merged yet,
so it would be great if tempest cores could get it in).

To address the OOMs we've also seen work to reduce the memory overhead
in running devstack. Changes to modify Apache's memory use have gone in:
https://review.openstack.org/#/c/446741/
https://review.openstack.org/#/c/445910/

We also tried putting MySQL on a diet, but that had to be reverted,
https://review.openstack.org/#/c/446196/.

There is also a memory_tracker logging service, whose output you'll now
find in your job logs. This can be useful for determining where memory
was used, which in turn can help reduce memory use:
https://review.openstack.org/#/c/434470/.

It is great to see people take an interest in addressing memory issues.
And we no longer see OOMkiller being a major problem according to
elastic-recheck. That said there is more that we can do here.
Outstanding changes that may help too include:
https://review.openstack.org/#/c/447119/
https://review.openstack.org/#/c/450207/

But we also really need individual projects to be looking at the memory
consumption of openstack itself and work on trimming as they are able.

Unfortunately the Libvirt crashes continue to be a problem.

Current top issues:

1. Libvirt crashes: http://status.openstack.org/elastic-recheck/#1643911
and http://status.openstack.org/elastic-recheck/#1646779

Libvirt is randomly crashing during the job, which causes things to fail
(for obvious reasons). Addressing this will likely require someone with
experience debugging libvirt, since it's most likely a bug isolated to
libvirt. We're looking for someone familiar with libvirt internals to
drive the effort to fix this issue.

2. Network packet loss in OSIC

This has caused connectivity errors to external services. Various e-r
bugs like http://status.openstack.org/elastic-recheck/index.html#1282876
http://status.openstack.org/elastic-recheck/index.html#1674681
http://status.openstack.org/elastic-recheck/index.html#1669162
http://status.openstack.org/elastic-recheck/index.html#1326813 all
appear to have tripped on this. We expect that the problem has been
corrected, but we should keep an eye on these and make sure they fall
off the e-r list.

Also our classification rate has taken a nose dive lately:

http://status.openstack.org/elastic-recheck/data/integrated_gate.html

Something that would help out is if people start classifying these
failures. While the overall failure rate is lower than in previous
weeks, a low classification rate means there are race conditions
(or other failures) we're not tracking yet, which will only make them more
difficult to fix. Normally if there is less than a 90% classification rate,
we've got at least one big persistent failure condition we're not aware of
yet.

Thank you,

mtreinish and clarkb



Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-27 Thread Lauren Sell
Hi Matt,

Thanks for the feedback. 

> On Mar 24, 2017, at 3:50 PM, Matt Riedemann  wrote:
> 
> Overall I like the groupings of the projects in the main page. When I drill 
> into Nova, a couple of things:
> 
> 1. The link for the install guide goes to the home page for docs.o.o rather
> than https://docs.openstack.org/project-install-guide/ocata/ - is that
> intentional?

Good point. We’ll point the link directly at the install guide, and also change
the wording in the project details to something along the lines of “Nova is
included in the install guide,” since it’s not currently linking directly to the
project-specific install guide.
> 
> 2. The "API Version History" section in the bottom right says:
> 
> "Version v2.1 (Ocata) - LATEST RELEASE"
> 
> And links to https://releases.openstack.org/. The latest compute microversion
> in Ocata was actually 2.42:
>
> https://docs.openstack.org/developer/nova/api_microversion_history.html
> 
> I'm wondering how we can better sort that out. I guess "API Version History" 
> in the navigator is meant more for major versions and wasn't intended to 
> handle microversions? That seems like something that should be dealt with at 
> some point as more and more projects are moving to using micro versions.

Agreed, we could use some guidance here. From what we can tell, each team logs 
these a little bit differently, so there’s no easy way for us to pull them. 
Could we output the correct link as a tag for each project, or does anyone have 
a recommendation?

Thanks!

> -- 
> 
> Thanks,
> 
> Matt



[openstack-dev] [TripleO] How to Preview the Overcloud Stack?

2017-03-27 Thread Dan Sneddon
I've been trying to figure out a workflow for previewing the results of
importing custom templates in an overcloud deployment (without actually
deploying). For instance, I am overriding some parameters using custom
templates, and I want to make sure those parameters will be expressed
correctly when I deploy.

I know about "heat stack-preview", but between the complexity of the
overcloud stack and the jinja2 template processing, I can't figure out a
way to preview the entire overcloud stack.

Is this possible? If not, any hints on what it would take to write a
script that would accomplish this?
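Short of a full stack preview, one thing that can be checked offline is which parameter values win after all the environment files are applied, since heat merges them left to right (later files override earlier ones). Below is a minimal, self-contained sketch of that merge; the parameter names are invented examples, and this only covers parameter_defaults, not the full environment semantics (parameters, resource_registry, jinja2 rendering):

```python
def merged_parameters(environments):
    """Return the effective parameter_defaults after applying the
    environments in order; later entries override earlier ones, the same
    precedence heat gives successive -e arguments."""
    merged = {}
    for env in environments:
        merged.update(env.get("parameter_defaults", {}))
    return merged

# The dicts stand in for parsed YAML environment files.
base = {"parameter_defaults": {"NeutronGlobalPhysnetMtu": 1500,
                               "NtpServer": "pool.ntp.org"}}
custom = {"parameter_defaults": {"NeutronGlobalPhysnetMtu": 9000}}

print(merged_parameters([base, custom]))
# {'NeutronGlobalPhysnetMtu': 9000, 'NtpServer': 'pool.ntp.org'}
```

This answers "will my custom template override this parameter?" for simple cases, but it is no substitute for an actual preview of the rendered stack.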

-- 
Dan Sneddon |  Senior Principal Software Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



Re: [openstack-dev] [os-upstream-institute] Meeting reminder

2017-03-27 Thread Ildiko Vancsa
Hi,

A friendly reminder that the OpenStack Upstream Institute meeting starts in a 
bit less than an hour on #openstack-meeting-3.

See you there! :)

Thanks and Best Regards,
Ildikó
IRC: ildikov


> On 2017. Mar 20., at 21:03, Ildiko Vancsa  wrote:
> 
> Hi All,
> 
> Quick reminder, we have our first meeting now! :)
> 
> Thanks,
> Ildikó
> 
> 
>> On 2017. Mar 19., at 14:14, Ildiko Vancsa wrote:
>> 
>> Hi All,
>> 
>> Based on the results of the Doodle poll I sent out earlier, the most popular
>> slot for the meeting is __Mondays, 2000 UTC__.
>> 
>> In order to get progress with the training preparation for Boston we will 
>> hold our first meeting on __March 20, at 2000 UTC__. The meeting channel is 
>> __#openstack-meeting-3__. You can find and extend the agenda on the meetings 
>> etherpad [2].
>> 
>> I uploaded a patch for review [1] to register the meeting slot as a 
>> permanent meeting on this channel.
>> 
>> For those of you for whom this slot unfortunately does not work, we will
>> look into alternatives to keep you involved and up to date.
>> 
>> Please let me know if you have any questions or comments.
>> 
>> Thanks and Best Regards,
>> Ildikó
>> IRC: ildikov
>> 
>> 
>> [1] https://review.openstack.org/447291
>> [2] https://etherpad.openstack.org/p/openstack-upstream-institute-meetings
> 



[openstack-dev] [ironic] this week's priorities and subteam reports

2017-03-27 Thread Yeleswarapu, Ramamani
Hi,

We are delighted to present this week's priorities and subteam report for 
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and 
formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. review rolling upgrades, start with https://review.openstack.org/#/c/407491/
1.1. this soft blocks patches bumping the RPC (and/or object) version as 
those fail multinode grenade; Update (vdrok): we need to make a change to the 
grenade job to only upgrade conductor
2. update/review next BFV patch: https://review.openstack.org/#/c/355625/
3. update/review next rescue patches: https://review.openstack.org/#/c/350831/ 
and https://review.openstack.org/#/c/353156/
4. review e-tags spec: https://review.openstack.org/#/c/381991/
5. next driver comp client patch: https://review.openstack.org/#/c/419274/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 20 Mar 2017 and 27 Mar 2017)
- Ironic: 239 bugs (+1) + 248 wishlist items (+3). 18 new (+3), 196 in progress 
(+6), 0 critical, 25 high (-3) and 28 incomplete (-1)
- Inspector: 17 bugs (+1) + 29 wishlist items (0). 4 new (0), 15 in progress 
(+1), 0 critical, 1 high and 4 incomplete
- Nova bugs with Ironic tag: 13 (-1). 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- patch to be reviewed: https://review.openstack.org/#/c/437549 MERGED
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- blocked by: https://review.openstack.org/#/c/440719/

Generic boot-from-volume (TheJulia, dtantsur)
-
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- Joanna has been taking on updating/rebasing patches. She is refactoring 
the unit tests, which is slowing down progress at this time.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/355625/ - ready to be reviewed
https://review.openstack.org/#/c/366197/ - Has feedback that needs 
to be addressed
https://review.openstack.org/#/c/406290
https://review.openstack.org/#/c/413324 - Has Feedback that needs 
to be addressed
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change - Needs Rebase
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- patches have been rebased and are available, but rloo wants to test so 
might be best to hold off on reviewing because there may be changes
- Testing work:
- 27-Mar-2017: Grenade multi-node is non-voting
- Examine stats next week and decide if it is ready to become a voting 
job.

Reference architecture guide (jroll)

- no progress this week

Python 3.5 compatibility (JayF, hurricanerix)
-
- no updates

Deploying with Apache and WSGI in CI (vsaienk0)
---
- seems like we can deploy with WSGI, but it still uses a fixed port, instead 
of sub-path
- next one is https://review.openstack.org/#/c/444337/

Driver composition (dtantsur, jroll)

* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- TODO as of 27 Mar 2017
- install guide / admin guide docs
- client changes:
- driver commands update: https://review.openstack.org/419274
- node-update update: https://review.openstack.org/#/c/431542/
- new hardware types:
- ilo: https://review.openstack.org/#/c/439404/
- contentious topics:
- what to do about driver properties API and dynamic drivers?
- rloo and dtantsur started brainstorming: 
https://etherpad.openstack.org/p/ironic-driver-properties-reform

Feature parity between two CLIs (rloo, dtantsur)

- OSC driver-properties spec is work in progress: 
https://review.openstack.org/#/c/439907/
- we don't have an API to show driver properties for dynamic drivers (we show 
hardware type + default interfaces): 
https://bugs.launchpad.net/ironic/+bug/1671549. This should 

[openstack-dev] [Networking-vSphere]

2017-03-27 Thread Carlos Cesario
Hello team, 

Could someone confirm whether the current 
https://github.com/openstack/networking-vsphere code supports the VSS switch? It 
seems that the current code only makes reference to the DVS switch. 

Thanks in advance! 


best regards, 

Carlos 


[openstack-dev] Boston Forum Reminder

2017-03-27 Thread Melvin Hillsman
Hey everyone,

This is a friendly reminder that all proposed Forum session leaders must
submit their abstracts at:

http://forumtopics.openstack.org/

*before 11:59PM UTC on Sunday April 2nd!*

Regards,

TC/UC


Re: [openstack-dev] [mogan][valence] Valence integration

2017-03-27 Thread Yang, Lin A
Hi Zhenguo,

The spec looks perfect to me, thanks a lot for doing that.

The python-valenceclient is a high priority for the valence Pike release, and is 
still in progress right now. We plan to release the python binding library first. 
For now, you will need a simple wrapper if you start coding right away.

Regards,
Lin.

From: Zhenguo Niu [mailto:niu.zgli...@gmail.com]
Sent: Friday, March 24, 2017 7:23 PM
To: Yang, Lin A 
Cc: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [mogan][valence] Valence integration

Thanks Yang, Lin for the explanation!

The Valence flavor shows a good example for baremetal instances, we do need 
such a flavor :D

I will draft a spec for this, you can help to review later. Another question is 
whether the Valence client is ready to use or we need to wrap the REST API 
ourselves?



On Sat, Mar 25, 2017 at 9:37 AM, Yang, Lin A <lin.a.y...@intel.com> wrote:
Hi Zhenguo,

Please check out the latest valence api spec[0]; currently it supports two ways 
to specify the arguments when composing a node via the valence api.
1. Specify a flavor for composition: specify the flavor uuid in the 'flavor_id' 
field ({'flavor_id': flavor_uuid}) besides the name and description fields. An 
example request body is shown below.
  {'name': 'new_node',
   'description': 'test composition',
   'flavor_id': 'fake_uuid'}

2. Specify every hardware detail, like cpu, memory, local/remote drive, nic, 
in the 'properties' field.
  {'name': 'new_node',
   'description': 'test composition',
   'properties': {'processor': {'total_cores': 8,
                                'model': 'fake_model'},
                  'memory': {'capacity_mib': 4096,
                             'type': 'DDR3'}}}
We will update the user document to list all available parameters for node 
composition soon.

[0] https://github.com/openstack/valence/blob/0db8a8e186e25ded2b17460f5ae2ce9abf576851/api-ref/source/valence-api-v1-nodes.inc

Thanks,
Lin.
From: Zhenguo Niu [mailto:niu.zgli...@gmail.com]
Sent: Tuesday, March 21, 2017 4:20 AM
To: OpenStack Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [mogan][valence] Valence integration

hi guys,

Here is a spec about Mogan and Valence integration[1], but before this happens, 
I would like to know what information is needed when requesting to compose a node 
through Valence. From the API doc[2], I can only find the name and description 
parameters, but that seems incorrect; I suppose it should at least 
include cpus, ram, and disk, or maybe cpuinfo. We need to align on this before 
introducing a new flavor for both RSD nodes and generic nodes.


[1] https://review.openstack.org/#/c/441790/
[2] 
https://github.com/openstack/valence/blob/master/api-ref/source/valence-api-v1-nodes.inc#request

--
Best Regards,
Zhenguo Niu



--
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-27 Thread Adrian Otto
> On Mar 22, 2017, at 5:48 AM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> One simplification would be:
> openstack coe create/list/show/config/update
> openstack coe template create/list/show/update
> openstack coe ca show/sign

I like Ricardo’s suggestion above. I think we should decide between the option 
above (Option 1), and this one (Option 2):

openstack coe cluster create/list/show/config/update
openstack coe cluster template create/list/show/update
openstack coe ca show/sign

Both options are clearly unique to magnum, and are unlikely to cause any future 
collisions with other projects. If you have a preference, please express it so 
we can consider your input and proceed with the implementation. I have a slight 
preference for Option 2 because it more closely reflects how I think about what 
the commands do, and follows the noun/verb pattern correctly. Please share your 
feedback.

Thanks,

Adrian

> This covers all the required commands and is a bit less verbose. The
> cluster word is too generic and probably adds no useful info.
> 
> Whatever it is, kerberos support for the magnum client is very much
> needed and welcome! :)
> 
> Cheers,
>  Ricardo
> 
> On Tue, Mar 21, 2017 at 2:54 PM, Spyros Trigazis  wrote:
>> IMO, coe is a little confusing. It is a term used by people related somehow
>> to the magnum community. When I describe to users how to use magnum,
>> I spent a few moments explaining what we call coe.
>> 
>> I prefer one of the following:
>> * openstack magnum cluster create|delete|...
>> * openstack mcluster create|delete|...
>> * both the above
>> 
>> It is very intuitive for users because, they will be using an openstack
>> cloud
>> and they will be wanting to use the magnum service. So, it only makes sense
>> to type openstack magnum cluster or mcluster, which is shorter.
>> 
>> 
>> On 21 March 2017 at 02:24, Qiming Teng  wrote:
>>> 
>>> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
 On 03/20/2017 03:08 PM, Adrian Otto wrote:
> Team,
> 
> Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> 
 
>> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally, we’d
> like to have the name “cluster” but that word is already in use by Senlin.
 
 Unfortunately, the Senlin API uses a whole bunch of generic terms as
 top-level REST resources, including "cluster", "event", "action",
 "profile", "policy", and "node". :( I've warned before that use of
 these generic terms in OpenStack APIs without a central group
 responsible for curating the API would lead to problems like this.
 This is why, IMHO, we need the API working group to be ultimately
 responsible for preventing this type of thing from happening.
 Otherwise, there ends up being a whole bunch of duplication and same
 terms being used for entirely different things.
 
>>> 
>>> Well, I believe the name and namespaces used by Senlin is very clean.
>>> Please see the following outputs. All commands are contained in the
>>> cluster namespace to avoid any conflicts with any other projects.
>>> 
>>> On the other hand, is there any document stating that Magnum is about
>>> providing clustering service? Why Magnum cares so much about the top
>>> level noun if it is not its business?
>> 
>> 
>> From magnum's wiki page [1]:
>> "Magnum uses Heat to orchestrate an OS image which contains Docker
>> and Kubernetes and runs that image in either virtual machines or bare
>> metal in a cluster configuration."
>> 
>> Many services may offer clusters indirectly. Clusters is NOT magnum's focus,
>> but we can't refer to a collection of virtual machines or physical servers
>> with
>> another name. Bay proved to be confusing to users. I don't think that magnum
>> should reserve the cluster noun, even if it was available.
>> 
>> [1] https://wiki.openstack.org/wiki/Magnum
>> 
>>> 
>>> 
>>> 
>>> $ openstack --help | grep cluster
>>> 
>>>  --os-clustering-api-version 
>>> 
>>>  cluster action list  List actions.
>>>  cluster action show  Show detailed info about the specified action.
>>>  cluster build info  Retrieve build information.
>>>  cluster check  Check the cluster(s).
>>>  cluster collect  Collect attributes across a cluster.
>>>  cluster create  Create the cluster.
>>>  cluster delete  Delete the cluster(s).
>>>  cluster event list  List events.
>>>  cluster event show  Describe the event.
>>>  cluster expand  Scale out a cluster by the specified number of nodes.
>>>  cluster list   List the user's clusters.
>>>  cluster members add  Add specified nodes to cluster.
>>>  cluster members del  Delete specified nodes from cluster.
>>>  cluster members list  List nodes from cluster.
>>>  cluster members replace  Replace the

[openstack-dev] [openstack-docs] [dev] What's up, doc?

2017-03-27 Thread Alexandra Settle
Team team team team team,

Well the last month has just FLOWN by since the PTG. We've got plenty going on 
in the doc team...

This week I have been helping out the security team with the Security Guide. 
We've been working on some cursory edits, and removal of content. A few patches 
have already made it through - thanks to the OSIC security team for tackling 
some of the outstanding bugs. There'll be more edits coming from me in the next 
few weeks. To see our planning: https://etherpad.openstack.org/p/sec-guide-pike
I am also in the process of drafting a governance tag for our install guides. 
Would be great for everyone to review and understand what the process will 
involve: https://review.openstack.org/#/c/445536/

Shoutout and big thanks to Brian Moss and the nova team who worked together 
tirelessly to document Nova v2 Cells and Placement API - which was a massive 
blocker for our Installation Guide.

Also, thank you to our Ocata release managers, Maria Zlatkova and Brian Moss 
for cutting the branch! Pike is well and truly underway now.

Ianeta Hutchinson has done an awesome job for the last two weeks in keeping our 
bug list under control. We are down to an amazing 104 bugs in queue, and 59 
bugs closed this cycle already! Next week, we have Lana who will be looking 
after the bug triage liaison role! If you're sitting there thinking "bugs are 
for me, I really love triaging bugs!" well, you're in luck! We have a few spots 
open for the rest of the cycle: 
https://wiki.openstack.org/wiki/Documentation/SpecialityTeams#Bug_Triage_Team

== The Road to the Summit in Boston ==

* Schedule has been released: 
https://www.openstack.org/summit/boston-2017/summit-schedule/
* Docs and I18n have a project onboarding room at the summit, keep an eye out 
on the dev ML for more information. Kendall will inform us when the time comes. 
Anyone around to help me with that? 
http://lists.openstack.org/pipermail/openstack-dev/2017-March/114149.html
* Docs project update will be delivered by me (asettle) on Mon 8, 
3:40pm-4:20pm. 
https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=Alexandra+Settle

== Specialty Team Reports ==

* API - Anne Gentle: There's still a lot of discussion on 
https://review.openstack.org/#/c/421846/ which is about API change guidelines. 
Take a look and join in on the review. Also on the openstack-dev list, there's 
a thread about the future of the app catalog, which is relevant to the app 
developer audience so I include it here: 
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113362.html
 Also related to the app dev audience is the wrapping up of the App Ecosystem 
working group: 
http://lists.openstack.org/pipermail/user-committee/2017-March/001825.html

* Configuration Reference and CLI Reference - Tomoyuki Kato: N/A

* High Availability Guide - Ianeta Hutchinson: At the Atlanta PTG, the 
documentation team outlined a new table of contents that is now upstream as a 
draft here: 
https://github.com/openstack/openstack-manuals/tree/master/doc/ha-guide-draft. 
A blocker to progress in the past had been a lack of SME’s for the topic of 
high availability but that is no longer the case \o/. The OSIC DevOps team has 
an “adopt-a-guide” project in which they are collaborating with the OpenStack 
docs community and OSIC Docs team to apply the new ToC and validate all content 
for the guide. The progress of this collaboration is being tracked here 
https://docs.google.com/spreadsheets/d/1hw4axU2IbLlsjKpz9_EGlKQt0S6siViik7ETjNg_MgI/edit?usp=sharing
 We are calling for more contributors both as SME's and tech writers. Ping 
iphutch if interested!

* Hypervisor Tuning Guide - Blair Bethwaite: N/A

* Installation guides - Lana Brindley: Cells bug is closer to being fixed, and 
we are closer to a complete test install: 
https://wiki.openstack.org/wiki/Documentation/OcataDocTesting (look at all that 
green!). We're planning to branch Ocata by the end of this week.

* Networking Guide - John Davidge: N/A

* Operations and Architecture Design guides - Darren Chan: Arch Design Guide: 
Minor IA and general cleanup of the storage, compute, and networking sections 
in the Design chapter. Currently updating gaps in storage design content. Ops 
Guide: Removed cloud architecture content (migrated to the Arch Design Guide).

* Security Guide - Nathaniel Dillon: Edits from Alex going through, and patches 
from the OSIC DevOps team. See above for more info.

* Training Guides - Matjaz Pancur: For Training guides, related topics: a new 
brand for activities around OpenStack Upstream University/Training. It is now 
known as OpenStack Upstream Institute 
(https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute)

* Training labs - Roger Luethi: We are currently testing our automated version 
of the Ocata install-guide. We had a problem with Ubuntu's new ISO image 
(16.04.2 LTS) which is now resolved.

Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-27 Thread Kendall Nelson
Hello :)

At this point we have a full list, but if you are interested in a lunch
slot I can put Zaqar down for one of those unless some other project is
willing to share their space/time?

Thanks for the interest!

-Kendall Nelson(diablo_rojo)

On Tue, Mar 21, 2017 at 4:50 PM Fei Long Wang 
wrote:

> As far as I know, most of Zaqar team members won't be in Boston. But I
> will be there, so pls help put Zaqar on the list if there is one available.
> Thanks.
>
> On 16/03/17 07:20, Kendall Nelson wrote:
>
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! This idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have a very limited slots available for interested projects so it will
> be a first come first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
>
>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64 4-803 2246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>


[openstack-dev] [monasca] Grafana "app" for Monasca

2017-03-27 Thread Steve Simpson
Hi,

We have been working on prototyping an "app" for Grafana which can be
used to view/configure alarm definitions, notifications and alarms.
This is still work-in-progress (insert normal disclaimer here), but is
usable enough to get a feel for what we would like to achieve. We
would like some feedback on whether this is something Monasca would be
interested in collaborating on or adopting upstream. If so, we may be
able to commit more development resource to get it polished.

https://github.com/stackhpc/monasca-grafana-app

In particular what spurred this was a discussion at the mid-cycle
around using Monasca outside an OpenStack environment or for
monitoring bare-metal. As it happens this aligns with our
requirements; in the environment we will be deploying in, we will
likely not be able to deploy the Horizon UI component.

Cheers,
Steve



Re: [openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-27 Thread Matt Riedemann

On 3/27/2017 7:23 AM, Rui Chen wrote:

Hi:

A question about the nova AddFixedIp API. The nova api-ref[1] describes the
API as "Adds a fixed IP address to a server instance, which associates
that address with the server." The argument of the API is a network id, so if
there are two or more subnets in a network, which one ends up having an
IP address associated with the instance? And is the API behavior always
consistent? I'm not sure.
The latest code[2] gets all of the instance's ports and the subnets of
the specified network, then loops over them, but it returns when the first
update_port succeeds, so the API behavior depends on the order of the subnet
and port lists returned by the neutron API. I have no idea what
scenario we should use the API in, or about the original design; does anyone
know?

[1]: 
https://developer.openstack.org/api-ref/compute/#add-associate-fixed-ip-addfixedip-action
[2]: 
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1366
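To make the ordering issue concrete, here is a simplified, hypothetical model of the loop in [2] (not the actual nova code): it walks the instance's ports and the network's subnets and stops at the first successful port update, so the chosen subnet is whichever Neutron happens to list first.

```python
def add_fixed_ip_to_instance(ports, subnets, update_port):
    """Simplified model of the neutronv2 add_fixed_ip loop.

    ports/subnets are lists as Neutron would return them; update_port is
    a callable that may raise on failure. The first successful update
    wins, so the subnet that gets the new IP depends on list order.
    """
    for port in ports:
        for subnet in subnets:
            fixed_ips = port["fixed_ips"] + [{"subnet_id": subnet["id"]}]
            try:
                update_port(port["id"], {"fixed_ips": fixed_ips})
                return subnet["id"]  # first success ends the loop
            except Exception:
                continue
    raise RuntimeError("no subnet could be attached")


# Two subnets on the same network: whichever is listed first wins.
ports = [{"id": "port-1", "fixed_ips": [{"subnet_id": "subnet-a"}]}]
subnets = [{"id": "subnet-a"}, {"id": "subnet-b"}]
chosen = add_fixed_ip_to_instance(ports, subnets, lambda pid, body: None)
# chosen == "subnet-a" here, purely because of list order
```

Reversing the subnet list flips the result, which is exactly the consistency concern raised above.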





I wondered about this API implementation myself awhile ago, see this bug 
report for details:


https://bugs.launchpad.net/nova/+bug/1430512

There was a related change for this from garyk:

https://review.openstack.org/#/c/163864/

But that was abandoned.

I'm honestly not really sure what the direction is here. From what I 
remember when I reported that bug, this was basically a feature-parity 
implementation in the compute API for the multinic API with 
nova-network. However, I'm not sure it's very usable. There is a Tempest 
test for this API, but I think all it does is attach an interface and 
make sure that does not blow up, it does not try to use the interface to 
ssh into the guest, for example.


--

Thanks,

Matt



Re: [openstack-dev] [nova] remove-mox-pike blueprint

2017-03-27 Thread Matt Riedemann

On 3/27/2017 9:11 AM, John Garbutt wrote:

Hi,

I added some notes on the blueprint:
https://blueprints.launchpad.net/nova/+spec/remove-mox-pike

I have seen quite a few patches trying to remove the use of
"self.stub_out". While possibly interesting in the future, I think
this should be out of scope for the mox removal blueprint. The aim of
that method is to help us easily stop calling the mox related
"self.stubs.Set" in a way that is really easy to review (and hard to
get wrong).


stub_out is perfectly fine to use, it uses fixtures rather than mox, so 
you are correct here.


--

Thanks,

Matt



[openstack-dev] [tripleo] CI Squad Meeting Summary (week 12)

2017-03-27 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

== Gating & CDN issues ==

Last week was a rough one for the TripleO gate jobs. We fixed a couple 
of issues in the oooq gates handling the stable branches. This was 
mainly a workaround[1] from tripleo-ci for building the gated packages 
that was missing from quickstart.


We also had quite a lot of issues with gate jobs not being able to 
download packages[2]. Figuring out how to deal with that issue is still 
under way. There were quite a lot more small fixes to help fix the gate 
instability[3].


== Timestamps ==

We also added timestamps to all the quickstart deployment logs, so now 
it is easy to link directly to a timestamp in any of the logs 
(example[4]). It has per-second resolution and only depends on awk 
being present on the systems running the commands.


== Logs, postci.txt ==

Until now the postci.txt file was a bit hidden, we now copy it out under 
logs/postci.txt.gz in every oooq gate job. We're also working on making 
a README style page for the logs that could help guide newcomers 
debugging common errors and finding the relevant logs files.


Let us know if you have further suggestions for improving the log 
browsing, or if you're missing some vital logs.


Some smaller discussion items:

* due to the critical patch for OVB[5] not merging last week, we're 
going to push out the transition of the next batch of jobs to at least 
next Monday (3rd of April).


* the periodic pipeline is still not running often enough. We will 
probably move 3 OVB jobs to run every 8 hours as a start to increase the 
cadence


* We're probably going to move to the "CI Squad" Trello board[6] from 
the current RDO board that we're sharing with other team(s).


Best regards,
Attila

[1] https://review.openstack.org/447530
[2] https://bugs.launchpad.net/tripleo/+bug/1674681
[3] https://review.openstack.org/#/q/topic:tripleo/outstanding
[4] 
http://logs.openstack.org/75/446075/8/check/gate-tripleo-ci-centos-7-nonha-multinode-oooq/cb1f563/logs/undercloud/home/jenkins/install_packages.sh.log.txt.gz#_2017-03-24_21_30_20

[5] https://review.openstack.org/431567
[6] https://trello.com/b/U1ITy0cu/tripleo-ci-squad



Re: [openstack-dev] [Cinder] Tags for volumes

2017-03-27 Thread Duncan Thomas
On 27 March 2017 at 14:20, 王玺源  wrote:

> I think the reason is quite simple:
> 1. Some users don't want to use key/value pairs to tag volumes. They just
> need some simple strings.
>

...and some do. We can hide this in the client and just save tags under a
metadata item called 'tags', with no API changes needed on the cinder side
and backwards compatibility in the client.
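As a sketch of what that client-side hiding could look like (the helper names here are hypothetical, not the real cinderclient API), tags could be packed into a single metadata value:

```python
def set_tags(metadata, tags):
    """Store a list of tags in a single metadata item called 'tags'.

    Cinder metadata values are limited to 255 characters, so the
    packed string is validated before it would be sent to the API.
    """
    packed = ",".join(sorted(set(tags)))
    if len(packed) > 255:
        raise ValueError("tags exceed the 255-character metadata limit")
    metadata = dict(metadata)  # copy; don't mutate the caller's dict
    metadata["tags"] = packed
    return metadata


def get_tags(metadata):
    """Unpack the tag list, tolerating volumes with no tags set."""
    packed = metadata.get("tags", "")
    return [t for t in packed.split(",") if t]


meta = set_tags({"owner": "team-a"}, ["billing", "audit"])
assert get_tags(meta) == ["audit", "billing"]
```

Filtering by tag then becomes an ordinary metadata query on the client side, which is the point being made: no new server-side API is required.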


> 2. Metadata must be shorter than 255. If users don't need keys, use tag
> here can save some spaces.
>

How many / long tags are you considering supporting?


> 3. Easy for quick searching or filtering. Users don't need to know what's the
> key related to the value.
>

The client can hide all this, so it is not really a justification


> 4. For Web App, it should be a basic function[1]
>

Web standards are not really standards. You can find a million things that
apps 'should' do. They're usually contradictory.



>
> [1]https://en.m.wikipedia.org/wiki/Tag_(metadata)
>
>
> 2017-03-27 19:49 GMT+08:00 Sean McGinnis :
>
>> On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
>> > Hi cinder team:
>> >
>> > I want to know what's your thought about adding tags for volumes.
>> >
>> > Now Many resources, like Nova instances, Glance images, Neutron
>> > networks and so on, all support tagging. And some of our cloud customers
>> > want this feature in Cinder as well. It's useful for auditing, billing
>> for
>> > could admin, it can let admin and users filter resources by tag, it can
>> let
>> > users categorize resources for different usage or just remark on something.
>> something.
>> >
>> > Actually there is a related spec in Cinder 2 years ago, but
>> > unfortunately it was not accepted and was abandoned :
>> > https://review.openstack.org/#/c/99305/
>> >
>> > Can we bring it up and revisit it a second time now? What's cinder
>> > team's idea?  Can you give me some advice that if we can do it or not?
>>
>> Can you give any reason why the existing metadata mechanism does not or
>> will
>> not work for them? There was some discussion in that spec explaining why
>> it
>> was rejected at the time. I don't think anything has changed since then
>> that
>> would change what was said there.
>>
>> >
>> >
>> > Thanks!
>> >
>> > Wangxiyuan
>>
>>
>
>
>
>


-- 
-- 
Duncan Thomas


[openstack-dev] [puppet] Meeting Mar 28, 2017 @1500 UTC

2017-03-27 Thread Alex Schultz
Hey puppet folks,

Just a reminder we have a meeting schedule tomorrow. The agenda[0] is
currently empty. If you have something you would like to discuss,
please add it to the list. If the agenda is empty at meeting time we
will cancel the meeting for this week.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170328



Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-27 Thread Alex Schultz
On Mon, Mar 27, 2017 at 6:47 AM, Dan Prince  wrote:
> On Mon, 2017-03-27 at 13:49 +0200, Flavio Percoco wrote:
>> On 24/03/17 17:16 -0400, Dan Prince wrote:
>> > On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:
>> > > Hey folks,
>> > >
>> > > So after looking at the backlog of patches to review across all
>> > > of
>> > > the
>> > > tripleo projects, I noticed we have a bunch of really old stale
>> > > patches. I think it's time we address when we can abandon these
>> > > stale
>> > > patches.
>> > >
>> > > Please comment on the proposed policy[0].  I know this has
>> > > previously
>> > > been brought up [1] but I would like to formalize the policy so
>> > > we
>> > > can
>> > > reduce the backlog of stale patches.  If you're wondering what
>> > > would
>> > > be abandoned by this policy as it currently sits, I have a gerrit
>> > > dashboard for you[2] (it excludes diskimage-builder) .
>> >
>> > I think it is fine to periodically review patches and abandon them
>> > if
>> > need be. Last time this came up I wasn't a fan of auto-abandoning
>> > though. Rather I just made a pass manually and did it in fairly
>> > short
>> > order. The reason I like the manual approach is a lot of ideas
>> > could
>> > get lost (or silently ignored) if nobody acts on them manually.
>> >
>> > Rather than try to automate this, would it serve us better to add a
>> > link
>> > to your Gerrit query in [2] below to highlight these patches and
>> > quickly go through them.
>>
>> I used to do this in Glance. I had 2 scripts that ran every week. The
>> first one
>> would select the patches to abandon and comment on them saying that
>> the patches
>> would be abandoned in a week. The second script abandoned the patches
>> that had
>> been flagged to be abandoned that were not updated in a week.
>
> I don't think a week is enough time to react in all cases though. There
> could be a really good idea that comes in, gets flagged as abandoned
> and then nobody thinks about it again because it got abandoned.
>

So this is a different problem as it relates to the proposed patches.
People shouldn't be letting their patches go this stale. It'd be one
thing if we were doing it on a weekly basis, but the problem is that
the patches would already be 80 days stale (assuming we automated a 10
day warning) and then would get abandoned after 90 days of inactivity.
Most of the patches that fall under this policy are already so out of
date the question becomes how much are we really saving by letting
them stick around? Many times trying to rebase something this out of
date can be more of an effort than fixing it again.  Who knows maybe
it was already incidentally fixed by other patches of 90 days.
Additionally if someone knows they want to come back to it, they
should mark it -1 WIP rather than just letting it sit idle. That would
give them 180 days to get back to it.  The authors need to take
responsibility for their patches and not just throw things out there
and walk away.

> There is sometimes a fine line between automation that helps humans do
> their job better... and automation that goes too far. I don't think
> TripleO or Glance projects have enough patch volume that it would take
> the core team more than an hour to triage patches that need to be
> abandoned. We probably don't even need to do this weekly. Once a month,
> or once a quarter for that matter would probably be fine I think.
>

So are you signing up to triage every week?  The problem with
rejecting automation is that it's adding work to an already overloaded
bunch of people who aren't able to keep up on current reviews let
alone including triaging stale reviews.  I was thinking about what you
said previously about patches getting lost. There's some truth to that
but it's also exaggerating what would happen.  Patches that reference
blueprints or bugs aren't lost because the abandon notice also gets
posted to launchpad. Meaning if someone hits the bug again, the patch
can be restored and reused.  If as a team we actually track our
patches correctly via bugs and blueprints nothing gets lost.  The
policy simply allows for the use of automation. No one is proposing
that we setup automation right now and I would agree that many of your
concerns are legitimate and would need to be addressed if we put
automation in.  I would just like to allow the use of it as part of
this process even if it's a script run by a person as opposed to an
automated job.

Thanks,
-Alex

> Dan
>
>>
>> It was easy to know what patches needed to be checked since these
>> script ran w/
>> its own user (Glance Bot). I believe this worked pretty well and the
>> Glance team
>> is now working on a better version of that bot.
>>
>> I'd share the scripts I used but they are broken and depend on
>> another broken
>> library but you get the idea/rule we used in Glance.
>>
>> Flavio
>>

[openstack-dev] [nova] remove-mox-pike blueprint

2017-03-27 Thread John Garbutt
Hi,

I added some notes on the blueprint:
https://blueprints.launchpad.net/nova/+spec/remove-mox-pike

I have seen quite a few patches trying to remove the use of
"self.stub_out". While possibly interesting in the future, I think
this should be out of scope for the mox removal blueprint. The aim of
that method is to help us easily stop calling the mox related
"self.stubs.Set" in a way that is really easy to review (and hard to
get wrong).

I think the current focus should be on emptying this list. I know we
have had quite a few patches up around related tests already:
https://github.com/openstack/nova/blob/master/tests-py3.txt

Just wanting to double check we are all agreed on the direction there.

Thanks,
johnthetubaguy



Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-27 Thread Flavio Percoco

On 27/03/17 08:47 -0400, Dan Prince wrote:

On Mon, 2017-03-27 at 13:49 +0200, Flavio Percoco wrote:

On 24/03/17 17:16 -0400, Dan Prince wrote:
> On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:
> > Hey folks,
> >
> > So after looking at the backlog of patches to review across all
> > of
> > the
> > tripleo projects, I noticed we have a bunch of really old stale
> > patches. I think it's time we address when we can abandon these
> > stale
> > patches.
> >
> > Please comment on the proposed policy[0].  I know this has
> > previously
> > been brought up [1] but I would like to formalize the policy so
> > we
> > can
> > reduce the backlog of stale patches.  If you're wondering what
> > would
> > be abandoned by this policy as it currently sits, I have a gerrit
> > dashboard for you[2] (it excludes diskimage-builder) .
>
> I think it is fine to periodically review patches and abandon them
> if
> need be. Last time this came up I wasn't a fan of auto-abandoning
> though. Rather I just made a pass manually and did it in fairly
> short
> order. The reason I like the manual approach is a lot of ideas
> could
> get lost (or silently ignored) if nobody acts on them manually.
>
> Rather than try to automate this, would it serve us better to add a
> link
> to your Gerrit query in [2] below to highlight these patches and
> quickly go through them.

I used to do this in Glance. I had 2 scripts that ran every week. The
first one
would select the patches to abandon and comment on them saying that
the patches
would be abandoned in a week. The second script abandoned the patches
that had
been flagged to be abandoned that were not updated in a week.


I don't think a week is enough time to react in all cases though. There
could be a really good idea that comes in, gets flagged as abandoned
and then nobody thinks about it again because it got abandoned.

There is sometimes a fine line between automation that helps humans do
their job better... and automation that goes too far. I don't think
TripleO or Glance projects have enough patch volume that it would take
the core team more than an hour to triage patches that need to be
abandoned. We probably don't even need to do this weekly. Once a month,
or once a quarter for that matter would probably be fine I think.


The Glance team did have a high volume of patches at the time and a week was
actually enough to request feedback. Glance bot wouldn't have abandoned the
patch if there was activity on it, even just a comment saying: "Don't abandon"

Running the script weekly worked well in Glance's case too.

Also, FWIW, my email is just to share what we did in Glance. I'm not suggesting
it'll work for TripleO.

Flavio

--
@flaper87
Flavio Percoco




[openstack-dev] [Cinder] Tags for volumes

2017-03-27 Thread 王玺源
I think the reasons are quite simple:
1. Some users don't want to use key/value pairs to tag volumes. They just
need simple strings.
2. Metadata entries must be shorter than 255 characters. If users don't
need keys, using tags here can save some space.
3. Tags make quick searching and filtering easy. Users don't need to know
which key a value is related to.
4. For a web app, tagging should be a basic function[1].

[1]https://en.m.wikipedia.org/wiki/Tag_(metadata)
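Point 3 is easy to illustrate: with plain tags a filter is a flat membership test, while with metadata the caller must also know the key. A rough sketch (illustrative data shapes only, not the actual Cinder API):

```python
def filter_by_tag(volumes, tag):
    # Tags: a flat membership test, no key needed.
    return [v for v in volumes if tag in v.get("tags", [])]

def filter_by_metadata(volumes, key, value):
    # Metadata: the caller must know the key as well as the value.
    return [v for v in volumes if v.get("metadata", {}).get(key) == value]

vols = [
    {"name": "db-data", "tags": ["billing", "prod"],
     "metadata": {"purpose": "billing"}},
    {"name": "scratch", "tags": ["dev"],
     "metadata": {"purpose": "test"}},
]
print([v["name"] for v in filter_by_tag(vols, "billing")])   # ['db-data']
print([v["name"] for v in filter_by_metadata(vols, "purpose", "billing")])
```

Both filters find the same volume here, but the tag query needs only the string the user remembers.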


2017-03-27 19:49 GMT+08:00 Sean McGinnis >:

> On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
> > Hi cinder team:
> >
> > I want to know what's your thought about adding tags for volumes.
> >
> > Now Many resources, like Nova instances, Glance images, Neutron
> > networks and so on, all support tagging. And some of our cloud customers
> > want this feature in Cinder as well. It's useful for auditing, billing
> for
> > could admin, it can let admin and users filter resources by tag, it can
> let
> > users categorize resources for different usage or just remarks something.
> >
> > Actually there is a related spec in Cinder 2 years ago, but
> > unfortunately it was not accepted and was abandoned :
> > https://review.openstack.org/#/c/99305/
> >
> > Can we bring it up and revisit it a second time now? What's cinder
> > team's idea?  Can you give me some advice that if we can do it or not?
>
> Can you give any reason why the existing metadata mechanism does not or
> will
> not work for them? There was some discussion in that spec explaining why it
> was rejected at the time. I don't think anything has changed since then
> that
> would change what was said there.
>
> >
> >
> > Thanks!
> >
> > Wangxiyuan
>


Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-27 Thread Dan Prince
On Mon, 2017-03-27 at 13:49 +0200, Flavio Percoco wrote:
> On 24/03/17 17:16 -0400, Dan Prince wrote:
> > On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:
> > > Hey folks,
> > > 
> > > So after looking at the backlog of patches to review across all
> > > of
> > > the
> > > tripleo projects, I noticed we have a bunch of really old stale
> > > patches. I think it's time we address when we can abandon these
> > > stale
> > > patches.
> > > 
> > > Please comment on the proposed policy[0].  I know this has
> > > previously
> > > been brought up [1] but I would like to formalize the policy so
> > > we
> > > can
> > > reduce the backlog of stale patches.  If you're wondering what
> > > would
> > > be abandoned by this policy as it currently sits, I have a gerrit
> > > dashboard for you[2] (it excludes diskimage-builder) .
> > 
> > I think it is fine to periodically review patches and abandon them
> > if
> > need be. Last time this came up I wasn't a fan of auto-abandoning
> > though. Rather I just made a pass manually and did it in fairly
> > short
> > order. The reason I like the manual approach is a lot of ideas
> > could
> > get lost (or silently ignored) if nobody acts on them manually.
> > 
> > Rather than trying to automate this, would it serve us better to add a
> > link
> > to your Gerrit query in [2] below to highlight these patches and
> > quickly go through them.
> 
> I used to do this in Glance. I had 2 scripts that ran every week. The
> first one
> would select the patches to abandon and comment on them saying that
> the patches
> would be abandoned in a week. The second script abandoned the patches
> that had
> been flagged to be abandoned that were not updated in a week.

I don't think a week is enough time to react in all cases though. There
could be a really good idea that comes in, gets flagged as abandoned
and then nobody thinks about it again because it got abandoned.

There is sometimes a fine line between automation that helps humans do
their job better... and automation that goes too far. I don't think
TripleO or Glance projects have enough patch volume that it would take
the core team more than an hour to triage patches that need to be
abandoned. We probably don't even need to do this weekly. Once a month,
or once a quarter for that matter would probably be fine I think.

Dan

> 
> It was easy to know what patches needed to be checked since these
> script ran w/
> its own user (Glance Bot). I believe this worked pretty well and the
> Glance team
> is now working on a better version of that bot.
> 
> I'd share the scripts I used but they are broken and depend on
> another broken
> library but you get the idea/rule we used in Glance.
> 
> Flavio
> 


[openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-27 Thread Rui Chen
Hi:

A question about the nova AddFixedIp API: the nova api-ref[1] describes
the API as "Adds a fixed IP address to a server instance, which associates
that address with the server." The argument of the API is a network id, so if
there are two or more subnets in a network, which one gets to associate an IP
address with the instance? And is the API behavior always consistent? I'm not
sure.
The latest code[2] gets all of the instance's ports and the subnets of the
specified network, then loops over them, but it returns as soon as the first
update_port succeeds, so the API behavior depends on the order of the subnet
and port lists returned by the neutron API. I have no idea in what scenario
we should use this API, or about the original design; does anyone know?

[1]:
https://developer.openstack.org/api-ref/compute/#add-associate-fixed-ip-addfixedip-action
[2]:
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L1366
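The loop Rui describes can be paraphrased like this (a loose sketch of the control flow, not the actual nova code): whichever (port, subnet) pair happens to come first in the lists neutron returns wins.

```python
def add_fixed_ip(ports, subnets, update_port):
    # Loose paraphrase of the nova loop: try every port of the instance
    # against every subnet of the requested network, stopping at the
    # first successful port update.
    for port in ports:
        for subnet in subnets:
            fixed_ips = port["fixed_ips"] + [{"subnet_id": subnet["id"]}]
            if update_port(port["id"], fixed_ips):
                # First success wins, so the chosen subnet depends
                # entirely on the order the lists were returned in.
                return subnet["id"]
    return None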


Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-27 Thread Flavio Percoco

On 23/03/17 16:24 +0100, Martin André wrote:

On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince  wrote:

On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:

On 22/03/17 13:32 +0100, Flavio Percoco wrote:
> On 21/03/17 23:15 -0400, Emilien Macchi wrote:
> > Hey,
> >
> > I've noticed that container jobs look pretty unstable lately; to
> > me,
> > it sounds like a timeout:
> > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-
> > ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-
> > 22_00_08_55_358973
>
> There are different hypothesis on what is going on here. Some
> patches have
> landed to improve the write performance on containers by using
> hostpath mounts
> but we think the real slowness is coming from the images download.
>
> This said, this is still under investigation and the containers
> squad will
> report back as soon as there are new findings.

Also, to be more precise, Martin André is looking into this. He also
fixed the
gate in the last 2 weeks.


I spoke w/ Martin on IRC. He seems to think this is the cause of some
of the failures:

http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-cen
tos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-
0/var/log/extra/docker/containers/heat_engine/log/heat/heat-
engine.log.txt.gz#_2017-03-21_20_26_29_697


Looks like Heat isn't able to create Nova instances in the overcloud
due to "Host 'overcloud-novacompute-0' is not mapped to any cell'. This
means our cells initialization code for containers may not be quite
right... or there is a race somewhere.


Here are some findings. I've looked at time measures from CI for
https://review.openstack.org/#/c/448533/ which provided the most
recent results:

* gate-tripleo-ci-centos-7-ovb-ha [1]
   undercloud install: 23
   overcloud deploy: 72
   total time: 125
* gate-tripleo-ci-centos-7-ovb-nonha [2]
   undercloud install: 25
   overcloud deploy: 48
   total time: 122
* gate-tripleo-ci-centos-7-ovb-updates [3]
   undercloud install: 24
   overcloud deploy: 57
   total time: 152
* gate-tripleo-ci-centos-7-ovb-containers-oooq-nv [4]
   undercloud install: 28
   overcloud deploy: 48
   total time: 165 (timeout)

Looking at the undercloud & overcloud install times, the most
time-consuming tasks, the containers job isn't doing that bad compared to
other OVB jobs. But looking closer I could see that:
- the containers job pulls docker images from dockerhub; this process
takes roughly 18 min.


I think we can optimize this a bit by having the script that populates the local
registry in the overcloud job run in parallel. The docker daemon can do
multiple pulls w/o problems.
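A minimal way to parallelize the pulls, assuming the image list is known up front (a sketch only; the image names and worker count are invented, and the runner is injectable so the flow can be exercised without a docker daemon):

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

IMAGES = [
    "tripleoupstream/centos-binary-heat-engine",   # hypothetical names
    "tripleoupstream/centos-binary-nova-compute",
]

def pull(image, runner=subprocess.call):
    # Each call shells out to `docker pull <image>` by default.
    return runner(["docker", "pull", image])

def pull_all(images, workers=4, runner=subprocess.call):
    # The docker daemon copes with concurrent pulls, so a small thread
    # pool is enough to overlap the network transfers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda img: pull(img, runner), images))
```

With ~18 min currently spent pulling serially, overlapping the transfers should recover a good share of that.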


- the overcloud validate task takes 10 min more than it should because
of the bug Dan mentioned (a fix is in the queue at
https://review.openstack.org/#/c/448575/)


+A


- the postci takes a long time with quickstart, 13 min (4 min alone
spent on docker log collection) whereas it takes only 3 min when using
tripleo.sh


mmh, does this have anything to do with ansible being in between? Or is that
time specifically for the part that gets the logs?



Adding up all these numbers, we're at about 40 min of additional time for
the oooq containers job, which is enough to cross the CI job limit.

There is certainly a lot of room for optimization here and there and
I'll explore how we can speed up the containers CI job over the next


Thanks a lot for the update. The time break down is fantastic,
Flavio


weeks.

Martin

[1] 
http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/d2c1b16/
[2] 
http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/d6df760/
[3] 
http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/3b1f795/
[4] 
http://logs.openstack.org/33/448533/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/b816f20/


Dan



Flavio





--
@flaper87
Flavio Percoco



Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-27 Thread Flavio Percoco

On 24/03/17 17:16 -0400, Dan Prince wrote:

On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:

Hey folks,

So after looking at the backlog of patches to review across all of
the
tripleo projects, I noticed we have a bunch of really old stale
patches. I think it's time we address when we can abandon these stale
patches.

Please comment on the proposed policy[0].  I know this has previously
been brought up [1] but I would like to formalize the policy so we
can
reduce the backlog of stale patches.  If you're wondering what would
be abandoned by this policy as it currently sits, I have a gerrit
dashboard for you[2] (it excludes diskimage-builder) .


I think it is fine to periodically review patches and abandon them if
need be. Last time this came up I wasn't a fan of auto-abandoning
though. Rather I just made a pass manually and did it in fairly short
order. The reason I like the manual approach is a lot of ideas could
get lost (or silently ignored) if nobody acts on them manually.

Rather than trying to automate this, would it serve us better to add a link
to your Gerrit query in [2] below to highlight these patches and
quickly go through them.


I used to do this in Glance. I had 2 scripts that ran every week. The first one
would select the patches to abandon and comment on them saying that the patches
would be abandoned in a week. The second script abandoned the patches that had
been flagged to be abandoned that were not updated in a week.

It was easy to know what patches needed to be checked since these script ran w/
its own user (Glance Bot). I believe this worked pretty well and the Glance team
is now working on a better version of that bot.

I'd share the scripts I used but they are broken and depend on another broken
library but you get the idea/rule we used in Glance.
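The two-pass rule described above boils down to two selection predicates. A rough reconstruction of the logic (not Flavio's actual scripts; the staleness threshold is invented for illustration, and the Gerrit plumbing is omitted):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)   # invented threshold: ~6 months idle
GRACE = timedelta(days=7)           # warning-to-abandon window from the mail

def to_warn(patches, now):
    # First weekly pass: idle, unflagged patches get a warning comment.
    return [p for p in patches
            if not p["flagged"] and now - p["updated"] > STALE_AFTER]

def to_abandon(patches, now):
    # Second pass: flagged patches with no activity during the grace
    # period get abandoned; any update (even a "don't abandon" comment)
    # refreshes the timestamp and spares the patch.
    return [p for p in patches
            if p["flagged"] and now - p["updated"] > GRACE]
```

Running both passes on a schedule under a dedicated bot account reproduces the Glance flow Flavio describes.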

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Cinder] Tags for volumes

2017-03-27 Thread Sean McGinnis
On Mon, Mar 27, 2017 at 03:13:59PM +0800, 王玺源 wrote:
> Hi cinder team:
> 
> I want to know what's your thought about adding tags for volumes.
> 
> Now Many resources, like Nova instances, Glance images, Neutron
> networks and so on, all support tagging. And some of our cloud customers
> want this feature in Cinder as well. It's useful for auditing, billing for
> cloud admin, it can let admin and users filter resources by tag, it can let
> users categorize resources for different usage or just remarks something.
> 
> Actually there is a related spec in Cinder 2 years ago, but
> unfortunately it was not accepted and was abandoned :
> https://review.openstack.org/#/c/99305/
> 
> Can we bring it up and revisit it a second time now? What's cinder
> team's idea?  Can you give me some advice that if we can do it or not?

Can you give any reason why the existing metadata mechanism does not or will
not work for them? There was some discussion in that spec explaining why it
was rejected at the time. I don't think anything has changed since then that
would change what was said there.

> 
> 
> Thanks!
> 
> Wangxiyuan



Re: [openstack-dev] [vitrage] Extending Topology

2017-03-27 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

Let me try and explain the more general use case.

You can query OVS for the switches information, and understand how they are 
mapped to one another. This is not enough for knowing the exact route of the 
network traffic for a certain VM.

A certain switch can be connected to more than one other switch. You can, as 
you said, query the network type (encapsulation) information from Neutron. But 
then you will additionally need to query the rules of the specific switch from 
OVS, in order to know which route is taken for each encapsulation type.

Another problematic use case is when the switches are not connected to each 
other. The traffic can be redirected by a network-stack software component, so 
you will additionally have to query that component in order to determine the route.

And on top of all this, we need to think how to best represent this information 
in Vitrage (i.e. how to draw the graph, which vertices to connect to one 
another, etc.).

IMO, this is all feasible and will give a lot of value to Vitrage. Just not 
easy to implement.

Best Regards,
Ifat.


On 22/03/2017, 08:50, "Muhammad Usman"  wrote:

Hello Ifat,

I tried to look more deeply into the issues you mentioned regarding
the extension of vSwitches. Due to the complexity involved in
generating this topology and its associated effects, I believe we need to
set up some baseline (e.g. adding a configuration file for specifying
bridges in the existing deployment setup). Then, using that baseline, the
topology can be constructed, and the type of network (e.g. vlan or
vxlan) and the associated path followed can be extracted from neutron.
However, I don't quite understand the more general case you mentioned. Do
you mean nova-network?

Regarding the sunburst representation: yes, I agree. If you want to
keep the compute hierarchy separate, then adding networking components
is not a good idea.

Also, suggestions from other vitrage members are welcome.


> On Thu, Mar 16, 2017 at 6:44 PM, Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com > wrote:
>
>> Hi,
>>
>>
>>
>> Adding switches to the Vitrage topology is generally a good idea, but the
>> implementation might be very complex. Your diagram shows a simple use
> case,
>> where all switches are linked to one another and it is easy to determine
>> the effect of a failure on the vms. However, in the more general case
> there
>> might be switches without a connecting port (e.g. they might be connected
>> via the network stack). In such cases, it is not clear how to model the
>> switches topology in Vitrage. Another case to consider is when the
>> network
>> type affects the packets path, like vlan vs. vxlan. If you have an idea
>> of
>> how to solve these issues, I will be happy to hear it.
>>
>>
>>
>> Regarding the sunburst representation – I’m not sure I understand your
>> diagram. Currently the sunburst is meant to show (only) the compute
>> hierarchy: zones, hosts and instances. It is arranged in a containment
>> relationship, i.e. every instance on the outer ring appears next to its
>> host in the inner ring. If you add the bridges in the middle, you lose
> this
>> containment relationship. Can you please explain to me the suggested
>> diagram?
>>
>>
>>
>> BTW, you can send such questions to OpenStack mailing list (
>> openstack-dev@lists.openstack.org ) with [vitrage] tag in
> the title, and
>> possibly get replies from other contributors as well.
>>
>>
>>
>> Best Regards,
>>
>> Ifat.
>>
>>
>>
>>
>>
>> *From: *Muhammad Usman >
>> *Date: *Monday, 13 March 2017 at 09:16
>>
>> *To: *"Afek, Ifat (Nokia - IL)" >
>> *Cc: *JungSu Han >
>> *Subject: *Re: OpenStack Vitrage
>>
>>
>>
>> Hi Ifat,
>>
>> I attached our idea of extending the Vitrage Topology to include Virtual
>> switches.
>>
>> The reason, I mentioned about adding switches part in Vitrage is because
>> we experienced looping issues that effect all infrastructure resources
>> (i.e. physical host as well as vm's) performance. Therefore, it's
> important
>> to monitor the virtual switches as well to assist overall monitoring/RCA
>> tasks.
>>
>> I think this idea will extend the Vitrage scope to touch some portion of
>> SDN (e.g. if we consider the SDN managed virtual switches) as well.
>>
>>
>>
>> On Thu, Mar 9, 2017 at 6:49 PM, Muhammad Usman  > wrote:
>>
>> Dear Ifat,
>>
>> Thanks for your guidance, I managed to install Vitrage properly using
>> Master branches for both OpenStack and Vitrage.
>>
>> Now, I will look into the visualization as well as other aspects.
>>
>>
>>
>>
>>
>> On Thu, Mar 9, 2017 at 2:43 PM, Afek, Ifat (Nokia - IL) <
>> ifat.a

Re: [openstack-dev] [neutron][nova] Config drive claims ipv6_dhcp, neutron api says slaac

2017-03-27 Thread Jens Rosenboom
2017-03-24 17:17 GMT+00:00 Clark Boylan :
> On Fri, Mar 24, 2017, at 05:24 AM, Jens Rosenboom wrote:
>> 2017-03-24 9:48 GMT+00:00 Jens Rosenboom :
>> > 2017-03-24 9:30 GMT+00:00 Simon Leinen :
>> >> Clark Boylan writes:
>> >> [...]
>> >>> {
>> >>> "id": "network1",
>> >>> "link": "tap14b906ba-8c",
>> >>> "network_id": "7ee08c00-7323-4f18-94bb-67e081520e70",
>> >>> "type": "ipv6_dhcp"
>> >>> }
>> >>> ],
>> >>> "services": []
>> >>> }
>> >>
>> >>> You'll notice that the network1 entry has a type of ipv6_dhcp; however,
>> >>> if you ask the neutron api it tells slaac is the ipv6_address_mode and
>> >>> ipv6_ra_mode. But enable_dhcp is also set to True. So which is it? Is
>> >>> there a bug here or am I missing something obvious? At the very least it
>> >>> appears that the config drive info is incomplete and does not include
>> >>> the slaac info.
>> >
>> > Two small notes:
>> >
>> > 1. The enable_dhcp must be true also for slaac, its real meaning
>> > is not "dhcp is enabled", but "neutron will take care of address 
>> > assignments".
>> >
>> > 2. The situation is not specific to the config drive being used, the 
>> > identical
>> > information is presented at
>> > http://169.254.169.254/openstack/latest/network_data.json
>> >
>> >> Here's my hypothesis... "type ipvX_dhcp" really means "Neutron will
>> >> manage ipvX addresses", not necessarily that it will use the DHCP
>> >> protocol.
>> >
>> > Right, this is the code part that produces the info:
>> > http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/netutils.py#n267
>>
>> Actually, there seems to be a bug here, or maybe two.
>>
>> There is a dhcp_server address set in the info for the subnet even when
>> it
>> has type slaac, which causes the logic above to output type "ipv6_dhcp"
>> instead
>> of "ipv6". Either that is a bug in Neutron or there is some hidden reason
>> to
>> also have a DHCP server address for slaac.
>>
>> It certainly is a bug in Nova to rely on that attribute in order to
>> decide upon the network type, as for dhcpv6-stateless we would
>> certainly have a dhcp_server defined for the additional information,
>> but the address configuration type is still slaac, so the network type
>> should be "ipv6" and the address for that subnet should be included in
>> the metadata.
>>
>> P.S.: I vaguely remember a discussion that the dhcp_server should also
>> send RAs in case of networks not having a router, maybe that is the
>> reason
>> for the behaviour above. Though I consider that scenario broken, RAs are
>> "*router* advertisements" and thus should only be sent by routers. If
>> people decide to deploy IPv6 on an isolated subnet, they should either
>> be using DHCP or no auto-configuration at all.
>
> Thank you for looking into this. As mentioned earlier in the thread
> glean needs to be able to configure the Linux interfaces explicitly for
> auto or dhcp so ideally the metadata info would also be explicit. I
> think that setting the type to "ipv6_dhcp" when using slaac has to be a
> bug when considering this because it means glean and other tools like
> cloud init will not be able to configure Linux interfaces properly.
>
> Are you going to be filing the bugs against nova and/or neutron? I think
> you understand the fine details better than I do, but I am happy to help
> out filing and pushing things as this would affect our use case quite a
> bit. Just let me know how I can help.

IMO this is a nova bug; neutron does provide all the information that
is needed, it's just that nova chooses to filter some of it:

https://bugs.launchpad.net/nova/+bug/1676363
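The decision being reported against can be caricatured like this (a paraphrase of the behaviour described in the thread, not the real nova netutils code, alongside the fix the thread argues for):

```python
def network_type_today(subnet):
    # Reported behaviour: any dhcp_server address forces "ipv6_dhcp",
    # even when the subnet's address assignment mode is slaac.
    return "ipv6_dhcp" if subnet.get("dhcp_server") else "ipv6"

def network_type_proposed(subnet):
    # What the thread argues for: key off ipv6_address_mode, so that
    # slaac and dhcpv6-stateless map to "ipv6" (interface configured
    # via RA/auto) and only stateful DHCPv6 maps to "ipv6_dhcp".
    if subnet.get("ipv6_address_mode") in ("slaac", "dhcpv6-stateless"):
        return "ipv6"
    return "ipv6_dhcp" if subnet.get("dhcp_server") else "ipv6"
```

With the proposed variant, tools like glean and cloud-init would see "ipv6" for a slaac subnet and configure the Linux interface for auto-configuration instead of DHCPv6.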



Re: [openstack-dev] [all] Last call for Leadership Training April 11-13

2017-03-27 Thread Thierry Carrez
Colette Alexander wrote:
> Just wanted to poke a bit, since there are still two slots available for
> leadership training in Ann Arbor for April 11/12/13. You can sign up
> here: https://etherpad.openstack.org/p/Leadershiptraining
> 
> I promise delicious food and a busy daily schedule with lots of learning
> with some great members of the community. Training costs are covered by
> the Foundation, but attendees are responsible for their own travel and
> incidentals.

If you can attend it, I recommend it! I was pretty skeptical at first, but
the servant leadership model that they present adapts quite well to our
unique environment. It is also a great opportunity to take a step back
with fellow members of our community and look at the big picture.

-- 
Thierry Carrez (ttx)



[openstack-dev] [nova] notification update week 12

2017-03-27 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 12.

Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance
notifications are sent with inconsistent timestamp format. Fix is ready
for the cores to review https://review.openstack.org/#/c/421981


Versioned notification transformation
-
Most of the transformation patches are in merge conflict. There is a 
patch to avoid such trivial merge conflicts in the future:
https://review.openstack.org/#/c/448225/ Pre-add functional tests stub 
to notification testing


The following patches are needed for searchlight to be able to switch 
to versioned notifications hence they are in focus:
* https://review.openstack.org/#/c/401992/ Transform 
instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform 
instance.volume_detach notification



Searchlight integration
---
changing Searchlight to use versioned notifications
~~~
https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications
bp is a hard dependency for the integration work. Searchlight needs 
instance.volume_attach and instance.volume_detach notifications to be 
transformed before they can switch to the nova's versioned 
notifications. So we treat those transformation patches with priority.



bp additional-notification-fields-for-searchlight
~
Patches needs review:
https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight

The BlockDeviceMapping addition to the InstancePayload has been 
proposed:
https://review.openstack.org/#/c/448779/ [WIP] Add BDM to 
InstancePayload



Other items
---
Short circuit notification payload generation
~
Test coverage is still needed for the nova patch 
https://review.openstack.org/#/c/428260/



Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4 so the next meeting will be held on 28th of
March
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170328T17
Please note that most of Europe switched to daylight saving time this 
weekend but the meeting is booked in UTC.


Cheers,
gibi




[openstack-dev] [Cinder] Tags for volumes

2017-03-27 Thread 王玺源
Hi cinder team:

I want to know your thoughts about adding tags for volumes.

Now many resources, like Nova instances, Glance images, and Neutron
networks, all support tagging. And some of our cloud customers
want this feature in Cinder as well. It's useful for auditing and billing for
the cloud admin; it can let admins and users filter resources by tag, and it
can let users categorize resources for different usages or just add remarks.

Actually there is a related spec in Cinder from 2 years ago, but
unfortunately it was not accepted and was abandoned:
https://review.openstack.org/#/c/99305/

Can we bring it up and revisit it now? What's the cinder
team's idea? Can you give me some advice on whether we can do it or not?


Thanks!

Wangxiyuan