Re: [openstack-dev] [tripleo] [tripleo-validations] using top-level fact vars will be deprecated in future Ansible versions

2018-07-23 Thread Cédric Jeanneret


On 07/23/2018 08:33 PM, Emilien Macchi wrote:
> Thanks Monty for pointing that out to me today on #ansible-devel.
> 
> Context: https://github.com/ansible/ansible/pull/41811
> The top-level fact vars are currently being deprecated in Ansible, maybe
> 2.7.
> It looks like it only affects tripleo-validations (in a quick look), but
> it could be more.
> See: http://codesearch.openstack.org/?q=ansible_facts&i=nope&files=&repos=
> 
> An example playbook was written to explain what is deprecated:
> https://github.com/ansible/ansible/pull/41811#issuecomment-399220997
> 
> But it seems like, starting with Ansible 2.5 (what we already have in
> Rocky and beyond), we should encourage the usage of ansible_facts
> dictionary.
> Example:
> var=hostvars[inventory_hostname].ansible_facts.hostname
> instead of:
> var=ansible_hostname

guh I'm sorry, but this is nonsense, ugly as hell, and will just
make things overcomplicated as sh*t. Like, really. I know we can't
really have a word about that kind of decision, but... damn, WHY?!

Thanks for the heads-up though - will patch my current disk space
validation update in order to take that into account.

> 
> Can we have someone from TripleO Validations help, and make sure we
> make it work with future versions of Ansible?
> Also there is a way to test this behavior by disabling the
> 'inject_facts_as_vars' option in ansible.cfg.
> 
> Hope this helps,
> -- 
> Emilien Macchi
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





Re: [openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky

2018-07-23 Thread Sangho Shin
Hello Boden,

Thank you for your notification.
Does it also apply to the networking- projects?

Thank you,

Sangho


> On 2018. 7. 23. 10:19 PM, Boden Russell  wrote:
> 
> If you're a networking project that uses neutron/neutron-lib, please
> read on.
> 
> We recently created the stable/rocky branch for neutron-lib based on
> neutron-lib 1.18.0 and neutron is now using 1.18.0 as well [1]. If
> you're a networking project that depends on (uses) neutron/master then
> it's probably best your project is also using 1.18.0.
> 
> Action items if your project uses neutron/master for Rocky:
> - If your project is covered in the existing patches to use neutron-lib
> 1.18.0 [2], please help verify/review.
> - If your project is not covered in [2], please update your requirements
> to use neutron-lib 1.18.0 in prep for Rocky.
> 
> If you run into any issues with neutron-lib 1.18.0 please report them
> immediately and/or find me on #openstack-neutron
> 
> Thanks
> 
> [1] https://review.openstack.org/#/c/583671/
> [2] https://review.openstack.org/#/q/topic:rocky-neutronlib
> 




Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-23 Thread Ghanshyam Mann
  On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C  
wrote  
 >   Hi,
 >   
 >  ** Intention **
 >  Intention is to expand Patrole testing to some service clients that already 
 > exist in some Tempest plugins, for core services only.
 >   
 >  ** Background **
 >  Digging through Neutron testing, it seems like there is currently a lot of 
 > test duplication between neutron-tempest-plugin and Tempest [1]. Under some 
 > circumstances it seems OK to have redundant testing/parallel  testing: 
 > “Having potential duplication between testing is not a big deal especially 
 > compared to the alternative of removing something which is actually 
 > providing value and is actively catching bugs, or blocking incorrect patches 
 > from landing” [2].

We really need to minimize test duplication. If a test already exists in a 
Tempest plugin for a core service, we do not need to add it to the Tempest 
repo unless it is an interop requirement. This applies to new tests, so we 
can avoid duplication in the future. I will write this up in the Tempest 
reviewer guide.
For the existing duplicate tests, per the bug you mentioned [1], we need to 
clean them up so each test lives in its respective repo (either the neutron 
tempest plugin or Tempest), as categorized in the etherpad [7]. How many 
tests are duplicated now? I will plan this as one of the cleanup work items 
for Stein. 

 >   
 >  This leads me to the following question: If API test duplication is OK, 
 > what about service client duplication? Patches like [3] and [4]  promote 
 > service client duplication with neutron-tempest-plugin. As far as I can 
 > tell, Neutron builds out some of its service clients dynamically here: [5]. 
 > Which includes segments service client (proposed as an addition to 
 > tempest.lib in [4]) here: [6].

Yeah, they are very dynamic in the neutron plugin because of old legacy 
code: the neutron tempest plugin was forked from Tempest as-is. Dynamic 
generation of service clients is really hard to debug and maintain, and it 
can easily lead to backward-incompatible changes if we make those service 
clients a stable interface for outside consumption. For those reasons, we 
fixed them in Tempest three years back [8] and made them static, consistent 
service client methods like the other service clients. 

 >   
 >  This leads to a situation where if we want to offer RBAC testing for these 
 > APIs (to validate their policy enforcement), we can’t really do so without 
 > adding the service client to Tempest, unless  we rely on the 
 > neutron-tempest-plugin (for example) in Patrole’s .zuul.yaml.
 >   
 >  ** Path Forward **
 >  Option #1: For the core services, most service clients should live in 
 > tempest.lib for standardization/governance around documentation and 
 > stability for those clients. Service client duplication  should try to be 
 > minimized as much as possible. API testing related to some service clients, 
 > though, should remain in the Tempest plugins.
 >   
 >  Option #2: Proceed with service client duplication, either by adding the 
 > service client to Tempest (or as yet another alternative, Patrole). This 
 > leads to maintenance overhead: have to maintain  service clients in the 
 > plugins and Tempest itself.
 >   
 >  Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs.

We need to share the service clients among Tempest plugins, and each 
service client that is shared across repos has to be declared a stable 
interface, as Tempest does. The idea here is that a service client lives in 
the repo where its original tests were added or are going to be added. For 
example, in the case of the neutron tempest plugin: if the rbac-policy API 
tests are in neutron, then their service client needs to be owned by 
neutron-tempest-plugin, and the rbac-policy service client can then be 
consumed by Patrole. It is the same for the congress tempest plugin, which 
consumes the mistral service client. I recommended the same in that thread: 
consume the service client from Mistral, and have Mistral make the service 
client a stable interface [9], which is being done in congress [10].

Here are the general recommendations for Tempest plugins regarding service 
clients:
- Tempest plugins should make their service clients stable interfaces, 
which gives two advantages:
  1. You ensure that the API calling interface (the service clients) cannot 
change, which indirectly means the APIs themselves cannot change. This 
makes your tempest plugin testing more reliable.

  2. Your service clients can be used in other Tempest plugins to avoid 
duplicate code/interfaces. If other plugins use your service clients, they 
also test your project, so it is good to help them by providing the 
required interface as stable.

The initial idea of owning the service clients in their respective plugins 
was to share them among plugins for integrated testing of more than one 
OpenStack service.
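The difference between the dynamically generated clients and the static, stable-interface style can be illustrated with a small self-contained Python sketch. The class and method names below are hypothetical, for illustration only, and do not reproduce actual tempest or neutron-tempest-plugin code:

```python
# Dynamic generation, in the spirit of the legacy neutron client code:
# methods are created at runtime from a resource list, so they are hard
# to grep for, hard to document, and easy to change accidentally.
class DynamicNetworkClient:
    def __init__(self, resource_names):
        for name in resource_names:
            # e.g. creates list_segments, list_networks, ... on the fly
            setattr(self, 'list_%s' % name,
                    lambda name=name: 'GET /v2.0/%s' % name)


# Static style, as Tempest uses for its stable service clients: every
# method is written out explicitly, so the calling interface is fixed
# and can safely be consumed by other plugins (e.g. Patrole).
class StaticNetworkClient:
    def list_segments(self):
        return 'GET /v2.0/segments'


dynamic = DynamicNetworkClient(['segments'])
static = StaticNetworkClient()
# Both issue the same request, but only the static client's interface
# can be audited and frozen as a stable contract.
assert dynamic.list_segments() == static.list_segments()
```

The point of the static style is exactly the stability argument above: a consumer can rely on `list_segments` existing with a fixed signature.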

- Usage of service client

Re: [openstack-dev] [Openstack-operators][nova] Couple of CellsV2 questions

2018-07-23 Thread Matt Riedemann
I'll try to help a bit inline. Also cross-posting to openstack-dev and 
tagging with [nova] to highlight it.


On 7/23/2018 10:43 AM, Jonathan Mills wrote:
I am looking at implementing CellsV2 with multiple cells, and there's a 
few things I'm seeking clarification on:


1) How does a superconductor know that it is a superconductor?  Is its 
operation different in any fundamental way?  Is there any explicit 
configuration or a setting in the database required? Or does it simply 
not care one way or another?


It's a topology term, not really anything in config or the database that 
distinguishes the "super" conductor. I assume you've gone over the 
service layout in the docs:


https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#service-layout

There are also some summit talks from Dan about the topology linked here:

https://docs.openstack.org/nova/latest/user/cells.html#cells-v2

The superconductor is the conductor service at the "top" of the tree 
which interacts with the API and scheduler (controller) services and 
routes operations to the cell. Then once in a cell, the operation should 
ideally be confined there. So, for example, reschedules during a build 
would be confined to the cell. The cell conductor doesn't go back "up" 
to the scheduler to get a new set of hosts for scheduling. This of 
course depends on which release you're using and your configuration, see 
the caveats section in the cellsv2-layout doc.




2) When I ran the command "nova-manage cell_v2 create_cell --name=cell1 
--verbose", the entry created for cell1 in the api database includes 
only one rabbitmq server, but I have three of them as an HA cluster.  
Does it only support talking to one rabbitmq server in this 
configuration? Or can I just update the cell1 transport_url in the 
database to point to all three? Is that a supported configuration?


First, don't update stuff directly in the database if you don't have to. 
:) What you set on the transport_url should be whatever oslo.messaging 
can handle:


https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.transport_url
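For an HA RabbitMQ cluster, oslo.messaging accepts multiple host:port pairs in a single transport_url, so the cell mapping can list all three brokers. A minimal sketch (hostnames, credentials, and vhost are placeholders, not values from this thread):

```ini
[DEFAULT]
transport_url = rabbit://nova:secret@rabbit1:5672,nova:secret@rabbit2:5672,nova:secret@rabbit3:5672/cell1_vhost
```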

There is at least one reported bug for this but I'm not sure I fully 
grok it or what its status is at this point:


https://bugs.launchpad.net/nova/+bug/1717915



3) Is there anything wrong with having one cell share the amqp bus with 
your control plane, while having additional cells use their own amqp 
buses? Certainly I realize that the point of CellsV2 is to shard the 
amqp bus for greater horizontal scalability.  But in my case, my first 
cell is on the smaller side, and happens to be colocated with the 
control plane hardware (whereas other cells will be in other parts of 
the datacenter, or in other datacenters with high-speed links).  I was 
thinking of just pointing that first cell back at the same rabbitmq 
servers used by the control plane, but perhaps directing them at their 
own rabbitmq vhost. Is that a terrible idea?


Would need to get input from operators and/or Dan Smith's opinion on 
this one, but I'd say it's no worse than having a flat single cell 
deployment. However, if you're going to do multi-cell long-term anyway, 
then it would be best to get in the mindset and discipline of not 
relying on shared MQ between the controller services and the cells. In 
other words, just do the right thing from the start rather than have to 
worry about maybe changing the deployment / configuration for that one 
cell down the road when it's harder.


--

Thanks,

Matt



[openstack-dev] [nova] Is the XenProject CI dead?

2018-07-23 Thread Matt Riedemann
We have the XenProject CI [1] which is supposed to run the libvirt+xen 
configuration. But I haven't seen it run on this libvirt driver change 
[2]. Does anyone know about its status?


[1] https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
[2] https://review.openstack.org/#/c/560317/

--

Thanks,

Matt



[openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-23 Thread MONTEIRO, FELIPE C
Hi,

** Intention **
Intention is to expand Patrole testing to some service clients that already 
exist in some Tempest plugins, for core services only.

** Background **
Digging through Neutron testing, it seems like there is currently a lot of test 
duplication between neutron-tempest-plugin and Tempest [1]. Under some 
circumstances it seems OK to have redundant testing/parallel testing: "Having 
potential duplication between testing is not a big deal especially compared to 
the alternative of removing something which is actually providing value and is 
actively catching bugs, or blocking incorrect patches from landing" [2].

This leads me to the following question: If API test duplication is OK, what 
about service client duplication? Patches like [3] and [4]  promote service 
client duplication with neutron-tempest-plugin. As far as I can tell, Neutron 
builds out some of its service clients dynamically here: [5]. Which includes 
segments service client (proposed as an addition to tempest.lib in [4]) here: 
[6].

This leads to a situation where if we want to offer RBAC testing for these APIs 
(to validate their policy enforcement), we can't really do so without adding 
the service client to Tempest, unless we rely on the neutron-tempest-plugin 
(for example) in Patrole's .zuul.yaml.

** Path Forward **
Option #1: For the core services, most service clients should live in 
tempest.lib for standardization/governance around documentation and stability 
for those clients. Service client duplication should try to be minimized as 
much as possible. API testing related to some service clients, though, should 
remain in the Tempest plugins.

Option #2: Proceed with service client duplication, either by adding the 
service client to Tempest (or as yet another alternative, Patrole). This leads 
to maintenance overhead: have to maintain service clients in the plugins and 
Tempest itself.

Option #3: Don't offer RBAC testing in Patrole plugin for those APIs.

Thanks,

Felipe

[1] https://bugs.launchpad.net/neutron/+bug/1552960
[2] https://docs.openstack.org/tempest/latest/test_removal.html
[3] https://review.openstack.org/#/c/482395/
[4] https://review.openstack.org/#/c/582340/
[5] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/services/network/json/network_client.py
[6]  
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/api/test_timestamp.py




[openstack-dev] [release][ptl] Deadlines this week

2018-07-23 Thread Sean McGinnis
Just a quick reminder that this week is a big one for deadlines.

This Thursday, July 26, is our scheduled deadline for feature freeze, soft
string freeze, client library freeze, and requirements freeze.

String freeze is necessary to give our i18n team a chance at translating error
strings. You are highly encouraged not to accept proposed changes containing
modifications in user-facing strings (with consideration for important bug
fixes of course). Such changes should be rejected by the review team and
postponed until the next series development opens (which should happen when
RC1 is published).

The other freezes are to allow library changes and other code churn to settle
down before we get to RC1. Import feature freeze exceptions should be requested
from the project's PTL for them to decide if the risk is low enough to allow
changes to still be accepted.

Requirements updates will need a feature freeze exception from the requirements
team. Those should be requested by sending a request to openstack-dev with the
subject line containing "[requirements][ffe]".

For more details, please refer to our published Rocky release schedule:

https://releases.openstack.org/rocky/schedule.html

Thanks!
Sean



Re: [openstack-dev] [StoryBoard] issues found while using storyboard

2018-07-23 Thread Jay S Bryant



On 7/23/2018 1:53 PM, Chris Friesen wrote:

Hi,

I'm on a team that is starting to use StoryBoard, and I just thought 
I'd raise some issues I've recently run into.  It may be that I'm 
making assumptions based on previous tools that I've used (Launchpad 
and Atlassian's Jira) and perhaps StoryBoard is intended to be used 
differently, so if that's the case please let me know.


1) There doesn't seem to be a formal way to search for newly-created 
stories that have not yet been triaged.


2) There doesn't seem to be a way to find stories/tasks using 
arbitrary boolean logic, for example something of the form "(A OR (B 
AND C)) AND NOT D". Automatic worklists will only let you do "(A AND 
B) OR (C AND D) OR (E AND F)" and story queries won't even let you do 
that.


3) I don't see a structured way to specify that a bug has been 
confirmed by someone other than the reporter, or how many people have 
been impacted by it.


4) I can't find a way to add attachments to a story.  (Like a big log 
file, or a proposed patch, or a screenshot.)

Chris,

Tom Barron and I have both raised this as a concern for Cinder and 
Manila.  I could not find a bug for not being able to create attachments 
so I have created one: https://storyboard.openstack.org/#!/story/2003071


Jay


5) I don't see a way to search for stories that have not been assigned 
to someone.


6) This is more a convenience thing, but when looking at someone 
else's public automatic worklist, there's no way to see what the query 
terms were that generated the worklist.


Chris






[openstack-dev] [StoryBoard] issues found while using storyboard

2018-07-23 Thread Chris Friesen

Hi,

I'm on a team that is starting to use StoryBoard, and I just thought I'd raise 
some issues I've recently run into.  It may be that I'm making assumptions based 
on previous tools that I've used (Launchpad and Atlassian's Jira) and perhaps 
StoryBoard is intended to be used differently, so if that's the case please let 
me know.


1) There doesn't seem to be a formal way to search for newly-created stories 
that have not yet been triaged.


2) There doesn't seem to be a way to find stories/tasks using arbitrary boolean 
logic, for example something of the form "(A OR (B AND C)) AND NOT D". 
Automatic worklists will only let you do "(A AND B) OR (C AND D) OR (E AND F)" 
and story queries won't even let you do that.


3) I don't see a structured way to specify that a bug has been confirmed by 
someone other than the reporter, or how many people have been impacted by it.


4) I can't find a way to add attachments to a story.  (Like a big log file, or a 
proposed patch, or a screenshot.)


5) I don't see a way to search for stories that have not been assigned to 
someone.

6) This is more a convenience thing, but when looking at someone else's public 
automatic worklist, there's no way to see what the query terms were that 
generated the worklist.


Chris



[openstack-dev] [tripleo] [tripleo-validations] using top-level fact vars will be deprecated in future Ansible versions

2018-07-23 Thread Emilien Macchi
Thanks Monty for pointing that out to me today on #ansible-devel.

Context: https://github.com/ansible/ansible/pull/41811
The top-level fact vars are currently being deprecated in Ansible, maybe
2.7.
It looks like it only affects tripleo-validations (in a quick look), but it
could be more.
See: http://codesearch.openstack.org/?q=ansible_facts&i=nope&files=&repos=

An example playbook was written to explain what is deprecated:
https://github.com/ansible/ansible/pull/41811#issuecomment-399220997

But it seems like, starting with Ansible 2.5 (what we already have in Rocky
and beyond), we should encourage the usage of ansible_facts dictionary.
Example:
var=hostvars[inventory_hostname].ansible_facts.hostname
instead of:
var=ansible_hostname

Can we have someone from TripleO Validations help, and make sure we make
it work with future versions of Ansible?
Also there is a way to test this behavior by disabling the
'inject_facts_as_vars' option in ansible.cfg.
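For reference, here is a minimal sketch of what this looks like in practice. The paths and play content are illustrative only, not taken from tripleo-validations:

```ini
# ansible.cfg -- turn off top-level fact injection to surface any
# code that still relies on the deprecated ansible_* variables
[defaults]
inject_facts_as_vars = False
```

```yaml
# task using the ansible_facts dictionary instead of the injected
# top-level ansible_hostname variable
- debug:
    msg: "{{ ansible_facts.hostname }}"
```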

Hope this helps,
-- 
Emilien Macchi


Re: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits

2018-07-23 Thread Jiří Stránský

+1!

On 20.7.2018 10:07, Carlos Camacho Gonzalez wrote:

Hi!!!

I'd like to propose Jose Luis Franco [1][2] for core reviewer on all the
TripleO upgrades bits. He has shown constant and active involvement in
improving and fixing our updates/upgrades workflows, and he also helps
develop/improve/fix our upstream support for testing the
updates/upgrades.

Please vote -1/+1, and consider this my +1 vote :)

[1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com
[2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa

Cheers,
Carlos.









[openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff?

2018-07-23 Thread James Page
Hi All

tl;dr we (the original founders) have not managed to invest the time to get
the Upgrades SIG booted - time to hit reboot or time to poweroff?

Since Vancouver, two of the original SIG chairs have stepped down, leaving
me in the hot seat with minimal participation from either deployment
projects or operators in the IRC meetings. In addition, I've only been able
to make every third IRC meeting, so they have generally not been happening.

I think the current timing is not good for a lot of folk so finding a
better slot is probably a must-have if the SIG is going to continue - and
maybe moving to a monthly or bi-weekly schedule rather than the weekly slot
we have now.

In addition I need some willing folk to help with leadership in the SIG.
If you have an interest and would like to help please let me know!

I'd also like to better engage with all deployment projects - upgrades is
something that deployment tools should be looking to encapsulate as
features, so it would be good to get deployment projects engaged in the SIG
with nominated representatives.

Based on the attendance in upgrades sessions in Vancouver and
developer/operator appetite to discuss all things upgrade at said sessions
I'm assuming that there is still interest in having a SIG for Upgrades but
I may be wrong!

Thoughts?

James


[openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky

2018-07-23 Thread Boden Russell
If you're a networking project that uses neutron/neutron-lib, please
read on.

We recently created the stable/rocky branch for neutron-lib based on
neutron-lib 1.18.0 and neutron is now using 1.18.0 as well [1]. If
you're a networking project that depends on (uses) neutron/master then
it's probably best your project is also using 1.18.0.

Action items if your project uses neutron/master for Rocky:
- If your project is covered in the existing patches to use neutron-lib
1.18.0 [2], please help verify/review.
- If your project is not covered in [2], please update your requirements
to use neutron-lib 1.18.0 in prep for Rocky.
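Concretely, the minimum-version bump in a project's requirements.txt would look something like this (the exact existing specifier will vary per project):

```diff
-neutron-lib>=1.13.0
+neutron-lib>=1.18.0
```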

If you run into any issues with neutron-lib 1.18.0 please report them
immediately and/or find me on #openstack-neutron

Thanks

[1] https://review.openstack.org/#/c/583671/
[2] https://review.openstack.org/#/q/topic:rocky-neutronlib



Re: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits

2018-07-23 Thread Alex Schultz
+1

On Fri, Jul 20, 2018 at 2:07 AM, Carlos Camacho Gonzalez
 wrote:
> Hi!!!
>
> I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the
> TripleO upgrades bits. He shows a constant and active involvement in
> improving and fixing our updates/upgrades workflows, he helps also trying to
> develop/improve/fix our upstream support for testing the updates/upgrades.
>
> Please vote -1/+1, and consider this my +1 vote :)
>
> [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com
> [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa
>
> Cheers,
> Carlos.
>
>



[openstack-dev] [nova]Notification update week 30

2018-07-23 Thread Balázs Gibizer

Hi,

Here is the latest notification subteam update.

Bugs

No new bugs tagged with notifications and no progress with the existing
ones.


Features


Versioned notification transformation
-
We have only a handful of patches left before we can finally finish 
the multi-year effort of transforming every legacy notification to the 
versioned format. Three of those patches already have a +2: 
https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky


Weekly meeting
--
No meeting this week. Please ping me on IRC if you have something
important to talk about.

Cheers,
gibi




Re: [openstack-dev] [tripleo] How to integrate a Heat plugin in a containerized deployment?

2018-07-23 Thread Bogdan Dobrelya

On 7/23/18 12:50 PM, Ricardo Noriega De Soto wrote:

Hello guys,

I need to deploy the following Neutron BGPVPN heat plugin.

https://docs.openstack.org/networking-bgpvpn/ocata/heat.html


This will allow users to create Heat templates with BGPVPN resources. 
Right now, the BGPVPN service plugin is only available in the 
neutron-server-opendaylight Kolla image:



https://github.com/openstack/kolla/blob/master/docker/neutron/neutron-server-opendaylight/Dockerfile.j2#L13


It would make sense to add the python-networking-bgpvpn-heat package 
right there. Is that correct? Heat exposes a parameter to configure 
plugins 

You can override that via neutron_server_opendaylight_packages_append in
tripleo common, like [0]

[0] 
http://git.openstack.org/cgit/openstack/tripleo-common/tree/container-images/tripleo_kolla_template_overrides.j2#n76 



(HeatEnginePluginDirs), which corresponds to the plugins_dir parameter in 
heat.conf.


What is the issue here?

Heat will try to search for available plugins in the path determined by 
HeatEnginePluginDirs; however, the heat plugin is located in a separate 
container (neutron_api). How should we tackle this? I see no other 
example of this type of integration.


Here is the most recent example [1] of inter-container state sharing 
for Ironic containers. I think something similar should be done for 
the docker/services/heat* yaml files.


[1] https://review.openstack.org/#/c/584265/



AFAIK, /usr/lib/python2.7/site-packages is not exposed to the host as a 
mounted volume, so how is heat supposed to find bgpvpn heat plugin?


Thanks for your advice.

Cheers


--
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology 
  | Red Hat

irc: rnoriega @freenode






--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



[openstack-dev] [tripleo] How to integrate a Heat plugin in a containerized deployment?

2018-07-23 Thread Ricardo Noriega De Soto
Hello guys,

I need to deploy the following Neutron BGPVPN heat plugin.

https://docs.openstack.org/networking-bgpvpn/ocata/heat.html


This will allow users to create Heat templates with BGPVPN resources.
Right now, the BGPVPN service plugin is only available in the
neutron-server-opendaylight Kolla image:

https://github.com/openstack/kolla/blob/master/docker/neutron/neutron-server-opendaylight/Dockerfile.j2#L13


It would make sense to add the python-networking-bgpvpn-heat package
right there. Is that correct? Heat exposes a parameter to configure plugins
(HeatEnginePluginDirs), which corresponds to the plugins_dir parameter in
heat.conf.
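For context, HeatEnginePluginDirs ultimately renders into the plugins_dir option in heat.conf, which takes a comma-separated list of directories to search for out-of-tree resource plugins. The paths below are illustrative, not the values any TripleO template actually sets:

```ini
[DEFAULT]
# directories searched by heat-engine for resource plugins
plugins_dir = /usr/lib/heat,/usr/local/lib/heat
```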

What is the issue here?

Heat will search for available plugins in the path determined by
HeatEnginePluginDirs; however, the heat plugin is located in a separate
container (neutron_api). How should we tackle this? I see no other
example of this type of integration.
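For concreteness, here is a sketch of how HeatEnginePluginDirs would be
wired up in a TripleO environment file. The file name and path below are
assumptions for illustration only; this only helps if the plugin files
actually exist at that path inside (or are mounted into) the heat_engine
container, which is exactly the open question:

```yaml
# environments/neutron-bgpvpn-heat.yaml -- hypothetical example file name
parameter_defaults:
  # Rendered into heat.conf as plugins_dir; the directory must be
  # visible from inside the heat_engine container.
  HeatEnginePluginDirs:
    - /usr/lib/python2.7/site-packages/networking_bgpvpn_heat
```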

AFAIK, /usr/lib/python2.7/site-packages is not exposed to the host as a
mounted volume, so how is Heat supposed to find the bgpvpn heat plugin?

Thanks for your advice.

Cheers


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode


Re: [openstack-dev] [self-healing] [ptg] [monasca] PTG track schedule published

2018-07-23 Thread Bedyk, Witold
Hi Adam,

if nothing else works, we could probably offer you a half-day of the Monasca
slot on Monday or Tuesday afternoon. I'm afraid, though, that our room might be
too small for you.

Cheers
Witek

> -Original Message-
> From: Thierry Carrez 
> Sent: Freitag, 20. Juli 2018 18:46
> To: Adam Spiers 
> Cc: openstack-dev mailing list 
> Subject: Re: [openstack-dev] [self-healing] [ptg] PTG track schedule
> published
> 
> Adam Spiers wrote:
> > Apologies - I have had to change plans and leave on the Thursday
> > evening (old friend is getting married on Saturday morning).  Is there
> > any chance of swapping the self-healing slot with one of the others?
> 
> It's tricky, as you asked to avoid conflicts with API SIG, Watcher, Monasca,
> Masakari, and Mistral... Which day would be best for you, given the current
> schedule (assuming we don't move anything else, as it's too late for that)?
> 
> --
> Thierry Carrez (ttx)
> 


Re: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-23 Thread Slawomir Kaplonski
Hi,

Thanks, Artom, for taking care of it. Did you make any progress?
I think it might be quite important to fix, as it has failed around 50 times
during the last 7 days:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22
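For context, the failing assertion effectively boils down to fetching the
instance's meta_data.json and checking that no tagged devices remain after
both detaches. A minimal sketch of that check (the payload structure is
illustrative only, not the exact tempest code):

```python
import json

def remaining_tagged_devices(meta_data_json):
    """Return the devices still listed in a meta_data.json payload.

    After the tagged volume and the tagged port have both been detached,
    this list should be empty; the failure mode discussed in the quoted
    thread is that the detached volume still shows up here.
    """
    meta = json.loads(meta_data_json)
    return meta.get("devices", [])

# Illustrative payload: a tagged volume that was not cleaned up on detach.
stale = json.dumps({
    "devices": [{"type": "disk", "bus": "virtio", "tags": ["volume-tag"]}],
})
print(remaining_tagged_devices(stale))   # the stale tagged volume
print(remaining_tagged_devices(json.dumps({"devices": []})))  # → []
```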

> Message written by Artom Lifshitz  on
> 19.07.2018, at 19:28:
> 
> I've proposed [1] to add extra logging on the Nova side. Let's see if
> that helps us catch the root cause of this.
> 
> [1] https://review.openstack.org/584032
> 
> On Thu, Jul 19, 2018 at 12:50 PM, Artom Lifshitz  wrote:
>> Because we're waiting for the volume to become available before we
>> continue with the test [1], its tag still being present means Nova's
>> not cleaning up the device tags on volume detach. This is most likely
>> a bug. I'll look into it.
>> 
>> [1] 
>> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378
>> 
>> On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski  
>> wrote:
>>> Hi,
>>> 
>>> For some time we have seen that the test
>>> tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
>>> fails intermittently.
>>> A bug about that is currently reported against Tempest [1], but after a small
>>> patch [2] was merged, I was today able to check what causes this issue.
>>> 
>>> The failing test is in [3], and everything appears to go fine up to the
>>> last line of the test. The volume and port are created and attached, the
>>> tags are set properly, and both devices are also detached properly; yet
>>> at the end the test fails because
>>> http://169.254.169.254/openstack/latest/meta_data.json still has a
>>> device inside.
>>> From [4] it now looks like it is the volume that is not removed from
>>> meta_data.json.
>>> So I think it would be good if people from the Nova and Cinder teams could
>>> look at it and try to figure out what is going on there and how it can be
>>> fixed.
>>> 
>>> Thanks in advance for help.
>>> 
>>> [1] https://bugs.launchpad.net/tempest/+bug/1775947
>>> [2] https://review.openstack.org/#/c/578765/
>>> [3] 
>>> https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330
>>> [4] 
>>> http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919
>>> 
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>> 
>>> 
>> 
>> 
>> 
>> --
>> --
>> Artom Lifshitz
>> Software Engineer, OpenStack Compute DFG
> 
> 
> 
> -- 
> --
> Artom Lifshitz
> Software Engineer, OpenStack Compute DFG
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




[openstack-dev] [neutron] Bug deputy report 07/16/2018 - 07/22/2018

2018-07-23 Thread Lujin Luo
Hello everyone,

I was on bug deputy from July 16th to 22nd. Here is a brief summary of
the bugs reported during this period.

In total, we had 6 bugs reported last week.

1. https://bugs.launchpad.net/neutron/+bug/1781892 - confirmed (Low).
QoS related. A patch was proposed to clarify that a QoS policy attached
to a floating IP will not automatically be associated with, or visible
in, the port's ``qos_policy_id`` field after the floating IP is
associated with a port. (Link to the patch:
https://review.openstack.org/#/c/583967/ )

2. https://bugs.launchpad.net/neutron/+bug/1782141 - confirmed (High).
QoS related. Patch proposed to clear rate limits when default NULL
values are used. (Link to the patch:
https://review.openstack.org/#/c/584297/)

3. https://bugs.launchpad.net/neutron/+bug/1782026 - duplicate of
1758952. Backport patch proposed/merged to stable/queens.

4. https://bugs.launchpad.net/neutron/+bug/1782337 - duplicate of
1776840. Backport patch proposed and under review
https://review.openstack.org/#/c/584172/.

5. https://bugs.launchpad.net/neutron/+bug/1782421 - under discussion.
Large-scale concurrent port creations fail due to revision number
bumps. The submitter has a workaround for the issue, but it may also
have side effects. Anyone who is familiar with large-scale
deployments or revision numbers, please kindly join the discussion.

6. https://bugs.launchpad.net/neutron/+bug/1782576 - confirmed (High).
SG logging data is not logged into /var/log/syslog.

Best regards,
Lujin



Re: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits

2018-07-23 Thread Carlos Camacho Gonzalez
Thank you, Jose Luis, for the work.

Let's keep the thread open until July 31st, and if there is no veto I'll
grant you the correct permissions.

Cheers,
Carlos.

On Fri, Jul 20, 2018 at 2:55 PM, Jose Luis Franco Arza 
wrote:

> Thank you very much to all for the recognition.
> I will use this power with responsibility, as Uncle Ben once said:
> https://giphy.com/gifs/MCZ39lz83o5lC/fullscreen
>
> Regards,
> Jose Luis
>
> On Fri, Jul 20, 2018 at 1:00 PM, Emilien Macchi 
> wrote:
>
>>
>>
>> On Fri, Jul 20, 2018 at 4:09 AM Carlos Camacho Gonzalez <
>> ccama...@redhat.com> wrote:
>>
>>> Hi!!!
>>>
>>> I'd like to propose Jose Luis Franco [1][2] for core reviewer on all
>>> the TripleO upgrade bits. He shows constant and active involvement in
>>> improving and fixing our updates/upgrades workflows, and he also helps
>>> develop, improve, and fix our upstream support for testing
>>> updates/upgrades.
>>>
>>> Please vote -1/+1, and consider this my +1 vote :)
>>>
>>
>> Nice work indeed, +1. Keep up the good work, and thanks for all your help!
>> --
>> Emilien Macchi
>>
>>
>