Re: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation

2018-01-29 Thread Rikimaru Honjo

Hello Greg,

Thank you for reporting & researching.

On 2018/01/27 5:59, Waines, Greg wrote:

Update on this.

It turned out that I had incorrectly set the ‘project_name’ and ‘username’ in
/etc/masakarimonitors/masakarimonitors.conf
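For reference, a sketch of the relevant section of masakarimonitors.conf (the section name and option names are assumed from the masakarimonitors sample configuration; values are placeholders for this devstack):

```ini
[api]
# Credentials the instancemonitor uses to send notifications to the
# masakari API; they must match a valid keystone user/project.
auth_url = http://10.10.10.14/identity
project_name = admin
username = admin
password = <admin password>
user_domain_id = default
project_domain_id = default
```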

Setting both of these attributes to ‘admin’ made the instancemonitor’s
notification to masakari-engine succeed.
e.g.
stack@devstack-masakari-louie:~/devstack$ masakari notification-list
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| notification_uuid                    | generated_time             | status  | source_host_uuid                     | type |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
| b8c6c561-7a93-40a2-8d73-3783024865b4 | 2018-01-26T19:41:29.00     | running | 51bc8b8b-324f-499a-9166-38c22b3842cd | VM   |
+--------------------------------------+----------------------------+---------+--------------------------------------+------+
stack@devstack-masakari-louie:~/devstack$


However, I now get the following error in masakari-engine when it attempts
to do the VM recovery:

Jan 26 19:41:28 devstack-masakari-louie masakari-engine[11795]: 2018-01-26 
19:41:28.968 TRACE masakari.engine.drivers.taskflow.driver EndpointNotFound: 
publicURL endpoint for compute service named Compute Service not found


Why is masakari-engine looking for a publicURL endpoint for 
service_type=’compute’ and service_name=’Compute Service’ ?

I don't think there is a reason; this default value was added by the
following patch:
https://review.openstack.org/#/c/388734/

I think this is a bug.
Could you report it in Launchpad?
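The failure mode can be sketched with a toy catalog lookup. This is illustrative only (neither Masakari's nor keystoneauth's actual code); the catalog entry mirrors the `openstack endpoint list` output below, where the compute service is registered under the name 'nova':

```python
class EndpointNotFound(Exception):
    pass


def find_endpoint(catalog, service_type, interface, service_name=None):
    """Filter a service catalog by type, optional name, and interface."""
    for svc in catalog:
        if svc['type'] != service_type:
            continue
        if service_name is not None and svc['name'] != service_name:
            continue
        for ep in svc['endpoints']:
            if ep['interface'] == interface:
                return ep['url']
    raise EndpointNotFound('%s endpoint for %s service named %s not found'
                           % (interface, service_type, service_name))


# Catalog entry as registered by this devstack (service name is "nova"):
catalog = [{'type': 'compute', 'name': 'nova',
            'endpoints': [{'interface': 'public',
                           'url': 'http://10.10.10.14/compute/v2.1'}]}]

# A lookup by the name the catalog actually uses succeeds...
print(find_endpoint(catalog, 'compute', 'public', service_name='nova'))

# ...while a lookup with a service_name of 'Compute Service' does not match:
try:
    find_endpoint(catalog, 'compute', 'public',
                  service_name='Compute Service')
except EndpointNotFound as exc:
    print(exc)
```

Filtering on service_name in addition to service_type is what makes the lookup brittle: any mismatch with the catalog's registered name fails even though a compute endpoint exists.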


See below that the Service Name = ‘nova’ ... NOT ‘Compute Service’

stack@devstack-masakari-louie:~/devstack$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                          |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| 0111643ef1584decb523524a3db5ce18 | RegionOne | nova_legacy  | compute_legacy | True    | public    | http://10.10.10.14/compute/v2/$(project_id)s |
| 01790448c22f49e69774adf290fba728 | RegionOne | gnocchi      | metric         | True    | internal  | http://10.10.10.14/metric                    |
| 0b31693c6650499a981d580721be9e48 | RegionOne | vitrage      | rca            | True    | internal  | http://10.10.10.14:8999                      |
| 40f66ed61b4e4310829aa69e11c75554 | RegionOne | neutron      | network        | True    | public    | http://10.10.10.14:9696/                     |
| 47479cf64af944b996b1fbca42efd945 | RegionOne | nova         | compute        | True    | public    | http://10.10.10.14/compute/v2.1              |
| 49dccfc61e8246a2a2c0b8d12b3db91a | RegionOne | vitrage      | rca            | True    | admin     | http://10.10.10.14:8999                      |
| 5261ba0327de4c2d92842147636ee770 | RegionOne | masakari     | ha             | True    | internal  | http://10.10.10.14:15868/v1/$(tenant_id)s    |
| 5df28622c6f449ebad12d9b62110cd08 | RegionOne | gnocchi      | metric         | True    | admin     | http://10.10.10.14/metric                    |
| 64f8f401431042a0ab1d053ca4f4df02 | RegionOne | glance       | image          | True    | public    | http://10.10.10.14/image                     |
| 69ad6b9d0b0b4d0a8da6fa36af8289cb | RegionOne | masakari     | ha             | True    | public    | http://10.10.10.14:15868/v1/$(tenant_id)s    |
| 7dd9d5396e9c49d4a41e2865b841f6a0 | RegionOne | masakari     | ha             | True    | admin     | http://10.10.10.14:15868/v1/$(tenant_id)s    |
| 811fa7f4b3c14612b4aca354dc8ea77e | RegionOne | vitrage      | rca            | True    | public    | http://10.10.10.14:8999                      |
| 8535da724c424363bffe1d033ee033e5 | RegionOne | cinder       | volume         | True    | public    | http://10.10.10.14/volume/v1/$(project_id)s  |
| 853f1783f1014075a03c16f7c3a2568a | RegionOne | keystone     | identity       | True    | admin     | http://10.10.10.14/identity                  |
| 9450f5611ca747f2a049f22ff0996dba | RegionOne | cinderv3     | volumev3       | True    | public    | http://10.10.10.14/volume/v3/$(project_id)s  |
| 9a73696d88a9438cb0ab75a754a08e9d | RegionOne | gnocchi      | metric         | True    | public    | http://10.10.10.14/metric                    |
| b1ff2b4d683c4a58a3b27232699d0058 | RegionOne | cinderv2     | volumev2       | True    | public    | http://10.10.10.14/volume/v2/$(project_id)s  |
| d4e66240faff48f2b5e1d0fcfb73a74b | RegionOne | placement    | placement      | True

[openstack-dev] [magnum] Any plan to resume nodegroup work?

2018-01-29 Thread Wan-yen Hsu
Hi,

  I saw magnum nodegroup specs  https://review.openstack.org/425422,
https://review.openstack.org/433680, and
https://review.openstack.org/425431 were last updated a year ago.  Is there
any plan to resume this work or is it superseded by other specs or features?

  Thanks!

Regards,
Wan-yen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency

2018-01-29 Thread We We
Hi,
pyspdk is an important tool library [1] that supports the Cyborg SPDK driver
[2] in managing the backend SPDK-based app. We need to upload pyspdk to PyPI
[3] and then append a 'pyspdk>=0.0.1' entry to Cyborg's requirements.txt so
that the SPDK driver can be built correctly when Zuul runs. However, it turns
out that to add a new requirement we first need it accepted into the upstream
openstack/requirements repository [4].

I'm sorry for proposing this request so late. Please help.


[1] https://review.gerrithub.io/#/c/379741/ 

[2] https://review.openstack.org/#/c/538164/11 

[3] https://pypi.python.org/pypi/pyspdk/0.0.1 

[4] https://github.com/openstack/requirements 



Regards,
Helloway



Re: [openstack-dev] [nova]Nova rescue inject password failed

2018-01-29 Thread 李杰
Thank you, Mathieu. Do you know how to use the metadata RESTful service to
inject the password?
 
 
-- Original --
From: "Mathieu Gagné"; 
Date: Tuesday, January 30, 2018, 3:05 AM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [nova]Nova rescue inject password failed

 
On Mon, Jan 29, 2018 at 4:57 AM, Matthew Booth  wrote:
> On 29 January 2018 at 09:27, 李杰  wrote:
>>
>>  Hi, all:
>>   I want to access my instance in the rescue state using the
>> temporary password which nova rescue gave me, but this password doesn't work.
>> Can I ask how this password is injected into the instance? I can't find any
>> specification of how it is done. I read the rescue code, and it indicates
>> that the password has been injected.
>>   I use libvirt as the virt driver. The web said to set
>> "[libvirt]inject_password=true", but it didn't work. Is it a bug? Can you
>> give me some advice? Help in troubleshooting this issue will be appreciated.
>
>
> Ideally your rescue image will support cloud-init and you would use a config
> disk.
>
> But to reiterate, ideally your rescue image would support cloud-init and you
> would use a config disk.
>
> Matt
> --
> Matthew Booth
> Red Hat OpenStack Engineer, Compute DFG
>

Just so you know, cloud-init does not read/support the admin_pass
injected in the config-drive:
https://bugs.launchpad.net/cloud-init/+bug/1236883

Known bug for years and no fix has been approved yet for various
non-technical reasons.

--
Mathieu
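On the metadata question: the config drive and the metadata service both expose instance metadata as a JSON document (openstack/latest/meta_data.json), and when password injection applies, the temporary password can appear there under an admin_pass key. A minimal parsing sketch follows; the sample document and its values are made up, whether admin_pass is actually present depends on the deployment, and per the bug above cloud-init will not consume it for you:

```python
import json

# Sample meta_data.json content; the uuid and password are invented, and
# the presence of "admin_pass" depends on how the instance was created
# or rescued. Inside a guest this document would come from the config
# drive or from http://169.254.169.254/openstack/latest/meta_data.json.
sample = '{"uuid": "b8c6c561-7a93-40a2-8d73-3783024865b4", "admin_pass": "Sekr3t"}'

meta = json.loads(sample)
print(meta.get("admin_pass"))
```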



[openstack-dev] [tripleo] Queens milestone 3 has been released!

2018-01-29 Thread Emilien Macchi
Queens milestone 3 has been tagged and stable/queens branch was created for
python-tripleoclient.

Some interesting numbers:
- 178 bugs fixed (171 in pike-3, 110 in ocata-3 and 76 in newton-3).
- 9 blueprints implemented (22 in pike-3, 11 in ocata-3 and 13 in newton-3)

If we count by release (only the 3 milestones, not the RC):

- Queens: 628 bugs fixed and 27 blueprints implemented
- Pike: 511 bugs fixed and 37 blueprints implemented
- Ocata: 282 bugs fixed and 14 blueprints implemented (remember the short
cycle)
- Newton: 129 bugs fixed and 16 blueprints implemented

Good work team!
And as usual, kudos to release managers for their eternal help :)
-- 
Emilien Macchi


Re: [openstack-dev] [blazar][release] release job configuration issues

2018-01-29 Thread Masahito MUROI

Thanks for the help.

I've already pushed patches for updating the release job of 
blazar-nova[1] and blazar-dashboard[2]. The two patches are under review 
now and added as Depends-On links.



1. https://review.openstack.org/#/c/538182/
2. https://review.openstack.org/#/c/538185/

best regards,
Masahito



On 2018/01/30 9:27, Doug Hellmann wrote:

Both blazar-dashboard and blazar-nova have configuration issues blocking
their release and the release team needs input from the blazar team to
resolve the problems.

The validation output for blazar-dashboard [2] shows that the repo is
being treated as a horizon plugin but it is configured to use the
release-openstack-server jobs. We think the correct way to resolve this
is to update project-config to use publish-to-pypi-horizon. However, if
horizon is not needed then project-config should be updated to use
publish-to-pypi and the release-type in [1] should be updated to
"python-pypi".

The validation output for blazar-nova shows a similar problem [4]. In
this case, we think the correct solution is to change project-config so
that the repo uses publish-to-pypi instead of release-openstack-server.

Please update those settings and update the release requests with
Depends-On links to the project-config patches so we can process the
releases.

Doug

[1] https://review.openstack.org/#/c/538175/
[2] 
http://logs.openstack.org/75/538175/3/check/openstack-tox-validate/7ed5005/tox/validate-request-results.log
[3] https://review.openstack.org/#/c/538139/
[4] 
http://logs.openstack.org/39/538139/5/check/openstack-tox-validate/05a7503/tox/validate-request-results.log







Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Michał Jastrzębski
Hey,

So I'm also for option 2. There was a big discussion in Atlanta about
"how hard it is to keep configs up to date and remove deprecated
options". merge_config makes it easier for us to handle this. With the
number of services we support, I don't think we have enough time to
keep tabs on every config change across OpenStack.
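A rough sketch of the idea behind merge_config (not kolla-ansible's actual implementation): operator-supplied INI fragments are layered over the templated defaults, with later values winning per key, so Kolla does not have to template every option itself:

```python
import configparser
import io


def merge_ini(base, override):
    """Layer override INI text over base INI text.

    configparser keeps earlier sections/keys and lets later reads
    override matching keys, which mimics merge_config's behaviour.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(base)
    cfg.read_string(override)
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()


# Templated default (what kolla-ansible renders) plus an operator fragment:
base = "[DEFAULT]\ndebug = False\nlog_dir = /var/log/kolla/nova\n"
override = "[DEFAULT]\ndebug = True\n"

merged = merge_ini(base, override)
print(merged)
```

The operator's `debug = True` wins while the untouched `log_dir` default survives, which is why the approach scales to options Kolla never templated.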

On 29 January 2018 at 08:03, Steven Dake (stdake)  wrote:
> Agree, the “why” of this policy is stated here:
>
> https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html
>
>
>
> Paul, I think your corrective actions sound good.  Perhaps we should also
> reword “essential” to some other word that is more lenient.
>
>
>
> Cheers
>
> -steve
>
>
>
> From: Jeffrey Zhang 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, January 29, 2018 at 7:14 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation
>
>
>
> Thanks Paul for pointing this out.
>
> For me, I prefer to stay consistent with 2).
>
> There are thousands of configuration options in OpenStack; it is hard for
> Kolla to add every key/value pair in playbooks. Currently, merge_config is
> the better solution.
>
> On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke  wrote:
>
> Hi all,
>
> I'd like to revisit our policy of not templating everything in
> kolla-ansible's template files. This is a policy that was set in place very
> early on in kolla-ansible's development, but I'm concerned we haven't been
> very consistent with it. This leads to confusion for contributors and
> operators - "should I template this and submit a patch, or do I need to
> start using my own config files?".
>
> The docs[0] are currently clear:
>
> "The Kolla upstream community does not want to place key/value pairs in the
> Ansible playbook configuration options that are not essential to obtaining a
> functional deployment."
>
> In practice though our templates contain many options that are not
> necessary, and plenty of patches have merged that while very useful to
> operators, are not necessary to an 'out of the box' deployment.
>
> So I'd like us to revisit the questions:
>
> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which
> caters to operators via key/value config options?
>
> 2) Or, is it to be a solid reference implementation, where any degree of
> customisation implies a clear 'bring your own configs' type policy.
>
> If 1), then we should potentially:
>
> * Update ours docs to remove the referenced paragraph
> * Look at reorganising files like globals.yml into something more
> maintainable.
>
> If 2),
>
> * We should make it clear to reviewers that patches templating options that
> are non essential should not be accepted.
> * Encourage patches to strip down existing config files to an absolute
> minimum.
> * Make this policy more clear in docs / templates to avoid frustration on
> the part of operators.
>
> Thoughts?
>
> Thanks,
> -Paul
>
> [0]
> https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization
>
>
>
>
>
>
> --
>
> Regards,
>
> Jeffrey Zhang
>
> Blog: http://xcodest.me
>
>
>



Re: [openstack-dev] [Release-job-failures][mistral][release][requirements] Pre-release of openstack/mistral-extra failed

2018-01-29 Thread Doug Hellmann
Excerpts from zuul's message of 2018-01-30 00:40:13 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/53/533a5ee424ebf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/
>  : FAILURE in 7m 58s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 

This release appears to have failed because tox.ini is set up to use the
old style of constraints list management and mistral-extra appears in
the constraints list.

I don't know why the tox environment is being used to build the package;
I thought we stopped doing that.

One solution is to fix the tox.ini to put the constraints specification
in the "deps" field. The patch [1] to oslo.config making a similar
change should show you what is needed.

Doug

[1] https://review.openstack.org/#/c/524496/1/tox.ini
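For illustration, the new-style layout puts the constraints file directly in deps (a sketch modeled on the oslo.config patch [1]; the environment variable name and fallback URL are assumptions, not copied from mistral-extra's tree):

```ini
[testenv]
usedevelop = True
deps =
  -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
  -r{toxinidir}/test-requirements.txt
```

With constraints in deps, pip resolves the package itself outside the constraints list, avoiding the conflict when the package being released appears in upper-constraints.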



Re: [openstack-dev] [Openstack-operators] [all][kolla][rdo] Collaboration with Kolla for the RDO test days

2018-01-29 Thread Michał Jastrzębski
Cool, thank you David, sign me up!:)

On 29 January 2018 at 05:30, David Moreau Simard  wrote:
> Hi !
>
> For those who might be unfamiliar with the RDO [1] community project:
> we hang out in #rdo, we don't bite and we build vanilla OpenStack
> packages.
>
> These packages are what allows you to leverage one of the deployment
> projects such as TripleO, PackStack or Kolla to deploy on CentOS or
> RHEL.
> The RDO community collaborates with these deployment projects by
> providing trunk and stable packages in order to let them develop and
> test against the latest and the greatest of OpenStack.
>
> RDO test days typically happen around a week after an upstream
> milestone has been reached [2].
> The purpose is to get everyone together in #rdo: developers, users,
> operators, maintainers -- and test not just RDO but OpenStack itself
> as installed by the different deployment projects.
>
> We tried something new at our last test day [3] and it worked out great.
> Instead of encouraging participants to install their own cloud for
> testing things, we supplied a cloud of our own... a bit like a limited
> duration TryStack [4].
> This lets users without the operational knowledge, time or hardware to
> install an OpenStack environment to see what's coming in the upcoming
> release of OpenStack and get the feedback loop going ahead of the
> release.
>
> We used Packstack for the last deployment and invited Packstack cores
> to deploy, operate and troubleshoot the installation for the duration
> of the test days.
> The idea is to rotate between the different deployment projects to
> give every interested project a chance to participate.
>
> Last week, we reached out to Kolla to see if they would be interested
> in participating in our next RDO test days [5] around February 8th.
> We supply the bare metal hardware and their core contributors get to
> deploy and operate a cloud with real users and developers poking
> around.
> All around, this is a great opportunity to get feedback for RDO, Kolla
> and OpenStack.
>
> We'll be advertising the event a bit more as the test days draw closer
> but until then, I thought it was worthwhile to share some context for
> this new thing we're doing.
>
> Let me know if you have any questions !
>
> Thanks,
>
> [1]: https://www.rdoproject.org/
> [2]: https://www.rdoproject.org/testday/
> [3]: 
> https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
> [4]: http://trystack.org/
> [5]: 
> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



[openstack-dev] [Release-job-failures] [mistral] Pre-release of openstack/mistral-extra failed

2018-01-29 Thread Sean McGinnis
The mistral-extra package is failing the pre-release check. The commit sha for
the queens-3 milestone is the same as it was for queens-2. This appears to be
the cause of the issue, as the constraints list already pins that last release.

Please take a look and let us know in #openstack-release if there is anything
we can do to help.

Sean

- Forwarded message from z...@openstack.org -

Date: Tue, 30 Jan 2018 00:40:13 +
From: z...@openstack.org
To: release-job-failu...@lists.openstack.org
Subject: [Release-job-failures] Pre-release of openstack/mistral-extra failed
Reply-To: openstack-dev@lists.openstack.org

Build failed.

- release-openstack-python 
http://logs.openstack.org/53/533a5ee424ebf6937f03d3b1d9d5b52e8ecb/pre-release/release-openstack-python/44f2fd4/
 : FAILURE in 7m 58s
- announce-release announce-release : SKIPPED
- propose-update-constraints propose-update-constraints : SKIPPED

___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

- End forwarded message -



Re: [openstack-dev] Race in FixedIP.associate_pool

2018-01-29 Thread Arun SAG
Hello,


On Tue, Dec 12, 2017 at 12:22 PM, Arun SAG  wrote:
> Hello,
>
> We are running nova-network in ocata. We use mysql in a master-slave
> configuration, The master is read/write, and all reads go to the slave
> (slave_connection is set). When we tried to boot multiple VMs in
> parallel (lets say 15), we see a race in allocate_for_instance's
> FixedIP.associate_pool. We see FixedIP.associate_pool associates an
> IP, but later in the code we try to read the allocated FixedIP using
> objects.FixedIPList.get_by_instance_uuid and it throws
> FixedIPNotFoundException. We also checked the slave replication status
> and Seconds_Behind_Master: 0
>
[snip]
>
> This kind of how the logs look like
> 2017-12-08 22:33:37,124 DEBUG
> [yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:894
> Fixed IP NOT found for instance
> 2017-12-08 22:33:37,125 DEBUG
> [yahoo.contrib.ocata_openstack_yahoo_plugins.nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py:get_instance_nw_info:965
> Built network info: |[]|
> 2017-12-08 22:33:37,126 INFO [nova.network.manager]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/nova/network/manager.py:allocate_for_instance:428
> Allocated network: '[]' for instance
> 2017-12-08 22:33:37,126 ERROR [oslo_messaging.rpc.server]
> /opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:_process_incoming:164
> Exception during message handling
> Traceback (most recent call last):
>   File 
> "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> line 155, in _process_incoming
> res = self.dispatcher.dispatch(message)
>   File 
> "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
> line 222, in dispatch
> return self._do_dispatch(endpoint, method, ctxt, args)
>   File 
> "/opt/openstack/venv/nova/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
> line 192, in _do_dispatch
> result = func(ctxt, **new_args)
>   File 
> "/opt/openstack/venv/nova/lib/python2.7/site-packages/yahoo/contrib/ocata_openstack_yahoo_plugins/nova/network/manager.py",
> line 347, in allocate_for_instance
> vif = nw_info[0]
> IndexError: list index out of range
>
>
> This problem goes way when we get rid of the slave_connection setting
> and just use single master. Has any one else seen this? Any
> recommendation to fix this issue?
>
> This issue is kind of  similar to https://bugs.launchpad.net/nova/+bug/1249065
>

If anyone is running into db race while running database in
master-slave mode with async replication, The bug has been identified
and getting fixed  here
https://bugs.launchpad.net/oslo.db/+bug/1746116

-- 
Arun S A G
http://zer0c00l.in/
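The race described above can be illustrated with a toy model of asynchronous replication (this is not nova's or oslo.db's actual code): the write lands on the master, but the immediately following read goes to a replica that has not yet applied it, which is exactly the FixedIPNotFound symptom even with Seconds_Behind_Master reporting 0:

```python
class Master:
    """Read/write primary: writes are applied immediately."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value


class AsyncReplica:
    """Read-only replica that only sees writes after a replication step."""
    def __init__(self, master):
        self.master = master
        self.rows = {}

    def catch_up(self):
        # Replication applies the master's state some time after the write.
        self.rows = dict(self.master.rows)

    def read(self, key):
        return self.rows.get(key)


master = Master()
replica = AsyncReplica(master)

# FixedIP.associate_pool writes through the master connection...
master.write('instance-1', '10.0.0.5')

# ...but FixedIPList.get_by_instance_uuid reads the lagging replica:
print(replica.read('instance-1'))   # None here, i.e. FixedIPNotFound

replica.catch_up()
print(replica.read('instance-1'))   # the fixed IP is now visible
```

Routing reads that immediately follow their own writes to the master (or dropping slave_connection, as the poster did) removes the window.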



[openstack-dev] [blazar][release] release job configuration issues

2018-01-29 Thread Doug Hellmann
Both blazar-dashboard and blazar-nova have configuration issues blocking
their release and the release team needs input from the blazar team to
resolve the problems.

The validation output for blazar-dashboard [2] shows that the repo is
being treated as a horizon plugin but it is configured to use the
release-openstack-server jobs. We think the correct way to resolve this
is to update project-config to use publish-to-pypi-horizon. However, if
horizon is not needed then project-config should be updated to use
publish-to-pypi and the release-type in [1] should be updated to
"python-pypi".

The validation output for blazar-nova shows a similar problem [4]. In
this case, we think the correct solution is to change project-config so
that the repo uses publish-to-pypi instead of release-openstack-server.

Please update those settings and update the release requests with
Depends-On links to the project-config patches so we can process the
releases.

Doug

[1] https://review.openstack.org/#/c/538175/
[2] 
http://logs.openstack.org/75/538175/3/check/openstack-tox-validate/7ed5005/tox/validate-request-results.log
[3] https://review.openstack.org/#/c/538139/
[4] 
http://logs.openstack.org/39/538139/5/check/openstack-tox-validate/05a7503/tox/validate-request-results.log



[openstack-dev] [release][searchlight] problem with release job configurations

2018-01-29 Thread Doug Hellmann
The searchlight-ui repository has a configuration issue that the release team
cannot fix by ourselves. We need input from the searchlight team about
how to resolve it.

As you'll see from [2] the release validation logic is categorizing
searchlight-ui as a horizon-plugin. It is then rejecting the release
request [1] because, according to the settings in project-config,
the repository is configured to use publish-to-pypi instead of the
expected publish-to-pypi-horizon.

The difference between the two jobs is the latter installs horizon
before trying to build the package. Many horizon plugins apparently
needed this. We don't know if searchlight does.

There are 2 possible ways to fix the issue:

1. Set release-type to "python-pypi" in [1] to tell the validation code
   that publish-to-pypi is the expected job.
2. Change the release job for the repository in project-config.

Please let us know which fix is correct by either updating [1] with the
release-type or a Depends-On link to the change in project-config to use
the correct release job.

Doug


[1] https://review.openstack.org/#/c/538321/
[2] 
http://logs.openstack.org/21/538321/1/check/openstack-tox-validate/3afbe28/tox/validate-request-results.log
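For option 1, the fix would look roughly like the following fragment of the deliverable file [1] (a sketch only; the surrounding fields and their values are placeholders, not copied from the actual review):

```yaml
---
launchpad: searchlight
release-type: python-pypi
releases:
  - version: <next version>
    projects:
      - repo: openstack/searchlight-ui
        hash: <commit sha>
```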



Re: [openstack-dev] [ALL][requirements] Prepare for a bitter harvest, winter has come at last!

2018-01-29 Thread Matthew Thode
On 18-01-29 14:44:20, Matthew Thode wrote:
> On 18-01-28 20:47:42, Matthew Thode wrote:
> > On 18-01-27 21:37:53, Matthew Thode wrote:
> > > On 18-01-26 23:05:11, Matthew Thode wrote:
> > > > On 18-01-26 00:12:38, Matthew Thode wrote:
> > > > > On 18-01-24 22:32:27, Matthew Thode wrote:
> > > > > > On 18-01-24 01:29:47, Matthew Thode wrote:
> > > > > > > On 18-01-23 01:23:50, Matthew Thode wrote:
> > > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last
> > > > > > > > global-requirements updates that need to get in need to get in 
> > > > > > > > now.
> > > > > > > > 
> > > > > > > > I'm afraid that my condition has left me cold to your pleas of 
> > > > > > > > mercy.
> > > > > > > > 
> > > > > > > 
> > > > > > > Just your daily reminder that the freeze will happen in about 3 
> > > > > > > days
> > > > > > > time.  Reviews seem to be winding down for requirements now 
> > > > > > > (which is
> > > > > > > a good sign this release will be chilled to perfection).
> > > > > > > 
> > > > > > 
> > > > > > There's still a couple of things that may cause bumps for iso8601 
> > > > > > and
> > > > > > oslo.versionedobjects but those are the main things.  The msgpack 
> > > > > > change
> > > > > > is also rolling out (thanks dirk :D).  Even with all these changes
> > > > > > though, in this universe, there's only one absolute. Everything 
> > > > > > freezes!
> > > > > > 
> > > > > > https://review.openstack.org/535520 (oslo.serialization)
> > > > > > 
> > > > > 
> > > > > Last day, gate is sad and behind, but not my fault you waited til the
> > > > > last minute :P  (see my first comment).  The Iceman Cometh!
> > > > > 
> > > > 
> > > > All right everyone, Chill.  Looks like we have another couple days to
> > > > get stuff in for gate's slowness.  The new deadline is 23:59:59 UTC
> > > > 29-01-2018.
> > > > 
> > > 
> > > It's a cold town.  The current status is as follows.  It looks like the
> > > gate is clearing up.  oslo.versionedobjects-1.31.2 and iso8601 will be
> > > in a gr bump but that's it.  monasca-tempest-plugin is not going to get
> > > in by freeze at this rate (has fixes needed in the review).  There was
> > > some stuff needed to get nova-client/osc to work together again, but
> > > mriedem seems to have it in hand (and no gr updates it looks like).
> > > 
> > 
> > Allow me to break the Ice. My name is Freeze. Learn it well for it's
> > the chilling sound of your doom!  Can you feel it coming? The icy cold
of space!  It's less than 24 hours til the freeze formally happens, the
> > only outstanding item is that oslo.versionedobjects seems to need
> > another fix for the iso8601 bump.  osc-placement won't be added to
requirements at this point as there has been no response on their
> > review.
> > 
> > https://review.openstack.org/538515
> > 
> > python-vitrageclient looks like it'll make it in if gate doesn't break.
> > msgpack may also be late, but we'll see (just workflow'd).
> > openstacksdk may need a gr bump, I'm waiting on a response from mordred
> > 
> > https://review.openstack.org/538695
> > 
> 
> Tonight Hell freezes over!
> 
> At just about 3 hours til your frozen doom I thought I'd send a final
> update.  Since gate is still being slow the current plan is to stop
> accepting any new reviews to requirements (procedural -W) at the cutoff
> time.  At that point we'll work on getting the existing approved items
> through gate, then work on branching.
> 

requirements is now frozen, any review after 538994 will require a FFE.

-- 
Matthew Thode (prometheanfire)




[openstack-dev] [All] [Elections] Rocky PTL Nominations Are Now Open

2018-01-29 Thread Kendall Nelson
 Hello All!

Nominations for OpenStack PTLs (Project Team Leads) are now open and will
remain open until Feb 07, 2018 23:45 UTC.

All nominations must be submitted as a text file to the
openstack/election repository as explained at
http://governance.openstack.org/election/#how-to-submit-your-candidacy

Please make sure to follow the new candidacy file naming
convention: $cycle_name/$project_name/$ircname.txt.

In order to be an eligible candidate (and be allowed to vote) in
a given PTL election, you need to have contributed an accepted
patch to one of the corresponding project teams[0] during
the Pike-Queens timeframe (22 Feb 2017 to 29 Jan 2018).

Additional information about the nomination process can be found here:
https://governance.openstack.org/election/

Shortly after election officials approve candidates, they will be listed
here:
https://governance.openstack.org/election/#Rocky-ptl-candidates

The electorate is requested to confirm their email address in gerrit[1],
prior to 1 Feb 0:00 UTC so that the emailed ballots are mailed to the
correct email address. This email address should match that which was
provided in your foundation member profile[2] as well.

Happy running,

Kendall Nelson (diablo_rojo)

[0] https://governance.openstack.org/tc/reference/projects/
[1] https://review.openstack.org/#/settings/contact
[2] https://www.openstack.org/profile/


Re: [openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Paul Grist
On Mon, Jan 29, 2018 at 2:18 PM, Brian Rosmaita 
wrote:

> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
>
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
>
> cheers,
> brian
>

Many thanks for all the work you've done for glance and the community. Your
leadership and commitment was remarkable at the most challenging of times
this past year.  Glad to hear you are staying with Glance!

Paul



Re: [openstack-dev] [sahara] PTL Nomination

2018-01-29 Thread Jeremy Freudberg
Thanks for volunteering again, Telles. The project is in good hands
under your leadership.

On Mon, Jan 29, 2018 at 2:45 PM, Telles Nobrega  wrote:
> Hi Saharans, I would like to nominate myself to act as PTL for Sahara during
> the Rocky cycle.
>
> I've been acting as PTL for the last two cycles (Pike and Queens) and I
> believe that even though we lost a lot of resources we were able to improve
> Sahara considerably in the last year.
>
> Moving forward I aim to continue working on the direction of making Sahara
> more user oriented.
>
> * Bug triaging:
>
> We need to start testing and cleaning the bug list and sadly this queue did
> not decrease significantly and we need to keep working on it.
>
> * Documentation:
>
> We already had improvements this last cycle but we need to keep going and
> for that we are already planning a documentation day pre-PTG and during PTG.
>
> * Final APIv2 work
>
> We need to finally release APIv2 in Rocky. We released APIv2 as experimental
> in Queens and will work to have it as main API in Rocky.
>
> In the overall picture we need to continue improving user experience and
> asking what is necessary to make Sahara more usable so we can have Sahara in
> more and more OpenStack deployments.

+1, I could not have said it better myself. This is an admirable focus
to have for the coming cycle.

> --
>
> TELLES NOBREGA
>
> SOFTWARE ENGINEER
>
> Red Hat Brasil
>
> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
>
> tenob...@redhat.com
>
> TRIED. TESTED. TRUSTED.
> Red Hat is recognized among the best companies to work for in Brazil
> by Great Place to Work.
>



Re: [openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Doug Hellmann
Excerpts from Brian Rosmaita's message of 2018-01-29 14:18:18 -0500:
> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
> 
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
> 
> cheers,
> brian
> 

Thank you for carrying the mantle for so long, Brian. I know it
hasn't necessarily been easy but you've dealt with the challenges
well and helped the team move to a healthier state than it was in
when you started in the role.

Doug



Re: [openstack-dev] [ALL][requirements] Tonight Hell freezes over!

2018-01-29 Thread Matthew Thode
On 18-01-28 20:47:42, Matthew Thode wrote:
> On 18-01-27 21:37:53, Matthew Thode wrote:
> > On 18-01-26 23:05:11, Matthew Thode wrote:
> > > On 18-01-26 00:12:38, Matthew Thode wrote:
> > > > On 18-01-24 22:32:27, Matthew Thode wrote:
> > > > > On 18-01-24 01:29:47, Matthew Thode wrote:
> > > > > > On 18-01-23 01:23:50, Matthew Thode wrote:
> > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last
> > > > > > > global-requirements updates that need to get in need to get in now.
> > > > > > > 
> > > > > > > I'm afraid that my condition has left me cold to your pleas of 
> > > > > > > mercy.
> > > > > > > 
> > > > > > 
> > > > > > Just your daily reminder that the freeze will happen in about 3 days
> > > > > > time.  Reviews seem to be winding down for requirements now (which 
> > > > > > is
> > > > > > a good sign this release will be chilled to perfection).
> > > > > > 
> > > > > 
> > > > > There's still a couple of things that may cause bumps for iso8601 and
> > > > > oslo.versionedobjects but those are the main things.  The msgpack 
> > > > > change
> > > > > is also rolling out (thanks dirk :D).  Even with all these changes
> > > > > though, in this universe, there's only one absolute. Everything 
> > > > > freezes!
> > > > > 
> > > > > https://review.openstack.org/535520 (oslo.serialization)
> > > > > 
> > > > 
> > > > Last day, gate is sad and behind, but not my fault you waited til the
> > > > last minute :P  (see my first comment).  The Iceman Cometh!
> > > > 
> > > 
> > > All right everyone, Chill.  Looks like we have another couple days to
> > > get stuff in for gate's slowness.  The new deadline is 23:59:59 UTC
> > > 29-01-2018.
> > > 
> > 
> > It's a cold town.  The current status is as follows.  It looks like the
> > gate is clearing up.  oslo.versionedobjects-1.31.2 and iso8601 will be
> > in a gr bump but that's it.  monasca-tempest-plugin is not going to get
> > in by freeze at this rate (has fixes needed in the review).  There was
> > some stuff needed to get nova-client/osc to work together again, but
> > mriedem seems to have it in hand (and no gr updates it looks like).
> > 
> 
> Allow me to break the Ice. My name is Freeze. Learn it well for it's
> the chilling sound of your doom!  Can you feel it coming? The icy cold
> of space!  It's less than 24 hours til the freeze formally happens, the
> only outstanding item is that oslo.versionedobjects seems to need
> another fix for the iso8601 bump.  osc-placement won't be added to
> requirements at this point as there has been no response on their
> review.
> 
> https://review.openstack.org/538515
> 
> python-vitrageclient looks like it'll make it in if gate doesn't break.
> msgpack may also be late, but we'll see (just workflow'd).
> openstacksdk may need a gr bump, I'm waiting on a response from mordred
> 
> https://review.openstack.org/538695
> 

Tonight Hell freezes over!

At just about 3 hours til your frozen doom I thought I'd send a final
update.  Since gate is still being slow the current plan is to stop
accepting any new reviews to requirements (procedural -W) at the cutoff
time.  At that point we'll work on getting the existing approved items
through gate, then work on branching.


-- 
Matthew Thode (prometheanfire)




[openstack-dev] [horizon] FFE Request for Queens

2018-01-29 Thread Lajos Katona

Hi,

I would like to ask for FFE on the neutron-trunk-ui blueprint to let the 
admin panel for trunks be accepted for Queens.


Based on discussion on IRC 
(http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2018-01-29.log.html#t2018-01-29T14:36:58 
) the remaining part of the blueprint neutron-trunk-ui 
(https://blueprints.launchpad.net/horizon/+spec/neutron-trunk-ui) should 
be handled separately:


 * The admin panel (https://review.openstack.org/516657) should be part
   of the Queens release, as now that is not dependent on the ngDetails
   patches. With this the blueprint should be set to implemented.
 * The links (https://review.openstack.org/524619) for the ports
   details (trunk parent and subports) from the trunk panel should be
   handled in a bug report:
 o https://bugs.launchpad.net/horizon/+bug/1746082

Regards
Lajos Katona


[openstack-dev] [sahara] PTL Nomination

2018-01-29 Thread Telles Nobrega
Hi Saharans, I would like to nominate myself to act as PTL for Sahara
during the Rocky cycle.

I've been acting as PTL for the last two cycles (Pike and Queens) and I
believe that even though we lost a lot of resources we were able to improve
Sahara considerably in the last year.

Moving forward I aim to continue working on the direction of making Sahara
more user oriented.

* Bug triaging:

We need to start testing and cleaning the bug list and sadly this queue did
not decrease significantly and we need to keep working on it.

* Documentation:

We already had improvements this last cycle but we need to keep going and
for that we are already planning a documentation day pre-PTG and during PTG.

* Final APIv2 work

We need to finally release APIv2 in Rocky. We released APIv2 as
experimental in Queens and will work to have it as main API in Rocky.

In the overall picture we need to continue improving user experience and
asking what is necessary to make Sahara more usable so we can have Sahara
in more and more OpenStack deployments.
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil  

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
 Red Hat is recognized among the best companies to work for in Brazil
by Great Place to Work.


Re: [openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Carter, Kevin
++ Thanks for your leadership within Glance and everything else you've done
in the community!


--

Kevin Carter
IRC: Cloudnull

On Mon, Jan 29, 2018 at 1:27 PM, Sean McGinnis 
wrote:

> On Mon, Jan 29, 2018 at 02:18:18PM -0500, Brian Rosmaita wrote:
> > I've been PTL of Glance through some rocky times, but have decided not
> > to stand for election for the Rocky cycle.  My plan is to stick
> > around, attend to my duties as a glance core contributor, and support
> > my successor in whatever way I can to make for a smooth transition.
> > After three consecutive cycles of me, it's time for some new ideas and
> > new approaches.
> >
> > For anyone out there who hasn't contributed to Glance yet, the Glance
> > community is friendly and welcoming, and we've got a backlog of
> > "untargeted" specs ready for you to pick up.  Weekly meetings are
> > 14:00 UTC on Thursdays in #openstack-meeting-4.
> >
> > cheers,
> > brian
> >
>
> Thanks for all your hard work as Glance PTL Brian. Great to hear you are
> not
> going anywhere.
>


Re: [openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Abhishek Kekane
Thanks so much for your remarkable work on Glance over the last couple of
cycles. You introduced very good processes, like the weekly priorities,
which helped community members stay focused on what matters.  It's been my
pleasure to work with you, and I still have a lot to learn from you. Wish you
all the best, Brian!

Cheers,

Abhishek

On 30-Jan-2018 00:49, "Brian Rosmaita"  wrote:

> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
>
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
>
> cheers,
> brian
>


Re: [openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Sean McGinnis
On Mon, Jan 29, 2018 at 02:18:18PM -0500, Brian Rosmaita wrote:
> I've been PTL of Glance through some rocky times, but have decided not
> to stand for election for the Rocky cycle.  My plan is to stick
> around, attend to my duties as a glance core contributor, and support
> my successor in whatever way I can to make for a smooth transition.
> After three consecutive cycles of me, it's time for some new ideas and
> new approaches.
> 
> For anyone out there who hasn't contributed to Glance yet, the Glance
> community is friendly and welcoming, and we've got a backlog of
> "untargeted" specs ready for you to pick up.  Weekly meetings are
> 14:00 UTC on Thursdays in #openstack-meeting-4.
> 
> cheers,
> brian
> 

Thanks for all your hard work as Glance PTL Brian. Great to hear you are not
going anywhere.




[openstack-dev] [glance] PTL non-candidacy

2018-01-29 Thread Brian Rosmaita
I've been PTL of Glance through some rocky times, but have decided not
to stand for election for the Rocky cycle.  My plan is to stick
around, attend to my duties as a glance core contributor, and support
my successor in whatever way I can to make for a smooth transition.
After three consecutive cycles of me, it's time for some new ideas and
new approaches.

For anyone out there who hasn't contributed to Glance yet, the Glance
community is friendly and welcoming, and we've got a backlog of
"untargeted" specs ready for you to pick up.  Weekly meetings are
14:00 UTC on Thursdays in #openstack-meeting-4.

cheers,
brian



Re: [openstack-dev] [nova]Nova rescue inject pasword failed

2018-01-29 Thread Mathieu Gagné
On Mon, Jan 29, 2018 at 4:57 AM, Matthew Booth  wrote:
> On 29 January 2018 at 09:27, 李杰  wrote:
>>
>>  Hi all:
>>   I want to access my instance in the rescue state using the
>> temporary password which nova rescue gave me, but this password doesn't work.
>> Can I ask how this password is injected into the instance? I can't find any
>> specification of how it is done. I looked at the rescue code, and it indicates
>> the password has been injected.
>>   I use libvirt as the virt driver. The docs said to
>> set "[libvirt]inject_password=true", but it didn't work. Is it a bug? Can you
>> give me some advice? Help in troubleshooting this issue will be appreciated.
>
>
> Ideally your rescue image will support cloud-init and you would use a config
> disk.
>
> But to reiterate, ideally your rescue image would support cloud-init and you
> would use a config disk.
>
> Matt
> --
> Matthew Booth
> Red Hat OpenStack Engineer, Compute DFG
>

Just so you know, cloud-init does not read/support the admin_pass
injected in the config-drive:
https://bugs.launchpad.net/cloud-init/+bug/1236883

Known bug for years and no fix has been approved yet for various
non-technical reasons.
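To illustrate what cloud-init would have to consume for this to work: nova places the password under the "admin_pass" key of the config drive's meta_data.json, which cloud-init parses but currently ignores (per the bug above). A minimal sketch, with an invented payload:

```python
import json

# Illustrative payload only; the real file lives at
# openstack/latest/meta_data.json on the config drive.
sample_meta_data = json.dumps({
    "uuid": "b8c6c561-7a93-40a2-8d73-3783024865b4",
    "admin_pass": "temporary-rescue-password",
})

def extract_admin_pass(meta_data_json):
    """Return the injected admin password, or None if absent.

    Returning None is effectively what cloud-init does today: it parses
    meta_data.json but ignores this key (see LP#1236883).
    """
    return json.loads(meta_data_json).get("admin_pass")

print(extract_admin_pass(sample_meta_data))  # temporary-rescue-password
```

So even with inject_password=true and a config drive, the guest-side consumer never applies the password unless the image carries something other than stock cloud-init to read this key.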

--
Mathieu



[openstack-dev] [os-upstream-institute] Meeting reminder

2018-01-29 Thread Ildiko Vancsa
Hi Training Team,

Friendly reminder that we have our next meeting in an hour (2000 UTC) on 
#openstack-meeting-3.

You can find the agenda here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings

See you soon! :)

Thanks,
Ildikó
(IRC: ildikov)


Re: [openstack-dev] [nova][placement] Re: VMWare's resource pool / cluster and nested resource providers

2018-01-29 Thread Giridhar Jayavelu
Eric,
Response inline.




On 1/29/18, 10:27 AM, "Eric Fried"  wrote:

>We had some lively discussion in #openstack-nova today, which I'll try
>to summarize here.
>
>First of all, the hierarchy:
>
>                controller (n-cond)
>               /                   \
>       cluster/n-cpu          cluster/n-cpu
>        /         \             /       \
>   res. pool   res. pool      ...       ...
>    /     \
>  host    host
>  /  \     /  \
> ... ...  inst inst
>
>Important points:
>
>(1) Instances do indeed get deployed to individual hosts, BUT vCenter
>can and does move them around within a cluster independent of nova-isms
>like live migration.
>
>(2) VMWare wants the ability to specify that an instance should be
>deployed to a specific resource pool.
>
>(3) VMWare accounts for resources at the level of the resource pool (not
>host).
>
>(4) Hosts can move fluidly among resource pools.
>
>(5) Conceptually, VMWare would like you not to see or think about the
>'host' layer at all.
>
>(6) It has been suggested that resource pools may be best represented
>via aggregates.  But to satisfy (2), this would require support for
>doing allocation requests that specify one (e.g. porting the GET
>/resource_providers ?member_of= queryparam to GET
>/allocation_candidates, and the corresponding flavor enhancements).  And
>doing so would mean getting past our reluctance up to this point of
>exposing aggregates by name/ID to users.
>
>Here are some possible models:
>
>(A) Today's model, where the cluster/n-cpu is represented as a single
>provider owning all resources.  This requires some creative finagling of
>inventory fields to ensure that a resource request might actually be
>satisfied by a single host under this broad umbrella.  (An example cited
>was to set VCPU's max_unit to whatever one host could provide.)  It is
>not clear to me if/how resource pools have been represented in this
>model thus far, or if/how it is currently possible to (2) target an
>instance to a specific one.  I also don't see how anything we've done
>with traits or aggregates would help with that aspect in this model.
>
>(B) Representing each host as a root provider, each owning its own
>actual inventory, each possessing a CUSTOM_RESOURCE_POOL_X trait
>indicating which pool it belongs to at the moment; or representing pools
>via aggregates as in (6).  This model breaks because of (1), unless we
>give virt drivers some mechanism to modify allocations (e.g. via POST
>/allocations) without doing an actual migration.
>
>(C) Representing each resource pool as a root provider which presents
>the collective inventory of all its hosts.  Each could possess its own
>unique CUSTOM_RESOURCE_POOL_X trait.  Or we could possibly adapt
>whatever mechanism Ironic uses when it targets a particular baremetal
>node.  Or we could use aggregates as in (6), where each aggregate is
>associated with just one provider.  This one breaks down because we
>don't currently have a way for nova to know that, when an instance's
>resources were allocated from the provider corresponding to resource
>pool X, that means we should schedule the instance to (nova, n-cpu) host
>Y.  There may be some clever solution for this involving aggregates (NOT
>sharing providers!), but it has not been thought through.  It also
>entails the same "creative finagling of inventory" described in (A).
>
>(D) Using actual nested resource providers: the "cluster" is the
>(inventory-less) root provider, and each resource pool is a child of the
>cluster.  This is closest to representing the real logical hierarchy,
>and is desirable for that reason.  The drawback is that you then MUST
>use some mechanism to ensure allocations are never spread across pools.
>If your request *always* targets a specific resource pool, that works.
>Otherwise, you would have to use a numbered request group, as described
>below.  It also entails the same "creative finagling of inventory"
>described in (A).
I think the nested resource provider model is the better option for another
reason: every resource pool can have its own limits. So it is important to
track allocations/usage and ensure the scheduler can raise an error if there
are insufficient resources in the vCenter resource pool. NOTE: the vCenter
cluster, which is the compute node, might have more capacity left, but the
resource pool limit could still prevent placing a VM on that pool. And yes,
the request would always target a specific resource pool.
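A toy sketch of the pool-limit behavior described above: a request can fail against the pool's cap even though the parent cluster still has free capacity. All names and numbers here are invented for illustration.

```python
def fits(used, limit, request):
    """Does the request fit within the remaining capacity of one scope?"""
    return all(used.get(rc, 0) + amount <= limit.get(rc, 0)
               for rc, amount in request.items())

cluster_used  = {"VCPU": 10, "MEMORY_MB": 40960}
cluster_limit = {"VCPU": 64, "MEMORY_MB": 262144}
pool_used     = {"VCPU": 7,  "MEMORY_MB": 12288}
pool_limit    = {"VCPU": 8,  "MEMORY_MB": 16384}   # resource pool's own cap

request = {"VCPU": 2, "MEMORY_MB": 2048}
print(fits(cluster_used, cluster_limit, request))  # True: cluster has room
print(fits(pool_used, pool_limit, request))        # False: pool cap blocks it
```

With nested providers, placement would perform the second check natively by allocating against the pool's inventory rather than the cluster's.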

>
>(E) Take (D) a step further by adding each 'host' as a child of its
>respective resource pool.  No "creative finagling", but same "moving
>allocations" issue as (B).

This might not work because a resource pool is a logical construct; resource
pools may not exist under a vCenter cluster at all. VMs can be placed on a
vCenter cluster with or without a resource pool.



>
>I'm sure I've missed/misrepresented things.  Please correct and refine
>as necessary.
>
>Thanks,
>Eric

Thanks,
Giri


>
>On 01/27/2018 12:23 PM, Eric Fried wrote:
>> Rado-

Re: [openstack-dev] [nova][placement] Re: VMWare's resource pool / cluster and nested resource providers

2018-01-29 Thread Eric Fried
We had some lively discussion in #openstack-nova today, which I'll try
to summarize here.

First of all, the hierarchy:

               controller (n-cond)
              /                   \
      cluster/n-cpu          cluster/n-cpu
       /         \             /       \
  res. pool   res. pool      ...       ...
   /     \
 host    host
 /  \     /  \
... ...  inst inst

Important points:

(1) Instances do indeed get deployed to individual hosts, BUT vCenter
can and does move them around within a cluster independent of nova-isms
like live migration.

(2) VMWare wants the ability to specify that an instance should be
deployed to a specific resource pool.

(3) VMWare accounts for resources at the level of the resource pool (not
host).

(4) Hosts can move fluidly among resource pools.

(5) Conceptually, VMWare would like you not to see or think about the
'host' layer at all.

(6) It has been suggested that resource pools may be best represented
via aggregates.  But to satisfy (2), this would require support for
doing allocation requests that specify one (e.g. porting the GET
/resource_providers ?member_of= queryparam to GET
/allocation_candidates, and the corresponding flavor enhancements).  And
doing so would mean getting past our reluctance up to this point of
exposing aggregates by name/ID to users.

Here are some possible models:

(A) Today's model, where the cluster/n-cpu is represented as a single
provider owning all resources.  This requires some creative finagling of
inventory fields to ensure that a resource request might actually be
satisfied by a single host under this broad umbrella.  (An example cited
was to set VCPU's max_unit to whatever one host could provide.)  It is
not clear to me if/how resource pools have been represented in this
model thus far, or if/how it is currently possible to (2) target an
instance to a specific one.  I also don't see how anything we've done
with traits or aggregates would help with that aspect in this model.

(B) Representing each host as a root provider, each owning its own
actual inventory, each possessing a CUSTOM_RESOURCE_POOL_X trait
indicating which pool it belongs to at the moment; or representing pools
via aggregates as in (6).  This model breaks because of (1), unless we
give virt drivers some mechanism to modify allocations (e.g. via POST
/allocations) without doing an actual migration.

(C) Representing each resource pool as a root provider which presents
the collective inventory of all its hosts.  Each could possess its own
unique CUSTOM_RESOURCE_POOL_X trait.  Or we could possibly adapt
whatever mechanism Ironic uses when it targets a particular baremetal
node.  Or we could use aggregates as in (6), where each aggregate is
associated with just one provider.  This one breaks down because we
don't currently have a way for nova to know that, when an instance's
resources were allocated from the provider corresponding to resource
pool X, that means we should schedule the instance to (nova, n-cpu) host
Y.  There may be some clever solution for this involving aggregates (NOT
sharing providers!), but it has not been thought through.  It also
entails the same "creative finagling of inventory" described in (A).

(D) Using actual nested resource providers: the "cluster" is the
(inventory-less) root provider, and each resource pool is a child of the
cluster.  This is closest to representing the real logical hierarchy,
and is desirable for that reason.  The drawback is that you then MUST
use some mechanism to ensure allocations are never spread across pools.
If your request *always* targets a specific resource pool, that works.
Otherwise, you would have to use a numbered request group, as described
below.  It also entails the same "creative finagling of inventory"
described in (A).

(E) Take (D) a step further by adding each 'host' as a child of its
respective resource pool.  No "creative finagling", but same "moving
allocations" issue as (B).
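To make the port suggested in point (6) concrete, here is a rough Python sketch of the request shape it could produce. Note the member_of parameter on GET /allocation_candidates did not exist at the time of this thread, so the URL layout, endpoint, and UUID below are all assumptions:

```python
from urllib.parse import urlencode

def allocation_candidates_url(base, resources, pool_aggregate_uuid):
    """Build a GET /allocation_candidates URL restricted to providers that
    are members of the aggregate standing in for a resource pool."""
    query = urlencode({
        "resources": ",".join(
            "%s:%d" % (rc, amount) for rc, amount in sorted(resources.items())
        ),
        "member_of": pool_aggregate_uuid,   # the proposed queryparam
    })
    return "%s/allocation_candidates?%s" % (base, query)

url = allocation_candidates_url(
    "http://placement",                       # assumed endpoint
    {"VCPU": 2, "MEMORY_MB": 4096},
    "11111111-2222-3333-4444-555555555555",   # invented aggregate UUID
)
print(url)
```

The flavor-side enhancement mentioned in (6) would then be responsible for mapping "deploy to resource pool X" onto that aggregate UUID.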

I'm sure I've missed/misrepresented things.  Please correct and refine
as necessary.

Thanks,
Eric

On 01/27/2018 12:23 PM, Eric Fried wrote:
> Rado-
> 
>     [+dev ML.  We're getting pretty general here; maybe others will get
> some use out of this.]
> 
>> is there a way to make the scheduler allocate only from one specific RP
> 
>     "...one specific RP" - is that Resource Provider or Resource Pool?
> 
>     And are we talking about scheduling an instance to a specific
> compute node, or are we talking about making sure that all the requested
> resources are pulled from the same compute node (but it could be any one
> of several compute nodes)?  Or justlimiting the scheduler to any node in
> a specific resource pool?
> 
>     To make sure I'm fully grasping the VMWare-specific
> ratios/relationships between resource pools and compute nodes, I have
> been assuming:
> 
> controller 1:many compute "host" (where n-cpu runs)
> compute "host"  1:many resource pool
> resource pool 1:many compute "n

[openstack-dev] [ironic] this week's priorities and subteam reports

2018-01-29 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

Bugs that we want to land in this release:
1. ironic - Don't try to lock upfront for vif removal: 
https://review.openstack.org/#/c/534441/

FFEs that have been granted, need to land by Feb 2:
1. Classic drivers deprecation:
- champions: rloo, stendulker
- 
https://review.openstack.org/#/q/topic:bug/1690185+(status:open+OR+status:merged)
1.1. Deprecate classic drivers: https://review.openstack.org/#/c/536928/
1.2. Switch contributor documentation to hardware types: 
https://review.openstack.org/#/c/537959/
1.3. Switch the CI to hardware types: 
https://review.openstack.org/#/c/536875/
2. Routed Networks support
- champions: TheJulia, sambetts
- https://review.openstack.org/#/q/project:openstack/networking-baremetal
- https://review.openstack.org/521838 Switch from MechanismDriver to 
SimpleAgentMechanismDriverBase. **
- https://review.openstack.org/#/c/536792/ Use reporting_interval option 
from neutron
- https://review.openstack.org/#/c/536040/ Flat networks use node.uuid when 
binding ports. **
- https://review.openstack.org/#/c/537353 Add documentation for baremetal 
mech **
- https://review.openstack.org/#/c/532349/7 Add support to bind type 
vlan networks
- https://review.openstack.org/524709 Make the agent distributed using 
hashring and notifications
- CI patches:
- https://review.openstack.org/#/c/531275/ Devstack - use neutron 
segments (routed provider networks)
- https://review.openstack.org/#/c/531637/ Wait for 
ironic-neutron-agent to report state
- https://review.openstack.org/#/c/530117/ Devstack - Add 
ironic-neutron-agent
- https://review.openstack.org/#/c/530409/ Add dsvm job

3. Traits:
- champions: rloo, TheJulia
- 
https://review.openstack.org/#/q/topic:bug/1722194+(status:open+OR+status:merged)
3.1. Add traits field to node notifications: 
https://review.openstack.org/#/c/536979/
3.2. Fix nits found in node traits: https://review.openstack.org/#/c/537386/
3.3. Add documentation for node traits: 
https://review.openstack.org/#/c/536980/
3.4. Sort node traits in comparisons: 
https://review.openstack.org/#/c/538653/
4. Rescue
4.1. Requires quick review for devstack changes. We cannot land devstack 
changes as the client calls did not land in Queens.
4.2. TheJulia to do so after the Monday meeting.
- champions: dtantsur, TheJulia
- 
https://review.openstack.org/#/q/topic:bug/1526449+(status:open+OR+status:merged)
4.1. devstack: add support for rescue mode: 
https://review.openstack.org/#/c/524118/
- rest of test patches can't land since they depend on a nova-related 
patch
4.2. Update "standalone" job for supporting rescue mode: 
https://review.openstack.org/#/c/537821/
4.3. Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ 
(failing CI, not ready for reviews)
4.4. Follow-up for agent rescue implementation: 
https://review.openstack.org/#/c/538252/
4.5. Add documentation for rescue interface: 
https://review.openstack.org/#/c/419606/ (needs update)
4.6. Follow-up patch for rescue extension for CoreOS: 
https://review.openstack.org/#/c/538429/
4.7. Add documentation for rescue mode: 
https://review.openstack.org/#/c/431622/ (needs update)
5. Implementation for UEFI iSCSI boot for ILO:
- champions: TheJulia, stendulker
5.1. Follow-up patch needed for https://review.openstack.org/#/c/468288/
6. deprecating python-oneviewclient from OneView interfaces
- champions: dtantsur, TheJulia
- 
https://review.openstack.org/#/q/status:merged+project:openstack/ironic+branch:master+topic:bug/1693788
- Appears to be in good shape - Reno should be updated
- 
https://review.openstack.org/#/c/524729/11/releasenotes/notes/remove-python-oneviewclient-b1d345ef861e156e.yaml

Vendor priorities
-----------------
cisco-ucs:
Patches are in the works for the SDK update, but not posted yet; currently 
rebuilding third-party CI infra after a disaster...
idrac:
RFE and first several patches for adding UEFI support will be posted by 
Tuesday, 1/9
ilo:
https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5
irmc:
None

oneview:
Remove python-oneviewclient from oneview hardware type - 
https://review.openstack.org/#/c/524729/ MERGED

Subproject priorities
---------------------
bifrost:
(TheJulia): Fedora support fixes -  https://review.openstack.org/#/c/471750/
ironic-inspector (or its client):
(dtantsur) keystoneauth adapters https://review.openstack.org/#/c/515787/ 
MERGED
networking-baremetal:
neutron baremetal agent https://review.openstack.org/#/c/456235/ MERGED
sushy and the redfish driver:
(dtants

Re: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared

2018-01-29 Thread Matthew Thode
On 18-01-29 08:30:37, Sean McGinnis wrote:
> > 
> > ... the
> > only outstanding item is that oslo.versionedobjects seems to need
> > another fix for the iso8601 bump. ...
> 
> I took a look at the failing jobs for the oslo.versionedobjects bump, and it
> appears this is not directly related.
> 
> There are failures in nova, cinder, and keystone with the new
> oslo.versionedobjects. This appears to be due to a mix of UTC time handling in
> these projects between their own local implementations and usage of the
> timeutils inside oslo.versionedobjects.
> 
> The right answer might be to get all of these local implementations moved out
> into something like oslo.utils, but for the time being, these patches will 
> need
> to land before we can raise oslo.versionedobjects (and raise the iso8601
> version that triggered this work).
> 
> Cinder - https://review.openstack.org/#/c/536182/2
> Nova - https://review.openstack.org/#/c/535700/3
> Keystone - https://review.openstack.org/#/c/538263/1
> 
> There are similar patches in other projects (I think they are all using the
> same topic) that will need to land as well that don't appear to be covered in
> the requirements cross jobs.
> 

Added them as depends-on to https://review.openstack.org/538549
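For reference, Zuul discovers such cross-repository dependencies from `Depends-On:` footers in the commit message of the requirements change; Zuul v3 accepts full review URLs (older versions matched on Change-Ids). The subject line below is illustrative; the three review URLs are the cinder/nova/keystone patches listed earlier in the thread:

```text
Bump oslo.versionedobjects and iso8601

Depends-On: https://review.openstack.org/#/c/536182/
Depends-On: https://review.openstack.org/#/c/535700/
Depends-On: https://review.openstack.org/#/c/538263/
```

With those footers in place, the requirements change will not merge until every listed change has merged.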

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][horizon] django-openstack-auth retirement

2018-01-29 Thread Doug Hellmann
Excerpts from Akihiro Motoki's message of 2018-01-30 00:36:30 +0900:
> Hi the release team and the requirements team,
> 
> I would like to ask for advice on django-openstack-auth (DOA) retirement.
> In the thread of the announce of DOA retirement last week, I was
> advised to release a transition package which provides no python
> module and make horizon depend on it so that the transition can be
> smooth.
> http://lists.openstack.org/pipermail/openstack-dev/2018-January/thread.html#126428
> 
> To achieve this, the horizon team needs:
> * to release django-openstack-auth 4.0.0 (the current version is 3.5.0
> so 4.0.0 makes sense) https://review.openstack.org/#/c/538709/
> * to add django-openstack-auth 4.0.0 to g-r and u-c (for queens)
> * to add django-openstack-auth 4.0.0 to horizon queens RC1

I think what Jeremy was proposing in the thread you linked to was
that the new version of django-openstack-auth should depend on
Horizon, so that any projects that depend on django-openstack-auth
but that do not depend on Horizon will still have the relevant
packages installed when they install django_openstack_auth.

We would not need to update the global requirements or constraints lists
to do that.
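A transition package like the one Doug describes could look roughly like this. This is only a sketch: the exact package name, version, and horizon version constraint for the real DOA 4.0.0 release are assumptions here, not taken from the actual review.

```ini
# setup.cfg for a hypothetical "transition" release: it ships no Python
# modules of its own and simply depends on horizon, so anything that
# installs django_openstack_auth still gets the relevant packages.
[metadata]
name = django_openstack_auth
version = 4.0.0
summary = Transition package; the code now lives in horizon

[options]
packages =
install_requires =
    horizon>=13.0.0
```

Because the package is empty, no global-requirements or constraints update is needed for it, which matches Doug's point above.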

Doug

> 
> I think there are two options in horizon queens:
> - to release the transition package of django-openstack-auth 4.0.0 as
> described above, or
> - to just document the retirement of django-openstack-auth
> 
> The requirement release is in 9 hours.
> I would like to ask for advice from the release and requirements teams.
> 
> Thanks,
> Akihiro
> 
> 2018-01-27 2:45 GMT+09:00 Jeremy Stanley :
> > On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote:
> > [...]
> >> Horizon and neutron were updated to start publishing to PyPI
> >> already.
> >>
> >> https://review.openstack.org/#/c/531822/
> >>
> >> This is so that we can start working on unwinding the neutron and
> >> horizon specific versions of jobs for neutron and horizon plugins.
> >
> > Nice! I somehow missed that merging a couple of weeks back. In that
> > case, I suppose we could in theory do one final transitional package
> > upload of DOA depending on the conflicting Horizon release if others
> > think that's a good idea.
> > --
> > Jeremy Stanley
> >
> >
> 



[openstack-dev] [designate] designate-core updates

2018-01-29 Thread Graham Hayes
Another update to the designate-core team:

+ eandersson
- timsim
- kiall

eandersson has been a long-term reviewer and end user of designate who
has consistently performed good, detail-oriented reviews.

Unfortunately both Kiall and Tim have moved on to other areas, and as
such have not had the time to be consistent with their reviews.

I would like to thank Kiall (the project's original founder) and Tim
for the help they have provided over the years, and for taking the
time to do reviews even after they were working on other areas.

If anyone thinks that they, or someone else would be a good core
reviewer for Designate, please let me know, on this email,
or on IRC (mugsie on freenode).

Thanks

- Graham





Re: [openstack-dev] [keystone] FFE for application credentials

2018-01-29 Thread Lance Bragstad
+1

I agree. Thanks for the heads up, Colleen.


On 01/29/2018 09:53 AM, Colleen Murphy wrote:
>> On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad  wrote:
>>> Hey all,
>>>
>>> The work for application credentials [0] has been up for a while,
>>> reviewers are happy with it, and it is slowly making its way through
>>> the gate. I propose we consider a feature freeze exception given the
>>> state of the gate and the frequency of rechecks/failures.
>>>
>>> Thoughts, comments, or concerns?
>>>
>>> [0]
>>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials
> These changes were approved on Wednesday (Jan 24). They are still not
> merged as of now (Monday, Jan 29, about 16:00 UTC) because of
>
> * tempest failures related to issues with cinder
> * the log server falling over
> * tempest timing out
> * merge conflicts with the system-scope patches that managed to land
> * hosting provider maintenance that caused zuul to fall over and jobs
> needing to be reenqueued and start over
> * unit test jobs timing out (https://bugs.launchpad.net/keystone/+bug/1746016)
> * zuul running out of memory and jobs needing to be reenqueued and start over
>
> As of now, the base patch in this change series is about 21st in the
> integrated gate queue. With any luck, there is a chance it might be
> merged some time tomorrow.
>
> I'd like to request that we keep the feature freeze exception open for
> these changes.
>
> Colleen
>






[openstack-dev] [ptg] [glance] Dublin PTG planning

2018-01-29 Thread Brian Rosmaita
We've been talking about this at the weekly glance meeting, but I
forgot to put out a wider shout on the ML.  The Glance planning
etherpad is here:
  https://etherpad.openstack.org/p/glance-rocky-ptg-planning

Right now it contains some excellent proposals*, but we could use some more.

cheers,
brian

*They're all from me, so YMMV in terms of excellence.



Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Steven Dake (stdake)
Agree, the “why” of this policy is stated here:
https://docs.openstack.org/developer/kolla-ansible/deployment-philosophy.html

Paul, I think your corrective actions sound good.  Perhaps we should also 
reword “essential” to some other word that is more lenient.

Cheers
-steve

From: Jeffrey Zhang 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, January 29, 2018 at 7:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] Policy regarding template customisation

Thank Paul for pointing this out.

For me, I prefer to be consistent with 2).

There are thousands of configuration options in OpenStack; it is hard for
Kolla to add every key/value pair in the playbooks. Currently, merge_config
is the better solution.
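The key/value merge idea can be illustrated with a minimal stdlib analogue. kolla-ansible's real merge_configs is an Ansible action plugin; this sketch only shows the "later config wins, key by key" behaviour, and the option values are made up for illustration.

```python
# Minimal analogue of merging an operator's override config onto a base
# template: sections are unioned and later values win per key.
import configparser
import io

base = """
[DEFAULT]
debug = False
transport_url = rabbit://openstack:secret@rabbitmq:5672/

[database]
connection = mysql+pymysql://nova:secret@mariadb/nova
"""

operator_override = """
[DEFAULT]
debug = True

[libvirt]
virt_type = qemu
"""

merged = configparser.ConfigParser()
merged.read_string(base)
merged.read_string(operator_override)  # later reads win, key by key

buf = io.StringIO()
merged.write(buf)
print(buf.getvalue())
```

This is what lets operators keep a small "bring your own configs" overlay rather than having every option templated as an Ansible variable.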




On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke  wrote:
Hi all,

I'd like to revisit our policy of not templating everything in kolla-ansible's 
template files. This is a policy that was set in place very early on in 
kolla-ansible's development, but I'm concerned we haven't been very consistent 
with it. This leads to confusion for contributors and operators - "should I 
template this and submit a patch, or do I need to start using my own config 
files?".

The docs[0] are currently clear:

"The Kolla upstream community does not want to place key/value pairs in the 
Ansible playbook configuration options that are not essential to obtaining a 
functional deployment."

In practice though our templates contain many options that are not necessary, 
and plenty of patches have merged that while very useful to operators, are not 
necessary to an 'out of the box' deployment.

So I'd like us to revisit the questions:

1) Is kolla-ansible attempting to be a 'batteries included' tool, which caters 
to operators via key/value config options?

2) Or, is it to be a solid reference implementation, where any degree of 
customisation implies a clear 'bring your own configs' type policy.

If 1), then we should potentially:

* Update our docs to remove the referenced paragraph
* Look at reorganising files like globals.yml into something more maintainable.

If 2),

* We should make it clear to reviewers that patches templating options that are 
non essential should not be accepted.
* Encourage patches to strip down existing config files to an absolute minimum.
* Make this policy more clear in docs / templates to avoid frustration on the 
part of operators.

Thoughts?

Thanks,
-Paul

[0] 
https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [keystone] FFE for application credentials

2018-01-29 Thread Colleen Murphy
> On Thu, Jan 25, 2018 at 10:15 PM, Lance Bragstad  wrote:
>> Hey all,
>>
>> The work for application credentials [0] has been up for a while,
>> reviewers are happy with it, and it is slowly making its way through
>> the gate. I propose we consider a feature freeze exception given the
>> state of the gate and the frequency of rechecks/failures.
>>
>> Thoughts, comments, or concerns?
>>
>> [0]
>> https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/application-credentials

These changes were approved on Wednesday (Jan 24). They are still not
merged as of now (Monday, Jan 29, about 16:00 UTC) because of

* tempest failures related to issues with cinder
* the log server falling over
* tempest timing out
* merge conflicts with the system-scope patches that managed to land
* hosting provider maintenance that caused zuul to fall over and jobs
needing to be reenqueued and start over
* unit test jobs timing out (https://bugs.launchpad.net/keystone/+bug/1746016)
* zuul running out of memory and jobs needing to be reenqueued and start over

As of now, the base patch in this change series is about 21st in the
integrated gate queue. With any luck, there is a chance it might be
merged some time tomorrow.

I'd like to request that we keep the feature freeze exception open for
these changes.

Colleen



Re: [openstack-dev] [ironic] FFE request for deprecating python-oneviewclient from OneView interfaces

2018-01-29 Thread Julia Kreger
Circling back to this,

Since Dmitry and myself agreed to continue reviewing this work, I
believe we have implicitly agreed to grant this FFE and continue to
land this work. Should anyone disagree, please reply indicating as
such. I will also bring this up during our weekly meeting that is in
about an hour.

-Julia

On Tue, Jan 23, 2018 at 8:57 AM, Ricardo Araújo  wrote:
> Hi,
>
> I'd like to request an FFE for deprecating python-oneviewclient and
> introduce python-hpOneView in OneView interfaces [1]. This migration was
> performed in the Pike cycle but it was reverted due to the lack of CA
> certificate validation in python-hpOneView (available since 4.4.0 [2]).
>
> As the introduction of the new lib was already merged [3], following changes
> are in scope of this FFE:
> 1. Replace python-oneviewclient by python-hpOneView in power, management,
> inspect and deployment interfaces for OneView hardware type [4]
> 2. Move existing ironic related validation hosted in python-oneviewclient to
> ironic code base [5]
> 3. Remove python-oneviewclient dependency from Ironic [6]
>
> By performing this migration in Queens we will be able to concentrate
> efforts in maintaining a single python lib for accessing HPE OneView while
> being able to enhance current interfaces with features already provided in
> python-hpOneView like soft power operations [7] and timeout for power
> operations [8].
>
> Despite being a big change to merge close to the end of the cycle, all
> migration patches have received core reviewers' attention lately and a few
> positive reviews. They're also passing in both the community and UFCG
> OneView CI (running deployment tests with HPE OneView). Postponing this will
> be a blocker for the teams responsible for maintaining this hardware type
> and both python libs for the next cycle.
>
> dtantsur and TheJulia have kindly agreed to keep reviewing this work during
> the feature freeze window, if it gets an exception.
>
> Thanks,
> Ricardo (ricardoas)
>
> [1] - https://bugs.launchpad.net/ironic/+bug/1693788
> [2] - https://github.com/HewlettPackard/python-hpOneView/releases/tag/v4.4.0
> [3] - https://review.openstack.org/#/c/523943/
> [4] - https://review.openstack.org/#/c/524310/
> [5] - https://review.openstack.org/#/c/524599/
> [6] - https://review.openstack.org/#/c/524729/
> [7] - https://review.openstack.org/#/c/510685/
> [8] - https://review.openstack.org/#/c/524624/
>
> Ricardo Araújo Santos -
> www.lsd.ufcg.edu.br/~ricardo
>
> M.Sc in Computer Science at UFCG - www.ufcg.edu.br
> Researcher and Developer at Distributed Systems Laboratory -
> www.lsd.ufcg.edu.br
> Paraíba - Brasil
>
>



[openstack-dev] [release][requirements][horizon] django-openstack-auth retirement

2018-01-29 Thread Akihiro Motoki
Hi the release team and the requirements team,

I would like to ask for advice on django-openstack-auth (DOA) retirement.
In the thread of the announce of DOA retirement last week, I was
advised to release a transition package which provides no python
module and make horizon depend on it so that the transition can be
smooth.
http://lists.openstack.org/pipermail/openstack-dev/2018-January/thread.html#126428

To achieve this, the horizon team needs:
* to release django-openstack-auth 4.0.0 (the current version is 3.5.0
so 4.0.0 makes sense) https://review.openstack.org/#/c/538709/
* to add django-openstack-auth 4.0.0 to g-r and u-c (for queens)
* to add django-openstack-auth 4.0.0 to horizon queens RC1

I think there are two options in horizon queens:
- to release the transition package of django-openstack-auth 4.0.0 as
described above, or
- to just document the retirement of django-openstack-auth

The requirement release is in 9 hours.
I would like to ask for advice from the release and requirements teams.

Thanks,
Akihiro

2018-01-27 2:45 GMT+09:00 Jeremy Stanley :
> On 2018-01-24 08:47:30 -0600 (-0600), Monty Taylor wrote:
> [...]
>> Horizon and neutron were updated to start publishing to PyPI
>> already.
>>
>> https://review.openstack.org/#/c/531822/
>>
>> This is so that we can start working on unwinding the neutron and
>> horizon specific versions of jobs for neutron and horizon plugins.
>
> Nice! I somehow missed that merging a couple of weeks back. In that
> case, I suppose we could in theory do one final transitional package
> upload of DOA depending on the conflicting Horizon release if others
> think that's a good idea.
> --
> Jeremy Stanley
>
>



Re: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared

2018-01-29 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2018-01-28 20:47:42 -0600:
> On 18-01-27 21:37:53, Matthew Thode wrote:
> > On 18-01-26 23:05:11, Matthew Thode wrote:
> > > On 18-01-26 00:12:38, Matthew Thode wrote:
> > > > On 18-01-24 22:32:27, Matthew Thode wrote:
> > > > > On 18-01-24 01:29:47, Matthew Thode wrote:
> > > > > > On 18-01-23 01:23:50, Matthew Thode wrote:
> > > > > > > Requirements is freezing Friday at 23:59:59 UTC so any last
> > > > > > > global-requrements updates that need to get in need to get in now.
> > > > > > > 
> > > > > > > I'm afraid that my condition has left me cold to your pleas of 
> > > > > > > mercy.
> > > > > > > 
> > > > > > 
> > > > > > Just your daily reminder that the freeze will happen in about 3 days
> > > > > > time.  Reviews seem to be winding down for requirements now (which 
> > > > > > is
> > > > > > a good sign this release will be chilled to perfection).
> > > > > > 
> > > > > 
> > > > > There's still a couple of things that may cause bumps for iso8601 and
> > > > > oslo.versionedobjects but those are the main things.  The msgpack 
> > > > > change
> > > > > is also rolling out (thanks dirk :D).  Even with all these changes
> > > > > though, in this universe, there's only one absolute. Everything 
> > > > > freezes!
> > > > > 
> > > > > https://review.openstack.org/535520 (oslo.serialization)
> > > > > 
> > > > 
> > > > Last day, gate is sad and behind, but not my fault you waited til the
> > > > last minute :P  (see my first comment).  The Iceman Cometh!
> > > > 
> > > 
> > > All right everyone, Chill.  Looks like we have another couple days to
> > > get stuff in for gate's slowness.  The new deadline is 23:59:59 UTC
> > > 29-01-2018.
> > > 
> > 
> > It's a cold town.  The current status is as follows.  It looks like the
> > gate is clearing up.  oslo.versionedobjects-1.31.2 and iso8601 will be
> > in a gr bump but that's it.  monasca-tempest-plugin is not going to get
> > in by freeze at this rate (has fixes needed in the review).  There was
> > some stuff needed to get nova-client/osc to work together again, but
> > mriedem seems to have it in hand (and no gr updates it looks like).
> > 
> 
> Allow me to break the Ice. My name is Freeze. Learn it well for it's
> the chilling sound of your doom!  Can you feel it coming? The icy cold
> of space!  It's less than 24 hours until the freeze formally happens; the
> only outstanding item is that oslo.versionedobjects seems to need
> another fix for the iso8601 bump.  osc-placement won't be added to
> requirements at this point as there has been no response on their
> review.
> 
> https://review.openstack.org/538515
> 
> python-vitrageclient looks like it'll make it in if gate doesn't break.
> msgpack may also be late, but we'll see (just workflow'd).
> openstacksdk may need a gr bump, I'm waiting on a response from mordred
> 
> https://review.openstack.org/538695
> 

We also have pending releases for cloudkittyclient, blazarclient,
django-openstack-auth, swiftclient, and zaqarclient. Those are blocked
by the current infra issues, and we really should not freeze the
requirements list until those libraries are released and we have a
chance to try to update the constraints list to include them.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Mark Goddard
I have to agree - I prefer the minimal approach of option 2. It keeps the
kolla-ansible code base small and easy to understand. The required test
matrix is therefore relatively small (although better coverage of services
in CI would be good). Finally, the approach has allowed the project to move
quickly and support deployment of many OpenStack projects.

Customised options shouldn't be outlawed though. There are times when they
are very useful and/or required:

* some things that cannot be expressed in config files alone
* some options apply to many/all services (sometimes with subtle
differences in configuration)
* some config files are not in a format that can be easily merged (HAProxy,
dnsmasq, etc.)

These should be the exception, rather than the rule, however.

Mark

On 29 January 2018 at 14:12, Jeffrey Zhang  wrote:

> Thank Paul for pointing this out.
>
> For me, I prefer to be consistent with 2).
>
> There are thousands of configuration options in OpenStack; it is hard for
> Kolla to add every key/value pair in the playbooks. Currently, merge_config
> is the better solution.
>
>
>
>
> On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke 
> wrote:
>
>> Hi all,
>>
>> I'd like to revisit our policy of not templating everything in
>> kolla-ansible's template files. This is a policy that was set in place very
>> early on in kolla-ansible's development, but I'm concerned we haven't been
>> very consistent with it. This leads to confusion for contributors and
>> operators - "should I template this and submit a patch, or do I need to
>> start using my own config files?".
>>
>> The docs[0] are currently clear:
>>
>> "The Kolla upstream community does not want to place key/value pairs in
>> the Ansible playbook configuration options that are not essential to
>> obtaining a functional deployment."
>>
>> In practice though our templates contain many options that are not
>> necessary, and plenty of patches have merged that while very useful to
>> operators, are not necessary to an 'out of the box' deployment.
>>
>> So I'd like us to revisit the questions:
>>
>> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which
>> caters to operators via key/value config options?
>>
>> 2) Or, is it to be a solid reference implementation, where any degree of
>> customisation implies a clear 'bring your own configs' type policy.
>>
>> If 1), then we should potentially:
>>
>> * Update our docs to remove the referenced paragraph
>> * Look at reorganising files like globals.yml into something more
>> maintainable.
>>
>> If 2),
>>
>> * We should make it clear to reviewers that patches templating options
>> that are non essential should not be accepted.
>> * Encourage patches to strip down existing config files to an absolute
>> minimum.
>> * Make this policy more clear in docs / templates to avoid frustration on
>> the part of operators.
>>
>> Thoughts?
>>
>> Thanks,
>> -Paul
>>
>> [0] https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization
>>
>>
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>
>


[openstack-dev] [nova] Notification update week 5

2018-01-29 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w5.

Bugs

[High] https://bugs.launchpad.net/nova/+bug/1742962 nova functional 
tests are not triggered on notification-sample-only changes
Fix merged to master; backports are in the gate. When the backports land 
we can remove the triggering of the old jobs for nova by merging 
https://review.openstack.org/#/c/533608/


As a follow-up I did some investigation to see whether other jobs are 
affected by the same problem; see the ML thread 
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126616.html


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged to master. Backports have been proposed:
* Pike: https://review.openstack.org/#/c/531745/
* Queens: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
We need to understand first how this can happen. Based on the comments 
from the bug it seems it happens after upgrading an old deployment. So 
it might be some problem with the online data migration that moves the 
flavor into the instance.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-------------------------------------
Feature Freeze hit, but the team made a good last-minute push. 
Altogether we merged 17 transformation patches in Queens. \o/ Thanks 
to everybody who contributed with code, reviews, or encouragement. We 
have 22 transformations left to reach feature parity, which means we 
have a chance to finish this work in Rocky. I also put this up as a 
possible internship idea on the wiki: 
https://wiki.openstack.org/wiki/GSoC2018#Internship_ideas


Reno for the Queens work is up to date: 
https://review.openstack.org/#/c/518018


Introduce instance.lock and instance.unlock notifications
----------------------------------------------------------
A specless bp has been proposed to the Rocky cycle
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Some preliminary discussion happened in an earlier patch
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user who initiated the instance
action to the notification
----------------------------------------------------------------------
A new bp has been proposed
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
As the user who initiates the instance action (e.g. reboot) could be
different from the user owning the instance, it would make sense to
include the user_id and project_id of the action initiator in the
versioned instance action notifications as well.

Factor out duplicated notification sample
-----------------------------------------
As the fix for https://bugs.launchpad.net/nova/+bug/1742962 has merged, it 
is safe to look at the patches on 
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open 
again.


Weekly meeting
--------------
The next meeting will be held on 30th of January on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180130T17

Cheers,
gibi




Re: [openstack-dev] [ALL][requirements] A freeze is coming and you should be prepared

2018-01-29 Thread Sean McGinnis
> 
> ... the
> only outstanding item is that oslo.versionedobjects seems to need
> another fix for the iso8601 bump. ...

I took a look at the failing jobs for the oslo.versionedobjects bump, and it
appears this is not directly related.

There are failures in nova, cinder, and keystone with the new
oslo.versionedobjects. This appears to be due to a mix of UTC time handling in
these projects between their own local implementations and usage of the
timeutils inside oslo.versionedobjects.
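The mix Sean describes typically surfaces as comparisons between naive datetimes (from local `datetime.utcnow()`-style helpers) and timezone-aware ones (as newer iso8601 returns). A minimal stdlib illustration of the failure mode, not the projects' actual code:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()            # naive UTC, as in local helper code
aware = datetime.now(timezone.utc)   # tz-aware, as newer iso8601 yields

try:
    naive < aware
except TypeError as exc:
    # Python refuses to compare offset-naive and offset-aware datetimes
    print("comparison failed:", exc)
```

Consolidating on one helper (e.g. the timeutils in oslo) so all timestamps are consistently aware (or consistently naive) avoids the mismatch.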

The right answer might be to get all of these local implementations moved out
into something like oslo.utils, but for the time being, these patches will need
to land before we can raise oslo.versionedobjects (and raise the iso8601
version that triggered this work).

Cinder - https://review.openstack.org/#/c/536182/2
Nova - https://review.openstack.org/#/c/535700/3
Keystone - https://review.openstack.org/#/c/538263/1

There are similar patches in other projects (I think they are all using the
same topic) that will need to land as well that don't appear to be covered in
the requirements cross jobs.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] Security PTG Planning, x-project request for topics.

2018-01-29 Thread Adam Young
Bug 968696 and system roles. These need to be addressed across the service
catalog.

On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds  wrote:

> Just a reminder, as we have not had much uptake yet...
>
> Are there any projects (new and old) that would like to make use of the
> security SIG for either gaining another perspective on security challenges
> / blueprints etc or for help gaining some cross project collaboration?
>
> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds  wrote:
>
>> Hello All,
>>
>> I am seeking topics for the PTG from all projects, as this will be where
>> we try out our new form of being a SIG.
>>
>> For this PTG, we hope to facilitate more cross project collaboration
>> topics now that we are a SIG, so if your project has a security need /
>> problem / proposal then please do use the security SIG room where a larger
>> audience may be present to help solve problems and gain x-project consensus.
>>
>> Please see our PTG planning pad [0] where I encourage you to add to the
>> topics.
>>
>> [0] https://etherpad.openstack.org/p/security-ptg-rocky
>>
>> --
>> Luke Hinds
>> Security Project PTL
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Jeffrey Zhang
Thanks Paul for pointing this out.

For me, I prefer option 2).

There are thousands of configuration options in OpenStack; it is hard for
Kolla to add every key/value pair to the playbooks. Currently, the
merge_config approach is the better solution.
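For readers unfamiliar with the mechanism: merge_config overlays operator-supplied INI fragments on top of the generated templates, with the operator's values winning while untouched options keep their defaults. The idea can be sketched with stdlib configparser (file contents are illustrative):

```python
import configparser

# The generated template's defaults.
base = """
[DEFAULT]
debug = False
log_dir = /var/log/kolla/nova
"""

# An operator's override fragment, e.g. /etc/kolla/config/nova.conf
# (path is illustrative).
override = """
[DEFAULT]
debug = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(base)
cfg.read_string(override)  # later reads win, mirroring merge_config

print(cfg["DEFAULT"]["debug"])    # -> True   (operator override)
print(cfg["DEFAULT"]["log_dir"])  # -> /var/log/kolla/nova (kept default)
```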




On Mon, Jan 29, 2018 at 7:13 PM, Paul Bourke  wrote:

> Hi all,
>
> I'd like to revisit our policy of not templating everything in
> kolla-ansible's template files. This is a policy that was set in place very
> early on in kolla-ansible's development, but I'm concerned we haven't been
> very consistent with it. This leads to confusion for contributors and
> operators - "should I template this and submit a patch, or do I need to
> start using my own config files?".
>
> The docs[0] are currently clear:
>
> "The Kolla upstream community does not want to place key/value pairs in
> the Ansible playbook configuration options that are not essential to
> obtaining a functional deployment."
>
> In practice though our templates contain many options that are not
> necessary, and plenty of patches have merged that while very useful to
> operators, are not necessary to an 'out of the box' deployment.
>
> So I'd like us to revisit the questions:
>
> 1) Is kolla-ansible attempting to be a 'batteries included' tool, which
> caters to operators via key/value config options?
>
> 2) Or, is it to be a solid reference implementation, where any degree of
> customisation implies a clear 'bring your own configs' type policy.
>
> If 1), then we should potentially:
>
> * Update our docs to remove the referenced paragraph
> * Look at reorganising files like globals.yml into something more
> maintainable.
>
> If 2),
>
> * We should make it clear to reviewers that patches templating options
> that are non-essential should not be accepted.
> * Encourage patches to strip down existing config files to an absolute
> minimum.
> * Make this policy more clear in docs / templates to avoid frustration on
> the part of operators.
>
> Thoughts?
>
> Thanks,
> -Paul
>
> [0] https://docs.openstack.org/kolla-ansible/latest/admin/deploy
> ment-philosophy.html#why-not-template-customization
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute

2018-01-29 Thread Balázs Gibizer


On Fri, Jan 26, 2018 at 6:57 PM, James E. Blair  
wrote:

Balázs Gibizer  writes:


 Hi,

 I'm getting more and more confused about how the zuul job hierarchy works
 or is supposed to work.


Hi!

First, you (or others) may or may not have seen this already -- some of
it didn't exist when we first rolled out v3, and some of it has changed
-- but here are the relevant bits of the documentation that should help
explain what's going on.  It helps to understand freezing:

  https://docs.openstack.org/infra/zuul/user/config.html#job

and matching:

  https://docs.openstack.org/infra/zuul/user/config.html#matchers


Thanks for the doc references they are really helpful.




 First there was a bug in nova where some functional tests were not
 triggered, although the job (re-)definition in the nova part of
 project-config should not have prevented them from running [1].

 There we figured out that the irrelevant-files parameter of a job is
 not something that can be overridden during re-definition or through
 a parent-child relationship. The base job openstack-tox-functional has
 an irrelevant-files attribute that lists '^doc/.*$' as a path to be
 ignored [2]. On the other hand, the nova part of project-config
 tries to make this ignore less broad by listing only '^doc/source/.*$'.
 This did not work as we expected, and the job did not run on changes
 that only affected the ./doc/notification_samples path. We are fixing it
 by defining our own functional job in the nova tree [4].

 [1] https://bugs.launchpad.net/nova/+bug/1742962
 [2] https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380
 [3] https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509

 [4] https://review.openstack.org/#/c/533210/
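A rough sketch of Zuul's file-matcher rule helps explain why the job was skipped: a job is skipped only when *every* changed file matches an irrelevant-files pattern, so a single non-matching file makes it run. This is illustrative Python, not Zuul's actual implementation:

```python
import re

def job_runs(changed_files, irrelevant_patterns):
    """A job is skipped only when every changed file matches one of the
    irrelevant-files patterns; one non-matching file makes the job run."""
    return not all(
        any(re.match(p, f) for p in irrelevant_patterns)
        for f in changed_files
    )

# The broad base-job pattern '^doc/.*$' swallows the samples directory:
print(job_runs(["doc/notification_samples/x.json"], [r"^doc/.*$"]))
# -> False (job skipped)

# The narrower pattern nova wanted would have let the job run:
print(job_runs(["doc/notification_samples/x.json"], [r"^doc/source/.*$"]))
# -> True (job runs)
```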


This is correct.  The issue here is that the irrelevant-files definition
on openstack-tox-functional is too broad.  We need to be *extremely*
careful applying matchers to jobs like that.  Generally I think that
irrelevant-files should be reserved for the project-pipeline invocations
only.  That's how they were effectively used in Zuul v2, after all.

Essentially, when someone puts an irrelevant-files section on a job like
that, they are saying "this job will never apply to these files, ever."
That's clearly not correct in this case.

So our solutions are to acknowledge that it's over-broad, and reduce or
eliminate the list in [2] and expand it elsewhere (as in [3]).  Or we
can say "we were generally correct, but nova is extra special so it
needs its own job".  If that's the choice, then I think [4] is a fine
solution.


[4] just got merged this morning, so I think that is OK for us now.




 Then I started looking into other jobs to see if we made similar
 mistakes. I found two other examples in the nova-related jobs where
 redefining the irrelevant-files of a job caused problems. In these
 examples nova tried to ignore more paths during the override than what
 was originally ignored in the job definition, but that did not work
 [5][6].

 [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full)


As noted in that bug, the tempest-full job is invoked on nova via this
stanza:

https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688

As expected, that did not match.  There is a second invocation of
tempest-full on nova here:

http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126

That has no irrelevant-files matches, and so matches everything.  If you
drop the use of that template, it will work as expected.  Or, if you can
say with some certainty that nova's irrelevant-files set is not
over-broad, you could move the irrelevant-files from nova's invocation
into the template, or even the job, and drop nova's individual
invocation.


Thanks for the explanation, it is much clearer now. With this info I
think I was able to propose a patch that fixes the two bugs:
https://review.openstack.org/#/c/538908/





 [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade)


The same template invokes this job as well.


 So far the problem seemed to be consistent (i.e. the override does not
 work). But then I looked into neutron-grenade-multinode. That job is
 defined in the neutron tree (like neutron-grenade), but nova also refers
 to it in the nova section of project-config with different
 irrelevant-files than the original definition. So I assumed this would
 lead to a similar problem as with neutron-grenade, but it doesn't.

 The original neutron-grenade-multinode definition [1] does not try to
 ignore the 'nova/tests' path, but the nova side of the definition in
 project-config does try to ignore that path [8]. Interestingly, a
 patch in nova that only changes files under nova/tests/ does not
 trigger the job [9]. So in th

[openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days

2018-01-29 Thread David Moreau Simard
Hi !

For those who might be unfamiliar with the RDO [1] community project:
we hang out in #rdo, we don't bite and we build vanilla OpenStack
packages.

These packages are what allows you to leverage one of the deployment
projects such as TripleO, PackStack or Kolla to deploy on CentOS or
RHEL.
The RDO community collaborates with these deployment projects by
providing trunk and stable packages in order to let them develop and
test against the latest and the greatest of OpenStack.

RDO test days typically happen around a week after an upstream
milestone has been reached [2].
The purpose is to get everyone together in #rdo: developers, users,
operators, maintainers -- and test not just RDO but OpenStack itself
as installed by the different deployment projects.

We tried something new at our last test day [3] and it worked out great.
Instead of encouraging participants to install their own cloud for
testing things, we supplied a cloud of our own... a bit like a limited
duration TryStack [4].
This lets users without the operational knowledge, time or hardware to
install an OpenStack environment see what's coming in the upcoming
release of OpenStack, and gets the feedback loop going ahead of the
release.

We used Packstack for the last deployment and invited Packstack cores
to deploy, operate and troubleshoot the installation for the duration
of the test days.
The idea is to rotate between the different deployment projects to
give every interested project a chance to participate.

Last week, we reached out to Kolla to see if they would be interested
in participating in our next RDO test days [5] around February 8th.
We supply the bare metal hardware and their core contributors get to
deploy and operate a cloud with real users and developers poking
around.
All around, this is a great opportunity to get feedback for RDO, Kolla
and OpenStack.

We'll be advertising the event a bit more as the test days draw closer
but until then, I thought it was worthwhile to share some context for
this new thing we're doing.

Let me know if you have any questions !

Thanks,

[1]: https://www.rdoproject.org/
[2]: https://www.rdoproject.org/testday/
[3]: 
https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/
[4]: http://trystack.org/
[5]: 
http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] Security PTG Planning, x-project request for topics.

2018-01-29 Thread Luke Hinds
Just a reminder, as we have not had much uptake yet...

Are there any projects (new and old) that would like to make use of the
security SIG for either gaining another perspective on security challenges
/ blueprints etc or for help gaining some cross project collaboration?

On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds  wrote:

> Hello All,
>
> I am seeking topics for the PTG from all projects, as this will be where
> we try out our new form of being a SIG.
>
> For this PTG, we hope to facilitate more cross project collaboration
> topics now that we are a SIG, so if your project has a security need /
> problem / proposal then please do use the security SIG room where a larger
> audience may be present to help solve problems and gain x-project consensus.
>
> Please see our PTG planning pad [0] where I encourage you to add to the
> topics.
>
> [0] https://etherpad.openstack.org/p/security-ptg-rocky
>
> --
> Luke Hinds
> Security Project PTL
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Nova rescue inject pasword failed

2018-01-29 Thread 李杰
Yeah, but I don't know why we have to use a config disk; we can also obtain
the metadata via the metadata RESTful service. Now I have set
inject_password=True and inject_partition=-1 in my nova.conf, and
libguestfs-1.36.3-6.el7_4.3.x86_64 is also installed, but it still doesn't
work.
 
 
-- Original --
From:  "Matthew Booth";
Date:  Mon, Jan 29, 2018 05:57 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [nova]Nova rescue inject pasword failed

 
On 29 January 2018 at 09:27, 李杰  wrote:
 Hi, all:
  I want to access my instance in the rescue state using the temporary
password which nova rescue gave me, but this password doesn't work. Can I ask
how this password is injected into the instance? I can't find any
specification of how it is done. I looked at the rescue code, but it shows
the password has been injected.
  I use libvirt as the virt driver. The web said to
set "[libvirt]inject_password=true", but it didn't work. Is it a bug? Can you
give me some advice? Help in troubleshooting this issue will be appreciated.



Ideally your rescue image will support cloud-init and you would use a config 
disk. For password injection to work you need inject_password=True, 
inject_partition=-1 (*NOT* -2, which is the default), and for libguestfs to be 
correctly installed on your compute hosts.


But to reiterate, ideally your rescue image would support cloud-init and you 
would use a config disk.


Matt


-- 
Matthew Booth

Red Hat OpenStack Engineer, Compute DFG


Phone: +442070094448 (UK)


[openstack-dev] [kolla] Policy regarding template customisation

2018-01-29 Thread Paul Bourke

Hi all,

I'd like to revisit our policy of not templating everything in 
kolla-ansible's template files. This is a policy that was set in place 
very early on in kolla-ansible's development, but I'm concerned we 
haven't been very consistent with it. This leads to confusion for 
contributors and operators - "should I template this and submit a patch, 
or do I need to start using my own config files?".


The docs[0] are currently clear:

"The Kolla upstream community does not want to place key/value pairs in 
the Ansible playbook configuration options that are not essential to 
obtaining a functional deployment."


In practice though our templates contain many options that are not 
necessary, and plenty of patches have merged that while very useful to 
operators, are not necessary to an 'out of the box' deployment.


So I'd like us to revisit the questions:

1) Is kolla-ansible attempting to be a 'batteries included' tool, which 
caters to operators via key/value config options?


2) Or, is it to be a solid reference implementation, where any degree of 
customisation implies a clear 'bring your own configs' type policy.


If 1), then we should potentially:

* Update our docs to remove the referenced paragraph
* Look at reorganising files like globals.yml into something more 
maintainable.


If 2),

* We should make it clear to reviewers that patches templating options 
that are non-essential should not be accepted.
* Encourage patches to strip down existing config files to an absolute 
minimum.
* Make this policy more clear in docs / templates to avoid frustration 
on the part of operators.


Thoughts?

Thanks,
-Paul

[0] 
https://docs.openstack.org/kolla-ansible/latest/admin/deployment-philosophy.html#why-not-template-customization


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] opendaylight OpenDaylightConnectionProtocol deprecation issue

2018-01-29 Thread Moshe Levi
Hi all,

It seems that this commit [1] deprecated OpenDaylightConnectionProtocol, but
it also removed it.
This is causing the following issue when we deploy OpenDaylight
non-containerized; see [2].

One solution is to add OpenDaylightConnectionProtocol back [3]; the other
solution is to remove OpenDaylightConnectionProtocol from the deprecated
parameter_groups [4].
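For context, the breakage is schematically a parameter_groups entry that still names a parameter deleted from the parameters section. A simplified sketch of the shape of the problem, not the actual template:

```yaml
parameters:
  # OpenDaylightConnectionProtocol used to be defined here but was
  # removed by [1]...
  OpenDaylightApiVirtualIP:
    type: string
    default: ''

parameter_groups:
  - label: deprecated
    description: Deprecated parameters
    parameters:
      # ...yet it is still referenced here, so validation fails when
      # the template is used in a non-containerized deployment.
      - OpenDaylightConnectionProtocol
```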



[1] - 
https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab

[2] - http://paste.openstack.org/show/656702/

[3] - 
https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7L20

[4] - 
https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7R112

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Nova rescue inject pasword failed

2018-01-29 Thread Matthew Booth
On 29 January 2018 at 09:27, 李杰  wrote:

>  Hi, all:
>   I want to access my instance in the rescue state using the
> temporary password which nova rescue gave me, but this password doesn't
> work. Can I ask how this password is injected into the instance? I can't
> find any specification of how it is done. I looked at the rescue code, but
> it shows the password has been injected.
>   I use libvirt as the virt driver. The web said to
> set "[libvirt]inject_password=true", but it didn't work. Is it a bug? Can
> you give me some advice? Help in troubleshooting this issue will be
> appreciated.
>

Ideally your rescue image will support cloud-init and you would use a
config disk. For password injection to work you need inject_password=True,
inject_partition=-1 (*NOT* -2, which is the default), and for libguestfs to
be correctly installed on your compute hosts.

But to reiterate, ideally your rescue image would support cloud-init and
you would use a config disk.
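Putting that advice together, the relevant compute-host nova.conf fragment would look roughly like this (a sketch; verify the option names and defaults against your deployed release):

```ini
[libvirt]
# Enable admin/rescue password injection via libguestfs.
inject_password = True
# -1 = inspect the image and pick the root partition;
# the default of -2 disables file injection entirely.
inject_partition = -1
```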

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]Nova rescue inject pasword failed

2018-01-29 Thread 李杰
Hi, all:
  I want to access my instance in the rescue state using the temporary
password which nova rescue gave me, but this password doesn't work. Can I ask
how this password is injected into the instance? I can't find any
specification of how it is done. I looked at the rescue code, but it shows
the password has been injected.
  I use libvirt as the virt driver. The web said to set
"[libvirt]inject_password=true", but it didn't work. Is it a bug? Can you
give me some advice? Help in troubleshooting this issue will be appreciated.







Best Regards
Lijie


Re: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients

2018-01-29 Thread Jean-Philippe Evrard
I added my comment/opinion on the bug.

Thanks for reporting this, Major!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev