[Yahoo-eng-team] [Bug 1741000] [NEW] Can't force-delete instance with task_state not 'None'

2018-01-02 Thread Rajesh Tailor
Public bug reported:

Problem Description:

When a user tries to force-delete an instance whose task_state is not 'None',
a 500 error is thrown on the console, as shown below:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8)

Steps to reproduce:
1) Create instance.
$ nova boot --flavor <flavor> --image <image> <server-name>

2) To change the instance task_state to something other than 'None', I resized
the instance.
$ nova resize <server> <flavor>

3) Try to force-delete the instance immediately after the step above, while the
instance task_state is one of (resize_prep, resize_migrating, resize_migrated,
resize_finish) rather than 'None'.
$ nova force-delete <server>

4) The user gets the error below on the console:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8)


Actual result: 
User gets 500 ClientException Error.

Expected result:
The instance should be deleted without any error.

traceback from nova-api logs:

DEBUG oslo_concurrency.lockutils [None req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8 
demo demo] Lock "fbe6eec8-be64-4473-8434-1d795e7ca5fb" released by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.0
00s {{(pid=24369) inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285}}
ERROR nova.api.openstack.wsgi [None req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8 
demo demo] Unexpected exception in API method: InstanceInvalidState: Instance 
6abb021a-174e-4551-acc1-a96653a9bf83 in task_state resize_prep. Cannot 
force_delete while the instance is in this state.
ERROR nova.api.openstack.wsgi Traceback (most recent call last):
ERROR nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 803, in wrapped
ERROR nova.api.openstack.wsgi return f(*args, **kwargs)
ERROR nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/openstack/compute/deferred_delete.py", line 61, in 
_force_delete
ERROR nova.api.openstack.wsgi self.compute_api.force_delete(context, 
instance)
ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/compute/api.py", 
line 201, in inner
ERROR nova.api.openstack.wsgi return function(self, context, instance, 
*args, **kwargs)
ERROR nova.api.openstack.wsgi   File "/opt/stack/nova/nova/compute/api.py", 
line 141, in inner
ERROR nova.api.openstack.wsgi method=f.__name__)
ERROR nova.api.openstack.wsgi InstanceInvalidState: Instance 
6abb021a-174e-4551-acc1-a96653a9bf83 in task_state resize_prep. Cannot 
force_delete while the instance is in this state.
ERROR nova.api.openstack.wsgi 
INFO nova.api.openstack.wsgi [None req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8 
demo demo] HTTP exception thrown: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

DEBUG nova.api.openstack.wsgi [None req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8 
demo demo] Returning 500 to user: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.


NOTE: If the user uses the delete API instead of the force-delete API, the
instance is deleted and no error is thrown.
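
The traceback above shows the force_delete call failing the task_state check
in nova/compute/api.py and surfacing as an unexpected 500. A minimal sketch of
how the API extension could translate that into a 409 Conflict instead; this
is illustrative only (the instance-lookup helper is an assumption), not the
actual Nova patch:

    import webob.exc

    from nova import exception


    def _force_delete(self, req, id, body):
        context = req.environ['nova.context']
        # Placeholder for however the controller looks up the instance.
        instance = self._get_instance(context, id)
        try:
            self.compute_api.force_delete(context, instance)
        except exception.InstanceInvalidState as state_error:
            # Return 409 Conflict with the reason instead of letting the
            # exception bubble up as an Unexpected API Error (HTTP 500).
            raise webob.exc.HTTPConflict(
                explanation=state_error.format_message())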

** Affects: nova
 Importance: Undecided
 Assignee: Rajesh Tailor (ratailor)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (ratailor)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741000

Title:
  Can't force-delete instance with task_state not 'None'

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Problem Description:

  When a user tries to force-delete an instance whose task_state is not
  'None', a 500 error is thrown on the console, as shown below:

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8)

  Steps to reproduce:
  1) Create instance.
  $ nova boot --flavor <flavor> --image <image> <server-name>

  2) To change the instance task_state to something other than 'None', I
  resized the instance.
  $ nova resize <server> <flavor>

  3) Try to force-delete the instance immediately after the step above, while
  the instance task_state is one of (resize_prep, resize_migrating,
  resize_migrated, resize_finish) rather than 'None'.
  $ nova force-delete <server>

  4) The user gets the error below on the console:

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-2578def4-a83d-458c-a3cb-c5fa1e6b56a8)

  
  Actual result: 
  User gets 500 ClientException Error.

[Yahoo-eng-team] [Bug 1741001] [NEW] Got an unexpected keyword argument when starting nova-api

2018-01-02 Thread Eric Xie
Public bug reported:

Description
===
After upgrading oslo.db, the nova-api service failed to start.

Steps to reproduce
==
* pip install oslo.db==4.24.0
* starting the nova-api service works fine
* `pip install --upgrade oslo.db` to 4.32.0
* nova-api fails to start

Expected result
===
requirements.txt allows oslo.db >= 4.24.0, so 'nova-api' should also start and
run OK with oslo.db 4.32.0 (the latest).

Actual result
=
nova-api fails to start; with oslo.db 4.32.0 it gets an 'unexpected keyword
argument' TypeError (see the traceback below).

Environment
===
1. nova version
# rpm -qa | grep nova
openstack-nova-console-16.0.3-2.el7.noarch
openstack-nova-common-16.0.3-2.el7.noarch
python2-novaclient-9.1.1-1.el7.noarch
openstack-nova-scheduler-16.0.3-2.el7.noarch
openstack-nova-api-16.0.3-2.el7.noarch
openstack-nova-placement-api-16.0.3-2.el7.noarch
python-nova-16.0.3-2.el7.noarch
openstack-nova-conductor-16.0.3-2.el7.noarch
openstack-nova-novncproxy-16.0.3-2.el7.noarch

Logs & Configs
==
Jan  3 06:59:13 host-172-23-59-134 systemd: Starting OpenStack Nova API 
Server...
Jan  3 06:59:16 host-172-23-59-134 nova-api: Traceback (most recent call last):
Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/bin/nova-api", line 6, in <module>
Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.cmd.api import main
Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 29, in <module>
Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova import config
Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/config.py", line 23, in <module>
Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.db.sqlalchemy import api as sqlalchemy_api
Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 925, in <module>
Jan  3 06:59:16 host-172-23-59-134 nova-api: retry_on_request=True)
Jan  3 06:59:16 host-172-23-59-134 nova-api: TypeError: __init__() got an 
unexpected keyword argument 'retry_on_request'
Jan  3 06:59:16 host-172-23-59-134 systemd: openstack-nova-api.service: main 
process exited, code=exited, status=1/FAILURE
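
The failing line in nova/db/sqlalchemy/api.py is a module-level decorator call
that passes 'retry_on_request' to oslo.db; newer oslo.db releases removed that
keyword because retry-on-request became the built-in behaviour. A hedged
sketch of the shape of the call and the usual remedy (function name and other
arguments are illustrative, not the exact Nova code):

    from oslo_db import api as oslo_db_api

    # Older decorator call that oslo.db 4.32.0 rejects:
    #   @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_request=True)

    # Compatible call: drop the removed keyword.
    @oslo_db_api.wrap_db_retry(max_retries=5)
    def some_db_api_call(context):
        pass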

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741001

Title:
  Got an unexpected keyword argument when starting nova-api

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After upgrading oslo.db, the nova-api service failed to start.

  Steps to reproduce
  ==
  * pip install oslo.db==4.24.0
  * starting the nova-api service works fine
  * `pip install --upgrade oslo.db` to 4.32.0
  * nova-api fails to start

  Expected result
  ===
  requirements.txt allows oslo.db >= 4.24.0, so 'nova-api' should also start
  and run OK with oslo.db 4.32.0 (the latest).

  Actual result
  =
  nova-api fails to start; with oslo.db 4.32.0 it gets an 'unexpected keyword
  argument' TypeError (see the traceback below).

  Environment
  ===
  1. nova version
  # rpm -qa | grep nova
  openstack-nova-console-16.0.3-2.el7.noarch
  openstack-nova-common-16.0.3-2.el7.noarch
  python2-novaclient-9.1.1-1.el7.noarch
  openstack-nova-scheduler-16.0.3-2.el7.noarch
  openstack-nova-api-16.0.3-2.el7.noarch
  openstack-nova-placement-api-16.0.3-2.el7.noarch
  python-nova-16.0.3-2.el7.noarch
  openstack-nova-conductor-16.0.3-2.el7.noarch
  openstack-nova-novncproxy-16.0.3-2.el7.noarch

  Logs & Configs
  ==
  Jan  3 06:59:13 host-172-23-59-134 systemd: Starting OpenStack Nova API 
Server...
  Jan  3 06:59:16 host-172-23-59-134 nova-api: Traceback (most recent call 
last):
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/bin/nova-api", line 6, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.cmd.api import main
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 29, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova import config
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/config.py", line 23, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: from nova.db.sqlalchemy import api as sqlalchemy_api
  Jan  3 06:59:16 host-172-23-59-134 nova-api: File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 925, in <module>
  Jan  3 06:59:16 host-172-23-59-134 nova-api: retry_on_request=True)
  Jan  3 06:59:16 host-172-23-59-134 nova-api: TypeError: __init__() got an 
unexpected keyword argument 'retry_on_request'
  Jan  3 06:59:16 host-172-23-59-134 systemd: openstack-nova-api.service: main 
process exited, code=exited, status=1/FAILURE

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741001/+subscriptions



[Yahoo-eng-team] [Bug 1739227] Re: test_create_subport_invalid_inherit_network_segmentation_type doesn't obey when parent network is vlan

2018-01-02 Thread Jakub Libosvar
Addressed by https://review.openstack.org/#/c/529124

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739227

Title:
  test_create_subport_invalid_inherit_network_segmentation_type doesn't
  obey when parent network is vlan

Status in neutron:
  Fix Released

Bug description:
  test_create_subport_invalid_inherit_network_segmentation_type uses the
  default network type. It tests that when 'inherit' is passed as the
  segmentation type, the API call fails, on the assumption that the default
  segmentation type is unsupported. When the test is executed against a
  deployment that uses a supported type as the default, the test fails
  because the API correctly handles this supported segmentation type, e.g.
  vlan.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1739227/+subscriptions



[Yahoo-eng-team] [Bug 1740998] [NEW] DHCP Agents of multiple Allocation Pools

2018-01-02 Thread Yi-Hsuan Lou
Public bug reported:

Problem:
There is a delay between creating a subnet with "enable_dhcp": true and
OpenStack establishing the DHCP ports, and I cannot control or know in advance
which IP is going to be used for the DHCP agent. The subnet-create response
cannot immediately provide the specific IP that is selected for the attached
device (network:dhcp). In my observations and experiments, when a subnet is
created with multiple allocation pools, the DHCP agent IP is chosen at random
from the different allocation pools. I have to call GET /v2.0/ports repeatedly,
mapping the networks and subnets, until the first IP shows up in the response.

It would be helpful to have a way to know which IP is used for the DHCP agent
across multiple allocation pools as part of the response to creating the
subnet.
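
A minimal sketch of the polling workaround described above, assuming a Neutron
endpoint and token (both placeholders) and filtering the ports API by
device_owner=network:dhcp:

    import time

    import requests

    NEUTRON_URL = "http://controller:9696"      # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<token>"}       # placeholder token


    def wait_for_dhcp_port_ips(network_id, timeout=60, interval=2):
        """Poll GET /v2.0/ports until a DHCP port on the network has an IP."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = requests.get(
                NEUTRON_URL + "/v2.0/ports",
                headers=HEADERS,
                params={"network_id": network_id,
                        "device_owner": "network:dhcp"})
            for port in resp.json().get("ports", []):
                ips = [ip["ip_address"] for ip in port.get("fixed_ips", [])]
                if ips:
                    return ips
            time.sleep(interval)
        return []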

--

I sent a POST request, following the Networking API v2.0, to create a subnet
with multiple allocation pools on the network, and I also enabled DHCP (true).

version: mitaka

POST /v2.0/subnets

#Response

{"subnet":{"allocation_pools":[
{"start":"172.18.0.1","end":"172.18.0.10"},
{"start":"172.18.0.20","end":"172.18.0.30"}],
"ipv6_address_mode":null,
"tenant_id":"b3746b09cacb421cba95ce1afd380762",
"subnetpool_id":null,
"gateway_ip":"172.18.0.254",
"cidr":"172.18.0.0/24",
"id":"73b8ffaa-6fed-425a-b00b-fb75dffdaa57",
"updated_at":"2018-01-03T04:18:22",
"network_id":"0926e907-8e69-4231-ae2f-a051b2d77bf2",
"dns_nameservers":["8.8.8.8"],
"description":"","name":"",
"enable_dhcp":true,
"created_at":"2018-01-03T04:18:22","ipv6_ra_mode":null,"host_routes":[],"ip_version":4}}



** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1740998

Title:
  DHCP Agents of multiple Allocation Pools

Status in neutron:
  New

Bug description:
  Problem:
  There is a delay between creating a subnet with "enable_dhcp": true and
  OpenStack establishing the DHCP ports, and I cannot control or know in
  advance which IP is going to be used for the DHCP agent. The subnet-create
  response cannot immediately provide the specific IP that is selected for the
  attached device (network:dhcp). In my observations and experiments, when a
  subnet is created with multiple allocation pools, the DHCP agent IP is
  chosen at random from the different allocation pools. I have to call GET
  /v2.0/ports repeatedly, mapping the networks and subnets, until the first IP
  shows up in the response.

  It would be helpful to have a way to know which IP is used for the DHCP
  agent across multiple allocation pools as part of the response to creating
  the subnet.

  --

  I sent a POST request, following the Networking API v2.0, to create a subnet
  with multiple allocation pools on the network, and I also enabled DHCP
  (true).

  version: mitaka

  POST /v2.0/subnets

  #Response

  {"subnet":{"allocation_pools":[
  {"start":"172.18.0.1","end":"172.18.0.10"},
  {"start":"172.18.0.20","end":"172.18.0.30"}],
  "ipv6_address_mode":null,
  "tenant_id":"b3746b09cacb421cba95ce1afd380762",
  "subnetpool_id":null,
  "gateway_ip":"172.18.0.254",
  "cidr":"172.18.0.0/24",
  "id":"73b8ffaa-6fed-425a-b00b-fb75dffdaa57",
  "updated_at":"2018-01-03T04:18:22",
  "network_id":"0926e907-8e69-4231-ae2f-a051b2d77bf2",
  "dns_nameservers":["8.8.8.8"],
  "description":"","name":"",
  "enable_dhcp":true,
  
"created_at":"2018-01-03T04:18:22","ipv6_ra_mode":null,"host_routes":[],"ip_version":4}}

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1740998/+subscriptions



[Yahoo-eng-team] [Bug 1688189] Re: Member create raises 500 error for unicode charater values

2018-01-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500735
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=05e9bdb656d9c120ed3cd6ffc8ae7dbf5614b5e4
Submitter: Zuul
Branch:master

commit 05e9bdb656d9c120ed3cd6ffc8ae7dbf5614b5e4
Author: neha.pandey 
Date:   Thu May 4 16:53:08 2017 +0530

Fix member create to handle unicode characters

If user passes member id as unicode characters in member create then
HTTP 500 internal server error is raised.
Reason: The unicode format check is not performed in db create member.

This patch fixes the member create by checking member id before
inserting in db. If member id is unicode then proper exception
is raised and same is handled in controller api.

Change-Id: I67be5e990d1269cbb986db7fff21a90a41af06e4
Closes-Bug: #1688189
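
One plausible shape of the pre-insert check the commit message describes,
sketched here for illustration only; the actual validation in Glance may
differ:

    def validate_member_id(member_id):
        # Illustrative check: reject member ids containing non-ASCII
        # characters before they reach the database, so the API layer
        # can return a 4xx instead of a 500.
        try:
            member_id.encode('ascii')
        except UnicodeEncodeError:
            raise ValueError(
                "Member id %r contains unsupported characters" % member_id)
        return member_id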


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1688189

Title:
  Member create raises 500 error for unicode charater values

Status in Glance:
  Fix Released

Bug description:
  If a user passes the member as a Unicode character while creating a member
  for an image, they will get an HTTP 500 error.

  Steps to reproduce:

  1. Create image
  2. Assign member to image using
 $ glance member-create e64f4347-51d6-4f97-8b6e-02e40c7ecb30 𠜎

 or using curl
 $ curl -g -i -X POST 
http://10.232.48.198:9292/v2/images/e64f4347-51d6-4f97-8b6e-02e40c7ecb30/members
 -H "User-Agent: python-glanceclient" -H "Content-Type: application/json" -H 
"X-Auth-Token: 
gABZCs01HPjCjKDkYnWQECtu9dYOxySXXrMH-lH4xO9xZBtl4MXNIPbTwkuCWSQ4EOh0tKvOPz55DmMdyOM0RYziM-qNE2Jikncq2oExZvf6k8OZYj_Vad5Q04p_uCU0Rg-9b94mVFfv_HaImCnT9ofO6RQZyNLOf1zc-AOzQPOMnjv9e4g"
 -d '{"member": "𠜎"}'

  Output (HTML error page):
  500 Internal Server Error
  The server has either erred or is incapable of performing the requested
  operation.

  API Logs:
  500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

  2017-05-04 12:18:14.460 TRACE glance.common.wsgi self._flush(objects)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2259, 
in _flush
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi 
transaction.rollback(_capture_exception=True)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
66, in __exit__
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi compat.reraise(exc_type, 
exc_value, exc_tb)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2223, 
in _flush
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi flush_context.execute()
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
389, in execute
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi rec.execute(self)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
548, in execute
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi uow
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
181, in save_obj
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi mapper, table, insert)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
835, in _emit_insert_statements
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi execute(statement, 
params)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 945, 
in execute
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi return meth(self, 
multiparams, params)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 263, 
in _execute_on_connection
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi return 
connection._execute_clauseelement(self, multiparams, params)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1053, 
in _execute_clauseelement
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi compiled_sql, 
distilled_params
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1189, 
in _execute_context
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi context)
  2017-05-04 12:18:14.460 TRACE glance.common.wsgi   File 
"/usr/local/lib/python2

[Yahoo-eng-team] [Bug 1736759] Re: Glance images can contain no data

2018-01-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527370
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=4dcbc30e317a25495bebc073bb9913d9fd9d43a2
Submitter: Zuul
Branch:master

commit 4dcbc30e317a25495bebc073bb9913d9fd9d43a2
Author: Stephen Finucane 
Date:   Tue Dec 12 10:57:24 2017 +

Compare against 'RequestIdProxy.wrapped'

Due to the 'glanceclient.common.utils.add_req_id_to_object' decorator,
an instance of 'glanceclient.common.utils.RequestIdProxy' is returned
for most calls in glanceclient. If we wish to compare to None, we have
to compare the contents of this wrapper and not the wrapper itself.

Unit tests are updated to highlight this.

Change-Id: I7dadf32d37ac2bda33a92c71d5882e9f23e38a82
Closes-Bug: #1736759
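
A hedged sketch of the comparison change the commit describes, from the
caller's point of view (variable names and the raised exception are
illustrative):

    def download_image(glance_client, image_id):
        # images.data() returns a RequestIdProxy wrapper, so comparing
        # the proxy itself to None never detects an empty image.
        image_chunks = glance_client.images.data(image_id)

        # Broken check: the proxy object is never None.
        # if image_chunks is None: ...

        # Fixed check: inspect the wrapped payload instead.
        if image_chunks.wrapped is None:
            raise RuntimeError("Image %s contains no data" % image_id)
        return image_chunks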


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736759

Title:
  Glance images can contain no data

Status in OpenStack Compute (nova):
  In Progress
Status in Glance Client:
  Fix Released

Bug description:
  Due to another bug [1], glance was returning None from
  'glanceclient.v2.images.Controller.data'. However, the glance
  documentation [2] states that this is a valid return value. We should
  handle this. Logs below.

  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1476448
  [2] 
https://docs.openstack.org/python-glanceclient/latest/reference/api/glanceclient.v2.images.html#glanceclient.v2.images.Controller.data

  ---

  2017-08-15 17:34:01.677 1 ERROR nova.image.glance 
[req-70546b57-a282-4552-8b9e-65be1871825a bd800a91d263411393899aff269084a0 
aaed41f2e25f494c9fadd01c340f25c8 - default default] Error writing to 
/var/lib/nova/instances/_base/cae3a4306eeb5643cb6caffbe1e3050645f8aee2.part: 
'NoneType' object is not iterable
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager 
[req-70546b57-a282-4552-8b9e-65be1871825a bd800a91d263411393899aff269084a0 
aaed41f2e25f494c9fadd01c340f25c8 - default default] [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] Instance failed to spawn: TypeError: 
'NoneType' object is not iterable
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] Traceback (most recent call last):
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2125, in 
_build_resources
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] yield resources
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1940, in 
_build_and_run_instance
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] block_device_info=block_device_info)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2793, in 
spawn
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] block_device_info=block_device_info)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3231, in 
_create_image
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] fallback_from_host)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3322, in 
_create_and_inject_local_root
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] instance, size, fallback_from_host)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6968, in 
_try_fetch_image_cache
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] size=size)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 241, 
in cache
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] *args, **kwargs)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 595, 
in cr

[Yahoo-eng-team] [Bug 1740951] [NEW] Unable to dump policy

2018-01-02 Thread Logan V
Public bug reported:

I'm having issues dumping policy from Keystone in Pike

root@aio1-keystone-container-398c6a0f:~# 
/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator --namespace 
keystone
WARNING:stevedore.named:Could not load keystone
Traceback (most recent call last):
  File "/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator", line 
11, in 
sys.exit(generate_policy())
  File 
"/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py",
 line 233, in generate_policy
_generate_policy(conf.namespace, conf.output_file)
  File 
"/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py",
 line 178, in _generate_policy
enforcer = _get_enforcer(namespace)
  File 
"/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py",
 line 74, in _get_enforcer
enforcer = mgr[namespace].obj
  File 
"/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/stevedore/extension.py",
 line 314, in __getitem__
return self._extensions_by_name[name]
KeyError: 'keystone'

Normally it works like this with Nova:
root@aio1-nova-api-os-compute-container-3589c25e:~# 
/openstack/venvs/nova-16.0.6/bin/oslopolicy-policy-generator --namespace nova
"os_compute_api:os-evacuate": "rule:admin_api"
"os_compute_api:servers:create": "rule:admin_or_owner"
"os_compute_api:os-extended-volumes": "rule:admin_or_owner"
"os_compute_api:servers:create:forced_host": "rule:admin_api"
"os_compute_api:os-aggregates:remove_host": "rule:admin_api"
...

IRC convo regarding this bug:
[04:00:26PM] logan- hello. I'm trying to use oslopolicy-policy-generator to 
dump the base RBAC so it can be combined with my policy overrides and provided 
to horizon. with nova i'm able to dump RBAC using 
"/path/to/nova/venv/bin/oslopolicy-policy-generator --namespace nova", but the 
doing the same with keystone using "keystone" or "identity" as the namespace 
does not work. 
[04:01:39PM] @lbragstad logan-: do you have keystone installed?
[04:01:57PM] @lbragstad let me see if i can recreate
[04:03:30PM] logan- o/ @lbragstad. yep keystone's installed. here's the venv 
and output for the oslopolicy command at the bottom: 
http://paste.openstack.org/raw/636624/
[04:03:53PM] @lbragstad huh - weird
[04:03:56PM] @lbragstad i can recreate
[04:04:48PM] ayoung @lbragstad, logan- I bet it is a dependency issue
[04:05:25PM] ayoung trying to load Keystone fails cuz some other library is 
missing, and I bet  that is pulled in from oslopolicy polgen
[04:07:05PM] ayoung oslo.policy.policies =
[04:07:05PM] ayoung # With the move of default policy in code list_rules 
returns a list of
[04:07:05PM] ayoung # the default defined polices.
[04:07:05PM] ayoung keystone = keystone.common.policies:list_rules
[04:07:12PM] ayoung that is from setup.cfg
[04:07:21PM] ayoung is that what iti is trying to load?
[04:07:36PM] @lbragstad well - it's should be an entrypoint in oslo.policy
[04:07:47PM] @lbragstad keystone is just responsible for exposing the namespace
[04:07:59PM] @lbragstad 
https://github.com/openstack/keystone/blob/master/config-generator/keystone-policy-generator.conf
[04:08:26PM] @lbragstad which is the same as what nova defines
[04:08:28PM] @lbragstad 
https://github.com/openstack/nova/blob/master/etc/nova/nova-policy-generator.conf
[04:09:31PM] ayoung seems like it is not registered
[04:12:16PM] ayoung yep, reproduced it here, too
[04:15:32PM] @lbragstad i think we're missing this entrypoint
[04:15:33PM] @lbragstad 
https://docs.openstack.org/oslo.policy/latest/user/usage.html#merged-file-generation
[04:15:45PM] @lbragstad which just needs something to return the _ENFORCER
[04:15:55PM] @lbragstad so keystone.common.policy:get_enforcer
[04:15:59PM] @lbragstad or something like that
[04:16:24PM] @lbragstad logan-: certainly a bug
[04:16:35PM] @lbragstad logan-: would you be able to open up something in 
launchpad?
[04:16:53PM] @lbragstad we can get a patch up shortly, i think we're missing 
something with how we wire up the entry poionts
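
Per the merged-file generation docs linked above, the namespace needs an
'oslo.policy.enforcer' entry point (e.g. "keystone =
keystone.common.policy:get_enforcer" in setup.cfg) that returns a configured
Enforcer. A hedged sketch of such a helper, illustrative rather than the
actual Keystone patch:

    from oslo_config import cfg
    from oslo_policy import policy


    def get_enforcer():
        # Entry point used by oslopolicy-policy-generator: load the
        # service configuration, then return an Enforcer the generator
        # can introspect for registered and overridden rules.
        cfg.CONF([], project='keystone')
        return policy.Enforcer(cfg.CONF)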

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1740951

Title:
  Unable to dump policy

Status in OpenStack Identity (keystone):
  New

Bug description:
  I'm having issues dumping policy from Keystone in Pike

  root@aio1-keystone-container-398c6a0f:~# 
/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator --namespace 
keystone
  WARNING:stevedore.named:Could not load keystone
  Traceback (most recent call last):
File "/openstack/venvs/keystone-16.0.6/bin/oslopolicy-policy-generator", 
line 11, in 
  sys.exit(generate_policy())
File 
"/openstack/venvs/keystone-16.0.6/lib/python2.7/site-packages/oslo_policy/generator.py",
 line 233, in generate_policy
  _generate_policy(conf.namespace

[Yahoo-eng-team] [Bug 1737894] Re: 'a bug for test(ignore it )'

2018-01-02 Thread Lance Bragstad
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1737894

Title:
  'a bug for test(ignore it )'

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  it is for test, ignore it please

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1737894/+subscriptions



[Yahoo-eng-team] [Bug 1740937] [NEW] Support Ansible

2018-01-02 Thread Marcos Alano
Public bug reported:

cloud-init already has support for Chef and Puppet (and, if I remember
correctly, also for Salt). It would be nice to support executing Ansible
playbooks in an elegant way.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1740937

Title:
  Support Ansible

Status in cloud-init:
  New

Bug description:
  cloud-init already has support for Chef and Puppet (and, if I remember
  correctly, also for Salt). It would be nice to support executing Ansible
  playbooks in an elegant way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1740937/+subscriptions



[Yahoo-eng-team] [Bug 1734117] Re: Scoping to project which is not on authentication domain is not working as expected

2018-01-02 Thread Lance Bragstad
Currently, authorization in keystone is explicit in that you must grant
users roles on projects or domains in order for them to get tokens
scoped to those targets. Another option that might be available to you
is to use role inheritance [0]. This API lets you grant roles to users
and groups and lets those roles be inherited by child projects in the
hierarchy.

[0] https://developer.openstack.org/api-ref/identity/v3/index.html#os-inherit-api
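
For example, the OS-INHERIT call sketched below grants a role on a domain with
inheritance to all projects in that domain; the endpoint, token, and IDs are
placeholders, and this is a sketch rather than a tested client:

    import requests

    KEYSTONE_URL = "http://controller:5000/v3"      # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<admin-token>"}     # placeholder token


    def grant_inherited_role(domain_id, user_id, role_id):
        # PUT /v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/
        #     {role_id}/inherited_to_projects
        # applies the role to every current and future project in the domain.
        url = ("%s/OS-INHERIT/domains/%s/users/%s/roles/%s"
               "/inherited_to_projects"
               % (KEYSTONE_URL, domain_id, user_id, role_id))
        resp = requests.put(url, headers=HEADERS)
        resp.raise_for_status()     # expect 204 No Content on success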

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1734117

Title:
  Scoping to project which is not on authentication domain is not
  working as expected

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  
  Having user "U" on domain "X" which has admin role on domain "X" and domain 
"Y"
  domain "X" and domain "Y" have projects "X1" and "Y1" respectively.

  Authenticating with user "U" on domain "X" and scoping to domain "X"
  OK.

  Authenticating with user "U" on domain "X" and scoping to domain "Y"
  OK.

  Authenticating with user "U" on domain "X" and scoping to project "X1" 
belonging to domain "X"
  OK.

  Authenticating with user "U" on domain "X" and scoping to project "Y1" 
belonging to domain "Y"
  FAILS.

  I expect the last authentication to succeed, since user has admin role
  on the domain of the project.

  This kind of authentication will succeed if admin role on project "Y"
  will be granted to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1734117/+subscriptions



[Yahoo-eng-team] [Bug 1735192] Re: OCF resource agent out of date or HA guide incorrect.

2018-01-02 Thread Lance Bragstad
This looks like a bug in the HA guide, which should be in the openstack-
manuals project. Adding openstack-manuals to this report for further
clarification.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1735192

Title:
  OCF resource agent out of date or HA guide incorrect.

Status in openstack-manuals:
  New

Bug description:
  The HA guide over at

  https://docs.openstack.org/ha-guide/controller-ha-identity.html

  recommends downloading an OCF resource agent from git. This OCF
  resource agent is now about 18 months old, dating from early 2016. It
  still uses the commands 'keystone-all' and 'keystone'. Both
  executables no longer exist, so the resource agent does not work as-
  is.

  The newer commands are 'keystone-manage' and 'openstack'

  In addition, 'keystone user-list' is the wrong syntax; it should now be
  'openstack user list'.
  Here's a diff of the changes I made:

  

  38c38
  < OCF_RESKEY_binary_default="keystone-manage"
  ---
  > OCF_RESKEY_binary_default="keystone-all"
  42c42
  < OCF_RESKEY_client_binary_default="openstack"
  ---
  > OCF_RESKEY_client_binary_default="keystone"
  250c250
  < user list > /dev/null 2>&1
  ---
  > user-list > /dev/null 2>&1

  

  While this fixes errors in the resource agent, it's still impossible
  for me to run keystone via the OCF, simply because, since those
  commands were removed, there's no way for me to stop keystone from
  running directly.

  In addition, I can't help but notice the HA guide only speaks about
  RHEL and SUSE. Where's the Ubuntu section for Keystone HA? It's there
  for the other components...

  ps aux | grep keystone

  returns 10 lines like these;

  keystone 10173  0.0  1.8 409096 111612 ?   Sl   06:25   0:17 (wsgi
  :keystone-pu -k start

  This means keystone runs under the apache2 web server. 
  Thus if we add the apache2 systemd script ('systemctl start apache2') to the 
pacemaker cluster as a cloned service, then it should be able to manage 
keystone. 

  Why would you want this, instead of just running systemd on separate
  hosts? Well, other services kind of 'depend' on keystone, as such you
  can create hooks in crmsh to ensure that the active/passive services,
  which actually require crmsh, only start after keystone is available.

  E.g. this code suffices to switch keystone from the default 'systemd' managed 
setup to a crm-managed setup on ubuntu or debian with N nodes; 
  
  node1> systemctl stop apache2
  node1> systemctl disable apache2 
  node2> systemctl stop apache2
  node2> systemctl disable apache2
   
  nodeN> systemctl stop apache2
  nodeN> systemctl disable apache2 
  node1> crm
  crm$ configure primitive p_keystone systemd:apache2 op monitor interval="30s" 
timeout="30s"
  crm$ configure clone keystone_clone p_keystone
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1735192/+subscriptions



[Yahoo-eng-team] [Bug 1739453] Re: MigrationsAdminTest fails with NoValidHost because resource claim swap in placement fails with 500

2018-01-02 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1739453

Title:
  MigrationsAdminTest fails with NoValidHost because resource claim swap
  in placement fails with 500

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  http://logs.openstack.org/84/529184/2/check/legacy-tempest-dsvm-
  py35/888d647/job-output.txt.gz#_2017-12-20_16_08_20_862659

  Fails in conductor here when swapping the resource allocation from the
  instance to the migration record:

  http://logs.openstack.org/84/529184/2/check/legacy-tempest-dsvm-
  py35/888d647/logs/screen-n-super-cond.txt.gz#_Dec_20_15_15_20_744636

  Dec 20 15:15:20.744636 ubuntu-xenial-citycloud-lon1-0001533915 
nova-conductor[21763]: WARNING nova.scheduler.client.report [None 
req-bd8ccca7-0a5a-4b8f-a129-bfd147f72fe5 tempest-MigrationsAdminTest-1384405657 
tempest-MigrationsAdminTest-1384405657] Unable to submit allocation for 
instance d44e9a86-5ebd-4229-b516-6428ace9cb09 (500 {"computeFault": {"code": 
500, "message": "The server has either erred or is incapable of performing the 
requested operation."}})
  Dec 20 15:15:20.747237 ubuntu-xenial-citycloud-lon1-0001533915 
nova-conductor[21763]: ERROR nova.conductor.tasks.migrate [None 
req-bd8ccca7-0a5a-4b8f-a129-bfd147f72fe5 tempest-MigrationsAdminTest-1384405657 
tempest-MigrationsAdminTest-1384405657] [instance: 
8befd9e7-4df0-40b6-97a0-1e268e00108f] Unable to replace resource claim on 
source host ubuntu-xenial-citycloud-lon1-0001533915 node 
ubuntu-xenial-citycloud-lon1-0001533915 for instance

  The failure in the placement logs:

  http://logs.openstack.org/84/529184/2/check/legacy-tempest-dsvm-
  py35/888d647/logs/screen-placement-api.txt.gz#_Dec_20_15_15_20_666337

  Dec 20 15:15:20.726882 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack Traceback (most 
recent call last):
  Dec 20 15:15:20.727033 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 82, in __call__
  Dec 20 15:15:20.727187 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack return 
req.get_response(self.application)
  Dec 20 15:15:20.727332 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/usr/local/lib/python3.5/dist-packages/webob/request.py", line 1327, in send
  Dec 20 15:15:20.727509 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack 
application, catch_exc_info=False)
  Dec 20 15:15:20.727670 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/usr/local/lib/python3.5/dist-packages/webob/request.py", line 1291, in 
call_application
  Dec 20 15:15:20.727830 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  Dec 20 15:15:20.727982 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/usr/local/lib/python3.5/dist-packages/webob/dec.py", line 131, in __call__
  Dec 20 15:15:20.730042 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  Dec 20 15:15:20.730225 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/usr/local/lib/python3.5/dist-packages/webob/dec.py", line 196, in call_func
  Dec 20 15:15:20.730397 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  Dec 20 15:15:20.730590 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/placement/microversion.py", line 117, 
in __call__
  Dec 20 15:15:20.730783 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack response = 
req.get_response(self.application)
  Dec 20 15:15:20.730957 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack   File 
"/usr/local/lib/python3.5/dist-packages/webob/request.py", line 1327, in send
  Dec 20 15:15:20.731124 ubuntu-xenial-citycloud-lon1-0001533915 
devstack@placement-api.service[15195]: ERROR nova.api.openstack 

[Yahoo-eng-team] [Bug 1740885] [NEW] Trunk ports are sometimes not tagged with internal vlan

2018-01-02 Thread Jakub Libosvar
Public bug reported:

It happens that tpi- patch ports between the trunk bridge and the integration
bridge don't have an internal VLAN tag in the other_config column in OVSDB. It
looks like a race between the trunk handler and the OVS agent.
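
A hedged reconstruction of the lookup that fails in the trace below: the OVS
firewall driver reads the port's VLAN tag from the Port row's other_config
column; if the trunk handler has not yet written it, the lookup raises
OVSFWTagNotFound. Names here are illustrative, not the exact neutron code:

    class OVSFWTagNotFound(Exception):
        """Stand-in for the neutron exception seen in the trace below."""


    def get_tag_from_other_config(bridge, port_name):
        # 'bridge' is assumed to expose an OVSDB read such as db_get_val().
        other_config = bridge.db_get_val('Port', port_name, 'other_config')
        try:
            return int(other_config['tag'])
        except (KeyError, TypeError, ValueError):
            raise OVSFWTagNotFound(
                "Cannot get tag for port %s from its other_config: %s"
                % (port_name, other_config))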

Example of failure: http://logs.openstack.org/92/527992/2/check/neutron-
tempest-plugin-dvr-multinode-
scenario/166eee3/logs/subnode-2/screen-q-agt.txt.gz#_Dec_14_18_31_25_801432

Trace example: 
Dec 14 18:31:25.801432 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None 
req-412981e2-ac8a-4fe8-8ec4-288bbb63e2a7 None None] Error while processing VIF 
ports: OVSFWTagNotFound: Cannot get tag for port tpi-6457d45d-b6 from its 
other_config: {}
Dec 14 18:31:25.801580 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
Dec 14 18:31:25.801708 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2080, in rpc_loop
Dec 14 18:31:25.801838 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
ovs_restarted)
Dec 14 18:31:25.801965 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 157, in 
wrapper
Dec 14 18:31:25.802089 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args, **kwargs)
Dec 14 18:31:25.802214 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1676, in process_network_ports
Dec 14 18:31:25.802345 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
Dec 14 18:31:25.802476 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 256, in 
setup_port_filters
Dec 14 18:31:25.802600 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.refresh_firewall(updated_devices)
Dec 14 18:31:25.802725 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 110, in 
decorated_function
Dec 14 18:31:25.802850 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent *args, 
**kwargs)
Dec 14 18:31:25.802983 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 209, in 
refresh_firewall
Dec 14 18:31:25.803103 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self._apply_port_filter(device_ids, update_filter=True)
Dec 14 18:31:25.803237 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 141, in 
_apply_port_filter
Dec 14 18:31:25.803366 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.firewall.update_port_filter(device)
Dec 14 18:31:25.803492 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/openvswitch_firewall/firewall.py", 
line 509, in update_port_filter
Dec 14 18:31:25.803612 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.prepare_port_filter(port)
Dec 14 18:31:25.803763 ubuntu-xenial-rax-ord-0001444835 
neutron-openvswitch-agent[17015]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/openvswitch_firewall/firewall.py", 
line 49

[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2018-01-02 Thread Corey Bryant
This bug was fixed in the package horizon - 3:11.0.4-0ubuntu1~cloud1
---

 horizon (3:11.0.4-0ubuntu1~cloud1) xenial-ocata; urgency=medium
 .
   * Fix create consistency group form exception (LP: #1681073)
 - d/p/fix_create_consistency_group_form_exception.patch


** Changed in: cloud-archive/ocata
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  Affected
  - UCA Mitaka, Ocata
  - Xenial, Zesty (Artful is incidentally added, please ignore it)

  After enabling consistency groups by changing api-paste.ini, trying to
  create a consistency group results in an error like the one below.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
  return handler(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
  exceptions.handle(request)
    File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
  success = workflow.finalize()
    File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
  if not self.handle(self.request, self.context):
    File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
  vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'
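
  The traceback above ends in a direct dict lookup of 'volume_backend_name'; a
  minimal defensive sketch of handling volume types that lack that extra spec
  (illustrative only, not the actual Horizon patch):

      def backends_for_volume_types(volume_types):
          # Volume types created without a 'volume_backend_name' extra
          # spec should not crash the workflow with a KeyError (the 500
          # above); collect only the types that define a backend and
          # let the form validate or report the rest.
          backends = {}
          for vol_type in volume_types:
              backend = vol_type.extra_specs.get('volume_backend_name')
              if backend is not None:
                  backends[vol_type.name] = backend
          return backends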

  [Test case]

  juju deploy bundle

  (this is same as original description)
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  [Regression Potential]
  This changes the Horizon UI and requires restarting the apache2 server, so
  there could be a very short outage for the OpenStack dashboard; this is not
  critical to the OpenStack services. If there is a code error, the Django
  server cannot start and the outage could be longer than that.

  [Others]

  upstream commit
  
https://git.openstack.org/cgit/openstack/horizon/commit/?id=89bb9268204a2316fc526d660f38d5517980f209

  [Original Description]

  Env: devstack master branch

  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  It will throw an exception.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *a

[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2018-01-02 Thread Corey Bryant
This bug was fixed in the package horizon - 3:10.0.5-0ubuntu1~cloud2
---

 horizon (3:10.0.5-0ubuntu1~cloud2) xenial-newton; urgency=medium
 .
   * Fix create consistency group form exception (LP: #1681073)
 - d/p/fix_create_consistency_group_form_exception.patch


** Changed in: cloud-archive/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  Affected
  - UCA Mitaka, Ocata
  - Xenial, Zesty (Artful is incidentally added, please ignore it)

  After enabling consistency groups by changing api-paste.ini, trying to
  create a consistency group results in an error like the one below.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
  return handler(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
  exceptions.handle(request)
    File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
  success = workflow.finalize()
    File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
  if not self.handle(self.request, self.context):
    File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
  vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'

  [Test case]

  juju deploy bundle

  (this is same as original description)
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  [Regression Potential]
  This changes the Horizon UI and requires restarting the apache2 server, so
  there could be a very short outage for the OpenStack dashboard; this is not
  critical to the OpenStack services. If there is a code error, the Django
  server cannot start and the outage could be longer than that.

  [Others]

  upstream commit
  
https://git.openstack.org/cgit/openstack/horizon/commit/?id=89bb9268204a2316fc526d660f38d5517980f209

  [Original Description]

  Env: devstack master branch

  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  It throws an exception.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *args, **kwargs)

[Yahoo-eng-team] [Bug 1740853] Re: Add identity API 3.0 Version support in CLIClient

2018-01-02 Thread Zhuang Changkun
** Project changed: glance => sahara-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1740853

Title:
  Add identity API 3.0 Version support in CLIClient

Status in OpenStack Data Processing ("Sahara") sahara-tests:
  New

Bug description:
  Tempest's CLIClient does not support identity API v3.0, so it reports a
  'server could not comply with the request' error when running the
  sahara-tests project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/sahara-tests/+bug/1740853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1740853] [NEW] Add identity API 3.0 Version support in CLIClient

2018-01-02 Thread changkun.zhuang
Public bug reported:

This change adds an identity API v3.0 interface to fix a 'server could not
comply with the request' error that occurs when running the sahara-tests
project. The sahara-tests project uses identity API v3.0, but Tempest's
CLIClient only supports v2.0.
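
For context, the extra scoping that identity API v3.0 requires (and that a
v2.0-only client cannot supply) can be illustrated with keystoneauth1. This is
a hypothetical, standalone snippet with placeholder endpoint and credentials,
not code from tempest or sahara-tests:

# Hypothetical example: authenticating against the identity v3 API with
# keystoneauth1. The endpoint and credentials below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(
    auth_url='http://controller:5000/v3',  # v3 endpoint, not /v2.0
    username='demo',
    password='secret',
    project_name='demo',
    user_domain_name='Default',            # domain scoping exists only in v3
    project_domain_name='Default',
)
sess = session.Session(auth=auth)
print(sess.get_token())  # fails if only the v2.0 identity API is available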

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1740853

Title:
  Add identity API 3.0 Version support in CLIClient

Status in Glance:
  New

Bug description:
  This change adds an identity API v3.0 interface to fix a 'server could not
comply with the request' error that occurs when running the sahara-tests
project. The sahara-tests project uses identity API v3.0, but Tempest's
CLIClient only supports v2.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1740853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681073] Re: Create Consistency Group form has an exception

2018-01-02 Thread Launchpad Bug Tracker
This bug was fixed in the package horizon - 2:9.1.2-0ubuntu3

---
horizon (2:9.1.2-0ubuntu3) xenial; urgency=medium

  [ Corey Bryant ]
  * The horizon 2:9.1.2-0ubuntu1 point release was released with quilt
patches pushed and now debian/patches won't apply. Revert the commits
to upstream code of the following patches:
- d/p/embedded-xstatic.patch
- d/p/fix-dashboard-django-wsgi.patch
- d/p/fix-dashboard-manage.patch
- d/p/fix-horizon-test-settings.patch
- d/p/ubuntu_settings.patch
- d/p/add-juju-environment-download.patch (all but
  juju.environments.template were pushed)

  [ Seyeong Kim ]
  * Fix create consistency group form exception (LP: #1681073)
- d/p/fix_create_consistency_group_form_exception.patch

 -- Corey Bryant   Thu, 14 Dec 2017 10:48:25
-0500

** Changed in: horizon (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1681073

Title:
  Create Consistency Group form has an exception

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  Affected
  - UCA Mitaka, Ocata
  - Xenial, Zesty (Artful was accidentally added; please ignore it)

  After enabling consistency groups by changing api-paste.ini, attempting
to create a consistency group fails with the error below.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 52, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in dec
  return view_func(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 84, in dec
  return view_func(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 71, in view
  return self.dispatch(request, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py", 
line 89, in dispatch
  return handler(request, *args, **kwargs)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 199, in post
  exceptions.handle(request)
    File "/opt/stack/horizon/horizon/exceptions.py", line 352, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/stack/horizon/horizon/workflows/views.py", line 194, in post
  success = workflow.finalize()
    File "/opt/stack/horizon/horizon/workflows/base.py", line 824, in finalize
  if not self.handle(self.request, self.context):
    File 
"/opt/stack/horizon/openstack_dashboard/dashboards/project/cgroups/workflows.py",
 line 323, in handle
  vol_type.extra_specs['volume_backend_name']
  KeyError: 'volume_backend_name'

  [Test case]

  juju deploy bundle

  (this is the same as the original description)
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  [Regression Potential]
  This change modifies the Horizon UI and requires restarting the apache2
server, so there could be a very short outage of the OpenStack dashboard;
this should not be critical to other OpenStack services. If the patch
contains a code error, the Django server will fail to start and the outage
could be longer.

  [Others]

  upstream commit
  
https://git.openstack.org/cgit/openstack/horizon/commit/?id=89bb9268204a2316fc526d660f38d5517980f209

  [Original Description]

  Env: devstack master branch

  Steps to reproduce:
  1. Go to admin/volume types panel
  2. Create volume type with any name
  3. Go to project/Consistency Groups panel
  4. Create Consistency Group and add the volume type we just created
  5. Submit Create Consistency Group form

  It throws an exception.

  Exception info:
  Internal Server Error: /project/cgroups/create/
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", 
line 132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/opt/stack/horizon/horizon/decorators.py", line 36, in de