[Yahoo-eng-team] [Bug 1837681] [NEW] Failed to create vgpu cause of IOError

2019-07-23 Thread Eric Xie
Public bug reported:

Description
===========
I used a 'Tesla V100' GPU to create a VM with a vGPU and got an error.

Steps to reproduce
==================
* Create flavor with resources:VGPU='1'
* Create vm with CLI `openstack server create --image 
27dc8e63-6d28-4f80-a6f4-e5a855a02e46 --flavor 
224e1385-7de4-4c0b-931d-a7431d329f78 --network net-1 ins-vgpu-t`

Expected result
===============
The server is created successfully.

Actual result
=============
The server goes to ERROR state.

Environment
===========
1. Exact version of OpenStack you are running. See the following
  # apt list --installed | grep nova

WARNING: apt does not have a stable CLI interface. Use with caution in
scripts.

nova-common/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-compute/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
nova-compute-kvm/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
python-nova/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
python-novaclient/xenial,xenial,now 2:9.1.1-1~u16.04 all [installed]

2. Which hypervisor did you use?
Libvirt + KVM


Logs & Configs
==============
2019-07-22 08:12:18,500.500 21346 ERROR nova.virt.libvirt.driver 
[req-4053b3df-ae7d-4378-b3c4-1c26e8482e24 4c31323efa7e4abf824399b63a687ff8 
187e1165ec2a40e9a72efab673e940d9 - default default] [instance: 
c9737cde-af6c-40b5-b719-2190428a0a03] Failed to start libvirt guest: 
libvirtError: internal error: qemu unexpectedly closed the monitor: 
2019-07-22T00:12:18.186786Z qemu-system-x86_64: -device 
vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/78c27f7b-e2ed-4fe8-afcf-84c6107620b9,bus=pci.0,addr=0x7:
 vfio error: 78c27f7b-e2ed-4fe8-afcf-84c6107620b9: error getting device from 
group 0: Input/output error

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837681

Title:
  Failed to create vgpu cause of IOError

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  I used a 'Tesla V100' GPU to create a VM with a vGPU and got an error.

  Steps to reproduce
  ==================
  * Create flavor with resources:VGPU='1'
  * Create vm with CLI `openstack server create --image 
27dc8e63-6d28-4f80-a6f4-e5a855a02e46 --flavor 
224e1385-7de4-4c0b-931d-a7431d329f78 --network net-1 ins-vgpu-t`

  Expected result
  ===============
  The server is created successfully.

  Actual result
  =============
  The server goes to ERROR state.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
# apt list --installed | grep nova

  WARNING: apt does not have a stable CLI interface. Use with caution in
  scripts.

  nova-common/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
  nova-compute/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
  nova-compute-kvm/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
  python-nova/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
  python-novaclient/xenial,xenial,now 2:9.1.1-1~u16.04 all [installed]

  2. Which hypervisor did you use?
  Libvirt + KVM

  
  Logs & Configs
  ==============
  2019-07-22 08:12:18,500.500 21346 ERROR nova.virt.libvirt.driver 
[req-4053b3df-ae7d-4378-b3c4-1c26e8482e24 4c31323efa7e4abf824399b63a687ff8 
187e1165ec2a40e9a72efab673e940d9 - default default] [instance: 
c9737cde-af6c-40b5-b719-2190428a0a03] Failed to start libvirt guest: 
libvirtError: internal error: qemu unexpectedly closed the monitor: 
2019-07-22T00:12:18.186786Z qemu-system-x86_64: -device 
vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/78c27f7b-e2ed-4fe8-afcf-84c6107620b9,bus=pci.0,addr=0x7:
 vfio error: 78c27f7b-e2ed-4fe8-afcf-84c6107620b9: error getting device from 
group 0: Input/output error

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1837681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823198] Re: NetworkAmbiguous traceback in nova-compute logs even though it's a user error

2019-07-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/650077
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ade6c9393632e830c2368825568769853fce3b99
Submitter: Zuul
Branch:master

commit ade6c9393632e830c2368825568769853fce3b99
Author: Matt Riedemann 
Date:   Thu Apr 4 13:04:33 2019 -0400

Handle Invalid exceptions as expected in attach_interface

The bug prompting this is a tempest test which is requesting
a port attachment to a server but not specifying a port or
network to use, so nova-compute looks for a valid network
and finds there are two and raises NetworkAmbiguous. This
is treated as a 400 error in the API but because this is a
synchronous RPC call from nova-api to nova-compute,
oslo.messaging logs an exception traceback for the unexpected
error. That traceback is pretty gross in the compute logs for
something that is a user error and the cloud operator has
nothing to do to fix it.

We can handle the traceback by registering our expected
exceptions for the attach_interface method with oslo.messaging,
which is what this change does.

While looking to just add NetworkAmbiguous it became clear that
lots of different user errors can be raised from this method
and none of those should result in a traceback, so this change
just expects Invalid and its subclasses.

The one exception is InterfaceAttachFailed which is raised when
something in allocate_port_for_instance or driver.attach_interface
fails. That is an unexpected situation so the parent class for
InterfaceAttachFailed is changed from Invalid to NovaException so
it continues to be traced in the logs as an exception.
InterfaceAttachFailedNoNetwork is kept as Invalid since it is a
user error (trying to attach an interface when the user has no
access to any networks).

test_tagged_attach_interface_raises is adjusted to show the
ExpectedException handling for one of the Invalid cases.

Change-Id: I927ff1d8c8f45405833d6012b7d7af37b98b10a0
Closes-Bug: #1823198
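The mechanism this change relies on can be sketched in plain Python. The
following is a simplified stand-in, not the actual nova/oslo.messaging code
(the real decorator is `oslo_messaging.expected_exceptions`, which wraps the
listed exception types in a marker the RPC server logs without a traceback);
the class and method names here are illustrative:

```python
class Invalid(Exception):
    """Stand-in for nova.exception.Invalid (user error -> HTTP 400)."""

class NetworkAmbiguous(Invalid):
    """Raised when no network/port is given and several networks exist."""

class ExpectedException(Exception):
    """Marker type the RPC server logs without a full traceback."""
    def __init__(self, cause):
        super().__init__(str(cause))
        self.cause = cause

def expected_exceptions(*allowed):
    """Simplified version of oslo_messaging.expected_exceptions."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except allowed as exc:
                # Known user error: wrap it so the server logs it quietly.
                raise ExpectedException(exc) from exc
        return wrapper
    return decorator

@expected_exceptions(Invalid)  # registering the base covers all subclasses
def attach_interface(network_id=None, port_id=None):
    if network_id is None and port_id is None:
        raise NetworkAmbiguous("Multiple possible networks found, "
                               "use a Network ID to be more specific.")
    return "attached"
```

Because the change registers the `Invalid` base class, every current and
future user-error subclass raised from `attach_interface` is covered without
enumerating each one.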


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1823198

Title:
  NetworkAmbiguous traceback in nova-compute logs even though it's a
  user error

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  There is a tempest negative test which is trying to attach a network
  interface to a server but it is not requesting a specific network or
  port:

  http://logs.openstack.org/58/637058/15/check/nova-
  next/f4e8140/logs/tempest.txt.gz#_2019-04-04_00_07_20_973

  2019-04-04 00:07:20.973 7584 INFO tempest.lib.common.rest_client 
[req-47bc59bc-01de-450c-9185-3c0b9dc4bf51 ] Request 
(AttachInterfacesTestJSON:test_create_list_show_delete_interfaces_by_network_port):
 400 POST 
https://104.130.239.125/compute/v2.1/servers/7f24a95b-b5be-4269-980b-a92f3c07c7ca/os-interface
 0.678s
  2019-04-04 00:07:20.974 7584 DEBUG tempest.lib.common.rest_client 
[req-47bc59bc-01de-450c-9185-3c0b9dc4bf51 ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: {"interfaceAttachment": {}}
  Response - Headers: {'date': 'Thu, 04 Apr 2019 00:07:20 GMT', 'server': 
'Apache/2.4.29 (Ubuntu)', 'openstack-api-version': 'compute 2.1', 
'x-openstack-nova-api-version': '2.1', 'vary': 
'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'content-type': 
'application/json; charset=UTF-8', 'content-length': '115', 
'x-openstack-request-id': 'req-47bc59bc-01de-450c-9185-3c0b9dc4bf51', 
'x-compute-request-id': 'req-47bc59bc-01de-450c-9185-3c0b9dc4bf51', 
'connection': 'close', 'status': '400', 'content-location': 
'https://104.130.239.125/compute/v2.1/servers/7f24a95b-b5be-4269-980b-a92f3c07c7ca/os-interface'}
  Body: b'{"badRequest": {"code": 400, "message": "Multiple possible 
networks found, use a Network ID to be more specific."}}' _log_request_full 
/opt/stack/new/tempest/tempest/lib/common/rest_client.py:440

  Which results in a NetworkAmbiguous error response from the compute
  API as seen above.

  The problem is this is a synchronous call from nova-api to nova-
  compute and results in an exception traceback in the nova-compute
  logs:

  logs.openstack.org/58/637058/15/check/nova-
  next/f4e8140/logs/screen-n-cpu.txt.gz?level=TRACE#_Apr_04_00_07_20_968719

  Apr 04 00:07:20.968719 ubuntu-bionic-rax-iad-0004701322 nova-compute[32008]: 
ERROR oslo_messaging.rpc.server [None req-47bc59bc-01de-450c-9185-3c0b9dc4bf51 
tempest-AttachInterfacesTestJSON-791222649 
tempest-AttachInterfacesTestJSON-791222649] Exception 

[Yahoo-eng-team] [Bug 1817455] Re: FWaaS V2 removing a port from the FW group set the FWG to INACTIVE

2019-07-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/670496
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=90a2707ccffd2175d76e0e2ac5a4cd87e5faa7ef
Submitter: Zuul
Branch:master

commit 90a2707ccffd2175d76e0e2ac5a4cd87e5faa7ef
Author: zhanghao2 
Date:   Fri Jul 12 07:08:28 2019 -0400

Fix bug when removing a port from the firewall group

When removing a port from the firewall group, the "last port" flag is
computed only from the old and new port sets and ignores the actual
number of ports remaining, which causes the fwg status to become
INACTIVE after the firewall group is updated regardless of whether any
port is still attached.

Change-Id: I887e06893f3e11031548767272e95afee40462d8
Closes-Bug: #1817455


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817455

Title:
  FWaaS V2 removing a port from the FW group set the FWG to INACTIVE

Status in neutron:
  Fix Released

Bug description:
  Creating a firewall group with policies and 2 interface ports.
  Now removing 1 of the ports using:
  openstack firewall group unset <firewall-group> --port <port>
  the firewall group is updated, and now has only 1 interface port, but its 
status is changed to INACTIVE.

  The reason seems to be in update_firewall_group_postcommit: 
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/service_drivers/agents/agents.py#L329
  last-port is set to True if no new ports are added, instead of setting it to 
True only if there are no ports left.
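The corrected check can be sketched as follows (hypothetical helper name,
not the actual neutron-fwaas code): the update should only be treated as
removing the last port when no ports remain afterwards.

```python
def last_port_removed(old_ports, new_ports):
    """Return True only when the update removed the group's final port."""
    port_removed = bool(set(old_ports) - set(new_ports))
    # The buggy behaviour effectively returned port_removed alone;
    # the group must stay ACTIVE while any port remains attached.
    return port_removed and not new_ports

# Removing one of two ports must NOT deactivate the group:
print(last_port_removed(['port-a', 'port-b'], ['port-b']))  # False
# Removing the only remaining port does:
print(last_port_removed(['port-b'], []))                    # True
```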

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1817455/+subscriptions



[Yahoo-eng-team] [Bug 1782922] Re: LDAP: changing user_id_attribute bricks group mapping

2019-07-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/649177
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=a1dc21f3d34ae34bc6a5c9acebc0eb752495ae7a
Submitter: Zuul
Branch:master

commit a1dc21f3d34ae34bc6a5c9acebc0eb752495ae7a
Author: Raildo Mascena 
Date:   Mon Apr 1 16:48:07 2019 -0300

Fixing dn_to_id function for cases where the id is not in the DN

The more common scenario is to return the uid as part of the RDN in a DN.
However, it is also a valid case for the uid not to be in the RDN, so we
need to search in LDAP based on the DN and return the uid from the entire
object.

Also, we do not support a multivalued id attribute in the DN, so the test
case covering this was adjusted to raise NotFound.

Closes-Bug: 1782922
Change-Id: I87a3bfa94b5907ce4c6b4eb8e124ec948b390bf2
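The lookup behaviour described in the commit message can be sketched like
this (hypothetical code and toy directory, not keystone's implementation):

```python
def dn_to_id(dn, id_attr, lookup_by_dn):
    """Derive a user id from a DN, falling back to a lookup by DN."""
    rdn_attr, _, rdn_value = dn.split(',')[0].partition('=')
    if rdn_attr.strip().lower() == id_attr.lower():
        # Common case: the id attribute is the first RDN of the DN.
        return rdn_value
    # The id is not in the RDN (e.g. CN-first Active Directory DNs with
    # user_id_attribute = sAMAccountName): fetch the whole entry instead.
    entry = lookup_by_dn(dn)
    return entry[id_attr]

# Toy stand-in for an LDAP search by DN:
directory = {'CN=Jane Doe,OU=users,DC=example,DC=com':
             {'sAMAccountName': 'jdoe'}}

print(dn_to_id('uid=jdoe,ou=users,dc=example,dc=com', 'uid', directory.get))
# id taken straight from the RDN
print(dn_to_id('CN=Jane Doe,OU=users,DC=example,DC=com', 'sAMAccountName',
               directory.get))
# id looked up from the full entry
```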


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1782922

Title:
  LDAP: changing user_id_attribute bricks group mapping

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in Ubuntu Cloud Archive train series:
  Triaged
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Triaged
Status in keystone source package in Bionic:
  Triaged
Status in keystone source package in Cosmic:
  Triaged
Status in keystone source package in Disco:
  Triaged
Status in keystone source package in Eoan:
  Triaged

Bug description:
  Env Details:
  Openstack version: Queens (17.0.5)
  OS: CentOS 7.5
  LDAP: Active Directory, Windows Server 2012R2

  We changed the user_id_attribute to sAMAccountName when configuring
  keystone. [ user_id_attribute = "sAMAccountName" ;
  group_members_are_ids = False ]. Unfortunately this bricks the group
  mapping logic in keystone.

  The relevant code in keystone:
  `list_users_in_group` [1] -> gets all groups from the LDAP server, and then 
calls `_transform_group_member_ids`. `_transform_group_member_ids` tries to 
match the user ids (for posixGroups e.g.) or the DN. However DN matching does 
not match the full DN. It rather takes the first RDN of the DN and computes the 
keystone user id [2]. The first RDN in Active Directory is the "CN". While the 
user-create part honors the user_id_attribute and takes "sAMAccountName" in our 
configuration. The generated user-ids in keystone now do not match anymore and 
hence group mapping is broken.

  A fix could be looking up the user by the DN received from the
  'member' attribute of a given group and comparing the configured
  'user_id_attribute' of the returned LDAP user entry with the user id
  stored in keystone. A quick fix could also be to mention this
  behavior in the documentation.

  /e: related
  https://bugs.launchpad.net/keystone/+bug/1231488/comments/19

  [1]
  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/common.py#L1285

  [2]
  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/core.py#L126

  [3]
  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap/common.py#L1296

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1782922/+subscriptions



[Yahoo-eng-team] [Bug 1837252] Re: IFLA_BR_AGEING_TIME of 0 causes flooding across bridges

2019-07-23 Thread sean mooney
Triaging as High, as flooding could lead to network disruption to guests
on multiple hosts.

I have root-caused this as a result of combining the code into a single
shared codepath between the OVS and Linux bridge plugins.

For OVS hybrid plug we set the ageing to 0 to prevent packet loss during
live migration:

https://github.com/openstack/os-vif/commit/fa4ff64b86e6e1b6399f7250eadbee9775c22d32#diff-f55bc78ffb4c1bbf81b88bf68673

However, this is not valid for Linux bridge in general. The change

https://github.com/openstack/os-vif/commit/1f6fed6a69e9fd386e421f3cacae97c11cdd7c75#diff-010d1833da7ca175fffc8c41a38497c2

which replaced the use of brctl in the Linux bridge driver, reused the
common code I introduced in

https://github.com/openstack/os-vif/commit/5027ce833c6fccaa80b5ddc8544d262c0bf99dbd#diff-cec1a2ac6413663c344b607129c39fab

and as a result it picked up the OVS ageing code, which was not
intentional.

I'll fix this shortly and backport it.
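The intended separation can be sketched as follows (a hypothetical helper,
not the actual os-vif code):

```python
def bridge_ageing_for(plug_type):
    """Choose the bridge ageing time per plug type.

    An ageing time of 0 makes every learned entry permanent and disables
    MAC learning: desirable for the OVS hybrid-plug intermediate bridge
    (avoids packet loss during live migration), but on a plain Linux
    bridge it turns all unicast traffic into flooding.
    """
    if plug_type == 'ovs_hybrid':
        return 0      # disable learning on the intermediate bridge
    return None       # leave the kernel default ageing time untouched

print(bridge_ageing_for('ovs_hybrid'))
print(bridge_ageing_for('linux_bridge'))
```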

** Changed in: os-vif
   Importance: Undecided => High

** Changed in: os-vif
   Status: New => Confirmed

** Changed in: os-vif
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova
   Status: New => Invalid

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837252

Title:
  IFLA_BR_AGEING_TIME of 0 causes flooding across bridges

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Confirmed

Bug description:
  Release: OpenStack Stein
  Driver: LinuxBridge

  Using Stein w/ the LinuxBridge mech driver/agent, we have found that
  traffic is being flooded across bridges. Using tcpdump inside an
  instance, you can see unicast traffic for other instances.

  We have confirmed the macs table shows the aging timer set to 0 for
  permanent entries, and the bridge is NOT learning new MACs:

  root@lab-compute01:~# brctl showmacs brqd0084ac0-f7
  port no   mac addr            is local?   ageing timer
    5       24:be:05:a3:1f:e1   yes         0.00
    5       24:be:05:a3:1f:e1   yes         0.00
    1       fe:16:3e:02:62:18   yes         0.00
    1       fe:16:3e:02:62:18   yes         0.00
    7       fe:16:3e:07:65:47   yes         0.00
    7       fe:16:3e:07:65:47   yes         0.00
    4       fe:16:3e:1d:d6:33   yes         0.00
    4       fe:16:3e:1d:d6:33   yes         0.00
    9       fe:16:3e:2b:2f:f0   yes         0.00
    9       fe:16:3e:2b:2f:f0   yes         0.00
    8       fe:16:3e:3c:42:64   yes         0.00
    8       fe:16:3e:3c:42:64   yes         0.00
   10       fe:16:3e:5c:a6:6c   yes         0.00
   10       fe:16:3e:5c:a6:6c   yes         0.00
    2       fe:16:3e:86:9c:dd   yes         0.00
    2       fe:16:3e:86:9c:dd   yes         0.00
    6       fe:16:3e:91:9b:45   yes         0.00
    6       fe:16:3e:91:9b:45   yes         0.00
   11       fe:16:3e:b3:30:00   yes         0.00
   11       fe:16:3e:b3:30:00   yes         0.00
    3       fe:16:3e:dc:c3:3e   yes         0.00
    3       fe:16:3e:dc:c3:3e   yes         0.00

  root@lab-compute01:~# bridge fdb show | grep brqd0084ac0-f7
  01:00:5e:00:00:01 dev brqd0084ac0-f7 self permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e master brqd0084ac0-f7 permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 vlan 1 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 master brqd0084ac0-f7 permanent
  fe:16:3e:5c:a6:6c dev tap61107236-1e 

[Yahoo-eng-team] [Bug 1837655] [NEW] glance can't be used with MySQL 8.0.17 or newer because 'member' became keyword

2019-07-23 Thread Nikita Gerasimov
Public bug reported:

Since MySQL 8.0.17 'member' is reserved keyword so it can't be column name in 
'image_members' table.
https://dev.mysql.com/doc/refman/8.0/en/keywords.html
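A minimal illustration of the failure (not glance's actual migration DDL;
`MEMBER` joined MySQL's reserved-word list in 8.0.17, so the generated
CREATE TABLE fails unless the column name is quoted):

```sql
-- Fails on MySQL 8.0.17+ (MEMBER is reserved there):
--   CREATE TABLE image_members (member VARCHAR(255) NOT NULL);
-- Works: quote the identifier with backticks:
CREATE TABLE image_members (`member` VARCHAR(255) NOT NULL);
```

SQLAlchemy quotes identifiers automatically only when its MySQL dialect
knows a word is reserved; a dialect released before MySQL 8.0.17 does not
know about `member`, which is presumably why the unquoted DDL is emitted
here.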

# rpm -qf /usr/bin/glance-manage
openstack-glance-17.0.0-2.el7.noarch
$ glance-manage --config-file /etc/glance/glance-api.conf db_sync
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
CRITICAL [glance] Unhandled error
Traceback (most recent call last):
  File "/bin/glance-manage", line 10, in <module>
sys.exit(main())
  File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 563, in 
main
return CONF.command.action_fn()
  File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 395, in 
sync
self.command_object.sync(CONF.command.version)
  File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 165, in 
sync
self.expand(online_migration=False)
  File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 222, in 
expand
self._sync(version=expand_head)
  File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 180, in 
_sync
alembic_command.upgrade(a_config, version)
  File "/usr/lib/python2.7/site-packages/alembic/command.py", line 254, in 
upgrade
script.run_env()
  File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 425, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file
module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 141, in 
load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 88, in <module>
run_migrations_online()
  File 
"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/alembic_migrations/env.py",
 line 83, in run_migrations_online
context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 
836, in run_migrations
self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 
330, in run_migrations
step.migration_fn(**kw)
  File 
"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/alembic_migrations/versions/liberty_initial.py",
 line 37, in upgrade
add_images_tables.upgrade()
  File 
"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py",
 line 200, in upgrade
_add_image_members_table()
  File 
"/usr/lib/python2.7/site-packages/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py",
 line 155, in _add_image_members_table
extend_existing=True)
  File "<string>", line 8, in create_table
  File "<string>", line 3, in create_table
  File "/usr/lib/python2.7/site-packages/alembic/operations/ops.py", line 1120, 
in create_table
return operations.invoke(op)
  File "/usr/lib/python2.7/site-packages/alembic/operations/base.py", line 319, 
in invoke
return fn(self, operation)
  File "/usr/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 
101, in create_table
operations.impl.create_table(table)
  File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 194, in 
create_table
self._exec(schema.CreateTable(table))
  File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in 
_exec
return conn.execute(construct, *multiparams, **params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
948, in execute
return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in 
_execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1009, in _execute_ddl
compiled
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1200, in _execute_context
context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1409, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 
203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1193, in _execute_context
context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 
507, in do_execute
cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 170, in 
execute
result = self._query(query)
  File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 328, in 
_query
conn.query(q)
  File 

[Yahoo-eng-team] [Bug 1832164] Re: SADeprecationWarning: The joinedload_all() function is deprecated, and will be removed in a future release. Please use method chaining with joinedload() instead

2019-07-23 Thread Matt Riedemann
sqlalchemy docs:

https://docs.sqlalchemy.org/en/13/changelog/migration_09.html#new-query-
options-api-load-only-option

Added cinder since this also hits in cinder jobs in logstash.

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832164

Title:
  SADeprecationWarning: The joinedload_all() function is deprecated, and
  will be removed in a future release.  Please use method chaining with
  joinedload() instead

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  The following warning is output in the unit tests.

  b'/tmp/nova/nova/db/sqlalchemy/api.py:1871: SADeprecationWarning: The 
joinedload_all() function is deprecated, and will be removed in a future 
release.  Please use method chaining with joinedload() instead'
  b"  options(joinedload_all('security_groups.rules')).\\"

  * http://logs.openstack.org/53/566153/43/check/openstack-tox-
  py36/b7edf77/job-output.txt.gz
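The replacement pattern looks like this: a minimal sketch assuming
SQLAlchemy 1.4+, with hypothetical toy models rather than nova's actual
schema.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (Session, declarative_base, joinedload,
                            relationship)

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    security_groups = relationship('SecurityGroup')

class SecurityGroup(Base):
    __tablename__ = 'security_groups'
    id = Column(Integer, primary_key=True)
    instance_id = Column(Integer, ForeignKey('instances.id'))
    rules = relationship('Rule')

class Rule(Base):
    __tablename__ = 'rules'
    id = Column(Integer, primary_key=True)
    group_id = Column(Integer, ForeignKey('security_groups.id'))

engine = create_engine('sqlite://')  # in-memory DB for illustration
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Deprecated: options(joinedload_all('security_groups.rules'))
    # Replacement: chain joinedload(), one call per relationship hop.
    instances = session.query(Instance).options(
        joinedload(Instance.security_groups)
        .joinedload(SecurityGroup.rules)
    ).all()
    print(len(instances))
```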

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1832164/+subscriptions



[Yahoo-eng-team] [Bug 1832164] Re: SADeprecationWarning: The joinedload_all() function is deprecated, and will be removed in a future release. Please use method chaining with joinedload() instead

2019-07-23 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Status: New => In Progress

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova/stein
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832164

Title:
  SADeprecationWarning: The joinedload_all() function is deprecated, and
  will be removed in a future release.  Please use method chaining with
  joinedload() instead

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  The following warning is output in the unit tests.

  b'/tmp/nova/nova/db/sqlalchemy/api.py:1871: SADeprecationWarning: The 
joinedload_all() function is deprecated, and will be removed in a future 
release.  Please use method chaining with joinedload() instead'
  b"  options(joinedload_all('security_groups.rules')).\\"

  * http://logs.openstack.org/53/566153/43/check/openstack-tox-
  py36/b7edf77/job-output.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1832164/+subscriptions



[Yahoo-eng-team] [Bug 1819043] Re: Lintian warning: package-installs-into-obsolete-dir etc/bash_completion.d/

2019-07-23 Thread Dan Watkins
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819043

Title:
  Lintian warning: package-installs-into-obsolete-dir
  etc/bash_completion.d/

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  When building the latest cloud-init package, lintian reports:

  W: cloud-init: package-installs-into-obsolete-dir etc/bash_completion.d/ : 
^etc/bash_completion.d/ -> usr/share/bash-completion/completions Ensure new 
filename matches stricter requirements (see https://bugs.debian.org/776954 and 
https://bugs.debian.org/814599)
  W: cloud-init: package-installs-into-obsolete-dir 
etc/bash_completion.d/cloud-init : ^etc/bash_completion.d/ -> 
usr/share/bash-completion/completions Ensure new filename matches stricter 
requirements (see https://bugs.debian.org/776954 and 
https://bugs.debian.org/814599)

  (These paths are determined in setup.py, so this isn't a trivial fix
  in the packaging; hence the bug.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819043/+subscriptions



[Yahoo-eng-team] [Bug 1819994] Re: cloud-init selects sysconfig netconfig renderer if network-manager is installed on Ubuntu

2019-07-23 Thread Dan Watkins
Hi Amy et al,

I'm going to mark this Fix Released, as 19.1 has made its way in to
Ubuntu.  Please let us know if you don't think this is fixed!


Dan

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819994

Title:
  cloud-init selects sysconfig netconfig renderer if network-manager is
  installed on Ubuntu

Status in cloud-init:
  Fix Released
Status in MAAS:
  Invalid
Status in Provider for Plainbox - Canonical Certification Server:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Configuration:
  UEFI/BIOS:TEE136S
  IMM/BMC:  CDI333V
  CPU:  Intel(R) Xeon(R) Platinum 8253 CPU @ 2.20GHz
  Memory:   16G DIMM * 12
  Raid card:ThinkSystem RAID 530-8i 
  NIC Card: Intel X722 LOM

  Reproduce Steps:
  1. Configure "network" as the first boot option
  2. Power on the machine
  3. Visit TC through a web browser and commission the machine
  4. When commissioning completes, deploy Ubuntu 18.04 LTS on the SUT
  5. The error appeared during OS deploy.

  Deploy errors like the following(you can view the attachment for
  details):

  cloud-init[] Date_and_time - handlers.py[WARNING]: failed posting
  event: start: modules-final/config-: running config-

  cloud-init[] Date_and_time - handlers.py[WARNING]: failed posting
  event: finish: modules-final: SUCCESS: running modules for final

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819994/+subscriptions



[Yahoo-eng-team] [Bug 1828641] Re: Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module changes from tip

2019-07-23 Thread Dan Watkins
** Changed in: cloud-init (Ubuntu)
   Status: New => Invalid

** Changed in: cloud-init
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1828641

Title:
  Xenial, Bionic, Cosmic revert ubuntu-advantage-tools config module
  changes from tip

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Invalid
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Bionic:
  Fix Released
Status in cloud-init source package in Cosmic:
  Fix Released

Bug description:
  == Begin SRU Template ==
  [Impact]
  Ubuntu-advantage-tools package version 19 introduced a new command line 
client that is backwards incompatible with previous ubuntu-advantage-tools 
releases.

  Changes in cloud-init 19.1 support only the new ubuntu-advantage-tools
  CLI.

  To avoid breaking the cc_ubuntu_advantage cloud-config module, we need
  to revert changes in cloud-init tip to avoid tracebacks for customers
  in Xenial, Bionic and Cosmic using ubuntu-advantage: declarations in
  their cloud-config.

  Once ubuntu-advantage-tools >= 19 is SRU'd to Xenial, Bionic and
  Cosmic, this debian patch can be dropped.

  [Test Case]
  # Use old ubuntu-advantage cloud-config syntax to enable livepatch on a kvm
  instance

  Note: there are a number of expected failures
   * Xenial: Bug: #1830154 snap not in $PATH
   * Cosmic: livepatch is not supported on Cosmic
   * Disco: Bug: #1829788  KeyError traceback

  [Test Case]
  cat > pre-disco-ua.yaml <
  EOF

  cat > disco-ua.yaml <
    enable: [livepatch]
  EOF

  cat > setup_proposed.sh = 19.1 is released into Xenial, Bionic
  and Cosmic. Carry a debian patch file to revert upstream cloud-init
  config module changes for cc_ubuntu_advantage.py.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1837635] [NEW] HA router state change from "standby" to "master" should be delayed

2019-07-23 Thread Rodolfo Alonso
Public bug reported:

Currently, when an HA state change occurs, the agent executes a series of
actions [1]: it updates the metadata proxy, updates the prefix delegation,
runs the L3 extension "ha_state_change" methods, updates the radvd
status and notifies the server.

When a switch-over is done in a system with more than two routers (one in
"active" mode and the others in "standby"), the "keepalived" process [2]
on each "standby" server will set the virtual IP on the HA interface and
advertise it. If another router's HA interface has the same priority (by
default in Neutron, the HA instances of the same router ID all have the
same priority, 50) but a higher IP [3], the HA interface of this instance
will have its VIPs and routes deleted and will become "standby" again.
E.g.: [4]

In some cases, we have detected that when the master controller is
rebooted, the change from "standby" to "master" of the other two servers
is detected, but the change from "master" to "standby" of the server with
the lower IP (as commented before) is not registered, because the Neutron
server is still not accessible (the master controller was rebooted). This
status change is sometimes lost. This is the situation in which both
"standby" servers become "master" but the "master"-to-"standby" transition
of one of them is lost.

1) INITIAL STATUS
(overcloud) [stack@undercloud-0 ~]$ neutron l3-agent-list-hosting-router router
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
+--------------------------------------+--------------------------+----------------+-------+----------+
| id                                   | host                     | admin_state_up | alive | ha_state |
+--------------------------------------+--------------------------+----------------+-------+----------+
| 4056cd8e-e062-4f45-bc83-d3eb51905ff5 | controller-0.localdomain | True           | :-)   | standby  |
| 527d6a6c-8d2e-4796-bbd0-8b41cf365743 | controller-2.localdomain | True           | :-)   | standby  |
| edbdfc1c-3505-4891-8d00-f3a6308bb1de | controller-1.localdomain | True           | :-)   | active   |
+--------------------------------------+--------------------------+----------------+-------+----------+

2) CONTROLLER 1 REBOOTED
neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.
+--------------------------------------+--------------------------+----------------+-------+----------+
| id                                   | host                     | admin_state_up | alive | ha_state |
+--------------------------------------+--------------------------+----------------+-------+----------+
| 4056cd8e-e062-4f45-bc83-d3eb51905ff5 | controller-0.localdomain | True           | :-)   | active   |
| 527d6a6c-8d2e-4796-bbd0-8b41cf365743 | controller-2.localdomain | True           | :-)   | active   |
| edbdfc1c-3505-4891-8d00-f3a6308bb1de | controller-1.localdomain | True           | :-)   | standby  |
+--------------------------------------+--------------------------+----------------+-------+----------+


The aim of this bug is to make this problem public and to propose a patch
that delays the transition from "standby" to "master", letting keepalived,
among all the instances running on the HA servers, decide which one of
them is the "master" server.
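For illustration, keepalived itself exposes a knob for this kind of delay on a VRRP instance. A minimal sketch follows; the interface name, router ID and VIP are placeholders (not values from this deployment), and this is not the patch being proposed here, just the analogous keepalived option:

```
vrrp_instance VR_1 {
    state BACKUP
    interface ha-xxxxxxxx-xx        # placeholder HA interface name
    virtual_router_id 1
    priority 50                     # Neutron default: same priority on every instance
    preempt_delay 300               # wait 300s before preempting and becoming master
    virtual_ipaddress {
        169.254.0.1/24 dev ha-xxxxxxxx-xx
    }
}
```

The proposed Neutron patch would instead delay the agent-side "standby" to "master" notification, but the intent is the same: give keepalived time to settle on a single master.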


[1] 
https://github.com/openstack/neutron/blob/stable/stein/neutron/agent/l3/ha.py#L115-L134
[2] https://www.keepalived.org/
[3] This method is used by keepalived to define which router is predominant and 
must be master.
[4] http://paste.openstack.org/show/754760/

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837635

Title:
  HA router state change from "standby" to "master" should be delayed

Status in neutron:
  New

Bug description:
  Currently, when an HA state change occurs, the agent executes a series
  of actions [1]: it updates the metadata proxy, updates the prefix
  delegation, runs the L3 extension "ha_state_change" methods, updates
  the radvd status and notifies the server.

  When a switch-over is done in a system with more than two routers (one
  in "active" mode and the others in "standby"), the "keepalived" process
  [2] on each "standby" server will set the virtual IP on the HA interface
  and advertise it. If another router's HA interface has the same priority
  (by default in Neutron, the HA instances of the same router ID all have
  the same priority, 50) but a higher IP [3], the HA interface of this
  instance will have its VIPs and routes deleted and will become "standby"
  again. E.g.: [4]

  In some cases, we have detected that when the master controller is
 

[Yahoo-eng-team] [Bug 1791111] Re: allow change password upon first use as user

2019-07-23 Thread Radomir Dopieralski
I'm working on implementing this for Horizon, and I have a working view
where the user can change their password
(https://review.opendev.org/672289). However, for this to be actually
usable, the user has to know their user_id somehow. As far as I can
tell, there is no way to determine the user_id from username without
first authenticating, so the users still can't change their expired
passwords.
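For context, the Keystone v3 change-password call (the one that works even for an expired password) is addressed by user ID, not username, which is the crux of the problem described above. A sketch, with the user ID taken from the report below in this thread and placeholder credentials:

```python
# The Identity v3 "change password for user" call is keyed by user ID:
#   POST /v3/users/{user_id}/password
# so a user who only knows their username cannot build this request.
user_id = "bd3cc251fe694b15be88c443aa752ec1"  # from the report in this thread
path = "/v3/users/{}/password".format(user_id)
body = {
    "user": {
        "original_password": "old-secret",  # placeholder credentials
        "password": "new-secret",
    }
}
print(path)
```

No token is required for this call, but the URL itself requires knowing the user_id.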

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1791111

Title:
  allow change password upon first use as user

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Identity (keystone):
  New
Status in python-openstackclient:
  New

Bug description:
  It's impossible to reset your password at the user level if
  "change_password_upon_first_use" is set.

  keystone.conf:
  [security_compliance]
  change_password_upon_first_use = True

  For new users it's impossible to reset the password via keystone. The
  password can only be reset by an admin, the one who created the user in
  the first place. So change_password_upon_first_use is effectively
  useless.

  (test2@test) [root@controller1 ~]# openstack user password set
  The password is expired and needs to be changed for user: 
bd3cc251fe694b15be88c443aa752ec1. (HTTP 401) (Request-ID: 
req-cdc7ddaf-d2ec-49ac-9708-2693811eb819)

  Desired situation: a user can reset their own password on first use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1791111/+subscriptions



[Yahoo-eng-team] [Bug 1837553] [NEW] Neutron api-ref should mention that list of e.g. IDs is supported in GET requests

2019-07-23 Thread Slawek Kaplonski
Public bug reported:

It is supported behaviour that a request like

GET /v2.0/ports?id==

treats the ID filter as a list and returns the ports with those 2 IDs.

This works the same way for other resources too, but it isn't documented
in the API-REF, so the docs should be updated.
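The behaviour in question amounts to repeating the query parameter once per value. A small sketch with Python's standard library; the port IDs are made up for illustration:

```python
from urllib.parse import urlencode

# Repeating a filter key builds a list filter on the Neutron side:
# GET /v2.0/ports?id=<id1>&id=<id2> returns both ports.
port_ids = [
    "11111111-2222-3333-4444-555555555555",  # made-up IDs
    "66666666-7777-8888-9999-000000000000",
]
query = urlencode({"id": port_ids}, doseq=True)  # doseq repeats the key per value
url = "/v2.0/ports?" + query
print(url)
```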

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: api-ref doc low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837553

Title:
  Neutron api-ref should mention that list of e.g. IDs is supported in
  GET requests

Status in neutron:
  Confirmed

Bug description:
  It is supported behaviour that a request like

  GET /v2.0/ports?id==

  treats the ID filter as a list and returns the ports with those 2 IDs.

  This works the same way for other resources too, but it isn't documented
  in the API-REF, so the docs should be updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1837553/+subscriptions



[Yahoo-eng-team] [Bug 1378904] Re: renaming availability zone doesn't modify host's availability zone

2019-07-23 Thread Matt Riedemann
** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Andrey Volkov (avolkov)

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
 Assignee: (unassigned) => Andrey Volkov (avolkov)

** Changed in: nova/rocky
   Status: New => Fix Released

** Changed in: nova/rocky
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378904

Title:
  renaming availability zone doesn't modify host's availability zone

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Released

Bug description:
  Hi,

  After renaming our availability zones via Horizon Dashboard, we
  couldn't migrate any "old" instance anymore, the scheduler returning
  "No valid Host found"...

  After searching, we found in the nova DB `instances` table, the
  "availability_zone" field contains the name of the availability zone,
  instead of the ID ( or maybe it is intentional ;) ).

  So renaming AZ leaves the hosts created prior to this rename orphan
  and the scheduler cannot find any valid host for them...

  Our openstack install is on debian wheezy, with the icehouse
  "official" repository from archive.gplhost.com/debian/, up to date.

  If you need any more infos, I'd be glad to help.

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378904/+subscriptions



[Yahoo-eng-team] [Bug 1837552] [NEW] neutron-tempest-with-uwsgi job finish with timeout very often

2019-07-23 Thread Slawek Kaplonski
Public bug reported:

Example of such timeouted job:
http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/job-output.txt.gz

I have checked only this one example failure so far, but it looks to me
like everything was running fine for some time and then suddenly all
tests started failing, starting exactly at:

http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/job-output.txt.gz#_2019-07-22_22_43_40_285453

In apache logs I see that around this time many "500" responses started
happening:

http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/controller/logs/apache/access_log.txt.gz

But in neutron-api logs there is nothing wrong:
http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/controller/logs/screen-neutron-api.txt.gz
 

It has to be investigated why it happens like that.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837552

Title:
  neutron-tempest-with-uwsgi job finish with timeout very often

Status in neutron:
  Confirmed

Bug description:
  Example of such timeouted job:
  http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/job-output.txt.gz

  I have checked only this one example failure so far, but it looks to me
  like everything was running fine for some time and then suddenly all
  tests started failing, starting exactly at:

  http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/job-output.txt.gz#_2019-07-22_22_43_40_285453

  In apache logs I see that around this time many "500" responses
  started happening:

  http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/controller/logs/apache/access_log.txt.gz

  But in neutron-api logs there is nothing wrong:
  
http://logs.openstack.org/22/660722/15/check/neutron-tempest-with-uwsgi/721b94f/controller/logs/screen-neutron-api.txt.gz
   

  It has to be investigated why it happens like that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1837552/+subscriptions



[Yahoo-eng-team] [Bug 1830782] Re: stein: openstack-dashboard gui not showing newly created project/users under newly domain

2019-07-23 Thread Alex Kavanagh
On further debugging, it appears that there is an issue in horizon
(caused by a change in keystone) with the use of a scoped token for the
admin user when multi-domain is enabled.

The scenario is as follows:

1. Multi domain is enabled.
2. The admin user is logged in with credentials using an admin domain.
3. Domain context is set to a domain in which the admin user is not a member.
4. The admin user attempts to list the projects or users.
5. A domain scoped token is used by horizon to list the projects, due to the 
code in [1]
6. No users are returned from keystone because, due to change [2], the users
are filtered: the token contains the admin domain, not the target domain of
the users being listed.

It's quite involved!

I'm not sure if the issue is:

1. Keystone shouldn't be filtering this list.
2. Horizon shouldn't be using a domain scoped token for the admin user (e.g. 
the openstack CLI doesn't use a domain scoped token to list the users in the 
domain, or an admin user).
3. Something else.

Horizon appears to only start using the domain scoped token after the
domain context is set.  Also, it only appears (in my testing) to use it
for the user list and (maybe) project list -- I focussed on the user
list.  It looks like a new token is requested to perform the user list
and that this one is domain scoped.

I can do further testing as necessary.


References:
[1] Horizon, openstack_dashboard/api/keystone.py (def keystoneclient:) 
https://github.com/openstack/horizon/blob/stable/stein/openstack_dashboard/api/keystone.py#L167
[2] Keystone, change Id: I60b2e2b8af172c369eab0eb2c29f056f5c98ad16, 
https://review.opendev.org/#/c/647587/ (for user list)


Debugging info:

I added some debug LOG lines to the various bits of horizon and keystone
to try to work out what was going on.  The following is a comparison
between Horizon and the OpenStack CLI in listing users for a domain
"test-domain":

Preamble: The test set up:

The test is listing users for the "test-domain" on the OpenStack CLI and
using the Horizon dashboard.


Domain list:

+----------------------------------+----------------+---------+-----------------+
| ID                               | Name           | Enabled | Description     |
+----------------------------------+----------------+---------+-----------------+
| 4c97d83fd8f34507aa5849710218272e | default        | True    | Created by Juju |
| 917f251e6fc24c389f1e3f3624d701d1 | admin_domain   | True    | Created by Juju |
| be5450b76a2348c48df0d0571295de40 | test-domain2   | True    |                 |
| c9ca71bd88894017a6b6448dfcffeb68 | test-domain    | True    |                 |
| ecb1e99a62534253a5b515dcfc218733 | service_domain | True    | Created by Juju |
+----------------------------------+----------------+---------+-----------------+

The "admin" user is in the admin_domain.

Project List:

+--+---+
| ID   | Name  |
+--+---+
| 1014c1815147453b8bd77de578467a80 | demo  |
| 49ae284fd4aa42208573d9c399a95eee | services  |
| 7581c43d252848dface4c75e2b921224 | test-project  |
| 75c183f2aece43e2860be59926e244fb | admin |
| 9bc98ed16a7547e0b11d002172ab1d6e | test-project2 |
| 9c619796ef91470bba2d30427bd7adc6 | admin |
| a7c8c2f4d11844619fb22753ab4d7a80 | services  |
| b8eb986468684e7ab4c7eb92542d3e58 | alt_demo  |
+--+---+

The "admin" user is in the "admin" project.

openstack user list
+--+--+
| ID   | Name |
+--+--+
| 8973385dd5ca467fb4be7a3eca7a603f | admin|
| 8aeaead88fdc49c6a44a3983d3ff8c63 | demo |
| b7beaf7d43b144d5b71acb33f0abb87d | alt_demo |
+--+--+

+--+--+
| ID   | Name |
+--+--+
| 9c1fa58637a64cd387922a4b2b8ce522 | test-domain-user |
+--+--+


---

OpenStack CLI debug for "openstack user list --domain=test-domain"

OS_VARS:

OS_AUTH_URL=http://10.5.0.56:5000/v3
OS_DOMAIN_NAME=admin_domain
OS_REGION_NAME=RegionOne
OS_PROJECT_NAME=admin
OS_PROJECT_DOMAIN_NAME=admin_domain
OS_USER_DOMAIN_NAME=admin_domain
OS_AUTH_VERSION=3
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=openstack
OS_USERNAME=admin


Token:
(keystone.token.provider): 2019-07-17 18:03:04,001 DEBUG  - the token: 
gABdL2LX_HT3mi4RO0KcwuqYaJ-NoY-gDMQtcKm-QDDJ0o-SsiH1BOaI5LlhbPLVyiKw7amvGcuwwuM9LLCCBb0VGkyIs2cmkTlHAC
rOyXvtHAdIcRTwzOVmdQ3wsswwB02jnRL2c49w4a9dfii1eMUhxwtCs-ZDkxE8k52Yf9lkXDnDyzQ 
contains:
(keystone.token.provider): 2019-07-17 18:03:04,001 DEBUG domain: None, 
domain_scoped: False, user: {'email': 'juju@localhost', 'id': 
'8973385dd5ca467fb4be7a3eca7a603f', 

[Yahoo-eng-team] [Bug 1837531] [NEW] Incorrect Error message when user try to Encrypt an 'Volume Type' in Use

2019-07-23 Thread Vishal Manchanda
Public bug reported:

When a user tries to encrypt a 'Volume Type' which is already in use, they
get the error message "Unable to create encrypted volume type" in Horizon,
but IMO it should raise an error message like "Cannot create encryption
specs. Volume type in use".

Steps to reproduce:
Go to Admin Panel> Volumes >> Volume Types >>> Create Encryption.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1837531

Title:
  Incorrect Error message when user try to Encrypt an  'Volume  Type' in
  Use

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user tries to encrypt a 'Volume Type' which is already in use,
  they get the error message "Unable to create encrypted volume type" in
  Horizon, but IMO it should raise an error message like "Cannot create
  encryption specs. Volume type in use".

  Steps to reproduce:
  Go to Admin Panel> Volumes >> Volume Types >>> Create Encryption.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1837531/+subscriptions



[Yahoo-eng-team] [Bug 1837530] [NEW] cloud-config in vendordata overriden by user-data

2019-07-23 Thread r
Public bug reported:

This may be a bug or valid behaviour, but from reading the little
available vendordata documentation, I think the documentation is either
invalid or at least lacking something.

I'm trying to boot instances using config-drive and the openstack dialect.
There is no Nova or metadata service involved, just a plain config-drive
which happens to have user-data, vendor-data, network-data and metadata
in the following hierarchy:

─── openstack
├── content
│   └── hotplug-cpu-udev.rules
└── latest
├── meta_data.json
├── network_data.json
├── user_data
└── vendor_data.json

I'm using vendor_data to supply general instructions shared between every
machine. It is provided by me, as the infrastructure provider, not the
user, and contains instructions to make machines work with the platform
that I provide. So I think it's valid to have these general instructions
in vendor_data and let users configure their machines as they wish by
providing user_data.

The problem is that whenever user_data exists, vendor_data is erased.
The logs actually say it's processed, and
/var/lib/cloud/instance/vendor-cloud-config.txt exists, but if user_data
and vendor_data both have `runcmd`, for example,
/var/lib/cloud/instance/scripts/runcmd will contain just the user_data
scripts.

user_data cloud-config is like:

#cloud-config
hostname: ubuntu.default
manage_etc_hosts: localhost
growpart:
  mode: auto
  devices:
  - /
  ignore_growroot_disabled: false
runcmd:
- wget http://example/img -O /tmp/img
resize_rootfs: true
ssh_pwauth: false

So my understanding is that it should not prevent vendor-data from being
run. For reference, the vendor_data.json content is:

{"cloud-init": "#cloud-config\nruncmd:\n- touch /tmp/vendor"}

So, should I change the user-provided user_data to allow vendor-data
execution? That would be a little weird, since the documentation says the
opposite: user_data should contain instructions to disallow vendordata
execution. And if the documentation is wrong, it still doesn't seem right
to ask users to change their cloud-config just to run properly on my
platform.
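For reference, the knob that cloud-init documents for this lives in user-data itself, via the `vendor_data` cloud-config key; a user who wanted to control vendor-data handling would write something like:

```yaml
#cloud-config
# Documented cloud-init key for controlling vendor data from user-data;
# the default is enabled: true, so supplying user-data alone should not,
# per the docs, stop vendor-data from running.
vendor_data:
  enabled: true
  prefix: /usr/bin/ltrace   # optional command prefix (example from the docs)
```

That default is what makes the behaviour reported here look like a bug rather than the documented merge semantics.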

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1837530

Title:
  cloud-config in vendordata overriden by user-data

Status in cloud-init:
  New

Bug description:
  This may be a bug or valid behaviour, but from reading the little
  available vendordata documentation, I think the documentation is either
  invalid or at least lacking something.

  I'm trying to boot instances using config-drive and openstack dialect.
  There is no Nova or metadata service involved, just plain config-drive
  which happens to have user-data, vendor-data, network-data and
  metadata in following hierarchy:

  ─── openstack
  ├── content
  │   └── hotplug-cpu-udev.rules
  └── latest
  ├── meta_data.json
  ├── network_data.json
  ├── user_data
  └── vendor_data.json

  I'm using vendor_data to supply general instructions shared between
  every machine. It is provided by me, as the infrastructure provider, not
  the user, and contains instructions to make machines work with the
  platform that I provide. So I think it's valid to have these general
  instructions in vendor_data and let users configure their machines as
  they wish by providing user_data.

  The problem is that whenever user_data exists, vendor_data is erased.
  The logs actually say it's processed, and
  /var/lib/cloud/instance/vendor-cloud-config.txt exists, but if user_data
  and vendor_data both have `runcmd`, for example,
  /var/lib/cloud/instance/scripts/runcmd will contain just the user_data
  scripts.

  user_data cloud-config is like:

  #cloud-config
  hostname: ubuntu.default
  manage_etc_hosts: localhost
  growpart:
mode: auto
devices:
- /
ignore_growroot_disabled: false
  runcmd:
  - wget http://example/img -O /tmp/img
  resize_rootfs: true
  ssh_pwauth: false

  So my understanding is that it should not prevent vendor-data from
  being run. For reference, the vendor_data.json content is:

  {"cloud-init": "#cloud-config\nruncmd:\n- touch /tmp/vendor"}

  So, should I change the user-provided user_data to allow vendor-data
  execution? That would be a little weird, since the documentation says
  the opposite: user_data should contain instructions to disallow
  vendordata execution. And if the documentation is wrong, it still
  doesn't seem right to ask users to change their cloud-config just to
  run properly on my platform.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1837530/+subscriptions



[Yahoo-eng-team] [Bug 1837529] [NEW] Cannot use push-notification with custom objects

2019-07-23 Thread Roman Dobosz
Public bug reported:

We have a custom object which we would like to have updated in the remote
resource cache. Currently, in CacheBackedPluginApi the resource cache is
created on initialization by the create_cache_for_l2_agent function, which
has a fixed list of resources to subscribe to.

If we want to use an additional type of resource, there is no other way
than either copying the entire class and using a custom cache creation
function, or altering the list in the neutron code, both of which are bad.

This isn't a bug, but rather an annoying inconvenience which might be
easily fixed.
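A minimal sketch of the kind of fix being suggested; the class and resource names below are illustrative, not the real neutron API. The point is that a list hard-coded inside a factory function cannot be extended, while a class attribute can:

```python
# Hypothetical sketch: ResourceCache / CachedApi stand in for the real
# neutron classes. Moving the fixed resource list from inside a factory
# function (create_cache_for_l2_agent) to a class attribute lets a
# subclass register extra resource types without copying the class.

class ResourceCache(object):
    def __init__(self, resource_types):
        self.resource_types = list(resource_types)


class CachedApi(object):
    # Equivalent of today's fixed list, but overridable by subclasses.
    RESOURCE_TYPES = ["Port", "SecurityGroup", "SecurityGroupRule"]

    def __init__(self):
        self.cache = ResourceCache(self.RESOURCE_TYPES)


class CustomCachedApi(CachedApi):
    # A deployer's subclass subscribing to one extra custom object.
    RESOURCE_TYPES = CachedApi.RESOURCE_TYPES + ["MyCustomObject"]


api = CustomCachedApi()
print(api.cache.resource_types)
```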

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837529

Title:
  Cannot use push-notification with custom objects

Status in neutron:
  New

Bug description:
  We have a custom object which we would like to have updated in the
  remote resource cache. Currently, in CacheBackedPluginApi the resource
  cache is created on initialization by the create_cache_for_l2_agent
  function, which has a fixed list of resources to subscribe to.

  If we want to use an additional type of resource, there is no other way
  than either copying the entire class and using a custom cache creation
  function, or altering the list in the neutron code, both of which are
  bad.

  This isn't a bug, but rather an annoying inconvenience which might be
  easily fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1837529/+subscriptions



[Yahoo-eng-team] [Bug 1835037] Re: Upgrade from bionic-rocky to bionic-stein failed migrations.

2019-07-23 Thread Sahid Orentino
I also proposed a fix for nova since 'nova-manage cellv2 update_cell' is
bugged for cell0.

  https://review.opendev.org/#/c/672045/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1835037

Title:
  Upgrade from bionic-rocky to bionic-stein failed migrations.

Status in OpenStack nova-cloud-controller charm:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  We were trying to upgrade from rocky to stein using the charm
  procedure described here:

  https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-upgrade-openstack.html

  and we got into this problem,

  
  2019-07-02 09:56:44 ERROR juju-log online_data_migrations failed
  Running batches of 50 until complete
  Error attempting to run 
  9 rows matched query populate_user_id, 0 migrated
  +---------------------------------------------+--------------+-----------+
  | Migration                                   | Total Needed | Completed |
  +---------------------------------------------+--------------+-----------+
  | create_incomplete_consumers                 | 0            | 0         |
  | delete_build_requests_with_no_instance_uuid | 0            | 0         |
  | fill_virtual_interface_list                 | 0            | 0         |
  | migrate_empty_ratio                         | 0            | 0         |
  | migrate_keypairs_to_api_db                  | 0            | 0         |
  | migrate_quota_classes_to_api_db             | 0            | 0         |
  | migrate_quota_limits_to_api_db              | 0            | 0         |
  | migration_migrate_to_uuid                   | 0            | 0         |
  | populate_missing_availability_zones         | 0            | 0         |
  | populate_queued_for_delete                  | 0            | 0         |
  | populate_user_id                            | 9            | 0         |
  | populate_uuids                              | 0            | 0         |
  | service_uuids_online_data_migration         | 0            | 0         |
  +---------------------------------------------+--------------+-----------+
  Some migrations failed unexpectedly. Check log for details.

  What should we do to get this fixed?

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1835037/+subscriptions



[Yahoo-eng-team] [Bug 1837455] Re: could not find requested endpoint in service catalog

2019-07-23 Thread Kyle Dean
Configuration error; closing.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837455

Title:
  could not find requested endpoint in service catalog

Status in neutron:
  Invalid

Bug description:
  I'm trying to assign a DNS name to a floating IP and put this in
  Designate, but I get the following response back. Can someone please
  tell me what I need to add to the service catalog? Thanks.

  
  ### NEUTRON LOG
  2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource 
[req-40c644eb-0dc2-4e3f-903c-2e8f869b0810 f6d220afc2ba40c59e43dcef3681c56f 
07a8270a4ea6432cb985f291cb0a1aa4 - default default] create failed: No details.: 
keystoneauth1.exceptions.catalog.EndpointNotFound: Could not find requested 
endpoint in Service Catalog.
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource Traceback (most recent call last):
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron/api/v2/resource.py", line 98, in resource
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron/api/v2/base.py", line 436, in create
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     return self._create(request, body, **kwargs)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 139, in wrapped
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     setattr(e, '_RETRY_EXCEEDED', True)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     self.force_reraise()
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     raise value
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 135, in wrapped
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 154, in wrapper
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     self.force_reraise()
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     raise value
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 142, in wrapper
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 183, in wrapped
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     LOG.debug("Retry wrapper got retriable exception: %s", e)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     self.force_reraise()
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource     raise value
2019-07-22 20:02:56.318 17768 ERROR neutron.api.v2.resource   File

[Yahoo-eng-team] [Bug 1837513] [NEW] Install and configure in keystone

2019-07-23 Thread ahmed elbendary
Public bug reported:


Step 5 of the keystone installation and configuration guide fails:

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne


It gives me the error below in /var/log/keystone/keystone.log on CentOS 7:

2019-07-23 01:38:18.238 45036 WARNING stevedore.named [-] Could not load keystone.catalog.backends.sql.Catalog
2019-07-23 01:38:18.239 45036 CRITICAL keystone [-] Unhandled error: ImportError: Unable to find 'keystone.catalog.backends.sql.Catalog' driver in 'keystone.catalog'.
2019-07-23 01:38:18.239 45036 ERROR keystone Traceback (most recent call last):
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/bin/keystone-manage", line 10, in <module>
2019-07-23 01:38:18.239 45036 ERROR keystone     sys.exit(main())
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 40, in main
2019-07-23 01:38:18.239 45036 ERROR keystone     cli.main(argv=sys.argv, developer_config_file=developer_config)
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1216, in main
2019-07-23 01:38:18.239 45036 ERROR keystone     CONF.command.cmd_class.main()
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 177, in main
2019-07-23 01:38:18.239 45036 ERROR keystone     klass = cls()
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 66, in __init__
2019-07-23 01:38:18.239 45036 ERROR keystone     self.bootstrapper = bootstrap.Bootstrapper()
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/cmd/bootstrap.py", line 31, in __init__
2019-07-23 01:38:18.239 45036 ERROR keystone     backends.load_backends()
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/server/backends.py", line 59, in load_backends
2019-07-23 01:38:18.239 45036 ERROR keystone     drivers = {d._provides_api: d() for d in managers}
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/server/backends.py", line 59, in <dictcomp>
2019-07-23 01:38:18.239 45036 ERROR keystone     drivers = {d._provides_api: d() for d in managers}
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/catalog/core.py", line 61, in __init__
2019-07-23 01:38:18.239 45036 ERROR keystone     super(Manager, self).__init__(CONF.catalog.driver)
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 181, in __init__
2019-07-23 01:38:18.239 45036 ERROR keystone     self.driver = load_driver(self.driver_namespace, driver_name)
2019-07-23 01:38:18.239 45036 ERROR keystone   File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 81, in load_driver
2019-07-23 01:38:18.239 45036 ERROR keystone     raise ImportError(msg % {'name': driver_name, 'namespace': namespace})
2019-07-23 01:38:18.239 45036 ERROR keystone ImportError: Unable to find 'keystone.catalog.backends.sql.Catalog' driver in 'keystone.catalog'.
2019-07-23 01:38:18.239 45036 ERROR keystone
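[Editor's note] A likely cause, inferred from the error text and not confirmed by the reporter: keystone.conf still sets the legacy full-class-path driver value, e.g. `driver = keystone.catalog.backends.sql.Catalog` under `[catalog]`, while recent keystone releases resolve that option as a short stevedore entry-point name such as `sql`. A minimal Python sketch of that lookup (illustrative names only, not keystone's actual code):

```python
# Illustrative sketch -- NOT keystone's real implementation.  It mimics how a
# stevedore-style loader resolves the [catalog] driver option: the value is
# looked up as a short entry-point name (e.g. "sql") in the "keystone.catalog"
# namespace, so a legacy full class path is not a registered name and fails
# with the same shape of ImportError as the log above.

NAMESPACE = "keystone.catalog"
REGISTERED_DRIVERS = {"sql": object}  # stand-in for the real entry points

def load_driver(namespace, driver_name):
    """Return the driver registered under driver_name, else raise ImportError."""
    try:
        return REGISTERED_DRIVERS[driver_name]
    except KeyError:
        raise ImportError("Unable to find %r driver in %r"
                          % (driver_name, namespace))

load_driver(NAMESPACE, "sql")  # the short entry-point name resolves

try:
    load_driver(NAMESPACE, "keystone.catalog.backends.sql.Catalog")
except ImportError as exc:
    print(exc)  # full class path is not a registered name
```

If that is indeed the cause, setting `driver = sql` under `[catalog]` in /etc/keystone/keystone.conf (or removing the option so the default applies) should let `keystone-manage bootstrap` proceed; verify against the install guide for the release in use.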

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1837513

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  New
