[Yahoo-eng-team] [Bug 1738420] Re: Displayed IP address is different than in the VM

2018-04-10 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738420

Title:
  Displayed IP address is different than in the VM

Status in neutron:
  Expired

Bug description:
  Hi All,

  I have an unusual issue with the OpenStack Pike release when I create a VM.
  The IP address assigned to the VM is different from the IP address inside the VM.
  For example, the assigned IP in the Dashboard shows 10.20.5.14, but the IP
  address inside the VM is 10.20.5.17.

  I am using a Layer 2 flat provider network in my configuration.
  Has anyone encountered such an issue?

  I have attached the console log of the VM machine.

  Regards,
  Hanif Kukkalli

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1738420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762886] [NEW] ha router with gateway ip create failed because of router scheduled before ha_vr_id allocated

2018-04-10 Thread shiliang
Public bug reported:

When creating an HA router with a gateway IP, the L3 agent throws an exception:

2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router '9a525821-509d-453b-b8c3-d2f192ac1beb'
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 514, in 
_process_router_update
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 453, in 
_process_router_if_compatible
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self._process_updated_router(router)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 467, in 
_process_updated_router
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent ri.process(self)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 
518, in process
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
super(DvrLocalRouter, self).process(agent)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_router_base.py", line 
33, in process
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
super(DvrRouterBase, self).process(agent)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 447, in 
process
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent super(HaRouter, 
self).process(agent)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 385, in call
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent self.logger(e)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self.force_reraise()
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 382, in call
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 1153, 
in process
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self.process_external(agent)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_local_router.py", line 
472, in process_external
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
super(DvrLocalRouter, self).process_external(agent)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 898, 
in process_external
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self._process_external_gateway(ex_gw_port, agent.pd)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 724, 
in _process_external_gateway
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self.external_gateway_updated(ex_gw_port, interface_name)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/dvr_edge_ha_router.py", line 
85, in external_gateway_updated
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
HaRouter.external_gateway_updated(self, ex_gw_port, interface_name)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 424, in 
external_gateway_updated
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
self._remove_vip(old_gateway_cidr)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 202, in 
_remove_vip
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent 
instance.remove_vip_by_ip_address(ip_cidr)
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent AttributeError: 
'NoneType' object has no attribute 'remove_vip_by_ip_address'
2018-04-10 03:26:06.686 3667 ERROR neutron.agent.l3.agent
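
The crash is the unguarded instance.remove_vip_by_ip_address(ip_cidr) call at
the bottom of the trace: instance is None because the router reached the agent
before its ha_vr_id was allocated. A minimal defensive sketch of the idea
(hypothetical helper, not the actual neutron patch):

import logging

LOG = logging.getLogger(__name__)


def remove_vip_safely(keepalived_instance, ip_cidr, router_id):
    """Remove a VIP only if the keepalived instance already exists.

    When the router is scheduled to the L3 agent before its ha_vr_id is
    allocated, there is no keepalived instance yet, so the unguarded
    call raises AttributeError on None.
    """
    if keepalived_instance is None:
        LOG.debug("Router %s has no keepalived instance yet; skipping "
                  "removal of VIP %s", router_id, ip_cidr)
        return
    keepalived_instance.remove_vip_by_ip_address(ip_cidr)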



[Yahoo-eng-team] [Bug 1762880] [NEW] Launch instance availability-zone ordering

2018-04-10 Thread Sam Morrison
Public bug reported:

In the launch instance view the drop-down list for selecting an
availability zone is in a random order. It would be good if this were
sorted alphabetically.

This is in the Queens dashboard.
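
A minimal sketch of the requested behavior, assuming the zones arrive as a
list of dicts keyed by 'zoneName' (as in nova's availability-zone API); this
is an illustration, not the Horizon patch:

# Sort availability-zone choices by name before rendering the drop-down.
zones = [{"zoneName": "zone-c"}, {"zoneName": "zone-a"}, {"zoneName": "zone-b"}]
sorted_zones = sorted(zones, key=lambda z: z["zoneName"].lower())
print([z["zoneName"] for z in sorted_zones])  # ['zone-a', 'zone-b', 'zone-c']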

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762880

Title:
  Launch instance availability-zone ordering

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the launch instance view the drop-down list for selecting an
  availability zone is in a random order. It would be good if this were
  sorted alphabetically.

  This is in the Queens dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762879] [NEW] CREATE_INSTANCE_FLAVOR_SORT doesn't work

2018-04-10 Thread Sam Morrison
Public bug reported:

The setting CREATE_INSTANCE_FLAVOR_SORT looks like it has no effect
anymore, possibly due to the change to the angular launch instance
version?

This is using the Queens dashboard.
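
For reference, this is the setting as documented for the legacy (Django)
launch-instance form in local_settings.py; whether the angular workflow still
reads it is exactly what this bug questions:

# Sort flavors in the launch-instance form by RAM, ascending.
CREATE_INSTANCE_FLAVOR_SORT = {
    'key': 'ram',
    'reverse': False,
}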

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762879

Title:
  CREATE_INSTANCE_FLAVOR_SORT doesn't work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The setting CREATE_INSTANCE_FLAVOR_SORT looks like it has no effect
  anymore, possibly due to the change to the angular launch instance
  version?

  This is using the Queens dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762877] [NEW] Queen with Postgresql can not create network

2018-04-10 Thread suzhouclark
Public bug reported:

pg version:
PostgreSQL 10.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 
(Red Hat 4.8.5-16), 64-bit

Openstack Version:
Installed Packages
openstack-neutron.noarch              1:12.0.0-1.el7
openstack-neutron-common.noarch       1:12.0.0-1.el7
openstack-neutron-linuxbridge.noarch  1:12.0.0-1.el7
openstack-neutron-ml2.noarch          1:12.0.0-1.el7
python-neutron.noarch                 1:12.0.0-1.el7
python2-neutron-lib.noarch            1.13.0-1.el7
python2-neutronclient.noarch          6.7.0-1.el7


======================================

cat /var/log/neutron/server.log

======================================

2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters 
[req-223d3522-1ce7-4df5-93b8-fe897f16322e 193593edff6943adb6ae487412c188ad 
4e121d3b79dd461286a48ee9303b0be0 - default default] DBAPIError exception 
wrapped from (psycopg2.ProgrammingError) column "agents.id" must appear in the 
GROUP BY clause or be used in an aggregate function
LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
   ^
 [SQL: 'SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, 
agents."binary" AS agents_binary, agents.topic AS agents_topic, agents.host AS 
agents_host, agents.availability_zone AS agents_availability_zone, 
agents.admin_state_up AS agents_admin_state_up, agents.created_at AS 
agents_created_at, agents.started_at AS agents_started_at, 
agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS 
agents_description, agents.configurations AS agents_configurations, 
agents.resource_versions AS agents_resource_versions, agents.load AS 
agents_load \nFROM agents \nWHERE agents.agent_type = %(agent_type_1)s AND 
agents.availability_zone IN (%(availability_zone_1)s) GROUP BY 
agents.availability_zone'] [parameters: {'agent_type_1': 'DHCP agent', 
'availability_zone_1': u'nova'}] (Background on this error at: 
http://sqlalche.me/e/f405): ProgrammingError: column "agents.id" must appear in 
the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
   ^
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in 
_execute_context
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters context)
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in 
do_execute
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters 
ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be 
used in an aggregate function
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: 
SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters   
 ^
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters
2018-04-11 08:39:56.761 4569 ERROR oslo_db.sqlalchemy.exc_filters
2018-04-11 08:39:56.766 4569 ERROR neutron.pecan_wsgi.hooks.translation 
[req-223d3522-1ce7-4df5-93b8-fe897f16322e 193593edff6943adb6ae487412c188ad 
4e121d3b79dd461286a48ee9303b0be0 - default default] POST failed.: DBError: 
(psycopg2.ProgrammingError) column "agents.id" must appear in the GROUP BY 
clause or be used in an aggregate function
LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
   ^
 [SQL: 'SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, 
agents."binary" AS agents_binary, agents.topic AS agents_topic, agents.host AS 
agents_host, agents.availability_zone AS agents_availability_zone, 
agents.admin_state_up AS agents_admin_state_up, agents.created_at AS 
agents_created_at, agents.started_at AS agents_started_at, 
agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS 
agents_description, agents.configurations AS agents_configurations, 
agents.resource_versions AS agents_resource_versions, agents.load AS 
agents_load \nFROM agents \nWHERE agents.agent_type = %(agent_type_1)s AND 
agents.availability_zone IN (%(availability_zone_1)s) GROUP BY 
agents.availability_zone'] 
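
The generated query is legal on MySQL (which, in its traditional SQL mode,
permits selecting non-aggregated columns alongside GROUP BY) but not on
PostgreSQL. A rough SQLAlchemy sketch of the difference, using a stripped-down
stand-in for the agents table rather than neutron's real model:

from sqlalchemy import Column, MetaData, String, Table, func, select

metadata = MetaData()
agents = Table('agents', metadata,
               Column('id', String(36)),
               Column('agent_type', String(255)),
               Column('availability_zone', String(255)))

# Roughly what the log shows: every column selected, one column grouped.
# PostgreSQL rejects this because 'id' etc. are neither grouped nor
# aggregated.
bad = (select([agents])
       .where(agents.c.agent_type == 'DHCP agent')
       .group_by(agents.c.availability_zone))

# A form PostgreSQL accepts: select only the grouped column plus
# aggregates over everything else.
good = (select([agents.c.availability_zone,
                func.count(agents.c.id).label('agent_count')])
        .where(agents.c.agent_type == 'DHCP agent')
        .group_by(agents.c.availability_zone))

print(bad)
print(good)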

[Yahoo-eng-team] [Bug 1762876] [NEW] KeyError during move operation functional tests

2018-04-10 Thread Matt Riedemann
Public bug reported:

I noticed this in a stable/pike functional test job run:

http://logs.openstack.org/46/560146/2/check/nova-tox-functional/4a9d1fd/job-output.txt.gz#_2018-04-10_21_37_20_943583

2018-04-10 21:37:20.944928 | ubuntu-xenial | Captured stderr:
2018-04-10 21:37:20.944966 | ubuntu-xenial | 
2018-04-10 21:37:20.945029 | ubuntu-xenial | Traceback (most recent call 
last):
2018-04-10 21:37:20.945231 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 457, in fire_timers
2018-04-10 21:37:20.945268 | ubuntu-xenial | timer()
2018-04-10 21:37:20.945467 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 58, in __call__
2018-04-10 21:37:20.945513 | ubuntu-xenial | cb(*args, **kw)
2018-04-10 21:37:20.945598 | ubuntu-xenial |   File "nova/utils.py", line 
1030, in context_wrapper
2018-04-10 21:37:20.945650 | ubuntu-xenial | func(*args, **kwargs)
2018-04-10 21:37:20.945756 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5620, in dispatch_live_migration
2018-04-10 21:37:20.945839 | ubuntu-xenial | 
self._do_live_migration(*args, **kwargs)
2018-04-10 21:37:20.945939 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5599, in _do_live_migration
2018-04-10 21:37:20.945993 | ubuntu-xenial | clean_task_state=True)
2018-04-10 21:37:20.946194 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2018-04-10 21:37:20.946246 | ubuntu-xenial | self.force_reraise()
2018-04-10 21:37:20.946452 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2018-04-10 21:37:20.946532 | ubuntu-xenial | six.reraise(self.type_, 
self.value, self.tb)
2018-04-10 21:37:20.946679 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5588, in _do_live_migration
2018-04-10 21:37:20.946764 | ubuntu-xenial | block_migration, 
migrate_data)
2018-04-10 21:37:20.946856 | ubuntu-xenial |   File "nova/virt/fake.py", 
line 497, in live_migration
2018-04-10 21:37:20.946901 | ubuntu-xenial | migrate_data)
2018-04-10 21:37:20.947003 | ubuntu-xenial |   File 
"nova/exception_wrapper.py", line 76, in wrapped
2018-04-10 21:37:20.947069 | ubuntu-xenial | function_name, call_dict, 
binary)
2018-04-10 21:37:20.947270 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2018-04-10 21:37:20.947322 | ubuntu-xenial | self.force_reraise()
2018-04-10 21:37:20.947536 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2018-04-10 21:37:20.947619 | ubuntu-xenial | six.reraise(self.type_, 
self.value, self.tb)
2018-04-10 21:37:20.947707 | ubuntu-xenial |   File 
"nova/exception_wrapper.py", line 67, in wrapped
2018-04-10 21:37:20.947779 | ubuntu-xenial | return f(self, context, 
*args, **kw)
2018-04-10 21:37:20.947878 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 218, in decorated_function
2018-04-10 21:37:20.947950 | ubuntu-xenial | kwargs['instance'], e, 
sys.exc_info())
2018-04-10 21:37:20.948160 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2018-04-10 21:37:20.948214 | ubuntu-xenial | self.force_reraise()
2018-04-10 21:37:20.948433 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2018-04-10 21:37:20.948510 | ubuntu-xenial | six.reraise(self.type_, 
self.value, self.tb)
2018-04-10 21:37:20.948607 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 206, in decorated_function
2018-04-10 21:37:20.948687 | ubuntu-xenial | return function(self, 
context, *args, **kwargs)
2018-04-10 21:37:20.948802 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5824, in _post_live_migration
2018-04-10 21:37:20.948856 | ubuntu-xenial | instance, source_node)
2018-04-10 21:37:20.949000 | ubuntu-xenial |   File 
"nova/compute/resource_tracker.py", line 1294, in 
delete_allocation_for_migrated_instance
2018-04-10 21:37:20.949111 | ubuntu-xenial | 
self._delete_allocation_for_moved_instance(instance, node, 'migrated')
2018-04-10 21:37:20.949244 | ubuntu-xenial 

[Yahoo-eng-team] [Bug 1762870] Re: server fault is not blacklisted for filtering/sorting

2018-04-10 Thread Matt Riedemann
Trying to sort on 'fault' results in a 400 as I'd expect:

stack@rocky:~$ curl http://162.253.55.188/compute/v2.1/servers?sort_key=fault -H "x-auth-token: $TOKEN" | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   875  100   875    0     0  77860      0 --:--:-- --:--:-- --:--:-- 79545
{
"badRequest": {
"code": 400,
"message": "Invalid input for query parameters sort_key. Value: fault. 
u'fault' is not one of ['access_ip_v4', 'access_ip_v6', 'auto_disk_config', 
'availability_zone', 'config_drive', 'created_at', 'display_description', 
'display_name', 'host', 'hostname', 'image_ref', 'instance_type_id', 
'kernel_id', 'key_name', 'launch_index', 'launched_at', 'locked_by', 'node', 
'power_state', 'progress', 'project_id', 'ramdisk_id', 'root_device_name', 
'task_state', 'terminated_at', 'updated_at', 'user_id', 'uuid', 'vm_state', 
'architecture', 'cell_name', 'cleaned', 'default_ephemeral_device', 
'default_swap_device', 'deleted', 'deleted_at', 'disable_terminate', 
'ephemeral_gb', 'ephemeral_key_uuid', 'id', 'key_data', 'launched_on', 
'locked', 'memory_mb', 'os_type', 'reservation_id', 'root_gb', 
'shutdown_terminate', 'user_data', 'vcpus', 'vm_mode']"
}
}
stack@rocky:~$ 


** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762870

Title:
  server fault is not blacklisted for filtering/sorting

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The server['fault'] response key is a joined column in the
  instance_faults table, but is not listed in the blacklisted list of
  joined table keys:

  
https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L298

  it's also not listed as an ignored sort key, but we can't sort on
  fault since it's a dict in the response:

  
https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L312

  I'm not sure what would happen if you tried sorting instances on
  fault, but I think filtering on fault probably results in a 500 error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762870] [NEW] server fault is not blacklisted for filtering/sorting

2018-04-10 Thread Matt Riedemann
Public bug reported:

The server['fault'] response key is a joined column in the
instance_faults table, but is not listed in the blacklisted list of
joined table keys:

https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L298

it's also not listed as an ignored sort key, but we can't sort on fault
since it's a dict in the response:

https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L312

I'm not sure what would happen if you tried sorting instances on fault,
but I think filtering on fault probably results in a 500 error.
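
A minimal sketch of what this whitelist-style validation looks like in
practice, using jsonschema directly rather than nova's actual schema objects
(the schema below is invented):

import jsonschema

# Toy stand-in for the sort_key whitelist in the servers schema.
query_schema = {
    'type': 'object',
    'properties': {
        'sort_key': {'type': 'string',
                     'enum': ['created_at', 'display_name', 'uuid']},
    },
}

try:
    jsonschema.validate({'sort_key': 'fault'}, query_schema)
except jsonschema.ValidationError as exc:
    # The API layer maps this to a 400 response.
    print('400 Bad Request: %s' % exc.message)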

** Affects: nova
 Importance: Medium
 Status: Invalid


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762870

Title:
  server fault is not blacklisted for filtering/sorting

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The server['fault'] response key is a joined column in the
  instance_faults table, but is not listed in the blacklisted list of
  joined table keys:

  
https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L298

  it's also not listed as an ignored sort key, but we can't sort on
  fault since it's a dict in the response:

  
https://github.com/openstack/nova/blob/43f4755d5e034a6ff1cd5788d76851642027a54e/nova/api/openstack/compute/schemas/servers.py#L312

  I'm not sure what would happen if you tried sorting instances on
  fault, but I think filtering on fault probably results in a 500 error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762842] [NEW] Compute API guide: Faults in nova - Instance Faults section is wrong

2018-04-10 Thread Matt Riedemann
Public bug reported:

- [x] This doc is inaccurate in this way:

This section about instance faults:

https://developer.openstack.org/api-guide/compute/faults.html#instance-faults

says:

"However, there is currently no API to retrieve this information."

This is wrong, as the GET /servers/{server_id} response has a 'fault'
entry if there was a fault for the server:

https://developer.openstack.org/api-ref/compute/#id27

"A fault object. Only displayed in the failed response. Default keys are
code, created, and message (response code, created time, and message
respectively). In addition, the key details (stack trace) is available
if you have the administrator privilege."
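
For illustration, a failed server's fault entry looks roughly like this (all
values invented):

{
    "fault": {
        "code": 500,
        "created": "2018-04-10T15:08:00Z",
        "message": "No valid host was found.",
        "details": "Traceback (most recent call last): ..."
    }
}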

---
Release: 17.0.0.0rc2.dev613 on 2018-04-10 15:08
SHA: 836c3913cc382428625a5e7502a4c807b8136d0a
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/faults.rst
URL: https://developer.openstack.org/api-guide/compute/faults.html

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: api-guide doc

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762842

Title:
  Compute API guide: Faults in nova - Instance Faults section is wrong

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  - [x] This doc is inaccurate in this way:

  This section about instance faults:

  https://developer.openstack.org/api-guide/compute/faults.html#instance-faults

  says:

  "However, there is currently no API to retrieve this information."

  This is wrong, as the GET /servers/{server_id} response has a 'fault'
  entry if there was a fault for the server:

  https://developer.openstack.org/api-ref/compute/#id27

  "A fault object. Only displayed in the failed response. Default keys
  are code, created, and message (response code, created time, and
  message respectively). In addition, the key details (stack trace) is
  available if you have the administrator privilege."

  ---
  Release: 17.0.0.0rc2.dev613 on 2018-04-10 15:08
  SHA: 836c3913cc382428625a5e7502a4c807b8136d0a
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/api-guide/source/faults.rst
  URL: https://developer.openstack.org/api-guide/compute/faults.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762687] Re: Concurrent requests to attach the same non-multiattach volume to multiple instances can succeed

2018-04-10 Thread Matt Riedemann
** Changed in: nova
   Status: New => Invalid

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762687

Title:
  Concurrent requests to attach the same non-multiattach volume to
  multiple instances can succeed

Status in Cinder:
  In Progress

Bug description:
  Description
  ===

  Discovered this by chance yesterday; at first glance this appears to
  be due to a lack of locking within c-api when we initially create the
  attachments, and no additional validation when we update with the
  connector later in the attach flow. Reporting this against both nova
  and cinder for now.

  $ nova volume-attach 77c092c2-9664-42b2-bf71-277b1bfad707 b4240f39-da7a-4372-b4ca-15a0c6121ac8 & nova volume-attach 91a7c490-5a9a-4048-a109-b1159b7f0e79 b4240f39-da7a-4372-b4ca-15a0c6121ac8
  [1] 24949
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
  | serverId | 91a7c490-5a9a-4048-a109-b1159b7f0e79 |
  | volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
  +----------+--------------------------------------+
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
  | serverId | 77c092c2-9664-42b2-bf71-277b1bfad707 |
  | volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
  +----------+--------------------------------------+
  $ cinder show b4240f39-da7a-4372-b4ca-15a0c6121ac8
  +--------------------------------+----------------------------------------------------------------------------------+
  | Property                       | Value                                                                            |
  +--------------------------------+----------------------------------------------------------------------------------+
  | attached_servers               | ['91a7c490-5a9a-4048-a109-b1159b7f0e79', '77c092c2-9664-42b2-bf71-277b1bfad707'] |
  | attachment_ids                 | ['31b8c16f-07d0-4f0c-95d8-c56797a270dc', 'a7eb9cb1-b7be-44e3-a176-3c6989459aaa'] |
  | availability_zone              | nova                                                                             |
  | bootable                       | false                                                                            |
  | consistencygroup_id            | None                                                                             |
  | created_at                     | 2018-04-09T19:02:46.00                                                           |
  | description                    | None                                                                             |
  | encrypted                      | False                                                                            |
  | id                             | b4240f39-da7a-4372-b4ca-15a0c6121ac8                                             |
  | metadata                       | attached_mode : rw                                                               |
  | migration_status               | None                                                                             |
  | multiattach                    | False                                                                            |
  | name                           | None                                                                             |
  | os-vol-host-attr:host          | test.example.com@lvmdriver-1#lvmdriver-1                                         |
  | os-vol-mig-status-attr:migstat | None                                                                             |
  | os-vol-mig-status-attr:name_id | None                                                                             |
  | os-vol-tenant-attr:tenant_id   | fe3128ecf4704369ae3f7ede03f6bc29                                                 |
  | replication_status             | None                                                                             |
  | size                           | 1                                                                                |
  | snapshot_id                    | None                                                                             |
  | source_volid                   | None                                                                             |
  | status                         | in-use                                                                           |
  | updated_at                     | 2018-04-09T19:04:24.00                                                           |
[Yahoo-eng-team] [Bug 1718512] Re: migration fails if instance build failed on destination host

2018-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/559447
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6647f11dc1aba89f9b0e2703f236a43f31d88079
Submitter: Zuul
Branch: master

commit 6647f11dc1aba89f9b0e2703f236a43f31d88079
Author: Matt Riedemann 
Date:   Fri Apr 6 20:28:53 2018 -0400

Don't persist RequestSpec.retry

During a resize, the RequestSpec.flavor is updated
to the new flavor. If the resize failed on one host
and was rescheduled, the RequestSpec.retry is updated
for that failed host and mistakenly persisted, which
can affect later move operations, like if an admin
targets one of those previously failed hosts for a
live migration or evacuate operation.

This change fixes the problem by not ever persisting
the RequestSpec.retry field to the database, since
retries are per-request/operation and not something
that needs to be persisted.

Alternative to this, we could reset the retry field
in the RequestSpec.reset_forced_destinations method
but that would be slightly overloading the meaning
of that method, and the approach taken in this patch
is arguably cleaner since retries shouldn't ever be
persisted. It should be noted, however, that one
advantage to resetting the 'retry' field in the
RequestSpec.reset_forced_destinations method would
be to avoid this issue for any existing DB entries
that have this problem.

The related functional regression test is updated
to show the bug is now fixed.

Change-Id: Iadbf8ec935565a6d4ccf6f36ef630ab6bf1bea5d
Closes-Bug: #1718512
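
A toy illustration of the approach the commit describes, with an invented
helper name (the real change lives in the RequestSpec save path):

def updates_for_save(changed_fields):
    """Drop per-operation scheduling state before persisting.

    'retry' records the attempts for a single request, so it should
    never be written to the database.
    """
    updates = dict(changed_fields)
    updates.pop('retry', None)
    return updates

print(updates_for_save({'flavor': 'm1.large', 'retry': {'num_attempts': 2}}))
# -> {'flavor': 'm1.large'}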


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1718512

Title:
  migration fails if instance build failed on destination host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  (OpenStack Nova, commit d8b30c3772, per OSA-14.2.7)

  If an instance build fails on a hypervisor, the "retry" field of the
  instance's request spec is populated with which host it failed on and
  how many times it attempted to retry the build. This field remains
  populated for the lifetime of the instance.

  If a live-migration for the same instance is requested, the conductor
  loads this request spec and passes it on to the scheduler. The
  scheduler will fail the migration request on RetryFilter since the
  target was already known to have failed (albeit for the build).

  With the help of mriedem and melwitt of #openstack-nova, we determined
  that migration retries are handled separately from build retries.
  mriedem suggested a patch to ignore the retry field of the instance
  request spec during migrations. This patch allowed the failing
  migration to succeed.

  It is important to note that it may fail the migration again; however,
  there is still sufficient reason to ignore the build's
  failures/retries during a migration.

  12:55 < mriedem> it does stand to reason that if this instance failed to 
build originally on those 2 hosts, that live migrating it there might fail 
too...but we don't know why it originally failed, could have been a resource 
claim issue at the time
  12:58 < melwitt> yeah, often it's a failed claim. and also what if that 
compute host is eventually replaced over the lifetime of the cluster, making it 
a fresh candidate for several instances that might still avoid it because they 
once failed to build there back when it was a different machine

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1718512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762031] Re: image schema 'status' missing values added for interoperable image import

2018-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/559501
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=c48acba8404629c11bc16c7da99bc7a084e13b99
Submitter: Zuul
Branch: master

commit c48acba8404629c11bc16c7da99bc7a084e13b99
Author: Brian Rosmaita 
Date:   Sat Apr 7 14:22:11 2018 -0400

Update image schema with Image API 2.6 statuses

Updates the schemas/image(s) responses and the api-ref.  (The dev
docs are correct.)  Adds a test so this won't happen again.

Closes-bug: #1762031
Change-Id: Ifb0a07fcdb1c8d91f1ad700329a208200222a2a6


** Changed in: glance
   Status: In Progress => Fix Released

** Changed in: glance/queens
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1762031

Title:
  image schema 'status' missing values added for interoperable image
  import

Status in Glance:
  Fix Released
Status in Glance queens series:
  In Progress

Bug description:
  GET v2/schemas/image response includes this:

  "status": {
  "description": "Status of the image",
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "readOnly": true,
  "type": "string"
  },

  It's missing 'uploading' and 'importing'.
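
  With the two import statuses added, the enum would presumably read
  (ordering illustrative):

  "status": {
      "description": "Status of the image",
      "enum": [
          "queued",
          "saving",
          "uploading",
          "importing",
          "active",
          "killed",
          "deleted",
          "pending_delete",
          "deactivated"
      ],
      "readOnly": true,
      "type": "string"
  },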

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1762031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762789] [NEW] ResourceClass.normalize_name produces different results for py2 vs. py3

2018-04-10 Thread Eric Fried
Public bug reported:

Due to a py3 quirk (read: bug they decided not to fix) [1], .upper()
works differently in py2 vs. py3 for the sharp S ('ß').  In py2, it
stays the same (like all other characters not in the a-z range); in py3
it becomes the two-character string 'SS'.  This means that, as written,
ResourceClass.normalize_name('ß') will yield 'CUSTOM__' in py2, but
'CUSTOM_SS' in py3.

[1] https://bugs.python.org/issue4610
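
A short demonstration with a simplified stand-in for the normalization (the
real method lives on nova's ResourceClass; the regex here is an
approximation):

# -*- coding: utf-8 -*-
import re


def normalize_name(name):
    # Approximation: uppercase, replace anything outside A-Z0-9_ with
    # '_', and prefix with CUSTOM_.
    norm = name.upper()
    norm = re.sub(r'[^A-Z0-9_]+', '_', norm)
    return u'CUSTOM_' + norm


print(normalize_name(u'\N{LATIN SMALL LETTER SHARP S}'))
# py2: u'ß'.upper() == u'ß' -> CUSTOM__
# py3: 'ß'.upper() == 'SS'  -> CUSTOM_SS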

** Affects: nova
 Importance: Undecided
 Assignee: Eric Fried (efried)
 Status: In Progress


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762789

Title:
  ResourceClass.normalize_name produces different results for py2 vs.
  py3

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Due to a py3 quirk (read: bug they decided not to fix) [1], .upper()
  works differently in py2 vs. py3 for the sharp S ('ß').  In py2, it
  stays the same (like all other characters not in the a-z range); in
  py3 it becomes the two-character string 'SS'.  This means that, as
  written, ResourceClass.normalize_name('ß') will yield 'CUSTOM__' in
  py2, but 'CUSTOM_SS' in py3.

  [1] https://bugs.python.org/issue4610

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1720726] Re: Attach public network return 500

2018-04-10 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
   Importance: Undecided => Medium

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/ocata
 Assignee: (unassigned) => Sam Yaple (s8m)

** Changed in: nova/pike
 Assignee: (unassigned) => Sam Yaple (s8m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1720726

Title:
  Attach public network return 500

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Description
  ===
  If policy allows attaching external networks, attaching a 'public' network
returns a 500.

  Steps to reproduce
  ==
  1. Run devstack with default configuration.
  2. Add policy.json rules to allow attaching external networks.
  3. Source demo credential
  4. Create an instance and wait for the instance to become active. 
  5. Attach the instance with the 'public' network.

  $ nova interface-attach --net-id 2b7e8a86-c5c6-4396-84bc-14cdc741e033 test
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-3e353c62-a356-4d7d-bd73-28cf4406e447)

  Expected result
  ===
  Nova should return 4xx response

  Actual result
  =
  Nova returned 500 response

  Environment
  ===
  Devstack with master

  Logs & Configs
  ==
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server 
[None req-3e125907-0419-418a-a03c-154b99e97610 demo demo] Exception during 
message handling: TypeError:  can't be encoded
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server 
Traceback (most recent call last):
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 160, in _process_incoming
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   res = self.dispatcher.dispatch(message)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   return self._do_dispatch(endpoint, method, ctxt, args)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   result = func(ctxt, **new_args)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/opt/stack/nova/nova/exception_wrapper.py", line 76, in wrapped
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   function_name, call_dict, binary)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   self.force_reraise()
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   six.reraise(self.type_, self.value, self.tb)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/opt/stack/nova/nova/exception_wrapper.py", line 67, in wrapped
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   return f(self, context, *args, **kw)
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/opt/stack/nova/nova/compute/manager.py", line 217, in decorated_function
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   kwargs['instance'], e, sys.exc_info())
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
 File "/opt/stack/nova/nova/compute/utils.py", line 99, in 
add_instance_fault_from_exc
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR oslo_messaging.rpc.server  
   fault_obj.update(exception_to_dict(fault, message=fault_message))
  Oct 02 04:39:13 testzun nova-compute[24239]: ERROR 

[Yahoo-eng-team] [Bug 1758919] Re: Static routes are not per-interface, which breaks some deployments

2018-04-10 Thread Andres Rodriguez
** Changed in: maas/2.3
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1758919

Title:
  Static routes are not per-interface, which breaks some deployments

Status in cloud-init:
  New
Status in MAAS:
  Fix Released
Status in MAAS 2.3 series:
  Fix Released

Bug description:
  When juju tries to deploy a lxd container on a MAAS-managed machine,
  it loses all static routes, because ifdown/ifup is issued and e/n/i
  has no saved data about the original state.

  Machine with no lxd container deployed:
  root@4-compute-4:~# ip r
  default via 100.68.4.254 dev bond2 onlink 
  100.68.4.0/24 dev bond2  proto kernel  scope link  src 100.68.4.1 
  100.68.5.0/24 via 100.68.4.254 dev bond2 
  100.68.6.0/24 via 100.68.4.254 dev bond2 
  100.84.4.0/24 dev bond1  proto kernel  scope link  src 100.84.4.2 
  100.84.5.0/24 via 100.84.4.254 dev bond1 
  100.84.6.0/24 via 100.84.4.254 dev bond1 
  100.99.4.0/24 dev bond0  proto kernel  scope link  src 100.99.4.101 
  100.99.5.0/24 via 100.99.4.254 dev bond0 
  100.99.6.0/24 via 100.99.4.254 dev bond0 
  100.107.0.0/24 via 100.99.4.254 dev bond0 

  After juju deploys a container, the routes disappear:
  root@4-management-1:~# ip r
  default via 100.68.100.254 dev bond2 onlink 
  10.177.144.0/24 dev lxdbr0  proto kernel  scope link  src 10.177.144.1 
  100.68.100.0/24 dev bond2  proto kernel  scope link  src 100.68.100.26 
  100.84.4.0/24 dev br-bond1  proto kernel  scope link  src 100.84.4.1 
  100.99.4.0/24 dev br-bond0  proto kernel  scope link  src 100.99.4.3 

  After a host reboot, the routes do NOT come back in place; they are still
gone:
  root@4-management-1:~# ip r s
  default via 100.68.100.254 dev bond2 onlink 
  100.68.100.0/24 dev bond2  proto kernel  scope link  src 100.68.100.26 
  100.84.4.0/24 dev br-bond1  proto kernel  scope link  src 100.84.4.1 
  100.84.5.0/24 via 100.84.4.254 dev br-bond1 
  100.84.6.0/24 via 100.84.4.254 dev br-bond1 
  100.99.4.0/24 dev br-bond0  proto kernel  scope link  src 100.99.4.3
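
  A per-interface way to express those routes in e/n/i, so that an
  ifdown/ifup cycle re-creates them, would look something like this
  (sketch only, using the bond0 values from the first listing above):

  auto bond0
  iface bond0 inet static
      address 100.99.4.101
      netmask 255.255.255.0
      post-up ip route add 100.99.5.0/24 via 100.99.4.254 dev bond0
      post-up ip route add 100.99.6.0/24 via 100.99.4.254 dev bond0
      pre-down ip route del 100.99.5.0/24 via 100.99.4.254 dev bond0
      pre-down ip route del 100.99.6.0/24 via 100.99.4.254 dev bond0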

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1758919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593684] Re: Nova & Cinder - Disk attachement changes after shutoff/start

2018-04-10 Thread Javier Diaz Jr
This is a very old and long-running issue OpenStack has had with KVM.
In essence, the KVM (libvirt) hypervisor lacks the ability to pin the
device mapping on instances, and thus, regardless of what nova
instructs the hypervisor to do, KVM will simply attach the volume to
the next available drive mapping (vdb, vdc, vdd) at boot.

This is unfortunately expected behavior, if you ask me.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1593684

Title:
  Nova & Cinder - Disk attachement changes after shutoff/start

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  After shutting off an instance and starting it again, the disk path
  changes inside the virtual instance. This is usually a small issue if
  you are only using the disk label or any other ID rather than the disk
  path. If any application is using the disk path inside Windows, the
  application fails until the device path is replaced.

  Also I understand that most people won't even notice the problem, but
  we don't have similar issues with VMware and we believe that the path
  should be the same regardless of any kind of shut off/startup
  operation.

  Steps to reproduce
  ==
  - Deploy a new instance - eg: Windows 2012 R2 
  - Attach a new CEPH block device volume 
  - Verify the disk path inside Windows
  Device instance path: 
SCSI\DISK_RED_HAT_VIRTIO\4&1618751F&0&00
Parent: PCI\VEN_1AF4_1001_00021AF4_00\3&13c0b0c5&0&30
Location: Bus Number 0, Target Id 0, LUN 0
  - Shut off the instance
  - Start the instance
  - The disk is offline after the startup, changing to online working
  - Verify the disk path:
Device instance path: 
SCSI\DISK_RED_HAT_VIRTIO\4&352177D0&1&00
Parent: PCI\VEN_1AF4_1001_00021AF4_00\3&13c0b0c5&0&28
Location: Bus Number 0, Target Id 0, LUN 0

  Using  Ubuntu 16.04 LTS:
  - Spin up a new Instance using Ubuntu 16.04 LTS
  - Attach a new CEPH block device
root@attach-test-2:~# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda253:00  16G  0 disk
└─vda1 253:10  16G  0 part /
vdb253:16   0   1G  0 disk

root@attach-test-2:~# ls -la /dev/disk/by-path
total 0
drwxr-xr-x 2 root root 100 Jun 17 11:26 .
drwxr-xr-x 6 root root 120 Jun 17 11:26 ..
lrwxrwxrwx 1 root root   9 Jun 17 11:23 virtio-pci-:00:04.0 -> 
../../vda
lrwxrwxrwx 1 root root  10 Jun 17 11:23 virtio-pci-:00:04.0-part1 
-> ../../vda1
lrwxrwxrwx 1 root root   9 Jun 17 11:26 virtio-pci-:00:06.0 -> 
../../vdb

  - Shut off the instance
  - Start the instance
  - Check disk path
root@attach-test-2:~# ls -la /dev/disk/by-path
total 0
drwxr-xr-x 2 root root 100 Jun 17 11:28 .
drwxr-xr-x 6 root root 120 Jun 17 11:28 ..
lrwxrwxrwx 1 root root   9 Jun 17 11:28 virtio-pci-:00:04.0 -> 
../../vda
lrwxrwxrwx 1 root root  10 Jun 17 11:28 virtio-pci-:00:04.0-part1 
-> ../../vda1
lrwxrwxrwx 1 root root   9 Jun 17 11:28 virtio-pci-:00:05.0 -> 
../../vdb

  The disk path changed using Linux too

  I have a feeling that this is a KVM/LibVirt issue...
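
  A common mitigation, since the bus path cannot be pinned, is to reference
  the volume by a stable identifier instead of the device path, e.g. in
  /etc/fstab (the UUID below is invented):

  UUID=2f6be9de-8139-4a1d-9106-a43f08d823a6  /data  ext4  defaults,nofail  0  2

  or via the serial-based names under /dev/disk/by-id/.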


  Expected result
  ===
  Nova & Libvirt & KVM should use the same details when attaching the disk on 
the instance regardless how many time we shut it down and start it.
  So both instance path and parent should be unchanged.

  Actual result
  =
  Path changes

  Environment
  ===
  Control node:
  root@openstack1:/etc/nova# dpkg -l | grep nova
  ii nova-api 2:13.0.0-0ubuntu2 all OpenStack Compute - API frontend
  ii nova-cells 2:13.0.0-0ubuntu2 all Openstack Compute - cells
  ii nova-cert 2:13.0.0-0ubuntu2 all OpenStack Compute - certificate management
  ii nova-common 2:13.0.0-0ubuntu2 all OpenStack Compute - common files
  ii nova-conductor 2:13.0.0-0ubuntu2 all OpenStack Compute - conductor service
  ii nova-consoleauth 2:13.0.0-0ubuntu2 all OpenStack Compute - Console 
Authenticator
  ii nova-novncproxy 2:13.0.0-0ubuntu2 all OpenStack Compute - NoVNC proxy
  ii nova-scheduler 2:13.0.0-0ubuntu2 all OpenStack Compute - virtual machine 
scheduler
  ii python-nova 2:13.0.0-0ubuntu2 all OpenStack Compute Python libraries
  ii python-novaclient 2:3.3.1-2 all client library for OpenStack Compute API - 
Python 2.7

  Compute node1:
  root@openstackcompute:~# dpkg -l | grep nova
  ii nova-common 2:13.0.0-0ubuntu2~cloud0 all OpenStack Compute - common files
  ii nova-compute 2:13.0.0-0ubuntu2~cloud0 all OpenStack Compute - compute node 
base
  ii nova-compute-kvm 2:13.0.0-0ubuntu2~cloud0 all OpenStack Compute - compute 
node (KVM)
  ii nova-compute-libvirt 2:13.0.0-0ubuntu2~cloud0 all OpenStack Compute - 
compute node 

[Yahoo-eng-team] [Bug 1751092] Re: install guide: statement about uwsgi

2018-04-10 Thread Erno Kuvaja
** Also affects: glance/queens
   Importance: Undecided
   Status: New

** Changed in: glance/queens
Milestone: None => queens-stable-1

** Changed in: glance/queens
 Assignee: (unassigned) => Brian Rosmaita (brian-rosmaita)

** Changed in: glance/queens
   Status: New => Triaged

** Changed in: glance/queens
   Importance: Undecided => High

** Tags removed: queens-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1751092

Title:
  install guide: statement about uwsgi

Status in Glance:
  Fix Committed
Status in Glance queens series:
  Triaged

Bug description:
  Add a statement to the install guide that Glance does not currently
  support deployment under uwsgi.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1751092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762748] [NEW] Larger than 2 TB disks not possible

2018-04-10 Thread George
Public bug reported:

Hi,

We run OpenStack and need to provide instances that have very large root
disks (> 2 TB), and it looks like cloud-init doesn't want to use the
entire space.

The regular Ubuntu cloud image uses MBR, which cannot address more than
2 TiB (see the short calculation after the dpkg listing below), but even
the GPT version (http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-uefi1.img)
still fails to see more than 2 TB.

root@ubuntu-16:~# df -h
Filesystem  Size  Used Avail Use% Mounted on
udev121G 0  121G   0% /dev
tmpfs25G  8.6M   25G   1% /run
/dev/vda1   2.0T  857M  2.0T   1% /
tmpfs   121G 0  121G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs   121G 0  121G   0% /sys/fs/cgroup
tmpfs25G 0   25G   0% /run/user/1000

root@ubuntu-16:~# parted /dev/vda p
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 5583GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End SizeType File system  Flags
 1  1049kB  2199GB  2199GB  primary  ext4 boot

root@ubuntu-16:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda253:00  5.1T  0 disk
└─vda1 253:102T  0 part /
root@ubuntu-16:~# dpkg -l | grep cloud-init
ii  cloud-init   17.2-35-gf576b2a2-0ubuntu1~16.04.2 
all  Init scripts for cloud instances
ii  cloud-initramfs-copymods 0.27ubuntu1.5  
all  copy initramfs modules into root filesystem for later use
ii  cloud-initramfs-dyn-netconf  0.27ubuntu1.5  
all  write a network interface file in /run for BOOTIF
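
The 2 TB ceiling visible in the parted output above is exactly the MBR
addressing limit; a quick sanity check:

# MBR stores partition start/size as 32-bit sector counts.
sector_size = 512        # bytes, the logical sector size parted reports
max_sectors = 2 ** 32
print(max_sectors * sector_size)  # 2199023255552 bytes, i.e. 2 TiB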


The cloud-init.log looks like the disk growing and file system resizing went 
fine:

2018-04-10 14:14:31,332 - stages.py[DEBUG]: Running module growpart () with 
frequency always
2018-04-10 14:14:31,332 - handlers.py[DEBUG]: start: 
init-network/config-growpart: running config-growpart with frequency always
2018-04-10 14:14:31,332 - helpers.py[DEBUG]: Running config-growpart using lock 
()
2018-04-10 14:14:31,332 - cc_growpart.py[DEBUG]: No 'growpart' entry in cfg.  
Using default: {'mode': 'auto', 'ignore_growroot_disabled': False, 'devices': 
['/']}
2018-04-10 14:14:31,332 - util.py[DEBUG]: Running command ['growpart', 
'--help'] with allowed return codes [0] (shell=False, capture=True)
2018-04-10 14:14:31,352 - util.py[DEBUG]: Reading from /proc/1192/mountinfo 
(quiet=False)
2018-04-10 14:14:31,352 - util.py[DEBUG]: Read 2621 bytes from 
/proc/1192/mountinfo
2018-04-10 14:14:31,353 - util.py[DEBUG]: Running command 
['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] 
(shell=False, capture=True)
2018-04-10 14:14:31,355 - util.py[DEBUG]: Running command 
['running-in-container'] with allowed return codes [0] (shell=False, 
capture=True)
2018-04-10 14:14:31,356 - util.py[DEBUG]: Running command ['lxc-is-container'] 
with allowed return codes [0] (shell=False, capture=True)
2018-04-10 14:14:31,357 - util.py[DEBUG]: Reading from /proc/1/environ 
(quiet=False)
2018-04-10 14:14:31,358 - util.py[DEBUG]: Read 153 bytes from /proc/1/environ
2018-04-10 14:14:31,358 - util.py[DEBUG]: Reading from /proc/self/status 
(quiet=False)
2018-04-10 14:14:31,358 - util.py[DEBUG]: Read 906 bytes from /proc/self/status
2018-04-10 14:14:31,358 - util.py[DEBUG]: Reading from 
/sys/class/block/vda1/partition (quiet=False)
2018-04-10 14:14:31,358 - util.py[DEBUG]: Read 2 bytes from 
/sys/class/block/vda1/partition
2018-04-10 14:14:31,358 - util.py[DEBUG]: Reading from 
/sys/devices/pci:00/:00:04.0/virtio1/block/vda/dev (quiet=False)
2018-04-10 14:14:31,359 - util.py[DEBUG]: Read 6 bytes from 
/sys/devices/pci:00/:00:04.0/virtio1/block/vda/dev
2018-04-10 14:14:31,359 - util.py[DEBUG]: Running command ['growpart', 
'--dry-run', '/dev/vda', '1'] with allowed return codes [0] (shell=False, 
capture=True)
2018-04-10 14:14:31,504 - util.py[DEBUG]: Running command ['growpart', 
'/dev/vda', '1'] with allowed return codes [0] (shell=False, capture=True)
2018-04-10 14:14:31,567 - util.py[DEBUG]: resize_devices took 0.215 seconds
2018-04-10 14:14:31,567 - cc_growpart.py[INFO]: '/' resized: changed (/dev/vda, 
1) from 2359296000 to 2199022206464
2018-04-10 14:14:31,567 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: SUCCESS: config-growpart ran successfully
2018-04-10 14:14:31,567 - stages.py[DEBUG]: Running module resizefs () with 
frequency always
2018-04-10 14:14:31,568 - handlers.py[DEBUG]: start: 
init-network/config-resizefs: running config-resizefs with frequency always
2018-04-10 14:14:31,568 - helpers.py[DEBUG]: Running config-resizefs using lock 
()
2018-04-10 14:14:31,568 - schema.py[DEBUG]: Ignoring schema validation. 
python-jsonschema is not present
2018-04-10 14:14:31,568 - util.py[DEBUG]: Reading from /proc/1192/mountinfo 
(quiet=False)
2018-04-10 14:14:31,568 - util.py[DEBUG]: Read 2621 bytes from 

[Yahoo-eng-team] [Bug 1762752] [NEW] Support filter by attribute with null value

2018-04-10 Thread Hongbin Lu
Public bug reported:

Right now, it seems it is impossible to list resources with a filter
that contains a null value. For example, it is impossible to list subnets
that don't have a subnet pool. I tried the following, but it doesn't seem
to work (it returned an empty list instead of the filtered list).

  GET "/subnets?subnetpool_id=null"

IMHO, we should consider adding support for that.
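
For illustration, the request shape involved (a sketch with
python-requests; the endpoint and token are placeholders, and the `null`
convention is only a proposal, since the API does not define null
filtering today):

    import requests

    NEUTRON_URL = 'http://controller:9696'  # placeholder endpoint
    TOKEN = 'gAAAA...'                      # placeholder Keystone token

    # Attempt to list subnets that have no subnet pool. Today this returns
    # an empty list, presumably because 'null' is compared as a literal
    # string rather than being treated as "IS NULL".
    resp = requests.get(NEUTRON_URL + '/v2.0/subnets',
                        params={'subnetpool_id': 'null'},
                        headers={'X-Auth-Token': TOKEN})
    print(resp.json()['subnets'])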

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762752

Title:
  Support filter by attribute with null value

Status in neutron:
  New

Bug description:
  Right now, it seems it is impossible to list resources with a filter
  that contains a null value. For example, it is impossible to list
  subnets that don't have a subnet pool. I tried the following, but it
  doesn't seem to work (it returned an empty list instead of the
  filtered list).

GET "/subnets?subnetpool_id=null"

  IMHO, we should consider adding support for that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762752/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762376] Re: openstack router create - multiple routers with same name in same project

2018-04-10 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762376

Title:
  openstack router create - multiple routers with same name in same
  project

Status in neutron:
  Invalid

Bug description:
  https://bugzilla.redhat.com/show_bug.cgi?id=1565059

  
  Description of problem:

  Unlike other openstack CLI commands, which prevent creating duplicate
  objects with the same name (returning "Conflict occurred attempting to
  store ..."), the "router create" command does allow creating multiple
  routers with the same name in the same project.

  
  Version-Release number of selected component:
  OSP 13 - 2018-03-20.2

  How reproducible: 
  always

  
  Steps to Reproduce:

  (overcloud) [stack@undercloud-0 ~]$ 
  $ openstack project create test_cloud --enable

  Conflict occurred attempting to store project - it is not permitted to
  have two projects with the same name in the same domain : test_cloud.
  (HTTP 409)

  $ openstack user create tester --enable --password testerpass
  --project test_cloud

  Conflict occurred attempting to store user - Duplicate entry found
  with name tester at domain ID default. (HTTP 409)

  
  $ openstack router create internal_router
  ...
  | id  | a2b97aa1-50f7-4e44-abcd-41468998703b |
  | name| internal_router  |

  $ openstack router create internal_router
  ...
  | id  | 8c2f09d3-0677-4632-9251-b498d570a96e |
  | name| internal_router  |

  $ openstack router list
  
  +--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+
  | ID                                   | Name            | Status | State | Distributed | HA    | Project                          |
  +--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+
  | 8c2f09d3-0677-4632-9251-b498d570a96e | internal_router | ACTIVE | UP    | False       | False | 676572adef8e41999011e22234871c31 |
  | a2b97aa1-50f7-4e44-abcd-41468998703b | internal_router | ACTIVE | UP    | False       | False | 676572adef8e41999011e22234871c31 |
  +--------------------------------------+-----------------+--------+-------+-------------+-------+----------------------------------+

  
  Expected results:

  A second attempt to run "openstack router create internal_router"
  should probably return:

  Conflict occurred attempting to store router - Duplicate entry found
  with name internal_router at project ID test_cloud. (HTTP 409)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1732837] Re: weather applet crashes on logout

2018-04-10 Thread Erno Kuvaja
** Changed in: glance
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1732837

Title:
  weather applet crashes on logout

Status in Glance:
  Invalid

Bug description:
  weather applet crashes on logout

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1732837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762736] [NEW] Iptables firewall driver adds forward rules for trusted ports only in the ingress direction

2018-04-10 Thread Nikita Gerasimov
Public bug reported:

The iptables firewall driver adds forward rules for trusted ports only in
the ingress direction. But for ports like "network:router_ha_interface" to
work normally, the egress direction is also required.

Version: queens
openstack-neutron-linuxbridge-12.0.1-1.el7.noarch

https://review.openstack.org/525607
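
To make the asymmetry concrete, a sketch of the two rule directions as
physdev match strings (the device name is a placeholder; these are
illustrative, not lifted from the driver):

    # Trusted-port FORWARD rules. Only the ingress direction exists today.
    device = 'tap0dc81533-aa'  # placeholder port device name

    # ingress: traffic bridged *to* the port
    ingress = '-m physdev --physdev-out %s --physdev-is-bridged -j ACCEPT' % device
    # egress: traffic bridged *from* the port, which ports such as
    # network:router_ha_interface also need
    egress = '-m physdev --physdev-in %s --physdev-is-bridged -j ACCEPT' % device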

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762736

Title:
  Iptables firewall driver adds forward rules for trusted ports only in
  the ingress direction

Status in neutron:
  New

Bug description:
  The iptables firewall driver adds forward rules for trusted ports only in
  the ingress direction. But for ports like "network:router_ha_interface"
  to work normally, the egress direction is also required.

  Version: queens
  openstack-neutron-linuxbridge-12.0.1-1.el7.noarch

  https://review.openstack.org/525607

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762732] [NEW] l3agentscheduler doesn't return a response body with POST /v2.0/agents/{agent_id}/l3-routers

2018-04-10 Thread Boden R
Public bug reported:

As discussed in [1], the

POST /v2.0/agents/{agent_id}/l3-routers

does not return a response body. This seems inconsistent with our other
APIs as POSTs typically return the created resource. This is even true
with other APIs that 'add' something to a resource.

It seems we should consider returning the resource here; I suspect it's
just a few LOC changes in the API.

[1] https://review.openstack.org/#/c/543408/6/api-ref/source/v2/l3
-agent-scheduler.inc@76
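
A sketch of the behavior being described (python-requests; the endpoint,
token and IDs are placeholders, and the suggested response body is a
proposal, not current behavior):

    import requests

    NEUTRON_URL = 'http://controller:9696'  # placeholder
    TOKEN = 'gAAAA...'                      # placeholder
    AGENT_ID = 'l3-agent-uuid'              # placeholder
    ROUTER_ID = 'router-uuid'               # placeholder

    # Schedule a router to an L3 agent; the 201 response carries no body.
    resp = requests.post(
        '%s/v2.0/agents/%s/l3-routers' % (NEUTRON_URL, AGENT_ID),
        json={'router_id': ROUTER_ID},
        headers={'X-Auth-Token': TOKEN})
    print(resp.status_code, repr(resp.text))  # e.g.: 201 ''
    # A fix would presumably serialize the scheduled router here instead.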

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762732

Title:
  l3agentscheduler doesn't return a response body with POST
  /v2.0/agents/{agent_id}/l3-routers

Status in neutron:
  New

Bug description:
  As discussed in [1], the

  POST /v2.0/agents/{agent_id}/l3-routers

  does not return a response body. This seems inconsistent with our
  other APIs as POSTs typically return the created resource. This is
  even true with other APIs that 'add' something to a resource.

  It seems we should consider returning the resource here; I suspect
  it's just a few LOC changes in the API.

  [1] https://review.openstack.org/#/c/543408/6/api-ref/source/v2/l3
  -agent-scheduler.inc@76

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762733] [NEW] l3agentscheduler doesn't return a response body with POST /v2.0/agents/{agent_id}/l3-routers

2018-04-10 Thread Boden R
Public bug reported:

As discussed in [1], the

POST /v2.0/agents/{agent_id}/l3-routers

does not return a response body. This seems inconsistent with our other
APIs as POSTs typically return the created resource. This is even true
with other APIs that 'add' something to a resource.

It seems we should consider returning the resource here; I suspect it's
just a few LOC changes in the API.

[1] https://review.openstack.org/#/c/543408/6/api-ref/source/v2/l3
-agent-scheduler.inc@76

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762733

Title:
  l3agentscheduler doesn't return a response body with POST
  /v2.0/agents/{agent_id}/l3-routers

Status in neutron:
  New

Bug description:
  As discussed in [1], the

  POST /v2.0/agents/{agent_id}/l3-routers

  does not return a response body. This seems inconsistent with our
  other APIs as POSTs typically return the created resource. This is
  even true with other APIs that 'add' something to a resource.

  It seems we should consider returning the resource here; I suspect
  it's just a few LOC changes in the API.

  [1] https://review.openstack.org/#/c/543408/6/api-ref/source/v2/l3
  -agent-scheduler.inc@76

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750140] Re: security group from all projects are shown in "Edit Security Group" of instance

2018-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/545600
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=3051dee604d69797fd31c27d01cea3c9b9a29ca4
Submitter: Zuul
Branch:master

commit 3051dee604d69797fd31c27d01cea3c9b9a29ca4
Author: Akihiro Motoki 
Date:   Sat Feb 17 23:31:58 2018 +0900

Ensure to show security groups only from current project

Previously when logging in as a user with admin role,
if we visit "Edit Security Group" action of the instance table,
security groups from all projects are listed.

Change-Id: I71ff940434ef8dc146e934dc833c4d26829930c0
Closes-Bug: #1750140
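
The gist of the change (a rough sketch, not the literal diff; tenant_id is
neutron's standard project-scoping filter):

    # Scope the security group listing to the instance's own project
    # instead of listing groups from every project under an admin token.
    groups = api.neutron.security_group_list(
        request, tenant_id=instance.tenant_id)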


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1750140

Title:
  security group from all projects are shown in "Edit Security Group" of
  instance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When logging in as a user with admin role, if we visit "Edit Security
  Group" action of the instance table, security groups from all projects
  are listed. The expected behavior is that only security groups from
  the current project are shown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1750140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733390] Re: routerservicetype extension not documented in api-ref

2018-04-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/549765
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=4cdf339a0b5858f1a9c0a6eafd5a74b43db85d1c
Submitter: Zuul
Branch:master

commit 4cdf339a0b5858f1a9c0a6eafd5a74b43db85d1c
Author: Michal Kelner Mishali 
Date:   Mon Mar 5 15:48:48 2018 +0200

Documenting Router service type ID

Closes-Bug: #1733390

Change-Id: I83a0525d73858178d600f3844cc6780d4d5ca738
Signed-off-by: Michal Kelner Mishali 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733390

Title:
  routerservicetype extension not documented in api-ref

Status in neutron:
  Fix Released

Bug description:
  The routerservicetype extension is not doc'd in our api-ref:
  - The routerservicetype extension needs a subsection atop the router api-ref 
describing the extn itself.
  - Any params added by the extn need to be doc'd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762708] [NEW] Unable to ssh/ping IP assigned to VM deployed on flat network

2018-04-10 Thread Sheetal
Public bug reported:

I have created a base VM (to install RDO queens) with RHEL 7.4 OS, 16 GB
RAM, 8 CPUs, 200 GB HDD, virtualization enabled.

I have installed RDO queens (all-in-one topology) using the command below:
nohup packstack --allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-ex:ens192 --os-neutron-ml2-type-drivers=vxlan,flat,vlan --os-neutron-ml2-flat-networks=extnet --os-heat-install=y --keystone-admin-passwd=openstack1 &

I wanted to configure a flat network on this setup, so I made the below
changes in ml2_conf.ini

type_drivers = flat,vlan
flat_networks = extnet

Changes in openvswitch_agent.ini:
integration_bridge=br-int
tenant_network_type = flat,vlan
network_vlan_ranges = extnet
enable_tunneling = True
bridge_mappings = extent:br-ex
physical_interface_mappings = extnet:br-ex

Created security groups to allow ping/ssh to deployed instances:
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 defualt

neutron security-group-rule-create --direction ingress --ethertype IPv4
--protocol icmp default

Created external n/w :
neutron net-create external --router:external=True --shared 
--provider:network_type=flat --provider:physical_network=extnet

neutron subnet-create --name ext-subnet external 172.16.82.0/24
--allocation-pool start=172.16.82.30,end=172.16.82.35 (dhcp=enabled)

Deployed a cirros image on above created external n/w as admin user in
default domain. I also made sure that security group rules created for
tcp/icmp are applied while deployment.

Note: the ifconfig command inside the deployed cirros instance displayed
the IP from the allocation pool, but the instance can not ping the
gateway; it can only ping the controller. I can not ping/ssh the IP
assigned to the deployed cirros instance from my machine.

Note: if I disable DHCP on the subnet, the ifconfig command does not
display an IP for the deployed instance.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762708

Title:
  Unable to ssh/ping IP assigned to VM deployed on flat network

Status in neutron:
  New

Bug description:
  I have created a base VM (to install RDO queens) with RHEL 7.4 OS, 16
  GB RAM, 8 CPUs, 200 GB HDD, virtualization enabled.

  I have installed RDO queens (all-in-one topology) using the command below:
  nohup packstack --allinone --provision-demo=n --os-neutron-ovs-bridge-mappings=extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-ex:ens192 --os-neutron-ml2-type-drivers=vxlan,flat,vlan --os-neutron-ml2-flat-networks=extnet --os-heat-install=y --keystone-admin-passwd=openstack1 &

  I wanted to configure a flat network on this setup, so I made the below
  changes in ml2_conf.ini

  type_drivers = flat,vlan
  flat_networks = extnet

  Changes in openvswitch_agent.ini:
  integration_bridge=br-int
  tenant_network_type = flat,vlan
  network_vlan_ranges = extnet
  enable_tunneling = True
  bridge_mappings = extent:br-ex
  physical_interface_mappings = extnet:br-ex

  Created security groups to allow ping/ssh to deployed instances:
  neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 defualt

  neutron security-group-rule-create --direction ingress --ethertype
  IPv4 --protocol icmp default

  Created external n/w :
  neutron net-create external --router:external=True --shared 
--provider:network_type=flat --provider:physical_network=extnet

  neutron subnet-create --name ext-subnet external 172.16.82.0/24
  --allocation-pool start=172.16.82.30,end=172.16.82.35 (dhcp=enabled)

  Deployed a cirros image on above created external n/w as admin user in
  default domain. I also made sure that security group rules created for
  tcp/icmp are applied while deployment.

  Note: the ifconfig command inside the deployed cirros instance
  displayed the IP from the allocation pool, but the instance can not
  ping the gateway; it can only ping the controller. I can not ping/ssh
  the IP assigned to the deployed cirros instance from my machine.

  Note: if I disable DHCP on the subnet, the ifconfig command does not
  display an IP for the deployed instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762687] [NEW] Concurrent requests to attach the same non-multiattach volume to multiple instances can succeed

2018-04-10 Thread Lee Yarwood
Public bug reported:

Description
===

Discovered this by chance yesterday. At first glance this appears to be
due to a lack of locking within c-api when we initially create the
attachments, and no additional validation when we update with the
connector later in the attach flow. Reporting this against both nova and
cinder for now.
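
Put differently, no guard like the following appears to run atomically
when the two attachments are created concurrently (a pseudocode-level
sketch, not cinder's actual code; see the reproduction below):

    class InvalidVolume(Exception):
        """Stand-in for the API error cinder would need to raise."""

    def validate_new_attachment(volume):
        # Reject a second attachment unless the volume is multiattach.
        # Without locking around attachment create/update, two requests
        # can both pass a check like this before either is recorded.
        if volume.attachments and not volume.multiattach:
            raise InvalidVolume('volume is already attached')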

$ nova volume-attach 77c092c2-9664-42b2-bf71-277b1bfad707 b4240f39-da7a-4372-b4ca-15a0c6121ac8 & nova volume-attach 91a7c490-5a9a-4048-a109-b1159b7f0e79 b4240f39-da7a-4372-b4ca-15a0c6121ac8
[1] 24949
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
| serverId | 91a7c490-5a9a-4048-a109-b1159b7f0e79 |
| volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
+----------+--------------------------------------+
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
| serverId | 77c092c2-9664-42b2-bf71-277b1bfad707 |
| volumeId | b4240f39-da7a-4372-b4ca-15a0c6121ac8 |
+----------+--------------------------------------+
$ cinder show b4240f39-da7a-4372-b4ca-15a0c6121ac8
+--------------------------------+----------------------------------------------------------------------------------+
| Property                       | Value                                                                            |
+--------------------------------+----------------------------------------------------------------------------------+
| attached_servers               | ['91a7c490-5a9a-4048-a109-b1159b7f0e79', '77c092c2-9664-42b2-bf71-277b1bfad707'] |
| attachment_ids                 | ['31b8c16f-07d0-4f0c-95d8-c56797a270dc', 'a7eb9cb1-b7be-44e3-a176-3c6989459aaa'] |
| availability_zone              | nova                                                                             |
| bootable                       | false                                                                            |
| consistencygroup_id            | None                                                                             |
| created_at                     | 2018-04-09T19:02:46.00                                                           |
| description                    | None                                                                             |
| encrypted                      | False                                                                            |
| id                             | b4240f39-da7a-4372-b4ca-15a0c6121ac8                                             |
| metadata                       | attached_mode : rw                                                               |
| migration_status               | None                                                                             |
| multiattach                    | False                                                                            |
| name                           | None                                                                             |
| os-vol-host-attr:host          | test.example.com@lvmdriver-1#lvmdriver-1                                         |
| os-vol-mig-status-attr:migstat | None                                                                             |
| os-vol-mig-status-attr:name_id | None                                                                             |
| os-vol-tenant-attr:tenant_id   | fe3128ecf4704369ae3f7ede03f6bc29                                                 |
| replication_status             | None                                                                             |
| size                           | 1                                                                                |
| snapshot_id                    | None                                                                             |
| source_volid                   | None                                                                             |
| status                         | in-use                                                                           |
| updated_at                     | 2018-04-09T19:04:24.00                                                           |
| user_id                        | 57293b0839da449580ce7008c8734c1c                                                 |
| volume_type                    | lvmdriver-1                                                                      |
+--------------------------------+----------------------------------------------------------------------------------+

$ ll 

[Yahoo-eng-team] [Bug 1762329] Re: visit i18n/js/horizon+openstack_dashboard/ need account for login

2018-04-10 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1753557 ***
https://bugs.launchpad.net/bugs/1753557

** This bug has been marked a duplicate of bug 1753557
   i18n javascript on login page requires login (again)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762329

Title:
  visit i18n/js/horizon+openstack_dashboard/ need account for login

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  I use horizon version: horizon-13.0.0

In the console there are a lot of errors when I open the login URL; it
shows "gettext is not defined"

Then I found that the URL `i18n/js/horizon+openstack_dashboard/` requires
login first

I edited `horizon/base.py` and added this line in function
_decorate_urlconf: `if pattern.name == 'jsi18n': continue  # skip auth
Tag1`. Then it works:
  

  def _decorate_urlconf(urlpatterns, decorator, *args, **kwargs):
      for pattern in urlpatterns:
          if getattr(pattern, 'callback', None):
              if pattern.name == 'jsi18n':
                  continue  # skip auth Tag1
              decorated = decorator(pattern.callback, *args, **kwargs)
              if django.VERSION >= (1, 10):
                  pattern.callback = decorated
              else:
                  # prior to 1.10 callback was a property and we had
                  # to modify the private attribute behind the property
                  pattern._callback = decorated
          if getattr(pattern, 'url_patterns', []):
              _decorate_urlconf(pattern.url_patterns, decorator, *args, **kwargs)

  
  Where is my configuration wrong?

  Please help

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762688] [NEW] rescued instance doesn't have attached vGPUs

2018-04-10 Thread Sylvain Bauza
Public bug reported:

With the libvirt driver, rescuing an instance means that the attached
mediated devices for virtual GPUs will be released. When unrescuing the
instance, the driver will use the existing guest XML so it will attach
again the mediated devices, but it could be a race condition in case the
related vGPUs are now attached to a separate instance.

We should attach the mediated devices to the rescued instance too.

** Affects: nova
 Importance: Low
 Assignee: Sylvain Bauza (sylvain-bauza)
 Status: Confirmed


** Tags: libvirt vgpu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762688

Title:
  rescued instance doesn't have attached vGPUs

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  With the libvirt driver, rescuing an instance means that the attached
  mediated devices for virtual GPUs will be released. When unrescuing
  the instance, the driver will use the existing guest XML so it will
  attach again the mediated devices, but it could be a race condition in
  case the related vGPUs are now attached to a separate instance.

  We should attach the mediated devices to the rescued instance too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761140] Re: dpkg eror processing package nova-compute

2018-04-10 Thread Annie Melen
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1761140

Title:
  dpkg eror processing package nova-compute

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  New

Bug description:
  Hello! 
  I've encountered the bug while installing Nova on compute nodes:

  ...
  Setting up qemu-system-x86 (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-kvm (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-utils (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up python-keystone (2:13.0.0-0ubuntu1~cloud0) ...
  Processing triggers for initramfs-tools (0.122ubuntu8.11) ...
  update-initramfs: Generating /boot/initrd.img-4.4.0-116-generic
  Setting up nova-compute-libvirt (2:17.0.1-0ubuntu1~cloud0) ...
  adduser: The user `nova' does not exist.
  dpkg: error processing package nova-compute-libvirt (--configure):
   subprocess installed post-installation script returned error exit status 1
  dpkg: dependency problems prevent configuration of nova-compute-kvm:
   nova-compute-kvm depends on nova-compute-libvirt (= 
2:17.0.1-0ubuntu1~cloud0); however:
Package nova-compute-libvirt is not configured yet.

  dpkg: error processing package nova-compute-kvm (--configure):
   dependency problems - leaving unconfigured
  Setting up python-os-brick (2.3.0-0ubuntu1~cloud0) ...
  No apport report written because the error message indicates its a followup 
error from a previous failure.
  Setting up python-nova (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-common (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-compute (2:17.0.1-0ubuntu1~cloud0) ...
  Processing triggers for libc-bin (2.23-0ubuntu10) ...
  Processing triggers for systemd (229-4ubuntu21.2) ...
  Processing triggers for ureadahead (0.100.0-19) ...
  Processing triggers for dbus (1.10.6-1ubuntu3.3) ...
  Errors were encountered while processing:
   nova-compute-libvirt
   nova-compute-kvm
  ...

  Installation failed when executing the post-installation script.
  After some investigation I found out that if I create the 'nova' user
  BEFORE running the package installation, it succeeds.

  Steps to reproduce
  --
  1. Prepare the node for installing nova-compute packages
  2. Run 'apt-get install nova-compute'

  Expected result
  --
  Nova-compute installed successfully without errors

  Actual result
  --
  Installation failed with dpkg error

  Workaround
  --
  1. Create system user: add to /etc/passwd
 nova:x:64060:64060::/var/lib/nova:/bin/false
  2. Create system group: add to /etc/group
 nova:x:64060:
  3. Run 'apt-get install nova-compute'
 
  My Environment
  --
  Ubuntu 16.04.4 LTS, 4.4.0-116-generic
  Openstack Queens Release
  Nova 17.0.1-0ubuntu1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1761140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762665] [NEW] Maybe wrong config in "Install and configure (Red Hat) in glance"

2018-04-10 Thread George Gao
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: ___auth_url = 
http://controller:5000___
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 16.0.1.dev11 on 'Sat Apr 7 22:02:25 2018, commit 5326f64'
SHA: 5326f644439f5b7c81a8da12189af52dec6f33b1
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
URL: https://docs.openstack.org/glance/queens/install/install-rdo.html

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "screenshots of these documents."
   
https://bugs.launchpad.net/bugs/1762665/+attachment/5109047/+files/screens_of_relative_documents.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1762665

Title:
  Maybe wrong config in "Install and configure (Red Hat) in glance"

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: ___auth_url = 
http://controller:5000___
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.0.1.dev11 on 'Sat Apr 7 22:02:25 2018, commit 5326f64'
  SHA: 5326f644439f5b7c81a8da12189af52dec6f33b1
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
  URL: https://docs.openstack.org/glance/queens/install/install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1762665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1756064] Re: neutron agent doesn't remove trunk bridge after nova-compute restart

2018-04-10 Thread Ivan Dyukov
looks like it's solved by https://bugs.launchpad.net/os-vif/+bug/1670628

** Changed in: os-vif
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1756064

Title:
  neutron agent doesn't remove trunk bridge after nova-compute restart

Status in neutron:
  Invalid
Status in os-vif:
  Invalid

Bug description:
  env:
  backend is openvswitch with DPDK
  Version is Ocata

  Steps:
  Create two networks.
  Create two ports for each network
  Create trunk port
  boot virtual machine with trunk port
  Restart nova-compute on compute node: # openstack-service restart openstack-nova-compute
  Remove the virtual machine
  Check ovs configuration on compute node: ovs-vsctl show

  Expected result: there is no trunk bridge, e.g. tbr-c4ce71ea-7
  Actual result: the trunk bridge and service ports are still in the ovs
  configuration, e.g.

  Bridge "tbr-c4ce71ea-7"
  Port "spt-63eb23e7-af"
  tag: 102
  Interface "spt-63eb23e7-af"
  type: patch
  options: {peer="spi-63eb23e7-af"}
  Port "tbr-c4ce71ea-7"
  Interface "tbr-c4ce71ea-7"
  type: internal
  Port "tpt-d6c0e47e-ed"
  Interface "tpt-d6c0e47e-ed"
  type: patch
  options: {peer="tpi-d6c0e47e-ed"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1756064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747650] Re: Make bdms querying in multiple cells use scatter-gather

2018-04-10 Thread Surya Seetharaman
** No longer affects: nova/pike

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1747650

Title:
  Make bdms querying in multiple cells use scatter-gather

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed

Bug description:
  Currently the "_get_instance_bdms_in_multiple_cells" function in
  extended_volumes runs sequentially, and this affects performance in
  the case of large deployments (running a lot of cells):
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extended_volumes.py#L50

  So it would be nice to use the scatter_gather_cells function to do
  this operation in parallel.
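
  Roughly, the call has this shape (a sketch; ctxt, cell_mappings and
  instance_uuids are placeholders, and the exact call site differs):

      from nova import context as nova_context
      from nova.objects import block_device as bdm_obj

      # Query every cell in parallel instead of looping sequentially.
      # The result maps cell uuid -> BDM list, or a sentinel value when
      # a cell timed out or raised (useful when a cell is down).
      results = nova_context.scatter_gather_cells(
          ctxt, cell_mappings, nova_context.CELL_TIMEOUT,
          bdm_obj.BlockDeviceMappingList.bdms_by_instance_uuid,
          instance_uuids)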

  Also, apart from the performance scaling point of view, in case the
  connection to a particular cell fails, it would be nice to have
  sentinels returned, which the scatter_gather_cells function does. This
  helps when a cell is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1747650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762633] [NEW] Keystone Installation Tutorial for Ubuntu in keystone

2018-04-10 Thread gokul
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: In the note, MacOS is written
as MaOS.

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.1.dev17 on 2018-02-20 08:16
SHA: 6de0a147d68042af79ddc6d700cec3fd71b9d03c
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst
URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1762633

Title:
  Keystone Installation Tutorial for Ubuntu in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: In the note, MacOS is
  written as MaOS.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.1.dev17 on 2018-02-20 08:16
  SHA: 6de0a147d68042af79ddc6d700cec3fd71b9d03c
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-ubuntu.rst
  URL: https://docs.openstack.org/keystone/pike/install/index-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1762633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp