[Yahoo-eng-team] [Bug 1818791] [NEW] Volume Snapshot table has incorrect error message.

2019-03-05 Thread Vishal Manchanda
Public bug reported:

The volume snapshot table has an incorrect error message where it
retrieves volume snapshot project information; see [1].

[1]
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/snapshots/tables.py#L52
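
For reference, a minimal sketch of the kind of fix implied here, with a
hypothetical helper name and message text (the actual Horizon code at
[1] differs; only the corrected message wording matters):

    from django.utils.translation import ugettext_lazy as _

    from horizon import exceptions
    from openstack_dashboard import api


    def get_snapshot_project(request, snapshot):
        # Look up the project that owns a volume snapshot.
        try:
            return api.keystone.tenant_get(request, snapshot.project_id)
        except Exception:
            # The bug: the handler reused a message written for another
            # table; it should describe the snapshot project lookup.
            exceptions.handle(request, _('Unable to retrieve volume '
                                         'snapshot project information.'))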

** Affects: horizon
 Importance: Undecided
 Assignee: Vishal Manchanda (vishalmanchanda)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818791

Title:
  Volume Snapshot table has incorrect error message.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The volume snapshot table has an incorrect error message where it
  retrieves volume snapshot project information; see [1].

  [1]
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/snapshots/tables.py#L52

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378904] Re: renaming availability zone doesn't modify host's availability zone

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/509206
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8e19ef4173906da0b7c761da4de0728a2fd71e24
Submitter: Zuul
Branch: master

commit 8e19ef4173906da0b7c761da4de0728a2fd71e24
Author: Andrey Volkov 
Date:   Tue Oct 3 15:42:55 2017 +0300

Check hosts have no instances for AZ rename

The update aggregate and update aggregate metadata API calls have the
ability to change the availability zone name for the aggregate. If the
aggregate is not empty (has hosts with instances on it), the update
leads to a discrepancy for objects that save the availability zone as a
string rather than a reference.

From devstack DB they are:
- cinder.backups.availability_zone
- cinder.consistencygroups.availability_zone
- cinder.groups.availability_zone
- cinder.services.availability_zone
- cinder.volumes.availability_zone
- neutron.agents.availability_zone
- neutron.networks.availability_zone_hints
- neutron.router_extra_attributes.availability_zone_hints
- nova.dns_domains.availability_zone
- nova.instances.availability_zone
- nova.volume_usage_cache.availability_zone
- nova.shadow_dns_domains.availability_zone
- nova.shadow_instances.availability_zone
- nova.shadow_volume_usage_cache.availability_zone

Why is that bad?
First, the API and Horizon show different values for, e.g., a host and
its instances. Second, migration of instances with a changed
availability zone fails with "No valid host found" for the old AZ.

This change adds an additional check to the Update Aggregate API call.
With the check, it's not possible to rename an AZ if the corresponding
aggregate has instances on any of its hosts.

PUT /os-aggregates/{aggregate_id} and
POST /os-aggregates/{aggregate_id}/action return HTTP 400 for
availability zone renaming if the hosts of the aggregate have any
instances. This is similar to the existing error for conflicting AZ
names.

Change-Id: Ic27195e46502067c87ee9c71a811a3ca3f610b73
Closes-Bug: #1378904
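
In outline, the added guard behaves like the following sketch (helper
name and data shapes are illustrative; the real check lives in nova's
aggregate API layer):

    from webob import exc


    def check_az_rename_allowed(aggregate, new_az, instances_per_host):
        # Reject an availability zone rename for a non-empty aggregate.
        # instances_per_host maps each host in the aggregate to its
        # instance count; any non-zero count fails the rename with
        # HTTP 400, as described in the commit message above.
        current_az = aggregate.get('availability_zone')
        if new_az and new_az != current_az:
            if any(instances_per_host.get(host, 0)
                   for host in aggregate.get('hosts', [])):
                raise exc.HTTPBadRequest(
                    explanation='Cannot update availability zone of a '
                                'non-empty aggregate')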


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378904

Title:
  renaming availability zone doesn't modify host's availability zone

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Hi,

  After renaming our availability zones via Horizon Dashboard, we
  couldn't migrate any "old" instance anymore, the scheduler returning
  "No valid Host found"...

  After searching, we found in the nova DB `instances` table, the
  "availability_zone" field contains the name of the availability zone,
  instead of the ID ( or maybe it is intentional ;) ).

  So renaming an AZ leaves the instances created prior to the rename
  orphaned, and the scheduler cannot find any valid host for them...

  Our openstack install is on debian wheezy, with the icehouse
  "official" repository from archive.gplhost.com/debian/, up to date.

  If you need any more info, I'd be glad to help.

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818765] Re: the PeriodicWorker function missing the default desc in constructor

2019-03-05 Thread baisen
https://review.openstack.org/#/c/641186/
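
The review adds a default so the base class never has to consult the
config option. A minimal sketch, assuming the BaseWorker signature
implied by the traceback below (parameter names are illustrative):

    from neutron import worker as neutron_worker


    class PeriodicWorker(neutron_worker.BaseWorker):
        def __init__(self, check_func, set_proctitle='off'):
            # Passing an explicit default avoids the fallback to
            # cfg.CONF.setproctitle, which out-of-tree consumers such
            # as tricircle may never have registered.
            super(PeriodicWorker, self).__init__(
                worker_process_count=0, set_proctitle=set_proctitle)
            self._check_func = check_func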

** Also affects: tricircle
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818765

Title:
  the PeriodicWorker function missing the default desc in constructor

Status in neutron:
  New
Status in Tricircle:
  New

Bug description:
  
  After this PR merged: https://review.openstack.org/#/c/637019/

  we should add a default desc in PeriodicWorker. Otherwise, any class
  based on PeriodicWorker that does not have the setproctitle option
  set in its neutron conf will fail with an error like the one below,
  where set_proctitle is None and the setproctitle config option is not
  available:

  packages/neutron/worker.py", line 21, in __init__
  set_proctitle = set_proctitle or cfg.CONF.setproctitle

  
  ft2.2: 
tricircle.tests.unit.network.test_central_trunk_plugin.PluginTest.test_delete_trunk_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  return func(*args, **keywargs)
File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 555, 
in test_delete_trunk
  fake_plugin.delete_trunk(q_ctx, t_trunk['id'])
File "tricircle/network/central_trunk_plugin.py", line 70, in delete_trunk
  super(TricircleTrunkPlugin, self).delete_trunk(context, trunk_id)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/plugin.py",
 line 267, in delete_trunk
  if trunk_port_validator.can_be_trunked_or_untrunked(context):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 115, in can_be_trunked_or_untrunked
  if not self.is_bound(context):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 109, in is_bound
  core_plugin = directory.get_plugin()
File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 254, 
in fake_get_plugin
  return FakeCorePlugin()
File "tricircle/network/central_plugin.py", line 182, in __new__
  n = super(TricirclePlugin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py",
 line 156, in __new__
  return super(NeutronDbPluginV2, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 104, in replacement_new
  instance = super_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/external_net_db.py",
 line 77, in __new__
  return super(External_net_db_mixin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/portbindings_db.py",
 line 54, in __new__
  return super(PortBindingMixin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
  instance = super_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
  instance = super_new(cls, *args, **kwargs)
File 

[Yahoo-eng-team] [Bug 1818765] [NEW] the PeriodicWorker function missing the default desc in constructor

2019-03-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:


After this PR merged: https://review.openstack.org/#/c/637019/

we should add a default desc in PeriodicWorker. Otherwise, any class
based on PeriodicWorker that does not have the setproctitle option set
in its neutron conf will fail with an error like the one below, where
set_proctitle is None and the setproctitle config option is not
available:

packages/neutron/worker.py", line 21, in __init__
set_proctitle = set_proctitle or cfg.CONF.setproctitle


ft2.2: 
tricircle.tests.unit.network.test_central_trunk_plugin.PluginTest.test_delete_trunk_StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
return func(*args, **keywargs)
  File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 555, 
in test_delete_trunk
fake_plugin.delete_trunk(q_ctx, t_trunk['id'])
  File "tricircle/network/central_trunk_plugin.py", line 70, in delete_trunk
super(TricircleTrunkPlugin, self).delete_trunk(context, trunk_id)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/plugin.py",
 line 267, in delete_trunk
if trunk_port_validator.can_be_trunked_or_untrunked(context):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 115, in can_be_trunked_or_untrunked
if not self.is_bound(context):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 109, in is_bound
core_plugin = directory.get_plugin()
  File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 254, 
in fake_get_plugin
return FakeCorePlugin()
  File "tricircle/network/central_plugin.py", line 182, in __new__
n = super(TricirclePlugin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py",
 line 156, in __new__
return super(NeutronDbPluginV2, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 104, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/external_net_db.py",
 line 77, in __new__
return super(External_net_db_mixin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/portbindings_db.py",
 line 54, in __new__
return super(PortBindingMixin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/l3_db.py",
 line 96, in __new__
inst._start_janitor()
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/l3_db.py",
 line 

[Yahoo-eng-team] [Bug 1818765] [NEW] the PeriodicWorker function missing the default desc in constructor

2019-03-05 Thread baisen
Public bug reported:


After this PR merged: https://review.openstack.org/#/c/637019/

we should add a default desc in PeriodicWorker. Otherwise, any class
based on PeriodicWorker that does not have the setproctitle option set
in its neutron conf will fail with an error like the one below, where
set_proctitle is None and the setproctitle config option is not
available:

packages/neutron/worker.py", line 21, in __init__
set_proctitle = set_proctitle or cfg.CONF.setproctitle


ft2.2: 
tricircle.tests.unit.network.test_central_trunk_plugin.PluginTest.test_delete_trunk_StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
return func(*args, **keywargs)
  File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 555, 
in test_delete_trunk
fake_plugin.delete_trunk(q_ctx, t_trunk['id'])
  File "tricircle/network/central_trunk_plugin.py", line 70, in delete_trunk
super(TricircleTrunkPlugin, self).delete_trunk(context, trunk_id)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/plugin.py",
 line 267, in delete_trunk
if trunk_port_validator.can_be_trunked_or_untrunked(context):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 115, in can_be_trunked_or_untrunked
if not self.is_bound(context):
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 109, in is_bound
core_plugin = directory.get_plugin()
  File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 254, 
in fake_get_plugin
return FakeCorePlugin()
  File "tricircle/network/central_plugin.py", line 182, in __new__
n = super(TricirclePlugin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py",
 line 156, in __new__
return super(NeutronDbPluginV2, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 104, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/external_net_db.py",
 line 77, in __new__
return super(External_net_db_mixin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/portbindings_db.py",
 line 54, in __new__
return super(PortBindingMixin, cls).__new__(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
instance = super_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
instance = orig_new(cls, *args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/l3_db.py",
 line 96, in __new__
inst._start_janitor()
  File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/l3_db.py",
 line 139, in _start_janitor
 

[Yahoo-eng-team] [Bug 1818759] [NEW] Unexpected API Error

2019-03-05 Thread Steve Mitchell
Public bug reported:

Unexpected API Error. Please report this at
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500)
(Request-ID: req-b4622d4a-a853-4806-83f9-3056f4530314)

I could find no "Nova API log": "sudo find / -iname nova*.log" returned
an empty set.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818759

Title:
  Unexpected API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.  (HTTP
  500) (Request-ID: req-b4622d4a-a853-4806-83f9-3056f4530314)

  I could find no "Nova API log": "sudo find / -iname nova*.log"
  returned an empty set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818701] Re: invalid PCI alias in flavor results in HTTP 500 on instance create

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/641082
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cb5ad6d3c14caccfc2b222dc5d2f1f6c5e05da9c
Submitter: Zuul
Branch: master

commit cb5ad6d3c14caccfc2b222dc5d2f1f6c5e05da9c
Author: Chris Friesen 
Date:   Tue Mar 5 09:53:37 2019 -0600

Handle missing exception in instance creation code

In the instance creation code path it's possible for the PciInvalidAlias
exception to be raised if the flavor extra-specs have an invalid PCI
alias.  This should be converted to HTTPBadRequest along with the other
exceptions stemming from invalid extra-specs.

Without this, it gets reported as an HTTP 500 error.

Change-Id: Ia6921b5cd9253f65ff6904bdbce942759633de95
Closes-Bug: #1818701
Signed-off-by: Chris Friesen 
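
The shape of the fix is simply a broader except clause; a minimal
sketch using nova's exception and webob conventions (the standalone
wrapper is illustrative, the real change is inline in
ServersController.create()):

    from webob import exc

    from nova import exception


    def translate_invalid_extra_spec_errors(create_fn, *args, **kwargs):
        # Convert invalid-extra-spec failures into HTTP 400 rather than
        # letting them escape the API layer as an unexpected HTTP 500.
        try:
            return create_fn(*args, **kwargs)
        except exception.PciInvalidAlias as error:
            # Previously absent from the except list, hence the 500.
            raise exc.HTTPBadRequest(explanation=error.format_message())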


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818701

Title:
  invalid PCI alias in flavor results in HTTP 500 on instance create

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  If an invalid PCI alias is specified in the flavor extra-specs and we
  try to create an instance with that flavor, it will result in a
  PciInvalidAlias exception being raised.

  In ServersController.create() PciInvalidAlias is missing from the list
  of exceptions that get converted to an HTTPBadRequest.  Instead, it's
  reported as a 500 error:

  [stack@fedora-1 nova]$ nova boot --flavor  ds2G --image fedora29 --nic none 
--admin-pass fedora asdf3
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-fec3face-4135-41fd-bc48-07957363ddae)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659062] Re: Failed evacuations leave neutron ports on destination host

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/603844
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=542635034882e1b6897e1935f09d6feb6e77d1ce
Submitter: Zuul
Branch: master

commit 542635034882e1b6897e1935f09d6feb6e77d1ce
Author: Jack Ding 
Date:   Wed Sep 19 11:54:44 2018 -0400

Correct instance port binding for rebuilds

The following 2 scenarios could result in an instance with incorrect
port binding and cause subsequent rebuilds to fail.

If an evacuation of an instance fails part way through, after the point
where we reassign the port binding to the new host but before we change
the instance host, we end up with the ports assigned to the wrong host.
This change adds a check to determine whether there are any port
binding host mismatches and, if so, triggers setup of the instance
network.

During recovery of failed hosts, neutron could get overwhelmed and lose
messages, for example when active controller was powered-off in the
middle of instance evacuations. In this case the vif_type was set to
'binding_failed' or 'unbound'. We subsequently hit "Unsupported VIF
type" exception during instance hard_reboot or rebuild, leaving the
instance unrecoverable.

This commit changes _heal_instance_info_cache periodic task to update
port binding if evacuation fails due to above errors so that the
instance can be recovered later.

Closes-Bug: #1659062
Related-Bug: #1784579

Co-Authored-By: Gerry Kopec 
Co-Authored-By: Jim Gauld 
Change-Id: I75fd15ac2a29e420c09499f2c41d11259ca811ae
Signed-off-by: Jack Ding 
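
Conceptually, the healing step is a mismatch scan plus a rebind; a
simplified sketch under assumed data shapes (the real logic lives in
nova's compute manager and network API):

    VIF_TYPES_NEEDING_REBIND = ('binding_failed', 'unbound')


    def heal_port_bindings(context, instance, network_api, ports):
        # Rebind ports whose bound host or vif_type no longer matches
        # the instance, e.g. after a failed evacuation.
        stale = [port for port in ports
                 if port.get('binding:host_id') != instance.host
                 or port.get('binding:vif_type') in VIF_TYPES_NEEDING_REBIND]
        if stale:
            # Re-assert the binding on the host the instance actually
            # lives on.
            network_api.setup_instance_network_on_host(
                context, instance, instance.host)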


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659062

Title:
  Failed evacuations leave neutron ports on destination host

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  This is related to https://bugs.launchpad.net/nova/+bug/1430042 and the 
associated fix https://review.openstack.org/#/c/169827/; if an evacuation fails 
there is no reverting of the neutron ports' host_id binding back to the source 
host.

  This may or may not be a bug, but if the evacuation fails and the
  source host comes back up and VMs are expected to be running, then the
  neutron ports should probably be rolled back.

  Steps to reproduce
  ==
  * Raise an exception at some point in the evacuation flow after the 
setup_instance_network_on_host calls in _do_rebuild_instance in the manager
  * Issue an evacuation of a VM to the host that will fail

  Expected result
  ===
  * If the evacuation fails the expectation would be to have the neutron ports 
have their host_id binding updated to be the source host.

  Actual result
  =
  * The ports host_id bindings remain as the destination host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
 Newton

  2. Which hypervisor did you use?
 PowerVM

  3. Which storage type did you use?
 N/A

  4. Which networking type did you use?
 Neutron with SEA

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815153] Re: Requested host during cold migrate is ignored if server created before Rocky

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/636271
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=67d5970445818f2f245cf1b6d9d46c36fb220f04
Submitter: Zuul
Branch: master

commit 67d5970445818f2f245cf1b6d9d46c36fb220f04
Author: Takashi Natsume 
Date:   Tue Feb 12 11:46:57 2019 +0900

Fix resetting non-persistent fields when saving obj

The 'requested_destination', 'network_metadata', and 'retry' fields
in the RequestSpec object are currently reset when the object is saved.

When cold migrating a server, the API sets the requested_destination
so conductor will pass that information to the scheduler
to restrict the cold migration to that host.
But the 'heal_reqspec_is_bfv' method called from the conductor
makes an update to the RequestSpec which resets
the requested_destination so the server could end up being cold migrated
to some other host than the one that was requested by the API user.

So make them not be reset when saving the object.

Change-Id: I2131558f0edfe603ee1e8d8bae66a3caf5182a58
Closes-Bug: #1815153
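
The essence of the fix is to drop these fields from the persisted
payload without clearing them on the in-memory object; a sketch with an
illustrative helper (nova's actual save() logic differs in detail):

    NON_PERSISTENT_FIELDS = ('requested_destination', 'network_metadata',
                             'retry')


    def updates_for_db(request_spec_changes):
        # Return the dict to persist for RequestSpec.save(), dropping
        # per-request fields while leaving the in-memory object intact,
        # so a requested destination set by the API still reaches the
        # scheduler.
        updates = dict(request_spec_changes)
        for field in NON_PERSISTENT_FIELDS:
            updates.pop(field, None)
        return updates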


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815153

Title:
  Requested host during cold migrate is ignored if server created before
  Rocky

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  I stumbled across this during a failing functional test:

  https://review.openstack.org/#/c/635668/2/nova/conductor/tasks/migrate.py@263

  In Rocky, new RequestSpec objects have the is_bfv field set, but
  change https://review.openstack.org/#/c/583715/ was added to 'heal'
  old RequestSpecs when servers created before Rocky are migrated (cold
  migrate, live migrate, unshelve and evacuate).

  The problem is change https://review.openstack.org/#/c/610098/ made
  the RequestSpec.save() operation stop persisting the
  requested_destination field, which means when heal_reqspec_is_bfv
  saves the is_bfv change to the RequestSpec, the requested_destination
  is lost and the user-specified target host is not honored (this would
  impact all move APIs that target a target host, so cold migrate, live
  migrate and evacuate).

  The simple way to fix it is by not overwriting the set
  requested_destination field during save (don't persist it in the
  database, but don't reset it to None in the object in memory):

  https://review.openstack.org/#/c/635668/2/nova/objects/request_spec.py@517

  This could also be a problem for the 'network_metadata' field added in
  Rocky:

  https://review.openstack.org/#/c/564442/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818295] Re: Only Ironic public endpoint is supported

2019-03-05 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818295

Title:
  Only Ironic public endpoint is supported

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  Currently, there are a number of places in Ironic that do endpoint
  lookups from the Keystone service catalog. By default, keystoneauth
  sets the interface to 'public' if not specified.
  Description
  ===
  We are supposed to be able to select the endpoint type by specifying
  either the 'interface' or 'valid_interfaces' option in the
  [keystone_authtoken] section in nova.conf. But that parameter is not
  being conveyed to ironicclient.

  Consequently, this makes it impossible to use Ironic without exposing
  the public endpoint in the service catalog. Furthermore, for security
  reasons, our controller nodes (subnet) have no route to the public
  network and therefore cannot access the public endpoint. This is a
  rather significant limitation in deploying Ironic. Also, we seem to
  have broken backward compatibility, as Ironic used to work in Pike
  without a public endpoint configured.
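
  For reference, the configuration the reporter expects to work (step 3
  below) would look like this in nova.conf:

      [ironic]
      # Select the internal endpoint from the service catalog instead
      # of the keystoneauth default of 'public'.
      valid_interfaces = internal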

  Steps to reproduce
  ==
  1) enable Ironic in devstack
  2) delete the Ironic public endpoint in Keystone
  3) set 'valid_interfaces = internal' in the [ironic] section in nova.conf and 
restart nova-compute service
  4) try to provision a server and it will fail with errors similar to these in 
nova-compute logs

  2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
  4bace607-0ab6-45b5-911b-1df5fbcc0e01 None None] An unknown error has
  occurred when trying to get the list of nodes from the Ironic
  inventory. Error: Must provide Keystone credentials or user-defined
  endpoint, error was: publicURL endpoint for baremetal service not
  found: AmbiguousAuthSystem: Must provide Keystone credentials or user-
  defined endpoint, error was: publicURL endpoint for baremetal service
  not found

  Expected result
  ===
  Server created without error.

  
  Actual result
  =
  Server failed to create, with errors similar to these in nova-compute logs

  2019-02-28 18:00:28.136 48891 ERROR nova.virt.ironic.driver [req-
  4bace607-0ab6-45b5-911b-1df5fbcc0e01 None None] An unknown error has
  occurred when trying to get the list of nodes from the Ironic
  inventory. Error: Must provide Keystone credentials or user-defined
  endpoint, error was: publicURL endpoint for baremetal service not
  found: AmbiguousAuthSystem: Must provide Keystone credentials or user-
  defined endpoint, error was: publicURL endpoint for baremetal service
  not found

  Environment
  ===
  This bug is reproducible in devstack with Ironic plugin enabled.

  
  Related bugs:

  Ironic: https://storyboard.openstack.org/#!/story/2005118
  Nova: https://bugs.launchpad.net/nova/+bug/1707860

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818744] [NEW] OS-EP-FILTER API doesn't use default roles

2019-03-05 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone implemented support to ensure at least three default
roles were available [0]. The OS-EP-FILTER API doesn't incorporate these
defaults into its default policies [1], but it should. Associations
between projects and endpoints are system-specific actions, but it
should be possible for system members and system readers to view those
associations.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/project_endpoint.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc
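
A sketch of the kind of default this asks for, letting system readers
view project/endpoint associations (the base.SYSTEM_READER and
base.IDENTITY constants follow keystone's policy-in-code conventions;
exact rule names may differ):

    from oslo_policy import policy

    from keystone.common.policies import base

    list_projects_for_endpoint = policy.DocumentedRuleDefault(
        name=base.IDENTITY % 'list_projects_for_endpoint',
        check_str=base.SYSTEM_READER,
        scope_types=['system'],
        description='List projects allowed to access an endpoint.',
        operations=[{'path': ('/v3/OS-EP-FILTER/endpoints/{endpoint_id}'
                              '/projects'),
                     'method': 'GET'}])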

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: default-roles policy

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Low

** Tags added: default-roles policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1818744

Title:
  OS-EP-FILTER API doesn't use default roles

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The OS-EP-FILTER API doesn't
  incorporate these defaults into its default policies [1], but it
  should. Associations between projects and endpoints are system-
  specific actions, but it should be possible for system members and
  system readers to view those associations.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/project_endpoint.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1818744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818736] [NEW] The limit and registered limit APIs should account for different scopes

2019-03-05 Thread Lance Bragstad
Public bug reported:

Keystone implemented scope_types for oslo.policy RuleDefault objects in
the Queens release [0]. In order to take full advantage of scope_types,
keystone is going to have to evolve policy enforcement checks in the
limit and registered limit APIs. This is because there are some limit
and registered limit APIs that should be accessible to project users,
domain users, and system users.

System users should be able to manage limits and registered limits
across the entire deployment. At this point, project and domain users
shouldn't be able to manage limits and registered limits. At some point
in the future, we might consider opening up the functionality to domain
users to manage limits for projects within the domains they have
authorization on.

This bug report is strictly for tracking the ability to get information
out of keystone regarding limits with system-scope, domain-scope, and
project-scope.

[0] https://review.openstack.org/#/c/525706/
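
As a sketch of what scope-aware enforcement could look like for a read
API (the check string and rule name here are illustrative assumptions,
not keystone's final policy):

    from oslo_policy import policy

    get_limit = policy.DocumentedRuleDefault(
        name='identity:get_limit',
        check_str=('(role:reader and system_scope:all) or '
                   'project_id:%(target.limit.project_id)s'),
        scope_types=['system', 'domain', 'project'],
        description='Show a limit.',
        operations=[{'path': '/v3/limits/{limit_id}',
                     'method': 'GET'}])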

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: policy system-scope

** Tags added: policy system-scope

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1818736

Title:
  The limit and registered limit APIs should account for different
  scopes

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone implemented scope_types for oslo.policy RuleDefault objects
  in the Queens release [0]. In order to take full advantage of
  scope_types, keystone is going to have to evolve policy enforcement
  checks in the limit and registered limit APIs. This is because there
  are some limit and registered limit APIs that should be accessible to
  project users, domain users, and system users.

  System users should be able to manage limits and registered limits
  across the entire deployment. At this point, project and domain users
  shouldn't be able to manage limits and registered limits. At some
  point in the future, we might consider opening up the functionality to
  domain users to manage limits for projects within the domains they
  have authorization on.

  This bug report is strictly for tracking the ability to get
  information out of keystone regarding limits with system-scope,
  domain-scope, and project-scope.

  [0] https://review.openstack.org/#/c/525706/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1818736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818734] [NEW] The endpoint group API doesn't use default roles

2019-03-05 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone implemented support to ensure at least three default roles 
were available [0]. 
An endpoint group is a collection of endpoints that can be populated in
a user's service catalog through association to projects. Ultimately,
endpoint groups are system-specific resources and shouldn't be
accessible directly by domain or project users.

The report is to track the work for implementing system `member` and
system `reader` role support for endpoint groups.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/endpoint_group.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: default-roles policy

** Tags added: policy

** Tags added: default-roles

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1818734

Title:
  The endpoint group API doesn't use default roles

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  In Rocky, keystone implemented support to ensure at least three default roles 
were available [0]. 
  An endpoint group is a collection of endpoints that can be populated
  in a user's service catalog through association to projects.
  Ultimately, endpoint groups are system-specific resources and
  shouldn't be accessible directly by domain or project users.

  The report is to track the work for implementing system `member` and
  system `reader` role support for endpoint groups.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/endpoint_group.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1818734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818730] [NEW] Errors in finish_revert_resize can leave migration.dest_compute pointing at source_compute

2019-03-05 Thread Matt Riedemann
Public bug reported:

Because of this code in finish_revert_resize:

https://github.com/openstack/nova/blob/8cdb8cc7c56b574382b9a9fff662cc95e78136a2/nova/compute/manager.py#L4121

And the @errors_out_migration decorator on the method, if something
fails after that line we will save the migration object changes which
would leave the dest_compute pointing at the source_compute, which could
be very confusing when trying to debug.

The comment says the field is set temporarily, but it's not really
temporary if the migration changes are saved, as happens in that
decorator.
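
To make the hazard concrete, a simplified sketch of what an
errors_out_migration-style decorator does (illustrative, not nova's
exact code):

    import functools


    def errors_out_migration(function):
        @functools.wraps(function)
        def decorated(self, context, *args, **kwargs):
            migration = kwargs['migration']
            try:
                return function(self, context, *args, **kwargs)
            except Exception:
                migration.status = 'error'
                # save() persists every changed field, including a
                # dest_compute that was only swapped "temporarily",
                # freezing the confusing value into the database.
                migration.save()
                raise
        return decorated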

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: compute resize

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818730

Title:
  Errors in finish_revert_resize can leave migration.dest_compute
  pointing at source_compute

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Because of this code in finish_revert_resize:

  
https://github.com/openstack/nova/blob/8cdb8cc7c56b574382b9a9fff662cc95e78136a2/nova/compute/manager.py#L4121

  And the @errors_out_migration decorator on the method, if something
  fails after that line we will save the migration object changes which
  would leave the dest_compute pointing at the source_compute, which
  could be very confusing when trying to debug.

  The comment says the field is set temporarily, but it's not really
  temporary if the migration changes are saved, as happens in that
  decorator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818732] [NEW] EC2 credential API doesn't use default roles

2019-03-05 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone implemented support to ensure at least three default
roles were available [0]. The EC2 credentials API doesn't incorporate
these defaults into its default policies [1], but it should.

For example, system administrators should be able to clean up
credentials regardless of users, but system members or readers should
only be able to list or get credentials. Users who are not system users
should only be able to manage their credentials.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/ec2_credential.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc
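
An illustrative default along those lines (oslo.policy check-string
syntax; the ownership attribute is an assumption, not keystone's final
rule):

    from oslo_policy import policy

    SYSTEM_ADMIN_OR_CRED_OWNER = (
        '(role:admin and system_scope:all) or '
        'user_id:%(target.credential.user_id)s')

    get_ec2_credential = policy.DocumentedRuleDefault(
        name='identity:ec2_get_credential',
        check_str=SYSTEM_ADMIN_OR_CRED_OWNER,
        scope_types=['system', 'project'],
        description='Show an EC2 credential.',
        operations=[{'path': ('/v3/users/{user_id}/credentials/OS-EC2/'
                              '{credential_id}'),
                     'method': 'GET'}])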

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: default-roles policy

** Tags added: default-roles policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1818732

Title:
  EC2 credential API doesn't use default roles

Status in OpenStack Identity (keystone):
  New

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The EC2 credentials API doesn't
  incorporate these defaults into its default policies [1], but it
  should.

  For example, system administrators should be able to clean up
  credentials regardless of users, but system members or readers should
  only be able to list or get credentials. Users who are not system
  users should only be able to manage their credentials.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/ec2_credential.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1818732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818725] [NEW] Application credential API doesn't use default roles

2019-03-05 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone implemented support to ensure at least three default
roles were available [0]. The application credentials API doesn't
incorporate these defaults into its default policies [1], but it should.

For example, system administrators should be able to clean up
application credentials regardless of users, but system members or
readers should only be able to list or get application credentials.
Users who are not system users should only be able to manage their
application credentials.

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/application_credential.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

** Affects: keystone
 Importance: Medium
 Status: Triaged


** Tags: default-roles policy

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Description changed:

  In Rocky, keystone implemented support to ensure at least three default
  roles were available [0]. The application credentials API doesn't
  incorporate these defaults into its default policies [1], but it should.
  
- For example, system users should be able to manage any application
- credential, regardless of the user. Users who are not system users
- should only be able to manage their application credentials.
+ For example, system administrators should be able to clean up
+ application credentials regardless of users, but system members or
+ readers should only be able to list or get application credentials.
+ Users who are not system users should only be able to manage their
+ application credentials.
  
  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/application_credential.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

** Tags added: default-roles policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1818725

Title:
  Application credential API doesn't use default roles

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The application credentials API
  doesn't incorporate these defaults into its default policies [1], but
  it should.

  For example, system administrators should be able to clean up
  application credentials regardless of users, but system members or
  readers should only be able to list or get application credentials.
  Users who are not system users should only be able to manage their
  application credentials.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/application_credential.py?id=6e3f1f6e46787ed4542609c935c13cb85e91d7fc

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1818725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817542] Re: nova instance-action fails if project_id=NULL

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/639936
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=31fe7c76009e1c6d7859036e44b057d081b059b5
Submitter: Zuul
Branch: master

commit 31fe7c76009e1c6d7859036e44b057d081b059b5
Author: Takashi NATSUME 
Date:   Thu Feb 28 13:49:41 2019 +0900

Fix an error when generating a host ID

When instance action events are created by periodic tasks,
their project IDs become null (None).
It causes an error when 'hostId' is generated
in the "Show Server Action Details"
(GET /servers/{server_id}/os-instance-actions/{request_id})
API.

Fix the issue by using the project ID of the server
if the project ID of the event is None.

Change-Id: Iac07fcddd4cc3321c6efe702066eb8af6a875418
Closes-Bug: #1817542
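
In outline, the fix falls back to the server's project before hashing;
a sketch (field names follow the commit message, the helper itself is
illustrative):

    from nova import utils


    def event_host_id(event, instance):
        # Periodic tasks create events with no project; fall back to
        # the server's project so generate_hostid never sees None.
        project_id = (event.project_id if event.project_id is not None
                      else instance.project_id)
        # nova.utils.generate_hostid hashes project_id + host; with a
        # None project_id it raised TypeError (this bug).
        return utils.generate_hostid(event.host, project_id)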


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817542

Title:
  nova instance-action fails if project_id=NULL

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  nova instance-action fails if project_id=NULL

  Starting in API version 2.62, "an obfuscated hashed host id is
  returned". To generate the host_id, it uses utils.generate_hostid(),
  which uses (in this case) the project_id and the host of the action.

  However, we can have actions without a user_id/project_id defined.
  For example, when something happens outside the nova API (a user
  shuts down the VM inside the guest OS), we have a "stop" action
  without a user_id/project_id.

  When running 2.62 it fails when performing:
  nova instance-action  

  no issues if using:
  --os-compute-api-version 2.60 

  ===
  The trace in nova-api logs:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 
801, in wrapped
  return f(*args, **kwargs)
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/instance_actions.py",
 line 169, in show
  ) for evt in events_raw]
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/instance_actions.py",
 line 69, in _format_event
  project_id)
File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1295, in 
generate_hostid
  data = (project_id + host).encode('utf-8')
  TypeError: unsupported operand type(s) for +: 'NoneType' and 'unicode'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1817542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818701] Re: invalid PCI alias in flavor results in HTTP 500 on instance create

2019-03-05 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818701

Title:
  invalid PCI alias in flavor results in HTTP 500 on instance create

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  If an invalid PCI alias is specified in the flavor extra-specs and we
  try to create an instance with that flavor, it will result in a
  PciInvalidAlias exception being raised.

  In ServersController.create() PciInvalidAlias is missing from the list
  of exceptions that get converted to an HTTPBadRequest.  Instead, it's
  reported as a 500 error:

  [stack@fedora-1 nova]$ nova boot --flavor  ds2G --image fedora29 --nic none 
--admin-pass fedora asdf3
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-fec3face-4135-41fd-bc48-07957363ddae)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818701] [NEW] invalid PCI alias in flavor results in HTTP 500 on instance create

2019-03-05 Thread Chris Friesen
Public bug reported:

If an invalid PCI alias is specified in the flavor extra-specs and we
try to create an instance with that flavor, it will result in a
PciInvalidAlias exception being raised.

In ServersController.create() PciInvalidAlias is missing from the list
of exceptions that get converted to an HTTPBadRequest.  Instead, it's
reported as a 500 error:

[stack@fedora-1 nova]$ nova boot --flavor  ds2G --image fedora29 --nic none 
--admin-pass fedora asdf3
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-fec3face-4135-41fd-bc48-07957363ddae)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818701

Title:
  invalid PCI alias in flavor results in HTTP 500 on instance create

Status in OpenStack Compute (nova):
  New

Bug description:
  If an invalid PCI alias is specified in the flavor extra-specs and we
  try to create an instance with that flavor, it will result in a
  PciInvalidAlias exception being raised.

  In ServersController.create() PciInvalidAlias is missing from the list
  of exceptions that get converted to an HTTPBadRequest.  Instead, it's
  reported as a 500 error:

  [stack@fedora-1 nova]$ nova boot --flavor  ds2G --image fedora29 --nic none 
--admin-pass fedora asdf3
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-fec3face-4135-41fd-bc48-07957363ddae)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818697] [NEW] neutron fullstack frequently times out waiting on qos ports

2019-03-05 Thread Doug Wiegley
Public bug reported:

ft1.1: 
neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_bw_limit_qos_port_removed(egress,openflow-native)_StringException:
 Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 685, in 
wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/hubs/hub.py",
 line 297, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 690, 
in test_bw_limit_qos_port_removed
vm, MIN_BANDWIDTH, self.direction)
  File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 675, 
in _wait_for_min_bw_rule_applied
lambda: vm.bridge.get_egress_min_bw_for_port(
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 690, in 
wait_until_true
raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818697

Title:
  neutron fullstack frequently times out waiting on qos ports

Status in neutron:
  New

Bug description:
  ft1.1: 
neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_bw_limit_qos_port_removed(egress,openflow-native)_StringException:
 Traceback (most recent call last):
File "/opt/stack/new/neutron/neutron/common/utils.py", line 685, in 
wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/lib/python3.5/site-packages/eventlet/hubs/hub.py",
 line 297, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/opt/stack/new/neutron/neutron/tests/base.py", line 174, in func
  return f(self, *args, **kwargs)
File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 
690, in test_bw_limit_qos_port_removed
  vm, MIN_BANDWIDTH, self.direction)
File "/opt/stack/new/neutron/neutron/tests/fullstack/test_qos.py", line 
675, in _wait_for_min_bw_rule_applied
  lambda: vm.bridge.get_egress_min_bw_for_port(
File "/opt/stack/new/neutron/neutron/common/utils.py", line 690, in 
wait_until_true
  raise WaitTimeout(_("Timed out after %d seconds") % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818696] [NEW] frequent ci failures trying to delete qos port

2019-03-05 Thread Doug Wiegley
Public bug reported:

Lots of this error:
RuntimeError: OVSDB Error: {"details":"cannot delete QoS row 
03bc0e7a-bd4e-42a7-95e1-493fce7d6342 because of 1 remaining 
reference(s)","error":"referential integrity violation"}

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818696

Title:
  frequent ci failures trying to delete qos port

Status in neutron:
  New

Bug description:
  Lots of this error:
  RuntimeError: OVSDB Error: {"details":"cannot delete QoS row 
03bc0e7a-bd4e-42a7-95e1-493fce7d6342 because of 1 remaining 
reference(s)","error":"referential integrity violation"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818693] [NEW] Make "phys_brs" parameter variable in OVSAgentExtensionAPI

2019-03-05 Thread Rodolfo Alonso
Public bug reported:

In [1], a new init parameter was introduced in the class
OVSAgentExtensionAPI. This change in the extension API can break
backwards compatibility with other projects (networking_sfc and bagpipe
are affected).

Because this parameter is needed only in the qos_driver extension when
calling OVSAgentExtensionAPI.request_phy_brs() (to retrieve the physical
bridges), we can make this new parameter optional so as not to break other
stadium projects. When the OVS agent is initialized (the in-tree agent),
the extension is called with all three needed parameters. A minimal sketch
of the backward-compatible signature follows the link below.

[1]
https://review.openstack.org/#/c/406841/22/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py@43
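
A minimal sketch of the backward-compatible signature (illustrative and
abbreviated; not the exact neutron code):

class OVSAgentExtensionAPI(object):
    def __init__(self, int_br, tun_br, phys_brs=None):
        # phys_brs is optional so out-of-tree callers that pass only two
        # arguments keep working.
        self.br_int = int_br
        self.br_tun = tun_br
        self.br_phys = phys_brs or {}

    def request_phy_brs(self):
        # Only the qos_driver extension needs the physical bridges.
        return list(self.br_phys.values())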

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818693

Title:
  Make "phys_brs" parameter variable in OVSAgentExtensionAPI

Status in neutron:
  In Progress

Bug description:
  In [1], a new init parameter was introduced in the class
  OVSAgentExtensionAPI. This change in the extension API can break
  backwards compatibility with other projects (networking_sfc and
  bagpipe are affected).

  Because this parameter is needed only in the qos_driver extension when
  calling OVSAgentExtensionAPI.request_phy_brs() (to retrieve the
  physical bridges), we can make this new parameter optional so as not
  to break other stadium projects. When the OVS agent is initialized
  (the in-tree agent), the extension is called with all three needed
  parameters.

  [1]
  
https://review.openstack.org/#/c/406841/22/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py@43

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815345] Re: neutron doesnt delete port binding level when deleting an inactive port binding

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/634276
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b197f7c1c4b9c0dd4c58f5c5a4b654dde5596b85
Submitter: Zuul
Branch:master

commit b197f7c1c4b9c0dd4c58f5c5a4b654dde5596b85
Author: Adrian Chiris 
Date:   Thu Jan 31 18:51:33 2019 +0200

Delete port binding level for deleted bindings

Today, if live migration has failed after an inactive
binding was created on the destination node but before
the activation of the created binding, the port's binding level
for the destination host is not cleared during nova's API call
to neutron to delete the port binding.

This causes future attempts to perform live migration
of the instance to the same host to fail.

This change removes port binding level object during port binding
deletion.

Closes-Bug: #1815345

Change-Id: Idd55f7d24a2062c08ac8a0dc2243625632d962a5


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815345

Title:
  neutron doesnt delete port binding level when deleting an inactive
  port binding

Status in neutron:
  Fix Released

Bug description:
  When performing VM live migration with a normal port (OVS mechanism
  driver), nova creates an inactive binding on the destination node, then
  activates that binding upon successful migration.

  In case the libvirt migration fails, an exception is raised, nova
  performs a rollback operation for the live migration, and the instance
  remains running on the source node.

  Part of the rollback operation is deleting neutron's port binding on the
  destination node with the following API call:
  DELETE /v2.0/ports/{port_id}/bindings/{host_id}

  This call, for an inactive port binding (one that was never activated),
  does not delete the port's binding level, which causes future migration
  attempts to fail.

  Reproduction setup:
  - devstack deployment of an all in one and a compute node from master
  - OS: FC28
  - QEMU hypervisor
  - neutron OVS mechanism driver enabled
  - perform further configurations to enable live-migration : 
https://docs.openstack.org/nova/pike/admin/configuring-migrations.html
  - block libvirt migration port with iptables on destination node (on my setup 
i just needed to activate iptables on destination node)

  reproduction steps:
  http://paste.openstack.org/show/744802/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818687] [NEW] Cannot boot a VM with utf8 name with contrail

2019-03-05 Thread Andrey Volkov
Public bug reported:

This traceback is for Queens release:

2019-02-28 17:38:50.815 4688 ERROR nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Failed to start libvirt guest: 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20a1' in position 
19: ordinal not in range(128)
2019-02-28 17:38:51.264 4688 INFO nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Deleting instance files 
/var/lib/nova/instances/8e90550d-3b62-4f70-bd70-b3c135a8a092_del
2019-02-28 17:38:51.265 4688 INFO nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Deletion of 
/var/lib/nova/instances/8e90550d-3b62-4f70-bd70-b3c135a8a092_del complete
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Instance failed to spawn: 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20a1' in position 
19: ordinal not in range(128)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Traceback (most recent call last):
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2252, in 
_build_resources
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] yield resources
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2032, in 
_build_and_run_instance
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] block_device_info=block_device_info)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3107, in 
spawn
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] destroy_disks_on_failure=True)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5627, in 
_create_domain_and_network
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] destroy_disks_on_failure)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.force_reraise()
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] six.reraise(self.type_, self.value, 
self.tb)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5586, in 
_create_domain_and_network
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.plug_vifs(instance, network_info)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 836, in 
plug_vifs
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.vif_driver.plug(instance, vif)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 805, in plug
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] func(instance, vif)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 762, in 
plug_vrouter
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1818239] Re: scheduler: build failure high negative weighting

2019-03-05 Thread Corey Bryant
Opening this back up against the package and adding upstream as well. I
may be missing something, but I think this is still an issue upstream.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: Won't Fix => Triaged

** Changed in: nova (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818239

Title:
  scheduler: build failure high negative weighting

Status in OpenStack nova-cloud-controller charm:
  Fix Committed
Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Whilst debugging a Queens cloud which seems to be landing all new
  instances on 3 out of 9 hypervisors (which resulted in three very
  heavily overloaded servers) I noticed that the weighting of the build
  failure weighter is -100.0 * number of failures:

  https://github.com/openstack/nova/blob/master/nova/conf/scheduler.py#L495

  This means that a server which has any sort of build failure instantly
  drops to the bottom of the weighed list of hypervisors for scheduling
  of instances.

  Why might an instance fail to build? It could be a timeout due to load,
  or a bad image (one that won't actually boot under qemu). This second
  cause could be triggered by an end user of the cloud, inadvertently
  causing all new instances to be pushed to a small subset of hypervisors
  (which is what I think happened in our case).

  This feels like quite a dangerous default to have given the potential
  to DOS hypervisors intentionally or otherwise.
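
  If this penalty comes from nova's [filter_scheduler]
  build_failure_weight_multiplier option (which the conf line linked
  above suggests), an operator can soften or disable it. A hedged
  mitigation sketch for nova.conf, assuming that option name:

  [filter_scheduler]
  # 0.0 disables the build-failure penalty entirely; a small positive
  # value keeps it as a tie-breaker instead of a hard exclusion.
  build_failure_weight_multiplier = 0.0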

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: nova-scheduler 2:17.0.7-0ubuntu1
  ProcVersionSignature: Ubuntu 4.15.0-43.46-generic 4.15.18
  Uname: Linux 4.15.0-43-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.5
  Architecture: amd64
  Date: Fri Mar  1 13:57:39 2019
  NovaConf: Error: [Errno 13] Permission denied: '/etc/nova/nova.conf'
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=C.UTF-8
   SHELL=/bin/bash
  SourcePackage: nova
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1818239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818683] [NEW] Placement reporter service plugin sometimes creates orphaned resource providers

2019-03-05 Thread Bence Romsics
Public bug reported:

As discovered by lajoskatona while working on a fullstack test
(https://review.openstack.org/631793) the placement reporter plugin may
create some of the neutron resource providers in the wrong resource
provider tree. For example consider:

$ openstack --os-placement-api-version 1.17 resource provider list
+--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+
| uuid                                 | name                                      | generation | root_provider_uuid                   | parent_provider_uuid                 |
+--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+
| 89ca1421-5117-5348-acab-6d0e2054239c | devstack0:Open vSwitch agent              |          0 | 89ca1421-5117-5348-acab-6d0e2054239c | None                                 |
| 4a6f5f40-b7a1-5df4-9938-63983543f365 | devstack0:Open vSwitch agent:br-physnet0  |          2 | 89ca1421-5117-5348-acab-6d0e2054239c | 89ca1421-5117-5348-acab-6d0e2054239c |
| 193134fd-464c-5545-9d20-df7d58c0166f | devstack0:Open vSwitch agent:br-ex        |          2 | 89ca1421-5117-5348-acab-6d0e2054239c | 89ca1421-5117-5348-acab-6d0e2054239c |
| dbc498c7-8808-4f31-8abb-18560a4c3b53 | devstack0                                 |          2 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | None                                 |
| 4a8a819d-61f9-5822-8c5c-3e9c7cb942d6 | devstack0:NIC Switch agent                |          0 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | dbc498c7-8808-4f31-8abb-18560a4c3b53 |
| 1c7e83f0-108d-5c35-ada7-7ebebbe43aad | devstack0:NIC Switch agent:ens5           |          2 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | 4a8a819d-61f9-5822-8c5c-3e9c7cb942d6 |
+--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+

Please note that all RPs should have the root_provider_uuid set to the
devstack0 RP's uuid, but the open vswitch RPs have a different (wrong)
root. And 'devstack0:Open vSwitch agent' has no parent.

This situation is dependent on service startup order. The ovs RPs were
created before the compute host RP. That case should have been detected
as an error, but it was not.

I'll upload a proposed fix right away.
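
A minimal sketch of the invariant the fix needs to enforce (the placement
client here is a hypothetical stand-in, not the actual neutron code): an
agent RP must never be created before the compute host RP it should hang
under exists.

def ensure_agent_rp(placement, host_rp_uuid, agent_rp_uuid, name):
    # Refuse to create the agent RP as a new root; let the caller retry
    # after nova has created the compute host RP.
    if placement.get_provider(host_rp_uuid) is None:
        raise RuntimeError(
            'compute host RP %s not found yet; not creating %s'
            % (host_rp_uuid, name))
    placement.create_provider(uuid=agent_rp_uuid, name=name,
                              parent_provider_uuid=host_rp_uuid)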

** Affects: neutron
 Importance: Undecided
 Assignee: Bence Romsics (bence-romsics)
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818683

Title:
  Placement reporter service plugin sometimes creates orphaned resource
  providers

Status in neutron:
  New

Bug description:
  As discovered by lajoskatona while working on a fullstack test
  (https://review.openstack.org/631793) the placement reporter plugin
  may create some of the neutron resource providers in the wrong
  resource provider tree. For example consider:

  $ openstack --os-placement-api-version 1.17 resource provider list
  
  +--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+
  | uuid                                 | name                                      | generation | root_provider_uuid                   | parent_provider_uuid                 |
  +--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+
  | 89ca1421-5117-5348-acab-6d0e2054239c | devstack0:Open vSwitch agent              |          0 | 89ca1421-5117-5348-acab-6d0e2054239c | None                                 |
  | 4a6f5f40-b7a1-5df4-9938-63983543f365 | devstack0:Open vSwitch agent:br-physnet0  |          2 | 89ca1421-5117-5348-acab-6d0e2054239c | 89ca1421-5117-5348-acab-6d0e2054239c |
  | 193134fd-464c-5545-9d20-df7d58c0166f | devstack0:Open vSwitch agent:br-ex        |          2 | 89ca1421-5117-5348-acab-6d0e2054239c | 89ca1421-5117-5348-acab-6d0e2054239c |
  | dbc498c7-8808-4f31-8abb-18560a4c3b53 | devstack0                                 |          2 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | None                                 |
  | 4a8a819d-61f9-5822-8c5c-3e9c7cb942d6 | devstack0:NIC Switch agent                |          0 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | dbc498c7-8808-4f31-8abb-18560a4c3b53 |
  | 1c7e83f0-108d-5c35-ada7-7ebebbe43aad | devstack0:NIC Switch agent:ens5           |          2 | dbc498c7-8808-4f31-8abb-18560a4c3b53 | 4a8a819d-61f9-5822-8c5c-3e9c7cb942d6 |
  +--------------------------------------+-------------------------------------------+------------+--------------------------------------+--------------------------------------+

[Yahoo-eng-team] [Bug 1818682] [NEW] HAproxy for metadata refuses connection from VM cloud-init

2019-03-05 Thread Marcus Klein
Public bug reported:

It sometimes happens when we spawn VMs that requests from cloud-init
inside the VM to the metadata agent are refused. This seems to be a
timing problem, as it happens with fast-booting images more often than
with slowly booting ones. The error message for the request is
"Connection refused". Some seconds later the exact same request works
without any problems.

Our deployment was just upgraded from Ocata to Pike, and neutron-ns-
metadata-proxy was replaced with haproxy. The problem has occurred since
this change. Our setup uses Open vSwitch and self-service networks; the
network nodes (L3 router, metadata agent, DHCP agent) are separated from
the compute and controller nodes. We use Ubuntu Cloud Archive
repositories to install on Ubuntu 16.04 LTS.

15:57:12.780152 IP (tos 0x0, ttl 64, id 7253, offset 0, flags [DF], proto TCP 
(6), length 60)
192.168.5.3.59378 > 169.254.169.254.http: Flags [S], cksum 0xebec 
(correct), seq 4230673254, win 29200, options [mss 1460,sackOK,TS val 
2933213616 ecr 0,nop,wscale 7], length 0
15:57:12.780208 IP (tos 0x0, ttl 64, id 6932, offset 0, flags [DF], proto TCP 
(6), length 40)
169.254.169.254.http > 192.168.5.3.59378: Flags [R.], cksum 0xbe52 
(correct), seq 0, ack 4230673255, win 0, length 0

The TCP SYN packet to the metadata IP is answered with a TCP RST,ACK
packet. The image does not retry the connection to the metadata agent and
is from then on unusable due to the missing injection of the public SSH
key.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818682

Title:
  HAproxy for metadata refuses connection from VM cloud-init

Status in neutron:
  New

Bug description:
  It sometimes happens when we spawn VMs that requests from cloud-init
  inside the VM to the metadata agent are refused. This seems to be a
  timing problem, as it happens with fast-booting images more often than
  with slowly booting ones. The error message for the request is
  "Connection refused". Some seconds later the exact same request works
  without any problems.

  Our deployment was just upgraded from Ocata to Pike, and neutron-ns-
  metadata-proxy was replaced with haproxy. The problem has occurred
  since this change. Our setup uses Open vSwitch and self-service
  networks; the network nodes (L3 router, metadata agent, DHCP agent) are
  separated from the compute and controller nodes. We use Ubuntu Cloud
  Archive repositories to install on Ubuntu 16.04 LTS.

  15:57:12.780152 IP (tos 0x0, ttl 64, id 7253, offset 0, flags [DF], proto TCP 
(6), length 60)
  192.168.5.3.59378 > 169.254.169.254.http: Flags [S], cksum 0xebec 
(correct), seq 4230673254, win 29200, options [mss 1460,sackOK,TS val 
2933213616 ecr 0,nop,wscale 7], length 0
  15:57:12.780208 IP (tos 0x0, ttl 64, id 6932, offset 0, flags [DF], proto TCP 
(6), length 40)
  169.254.169.254.http > 192.168.5.3.59378: Flags [R.], cksum 0xbe52 
(correct), seq 0, ack 4230673255, win 0, length 0

  The TCP SYN packet to the metadata IP is answered with a TCP RST,ACK
  packet. The image does not retry the connection to the metadata agent
  and is from then on unusable due to the missing injection of the public
  SSH key.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818678] [NEW] centos7 uses fqdn in hostname

2019-03-05 Thread Adrian Tabatabai
Public bug reported:

Hi guys,

I think I found a bug in CentOS 7!
When I run "cat /etc/hostname", the FQDN is returned.

When I run the same command on Ubuntu 16.04, for example, only the short
hostname is returned.

Can you fix this?

It would be great if you could answer me via e-mail: tabatabai.adr...@gmail.com

Thank you and greets!

Adrian
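
A heavily hedged workaround sketch (these are standard cloud-init
cloud-config keys, but this is not a confirmed fix for this report):
setting the short hostname and the FQDN explicitly in user-data so the
intended short name is unambiguous.

#cloud-config
# 'hostname' carries the short name; 'fqdn' the fully qualified one.
hostname: myhost
fqdn: myhost.example.com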

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818678

Title:
  centos7 uses fqdn in hostname

Status in cloud-init:
  New

Bug description:
  Hi guys,

  I think I found a bug in CentOS 7!
  When I run "cat /etc/hostname", the FQDN is returned.

  When I run the same command on Ubuntu 16.04, for example, only the
  short hostname is returned.

  Can you fix this?

  It would be great if you could answer me via e-mail: tabatabai.adr...@gmail.com

  Thank you and greets!

  Adrian

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817542] Re: nova instance-action fails if project_id=NULL

2019-03-05 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817542

Title:
  nova instance-action fails if project_id=NULL

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  nova instance-action fails if project_id=NULL

  Starting in API microversion 2.62, "an obfuscated hashed host id is
  returned". To generate the host_id it uses utils.generate_hostid(),
  which uses (in this case) the project_id and the host of the action.

  However, we can have actions without a user_id/project_id defined, for
  example when something happens outside the nova API (the user shut down
  the VM inside the guest OS). In this case we have an action "stop"
  without a user_id/project_id.

  When running 2.62 it fails when performing:
  nova instance-action  

  There are no issues when using:
  --os-compute-api-version 2.60

  ===
  The trace in nova-api logs:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 
801, in wrapped
  return f(*args, **kwargs)
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/instance_actions.py",
 line 169, in show
  ) for evt in events_raw]
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/instance_actions.py",
 line 69, in _format_event
  project_id)
File "/usr/lib/python2.7/site-packages/nova/utils.py", line 1295, in 
generate_hostid
  data = (project_id + host).encode('utf-8')
  TypeError: unsupported operand type(s) for +: 'NoneType' and 'unicode'
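
  A minimal defensive sketch (illustrative; not necessarily the upstream
  patch): treat a missing project_id as an empty string so the hash can
  still be computed.

  import hashlib

  def generate_hostid(host, project_id):
      # Stand-in for nova.utils.generate_hostid.
      if host:
          data = ((project_id or '') + host).encode('utf-8')
          return hashlib.sha224(data).hexdigest()
      return ''

  print(generate_hostid('compute-1', None))  # no longer raises TypeError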

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1817542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818671] [NEW] Openstack usage list not showing all projects

2019-03-05 Thread Giuseppe Petralia
Public bug reported:

In a customer environment running nova 2:17.0.5-0ubuntu1~cloud0

when querying the project usage list, the most recent projects are not
listed in the reply.

Example:

$ openstack  usage list --print-empty --start 2019-01-01 --end
2019-02-01

Not showing any information about project
a897ea83f01c436e82e13a4306fa5ef0

But querying for the usage of the specific project we can retrieve the
results:

openstack  usage show --project a897ea83f01c436e82e13a4306fa5ef0  --start 
2019-01-01 --end 2019-02-01 
Usage from 2019-01-01 to 2019-02-01 on project 
a897ea83f01c436e82e13a4306fa5ef0: 
+---++
| Field | Value  |
+---++
| CPU Hours | 528.3  |
| Disk GB-Hours | 10566.07   |
| RAM MB-Hours  | 2163930.45 |
| Servers   | 43 |
+---++

As a workaround we are able to get projects_uuid like this:
projects_uuid=$(openstack project list | grep -v ID | awk '{print $2}')

And iterate over them and get individuals usage:

for prog in $projects_uuid; do openstack project show $prog; openstack
usage show --project $prog  --start 2019-01-01 --end 2019-02-01; done

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818671

Title:
  Openstack usage list not showing all projects

Status in OpenStack Compute (nova):
  New

Bug description:
  In a customer environment running nova 2:17.0.5-0ubuntu1~cloud0

  when querying the project usage list, the most recent projects are not
  listed in the reply.

  Example:

  $ openstack  usage list --print-empty --start 2019-01-01 --end
  2019-02-01

  Not showing any information about project
  a897ea83f01c436e82e13a4306fa5ef0

  But querying for the usage of the specific project we can retrieve the
  results:

  openstack  usage show --project a897ea83f01c436e82e13a4306fa5ef0  --start 
2019-01-01 --end 2019-02-01 
  Usage from 2019-01-01 to 2019-02-01 on project 
a897ea83f01c436e82e13a4306fa5ef0: 
  +---++
  | Field | Value  |
  +---++
  | CPU Hours | 528.3  |
  | Disk GB-Hours | 10566.07   |
  | RAM MB-Hours  | 2163930.45 |
  | Servers   | 43 |
  +---++

  As a workaround we are able to get projects_uuid like this:
  projects_uuid=$(openstack project list | grep -v ID | awk '{print $2}')

  And iterate over them and get individuals usage:

  for prog in $projects_uuid; do openstack project show $prog; openstack
  usage show --project $prog  --start 2019-01-01 --end 2019-02-01; done

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818047] Re: nova-status doesn't render cell DB connection strings before use

2019-03-05 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818047

Title:
  nova-status doesn't render cell DB connection strings before use

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  Description
  ===

  I've been working on introducing basic upgrade check calls in TripleO
  but have encountered the following issue now that template-based DB
  connection strings are being used by TripleO in support of cells v2:

  $ nova-status upgrade check
  [..]
  ArgumentError: Could not parse rfc1738 URL from string 
'{scheme}://{username}:{password}@{hostname}/nova?{query}'

  http://logs.openstack.org/39/635139/2/check/tripleo-ci-
  
centos-7-standalone/91d4b45/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-02-26_22_04_00

  Steps to reproduce
  ==

  http://logs.openstack.org/39/635139/2/check/tripleo-ci-
  centos-7-standalone/91d4b45/logs/reproducer-quickstart.sh

  Expected result
  ===

  Connection string is formatted correctly before use.

  Actual result
  =

  Connection string is not formatted before use, leading to `nova-status`
  errors.
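
  A short sketch of the failure and the missing rendering step (values
  here are illustrative):

  from sqlalchemy.engine.url import make_url

  template = '{scheme}://{username}:{password}@{hostname}/nova?{query}'
  values = {'scheme': 'mysql+pymysql', 'username': 'nova',
            'password': 'secret', 'hostname': 'db.example.org',
            'query': 'charset=utf8'}

  make_url(template.format(**values))  # parses fine once rendered
  # make_url(template)  # raises ArgumentError, as in the report above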

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 Master / Stein

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 N/A

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

  See above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818669] [NEW] ipv6 static routes configured for eni are incorrect

2019-03-05 Thread raphael.glon
Public bug reported:

static routes rendered for eni configuration are not correct

example:

config:
- mac_address: aa:12:bc:34:ee:ac
  name: eno3
  subnets:
  - address: fd00::12/64
    dns_nameservers: ['fd00:2::15']
    gateway: fd00::1
    ipv6: true
    routes:
    - netmask: '32'
      network: 'fd00:12::'
      gateway: fd00::2
    type: static
  type: physical
version: 1

Cloud init renders:
"""
auto lo
iface lo inet loopback

auto eno3
iface eno3 inet6 static
address fd00::12/64
dns-nameservers fd00:2::15
gateway fd00::1
post-up route add -net fd00:12:: netmask 32 gw fd00::2 || true
pre-down route del -net fd00:12:: netmask 32 gw fd00::2 || true
"""

but the post-up/pre-down commands are incorrect (tested, even when
replacing the 32 netmask by :::)

One working version
"""
post-up route add -A inet6 fd00:12::/32 gw fd00::2 || true
pre-down route del -A inet6 fd00:12::/32 gw fd00::2 || true
"""

Fix proposal available here
https://code.launchpad.net/~raphael-glon/cloud-init/+git/cloud-init/+merge/363970

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

- static routes rendered for eni configuration are not correct for static
- ipv6 routes
+ static routes rendered for eni configuration are not correct
  
  example:
  
  config:
- - mac_address: aa:12:bc:34:ee:ac
-   name: eno3
-   subnets:
-   - address: fd00::12/64
- dns_nameservers: ['fd00:2::15']
- gateway: fd00::1
- ipv6: true
- routes:
- - netmask: '32'
-   network: 'fd00:12::'
-   gateway: fd00::2
- type: static
-   type: physical
- version: 1
- 
+ - mac_address: aa:12:bc:34:ee:ac
+   name: eno3
+   subnets:
+   - address: fd00::12/64
+ dns_nameservers: ['fd00:2::15']
+ gateway: fd00::1
+ ipv6: true
+ routes:
+ - netmask: '32'
+   network: 'fd00:12::'
+   gateway: fd00::2
+ type: static
+   type: physical
+ version: 1
  
  Cloud init renders:
  """
  auto lo
  iface lo inet loopback
  
  auto eno3
  iface eno3 inet6 static
- address fd00::12/64
- dns-nameservers fd00:2::15
- gateway fd00::1
- post-up route add -net fd00:12:: netmask 32 gw fd00::2 || true
- pre-down route del -net fd00:12:: netmask 32 gw fd00::2 || true
+ address fd00::12/64
+ dns-nameservers fd00:2::15
+ gateway fd00::1
+ post-up route add -net fd00:12:: netmask 32 gw fd00::2 || true
+ pre-down route del -net fd00:12:: netmask 32 gw fd00::2 || true
  """
  
  but the post-up/pre-down commands are incorrect (tested, even when
  replacing the 32 netmask by :::)
  
  One working version
  """
- post-up route add -A inet6 fd00:12::/32 gw fd00::2 || true
- pre-down route del -A inet6 fd00:12::/32 gw fd00::2 || true
+ post-up route add -A inet6 fd00:12::/32 gw fd00::2 || true
+ pre-down route del -A inet6 fd00:12::/32 gw fd00::2 || true
  """
  
  Fix proposal available here
  
https://code.launchpad.net/~raphael-glon/cloud-init/+git/cloud-init/+merge/363970

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818669

Title:
  ipv6 static routes configured for eni are incorrect

Status in cloud-init:
  New

Bug description:
  static routes rendered for eni configuration are not correct

  example:

  config:
  - mac_address: aa:12:bc:34:ee:ac
    name: eno3
    subnets:
    - address: fd00::12/64
      dns_nameservers: ['fd00:2::15']
      gateway: fd00::1
      ipv6: true
      routes:
      - netmask: '32'
        network: 'fd00:12::'
        gateway: fd00::2
      type: static
    type: physical
  version: 1

  Cloud init renders:
  """
  auto lo
  iface lo inet loopback

  auto eno3
  iface eno3 inet6 static
  address fd00::12/64
  dns-nameservers fd00:2::15
  gateway fd00::1
  post-up route add -net fd00:12:: netmask 32 gw fd00::2 || true
  pre-down route del -net fd00:12:: netmask 32 gw fd00::2 || true
  """

  but the post-up/pre-down commands are incorrect (tested, even when
  replacing the 32 netmask by :::)

  One working version
  """
  post-up route add -A inet6 fd00:12::/32 gw fd00::2 || true
  pre-down route del -A inet6 fd00:12::/32 gw fd00::2 || true
  """

  Fix proposal available here
  
https://code.launchpad.net/~raphael-glon/cloud-init/+git/cloud-init/+merge/363970

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818661] [NEW] ipv6 static routes dropped when rendering opensuse files

2019-03-05 Thread raphael.glon
Public bug reported:

Ipv6 static routes seem dropped during opensuse file generation

The reason of this:

The destination file path is the same for ipv4 and ipv6 routes, for
opensuse

opensuse.py:

'route_templates': {
'ipv4': '%(base)s/network/ifroute-%(name)s',
'ipv6': '%(base)s/network/ifroute-%(name)s',
}

but from sysconfig.py:

def _render_sysconfig
[...]
when rendering routes:
if cpath not in contents:
contents[cpath] = iface_cfg.routes.to_string(proto)

So ipv6 routes get skipped (ipv4 has already taken the ifroute slot in
contents dict)

By the way, this is directly visible in the unit tests:

test_net:TestOpenSuseSysConfigRendering.test_bond_config

-> see NETWORK_CONFIGS['bond']['expected_sysconfig_opensuse']['ifroute-bond0']
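
A minimal sketch of the skip-vs-merge problem (names are illustrative,
not the actual sysconfig.py code): because IPv4 and IPv6 share the same
ifroute-<name> path on openSUSE, the renderer must append to an existing
entry instead of skipping it.

contents = {}

def render_routes(cpath, routes_text):
    if cpath not in contents:
        contents[cpath] = routes_text
    else:
        # Merge instead of silently dropping the second address family.
        contents[cpath] += routes_text

render_routes('/etc/sysconfig/network/ifroute-bond0', '# ipv4 routes\n')
render_routes('/etc/sysconfig/network/ifroute-bond0', '# ipv6 routes\n')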

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  Ipv6 static routes seem dropped during opensuse file generation
  
  The reason of this:
  
  The destination file path is the same for ipv4 and ipv6 routes, for
  opensuse
  
  opensuse.py:
  
  'route_templates': {
- 'ipv4': '%(base)s/network/ifroute-%(name)s',
- 'ipv6': '%(base)s/network/ifroute-%(name)s',
- }
+ 'ipv4': '%(base)s/network/ifroute-%(name)s',
+ 'ipv6': '%(base)s/network/ifroute-%(name)s',
+ }
  
  but from sysconfig.py:
  
  def _render_sysconfig
  [...]
  when rendering routes:
  if cpath not in contents:
- contents[cpath] = iface_cfg.routes.to_string(proto)
+ contents[cpath] = iface_cfg.routes.to_string(proto)
  
- So ipv6 routes get dropped
+ So ipv6 routes get skipped (ipv4 has already taken the ifroute slot in
+ contents dict)
  
  By the way this seems directly visible in the unittests:
  
  test_net:TestOpenSuseSysConfigRendering.test_bond_config
  
  -> see NETWORK_CONFIGS['bond']['expected_sysconfig_opensuse']['ifroute-
  bond0']

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818661

Title:
  ipv6 static routes dropped when rendering opensuse files

Status in cloud-init:
  New

Bug description:
  Ipv6 static routes seem dropped during opensuse file generation

  The reason for this:

  The destination file path is the same for ipv4 and ipv6 routes, for
  opensuse

  opensuse.py:

  'route_templates': {
  'ipv4': '%(base)s/network/ifroute-%(name)s',
  'ipv6': '%(base)s/network/ifroute-%(name)s',
  }

  but from sysconfig.py:

  def _render_sysconfig
  [...]
  when rendering routes:
  if cpath not in contents:
  contents[cpath] = iface_cfg.routes.to_string(proto)

  So ipv6 routes get skipped (ipv4 has already taken the ifroute slot in
  contents dict)

  By the way, this is directly visible in the unit tests:

  test_net:TestOpenSuseSysConfigRendering.test_bond_config

  -> see NETWORK_CONFIGS['bond']['expected_sysconfig_opensuse']['ifroute-bond0']

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818649] [NEW] [pike] neutron-lbaasv2 with barbican error: LookupError: Container XXXXX could not be found

2019-03-05 Thread miaoyuliang
Public bug reported:

Is there any documentation on configuring neutron_lbaasv2 with barbican?
This problem has troubled me for a long time and the docs are not very
good. I have changed my lbaas config file many times, but it didn't work.

My OpenStack version is Pike.

It seems related to https://bugs.launchpad.net/barbican/+bug/1689846, but
my environment is lbaasv2, not octavia.
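
For reference, a hedged example of the barbican wiring such a setup needs
(section and option names follow neutron-lbaas defaults, but verify them
against your release):

[certificates]
cert_manager_type = barbican

[service_auth]
auth_url = http://192.168.10.10:35357/v3
admin_user = admin
admin_tenant_name = service
admin_password = secret
admin_user_domain = default
admin_project_domain = default
region = RegionOne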

I tried to create a TLS listener through lbaasv2: the CLI returns a
correct response and neutron-server.log looks fine, but there is an ERROR
in lbaas-agent.log. The log is below.


2019-03-05 18:47:52.427 14045 INFO 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Loading certificate container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
from Barbican.
2019-03-05 18:47:52.428 14045 DEBUG barbicanclient.v1.containers 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Getting container - Container href: 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
get /usr/lib/python2.7/site-packages/barbicanclient/v1/containers.py:537
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Error getting 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a: 
LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager Traceback (most recent 
call last):
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/common/cert_manager/barbican_cert_manager.py",
 line 174, in get_cert
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
container_ref=cert_ref
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager   File 
"/usr/lib/python2.7/site-packages/barbicanclient/v1/containers.py", line 543, 
in get
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager 
.format(container_ref))
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
2019-03-05 18:47:52.429 14045 ERROR 
neutron_lbaas.common.cert_manager.barbican_cert_manager
2019-03-05 18:47:52.430 14045 DEBUG oslo_concurrency.lockutils 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Lock "haproxy-driver" released by 
"neutron_lbaas.drivers.haproxy.namespace_driver.deploy_instance" :: held 2.682s 
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
[req-6e0b798e-b10c-4132-8665-dfe1122133bb cdb0fbe60ff84eaf932ba6a90dd030b2 
502990c9fd4d442693e8d818b01051b5 - - -] Create listener 
73a3aacc-3e81-4cee-aec9-1f0fa9cb61ca failed on device driver haproxy_ns: 
LookupError: Container 
http://192.168.10.10:9311/v1/containers/25670926-0f89-42b6-9fe6-05083d59736a 
could not be found.
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager Traceback 
(most recent call last):
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
303, in create_listener
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
driver.listener.create(listener)
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 480, in create
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(listener.loadbalancer)
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 444, in refresh
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
2019-03-05 18:47:52.430 14045 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
2019-03-05 

[Yahoo-eng-team] [Bug 1818651] [NEW] UnsupportedVersion: Endpoint does not support RPC version 1.2. Attempted method: report_state

2019-03-05 Thread Dilip Renkila
Public bug reported:

Hi all, I recently upgraded the neutron binaries from 13 to 14. Since
then there is a mismatch between the RPC versions the clients expect and
what neutron-server supports: I am getting "Endpoint does not support RPC
version 1.2" on all neutron agents. Below is the detailed log.


root@ctrl1:~# neutron-server --version
neutron-server 14.0.0.0b1

root@ctrl2:~# neutron-dhcp-agent --version
neutron-dhcp-agent 14.0.0.0b1


neutron-server logs

2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server [-] Exception 
during message handling: oslo_messaging.rpc.dispatcher.UnsupportedVersion: 
Endpoint does not support RPC version 1.2. Attempted method: report_state
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 166, in 
_process_incoming
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in 
dispatch
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server raise 
UnsupportedVersion(version, method=method)
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server 
oslo_messaging.rpc.dispatcher.UnsupportedVersion: Endpoint does not support RPC 
version 1.2. Attempted method: report_state
2019-03-05 11:44:45.194 81929 ERROR oslo_messaging.rpc.server 


neutron-dhcp-agent logs

2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/dhcp/agent.py", line 883, in 
_report_state
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent ctx, 
self.agent_state, True)
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/rpc.py", line 102, in report_state
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent return 
method(context, 'report_state', **kwargs)
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/oslo_messaging/rpc/client.py", line 179, in call
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent 
retry=self.retry)
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/oslo_messaging/transport.py", line 128, in _send
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent retry=retry)
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
645, in send
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent 
call_monitor_timeout, retry=retry)
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
636, in _send
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent raise result
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent 
oslo_messaging.rpc.client.RemoteError: Remote error: UnsupportedVersion 
Endpoint does not support RPC version 1.2. Attempted method: report_state
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent ['Traceback 
(most recent call last):\n', '  File 
"/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 166, in 
_process_incoming\nres = self.dispatcher.dispatch(message)\n', '  File 
"/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in 
dispatch\nraise UnsupportedVersion(version, method=method)\n', 
'oslo_messaging.rpc.dispatcher.UnsupportedVersion: Endpoint does not support 
RPC version 1.2. Attempted method: report_state\n'].
2019-03-05 11:45:15.206 2099366 ERROR neutron.agent.dhcp.agent 
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent 
[req-16cb695b-fee8-48de-a8f3-9541796608d0 - - - - -] Failed reporting state!: 
oslo_messaging.rpc.client.RemoteError: Remote error: UnsupportedVersion 
Endpoint does not support RPC version 1.2. Attempted method: report_state
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/dhcp/agent.py", line 883, in 
_report_state
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent ctx, 
self.agent_state, True)
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/rpc.py", line 102, in report_state
2019-03-05 11:45:45.201 2099366 ERROR neutron.agent.dhcp.agent return 
method(context, 'report_state', **kwargs)
2019-03-05 11:45:45.201 2099366 ERROR 

[Yahoo-eng-team] [Bug 1818252] Re: Incorrect logging instance uuid in nova logs

2019-03-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/640723
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=704880468b2cc495bb00266ff00bbea4fb0f28e6
Submitter: Zuul
Branch:master

commit 704880468b2cc495bb00266ff00bbea4fb0f28e6
Author: Takashi NATSUME 
Date:   Mon Mar 4 15:34:19 2019 +0900

Fix wrong consumer type in logging

In the 'delete_allocation_for_instance' method,
a consumer UUID is output in the log.

The consumer UUID is UUID of a server or UUID of a migration.
However the consumer UUID is described as UUID of a server
in the log.
Fix the description in the log.

Change-Id: I1dea4472b232d6c054879ebda2536658d9769053
Closes-Bug: #1818252


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1818252

Title:
  Incorrect logging instance uuid in nova logs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Hi,

  Found a logging mistake in nova :

  1596dd6225ef4abea7762c8b040b3f55 d60b403029ad41888c5822584263b983 - default 
default] [instance: 26bec746-110b-4777-af3f-15143b473667] Migrating instance to 
p6r01-nd02 finished successfully.
  2019-03-01 14:40:29.949 2262558 INFO nova.scheduler.client.report 
[req-4439c30b-0f5b-4982-b775-37b2062e849c 1596dd6225ef4abea7762c8b040b3f55 
d60b403029ad41888c5822584263b983 - default default] Deleted allocation for 
instance 4c7621bb-c34b-4e57-82ee-e9cea87d7a8b

  There is "Deleted allocation for instance
  4c7621bb-c34b-4e57-82ee-e9cea87d7a8b", which is the wrong UUID (it is
  the UUID of the migration, not of the instance).

  It should be 26bec746-110b-4777-af3f-15143b473667 (the instance UUID).

  Thanks,
  Michal

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1818252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818639] [NEW] Chinese translations got renamed from zh-cn and zh-tw to zh-hans and zh-hant

2019-03-05 Thread Radomir Dopieralski
Public bug reported:

According to https://code.djangoproject.com/ticket/18419 there was a
rename of those languages/locales, but Horizon still uses the old names.

However, since Django no longer ships with zh_CN and zh_TW translations,
the translation.check_for_language(lang_code) function now returns False
for them, and the validation on the user settings form fails (silently,
of course).

We should rename our translations to the new language names.
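
A hedged illustration of the renamed codes in Django settings (the
LANGUAGES setting is standard Django; the exact Horizon wiring may
differ):

LANGUAGES = (
    ('zh-hans', 'Simplified Chinese'),
    ('zh-hant', 'Traditional Chinese'),
)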

** Affects: horizon
 Importance: Critical
 Status: New

** Changed in: horizon
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1818639

Title:
  Chinese translations got renamed from zh-cn and zh-tw to zh-hans and
  zh-hant

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  According to https://code.djangoproject.com/ticket/18419 there was a
  rename of those languages/locales, but Horizon still uses the old
  names.

  However, since Django no longer ships with zh_CN and zh_TW
  translations, the translation.check_for_language(lang_code) function
  now returns False for them, and the validation on the user settings
  form fails (silently, of course).

  We should rename our translations to the new language names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1818639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818641] [NEW] IPv6 not enabling on EC2 when done afterwards (CentOS 7.6)

2019-03-05 Thread Chris NIVARD
Public bug reported:

How to reproduce:
=
On AWS EC2:
- launch a CentOS 7.6 instance with IPv4 only
- once running: assign an IPv6 from AWS EC2, and reboot the instance
- while the instance data is properly populated, the network configuration
does not assign the IPv6 address to the instance (no connectivity).


Unsuccessful workaround attempts:
=
A) /etc/cloud/cloud.cfg.d/99-ipv6-networking.cfg
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: dhcp6

B) There has been a debate in the CoreOS GitHub tracker as to whether AWS
is IPv6 RFC compliant; the entire thread can be read here:
https://github.com/coreos/bugs/issues/1828
A solution provided there for Ubuntu cannot be applied on CentOS.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: aws ec2 ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1818641

Title:
  IPv6 not enabling on EC2 when done afterwards (CentOS 7.6)

Status in cloud-init:
  New

Bug description:
  How to reproduce:
  =
  On AWS EC2:
  - launch a CentOS 7.6 instance with IPv4 only
  - once running: assign an IPv6 from AWS EC2, and reboot the instance
  - while the instance data is properly populated, the network
  configuration does not assign the IPv6 address to the instance (no
  connectivity).

  Unsuccessful workaround attempts:
  =
  A) /etc/cloud/cloud.cfg.d/99-ipv6-networking.cfg
  network:
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp6

  B) There has been a debate in the CoreOS GitHub tracker as to whether
  AWS is IPv6 RFC compliant; the entire thread can be read here:
  https://github.com/coreos/bugs/issues/1828
  A solution provided there for Ubuntu cannot be applied on CentOS.
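
  For reference, a variant that requests IPv4 DHCP alongside IPv6 (a
  sketch only: the syntax below is valid cloud-init network config v1,
  but whether it takes effect on reboot depends on the datasource
  regenerating the network configuration):

      network:
        version: 1
        config:
          - type: physical
            name: eth0
            subnets:
              - type: dhcp
              - type: dhcp6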

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1818641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818632] [NEW] Fullstack tests fail on Ubuntu Bionic

2019-03-05 Thread Slawek Kaplonski
Public bug reported:

The neutron-fullstack job is failing while compiling openvswitch when running
on Ubuntu Bionic.
Error example:
http://logs.openstack.org/61/639361/2/check/neutron-fullstack/328791a/job-output.txt.gz#_2019-02-26_15_45_00_395433

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818632

Title:
  Fullstack tests fail on Ubuntu Bionic

Status in neutron:
  Confirmed

Bug description:
  The neutron-fullstack job is failing while compiling openvswitch when
  running on Ubuntu Bionic.
  Error example:
  http://logs.openstack.org/61/639361/2/check/neutron-fullstack/328791a/job-output.txt.gz#_2019-02-26_15_45_00_395433

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818628] [NEW] Scenario jobs for neutron-dynamic-routing project fail on Ubuntu Bionic

2019-03-05 Thread Slawek Kaplonski
Public bug reported:

Scenario jobs "neutron-dynamic-routing-dsvm-tempest-scenario-basic",
"neutron-dynamic-routing-dsvm-tempest-scenario-ipv4" and
"neutron-dynamic-routing-dsvm-tempest-scenario-ipv6" are failing on Ubuntu
Bionic due to a missing docker-engine package.

Error example:
http://logs.openstack.org/75/639675/1/check/neutron-dynamic-routing-dsvm-tempest-scenario-basic/1c7de20/job-output.txt.gz#_2019-02-27_14_17_16_457972

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818628

Title:
  Scenario jobs for neutron-dynamic-routing project fail on Ubuntu
  Bionic

Status in neutron:
  Confirmed

Bug description:
  Scenario jobs "neutron-dynamic-routing-dsvm-tempest-scenario-basic",
  "neutron-dynamic-routing-dsvm-tempest-scenario-ipv4" and "neutron-
  dynamic-routing-dsvm-tempest-scenario-ipv6" are failing on Ubuntu
  Bionic due to missing docker-engine package.

  Error example: http://logs.openstack.org/75/639675/1/check/neutron-
  dynamic-routing-dsvm-tempest-scenario-basic/1c7de20/job-
  output.txt.gz#_2019-02-27_14_17_16_457972

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818614] [NEW] Various L3HA functional tests fail often

2019-03-05 Thread Slawek Kaplonski
Public bug reported:

Recently many L3 HA related functional tests have been failing.
The common thing in all of those errors is that the test fails while waiting
for the L3 HA router to become master.

Example stack trace:

ft2.12: 
neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.test_ha_router_lifecycle_StringException:
 Traceback (most recent call last):
  File "neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File "neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File "neutron/tests/functional/agent/l3/test_ha_router.py", line 81, in 
test_ha_router_lifecycle
self._router_lifecycle(enable_ha=True, router_info=router_info)
  File "neutron/tests/functional/agent/l3/framework.py", line 274, in 
_router_lifecycle
common_utils.wait_until_true(lambda: router.ha_state == 'master')
  File "neutron/common/utils.py", line 690, in wait_until_true
raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds

Example failure:
http://logs.openstack.org/79/633979/21/check/neutron-functional-python27/ce7ef07/logs/testr_results.html.gz

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ha_state%20%3D%3D%20'master')%5C%22

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818614

Title:
  Various L3HA functional tests fail often

Status in neutron:
  Confirmed

Bug description:
  Recently many L3 HA related functional tests have been failing.
  The common thing in all of those errors is that the test fails while
  waiting for the L3 HA router to become master.

  Example stack trace:

  ft2.12: 
neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.test_ha_router_lifecycle_StringException:
 Traceback (most recent call last):
File "neutron/tests/base.py", line 174, in func
  return f(self, *args, **kwargs)
File "neutron/tests/base.py", line 174, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/l3/test_ha_router.py", line 81, in 
test_ha_router_lifecycle
  self._router_lifecycle(enable_ha=True, router_info=router_info)
File "neutron/tests/functional/agent/l3/framework.py", line 274, in 
_router_lifecycle
  common_utils.wait_until_true(lambda: router.ha_state == 'master')
File "neutron/common/utils.py", line 690, in wait_until_true
  raise WaitTimeout(_("Timed out after %d seconds") % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

  Example failure:
  http://logs.openstack.org/79/633979/21/check/neutron-functional-python27/ce7ef07/logs/testr_results.html.gz

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ha_state%20%3D%3D%20'master')%5C%22
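
  For context, neutron's wait helper simply polls a predicate until a
  deadline. A simplified, thread-based sketch (the real helper in
  neutron/common/utils.py is eventlet-based) under those assumptions:

      import time

      class WaitTimeout(Exception):
          """Raised when the predicate never becomes true in time."""

      def wait_until_true(predicate, timeout=60, sleep=1):
          # Poll until the predicate holds; give up after `timeout` seconds.
          deadline = time.monotonic() + timeout
          while not predicate():
              if time.monotonic() >= deadline:
                  raise WaitTimeout("Timed out after %d seconds" % timeout)
              time.sleep(sleep)

  The failing wait amounts to wait_until_true(lambda: router.ha_state ==
  'master'), so the timeout means the backup router never transitioned
  to master within 60 seconds, not that the helper itself misbehaved.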

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818613] [NEW] Functional qos tests fail often

2019-03-05 Thread Slawek Kaplonski
Public bug reported:

Various QoS related tests have been failing often recently. In all cases
the reason is the same: "ovsdbapp.backend.ovs_idl.idlutils.RowNotFound:
Cannot find Port with name=cc566ab0-4201-44b5-ae89-d342284ffdd6" during
"_minimum_bandwidth_initialize".

Stacktrace:

ft1.1: 
neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_policy_rule_delete(ingress)_StringException:
 Traceback (most recent call last):
  File "neutron/tests/base.py", line 174, in func
return f(self, *args, **kwargs)
  File 
"neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", 
line 354, in test_policy_rule_delete
port_dict = self._create_port_with_qos()
  File 
"neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", 
line 172, in _create_port_with_qos
self.setup_agent_and_ports([port_dict])
  File "neutron/tests/functional/agent/l2/base.py", line 375, in 
setup_agent_and_ports
ancillary_bridge=ancillary_bridge)
  File "neutron/tests/functional/agent/l2/base.py", line 116, in create_agent
ext_mgr, self.config)
  File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 256, in __init__
self.connection, constants.EXTENSION_DRIVER_TYPE, agent_api)
  File "neutron/agent/agent_extensions_manager.py", line 54, in initialize
extension.obj.initialize(connection, driver_type)
  File "neutron/agent/l2/extensions/qos.py", line 207, in initialize
self.qos_driver.initialize()
  File 
"neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py",
 line 57, in initialize
self._minimum_bandwidth_initialize()
  File 
"neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py",
 line 52, in _minimum_bandwidth_initialize
self.br_int.clear_minimum_bandwidth_qos()
  File "neutron/agent/common/ovs_lib.py", line 1006, in 
clear_minimum_bandwidth_qos
self.ovsdb.db_destroy('QoS', qos_id).execute(check_error=True)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/command.py",
 line 40, in execute
txn.add(self)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/api.py",
 line 112, in transaction
del self._nested_txns_map[cur_thread_id]
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/api.py",
 line 69, in __exit__
self.result = self.commit()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 62, in commit
raise result.ex
ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Port with 
name=cc566ab0-4201-44b5-ae89-d342284ffdd6

Example failure:
http://logs.openstack.org/74/640874/1/check/neutron-functional-python27/d51cd50/logs/testr_results.html.gz

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%2052%2C%20in%20_minimum_bandwidth_initialize%5C%22
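
For context, the traceback shows the QoS extension's startup cleanup
deleting stale OVS QoS rows; when the port backing a row is already gone,
the db_destroy transaction raises RowNotFound. A hedged sketch of a
tolerant cleanup (the row-listing helper is hypothetical, not neutron's
actual code; db_destroy and RowNotFound are the ovsdbapp names from the
traceback):

    from ovsdbapp.backend.ovs_idl import idlutils

    def clear_minimum_bandwidth_qos(bridge):
        # Delete whatever stale QoS rows the bridge reports, ignoring
        # rows whose backing port vanished in the meantime.
        for qos_id in bridge.list_stale_qos_ids():  # hypothetical helper
            try:
                bridge.ovsdb.db_destroy('QoS', qos_id).execute(
                    check_error=True)
            except idlutils.RowNotFound:
                continue  # raced with port removal; nothing to clean up

This reads the failure as a race between the cleanup and port deletion
rather than as a defect in the tests themselves.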

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818613

Title:
  Functional qos tests fail often

Status in neutron:
  Confirmed

Bug description:
  Various QoS related tests have been failing often recently. In all
  cases the reason is the same:
  "ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Port with
  name=cc566ab0-4201-44b5-ae89-d342284ffdd6" during
  "_minimum_bandwidth_initialize".

  Stacktrace:

  ft1.1: 
neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_policy_rule_delete(ingress)_StringException:
 Traceback (most recent call last):
File "neutron/tests/base.py", line 174, in func
  return f(self, *args, **kwargs)
File 
"neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", 
line 354, in test_policy_rule_delete
  port_dict = self._create_port_with_qos()
File 
"neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", 
line 172, in _create_port_with_qos
  self.setup_agent_and_ports([port_dict])
File "neutron/tests/functional/agent/l2/base.py", line 375, in 
setup_agent_and_ports
  ancillary_bridge=ancillary_bridge)
File "neutron/tests/functional/agent/l2/base.py", line 116, in create_agent
  ext_mgr, self.config)
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 256, in __init__
  self.connection, constants.EXTENSION_DRIVER_TYPE, agent_api)
File "neutron/agent/agent_extensions_manager.py", line 54, in initialize
  extension.obj.initialize(connection,