[Yahoo-eng-team] [Bug 1800579] [NEW] A failed instance resize throws two error messages

2018-10-29 Thread Wangliangyu
Public bug reported:

Two error prompts appear on the page when resizing an instance fails.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1800579

Title:
  A failed instance resize throws two error messages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Two error prompts appear on the page when resizing an instance fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1800579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800575] [NEW] The prompt message does not contain a name or ID when creating a volume with an empty name

2018-10-29 Thread Wangliangyu
Public bug reported:

The volume name is not required, so it can be left empty when creating a
volume.
In that case the prompt, 'Creating volume "%s"' % data["name"], renders as 'Creating volume ""'.
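
A minimal sketch of one possible fix, falling back to the volume ID when the
name is empty (the message format comes from the report; the helper and its
volume_id parameter are hypothetical, not Horizon's actual code):

    def volume_creation_message(data, volume_id):
        # Fall back to the ID so the prompt never renders as
        # 'Creating volume ""' when the optional name is left empty.
        label = data.get("name") or volume_id
        return 'Creating volume "%s"' % label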

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1800575

Title:
  The prompt message does not contain a name or ID when creating a
  volume with an empty name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The volume name is not required, so it can be left empty when creating
  a volume. In that case the prompt, 'Creating volume "%s"' % data["name"],
  renders as 'Creating volume ""'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1800575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537044] Re: Unit test failure when building Debian package for Mitaka b2

2018-10-29 Thread Brian Rosmaita
Looks like this was fixed by a library version change and is no longer
relevant.

** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1537044

Title:
  Unit test failure when building Debian package for Mitaka b2

Status in Glance:
  Invalid

Bug description:
  Hi,

  I have 3 unit test failures when building the Glance Mitaka b2
  package, as per below. Please help me to fix them.

  ==
  FAIL: glance.tests.functional.test_reload.TestReload.test_reload
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: traceback-1: {{{
  Traceback (most recent call last):
File "glance/tests/functional/test_reload.py", line 50, in tearDown
  self.stop_servers()
File "glance/tests/functional/__init__.py", line 899, in stop_servers
  self.stop_server(self.scrubber_daemon, 'Scrubber daemon')
File "glance/tests/functional/__init__.py", line 884, in stop_server
  server.stop()
File "glance/tests/functional/__init__.py", line 257, in stop
  raise Exception('why is this being called? %s' % self.server_name)
  Exception: why is this being called? scrubber
  }}}
 
  Traceback (most recent call last):
File "glance/tests/functional/test_reload.py", line 113, in test_reload
  self.start_servers(fork_socket=False, **vars(self))
File "glance/tests/functional/__init__.py", line 804, in start_servers
  self.start_with_retry(self.api_server, 'api_port', 3, **kwargs)
File "glance/tests/functional/__init__.py", line 774, in start_with_retry
  launch_msg = self.wait_for_servers([server], expect_launch)
File "glance/tests/functional/__init__.py", line 866, in wait_for_servers
  execute(cmd, raise_error=False, expect_exit=False)
File "glance/tests/utils.py", line 315, in execute  
  env=env)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
  errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
  raise child_exception
  OSError: [Errno 2] No such file or directory

  
  ==
  FAIL: 
glance.tests.functional.v1.test_multiprocessing.TestMultiprocessing.test_interrupt_avoids_respawn_storm
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "glance/tests/functional/v1/test_multiprocessing.py", line 61, in 
test_interrupt_avoids_respawn_storm
  children = self._get_children()
File "glance/tests/functional/v1/test_multiprocessing.py", line 50, in 
_get_children
  children = process.get_children()
  AttributeError: 'Process' object has no attribute 'get_children'
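
  For reference, this AttributeError comes from psutil 2.0 renaming
  get_children() to children(); a small compatibility shim (a sketch, not
  the actual glance fix) would look like:

      import psutil

      def children_of(pid):
          proc = psutil.Process(pid)
          # psutil >= 2.0 renamed get_children() to children(); prefer
          # the new name and fall back for older releases.
          getter = getattr(proc, 'children', None) or proc.get_children
          return getter()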

  
  ==
  FAIL: 
glance.tests.unit.common.test_wsgi_ipv6.IPv6ServerTest.test_evnetlet_no_dnspython
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "glance/tests/unit/common/test_wsgi_ipv6.py", line 61, in 
test_evnetlet_no_dnspython
  self.assertEqual(0, rc)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 350, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in 
assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 0 != 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1537044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800511] Re: VMs started before Rocky upgrade cannot be live migrated

2018-10-29 Thread Matt Riedemann
FWIW, I don't think
https://github.com/openstack/nova/commit/2b52cde565d542c03f004b48ee9c1a6a25f5b7cd
really changed whether
https://github.com/openstack/nova/commit/f02b3800051234ecc14f3117d5987b1a8ef75877
could have broken anything. _update_vif_xml is called on the source
host using migrate data from the dest host, but as far as I know that
migrate data doesn't carry any MTU information from the dest with which
to decide what to set in the source vif config. Before _update_vif_xml,
we would have just sent the source guest XML vif config to the dest,
and if the dest didn't support MTU it would have failed there as well.

** Tags added: libvirt live-migration upgrade

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Triaged

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1800511

Title:
  VMs started before Rocky upgrade cannot be live migrated

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  In Rocky, the following patch introduced adding MTU to the network for
  VMs:

  
https://github.com/openstack/nova/commit/f02b3800051234ecc14f3117d5987b1a8ef75877

  However, this didn't affect live migrations much because Nova didn't
  touch the network bits of the XML during live migration, until this
  patch:

  
https://github.com/openstack/nova/commit/2b52cde565d542c03f004b48ee9c1a6a25f5b7cd

  With that change, the MTU is added to the configuration, which means
  that the destination is launched with host_mtu=N, which apparently
  changes the guest ABI (see:
  https://bugzilla.redhat.com/show_bug.cgi?id=1449346).  This means the
  live migration will fail with an error looking like this:

  2018-10-29 14:59:15.126+0000: 5289: error : qemuProcessReportLogError:1914 : internal error: qemu unexpectedly closed the monitor: 2018-10-29T14:59:14.977084Z qemu-kvm: get_pci_config_device: Bad config data: i=0x10 read: 61 device: 1 cmask: ff wmask: c0 w1cmask: 0
  2018-10-29T14:59:14.977105Z qemu-kvm: Failed to load PCIDevice:config
  2018-10-29T14:59:14.977109Z qemu-kvm: Failed to load virtio-net:virtio
  2018-10-29T14:59:14.977112Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-net'
  2018-10-29T14:59:14.977283Z qemu-kvm: load of migration failed: Invalid argument

  I was able to further verify this by seeing that `host_mtu` exists in
  the command line when looking at the destination host instance logs in
  /var/log/libvirt/qemu/instance-foo.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1800511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800515] [NEW] Unnecessary locking when connecting volumes

2018-10-29 Thread Gorka Eguileor
Public bug reported:

Cinder introduced "shared_targets" and "service_uuid" fields in volumes
to allow volume consumers to protect themselves from unintended leftover
devices when handling iSCSI connections with shared targets.

The way to protect from the automatic scans that happen on detach/map
race conditions is by locking and only allowing one attach or one detach
operation for each server to happen at a given time.

When using an up-to-date Open iSCSI initiator we don't need to use
locks, as it can disable automatic LUN scans (which are the real cause
of the leftover devices), and OS-Brick already supports this feature.

Currently Nova blindly locks whenever "shared_targets" is set to True,
even when the iSCSI initiator and OS-Brick already prevent such races,
which introduces unnecessary serialization on the connection of
volumes.
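
For illustration, a sketch of the conditional locking described above,
using oslo.concurrency; supports_manual_scans is a hypothetical flag
standing in for the os-brick capability check:

    from oslo_concurrency import lockutils

    def connect_volume(connector, connection_info, volume,
                       supports_manual_scans):
        # Serialize attach/detach per cinder service only when targets
        # are shared AND the initiator cannot disable the automatic LUN
        # scans that cause the leftover-device races.
        if volume.shared_targets and not supports_manual_scans:
            with lockutils.lock(volume.service_uuid):
                return connector.connect_volume(connection_info)
        return connector.connect_volume(connection_info)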

** Affects: nova
 Importance: Undecided
 Assignee: Gorka Eguileor (gorka)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Gorka Eguileor (gorka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1800515

Title:
  Unnecessary locking when connecting volumes

Status in OpenStack Compute (nova):
  New

Bug description:
  Cinder introduced "shared_targets" and "service_uuid" fields in
  volumes to allow volume consumers to protect themselves from
  unintended leftover devices when handling iSCSI connections with
  shared targets.

  The way to protect from the automatic scans that happen on detach/map
  race conditions is by locking and only allowing one attach or one
  detach operation for each server to happen at a given time.

  When using an up-to-date Open iSCSI initiator we don't need to use
  locks, as it can disable automatic LUN scans (which are the real
  cause of the leftover devices), and OS-Brick already supports this
  feature.

  Currently Nova blindly locks whenever "shared_targets" is set to
  True, even when the iSCSI initiator and OS-Brick already prevent
  such races, which introduces unnecessary serialization on the
  connection of volumes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1800515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800511] [NEW] VMs started before Rocky upgrade cannot be live migrated

2018-10-29 Thread Mohammed Naser
Public bug reported:

In Rocky, the following patch introduced adding MTU to the network for
VMs:

https://github.com/openstack/nova/commit/f02b3800051234ecc14f3117d5987b1a8ef75877

However, this didn't affect live migrations much because Nova didn't
touch the network bits of the XML during live migration, until this
patch:

https://github.com/openstack/nova/commit/2b52cde565d542c03f004b48ee9c1a6a25f5b7cd

With that change, the MTU is added to the configuration, which means
that the destination is launched with host_mtu=N, which apparently
changes the guest ABI (see:
https://bugzilla.redhat.com/show_bug.cgi?id=1449346).  This means the
live migration will fail with an error looking like this:

2018-10-29 14:59:15.126+0000: 5289: error : qemuProcessReportLogError:1914 : internal error: qemu unexpectedly closed the monitor: 2018-10-29T14:59:14.977084Z qemu-kvm: get_pci_config_device: Bad config data: i=0x10 read: 61 device: 1 cmask: ff wmask: c0 w1cmask: 0
2018-10-29T14:59:14.977105Z qemu-kvm: Failed to load PCIDevice:config
2018-10-29T14:59:14.977109Z qemu-kvm: Failed to load virtio-net:virtio
2018-10-29T14:59:14.977112Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-net'
2018-10-29T14:59:14.977283Z qemu-kvm: load of migration failed: Invalid argument

I was able to further verify this by seeing that `host_mtu` exists in
the command line when looking at the destination host instance logs in
/var/log/libvirt/qemu/instance-foo.log
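
A sketch of the kind of guard that would avoid the ABI change (lxml-based,
with assumed element handling; this is not Nova's actual _update_vif_xml):

    from lxml import etree

    def update_iface_mtu(iface_xml, dest_mtu):
        # Never introduce an <mtu> element the running guest was not
        # booted with: host_mtu changes the virtio-net ABI, so adding
        # it mid-migration breaks loading the incoming VM state.
        iface = etree.fromstring(iface_xml)
        mtu = iface.find('mtu')
        if mtu is not None:
            mtu.set('size', str(dest_mtu))
        return etree.tostring(iface)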

** Affects: nova
 Importance: High
 Assignee: Mohammed Naser (mnaser)
 Status: Triaged


** Tags: libvirt live-migration upgrade

** Changed in: nova
 Assignee: (unassigned) => Mohammed Naser (mnaser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1800511

Title:
  VMs started before Rocky upgrade cannot be live migrated

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In Rocky, the following patch introduced adding MTU to the network for
  VMs:

  
https://github.com/openstack/nova/commit/f02b3800051234ecc14f3117d5987b1a8ef75877

  However, this didn't affect live migrations much because Nova didn't
  touch the network bits of the XML during live migration, until this
  patch:

  
https://github.com/openstack/nova/commit/2b52cde565d542c03f004b48ee9c1a6a25f5b7cd

  With that change, the MTU is added to the configuration, which means
  that the destination is launched with host_mtu=N, which apparently
  changes the guest ABI (see:
  https://bugzilla.redhat.com/show_bug.cgi?id=1449346).  This means the
  live migration will fail with an error looking like this:

  2018-10-29 14:59:15.126+0000: 5289: error : qemuProcessReportLogError:1914 : internal error: qemu unexpectedly closed the monitor: 2018-10-29T14:59:14.977084Z qemu-kvm: get_pci_config_device: Bad config data: i=0x10 read: 61 device: 1 cmask: ff wmask: c0 w1cmask: 0
  2018-10-29T14:59:14.977105Z qemu-kvm: Failed to load PCIDevice:config
  2018-10-29T14:59:14.977109Z qemu-kvm: Failed to load virtio-net:virtio
  2018-10-29T14:59:14.977112Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-net'
  2018-10-29T14:59:14.977283Z qemu-kvm: load of migration failed: Invalid argument

  I was able to further verify this by seeing that `host_mtu` exists in
  the command line when looking at the destination host instance logs in
  /var/log/libvirt/qemu/instance-foo.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1800511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800508] [NEW] Missing exception handling mechanism in 'schedule_and_build_instances' for DBError at line 1180

2018-10-29 Thread Wallace Cardoso
Public bug reported:

Description
===========
If an error occurs during instance creation, the user cannot tell what
happened to the VM, which stays in the building state forever. Normally,
when the workflow of creating a VM is interrupted by an exception in the
method schedule_and_build_instances, the VM should end up in the 'error'
state.

Steps to reproduce
==================
1) Create a VM;
2) Inject an out-of-range value in
"schedule_and_build_instances.args.build_requests->'nova_object.data'.instance.'nova_object.data'.instance_type_id";
this is enough to cause a DBError. For instance, the value 1E+22 can be used.
3) An exception is thrown, but there appears to be no appropriate handling
for this DBError.

Expected result
===============
The VM is put in the 'error' state.

Actual result
=============
The VM stays in the 'build' state indefinitely, and the user will never
know (without searching the logs) what happened to the VM.

Environment
===========
Devstack/Stable/Queens.

Logs & Configs
==============
Logs attached.
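
A minimal sketch of the missing handling; mark_build_failed() is a
hypothetical stand-in for whatever conductor cleanup sets the VM to
'error':

    from oslo_db.exception import DBError
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def create_instance_safely(context, instance, mark_build_failed):
        try:
            instance.create()  # the DB write that raises on the 1E+22 value
        except DBError:
            LOG.exception('Instance create failed for %s', instance.uuid)
            # Surface the failure as 'error' instead of leaving the VM
            # in 'building' forever.
            mark_build_failed(context, instance)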

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1800508

Title:
  Missing exception handling mechanism in 'schedule_and_build_instances'
  for DBError at line 1180

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ==
  If an error occurs during instance creation, the user won't be able to know 
what exactly happened with the VM that remains always building. As usual, the 
workflow of creating a VM was interrupted by an exception in the method 
schedule_and_build_instances, so the result would be the VM is in 'error' state.

  Steps to reproduce
  =
  1) Create a VM;
  2) Inject an out-of-range value in 
"schedule_and_build_instances.args.build_requests->'nova_object.data'.instance.'nova_object.data'.instance_type_id",
 this will be enough to cause a DBError. For instance, it can be used the 1E+22 
value.
  3) An exception will be thrown, but seems there no exist an appropriate 
action when this DBError happens.

  Expected result
  ==
  The VM is put in 'error' state

  Actual result
  
  The VM is in 'build' state indeterminately, and the user never will know 
(without searching in the logs) what happened with the VM.

  Environment
  ==
  Devstack/Stable/Queens.

  Logs & Configs
  =
  Logs attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1800508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800494] [NEW] Volume QoS Create Extra Specs description incorrect

2018-10-29 Thread Simon Dodsley
Public bug reported:

When creating a new extra spec for a volume QoS, the description states
three acceptable values for keys. This is incorrect, as the values given
(and the examples) are specific to a SolidFire storage array using
back-end QoS.

This is confusing when setting up QoS extra specs, as it implies the
dashboard will only accept the listed back-end extra specs.

Front-end QoS currently has 13 different acceptable keys, plus all the
supported back-end keys from vendors other than SolidFire.

The wording needs to be changed to be more generic, and any examples
given should cover the generic front-end extra specs rather than
something specific to a single vendor's implementation of volume QoS.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1800494

Title:
  Volume QoS Create Extra Specs description incorrect

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a new extra spec for a volume QoS, the description states
  three acceptable values for keys. This is incorrect, as the values given
  (and the examples) are specific to a SolidFire storage array using
  back-end QoS.

  This is confusing when setting up QoS extra specs, as it implies the
  dashboard will only accept the listed back-end extra specs.

  Front-end QoS currently has 13 different acceptable keys, plus all the
  supported back-end keys from vendors other than SolidFire.

  The wording needs to be changed to be more generic, and any examples
  given should cover the generic front-end extra specs rather than
  something specific to a single vendor's implementation of volume QoS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1800494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687027] Re: test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout

2018-10-29 Thread Hongbin Lu
Still happening: http://logs.openstack.org/88/555088/29/check/neutron-functional/0b00b31/job-output.txt.gz#_2018-10-25_22_18_28_107639

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687027

Title:
  test_walk_versions tests fail with "IndexError: tuple index out of
  range" after timeout

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/99/460399/1/check/gate-neutron-dsvm-functional-ubuntu-xenial/25de43d/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 115, in func
  return f(self, *args, **kwargs)
File "neutron/tests/base.py", line 115, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/db/test_migrations.py", line 551, in 
test_walk_versions
  self._migrate_up(config, engine, dest, curr, with_data=True)
File "neutron/tests/functional/db/test_migrations.py", line 537, in 
_migrate_up
  migration.do_alembic_command(config, 'upgrade', dest)
File "neutron/db/migration/cli.py", line 109, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/command.py",
 line 254, in upgrade
  script.run_env()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 416, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 93, in load_python_file
  module = load_module_py(module_id, path)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 75, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File "neutron/db/migration/alembic_migrations/env.py", line 120, in 
  run_migrations_online()
File "neutron/db/migration/alembic_migrations/env.py", line 114, in 
run_migrations_online
  context.run_migrations()
File "", line 8, in run_migrations
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 817, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 323, in run_migrations
  step.migration_fn(**kw)
File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/mitaka/expand/3894bccad37f_add_timestamp_to_base_resources.py",
 line 36, in upgrade
  sa.Column(column_name, sa.DateTime(), nullable=True)
File "", line 8, in add_column
File "", line 3, in add_column
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/operations/ops.py",
 line 1551, in add_column
  return operations.invoke(op)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/operations/base.py",
 line 318, in invoke
  return fn(self, operation)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/operations/toimpl.py",
 line 123, in add_column
  schema=schema
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 172, in add_column
  self._exec(base.AddColumn(table_name, column, schema=schema))
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 118, in _exec
  return conn.execute(construct, *multiparams, **params)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 945, in execute
  return meth(self, multiparams, params)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 68, in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1002, in _execute_ddl
  compiled
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1189, in _execute_context
  context)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1398, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 203, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb, cause=

[Yahoo-eng-team] [Bug 1798424] Re: Xenial Azure: Make generation of network config from IMDS hotplug scripts configurable opt-in

2018-10-29 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 18.4-0ubuntu1~16.04.2

---
cloud-init (18.4-0ubuntu1~16.04.2) xenial; urgency=medium

  * cherry-pick 1d5e9aef: azure: Add apply_network_config option to
disable network (LP: #1798424)
  * debian/patches/openstack-no-network-config.patch
add patch to default Azure apply_network_config to False. Only
fallback network config on eth0 is generated by cloud-init. IMDS
network_config is ignored.

cloud-init (18.4-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * drop the following cherry-picks now included:
+ cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on
  * refresh patches:
   + debian/patches/azure-use-walinux-agent.patch
   + debian/patches/openstack-no-network-config.patch
  * refresh patches:
   + debian/patches/ds-identify-behavior-xenial.patch
  * New upstream release. (LP: #1795953)
- release 18.4
- tests: allow skipping an entire cloud_test without running.
- tests: disable lxd tests on cosmic
- cii-tests: use unittest2.SkipTest in ntp_chrony due to new deps
- lxd: adjust to snap installed lxd.
- docs: surface experimental doc in instance-data.json
- tests: fix ec2 integration tests. process meta_data instead of meta-data
- Add support for Infiniband network interfaces (IPoIB). [Mark Goddard]
- cli: add cloud-init query subcommand to query instance metadata
- tools/tox-venv: update for new features.
- pylint: ignore warning assignment-from-no-return for _write_network
- stages: Fix bug causing datasource to have incorrect sys_cfg.
- Remove dead-code _write_network distro implementations.
- net_util: ensure static configs have netmask in translate_network result
  [Thomas Berger]
- Fall back to root:root on syslog permissions if other options fail.
  [Robert Schweikert]
- tests: Add mock for util.get_hostname. [Robert Schweikert]
- ds-identify: doc string cleanup.
- OpenStack: Support setting mac address on bond. [Fabian Wiesel]
- bash_completion/cloud-init: fix shell syntax error.
- EphemeralIPv4Network: Be more explicit when adding default route.
- OpenStack: support reading of newer versions of metdata.
- OpenStack: fix bug causing 'latest' version to be used from network.
- user-data: jinja template to render instance-data.json in cloud-config
- config: disable ssh access to a configured user account
- tests: print failed testname instead of docstring upon failure
- tests: Disallow use of util.subp except for where needed.
- sysconfig: refactor sysconfig to accept distro specific templates paths
- Add unit tests for config/cc_ssh.py [Francis Ginther]
- Fix the built-in cloudinit/tests/helpers:skipIf
- read-version: enhance error message [Joshua Powers]
- hyperv_reporting_handler: simplify threaded publisher
- VMWare: Fix a network config bug in vm with static IPv4 and no gateway.
  [Pengpeng Sun]
- logging: Add logging config type hyperv for reporting via Azure KVP
  [Andy Liu]
- tests: disable other snap test as well [Joshua Powers]
- tests: disable snap, fix write_files binary [Joshua Powers]
- Add datasource Oracle Compute Infrastructure (OCI).
- azure: allow azure to generate network configuration from IMDS per boot.
- Scaleway: Add network configuration to the DataSource [Louis Bouchard]
- docs: Fix example cloud-init analyze command to match output.
  [Wesley Gao]
- netplan: Correctly render macaddress on a bonds and bridges when
  provided.
- tools: Add 'net-convert' subcommand command to 'cloud-init devel'.
- redhat: remove ssh keys on new instance.
- Use typeset or local in profile.d scripts.
- OpenNebula: Fix null gateway6 [Akihiko Ota]
- oracle: fix detect_openstack to report True on OracleCloud.com DMI data
- tests: improve LXDInstance trying to workaround or catch bug.
- update_metadata re-config on every boot comments and tests not quite
  right [Mike Gerdts]
- tests: Collect build_info from system if available.
- pylint: Fix pylint warnings reported in pylint 2.0.0.
- get_linux_distro: add support for rhel via redhat-release.
- get_linux_distro: add support for centos6 and rawhide flavors of redhat
- tools: add '--debug' to tools/net-convert.py
- tests: bump the version of paramiko to 2.4.1.

 -- Chad Smith   Wed, 17 Oct 2018 12:51:09
-0600

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1798424

Title:
  Xenial Azure: Make generation of network config from IMDS hotplug
  scripts configurable opt-in

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  New
Status in cloud-init source package in Xenial:
  Fix Released

Bug descrip

[Yahoo-eng-team] [Bug 1799954] Re: cloud-init fails openstack detection from openstack lxd virtual instances

2018-10-29 Thread Scott Moser
** Changed in: cloud-init
   Status: Incomplete => Invalid

** Changed in: cloud-init (Ubuntu)
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1799954

Title:
  cloud-init fails openstack detection from openstack lxd virtual
  instances

Status in cloud-init:
  Invalid
Status in cloud-init package in Ubuntu:
  Invalid

Bug description:
  Hi,
   I'm using openstack (2:13.1.3-0ubuntu1) in an on-premises cloud to
  manage, among other types, LXD Ubuntu bionic virtual instances
  (which run cloud-init version 18.3-9-g2e62cb8a-0ubuntu1~18.04.2),
  and unfortunately the DataSourceOpenStack.py module fails to detect
  OpenStack's presence, even when explicitly configured as the only
  data source with "datasource_list: [ OpenStack ]".

  After some investigation, I found that the detect_openstack function
  fails because:

  1) read_dmi_data always returns None because it detects to be inside a
 container ("systemd-detect-virt --container" returns "lxc")

  2) even if 1) worked, none of the openstack detection rules would match: 
  "/proc/1/environ" does not contain "product_name=OpenStack Nova"
 (it contains "container=lxc" instead), DMI
  product_name is not "Openstack Nova" nor "OpenStack Compute" and
  DMI chassis_asset_tag is not "OpenTelekomCloud".
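
  For reference, the rules above boil down to something like this sketch
  (util.read_dmi_data and util.load_file are cloud-init helpers; the
  exact checks in DataSourceOpenStack.py may differ):

      from cloudinit import util

      def detect_openstack():
          # In an LXC container read_dmi_data() returns None and
          # /proc/1/environ says "container=lxc", so nothing matches.
          if util.read_dmi_data('system-product-name') in (
                  'OpenStack Nova', 'OpenStack Compute'):
              return True
          if util.read_dmi_data('chassis-asset-tag') == 'OpenTelekomCloud':
              return True
          environ = util.load_file('/proc/1/environ')
          return 'product_name=OpenStack Nova' in environ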

  The same configuration worked well with the previous version of
  cloud-init that did not enforce openstack detection.

  Any advice for a proper solution? Is there any configuration to force
  Openstack detection?

  Note that... replacing "detect_openstack"'s content with "return
  True", everything works correctly.

  Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1799954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800472] [NEW] nova.tests.functional.test_server_group.ServerGroupTestV264.test_boot_servers_with_affinity_no_valid_host intermittently failing with "OpenStackApiNotFoundExceptio

2018-10-29 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/48/613348/1/check/nova-tox-functional/16b5d01/job-output.txt.gz#_2018-10-29_10_14_46_484703

2018-10-29 10:14:46.550516 | ubuntu-xenial | 2018-10-29 10:14:45,457 INFO 
[nova.api.openstack.requestlog] 127.0.0.1 "POST 
/v2.1/6f70656e737461636b20342065766572/servers" status: 202 len: 480 
microversion: 2.64 time: 0.182153
2018-10-29 10:14:46.550841 | ubuntu-xenial | 2018-10-29 10:14:45,485 INFO 
[nova.api.openstack.requestlog] 127.0.0.1 "GET 
/v2.1/6f70656e737461636b20342065766572/servers/e5f8520d-9f2f-4895-bcf8-485e50898bd0"
 status: 200 len: 1806 microversion: 2.64 time: 0.023981
2018-10-29 10:14:46.551172 | ubuntu-xenial | 2018-10-29 10:14:45,639 INFO 
[nova.api.openstack.placement.requestlog] 127.0.0.1 "GET 
/placement/allocation_candidates?limit=1000&resources=DISK_GB%3A40%2CMEMORY_MB%3A4096%2CVCPU%3A2"
 status: 200 len: 467 microversion: 1.29
2018-10-29 10:14:46.551333 | ubuntu-xenial | 2018-10-29 10:14:45,657 INFO 
[nova.filters] Filter ServerGroupAffinityFilter returned 0 hosts
2018-10-29 10:14:46.552003 | ubuntu-xenial | 2018-10-29 10:14:45,657 INFO 
[nova.filters] Filtering removed all hosts for the request with instance ID 
'e5f8520d-9f2f-4895-bcf8-485e50898bd0'. Filter results: ['RetryFilter: (start: 
1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeFilter: 
(start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 
'ImagePropertiesFilter: (start: 1, end: 1)', 'ServerGroupAntiAffinityFilter: 
(start: 1, end: 1)', 'ServerGroupAffinityFilter: (start: 1, end: 0)']
2018-10-29 10:14:46.552140 | ubuntu-xenial | 2018-10-29 10:14:45,658 ERROR 
[nova.conductor.manager] Failed to schedule instances
2018-10-29 10:14:46.552209 | ubuntu-xenial | Traceback (most recent call 
last):
2018-10-29 10:14:46.552345 | ubuntu-xenial |   File 
"nova/conductor/manager.py", line 1255, in schedule_and_build_instances
2018-10-29 10:14:46.552423 | ubuntu-xenial | instance_uuids, 
return_alternates=True)
2018-10-29 10:14:46.552545 | ubuntu-xenial |   File 
"nova/conductor/manager.py", line 750, in _schedule_instances
2018-10-29 10:14:46.552626 | ubuntu-xenial | 
return_alternates=return_alternates)
2018-10-29 10:14:46.552740 | ubuntu-xenial |   File 
"nova/scheduler/utils.py", line 953, in wrapped
2018-10-29 10:14:46.552810 | ubuntu-xenial | return func(*args, 
**kwargs)
2018-10-29 10:14:46.552942 | ubuntu-xenial |   File 
"nova/scheduler/client/__init__.py", line 53, in select_destinations
2018-10-29 10:14:46.553051 | ubuntu-xenial | instance_uuids, 
return_objects, return_alternates)
2018-10-29 10:14:46.553167 | ubuntu-xenial |   File 
"nova/scheduler/client/__init__.py", line 37, in __run_method
2018-10-29 10:14:46.553268 | ubuntu-xenial | return 
getattr(self.instance, __name)(*args, **kwargs)
2018-10-29 10:14:46.553387 | ubuntu-xenial |   File 
"nova/scheduler/client/query.py", line 42, in select_destinations
2018-10-29 10:14:46.553483 | ubuntu-xenial | instance_uuids, 
return_objects, return_alternates)
2018-10-29 10:14:46.553596 | ubuntu-xenial |   File 
"nova/scheduler/rpcapi.py", line 158, in select_destinations
2018-10-29 10:14:46.553703 | ubuntu-xenial | return cctxt.call(ctxt, 
'select_destinations', **msg_args)
2018-10-29 10:14:46.553934 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/rpc/client.py",
 line 179, in call
2018-10-29 10:14:46.553995 | ubuntu-xenial | retry=self.retry)
2018-10-29 10:14:46.554235 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/transport.py",
 line 128, in _send
2018-10-29 10:14:46.554280 | ubuntu-xenial | retry=retry)
2018-10-29 10:14:46.554693 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_fake.py",
 line 222, in send
2018-10-29 10:14:46.554906 | ubuntu-xenial | return self._send(target, 
ctxt, message, wait_for_reply, timeout)
2018-10-29 10:14:46.555228 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_fake.py",
 line 209, in _send
2018-10-29 10:14:46.555319 | ubuntu-xenial | raise failure
2018-10-29 10:14:46.555485 | ubuntu-xenial | NoValidHost: No valid host was 
found. There are not enough hosts available.
2018-10-29 10:14:46.555816 | ubuntu-xenial | 2018-10-29 10:14:45,697 
WARNING [nova.scheduler.utils] Failed to compute_task_build_instances: No valid 
host was found. There are not enough hosts available.
2018-10-29 10:14:46.556006 | ubuntu-xenial | 2018-10-29 10:14:45,698 
WARNING [nova.scheduler.utils] Setting instance to ERROR state.

Looks like the test creates two 

[Yahoo-eng-team] [Bug 1800157] Re: privsep: lack of capabilities on kernel 4.15

2018-10-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/613591
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=32cc8b63d7bbe5cfc83b82a058d1c5832980f290
Submitter: Zuul
Branch: master

commit 32cc8b63d7bbe5cfc83b82a058d1c5832980f290
Author: Oleg Bondarev 
Date:   Fri Oct 26 18:02:27 2018 +0400

Add capabilities for privsep

CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH were added
(like in nova) to fix agents on kernel 4.15.
Please see bug for details

Change-Id: Ieed6f5f6906036cdeaf2c3d96350eeae9559c0c7
Closes-Bug: #1800157
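
For illustration, the fix amounts to extending the privsep context's
capability list; a sketch using oslo.privsep (field values assumed, not
the exact neutron diff):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN,
                      caps.CAP_NET_ADMIN,
                      # Added so the daemon can stat()/append root-owned
                      # files like /var/log/neutron/neutron.log on 4.15:
                      caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH])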


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1800157

Title:
  privsep: lack of capabilities on kernel 4.15

Status in neutron:
  Fix Released

Bug description:
  l3 and dhcp agents are not functioning on kernel 4.15 due to privsep
  errors:

  2018-10-25 09:10:38,747.747 24060 INFO oslo.privsep.daemon [-] Running 
privsep helper: ['sudo', '/usr/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', 
'/etc/neutron/l3_agent.ini', '--config-file', '/etc/neutron/fwaas_driver.ini', 
'--config-file', '/etc/neutron/neutron.conf', '--privsep_context', 
'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmpS5k5y2/privsep.sock']
  2018-10-25 09:10:39,361.361 24060 WARNING oslo.privsep.daemon [-] privsep 
log: Error in sys.excepthook:
  2018-10-25 09:10:39,363.363 24060 WARNING oslo.privsep.daemon [-] privsep 
log: Traceback (most recent call last):
  2018-10-25 09:10:39,363.363 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/dist-packages/oslo_log/log.py", line 193, in 
logging_excepthook
  2018-10-25 09:10:39,364.364 24060 WARNING oslo.privsep.daemon [-] privsep 
log: getLogger(product_name).critical('Unhandled error', **extra)
  2018-10-25 09:10:39,365.365 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 1481, in critical
  2018-10-25 09:10:39,365.365 24060 WARNING oslo.privsep.daemon [-] privsep 
log: self.logger.critical(msg, *args, **kwargs)
  2018-10-25 09:10:39,366.366 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 1212, in critical
  2018-10-25 09:10:39,366.366 24060 WARNING oslo.privsep.daemon [-] privsep 
log: self._log(CRITICAL, msg, args, **kwargs)
  2018-10-25 09:10:39,367.367 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 1286, in _log
  2018-10-25 09:10:39,367.367 24060 WARNING oslo.privsep.daemon [-] privsep 
log: self.handle(record)
  2018-10-25 09:10:39,368.368 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 1296, in handle
  2018-10-25 09:10:39,368.368 24060 WARNING oslo.privsep.daemon [-] privsep 
log: self.callHandlers(record)
  2018-10-25 09:10:39,369.369 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 1336, in callHandlers
  2018-10-25 09:10:39,370.370 24060 WARNING oslo.privsep.daemon [-] privsep 
log: hdlr.handle(record)
  2018-10-25 09:10:39,370.370 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/__init__.py", line 759, in handle
  2018-10-25 09:10:39,371.371 24060 WARNING oslo.privsep.daemon [-] privsep 
log: self.emit(record)
  2018-10-25 09:10:39,371.371 24060 WARNING oslo.privsep.daemon [-] privsep 
log:   File "/usr/lib/python2.7/logging/handlers.py", line 414, in emit
  2018-10-25 09:10:39,372.372 24060 WARNING oslo.privsep.daemon [-] privsep 
log: sres = os.stat(self.baseFilename)
  2018-10-25 09:10:39,372.372 24060 WARNING oslo.privsep.daemon [-] privsep 
log: OSError: [Errno 13] Permission denied: '/var/log/neutron/neutron.log'
  ...
  24060 ERROR neutron.agent.l3.agent FailedToDropPrivileges: Privsep daemon 
failed to start

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1800157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713574] Re: python 3 errors with memcache enabled

2018-10-29 Thread Colleen Murphy
I don't think this should have been triaged for keystoneauth; the fix
for this was merged in keystonemiddleware.

** Changed in: keystoneauth
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1713574

Title:
  python 3 errors with memcache enabled

Status in OpenStack Identity (keystone):
  Invalid
Status in keystoneauth:
  Invalid
Status in keystonemiddleware:
  Fix Released

Bug description:
  Hi, we are using gnocchi 4 running the following:

  keystoneauth1 (3.1.0)
  keystonemiddleware (4.14.0)
  python-keystoneclient (3.13.0)

  with python 3.5.4

  on a configuration file like this :

  [keystone_authtoken]
  signing_dir = /var/cache/gnocchi
  project_domain_name = default
  user_domain_name = default
  signing_dir = /var/cache/gnocchi
  auth_uri = http://yourmomkeystone.com:5000/v3
  auth_url = http://yourmomkeystone.com:35357/v3
  project_name = admin
  password = porotito
  username = cloudadmin
  auth_type = password
  auth_type = password
  memcached_servers = yourmommecached:11211
  insecure=true
  endpoint_type = internal
  region_name = yourmomregion
  memcache_security_strategy = ENCRYPT
  memcache_secret_key = lalalalalalaalala

  After the api starts, the token is asked successfully, but we have
  this stacktrace when trying to use memcached.

  2017-08-28 20:12:41,029 [7] CRITICAL root: Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in 
call_func
  return self.func(req, *args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/oslo_middleware/base.py", line 
131, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1316, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1280, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python3.5/site-packages/paste/urlmap.py", line 216, in 
__call__
  return app(environ, start_response)
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in 
call_func
  return self.func(req, *args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/oslo_middleware/base.py", line 
131, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1316, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python3.5/site-packages/webob/request.py", line 1280, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 131, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python3.5/site-packages/webob/dec.py", line 196, in 
call_func
  return self.func(req, *args, **kwargs)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 331, in __call__
  response = self.process_request(req)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 622, in process_request
  resp = super(AuthProtocol, self).process_request(request)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 404, in process_request
  allow_expired=allow_expired)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 434, in _do_fetch_token
  data = self.fetch_token(token, **kwargs)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 736, in fetch_token
  cached = self._cache_get_hashes(token_hashes)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/__init__.py",
 line 719, in _cache_get_hashes
  cached = self._token_cache.get(token)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/_cache.py",
 line 212, in get
  key, context = self._get_cache_key(token_id)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/_cache.py",
 line 268, in _get_cache_key
  self._security_strategy)
File 
"/usr/local/lib/python3.5/site-packages/keystonemiddleware/auth_token/_memcache_crypt.py",
 line 101, in derive_keys
  digest = hmac.new(secret, token + strategy, HASH_FUNCTION).digest()
  TypeError: Can't convert 'bytes' object to str implicitly
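
  The failing call is the hmac.new() in derive_keys(): on Python 3 it
  requires bytes, so the fix pattern is to encode any str inputs first.
  A sketch, with the hash function assumed:

      import hashlib
      import hmac

      def derive_digest(secret, token, strategy,
                        hash_function=hashlib.sha256):
          # hmac.new() on Python 3 rejects str; normalize to bytes.
          def to_bytes(value):
              return value.encode('utf-8') if isinstance(value, str) else value
          return hmac.new(to_bytes(secret),
                          to_bytes(token) + to_bytes(strategy),
                          hash_function).digest()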

  Any help?

To manage notifications about 

[Yahoo-eng-team] [Bug 1781039] Re: GCE cloudinit and ubuntu keys from metadata to ubuntu authorized_keys

2018-10-29 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.23

---
cloud-init (0.7.5-0ubuntu1.23) trusty; urgency=medium

  - debian/control: added python-six dependency.
  - debian/patches/lp-1781039-gce-datasource-update.patch:
Backport GCE datasource functionality from Xenial (LP: #1781039).

 -- Shane Peters   Tue, 06 Sep 2018 17:57:23
-0400

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1781039

Title:
  GCE cloudinit and ubuntu keys from metadata to ubuntu authorized_keys

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released

Bug description:
  [Impact]

   * Per documentation at
  https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys ssh keys for
  cloudinit and ubuntu users should both be added to the 'ubuntu' users
  authorized_keys file.

   * This works fine in Xenial (16.04) and higher, but doesn't work for
  Trusty (14.04).

  
  [Test Case]

   * Create a file that contains ssh public keys

 $ cat googlekeys
 test:ssh-rsa  t...@example.com
 ubuntu:ssh-rsa  t...@example.com
 cloudinit:ssh-rsa  t...@example.com

* Create an ubuntu 14.04 instance

  gcloud compute instances create ubuntu1404cloudinit --image-family 
ubuntu-1404-lts --image-project ubuntu-os-cloud 
--metadata-from-file=ssh-keys=googlekeys --metadata=block-project-ssh-keys=True

* Create an ubuntu 16.04 instance

  gcloud compute instances create ubuntu1604cloudinit --image-family 
ubuntu-1604-lts --image-project ubuntu-os-cloud 
--metadata-from-file=ssh-keys=googlekeys --metadata=block-project-ssh-keys=True
  
* Notice that the ubuntu user in the ubuntu 14.04 instance contains no keys 
from cloud-init (the keys there are added by the google daemon):
  
  $ sudo cat /home/ubuntu/.ssh/authorized_keys
  # Added by Google
  ssh-rsa  
t...@example.com

* However, in 16.04,

  $ sudo cat /home/ubuntu/.ssh/authorized_keys
  ssh-rsa  t...@example.com
  ssh-rsa  t...@example.com
  # Added by Google
  ssh-rsa  
t...@example.com

  
  [Regression Potential] 

   * DataSourceGCE.py is heavily modified to fix this behavior in 14.04.
  That said, there is a medium amount of regression potential when using
  the GCE datasource. More specifically, there is now stricter checking
  of the metadata source when used (platform_check=True).

   * Significant testing has been completed via the Google Compute
  platform as well as other non-GCE datasources (lxd) to confirm
  functionality and to test for possible regressions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1781039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800441] [NEW] Unexpected exception in API method: MessagingTimeout: Timed out waiting for a reply to message ID

2018-10-29 Thread chinassh1209
Public bug reported:

Hello, I deployed all of Rocky on a single all-in-one host. The problem
now is that I have no access to noVNC. Everything else is okay.

Some system versions and information:

[root@all-in-one-202 ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

[root@all-in-one-202 ~]# uname -a
Linux all-in-one-202 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 
2018 x86_64 x86_64 x86_64 GNU/Linux

[root@all-in-one-202 ~]# rpm -qa | grep openstack-rocky
centos-release-openstack-rocky-1-1.el7.centos.noarch

[root@all-in-one-202 ~]# rpm -qa | grep memcache
memcached-1.5.6-1.el7.x86_64
python-memcached-1.58-1.el7.noarch

[root@all-in-one-202 ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""

[root@all-in-one-202 ~]# egrep -v "^#|^$" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Admin@10.100.26.202
my_ip = 10.100.26.202
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
enable_network_quota=false
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:Admin@10.100.26.202/nova_api
[barbican]
[cache]
memcached_servers=10.100.26.202:11211
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:Admin@10.100.26.202/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://10.100.26.202:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://10.100.26.202:5000/v3
memcached_servers = 10.100.26.202:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Admin
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://10.100.26.202:9696
auth_url = http://10.100.26.202:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Admin
service_metadata_proxy = true
metadata_proxy_shared_secret = Admin
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.100.26.202:5000/v3
username = placement
password = Admin
[placement_database]
connection = mysql+pymysql://placement:Admin@10.100.26.202/placement
[powervm]
[profiler]
[quota]
instances=-1
cores=-1
ram=-1
floating_ips=-1
metadata_items=-1
injected_files=-1
injected_file_content_bytes=-1
security_groups=-1
security_group_rules=-1
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
server_listen = 10.100.26.202
server_proxyclient_address = 10.100.26.202
novncproxy_base_url = http://10.100.26.202:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

[root@all-in-one-202 ~]# tail -200f /var/log/nova/nova-api.log

2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi 
[req-155d3560-8788-44be-ac3d-84b99baa1196 3e674e62b3cf490abbba13bab06449e2 
63ff15cc51b7432a845633576b5183d6 - default default] Unexpected exception in API 
method: MessagingTimeout: Timed out waiting for a reply to message ID 
991b3e95605c4ac6a33ba0a8ed722a4c
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 801, in 
wrapped
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 110, 
in wrapper
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/remote_consoles.py",
 line 52, in get_vnc_console
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi console_type)
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 196, in wrapped
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 186, in inner
2018-10-29 17:26:13.443 16284 ERROR nova.api.openstack.wsgi return f(self, 
context, instance, *args, **

[Yahoo-eng-team] [Bug 1800435] [NEW] Updating metadata while filtering existing ones causes a problem

2018-10-29 Thread Wangliangyu
Public bug reported:

1. Open an update metadata page, such as a flavor's;
2. Add two custom metadata items, such as m1='v1' and m2='';

Now the submit button is gray.

3. Filter the existing metadata for [m1], so that only m1 remains on the right side.

Now the submit button is green and the form can be submitted.

4. Submit.

Now the item's metadata is m1=v1 and m2=null.

As shown above, the submit button's state should not change because of filtering.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1800435

Title:
  Updating metadata while filtering existing ones causes a problem

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Open an update metadata page, such as a flavor's;
  2. Add two custom metadata items, such as m1='v1' and m2='';

  Now the submit button is gray.

  3. Filter the existing metadata for [m1], so that only m1 remains on the right side.

  Now the submit button is green and the form can be submitted.

  4. Submit.

  Now the item's metadata is m1=v1 and m2=null.

  As shown above, the submit button's state should not change because of filtering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1800435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp