[Yahoo-eng-team] [Bug 1762733] Re: l3agentscheduler doesn't return a response body with POST /v2.0/agents/{agent_id}/l3-routers

2018-06-15 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762733

Title:
  l3agentscheduler doesn't return a response body with POST
  /v2.0/agents/{agent_id}/l3-routers

Status in neutron:
  Expired

Bug description:
  As discussed in [1], the

  POST /v2.0/agents/{agent_id}/l3-routers

  does not return a response body. This seems inconsistent with our
  other APIs as POSTs typically return the created resource. This is
  even true with other APIs that 'add' something to a resource.

  It seems we should consider returning the resource here; I suspect
  it's just a few LOC changes in the API.

  [1] https://review.openstack.org/#/c/543408/6/api-ref/source/v2/l3-agent-scheduler.inc@76
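
  For illustration, a minimal sketch of the call as it behaves today (the
  token and UUIDs are placeholders, and the endpoint assumes neutron's
  default port):

    import requests

    TOKEN = '...'      # valid keystone token (placeholder)
    AGENT_ID = '...'   # L3 agent UUID (placeholder)
    ROUTER_ID = '...'  # router UUID (placeholder)

    # schedule the router to the agent
    resp = requests.post(
        'http://controller:9696/v2.0/agents/%s/l3-routers' % AGENT_ID,
        headers={'X-Auth-Token': TOKEN},
        json={'router_id': ROUTER_ID})
    print(resp.status_code)  # 201 (Created)
    print(resp.text)         # currently empty; the proposal is to return
                             # the scheduled router resource instead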

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1762733/+subscriptions


[Yahoo-eng-team] [Bug 1764385] Re: no intimation to the admin that nova-api is stopped during execution of polling compute

2018-06-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1764385

Title:
  no intimation to the admin that nova-api is stopped during execution
  of polling compute

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Since polling_compute is a background process, there is no intimation
  to the admin that the nova-api service has stopped during the execution
  of polling compute. The error can only be detected by checking the
  /var/log/messages logs; however, the admin must be informed about the
  failure of the service.

  There must be a mechanism (e.g. email) to notify the admin that the
  nova-api service is down.
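
  As a rough illustration of such a mechanism, a standalone watchdog sketch
  (not part of nova; it assumes a systemd host, a local MTA, and placeholder
  addresses, and the unit name varies per distribution):

    import smtplib
    import subprocess
    from email.message import EmailMessage

    # poll the service state via systemd
    state = subprocess.run(['systemctl', 'is-active', 'openstack-nova-api'],
                           capture_output=True, text=True).stdout.strip()
    if state != 'active':
        # mail the admin that the service is not running
        msg = EmailMessage()
        msg['Subject'] = 'nova-api is %s' % state
        msg['From'] = 'watchdog@example.com'
        msg['To'] = 'admin@example.com'
        msg.set_content('The nova-api service is not running on this host.')
        smtplib.SMTP('localhost').send_message(msg)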

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1764385/+subscriptions


[Yahoo-eng-team] [Bug 1773225] Re: placement needs to stop using accept.best_match from webob it is deprecated

2018-06-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/575127
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=450444a7829506f5539bf25ecf6cee1c5d82c48d
Submitter: Zuul
Branch: master

commit 450444a7829506f5539bf25ecf6cee1c5d82c48d
Author: Chris Dent 
Date:   Wed Jun 13 15:16:35 2018 +0100

[placement] replace deprecated accept.best_match

Webob has deprecated the best_match[1] method on accept headers and now
spews warnings when it sees it.

This change fixes it by using the equivalent (but more correct with
regard to the relevant RFCs[2]) acceptable_offers[3] method.

Existing unit tests in placement/test_util.py cover this change.

[1] https://docs.pylonsproject.org/projects/webob/en/stable/api/webob.html#webob.acceptparse.AcceptValidHeader.best_match
[2] https://tools.ietf.org/html/rfc7231#section-5.3.2
[3] https://docs.pylonsproject.org/projects/webob/en/stable/api/webob.html#webob.acceptparse.AcceptValidHeader.acceptable_offers

Change-Id: Ie4d81fa178b3ed6b2a7b450b4978009486f07810
Closes-Bug: #1773225


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773225

Title:
  placement needs to stop using accept.best_match from webob it is
  deprecated

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Modern webob has improved its management of accept headers to be more
  in alignment with the HTTP RFCs (see bug
  https://bugs.launchpad.net/nova/+bug/1765748 ), deprecating their old
  handling:

  DeprecationWarning: The behavior of AcceptValidHeader.best_match is
  currently being maintained for backward compatibility, but it will be
  deprecated in the future, as it does not conform to the RFC.

  Eventually placement (in
  nova.api.openstack.placement.util:check_accept) should be updated to
  use the new way.

  Creating a separate bug to be task oriented.
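
  For reference, a minimal before/after sketch with webob (the offers list
  is an example only):

    from webob import Request

    req = Request.blank('/', headers={'Accept': 'application/json'})
    offers = ['application/json', 'text/html']

    # deprecated: emits a DeprecationWarning on modern webob
    best = req.accept.best_match(offers)

    # RFC-conformant replacement: returns (offer, quality) pairs, best first
    acceptable = req.accept.acceptable_offers(offers)
    best = acceptable[0][0] if acceptable else None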

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1773225/+subscriptions


[Yahoo-eng-team] [Bug 1776797] Re: test_convert_image_with_errors fails with OSError: [Errno 2] No such file or directory

2018-06-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/575305
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5e36e3342eb59b4e77f4c8408190e606cabdf75c
Submitter: Zuul
Branch: master

commit 5e36e3342eb59b4e77f4c8408190e606cabdf75c
Author: Corey Bryant 
Date:   Wed Jun 13 21:58:49 2018 -0400

Fix execute mock for test_convert_image_with_errors

This was introduced in https://review.openstack.org/#/c/554437/
with the move of image conversion to privsep.

Change-Id: Ia68b0124ce59a256dfee19b6ab03253969a1
Closes-Bug: #1776797


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776797

Title:
  test_convert_image_with_errors fails with OSError: [Errno 2] No such
  file or directory

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  test_convert_image_with_errors fails with OSError: [Errno 2] No such
  file or directory

  See traceback here: https://paste.ubuntu.com/p/bQ6Z9QPXCY/

  where args = ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f',
  'qcow2', '/path/that/does/not/exist',
  '/other/path/that/does/not/exist']

  It seems the execute mock is incorrect.
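
  A hedged sketch of the usual shape of such a fix: patch execute where the
  code under test now looks it up (that module changed when image conversion
  moved to privsep), so the real qemu-img binary is never run. The patch
  target below is illustrative, not the actual one from the fix:

    from unittest import mock

    from oslo_concurrency import processutils

    @mock.patch('oslo_concurrency.processutils.execute')
    def check_handles_conversion_error(mock_execute):
        # simulate qemu-img convert failing instead of executing it
        mock_execute.side_effect = processutils.ProcessExecutionError(
            exit_code=1, stderr='qemu-img: error')
        # the conversion code under test would be invoked here and
        # asserted to surface the error
        pass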

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776797/+subscriptions


[Yahoo-eng-team] [Bug 1777190] [NEW] All neutron-tempest-plugin-api jobs for stable/queens fails

2018-06-15 Thread Slawek Kaplonski
Public bug reported:

All runs of the neutron-tempest-plugin-api job for the stable/queens branch
of neutron fail.
This is because neutron is defined to run jobs from neutron-tempest-repo,
which uses the master branch. Queens is missing some extensions, so some API
tests fail.

The solution is to use the neutron-tempest-plugin-jobs-stable template from
neutron_tempest_repo in the neutron .zuul.conf file on the stable/queens
branch.
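
A hedged sketch of what that .zuul.conf change could look like (the template
name comes from this report; the surrounding layout is assumed):

    - project:
        templates:
          - neutron-tempest-plugin-jobs-stable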

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1777190

Title:
  All neutron-tempest-plugin-api jobs for stable/queens fails

Status in neutron:
  Confirmed

Bug description:
  All runs of the neutron-tempest-plugin-api job for the stable/queens
  branch of neutron fail.
  This is because neutron is defined to run jobs from neutron-tempest-repo,
  which uses the master branch. Queens is missing some extensions, so some
  API tests fail.

  The solution is to use the neutron-tempest-plugin-jobs-stable template
  from neutron_tempest_repo in the neutron .zuul.conf file on the
  stable/queens branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1777190/+subscriptions


[Yahoo-eng-team] [Bug 1777157] [NEW] cold migration fails for ceph volume instances

2018-06-15 Thread Vladislav Belogrudov
Public bug reported:

Queens release.

Running an instance from a Ceph volume; in Horizon: 'migrate'.

The instance ends up in ERROR state. Horizon reports:

Error: Failed to perform requested operation on instance "instance3", the
instance has an error status: Please try again later [Error: list index out
of range].

Yet another instance got:

Error: Failed to perform requested operation on instance "instance4", the
instance has an error status: Please try again later [Error: Conflict
updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
{'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777157

Title:
  cold migration fails for ceph volume instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Queens release.

  Running an instance from a Ceph volume; in Horizon: 'migrate'.

  The instance ends up in ERROR state. Horizon reports:

  Error: Failed to perform requested operation on instance "instance3", the
  instance has an error status: Please try again later [Error: list index
  out of range].

  Yet another instance got:

  Error: Failed to perform requested operation on instance "instance4", the
  instance has an error status: Please try again later [Error: Conflict
  updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
  {'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777157/+subscriptions


[Yahoo-eng-team] [Bug 1776255] Re: DVR scheduling checks wrong port binding profile for host in live-migration

2018-06-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/574370
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e356345261737162aab90bf7338d931d64ae524e
Submitter: Zuul
Branch: master

commit e356345261737162aab90bf7338d931d64ae524e
Author: Kailun Qin 
Date:   Tue Jun 12 10:22:32 2018 +0800

Fix DVR scheduling checks wrong profile for host

When DVR scheduling in live-migration, the current implementation in DVR
serviceable ports checking on host performs a "contains" operation of
sub-string match which checks the wrong port binding profile for host
(i.e. compute-1 will also match compute-10).

Add quotes to force an exact match of the host name in the port binding
profile dictionary to address this issue.

Closes-Bug: #1776255
Change-Id: I0d2bd9b9ff0aa58a7cce1b8da2a5f21ac6b38c57
Signed-off-by: Kailun Qin 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776255

Title:
  DVR scheduling checks wrong port binding profile for host in live-
  migration

Status in neutron:
  Fix Released

Bug description:
  When live-migrating, active l3 agents on compute nodes will request a
  router sync against DVR routers impacted by the migration. This will
  check for the existence of DVR serviceable ports on the host to further
  filter the routers applicable to it. However, the current implementation
  performs a "contains" operation on the port binding profile for the host,
  which produces a LIKE expression that tests against a match in the middle
  of a string value: column LIKE '%' || <other> || '%' [1]. This leads to
  wrong filtering due to the sub-string match (i.e. compute-1 will match
  compute-10).

  [1] http://docs.sqlalchemy.org/en/latest/orm/internals.html?highlight=contains#sqlalchemy.orm.attributes.QueryableAttribute.contains

  Example binding profile (dict) for host:
  {
"binding:profile": {"migrating_to": "compute-1"}
  }
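
  A minimal illustration of the mismatch in plain Python (no database
  needed), since the profile is serialized as a JSON string:

    profile = '{"migrating_to": "compute-10"}'
    print('compute-1' in profile)    # True  -- substring match, wrong host
    print('"compute-1"' in profile)  # False -- quoted form matches exactly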

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776255/+subscriptions


[Yahoo-eng-team] [Bug 1777144] [NEW] VMware vSphere in nova for queens still reports nova-network

2018-06-15 Thread Fairbanks.
Public bug reported:

- [x] This doc is inaccurate in this way:

For Queens it still reports that nova-network can be used, even though
nova-network has been deprecated for a while and requires cells v1, which is
not recommended.

Either nova-network needs to be removed from these docs, or a better
explanation added that it is not the recommended way to use.

Maybe even update the documentation according to the current status of
the VMware integration.

---
Release: 17.0.6.dev8 on 2018-06-14 13:19
SHA: d26dc0ca03e9cc9a04ac02d88ba2d2867340f5cd
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/configuration/hypervisor-vmware.rst
URL: https://docs.openstack.org/nova/queens/admin/configuration/hypervisor-vmware.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777144

Title:
  VMware vSphere in nova for queens still reports nova-network

Status in OpenStack Compute (nova):
  New

Bug description:
  - [x] This doc is inaccurate in this way:

  For Queens it still reports that nova-network can be used, even though
  nova-network has been deprecated for a while and requires cells v1,
  which is not recommended.

  Either nova-network needs to be removed from these docs, or a better
  explanation added that it is not the recommended way to use.

  Maybe even update the documentation according to the current status of
  the VMware integration.

  ---
  Release: 17.0.6.dev8 on 2018-06-14 13:19
  SHA: d26dc0ca03e9cc9a04ac02d88ba2d2867340f5cd
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/configuration/hypervisor-vmware.rst
  URL: https://docs.openstack.org/nova/queens/admin/configuration/hypervisor-vmware.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777144/+subscriptions


[Yahoo-eng-team] [Bug 1777129] [NEW] live migration doc says "convergence" while config uses "converge"

2018-06-15 Thread do3meli
Public bug reported:

The documentation mentions the configuration option
live_migration_permit_auto_convergence while the actual config file uses
live_migration_permit_auto_converge. There seem to be some spelling issues
in the titles, text, and config options.
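
For reference, the option as it is actually spelled in nova.conf (the docs
should be made to match):

    [libvirt]
    live_migration_permit_auto_converge = True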


---
Release: 16.1.4.dev15 on 2018-06-04 05:37
SHA: 2c9c4a09cb5fd31ccff368315534eaa788e90e67
Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/live-migration-usage.rst
URL: https://docs.openstack.org/nova/pike/admin/live-migration-usage.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777129

Title:
  live migration doc says "convergence" while config uses "converge"

Status in OpenStack Compute (nova):
  New

Bug description:
  The documentation mentions the configuration option
  live_migration_permit_auto_convergence while the actual config file
  uses live_migration_permit_auto_converge. There seem to be some
  spelling issues in the titles, text, and config options.

  
  ---
  Release: 16.1.4.dev15 on 2018-06-04 05:37
  SHA: 2c9c4a09cb5fd31ccff368315534eaa788e90e67
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/live-migration-usage.rst
  URL: https://docs.openstack.org/nova/pike/admin/live-migration-usage.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777129/+subscriptions


[Yahoo-eng-team] [Bug 1775947] Re: tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest failing

2018-06-15 Thread Slawek Kaplonski
** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775947

Title:
  tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest
  failing

Status in tempest:
  Confirmed

Bug description:
  Since a few days I see that
  tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
  is failing in the neutron-tempest-dvr job.

  Example of failure:
  http://logs.openstack.org/90/572690/2/check/neutron-tempest-dvr/45ec391/logs/testr_results.html.gz

  It happened at least 3 times on June 8:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22exit%20status%3A%20137%2C%20stderr%3A%20Killed%5C%22%20AND%20build_name%3A%5C%22neutron-tempest-dvr%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1775947/+subscriptions


[Yahoo-eng-team] [Bug 1777123] [NEW] Nova 17.04 fails to create ephemeral storage with rbd driver

2018-06-15 Thread Dmitriy R.
Public bug reported:

Description
===

nova-compute is unable to create ephemeral storage due to a type error: it
passes a unicode object to the rbd.Image() class as the storage name, while
rbd.Image() accepts only a string value.


Steps to reproduce
==

Try to create an instance without a cinder drive, or with a flavor that has
swap defined. nova-compute must also be configured to interact with Ceph.


Steps to fix the problem
==
In nova/virt/libvirt/storage/rbd_utils.py, make the following change:
- L69:  self.volume = tpool.Proxy(rbd.Image(ioctx, name,
+ L69:  self.volume = tpool.Proxy(rbd.Image(ioctx, str(name),
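
In context, a hedged reconstruction of the fixed line (the surrounding
keyword arguments are assumed for illustration, not quoted from the source):

    # rbd.Image() on Python 2 rejects unicode names, so coerce to str
    self.volume = tpool.Proxy(rbd.Image(ioctx, str(name),
                                        snapshot=snapshot,
                                        read_only=read_only))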


Expected result
===

The instance should be created with swap storage and/or an ephemeral drive
in Ceph.


Actual result
=
Instance creation fails with:

Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance bdd019bd-4127-4186-a1ee-f4b1891e1730.
  File "/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/conductor/manager.py", line 580, in build_instances
    raise exception.MaxRetriesExceeded(reason=msg)


Environment
===
1. I've installed OpenStack with OSA (OpenStack-Ansible) version 17.0.5, so
it is the OpenStack Queens release:
(nova-17.0.5) root@uacloud-nova01:~# pip freeze | grep nova
nova==17.0.4
nova-lxd==17.0.0
nova-powervm==6.0.2.dev3
python-novaclient==9.1.1
(nova-17.0.5) root@uacloud-nova01:~#
2. KVM hypervisor
3. CEPH storage
4. Neutron networking


Logs & Configs
==
In logs I see the following error:
2018-06-15 14:38:56.904 13411 INFO nova.compute.claims 
[req-b7277d7c-14b6-4f0c-97fb-bb29a6c379a5 cf35f21e050d462db0e0ecf20da2a9de 
d605898aeb654bfea6e18c3c0321d8a8 - 309afdf3dd6242338cd5eb4cda07f1d9 
309afdf3dd6242338cd5eb4cda07f1d9] [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] vcpu limit: 320.00 VCPU, free: 311.00 VCPU
2018-06-15 14:38:56.909 13411 INFO nova.compute.claims 
[req-b7277d7c-14b6-4f0c-97fb-bb29a6c379a5 cf35f21e050d462db0e0ecf20da2a9de 
d605898aeb654bfea6e18c3c0321d8a8 - 309afdf3dd6242338cd5eb4cda07f1d9 
309afdf3dd6242338cd5eb4cda07f1d9] [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] Claim successful on node 
uacloud-nova01.twinservers.net
2018-06-15 14:38:57.837 13411 INFO nova.virt.libvirt.driver 
[req-b7277d7c-14b6-4f0c-97fb-bb29a6c379a5 cf35f21e050d462db0e0ecf20da2a9de 
d605898aeb654bfea6e18c3c0321d8a8 - 309afdf3dd6242338cd5eb4cda07f1d9 
309afdf3dd6242338cd5eb4cda07f1d9] [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] Creating image
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager 
[req-b7277d7c-14b6-4f0c-97fb-bb29a6c379a5 cf35f21e050d462db0e0ecf20da2a9de 
d605898aeb654bfea6e18c3c0321d8a8 - 309afdf3dd6242338cd5eb4cda07f1d9 
309afdf3dd6242338cd5eb4cda07f1d9] [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] Instance failed to spawn: TypeError: name 
must be a string
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] Traceback (most recent call last):
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730]   File 
"/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2248, in _build_resources
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] yield resources
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730]   File 
"/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2031, in _build_and_run_instance
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] block_device_info=block_device_info)
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730]   File 
"/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 3072, in spawn
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] block_device_info=block_device_info)
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730]   File 
"/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 3450, in _create_image
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] fallback_from_host)
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730]   File 
"/openstack/venvs/nova-17.0.5/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 3541, in _create_and_inject_local_root
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance: 
bdd019bd-4127-4186-a1ee-f4b1891e1730] instance, size, fallback_from_host)
2018-06-15 14:38:57.899 13411 ERROR nova.compute.manager [instance:

[Yahoo-eng-team] [Bug 1777110] [NEW] horizon incorrectly shows backup to swift

2018-06-15 Thread Vladislav Belogrudov
Public bug reported:

Queens.

When running with the NFS backup driver, Horizon still thinks it is going to
use object storage, and even gives the user the ability to enter a container
name.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "backup.png"
   https://bugs.launchpad.net/bugs/1777110/+attachment/5152986/+files/backup.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1777110

Title:
  horizon incorrectly shows backup to swift

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Queens.

  When running with the NFS backup driver, Horizon still thinks it is
  going to use object storage, and even gives the user the ability to
  enter a container name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1777110/+subscriptions


[Yahoo-eng-team] [Bug 1777088] [NEW] controller fails NUMA topology requirements. The instance does not fit on this host. host_passes

2018-06-15 Thread men
Public bug reported:

openstack queens:


Turn on NUMA scheduling:
vi /etc/nova/nova.conf
enabled_filters =,NUMATopologyFilter


  
(openstack) flavor show p1
+----------------------------+--------------------------------------------------------------------------------------+
| Field                      | Value                                                                                |
+----------------------------+--------------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                    |
| access_project_ids         | None                                                                                 |
| disk                       | 10                                                                                   |
| id                         | ab9f4851-c4a0-48e4-affe-e780ad8a87a1                                                 |
| name                       | p1                                                                                   |
| os-flavor-access:is_public | True                                                                                 |
| properties                 | hw:mem_page_size='1024', hw:numa_cpus.1='20', hw:numa_mem.1='512', hw:numa_nodes='1' |
| ram                        | 512                                                                                  |
| rxtx_factor                | 1.0                                                                                  |
| swap                       |                                                                                      |
| vcpus                      | 1                                                                                    |
+----------------------------+--------------------------------------------------------------------------------------+


[root@controller ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 130669 MB
node 0 free: 116115 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 131072 MB
node 1 free: 114675 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 


Error log
tail -f /var/log/nova/nova-conductor.log ::

 default default] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 154, in select_destinations
    allocation_request_version, return_alternates)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 91, in select_destinations
    allocation_request_version, return_alternates)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 243, in _schedule
    claimed_instance_uuids)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 280, in _ensure_sufficient_hosts
    raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
: NoValidHost_Remote: No valid host was found. There are not enough hosts available.


Error log
tail -f /var/log/nova/nova-scheduler.log::

2018-06-15 16:52:33.457 5829 DEBUG nova.virt.hardware 
[req-be251765-6c3b-46aa-ae05-6c2e12ae8661 7e909565a4b847fe81cd6d1cf778c893 
b2760ba26e5645bf9856669d560d91c7 - default default] Attempting to fit instance 
cell 
InstanceNUMACell(cpu_pinning_raw=None,cpu_policy=None,cpu_thread_policy=None,cpu_topology=,cpuset=set([0]),cpuset_reserved=None,id=0,memory=512,pagesize=1024)
 on host_cell 
NUMACell(cpu_usage=0,cpuset=set([8,9,10,11,12,13,14,15,24,25,26,27,28,29,30,31]),id=1,memory=131072,memory_usage=0,mempages=[NUMAPagesTopology,NUMAPagesTopology],pinned_cpus=set([]),siblings=[set([8,24]),set([14,30]),set([15,31]),set([11,27]),set([10,26]),set([12,28]),set([9,25]),set([13,29])])
 _numa_fit_instance_cell 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py:974
2018-06-15 16:52:33.458 5829 DEBUG nova.virt.hardware 
[req-be251765-6c3b-46aa-ae05-6c2e12ae8661 7e909565a4b847fe81cd6d1cf778c893 
b2760ba26e5645bf9856669d560d91c7 - default default] No pinning requested, 
considering limitations on usable cpu and memory _numa_fit_instance_cell 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py:1003
2018-06-15 16:52:33.459 5829 DEBUG nova.scheduler.filters.numa_topology_filter 
[req-be251765-6c3b-46aa-ae05-6c2e12ae8661 7e909565a4b847fe81cd6d1cf778c893 
b2760ba26e5645bf9856669d560d91c7 - default default] [instance: 
b1dff78a-ff24-4337-9cdb-edcecc9f9d00] controller, controller fails NUMA 
topology requirements. The instanc

[Yahoo-eng-team] [Bug 1777086] [NEW] Identity API v3 extensions (CURRENT) in Identity API Reference

2018-06-15 Thread 徐爱保
Public bug reported:


https://developer.openstack.org/api-ref/identity/v3-ext/#authenticate-with-identity-api
oauth1 /v3/auth/tokens
The reference does not document the request body; if you POST without a
body, the request fails. The expected body is:
{
    "auth": {
        "identity": {
            "methods": [
                "oauth1"
            ],
            "oauth1": {
                "id": "xx"
            }
        }
    }
}
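
A hedged usage sketch of posting that body (the endpoint is a placeholder,
and a real oauth1 request also carries the signed OAuth parameters):

    import requests

    body = {"auth": {"identity": {"methods": ["oauth1"],
                                  "oauth1": {"id": "xx"}}}}
    resp = requests.post('http://controller:5000/v3/auth/tokens', json=body)
    print(resp.status_code)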

---
Release: v3.10 on 'Sat Jun 9 01:55:42 2018, commit 0e24f91'
SHA: 
Source: Can't derive source file URL
URL: https://developer.openstack.org/api-ref/identity/v3-ext/#authenticate-with-identity-api

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1777086

Title:
  Identity API v3 extensions (CURRENT) in Identity API Reference

Status in OpenStack Identity (keystone):
  New

Bug description:
  
  
https://developer.openstack.org/api-ref/identity/v3-ext/#authenticate-with-identity-api
  oauth1 /v3/auth/tokens
  The reference does not document the request body; if you POST without a
  body, the request fails. The expected body is:
  {
      "auth": {
          "identity": {
              "methods": [
                  "oauth1"
              ],
              "oauth1": {
                  "id": "xx"
              }
          }
      }
  }

  ---
  Release: v3.10 on 'Sat Jun 9 01:55:42 2018, commit 0e24f91'
  SHA: 
  Source: Can't derive source file URL
  URL: https://developer.openstack.org/api-ref/identity/v3-ext/#authenticate-with-identity-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1777086/+subscriptions


[Yahoo-eng-team] [Bug 1191960] Re: force-delete of cinder volume errors with Can\'t remove open logical volume

2018-06-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/565703
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=8b8c5da59efb087295b676d4261f84dfadf62503
Submitter: Zuul
Branch: master

commit 8b8c5da59efb087295b676d4261f84dfadf62503
Author: Vishakha Agarwal 
Date:   Wed May 2 16:42:58 2018 +0530

Re-using the code of os brick cinder

To avoid errors during a force-delete of a logical volume, the cinder
library os-brick already uses 'udevadm settle'. Call the same library
from nova too.

Change-Id: I092afdd0409ab27187cf74cd1514e9e0c550d52c
Closes-Bug: #1191960


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1191960

Title:
  force-delete of cinder volume errors with Can\'t remove open logical
  volume

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As a consequence of Bug #1191431, a few volumes were left in the
  error_deleting state. A few of them cleared off by issuing cinder delete;
  however, a few of them errored out.

  1. When you try deleting such a volume from Horizon > Volumes > (check
  box) > Delete Volumes:
  Error: You do not have permission to delete volume: 

  2. When you try the 'Force Delete Volume' option against the suspected
  volume, the request gets submitted successfully; however, you will see the
  following error messages in /var/log/cinder/cinder-volume on the
  controller node:
  ProcessExecutionError: Unexpected error while running command. Command:
  sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvremove -f
  cinder-volumes/volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7
  Exit code: 5
  Stdout: ''
  Stderr: '  Can\'t remove open logical volume
  "volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7"\n'

  3. When you try to delete manually through the command line, you get the
  following error:
  lvremove -f /dev/cinder-volumes/volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7
  Can't remove open logical volume "volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7"

  
  Workaround
  1. The volume is left in the in-use state by the tgtd service, which
  causes cinder delete and force-delete not to work. Stop the service that
  is using it:
  service tgt stop
  lvremove /dev/cinder-volumes/volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7

  2. Now remove it through the cinder API or CLI:
  service tgt start
  cinder force-delete 078cd44b-7b39-4867-a1e9-78bb758ae0a7

  Note: lsof /dev/cinder-volumes/volume-078cd44b-7b39-4867-a1e9-78bb758ae0a7
  reported tgtd using it.

  
  Expected behavior: the force-delete option must address such anomalies.
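
  A hedged sketch of the shape of the merged fix (the actual change reuses
  os-brick's helper from cinder; the calls below are illustrative):

    from oslo_concurrency import processutils

    def remove_lv(lv_path):
        # let in-flight udev events settle so the LV is no longer held open
        processutils.execute('udevadm', 'settle', run_as_root=True)
        processutils.execute('lvremove', '-f', lv_path, run_as_root=True)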

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1191960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp