[Yahoo-eng-team] [Bug 1724771] Re: start image as instance via horizon failed

2017-10-19 Thread Markus Zoeller (markus_z)
@Robert Holling: 
Based on the information above, I think this is a setup/configuration issue. 
Feel free to re-open this issue if you think there is a bug in OpenStack. 

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1724771

Title:
  start image as instance via horizon failed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  environment:
  - fresh/initial installation of OpenStack Pike on Ubuntu 16.04.3 LTS
  - Horizon, Glance, Nova and Keystone as components on three different nodes, 
plus five compute nodes

  steps performed until the error:
  - downloaded the CirrOS image as described in the manual -> download worked, image is 
shown in Horizon
  - created a flavor (128 MB RAM, 1 GB HDD, ...) -> worked, flavor created
  - tried to start the image as an instance by clicking on , choosing a name 
and flavor -> failure with the following message:

  "Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible. 
  (HTTP 500) (Request-ID: req-xx)"

  I expected that the instance would start...

  If I hadn't gotten the message saying "... please report this at ..." I wouldn't 
have filed this here. 
  So, hopefully we can get an answer or a solution.

  THX & kind regards
  Robert

  infos:
  a) result of command $ dpkg -l | grep nova:

  ii  nova-api            2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - API frontend
  ii  nova-common         2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - common files
  ii  nova-conductor      2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - conductor service
  ii  nova-consoleauth    2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy     2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - NoVNC proxy
  ii  nova-placement-api  2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - placement API frontend
  ii  nova-scheduler      2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute - virtual machine scheduler
  ii  python-nova         2:16.0.0-0ubuntu2~cloud0   all  OpenStack Compute Python libraries
  ii  python-novaclient   2:9.1.0-0ubuntu1~cloud0    all  client library for OpenStack Compute API - Python 2.7

  b) hypervisor QEMU
  c) storage not implemented yet (CINDER t.b.d.)
  d) network not implemented yet (NEUTRON t.b.d)

  attachments:
  a) nova-api.log file as requested

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1724771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706399] [NEW] unit test 'test_get_volume_config' doesn't test anything useful

2017-07-25 Thread Markus Zoeller (markus_z)
Public bug reported:

Description
===
The unit test `test_get_volume_config` [1] asserts that a mock is equal
to itself:

    @mock.patch.object(volume_drivers.LibvirtFakeVolumeDriver,
                       'connect_volume')
    @mock.patch.object(volume_drivers.LibvirtFakeVolumeDriver, 'get_config')
    def test_get_volume_config(self, get_config, connect_volume):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'driver_volume_type': 'fake',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw'}}
        bdm = {'device_name': 'vdb',
               'disk_bus': 'fake-bus',
               'device_type': 'fake-type'}
        disk_info = {'bus': bdm['disk_bus'], 'type': bdm['device_type'],
                     'dev': 'vdb'}
        mock_config = mock.MagicMock()

        get_config.return_value = mock_config
        config = drvr._get_volume_config(connection_info, disk_info)
        get_config.assert_called_once_with(connection_info, disk_info)
        self.assertEqual(mock_config, config)


The assertion that makes the test meaningless is the last line:

        self.assertEqual(mock_config, config)


Steps to reproduce
==
$ .tox/py27/bin/python -m testtools.run 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_volume_config


Add these two lines to see that the objects are the same:

print("mock config: %s" % mock_config)
print("config: %s" % config)


Expected result
===
The `mock_config` should record how the result is expected to look, and the
created `config` object should be tested against that.


Actual result
=
`mock_config` and `config` are the very same python object.

$ .tox/py27/bin/python -m testtools.run 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_volume_config
Tests running...
mock config: 
config: 
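
For illustration only (this snippet is not part of the original report): a
minimal, self-contained sketch of why the assertion can never fail. The mocked
get_config hands back whatever object was configured as its return value, so
the test ends up comparing that object to itself.

    # Standalone sketch, assuming only the "mock" library is installed
    # (on Python 3 this would be: from unittest import mock).
    import mock

    def _get_volume_config(get_config, connection_info, disk_info):
        # stand-in for LibvirtDriver._get_volume_config, which simply
        # forwards to the volume driver's get_config()
        return get_config(connection_info, disk_info)

    get_config = mock.Mock()
    mock_config = mock.MagicMock()
    get_config.return_value = mock_config

    config = _get_volume_config(get_config, {'driver_volume_type': 'fake'}, {})
    assert config is mock_config  # always true, no matter what the real code does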


Environment
===
$ git log --oneline -1
87a0143 Merge "[placement] Flush RC_CACHE after each gabbit sequence"


References:
===
[1] 
https://github.com/openstack/nova/blob/bbe0f313bdfd30cc1c740709543b679567b42f0f/nova/tests/unit/virt/libvirt/test_driver.py#L6355-L6373

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1706399

Title:
  unit test 'test_get_volume_config' doesn't test anything useful

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The unit test `test_get_volume_config` [1] asserts that a mock is equal
  to itself:

      @mock.patch.object(volume_drivers.LibvirtFakeVolumeDriver,
                         'connect_volume')
      @mock.patch.object(volume_drivers.LibvirtFakeVolumeDriver, 'get_config')
      def test_get_volume_config(self, get_config, connect_volume):
          drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
          connection_info = {'driver_volume_type': 'fake',
                             'data': {'device_path': '/fake',
                                      'access_mode': 'rw'}}
          bdm = {'device_name': 'vdb',
                 'disk_bus': 'fake-bus',
                 'device_type': 'fake-type'}
          disk_info = {'bus': bdm['disk_bus'], 'type': bdm['device_type'],
                       'dev': 'vdb'}
          mock_config = mock.MagicMock()

          get_config.return_value = mock_config
          config = drvr._get_volume_config(connection_info, disk_info)
          get_config.assert_called_once_with(connection_info, disk_info)
          self.assertEqual(mock_config, config)


  The assertion that makes the test meaningless is the last line:

          self.assertEqual(mock_config, config)

  
  Steps to reproduce
  ==
  $ .tox/py27/bin/python -m testtools.run 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_volume_config

  
  Add these two lines to see that the objects are the same:

  print("mock config: %s" % mock_config)
  print("config: %s" % config)


  Expected result
  ===
  The `mock_config` should record how the result is expected to look, and the
  created `config` object should be tested against that.

  
  Actual result
  =
  `mock_config` and `config` are the very same python object.

  $ .tox/py27/bin/python -m testtools.run 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_volume_config
  Tests running...
  mock config: 
  config: 

  
  Environment
  ===
  $ git log --oneline -1
  87a0143 Merge "[placement] Flush RC_CACHE after each gabbit sequence"

  
  References:
  ===
  [1] 
https://github.com/openstack/nova/blob/bbe0f313bdfd30cc1c740709543b679567b42f0f/nova/tests/unit/virt/libvirt/test_driver.py#L6355-L6373

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1706399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1695533] Re: Incorrect URL in XML used by libvirt to launch instance

2017-06-06 Thread Markus Zoeller (markus_z)
I don't think that this is a valid bug, as an XML namespace URI is
usually not a "real" (retrievable) URI. Wikipedia also notes:

"[...] the namespace specification does not 
require nor suggest that the namespace URI 
be used to retrieve information. [...]"

https://en.wikipedia.org/wiki/XML_namespace
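
As a hedged illustration (not part of the original comment): the namespace URI
below is used purely as a string identifier when parsing the metadata; nothing
ever tries to fetch it over the network.

    # Minimal sketch using only the Python standard library.
    import xml.etree.ElementTree as ET

    snippet = ('<instance xmlns:nova='
               '"http://openstack.org/xmlns/libvirt/nova/1.0">'
               '<nova:name>ttt</nova:name>'
               '</instance>')
    root = ET.fromstring(snippet)
    name = root.find('{http://openstack.org/xmlns/libvirt/nova/1.0}name')
    print(name.text)  # -> ttt, resolved locally, no HTTP request involved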

FWIW:
The XML metadata in the domain XML got introduced with commit
https://github.com/openstack/nova/commit/bf02f13  
The blueprint is this:

http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/libvirt-driver-domain-metadata.html

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1695533

Title:
  Incorrect URL in XML used by libvirt to launch instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I launched an instance using Devstack running on the QEMU hypervisor. I
  dumped the XML of the VM to see what gets configured and found a URL in
  the XML. The URL points to a page which no longer exists on the
  openstack.org website. I guess this URL gets embedded in each VM's XML.
  Though it does nothing (maybe it points to the syntax used to define the
  XML), I got interested in this URL. I have filed this bug so that the
  URL can be corrected or removed from the XML. Attached is the full
  dumpxml. Here is a snippet of it:


  <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">

    <nova:name>ttt</nova:name>

  Steps to reproduce:
  1. Launch an instance
  2. Goto compute node where the instance is running.
  3. Do "virsh dumpxml "
  4. Check the URL in dumped output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1695533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682444] [NEW] HTTP 404 for /dashboard/scss/serial_console.css

2017-04-13 Thread Markus Zoeller (markus_z)
Public bug reported:

https://github.com/openstack/horizon/commit/09706c6 renamed the
file "serial_console.css" to "serial_console.scss". That change
forgot to update the reference in "serial_console.html".
Every time a user opens the serial console, a 404 for the CSS is
returned. That 404 looks like this:

[11/Apr/2017:11:19:12 +0200]
"GET /horizon/static/dashboard/scss/serial_console.css HTTP/1.1" 404 538
"http://controller/horizon/project/instances//serial"
"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"

I tried to solve it with https://review.openstack.org/#/c/455741/1 but
that wasn't the right way.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1682444

Title:
  HTTP 404 for /dashboard/scss/serial_console.css

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://github.com/openstack/horizon/commit/09706c6 renamed the
  file "serial_console.css" to "serial_console.scss". That change
  forgot to update the reference in "serial_console.html".
  Every time a user opens the serial console, a 404 for the CSS is
  returned. That 404 looks like this:

  [11/Apr/2017:11:19:12 +0200]
  "GET /horizon/static/dashboard/scss/serial_console.css HTTP/1.1" 404 538
  "http://controller/horizon/project/instances//serial"
  "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"

  I tried to solve it with https://review.openstack.org/#/c/455741/1 but
  that wasn't the right way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1682444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1679223] Re: tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail on centos7 nodes

2017-04-04 Thread Markus Zoeller (markus_z)
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679223

Title:
  tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail
  on centos7 nodes

Status in OpenStack Compute (nova):
  New
Status in tempest:
  New

Bug description:
  Description
  ===
  These tempest test cases fail on centos7 nodes:
  test_server_tags.ServerTagsTestJSON.test_create_delete_tag
  test_server_tags.ServerTagsTestJSON.test_delete_all_tags
  test_server_tags.ServerTagsTestJSON.test_update_all_tags
  test_server_tags.ServerTagsTestJSON.test_check_tag_existence

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22

  
  Steps to reproduce
  ==
  See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
  http://ci-watch.tintri.com/project?project=nova=7+days

  
  Expected result
  ===
  The server tagging functionality tests should work since they got introduced 
with 
https://github.com/openstack/tempest/commit/7c95befefb7db27296114f87cf49e8f2b8f43a59

  
  Actual result
  =

  As an example:

  [tempest.lib.common.rest_client] 
  Request (ServerTagsTestJSON:test_check_tag_existence): 
  400 PUT 
https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324

  [tempest.lib.common.rest_client] 
  Request - Headers: 
  {'Content-Type': 'application/json', 'Accept': 'application/json', 
  'X-OpenStack-Nova-API-Version': '2.26', 'X-Auth-Token': ''}
   Body: None

  Response - Headers: {'status': '400', u'content-length': '66', 
  u'server': 'Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_wsgi/3.4 
Python/2.7.5',
  u'date': 'Mon, 03 Apr 2017 10:15:12 GMT', 
  u'x-openstack-nova-api-version': '2.26', 
  u'x-compute-request-id': 'req-1f8726fc-df20-4592-a214-cff3fa73c8e6', 
  u'content-type': 'application/json; charset=UTF-8', 
  content-location': 
'https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324',
 
  u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 
  u'openstack-api-version': 'compute 2.26', 
  u'connection': 'close'}
  Body: {"badRequest": {"message": "Malformed request body", "code": 400}}

  Traceback: http://paste.openstack.org/show/605262/
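
  For reference, a rough reproduction sketch (not from the report; the endpoint,
  server UUID, tag and token below are placeholders). The tempest client sends a
  PUT with no request body plus a microversion header, which is what the CentOS 7
  mod_wsgi deployment rejects with "Malformed request body":

      # Hypothetical reproduction using python-requests.
      import requests

      url = ('https://nova.example.test:8774/v2.1/servers/'
             '<server-uuid>/tags/<tag>')
      headers = {'Content-Type': 'application/json',
                 'Accept': 'application/json',
                 'X-OpenStack-Nova-API-Version': '2.26',
                 'X-Auth-Token': '<token>'}
      # Note: no body at all, exactly like the tempest tag client.
      resp = requests.put(url, headers=headers, verify=False)
      print(resp.status_code, resp.text)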

  
  Environment
  ===
  This happens in various gate test jobs. The common theme is that the build
  node is centos7 based. 
  Logstash query: 
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22

  You can also see this constantly on the centos based 3rd party CI "IBM zKVM 
CI":
  https://review.openstack.org/#/q/reviewer:%22IBM+zKVM+CI%22

  Logs & Configs
  ==
  See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
  http://ci-watch.tintri.com/project?project=nova=7+days

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1679223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1679223] [NEW] tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail on centos7 nodes

2017-04-03 Thread Markus Zoeller (markus_z)
Public bug reported:

Description
===
These tempest test cases fail on centos7 nodes:
test_server_tags.ServerTagsTestJSON.test_create_delete_tag
test_server_tags.ServerTagsTestJSON.test_delete_all_tags
test_server_tags.ServerTagsTestJSON.test_update_all_tags
test_server_tags.ServerTagsTestJSON.test_check_tag_existence

http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22


Steps to reproduce
==
See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
http://ci-watch.tintri.com/project?project=nova=7+days


Expected result
===
The server tagging functionality tests should work since they got introduced 
with 
https://github.com/openstack/tempest/commit/7c95befefb7db27296114f87cf49e8f2b8f43a59


Actual result
=

As an example:

[tempest.lib.common.rest_client] 
Request (ServerTagsTestJSON:test_check_tag_existence): 
400 PUT 
https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324

[tempest.lib.common.rest_client] 
Request - Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-OpenStack-Nova-API-Version': '2.26', 'X-Auth-Token': ''}
 Body: None

Response - Headers: {'status': '400', u'content-length': '66', 
u'server': 'Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_wsgi/3.4 
Python/2.7.5',
u'date': 'Mon, 03 Apr 2017 10:15:12 GMT', 
u'x-openstack-nova-api-version': '2.26', 
u'x-compute-request-id': 'req-1f8726fc-df20-4592-a214-cff3fa73c8e6', 
u'content-type': 'application/json; charset=UTF-8', 
content-location': 
'https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324',
 
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 
u'openstack-api-version': 'compute 2.26', 
u'connection': 'close'}
Body: {"badRequest": {"message": "Malformed request body", "code": 400}}

Traceback: http://paste.openstack.org/show/605262/


Environment
===
This happens in various gate test jobs. The common theme is that the build
node is centos7 based. 
Logstash query: 
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22

You can also see this constantly on the centos based 3rd party CI "IBM zKVM CI":
https://review.openstack.org/#/q/reviewer:%22IBM+zKVM+CI%22

Logs & Configs
==
See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
http://ci-watch.tintri.com/project?project=nova=7+days

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679223

Title:
  tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail
  on centos7 nodes

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  These tempest test cases fail on centos7 nodes:
  test_server_tags.ServerTagsTestJSON.test_create_delete_tag
  test_server_tags.ServerTagsTestJSON.test_delete_all_tags
  test_server_tags.ServerTagsTestJSON.test_update_all_tags
  test_server_tags.ServerTagsTestJSON.test_check_tag_existence

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22

  
  Steps to reproduce
  ==
  See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
  http://ci-watch.tintri.com/project?project=nova=7+days

  
  Expected result
  ===
  The server tagging functionality tests should work since they got introduced 
with 
https://github.com/openstack/tempest/commit/7c95befefb7db27296114f87cf49e8f2b8f43a59

  
  Actual result
  =

  As an example:

  [tempest.lib.common.rest_client] 
  Request (ServerTagsTestJSON:test_check_tag_existence): 
  400 PUT 
https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324

  [tempest.lib.common.rest_client] 
  Request - Headers: 
  {'Content-Type': 'application/json', 'Accept': 'application/json', 
  'X-OpenStack-Nova-API-Version': '2.26', 'X-Auth-Token': ''}
   Body: None

  Response - Headers: {'status': '400', u'content-length': '66', 
  u'server': 'Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_wsgi/3.4 
Python/2.7.5',
  u'date': 'Mon, 03 Apr 2017 10:15:12 GMT', 
  u'x-openstack-nova-api-version': '2.26', 
  u'x-compute-request-id': 'req-1f8726fc-df20-4592-a214-cff3fa73c8e6', 
  u'content-type': 'application/json; charset=UTF-8', 
  content-location': 

[Yahoo-eng-team] [Bug 1669468] Re: tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc fails intermittently in neutron multinode nv job

2017-03-22 Thread Markus Zoeller (markus_z)
No hits in logstash after https://review.openstack.org/#/c/448078/1
merged. => resolved

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669468

Title:
  tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc
  fails intermittently in neutron multinode nv job

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Example output:

  2017-02-21 06:42:10.010442 | ==
  2017-02-21 06:42:10.010458 | Failed 1 tests - output below:
  2017-02-21 06:42:10.010471 | ==
  2017-02-21 06:42:10.010477 | 
  2017-02-21 06:42:10.010507 | 
tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc[id-c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc]
  2017-02-21 06:42:10.010542 | 
---
  2017-02-21 06:42:10.010548 | 
  2017-02-21 06:42:10.010558 | Captured traceback:
  2017-02-21 06:42:10.010569 | ~~~
  2017-02-21 06:42:10.010583 | Traceback (most recent call last):
  2017-02-21 06:42:10.010606 |   File 
"tempest/api/compute/servers/test_novnc.py", line 152, in test_novnc
  2017-02-21 06:42:10.010621 | self._validate_rfb_negotiation()
  2017-02-21 06:42:10.010646 |   File 
"tempest/api/compute/servers/test_novnc.py", line 77, in 
_validate_rfb_negotiation
  2017-02-21 06:42:10.010665 | 'Token must be invalid because the 
connection '
  2017-02-21 06:42:10.010721 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 696, in assertFalse
  2017-02-21 06:42:10.010737 | raise self.failureException(msg)
  2017-02-21 06:42:10.010762 | AssertionError: True is not false : Token 
must be invalid because the connection closed.
  2017-02-21 06:42:10.010768 | 
  2017-02-21 06:42:10.010774 | 
  2017-02-21 06:42:10.010785 | Captured pythonlogging:
  2017-02-21 06:42:10.010796 | ~~~
  2017-02-21 06:42:10.010848 | 2017-02-21 06:07:18,545 16286 INFO 
[tempest.lib.common.rest_client] Request (NoVNCConsoleTestJSON:test_novnc): 200 
POST 
https://10.27.33.58:8774/v2.1/servers/82d4d4ca-c263-4ac5-85bc-a33488af5ff5/action
 0.165s
  2017-02-21 06:42:10.010905 | 2017-02-21 06:07:18,545 16286 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Accept': 
'application/json', 'X-Auth-Token': '', 'Content-Type': 
'application/json'}
  2017-02-21 06:42:10.010925 | Body: {"os-getVNCConsole": {"type": 
"novnc"}}
  2017-02-21 06:42:10.011109 | Response - Headers: {u'content-type': 
'application/json', 'content-location': 
'https://10.27.33.58:8774/v2.1/servers/82d4d4ca-c263-4ac5-85bc-a33488af5ff5/action',
 u'date': 'Tue, 21 Feb 2017 06:07:18 GMT', u'x-openstack-nova-api-version': 
'2.1', 'status': '200', u'content-length': '121', u'server': 'Apache/2.4.18 
(Ubuntu)', u'connection': 'close', u'openstack-api-version': 'compute 2.1', 
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 
u'x-compute-request-id': 'req-d9681919-5b5e-4477-b38d-2734b660a099'}
  2017-02-21 06:42:10.011153 | Body: {"console": {"url": 
"http://10.27.33.58:6080/vnc_auto.html?token=f8a52df3-8e0d-4d64-8877-07f607f84b74;,
 "type": "novnc"}}
  2017-02-21 06:42:10.011161 | 
  2017-02-21 06:42:10.011167 | 
  2017-02-21 06:42:10.011172 | 

  
  Full logs at: 
http://logs.openstack.org/38/431038/3/check/gate-tempest-dsvm-neutron-multinode-full-ubuntu-xenial-nv/5e1d485/console.html#_2017-02-21_06_07_18_740230

  This started at 2017-02-21

  The very first change which failed here was
  https://review.openstack.org/#/c/431038/ but is not related to the
  error.
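
  As background (not from the original report), the RFB negotiation the test
  validates begins with the server sending a 12-byte ProtocolVersion string such
  as "RFB 003.008\n". A rough standalone sketch of that check against a raw VNC
  endpoint (the tempest test performs the equivalent through the noVNC websocket
  proxy; host and port are placeholders supplied by the caller):

      import socket

      def starts_rfb_handshake(host, port, timeout=5.0):
          sock = socket.create_connection((host, port), timeout)
          try:
              data = sock.recv(12)           # ProtocolVersion handshake message
              return data.startswith(b'RFB ')
          finally:
              sock.close()

  The failing assertion means the proxy closed the connection instead of
  negotiating, which tempest interprets as "token must be invalid".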

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669468] [NEW] tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc fails intermittently

2017-03-02 Thread Markus Zoeller (markus_z)
Public bug reported:

Example output:

2017-02-21 06:42:10.010442 | ==
2017-02-21 06:42:10.010458 | Failed 1 tests - output below:
2017-02-21 06:42:10.010471 | ==
2017-02-21 06:42:10.010477 | 
2017-02-21 06:42:10.010507 | 
tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc[id-c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc]
2017-02-21 06:42:10.010542 | 
---
2017-02-21 06:42:10.010548 | 
2017-02-21 06:42:10.010558 | Captured traceback:
2017-02-21 06:42:10.010569 | ~~~
2017-02-21 06:42:10.010583 | Traceback (most recent call last):
2017-02-21 06:42:10.010606 |   File 
"tempest/api/compute/servers/test_novnc.py", line 152, in test_novnc
2017-02-21 06:42:10.010621 | self._validate_rfb_negotiation()
2017-02-21 06:42:10.010646 |   File 
"tempest/api/compute/servers/test_novnc.py", line 77, in 
_validate_rfb_negotiation
2017-02-21 06:42:10.010665 | 'Token must be invalid because the 
connection '
2017-02-21 06:42:10.010721 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 696, in assertFalse
2017-02-21 06:42:10.010737 | raise self.failureException(msg)
2017-02-21 06:42:10.010762 | AssertionError: True is not false : Token must 
be invalid because the connection closed.
2017-02-21 06:42:10.010768 | 
2017-02-21 06:42:10.010774 | 
2017-02-21 06:42:10.010785 | Captured pythonlogging:
2017-02-21 06:42:10.010796 | ~~~
2017-02-21 06:42:10.010848 | 2017-02-21 06:07:18,545 16286 INFO 
[tempest.lib.common.rest_client] Request (NoVNCConsoleTestJSON:test_novnc): 200 
POST 
https://10.27.33.58:8774/v2.1/servers/82d4d4ca-c263-4ac5-85bc-a33488af5ff5/action
 0.165s
2017-02-21 06:42:10.010905 | 2017-02-21 06:07:18,545 16286 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Accept': 
'application/json', 'X-Auth-Token': '', 'Content-Type': 
'application/json'}
2017-02-21 06:42:10.010925 | Body: {"os-getVNCConsole": {"type": 
"novnc"}}
2017-02-21 06:42:10.011109 | Response - Headers: {u'content-type': 
'application/json', 'content-location': 
'https://10.27.33.58:8774/v2.1/servers/82d4d4ca-c263-4ac5-85bc-a33488af5ff5/action',
 u'date': 'Tue, 21 Feb 2017 06:07:18 GMT', u'x-openstack-nova-api-version': 
'2.1', 'status': '200', u'content-length': '121', u'server': 'Apache/2.4.18 
(Ubuntu)', u'connection': 'close', u'openstack-api-version': 'compute 2.1', 
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 
u'x-compute-request-id': 'req-d9681919-5b5e-4477-b38d-2734b660a099'}
2017-02-21 06:42:10.011153 | Body: {"console": {"url": 
"http://10.27.33.58:6080/vnc_auto.html?token=f8a52df3-8e0d-4d64-8877-07f607f84b74;,
 "type": "novnc"}}
2017-02-21 06:42:10.011161 | 
2017-02-21 06:42:10.011167 | 
2017-02-21 06:42:10.011172 | 


Full logs at: 
http://logs.openstack.org/38/431038/3/check/gate-tempest-dsvm-neutron-multinode-full-ubuntu-xenial-nv/5e1d485/console.html#_2017-02-21_06_07_18_740230

This started at 2017-02-21

The very first change which failed here was
https://review.openstack.org/#/c/431038/ but is not related to the
error.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669468

Title:
  tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc
  fails intermittently

Status in OpenStack Compute (nova):
  New

Bug description:
  Example output:

  2017-02-21 06:42:10.010442 | ==
  2017-02-21 06:42:10.010458 | Failed 1 tests - output below:
  2017-02-21 06:42:10.010471 | ==
  2017-02-21 06:42:10.010477 | 
  2017-02-21 06:42:10.010507 | 
tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc[id-c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc]
  2017-02-21 06:42:10.010542 | 
---
  2017-02-21 06:42:10.010548 | 
  2017-02-21 06:42:10.010558 | Captured traceback:
  2017-02-21 06:42:10.010569 | ~~~
  2017-02-21 06:42:10.010583 | Traceback (most recent call last):
  2017-02-21 06:42:10.010606 |   File 
"tempest/api/compute/servers/test_novnc.py", line 152, in test_novnc
  2017-02-21 06:42:10.010621 | self._validate_rfb_negotiation()
  2017-02-21 06:42:10.010646 |   File 
"tempest/api/compute/servers/test_novnc.py", line 77, in 
_validate_rfb_negotiation
  2017-02-21 06:42:10.010665 | 'Token must be invalid because the 
connection '
  2017-02-21 06:42:10.010721 |   File 

[Yahoo-eng-team] [Bug 1634058] Re: keystone service not starting when apache2 is running

2016-10-17 Thread Markus Zoeller (markus_z)
Looks like this is a Keystone issue, not a Nova one, so I changed the
affected project.

** Project changed: nova => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1634058

Title:
  keystone service not starting when apache2 is running

Status in OpenStack Identity (keystone):
  New

Bug description:
  Hello,

  Liberty on Ubuntu 14.04

  I am unable to start the keystone service when apache2 is running. Below
  is the error from the log file:

  2016-10-17 11:46:08.929 6204 WARNING root [-] Running keystone via eventlet 
is deprecated as of Kilo in favor of running in a WSGI server (e.g. mod_wsgi). 
Support for keystone under eventlet will be removed in the "M"-Release.
  2016-10-17 11:46:08.931 6204 ERROR 
keystone.common.environment.eventlet_server [-] Could not bind to 0.0.0.0:35357
  2016-10-17 11:46:08.931 6204 ERROR root [-] Failed to start the admin server
  2016-10-17 11:46:08.931 6204 ERROR root Traceback (most recent call last):
  2016-10-17 11:46:08.931 6204 ERROR root   File 
"/usr/lib/python2.7/dist-packages/keystone/server/eventlet.py", line 88, in 
serve
  2016-10-17 11:46:08.931 6204 ERROR root server.launch_with(launcher)
  2016-10-17 11:46:08.931 6204 ERROR root   File 
"/usr/lib/python2.7/dist-packages/keystone/server/eventlet.py", line 54, in 
launch_with
  2016-10-17 11:46:08.931 6204 ERROR root self.server.listen()
  2016-10-17 11:46:08.931 6204 ERROR root   File 
"/usr/lib/python2.7/dist-packages/keystone/common/environment/eventlet_server.py",
 line 110, in listen
  2016-10-17 11:46:08.931 6204 ERROR root backlog=backlog)
  2016-10-17 11:46:08.931 6204 ERROR root   File 
"/usr/lib/python2.7/dist-packages/eventlet/convenience.py", line 44, in listen
  2016-10-17 11:46:08.931 6204 ERROR root sock.listen(backlog)
  2016-10-17 11:46:08.931 6204 ERROR root   File 
"/usr/lib/python2.7/socket.py", line 228, in meth
  2016-10-17 11:46:08.931 6204 ERROR root return 
getattr(self._sock,name)(*args)
  2016-10-17 11:46:08.931 6204 ERROR root error: [Errno 98] Address already in 
use
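
  As an aside (not part of the original report): "Address already in use" simply
  means another process is already bound to 0.0.0.0:35357, typically keystone
  running under apache2/mod_wsgi. A small diagnostic sketch:

      # Try to bind the admin port ourselves; EADDRINUSE confirms that some
      # other process (here: apache2) already owns it.
      import errno
      import socket

      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      try:
          s.bind(('0.0.0.0', 35357))
      except socket.error as e:
          if e.errno == errno.EADDRINUSE:
              print('Port 35357 is already in use, as the traceback shows.')
      else:
          print('Port 35357 is free.')
      finally:
          s.close()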

  Attached are the config files. If I stop apache2, the keystone service
  starts, but then apache2 does not.

  Best regards,
  Dhanabalan

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1634058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632987] [NEW] _nova_to_osvif_vif_bridge: 'module' object has no attribute 'vif'

2016-10-13 Thread Markus Zoeller (markus_z)
[The archived message body is garbled here: it contained the tail of the
failing test's output, i.e. the libvirt/QEMU capabilities XML (supported guest
architectures and machine types for x86_64, arm, mips, mipsel, sparc and ppc),
whose markup was stripped by the list archive. The test run summary follows.]


==
Totals
==
Ran: 1 tests in 30. sec.
 - Passed: 0
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 1
Sum of execute time for each test: 3.6373 sec.

==
Worker Balance
==
 - Worker 0 (1 tests) => 0:00:03.637283

No tests were successful during the run
ERROR: InvocationError: '/usr/bin/bash tools/pretty_tox.sh 
nova.tests.unit.virt.test_virt_drivers.LibvirtConnTestCase.test_set_admin_password'
__ summary 
__
ERROR:   py27: commands failed


Environment
===
1. Nova master (Ocata dev cycle)

[10:13:39 markus@oc5730007623 ~/git/nova ] $ git log -1
commit bc1b11fdc2c140be916ee3b3b31993cbd6d13ac6
Merge: 9be53df 0fafb81
Author: Jenkins <jenk...@review.openstack.org>
Date:   Thu Oct 13 03:04:45 2016 +

Merge "libvirt: cleanup never used migratable flag checking"


Logs & Configs
==

N/A

** Affects: nova
 Importance: Undecided
 Assignee: Markus Zoeller (markus_z) (mzoeller)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632987

Title:
  _nova_to_osvif_vif_bridge: 'module' object has no attribute 'vif'

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The unit-test 
"nova.tests.unit.virt.test_virt_drivers.LibvirtConnTestCase.test_set_admin_password"
 fails. I could narrow it down to this commit: 
https://github.com/openstack/nova/commit/735f710

   move os_vif.initialize() to nova-compute start

  os_vif.initialize() was previous

[Yahoo-eng-team] [Bug 1554226] Re: Clean up warnings about enginefacade

2016-10-07 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Comitted" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554226

Title:
  Clean up warnings about enginefacade

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  You see a bunch of the following in Nova's test runs:

  Captured stderr:
  
  
/home/jaypipes/repos/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:254:
 OsloDBDeprecationWarning: EngineFacade is deprecated; please use 
oslo_db.sqlalchemy.enginefacade
self._legacy_facade = LegacyEngineFacade(None, _factory=self)

  We should use oslo_db.sqlalchemy.enginefacade now in all cases.
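
  A hedged sketch of the pattern the new API expects (the context class and the
  Item model below are illustrative placeholders, not Nova's actual DB code):

      from oslo_db.sqlalchemy import enginefacade

      @enginefacade.transaction_context_provider
      class MyContext(object):
          """Example request context carrying the transaction state."""

      @enginefacade.reader
      def get_items(context):
          # The decorator injects a session as context.session.
          return context.session.query(Item).all()

      @enginefacade.writer
      def create_item(context, values):
          context.session.add(Item(**values))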

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586309] Re: Delete a instance after this instance resized failed, source resource is not cleared.

2016-10-07 Thread Markus Zoeller (markus_z)
As we use the "direct-release" model in Nova we don't use the
"Fix Comitted" status for merged bug fixes anymore. I'm setting
this manually to "Fix Released" to be consistent.

[1] "[openstack-dev] [release][all] bugs will now close automatically
when patches merge"; Doug Hellmann; 2015-12-07;
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081612.html

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586309

Title:
  Delete a instance after this instance resized failed, source resource
  is not cleared.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Environment
  ===
  stable/mitaka

  Steps to reproduce
  ==
  * I booted an instance on compute node SBCJSlot5Rack2Centos7; the instance 
uuid was 00bc72d0-0778-4e69-bfee-b58b87dd1532.
  * Then I resized this instance; the resize failed in the finish_resize function 
on the destination compute node SBCJSlot3Rack2Centos7.

  [stack@SBCJSlot5Rack2Centos7 ~]$ openstack server show 
00bc72d0-0778-4e69-bfee-b58b87dd1532

  Field                                 | Value
  --------------------------------------+------------------------------------------
  OS-DCF:diskConfig                     | AUTO
  OS-EXT-AZ:availability_zone           | nova
  OS-EXT-SRV-ATTR:host                  | SBCJSlot3Rack2Centos7
  OS-EXT-SRV-ATTR:hypervisor_hostname   | SBCJSlot3Rack2Centos7
  OS-EXT-SRV-ATTR:instance_name         | instance-0014
  OS-EXT-STS:power_state                | 1
  OS-EXT-STS:task_state                 | None
  OS-EXT-STS:vm_state                   | error
  OS-SRV-USG:launched_at                | 2016-05-27T02:28:07.00
  OS-SRV-USG:terminated_at              | None
  accessIPv4                            |
  accessIPv6                            |
  addresses                             | public=2001:db8::6, 10.43.239.76
  config_drive                          | True
  created                               | 2016-05-27T02:27:56Z
  fault                                 | {u'message': u'Unexpected 
vif_type=binding_failed', u'code': 500, u'details': u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 375, in decorated_function\n    
return function(self, context, *args, **kwargs)\n  File 
"/opt/stack/nova/nova/compute/manager.py", line 4054, in finish_resize\n    
self._set_instance_obj_error_state(context, 

[Yahoo-eng-team] [Bug 1597789] Re: libvirt: virtlogd: qemu 2.6.0 doesn't log boot message

2016-10-05 Thread Markus Zoeller (markus_z)
*** This bug is a duplicate of bug 1599214 ***
https://bugs.launchpad.net/bugs/1599214

Marked as duplicate. The issue was in qemu and got fixed with v2.7.0.
Also, Nova doesn't yet have a dependency on virtlogd. This is still
under development with https://review.openstack.org/#/c/323765/

** This bug has been marked a duplicate of bug 1599214
   virtlogd: qemu 2.6.0 doesn't log boot message

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597789

Title:
  libvirt: virtlogd: qemu 2.6.0 doesn't log boot message

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Description
  ===
  With libvirt 1.3.3 and QEMU 2.6.0, char devices are able to log their
  stdout/stderr into a log file via a "<log>" XML element in the domain XML.
  This feature solves the long-standing bug 832507 (which can flood the log)
  with built-in log rotation. It also removes the mutual exclusivity of
  "serial console" and "get console output".

  Unfortunately, an (assumed) issue in Qemu prevents the logging of the
  boot messages of the guest *unless* the connection to the char device is
  already established. 

  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue you noticed:
  * Ensure to have the code of https://review.openstack.org/#/c/323765/14 
applied
  * Ensure to have libvirt 1.3.3 and qemu 2.6.0
  * Launch an instance
  * execute: nova console-log 

  Expected result
  ===
  The CLI returns the boot messages.

  Actual result
  =
  The result is an empty string.

  If I connect to the used console (via Horizon for example), execute
  "echo 'foo'" and things like that, the next call of "nova console-log
  " *does* return those executed commands (but still not the
  boot messages). If I reboot within the console, *then* the boot
  messages will be logged too.

  Environment
  ===
  1. OpenStack version: master (Newton)
 $ git log --oneline -4
 201e231 libvirt: virtlogd: use virtlogd for char devices
 73e931a libvirt: simplify "get_console_output" interface
 ec94d7b libvirt: fix live-migration with serial console check
 fcb3dbf Merge "Fix error message for VirtualInterfaceUnplugException"

  2. Which hypervisor did you use?
 libvirt 1.3.3 and kvm-qemu 2.6.0

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

  3. Which networking type did you use?
 neutron + ovs

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1597789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334398] Re: libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate

2016-08-31 Thread Markus Zoeller (markus_z)
CONFIRMED FOR: NEWTON

** Changed in: nova
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334398

Title:
  libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the
  gate

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seeing this here:

  http://logs.openstack.org/70/97670/5/check/check-tempest-dsvm-
  postgres-full/7d4c7cf/console.html

  2014-06-24 23:15:41.714 | 
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name[gate]
  2014-06-24 23:15:41.714 | 
---
  2014-06-24 23:15:41.714 | 
  2014-06-24 23:15:41.714 | Captured traceback-1:
  2014-06-24 23:15:41.714 | ~
  2014-06-24 23:15:41.715 | Traceback (most recent call last):
  2014-06-24 23:15:41.715 |   File 
"tempest/services/compute/json/images_client.py", line 86, in delete_image
  2014-06-24 23:15:41.715 | resp, body = self.delete("images/%s" % 
str(image_id))
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
224, in delete
  2014-06-24 23:15:41.715 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
430, in request
  2014-06-24 23:15:41.715 | resp, resp_body)
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
474, in _error_checker
  2014-06-24 23:15:41.715 | raise exceptions.NotFound(resp_body)
  2014-06-24 23:15:41.715 | NotFound: Object not found
  2014-06-24 23:15:41.715 | Details: {"itemNotFound": {"message": "Image 
not found.", "code": 404}}
  2014-06-24 23:15:41.716 | 
  2014-06-24 23:15:41.716 | 
  2014-06-24 23:15:41.716 | Captured traceback:
  2014-06-24 23:15:41.716 | ~~~
  2014-06-24 23:15:41.716 | Traceback (most recent call last):
  2014-06-24 23:15:41.716 |   File 
"tempest/api/compute/images/test_images_oneserver.py", line 31, in tearDown
  2014-06-24 23:15:41.716 | self.server_check_teardown()
  2014-06-24 23:15:41.716 |   File "tempest/api/compute/base.py", line 161, 
in server_check_teardown
  2014-06-24 23:15:41.716 | 'ACTIVE')
  2014-06-24 23:15:41.716 |   File 
"tempest/services/compute/json/servers_client.py", line 173, in 
wait_for_server_status
  2014-06-24 23:15:41.716 | raise_on_error=raise_on_error)
  2014-06-24 23:15:41.717 |   File "tempest/common/waiters.py", line 107, 
in wait_for_server_status
  2014-06-24 23:15:41.717 | raise exceptions.TimeoutException(message)
  2014-06-24 23:15:41.717 | TimeoutException: Request timed out
  2014-06-24 23:15:41.717 | Details: (ImagesOneServerTestJSON:tearDown) 
Server 90c79adf-4df1-497c-a786-13bdc5cca98d failed to reach ACTIVE status and 
task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: image_pending_upload.

  
  Looks like it's trying to delete image with uuid 
518a32d0-f323-413c-95c2-dd8299716c19 which doesn't exist, because it's still 
uploading?

  
  This is maybe related to bug 1320617 as a general performance issue with 
glance.

  Looking in the glance registry log, the image is created here:

  2014-06-24 22:51:23.538 15740 INFO glance.registry.api.v1.images
  [13c1b477-cd22-44ca-ba0d-bf1b19202df6 d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Successfully created image
  518a32d0-f323-413c-95c2-dd8299716c19

  The image is deleted here:

  2014-06-24 22:54:53.146 15740 INFO glance.registry.api.v1.images
  [7c29f253-acef-41a0-b62b-c3087f7617ef d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Successfully deleted image
  518a32d0-f323-413c-95c2-dd8299716c19

  And the 'not found' is here:

  2014-06-24 22:54:56.508 15740 INFO glance.registry.api.v1.images
  [c708cf1f-27a8-4003-9c29-6afca7dd9bb8 d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Image 518a32d0-f323-413c-
  95c2-dd8299716c19 not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487477] Re: Mess in live-migration compute-manager and drivers code

2016-08-11 Thread Markus Zoeller (markus_z)
As discussed with Timofey in IRC #nova, this needs to be driven most
likely by a blueprint.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487477

Title:
  Mess in live-migration compute-manager and drivers code

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There is a _live_migration_cleanup_flags method in the compute manager class 
which decides whether cleanup is needed after a live-migration is done. It 
accepts 2 params, from the docstring: 
   :param block_migration: if true, it was a block migration
   :param migrate_data: implementation specific data
  The problem is that the current compute manager code is libvirt-specific:
  it operates on values in the migrate_data dictionary that are valid only for 
the libvirt driver implementation. 
  This doesn't cause any bug yet because the other drivers don't implement a 
cleanup method at all. 
  As soon as anyone implements one, live-migration starts to fail, and there is 
no CI job that would catch that. 

  live_migration_cleanup_flags should become hypervisor-specific, and we should
  move it from the compute manager to the drivers, as sketched below.
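
  A rough sketch of that direction (interface and field names are illustrative,
  not an agreed design):

      # Hypothetical driver-level hook; the compute manager would delegate to
      # it instead of inspecting libvirt-specific keys in migrate_data itself.
      class ComputeDriver(object):
          def live_migration_cleanup_flags(self, block_migration, migrate_data):
              """Return (do_cleanup, destroy_disks) for this hypervisor."""
              return False, False

      class LibvirtDriver(ComputeDriver):
          def live_migration_cleanup_flags(self, block_migration, migrate_data):
              # Only the libvirt driver knows its own migrate_data fields.
              shared_block = migrate_data.get('is_shared_block_storage', False)
              do_cleanup = block_migration or not shared_block
              destroy_disks = not shared_block
              return do_cleanup, destroy_disks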

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494207] Re: console proxies options in [DEFAULT] group are confusing

2016-08-11 Thread Markus Zoeller (markus_z)
This bug report doesn't describe a failure in the behavior of Nova.
It's a personal todo item which doesn't need the overhead of a bug
report. Because of this, I'm closing this report as invalid. This
shouldn't stop you, though, from doing your item and pushing it as a review.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494207

Title:
  console proxies options in [DEFAULT] group are confusing

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently the config options of the different consoles using baseproxy reside
  in the [DEFAULT] group, which is very confusing given how they are
  named, e.g.:

  cfg.StrOpt('cert',
     default='self.pem',
     help='SSL certificate file'),
  cfg.StrOpt('key',
     help='SSL key file (if separate from cert)'),

  one would probably expect these options to set SSL key/cert for other
  places in Nova as well (e.g. API), but those are used solely in
  console proxies.

  We could probably give these options their own group in the config and
  use deprecated_name/deprecated_group for backwards compatibility with
  existing config files, e.g. as in the sketch below.
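
  A hedged oslo.config sketch of that idea (the group name and option set are
  illustrative):

      from oslo_config import cfg

      console_proxy_opts = [
          cfg.StrOpt('cert', default='self.pem',
                     deprecated_group='DEFAULT',
                     help='SSL certificate file used by the console proxies'),
          cfg.StrOpt('key',
                     deprecated_group='DEFAULT',
                     help='SSL key file (if separate from cert)'),
      ]

      CONF = cfg.CONF
      CONF.register_opts(console_proxy_opts, group='console_proxy')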

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1494207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604428] Re: NoSuchOptError: no such option in group neutron: auth_plugin

2016-08-11 Thread Markus Zoeller (markus_z)
That was a Mitaka-only bug; in Newton it got fixed with bug
1574988.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604428

Title:
  NoSuchOptError: no such option in group neutron: auth_plugin

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I'm running OpenStack Mitaka on a 3-node installation (controller,
  compute and network).  I installed it first under Ubuntu 14.04 and
  later under Ubuntu 16.04, but both show the same error when I try to
  launch an instance.

  The error I get is the same using horizon or the command "nova boot
  --image cirros --flavor 1 --nic net-name=test erste".

  The error message from the command line is: 
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-2c2bb960-feb8-45ed-a3d1-65b8833d9228)

  I'm using the versions below:
   dpkg -l | grep nova
  ii  nova-api            2:13.0.0-0ubuntu5   all  OpenStack Compute - API frontend
  ii  nova-common         2:13.0.0-0ubuntu5   all  OpenStack Compute - common files
  ii  nova-conductor      2:13.0.0-0ubuntu5   all  OpenStack Compute - conductor service
  ii  nova-consoleauth    2:13.0.0-0ubuntu5   all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy     2:13.0.0-0ubuntu5   all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler      2:13.0.0-0ubuntu5   all  OpenStack Compute - virtual machine scheduler
  ii  python-nova         2:13.0.0-0ubuntu5   all  OpenStack Compute Python libraries
  ii  python-novaclient   2:3.3.1-2           all  client library for OpenStack Compute API - Python 2.7
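
  For context (not from the report): newer releases load the [neutron] auth
  settings through keystoneauth1's config helpers rather than reading a raw
  "auth_plugin" option. A hedged sketch of that loading pattern:

      from keystoneauth1 import loading as ks_loading
      from oslo_config import cfg

      CONF = cfg.CONF
      ks_loading.register_auth_conf_options(CONF, 'neutron')
      ks_loading.register_session_conf_options(CONF, 'neutron')

      auth = ks_loading.load_auth_from_conf_options(CONF, 'neutron')
      session = ks_loading.load_session_from_conf_options(CONF, 'neutron',
                                                          auth=auth)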

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611380] [NEW] libvirt: drop py26 compat for get_console_output

2016-08-09 Thread Markus Zoeller (markus_z)
Public bug reported:

There is py26 compatibility code in the libvirt driver to query the
console output [1]. We can drop that as we have py27 as minimum. At the
same time we can refactor that method a little to make it easier to
read.

References:
[1] 
https://github.com/openstack/nova/blob/b2100015ac6f98c68451cb8827687866fe695452/nova/virt/libvirt/driver.py#L2701-L2704

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611380

Title:
  libvirt: drop py26 compat for get_console_output

Status in OpenStack Compute (nova):
  New

Bug description:
  There is py26 compatibility code in the libvirt driver to query the
  console output [1]. We can drop that as we have py27 as minimum. At
  the same time we can refactor that method a little to make it easier
  to read.

  References:
  [1] 
https://github.com/openstack/nova/blob/b2100015ac6f98c68451cb8827687866fe695452/nova/virt/libvirt/driver.py#L2701-L2704

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1611380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1610887] Re: Error while launching instance. ERROR (ClientException): Unexpected API Error.

2016-08-09 Thread Markus Zoeller (markus_z)
Unfortunately we don't have the capacity to resolve support requests here.
The issue you describe is most probably a configuration issue where Nova
cannot communicate/authenticate with Keystone. The manuals [1] should give you
enough information on how to solve this. There is also a mailing list [2] where
OpenStack users help each other and there's also a forum [3].

References:
[1] 
http://docs.openstack.org/liberty/install-guide-ubuntu/nova-controller-install.html#install-and-configure-components
[2] https://wiki.openstack.org/wiki/Mailing_Lists#General_List
[3] https://ask.openstack.org/
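
If it helps with debugging, here is a hedged standalone sketch (all values are
placeholders and must match what nova.conf configures) that checks whether the
credentials Nova uses can actually obtain a token from Keystone:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller:35357/v3',
                       username='nova',
                       password='<service password>',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    print(sess.get_token())  # raises an auth error if the config is wrong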

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1610887

Title:
  Error while launching instance. ERROR (ClientException): Unexpected
  API Error.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I'm unable to launch an instance using the nova boot command.
  I'm running OpenStack Liberty on Ubuntu Server 14 VMs on VirtualBox in a 
multi-node set-up, following the official docs for Liberty on Ubuntu.
  Below are the command and error details, the relevant nova-api.log and 
nova-compute.log lines, and my nova.conf contents.

  Command and Error;
  root@controller:~# nova boot --flavor m1.tiny --image cirros --nic 
net-id=aaa59963-a3b2-4c37-b691-5a0369b6ffde \
  > --security-group default --key-name mykey vm1
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-91af2d4b-8948-476e-b87a-00c34598dde7)

  
  Error log in 'Nova-api.log' file;

  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions return 
self.request(url, 'POST', **kwargs)
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/utils.py", line 337, in inner
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 401, in 
request
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions raise 
exceptions.from_response(resp, method, url)
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions BadRequest: 
Expecting to find username or userId in passwordCredentials - the server could 
not comply with the request since it is either malformed or otherwise 
incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: 
req-3265697b-246a-4596-8e35-348e4f36a671)
  2016-08-08 12:30:41.195 4992 ERROR nova.api.openstack.extensions
  2016-08-08 12:30:41.291 4992 INFO nova.api.openstack.wsgi 
[req-91af2d4b-8948-476e-b87a-00c34598dde7 fd2b20887e9847bf80d84ee31e47aec9 
61544fe6c61040cc98ba2c636cb0f889 - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
  
  2016-08-08 12:30:41.322 4992 INFO nova.osapi_compute.wsgi.server 
[req-91af2d4b-8948-476e-b87a-00c34598dde7 fd2b20887e9847bf80d84ee31e47aec9 
61544fe6c61040cc98ba2c636cb0f889 - - -] 10.1.1.21 "POST 
/v2/61544fe6c61040cc98ba2c636cb0f889/servers HTTP/1.1" status: 500 len: 441 
time: 2.0076089

  
  Error in nova-compute.log on the compute node:
  2016-08-08 11:41:25.074 1581 ERROR oslo.messaging._drivers.impl_rabbit 
[req-cddbd2fb-f356-4d84-823b-a9703fc70751 - - - - -] AMQP server on 
controller:5672 is unreachable: timed out. Trying again in 2 seconds.

  
  Below is my /etc/nova/nova.conf file on controller;
  [DEFAULT]
  dhcpbridge_flagfile=/etc/nova/nova.conf
  dhcpbridge=/usr/bin/nova-dhcpbridge
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  force_dhcp_release=True
  libvirt_use_virtio_for_bridges=True
  #verbose=True
  ec2_private_dns_show_ip=True
  api_paste_config=/etc/nova/api-paste.ini
  #enabled_apis=ec2,osapi_compute,metadata

  rpc_backend = rabbit

  auth_strategy = keystone

  my_ip = 

  network_api_class = nova.network.neutronv2.api.API
  security_group_api = neutron
  linuxnet_interface_driver = 
nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  firewall_driver = nova.virt.firewall.NoopFirewallDriver

  enabled_apis=osapi_compute,metadata

  verbose = True

  [database]
  connection = mysql+pymysql://nova:nova@controller/nova

  
  [oslo_messaging_rabbit]
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = 

  
  [keystone_authtoken]
  auth_uri = http://controller:5000
  #identity_url = http://controller:35357
  auth_host = controller
  auth_port = 35357
  auth_protocol = http
  #auth_plugin = password
  admin_tenant_name = service
  admin_user = nova
  admin_password = 
  #auth_plugin = password
  #project_domain_id = default
  #user_domain_id = default
  

[Yahoo-eng-team] [Bug 1607825] Re: A question about compute node docking VMware virtualization platform

2016-08-09 Thread Markus Zoeller (markus_z)
Support requests won't get handled here. The mailing list
(openst...@lists.openstack.org) or the ask.openstack.org forum are
better places.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607825

Title:
  A question about compute node docking VMware virtualization platform

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am running OpenStack Kilo on CentOS 7.1. I want to launch a VM with
  two ports, and each port should communicate with a different standard
  vswitch, so that the VM can be reached from the outside over two
  physical cards. Right now my environment can only be accessed through a
  single standard vswitch. Can the current OpenStack release support this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605742] Re: Paramiko 2.0 is incompatible with Mitaka

2016-08-08 Thread Markus Zoeller (markus_z)
@Jesse: Thanks for double-checking.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605742

Title:
  Paramiko 2.0 is incompatible with Mitaka

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-ansible:
  Fix Committed

Bug description:
  Unexpected API Error. TypeError. Code: 500. os-keypairs v2.1 
  nova (stable/mitaka , 98b38df57bfed3802ce60ee52e4450871fccdbfa) 

  Tempest tests (for example
  TestMinimumBasicScenario:test_minimum_basic_scenario) are failed on
  gate job for project openstack-ansible  with such error (please find
  full logs [1]) :

  -
  2016-07-22 18:46:07.399604 | 
  2016-07-22 18:46:07.399618 | Captured pythonlogging:
  2016-07-22 18:46:07.399632 | ~~~
  2016-07-22 18:46:07.399733 | 2016-07-22 18:45:47,861 2312 DEBUG
[tempest.scenario.manager] paths: img: 
/opt/images/cirros-0.3.4-x86_64-disk.img, container_fomat: bare, disk_format: 
qcow2, properties: None, ami: /opt/images/cirros-0.3.4-x86_64-blank.img, ari: 
/opt/images/cirros-0.3.4-x86_64-initrd, aki: 
/opt/images/cirros-0.3.4-x86_64-vmlinuz
  2016-07-22 18:46:07.399799 | 2016-07-22 18:45:48,513 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 201 POST 
http://172.29.236.100:9292/v1/images 0.651s
  2016-07-22 18:46:07.399889 | 2016-07-22 18:45:48,513 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'x-image-meta-name': 
'tempest-scenario-img--306818818', 'x-image-meta-container_format': 'bare', 
'X-Auth-Token': '', 'x-image-meta-disk_format': 'qcow2', 
'x-image-meta-is_public': 'False'}
  2016-07-22 18:46:07.399907 | Body: None
  2016-07-22 18:46:07.400027 | Response - Headers: {'status': '201', 
'content-length': '481', 'content-location': 
'http://172.29.236.100:9292/v1/images', 'connection': 'close', 'location': 
'http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe', 
'date': 'Fri, 22 Jul 2016 18:45:48 GMT', 'content-type': 'application/json', 
'x-openstack-request-id': 'req-6b3c6218-b3e6-4884-bb3c-b88c70733d0c'}
  2016-07-22 18:46:07.400183 | Body: {"image": {"status": "queued", 
"deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": 
"2016-07-22T18:45:48.00", "owner": "1fbbcc542db344f394b4f1565a7e48fd", 
"min_disk": 0, "is_public": false, "deleted_at": null, "id": 
"5c390277-ec8d-4d82-b8d8-b8978473ecbe", "size": 0, "virtual_size": null, 
"name": "tempest-scenario-img--306818818", "checksum": null, "created_at": 
"2016-07-22T18:45:48.00", "disk_format": "qcow2", "properties": {}, 
"protected": false}}
  2016-07-22 18:46:07.400241 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request: PUT 
http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400359 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request Headers: {'Transfer-Encoding': 'chunked', 
'User-Agent': 'tempest', 'Content-Type': 'application/octet-stream', 
'X-Auth-Token': 
'gABXkmnbJaM7C2EMxfEELQEWlU27v4pCt_9tF_XGlYrgEu-eXvDcEclzZc2OyFnVy79Dfz_pH2gGvKveSTihW-hzV6ucHyF1JrdqwOYr6Z7ZoUe_0BQ4gOdxKZoqzSaqQKfdfrZnojq9OE9Dy11frFI59qqkk0303j3fWlFIUeV6NtrzX-s'}
  2016-07-22 18:46:07.400403 | 2016-07-22 18:45:48,517 2312 DEBUG
[tempest.common.glance_http] Actual Path: 
/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400440 | 2016-07-22 18:45:50,721 2312 INFO 
[tempest.common.glance_http] Response Status: 200
  2016-07-22 18:46:07.400555 | 2016-07-22 18:45:50,722 2312 INFO 
[tempest.common.glance_http] Response Headers: [('date', 'Fri, 22 Jul 2016 
18:45:50 GMT'), ('content-length', '518'), ('etag', 
'ee1eca47dc88f4879d8a229cc70a07c6'), ('content-type', 'application/json'), 
('x-openstack-request-id', 'req-2e385c60-1755-4221-8325-caa98da1f760')]
  2016-07-22 18:46:07.400597 | 2016-07-22 18:45:50,723 2312 DEBUG
[tempest.scenario.manager] image:5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400669 | 2016-07-22 18:45:52,416 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 500 POST 
http://172.29.236.100:8774/v2.1/1fbbcc542db344f394b4f1565a7e48fd/os-keypairs 
1.689s
  2016-07-22 18:46:07.400778 | 2016-07-22 18:45:52,416 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  2016-07-22 18:46:07.400813 | Body: {"keypair": {"name": 
"tempest-TestMinimumBasicScenario-1803650811"}}
  2016-07-22 18:46:07.400940 | Response - Headers: {'status': '500', 
'content-length': '193', 'content-location': 

[Yahoo-eng-team] [Bug 1605742] Re: Paramiko 2.0 is incompatible with Mitaka

2016-08-02 Thread Markus Zoeller (markus_z)
The Nova periodic stable mitaka test job installs paramiko==1.16.0

http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-mitaka/9d14b47/console.html#_2016-08-02_06_16_11_180799
This is the upper-constraint since Nov 2015

https://github.com/openstack/requirements/commit/6bb1357b2a4347a29ca5911499b86de71b92fdc8#diff-0bdd949ed8a7fdd4f95240bd951779c8R212
This works fine.

The openstack-ansible project used paramiko>=1.16.0 and installed 2.0.1

http://logs.openstack.org/09/342309/8/check/gate-openstack-ansible-dsvm-commit/224f9c0/console.html.gz#_2016-07-22_17_21_57_050330

https://github.com/openstack/openstack-ansible/commit/9de9f4def3a731563da5778546e7f9f73e2c4214#diff-b4ef698db8ca845e5845c4618278f29aR9
openstack-ansible removed the requirement "paramiko>=1.6.0" later

https://github.com/openstack/openstack-ansible/commit/b15363c#diff-b4ef698db8ca845e5845c4618278f29aL3
This however is only in openstack-ansible Newton and not in Mitaka:

https://github.com/openstack/openstack-ansible/blob/stable/mitaka/requirements.txt#L9

Based on ^ I believe this is an issue with the openstack-ansible project, which
doesn't cap the upper-constraint of paramiko in its stable/mitaka branch.
I don't see the need to backport anything to Nova's stable/mitaka branch.
I leave this as "incomplete" to get a second pair of eyes from auggy.

** Also affects: openstack-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605742

Title:
  Paramiko 2.0 is incompatible with Mitaka

Status in OpenStack Compute (nova):
  Incomplete
Status in openstack-ansible:
  New

Bug description:
  Unexpected API Error. TypeError. Code: 500. os-keypairs v2.1 
  nova (stable/mitaka , 98b38df57bfed3802ce60ee52e4450871fccdbfa) 

  Tempest tests (for example
  TestMinimumBasicScenario:test_minimum_basic_scenario) are failed on
  gate job for project openstack-ansible  with such error (please find
  full logs [1]) :

  -
  2016-07-22 18:46:07.399604 | 
  2016-07-22 18:46:07.399618 | Captured pythonlogging:
  2016-07-22 18:46:07.399632 | ~~~
  2016-07-22 18:46:07.399733 | 2016-07-22 18:45:47,861 2312 DEBUG
[tempest.scenario.manager] paths: img: 
/opt/images/cirros-0.3.4-x86_64-disk.img, container_fomat: bare, disk_format: 
qcow2, properties: None, ami: /opt/images/cirros-0.3.4-x86_64-blank.img, ari: 
/opt/images/cirros-0.3.4-x86_64-initrd, aki: 
/opt/images/cirros-0.3.4-x86_64-vmlinuz
  2016-07-22 18:46:07.399799 | 2016-07-22 18:45:48,513 2312 INFO 
[tempest.lib.common.rest_client] Request 
(TestMinimumBasicScenario:test_minimum_basic_scenario): 201 POST 
http://172.29.236.100:9292/v1/images 0.651s
  2016-07-22 18:46:07.399889 | 2016-07-22 18:45:48,513 2312 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'x-image-meta-name': 
'tempest-scenario-img--306818818', 'x-image-meta-container_format': 'bare', 
'X-Auth-Token': '', 'x-image-meta-disk_format': 'qcow2', 
'x-image-meta-is_public': 'False'}
  2016-07-22 18:46:07.399907 | Body: None
  2016-07-22 18:46:07.400027 | Response - Headers: {'status': '201', 
'content-length': '481', 'content-location': 
'http://172.29.236.100:9292/v1/images', 'connection': 'close', 'location': 
'http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe', 
'date': 'Fri, 22 Jul 2016 18:45:48 GMT', 'content-type': 'application/json', 
'x-openstack-request-id': 'req-6b3c6218-b3e6-4884-bb3c-b88c70733d0c'}
  2016-07-22 18:46:07.400183 | Body: {"image": {"status": "queued", 
"deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": 
"2016-07-22T18:45:48.00", "owner": "1fbbcc542db344f394b4f1565a7e48fd", 
"min_disk": 0, "is_public": false, "deleted_at": null, "id": 
"5c390277-ec8d-4d82-b8d8-b8978473ecbe", "size": 0, "virtual_size": null, 
"name": "tempest-scenario-img--306818818", "checksum": null, "created_at": 
"2016-07-22T18:45:48.00", "disk_format": "qcow2", "properties": {}, 
"protected": false}}
  2016-07-22 18:46:07.400241 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request: PUT 
http://172.29.236.100:9292/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  2016-07-22 18:46:07.400359 | 2016-07-22 18:45:48,517 2312 INFO 
[tempest.common.glance_http] Request Headers: {'Transfer-Encoding': 'chunked', 
'User-Agent': 'tempest', 'Content-Type': 'application/octet-stream', 
'X-Auth-Token': 
'gABXkmnbJaM7C2EMxfEELQEWlU27v4pCt_9tF_XGlYrgEu-eXvDcEclzZc2OyFnVy79Dfz_pH2gGvKveSTihW-hzV6ucHyF1JrdqwOYr6Z7ZoUe_0BQ4gOdxKZoqzSaqQKfdfrZnojq9OE9Dy11frFI59qqkk0303j3fWlFIUeV6NtrzX-s'}
  2016-07-22 18:46:07.400403 | 2016-07-22 18:45:48,517 2312 DEBUG
[tempest.common.glance_http] Actual Path: 
/v1/images/5c390277-ec8d-4d82-b8d8-b8978473ecbe
  

[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-08-01 Thread Markus Zoeller (markus_z)
Nova has no rules in place which forbid that, so it's not a bug. I also
don't see a reason to put effort into this.

** Changed in: python-novaclient
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  Incomplete
Status in Cinder:
  Incomplete
Status in Glance:
  Incomplete
Status in OpenStack Identity (keystone):
  Won't Fix
Status in Magnum:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in python-cinderclient:
  Incomplete
Status in python-glanceclient:
  Incomplete
Status in python-keystoneclient:
  Won't Fix
Status in python-neutronclient:
  Incomplete
Status in python-novaclient:
  Invalid
Status in python-rackclient:
  Incomplete
Status in python-swiftclient:
  Incomplete
Status in rack:
  Incomplete
Status in OpenStack Object Storage (swift):
  Incomplete
Status in OpenStack DBaaS (Trove):
  Incomplete

Bug description:
  We should remove the logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-08-01 Thread Markus Zoeller (markus_z)
Nova has no rules in place which forbid that, so it's not a bug. I also
don't see a reason to put effort into this.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  Incomplete
Status in Cinder:
  Incomplete
Status in Glance:
  Incomplete
Status in OpenStack Identity (keystone):
  Won't Fix
Status in Magnum:
  Incomplete
Status in OpenStack Compute (nova):
  Invalid
Status in python-cinderclient:
  Incomplete
Status in python-glanceclient:
  Incomplete
Status in python-keystoneclient:
  Won't Fix
Status in python-neutronclient:
  Incomplete
Status in python-novaclient:
  Invalid
Status in python-rackclient:
  Incomplete
Status in python-swiftclient:
  Incomplete
Status in rack:
  Incomplete
Status in OpenStack Object Storage (swift):
  Incomplete
Status in OpenStack DBaaS (Trove):
  Incomplete

Bug description:
  We should remove the logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607934] Re: Unable to launch an instance Bug 1219890

2016-08-01 Thread Markus Zoeller (markus_z)
That's a configuration issue and not a bug in the nova code base. You
can give this a try:
https://bugs.launchpad.net/nova/+bug/1534273/comments/8

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607934

Title:
  Unable to launch an instance Bug 1219890

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am unable to launch an instance.


  [root@controller keystone]# nova boot --flavor m1.tiny --image cirros
  --nic net-id=544bcd85-b051-4e42-a462-d7dab712de5a   --security-group
  default --key-name mykey public-instance

  # ERRORs I am seeing:
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-5830ff32-d3e1-43a1-a3f9-7c6f96fb360b)

  ==> keystone/keystone.log <==
  2016-07-29 22:03:07.865 2175 INFO keystone.common.wsgi 
[req-7e612649-5070-49f7-87da-71afe5f1c51c - - - - -] GET 
http://controller:5000/v3/
  2016-07-29 22:03:07.980 2177 INFO keystone.common.wsgi 
[req-a6eda653-c25a-4453-b2e5-9736450f7568 - - - - -] POST 
http://controller:5000/v3/auth/tokens
  2016-07-29 22:03:08.034 2198 INFO keystone.common.wsgi 
[req-4f9430cd-7579-489e-9bc7-c5cb3f477d5a - - - - -] GET 
http://controller:35357/v3/auth/tokens
  2016-07-29 22:03:37.773 2186 INFO keystone.common.wsgi 
[req-8964cb8d-ea9c-4e7e-b153-4146c4e5b031 - - - - -] GET 
http://controller:5000/v3/
  2016-07-29 22:03:37.778 2176 INFO keystone.common.wsgi 
[req-31e52a1e-ac13-42d9-92c5-9deea759e9e9 - - - - -] POST 
http://controller:5000/v3/auth/tokens
  2016-07-29 22:03:37.828 2194 INFO keystone.common.wsgi 
[req-336a2901-5827-4508-829f-9de92ebd28da - - - - -] GET 
http://controller:35357/v3/auth/tokens
  2016-07-29 22:03:37.961 2203 INFO keystone.common.wsgi 
[req-eb3d31c9-0aa3-4572-b78a-e087ae509de9 - - - - -] GET 
http://controller:35357/v3/auth/tokens
  2016-07-29 22:03:37.985 2200 INFO keystone.common.wsgi 
[req-ef7ab927-e5b3-45c0-89f9-d7191dc4cea4 - - - - -] GET 
http://controller:35357/v3/auth/tokens
  2016-07-29 22:03:38.120 2197 INFO keystone.common.wsgi 
[req-1ceee11a-cc13-490a-96ba-47081c1fe154 - - - - -] GET 
http://controller:35357/v3/auth/tokens
  2016-07-29 22:03:38.174 2178 INFO keystone.common.wsgi 
[req-ebe3d994-18c2-4304-91a8-04143ba6a627 - - - - -] POST 
http://localhost:5000/v2.0/tokens
  2016-07-29 22:03:38.174 2178 WARNING keystone.common.wsgi 
[req-ebe3d994-18c2-4304-91a8-04143ba6a627 - - - - -] Expecting to find username 
or userId in passwordCredentials - the server could not comply with the request 
since it is either malformed or otherwise incorrect. The client is assumed to 
be in error.

  ==> nova/nova-scheduler.log <==
  [root@controller nova]# tail -1 nova-scheduler.log 
  2016-07-29 22:06:01.868 1409 INFO nova.scheduler.host_manager 
[req-1360cc28-fd91-42cb-918b-37db9432cc77 - - - - -] Successfully synced 
instances from host 'compute.example.com'.

  ==> nova/nova-api.log <==
  2016-07-29 22:05:12.728 8504 INFO nova.osapi_compute.wsgi.server 
[req-54a6644f-96fe-4663-9825-ebc0e1ac920b 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] 10.0.0.11 "GET /v2/ HTTP/1.1" status: 
200 len: 572 time: 0.0156450
  2016-07-29 22:05:12.898 8504 INFO nova.osapi_compute.wsgi.server 
[req-d1ad4413-4c34-4a58-8b30-8dd258a8c9b1 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] 10.0.0.11 "GET 
/v2/7f659e1816d24cb2b2fdacfef970cfc6/images HTTP/1.1" status: 200 len: 692 
time: 0.0624750
  2016-07-29 22:05:12.964 8504 INFO nova.osapi_compute.wsgi.server 
[req-d62a91fe-c059-46e9-9084-e6294c714d5c 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] 10.0.0.11 "GET 
/v2/7f659e1816d24cb2b2fdacfef970cfc6/images/ffc86274-9a08-4b13-a23d-37c49bedb818
 HTTP/1.1" status: 200 len: 873 time: 0.0643079
  2016-07-29 22:05:12.979 8504 INFO nova.api.openstack.wsgi 
[req-e6f20077-6da7-4423-bb04-791b8eb1db2c 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] HTTP exception thrown: Flavor m1.tiny 
could not be found.
  2016-07-29 22:05:12.980 8504 INFO nova.osapi_compute.wsgi.server 
[req-e6f20077-6da7-4423-bb04-791b8eb1db2c 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] 10.0.0.11 "GET 
/v2/7f659e1816d24cb2b2fdacfef970cfc6/flavors/m1.tiny HTTP/1.1" status: 404 len: 
298 time: 0.0130160
  2016-07-29 22:05:12.997 8504 INFO nova.osapi_compute.wsgi.server 
[req-c7db45cc-4338-4903-b97b-4672b96ac337 5d7190fb0b224c3484be7854426c88b1 
7f659e1816d24cb2b2fdacfef970cfc6 - - -] 10.0.0.11 "GET 
/v2/7f659e1816d24cb2b2fdacfef970cfc6/flavors?is_public=None HTTP/1.1" status: 
200 len: 1407 time: 0.0142021
  2016-07-29 22:05:13.012 8504 INFO nova.osapi_compute.wsgi.server 
[req-d82760c1-a126-4bbc-b599-5fa2a7d06bea 5d7190fb0b224c3484be7854426c88b1 

[Yahoo-eng-team] [Bug 1604943] Re: non-ASCII chars ( Chinese ) not allowed in Keypair name

2016-07-26 Thread Markus Zoeller (markus_z)
This is as intended, please see [1]. You should also get the error
message

 "Keypair data is invalid: Keypair name contains unsafe characters"

References:
[1] 
https://github.com/openstack/nova/blob/aa81d6c301d6549af6fe8e8a9fb55facf898f809/nova/compute/api.py#L3907-L3911
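
For illustration, this is roughly what the check behind [1] amounts to (a
sketch of the idea, not the exact Nova code):

  import string

  # Only letters, digits, spaces, dashes and underscores count as safe,
  # so Chinese/Japanese characters are rejected and the API returns 400.
  SAFE_CHARS = "_- " + string.digits + string.ascii_letters

  def is_safe_keypair_name(name):
      return all(c in SAFE_CHARS for c in name)

  # is_safe_keypair_name(u'my-key_01')  -> True
  # is_safe_keypair_name(u'密钥')        -> False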

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604943

Title:
  non-ASCII chars ( Chinese ) not allowed in Keypair name

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  On a create (POST) request to the /os-keypairs API, using non-ASCII
  characters such as Chinese or Japanese characters for the name
  parameter returns a 400 error.

  "POST /v2.1/f60dbb1f1d2e4f8cb2434f0ed1016d97/os-keypairs HTTP/1.1"
  status: 400 len: 401 time: 0.0861628

  OpenStack version is Mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603979] Re: gate: context tests failed because missing parameter "is_admin_project" (oslo.context 2.6.0)

2016-07-26 Thread Markus Zoeller (markus_z)
Looks like https://review.openstack.org/#/c/345633/ solved this issue. I see no 
hits in logstash in the last 24 hours:
Logstash query: 
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=build_name:gate-nova-python27-db%20AND%20message:%5C%22testtools.matchers._impl.MismatchError:%200%20!%3D%201:%20%5B%5C%22Arguments%20dropped%20when%20creating%20context:%20%7B'is_admin_project':%20True%7D%5C%22%5D%5C%22

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603979

Title:
  gate: context tests failed because missing parameter
  "is_admin_project" (oslo.context 2.6.0)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The following 3 tests failed:
  1. 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 230, in 
test_convert_from_dict_then_to_dict
  self.assertEqual(values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  2. nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 203, in 
test_convert_from_rc_to_dict
  self.assertEqual(expected_values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  3. nova.tests.unit.test_context.ContextTestCase.test_to_dict_from_dict_no_log
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 144, in 
test_to_dict_from_dict_no_log
  self.assertEqual(0, len(warns), warns)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 0 != 1: ["Arguments dropped when 
creating context: {'is_admin_project': True}"]

  Steps to reproduce
  ==
  Just run the context tests:
  tox -e py27 test_context

  This is because we missed to pass "is_admin_project" parameter to
  __init__() of  oslo.context.ResourceContext when initializing a nova
  ResourceContext object.

  In nova/context.py

  @enginefacade.transaction_context_provider
  class RequestContext(context.RequestContext):
  """Security context and request information.

  Represents the user taking a given action within the system.

  """

  def __init__(self, user_id=None, project_id=None,
   is_admin=None, read_deleted="no",
   roles=None, remote_address=None, timestamp=None,
   request_id=None, auth_token=None, overwrite=True,
   quota_class=None, user_name=None, project_name=None,
   service_catalog=None, instance_lock_checked=False,
   user_auth_plugin=None, **kwargs):
  ..
  super(RequestContext, self).__init__(
  ..
  is_admin=is_admin,
  ..)

  But in oslo_context/context.py,

  class RequestContext(object):

  ..

  def __init__(..
   is_admin=False,
   ..
   is_admin_project=True):
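
  A self-contained sketch of the idea behind the fix (illustration only; the
  actual merged patch is the review linked in the comments and may differ):
  accept the new keyword explicitly and forward it to the parent class
  instead of letting it be dropped:

    # Illustration only, not the merged Nova patch.
    class BaseContext(object):            # stands in for oslo.context
        def __init__(self, is_admin=False, is_admin_project=True):
            self.is_admin = is_admin
            self.is_admin_project = is_admin_project

    class RequestContext(BaseContext):    # stands in for nova.context
        def __init__(self, is_admin=None, is_admin_project=True, **kwargs):
            # kwargs stands in for the other Nova-specific arguments
            super(RequestContext, self).__init__(
                is_admin=is_admin, is_admin_project=is_admin_project)

    ctx = RequestContext(is_admin=True, is_admin_project=True)
    assert ctx.is_admin_project is True   # no longer dropped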

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : 

[Yahoo-eng-team] [Bug 1604798] Re: Use DDT library to reduce code duplication

2016-07-20 Thread Markus Zoeller (markus_z)
That's not a bug. The lib looks interesting though. Maybe start a
conversation on the ML if Nova sees a benefit in using it.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604798

Title:
  Use DDT library to reduce code duplication

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Use DDT library to reduce code duplication

  DDT eases error tracing and auto-generates tests on the basis of different
  input data. It allows one test case to be multiplied by running it with
  different test data, making it appear as multiple test cases. This helps
  to reduce code duplication.

  Please refer example use: 
  http://ddt.readthedocs.io/en/latest/example.html

  Currently DDT is implemented in openstack/cinder and openstack/rally.
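
  For reference, a minimal sketch of what a DDT-based test looks like (the
  test subject is made up for illustration); each value passed to @data
  becomes its own generated test case:

    import unittest
    import ddt

    @ddt.ddt
    class TestIsEven(unittest.TestCase):

        @ddt.data(2, 4, 6)
        def test_even(self, value):
            self.assertEqual(0, value % 2)

        @ddt.data((1, False), (2, True))
        @ddt.unpack
        def test_even_pairs(self, value, expected):
            self.assertEqual(expected, value % 2 == 0)

    if __name__ == '__main__':
        unittest.main()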

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584911] Re: Could not find resource cirros at launch instance

2016-07-19 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584911

Title:
  Could not find resource cirros at launch instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am following the OpenStack documentation
  http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-provider.html
  for installing OpenStack on Ubuntu Server 14.04, and I have this problem
  when launching an instance on the provider network.

  root@srv-controller:~# openstack server create --debug --flavor 1 --image 
cirros --security-group default --key-name mykey provider-instance
  START with options: ['server', 'create', '--debug', '--flavor', '1', 
'--image', 'cirros', '--security-group', 'default', '--key-name', 'mykey', 
'provider-instance']
  options: Namespace(access_token_endpoint='', auth_type='', 
auth_url='http://srv-controller:5000/v3', cacert='', client_id='', 
client_secret='***', cloud='', debug=True, default_domain='default', 
deferred_help=False, domain_id='', domain_name='', endpoint='', 
identity_provider='', identity_provider_url='', insecure=None, interface='', 
log_file=None, os_compute_api_version='', os_dns_api_version='2', 
os_identity_api_version='3', os_image_api_version='2', 
os_network_api_version='', os_object_api_version='', os_project_id=None, 
os_project_name=None, os_volume_api_version='', password='***', profile=None, 
project_domain_id='', project_domain_name='default', project_id='', 
project_name='demo', protocol='', region_name='', scope='', 
service_provider_endpoint='', timing=False, token='***', trust_id='', url='', 
user_domain_id='', user_domain_name='default', user_id='', username='demo', 
verbose_level=3, verify=None)
  defaults: {u'auth_type': 'password', u'compute_api_version': u'2', 'key': 
None, u'database_api_version': u'1.0', 'api_timeout': None, 
u'baremetal_api_version': u'1', u'image_api_version': u'2', 'cacert': None, 
u'image_api_use_tasks': False, u'floating_ip_source': u'neutron', 
u'orchestration_api_version': u'1', u'interface': None, u'network_api_version': 
u'2', u'image_format': u'qcow2', u'key_manager_api_version': u'v1', 
u'metering_api_version': u'2', 'verify': True, u'identity_api_version': u'2.0', 
u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 
u'container_api_version': u'1', u'dns_api_version': u'2', 
u'object_store_api_version': u'1', u'disable_vendor_agent': {}}
  cloud cfg: {'auth_type': 'password', u'compute_api_version': u'2', 'key': 
None, u'database_api_version': u'1.0', 'timing': False, u'network_api_version': 
u'2', u'image_format': u'qcow2', u'image_api_version': '2', 'verify': True, 
u'dns_api_version': '2', u'object_store_api_version': u'1', 'verbose_level': 3, 
'region_name': '', 'api_timeout': None, u'baremetal_api_version': u'1', 'auth': 
{'username': 'demo', 'project_name': 'demo', 'user_domain_name': 'default', 
'auth_url': 'http://srv-controller:5000/v3', 'password': '***', 
'project_domain_name': 'default'}, 'default_domain': 'default', 
u'container_api_version': u'1', u'image_api_use_tasks': False, 
u'floating_ip_source': u'neutron', u'orchestration_api_version': u'1', 
u'interface': None, 'cacert': None, u'key_manager_api_version': u'v1', 
u'metering_api_version': u'2', 'deferred_help': False, u'identity_api_version': 
'3', u'volume_api_version': u'2', 'cert': None, u'secgroup_source': u'neutron', 
'debug': True, u'disable_vendor_agent
 ': {}}
  compute API version 2, cmd group openstack.compute.v2
  network API version 2, cmd group openstack.network.v2
  image API version 2, cmd group openstack.image.v2
  volume API version 2, cmd group openstack.volume.v2
  identity API version 3, cmd group openstack.identity.v3
  object_store API version 1, cmd group openstack.object_store.v1
  dns API version 2, cmd group openstack.dns.v2
  command: server create -> openstackclient.compute.v2.server.CreateServer
  Auth plugin password selected
  auth_type: password
  Using auth plugin: password
  Using parameters {'username': 'demo', 'project_name': 'demo', 'auth_url': 
'http://srv-controller:5000/v3', 'user_domain_name': 'default', 'password': 
'***', 'project_domain_name': 'default'}
  Get auth_ref
  REQ: curl -g -i -X GET http://srv-controller:5000/v3 -H "Accept: 
application/json" -H "User-Agent: python-openstackclient keystoneauth1/2.4.0 
python-requests/2.9.1 CPython/2.7.6"
  Starting new HTTP connection (1): srv-controller
  "GET /v3 HTTP/1.1" 200 253
  RESP: [200] Content-Length: 253 Vary: X-Auth-Token Keep-Alive: timeout=5, 
max=100 Server: Apache/2.4.7 (Ubuntu) Connection: Keep-Alive Date: Mon, 23 May 
2016 17:38:07 GMT x-openstack-request-id: 
req-a34d7305-095c-4f51-98ae-d2791b467a72 Content-Type: application/json 
X-Distribution: Ubuntu 
  RESP BODY: {"version": {"status": "stable", "updated": 

[Yahoo-eng-team] [Bug 1603979] Re: gate: context tests failed because missing parameter "is_admin_project" (oslo.context 2.6.0)

2016-07-19 Thread Markus Zoeller (markus_z)
fixed by: https://review.openstack.org/#/c/343683/

ML: http://lists.openstack.org/pipermail/openstack-dev/2016-July/099467.html

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603979

Title:
  gate: context tests failed because missing parameter
  "is_admin_project" (oslo.context 2.6.0)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The following 3 tests failed:
  1. 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 230, in 
test_convert_from_dict_then_to_dict
  self.assertEqual(values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  2. nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 203, in 
test_convert_from_rc_to_dict
  self.assertEqual(expected_values, values2)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {
   ..
   'is_admin': True,
   ..}
  actual= {
   ..
   'is_admin': True,
   'is_admin_project': True,
   ..}

  3. nova.tests.unit.test_context.ContextTestCase.test_to_dict_from_dict_no_log
  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/test_context.py", line 144, in 
test_to_dict_from_dict_no_log
  self.assertEqual(0, len(warns), warns)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
    File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 0 != 1: ["Arguments dropped when 
creating context: {'is_admin_project': True}"]

  Steps to reproduce
  ==
  Just run the context tests:
  tox -e py27 test_context

  This is because we missed to pass "is_admin_project" parameter to
  __init__() of  oslo.context.ResourceContext when initializing a nova
  ResourceContext object.

  In nova/context.py

  @enginefacade.transaction_context_provider
  class RequestContext(context.RequestContext):
  """Security context and request information.

  Represents the user taking a given action within the system.

  """

  def __init__(self, user_id=None, project_id=None,
   is_admin=None, read_deleted="no",
   roles=None, remote_address=None, timestamp=None,
   request_id=None, auth_token=None, overwrite=True,
   quota_class=None, user_name=None, project_name=None,
   service_catalog=None, instance_lock_checked=False,
   user_auth_plugin=None, **kwargs):
  ..
  super(RequestContext, self).__init__(
  ..
  is_admin=is_admin,
  ..)

  But in oslo_context/context.py,

  class RequestContext(object):

  ..

  def __init__(..
   is_admin=False,
   ..
   is_admin_project=True):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583004] Re: Incorrect availability zone shown by nova show command

2016-07-13 Thread Markus Zoeller (markus_z)
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to "New".


** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583004

Title:
  Incorrect availability zone shown by nova show command

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The nova show command shows the availability zone as "AZ1", but the nova
  database shows it as 'nova' after live migrating the VM to another node
  which is in a different availability zone. This is for the Juno release.

  nova show a5bbc4fa-7ffb-42c3-b60e-b54885227bdd | grep availibility_zone
  | OS-EXT-AZ:availability_zone  | AZ1   |

  
  node:~# mysql -e "use nova; select availability_zone from instances where 
uuid='a5bbc4fa-7ffb-42c3-b60e-b54885227bdd';"
  +---+
  | availability_zone |
  +---+
  | nova  |
  +---+

  This is causing a problem while trying to resize the VM from the m1.large
  to the m1.xlarge flavor. The node (with availability zone 'nova') doesn't
  have enough memory left. After live migrating this VM to another node
  (with availability zone 'AZ1'), it still tries to launch on the same node
  (with availability zone 'nova') during the resize procedure. After
  changing the DB value to 'AZ1', the VM was resized successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564896] Re: Boot Failed, Not a bootable disk

2016-07-13 Thread Markus Zoeller (markus_z)
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to "New".


** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564896

Title:
  Boot Failed, Not a bootable disk

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This bug happened in my environment with these steps on the Liberty
  release:

  1.  Create an instance using a CentOS 7 image and boot it. Everything works 
well so far.
  2.  Create a 10G volume and attach it to this instance at /dev/vda.
  Note: The volume is LVM-based on a loop device.
  3.  Format it and mount it, try to create a file under it; this works well.
  4.  Restart the compute node with "systemctl restart".
  5.  When it is up, start all OpenStack services and go to the dashboard to 
start this instance. It says the instance is running, but when I open the 
console, I find it fails to boot with the following log on screen:

  Boot Failed, Not a bootable disk*

  and  the output from "nova show " are as
  [root@controller ~(keystone_admin)]# nova show 
bb5e9b52-0439-42a0-ac53-75a1ed25e7a8
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| AUTO 
|
  | OS-EXT-AZ:availability_zone  | Nova 
|
  | OS-EXT-SRV-ATTR:host | compute001   
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute001   
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0051
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2016-03-30T03:53:31.00   
|
  | OS-SRV-USG:terminated_at | -
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | config_drive |  
|
  | created  | 2016-03-30T03:51:14Z 
|
  | flavor   | C1M1D20 
(447288ec-4b9c-4a00-9001-8b68e7dc4ee0)   |
  | hostId   | 
375e5133fd870cec7008e67b2ffa1af6551267a593a6825c7a1bcf95 |
  | id   | bb5e9b52-0439-42a0-ac53-75a1ed25e7a8 
|
  | image| Attempt to boot from volume - no 
image supplied  |
  | key_name | ceshi
|
  | metadata | {}   
|
  | name | ceph1
|
  | os-extended-volumes:volumes_attached | [{"id": 
"059e67a5-6229-4a36-8d8a-545b02a9c160"}] |
  | progress | 0
|
  | security_groups  | default  
|
  | stacknet network | 192.168.30.127, 192.168.199.67   
|
  | status   | ACTIVE   
|
  | tenant_id| 8fc7a18927dc433aa4136a3be3548068 
|
  | updated  | 2016-04-01T12:27:59Z 
|
  | user_id  | 284beeb5bc9245098246b733cb8371d7 
|
  
+--+--+

  The highlight here is the value of image: “Attempt to boot from volume -
  no image supplied”

  Using virsh dumpxml shows that the disk is lost; it only keeps the
  attached volume here, but it is that this is volume is not 

[Yahoo-eng-team] [Bug 1513808] Re: cannot schedule instances in NUMA topology with more than 6 vcpus using SR-IOV ports

2016-07-13 Thread Markus Zoeller (markus_z)
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to "New".


** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513808

Title:
  cannot schedule instances in NUMA topology with more than 6 vcpus
  using SR-IOV ports

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi all,

  I have a CentOS 7.1 Kilo deployment, using SR-IOV ports for my
  instances. I'm trying to configure NUMA topology and CPU pinning for
  some telco-based workloads.

  I have 3 compute nodes, I'm trying to use one of them to use cpu
  pinning.

  I've configured it like this:
  Compute Node (total 24 cpus)

  /etc/nova/nova.conf
  vcpu_pin_set=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23

  Changed grub to isolate my cpus:
  #grubby --update-kernel=ALL 
--args="isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,18,19,22,23"
  #grub2-install /dev/sda

  
  Controller Nodes:

  /etc/nova/nova.conf
  
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
  scheduler_available_filters = nova.scheduler.filters.all_filters
  scheduler_available_filters = 
nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter

  Created host aggregate performance 
  #nova aggregate-create performance
  #nova aggregate-set-metadata 1 pinned=true
  #nova aggregate-add-host 1 compute03

  Created host aggregate normal
  #nova aggregate-create normal
  #nova aggregate-set-metadata 2 pinned=false
  #nova aggregate-add-host 2 compute01
  #nova aggregate-add-host 2 compute02

  Created the flavor with cpu pinning
  #nova flavor-create m1.performance 6 2048 20 4
  #nova flavor-key 6 set hw:cpu_policy=dedicated
  #nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true

  The issue is:
  With SR-IOV ports it only lets me create instances with 6 vcpus in total 
with the conf described above. Without SR-IOV, using OVS, I don't have that 
limitation. Is this a bug or something? I've seen this: 
https://bugs.launchpad.net/nova/+bug/1441169, however I have the patch, and as 
I said it works for the first 6 vcpus with my configuration. 

  Some relevant logs:
  /var/log/nova/nova-scheduler.log
  2015-11-06 11:18:17.955 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Starting with 3 host(s) 
get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:70
  2015-11-06 11:18:17.955 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter RetryFilter returned 3 host(s) 
get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.955 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter AvailabilityZoneFilter returned 
3 host(s) get_filtered_objects 
/usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.955 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter RamFilter returned 3 host(s) 
get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.956 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ComputeFilter returned 3 host(s) 
get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.956 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ComputeCapabilitiesFilter 
returned 3 host(s) get_filtered_objects 
/usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.956 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ImagePropertiesFilter returned 3 
host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.956 59494 DEBUG nova.filters 
[req-9e20f8a9-384f-45c2-aa99-2d7b3100c98d 9340dc4e70a14aeb82013e5a1631de80 
d5ecb0eea96f4996b565fd983a768b11 - - -] Filter ServerGroupAntiAffinityFilter 
returned 3 host(s) get_filtered_objects 
/usr/lib/python2.7/site-packages/nova/filters.py:84
  2015-11-06 11:18:17.956 59494 DEBUG nova.filters 

[Yahoo-eng-team] [Bug 1583499] Re: ironic instance_info does not update when nova instance has been changed

2016-07-13 Thread Markus Zoeller (markus_z)
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to "New".


** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583499

Title:
  ironic instance_info does not update when nova instance has been
  changed

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  like this:
  [root@controller1 ironic]# ironic node-show 
7f63ec59-d791-43fd-933d-16c05490a8ee | grep instance_info
  | instance_info  | {u'root_gb': u'100', u'display_name': u'test', 
u'image_source':  |
  [root@controller1 ironic]#

  [root@controller1 ironic]# nova list
  
+--+--+++-+---+
  | ID   | Name | Status | Task State | 
Power State | Networks  |
  
+--+--+++-+---+
  | 1e5149f4-4923-4e21-ad1b-c418edfb0479 | test | ACTIVE | -  | 
Running | ironic=192.168.189.37 |
  
+--+--+++-+---+
  [root@controller1 ironic]#

  The display_name is different; ironic does not update the
  instance_info from time to time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475655] Re: Unit_add call fails for fcp volumes when target port has not been configured

2016-07-13 Thread Markus Zoeller (markus_z)
Solved by: https://review.openstack.org/#/c/203026/

** Changed in: os-brick
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475655

Title:
  Unit_add call fails for fcp volumes when target port has not been
  configured

Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  Fix Released

Bug description:
  Linux on System z can be configured for automated port and LUN scanning. If 
both features are turned off, ports and LUNs need to be added using explicit 
calls.
  While os-brick currently uses explicit calls to add LUNs, the calls for 
adding ports are missing. If an administrator does not manually issue the 
port_rescan call to add fibre-channel target ports, OpenStack will fail to add 
any fibre-channel LUN on System z.
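
  For context, a hedged sketch of the two explicit sysfs calls involved,
  assuming the standard zfcp driver layout on Linux on System z (the device
  bus ID, WWPN and LUN below are placeholders):

    # Sketch only: explicit zfcp port and LUN configuration on System z.
    def zfcp_port_rescan(device_id):
        # make target ports visible before any unit_add can succeed
        path = '/sys/bus/ccw/drivers/zfcp/%s/port_rescan' % device_id
        with open(path, 'w') as f:
            f.write('1')

    def zfcp_unit_add(device_id, wwpn, lun):
        # explicitly attach one LUN behind an already configured port
        path = '/sys/bus/ccw/drivers/zfcp/%s/%s/unit_add' % (device_id, wwpn)
        with open(path, 'w') as f:
            f.write(lun)

    # placeholders: zfcp_port_rescan('0.0.1900')
    #               zfcp_unit_add('0.0.1900', '0x500507630300c562',
    #                             '0x4010400000000000')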

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578691] Re: live-migration failed Unexpected API Error

2016-07-12 Thread Markus Zoeller (markus_z)
@Eric:
Without the logs asked for in comment #1 we're not able to take action on this 
report. Comment #2 gave a reminder. I'm closing this now. Feel free to reopen 
after you attach the logs.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1578691

Title:
  live-migration failed Unexpected API Error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova live-migration fails with the following error:

  [root@ler-cloudnet-01 ~]# nova live-migration 
badacdde-0b93-4a96-a790-3c1f6a71ea47 ler-cloudnet-01
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-186d7d86-ad97-4608-a71d-2298c6742b0f)
  [root@ler-cloudnet-01 ~]# nova --debug live-migration 
badacdde-0b93-4a96-a790-3c1f6a71ea47 ler-cloudnet-01
  DEBUG (extension:157) found extension EntryPoint.parse('v2token = 
keystoneauth1.loading._plugins.identity.v2:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('admin_token = 
keystoneauth1.loading._plugins.admin_token:AdminToken')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcauthcode = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
  DEBUG (extension:157) found extension EntryPoint.parse('v2password = 
keystoneauth1.loading._plugins.identity.v2:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3password = 
keystoneauth1.loading._plugins.identity.v3:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcpassword = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
  DEBUG (extension:157) found extension EntryPoint.parse('token = 
keystoneauth1.loading._plugins.identity.generic:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('v3token = 
keystoneauth1.loading._plugins.identity.v3:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('password = 
keystoneauth1.loading._plugins.identity.generic:Password')
  DEBUG (session:248) REQ: curl -g -i -X GET http://ler-cloudnet-01:35357/v3 -H 
"Accept: application/json" -H "User-Agent: keystoneauth1/2.3.0 
python-requests/2.9.1 CPython/2.7.5"
  INFO (connectionpool:213) Starting new HTTP connection (1): ler-cloudnet-01
  DEBUG (connectionpool:393) "GET /v3 HTTP/1.1" 200 255
  DEBUG (session:277) RESP: [200] Content-Length: 255 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 
Python/2.7.5 Connection: Keep-Alive Date: Thu, 05 May 2016 15:17:34 GMT 
Content-Type: application/json x-openstack-request-id: 
req-7bf67088-e80e-403c-912f-6763a44e2493 
  RESP BODY: {"version": {"status": "stable", "updated": 
"2016-04-04T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.6", "links": 
[{"href": "http://ler-cloudnet-01:35357/v3/;, "rel": "self"}]}}

  DEBUG (base:165) Making authentication request to 
http://ler-cloudnet-01:35357/v3/auth/tokens
  DEBUG (connectionpool:393) "POST /v3/auth/tokens HTTP/1.1" 201 4234
  DEBUG (session:248) REQ: curl -g -i -X GET 
http://ler-cloudnet-01:8774/v2.1/32d0fe3c882b4e98b4ad9e3e3dd6e419 -H 
"User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}a45b40d5aa2cbf5b76e7e4a0809f5eb4cd5001cb"
  INFO (connectionpool:213) Starting new HTTP connection (1): ler-cloudnet-01
  DEBUG (connectionpool:393) "GET /v2.1/32d0fe3c882b4e98b4ad9e3e3dd6e419 
HTTP/1.1" 404 52
  DEBUG (session:277) RESP: [404] Date: Thu, 05 May 2016 15:17:35 GMT 
Connection: keep-alive Content-Type: text/plain; charset=UTF-8 Content-Length: 
52 X-Compute-Request-Id: req-578989bf-2fbd-429c-af57-b9c16744dd8f 
  RESP BODY: 404 Not Found

  The resource could not be found.


  DEBUG (session:248) REQ: curl -g -i -X GET http://ler-cloudnet-01:8774/v2.1/ 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a45b40d5aa2cbf5b76e7e4a0809f5eb4cd5001cb"
  DEBUG (connectionpool:393) "GET /v2.1/ HTTP/1.1" 200 389
  DEBUG (session:277) RESP: [200] Content-Length: 389 X-Compute-Request-Id: 
req-d2c6d076-2edc-41f2-967d-0b7db72efb17 Vary: X-OpenStack-Nova-API-Version 
Connection: keep-alive X-Openstack-Nova-Api-Version: 2.1 Date: Thu, 05 May 2016 
15:17:35 GMT Content-Type: application/json 
  RESP BODY: {"version": {"status": "CURRENT", "updated": 
"2013-07-23T11:33:21Z", "links": [{"href": "http://ler-cloudnet-01:8774/v2.1/;, 
"rel": "self"}, {"href": "http://docs.openstack.org/;, "type": "text/html", 
"rel": "describedby"}], "min_version": "2.1", "version": "2.25", "media-types": 
[{"base": "application/json", "type": 
"application/vnd.openstack.compute+json;version=2.1"}], "id": "v2.1"}}

  DEBUG (extension:157) found extension EntryPoint.parse('v2token 

[Yahoo-eng-team] [Bug 1270332] Re: cold migration fails in VMware driver

2016-07-08 Thread Markus Zoeller (markus_z)
Looks like this will be driven by the blueprint [1].

[1] https://blueprints.launchpad.net/nova/+spec/vmware-live-migration

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Medium => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270332

Title:
  cold migration fails in VMware driver

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  With two compute nodes (on different hosts) configured to two
  different clusters in the same vCenter Server i.e :-

  
  nova migrate  fails to migrate a server with the following error :-

  2014-01-17 16:00:21.336 ERROR nova.openstack.common.rpc.amqp 
[req-0c587eb7-3a23-4790-b23d-f4ad005b5fe7 admin admin] Exception during message 
handling
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp **args)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 90, in wrapped
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp payload)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 73, in wrapped
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 244, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp pass
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 230, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 295, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 272, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 259, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3163, in resize_instance
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 406, in 
migrate_disk_and_power_off
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp dest, flavor)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1182, in 
migrate_disk_and_power_off
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
self._session._wait_for_task(instance['uuid'], vm_clone_task)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 857, in _wait_for_task
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp ret_val = 
done.wait()
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 

[Yahoo-eng-team] [Bug 1531473] Re: Move graphics and serial console check to can_live_migrate_source/dest

2016-07-08 Thread Markus Zoeller (markus_z)
Already done; this introduced bug 1595962.

The report is also more of a personal todo item than a flaw in the
behavior of nova which could affect operators.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531473

Title:
  Move graphics and serial console check to can_live_migrate_source/dest

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  _check_graphics_addresses_can_live_migrate(listen_addrs) and
  _verify_serial_console_is_disabled() should be moved to the
  can_live_migrate_source/dest methods to avoid extra work in
  pre_live_migration and in the rollback path.
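
  A minimal sketch of what that reordering could look like (illustrative
  class and simplified checks only, not the actual libvirt driver code):

  class LibvirtDriverSketch(object):
      """Illustration of the suggestion above; not the real driver."""

      def check_can_live_migrate_destination(self, listen_addrs):
          # Running the checks here rejects the request up front, before
          # pre_live_migration does work that would need to be rolled back.
          self._check_graphics_addresses_can_live_migrate(listen_addrs)
          self._verify_serial_console_is_disabled()
          return {"graphics_listen_addrs": listen_addrs}

      def _check_graphics_addresses_can_live_migrate(self, listen_addrs):
          # Simplified stand-in for the real address check.
          if any(addr in ("127.0.0.1", "::1")
                 for addr in listen_addrs.values()):
              raise RuntimeError("graphics listen address is local-only")

      def _verify_serial_console_is_disabled(self):
          pass  # stand-in for the real config check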

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306229] Re: Project quota should not be less than user quota

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306229

Title:
  Project quota should not be less than user quota

Status in OpenStack Compute (nova):
  Expired

Bug description:
  This is the case and steps to reproduce:

  #Step 1: set project quota to unlimited
  $ nova quota-update --instances -1 $tenant

  #Step 2: set user quota to unlimited
  $ nova quota-update --user $tenantUser --instances -1 $tenant

  #Step 3: set project quota to 10
  $ nova quota-update --instances 10 $tenant

  The expected result for Step 3 should be an error message saying
  "Quota limit must be greater than -1" or "Quota should be unlimited".

  Today following the steps you will end up with unlimited quota for
  users and limited quota for project.
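
  A minimal sketch of the kind of validation the report asks for
  (hypothetical helper, not the actual nova quota code):

  def validate_project_quota_update(project_limit, user_limits):
      """Reject a finite project quota while any user quota exceeds it."""
      if project_limit == -1:
          return  # an unlimited project quota is always consistent
      for user, limit in user_limits.items():
          if limit == -1 or limit > project_limit:
              raise ValueError(
                  "user %s has an unlimited or larger quota (%s) than the "
                  "new project quota (%s)" % (user, limit, project_limit))

  # Step 3 from above: project quota 10 while the user quota is unlimited.
  try:
      validate_project_quota_update(10, {"tenantUser": -1})
  except ValueError as exc:
      print(exc)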

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400015] Re: Enable rescheduling for live_migrate, unshelve, evacuate

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Alex Xu (xuhj) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400015

Title:
  Enable rescheduling for live_migrate,unshelve,evacuate

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Currently only the build-instance flow supports rescheduling.

  We should support rescheduling for the other migration operations
  (live_migration, unshelve, evacuate) as well. Those migrations also
  need resource claims.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382153] Re: n-cond should not join the servicegroup on all workers

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382153

Title:
  n-cond should not join the servicegroup on all workers

Status in OpenStack Compute (nova):
  Expired

Bug description:
  All nova conductor worker processes attempt to join the servicegroup on
  the same host. This does not seem required.
  If you have 48 conductor workers on a node, it means nova tries to
  maintain the membership with all 48 workers.

  Since the workers are started almost at the same time, it means 48
  burst update attempts close to each other.

  The situation is even worse with the zk driver: it does not work with
  multiple workers (>1), because all worker threads inherit the same
  zookeeper connection from their parent. (4096 connections are allowed
  from the same IP on my zk servers.)
  (The api service does not do status reports, so it can work with
  multiple workers.)

  "lsof -P | grep cond | grep 2181" indicates all conductor workers use
  the same tcp source port --> the same socket is inherited.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370867] Re: absolute-limits sometimes returns negative value

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370867

Title:
  absolute-limits sometimes returns negative value

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova icehouse 2014.1.2

  There is a case where nova absolute-limits returns negative values for
  the *Used fields even when there is no instance for the project (as
  below).

  *Used should be 0 when there is no instance.

  Note that after it happened, I booted one instance.
  Then the *Used fields took the correct value (1 for totalInstancesUsed).
  After that the instance was deleted and the *Used fields were reset to 0.

  ubuntu@dev03:~$ nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  ubuntu@dev03:~$ nova absolute-limits
  +-------------------------+-------+
  | Name                    | Value |
  +-------------------------+-------+
  | maxServerMeta           | 128   |
  | maxPersonality          | 5     |
  | maxImageMeta            | 128   |
  | maxPersonalitySize      | 10240 |
  | maxTotalRAMSize         | 51200 |
  | maxSecurityGroupRules   | 20    |
  | maxTotalKeypairs        | 100   |
  | totalRAMUsed            | -2048 |
  | maxSecurityGroups       | 10    |
  | totalFloatingIpsUsed    | 0     |
  | totalInstancesUsed      | -1    |
  | totalSecurityGroupsUsed | 1     |
  | maxTotalFloatingIps     | 10    |
  | maxTotalInstances       | 10    |
  | totalCoresUsed          | -1    |
  | maxTotalCores           | 20    |
  +-------------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297635] Re: Race condition when deleting iscsi devices

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297635

Title:
  Race condition when deleting iscsi devices

Status in OpenStack Compute (nova):
  Expired

Bug description:
  If you have two instances on the same compute node that each have a
  volume attached (using the iscsi backend), and you delete both of them,
  triggering a volume disconnect for each, the following happens:

  The first request will delete the device:
  echo 1 > /sys/block/sdr/device/delete

  The second request triggers an iscsi_rescan which then rediscovers the
  device.

  The volume is then deleted from the cinder backend.

  Now you have a device which is pointing back to a deleted volume.

  This is using a NetApp device where all the devices are in the same
  IQN, using multipath on stable/havana.
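
  A common way to mitigate this kind of race is to serialize the
  rescan/delete sequence per compute host; a hedged sketch using a plain
  lock (hypothetical helper, not the actual nova/cinder code):

  import threading

  _iscsi_disconnect_lock = threading.Lock()

  def disconnect_iscsi_device(device, rescan_func, delete_func):
      """Make 'rescan, then delete the device' atomic on this host.

      With the lock held, a concurrent disconnect cannot rescan the HBA
      between another request's rescan and its device deletion, so a
      just-deleted device is not immediately re-discovered.
      """
      with _iscsi_disconnect_lock:
          rescan_func()
          delete_func(device)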

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357677] Re: Instances fail to boot from volume

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357677

Title:
  Instances fail to boot from volume

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Logstash query for full console outputs which do not contain 'info:
  initramfs loading root from /dev/vda', but do contain the previous boot
  message.

  These issues look like an ssh connectivity issue, but the instance is
  not booted, and it happens regardless of the network type.

  message: "Freeing unused kernel memory" AND message: "Initializing
  cgroup subsys cpuset" AND NOT message: "initramfs loading root from"
  AND tags:"console"

  49 incidents/week.

  Example console log:
  
http://logs.openstack.org/75/113175/1/gate/check-tempest-dsvm-neutron-full/827c854/console.html.gz#_2014-08-14_11_23_30_120

  It failed when it tried to ssh to the 3rd server.
  WARNING: The console.log contains the serial console output of two
  instances; try not to mix them up when reading.

  The fail point in the test code was here:
  
https://github.com/openstack/tempest/blob/b7144eb08175d010e1300e14f4f75d04d9c63c98/tempest/scenario/test_volume_boot_pattern.py#L175

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360260] Re: 'allow_same_net_traffic=true' has no effect

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360260

Title:
  'allow_same_net_traffic=true' has no effect

Status in OpenStack Compute (nova):
  Expired
Status in openstack-manuals:
  Expired

Bug description:
  environment: Ubuntu trusty, icehouse from repos. 
  Setup per 'Openstack Installation Guide for Ubuntu 12.04/14.04 LTS' 

  **brief**

  two instances X and Y are members of security group A. Despite the
  following explicit setting in nova.conf:

  allow_same_net_traffic=True

  ...the instances are only allowed to communicate according to the
  rules defined in security group A.

  
  **detail**

  I first noticed this attempting to run iperf between two instances on
  the same security network; they were unable to connect via the default
  TCP port 5001.

  They were able to ping... Looking at the rules for the security group
  they are associated with, ping was allowed, so I then suspected the
  security group rules were being applied to all communication, despite
  them being in the same security group.

  To test, I added rules to group A that allowed all communication, and
  associated the rules with itself (i.e. security group A) and voila,
  they could talk!

  I then thought I had remembered incorrectly that by default all
  traffic is allowed between instances on the same security group, so I
  double-checked the documentation, but according to the documentation I
  had remembered correctly:

  allow_same_net_traffic = True (BoolOpt) Whether to allow network
  traffic from same network

  ...I searched through my nova.conf files, but there was no
  'allow_same_net_traffic' entry, so the default ought to be True,
  right? Just to be sure, I explicitly added:

  allow_same_net_traffic = True

  to nova.conf and restarted nova services, but the security group rules
  are still being applied to communication between instances that are
  associated with the same security group.

  I thought the 'default' security group might be a special case, so I
  tested on another security group, but still get the same behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400814] Re: Libvirt: SMB volume driver

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400814

Title:
  Libvirt: SMB volume driver

Status in OpenStack Compute (nova):
  Expired

Bug description:
  https://review.openstack.org/131734
  commit 561f8afa5fbc7d94cd65616225597850585d909f
  Author: Lucian Petrut 
  Date:   Tue Oct 28 14:49:26 2014 +0200

  Libvirt: SMB volume driver
  
  Currently, there are Libvirt volume drivers that support
  network-attached file systems such as Gluster or NFS. This patch
  adds a new volume driver in order to support attaching volumes
  hosted on SMB shares.
  
  Co-Authored-By: Gabriel Samfira 
  
  DocImpact
  
  Change-Id: I1db3d2a6d8ee94932348c63cc03698fdefff0b5c
  Implements: blueprint libvirt-smbfs-volume-support

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323722] Re: libvirt Xen has to use the "file" disk driver when the compute node doesn't support blktap

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323722

Title:
  libvirt Xen has to use the "file" disk driver when the compute node
  doesn't support blktap.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  There are Xen servers which do not support http://wiki.xen.org/wiki/Blktap
  (e.g. Oracle VM Server for x86) but are still operational with the simple
  file driver. To support those Xen servers as a compute platform we have to
  change the method pick_disk_driver_name() in nova/virt/libvirt/utils.py to
  return "file" in case the hypervisor is Xen and blktap is not operational.
  The side effect is that the file driver does not support the "qcow2" disk
  format, so we also have to force the 'use_cow_images' config option to
  False for those compute nodes.
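
  A hedged sketch of the proposed change (simplified; the real
  pick_disk_driver_name() takes more factors into account, and the blktap
  probe path below is only an assumption):

  import os

  def pick_disk_driver_name(hypervisor_type, is_block_dev=False):
      """Simplified illustration: fall back to the plain "file" driver on
      Xen hosts where blktap is not available."""
      if hypervisor_type != "xen":
          return "qemu"
      if is_block_dev:
          return "phy"
      # Hypothetical capability probe for blktap on this compute node.
      blktap_available = os.path.exists("/sys/class/blktap2")
      return "tap2" if blktap_available else "file"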

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388095] Re: VMware fake driver returns invalid search results due to incorrect use of lstrip()

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388095

Title:
  VMware fake driver returns invalid search results due to incorrect use
  of lstrip()

Status in OpenStack Compute (nova):
  Expired

Bug description:
  _search_ds in the fake driver does:

  path = file.lstrip(dname).split('/')

  The intention is to remove a prefix of dname from the beginning of
  file, but this actually removes all instances of all characters in
  dname from the left of file.
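
  A short stand-alone illustration of the difference (the datastore and
  file names below are made up):

  dname = "[ds1] images/base"
  fname = "[ds1] images/base/ami-123.vmdk"

  # lstrip() strips any leading run of characters that occur anywhere in
  # dname, not the prefix string itself, so part of the file name is eaten.
  print(fname.lstrip(dname))       # '-123.vmdk' -- the 'ami' is gone

  # Prefix removal as _search_ds intends it:
  if fname.startswith(dname):
      print(fname[len(dname):].lstrip('/').split('/'))   # ['ami-123.vmdk']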

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260118] Re: VNC on compute node refused connection from controller

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260118

Title:
  VNC on compute node refused connection from controller

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova-novncproxy threw this error when trying to connect to the VM's
  console via VNC.  The root cause was found to be at the compute node
  (10.52.224.20) which dropped network connection on vnc-server port(s)
  (i.e. 5900+N)  from the controller node.

   22: connecting to: 10.52.224.20:5903
   22: handler exception: [Errno 113] EHOSTUNREACH

  A temp work-around is to manually open up vnc ports on compute
  node(s).

  -A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5950 -j
  ACCEPT

  Wondering if this could be addressed dynamically, during VM creation
  ideally.  The issue was found with openstack-nova-compute-2013.2-2.el6
  running on CentOS kernel 2.6.32-358.123.2.openstack.el6.x86_64.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334974] Re: creating a vm from a volume fails when using the diskfilter

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Ankit Agrawal (ankitagrawal) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334974

Title:
  creating a vm from a volume fails when using the diskfilter

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I configured the DiskFilter in nova.conf and created a VM from a bootable
  volume on the back-end storage. The VM flavor is the following:
  cpu 1
  mem 2G
  root_gb 2T

  The local disk on the compute node is only 80G, so creating the VM fails
  with "no available host".
  I think that when a VM is created from a volume on the back-end storage,
  the DiskFilter should not choose the host based on local disk size.
  In addition, when the compute node reports its resource info it should
  check the instance status: if the instance was created from a volume,
  local_gb_used should not include the flavor's root_gb size.
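
  A hedged sketch of the check being asked for (hypothetical filter
  logic, not the actual DiskFilter code):

  def required_local_disk_gb(flavor_root_gb, is_volume_backed):
      """A volume-backed instance keeps its root disk on the back-end
      storage, so the scheduler should not demand flavor.root_gb of
      local disk on the compute node."""
      return 0 if is_volume_backed else flavor_root_gb

  # A flavor with a 2 TB root disk booted from a volume should still fit
  # on a host with only 80 GB of local disk.
  assert required_local_disk_gb(2048, is_volume_backed=True) <= 80
  assert required_local_disk_gb(2048, is_volume_backed=False) > 80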

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386975] Re: VMWare: scan iscsi hba on wrong host and can't discover iscsi target

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1386975

Title:
  VMWare: scan iscsi hba on wrong host and can't discover iscsi target

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When trying to attach a raw iscsi volume to a VM (using RDM), the current
  code will try to get the iscsi target on the first host of the cluster,
  while the VM may reside on another host, and an error will be thrown
  like this:

  The virtual disk is either corrupted or not a supported format.

  To fix this issue, we need to replace get_host_ref with
  get_host_ref_for_vm in the following places:

  in volumeops.py:

  def _iscsi_get_target(self, data):
      target_portal = data['target_portal']
      target_iqn = data['target_iqn']
      # need to get the host on which the VM resides
      host_mor = vm_util.get_host_ref(self._session, self._cluster)
      ...

  def _iscsi_rescan_hba(self, target_portal):
      # need to get the host on which the VM resides
      host_mor = vm_util.get_host_ref(self._session, self._cluster)
      ...
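
  A stand-alone sketch of the difference between the two lookups (the
  session and helper objects below are fakes, only there to make the
  example executable):

  class FakeVMwareSession(object):
      hosts = ["esx-host-1", "esx-host-2"]
      vm_to_host = {"instance-0001": "esx-host-2"}

  def get_host_ref(session, cluster):
      # Current behaviour: always the first host of the cluster.
      return session.hosts[0]

  def get_host_ref_for_vm(session, instance_uuid):
      # Proposed behaviour: the host on which the VM actually resides.
      return session.vm_to_host[instance_uuid]

  session = FakeVMwareSession()
  print(get_host_ref(session, cluster=None))            # esx-host-1 (wrong)
  print(get_host_ref_for_vm(session, "instance-0001"))  # esx-host-2 (VM's host)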

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1386975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337265] Re: HTTP 500 when `nova list --name` contains invalid regexp

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Preeti  (pandey-preeti1) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337265

Title:
  HTTP 500 when `nova list --name` contains invalid regexp

Status in OpenStack Compute (nova):
  Expired

Bug description:
  # nova list --name \*
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-e399bee0-2491-4e4a-9197-944b19c86075)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334398] Re: libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334398

Title:
  libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the
  gate

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Seeing this here:

  http://logs.openstack.org/70/97670/5/check/check-tempest-dsvm-
  postgres-full/7d4c7cf/console.html

  2014-06-24 23:15:41.714 | 
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name[gate]
  2014-06-24 23:15:41.714 | 
---
  2014-06-24 23:15:41.714 | 
  2014-06-24 23:15:41.714 | Captured traceback-1:
  2014-06-24 23:15:41.714 | ~
  2014-06-24 23:15:41.715 | Traceback (most recent call last):
  2014-06-24 23:15:41.715 |   File 
"tempest/services/compute/json/images_client.py", line 86, in delete_image
  2014-06-24 23:15:41.715 | resp, body = self.delete("images/%s" % 
str(image_id))
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
224, in delete
  2014-06-24 23:15:41.715 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
430, in request
  2014-06-24 23:15:41.715 | resp, resp_body)
  2014-06-24 23:15:41.715 |   File "tempest/common/rest_client.py", line 
474, in _error_checker
  2014-06-24 23:15:41.715 | raise exceptions.NotFound(resp_body)
  2014-06-24 23:15:41.715 | NotFound: Object not found
  2014-06-24 23:15:41.715 | Details: {"itemNotFound": {"message": "Image 
not found.", "code": 404}}
  2014-06-24 23:15:41.716 | 
  2014-06-24 23:15:41.716 | 
  2014-06-24 23:15:41.716 | Captured traceback:
  2014-06-24 23:15:41.716 | ~~~
  2014-06-24 23:15:41.716 | Traceback (most recent call last):
  2014-06-24 23:15:41.716 |   File 
"tempest/api/compute/images/test_images_oneserver.py", line 31, in tearDown
  2014-06-24 23:15:41.716 | self.server_check_teardown()
  2014-06-24 23:15:41.716 |   File "tempest/api/compute/base.py", line 161, 
in server_check_teardown
  2014-06-24 23:15:41.716 | 'ACTIVE')
  2014-06-24 23:15:41.716 |   File 
"tempest/services/compute/json/servers_client.py", line 173, in 
wait_for_server_status
  2014-06-24 23:15:41.716 | raise_on_error=raise_on_error)
  2014-06-24 23:15:41.717 |   File "tempest/common/waiters.py", line 107, 
in wait_for_server_status
  2014-06-24 23:15:41.717 | raise exceptions.TimeoutException(message)
  2014-06-24 23:15:41.717 | TimeoutException: Request timed out
  2014-06-24 23:15:41.717 | Details: (ImagesOneServerTestJSON:tearDown) 
Server 90c79adf-4df1-497c-a786-13bdc5cca98d failed to reach ACTIVE status and 
task state "None" within the required time (196 s). Current status: ACTIVE. 
Current task state: image_pending_upload.

  
  Looks like it's trying to delete image with uuid 
518a32d0-f323-413c-95c2-dd8299716c19 which doesn't exist, because it's still 
uploading?

  
  This is maybe related to bug 1320617 as a general performance issue with 
glance.

  Looking in the glance registry log, the image is created here:

  2014-06-24 22:51:23.538 15740 INFO glance.registry.api.v1.images
  [13c1b477-cd22-44ca-ba0d-bf1b19202df6 d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Successfully created image
  518a32d0-f323-413c-95c2-dd8299716c19

  The image is deleted here:

  2014-06-24 22:54:53.146 15740 INFO glance.registry.api.v1.images
  [7c29f253-acef-41a0-b62b-c3087f7617ef d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Successfully deleted image
  518a32d0-f323-413c-95c2-dd8299716c19

  And the 'not found' is here:

  2014-06-24 22:54:56.508 15740 INFO glance.registry.api.v1.images
  [c708cf1f-27a8-4003-9c29-6afca7dd9bb8 d01d4977b5cc4e20a99e1d7ca58ce444
  207d083a31944716b9cd2ecda0f09ce7 - - -] Image 518a32d0-f323-413c-
  95c2-dd8299716c19 not found

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1329299] Re: In nova.db.sqlalchemy.model.ComputeNode, column hypervisor_hostname should be unique values

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329299

Title:
  In nova.db.sqlalchemy.model.ComputeNode, column hypervisor_hostname
  should be unique values

Status in OpenStack Compute (nova):
  Expired

Bug description:
  In the nova model nova.db.sqlalchemy.model.ComputeNode, the column that
  represents the hypervisor hostname is not constrained to unique,
  non-null values.  This makes it possible to have the same hypervisor
  hostname for more than one hypervisor, which ends up in error scenarios.

  In order to avoid this scenario, the column should be defined with:
  nullable=False
  unique=True

  This bug is filed to address that.
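
  A hedged sketch of such a column definition (illustrative SQLAlchemy
  model, not the actual nova.db.sqlalchemy code):

  from sqlalchemy import Column, Integer, String
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class ComputeNodeSketch(Base):
      """Illustration only: hypervisor_hostname made NOT NULL and unique."""
      __tablename__ = 'compute_nodes_sketch'

      id = Column(Integer, primary_key=True)
      hypervisor_hostname = Column(String(255), nullable=False, unique=True)

  In a soft-delete schema such as nova's, the unique constraint would most
  likely also have to take the 'deleted' column into account so that
  deleted rows do not block re-registration.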

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309184] Re: nova should delete neutron ports before calling unplug_vifs

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309184

Title:
  nova should delete neutron ports before calling unplug_vifs

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Currently nova unplugs the vifs of neutron ports first and then
  deletes the ports in neutron. Because of this it's possible for
  neutron to detect that the port has gone down and then notify nova of
  this change. During this time the instance will probably already be
  deleted. We should probably change the order of events in nova-compute
  to do the port-delete in neutron first and only then unplug the vifs.
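
  A minimal sketch of the proposed ordering (hypothetical helper names;
  the real teardown path lives in nova-compute):

  def delete_instance_networking(neutron, virt_driver, instance,
                                 network_info):
      """Delete the neutron ports first, then unplug the vifs.

      With this ordering neutron never observes the port going down and
      therefore never calls back into nova for an instance that is
      already being deleted.
      """
      for vif in network_info:
          neutron.delete_port(vif['id'])
      virt_driver.unplug_vifs(instance, network_info)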

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355623] Re: nova floating-ip-create need pool name

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355623

Title:
  nova floating-ip-create need pool name

Status in OpenStack Compute (nova):
  Expired

Bug description:
  #
  # help menu
  #
  [root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
  usage: nova floating-ip-create [<floating-ip-pool>]

  Allocate a floating IP for the current tenant.

  Positional arguments:
 <floating-ip-pool>  Name of Floating IP Pool. (Optional)

  #
  # error log
  #
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
  ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) 
(Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379663] Re: After upgrading - ovs-vswitchd cannot add existing ports

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379663

Title:
  After upgrading - ovs-vswitchd cannot add existing ports

Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi there,

  After upgrading (stop all services, yum upgrade, db sync) from older
  Icehouse build to latest Icehouse build, my compute node (specifically
  openstack-nova-compute) cannot be started. I deployed a number of
  instances before upgrade, and after upgrading openstack-nova-compute
  refuses to start up. The logs seem to point to some issue with ovs-
  vswitch unable to bind ports of the existing instances.

  All other services at controller and network nodes seem to be running
  fine. And before upgrading, everything was working fine.

  # rpm -qa | grep openstack-nova
  openstack-nova-compute-2014.1.2-1.el6.noarch
  openstack-nova-common-2014.1.2-1.el6.noarch

  At compute.log:

  2014-10-10 14:37:39.372 24897 ERROR nova.openstack.common.threadgroup [-] 
Unexpected vif_type=binding_failed
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 121, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 293, in switch
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 212, in main
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 486, 
in run_service
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 163, in start
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1044, in 
init_host
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 902, in 
_init_instance
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.driver.plug_vifs(instance, net_info)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 860, in 
plug_vifs
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.vif_driver.plug(instance, vif)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 616, in plug
  

[Yahoo-eng-team] [Bug 1279858] Re: nova-compute shouldn't spawn two libguestfs appliances every time an instance is launched

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279858

Title:
  nova-compute shouldn't spawn two libguestfs appliances every time an
  instance is launched

Status in OpenStack Compute (nova):
  Expired

Bug description:
  https://bugzilla.redhat.com/show_bug.cgi?id=1064947

  Using RHELOSP 4.0 GA bits, I'm finding that when I launch the Cirros
  0.3.1 image, separate calls to libguestfs within the nova codebase
  cause qemu-kvm to be run twice *before* the instance is launched.
  This is suboptimal.

  One libguestfs call (file injection) can be disabled by setting
  libvirt_inject_partition=-2, but this does not work for the second one
  (checking to see if the volume partition/filesystem can be extended).
  The codepath for the second call is approximately:

  /nova/virt/disk/api.py extend()
  /nova/virt/disk/api.py is_image_partitionless()
  /nova/virt/disk/vfs/guestfs.py VFSGuestFS.setup()

  It would be good if all of this could be done with one libguestfs
  instance which could also be disabled in the global nova config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1345905] Re: Fail to use shareable image created from a volume-booted VM

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1345905

Title:
  Fail to use shareable image created from a volume-booted VM

Status in OpenStack Compute (nova):
  Expired

Bug description:
  If an image is created from a volume-booted instance and shared, it
  can't be used by other tenants because the snapshot is owned only by
  the original tenant in cinder.

  The problem can be reproduced following this steps:

  1. Create one bootable volume from an image.
  2. Create one instance with this volume.
  3. Create an image from this instance.
  4. Share the image to other tenants, like tenant B.
  5. Create a new instance from this shared image by tenant B.
  6. An error ("failed to get snapshot") will be raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1345905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312002] Re: nova cell-show causes ValueError: Circular reference detected

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312002

Title:
  nova cell-show  causes ValueError: Circular reference
  detected

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I am using a devstack development environment with n-cell enabled in localrc.

  When I call the API to list all cells, a cell with the name "child" is shown.
  But when I try to "cell-show" this cell "child", the call fails with an
  error response 500.
  The following operations were performed -

   Setting in /etc/nova/nova.conf
   
  [cells]
  name = region
  cell_type = api
  enable = True
   
   


  When API call to get cells -

  REQUEST -  
  curl -i 
'http://10.0.9.40:8774/v2/dcab4e3fc2734bad97c43d46bb77d076/os-cells' -X GET -H 
"X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: 
application/json" -H "X-Auth-Token:"

  RESPONSE - 
  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 111
  X-Compute-Request-Id: req-e83c337a-a7d9-47e5-a472-72cd41126f21
  Date: Thu, 24 Apr 2014 04:12:34 GMT
   
  {"cells": [{"username": "guest", "rpc_host": "10.0.9.40", "type": 
"child", "name": "child", "rpc_port": 5672}]}
   
   
   
  During a particular cell show (using the cell name "child"):
   
  REQUEST -
  curl -i 
'http://10.0.9.40:8774/v2/dcab4e3fc2734bad97c43d46bb77d076/os-cells/child' -X 
GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: 
application/json" -H "X-Auth-Token: "
   
  RESPONSE - 
  HTTP/1.1 500 Internal Server Error
  Content-Length: 128
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-ba2503bd-3948-4cce-8ee6-543aa597497b
  Date: Thu, 24 Apr 2014 04:10:33 GMT
   
  {"computeFault": {"message": "The server has either erred or is 
incapable of performing the requested operation.", "code": 500}}
   

  
  Note: when using cell name as "api" or "region" 404 is returned.
   
   
   
   
  The nova-api logs during the show call are traced below:
   
  2014-04-24 11:15:20.219 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token from (pid=705) __call__ 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:603


  2014-04-24 11:15:20.220 DEBUG keystoneclient.middleware.auth_token [-] 
Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 from (pid=705) _remove_auth_headers 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:662

  
  2014-04-24 11:15:20.232 DEBUG keystoneclient.middleware.auth_token [-] 
Storing token in cache from (pid=705) _cache_put 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:1121   

 
  2014-04-24 11:15:20.233 DEBUG keystoneclient.middleware.auth_token [-] 
Received request from user: 6cf0c59310fb4b189fc157b19d0e1026 with project_id : 
2c3857a83f454b7cb073891ef47acd11 and roles: heat_stack_owner,admin  from 
(pid=705) _build_user_headers 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:910

  
  2014-04-24 11:15:20.238 DEBUG routes.middleware 
[req-3ddd31fe-cd31-4c81-b960-81c753566b4a admin demo] Matched GET 
/2c3857a83f454b7cb073891ef47acd11/os-cells/child from (pid=705) __call__ 
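
  For illustration only: the ValueError in the title is what Python's JSON
  encoder raises when it is asked to serialize a structure that contains
  itself. This is not the nova code path, just the failure mode:

    import json

    cell = {'name': 'child', 'rpc_host': '10.0.9.40'}
    cell['parent'] = cell   # a self-reference somewhere in the object graph

    json.dumps(cell)        # ValueError: Circular reference detected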

[Yahoo-eng-team] [Bug 1372708] Re: VMWare: vm spawn failure due to no attribute 'propSet' in concurrent case

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372708

Title:
  VMWare: vm spawn failure due to no attribute 'propSet' in concurrent
  case

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When doing a concurrent spawn of 30 VMs, some VM spawns failed with the
  error below.

  The reason is that during prebuild_instance the driver lists all existing
  instances. If concurrent spawns are in progress, it is possible that a VM
  is still in the process of being created in vCenter, and the vSphere SDK
  returns only a partial object for it, so that VM has no propSet at that
  phase.

  A check should be added for these cases.
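
  Something along these lines (a sketch only, not the actual vmwareapi code;
  object_contents stands for whatever the vSphere property collector
  returned, and handle() is a hypothetical consumer):

    # Skip objects for which vCenter has returned no properties yet, e.g.
    # VMs that are still being created by a concurrent spawn.
    for obj in object_contents:
        prop_set = getattr(obj, 'propSet', None)
        if not prop_set:
            continue
        for prop in prop_set:
            handle(prop.name, prop.val)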

  
  2014-09-14 03:53:31.311 2680 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: ObjectContent instance has no attribute 
'propSet'
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 282, in 
decorated_function
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher pass
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 268, in 
decorated_function
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 335, in 
decorated_function
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 311, in 
decorated_function
  2014-09-14 03:53:31.311 2680 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
 

[Yahoo-eng-team] [Bug 1342016] Re: race window in volume attach and spawn with volumes

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342016

Title:
  race window in volume attach and spawn with volumes

Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is a race window between attaching a volume and spawning with
  volumes; the volumes should be reserved when spawning.
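
  A rough sketch of the idea (illustrative only; USER, PASSWORD, TENANT,
  AUTH_URL, volume_id and spawn_instance_with_volume() are placeholders, not
  the real nova spawn path): reserve the volume before the spawn so a
  concurrent attach sees it as unavailable, and roll back if the spawn fails:

    from cinderclient.v1 import client as cinder_client

    cinder = cinder_client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    cinder.volumes.reserve(volume_id)           # volume moves to 'attaching'
    try:
        spawn_instance_with_volume(volume_id)   # placeholder for the spawn
    except Exception:
        cinder.volumes.unreserve(volume_id)     # give the volume back
        raise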

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1342016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315201] Re: test_create_server TimeoutException failed while waiting for server to build in setup

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315201

Title:
  test_create_server TimeoutException failed while waiting for server to
  build in setup

Status in OpenStack Compute (nova):
  Expired
Status in tempest:
  In Progress

Bug description:
  There are already several timeout-related bugs, but nothing really fits a
  timeout while building in setup for this test. It's not really the same as
  bug 1254890 as far as where it fails in Tempest, but it could potentially
  be a similar issue under the covers in nova.

  http://logs.openstack.org/37/84037/8/check/check-grenade-dsvm-partial-
  ncpu/ab64155/console.html

  message:"Details\: Server" AND message:"failed to reach ACTIVE status
  and task state \"None\" within the required time" AND message:"Current
  status\: BUILD. Current task state\: spawning." AND tags:console

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsc1xcOiBTZXJ2ZXJcIiBBTkQgbWVzc2FnZTpcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzIGFuZCB0YXNrIHN0YXRlIFxcXCJOb25lXFxcIiB3aXRoaW4gdGhlIHJlcXVpcmVkIHRpbWVcIiBBTkQgbWVzc2FnZTpcIkN1cnJlbnQgc3RhdHVzXFw6IEJVSUxELiBDdXJyZW50IHRhc2sgc3RhdGVcXDogc3Bhd25pbmcuXCIgQU5EIHRhZ3M6Y29uc29sZSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5ODk4ODE3Njc0OX0=

  48 hits in 7 days, all failures, check and gate, several different
  jobs.  Since it's a timeout there isn't an error in the nova logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1315201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394219] Re: Failed to deploy new instance that server group on failed host

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394219

Title:
  Failed to deploy new instance that server group on failed host

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When a host is down/disabled, deploying a new instance that belongs to a
  server group with the affinity policy fails if other instances of the same
  server group are on the down/disabled host, because the scheduler tries to
  place the new instance on that same host.

  This also means that, once we honor the server group during evacuate, an
  instance in a server group with the affinity policy can't be evacuated.
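
  The behaviour follows from how the affinity policy is enforced; roughly
  (a simplified sketch, not the actual scheduler filter code):

    def host_passes(host_state, group_hosts):
        # With the 'affinity' policy every new member must land on the
        # host(s) already used by the group. If that host is down/disabled
        # it never shows up as a candidate, so nothing passes and
        # scheduling fails.
        if not group_hosts:
            return True                # first member: any host is fine
        return host_state.host in group_hosts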

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308517] Re: migrating a vm with pci devices caused DB inconsistent and vm state error

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308517

Title:
  migrating a vm with pci devices  caused DB inconsistent and vm state
  error

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Environment:
  1. Two compute nodes A and B, each of them has four pci devices of 
"vendor_id":"8086","product_id":"150e", which have all been configured to 
passthrough_whitelist.
  2. Controller nova conf configured to "pci_alias={"vendor_id":"8086", 
"product_id":"150e", "name":"a1"}"
  3. Extra_specs of flavor pci_flavor configured to "{u'pci_passthrough:alias': 
u'a1:2'}".

  Test Steps:
  1. Create instance vm1 with pci_flavor, then vm1 is created on A, two of the 
pci devices were allocated.
  2. Migrate vm1 from A to B, then vm1 state changed to error and two pci 
devices' status on node B changed to "claimed" while two pci devices on A are 
still "allocated".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252947] Re: libvirtError: Cannot recv data: Connection reset by peer

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1252947

Title:
  libvirtError: Cannot recv data: Connection reset by peer

Status in OpenStack Compute (nova):
  Expired

Bug description:
  tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON
  fails sporadically.

  See:  http://logs.openstack.org/66/54966/2/check/check-tempest-
  devstack-vm-full/d611ed0/console.html

  2013-11-19 22:24:52.379 | 
==
  2013-11-19 22:24:52.380 | FAIL: setUpClass 
(tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON)
  2013-11-19 22:24:52.380 | setUpClass 
(tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON)
  2013-11-19 22:24:52.380 | 
--
  2013-11-19 22:24:52.380 | _StringException: Traceback (most recent call last):
  2013-11-19 22:24:52.380 |   File 
"tempest/api/compute/servers/test_servers_negative.py", line 46, in setUpClass
  2013-11-19 22:24:52.380 | resp, server = 
cls.create_test_server(wait_until='ACTIVE')
  2013-11-19 22:24:52.381 |   File "tempest/api/compute/base.py", line 118, in 
create_test_server
  2013-11-19 22:24:52.381 | server['id'], kwargs['wait_until'])
  2013-11-19 22:24:52.381 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-11-19 22:24:52.381 | extra_timeout=extra_timeout)
  2013-11-19 22:24:52.381 |   File "tempest/common/waiters.py", line 73, in 
wait_for_server_status
  2013-11-19 22:24:52.381 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-11-19 22:24:52.381 | BuildErrorException: Server 
62bfeebd-8878-477f-9eac-a8b21ec5ac26 failed to build and is in ERROR status

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406431] Re: neutron port security-group not properly updated on nova interface-attach

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406431

Title:
  neutron port security-group not properly updated on nova interface-
  attach

Status in OpenStack Compute (nova):
  Expired

Bug description:
  With the reference implementation, there is a problem when using 'nova
  interface-attach' with the 'net-id' parameter. The neutron port created
  for this operation does not inherit the instance's security groups, but
  instead uses just the 'default' security group.
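
  As a workaround (a sketch, assuming authenticated python-neutronclient and
  python-novaclient objects named neutron and nova, and a server object for
  the instance; the UUIDs are the ones from the listings below), the port
  can be created explicitly with the desired security groups and attached by
  port-id instead of net-id:

    port = neutron.create_port({'port': {
        'network_id': 'e98cdc79-f385-498e-be99-5bf879f26741',          # datanw
        'security_groups': ['85ee063b-f688-45ad-b35c-a2f102943d32'],   # custom_sg
    }})['port']

    nova.servers.interface_attach(server, port['id'], None, None)

  A port created this way keeps custom_sg instead of falling back to
  'default'.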

  Steps to recreate:

  [root@osnode2 ~(keystone_admin)]# neutron net-list
  
+--+-+-+
  | id   | name| subnets
 |
  
+--+-+-+
  | e98cdc79-f385-498e-be99-5bf879f26741 | datanw  | 
42d6b5a9-b415-41db-911e-89956df77852 192.168.0.0/24 |
  | 2b9cc6e2-e50d-494b-87cd-0520013f9cdb | public2 | 
6987510e-495b-4d45-bba2-327f362a04a4 10.10.0.0/21   |
  
+--+-+-+

  [root@osnode2 ~(keystone_admin)]# neutron  security-group-list
  +--+---+-+
  | id   | name  | description |
  +--+---+-+
  | 66a6bae9-2249-42f0-9c8e-fa058224adff | default   | default |
  | 85ee063b-f688-45ad-b35c-a2f102943d32 | custom_sg | custom_sg   |
  +--+---+-+

  [root@osnode2 ~(keystone_admin)]# nova boot --flavor m1.tiny --image
  cirros --nic net-id=2b9cc6e2-e50d-494b-87cd-0520013f9cdb cirros_vm
  --security_groups custom_sg

  [root@osnode2 ~(keystone_admin)]# nova show cirros_vm
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | osnode2  
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | osnode2  
|
  | OS-EXT-SRV-ATTR:instance_name| instance-00c5
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2014-12-25T01:57:02.00   
|
  | OS-SRV-USG:terminated_at | -
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | config_drive |  
|
  | created  | 2014-12-25T01:56:51Z 
|
  | flavor   | m1.tiny (1)  
|
  | hostId   | 
5b3db263e5f581e1e5141018ab5f81f1ab313bbd9514f9e64ee6d3d9 |
  | id   | d6221cd5-1e02-4759-9412-1f238b511667 
|
  | image| cirros 
(58dcb5ba-2882-4069-9f9a-be671f8f11c6)|
  | key_name | -
|
  | metadata

[Yahoo-eng-team] [Bug 1407936] Re: service_update in conductor can be an async call

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407936

Title:
  service_update in conductor can be an async call

Status in OpenStack Compute (nova):
  Expired

Bug description:
   https://bugs.launchpad.net/nova/+bug/1331537 reported an error for
  processing conductor returned data

  Actually, service_update is only a periodic function that updates service
  data through the conductor, and we don't need to keep it as a 'call' API;
  a 'cast' is fine because we don't need to wait for the return data.

  Making it a cast would also resolve bug 1331537, though we don't know the
  real cause of that problem.
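
  For reference, the difference in oslo.messaging terms (illustrative sketch
  only; client stands for an RPC client on the conductor topic, and ctxt,
  service and values are placeholders):

    # today: blocks until the conductor answers, and the return value is
    # discarded anyway
    client.call(ctxt, 'service_update', service=service, values=values)

    # proposed: fire-and-forget, no reply is waited for
    client.cast(ctxt, 'service_update', service=service, values=values)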

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407936/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407643] Re: Setting network bandwidth quota in extra_specs causes a VM creation to fail in devstack

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407643

Title:
  Setting network bandwidth quota in extra_specs causes a VM creation to
  fail in devstack

Status in OpenStack Compute (nova):
  Expired

Bug description:
  https://blueprints.launchpad.net/nova/+spec/quota-instance-resource
  added a number of resource management capabilities via extra_specs for
  libvirt, but at least one of these causes VMs to fail on devstack
  with Neutron (so I'm guessing that they aren't covered in Tempest?)

  
  On a devstack system with Neutron Networking:

  nova flavor-key m1.tiny set quota:vif_inbound_average=1024
  ubuntu@devstack-forced-shutdown:/mnt/devstack$ nova boot --image  
02985e98-a163-4ce9-afb8-098c41c6573c --flavor 1 phil.limit
   
+--++
   | Property | Value   
   |
   
+--++
   | OS-DCF:diskConfig| MANUAL  
   |
   | OS-EXT-AZ:availability_zone  | nova
   |
   | OS-EXT-SRV-ATTR:host | -   
   |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -   
   |
   | OS-EXT-SRV-ATTR:instance_name| instance-0003   
   |
   | OS-EXT-STS:power_state   | 0   
   |
   | OS-EXT-STS:task_state| scheduling  
   |
   | OS-EXT-STS:vm_state  | building
   |
   | OS-SRV-USG:launched_at   | -   
   |
   | OS-SRV-USG:terminated_at | -   
   |
   | accessIPv4   | 
   |
   | accessIPv6   | 
   |
   | adminPass| 3SrCw22q8Prz
   |
   | config_drive | 
   |
   | created  | 2015-01-05T11:21:10Z
   |
   | flavor   | m1.tiny (1) 
   |
   | hostId   | 
   |
   | id   | 
72c953c8-9bd3-4e94-8fbb-db54f77509b7   |
   | image| cirros-0.3.2-x86_64-uec 
(02985e98-a163-4ce9-afb8-098c41c6573c) |
   | key_name | -   
   |
   | metadata | {}  
   |
   | name | phil.limit  
   |
   | os-extended-volumes:volumes_attached | []  
   |
   | progress | 0   
   |
   | security_groups  | default 
   |
   | status   | BUILD   
   |
   | tenant_id| 0c1ece771f3f43958d010dfbfba52b83
   |
   | updated  | 2015-01-05T11:21:10Z
   |
   | user_id 

[Yahoo-eng-team] [Bug 1408283] Re: nova list-secgroup instanceName fails if instance isn't running under admin tenant

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408283

Title:
  nova list-secgroup instanceName fails if instance isn't running under
  admin tenant

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description of problem: Ran #nova list --all-tenants as admin and figured
  I'd check nova list-secgroup on one of the instances. By chance the
  instance I chose happened to run under another tenant, not admin's, which
  is critical for this bug; had I chosen an instance from the admin tenant,
  the bug would not happen.

  Anyway, #nova list-secgroup only accepts the instanceID, not the
  instanceName as mentioned in the command's help:

  # nova list-secgroup tshefitest
  ERROR: No server with a name or ID of 'tshefitest' exists.

  Running the same command with the tshefitest instance's ID works fine:

  # nova list-secgroup 23244bce-0232-45e6-9a1d-10e493593b7d
  +--+-+-+
  | Id   | Name| Description |
  +--+-+-+
  | 442f555c-897b-4ad3-b46e-85c5e84ff47d | default | default |
  +--+-+-+

  If, however, I do the same test on an instance belonging to the admin
  tenant, both parameters (instanceName / instanceID) work fine.

  
  Version-Release number of selected component (if applicable):
  RHEL7 
  python-novaclient-2.17.0-2.el7ost.noarch
  HA deployment if it matters. 

  Same happens on Juno as well
  python-novaclient-2.20.0-1.el7ost.noarch

  
  How reproducible:
  Every time

  Steps to Reproduce:
  1. Create a new user and tenant, admin user shouldn't be a member of this new 
tenant!  
  2. Create an instance under above tenant. 
  3. Create another instance this time under admin tenant. 
  4. Under the CLI with admin credentials run #nova list --all-tenants.
  5. Note the names and instance IDs of both instances.
  6. #nova list-secgroup instanceName of the instance from step 2 -> you should get
the error message.
  7. #nova list-secgroup instanceID of the instance from step 2 works as
expected.

  8. nova list-secgroup with the instanceName (or the instanceID) of the
  instance from step 3 works fine either way.

  Actual results:
  ERROR: No server with a name or ID of 'tshefitest' exists.

  Expected results:
  IMHO if this command works fine with instanceID it should also work with 
instanceName even if instance isn't running under admin tenant. 

  Additional info:
  Running with --debug  exposes this:

  DEBUG (shell:783) No server with a name or ID of 'tshefitest' exists.
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 780, in 
main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 716, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/site-packages/novaclient/v1_1/shell.py", line 
2018, in do_list_secgroup
  server = _find_server(cs, args.server)
File "/usr/lib/python2.7/site-packages/novaclient/v1_1/shell.py", line 
1549, in _find_server
  return utils.find_resource(cs.servers, server)
File "/usr/lib/python2.7/site-packages/novaclient/utils.py", line 244, in 
find_resource
  raise exceptions.CommandError(msg)
  CommandError: No server with a name or ID of 'tshefitest' exists.
  ERROR: No server with a name or ID of 'tshefitest' exists.

  On Juno same issue error:
  # nova list-secgroup tshefi1
  ERROR (CommandError): No server with a name or ID of 'tshefi1' exists.
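
  A workaround until the client resolves names across tenants itself (a
  sketch, using python-novaclient with admin credentials; cs stands for an
  authenticated client): look the name up with an all-tenants listing and
  then use the ID:

    matches = cs.servers.list(search_opts={'all_tenants': 1,
                                           'name': 'tshefitest'})
    if len(matches) == 1:
        server_id = matches[0].id
        groups = cs.servers.list_security_group(server_id)
        print([group.name for group in groups])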

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407417] Re: Error: No nw_info cache associated with instance (HTTP 400)

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407417

Title:
  Error: No nw_info cache associated with instance (HTTP 400)

Status in OpenStack Compute (nova):
  Expired

Bug description:
  While associating a floating IP, it gave these errors:
  Error: No nw_info cache associated with instance (HTTP 400) (Request-ID: 
req-746ca660-bc26-4fb8-bcd7-461b9bb6d68d)
  Error: Unable to associate IP address 192.168.255.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408954] Re: nova flavor-create return empty string when swap == 0

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408954

Title:
  nova flavor-create return empty string when swap == 0

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The type of 'swap' in the response is a string or an int depending on the
  'swap' value in the DB:
  swap == 0 returns an empty string.
  swap > 0 returns an int.
  Client code becomes complex when the return value of an API is not a fixed
  type. Moreover, the type of 'swap' in the request and the response of
  flavor-create should be the same.
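
  Until the API returns a consistent type, client code ends up normalizing
  it, e.g. (illustrative sketch only):

    def swap_mb(flavor):
        # '' (swap == 0 in the DB) and integers both end up as an int
        return int(flavor['swap'] or 0)

    swap_mb({'swap': ''})    # -> 0
    swap_mb({'swap': 512})   # -> 512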

  DEBUG (session:169) REQ: curl -g -i -X POST 
http://10.250.10.29:8774/v2/e159961712f64b388b57062483d98a91/flavors -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}6ba0b4e647a5ca47937c48941ad361d02d4ff604" -d '{"flavor": {"vcpus": 1, 
"disk": 1, "name": "chenrui_f", "os-flavor-access:is_public": true, 
"rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 1, "id": "11", 
"swap": 0}}'
  INFO (connectionpool:188) Starting new HTTP connection (1): 10.250.10.29
  DEBUG (connectionpool:364) "POST /v2/e159961712f64b388b57062483d98a91/flavors 
HTTP/1.1" 200 425
  DEBUG (session:197) RESP: [200] date: Fri, 09 Jan 2015 09:36:44 GMT 
connection: keep-alive content-type: application/json content-length: 425 
x-compute-request-id: req-df79ffc1-08ff-4569-945f-4fa2fdf49436
  RESP BODY: {"flavor": {"name": "chenrui_f", "links": [{"href": 
"http://10.250.10.29:8774/v2/e159961712f64b388b57062483d98a91/flavors/11", 
"rel": "self"}, {"href": 
"http://10.250.10.29:8774/e159961712f64b388b57062483d98a91/flavors/11", "rel": 
"bookmark"}], "ram": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": 
"", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, 
"OS-FLV-EXT-DATA:ephemeral": 0, "disk": 1, "id": "11"}}

  mysql> select * from instance_types where flavorid=11;
  
+-+++---++---+---+--+-+--+-+-+--+--+---+-+
  | created_at  | updated_at | deleted_at | name  | id | memory_mb 
| vcpus | swap | vcpu_weight | flavorid | rxtx_factor | root_gb | ephemeral_gb 
| disabled | is_public | deleted |
  
+-+++---++---+---+--+-+--+-+-+--+--+---+-+
  | 2015-01-09 09:36:44 | NULL   | NULL   | chenrui_f |  9 | 1 
| 1 |0 |NULL | 11   |   1 |   1 |0 
|0 | 1 |   0 |
  
+-+++---++---+---+--+-+--+-+-+--+--+---+-+
  1 row in set (0.00 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408176] Re: Nova instance not boot after host restart but still show as Running

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Alex Xu (xuhj) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408176

Title:
  Nova instance not boot after host restart but still show as Running

Status in OpenStack Compute (nova):
  Expired
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  The nova host lost power; after it restarted, the previously running
  instance is still shown in the "Running" state but is actually not
  started:

  root@allinone-controller0-esenfmnxzcvk:~# nova list
  
+--++++-+---+
  | ID   | Name   | 
Status | Task State | Power State | Networks  |
  
+--++++-+---+
  | 13d9eead-191e-434e-8813-2d3bf8d3aae4 | alexcloud-controller0-rr5kdtqmv7qz | 
ACTIVE | -  | Running | default-net=172.16.0.15, 30.168.98.61 |
  
+--++++-+---+
  root@allinone-controller0-esenfmnxzcvk:~# ps -ef |grep -i qemu
  root  95513  90291  0 14:46 pts/000:00:00 grep --color=auto -i qemu

  
  Please note the resume_guests_state_on_host_boot flag is False. Log file is 
attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1409024] Re: DNSDomain.register_for_zone races

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1409024

Title:
  DNSDomain.register_for_zone races

Status in OpenStack Compute (nova):
  Expired

Bug description:
  2 simultaneous calls to DNSDomain.register_for_zone or
  DNSDomain.register_for_project will race. The winner is undefined.
  Consequently, the caller has no way of knowing if the DNSDomain is
  appropriately registered following a call. register_for_zone or
  register_for_project will not currently generate an error in this
  case.

  I can think of 2 ways to resolve this:

  1. Assert that only an unregistered domain can be registered.
  Attempting to register a registered domain is an error. This would be
  a semantic change to the existing APIs.

  2. Create new APIs which additionally take the expected current
  registration, and fail if it is not as expected. Deprecate the
  existing APIs.

  I favour the former.
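
  Option 1 would look roughly like this at the DB API layer (a sketch only;
  _get_domain, _register and DomainAlreadyRegistered are made-up names, and
  the session handling is simplified):

    def register_for_zone(context, fqdomain, zone):
        with session.begin():                        # one transaction
            domain = _get_domain(session, fqdomain)  # hypothetical helper
            if domain is not None and domain.scope is not None:
                # Already registered: surface the race instead of silently
                # letting the last writer win.
                raise DomainAlreadyRegistered(fqdomain)
            _register(session, fqdomain, scope='private', zone=zone)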

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1409024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408865] Re: "Ignoring EndpointNotFound: The service catalog is empty" error when init_host

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Triaged => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408865

Title:
  "Ignoring EndpointNotFound: The service catalog is empty" error when
  init_host

Status in OpenStack Compute (nova):
  Expired

Bug description:
  the scenario:

  1. create a vm using bootable volume.

  2. delete this vm

  3. restart service nova-compute when vm's task state is deleting.

  When nova-compute comes back up, the VM is deleted successfully, but the
  bootable volume is still in the in-use state and can't be deleted with
  cinder volume delete.

  The error point is that when nova-compute comes up, "init_host" goes to
  delete the VM whose task state is "deleting", but the context used is
  obtained from the "nova.context.get_admin_context()" function and has no
  auth_token. When "self.volume_api.terminate_connection(context,
  bdm.volume_id, connector)" is called in the VM-deletion process, it logs
  the "Ignoring EndpointNotFound: The service catalog is empty" warning and
  can't detach the bootable volume. The volume's status remains 'in-use' on
  the Cinder side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284559] Re: VMware: Openstack can not adopt more than one existing port groups in vCenter

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284559

Title:
  VMware: Openstack can not adopt more than one existing port groups in
  vCenter

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When booting an instance via OpenStack, the network bridge used by
  nova comes from the attribute integration_bridge in nova.conf
  (networks are managed via neutron, nova-network will be deprecated).
  integration_bridge can only specify one bridge name and there is no
  way to give more than one bridge.

  vCenter allows more than one port group to be created. If vCenter is
  controlled by OpenStack, OpenStack cannot make all of the port groups in
  vCenter usable via OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282582] Re: Ignoring network id during instance launching

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1282582

Title:
  Ignoring network id during instance launching

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I have run into strange behavior of the nova API.
  When we create an instance and send a request with the network parameters
  (port_id, network_id, fixed_ip) all set to non-None values,
  nova ignores network_id without any message. As a result we get an
  instance with only the specified port_id.
  When we send the same request using interface-attach, we get a BadRequest
  answer here
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/attach_interfaces.py#L96
  because we have port_id and network_id together in the request.

  I have also looked at the v3 plugins and they behave a little differently.
  If we have non-None values for port_id and fixed_ip, an error is raised
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L348
  but network_id is still ignored.

  If this is the intended behavior, could you explain why nova sends a
  BadRequest error for the same case when attaching an interface?
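
  For reference, the kind of check the attach path does (a simplified sketch
  of the validation linked above, not the exact nova code):

    from webob import exc

    def _validate_requested_network(port_id, network_id, fixed_ip):
        if port_id and network_id:
            raise exc.HTTPBadRequest(
                explanation="Port and network cannot both be specified")
        if port_id and fixed_ip:
            raise exc.HTTPBadRequest(
                explanation="Port and fixed IP cannot both be specified")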

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1282582/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271479] Re: ide disk type is not set when starting vm from dashboard

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271479

Title:
  ide disk type is not set when starting vm from dashboard

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hello

  We have an OpenStack Havana controller installation with 2 compute nodes
  (CentOS 6.4 / KVM).

  I got 3 IDE disk images; I need to build a VM by booting on the first one
  and attaching the 2 others as volumes.

  When I run the nova command line below, the VM starts well and the volumes
  are attached:

  nova boot --flavor vcdn_mn --key_name KEY_NAME --availability-zone 
nova:vcdn_kvm5 --image c5813703-2bfb-4f58-b0f1-9cd08fb8e596  --nic 
net-id=a2b8e7e4-cceb-4e5a-881b-8d872f6384db --nic 
net-id=d2292ebe-a431-4bd9-a5e4-570e2f98f169 --nic 
net-id=7368ec5a-3f9d-49af-a254-87a754f9b952 --block-device 
source=volume,dest=volume,id=8d8a2e6d-9c7e-4905-b194-8a74b6ae6c42,bus=ide,shutdown=preserve
 --block-device 
source=volume,dest=volume,id=7379756e-e435-40ad-8f0c-de43b0adcac4,bus=ide,shutdown=preserve
 mn1.vxn1s1.cdn
   
  the libvirt.xml disk device section (XML markup not preserved here)
  attaches both volumes, serials 8d8a2e6d-9c7e-4905-b194-8a74b6ae6c42 and
  7379756e-e435-40ad-8f0c-de43b0adcac4, with target bus "ide".

  But if I restart the VM from the dashboard, the libvirt.xml is modified:
  the target bus is changed from "ide" to "virtio", so the VM can't boot
  anymore because of the partition table. The rewritten disk device section
  (XML markup not preserved here) attaches the same two volumes, serials
  8d8a2e6d-9c7e-4905-b194-8a74b6ae6c42 and
  7379756e-e435-40ad-8f0c-de43b0adcac4, with target bus "virtio".
  
  I've tried to set libvirt_disk_prefix=hd in /etc/nova/nova.conf (even if
  it does not seem to be a valid value...).

  I've also set some metadata on the boot disk image (disk_bus, disk_dev,
  hw_disk_bus) without success:

  ++--+
  | Property   | Value|
  ++--+
  | Property 'disk_bus'| ide  |
  | Property 'disk_dev'| hda  |
  | Property 'hw_disk_bus' | ide  |
  | checksum   | 29bd44d6edd8358d6b74967ab7eaf526 |
  | container_format   | bare |
  | created_at | 2014-01-21T12:41:29  |
  | deleted| False|
  | deleted_at | None |
  | disk_format| qcow2|
  | id | c5813703-2bfb-4f58-b0f1-9cd08fb8e596 |
  | is_public  | True |
  | min_disk   | 0|
  | min_ram| 0|
  | name   | vcdn_mn_img0 |
  | owner  | None |
  | protected  | False|
  | size   | 915537920|
  | status | active   |
  | updated_at | 2014-01-22T10:24:35  |
  ++--+

  I've tried some modifications in
  /usr/lib/python2.6/site-packages/nova/virt/libvirt/blockinfo.py, but
  without success.


  thanks for your help 
  KR

  Philippe

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281928] Re: VMware VC driver reports incorrect value for vcpus_used

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281928

Title:
  VMware VC driver reports incorrect value for vcpus_used

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The 'vcpus_used' value returned by the driver when reporting available
  resources is always 0. The VC driver doesn't track the number of vCPUs;
  instead it tracks MHz shares.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267862] Re: launch a new vm fail in source host after live migration

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Tiago Mello (timello) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267862

Title:
  launch a new vm fail in source host after live migration

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Nova version: Havana

  We have two compute nodes: Host A and Host B. Each one has, for example,
  100G disk, 4 CPUs and 2G mem.

  First, launch an image-backed instance named vm-1 on Host A using the
  flavor successfully. The flavor's specs are 60G disk, 2 CPUs and 1G mem.
  So, obviously, the free resources on Host A are 40G disk, 2 CPUs and 1G
  mem.

  Second, do a live migration with the block migration flag from Host A to
  Host B. It succeeds. Now, no active instance exists on Host A.

  But the problem is that the free resources on Host A are still 40G disk, 2
  CPUs and 1G mem. The resources recorded in the compute_nodes table are not
  added back.

  Then add another new instance named vm-2 to Host A using the same flavor
  as vm-1. We are notified that resources are insufficient on Host A
  (40G < 60G disk, denied).

  Notice that the data would be correct after the next periodic task of
  update_available_resource. Within that interval, it means we can't deploy
  another instance, although in fact we could.

  I think the resources should be recalculated immediately on Host A,
  otherwise it may affect VM deployment.
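
  The idea would be something along these lines in the live-migration
  cleanup on the source host (sketch only; the method names are as I
  understand the Havana compute manager and may differ):

    # After the guest has left this host, refresh the tracked resources
    # right away instead of waiting for the next periodic task.
    rt = self._get_resource_tracker(instance['node'])   # source node tracker
    rt.update_available_resource(context)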

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278741] Re: resource tracker fails after migration if instance is already tracked on new node

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278741

Title:
  resource tracker fails after migration if instance is already tracked
  on new node

Status in OpenStack Compute (nova):
  Expired

Bug description:
   {u'message': u'\'list\' object has no attribute \'iteritems\'

   Traceback (most recent call last):
 File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
   **args)
 File "/usr/lib/python2.7/dist-packages/nova/openstack/com', u'code': 500, u'details': u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 255, in decorated_function
   return function(self, context, *args, **kwargs)
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2949, in prep_resize
   filter_properties)
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2943, in prep_resize
   node)
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2911, in _prep_resize
   limits=limits) as claim:
 File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 246, in inner
   return f(*args, **kwargs)
 File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 173, in resize_claim
   self._update(elevated, self.compute_node)
 File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 428, in _update
   context, self.compute_node, values, prune_stats)
 File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 263, in compute_node_update

[Yahoo-eng-team] [Bug 1275875] Re: Virt drivers should use standard image properties

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275875

Title:
  Virt drivers should use standard image properties

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Several virt drivers are using non-standard driver-specific image
  metadata properties. This creates an API contract between the external
  user and the driver implementation. These non-standard ones should be
  marked as deprecated in some way, enforced in v3, etc. We need a
  global whitelist of keys and values that are allowed so that we can
  make sure others don't leak in.

  Examples:

  nova/virt/vmwareapi/vmops.py:os_type = image_properties.get("vmware_ostype", "otherGuest")
  nova/virt/vmwareapi/vmops.py:adapter_type = image_properties.get("vmware_adaptertype",
  nova/virt/vmwareapi/vmops.py:disk_type = image_properties.get("vmware_disktype",
  nova/virt/vmwareapi/vmops.py:vif_model = image_properties.get("hw_vif_model", "VirtualE1000")

  nova/virt/xenapi/vm_utils.py:device_id = image_properties.get('xenapi_device_id')

  I think it's important to try to get this fixed (or as close as
  possible) before the icehouse release.
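
  As a rough illustration of the whitelist idea (the key names and the mapping
  below are only examples, not nova's actual validation code): unknown keys
  are rejected and deprecated driver-specific keys are translated to their
  standard replacements, so drivers only ever see standard keys.

  STANDARD_KEYS = {'hw_vif_model', 'os_type'}      # allowed as-is
  DEPRECATED_KEYS = {'vmware_ostype': 'os_type'}   # old key -> standard key

  def normalize_image_properties(properties):
      normalized = {}
      for key, value in properties.items():
          if key in STANDARD_KEYS:
              normalized[key] = value
          elif key in DEPRECATED_KEYS:
              # accept the old key but record it under the standard name
              normalized[DEPRECATED_KEYS[key]] = value
          else:
              raise ValueError("non-standard image property: %s" % key)
      return normalized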

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275144] Re: Volume operations should set task state

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275144

Title:
  Volume operations should set task state

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Volume attach/detach/swap should set task_state so that conflicting
  operations such as migrate can be blocked via the check_instance_state
  decorator.

  This would also allow users to see that slow operations are still in
  progress.
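
  A minimal sketch of the idea (illustrative only, not nova's actual code):
  the volume operation sets a task_state, and any operation guarded by a
  check_instance_state-style decorator refuses to start while it is set.

  def check_instance_state(allowed=(None,)):
      def decorator(func):
          def wrapper(self, context, instance, *args, **kwargs):
              if instance['task_state'] not in allowed:
                  raise RuntimeError('instance busy: task_state=%s'
                                     % instance['task_state'])
              return func(self, context, instance, *args, **kwargs)
          return wrapper
      return decorator

  class ComputeAPI(object):
      def attach_volume(self, context, instance, volume_id):
          instance['task_state'] = 'volume_attaching'  # hypothetical state name
          try:
              pass  # ... perform the attach ...
          finally:
              instance['task_state'] = None

      @check_instance_state(allowed=(None,))
      def resize(self, context, instance, flavor):
          pass  # blocked while a volume operation is in flight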

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391761] Re: info about migration is not appropriate

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391761

Title:
  info about migration is not appropriate

Status in OpenStack Compute (nova):
  Expired

Bug description:
  nova migration information reports flavor.id rather than flavor.flavorid,
  which is what the API requires everywhere else.

  
  
  | Source Node | Dest Node | Source Compute | Dest Compute | Dest Host | Status | Instance UUID | Old Flavor | New Flavor | Created At | Updated At |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | confirmed | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | 2 | 15 | 2014-08-27T01:36:58.00 | 2014-08-27T01:42:27.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | error | cb7c6742-7b7a-44de-ad5a-8570ee520f9e | 15 | 2 | 2014-08-28T05:03:56.00 | 2014-08-28T05:03:57.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | confirmed | 2ef6c554-bffa-4ec8-adfd-9ede6ee7e389 | 2 | 15 | 2014-10-15T09:45:25.00 | 2014-10-15T09:46:16.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | reverted | 2ef6c554-bffa-4ec8-adfd-9ede6ee7e389 | 15 | 2 | 2014-10-15T09:50:54.00 | 2014-10-15T09:57:28.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | confirmed | 1a000511-f17b-4024-b54d-6ae3cf00673a | 17 | 19 | 2014-11-04T02:17:57.00 | 2014-11-04T02:23:14.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | error | 1a000511-f17b-4024-b54d-6ae3cf00673a | 19 | 2 | 2014-11-04T04:12:55.00 | 2014-11-04T04:12:56.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | error | 1a000511-f17b-4024-b54d-6ae3cf00673a | 19 | 15 | 2014-11-12T06:34:24.00 | 2014-11-12T06:34:25.00 |
  | cloudcontroller | cloudcontroller | cloudcontroller | cloudcontroller | 192.168.122.202 | error | 1a000511-f17b-4024-b54d-6ae3cf00673a | 19 | 15 | 2014-11-12T06:36:19.00 | 2014-11-12T06:36:20.00 |

  jichen@cloudcontroller:~$ nova flavor-list
  
  | ID | Name | Memory_MB | Disk | Ephemeral | Swap_MB | VCPUs | RXTX_Factor | Is_Public |
  | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  | 100 | m1.test | 1024 | 5 | 0 | | 1 | 1.0 | True |
  | 101 | m1.test1 | 512 | 5 | 0 | | 1 | 1.0 | True |
  | 1010 | ji.t1 | 512 | 1 | 10 | | 1 | 1.0 | True |
  | 1011 | ji.t2 | 512 | 1 | 20 | | 1 | 1.0 | True |
  | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  | 42 | m1.nano | 64 | 0 | 0 | | 1 | 1.0 | True |
  | 451 | m1.heat | 512 | 0 | 0 |

[Yahoo-eng-team] [Bug 1375868] Re: libvirt: race between hot unplug and XMLDesc in _get_instance_disk_info

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

** Changed in: nova
 Assignee: Chen Fan (fan-chen) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375868

Title:
  libvirt: race between hot unplug and XMLDesc in
  _get_instance_disk_info

Status in OpenStack Compute (nova):
  Expired

Bug description:
  This came up when analyzing
  https://bugs.launchpad.net/nova/+bug/1371677 and there is a lot of
  information there. The bug, in short, is that _get_instance_disk_info
  relies on DB information to filter the volumes out of the list of disks
  it gets from the libvirt XML, but due to the async nature of unplug the
  XML can still contain a volume that no longer exists in the DB and will
  not be filtered out, so the code assumes it is an LVM image and runs
  blockdev on it, which can block for a very long time.

  The solution is to NOT use the libvirt XML in this particular case (or
  anywhere in Nova, really) to find out information about running
  instances.
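
  A simplified illustration of the race described above (not the real
  _get_instance_disk_info code): the disk list comes from the domain XML, the
  known volume paths come from the database, and anything unmatched is treated
  as a local image.

  def classify_local_disks(xml_disk_paths, db_volume_paths):
      local = []
      for path in xml_disk_paths:
          if path in db_volume_paths:
              continue              # known volume, ignore it
          local.append(path)        # assumed to be an image/LVM disk
      return local

  # The BDM row was already deleted, but the async unplug has not finished,
  # so the volume is still present in the XML:
  xml_disks = ['/var/lib/nova/instances/some-uuid/disk',
               '/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-example-lun-1']
  db_volumes = []   # the volume is already gone from the DB

  print(classify_local_disks(xml_disks, db_volumes))
  # both paths come back, so the half-detached volume would get blockdev'd
  # as if it were a local LVM disk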

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284658] Re: VMware: refactor how we iterate result objects from vCenter

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284658

Title:
  VMware: refactor how we iterate result objects from vCenter

Status in OpenStack Compute (nova):
  Expired

Bug description:
  There is a lot of duplicate code which does the following (pseudo code):

  result = session.get_objects_from_vcenter()
  while result:
      do_something(result)
      token = get_token(result)
      if token:
          result = session.continue_to_get_objects(token)
      else:
          break

  The part that retrieves more objects if token is returned is repeated
  over and over again. We need to come up with a common utility (e.g. an
  iterator) which abstracts this boilerplate and then have something
  like:

  for result in session.get_objects():
      do_something_with_result(result)
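
  One possible shape for that common utility (a sketch using the names from
  the pseudo code above, not the actual nova/oslo.vmware implementation): a
  generator hides the token-based paging so callers can simply iterate.

  def iterate_vcenter_objects(retrieve_fn, continue_fn, get_token):
      """Yield result pages, following the retrieval token while one exists."""
      result = retrieve_fn()
      while result:
          yield result
          token = get_token(result)
          if not token:
              return
          result = continue_fn(token)

  # Usage, mirroring the snippet above:
  # for result in iterate_vcenter_objects(session.get_objects_from_vcenter,
  #                                       session.continue_to_get_objects,
  #                                       get_token):
  #     do_something(result)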

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310131] Re: Some non-supported actions in Ironic nova driver do not return errors to the user

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310131

Title:
  Some non-supported actions in Ironic nova driver do not return errors
  to the user

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  Expired

Bug description:
  While checking Nova API actions that I expected to fail when testing with
  the Ironic driver, I noticed that in some cases a positive response is
  returned, but the action then fails within the compute process when it is
  actually executed. With other drivers, I expect some kind of immediate
  response to the initial request stating that the action isn't possible.
  The actions I've specifically verified this with are:

  - Pause

2014-04-19 21:47:30.940 ERROR oslo.messaging._drivers.common 
[req-10dedfe7-9fe2-4c0d-9a4e-a85abdd137df demo demo] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File "/opt/stack/nova/nova/exception.py", line 88, in 
wrapped\npayload)\n', '  File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped\nreturn f(self, 
context, *args, **kw)\n', '  File "/opt/stack/nova/
 nova/compute/manager.py", line 276, in decorated_function\npass\n', '  
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 262, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 329, in decorated_function\n
 function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 305, in decorated_function\n
e, sys.exc_info())\n', '  File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 292, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 3659, in pause_instance\n
self.driver.pause(instance)\n', '  File "/opt/stack/nova/nova/virt/driver.py", 
line 521, in pause\nraise NotImplementedError()\n', 'NotImplementedError\n']

  - Rescue

screen-n-cpu.log:2014-04-19 21:56:29.518 DEBUG ironicclient.common.http 
[req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo demo]
screen-n-cpu.log:2014-04-19 21:56:29.523 ERROR 
oslo.messaging.rpc.dispatcher [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] Exception during message handling: Instance 
5b43d631-91e1-4384-9b87-93283b3ae958 cannot be rescued: Driver Error:
screen-n-cpu.log:2014-04-19 21:56:29.524 ERROR 
oslo.messaging._drivers.common [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] Returning exception Instance 5b43d631-91e1-4384-9b87-93283b3ae958 cannot 
be rescued: Driver Error:  to caller
screen-n-cpu.log:2014-04-19 21:56:29.524 ERROR 
oslo.messaging._drivers.common [req-d3128aae-9558-4f4b-adc4-b75b092a3acb demo 
demo] ['Traceback (most recent call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, 

[Yahoo-eng-team] [Bug 1340709] Re: detach volume when call cinder's attach volume fail

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340709

Title:
  detach volume when call cinder's attach volume fail

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I attached a volume to a VM and an exception occurred when calling Cinder's
  attach_volume. Cinder shows that the volume is not attached, but when I log
  in to the VM I find that the volume is mounted on the VM.
  I think Nova needs to detach the volume when this exception occurs, because
  the nova driver has already mounted it successfully before calling cinder's
  attach_volume.
  ps:
   1. nova attaches a volume;
   2. after nova's driver connects the volume, nova calls cinder's attach; at
  this time cinder's API gets the message from MQ, but cinder-volume is
  rebooting.
   3. running the command (cinder list) shows the volume is not attached to
  any VM, but logging into the VM shows the volume is attached.
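
  A sketch of the rollback being asked for (the function names are
  illustrative, not nova's exact driver/volume API): if the cinder-side attach
  call fails after the hypervisor has already connected the volume, undo the
  connection instead of leaving the guest with a disk that cinder believes is
  unattached.

  def attach_volume(driver, volume_api, context, instance, connection_info,
                    volume_id):
      driver.attach_volume(connection_info, instance)    # hypervisor side done
      try:
          volume_api.attach(context, volume_id, instance['uuid'])  # cinder side
      except Exception:
          # cinder never recorded the attachment, so roll back the guest side
          driver.detach_volume(connection_info, instance)
          raise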

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309733] Re: VMWARE: os-hypervisors api call returning invalid response

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309733

Title:
  VMWARE: os-hypervisors api call returning invalid response

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The following tempest test failed:
  
tempest.api.compute.admin.test_hypervisor.HypervisorAdminTestJSON.test_get_hypervisor_show_details

  Error:
  ft115.3: 
tempest.api.compute.admin.test_hypervisor.HypervisorAdminTestJSON.test_get_hypervisor_show_details[gate]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2014-04-18 12:09:28,673 Request 
(HypervisorAdminTestJSON:test_get_hypervisor_show_details): 200 GET 
http://172.30.0.3:8774/v2/1f6893bb24904ef0a9a90ed9fef9d86c/os-hypervisors 0.013s
  2014-04-18 12:09:28,715 Request 
(HypervisorAdminTestJSON:test_get_hypervisor_show_details): 200 GET 
http://172.30.0.3:8774/v2/1f6893bb24904ef0a9a90ed9fef9d86c/os-hypervisors/1 
0.038s
  }}}

  Traceback (most recent call last):
File "tempest/api/compute/admin/test_hypervisor.py", line 60, in 
test_get_hypervisor_show_details
  get_hypervisor_show_details(hypers[0]['id']))
File "tempest/services/compute/json/hypervisor_client.py", line 53, in 
get_hypervisor_show_details
  resp, body)
File "tempest/common/rest_client.py", line 602, in validate_response
  raise exceptions.InvalidHTTPResponseBody(msg)
  InvalidHTTPResponseBody: HTTP response body is invalid json or xml
  Details: HTTP response body is invalid (None is not of type 'integer'

  Failed validating 'type' in 
schema['properties']['hypervisor']['properties']['disk_available_least']:
  {'type': 'integer'}

  On instance['hypervisor']['disk_available_least']:
  None)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383899] Re: xenapi auto disk config uses wrong size value when booting from volume

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1383899

Title:
  xenapi auto disk config uses wrong size value when booting from volume

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The auto disk config setting that resizes a guest filesystem on boot in
  the xenapi driver can destroy the partition when booting from a volume,
  the end result of which is the following error during boot:

  nova-compute.log:2014-10-10 16:04:30.829 19672 TRACE nova.utils [instance: 
uuid] raise Failure(result['ErrorDescription'])
  nova-compute.log:2014-10-10 16:04:30.829 19672 TRACE nova.utils [instance: 
uuid] Failure: ['BOOTLOADER_FAILED', 
'OpaqueRef:cd142319-cf6e-c2eb-8c1d-b303a5157ac2', 'Disk has no partitions\n']

  This happens because auto_disk_config gets a size value from the
  flavor root_gb setting, but when booting from a volume this value is
  ignored in favor of the volume size.  This can lead to unexpected
  behavior when volume size > root_gb, and the above error when volume
  size < root_gb.
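
  A sketch of the size selection this implies (illustrative, not the actual
  xenapi driver code): when the instance boots from a volume, the auto-resize
  target has to come from the volume size rather than from flavor root_gb.

  def auto_disk_config_target_gb(flavor_root_gb, boot_volume_size_gb=None):
      if boot_volume_size_gb is not None:      # boot-from-volume case
          return boot_volume_size_gb
      return flavor_root_gb

  print(auto_disk_config_target_gb(flavor_root_gb=80, boot_volume_size_gb=20))
  # 20 -- resizing to root_gb (80) instead is what destroys the partition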

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1383899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270470] Re: nova.servicegroup.drivers.db error in n-cpu log after successful tempest run

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270470

Title:
  nova.servicegroup.drivers.db error in n-cpu log after successful
  tempest run

Status in OpenStack Compute (nova):
  Expired

Bug description:
  2014-01-18 09:28:58.369 25179 ERROR nova.servicegroup.drivers.db [-] model 
server went away
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db Traceback 
(most recent call last):
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/servicegroup/drivers/db.py", line 96, in _report_state
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db 
service.service_ref, state_catalog)
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/conductor/api.py", line 246, in service_update
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db return 
self._manager.service_update(context, service, values)
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/conductor/rpcapi.py", line 375, in service_update
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db 
service=service_p, values=values)
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/rpcclient.py", line 85, in call
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db return 
self._invoke(self.proxy.call, ctxt, method, **kwargs)
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/rpcclient.py", line 63, in _invoke
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db return 
cast_or_call(ctxt, msg, **self.kwargs)
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/new/nova/nova/openstack/common/rpc/proxy.py", line 130, in call
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db 
exc.info, real_topic, msg.get('method'))
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db Timeout: 
Timeout while waiting on RPC response - topic: "conductor", RPC method: 
"service_update" info: ""
  2014-01-18 09:28:58.369 25179 TRACE nova.servicegroup.drivers.db 

  ...

  2014-01-18 09:30:09.105 25179 ERROR nova.servicegroup.drivers.db [-]
  Recovered model server connection!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273894] Re: GlusterFS: Do not time out long-running volume snapshot operations

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273894

Title:
  GlusterFS: Do not time out long-running volume snapshot operations

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Currently, when Cinder sends a snapshot create or delete job to Nova
  for the GlusterFS driver, it has a fixed timeout window, and if the
  job takes longer than that, the snapshot operation is failed.  (The
  assumption is that Nova has somehow failed.)

  This is problematic because it fails operations that are still active
  but running very slowly.

  The fix proposed here is to use the same update_snapshot_status API
  which is used to finalize these operations to send periodic updates
  while the operation is in progress, so that Cinder knows that Nova is
  still active, and that the job does not need to be timed out.

  This is backward compatible for both Havana Cinder and Havana Nova.
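
  A minimal sketch of the proposed keep-alive (hypothetical helper names; only
  update_snapshot_status is taken from the proposal above): while the
  long-running job is still making progress, the in-progress status is
  reported periodically so cinder does not time the operation out.

  import time

  def run_snapshot_job(step, report_status, in_progress_status, interval=30):
      """step() returns True when the job is finished."""
      while not step():
          report_status(in_progress_status)  # e.g. update_snapshot_status(...)
          time.sleep(interval)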

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1273894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338672] Re: Nova might spawn without waiting for network-vif-plugged event

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338672

Title:
  Nova might spawn without waiting for network-vif-plugged event

Status in OpenStack Compute (nova):
  Expired

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  It has been observed that in some cases Nova spawns an instance without
  waiting for the network-vif-plugged event, even if the VIF was unplugged
  and then plugged again.

  This happens because the status of the VIF in the network info cache is not 
updated when such events are received.
  Therefore the cache contains an out-of-date value and the VIF might already 
be in status ACTIVE when the instance is being spawned. However there is no 
guarantee that this would be the actual status of the VIF.

  For instance in this case there are only two instances in which nova
  starts waiting for 'network-vif-plugged' on f800d4a8-0a01-475f-
  bd34-8d975ce6f1ab. However this instance is used in
  tempest.api.compute.servers.test_server_actions, and the tests in this
  suite should trigger more than 2 events requiring a respawn of an
  instance after unplugging vifs.

  From what can be gathered from the logs, this issue, if confirmed, should
  occur only when actions such as stop, resize, or reboot_hard are executed
  on an instance.
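
  A simplified illustration of the cache staleness described above (not nova's
  actual network info cache code): if plug/unplug events do not update the
  cached VIF status, a later spawn sees ACTIVE and skips waiting.

  network_info_cache = {'vif-1': {'status': 'ACTIVE'}}

  def handle_event(event):
      # what the report implies should happen when an event is received
      vif = network_info_cache[event['vif_id']]
      vif['status'] = ('ACTIVE' if event['name'] == 'network-vif-plugged'
                       else 'DOWN')

  def must_wait_for_plug(vif_id):
      return network_info_cache[vif_id]['status'] != 'ACTIVE'

  handle_event({'vif_id': 'vif-1', 'name': 'network-vif-unplugged'})
  print(must_wait_for_plug('vif-1'))
  # True -> the next spawn should wait for network-vif-plugged again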

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403856] Re: VMware VCDriver: A node crash, vSphere HA and badly timed _sync_power_states() will shut instances down

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403856

Title:
  VMware VCDriver: A node crash, vSphere HA and badly timed
  _sync_power_states() will shut instances down

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The release is Icehouse; however, the code in Juno seems to be the same.

  When a VMware node crashes, the instances will be restarted on a new
  node because of vSphere HA.

  If _sync_power_states() is triggered just after the node crash, the VMs are
  down and Nova will update the database and print
  "Instance shutdown by itself. Calling the stop API."

  On the next _sync_power_states() run, Nova will notice that the power state
  has changed and will shut the instances down, printing
  "Instance is not stopped. Calling the stop API." This happens because
  vSphere HA has started the instances in the meantime.

  To my understanding, to fix this we need to either
  1. change the logic (unfortunately I don't have ideas here), or
  2. add a config option that states whether or not we force-stop an instance
  that is stopped from the database's point of view.
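
  A sketch of what option 2 could look like with oslo.config (the option name
  is made up for illustration; no such option is claimed to exist in nova):

  from oslo_config import cfg

  power_sync_opts = [
      cfg.BoolOpt('force_stop_on_power_sync',
                  default=True,
                  help='If False, _sync_power_states only records a power '
                       'state mismatch instead of calling the stop API, so '
                       'instances restarted by vSphere HA are left running.'),
  ]

  CONF = cfg.CONF
  CONF.register_opts(power_sync_opts)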

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305892] Re: nova-manage db archive_deleted_rows fails with pgsql on low row count

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305892

Title:
  nova-manage db archive_deleted_rows fails with pgsql  on low row count

Status in OpenStack Compute (nova):
  Expired

Bug description:
  # nova-manage db archive_deleted_rows 10 fails with postgresql
  when there are fewer than 10 rows to archive.

   
  # nova delete 27d7de76-3d41-4b37-8980-2a783f8296ac
  # nova list
  
+--++++-+--+
  | ID   | Name   | Status | Task State | Power 
State | Networks |
  
+--++++-+--+
  | 526d13d4-420d-4b5c-b469-bd997ef4da99 | server | ACTIVE | -  | 
Running | private=10.1.0.4 |
  | d01ce4e4-a33d-4583-96a4-b9a942d08dd8 | server | ACTIVE | -  | 
Running | private=10.1.0.6 |
  
+--++++-+--+
  # /usr/bin/nova-manage db archive_deleted_rows 1  SUCCESS
  # nova delete  526d13d4-420d-4b5c-b469-bd997ef4da99
  # nova delete d01ce4e4-a33d-4583-96a4-b9a942d08dd8
  # nova list
  ++--+++-+--+
  | ID | Name | Status | Task State | Power State | Networks |
  ++--+++-+--+
  ++--+++-+--+
  # /usr/bin/nova-manage db archive_deleted_rows 3 # FAILURE 
  Command failed, please check log for more info
  2014-04-10 13:40:06.716 CRITICAL nova 
[req-43b1f10f-9ece-4aae-8812-cd77f6556d38 None None] ProgrammingError: 
(ProgrammingError) column "locked_by" is of type shadow_instances0locked_by but 
expression is of type instances0locked_by
  LINE 1: ...ces.cell_name, instances.node, instances.deleted, instances
   ^
  HINT:  You will need to rewrite or cast the expression.
   'INSERT INTO shadow_instances SELECT instances.created_at, 
instances.updated_at, instances.deleted_at, instances.id, 
instances.internal_id, instances.user_id, instances.project_id, 
instances.image_ref, instances.kernel_id, instances.ramdisk_id, 
instances.launch_index, instances.key_name, instances.key_data, 
instances.power_state, instances.vm_state, instances.memory_mb, 
instances.vcpus, instances.hostname, instances.host, instances.user_data, 
instances.reservation_id, instances.scheduled_at, instances.launched_at, 
instances.terminated_at, instances.display_name, instances.display_description, 
instances.availability_zone, instances.locked, instances.os_type, 
instances.launched_on, instances.instance_type_id, instances.vm_mode, 
instances.uuid, instances.architecture, instances.root_device_name, 
instances.access_ip_v4, instances.access_ip_v6, instances.config_drive, 
instances.task_state, instances.default_ephemeral_device, 
instances.default_swap_device, instances.progress, instances.
 auto_disk_config, instances.shutdown_terminate, instances.disable_terminate, 
instances.root_gb, instances.ephemeral_gb, instances.cell_name, instances.node, 
instances.deleted, instances.locked_by, instances.cleaned, 
instances.ephemeral_key_uuid \nFROM instances \nWHERE instances.deleted != 
%(deleted_1)s ORDER BY instances.id \n LIMIT %(param_1)s' {'param_1': 1, 
'deleted_1': 0}
  2014-04-10 13:40:06.716 14789 TRACE nova Traceback (most recent call last):
  2014-04-10 13:40:06.716 14789 TRACE nova   File "/usr/bin/nova-manage", line 
10, in 
  2014-04-10 13:40:06.716 14789 TRACE nova sys.exit(main())
  2014-04-10 13:40:06.716 14789 TRACE nova   File 
"/opt/stack/new/nova/nova/cmd/manage.py", line 1376, in main
  2014-04-10 13:40:06.716 14789 TRACE nova ret = fn(*fn_args, **fn_kwargs)
  2014-04-10 13:40:06.716 14789 TRACE nova   File 
"/opt/stack/new/nova/nova/cmd/manage.py", line 902, in archive_deleted_rows
  2014-04-10 13:40:06.716 14789 TRACE nova 
db.archive_deleted_rows(admin_context, max_rows)
  2014-04-10 13:40:06.716 14789 TRACE nova   File 
"/opt/stack/new/nova/nova/db/api.py", line 1915, in 

[Yahoo-eng-team] [Bug 1315988] Re: report disk consumption incorrect in nova-compute

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315988

Title:
  report disk consumption incorrect in nova-compute

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I have the following flavors:
  jichen@controller:~$ nova flavor-list
  
  | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  | 11 | t.test1 | 512 | 1 | 5 | | 1 | 1.0 | True |
  | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |

  I used the following command:
  nova boot --flavor 11 --key_name mykey --image 47b00a69-ba84-4dce-bc7e-72ffc5a5d93e --ephemeral 2 t6

  so the ephemeral disk consumption should be 2G instead of 5G.

  But I added the following logging:
  def _update_usage(self, resources, usage, sign=1):
      mem_usage = usage['memory_mb']

      overhead = self.driver.estimate_instance_overhead(usage)
      mem_usage += overhead['memory_mb']

      resources['memory_mb_used'] += sign * mem_usage
      resources['local_gb_used'] += sign * usage.get('root_gb', 0)
      resources['local_gb_used'] += sign * usage.get('ephemeral_gb', 0)

      LOG.audit(_("--%s %s--") % (usage.get('ephemeral_gb', 0),
                                  usage.get('root_gb', 0)))

  
  I got the following info in the log:
  2014-05-05 11:27:12.105 5209 AUDIT nova.compute.resource_tracker [-] --5 
1--

  
  so the free disk was changed from 
  2014-05-05 10:16:16.174 3033 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 35

  to

  2014-05-05 10:16:16.174 3033 AUDIT nova.compute.resource_tracker [-]
  Free disk (GB): 29

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1315988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321774] Re: Wrong error when creating different instances with the same hostname into the same DNS domain

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321774

Title:
  Wrong error when creating different instances with the same hostname
  into the same DNS domain

Status in OpenStack Compute (nova):
  Expired

Bug description:
  The bug related to creating different instances with the same hostname
  into the same DNS domain was reported on launchpad
  (https://bugs.launchpad.net/nova/+bug/1283538) and is being solved
  with its review (https://review.openstack.org/#/c/94252/).

  However, the error is thrown when the instance is being built and it
  says "Error: No valid host was found.". It should say something like
  "Error: An instance already exists with the hostname ".

  Internally, an error with message "The DNS entry %(name)s already
  exists in domain %(domain)s." is being thrown, but it is not shown to
  the caller interface, such as Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347039] Re: VMWare: available disk spaces(hypervisor-list) only based on a single datastore instead of all available datastores from cluster

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347039

Title:
  VMWare: available disk spaces(hypervisor-list) only based on a single
  datastore instead of all available datastores from cluster

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Currently, with the vmware backend, nova hypervisor-list reports incorrect
  values: local_gb, free_disk_gb and local_gb_used are wrong whenever the
  compute node (cluster) has more than one datastore. The code reports the
  resource update based only on the first datastore it picks up, which is
  incorrect.

  The real situation is, for example, 20 datastores available to the compute
  cluster node, but only one is counted in the resource report. This easily
  causes deployment failures claiming there is no free space, when in fact
  there is enough disk for VM deployments.

  
  [root@RegionServer1 nova]# nova hypervisor-show 1 
  +---------------------------+--------------------------+
  | Property                  | Value                    |
  +---------------------------+--------------------------+
  | cpu_info_model            | ["Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz", "Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz"] |
  | cpu_info_topology_cores   | 156                      |
  | cpu_info_topology_threads | 312                      |
  | cpu_info_vendor           | ["IBM"]                  |
  | current_workload          | 0                        |
  | disk_available_least      | -                        |
  | free_disk_gb              | -2682                    |
  | free_ram_mb               | 1545886                  |
  | host_ip                   | 172.18.152.120           |
  | hypervisor_hostname       | domain-c661(BC1-Cluster) |
  | hypervisor_type           | VMware vCenter Server    |
  | hypervisor_version        | 5001000                  |
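
  A sketch of the aggregation the report asks for (illustrative field names,
  not the actual VC driver data structures): capacity is summed over every
  datastore the cluster can use instead of taken from the first one returned.

  def cluster_disk_stats(datastores):
      total_gb = sum(ds['capacity_gb'] for ds in datastores)
      free_gb = sum(ds['free_gb'] for ds in datastores)
      return {'local_gb': total_gb,
              'free_disk_gb': free_gb,
              'local_gb_used': total_gb - free_gb}

  print(cluster_disk_stats([{'capacity_gb': 500, 'free_gb': 200},
                            {'capacity_gb': 500, 'free_gb': 450}]))
  # {'local_gb': 1000, 'free_disk_gb': 650, 'local_gb_used': 350}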

   

[Yahoo-eng-team] [Bug 1330758] Re: VMware: iSCSI targets needs to be propagated to all hosts of the cluster

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330758

Title:
  VMware: iSCSI targets needs to be propagated to all hosts of the
  cluster

Status in OpenStack Compute (nova):
  Expired

Bug description:
  With the current state of the codebase, vMotion will be a no-op if a VM has
  an RDM to an iSCSI cinder volume, because the target host the VM should be
  moved to doesn't know the iSCSI target of the host where the VM is initially
  running.
  This means that the VM cannot move and has to stay on the same host as soon
  as there is an RDM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337214] Re: VMware: Fail to boot VM when using VDS or the port group is created on a different vSwitch

2016-07-05 Thread Markus Zoeller (markus_z)
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which lead to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (LIBERTY, MITAKA, OCATA, NEWTON).
  Valid example: CONFIRMED FOR: LIBERTY


** Changed in: nova
   Importance: Medium => Undecided

** Changed in: nova
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337214

Title:
  VMware: Fail to boot VM when using VDS or the port group is created on a
  different vSwitch

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Failed to boot an instance when using nova-network; I got this error
  message in the log file:

  'InvalidVLANPortGroup: vSwitch which contains the port group
  VS5_GUEST_SCODEV_G1_V231 is not associated with the desired physical
  adapter. Expected vSwitch is vSwitch0, but the one associated is
  vSwitch5.'

  Currently, the logic in the vmware driver is that all ESXi systems must
  have exactly the same networking configuration (the same
  PortGroup/vSwitch/pNIC mapping), which typically isn't the case in
  customer environments.

  In my case, on ESX1 the portgroup could be on vSwitch1, but on ESX2 the
  portgroup could be on vSwitch2, so one of them would fail.
  Also, if I use a VDS, it doesn't have a physical adapter associated with
  it, and there is a virtual router/firewall connected to that vSwitch which
  then acts as the gateway for the different PortGroups on it.

  So our vSwitch validation should be enhanced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

