[Yahoo-eng-team] [Bug 1434407] [NEW] Instance percentage couldn't show in the new launch instance form

2015-03-20 Thread Liyingjun
Public bug reported:

When the value has too many decimal places, the instance percentage is
not shown correctly. See attachment.

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: In Progress

** Attachment added: "Screen Shot 2015-03-20 at 2.49.08 PM.png"
   
https://bugs.launchpad.net/bugs/1434407/+attachment/4350543/+files/Screen%20Shot%202015-03-20%20at%202.49.08%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1434407

Title:
  Instance percentage couldn't show in the new launch instance form

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When the value has too many decimal places, the instance percentage is
  not shown correctly. See attachment.
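  A minimal illustration (hypothetical helper, not Horizon's actual
  code) of clamping the computed percentage to a fixed number of decimal
  places so long fractions cannot overflow the display:

```python
# Sketch only: round a usage percentage to a fixed number of decimals.
# `usage_percent` is an assumed name, not part of Horizon.
def usage_percent(used, total, places=1):
    if not total:
        return 0.0
    return round(used / float(total) * 100, places)

print(usage_percent(1, 3))  # 33.3 rather than 33.333333333333336
```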

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1434407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434406] [NEW] missing admin state column in firewall table

2015-03-20 Thread Masco Kaliyamoorthy
Public bug reported:

The admin state column is missing from the firewall table.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1434406

Title:
  missing admin state column in firewall table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The admin state column is missing from the firewall table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1434406/+subscriptions



[Yahoo-eng-team] [Bug 1430822] Re: cells rpc API _handle_cell_delete regressed with commit 222d44532c65ddf3f26532ced217890628352536

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430822

Title:
  cells rpc API _handle_cell_delete regressed with commit
  222d44532c65ddf3f26532ced217890628352536

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  https://review.openstack.org/#/c/121800/ changes the cells RPC API in
  a non-backwards-compatible way in the _handle_cell_delete method,
  where method_name should be 'soft' or 'hard' but is changed to
  'soft_delete', 'delete' or 'force_delete'.

  We need this reverted or fixed on master before we release kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430822/+subscriptions



[Yahoo-eng-team] [Bug 1431562] Re: cells: Error processing message locally: Object action save failed because: Calling remotables with context is deprecated

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431562

Title:
  cells: Error processing message locally: Object action save failed
  because: Calling remotables with context is deprecated

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Looks like this breaks the cells non-voting job:

  http://logs.openstack.org/90/163890/1/check/check-tempest-dsvm-
  cells/896e7e0/logs/screen-n-cell-
  child.txt.gz?level=TRACE#_2015-03-12_17_44_00_202

  2015-03-12 17:44:00.202 ERROR nova.cells.messaging [req-f0a74b8e-f85f-4711-ac78-ecca301bdd08 DeleteServersTestJSON-678023458 DeleteServersTestJSON-122475576] Error processing message locally: Object action save failed because: Calling remotables with context is deprecated
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging Traceback (most recent call last):
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging   File "/opt/stack/new/nova/nova/cells/messaging.py", line 200, in _process_locally
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging     resp_value = self.msg_runner._process_message_locally(self)
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging   File "/opt/stack/new/nova/nova/cells/messaging.py", line 1306, in _process_message_locally
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging     return fn(message, **message.method_kwargs)
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging   File "/opt/stack/new/nova/nova/cells/messaging.py", line 838, in instance_update_from_api
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging     expected_task_state=expected_task_state)
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging   File "/opt/stack/new/nova/nova/objects/base.py", line 186, in wrapper
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging     reason='Calling remotables with context is deprecated')
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging ObjectActionError: Object action save failed because: Calling remotables with context is deprecated
  2015-03-12 17:44:00.202 19800 TRACE nova.cells.messaging

  Introduced with this change: https://review.openstack.org/#/c/160500/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431562/+subscriptions



[Yahoo-eng-team] [Bug 1431571] Re: archive_deleted_rows_for_table relies on reflection to access the "default" for soft-delete columns, but this is not a server default

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431571

Title:
  archive_deleted_rows_for_table relies on reflection to access the
  "default" for soft-delete columns, but this is not a server default

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Running subsets of Nova tests or individual tests within test_db_api
  reveals a simple error in several of the tests within ArchiveTestCase.

  A test such as test_archive_deleted_rows_2_tables attempts the
  following:

  1. places six rows into instance_id_mappings
  2. places six rows into instances
  3. runs the archive_deleted_rows_ routine with a max of 7 rows to archive
  4. runs a SELECT of instances and instance_id_mappings, and confirms that 
only 5 remain.

  Running this test directly with PYTHONHASHSEED=random very easily
  produces failures such as:

  Traceback (most recent call last):
    File "/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py", line 7869, in test_archive_deleted_rows_2_tables
      self.assertEqual(len(iim_rows) + len(i_rows), 5)
    File "/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: 8 != 5

  
  or 

  Traceback (most recent call last):
    File "/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py", line 7872, in test_archive_deleted_rows_2_tables
      self.assertEqual(len(iim_rows) + len(i_rows), 5)
    File "/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: 10 != 5

  
  The reason is that the archive_deleted_rows() routine looks for rows in *all* 
tables, in *non-deterministic order*, e.g. by searching through 
"models.__dict__.itervalues()".   In the "8 != 5" case, there are rows present 
also in the instance_types table.  By PDBing into archive_deleted_rows during 
the test, we can see here:

  ARCHIVED 4 ROWS FROM TABLE instances
  ARCHIVED 3 ROWS FROM TABLE instance_types
  Traceback (most recent call last):
  ...
  testtools.matchers._impl.MismatchError: 8 != 5

  that is, the archiver locates seven rows just between instances and
  instance_types, then stops.  It never even gets to the
  instance_id_mappings table.

  The serious problem with the way this test is designed is that even if
  we made it ignore certain tables, or fixed the ordering, or anything
  else, that would not keep the test from breaking again any time a new
  table is added which contains rows when the test fixtures start.

  The only solution to making these tests runnable in their current form
  is to limit the listing of tables that are searched in
  archive_deleted_rows; that is, the test needs to inject a fixture into
  it.  The most straightforward way to achieve this would look like
  this:

   @require_admin_context
  -def archive_deleted_rows(context, max_rows=None):
  +def archive_deleted_rows(context, max_rows=None,
  +                         _limit_tablenames_fixture=None):
       """Move up to max_rows rows from production tables to the corresponding
       shadow tables.

  @@ -5870,6 +5870,9 @@ def archive_deleted_rows(context, max_rows=None):
           if hasattr(model_class, "__tablename__"):
               tablenames.append(model_class.__tablename__)
       rows_archived = 0
  +    if _limit_tablenames_fixture:
  +        tablenames = set(tablenames).intersection(_limit_tablenames_fixture)
  +
       for tablename in tablenames:
           rows_archived += archive_deleted_rows_for_table(context, tablename,
                                                           max_rows=max_rows - rows_archived)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431571/+subscriptions



[Yahoo-eng-team] [Bug 1431549] Re: "Arguments dropped when creating context" warnings are spamming logs

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431549

Title:
  "Arguments dropped when creating context" warnings are spamming logs

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  ~5.6 million hits in a 7-day gate run:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXJndW1lbnRzIGRyb3BwZWQgd2hlbiBjcmVhdGluZyBjb250ZXh0OiB7dSdyZWFkX29ubHknOiBGYWxzZSwgdSdkb21haW4nOiBOb25lLCB1J3Nob3dfZGVsZXRlZCc6IEZhbHNlXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjYxOTE5NzQyODB9

  Probably introduced with the move to oslo_context for the nova
  RequestContext class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431549/+subscriptions



[Yahoo-eng-team] [Bug 1430383] Re: "libvirtError: Network filter not found: no nwfilter with matching name" tracing a ton since 3/3

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430383

Title:
  "libvirtError: Network filter not found: no nwfilter with matching
  name" tracing a ton since 3/3

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/74/162774/2/gate/gate-tempest-dsvm-
  full/8d8876c/logs/screen-n-cpu.txt.gz?level=TRACE

  Seeing a lot of traces like this since 3/3:

  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall Traceback (most recent call last):
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/opt/stack/new/nova/nova/virt/libvirt/firewall.py", line 249, in _get_filter_uuid
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     flt = self._conn.nwfilterLookupByName(name)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     result = proxy_call(self._autowrap, f, *args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     rv = execute(f, *args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     six.reraise(c, e, tb)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     rv = meth(*args, **kwargs)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3783, in nwfilterLookupByName
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall     if ret is None:raise libvirtError('virNWFilterLookupByName() failed', conn=self)
  2015-03-10 06:04:45.475 20202 TRACE nova.virt.libvirt.firewall libvirtError: Network filter not found: no nwfilter with matching name 'nova-no-nd-reflection'

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibGlidmlydEVycm9yOiBOZXR3b3JrIGZpbHRlciBub3QgZm91bmQ6IG5vIG53ZmlsdGVyIHdpdGggbWF0Y2hpbmcgbmFtZSAnbm92YS1uby1uZC1yZWZsZWN0aW9uJ1wiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTUtMDMtMDFUMTU6MDk6MjQrMDA6MDAiLCJ0byI6IjIwMTUtMDMtMTBUMTU6MDk6MjQrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQyNjAwMDIzMDY0NH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430383/+subscriptions



[Yahoo-eng-team] [Bug 1427209] Re: oslo.log doesn't log request_id, project_id, user_id in nova

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427209

Title:
  oslo.log doesn't log request_id, project_id, user_id in nova

Status in OpenStack Compute (Nova):
  Fix Released
Status in Logging configuration library for OpenStack:
  Invalid

Bug description:
  The switch to oslo.log broke the nova logs so request_id, project_id,
  user_id are no longer logged. This is a critical breakage of the Nova
  logs, and makes them nearly useless.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427209/+subscriptions



[Yahoo-eng-team] [Bug 1431551] Re: nova.tests.unit.api.openstack.compute.contrib.test_block_device_mapping_v1 hits OverQuota with --concurrency=1

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431551

Title:
  nova.tests.unit.api.openstack.compute.contrib.test_block_device_mapping_v1
  hits OverQuota with --concurrency=1

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If you run 'tox -e py27 -- --concurrency=1' you get a bunch of
  OverQuota errors in
  nova.tests.unit.api.openstack.compute.contrib.test_block_device_mapping_v1:

  http://paste.openstack.org/show/191932/

  e.g.:

  
  nova.tests.unit.api.openstack.compute.contrib.test_block_device_mapping_v1.BlockDeviceMappingTestV2.test_create_instance_decide_format_legacy
  ----------------------------------------------------------------------------

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/api/openstack/compute/contrib/test_block_device_mapping_v1.py", line 399, in test_create_instance_decide_format_legacy
      self._test_create(params, override_controller=controller)
    File "nova/tests/unit/api/openstack/compute/contrib/test_block_device_mapping_v1.py", line 94, in _test_create
      override_controller.create(req, body=body).obj['server']
    File "nova/api/openstack/compute/servers.py", line 614, in create
      headers={'Retry-After': 0})
  webob.exc.HTTPForbidden: Quota exceeded for cores,instances: Requested 2, but already used 20 of 20 cores

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431551/+subscriptions



[Yahoo-eng-team] [Bug 1431404] Re: Don't trace when @reverts_task_state fails on InstanceNotFound

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431404

Title:
  Don't trace when @reverts_task_state fails on InstanceNotFound

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This change https://review.openstack.org/#/c/163515/ added a warning
  when the @reverts_task_state decorator in the compute manager fails,
  rather than just passing, because we were getting KeyErrors and never
  noticing them, which broke the decorator.

  However, now we're tracing on InstanceNotFound, which is a normal case
  when the instance is deleted after a failure (tempest deletes the
  instance immediately after failures when tearing down a test):

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHJldmVydCB0YXNrIHN0YXRlIGZvciBpbnN0YW5jZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjYxNzA3MDE2OTV9

  http://logs.openstack.org/98/163798/1/check/check-tempest-dsvm-
  postgres-
  full/6eff665/logs/screen-n-cpu.txt.gz#_2015-03-12_13_11_36_304

  2015-03-12 13:11:36.304 WARNING nova.compute.manager [req-a5f3b37e-19e9-4e1d-9be7-bbb9a8e7f4c1 DeleteServersTestJSON-706956764 DeleteServersTestJSON-535578435] [instance: 6de2ad51-3155-4538-830d-f02de39b4be3] Failed to revert task state for instance. Error: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could not be found.
  Traceback (most recent call last):

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
      return func(*args, **kwargs)

    File "/opt/stack/new/nova/nova/conductor/manager.py", line 134, in instance_update
      columns_to_join=['system_metadata'])

    File "/opt/stack/new/nova/nova/db/api.py", line 774, in instance_update_and_get_original
      columns_to_join=columns_to_join)

    File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 143, in wrapper
      return f(*args, **kwargs)

    File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2395, in instance_update_and_get_original
      columns_to_join=columns_to_join)

    File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 181, in wrapped
      return f(*args, **kwargs)

    File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 2434, in _instance_update
      columns_to_join=columns_to_join)

    File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1670, in _instance_get_by_uuid
      raise exception.InstanceNotFound(instance_id=uuid)

  InstanceNotFound: Instance 6de2ad51-3155-4538-830d-f02de39b4be3 could
  not be found.
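
  A rough sketch (assumed names, not nova's actual implementation) of
  how such a revert wrapper could downgrade InstanceNotFound to a short
  warning instead of a full traceback:

```python
import functools
import logging

LOG = logging.getLogger(__name__)


class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound (sketch only)."""


def reverts_task_state(fn):
    """Revert task_state when the wrapped method fails, but treat an
    already-deleted instance as a normal, non-traced case."""
    @functools.wraps(fn)
    def wrapper(self, context, instance, *args, **kwargs):
        try:
            return fn(self, context, instance, *args, **kwargs)
        except Exception:
            try:
                instance.task_state = None
                instance.save()
            except InstanceNotFound as e:
                # Instance deleted mid-failure: warn without a traceback.
                LOG.warning("Failed to revert task state for instance. "
                            "Error: %s", e)
            except Exception:
                LOG.exception("Failed to revert task state for instance.")
            raise
    return wrapper
```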

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431404/+subscriptions



[Yahoo-eng-team] [Bug 1431201] Re: kilo controller can't conduct juno compute nodes

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431201

Title:
  kilo controller can't conduct juno compute nodes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When I tried to use a Kilo controller to manage Juno compute nodes,
  the Juno nova-compute service started with the following two errors:

  1. 2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     return self._update_available_resource(context, resources)
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 272, in inner
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     return f(*args, **kwargs)
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     self.gen.throw(type, value, traceback)
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 236, in lock
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     yield int_lock
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 272, in inner
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     return f(*args, **kwargs)
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py", line 377, in _update_available_resource
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     self._sync_compute_node(context, resources)
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py", line 388, in _sync_compute_node
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup     compute_node_refs = service['compute_node']
  2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup KeyError: 'compute_node'

  We can revert this commit to fix this error:
  https://github.com/openstack/nova/commit/83b64ceb871b1553b1bb1e0bb9270816db892552

  2. 2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/rpc.py", line 111, in deserialize_entity
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver     return self._base.deserialize_entity(context, entity)
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 649, in deserialize_entity
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver     entity = self._process_object(context, entity)
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 615, in _process_object
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver     e.kwargs['supported'])
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/conductor/api.py", line 217, in object_backport
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver     return self._manager.object_backport(context, objinst, target_version)
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py", line 358, in object_backport
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver     target_version=target_version)
  2015-03-10 06:41:29.388 19336 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 152, in call

  We can revert this commit to fix this error:
  https://github.com/openstack/nova/commit/f287b75138129542436b2085d52d6fe201ca7e14

  
  Does anybody know whether there is something like a gatekeeper that
  lets a Kilo controller keep managing Juno compute nodes? Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431201/+subscriptions



[Yahoo-eng-team] [Bug 1426364] Re: scheduler does not display chosen host

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426364

Title:
  scheduler does not display chosen host

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The scheduler is configured to schedule an instance on a host chosen
  randomly from a subset of the N best hosts. Where the size of the
  subset is defined as 'scheduler_host_subset_size' from the
  configuration.

  In a configuration where the subset is greater than the default (1)
  the scheduler does not display where it has scheduled the instance to.
  It gives a list of all those that it has weighed - just not the one
  chosen.

  Annoying when trying to debug a No Valid Host error with a subset size
  of 20.
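
  A toy sketch (hypothetical, not the actual scheduler code) of
  subset-based selection that also reports the chosen host, which is the
  information missing from the logs:

```python
import random


def pick_host(weighed_hosts, subset_size=1):
    """Pick randomly among the best `subset_size` hosts and log the choice."""
    subset = weighed_hosts[:max(1, subset_size)]
    chosen = random.choice(subset)
    # The fix amounts to logging `chosen`, not just the weighed list.
    print("Weighed %s, chose %s" % (weighed_hosts, chosen))
    return chosen
```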

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426364/+subscriptions



[Yahoo-eng-team] [Bug 1424756] Re: nova network on multihost does not delete bridges during deletion of networks

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424756

Title:
  nova network on multihost does not delete bridges during deletion of
  networks

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The teardown_unused_network_gateway parameter=true parameters is used
  by VlanManager in nova-network to remove bridge and vlan interfaces
  when the network is deleted. With CentOS 6.5 these interfaces are not
  delete bridge. In the log there are right lines:

  2014-12-12 14:50:33.957 4606 DEBUG nova.openstack.common.processutils [req-8f33392b-db62-462c-a6b5-8008b7ea5412 ] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link delete br204 execute /usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:154
  2014-12-12 14:50:32.881 4606 DEBUG nova.network.linux_net [req-3ec984c3-fbab-45c2-9002-057a6e3306a2 None] Net device removed: 'br204' delete_net_dev /usr/lib/python2.6/site-packages/nova/network/linux_net.py:1361

  But actually the bridge interface still exists:

  154: br204:  mtu 1500 qdisc noqueue state UNKNOWN
  link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
  inet 10.42.1.3/24 brd 10.42.1.255 scope global br204
  inet 10.42.2.3/24 brd 10.42.2.255 scope global br204
  inet6 fe80::109d:f0ff:feb2:9d64/64 scope link
 valid_lft forever preferred_lft forever

  The command fails in the shell:

  [root@node-77 ~]# ip link delete br204
  RTNETLINK answers: Operation not supported
  [root@node-77 ~]# echo $?
  2

  This bug is not a nova-network issue, but rather a problem with the
  Linux kernel: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=680094 ,
  but it can be worked around in nova-network by replacing the "ip link
  delete" command with "ip link set <bridge> down && brctl delbr
  <bridge>" commands.
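
  A sketch of the proposed workaround as a helper (assumed name, not
  nova's code) that builds the replacement command sequence:

```python
def bridge_delete_commands(bridge):
    """Return the commands that replace a plain `ip link delete <bridge>`
    on kernels where deleting an up bridge is not supported."""
    return [
        ["ip", "link", "set", bridge, "down"],  # bring the bridge down first
        ["brctl", "delbr", bridge],             # then remove it with brctl
    ]
```

  Each list would then be passed to nova's rootwrap/execute machinery in
  place of the single "ip link delete" call.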

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424756/+subscriptions



[Yahoo-eng-team] [Bug 1424532] Re: setup() takes at least 2 arguments (1 given)

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424532

Title:
  setup() takes at least 2 arguments (1 given)

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Trying to install devstack K on CI slave.

  Local conf:
  [[local|localrc]]
  HOST_IP=10.140.0.3
  FLAT_INTERFACE=eth0
  FIXED_RANGE=10.150.0.0/16
  FIXED_NETWORK_SIZE=255
  FLOATING_RANGE=10.140.0.0/16 
  PUBLIC_NETWORK_GATEWAY=10.140.0.3
  NETWORK_GATEWAY=10.150.0.1
  MULTI_HOST=0
  SYSLOG=False
  SCREEN_LOGDIR=/opt/stack/logs/screen-logs
  LOGFILE=/opt/stack/logs/stack.sh.log
  ADMIN_PASSWORD=*
  MYSQL_PASSWORD=*
  RABBIT_PASSWORD=*
  SERVICE_PASSWORD=*
  SERVICE_TOKEN=*
  CINDER_REPO=https://review.openstack.org/openstack/cinder
  CINDER_BRANCH=refs/changes/01/152401/21

  Error during install:
  15-02-23 06:30:11.138 | ++ ls /opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.141 | + failures=/opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.141 | + for service in '$failures'
  2015-02-23 06:30:11.142 | ++ basename /opt/stack/status/stack/n-novnc.failure
  2015-02-23 06:30:11.143 | + service=n-novnc.failure
  2015-02-23 06:30:11.143 | + service=n-novnc
  2015-02-23 06:30:11.143 | + echo 'Error: Service n-novnc is not running'
  2015-02-23 06:30:11.143 | Error: Service n-novnc is not running
  2015-02-23 06:30:11.143 | + '[' -n /opt/stack/status/stack/n-novnc.failure ']'
  2015-02-23 06:30:11.143 | + die 1494 'More details about the above errors can 
be found with screen, with ./rejoin-stack.sh'
  2015-02-23 06:30:11.143 | + local exitcode=0
  2015-02-23 06:30:11.143 | [Call Trace]
  2015-02-23 06:30:11.143 | /opt/devstack/stack.sh:1297:service_check
  2015-02-23 06:30:11.143 | /opt/devstack/functions-common:1494:die
  2015-02-23 06:30:11.147 | [ERROR] /opt/devstack/functions-common:1494 More 
details about the above errors can be found with screen, with ./rejoin-stack.sh
  2015-02-23 06:30:12.151 | Error on exit

  Novnc screen:
  stack@d-p-c-local-01-995:/opt/devstack$ /usr/local/bin/nova-novncproxy 
--config- 
  file /etc/nova/nova.conf --web /opt/stack/noVNC & echo $! 
>/opt/stack/status/sta 
  ck/n-novnc.pid; fg || echo "n-novnc failed to start" | tee 
"/opt/stack/status/st 
  ack/n-novnc.failure"
  [1] 10200
  /usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web 
/opt/stack/noVNC
  Traceback (most recent call last):
 File "/usr/local/bin/nova-novncproxy", line 9, in <module>
  load_entry_point('nova==2015.1.dev387', 'console_scripts', 
'nova-novncproxy')()
File "/opt/stack/nova/nova/cmd/novncproxy.py", line 45, in main
  port=CONF.novncproxy_port)
File "/opt/stack/nova/nova/cmd/baseproxy.py", line 57, in proxy
  logging.setup("nova")
  TypeError: setup() takes at least 2 arguments (1 given)
  n-novnc failed to start
  stack@d-p-c-local-01-995:/opt/devstack$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424532/+subscriptions



[Yahoo-eng-team] [Bug 1425343] Re: KeyErrors in NovaException message format during unit test runs

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425343

Title:
  KeyErrors in NovaException message format during unit test runs

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  For example:

  http://logs.openstack.org/88/15/1/check/gate-nova-
  python27/dd00931/console.html#_2015-02-24_22_20_22_065

  2015-02-24 22:20:21.882 | 2015-02-24 22:20:20.673 11397 ERROR nova.exception 
[-] instance: 1
  2015-02-24 22:20:21.912 | 2015-02-24 22:ei2Fi = oleself0m "n.m.ovan 
sg_fm[-664/tt20:20  .673 1%1397 E 0:2 1130.73kRROR nwova.exace91 Trgptions
  2015-02-24 22:20:21.918 | 201]51 11  -399 ETRAC[-xE nov]ca.exc eeptie% kwargs
  2015-02-24 22:20:21.945 | 2015-02-24 22:20:20.659 RACEop1138 nxce9  
ption.nova.  pyr", eFi02-ex24 ac22:se20:op20.nt706:ilet 1 oliion
  2015-02-24 22:20:21.947 | n TR1201 
  2015-02-24 22:20:21.971 |  inne 1 st18, 2015"nri395Ao--02-va/02-24 
22:20:20.665 11391 ERROR nova.exception [-] instance: 1
  2015-02-24 22:20:21.990 | 2015-02-24 22:20:20.665 11391 ERROR nova.exception 
[-] reason: 
  2015-02-24 22:20:22.000 | 2015-02-24 22:20:20.665 11391 ERROng format 
operation
  2015-02-24 22:20:22.021 | 2015-02-24 22:20:20.742 11401 TRACE nova.exception 
Traceback (most recent call lastin __init__
  2015-02-24 22:20:22.065 | 2015-02-24 22:20:20.732 11395 TRACE nova.exception  
   message = self.msg_fmt %CE nova.exception KeyError: u'instanc3 T):2015-02-24 
22:20:20.893 11403 ERROR nova.e24 22:20:20.673 11397 ERROR nova.exception [-] 
code: 400
  2015-02-24 22:20:22.089 | 2015-02-24 22:20:20.777 11397 ERROR 
nova.exexception.py", line 118, in __init__
  2015-02-24 22:20:22.100 | 2015-02-24 22:20:20.731 11399 TRACE nova.exception  
   message RACE nova.exception KeyError: u'instance_uuid'
  2015-02-24 22:20:22.106 | 2015-02-24 22:20:20.706 11393 TRACE nova.exception 
  2015-02-24 22:20:22.119 | 20xception [-] Exception in string format operation
  2015-02-24 22:20:22.126 | 2015-02-24 22:20:20.893 11403 TRACE nova.exc 
e_uuied'
  2015-02-24 22:20:22.142 | 2015p-02-24 2t2:20:20.i659 1138o15-02-24 n22:20:20 
.708 113T9 TRACE rnova.excaeption 
  2015-02-24 22:20:22.144 | c93 ERROkwargs
  2015-02-24 22:20:22.157 | e2015-02-bR nova.ea24 22:2ck (most recent call 
last):

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibm92YS5leGNlcHRpb24gS2V5RXJyb3JcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yN1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTUtMDItMTVUMDE6NDU6NDErMDA6MDAiLCJ0byI6IjIwMTUtMDItMjVUMDE6NDU6NDErMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQyNDgyODc5MzQzMn0=

  We could set fatal_exception_format_errors=True to make the tests
  causing this to fail and then clean them up.
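The failure mode can be sketched with a toy exception class mirroring nova's `msg_fmt % kwargs` pattern (names are illustrative, not nova's actual implementation):

```python
class ToyException(Exception):
    msg_fmt = "instance %(instance_uuid)s: operation failed"

    def __init__(self, fatal_format_errors=False, **kwargs):
        try:
            message = self.msg_fmt % kwargs
        except KeyError:
            # Missing kwarg: nova logs the error and falls back to the raw
            # format string unless fatal_exception_format_errors is set.
            if fatal_format_errors:
                raise
            message = self.msg_fmt
        super().__init__(message)

good = ToyException(instance_uuid="abc")
sloppy = ToyException(uuid="abc")  # wrong kwarg name: KeyError swallowed
```

Setting the fatal flag turns the silent fallback into a hard failure, which is what makes the offending tests visible.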

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425343/+subscriptions



[Yahoo-eng-team] [Bug 1427745] Re: Tests fail in non-en_US locale

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427745

Title:
  Tests fail in non-en_US locale

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When I run `tox -epy27`, generally I get 13 test failures. One of
  these is a proxy/WSGI interaction I don't care about. The remaining 12
  are due to tests which compare translated strings to untranslated
  message IDs. My locale is:

  $ locale
  LANG=en_GB.UTF-8
  LANGUAGE=en_GB:en
  LC_CTYPE="en_GB.UTF-8"
  LC_NUMERIC="en_GB.UTF-8"
  LC_TIME="en_GB.UTF-8"
  LC_COLLATE="en_GB.UTF-8"
  LC_MONETARY="en_GB.UTF-8"
  LC_MESSAGES="en_GB.UTF-8"
  LC_PAPER="en_GB.UTF-8"
  LC_NAME="en_GB.UTF-8"
  LC_ADDRESS="en_GB.UTF-8"
  LC_TELEPHONE="en_GB.UTF-8"
  LC_MEASUREMENT="en_GB.UTF-8"
  LC_IDENTIFICATION="en_GB.UTF-8"
  LC_ALL=

  See this patch for the tests which fail for me:
  I352cd37d79401866e3116bcf0a62031bfe1d5d93

  This patch removes the TranslationFixture which was supposed to
  prevent translations during tests but didn't:
  Idcc4409edae5ddfa0a1c2052a746d6412dda24ac

  Suggested fix: enforce an en_US locale or prevent translations from
  occurring during tests.
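One way to realize the suggested fix is a test fixture that pins the locale environment so gettext resolves to the untranslated message ids. A sketch (the exact variable list to pin is an assumption):

```python
UNTRANSLATED = "en_US.UTF-8"

def pin_locale(environ):
    """Return a copy of `environ` with locale variables forced to en_US,
    so translated message catalogs are never selected during a test run."""
    fixed = dict(environ)
    for var in ("LANG", "LANGUAGE", "LC_ALL", "LC_MESSAGES"):
        fixed[var] = UNTRANSLATED
    return fixed

env = pin_locale({"LANG": "en_GB.UTF-8", "LANGUAGE": "en_GB:en"})
```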

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427745/+subscriptions



[Yahoo-eng-team] [Bug 1424462] Re: Nova/Neutron v3 authentication

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424462

Title:
  Nova/Neutron v3 authentication

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  It is currently only possible for the service user that enables nova
  to talk to neutron to authenticate via the standard keystone v2 auth
  mechanisms. As we progress we should support v3 auth and any new
  formats that come along.

  Adopting keystoneclient's session work will give us forward
  compatibility with any authentication mechanisms they allow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424462/+subscriptions



[Yahoo-eng-team] [Bug 1425763] Re: test_flavor_manage.rand_flavor sometimes generates an invalid flavor

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425763

Title:
  test_flavor_manage.rand_flavor sometimes generates an invalid flavor

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The gate-nova-tox-functional job can fail occasionally [1] if
  rand_flavor happens to generate an invalid flavor, for example one
  with ram == 0.

  2015-02-25 23:10:18,113 INFO [nova.api.openstack.wsgi] HTTP exception thrown: 
Invalid input received: ram must be >= 1
  2015-02-25 23:10:18,115 INFO [nova.osapi_compute.wsgi.server] 127.0.0.1 "POST 
/v2/openstack/flavors HTTP/1.1" status: 400 len: 304 time: 0.4035170
  2015-02-25 23:10:18,117 INFO [nova.wsgi] Stopping WSGI server.
  }}}

  Traceback (most recent call last):
File "nova/tests/functional/wsgi/test_flavor_manage.py", line 110, in 
test_flavor_manage_func
  resp = self.api.api_post('flavors', flav1)
File "nova/tests/functional/api/client.py", line 168, in api_post
  response = self.api_request(relative_uri, **kwargs)
File "nova/tests/functional/api/client.py", line 143, in api_request
  response=response)
  OpenStackApiException: Unexpected status code

  [1] http://logs.openstack.org/18/154718/7/gate/gate-nova-tox-
  functional/82d4cb1
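The fix amounts to keeping the random generator inside the API's validity bounds; a sketch (helper name is hypothetical):

```python
import random

def rand_flavor_field(rng, upper):
    """Random flavor value that always satisfies the API's '>= 1' check."""
    return rng.randint(1, upper)  # lower bound is 1, so 0 can never occur

rng = random.Random(1234)
values = [rand_flavor_field(rng, 32768) for _ in range(1000)]
```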

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425763/+subscriptions



[Yahoo-eng-team] [Bug 1423952] Re: It is impossible to delete an instance that has failed due to neutron/nova notification problems

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423952

Title:
  It is impossible to delete an instance that has failed due to
  neutron/nova notification problems

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed

Bug description:
  If you attempt to boot a nova instance without Neutron properly
  configured for neutron/nova notifications, the instance will
  eventually fail to spawn:

[-] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Instance failed to 
spawn
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Traceback (most recent 
call last):
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in 
_build_resources
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] yield resources
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in 
_build_and_run_instance
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] 
block_device_info=block_device_info)
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in 
spawn
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] block_device_info, 
disk_info=disk_info)
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4439, in 
_create_domain_and_network
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] raise 
exception.VirtualInterfaceCreateException()
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] 
VirtualInterfaceCreateException: Virtual Interface creation failed

  If you try to delete this instance, the delete operation will fail. In
  the logs, you see:

AUDIT nova.compute.manager [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] 
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Terminating instance
WARNING nova.virt.libvirt.driver [-] [instance: 
1541a197-9f80-4ee5-a7d6-08e591aa83fd] During wait destroy, instance disappeared.
INFO nova.virt.libvirt.driver [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c 
None] [instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Deletion of 
/var/lib/nova/instances/1541a197-9f80-4ee5-a7d6-08e591aa83fd_del complete
INFO nova.compute.manager [req-a4b30d0b-e6d3-429f-8f7a-b7788b79c86c None] 
[instance: 1541a197-9f80-4ee5-a7d6-08e591aa83fd] Instance disappeared during 
terminate

  At this point, `nova list` will show:

| 1541a197-9f80-4ee5-a7d6-08e591aa83fd | test0| ERROR   |
  deleting   | NOSTATE |  |

  And it appears to be impossible to delete this instance.  Running
  "nova reset-state <instance>" has no effect (with or without
  --active), nor does correctly configuring neutron.

  The only way to get rid of this instance appears to be directly
  editing the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1423952/+subscriptions



[Yahoo-eng-team] [Bug 1424792] Re: DB API fixed_ip_associate() still uses SELECT FOR UPDATE

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424792

Title:
  DB API fixed_ip_associate() still uses SELECT FOR UPDATE

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova's DB API fixed_ip_associate() method is still using
  with_lockmode('update'). It should be updated to use a compare and
  swap technique with a retry loop.
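The compare-and-swap technique mentioned above can be sketched as a retry loop around a conditional UPDATE (the storage calls are stubbed out; names are illustrative, not nova's DB API):

```python
def fixed_ip_associate(find_free, try_claim, max_retries=5):
    """Claim a free IP with compare-and-swap instead of SELECT FOR UPDATE.

    `try_claim(ip)` models an UPDATE guarded by a WHERE clause such as
    "instance_uuid IS NULL"; it returns False when another worker won the
    race, in which case we simply pick a candidate again.
    """
    for _ in range(max_retries):
        ip = find_free()
        if try_claim(ip):
            return ip
    raise RuntimeError("could not associate a fixed IP after retries")

# Simulate losing the race once, then winning on the second attempt:
attempts = iter([False, True])
ip = fixed_ip_associate(lambda: "10.0.0.5", lambda _ip: next(attempts))
```

Unlike row locking, a failed claim costs only one extra round trip and never blocks other writers.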

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424792/+subscriptions



[Yahoo-eng-team] [Bug 1422385] Re: libvirt unit tests fail with older libvirt (or no libvirt installed)

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422385

Title:
  libvirt unit tests fail with older libvirt (or no libvirt installed)

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova.tests.unit.virt.libvirt.test_host.HostTestCase.test_find_secret
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gerrit-nova-es-py27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/virt/libvirt/test_host.py", line 668, in 
test_find_secret
  mock.call(libvirt.VIR_SECRET_USAGE_TYPE_ISCSI, 'iscsivol'),
  AttributeError: 'module' object has no attribute 
'VIR_SECRET_USAGE_TYPE_ISCSI'
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout
  
  

  
nova.tests.unit.virt.libvirt.test_host.HostTestCase.test_list_instance_domains_fallback
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gerrit-nova-es-py27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/virt/libvirt/test_host.py", line 516, in 
test_list_instance_domains_fallback
  libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
  AttributeError: 'module' object has no attribute 
'VIR_CONNECT_LIST_DOMAINS_ACTIVE'
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout


  

  Those constants should be in fakelibvirt now that libvirt-python is no
  longer in test-requirements.txt.
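Moving the constants into fakelibvirt can be sketched by defining them on the fake module itself; the numeric values below mirror libvirt's C headers but should be treated as assumptions:

```python
import types

fakelibvirt = types.ModuleType("fakelibvirt")
# Constants the tests reference; values mirror libvirt's headers
# (treated here as assumptions rather than authoritative).
fakelibvirt.VIR_SECRET_USAGE_TYPE_ISCSI = 3
fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE = 1

# Tests can then run without libvirt-python installed:
usage_type = fakelibvirt.VIR_SECRET_USAGE_TYPE_ISCSI
```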

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422385/+subscriptions



[Yahoo-eng-team] [Bug 1422901] Re: test_delete_server_while_in_verify_resize_state fails with "ValueError: You must specify a valid interface name."

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422901

Title:
  test_delete_server_while_in_verify_resize_state fails with
  "ValueError: You must specify a valid interface name."

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/24/156324/3/gate/gate-tempest-dsvm-neutron-
  full/21b554f/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-02-17_11_45_29_763

  2015-02-17 11:45:29.763 ERROR nova.compute.manager 
[req-02181adb-1b8b-48b1-ad72-ef5a650789e1 DeleteServersTestJSON-42620796 
DeleteServersTestJSON-345447266] [instance: 
96993b52-7955-4c64-95e2-b332471e1915] Setting instance vm_state to ERROR
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] Traceback (most recent call last):
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6188, in 
_error_out_instance_on_exception
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] yield
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3850, in resize_instance
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] timeout, retry_interval)
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5939, in 
migrate_disk_and_power_off
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] shared_storage = 
self._is_storage_shared_with(dest, inst_base)
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5891, in 
_is_storage_shared_with
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] shared_storage = (dest == 
self.get_host_ip_addr())
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2460, in 
get_host_ip_addr
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] ips = compute_utils.get_machine_ips()
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915]   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 470, in get_machine_ips
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] iface_data = 
netifaces.ifaddresses(interface)
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] ValueError: You must specify a valid 
interface name.
  2015-02-17 11:45:29.763 4456 TRACE nova.compute.manager [instance: 
96993b52-7955-4c64-95e2-b332471e1915] 

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVmFsdWVFcnJvcjogWW91IG11c3Qgc3BlY2lmeSBhIHZhbGlkIGludGVyZmFjZSBuYW1lLlwiIEFORCBtZXNzYWdlOlwiZ2V0X21hY2hpbmVfaXBzXCIgQU5EIG1lc3NhZ2U6XCJtaWdyYXRlX2Rpc2tfYW5kX3Bvd2VyX29mZlwiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTUtMDItMDNUMjA6NDU6NDcrMDA6MDAiLCJ0byI6IjIwMTUtMDItMTdUMjA6NDU6NDcrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQyNDIwNjAzNzg2OX0=

  48 hits in the last 10 days, started on 2/12 so we merged something
  bad (or neutron did), check and gate, all failures.
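A defensive sketch of get_machine_ips that tolerates an interface vanishing between enumeration and query (the lookups are injected so the logic is testable without netifaces):

```python
AF_INET = 2  # socket.AF_INET on Linux; hard-coded here for the sketch

def get_machine_ips(list_interfaces, ifaddresses):
    """Collect IPv4 addresses, skipping interfaces that disappear
    (netifaces raises ValueError for an invalid interface name)."""
    ips = []
    for iface in list_interfaces():
        try:
            addresses = ifaddresses(iface)
        except ValueError:
            continue  # interface went away; don't abort the whole scan
        for entry in addresses.get(AF_INET, []):
            ips.append(entry["addr"])
    return ips

def fake_ifaddresses(iface):
    if iface == "ghost0":
        raise ValueError("You must specify a valid interface name.")
    return {AF_INET: [{"addr": "192.0.2.10"}]}

ips = get_machine_ips(lambda: ["eth0", "ghost0"], fake_ifaddresses)
```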

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422901/+subscriptions



[Yahoo-eng-team] [Bug 1422239] Re: vmware: It can not be select hard or soft reboot

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422239

Title:
  vmware: It can not be select hard or soft reboot

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Our users want to choose between a hard and a soft reboot in a VMware
  environment, but the VMware VCDriver does not pass the reboot type as a
  parameter, so there is no way to select which action is performed.
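A hedged sketch of plumbing the reboot type through the driver; the vSphere operation names are assumptions based on the public vSphere API, not the actual VCDriver code:

```python
def reboot(instance, issue_call, reboot_type="SOFT"):
    """Dispatch soft vs hard reboot instead of ignoring reboot_type.

    `issue_call` stands in for the vSphere session; "RebootGuest" and
    "ResetVM_Task" are the public vSphere operations these would map to
    (an assumption for this sketch).
    """
    if reboot_type == "SOFT":
        return issue_call("RebootGuest", instance)
    return issue_call("ResetVM_Task", instance)

calls = []
reboot("vm-1", lambda op, vm: calls.append((op, vm)) or op)
reboot("vm-1", lambda op, vm: calls.append((op, vm)) or op, reboot_type="HARD")
```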

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422239/+subscriptions



[Yahoo-eng-team] [Bug 1420971] Re: iscsi_transport parameter should be called iscsi_iface

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420971

Title:
  iscsi_transport parameter should be called iscsi_iface

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The change https://review.openstack.org/#/c/146233/ added a libvirt
  parameter called iscsi_transport for open-iscsi transport support.

  We actually need the transport_iface name, not the transport name, as the
  parameter (iscsi_tcp and iser are exceptions, where transport and
  transport_iface are one and the same), so the current name is misleading.
  Passing a transport name instead of a transport_iface name will also cause
  the login to fail; it would be better to fall back to iscsi_tcp (also
  called the default param) when the parameter is incorrect, i.e. when the
  corresponding transport_iface file is non-existent.
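The suggested fallback can be sketched as a small resolver that checks for the iface file before using the requested name (the directory layout and the "default" name are assumptions):

```python
import os
import tempfile

def resolve_iscsi_iface(requested, iface_dir):
    """Use the requested transport_iface only if its iface file exists;
    otherwise fall back to the 'default' (iscsi_tcp) interface."""
    if requested == "default":
        return requested
    if os.path.exists(os.path.join(iface_dir, requested)):
        return requested
    return "default"

# Demonstrate with a throwaway iface directory:
iface_dir = tempfile.mkdtemp()
open(os.path.join(iface_dir, "iser"), "w").close()
chosen = resolve_iscsi_iface("iser", iface_dir)
fallback = resolve_iscsi_iface("bnx2i.00:11:22:33:44:55", iface_dir)
```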

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420971/+subscriptions



[Yahoo-eng-team] [Bug 1420322] Re: gate-devstack-dsvm-cells fails in volumes exercise with "Server ex-vol-inst not deleted"

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1420322

Title:
  gate-devstack-dsvm-cells fails in volumes exercise with "Server ex-
  vol-inst not deleted"

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/02/153902/2/gate/gate-devstack-dsvm-
  cells/14ce82b/console.html#_2015-02-10_04_21_33_685

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZGV2c3RhY2svZXhlcmNpc2VzL3ZvbHVtZXMuc2hcIiBBTkQgbWVzc2FnZTpcIlNlcnZlciBleC12b2wtaW5zdCBub3QgZGVsZXRlZFwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIzNTc5NDY1MDQ0fQ==

  6 hits in 7 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1420322/+subscriptions



[Yahoo-eng-team] [Bug 1414530] Re: cwd might be set incorrectly when exceptions are thrown

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414530

Title:
  cwd might be set incorrectly when exceptions are thrown

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo Concurrency Library:
  In Progress

Bug description:
  CWD might be set incorrectly when exceptions are thrown

  The call to utils.execute ends up in /opt/stack/nova/nova/utils.py, which
  ultimately calls processutils.execute() in the oslo_concurrency module.
  If there is an error when executing the command that invokes a bash
  script, a ProcessExecutionError exception is raised at #1. This means the
  code at #2 is never reached: the exception propagates up the call stack,
  but the process is left with the wrong working directory, which can lead
  to problems. One should catch the exception and make sure that in all
  cases the working directory is reset to the original one.

  /opt/stack/nova/nova/crypto.py

  def ensure_ca_filesystem():
  """Ensure the CA filesystem exists."""
  ca_dir = ca_folder()
  if not os.path.exists(ca_path()):
  genrootca_sh_path = os.path.abspath(
  os.path.join(os.path.dirname(__file__), 'CA',
  'genrootca.sh'))

  start = os.getcwd()
  fileutils.ensure_tree(ca_dir)
  os.chdir(ca_dir)
  utils.execute("sh", genrootca_sh_path) <--- #1
  os.chdir(start)<--- #2

  
  One can see in
  
https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/processutils.py
  that this Exception can indeed be thrown.

  Analogously there's a similar issue also in the aforementioned file in
  _ensure_project_folder.

  def _ensure_project_folder(project_id):
  if not os.path.exists(ca_path(project_id)):
  geninter_sh_path = os.path.abspath(
  os.path.join(os.path.dirname(__file__), 'CA',
  'geninter.sh'))
  start = os.getcwd()
  os.chdir(ca_folder())
  utils.execute('sh', geninter_sh_path, project_id,
_project_cert_subject(project_id))
  os.chdir(start)

  
  I'm not sure whether this has a potential security vulnerability impact or 
not. The potential risk is definitely there but it remains to be seen whether 
an attacker can actually reliably trigger this and then possibly gain something 
else by having a different working directory. That's why I didn't tag it as a 
security bug.
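The straightforward fix is to guard the chdir with try/finally; a context-manager sketch (helper name is illustrative):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def preserved_cwd(path):
    """chdir into `path`, restoring the original cwd even on exceptions,
    so a failing utils.execute() cannot leave the process elsewhere."""
    start = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(start)

start = os.getcwd()
try:
    with preserved_cwd(tempfile.gettempdir()):
        raise RuntimeError("simulated ProcessExecutionError")
except RuntimeError:
    pass
restored = os.getcwd()
```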

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414530/+subscriptions



[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt to the new paths in neutron. Specifically, for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization,
  oslo.utils.
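The mechanical nature of the migration can be illustrated with a string rewrite (the actual change is done per import in each source file, not programmatically):

```python
# Old namespace-package imports vs. the new per-library packages:
#   from oslo.config import cfg      ->  from oslo_config import cfg
#   from oslo.utils import timeutils ->  from oslo_utils import timeutils
def modernize_oslo_import(line):
    """Rewrite 'oslo.<lib>' to 'oslo_<lib>' in a single import line."""
    return line.replace("from oslo.", "from oslo_")

new = modernize_oslo_import("from oslo.config import cfg")
```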

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1409733/+subscriptions



[Yahoo-eng-team] [Bug 1409142] Re: [OSSA 2015-005] Websocket Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259)

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1409142

Title:
  [OSSA 2015-005] Websocket Hijacking Vulnerability in Nova VNC Server
  (CVE-2015-0259)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to
  the bug as attachments.

  OpenStack Vulnerability Team:

  Brian Manifold (bmani...@cisco.com) from Cisco has discovered a
  vulnerability in the Nova VNC server implementation. We have a patch for
  this vulnerability and consider this a very high risk.

  Please email Dave McCowan (dmcco...@cisco.com) for more details on the
  attached patch.

  Issue Details:

  Horizon uses a VNC client which uses websockets to pass information.  The
  Nova VNC server does not validate the origin of the websocket request,
  which allows an attacker to make a websocket request from another domain.
  If the victim opens both an attacker's site and the VNC console
  simultaneously, or if the victim has recently been using the VNC console
  and then visits the attacker's site, the attacker can make a websocket
  request to the Horizon domain and proxy the connection to another
  destination.

  This gives the attacker full read-write access to the VNC console of any
  instance recently accessed by the victim.

  Recommendation:
   Verify the Origin field in the request header on all websocket requests.

  Threat:
    CWE-345
   * Insufficient Verification of Data Authenticity -- The software does not
  sufficiently verify the origin or authenticity of data, in a way that
  causes it to accept invalid data.

    CWE-346
   * Origin Validation Error -- The software does not properly verify that
  the source of data or communication is valid.

    CWE-441
   * Unintended Proxy or Intermediary ('Confused Deputy') -- The software
  receives a request, message, or directive from an upstream component, but
  the software does not sufficiently preserve the original source of the
  request before forwarding the request to an external actor that is outside
  of the software's control sphere. This causes the software to appear to be
  the source of the request, leading it to act as a proxy or other
  intermediary between the upstream component and the external actor.

  Steps to reproduce:
   1. Login to horizon
   2. Pick an instance, go to console/vnc tab, wait for console to be loaded
   3. In another browser tab or window, load a VNC console script from local
  disk or remote site
   4. Point the newly loaded VNC console to the VNC server and a connection
  is made
  Result:
   The original connection has been hijacked by the second connection

  Root cause:
   Cross-Site WebSocket Hijacking is a concept that has been written about in
  various security blogs.
  One of the recommended countermeasures is to check the Origin header of
  the WebSocket handshake request.
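  The recommended countermeasure can be sketched as follows. This is an
  illustrative example only, not Nova's actual fix; the function name and
  header handling are assumptions:

  ```python
  # Sketch: validate the Origin header of a WebSocket handshake before
  # proxying the connection, as the advisory recommends. origin_allowed
  # and allowed_hosts are illustrative names, not Nova's implementation.
  from urllib.parse import urlparse

  def origin_allowed(headers, allowed_hosts):
      """Return True only when the handshake Origin matches a trusted host."""
      origin = headers.get("Origin", "")
      if not origin:
          # Browsers always send Origin; a missing header is untrusted here.
          return False
      parsed = urlparse(origin)
      return parsed.scheme in ("http", "https") and parsed.hostname in allowed_hosts

  # Only connections originating from the Horizon host are accepted.
  print(origin_allowed({"Origin": "https://horizon.example.com"},
                       {"horizon.example.com"}))   # True
  print(origin_allowed({"Origin": "https://evil.example.net"},
                       {"horizon.example.com"}))   # False
  ```

  Rejecting handshakes whose Origin differs from the Horizon domain defeats
  the cross-site request described in the reproduction steps above.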

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1409142/+subscriptions



[Yahoo-eng-team] [Bug 1407050] Re: VMware: resize fails when using config drive

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407050

Title:
  VMware: resize fails when using config drive

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When config drive is configured. A resize may fail when the selected
  host does not have access to the original datastore of the host that
  the instance was running on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407050/+subscriptions



[Yahoo-eng-team] [Bug 1414065] Re: Nova can lose track of running VM if live migration raises an exception

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414065

Title:
  Nova can lose track of running VM if live migration raises an
  exception

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There is a fairly serious bug in VM state handling during live
  migration, with a result that if libvirt raises an error *after* the
  VM has successfully live migrated to the target host, Nova can end up
  thinking the VM is shutoff everywhere, despite it still being active.
  The consequences of this are quite dire as the user can then manually
  start the VM again and corrupt any data in shared volumes and the
  like.

  The fun starts in the _live_migration method in
  nova.virt.libvirt.driver, if the 'migrateToURI2' method fails *after*
  the guest has completed migration.

  At start of migration, we see an event received by Nova for the new
  QEMU process starting on target host

  2015-01-23 15:39:57.743 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Started"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  Upon migration completion we see CPUs start running on the target host

  2015-01-23 15:40:02.794 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Resumed"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 1 from (pid=19494) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  And finally an event saying that the QEMU on the source host has
  stopped

  2015-01-23 15:40:03.629 DEBUG nova.compute.manager [-] [instance:
  12bac45e-aca8-40d1-8f39-941bc6bb59f0] Synchronizing instance power
  state after lifecycle event "Stopped"; current vm_state: active,
  current task_state: migrating, current DB power_state: 1, VM
  power_state: 4 from (pid=23081) handle_lifecycle_event
  /home/berrange/src/cloud/nova/nova/compute/manager.py:1134

  
  It is the last event that causes the trouble. It causes Nova to mark
  the VM as shutoff at this point.

  Normally the '_live_migration' method would succeed and so Nova would
  then immediately & explicitly mark the guest as running on the target
  host. If an exception occurs, though, this explicit update of VM
  state doesn't happen, so Nova considers the guest shutoff even though
  it is still running :-(

  
  The lifecycle events from libvirt have an associated "reason", so we
  could see that the shutoff event from libvirt corresponds to a
  migration being completed, and so not mark the VM as shutoff in Nova.
  We would also have to make sure the target host processes the 'resume'
  event upon migration completion.

  A safer approach, though, might be to just mark the VM as being in an
  ERROR state if any exception occurs during migration.
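  The reason-based mitigation described above can be sketched like this.
  Event and reason names are illustrative stand-ins, not libvirt's exact
  constants, and this is not the actual Nova patch:

  ```python
  # Sketch: when the source host sees a "Stopped" lifecycle event whose
  # reason indicates a completed migration, keep the current power state
  # instead of marking the guest shutoff. Names here are illustrative.
  EVENT_STOPPED = "Stopped"
  REASON_MIGRATED = "migrated"

  def power_state_for_event(event, reason, current_state):
      if event == EVENT_STOPPED and reason == REASON_MIGRATED:
          # The guest is still running on the target host.
          return current_state
      if event == EVENT_STOPPED:
          return "SHUTDOWN"
      return "RUNNING"

  print(power_state_for_event("Stopped", "migrated", "RUNNING"))  # RUNNING
  print(power_state_for_event("Stopped", "shutdown", "RUNNING"))  # SHUTDOWN
  ```

  With such a guard, the final source-host "Stopped" event in the log
  excerpt above would no longer flip the VM to shutoff mid-migration.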

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414065/+subscriptions



[Yahoo-eng-team] [Bug 1404791] Re: can not delete an instance if the instance's rescue volume is not found

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404791

Title:
  can not delete an instance if the instance's rescue volume is not
  found

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  An instance cannot be deleted if its rescue LVM volume cannot be
  found.

  how to reproduce:

  1.  configure images_type lvm for the libvirt driver.
  [libvirt]
  images_type = lvm
  images_volume_group = stack-volumes-lvmdriver-1 <-- lvm used

  2. rescue the instance, will generate uuid.rescue lvm
  3. unrescue the instance, the rescue lvm can not be deleted due to bug   
https://bugs.launchpad.net/nova/+bug/1385480
  4. delete the uuid.rescue lvm manually.
  5. delete the instance(failed and set to error state)

  below is the call trace of nova-compute driver.

  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
self._cleanup_lvm(instance)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 944, in _cleanup_lvm
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
lvm.remove_volumes(disks)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 272, in remove_volumes
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
clear_volume(path)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 250, in clear_volume
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher volume_size = 
get_volume_size(path)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 66, in decorated_function
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
function(path)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/lvm.py", line 197, in get_volume_size
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
run_as_root=True)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/virt/libvirt/utils.py", line 53, in execute
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
utils.execute(*args, **kwargs)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/utils.py", line 164, in execute
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher return 
processutils.execute(*cmd, **kwargs)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 224, 
in execute
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
cmd=sanitized_cmd)
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher 
ProcessExecutionError: Unexpected error while running command.
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 
/dev/stack-volumes-lvmdriver-1/b09687ee-f525-4edc-aaf4-1272562d46fd_disk.rescue
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Exit code: 1
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Stdout: u''
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher Stderr: 
u'blockdev: cannot open 
/dev/stack-volumes-lvmdriver-1/b09687ee-f525-4edc-aaf4-1272562d46fd_disk.rescue:
 No such file or directory\n'
  2014-12-22 13:18:29.691 TRACE oslo.messaging.rpc.dispatcher
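  One way to make the cleanup tolerant of an already-removed volume is
  sketched below. clear_volume and remove_volume are stand-ins for the
  real helpers in nova/virt/libvirt/lvm.py; this is an illustration of
  the approach, not the actual fix:

  ```python
  # Sketch: skip logical volumes that no longer exist so that instance
  # deletion does not fail on a missing *.rescue volume.
  import os

  def clear_volume(path):
      # Stand-in for the real scrubbing helper in nova/virt/libvirt/lvm.py.
      pass

  def remove_volume(path):
      # Stand-in for the real lvremove-based helper.
      pass

  def remove_volumes(paths):
      errors = []
      for path in paths:
          if not os.path.exists(path):
              # Volume already gone (e.g. deleted manually): skip it
              # instead of failing the whole instance deletion.
              continue
          try:
              clear_volume(path)
              remove_volume(path)
          except OSError as exc:
              errors.append(str(exc))
      if errors:
          raise RuntimeError("; ".join(errors))

  # A path that no longer exists is skipped rather than raising.
  remove_volumes(["/dev/stack-volumes/does-not-exist_disk.rescue"])
  print("cleanup ok")
  ```

  With this guard, step 5 above would succeed even after the rescue
  volume was removed out of band.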

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404791/+subscriptions



[Yahoo-eng-team] [Bug 1402784] Re: VMware: resized instance not marked as 'belonging' to OpenStack

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402784

Title:
  VMware: resized instance not marked as 'belonging' to OpenStack

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Commit f4fec08e9850dae163f447e72cd1c7f638b2bb10 added support for
  marking instances as 'belonging' to OpenStack, but the resize
  operation was not handled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402784/+subscriptions



[Yahoo-eng-team] [Bug 1400069] Re: Hyper-V configdrive is missing the static ip configuration

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400069

Title:
  Hyper-V configdrive is missing the static ip configuration

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The Hyper-V driver is not properly handling static IP configuration
  injection when flat_injected is true and networks don't have DHCP
  enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400069/+subscriptions



[Yahoo-eng-team] [Bug 1418298] Re: After service deleted, the corresponding compute-node can't restart again

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1418298

Title:
  After service deleted, the corresponding compute-node can't restart
  again

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  After removing a stopped nova-compute service, I can't restart it
  again. The error is shown below:

  
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 145, in wait
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup x.wait()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 47, in wait
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, in 
wait
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in 
main
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 492, in run_service
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
service.start()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/service.py", line 181, in start
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/manager.py", line 1181, in pre_start_hook
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/manager.py", line 6058, in 
update_available_resource
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 342, in 
update_available_resource
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
self._update_available_resource(context, resources)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
431, in inner
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
f(*args, **kwargs)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 396, in 
_update_available_resource
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
self._sync_compute_node(context, resources)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 417, in 
_sync_compute_node
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup 
self._create(context, resources)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 466, in _create
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup values)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/conductor/api.py", line 170, in compute_node_create
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
self._manager.compute_node_create(context, values)
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 271, in compute_node_create
  2015-02-05 11:09:03.302 TRACE nova.openstack.common.threadgroup return 
cctxt.call(context, 'compu

[Yahoo-eng-team] [Bug 1398349] Re: nova service-update fails for services on non-child (top) cell

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398349

Title:
  nova service-update fails for services on non-child (top) cell

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova service-update fails for services on non-child (top) cell.

  How to reproduce:
  1) List available services using below command.
  $ nova --os-username admin service-list

  Output:
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | Id             | Binary           | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:17:36.00 | -               |
  | region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:17:29.00 | -               |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:17:30.00 | -               |
  | region!child@5 | nova-compute     | region!child@ubuntu | nova     | enabled | up    | 2014-08-18T06:17:31.00 | -               |
  | region@1       | nova-cells       | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:17:29.00 | -               |
  | region@2       | nova-cert        | region@ubuntu       | internal | enabled | down  | 2014-08-18T06:08:28.00 | -               |
  | region@3       | nova-consoleauth | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:17:37.00 | -               |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+

  2) disable one of the services on the top cell (e.g. nova-cert)
  $ nova --os-username admin service-disable 'region@ubuntu' nova-cert

  The above command gives the following error:
  a) On user console:
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-529f926f-fbda-4748-afb7-dfe8c7cc7877)

  b) In nova-api logs, it shows following error message:
  2014-12-01 00:50:08.459 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-12-01 00:50:08.459 TRACE nova.api.openstack incoming.message))
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-12-01 00:50:08.459 TRACE nova.api.openstack return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-01 00:50:08.459 TRACE nova.api.openstack
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-12-01 00:50:08.459 TRACE nova.api.openstack result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-12-01 00:50:08.459 TRACE nova.api.openstack
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/cells/manager.py", line 296, in service_update
  2014-12-01 00:50:08.459 TRACE nova.api.openstack service = 
response.value_or_raise()
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/cells/messaging.py", line 407, in process
  2014-12-01 00:50:08.459 TRACE nova.api.openstack next_hop = 
self._get_next_hop()
  2014-12-01 00:50:08.459 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/cells/messaging.py", line 362, in _get_next_hop
  2014-12-01 00:50:08.459 TRACE nova.api.openstack dest_hops = 
target_cell.count(_PATH_CELL_SEP)
  2014-12-01 00:50:08.459 TRACE nova.api.openstack
  2014-12-01 00:50:08.459 TRACE nova.api.openstack AttributeError: 'NoneType' 
object has no attribute 'count'
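  The traceback shows _get_next_hop calling .count() on a target_cell
  that is None, because services in the top cell have no routing path.
  A guard along these lines (illustrative only, not the actual Nova
  patch) avoids the crash:

  ```python
  # Sketch: treat a None target_cell as "handle locally" instead of
  # computing a hop count on it. Function and return values are
  # illustrative stand-ins for nova/cells/messaging.py.
  _PATH_CELL_SEP = "!"

  def get_next_hop(target_cell, our_path):
      if target_cell is None:
          # Message is addressed to the local (top) cell: no routing
          # path exists, so process it here.
          return "local"
      if target_cell == our_path:
          return "local"
      dest_hops = target_cell.count(_PATH_CELL_SEP)
      return "hop:%d" % dest_hops

  print(get_next_hop(None, "region"))            # local
  print(get_next_hop("region!child", "region"))  # hop:1
  ```

  With the None case handled, service-update for top-cell services like
  nova-cert no longer 500s.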

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398349/+subscriptions



[Yahoo-eng-team] [Bug 1417340] Re: Wrong hypervisor statistics reported

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417340

Title:
  Wrong hypervisor statistics reported

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The hypervisor statistics reported through nova hypervisor-stats are
  wrong if there are other nova services running on the compute node;
  output below:

  $ nova hypervisor-stats
  +--+---+
  | Property | Value |
  +--+---+
  | count| 5 |
  | current_workload | 0 |
  | disk_available_least | 395   |
  | free_disk_gb | 455   |
  | free_ram_mb  | 34160 |
  | local_gb | 455   |
  | local_gb_used| 0 |
  | memory_mb| 39920 |
  | memory_mb_used   | 5760  |
  | running_vms  | 15|
  | vcpus| 20|
  | vcpus_used   | 15|
  +--+---+

  But I only have 3 instances launched and 1 hypervisor with 4 vCPUs;
  it seems like the result is multiplied by 5.
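  The factor-of-5 inflation is consistent with the stats query producing
  one compute-node row per service on the host rather than per node.
  This is a hypothesis illustrated with made-up data, not a confirmed
  root cause:

  ```python
  # Sketch: aggregating over one (node, service) row per service on the
  # host multiplies every sum by the service count. Data is illustrative.
  node = {"host": "holly", "vcpus": 4, "running_vms": 3}
  services = ["nova-compute", "nova-conductor", "nova-scheduler",
              "nova-cert", "nova-consoleauth"]  # 5 services on the node

  # Buggy aggregation: the node row is repeated once per service.
  rows = [node for _ in services]
  print(sum(r["vcpus"] for r in rows))        # 20 instead of 4
  print(sum(r["running_vms"] for r in rows))  # 15 instead of 3
  ```

  These inflated totals match the vcpus=20, running_vms=15, count=5
  figures in the report above.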

  $ nova hypervisor-list
  ++-+
  | ID | Hypervisor hostname |
  ++-+
  | 1  | holly   |
  ++-+

  $ nova hypervisor-show 1
  +---------------------------+--------------------------------------------------+
  | Property                  | Value                                            |
  +---------------------------+--------------------------------------------------+
  | cpu_info_arch             | x86_64                                           |
  | cpu_info_features         | ["pge", "clflush", "sep", "syscall", "vme",      |
  |                           | "dtes64", "tsc", "xsave", "vmx", "xtpr", "cmov", |
  |                           | "ssse3", "est", "pat", "monitor", "lm", "msr",   |
  |                           | "nx", "fxsr", "tm", "sse4.1", "pae", "acpi",     |
  |                           | "de", "mmx", "osxsave", "cx8", "mce", "mtrr",    |
  |                           | "ht", "pse", "lahf_lm", "pdcm", "mca", "apic",   |
  |                           | "sse", "ds", "pni", "tm2", "sse2", "ss", "pbe",  |
  |                           | "fpu", "cx16", "pse36", "ds_cpl"]                |
  | cpu_info_model            | Penryn                                           |
  | cpu_info_topology_cores   | 4                                                |
  | cpu_info_topology_sockets | 1                                                |
  | cpu_info_topology_threads | 1                                                |

[Yahoo-eng-team] [Bug 1418155] Re: nova will try to create unlimited instances concurrently and timeout when resources are depleted

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1418155

Title:
  nova will try to create unlimited instances concurrently and timeout
  when resources are depleted

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Running with --num-instances=16, I saw a couple of instances go into
  ERROR state. On the hypervisor side, I saw the following issue:

  2015-02-04 09:03:02.840 5077 ERROR nova.compute.manager [-] [instance: 
e277cf66-167f-4e81-a141-8dec12290015] Instance failed to spawn
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] Traceback (most recent call last):
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in 
_build_resources
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] yield resources
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in 
_build_and_run_instance
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] block_device_info=block_device_info)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in 
spawn
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] block_device_info, 
disk_info=disk_info)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4508, in 
_create_domain_and_network
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] power_on=power_on)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4432, in 
_create_domain
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] LOG.error(err)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, 
in __exit__
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] six.reraise(self.type_, self.value, 
self.tb)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4423, in 
_create_domain
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] domain.createWithFlags(launch_flags)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] rv = execute(f, *args, **kwargs)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] six.reraise(c, e, tb)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] rv = meth(*args, **kwargs)
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015]   File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 993, in createWithFlags
  2015-02-04 09:03:02.840 5077 TRACE nova.compute.manager [instance: 
e277cf66-167f-4e81-a141-8dec12290015] if ret == -1: raise libvirtErro
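  A semaphore around the build path is one way to cap concurrency so a
  burst of requests queues instead of exhausting hypervisor resources.
  This sketch only illustrates the approach; the limit name and wiring
  are assumptions, not Nova's actual implementation:

  ```python
  # Sketch: bound the number of instance builds running at once with a
  # semaphore; excess build requests block until a slot frees up.
  import threading

  MAX_CONCURRENT_BUILDS = 10  # illustrative limit, not a real config value
  _build_semaphore = threading.BoundedSemaphore(MAX_CONCURRENT_BUILDS)

  def build_and_run_instance(instance_id):
      with _build_semaphore:
          # At most MAX_CONCURRENT_BUILDS bodies execute here at once.
          return "built %s" % instance_id

  print(build_and_run_instance("e277cf66"))  # built e277cf66
  ```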

[Yahoo-eng-team] [Bug 1407438] Re: VMware: resize may select wrong datastore

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407438

Title:
  VMware: resize may select wrong datastore

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When resizing a root disk, the first datastore is selected. This may
  not be the datastore that holds the root disk. We need the root
  disk's datastore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407438/+subscriptions



[Yahoo-eng-team] [Bug 1404268] Re: Missing nova context during spawn

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404268

Title:
  Missing nova context during spawn

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The nova request context tracks a security context and other request
  information, including a request id that is added to log entries
  associated with this request.  The request context is passed around
  explicitly in many chunks of OpenStack code.  But nova/context.py also
  stores the RequestContext in the thread's local store (when the
  RequestContext is created, or when it is explicitly stored through a
  call to update_store).  The nova logger will use an explicitly passed
  context, or look for it in the local.store.

  A recent change in community openstack code has resulted in the
  context not being set for many nova log messages during spawn:

  https://bugs.launchpad.net/neutron/+bug/1372049

  This change spawns a new thread in nova/compute/manager.py
  build_and_run_instance, and the spawn runs in that new thread.  When
  the original RPC thread created the nova RequestContext, the context
  was set in the thread's local store.  But the context does not get set
  in the newly-spawned thread.

  Example of log messages with missing req id during spawn:

  2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "87c7fc32-042e-40b7-af46-44bff50fa1b4" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "_locked_do_build_and_run_instance" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.012 18219 AUDIT nova.compute.manager [req-bd959d69-86de-4eea-ae1d-a066843ca317 None] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Starting instance...
  ...
  2014-12-13 22:20:31.280 18219 DEBUG nova.openstack.common.lockutils [-] Created new semaphore "compute_resources" internal_lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
  2014-12-13 22:20:31.281 18219 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "compute_resources" lock /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:31.282 18219 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "instance_claim" inner /usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.284 18219 DEBUG nova.compute.resource_tracker [-] Memory overhead for 512 MB instance; 0 MB instance_claim /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:127
  2014-12-13 22:20:31.290 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Attempting claim: memory 512 MB, disk 10 GB
  2014-12-13 22:20:31.292 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total memory: 131072 MB, used: 12288.00 MB
  2014-12-13 22:20:31.296 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] memory limit not specified, defaulting to unlimited
  2014-12-13 22:20:31.300 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total disk: 2097152 GB, used: 60.00 GB
  2014-12-13 22:20:31.304 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] disk limit not specified, defaulting to unlimited
  ...

  2014-12-13 22:20:32.850 18219 DEBUG nova.network.neutronv2.api [-]
  [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4]
  get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.6/site-
  packages/nova/network/neutronv2/api.py:611

  Proposed patch:

  one new line of code at the beginning of nova/compute/manager.py
  _do_build_and_run_instance:

  context.update_store()
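The one-line fix works because nova keeps the request context in a thread-local store that spawned threads do not inherit. A minimal, self-contained sketch (simplified stand-ins, not nova's actual classes) of the failure and the fix:

```python
import threading

_local = threading.local()  # stand-in for nova's per-thread context store

class RequestContext:
    """Simplified stand-in for nova.context.RequestContext."""
    def __init__(self, request_id):
        self.request_id = request_id
        self.update_store()  # nova stores the context when it is created

    def update_store(self):
        _local.context = self

def current_request_id():
    # What the logger effectively does when no context is passed explicitly.
    ctx = getattr(_local, "context", None)
    return ctx.request_id if ctx else "-"

ctx = RequestContext("req-123")
assert current_request_id() == "req-123"  # visible in the creating thread

seen = []
def build_and_run(context):
    seen.append(current_request_id())  # "-": thread-locals don't propagate
    context.update_store()             # the proposed one-line fix
    seen.append(current_request_id())  # the request id is visible again

t = threading.Thread(target=build_and_run, args=(ctx,))
t.start()
t.join()
print(seen)
```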

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417671] Re: when using dedicated cpus, the emulator thread should be affined as well

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417671

Title:
  when using dedicated cpus, the emulator thread should be affined as
  well

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I'm running nova trunk, commit 752954a.

  I configured a flavor with two vcpus and extra specs
  "hw:cpu_policy=dedicated" in order to enable vcpu pinning.

  I booted up an instance with this flavor, and "virsh dumpxml" shows
  that the two vcpus were affined suitably to host cpus, but the
  emulator thread was left to float across the available host cores on
  that numa node.


  [libvirt <cputune> XML elided by the archive: shares value 2048 and the per-vCPU vcpupin elements; the markup was stripped in extraction]
  Looking at the kvm process shortly after creation, we see quite a few
  emulator threads running with the emulatorpin affinity:

  compute-2:~$ taskset -apc 136143
  pid 136143's current affinity list: 3-11
  pid 136144's current affinity list: 0,3-24,27-47
  pid 136146's current affinity list: 4
  pid 136147's current affinity list: 5
  pid 136149's current affinity list: 0
  pid 136433's current affinity list: 3-11
  pid 136434's current affinity list: 3-11
  pid 136435's current affinity list: 3-11
  pid 136436's current affinity list: 3-11
  pid 136437's current affinity list: 3-11
  pid 136438's current affinity list: 3-11
  pid 136439's current affinity list: 3-11
  pid 136440's current affinity list: 3-11
  pid 136441's current affinity list: 3-11
  pid 136442's current affinity list: 3-11
  pid 136443's current affinity list: 3-11
  pid 136444's current affinity list: 3-11
  pid 136445's current affinity list: 3-11
  pid 136446's current affinity list: 3-11
  pid 136447's current affinity list: 3-11
  pid 136448's current affinity list: 3-11
  pid 136449's current affinity list: 3-11
  pid 136450's current affinity list: 3-11
  pid 136451's current affinity list: 3-11
  pid 136452's current affinity list: 3-11
  pid 136453's current affinity list: 3-11
  pid 136454's current affinity list: 3-11

  
  Since the purpose of "hw:cpu_policy=dedicated" is to provide a dedicated host 
CPU for each guest CPU, the libvirt emulatorpin cpuset for a given guest should 
be set to one (or possibly more) of the CPUs specified for that guest.  
Otherwise, any work done by the emulator threads could rob CPU time from 
another guest instance.

  Personally I'd like to see the emulator thread affined the same as
  guest vCPU 0 (we use guest vCPU0 as a maintenance processor while
  doing the "real work" on the other vCPUs), but an argument could be
  made that it should be affined to the logical OR of all the guest vCPU
  cpusets.
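The "logical OR of all the guest vCPU cpusets" policy suggested above is simple to compute; a small hedged sketch (hypothetical helper, not nova's implementation):

```python
def emulator_pin_set(vcpu_pins):
    """Derive an emulator-thread pin set as the union of the guest vCPU
    cpusets; vcpu_pins maps guest vCPU index -> set of host CPUs."""
    pinned = set()
    for cpuset in vcpu_pins.values():
        pinned |= cpuset
    return pinned

# Example figures only: vCPU 0 pinned to host CPU 4, vCPU 1 to host CPU 5.
print(sorted(emulator_pin_set({0: {4}, 1: {5}})))
```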

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403889] Re: force_config_drive gives inconsistent instance state

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403889

Title:
  force_config_drive gives inconsistent instance state

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Default devstack install leaves

    force_config_drive = always

   in /etc/nova/nova.conf (slightly contradicting the docs:
  http://docs.openstack.org/user-guide/content/enable_config_drive.html
  which expects ' = true')

  An instance booted on this system does not have a config drive
  according to 'nova show', and does not get an entry in the
  config_drive column of the instances table. However the libvirt xml
  does show a config drive, and of course it's visible on the instance.
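One way to view the inconsistency is that 'always' is interpreted when building the libvirt XML but not reflected back into the instance record. A hedged sketch of a single normalization helper (hypothetical; not nova's actual code) that both code paths could share:

```python
def config_drive_forced(value):
    # Treat both the documented "true" and the devstack default "always"
    # as forcing a config drive; whichever interpretation is used to build
    # the libvirt XML must also be persisted to the instance record so
    # `nova show` and the instances table agree with what the guest sees.
    return str(value).strip().lower() in ("always", "true", "1", "yes")

print(config_drive_forced("always"), config_drive_forced("false"))
```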

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399498] Re: centos 7 unit test fails

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399498

Title:
  centos 7 unit test fails

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  centos 7 unit test fails.

  to pass this test:
  export OPENSSL_ENABLE_MD5_VERIFY=1
  export NSS_HASH_ALG_SUPPORT=+MD5 

  
  # ./run_tests.sh -V -s nova.tests.unit.test_crypto.X509Test
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit --concurrency 0 nova.tests.unit.test_crypto.X509Test'`
  nova.tests.unit.test_crypto.X509Test
  test_encrypt_decrypt_x509 OK  2.73
  test_can_generate_x509    FAIL

  Slowest 2 tests took 6.24 secs:
  nova.tests.unit.test_crypto.X509Test
  test_can_generate_x509    3.51
  test_encrypt_decrypt_x509 2.73

  ==
  FAIL: nova.tests.unit.test_crypto.X509Test.test_can_generate_x509
  --

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398086] Re: nova servers pagination does not work with deleted marker

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, delete, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500"

  However, if I try to paginate over them, doing:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100"

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar"

  I am always getting a "badRequest" error saying that the marker is not found. I guess this is because of these lines in "nova/db/sqlalchemy/api.py":

  2000     # paginate query
  2001     if marker is not None:
  2002         try:
  2003             marker = _instance_get_by_uuid(context, marker, session=session)
  2004         except exception.InstanceNotFound:
  2005             raise exception.MarkerNotFound(marker)

  The function "_instance_get_by_uuid" gets the machines that are not
  deleted, therefore it fails to locate the marker if it is a deleted
  server.
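A minimal sketch of the fix (plain Python over a list of dicts, not the actual SQLAlchemy code): resolve the marker with deleted rows included, so pagination can continue past a deleted server:

```python
class MarkerNotFound(Exception):
    pass

def paginate(rows, marker=None, limit=100, read_deleted="yes"):
    """Return up to `limit` rows after `marker`; rows are dicts with
    'uuid' and 'deleted' keys (toy stand-ins for instance records)."""
    start = 0
    if marker is not None:
        for i, row in enumerate(rows):
            # The fix: include deleted rows when resolving the marker.
            if row["uuid"] == marker and (read_deleted == "yes"
                                          or not row["deleted"]):
                start = i + 1
                break
        else:
            raise MarkerNotFound(marker)  # what the buggy lookup hits

    return rows[start:start + limit]

servers = [
    {"uuid": "a", "deleted": False},
    {"uuid": "b", "deleted": True},   # a deleted server used as the marker
    {"uuid": "c", "deleted": False},
]
print([r["uuid"] for r in paginate(servers, marker="b")])
```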

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406484] Re: Connection info retrieved on each call to get_volume_connector

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406484

Title:
  Connection info retrieved on each call to get_volume_connector

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  On a system with no Fibre Channel (FC) HBAs, each call to
  get_volume_connector() in virt/libvirt/driver.py will result in
  retrieving the FC HBA info.  This is because the code looks like this:

  self._fc_wwnns = None
  self._fc_wwpns = None
  ...
  if not self._fc_wwnns:
  self._fc_wwnns = libvirt_utils.get_fc_wwnns()
  ...
  if not self._fc_wwpns:
  self._fc_wwpns = libvirt_utils.get_fc_wwpns()

  In a system with no HBAs, the two utils functions return empty lists.
  Therefore we will go into these ifs on every call.  The if statements
  should be re-written as "if self.foo is not None".

  I have seen busy systems where these two calls add 800ms to each
  attach call!

  Similarly, if there is no iSCSI initiator name defined, the
  get_iscsi_initiator() function is called each time.
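The suggested `is not None` rewrite distinguishes "never fetched" from "fetched and empty". A small self-contained sketch (hypothetical class, not the libvirt driver itself):

```python
class ConnectorInfoCache:
    def __init__(self, fetch):
        self._fetch = fetch      # expensive lookup, e.g. scanning FC HBAs
        self._fc_wwpns = None    # None means "never fetched"
        self.fetch_count = 0

    def get_fc_wwpns(self):
        # The buggy version used `if not self._fc_wwpns:`, which re-runs
        # the expensive fetch on every call when the host has no HBAs,
        # because the cached empty list is falsy.
        if self._fc_wwpns is None:
            self.fetch_count += 1
            self._fc_wwpns = self._fetch()
        return self._fc_wwpns

cache = ConnectorInfoCache(fetch=lambda: [])  # a host with no FC HBAs
cache.get_fc_wwpns()
cache.get_fc_wwpns()
print(cache.fetch_count)
```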

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400080] Re: interfaces.template generation needs to include the hw address for usage on multi-nic Windows machines

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400080

Title:
  interfaces.template generation needs to include the hw address for
  usage on multi-nic Windows machines

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When using networks without DHCP enabled and "flat_injected" set to
  True, the interfaces template is injected in the associated instances
  or included in the config drive metadata.

  The template includes the interface name, based on progressive
  numbering (eth0, eth1, etc). In the case of multiple nics, there is no
  clear way to identify the interfaces in the guest OS if the actual
  interface naming differs; this is especially true for Windows
  instances.

  Since the MAC address (hardware address) assigned to each vNIC
  identifies uniquely the interface, providing the mac address during
  the template generation solves the issue.
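A hedged sketch of the idea (hypothetical template and example values, not nova's actual interfaces.template): carrying the MAC lets the guest match each stanza to a physical interface regardless of naming:

```python
# Hypothetical Debian-style stanza including the hardware address.
STANZA = """auto {name}
iface {name} inet static
    hwaddress ether {mac}
    address {address}
    netmask {netmask}
"""

nics = [  # example values only
    {"name": "eth0", "mac": "fa:16:3e:aa:bb:01",
     "address": "10.0.0.5", "netmask": "255.255.255.0"},
    {"name": "eth1", "mac": "fa:16:3e:aa:bb:02",
     "address": "10.0.1.5", "netmask": "255.255.255.0"},
]
rendered = "\n".join(STANZA.format(**nic) for nic in nics)
print(rendered)
```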

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406167] Re: Block migration fails because destination compute node refuses ssh connection

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406167

Title:
  Block migration fails because destination compute node refuses ssh
  connection

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Summary:
  When block-migrating a vm between two compute nodes, if the destination node lacks an ssh daemon or refuses the ssh connection, the block migration fails and damages the vm: the vm is left in the error state for some time.

  Scenario:
  Block-migrating a vm between two compute nodes

  Example:
  Compute Node 1: ly-compute1 (10.0.0.31)
  Compute Node 2: ly-compute2 (10.0.0.32) with VM: test11

  The tutorial below is what I followed for the OpenStack installation; it does not mention installing ssh support on compute nodes, so I did not install ssh on either compute node.
  OpenStack Installation Guide for Ubuntu 12.04/14.04 (LTS)
  http://docs.openstack.org/icehouse/install-guide/install/apt/content/

  The error occurs when I use the command "nova migrate --poll test11" to migrate vm "test11" from ly-compute2 to ly-compute1.
  Both the error message and the log say that the ssh connection to 10.0.0.31 failed (because the ssh daemon is not even installed), and the vm "test11" enters the error vm_state.
  *
  Trying to migrate:
  C:\Windows\system32>nova migrate --poll test11

  Server migrating... 0% complete
  Error migrating server
  ERROR (InstanceInErrorState): Unexpected error while running command.
  Command: ssh 10.0.0.31 mkdir -p 
/var/lib/nova/instances/30c4dac1-f3bc-4e6a-8a38-ee49671eee6a
  Exit code: 255
  Stdout: u''
  Stderr: u'ssh: connect to host 10.0.0.31 port 22: Connection refused\r\n'
  *
  dashboard log message:
  Unexpected error while running command. Command: ssh 10.0.0.31 mkdir -p 
/var/lib/nova/instances/30c4dac1-f3bc-4e6a-8a38-ee49671eee6a Exit code: 255 
Stdout: u'' Stderr: u'ssh: connect to host 10.0.0.31 port 22: Connection 
refused\r\n'
  Code
  500
  Details
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, in 
decorated_function return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3472, 
in resize_instance block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4954, in migrate_disk_and_power_off utils.execute('ssh', dest, 'mkdir', '-p', 
inst_base)
  File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute 
return processutils.execute(*cmd, **kwargs)
  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
195, in execute cmd=sanitized_cmd)
  *
  Second attempt to migrate:
  C:\Windows\system32>nova migrate --poll test11

  ERROR (Conflict): Cannot 'migrate' while instance is in vm_state error (HTTP 
409) (Request-ID: req-ba3ca8e1-0753-40ac-9e2e-2f7c319ec691)
  *

  Request:
  This fault can be dangerous because it damages the user's vm. The migrate_disk_and_power_off function in nova/virt/libvirt/driver.py should check that the ssh daemon on the destination node is reachable before starting the actual block-migration process.
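A minimal pre-check sketch along the lines requested (hypothetical helper; it only verifies TCP reachability of the ssh port, not that key-based login will succeed):

```python
import socket

def ssh_port_reachable(host, port=22, timeout=3.0):
    """Return True if `host` accepts TCP connections on the ssh port.
    Calling something like this before migrate_disk_and_power_off would
    let nova fail fast instead of erroring out mid-migration."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

# A reserved .invalid name can never resolve, so this reports False.
print(ssh_port_reachable("invalid.invalid", timeout=0.5))
```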

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402728] Re: VMware: resize does not update cpu limits

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1402728

Title:
  VMware: resize does not update cpu limits

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  A resize of a VM does not update the CPU limits correctly. That is, if
  resource limits or shares were set in the flavor extra specs, they were
  not updated after the resize (when necessary).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1402728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1389127] Re: instance can not recovery from resize status when nova-compute down after resize starting

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1389127

Title:
  instance can not recovery from resize status when nova-compute down
  after resize starting

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Steps to reproduce:

  [tagett@stack-01 devstack]$ nova resize test1 4
  [tagett@stack-01 devstack]$ nova list
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | ID                                   | Name      | Status | Task State  | Power State | Networks             |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | fb326f1c-05cb-4080-a133-2688a1580bdb | spacewalk | ACTIVE | -           | Running     |                      |
  | d7ba639c-d261-4dbe-ae70-3aaefc4de339 | test1     | RESIZE | resize_prep | Running     | private=192.168.1.94 |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+

  kill nova-compute, then restart it.

  [tagett@stack-01 devstack]$ nova list
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | ID                                   | Name      | Status | Task State  | Power State | Networks             |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | fb326f1c-05cb-4080-a133-2688a1580bdb | spacewalk | ACTIVE | -           | Running     |                      |
  | d7ba639c-d261-4dbe-ae70-3aaefc4de339 | test1     | RESIZE | resize_prep | Running     | private=192.168.1.94 |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+

  [tagett@stack-01 devstack]$ nova list
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | ID                                   | Name      | Status | Task State  | Power State | Networks             |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  | fb326f1c-05cb-4080-a133-2688a1580bdb | spacewalk | ACTIVE | -           | Running     |                      |
  | d7ba639c-d261-4dbe-ae70-3aaefc4de339 | test1     | RESIZE | resize_prep | Running     | private=192.168.1.94 |
  +--------------------------------------+-----------+--------+-------------+-------------+----------------------+
  [tagett@stack-01 devstack]$ nova reboot test1
  ERROR (Conflict): Cannot 'reboot' instance d7ba639c-d261-4dbe-ae70-3aaefc4de339 while it is in task_state resize_prep (HTTP 409) (Request-ID: req-7a9f4e54-388a-48b1-b361-4b82496542da)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1389127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385480] Re: LVM rescue disk not removed during unrescue

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385480

Title:
  LVM rescue disk not removed during unrescue

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Rescuing and unrescuing an LVM-backed instance leaves behind the
  .rescue disk image. This is caused by unrescue assuming that instances
  have file-based disks.

  def unrescue(self, instance, network_info):
  """Reboot the VM which is being rescued back into primary images.
  """
  instance_dir = libvirt_utils.get_instance_path(instance)
  unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
  xml = libvirt_utils.load_file(unrescue_xml_path)
  virt_dom = self._lookup_by_name(instance.name)
  self._destroy(instance)
  self._create_domain(xml, virt_dom)
  libvirt_utils.file_delete(unrescue_xml_path)
  rescue_files = os.path.join(instance_dir, "*.rescue")
  for rescue_file in glob.iglob(rescue_files):
  libvirt_utils.file_delete(rescue_file)<<<-- here

  The last line deletes all of the .rescue files in the instance
  directory but does not clean up the .rescue LVM volumes.
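A hedged sketch of the extra cleanup step (the helper and names below are illustrative; the real fix calls into nova's lvm utilities): collect both file-based and LVM-backed rescue artifacts for deletion.

```python
import glob
import os

def find_rescue_artifacts(instance_dir, lvm_volume_names):
    """Collect everything ending in .rescue: file-based disks in the
    instance directory AND the instance's LVM volumes. The caller would
    then delete files (e.g. via libvirt_utils.file_delete) and volumes
    via the LVM helper; names here are illustrative only."""
    artifacts = list(glob.iglob(os.path.join(instance_dir, "*.rescue")))
    artifacts += [v for v in lvm_volume_names if v.endswith(".rescue")]
    return artifacts

# Example with no files on disk and one leftover LVM rescue volume.
print(find_rescue_artifacts("/nonexistent-instance-dir",
                            ["instance_disk", "instance_disk.rescue"]))
```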

  --
  To reproduce:

  1. Configure nova for LVM ephemeral storage with

  [libvirt]
  images_type = lvm
  images_volume_group = nova-lvm

  2. Stack
  3. Boot an instance with flavor other than nano or micro, so the instance has 
a non-zero disk size
  4. Rescue the instance
  5. Unrescue the instance
  6. Observe the rescue image left in nova-lvm/.rescue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356389] Re: VMware: unable to access VNC console of rescue VM

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356389

Title:
  VMware: unable to access VNC console of rescue VM

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When rescuing a VM, the user is unable to access the VNC console of that VM.
  In addition, the state of the rescue VM is 'SHUTDOWN'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384653] Re: attach encrypted volume, raise "Empty module name" exception

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384653

Title:
  attach encrypted volume, raise "Empty module name" exception

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Create an encrypted volume type using "encryption-type-create" with the
  provider "LuksEncryptor". Then attach the encrypted volume to a vm; an
  exception is raised in nova/volume/encryptors/__init__.py.

  The error log:

  2014-10-23 02:03:04.115 ERROR nova.virt.libvirt.driver 
[req-5f4f611c-2c6f-4e2f-9a42-f9dad529054a admin demo] [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Failed to attach volume at mountpoint: 
/dev/vdb
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Traceback (most recent call last):
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1380, in attach_volume
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] encryption)
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1327, in 
_get_volume_encryptor
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] **encryption)
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/volume/encryptors/__init__.py", line 41, in 
get_volume_encryptor
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] **kwargs)
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/utils/importutils.py", line 38, in 
import_object
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] return 
import_class(import_str)(*args, **kwargs)
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/usr/local/lib/python2.7/dist-packages/oslo/utils/importutils.py", line 27, in 
import_class
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] __import__(mod_str)
  2014-10-23 02:03:04.115 15313 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] ValueError: Empty module name

  The code should expand the provider to a full class name when the user
  sets an encryption provider such as "LuksEncryptor" or "CryptsetupEncryptor".
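A hedged sketch of that normalization (the alias table below is an assumption about module layout, not nova's exact mapping):

```python
# Assumed full paths; shown only to illustrate the normalization step.
_PROVIDER_ALIASES = {
    "LuksEncryptor": "nova.volume.encryptors.luks.LuksEncryptor",
    "CryptsetupEncryptor":
        "nova.volume.encryptors.cryptsetup.CryptsetupEncryptor",
}

def canonical_provider(provider):
    # A bare class name has no module portion, so importing it ends up
    # calling __import__("") and fails with "Empty module name"; expand
    # short names to a full dotted path before importing.
    return _PROVIDER_ALIASES.get(provider, provider)

print(canonical_provider("LuksEncryptor"))
print(canonical_provider("my.module.MyEncryptor"))  # full paths pass through
```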

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338551] Re: Failure in interface-attach may leave port around

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338551

Title:
  Failure in interface-attach may leave port around

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When the interface-attach action is run, it may be passed in a network
  (but no port identifier).  Therefore, the action allocates a port on
  that network.  However, if the attach method fails for some reason,
  the port is not cleaned up.

  This behavior would be appropriate if the invoker had passed in a port
  identifier.  However if nova created the port for the action and that
  action failed, the port should be cleaned up as part of the failure.

  The allocation of the port occurs in nova/compute/manager.py in the
  attach_interface method. We recommend de-allocating the port when no
  port_id was passed in.
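A self-contained sketch of the recommended behaviour (the API object and exception are fakes for the sketch, not nova's actual interfaces): clean up the port only when this call allocated it.

```python
class AttachFailed(Exception):
    pass

def attach_interface(network_api, instance, network_id=None, port_id=None):
    created_port = port_id is None
    if created_port:
        port_id = network_api.allocate_port(instance, network_id)
    try:
        network_api.attach(instance, port_id)
    except Exception:
        if created_port:  # never touch a caller-supplied port
            network_api.deallocate_port(instance, port_id)
        raise
    return port_id

class FakeNetworkAPI:  # toy test double
    def __init__(self):
        self.deallocated = []
    def allocate_port(self, instance, network_id):
        return "port-1"
    def attach(self, instance, port_id):
        raise AttachFailed("attach failed")
    def deallocate_port(self, instance, port_id):
        self.deallocated.append(port_id)

api = FakeNetworkAPI()
try:
    attach_interface(api, "vm-1", network_id="net-1")
except AttachFailed:
    pass
print(api.deallocated)
```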

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249992] Re: the task_state is migrating if compare cpu failed

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249992

Title:
  the task_state is migrating if compare cpu failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Code version:
  The latest version of master

  API version:
  V2

  Compute driver:
  libvirt.LibvirtDriver

  Libvirt type:
  KVM

  Steps:
  1.Create a vm
  2.Live migrate the vm to the other host

  Bugs:
  The task_state of the instance stays "migrating" if the CPU comparison
  failed, so I cannot migrate the vm anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271966] Re: Not possible to spawn vmware instance with multiple disks

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271966

Title:
  Not possible to spawn vmware instance with multiple disks

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The behaviour of spawn() in the vmwareapi driver with respect to images and
  block device mappings is currently as follows:

  If there are any block device mappings, images are ignored.
  If there are any block device mappings, the last becomes the root device
  and all others are ignored.

  This means that, for example, the following scenarios are not
  possible:

  1. Spawn an instance with a root device from an image, and a secondary volume
  2. Spawn an instance with a volume as a root device, and a secondary volume

  The behaviour of the libvirt driver is as follows:

  If there is an image, it will be the root device unless there is also a block 
device mapping for the root device
  All block device mappings are used
  If there are multiple block device mappings for the same device, the last one 
is used

  The vmwareapi driver's behaviour is surprising, and should be modified
  to follow the libvirt driver.
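The libvirt rules listed above can be sketched as a small resolver; the function name and mapping shape are illustrative, not nova's actual data structures:

```python
def resolve_devices(image_id, mappings):
    # Sketch of the libvirt-style rules described above (names assumed):
    # every mapping is used, the last mapping per device wins, and the
    # image backs the root device unless a mapping claims it.
    by_device = {}
    for m in mappings:                 # later mappings override earlier ones
        by_device[m["device"]] = m
    if image_id is not None and "root" not in by_device:
        by_device["root"] = {"device": "root", "source": "image",
                             "id": image_id}
    return by_device

devs = resolve_devices("img-1", [
    {"device": "vdb", "source": "volume", "id": "vol-a"},
    {"device": "vdb", "source": "volume", "id": "vol-b"},  # overrides vol-a
])
print(devs["root"]["source"], devs["vdb"]["id"])  # image vol-b
```

With these rules, scenario 1 above works because the image stays root while the secondary mapping is kept, and scenario 2 works because a root mapping overrides the image.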

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381598] Re: boot from image created with nova image-create from a volume backed instance is rejected

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381598

Title:
  boot from image created with nova image-create from a volume backed
  instance is rejected

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  It is not possible to boot the image that was created with nova
  image-create from a volume-backed instance.

  Steps to reproduce:
  stack@stack:~/devstack$ nova boot --flavor 100 --block-device 
source=image,id=70b5a8e8-846f-40dc-a52d-558d37dfc7f1,dest=volume,bootindex=0,size=1
 volume-backed
  
+--+-+
  | Property | Value
   |
  
+--+-+
  | OS-DCF:diskConfig| MANUAL   
   |
  | OS-EXT-AZ:availability_zone  | nova 
   |
  | OS-EXT-SRV-ATTR:host | -
   |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -
   |
  | OS-EXT-SRV-ATTR:instance_name| instance-0017
   |
  | OS-EXT-STS:power_state   | 0
   |
  | OS-EXT-STS:task_state| scheduling   
   |
  | OS-EXT-STS:vm_state  | building 
   |
  | OS-SRV-USG:launched_at   | -
   |
  | OS-SRV-USG:terminated_at | -
   |
  | accessIPv4   |  
   |
  | accessIPv6   |  
   |
  | adminPass| wvUa22QCTaoR 
   |
  | config_drive |  
   |
  | created  | 2014-10-15T15:07:39Z 
   |
  | flavor   | install-test (100)   
   |
  | hostId   |  
   |
  | id   | 9ad985f6-5e76-4545-9702-0b8a6058ef57 
   |
  | image| Attempt to boot from volume - no 
image supplied |
  | key_name | -
   |
  | metadata | {}   
   |
  | name | volume-backed
   |
  | os-extended-volumes:volumes_attached | []   
   |
  | progress | 0
   |
  | security_groups  | default  
   |
  | status   | BUILD
   |
  | tenant_id| 89dda4659c7e403392e9bcfc14ca6c80 
   |
  | updated  | 2014-10-15T15:07:39Z 
   |
  | user_id  | 4c9283c1cbc54d688e2dda83fbc4aa11 
   |
  
+--+-+

  
  stack@stack:~/devstack$ nova show 9ad985f6-5e76-4545-9702-0b8a6058ef57
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | stack
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | stack
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0017
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_sta

[Yahoo-eng-team] [Bug 1352668] Re: After archive db the instance with deleted flavor failed

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352668

Title:
  After archive db the instance with deleted flavor failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  reproduce as below:

  os@os2:~$ nova show vm1
  
+--++
  | Property | Value
  |
  
+--++
  | OS-DCF:diskConfig| MANUAL   
  |
  | OS-EXT-AZ:availability_zone  | nova 
  |
  | OS-EXT-SRV-ATTR:host | os3  
  |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | os3  
  |
  | OS-EXT-SRV-ATTR:instance_name| instance-0045
  |
  | OS-EXT-STS:power_state   | 1
  |
  | OS-EXT-STS:task_state| -
  |
  | OS-EXT-STS:vm_state  | active   
  |
  | OS-SRV-USG:launched_at   | 2014-08-05T03:47:09.00   
  |
  | OS-SRV-USG:terminated_at | -
  |
  | accessIPv4   |  
  |
  | accessIPv6   |  
  |
  | config_drive |  
  |
  | created  | 2014-08-05T03:47:01Z 
  |
  | flavor   | test1 (333)  
  |
  | hostId   | 
c8e8cab21e9e22dbc3779fd171e77f44940ba1c81161dc114ba4ad85   |
  | id   | c2e84eda-4bc6-4ef7-a5ee-f6590fb1f6f7 
  |
  | image| cirros-0.3.2-x86_64-uec 
(da82a342-aeac-407a-bf9d-cf28bf68dc6b) |
  | key_name | -
  |
  | metadata | {}   
  |
  | name | vm1  
  |
  | net1 network | 12.0.0.55
  |
  | os-extended-volumes:volumes_attached | []   
  |
  | progress | 0
  |
  | security_groups  | default  
  |
  | status   | ACTIVE   
  |
  | tenant_id| fdbb1e8f23eb40c89f3a677e2621b95c 
  |
  | updated  | 2014-08-05T03:47:09Z 
  |
  | user_id  | 158d3c971e244f479593c86ff751bf8f 
  |
  
+--++
  os@os2:~$ nova delete ^C
  os@os2:~$ nova flavor-delete 333
  
+-+---+---+--+---+--+---+-+---+
  | ID  | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | 
Is_Public |
  
+-+---+---+--+---+--+---+-+---+
  | 333 | test1 | 512   | 1| 1 | 10   | 1 | 1.0 | 
True  |
  
+-+---+---+--+---+--+---+-+---+


  2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134,
   in _dispatch_and_reply
  2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-05 12:16:09.558 TRACE o

[Yahoo-eng-team] [Bug 1270825] Re: Live block migration fails for instances whose glance images have been deleted

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270825

Title:
  Live block migration fails for instances whose glance images have been
  deleted

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Once the glance image from which an instance was spawned is deleted
  it's not possible to block migrate this instance.

  To recreate:

  1. Boot an instance off a public image or snapshot
  2. Delete the image from glance
  3. Live block migrate this instance. It will fail at pre-live-migration stage 
as the image could not be downloaded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273451] Re: improper use of mock with stevedore in tests

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273451

Title:
  improper use of mock with stevedore in tests

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The tests in nova/tests/test_hook.py are mocking a private part of the
  stevedore API (_load_plugins) instead of using
  ExtensionManager.make_test_instance() to create a test version of an
  ExtensionManager and passing that somewhere instead.

  See https://review.openstack.org/#/c/69475/1 as a first-pass work-
  around.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380780] Re: Boot from image and create a new volume ignores availability zone

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380780

Title:
  Boot from image and create a new volume ignores availability zone

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Booting from an image while creating a new volume does not pass the
  instance availability zone to Cinder when creating the volume.

  Here is a fail scenario.

  Configure cinder to run volume service in different availability zones.
  Cross zone volume usage should be disabled (in nova conf 
cinder_cross_az_attach=false ).

  [root@node-7 ~]# cinder service-list
  
+--++--+-+---++
  |  Binary  |Host|   Zone   |  Status | State |
 Updated_at |
  
+--++--+-+---++
  | cinder-scheduler | node-10.domain.tld | internal | enabled |   up  | 
2014-10-13T20:12:18.00 |
  | cinder-scheduler | node-7.domain.tld  | internal | enabled |   up  | 
2014-10-13T20:12:15.00 |
  | cinder-scheduler | node-8.domain.tld  | internal | enabled |   up  | 
2014-10-13T20:12:18.00 |
  |  cinder-volume   |   node-10.reg1a|  reg1a   | enabled |   up  | 
2014-10-13T20:12:14.00 |
  |  cinder-volume   |   node-10.reg1b|  reg1b   | enabled |   up  | 
2014-10-13T20:12:14.00 |
  |  cinder-volume   |node-7.reg1a|  reg1a   | enabled |   up  | 
2014-10-13T20:12:18.00 |
  |  cinder-volume   |node-7.reg1b|  reg1b   | enabled |   up  | 
2014-10-13T20:12:14.00 |
  |  cinder-volume   |node-8.reg1a|  reg1a   | enabled |   up  | 
2014-10-13T20:12:21.00 |
  |  cinder-volume   |node-8.reg1b|  reg1b   | enabled |   up  | 
2014-10-13T20:12:21.00 |
  
+--++--+-+---++

  Run CLI as below to create a volume from an existing image and using
  it to boot an instance.

  nova boot test --flavor 1 --image 32705323-4bfb-4cd7-9711-f5459fd236d8
  --nic net-id=ca3b4232-405c-4225-9724-0f0dde69c1d5  --availability-
  zone=reg1a  --block-device "source=image,id=32705323-4bfb-
  4cd7-9711-f5459fd236d8,dest=volume,size=10,bootindex=1"

  This will attempt to create the volume in the internal (default)
  availability zone, but creation will fail because there is no volume
  service in the internal availability zone.

  ++--+
  |Property|Value |
  ++--+
  |  attachments   |  []  |
  |   availability_zone|   internal   |
  |bootable|false |
  |   created_at   |  2014-10-13T19:45:24.00  |
  |  display_description   |  |
  |  display_name  |  |
  |   encrypted|False |
  |   id   | e358b519-4287-45a5-85cc-1e6a0d371fb1 |
  |metadata|  {}  |
  | os-vol-host-attr:host  | None |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  |  os-vol-tenant-attr:tenant_id  |   1cd2c85585ed42dcaf266b57c22c86ef   |
  |  size  |  10  |
  |  snapshot_id   | None |
  |  source_volid  | None |
  | status |error |
  |  volume_type   | None |
  ++--+

  The instance boot fail with error: "InvalidVolume: Invalid volume:
  status must be 'available'".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-
  dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, 
in loads
   return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
   return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - simply
  attaches a volume to an instance, waits for it to show up in the
  instance and then tries to detach it

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare
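The traceback is consistent with a None connection_info reaching json.loads(); a minimal reproduction follows (the guard function is a hypothetical illustration of the fix direction, not nova's actual code):

```python
import json

# A None connection_info (e.g. a BDM whose attach never completed) passed
# straight to json.loads() reproduces the TypeError in the traceback above.
# (Python 2 said "expected string or buffer"; Python 3 words it differently.)
def load_connection_info(raw):
    # hypothetical guard: fail with a clear error instead of a bare TypeError
    if raw is None:
        raise ValueError("connection_info is not set; BDM may be stale")
    return json.loads(raw)

try:
    json.loads(None)
except TypeError as exc:
    print("TypeError:", exc)
```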

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350224] Re: VMWare: Operating System Not Found, using block device mapping for volume during VM spawn

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350224

Title:
  VMWare: Operating System Not Found, using block device mapping for
  volume during VM spawn

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When using vmware driver to attach a volume during VM spawn as below
  using --block-device.

  The VM will show 'Active' in OpenStack, but the VM cannot actually boot,
  showing 'Operating System Not Found'.

  nova boot --flavor 7 --image trend-thin --block-device
  source=volume,id=0fa2137c-ef9f-413c-bf6b-
  1a8b4fcf2e35,dest=volume,shutdown=preserve myInstanceWithVolume --nic
  net-id=e7ef5ccb-1718-42b6-a99c-37d5a509c339

  Note: the volume is not a bootable volume. We just want to deploy the VM
  from the backend image and then attach the volume to it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366139] Re: Metadata cache time should be configurable

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366139

Title:
  Metadata cache time should be configurable

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The nova metadata request handler uses an in-memory cache of 15
  seconds. Under very heavy usage of the metadata service, this can
  drastically limit the cache hit rate, since it expires so quickly.

  Adding the ability to control the cache timeout has, in our tests,
  increased the average cache hit rate from around 20% to 80% or better
  with approximately a thousand metadata calls per minute.
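A minimal sketch of the configurable-TTL idea, assuming nothing about nova's actual cache class (names and structure here are illustrative):

```python
import time

class TTLCache:
    """Minimal sketch of a time-bounded cache. The fix makes ttl a
    parameter instead of the hard-coded 15 seconds."""

    def __init__(self, ttl=15.0):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]   # expired: drop and miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

cache = TTLCache(ttl=60.0)  # a longer TTL raises the hit rate under load
cache.set("instance-1", {"hostname": "vm1"})
print(cache.get("instance-1"))
```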

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363119] Re: nova messaging queues slow in idle cluster

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363119

Title:
  nova messaging queues slow in idle cluster

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Each running nova-network service periodically calls
  fixed_ip_disassociate_all_by_timeout(). This searches all fixed ips,
  filtering on multiple columns. With a large number of fixed_ips, each
  call to this function can take several seconds to complete. With many
  hosts running nova-network, a cluster with little to no activity can
  experience prolonged delays in message processing, eventually
  rendering some or all hosts unresponsive to nova commands (boot
  instance, etc). The only column referenced in that query that is not
  represented in any existing index is updated_at; a new index including
  fixed_ips.updated_at is probably called for.
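The effect of the proposed index can be sanity-checked with SQLite's query planner; the table below is a simplified stand-in for fixed_ips (column set assumed), not nova's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# simplified stand-in for nova's fixed_ips table (columns assumed)
conn.execute("CREATE TABLE fixed_ips (id INTEGER PRIMARY KEY,"
             " allocated BOOLEAN, updated_at TIMESTAMP)")
# the index the bug calls for
conn.execute("CREATE INDEX fixed_ips_updated_at_idx"
             " ON fixed_ips (updated_at)")
# a disassociate-style range filter now searches the index instead of
# scanning every row
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM fixed_ips"
    " WHERE updated_at < '2014-01-01'").fetchall()
print(plan)
```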

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350704] Re: The floating ip should be removed after shelve offload instance

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350704

Title:
  The floating ip should be removed after shelve offload instance

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Commit 4f8185549dfe11eb1ce405711593baa1528045ea fixes the port binding
  update when unshelving an instance with Neutron, but that fix did not cover
  nova-network. For nova-network with multi-host, the floating IP should be
  removed when the instance is shelve-offloaded.

  Reproduce as below:
  1. nova boot vm1
  2. iptables -t nat -L -n to check the floating IP rules for vm1
  3. nova shelve vm1
  4. Wait for vm1 to be shelve-offloaded
  5. Run iptables again; the floating IP rules still exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352728] Re: The simultaneous launch of two or more VMs will fail

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352728

Title:
  The simultaneous launch of two or more VMs will fail

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Launching VMs from the dashboard with 'Instance Count: 2, Instance Boot
  Source: Boot from image' and clicking 'Launch' fails to start the two VMs
  simultaneously.

  the nova-api.log shows as follows.
  "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1356, in create
  2014-08-05 21:47:46.117 25518 TRACE nova.api.openstack 
self._check_multiple_instances_neutron_ports(requested_networks)
  2014-08-05 21:47:46.117 25518 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1327, in 
_check_multiple_instances_neutron_ports
  2014-08-05 21:47:46.117 25518 TRACE nova.api.openstack for net, ip, port 
in requested_networks:
  2014-08-05 21:47:46.117 25518 TRACE nova.api.openstack ValueError: too many 
values to unpack
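The ValueError is the usual fixed-arity tuple unpacking failure; an illustrative reproduction (the field values are made up, not nova's actual data):

```python
# Illustrative entry: newer API code passes extra fields per network,
# so legacy three-way unpacking raises the ValueError from the traceback.
requested_networks = [("net-uuid", "10.0.0.5", "port-uuid", "pci-request-id")]

try:
    for net, ip, port in requested_networks:  # expects exactly 3 fields
        pass
except ValueError as exc:
    print(exc)  # too many values to unpack

# a tolerant variant reads only the fields it needs
for entry in requested_networks:
    net, ip = entry[0], entry[1]
```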

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375320] Re: VMware: VM does not have network connectivty when there are many port groups defined

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375320

Title:
  VMware: VM does not have network connectivty when there are many port
  groups defined

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If the driver does not get the port group in the first response from the
  VC, it will not match any of the networks. This happens when there are many
  port groups defined (more than a few hundred).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278849] Re: Need more log info for "Instance could not be found" error.

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278849

Title:
  Need more log info for "Instance could not be found" error.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Rarely, when looking up the details of an 'instance_id', we see the
  following error, even though the instance pertaining to that instance_id is
  still available and active.
  call: GET /${project_id}/servers/${instance_id}
  HTTP exception thrown: Instance could not be found (404)

  Apart from the above, there is not enough information to trace back what
  exactly caused the 404; a log message with a traceback would help us
  identify the root cause.

  This bug is to add more log messages with traceback for more
  information.
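One way to get the requested traceback is logging the exception at the point the 404 is raised; the following is a sketch, not nova's actual handler (names are illustrative):

```python
import logging

logging.basicConfig(level=logging.ERROR)
LOG = logging.getLogger("nova.api.demo")

def get_server(instance_id, lookup):
    try:
        return lookup(instance_id)
    except KeyError:
        # logging.exception attaches the current traceback to the record,
        # giving the "where did the 404 come from" context the bug asks for
        LOG.exception("Instance %s could not be found", instance_id)
        raise

try:
    get_server("i-404", {}.__getitem__)
except KeyError:
    pass
```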

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329313] Re: offline server migration fails if its image in glance was deleted

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329313

Title:
  offline server migration fails if its image in glance was deleted

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If an instance is migrated off a hypervisor with 'nova host-servers-migrate'
  and its image was deleted, the instance fails to start with the message

  {u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be
  found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

  Steps to reproduce:
  1. Create an instance.
  2. Delete the image the instance was started from.
  3. Check that the image has expired on the destination node (set
  remove_unused_original_minimum_age_seconds=1 in nova.conf).
  4. Run nova host-servers-migrate on the host where the instance is running.

  Expected behavior:
  The instance migrates successfully.

  Actual behavior:
  The instance is transferred to the new hypervisor but fails to start with
  the message:

  status: ERROR
  fault: {u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be 
found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

  nova-compute at destination hypervisor:

  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3162, 
in finish_resize
  disk_info, image)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3130, 
in _finish_resize
  block_device_info, power_on)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4605, in finish_migration
  block_device_info=None, inject_files=False)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2389, in _create_image
  project_id=instance['project_id'])
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 179, in cache
  *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 336, in create_image
  prepare_template(target=base, max_size=size, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 246, in inner
  return f(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 167, in call_if_not_exists
  fetch_func(target=target, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 
645, in fetch_image
  max_size=max_size)
    File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 196, in 
fetch_to_raw
  max_size=max_size)
    File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 190, in 
fetch
  image_service.download(context, image_id, dst_path=path)
    File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 349, in 
download
  _reraise_translated_image_exception(image_id)
    File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 347, in 
download
  image_chunks = self._client.call(context, 1, 'data', image_id)
    File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 212, in 
call
  return getattr(client.images, method)(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 
127, in data
  % urllib.quote(str(image_id)))
    File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
272, in raw_request
  return self._http_request(url, method, **kwargs)
    File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
233, in _http_request
  raise exc.from_response(resp, body_str)
  ImageNotFound: Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be found.

  Version: havana,  1:2013.2.3-0ubuntu1~cloud0 (ubuntu)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255317] Re: VMware: can't boot from sparse image copied to volume

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255317

Title:
  VMware: can't boot from sparse image copied to volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Using VC Driver, we are unable to boot from a sparse image copied to a
  volume. Scenario is as follows:

  1. Create an image using the cirros vmdk image (linked below) with 
vmware_disktype="sparse"
  2. Copy the image to a volume
  3. Boot from the volume

  Expected: Able to boot into OS and see the login screen
  Actual: "Operating system is not found"

  [1]
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.0-i386-disk.vmdk

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316621] Re: ebtables calls can race with libvirt

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316621

Title:
  ebtables calls can race with libvirt

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Sometimes a request to associate a floating IP may fail when using
  nova-network with libvirt, like:

  > 
http://192.168.1.12:8774/v2/258a4b20c77240bf9b386411430683fa/servers/a9e734e4-5310-4191-a7f0-78fca4b367e7/action
  > 
  > BadRequest: Bad request
  > Details: {'message': 'Error. Unable to associate floating ip', 'code': 
'400'}

  The real issue is that the ebtables rootwrap call fails:
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ebtables -t nat -I 
PREROUTING --logical-in br100 -p ipv4 --ip-src 192.168.32.10 ! --ip-dst 
192.168.32.0/22 -j redirect --redirect-target ACCEPT
  Exit code: 255
  Stdout: ''
  Stderr: "Unable to update the kernel. Two possible causes:\n1. Multiple 
ebtables programs were executing simultaneously. The ebtables\n   userspace 
tool doesn't by default support multiple ebtables programs running\n   
concurrently. The ebtables option --concurrent or a tool like flock can be\n   
used to support concurrent scripts that update the ebtables kernel tables.\n2. 
The kernel doesn't support a certain ebtables extension, consider\n   
recompiling your kernel or insmod the extension.\n.\n"

  It happens about once per whole Tempest run, and not always, so missing
  kernel support and the other listed causes should not apply here. Probably
  already mentioned in
  https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg23422.html.

  As that call in nova is synchronized with a lock, could nova actually be
  racing with libvirt itself calling ebtables?
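The flock approach the error text suggests can be sketched in Python with an advisory file lock (the lock path and helper name are illustrative, not nova's code):

```python
import fcntl
import os
import subprocess
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), "ebtables-demo.lock")

def run_serialized(cmd, lockfile=LOCKFILE):
    # Hold an exclusive advisory lock for the duration of the command, so
    # concurrent callers queue instead of racing on the kernel table update.
    with open(lockfile, "w") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)  # released when the file is closed
        return subprocess.call(cmd)

# demo with a harmless command; the real call would be ebtables (needs root),
# and newer ebtables offers --concurrent to serialize internally
print(run_serialized(["true"]))  # exit status 0
```

A shared lock only helps if every ebtables caller (nova and libvirt alike) goes through it, which is why the --concurrent option built into ebtables itself is the more robust fix.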

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291199] Re: update quota multi-value in one request, half done half failed

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291199

Title:
  update quota multi-value in one request, half done half failed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Updating multiple quota values in one request is not an atomic operation.
  I updated quotas (floating_ips, key_pairs, instances, ram) in one request, 
and got response code 400 because ram must not be less than what is in use. 
That is ok. The problem is that floating_ips, key_pairs, and instances were 
updated anyway.
  I think the update operation should be atomic: when the update fails, we 
should roll back all the changes to the quota.
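
  The all-or-nothing behavior the reporter asks for can be sketched as
  validate-then-apply (illustrative Python, not nova's actual quota code):

```python
class QuotaError(ValueError):
    """Raised when any requested limit is invalid; nothing is applied."""

def update_quotas(quotas, usage, updates):
    # Validate every requested limit first; only when all of them pass do
    # we touch the stored quotas, so one bad value (e.g. ram below current
    # usage) cannot leave a half-applied update behind.
    for key, limit in updates.items():
        if limit < usage.get(key, 0):
            raise QuotaError("%s: limit %d is below current usage %d"
                             % (key, limit, usage[key]))
    quotas.update(updates)
    return quotas

quotas = {"floating_ips": 10, "key_pairs": 100, "instances": 10, "ram": 51200}
usage = {"ram": 4096}
try:
    # ram=1024 is below usage, so the whole request must be rejected
    update_quotas(quotas, usage,
                  {"floating_ips": 20, "instances": 20, "ram": 1024})
except QuotaError:
    pass
assert quotas["floating_ips"] == 10  # untouched: no partial update
```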

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322096] Re: [HyperV]: configdrive.iso is not migrated when live-migration

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322096

Title:
  [HyperV]: configdrive.iso is not migrated when live-migration

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If we use config-drive (whether by setting --config-drive=true on the
  boot command or force_config_drive=always in nova.conf), there is a bug
  with config-drive when live-migrating instances on Hyper-V.

  Live migration on Hyper-V only moves root.vhd to the other host, not 
configdrive.iso, yet the live-migrated instance still has configdrive.iso 
attached. If you then try to migrate or resize the instance again, nova 
errors out because configdrive.iso is not found.

  We should move configdrive.iso to the target host during live migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082414] Re: Live migration between hosts with differents CPU models fails

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1082414

Title:
  Live migration between hosts with differents CPU models fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to migrate an instance from a new compute node to an older
  compute node (hardware-wise), nova-compute fails, saying that the CPU
  models are incompatible, and leaves the instance in Error status.

  I did specify the CPU model in instances with the options :

  libvirt_cpu_mode=custom
  libvirt_cpu_model=kvm64

  While it seems normal to refuse migration when libvirt_cpu_mode is set
  to host-model or host-passthrough, setting it to custom and using
  kvm64 in instances should not impact CPU compatibility between hosts.

  Migrating the other way around (old to new) works correctly, mainly
  because of a superseding set of instructions.

  I'm using the Folsom Ubuntu packaged release of nova-compute on Ubuntu
  12.10.

  Edit : Adding stacktrace

  MSG_ID is fe5d9a7c1a634d7f9c0270e35fed083b from (pid=2363) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
  [req-0512c53b-1b31-4924-a7d8-1ab57505ca0b ac90b7751f7f4da78f238916f3d01788 
0173fc0a9e3d42caa34659f84ee7cb7c] Live migration of instance 
97177534-6947-4756-9db5-b92fd4e504df to host os-node1 failed
  Traceback (most recent call last):
File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/admin_actions.py",
 line 282, in _migrate_live
  disk_over_commit, host)
File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 94, in 
inner
  return f(self, context, instance, *args, **kw)
File "/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1960, in 
live_migrate
  disk_over_commit, instance, host)
File "/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 96, 
in live_migration
  dest=dest))
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", 
line 80, in call
  return rpc.call(context, self._get_topic(topic), msg, timeout)
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py", line 
102, in call
  return _get_impl().call(cfg.CONF, context, topic, msg, timeout)
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
line 712, in call
  rpc_amqp.get_connection_pool(conf, Connection))
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 368, in call
  rv = list(rv)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 336, in __iter__
  raise result
  RemoteError: Remote error: InvalidCPUInfo_Remote Unacceptable CPU info: CPU 
doesn't have compatibility.
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
  0
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
  Refer to http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult
  Traceback (most recent call last):
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 275, in _process_data
  rval = self.proxy.dispatch(ctxt, version, method, **args)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 145, in dispatch
  return getattr(proxyobj, method)(ctxt, **kwargs)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in 
wrapped
  temp_level, payload)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  self.gen.next()
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in 
wrapped
  return f(*args, **kw)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2076, 
in check_can_live_migrate_destination
  instance, block_migration, disk_over_commit)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2238, in check_can_live_migrate_destination
  self._compare_cpu(source_cpu_info)
  2012-11-14 22:33:41 TRACE nova.api.openstack.compute.contrib.admin_actions
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2367, in _compare_cpu
  raise exception.InvalidCPUInfo(reason=m % 

[Yahoo-eng-team] [Bug 1210261] Re: remove openstack.common.context

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210261

Title:
  remove openstack.common.context

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in Logging configuration library for OpenStack:
  Fix Released

Bug description:
  relates to https://bugs.launchpad.net/neutron/+bug/1208734, and
  according to https://github.com/openstack/oslo-
  incubator/blob/master/MAINTAINERS#L87, i think we'd better remove
  openstack/comon/context

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1210261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1071799] Re: commands crash when don't have permissions to read config

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1071799

Title:
  commands crash when don't have permissions to read config

Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo configuration management library:
  Fix Released

Bug description:
  In summary, running nova-all as a normal user that can't read 
/etc/nova/nova.conf will result in a crash. This config file is only 
readable by 'nova' and 'root' on Red Hat and Fedora systems, for example.
  Instead, a "permission denied" message should be printed/logged,
  rather than a crash with a confusing FileNotFound.
  The same is probably true for cinder-all etc.

  Here is a trace of what happens...

  [bob@lxbsp2932 ~]$ nova-all
  Traceback (most recent call last):
  File "/usr/bin/nova-all", line 54, in 
  flags.parse_args(sys.argv)
  File "/usr/lib/python2.6/site-packages/nova/flags.py", line 43, in parse_args
  default_config_files=default_config_files)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py", line 
1026, in __call__
  self._parse_config_files()
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/cfg.py", line 
1496, in _parse_config_files
  raise ConfigFilesNotFoundError(not_read_ok)
  nova.openstack.common.cfg.ConfigFilesNotFoundError: Failed to read some 
config files: /etc/nova/nova.conf
   
   
  Oct 26 16:49:27 lxbsp2932 abrt: detected unhandled Python exception in 
'/usr/bin/nova-all'
  Oct 26 16:49:27 lxbsp2932 abrtd: New client connected
  Oct 26 16:49:27 lxbsp2932 abrtd: Directory 'pyhook-2012-10-26-16:49:27-25525' 
creation detected
  Oct 26 16:49:27 lxbsp2932 abrt-server[25530]: Saved Python crash dump of pid 
25525 to /var/spool/abrt/pyhook-2012-10-26-16:49:27-25525
  Oct 26 16:49:38 lxbsp2932 abrtd: Sending an email...
  Oct 26 16:49:38 lxbsp2932 abrtd: Email was sent to: root@localhost
  Oct 26 16:49:38 lxbsp2932 abrtd: New problem directory 
/var/spool/abrt/pyhook-2012-10-26-16:49:27-25525, processin
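
  The desired behavior can be sketched in a few lines: distinguish
  "unreadable" from "missing" when loading the config file and exit with a
  clear message (an illustrative sketch, not oslo.config's actual code):

```python
import errno
import sys

def read_config(path):
    # Report permission problems explicitly instead of letting them
    # surface later as a confusing "file not found" style crash.
    try:
        with open(path) as f:
            return f.read()
    except IOError as e:
        if e.errno == errno.EACCES:
            sys.exit("ERROR: cannot read config file %s: permission denied"
                     % path)
        if e.errno == errno.ENOENT:
            sys.exit("ERROR: config file %s not found" % path)
        raise
```

  Run as a non-root user against /etc/nova/nova.conf, this would print the
  permission-denied message and exit, rather than tracebacking.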

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1071799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1158684] Re: Pre-created ports get deleted on VM delete

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158684

Title:
  Pre-created ports get deleted on VM delete

Status in Group Based Policy:
  Confirmed
Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  1) Pre-create a port using port-create
  2) Boot a VM with nova boot --nic port_id=
  3) Delete the VM.

  Expected: the VM should boot using the port_id provided at boot time.
  When the VM is deleted, the port corresponding to the pre-created port_id 
should not get deleted, as many applications and security settings in a 
large network could depend on properties configured on the port.

  Observed behavior:
  There is no way I could prevent the port associated with the VM from 
being deleted by nova delete.
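
  The expected rule can be sketched as: on instance delete, only destroy
  ports nova created itself, and merely unbind ports the user pre-created
  (stub callables; this is not nova's actual network API):

```python
def teardown_ports(ports, delete_port, unbind_port):
    # ports: list of dicts with a flag recording who created the port
    for port in ports:
        if port["nova_created"]:
            delete_port(port["id"])   # nova-owned: safe to delete
        else:
            unbind_port(port["id"])   # user pre-created: keep it, with
                                      # its security settings, for reuse

deleted, unbound = [], []
teardown_ports(
    [{"id": "p1", "nova_created": True},
     {"id": "p2", "nova_created": False}],
    deleted.append, unbound.append)
assert deleted == ["p1"] and unbound == ["p2"]
```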

To manage notifications about this bug go to:
https://bugs.launchpad.net/group-based-policy/+bug/1158684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161661] Re: Rescheduling loses reasons

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161661

Title:
  Rescheduling loses reasons

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova.compute.manager, when an instance is rescheduled (for whatever
  reason), the exception that caused the rescheduling is only logged and
  not shown to the user in any fashion. In the extreme case this can
  leave the user with no idea what happened when rescheduling finally
  fails.

  For example:

  Say the following happens: scheduling instance 1 on hypervisor A fails
  with error X, so it is rescheduled to hypervisor B, which fails with
  error Y; then it cannot be rescheduled again because no more hypervisors 
are available (no more compute nodes). You then basically get an error 
saying there are no more hosts to schedule on, which is not connected to 
the original errors in any fashion.

  Likely there needs to be a record of the rescheduling exceptions, or
  rescheduling needs to be rethought so that an orchestration unit can
  perform it and be more aware of the rescheduling attempts (and their
  successes and failures).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405359] Re: Instance's numa_topology shouldn't be changed in NUMATopologyFilter

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405359

Title:
  Instance's numa_topology shouldn't be changed in NUMATopologyFilter

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In change https://review.openstack.org/#/c/133998,
  instance['numa_topology'] is set whenever a host passes the filter. When
  there are many hosts in the environment, instance['numa_topology'] is
  overwritten each time a host passes, so it ends up being the topology
  fitted to the last host that passed. But after weighting and random
  selection the instance may not boot on that last filtered host, which
  can make the boot fail because the numa_topology of the last filtered
  host may differ from the chosen host's.
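
  The underlying rule can be shown with a toy filter: compute the per-host
  fit, but keep it alongside the host instead of writing it back onto the
  shared request (a toy model, not nova's NUMATopologyFilter code):

```python
def fit(request, host):
    # toy placement: the request fits if the host has enough free cells
    if host["free_cells"] >= request["cells"]:
        return {"host": host["name"], "cells": request["cells"]}
    return None

def filter_hosts(hosts, request):
    passed = []
    for host in hosts:
        fitted = fit(request, host)
        if fitted is not None:
            # keep the fitted topology per host; do NOT store it on
            # `request`, since the finally-chosen host may differ
            passed.append((host["name"], fitted))
    return passed

hosts = [{"name": "a", "free_cells": 2}, {"name": "b", "free_cells": 1}]
request = {"cells": 1}
result = filter_hosts(hosts, request)
assert [name for name, _ in result] == ["a", "b"]
assert "numa_topology" not in request  # request left unmutated
```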

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417201] Re: nova-scheduler exception when trying to use hugepages

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417201

Title:
  nova-scheduler exception when trying to use hugepages

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I'm trying to make use of huge pages as described in
  "http://specs.openstack.org/openstack/nova-
  specs/specs/kilo/implemented/virt-driver-large-pages.html".  I'm
  running nova kilo as of Jan 27th.  The other openstack services are
  juno.  Libvirt is 1.2.8.

  I've allocated 1 2MB pages on a compute node.  "virsh
  capabilities" on that node contains:

  <topology>
    <cells num='2'>
      <cell id='0'>
        <memory unit='KiB'>67028244</memory>
        <pages unit='KiB' size='4'>16032069</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
  ...
      <cell id='1'>
        <memory unit='KiB'>67108864</memory>
        <pages unit='KiB' size='4'>16052224</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
  I then restarted nova-compute, I set "hw:mem_page_size=large" on a
  flavor, and then tried to boot up an instance with that flavor.  I got
  the error logs below in nova-scheduler.  Is this a bug?

  Feb  2 16:23:10 controller-0 nova-scheduler Exception during message 
handling: Cannot load 'mempages' in the base class
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in 
inner
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py", line 86, in 
select_destinations
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
67, in select_destinations
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 
138, in _schedule
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py", line 391, 
in get_filtered_hosts
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 77, in 
get_filtered_objects
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
list_objs = list(objs)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/filters.py", line 43, in filter_all
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py", line 
27, in _filter_one
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py",
 line 45, in host_passes
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib64/python2.7/site-packages/nova/virt/hardware.py", line 1161, in 
numa_fit_instance_to_host
  2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.di

[Yahoo-eng-team] [Bug 1419905] Re: Nova may not start instances when OS is installed with locale not en_US

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419905

Title:
  Nova may not start instances when OS is installed with locale not
  en_US

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova fails to start instances on compute nodes when the base OS
  (CentOS 7) is installed with a locale other than en_US.UTF-8 (e.g.
  es_ES.UTF-8).

  To avoid this bug, the base system should always be installed with
  locale "en_US.UTF-8" (US English) on all nodes, esp. Compute nodes.
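
  Until a fix lands, one workaround is to pin the locale the service runs
  under, e.g. via a systemd drop-in (the path below is illustrative; any
  mechanism that sets the environment of nova-compute works):

```ini
# /etc/systemd/system/openstack-nova-compute.service.d/locale.conf
# Force a known-good locale for the nova-compute process (workaround sketch).
[Service]
Environment=LANG=en_US.UTF-8
Environment=LC_ALL=en_US.UTF-8
```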

  This has been reported to RedHat RDO here:

  https://bugzilla.redhat.com/show_bug.cgi?id=1190837

  ---
  Built: 2015-02-05T19:26:09 00:00
  git SHA: e41ca113a7d9a8d30e2fa7009f4da82a26c3222b
  URL: 
http://docs.openstack.org/juno/install-guide/install/yum/content/ch_basic_environment.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/install-guide/ch_basic_environment.xml
  xml:id: ch_basic_environment

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419629] Re: HTTPInternalServerError if 'label' is missing in request body of create tenant network API

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419629

Title:
  HTTPInternalServerError if 'label' is missing in request body of
  create tenant network API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The create tenant network API expects the network 'label' as one of its
  inputs. If that is missing from the request body, a 500 error code is
  returned.

  This is not an appropriate error code for this case; the user should get
  a bad request error, which allows better error handling.

  Also, there are no negative tests covering this case.
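
  A sketch of the validation the API layer needs: check required fields up
  front and map the failure to 400, rather than letting a KeyError bubble
  up as 500 (the exception name here is a stand-in for the webob 400
  exception nova's API layer would use):

```python
class HTTPBadRequest(Exception):
    """Stand-in for the webob 400 exception used by the API layer."""

def create_network(body):
    network = body.get("network") or {}
    label = network.get("label")
    if not label:
        # A missing/empty label is a client error: respond 400, not 500.
        raise HTTPBadRequest("network label is required")
    return {"label": label}

assert create_network({"network": {"label": "net1"}})["label"] == "net1"
try:
    create_network({"network": {}})
    assert False, "expected HTTPBadRequest"
except HTTPBadRequest:
    pass
```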

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417649] Re: IP filtering is not accurate when used with limit

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417649

Title:
  IP filtering is not accurate when used with limit

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When applying an IP address filter to a servers query, the IP address
  filtering is manually applied in the compute API against the servers that are
  retrieved from the DB.

  The problem is when a limit is supplied; in this case, the IP address filter
  is only applied to the page of servers that are returned from the DB. For
  example, assume that you have 3 instances that match a given IP address filter
  and that those instances are returned from the DB in the 5th, 20th, and 100th
  positions. If you supply this IP address filter with a limit of 10, then only
  a single server is returned (the one in the 5th position). In this case, all
  3 instances should have been returned.

  A simple example (note that I manually added --limit to the CLI):

  * List all 3 serves:

   $ nova list --sort display_name:asc
   +--+---+--+
   | ID   | Name  | Networks |
   +--+---+--+
   | 65515d56-6103-43dd-ac58-238baabda422 | Instance1 | private=10.0.0.2 |
   | c9ab681f-e930-4e4e-814d-d6f1cf084480 | Instance2 | private=10.0.0.3 |
   | f1d6d9ef-e31d-46b5-86a2-da34b45007b0 | Instance3 | private=10.0.0.4 |
   +--+---+--+

  * Limit the list to a page size of 1:
   
   $ nova list --sort display_name:asc --limit 1
   +--+---+--+
   | ID   | Name  | Networks |
   +--+---+--+
   | 65515d56-6103-43dd-ac58-238baabda422 | Instance1 | private=10.0.0.2 |
   +--+---+--+

  * Supply only an IP address filter:

   $ nova list --sort display_name:asc --ip 10.0.0.3
   +--+---+--+
   | ID   | Name  | Networks |
   +--+---+--+
   | c9ab681f-e930-4e4e-814d-d6f1cf084480 | Instance2 | private=10.0.0.3 |
   +--+---+--+

  * Supply both an IP address filter and a limit (should show a single
  server):

   $ nova list --sort display_name:asc --ip 10.0.0.3 --limit 1
   ++--+--+
   | ID | Name | Networks |
   ++--+--+
   ++--+--+
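
  The behavior can be modeled in a few lines: applying the DB limit before
  the in-API IP filter drops matches that sit past the first page, while
  filtering first (or pushing the filter into the DB query) returns them
  (a toy model of the bug, not nova's actual code):

```python
# three servers, as in the CLI example above
servers = [{"id": n, "ip": "10.0.0.%d" % n} for n in range(2, 5)]

def list_buggy(ip, limit):
    page = servers[:limit]                     # DB applies the limit first
    return [s for s in page if s["ip"] == ip]  # API filters the page only

def list_fixed(ip, limit):
    matches = [s for s in servers if s["ip"] == ip]  # filter first
    return matches[:limit]                           # then limit

assert list_buggy("10.0.0.3", 1) == []          # match was on "page 2"
assert len(list_fixed("10.0.0.3", 1)) == 1
```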

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425571] Re: Unhelpful error message and log when virt_type is incorrect in nova.conf

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425571

Title:
  Unhelpful error message and log when virt_type is incorrect in
  nova.conf

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I came across this issue through a simple typo in the virt_type
  parameter in the file nova-compute.conf (same result if the parameter
  is defined in nova.conf), where I wrote:

  [libvirt]
  virt_type = quemu

  instead of "qemu".

  In the Horizon dashboard this led to an error with the explanation
  "Failure to prepping block device" (see image in attach), not only
  useless but misleading, since I thought it was a problem with cinder.

  Also the log file nova-compute.log was not so clear (see text file in
  attach): once again "Failure prepping block device" with the addition
  of this info: "NovaException: Unable to determine disk prefix for
  None".

  Only by analyzing the python code of the files in the traceback of the
  log (nova/compute/manager.py, nova/virt/libvirt/driver.py,
  nova/virt/libvirt/blockinfo.py) could I understand that the disk
  prefix (i.e. vd/hd/sd) is indeed bound to CONF.libvirt.virt_type (i.e.
  the virt_type parameter in nova.conf).

  Configuration files can be modified by operators and I don't think
  they should have to look at the code to get a clue of what is going
  on, especially for an error that can be simply (at least this is my
  first impression) caught and tracked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430223] Re: Live migration with ceph fails to cleanup instance directory on failure

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430223

Title:
  Live migration with ceph fails to cleanup instance directory on
  failure

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When doing a live migration of an instance using ceph for shared
  storage, if the migration fails then the instance directory will not
  be cleaned up on the destination host. The next attempt to do the live
  migration will fail with DestinationDiskExists, but will cleanup the
  directory.

  A simple way to test this is to setup a working system which allows a
  ceph instance to be live migrated, then delete the relevant ceph
  secret from libvirt on one of the hosts. Live migration to that host
  will fail, triggering this bug.
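
  The shape of the fix is a cleanup-on-failure around the destination
  instance directory (a sketch with stand-in names, not nova's code):

```python
import os
import shutil

def live_migrate(instance_dir, do_migration):
    # Create the destination instance directory, and remove it again if
    # the migration fails, so a retry does not hit DestinationDiskExists.
    os.makedirs(instance_dir)
    try:
        do_migration()
    except Exception:
        shutil.rmtree(instance_dir, ignore_errors=True)
        raise
```

  With this shape, a failed first attempt (e.g. because of a missing ceph
  secret in libvirt) leaves no stale directory behind on the destination.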

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432441] Re: do away with PROTOCOL_SSLv3

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432441

Title:
  do away with PROTOCOL_SSLv3

Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  In Debian Testing, Python 2.7 no longer includes the symbols
  PROTOCOL_SSLv2 and PROTOCOL_SSLv3. The attached patch fixes this
  against Nova trunk.

  Unrelated: The following command, run from /opt/stack in a devstack
  environment, did not yield any call to the function
  validate_ssl_version(), which is the only user of the affected data
  structure:

  find . -name '*.py' -type f |xargs grep -nF validate_ssl_version

  The nova version, according to the PKG-INFO file, is 2015.1.dev70.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421272] Re: Hyper-V: Attribute error when trying to spawn instance from vhd image

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421272

Title:
  Hyper-V: Attribute error when trying to spawn instance from vhd image

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When trying to boot an instance from a VHD image, we get:

  AttributeError: 'NoneType' object has no attribute 'root_gb'

  This happens when we try to get the root disk size from the old
  flavor. Since there is no old flavor on creation, instance.get_flavor
  returns None, hence the AttributeError when trying to return the
  root_gb.
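The usual defensive pattern is to fall back to the current flavor when there is no old one. A minimal sketch with stand-in objects (not Nova's real Instance class):

```python
# Sketch of the defensive pattern: on initial spawn there is no "old"
# flavor, so fall back to the current flavor's root disk size.
class Instance(object):
    """Stand-in for Nova's Instance; flavors are plain dicts here."""

    def __init__(self, flavor, old_flavor=None):
        self.flavor = flavor
        self.old_flavor = old_flavor


def get_root_gb(instance):
    # old_flavor is None at creation time; guard before dereferencing.
    flavor = instance.old_flavor if instance.old_flavor else instance.flavor
    return flavor["root_gb"]
```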

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367186] Re: Instances stuck with task_state of unshelving after RPC call timeout.

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367186

Title:
  Instances stuck with task_state of unshelving after RPC call timeout.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Instances get stuck with a task_state of unshelving after the RPC call
  between nova-conductor and nova-scheduler fails (because of, for example,
  a timeout) during the unshelve operation.

  The environment:
  Ubuntu 14.04 LTS(64bit)
  stable/icehouse(2014.1.2)
  (I could also reproduce it with
  master (commit: a1fa42f2ad11258f8b9482353e078adcf73ee9c2).)

  How to reproduce:
  1. create a VM instance
  2. shelve the VM instance
  3. stop nova-scheduler process
  4. unshelve the VM instance
  (The nova-conductor calls the nova-scheduler, but the RPC call times out.)

  Then the VM instance gets stuck with a task_state of unshelving (see the
  following). The VM instance remains stuck even after the nova-scheduler
  process starts again.

  stack@devstack-icehouse:/opt/devstack$ nova list
  +--------------------------------------+---------+-------------------+------------+-------------+-------------------+
  | ID                                   | Name    | Status            | Task State | Power State | Networks          |
  +--------------------------------------+---------+-------------------+------------+-------------+-------------------+
  | 12e488e8-1df1-479d-866e-51c3117e384b | server1 | SHELVED_OFFLOADED | unshelving | Shutdown    | public=10.0.2.194 |
  +--------------------------------------+---------+-------------------+------------+-------------+-------------------+

  nova-conductor.log:
  ---------------------------------------------------------------------------
  2014-09-09 18:18:13.263 13087 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Timed out waiting for a reply to message ID 934be80a9798443597f355d60fa08e56
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/conductor/manager.py", line 849, in unshelve_instance
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     instance)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/conductor/manager.py", line 816, in _schedule_instances
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     request_spec, filter_properties)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 103, in select_destinations
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     request_spec=request_spec, filter_properties=filter_properties)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     retry=self.retry)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     timeout=timeout, retry=retry)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 404, in send
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher     retry=retry)
  2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 393, in _send
  2014-09-09 18:18:13.263 13087 TR

[Yahoo-eng-team] [Bug 1424647] Re: Allow configuring proxy_host and proxy_port in nova.conf

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424647

Title:
  Allow configuring proxy_host and proxy_port in nova.conf

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Following patch I2d46b926f1c895aba412d84b4ee059fda3df9011,
  proxy_host/proxy_port configured in nova.conf or passed via the
  command line do not take effect for novncproxy, spicehtmlproxy,
  and the serial proxy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417798] Re: IP filtering can include duplicate instances

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417798

Title:
  IP filtering can include duplicate instances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The IP address filtering logic implemented in the compute API can duplicate
  instances if a given instance either:
  - Has a fixed IP address in more than one network that matches the filter
  - Has more than one fixed IP address in the same network that matches the filter

  For example:

  $ nova list
  +-+-+-+
  | ID  | Name| Networks|
  +-+-+-+
  | 123 | InstanceTest| gre_shared_1=192.168.0.11; network=194.168.0.14 |
  | 456 | InstanceOne | gre_shared_1=192.168.0.3|
  +-+-+-+

  $ nova list --ip 19
  +-+-+-+
  | ID  | Name| Networks|
  +-+-+-+
  | 123 | InstanceTest| gre_shared_1=192.168.0.11; network=194.168.0.14 |
  | 123 | InstanceTest| gre_shared_1=192.168.0.11; network=194.168.0.14 |
  | 456 | InstanceOne | gre_shared_1=192.168.0.3|
  +-+-+-+
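Deduplicating the filter result by instance ID while preserving order is straightforward. A hedged sketch, using plain dicts as stand-ins for the instance objects the compute API's IP filter returns:

```python
def dedup_instances(instances):
    """Drop duplicate instances (same ID) while preserving order.

    `instances` is a list of dicts with an 'id' key; a stand-in for the
    objects returned by the compute API's IP-address filter.
    """
    seen = set()
    unique = []
    for inst in instances:
        if inst["id"] not in seen:
            seen.add(inst["id"])
            unique.append(inst)
    return unique
```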

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title:
  Files without code should not contain copyright notices

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Cinder:
  Fix Committed
Status in Taskflow for task-oriented systems.:
  Fix Released

Bug description:
  Due to a recent policy change in HACKING
  (http://docs.openstack.org/developer/hacking/#openstack-licensing),
  empty files should no longer contain copyright notices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1262424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428481] Re: Wrong exception raised for PortNotFound and NetworkNotFound

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428481

Title:
  Wrong exception raised for PortNotFound and NetworkNotFound

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The HTTP exception should be HTTPNotFound instead of HTTPBadRequest for
  PortNotFound/NetworkNotFound in [1]:

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py#n124
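The fix amounts to translating these lookup failures to 404 rather than 400. A minimal sketch with stand-in exception classes; the mapping helper is illustrative, not Nova's actual code:

```python
# Stand-ins for the real Nova exception classes; illustrative only.
class PortNotFound(Exception):
    pass


class NetworkNotFound(Exception):
    pass


def http_status_for(exc):
    """Map an exception to the HTTP status the API should return."""
    if isinstance(exc, (PortNotFound, NetworkNotFound)):
        return 404  # HTTPNotFound: the referenced resource does not exist
    return 400      # HTTPBadRequest: reserved for genuinely bad input
```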

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419002] Re: nova does not complain if 'my_ip' is wrong

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419002

Title:
  nova does not complain if 'my_ip' is wrong

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If my_ip in the nova config does not exist on any interface of the compute
  host, nova-compute silently accepts it and cold migration fails.

  Expected behaviour: an error or warning if my_ip cannot be found on any
  interface.

  Nova version: 1:2014.2.1-0ubuntu1~cloud0
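One cheap way to implement such a check is to try binding a socket to the configured address, which only succeeds for locally assigned IPs. A sketch, illustrative rather than Nova's actual validation:

```python
import socket


def ip_is_local(ip):
    """Best-effort check that `ip` is assigned to a local interface.

    Binding a UDP socket to an address only succeeds when the address
    exists locally, so a failed bind suggests a misconfigured my_ip.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((ip, 0))  # port 0: any free port; no traffic is sent
        return True
    except OSError:
        return False
    finally:
        sock.close()
```

A startup hook could log a warning when `ip_is_local(CONF.my_ip)` returns False.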

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425485] Re: Method sqlalchemy.api._check_instance_exists has incorrect behavior

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425485

Title:
  Method sqlalchemy.api._check_instance_exists has incorrect behavior

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This method must raise an InstanceNotFound exception if there is no
  instance with the specified UUID. But now it raises the exception only if
  we have no instances at all. This happens because the filter on UUID in
  the SQLAlchemy query was missing.
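The difference between the buggy and fixed behavior can be shown with a plain sqlite3 stand-in for the SQLAlchemy query; the schema and helper names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT)")
conn.execute("INSERT INTO instances VALUES ('abc')")


def check_instance_exists_buggy(conn, uuid):
    # Missing WHERE clause: succeeds whenever ANY instance exists,
    # regardless of the UUID asked about.
    return conn.execute("SELECT 1 FROM instances").fetchone() is not None


def check_instance_exists_fixed(conn, uuid):
    # Filter on the UUID, as the fix adds to the SQLAlchemy query.
    row = conn.execute(
        "SELECT 1 FROM instances WHERE uuid = ?", (uuid,)).fetchone()
    return row is not None
```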

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419474] Re: VMware: Volume attach fails with "VMwareDriverException: A specified parameter was not correct."

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419474

Title:
  VMware: Volume attach fails with "VMwareDriverException: A specified
  parameter was not correct."

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Steps to reproduce:
  * Attach vol-1 to vm-1
  * Change host and datastore of vm-1
  * Detach vol-1 from vm-1
  * Attach vol-1 to vm-1

  
  2015-02-04 14:30:21.098 DEBUG oslo.vmware.exceptions [-] Fault InvalidArgument not matched. from (pid=21418) get_fault_class /usr/local/lib/python2.7/dist-packages/oslo/vmware/exceptions.py:249
  2015-02-04 14:30:21.098 ERROR oslo.vmware.common.loopingcall [-] in fixed duration looping call
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall Traceback (most recent call last):
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo/vmware/common/loopingcall.py", line 76, in _inner
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall     self.f(*self.args, **self.kw)
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall   File "/usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py", line 424, in _poll_task
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall     raise task_ex
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall VMwareDriverException: A specified parameter was not correct.
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall config.extraConfig["volume-e375857b-5b8f-409a-9303-db6d33956fe1"]
  2015-02-04 14:30:21.098 TRACE oslo.vmware.common.loopingcall
  2015-02-04 14:30:21.099 ERROR nova.virt.block_device [req-737b0e79-f91d-41b4-96f0-7449251545b9 admin demo] [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591] Driver failed to attach volume e375857b-5b8f-409a-9303-db6d33956fe1 at /dev/sdb
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591] Traceback (most recent call last):
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/block_device.py", line 249, in attach
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     device_type=self['device_type'], encryption=encryption)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 479, in attach_volume
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     return _volumeops.attach_volume(connection_info, instance)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 427, in attach_volume
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     self._attach_volume_vmdk(connection_info, instance)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 395, in _attach_volume_vmdk
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     self._update_volume_details(vm_ref, instance, data['volume_id'])
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 151, in _update_volume_details
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     vm_util.reconfigure_vm(self._session, vm_ref, extra_config_specs)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1625, in reconfigure_vm
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     session._wait_for_task(reconfig_task)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 668, in _wait_for_task
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]     return self.wait_for_task(task_ref)
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216ebb943591]   File "/usr/local/lib/python2.7/dist-packages/oslo/vmware/api.py", line 387, in wait_for_task
  2015-02-04 14:30:21.099 TRACE nova.virt.block_device [instance: 0f756e6b-f8c0-4efc-880e-216eb

[Yahoo-eng-team] [Bug 1422610] Re: No retries in 'network_set_host' function

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1422610

Title:
  No retries in 'network_set_host' function

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova's 'network_set_host' function, a db_exc.DBDeadlock can occur, or
  the function can return 0 rows updated. Both cases mean that concurrent
  transactions are trying to update the same row. In these cases we should
  retry the transaction and try to fetch another row to update.
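A retry wrapper of the suggested shape might look like this. A sketch with a stand-in DBDeadlock class, not oslo.db's real exception:

```python
import functools


class DBDeadlock(Exception):
    """Stand-in for oslo.db's db_exc.DBDeadlock."""


def retry_on_deadlock(max_retries=5):
    """Retry a DB update on deadlock or when zero rows were updated."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for _attempt in range(max_retries):
                try:
                    rows = func(*args, **kwargs)
                except DBDeadlock:
                    continue      # another transaction won; try again
                if rows:          # 0 rows updated: row was taken; retry
                    return rows
            raise RuntimeError("no row could be updated")
        return wrapper
    return decorator
```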

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1422610/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432465] Re: nova detach interface will get inconsistent if hypervisor failed to detach a port

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432465

Title:
  nova detach interface will get inconsistent if hypervisor failed to
  detach a port

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently, in compute api, detach_interface will delete neutron port
  first then calls hypervisor driver to do detach_interface on the guest.
  If the driver does detach_interface failed, in case of the driver raise
  an exception.InterfaceDetachFailed or other NovaExcptions, there is no
 handler for them. Besides this is an asyn rpc call, so nova-api will not
  notice this exception. End user will find the port has been deleted
  in neutron side, but guest still can see this port on guest, this is
  inconsistent.
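Reversing the ordering, detaching on the hypervisor first and deleting the neutron port only on success, avoids the inconsistency. A minimal sketch with stand-in driver/network objects, not Nova's actual fix:

```python
class InterfaceDetachFailed(Exception):
    """Stand-in for nova's exception.InterfaceDetachFailed."""


def detach_interface(driver, network_api, instance, port_id):
    """Detach on the hypervisor first; delete the neutron port only
    once the guest-side detach succeeded (sketch of the safer ordering).
    """
    driver.detach_interface(instance, port_id)   # may raise
    network_api.delete_port(port_id)             # reached only on success
```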

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203981] Re: EC2 hostnames are too long when launching multiple at once

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1203981

Title:
  EC2 hostnames are too long when launching multiple at once

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When launching multiple instances via ec2 you get hostnames like this:

  1: Server -
  2: Server -
  3: Server -
  4: Server -

  These turn out to be longer than 64 characters, which Unix doesn't
  like.
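A common remedy is truncating the generated name so the index suffix always fits within the 63-character hostname label limit. A sketch, not the actual Nova fix:

```python
def build_hostname(base, index, max_len=63):
    """Build a 'base-index' hostname, truncating the base so the whole
    name fits within the Unix hostname label length limit."""
    suffix = "-%d" % index
    return base[: max_len - len(suffix)] + suffix
```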

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1203981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403441] Re: remove detail method from the LimitsController class

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403441

Title:
  remove detail method from the LimitsController class

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We should remove the detail method from the LimitsController class,
  because detail is not the default method and the _setup_routes method of
  the APIRouter class does not add a detail operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417555] Re: Current AWS CLI and botocore do not work with EC2 API in nova

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417555

Title:
  Current AWS CLI and botocore do not work with EC2 API in nova

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  AWS CLI is Amazon's recommended CLI, and it uses the botocore library
  instead of boto to access the EC2 API:
  http://aws.amazon.com/cli/

  Amazon's URL for the AWS EC2 API is:
  https://ec2.amazonaws.com/

  OpenStack nova's EC2 API service URL is:
  http://some.server.com:8773/services/Cloud

  AWS CLI works with root URLs (without a trailing path, like
  ../services/Cloud). This is because of the following bug in botocore:
  https://github.com/boto/botocore/issues/420

  We did supply a fix here:
  https://github.com/Andrey-mp/botocore/commit/162bdc22de3ff3d6243459c132ca9bd01e02533f

  However, we do not control botocore and cannot predict when this fix will
  be applied, or guarantee that nothing like this happens again in the
  future (in fact, older botocore didn't have this bug).
  Another problem is that there is a set of Tempest tests currently used to
  work against stackforge/ec2-api, which is about to be employed against
  nova's EC2 API. These Tempest tests use botocore too, and won't work with
  nova's EC2 API until either botocore or nova is fixed.

  So we suggest fixing the situation in nova by changing the URL to
  http://some.server.com:8773/.
  This provides a solution for OpenStack that is independent of botocore
  bugs and fixes in this area, and makes the service URL more alike
  (compatible, sort of) with the AWS EC2 API.
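One way a trailing path like /services/Cloud can be lost is RFC 3986 relative-reference resolution, where a base URL without a trailing slash has its last segment replaced rather than appended to. A demonstration with urllib.parse; this illustrates the class of problem, not botocore's exact code path:

```python
from urllib.parse import urljoin

# With a non-root base URL, resolving a relative path silently replaces
# the last segment ("Cloud") instead of appending below it:
nova_style = urljoin("http://some.server.com:8773/services/Cloud", "foo")

# A root base URL has no last segment to lose:
root_style = urljoin("http://some.server.com:8773/", "foo")
```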

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406486] Re: Suspending an instance fails when using vnic_type=direct

2015-03-20 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Glance:
  New

Bug description:
  When launching an instance with a pre-created port with
  binding:vnic_type='direct', suspending the instance
  fails with the error 'NoneType' object has no attribute 'encode'.

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

