[Yahoo-eng-team] [Bug 1396703] [NEW] NoSuchOptError: no such option in group database: backend

2014-11-26 Thread Johannes Erdfelt
Public bug reported:

When running any of the database-specific test classes in
nova.tests.unit.db.test_migrations, each individual test will fail with
a traceback similar to this:

Traceback (most recent call last):
  File "/home/johannes/openstack/nova/nova/tests/unit/db/test_migrations.py", line 164, in test_compare_schema_alembic
    self.walk_versions(snake_walk=False, downgrade=False)
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/test_migrations.py", line 187, in walk_versions
    self.INIT_VERSION)
  File "/home/johannes/openstack/nova/nova/tests/unit/db/test_migrations.py", line 73, in INIT_VERSION
    return migration.db_initial_version()
  File "/home/johannes/openstack/nova/nova/db/migration.py", line 44, in db_initial_version
    return IMPL.db_initial_version()
  File "/home/johannes/openstack/nova/nova/utils.py", line 427, in __getattr__
    backend = self.__get_backend()
  File "/home/johannes/openstack/nova/nova/utils.py", line 410, in __get_backend
    backend_name = CONF[self.__config_group][self.__pivot]
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/config/cfg.py", line 2313, in __getitem__
    return self.__getattr__(key)
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/config/cfg.py", line 2309, in __getattr__
    return self._conf._get(name, self._group)
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/config/cfg.py", line 2043, in _get
    value = self._do_get(name, group, namespace)
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/config/cfg.py", line 2061, in _do_get
    info = self._get_opt_info(name, group)
  File "/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/oslo/config/cfg.py", line 2189, in _get_opt_info
    raise NoSuchOptError(opt_name, group)
NoSuchOptError: no such option in group database: backend

This appears to be because of an incorrect use of conf_fixture.
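
The failure reduces to looking up a config option that was never
registered. A minimal sketch of the same lookup outside the test suite
(hypothetical reproduction, not nova code):

    from oslo.config import cfg

    conf = cfg.ConfigOpts()
    conf.register_group(cfg.OptGroup('database'))
    # 'backend' was never registered under the 'database' group, so this
    # lookup raises NoSuchOptError, exactly as in the traceback above
    conf['database']['backend']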

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: In Progress

https://bugs.launchpad.net/bugs/1396703


[Yahoo-eng-team] [Bug 1378395] [NEW] Slow MySQL queries with lots of deleted instances

2014-10-07 Thread Johannes Erdfelt
Public bug reported:

While analyzing the slow query log in our public cloud, we ran across
this slow query:

# Query_time: 21.113669  Lock_time: 0.000485 Rows_sent: 46  Rows_examined: 848516
SET timestamp=1412484367;
SELECT anon_1.instances_created_at AS anon_1_instances_created_at, 
anon_1.instances_updated_at AS anon_1_instances_updated_at, 
anon_1.instances_deleted_at AS anon_1_instances_deleted_at, 
anon_1.instances_deleted AS anon_1_instances_deleted, anon_1.instances_id AS 
anon_1_instances_id, anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, 
anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data AS anon_1_instances_key_data, 
anon_1.instances_power_state AS anon_1_instances_power_state, 
anon_1.instances_vm_state AS anon_1_instances_vm_state, 
anon_1.instances_task_state AS anon_1_instances_task_state,
anon_1.instances_memory_mb AS anon_1_instances_memory_mb, anon_1.instances_vcpus AS
anon_1_instances_vcpus, anon_1.instances_root_gb AS anon_1_instances_root_gb, 
anon_1.instances_ephemeral_gb AS anon_1_instances_ephemeral_gb, 
anon_1.instances_ephemeral_key_uuid AS anon_1_instances_ephemeral_key_uuid, 
anon_1.instances_host AS anon_1_instances_host, anon_1.instances_node AS 
anon_1_instances_node, anon_1.instances_instance_type_id AS 
anon_1_instances_instance_type_id, anon_1.instances_user_data AS 
anon_1_instances_user_data, anon_1.instances_reservation_id AS 
anon_1_instances_reservation_id, anon_1.instances_scheduled_at AS 
anon_1_instances_scheduled_at, anon_1.instances_launched_at AS 
anon_1_instances_launched_at, anon_1.instances_terminated_at AS 
anon_1_instances_terminated_at, anon_1.instances_availability_zone AS 
anon_1_instances_availability_zone, anon_1.instances_display_name AS 
anon_1_instances_display_name, anon_1.instances_display_description AS 
anon_1_instances_display_description,
anon_1.instances_launched_on AS anon_1_instances_launched_on,
anon_1.instances_locked AS anon_1_instances_locked, anon_1.instances_locked_by 
AS anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS 
anon_1_instances_access_ip_v4, anon_1.instances_access_ip_v6 AS 
anon_1_instances_access_ip_v6, anon_1.instances_auto_disk_config AS 
anon_1_instances_auto_disk_config, anon_1.instances_progress AS 
anon_1_instances_progress,
anon_1.instances_shutdown_terminate AS anon_1_instances_shutdown_terminate,
anon_1.instances_disable_terminate AS
anon_1_instances_disable_terminate, anon_1.instances_cell_name AS 
anon_1_instances_cell_name, anon_1.instances_internal_id AS 
anon_1_instances_internal_id, anon_1.instances_cleaned AS 
anon_1_instances_cleaned, security_groups_1.created_at AS 
security_groups_1_created_at, security_groups_1.updated_at AS 
security_groups_1_updated_at, security_groups_1.deleted_at AS 
security_groups_1_deleted_at, security_groups_1.deleted AS 
security_groups_1_deleted, security_groups_1.id AS security_groups_1_id, 
security_groups_1.name AS security_groups_1_name, security_groups_1.description 
AS security_groups_1_description, security_groups_1.user_id AS 
security_groups_1_user_id, security_groups_1.project_id AS 
security_groups_1_project_id, instance_info_caches_1.created_at AS 
instance_info_caches_1_created_at, instance_info_caches_1.updated_at AS 
instance_info_caches_1_updated_at, instance_info_caches_1.deleted_at AS 
instance_info_caches_1_deleted_at,
instance_info_caches_1.deleted AS instance_info_caches_1_deleted,
instance_info_caches_1.id AS instance_info_caches_1_id, 
instance_info_caches_1.network_info AS instance_info_caches_1_network_info, 
instance_info_caches_1.instance_uuid AS instance_info_caches_1_instance_uuid
FROM (SELECT instances.created_at AS instances_created_at, instances.updated_at 
AS instances_updated_at, instances.deleted_at AS instances_deleted_at, 
instances.deleted AS instances_deleted, instances.id AS instances_id, 
instances.user_id AS instances_user_id, instances.project_id AS 
instances_project_id, instances.image_ref AS instances_image_ref, 
instances.kernel_id AS instances_kernel_id, instances.ramdisk_id AS 

[Yahoo-eng-team] [Bug 1378088] [NEW] nova/tests/virt/vmwareapi/test_vmops:test_spawn_mask_block_device_info_password doesn't correctly assert password is scrubbed

2014-10-06 Thread Johannes Erdfelt
Public bug reported:

While looking at some new code, I noticed this test has a bug.

It's easy to reproduce: just remove the call to logging.mask_password
(but keep the LOG.debug) in nova/virt/vmwareapi/vmops.py:spawn. The test
will still pass.

The reason is that failed assertions raise exceptions that are a
subclass of Exception.

The test catches anything derived from Exception and silently ignores
them, including any failed assertions.
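
A self-contained sketch of the anti-pattern (hypothetical test, not the
actual vmwareapi test code):

    import unittest

    class MaskPasswordTest(unittest.TestCase):
        def test_password_is_scrubbed(self):
            logged = 'block_device_info: {password: secret}'  # never masked
            try:
                self.assertNotIn('secret', logged)  # this assertion fails...
            except Exception:
                pass  # ...but AssertionError derives from Exception, so the
                      # failure is swallowed and the test reports success

    if __name__ == '__main__':
        unittest.main()  # prints OK even though the assertion failed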

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1378088


[Yahoo-eng-team] [Bug 1362799] [NEW] Hard reboot escalation regression

2014-08-28 Thread Johannes Erdfelt
Public bug reported:

Nova used to allow a hard reboot when an instance is already being soft
rebooted. However, with commit cc0be157d005c5588fe5db779fc30fefbf22b44d,
this is no longer allowed.

This is because two new task states were introduced, REBOOT_PENDING and
REBOOT_STARTED (and corresponding values for hard reboots). A soft
reboot now spends most of its time in REBOOT_STARTED instead of
REBOOTING.

REBOOT_PENDING and REBOOT_STARTED were not added to the
@check_instance_state decorator. As a result, an attempt to hard reboot
an instance which is stuck trying to do a soft reboot will now fail with
an InstanceInvalidState exception.
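
A minimal, self-contained sketch of the decorator pattern and the gap
(all names are simplified stand-ins, not nova's actual code):

    REBOOTING = 'rebooting'
    REBOOT_PENDING = 'reboot_pending'
    REBOOT_STARTED = 'reboot_started'

    class InstanceInvalidState(Exception):
        pass

    def check_instance_state(task_state):
        def decorator(func):
            def wrapper(instance):
                if instance['task_state'] not in task_state:
                    raise InstanceInvalidState(instance['task_state'])
                return func(instance)
            return wrapper
        return decorator

    # Listing only None and REBOOTING reproduces the regression: an
    # instance stuck in REBOOT_STARTED raises InstanceInvalidState on a
    # hard reboot. Adding the two new states restores the old escalation.
    @check_instance_state(task_state=[None, REBOOTING,
                                      REBOOT_PENDING, REBOOT_STARTED])
    def hard_reboot(instance):
        instance['task_state'] = REBOOTING

    hard_reboot({'task_state': REBOOT_STARTED})  # succeeds with the fix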

This makes for a poor user experience, since a reboot is often attempted
precisely because an instance isn't responsive. A soft reboot is not
guaranteed to work even when the system is responsive, and a stuck soft
reboot now prevents a hard reboot from being performed.

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1362799


[Yahoo-eng-team] [Bug 1343331] [NEW] quota_usages and pci_devices tables have columns with mismatching nullable attribute

2014-07-17 Thread Johannes Erdfelt
Public bug reported:

The database model defines these columns:

quota_usages
resource = Column(String(255), nullable=False)

pci_devices
deleted = Column(Integer, default=0)
vendor_id = Column(String(4), nullable=False)
product_id = Column(String(4), nullable=False)
dev_type = Column(String(8), nullable=False)

However, the tables were created with different nullable attributes in
the database migrations:

quota_usages
Column('resource', String(length=255)),

pci_devices
Column('deleted', Integer, default=0, nullable=False),
Column('product_id', String(4)),
Column('vendor_id', String(4)),
Column('dev_type', String(8)),

It appears that the model is correct in all cases, so a database
migration should be added to make the applied schema match the model.
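
A hedged sketch of the kind of migration implied, in the
sqlalchemy-migrate style nova used at the time (the .alter() call comes
from that framework's changeset extension; the details are guesses, not
the actual patch):

    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)

        quota_usages = Table('quota_usages', meta, autoload=True)
        quota_usages.c.resource.alter(nullable=False)

        pci_devices = Table('pci_devices', meta, autoload=True)
        # the model leaves 'deleted' nullable but marks the others NOT NULL
        pci_devices.c.deleted.alter(nullable=True)
        for name in ('vendor_id', 'product_id', 'dev_type'):
            pci_devices.c[name].alter(nullable=False)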

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: In Progress

https://bugs.launchpad.net/bugs/1343331


[Yahoo-eng-team] [Bug 1342834] [NEW] pci_devices.compute_node_id foreign key never actually created

2014-07-16 Thread Johannes Erdfelt
Public bug reported:

The model in nova/db/sqlalchemy/models.py defines the compute_node_id
column with a foreign key to compute_nodes.id.

However, neither the 213 migration (which initially introduced the
pci_devices table) nor the collapsed 216 migration actually creates that
foreign key.

It looks like the model is correct, so the foreign key should exist and
a migration should be added.
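
A hedged sketch of the corresponding migration (sqlalchemy-migrate
style, as nova migrations used at the time; not the actual patch):

    from migrate import ForeignKeyConstraint
    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        pci_devices = Table('pci_devices', meta, autoload=True)
        compute_nodes = Table('compute_nodes', meta, autoload=True)
        # create the foreign key the model has always claimed exists
        ForeignKeyConstraint(columns=[pci_devices.c.compute_node_id],
                             refcolumns=[compute_nodes.c.id]).create()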

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1342834


[Yahoo-eng-team] [Bug 1329538] [NEW] Failed delete leaves instance in undeletable state

2014-06-12 Thread Johannes Erdfelt
Public bug reported:

A recent change altered the task_state behavior of instance deletes:

https://review.openstack.org/#/c/58829/

This leaves the task_state unmodified after a failed delete. Since the
task_state is unmodified, subsequent attempts to delete the instance are
skipped with the message "Instance is already in deleting state,
ignoring this request."

This is because the task_state remains 'deleting' and the delete code
thinks another delete is already happening.
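
A hedged, self-contained sketch of the mechanics and the kind of fix
implied (names simplified, not the actual patch):

    class Instance(object):
        task_state = None

    def delete(instance):
        if instance.task_state == 'deleting':
            print('Instance is already in deleting state, ignoring this request')
            return
        instance.task_state = 'deleting'
        try:
            raise RuntimeError('compute failure')  # stand-in for a failed delete
        except Exception:
            instance.task_state = None  # the reset: without this line,
            raise                       # every retry is ignored above

    inst = Instance()
    for attempt in (1, 2):
        try:
            delete(inst)
        except RuntimeError:
            print('delete attempt %d failed' % attempt)  # both attempts retry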

At a minimum, an admin needs to intervene and reset the task_state. This
is a regression from previous behavior where retrying a delete would
attempt to delete the instance again.

This has been causing problems for the OpenStack CI infrastructure where
some instances initially failed to delete and reissuing delete requests
doesn't work.

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1329538


[Yahoo-eng-team] [Bug 1324277] [NEW] Use of finally/return considered harmful

2014-05-28 Thread Johannes Erdfelt
Public bug reported:

Doing a return from a finally block will end up silently dropping
exceptions.
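
A self-contained demonstration of the semantics:

    def f():
        try:
            raise RuntimeError('never seen')
        finally:
            # returning from a finally block discards the in-flight
            # exception; f() returns normally instead of raising
            return 'ok'

    print(f())  # prints 'ok'; the RuntimeError is silently dropped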

This can cause unexpected behavior at runtime where unhandled exceptions
are silently dropped when not intended.

This has caused some tests that should fail because of API changes to
end up passing.

Examples are test_init_instance_stuck_in_deleting and
test_init_instance_deletes_error_deleting_instance in
nova/tests/compute/test_compute_mgr.py. The _delete_instance method that
is being mocked out has changed, but the finally/return in
nova/compute/manager.py:_init_instance has ended up ignoring the test
failures, causing the tests to continue passing.

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1324277


[Yahoo-eng-team] [Bug 1302831] [NEW] hacking/flake8 skips most xenapi plugins

2014-04-04 Thread Johannes Erdfelt
Public bug reported:

This appears to be because the plugins themselves don't have filenames
that end in .py, so they get skipped. Only the few files in there that
end in .py are being checked.
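
A hedged sketch of the selection behavior (this mimics flake8's default
*.py filename filter; it is not flake8's actual code):

    import fnmatch
    import os

    def collected_for_checking(path, pattern='*.py'):
        # only names matching the pattern are picked up for checking
        return [name for name in os.listdir(path)
                if fnmatch.fnmatch(name, pattern)]

    # Extensionless xenapi plugin scripts never match '*.py', so they are
    # silently skipped unless passed to the checker explicitly.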

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1302831


[Yahoo-eng-team] [Bug 1296818] [NEW] xenapi: vm_mode cannot be changed during rebuild

2014-03-24 Thread Johannes Erdfelt
Public bug reported:

When rebuilding an instance to a new image with a different effective
vm_mode, the change isn't seen and the original vm_mode is used. This
causes problems when going from HVM to PV, leading to an instance that
cannot boot.
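
A hedged, self-contained sketch of the idea behind a fix (names are
hypothetical; nova's real code differs):

    def effective_vm_mode(new_image_properties, instance_vm_mode):
        # a rebuild should honor the new image's vm_mode and only fall
        # back to the instance's original mode when the image is silent
        return new_image_properties.get('vm_mode') or instance_vm_mode

    assert effective_vm_mode({'vm_mode': 'xen'}, 'hvm') == 'xen'  # PV wins
    assert effective_vm_mode({}, 'hvm') == 'hvm'                  # fallback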

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Johannes Erdfelt (johannes.erdfelt)

** Changed in: nova
   Status: New => In Progress

https://bugs.launchpad.net/bugs/1296818


[Yahoo-eng-team] [Bug 1290903] [NEW] xenapi: test_rescue incorrectly verifies original swap wasn't attached

2014-03-11 Thread Johannes Erdfelt
Public bug reported:

The code currently does:

vdi_uuids = []
for vbd_uuid in rescue_vm['VBDs']:
    vdi_uuids.append(xenapi_fake.get_record('VBD', vbd_uuid)['VDI'])
self.assertNotIn('swap', vdi_uuids)

vdi_uuids is a list of uuid references. The string 'swap' will never
match a uuid, so that assertion will always pass, even if the code is
broken.

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Johannes Erdfelt (johannes.erdfelt)

https://bugs.launchpad.net/bugs/1290903


[Yahoo-eng-team] [Bug 1286187] [NEW] plugins/xenserver/networking/etc/xensource/scripts/novalib.py uses subprocess incorrectly

2014-02-28 Thread Johannes Erdfelt
Public bug reported:

Neither execute_get_output() nor execute() waits until the process has
finished running.

execute_get_output() probably hasn't caused a problem since it at least
does one read and the commands it runs likely would finish (but this
isn't guaranteed).

execute() sets up a PIPE for the process stdout, but doesn't do any
reads before returning to the caller. This could make the code execute
multiple processes in parallel, leading to a race condition that could
cause commands to execute in the opposite order from what is intended.
It could potentially also cause the process to block on writes to the
PIPE that isn't being read, so it never finishes executing.
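
A hedged sketch of the safer pattern (not the plugin's actual code):

    import subprocess

    def execute_get_output(*command):
        # communicate() drains stdout to EOF and waits for the process to
        # exit, avoiding both the partial-read problem and a full-pipe stall
        proc = subprocess.Popen(command, stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        return out.strip()

    def execute(*command):
        # without a pipe there is nothing to drain; wait() guarantees the
        # command has finished before the caller starts the next one
        proc = subprocess.Popen(command)
        proc.wait()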

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1286187