[Yahoo-eng-team] [Bug 1835037] Re: Upgrade from bionic-rocky to bionic-stein failed migrations.

2019-11-10 Thread Ryan Beisner
The fix has merged to master and is included in the current stable
charms as of 19.10.

** Changed in: charm-nova-cloud-controller
   Status: In Progress => Fix Committed

** Changed in: charm-nova-cloud-controller
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1835037

Title:
  Upgrade from bionic-rocky to bionic-stein failed migrations.

Status in OpenStack nova-cloud-controller charm:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  We were trying to upgrade from rocky to stein using the charm
  procedure described here:

  https://docs.openstack.org/project-deploy-guide/charm-deployment-
  guide/latest/app-upgrade-openstack.html

  and we ran into the following problem:

  
  2019-07-02 09:56:44 ERROR juju-log online_data_migrations failed
  Running batches of 50 until complete
  Error attempting to run 
  9 rows matched query populate_user_id, 0 migrated
  +---------------------------------------------+--------------+-----------+
  | Migration                                   | Total Needed | Completed |
  +---------------------------------------------+--------------+-----------+
  | create_incomplete_consumers                 |      0       |     0     |
  | delete_build_requests_with_no_instance_uuid |      0       |     0     |
  | fill_virtual_interface_list                 |      0       |     0     |
  | migrate_empty_ratio                         |      0       |     0     |
  | migrate_keypairs_to_api_db                  |      0       |     0     |
  | migrate_quota_classes_to_api_db             |      0       |     0     |
  | migrate_quota_limits_to_api_db              |      0       |     0     |
  | migration_migrate_to_uuid                   |      0       |     0     |
  | populate_missing_availability_zones         |      0       |     0     |
  | populate_queued_for_delete                  |      0       |     0     |
  | populate_user_id                            |      9       |     0     |
  | populate_uuids                              |      0       |     0     |
  | service_uuids_online_data_migration         |      0       |     0     |
  +---------------------------------------------+--------------+-----------+
  Some migrations failed unexpectedly. Check log for details.
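  The batch loop behind that output ("Running batches of 50 until
  complete") can be sketched roughly as follows; this is an illustrative
  model of what the online_data_migrations runner does, with
  hypothetical names, not nova's actual code:

```python
# Illustrative model (hypothetical names, not nova's internals) of the
# "Running batches of 50 until complete" loop shown in the error above.

def run_online_migrations(migrations, batch_size=50):
    """Run each (name, fn) migration in batches; fn(count) returns
    (rows_matched, rows_migrated) for one batch of up to `count` rows."""
    results = {}
    failed = []
    for name, migrate in migrations:
        needed = migrated = 0
        while True:
            try:
                found, done = migrate(batch_size)
            except Exception:
                failed.append(name)
                break
            needed += found
            migrated += done
            if found == 0:
                break  # nothing left for this migration
            if done == 0:
                # Rows matched the query but none could be migrated:
                # the populate_user_id case above (9 matched, 0 migrated),
                # which the runner reports as a failure.
                failed.append(name)
                break
        results[name] = (needed, migrated)
    return results, failed
```

  Note how populate_user_id's pattern in the table (9 rows matched, 0
  migrated) is exactly the "stuck" case such a runner flags as failed.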

  What should we do to get this fixed?

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1835037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714235] Re: os-migrateLive API does not restrict one from trying to migrate to the original host

2019-11-10 Thread Matt Riedemann
I'm just going to invalidate this. If you're using microversion < 2.34
you'll get a 400 because of the UnableToMigrateToSelf error from
conductor, and if you're using >= 2.34 you'll get a 202 response but the
operation will fail and a fault will be recorded; there would just be
more work to do, e.g. swapping allocations, calling the scheduler,
reverting the allocation swap, etc. As long as the instance isn't put
into ERROR state, though, we're probably OK.
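The microversion split described here can be illustrated with a small
sketch (hypothetical helper, not nova's actual code):

```python
# Illustrative sketch (not nova's actual code) of the API behaviour
# described above for live migration targeting the instance's own host.

class UnableToMigrateToSelf(Exception):
    """Raised by conductor when the target host equals the current host."""

def live_migrate_status(instance_host, target_host, microversion):
    """Return the HTTP status the API caller would observe."""
    if microversion < (2, 34):
        # Pre-2.34: validated synchronously; conductor raises
        # UnableToMigrateToSelf, which the API surfaces as a 400.
        if target_host == instance_host:
            return 400
        return 202
    # 2.34 and later: the request is always accepted with a 202; a
    # same-host target fails asynchronously and a fault is recorded.
    return 202
```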

** Changed in: nova
 Assignee: jichenjc (jichenjc) => (unassigned)

** Changed in: nova
   Status: In Progress => Invalid

https://bugs.launchpad.net/bugs/1714235

Title:
  os-migrateLive API does not restrict one from trying to migrate to the
  original host

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is purely based on code inspection, but the compute API method
  'evacuate' does not check whether the specified host (if one was
  given) is different from instance.host. It only checks whether the
  service on that host is up; the service could be down, and you can
  still specify instance.host as the target.

  Eventually the compute API will RPC cast to conductor task manager
  which will fail with an RPC error trying to RPC cast to the
  ComputeManager.rebuild_instance method on the compute service, which
  is down.

  The bug here is that instead of getting an obvious 400 error from the
  API, you're left with very little detail when it fails. There should
  be an instance action and finish event, but only the admin can see the
  traceback in the event. Also, instance.task_state would be left in the
  'rebuilding' state and would have to be reset before the instance
  could be used again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714235/+subscriptions



[Yahoo-eng-team] [Bug 1732363] Re: Use subprocess securely

2019-11-10 Thread Matt Riedemann
I'm going to invalidate this because the code being pointed out is just
dev/test scripts, not production runtime code. It can be fixed if
someone wants to restore and rework the patch but it's super low
priority IMO.

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: In Progress => Invalid

https://bugs.launchpad.net/bugs/1732363

Title:
  Use subprocess securely

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Due to 
https://docs.openstack.org/bandit/latest/plugins/subprocess_popen_with_shell_equals_true.html,
  and reference 
https://security.openstack.org/guidelines/dg_avoid-shell-true.html and
  https://security.openstack.org/guidelines/dg_use-subprocess-securely.html,
  we should use python pipes to avoid shells and use subprocess more securely.
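  As a minimal illustration of the guideline (a hypothetical helper,
  not nova code): pass an argument list and avoid shell=True so input
  is never interpreted by a shell:

```python
import subprocess

def count_lines(path):
    """Count lines in a file without invoking a shell."""
    # Unsafe variant flagged by Bandit:
    #   subprocess.check_output("wc -l %s" % path, shell=True)
    # Safe variant: no shell; arguments are passed as a list, so a
    # malicious path like "x; rm -rf /" is treated as just a filename.
    out = subprocess.check_output(["wc", "-l", "--", path])
    return int(out.split()[0])
```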

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1732363/+subscriptions
