[Yahoo-eng-team] [Bug 1475652] Re: libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

2016-10-14 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475652

Title:
  libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  Reproduced on Juno (actually tested on a fork of 2014.2.3; apologies
  in advance if this is invalid, but I believe the legacy version is
  affected as well).

  Not tested on newer versions, but from reading the code they appear
  to be affected too.

  For the Rbd image backend only, unrescuing an instance does not
  actually delete the disk.rescue file on remote storage (only the rbd
  session is destroyed).

  Consequence: when the instance is rescued again, the new rescue
  image is simply ignored and the old _disk.rescue image is used
  instead.

  Reproduce:

  1. nova rescue instance

  (Make sure you actually booted from the vda rescue disk. When
  rescuing an instance from the same image it was spawned from (the
  default case), the filesystem UUID is identical, so depending on
  your image's fstab (if it uses UUID= entries) you can end up booting
  from the very disk you are trying to rescue. That is a separate
  template-building issue, though; see
  https://bugs.launchpad.net/nova/+bug/1460536.)

  Edit the rescue image disk.

  2. nova unrescue instance

  3. nova rescue instance -> you get back the disk.rescue created in step 1
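  The stale-image behavior in the steps above can be modeled with a
  small, self-contained sketch (plain Python, no Ceph required; the
  `pool` dict and the image names are illustrative stand-ins, not
  nova's actual data structures):

  ```python
  # Toy model of the buggy flow: unrescue tears down the domain but
  # never deletes the <name>_disk.rescue image from the RBD pool.
  pool = {}  # stands in for the remote RBD pool: image name -> contents

  def rescue(instance, rescue_image):
      name = instance + "_disk.rescue"
      # Like the buggy backend: only create the image if it is absent,
      # so a leftover image from an earlier rescue is silently reused.
      if name not in pool:
          pool[name] = rescue_image
      return pool[name]

  def unrescue(instance):
      # Bug: the rbd session is torn down, but the image is NOT removed.
      pass

  first = rescue("inst-1", "rescue-image-v1")   # step 1
  unrescue("inst-1")                            # step 2
  second = rescue("inst-1", "rescue-image-v2")  # step 3
  print(second)  # still "rescue-image-v1" -> the stale disk.rescue
  ```

  The second rescue asks for a new image but gets the old one back,
  which is exactly the symptom reported above.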

  If confirmed, a fix is coming soon.

  Two possible fixes:
  - nova.virt.libvirt.driver:LibvirtDriver -> have the unrescue method
  delete the correct files, or
  - nova.virt.libvirt.imagebackend:Rbd -> have the create-image method
  erase disk.rescue if it already exists.
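  The first option can be sketched as follows (a hedged illustration in
  plain Python; `pool`, `_rescue_name`, and the image names are made-up
  stand-ins for illustration, not nova's real API):

  ```python
  # Sketch of fix option 1: have unrescue delete the <name>_disk.rescue
  # image, so the next rescue starts from a fresh rescue image.
  pool = {}  # stand-in for the remote RBD pool: image name -> contents

  def _rescue_name(instance):
      return instance + "_disk.rescue"

  def rescue(instance, rescue_image):
      name = _rescue_name(instance)
      if name not in pool:           # same create-if-absent behavior
          pool[name] = rescue_image
      return pool[name]

  def unrescue(instance):
      # The fix: remove the rescue image when tearing down the rescue.
      pool.pop(_rescue_name(instance), None)

  rescue("inst-1", "rescue-image-v1")
  unrescue("inst-1")
  fresh = rescue("inst-1", "rescue-image-v2")
  print(fresh)  # "rescue-image-v2" -> the new rescue image is used
  ```

  With the image removed on unrescue, the create-if-absent path in the
  next rescue no longer finds a stale disk.rescue to reuse.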

  Rebuild is not affected by this issue, and deleting the instance
  correctly removes the files from remote storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475652] Re: libvirt, rbd imagebackend, disk.rescue not deleted when unrescued

2016-10-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/314928
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=c12d388070895e40be19f4f4e5fded736a5376be
Submitter: Jenkins
Branch: master

commit c12d388070895e40be19f4f4e5fded736a5376be
Author: Bartek Zurawski 
Date:   Tue May 10 17:31:19 2016 +0200

Fix issue with not removing rbd rescue disk

Currently, when an instance that uses RBD as its backend is
rescued and then unrescued, the rescue image is not removed.
This causes an issue: when the same instance is rescued again,
it uses the old rescue image rather than the new one.

Change-Id: Idf4086303baa4b936c90be89552ad8deb45cef3a
Closes-Bug: #1475652


** Changed in: nova
   Status: In Progress => Fix Released
