Hello Brett, or anyone else affected,

Accepted qemu into focal-proposed. The package will build now and be
available at https://launchpad.net/ubuntu/+source/qemu/1:4.2-3ubuntu6.25
in a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us in getting this
update out to other Ubuntu users.
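
For convenience, a minimal sketch of one way to enable focal-proposed
for this verification follows (the wiki page above is the authoritative
reference; the file path below is only an assumption, any sources.list
fragment works the same way):

~~~
# Hypothetical path: /etc/apt/sources.list.d/focal-proposed.list
deb http://archive.ubuntu.com/ubuntu focal-proposed restricted main multiverse universe
~~~

After an apt update, installing the qemu binary packages with apt's
target-release option (for example "apt install -t focal-proposed
qemu-system-x86") pulls the named package and any required dependencies
from -proposed without moving the rest of the system off -updates.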

If this package fixes the bug for you, please add a comment to this
bug, mentioning the version of the package you tested and what testing
has been performed on the package, and change the tag from
verification-needed-focal to verification-done-focal. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-failed-focal. In either case, without details of
your testing we will not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

** Changed in: qemu (Ubuntu Bionic)
       Status: In Progress => Fix Committed

** Tags added: verification-needed-bionic

-- 
You received this bug notification because you are a member of SE
("STS") Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1994002

Title:
  [SRU] migration was active, but no RAM info was set

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Bionic:
  Fix Committed
Status in qemu source package in Focal:
  Fix Committed
Status in qemu source package in Jammy:
  Fix Committed
Status in qemu source package in Kinetic:
  Fix Released

Bug description:
  [Impact]

   * While live-migrating many instances concurrently, libvirt sometimes
  returns `internal error: migration was active, but no RAM info was
  set:`

   * Effects of this bug are mostly observed in large-scale clusters
  with a lot of live migration activity.

   * Has second-order effects for consumers of the migration monitor,
  such as libvirt and OpenStack.

  [Test Case]

  Synthetic reproducer with GDB in comment #21.

  Steps to Reproduce:
  1. Live-evacuate a compute node (many concurrent live migrations)
  2. Live migration of one or more instances fails with the above error

  N.B. Due to the nature of this bug it is difficult to reproduce
  consistently. In an environment where it has been observed, it is
  estimated to occur in approximately 1 in 1000 migrations.
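
  For context on where the error surfaces, the sketch below (not the
  GDB reproducer from comment #21) uses the libvirt Python bindings to
  poll the migration job statistics for a domain in the same way nova's
  migration monitor does; on an affected qemu the jobStats() call can
  intermittently fail with the internal error quoted above. The
  connection URI and domain name are placeholders.

~~~
#!/usr/bin/env python3
# Sketch only: poll job statistics for a domain that is being
# live-migrated; this is the call path on which the error is reported.
import time
import libvirt

DOMAIN_NAME = "instance-00000001"   # placeholder domain under migration

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(DOMAIN_NAME)
try:
    for _ in range(600):            # poll for up to ~10 minutes
        # On an affected qemu this can intermittently raise:
        #   libvirt.libvirtError: internal error: migration was
        #   active, but no RAM info was set
        stats = dom.jobStats()
        print("migration job stats:", stats)
        time.sleep(1)
except libvirt.libvirtError as exc:
    print("migration monitoring failed:", exc)
finally:
    conn.close()
~~~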

  [Where problems could occur]
   * In the event of a regression, the migration monitor may report an
  inconsistent state.

  [Original Bug Description]

  While live-migrating many instances concurrently, libvirt sometimes
  returns internal error: migration was active, but no RAM info was set:
  ~~~
  2022-03-30 06:08:37.197 7 WARNING nova.virt.libvirt.driver 
[req-5c3296cf-88ee-4af6-ae6a-ddba99935e23 - - - - -] [instance: 
af339c99-1182-4489-b15c-21e52f50f724] Error monitoring migration: internal 
error: migration was active, but no RAM info was set: libvirt.libvirtError: 
internal error: migration was active, but no RAM info was set
  ~~~

  From upstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2074205

  [Other Information]
  Related bug: https://bugs.launchpad.net/nova/+bug/1982284

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1994002/+subscriptions


-- 
Mailing list: https://launchpad.net/~sts-sponsors
Post to     : sts-sponsors@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sts-sponsors
More help   : https://help.launchpad.net/ListHelp
