Thanks for the data, Eric.
It confirms Ryan's assumptions.

The ongoing I/O keeps the device queue busy with 50-70 requests of 1 MB each.
Any request from update-grub has to go through that queue, which due to the
load has a latency of 190±20 ms.

If the assumption about the barriers is also true, that means update-grub
cannot enqueue all its requests at once but instead has to wait for each
one to complete first.

With that, as a rule of thumb, the total time becomes #requests * avg-latency.
Even if we assume no other time is spent, the 14 seconds are 14,000 ms; divided
by 190 ms that means ~74 requests, which could be plausible for an update-grub run.
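As a quick sanity check of that rule of thumb (a sketch only; the 190 ms and 14 s figures are the observations from this bug, not measured here):

```python
# If barriers force each request to complete before the next is
# enqueued, total time ≈ number_of_requests * avg_latency.
avg_latency_ms = 190          # observed per-request queue latency (±20 ms)
observed_total_ms = 14_000    # the ~14 s update-grub run

estimated_requests = observed_total_ms / avg_latency_ms
print(f"~{estimated_requests:.0f} requests")  # → ~74 requests
```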

So I don't see this as a bug, but rather as a tuning and
performance/integrity tradeoff.

Please let us know once you have tried the other tunings Ryan recommended.
There is more you could do (mostly, I think, splitting the workload across
disks in this case), but that is all performance tuning, which I'm not sure
you want to look into here - do you?

One question I could not find answered in the bug: does the additional
I/O to the disk come from "inside" the guest or from a third source?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1666555

Title:
  update-grub slow with raw LVM source dev + VirtIO bus

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1666555/+subscriptions
