On 03.03.23 17:20, Peter Xu wrote:
> On Fri, Mar 03, 2023 at 10:10:12AM +0100, David Hildenbrand wrote:
>> On 02.03.23 22:50, Peter Xu wrote:
>>> On Thu, Mar 02, 2023 at 04:11:56PM +0100, David Hildenbrand wrote:
>>>> I guess the main concern here would be the overhead from
>>>> grabbing/releasing the BQL very often, and holding the BQL while
>>>> we end up in the kernel, clearing bitmaps, correct?

>>> More or less, yes.  I think it's pretty clear we should move on with
>>> RCU unless it's absolutely necessary not to (which I don't think it
>>> is..); then it's about how to fix the bug so that RCU safety is
>>> guaranteed.

>> What about an additional simple lock?
>>
>> Like:
>>
>> * register/unregister requires that new notifier lock + the BQL
>> * traversing the notifiers requires either that new lock or the BQL
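
For illustration, here is a minimal sketch of that scheme. The lock name
(memory_listener_lock) and the exact placement are made up, and the actual
list manipulation is elided:

#include "qemu/osdep.h"
#include "qemu/main-loop.h"   /* qemu_mutex_iothread_locked() */
#include "qemu/thread.h"
#include "exec/memory.h"

/* Hypothetical: a single lock protecting memory_listeners and the
 * per-AddressSpace listener lists. */
static QemuMutex memory_listener_lock;

void memory_listener_register(MemoryListener *listener, AddressSpace *as)
{
    /* Writers need the BQL *and* the new lock. */
    assert(qemu_mutex_iothread_locked());
    qemu_mutex_lock(&memory_listener_lock);
    /* ... link into memory_listeners and as->listeners ... */
    qemu_mutex_unlock(&memory_listener_lock);
}

void memory_listener_unregister(MemoryListener *listener)
{
    assert(qemu_mutex_iothread_locked());
    qemu_mutex_lock(&memory_listener_lock);
    /* ... unlink from the lists ... */
    qemu_mutex_unlock(&memory_listener_lock);
}

Readers that hold the BQL could keep walking the lists as they do today;
readers that don't would take memory_listener_lock across the walk.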

> This will work, but it will also take us a step back.
>
> I think we shouldn't allow concurrency for notifiers, more below.  It's
> more that sometimes QEMU walks the two lists for reasons that have
> nothing to do with notifiers (like memory_region_find_rcu()); that's
> the major uncertainty to me.  There are also the future plans of using
> more RCU in QEMU code.

>> We simply take the new lock in that problematic function. That would
>> work as long as we don't require traversal of the notifiers
>> concurrently -- and as long as we don't have a lot of bouncing back
>> and forth (I don't think we do, even in the migration context, or am
>> I wrong?).
>>
>> That way we also make sure that each notifier is only called once. I'm
>> not 100% sure if all notifiers would expect to be called concurrently.
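
Concretely, for the problematic function that today relies on RCU alone,
that could look like this (again just a sketch, reusing the hypothetical
memory_listener_lock from above in place of the current
RCU_READ_LOCK_GUARD()):

MemoryRegionSection memory_region_find(MemoryRegion *mr, hwaddr addr,
                                       uint64_t size)
{
    MemoryRegionSection ret;

    /* No BQL required here: the new lock keeps the lists stable
     * while memory_region_find_rcu() walks them. */
    qemu_mutex_lock(&memory_listener_lock);
    ret = memory_region_find_rcu(mr, addr, size);
    qemu_mutex_unlock(&memory_listener_lock);

    if (ret.mr) {
        memory_region_ref(ret.mr);
    }
    return ret;
}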

> Yes, I think so.  AFAIU most of the notifiers should only be called
> with the BQL held, so they'll already be serialized (and the hooks
> normally have yet another layer of protection, like KVM).
>
> Clear log is something special.  AFAIK it's protected by RAMState's
> bitmap_mutex so far, but not always..
>
> The inaccuracy is because clear log can also be triggered outside
> migration, where there's no bitmap_mutex context.
>
> But AFAICT concurrent clear log is also fine, because it was (somehow
> tailored...) for KVM, so it'll be serialized at kvm_slots_lock()
> anyway.  We'll need to be careful when growing log_clear support,
> though.
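
For reference, this is roughly what that serialization looks like,
heavily simplified from accel/kvm/kvm-all.c (error handling and the
per-slot details are omitted):

static int kvm_physical_log_clear(KVMMemoryListener *kml,
                                  MemoryRegionSection *section)
{
    int ret = 0;

    kvm_slots_lock();   /* concurrent log_clear callers serialize here */
    /* ... find the kvm memslots covered by @section and issue
     *     KVM_CLEAR_DIRTY_LOG for each of them ... */
    kvm_slots_unlock();

    return ret;
}

static void kvm_log_clear(MemoryListener *listener,
                          MemoryRegionSection *section)
{
    KVMMemoryListener *kml = container_of(listener, KVMMemoryListener,
                                          listener);

    kvm_physical_log_clear(kml, section);
}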


On a related note, I was wondering if we should tackle this from a different direction and not care about locking at all for this special migration case.

The thing is, during migration most operations are (or should be) disabled. Consequently, I would expect it to be very rare that we even get a register/unregister while migration is running. Anything that does could already indicate a potential problem.

For example, device hotplug/unplug should be forbidden while migration is happening.

guest_phys_blocks_append() temporarily registers a listener. IIRC, we already disable memory dumping while migration is active. From what I can tell, qmp_dump_skeys() and tpm_ppi_reset() could still call it, though.


Do we have any other known call paths that are problematic while migration is active? guest_phys_blocks_append() could easily be re-implemented to handle this without a temporary notifier registration.
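
Something like the following could work -- again just a sketch: walk the
current flat view under RCU instead of registering a listener. The
filtering is meant to mirror what guest_phys_blocks_region_add() does
today; the coalescing of contiguous blocks is left out:

#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"
#include "sysemu/memory_mapping.h"

static bool guest_phys_block_cb(Int128 start, Int128 len,
                                const MemoryRegion *mr,
                                hwaddr offset_in_region, void *opaque)
{
    GuestPhysBlockList *list = opaque;

    /* Same filter as the temporary listener: plain guest RAM only. */
    if (!memory_region_is_ram(mr) ||
        memory_region_is_ram_device(mr) ||
        memory_region_is_nonvolatile(mr)) {
        return false;   /* continue iterating */
    }

    /* ... append [start, start + len) and the corresponding host
     *     mapping to @list, merging with the previous block if
     *     contiguous ... */
    (void)list;
    return false;
}

void guest_phys_blocks_append(GuestPhysBlockList *list)
{
    /* The flat view walk only needs the RCU read lock: no listener
     * registration and no BQL. */
    RCU_READ_LOCK_GUARD();
    flatview_for_each_range(address_space_to_flatview(&address_space_memory),
                            guest_phys_block_cb, list);
}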

There aren't too many callers of memory_listener_unregister().

--
Thanks,

David / dhildenb

