In the next patch, we wrap vm_load in a single memory-region transaction to speed it up. This introduces a problem: old flatviews may still be referenced during vm_load. Since vm_load performs far more memory updates than flatview lookups, we introduce do_commit to make sure address_space_to_flatview() always returns the newest flatview; logically it only needs to be triggered at a few spots during vm_load.
In addition, sanity-check that either the BQL or the RCU read lock is held before any flatview is used.

Signed-off-by: Chuang Xu <xuchuangxc...@bytedance.com>
---
 include/exec/memory.h | 23 +++++++++++++++++++++++
 softmmu/memory.c      |  5 +++++
 2 files changed, 28 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 6fa0b071f0..d6fd89db64 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -27,6 +27,7 @@
 #include "qemu/notify.h"
 #include "qom/object.h"
 #include "qemu/rcu.h"
+#include "qemu/main-loop.h"
 
 #define RAM_ADDR_INVALID (~(ram_addr_t)0)
 
@@ -1095,8 +1096,30 @@ struct FlatView {
     MemoryRegion *root;
 };
 
+bool memory_region_transaction_in_progress(void);
+
+void memory_region_transaction_do_commit(void);
+
 static inline FlatView *address_space_to_flatview(AddressSpace *as)
 {
+    if (qemu_mutex_iothread_locked()) {
+        /* We exclusively own the flatview now. */
+        if (memory_region_transaction_in_progress()) {
+            /*
+             * Fetching the flatview while a transaction is in progress
+             * means current_map may not be the latest; commit the
+             * pending updates immediately so the caller won't see an
+             * obsolete mapping.
+             */
+            memory_region_transaction_do_commit();
+        }
+
+        /* No further protection needed to access current_map */
+        return as->current_map;
+    }
+
+    /* Otherwise the RCU read lock must be held, or something went wrong */
+    assert(rcu_read_is_locked());
     return qatomic_rcu_read(&as->current_map);
 }
 
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 33ecc62ee9..6a8e8b4e71 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1130,6 +1130,11 @@ void memory_region_transaction_commit(void)
     }
 }
 
+bool memory_region_transaction_in_progress(void)
+{
+    return memory_region_transaction_depth != 0;
+}
+
 static void memory_region_destructor_none(MemoryRegion *mr)
 {
 }
-- 
2.20.1
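
[Editor's note] For reviewers who want a concrete picture, below is a minimal sketch (not part of this patch) of how the follow-up patch is expected to use these helpers, going by the commit message: vm_load wrapped in one big memory-region transaction. The wrapper function name is hypothetical; memory_region_transaction_begin()/commit() and qemu_loadvm_state() are existing QEMU APIs, and the header paths are assumed to be the usual in-tree ones.

/*
 * Illustrative only -- this function is NOT part of the patch above.
 * It sketches what the commit message describes as wrapping vm_load
 * in a whole memory-region transaction in the next patch of the series.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"        /* memory_region_transaction_begin/commit() */
#include "migration/savevm.h"   /* qemu_loadvm_state(); in-tree header path assumed */

static int load_vmstate_in_one_transaction(QEMUFile *f)
{
    int ret;

    /*
     * Open one big transaction: memory-region updates made while device
     * state is loaded are batched instead of rebuilding the flatviews
     * after every single update.
     */
    memory_region_transaction_begin();

    /*
     * While this runs, any caller of address_space_to_flatview() under
     * the BQL hits the new memory_region_transaction_in_progress()
     * check and triggers memory_region_transaction_do_commit(), so it
     * still sees an up-to-date flatview instead of a stale one.
     */
    ret = qemu_loadvm_state(f);

    /* Flush whatever is still pending in one final commit. */
    memory_region_transaction_commit();

    return ret;
}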