On 08/17/2011 02:28 AM, Paolo Bonzini wrote:
On 08/16/2011 08:56 PM, Umesh Deshpande wrote:
@@ -3001,8 +3016,10 @@ void qemu_ram_free_from_ptr(ram_addr_t addr)

      QLIST_FOREACH(block, &ram_list.blocks, next) {
          if (addr == block->offset) {
+            qemu_mutex_lock_ramlist();
              QLIST_REMOVE(block, next);
              QLIST_REMOVE(block, next_mru);
+            qemu_mutex_unlock_ramlist();
              qemu_free(block);
              return;
          }
@@ -3015,8 +3032,10 @@ void qemu_ram_free(ram_addr_t addr)

      QLIST_FOREACH(block, &ram_list.blocks, next) {
          if (addr == block->offset) {
+            qemu_mutex_lock_ramlist();
              QLIST_REMOVE(block, next);
              QLIST_REMOVE(block, next_mru);
+            qemu_mutex_unlock_ramlist();
              if (block->flags & RAM_PREALLOC_MASK) {
                  ;
              } else if (mem_path) {

You must protect the whole QLIST_FOREACH.  Otherwise looks good.
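
I.e. something like this (untested sketch, only to show the locking scope; note that the early-return path has to unlock as well):

    void qemu_ram_free(ram_addr_t addr)
    {
        RAMBlock *block;

        /* Take the lock before the traversal, so a concurrent writer
         * cannot unlink a block while we are still walking the list. */
        qemu_mutex_lock_ramlist();
        QLIST_FOREACH(block, &ram_list.blocks, next) {
            if (addr == block->offset) {
                QLIST_REMOVE(block, next);
                QLIST_REMOVE(block, next_mru);
                /* The block is unlinked now, so the backing storage
                 * can be released outside the lock, as before. */
                qemu_mutex_unlock_ramlist();
                /* ... free the host memory as in the existing code ... */
                qemu_free(block);
                return;
            }
        }
        qemu_mutex_unlock_ramlist();
    }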
Or, would it be okay to convert all the ramblock list traversals in exec.c (which run under the iothread lock) to MRU traversals? That probably makes sense, since the original list was also maintained in MRU order, whereas the order of the blocks doesn't matter to the migration code. This way we wouldn't have to acquire the mutex for block list traversals at all.
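
For instance, qemu_get_ram_ptr() could become something like this (rough sketch; I am assuming here that the MRU list head is called blocks_mru):

    void *qemu_get_ram_ptr(ram_addr_t addr)
    {
        RAMBlock *block;

        /* Walk the MRU-ordered list (assumed head: ram_list.blocks_mru)
         * instead of ram_list.blocks.  The migration thread only walks
         * ram_list.blocks, so this lookup needs no mutex as long as it
         * runs under the iothread lock. */
        QLIST_FOREACH(block, &ram_list.blocks_mru, next_mru) {
            if (addr - block->offset < block->length) {
                /* Preserve the MRU property: move the hit to the front. */
                QLIST_REMOVE(block, next_mru);
                QLIST_INSERT_HEAD(&ram_list.blocks_mru, block, next_mru);
                return block->host + (addr - block->offset);
            }
        }

        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", (uint64_t)addr);
        abort();
    }

Then only the writers (qemu_ram_alloc/qemu_ram_free) would need to take the mutex.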

- Umesh

