On 2/20/23 03:43, Martin Pieuchot wrote:
On 20/02/23(Mon) 03:59, Renato Aguiar wrote:
[...]
I can't reproduce it anymore with this patch on 7.2-stable :)

Thanks a lot for testing!  Here's a better fix from Chuck Silvers.
That's what I believe we should commit.

The idea is to prevent siblings from modifying the vm_map by marking
it as "busy" in msync(2) instead of holding the exclusive lock while
sleeping.  This lets siblings that only read the map make progress
while stopping possible writers.

Could you all confirm this also prevents the deadlock?  Thanks!


I cannot reproduce the deadlock with this diff either. I ran the individual test numerous times and also completed the entire bbolt test suite without error.

Index: uvm/uvm_map.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_map.c,v
retrieving revision 1.312
diff -u -p -r1.312 uvm_map.c
--- uvm/uvm_map.c       13 Feb 2023 14:52:55 -0000      1.312
+++ uvm/uvm_map.c       20 Feb 2023 08:10:39 -0000
@@ -4569,8 +4569,7 @@ fail:
   * => never a need to flush amap layer since the anonymous memory has
   *    no permanent home, but may deactivate pages there
   * => called from sys_msync() and sys_madvise()
- * => caller must not write-lock map (read OK).
- * => we may sleep while cleaning if SYNCIO [with map read-locked]
+ * => caller must not have map locked
   */
 int
 uvm_map_clean(struct vm_map *map, vaddr_t start, vaddr_t end, int flags)
@@ -4592,25 +4591,27 @@ uvm_map_clean(struct vm_map *map, vaddr_
        if (start > end || start < map->min_offset || end > map->max_offset)
                return EINVAL;
 
-       vm_map_lock_read(map);
+       vm_map_lock(map);
        first = uvm_map_entrybyaddr(&map->addr, start);
 
        /* Make a first pass to check for holes. */
        for (entry = first; entry != NULL && entry->start < end;
            entry = RBT_NEXT(uvm_map_addr, entry)) {
                if (UVM_ET_ISSUBMAP(entry)) {
-                       vm_map_unlock_read(map);
+                       vm_map_unlock(map);
                        return EINVAL;
                }
                if (UVM_ET_ISSUBMAP(entry) ||
                    UVM_ET_ISHOLE(entry) ||
                    (entry->end < end &&
                    VMMAP_FREE_END(entry) != entry->end)) {
-                       vm_map_unlock_read(map);
+                       vm_map_unlock(map);
                        return EFAULT;
                }
        }
 
+       vm_map_busy(map);
+       vm_map_unlock(map);
        error = 0;
        for (entry = first; entry != NULL && entry->start < end;
            entry = RBT_NEXT(uvm_map_addr, entry)) {
@@ -4722,7 +4723,7 @@ flush_object:
                }
        }
 
-       vm_map_unlock_read(map);
+       vm_map_unbusy(map);
        return error;
  }
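
For anyone reading along, below is a rough userland analogue of the "busy"
pattern the diff adopts.  It is only an illustrative sketch: the struct and
the map_busy()/map_unbusy()/map_write_lock()/map_clean() names are invented
for the example and built on pthreads, whereas the real primitives are
vm_map_lock(), vm_map_busy(), vm_map_unbusy() and vm_map_unlock() in
uvm/uvm_map.c, which differ in detail.

#include <pthread.h>
#include <stdbool.h>

struct map {
        pthread_rwlock_t lock;          /* protects the map contents */
        pthread_mutex_t  busy_mtx;      /* protects the busy flag */
        pthread_cond_t   busy_cv;       /* writers wait here while busy */
        bool             busy;          /* set while a cleaner may sleep */
};

/* Readers simply take pthread_rwlock_rdlock(&m->lock); busy never stops them. */

/* Mark the map busy; caller holds the write lock, so this cannot race. */
static void
map_busy(struct map *m)
{
        pthread_mutex_lock(&m->busy_mtx);
        m->busy = true;
        pthread_mutex_unlock(&m->busy_mtx);
}

/* Clear the busy flag and wake any writer that backed off. */
static void
map_unbusy(struct map *m)
{
        pthread_mutex_lock(&m->busy_mtx);
        m->busy = false;
        pthread_cond_broadcast(&m->busy_cv);
        pthread_mutex_unlock(&m->busy_mtx);
}

/* Writers only keep the lock once the map is no longer busy. */
static void
map_write_lock(struct map *m)
{
        for (;;) {
                pthread_rwlock_wrlock(&m->lock);
                pthread_mutex_lock(&m->busy_mtx);
                if (!m->busy) {
                        pthread_mutex_unlock(&m->busy_mtx);
                        return;
                }
                /* Busy: drop the lock and wait for map_unbusy(). */
                pthread_rwlock_unlock(&m->lock);
                pthread_cond_wait(&m->busy_cv, &m->busy_mtx);
                pthread_mutex_unlock(&m->busy_mtx);
        }
}

/* Cleaner side, mirroring the new uvm_map_clean() flow. */
static void
map_clean(struct map *m)
{
        map_write_lock(m);                      /* like vm_map_lock()   */
        /* ... first pass: validate the range ... */
        map_busy(m);                            /* like vm_map_busy()   */
        pthread_rwlock_unlock(&m->lock);        /* like vm_map_unlock() */
        /* ... second pass: flush pages, may sleep ... */
        map_unbusy(m);                          /* like vm_map_unbusy() */
}

The point is that readers, which only take the rwlock for reading, keep
making progress while the cleaner sleeps, yet any writer going through
map_write_lock() backs off until map_unbusy() runs, which is the behaviour
msync(2) needs here.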
