On Wed, Jun 25, 2025 at 07:43:32AM +0200, Otto Moerbeek wrote:
> On Tue, Jun 24, 2025 at 05:21:56PM +0200, Jeremie Courreges-Anglas wrote:
> 
> > 
> > I think it's uvm_purge(), as far as I can see it happens when building
> > rust with cvs up -D2025/06/04 in /sys, not with -D2025/06/03. Maybe I
> > missed lang/rust when testing the diff.
> > 
> > This is with additional MP_LOCKDEBUG support for mutexes, and
> > __mp_lock_spinout = 50L * INT_MAX.
> > 
> > Suggested by claudio: tr /t 0t269515 fails.
> > 
> > WITNESS doesn't flag an obvious lock ordering issue. I'm not even
> > sure there is one. It also happens with CPU_MAX_BUSY_CYCLES == 64.
> > 
> > Maybe we're still hammering the locks too much? Input and ideas to
> > test welcome. Right now I'm running with just uvm_purge() reverted.
> 
> I'm also seeing issues when my M1 macmini is under load, for me it's
> spontaneous reboots with RTKit errors reported.
> 
> I'll try to run with uvm_purge() reverted.
That does not seem to have made a difference. The revert I did was
https://github.com/openbsd/src/commit/74ffca42a88be2945f3de3bdd928bbdb89778e24.diff

> 
> -Otto
> 
> > 
> > 
> > mtx_enter: 0xffffff80012c1660 lock spun out
> > Stopped at mtx_enter+0x134: ldr x26, [x25,#2712]
> > ddb{9}> tr
> > db_enter() at mtx_enter+0x130
> > mtx_enter() at uvm_pmr_getpages+0x16c
> > uvm_pmr_getpages() at uvm_pmr_cache_alloc+0x74
> > uvm_pmr_cache_alloc() at uvm_pmr_cache_get+0x114
> > uvm_pmr_cache_get() at uvm_pagealloc+0xc4
> > uvm_pagealloc() at uvmfault_promote+0xac
> > uvmfault_promote() at uvm_fault_lower+0x2e4
> > uvm_fault_lower() at uvm_fault+0x158
> > uvm_fault() at udata_abort+0x128
> > udata_abort() at do_el0_sync+0x100
> > do_el0_sync() at handle_el0_sync+0x70
> > handle_el0_sync() at __ALIGN_SIZE+0x6154444
> > --- trap ---
> > end of kernel
> > ddb{9}> ps /o
> >     TID    PID  UID    PRFLAGS     PFLAGS  CPU  COMMAND
> >  439247  54947   55        0x3          0    7  c++
> > *151355  72169   55        0x3          0    9  c++
> >  391653  25519   55        0x3          0    8  c++
> >  269515  26465   55        0x3          0    4  c++
> >  264732   5469   55        0x3          0    6  c++
> >  473294  43623   55  0x2000003  0x4000000    1  rustc
> >  388499  43623   55  0x2000003  0x4000000    5  rustc
> >  132024  43623   55  0x2000003  0x4000000    3  rustc
> >  294256  43623   55  0x2000003  0x4000000    0  rustc
> > ddb{9}> show all locks
> > CPU 4:
> > exclusive mutex &uvm.fpageqlock r = 0 (0xffffff80012c1670)
> > CPU 2:
> > exclusive mutex &sched_lock r = 0 (0xffffff80012c2348)
> > Process 54947 (c++) thread 0xffffff825a1cf9c0 (439247)
> > exclusive rwlock amaplk r = 0 (0xffffff8113992b68)
> > shared rwlock vmmaplk r = 0 (0xffffff81112ffe78)
> > Process 72169 (c++) thread 0xffffff825a1cf738 (151355)
> > exclusive rwlock amaplk r = 0 (0xffffff8113992418)
> > shared rwlock vmmaplk r = 0 (0xffffff81112ff518)
> > Process 25519 (c++) thread 0xffffff825a1d14a0 (391653)
> > exclusive rwlock amaplk r = 0 (0xffffff80541125c0)
> > shared rwlock vmmaplk r = 0 (0xffffff80547f5e60)
> > Process 26465 (c++) thread 0xffffff825a1d1218 (269515)
> > exclusive rwlock amaplk r = 0 (0xffffff81120f34e0)
> > Process 5469 (c++) thread 0xffffff825a1cefa0 (264732)
> > exclusive rwlock amaplk r = 0 (0xffffff811645d3d8)
> > shared rwlock vmmaplk r = 0 (0xffffff81112ff338)
> > Process 43623 (rustc) thread 0xffffff825a1d0f90 (166842)
> > exclusive rwlock sysctllk r = 0 (0xffffff8001231b60)
> > Process 43623 (rustc) thread 0xffffff825a1d0060 (294256)
> > exclusive rwlock amaplk r = 0 (0xffffff811645d858)
> > shared rwlock vmmaplk r = 0 (0xffffff81112ff8d8)
> > ddb{9}> sh struct mutex 0xffffff80012c1660
> > struct mutex at 0xffffff80012c1660 (56 bytes) {mtx_owner = (void
> > *)0xffffff8055
> > 63d000, mtx_wantipl = 8, mtx_oldipl = 0, mtx_lock_obj = {lo_type = (const
> > lock_
> > type *)0x0, lo_name = (const unsigned char *)0xffffff8000e3ae71, lo_witness
> > = (
> > struct witness *)0xffffff80043eb280, lo_relative = (struct lock_object
> > *)0x0, l
> > o_flags = 16973824}}
> > ddb{9}> x/s 0xffffff8000e3ae71
> > $d+0x122: &uvm.fpageqlock
> > ddb{9}> 
> > 
> > 
> > -- 
> > jca
> > 
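
For context, the MP_LOCKDEBUG check referred to above is essentially a
countdown wrapped around the busy-wait loop in mtx_enter(): if the mutex is
still owned after __mp_lock_spinout spins, the kernel prints "lock spun out"
and drops into ddb via db_enter(), which is the trap visible at the top of the
trace. The snippet below is only a rough userland sketch of that pattern, with
made-up names and a deliberately small threshold so it actually fires; it is
not the kernel diff.

/*
 * Userland illustration of an MP_LOCKDEBUG-style spinout check.
 * The real check sits in the kernel's mtx_enter() spin loop and calls
 * db_printf()/db_enter(); the report above runs with
 * __mp_lock_spinout = 50L * INT_MAX.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define DEMO_SPINOUT	(1L << 24)	/* tiny, so the demo reports quickly */

struct demo_lock {
	atomic_flag locked;
};

static struct demo_lock lock = { ATOMIC_FLAG_INIT };

static void
demo_lock_enter(struct demo_lock *l)
{
	long nticks = DEMO_SPINOUT;

	/* Spin until the flag is clear, counting down while we wait. */
	while (atomic_flag_test_and_set_explicit(&l->locked,
	    memory_order_acquire)) {
		if (--nticks == 0) {
			/* The kernel does db_printf() + db_enter() here. */
			fprintf(stderr, "%s: %p lock spun out\n",
			    __func__, (void *)l);
			nticks = DEMO_SPINOUT;
		}
	}
}

static void
demo_lock_leave(struct demo_lock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

static void *
holder(void *arg)
{
	(void)arg;
	/* Hold the lock long enough for the other thread to spin out. */
	demo_lock_enter(&lock);
	sleep(2);
	demo_lock_leave(&lock);
	return NULL;
}

int
main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, holder, NULL);
	usleep(100000);		/* let the holder grab the lock first */
	demo_lock_enter(&lock);	/* spins, reporting "lock spun out" */
	demo_lock_leave(&lock);
	pthread_join(t, NULL);
	return 0;
}

Build with something like cc -std=c11 -pthread spinout.c: the main thread
spins on the lock held by the helper and reports a spinout roughly once per
threshold's worth of iterations, then proceeds normally once the lock is
released.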
