On Thursday, July 29, 2010 7:39:02 pm m...@freebsd.org wrote:
> We've seen a few instances at work where witness_warn() in ast()
> indicates the sched lock is still held, but the place it claims it was
> held by is in fact sometimes not possible to keep the lock, like:
>
> 	thread_lock(td);
> 	td->td_flags &= ~TDF_SELECT;
> 	thread_unlock(td);
>
> What I was wondering is, even though the assembly I see in objdump -S
> for witness_warn has the increment of td_pinned before the PCPU_GET:
>
> ffffffff802db210:	65 48 8b 1c 25 00 00	mov    %gs:0x0,%rbx
> ffffffff802db217:	00 00
> ffffffff802db219:	ff 83 04 01 00 00	incl   0x104(%rbx)
> 	 * Pin the thread in order to avoid problems with thread migration.
> 	 * Once that all verifies are passed about spinlocks ownership,
> 	 * the thread is in a safe path and it can be unpinned.
> 	 */
> 	sched_pin();
> 	lock_list = PCPU_GET(spinlocks);
> ffffffff802db21f:	65 48 8b 04 25 48 00	mov    %gs:0x48,%rax
> ffffffff802db226:	00 00
> 	if (lock_list != NULL && lock_list->ll_count != 0) {
> ffffffff802db228:	48 85 c0		test   %rax,%rax
> 	 * Pin the thread in order to avoid problems with thread migration.
> 	 * Once that all verifies are passed about spinlocks ownership,
> 	 * the thread is in a safe path and it can be unpinned.
> 	 */
> 	sched_pin();
> 	lock_list = PCPU_GET(spinlocks);
> ffffffff802db22b:	48 89 85 f0 fe ff ff	mov    %rax,-0x110(%rbp)
> ffffffff802db232:	48 89 85 f8 fe ff ff	mov    %rax,-0x108(%rbp)
> 	if (lock_list != NULL && lock_list->ll_count != 0) {
> ffffffff802db239:	0f 84 ff 00 00 00	je     ffffffff802db33e <witness_warn+0x30e>
> ffffffff802db23f:	44 8b 60 50		mov    0x50(%rax),%r12d
>
> is it possible for the hardware to do any re-ordering here?
>
> The reason I'm suspicious is not just that the code doesn't have a
> lock leak at the indicated point, but in one instance I can see in the
> dump that the lock_list local from witness_warn is from the pcpu
> structure for CPU 0 (and I was warned about sched lock 0), but the
> thread id in panic_cpu is 2.  So clearly the thread was being migrated
> right around panic time.
>
> This is the amd64 kernel on stable/7.  I'm not sure exactly what kind
> of hardware; it's a 4-way Intel chip from about 3 or 4 years ago IIRC.
>
> So... do we need some kind of barrier in the code for sched_pin() for
> it to really do what it claims?  Could the hardware have re-ordered
> the "mov %gs:0x48,%rax" PCPU_GET to before the sched_pin() increment?
Hmmm, I think it might be able to, because they refer to different locations.
Note this rule in section 8.2.2 of Volume 3A (the memory-ordering section of
the Intel SDM):

  • Reads may be reordered with older writes to different locations but not
    with older writes to the same location.

It is certainly true that sparc64 could reorder this under RMO, and I believe
ia64 could reorder it as well.  Since sched_pin/unpin are frequently used to
provide this sort of synchronization, we could use memory barriers in
pin/unpin like so:

	sched_pin()
	{
		td->td_pinned = atomic_load_acq_int(&td->td_pinned) + 1;
	}

	sched_unpin()
	{
		atomic_store_rel_int(&td->td_pinned, td->td_pinned - 1);
	}

We could also just use atomic_add_acq_int() and atomic_sub_rel_int(); they
are slightly more heavyweight, though they would make it clearer what is
happening, I think.

-- 
John Baldwin
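
For context, the stable/7 sched_pin()/sched_unpin() are nothing more than
inline increments and decrements of td_pinned, with no compiler or CPU
barrier of any kind.  Roughly the following (a from-memory sketch of
sys/sys/sched.h, not a verbatim copy):

	static __inline void
	sched_pin(void)
	{
		/* This is the "incl 0x104(%rbx)" in the objdump above. */
		curthread->td_pinned++;
	}

	static __inline void
	sched_unpin(void)
	{
		curthread->td_pinned--;
	}

Nothing in there orders the td_pinned store against the later
PCPU_GET(spinlocks) load in witness_warn(), which is exactly the window the
question above is about.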
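
The heavier alternative mentioned above would look roughly like the sketch
below.  Two caveats: the stock atomic(9) API spells the subtraction
atomic_subtract_rel_int() rather than atomic_sub_rel_int(), and td_pinned is
a plain int in struct thread, so it would either need a cast (as below) or a
type change to use the _int variants:

	#include <machine/atomic.h>

	static __inline void
	sched_pin(void)
	{
		/*
		 * Acquire semantics: later memory operations (such as the
		 * PCPU_GET(spinlocks) load) cannot be reordered before the
		 * increment.
		 */
		atomic_add_acq_int((volatile u_int *)&curthread->td_pinned, 1);
	}

	static __inline void
	sched_unpin(void)
	{
		/*
		 * Release semantics: earlier memory operations complete
		 * before the decrement becomes visible.
		 */
		atomic_subtract_rel_int((volatile u_int *)&curthread->td_pinned, 1);
	}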