On 4/28/19 6:46 PM, Linus Torvalds wrote:
> This doesn't seem to be the full diff - looking at that patch 1 you
> seem to have taken my suggested list_cut_before() change too.
>
> I'm not against it (it does seem to be simpler and better), I just
> hope you double-checked it, since I kind of hand-waved it.
>
>             Linus
I implemented your suggestion in patch 1 as it will produce simpler and
faster code. However, one of the changes in my patchset is to wake up all
the readers in the wait list. That means I have to jump over the writers
and wake up the readers behind them as well. See patch 11 for details. As
a result, I have to revert to using list_add_tail() and
list_for_each_entry_safe() for the first pass. That is why the diff for
the whole patchset is just the change below. It was done on purpose, not
an omission.

Cheers,
Longman

>
> On Sun, Apr 28, 2019 at 2:26 PM Waiman Long <long...@redhat.com> wrote:
>> v6=>v7 diff
>> -----------
>> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
>> index 97a2334d9cd3..60783267b50d 100644
>> --- a/kernel/locking/rwsem.c
>> +++ b/kernel/locking/rwsem.c
>> @@ -693,7 +693,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
>>  		atomic_long_add(adjustment, &sem->count);
>>
>>  	/* 2nd pass */
>> -	list_for_each_entry(waiter, &wlist, list) {
>> +	list_for_each_entry_safe(waiter, tmp, &wlist, list) {
>>  		struct task_struct *tsk;
>>
>>  		tsk = waiter->task;
>>