On Tue, Jun 17, 2014 at 8:56 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> On 2014-06-17 20:47:51 +0530, Amit Kapila wrote:
> > On Tue, Jun 17, 2014 at 6:35 PM, Andres Freund <and...@2ndquadrant.com>
> > wrote:
> >
> > You have followed it pretty well as far as I can understand from your
> > replies; as there is no reproducible test (which I think is a bit tricky
> > to prepare), it becomes difficult to explain by theory.
>
> I'm working on an updated patch that moves the releaseOK into the
> spinlocks. Maybe that's the problem already - it's certainly not correct
> as is.

Sure, I will do the test/performance test with the updated patch.
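Just to make sure I am picturing the same change: below is a minimal sketch
of what I understand by moving releaseOK into the spinlock-protected
section. This is only an illustration from my side (it is not taken from
your patch), and it assumes the flag stays in the LWLock struct next to the
existing mutex:

    /*
     * Sketch only, not from the patch: update releaseOK while holding the
     * lock's spinlock, so that a waiter queueing itself under the same
     * spinlock can never miss the change to the flag.
     */
    SpinLockAcquire(&lock->mutex);
    lock->releaseOK = true;
    SpinLockRelease(&lock->mutex);

If that is not what you meant, please ignore the above.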
You might also want to include some more changes based on the comments in
the mail below:

> > You are right, it will wakeup the existing waiters, but I think the
> > new logic has one difference which is that it can allow the backend to
> > take Exclusive lock when there are already waiters in queue. As per
> > above example even though Session-2 and Session-3 are in wait
> > queue, Session-4 will be able to acquire Exclusive lock which I think
> > was previously not possible.
>
> I think that was previously possible as well - in a slightly different
> set of circumstances though. After a process releases a lock and wakes
> up some of several waiters another process can come in and acquire the
> lock. Before the woken up process gets scheduled again. lwlocks aren't
> fair locks...

Okay, but I think changing the behaviour of lwlocks might impact some
tests/applications. As they are not fair, defining the exact behaviour is
not easy, and we don't have any concrete scenario which would be affected,
so there should not be a problem in accepting slightly different behaviour.

A few more comments:

1.
LWLockAcquireCommon()
{
..
    iterations++;
}

In the current logic, I could not see any use of the *iterations* variable.

2.
LWLockAcquireCommon()
{
..
    if (!LWLockDequeueSelf(l))
    {
        /*
         * Somebody else dequeued us and has or will..
         ..
         */
        for (;;)
        {
            PGSemaphoreLock(&proc->sem, false);
            if (!proc->lwWaiting)
                break;
            extraWaits++;
        }
        lock->releaseOK = true;
    }

Do we want to set result = false; after waking up in the above code? The
idea behind setting it to false is to indicate whether we got the lock
immediately or not, which previously was decided based on whether the
backend needed to queue itself.
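To make the question concrete, I mean something like the below (just a
sketch based on the code quoted above; I am assuming result is still the
local flag that LWLockAcquireCommon() eventually returns):

    if (!LWLockDequeueSelf(l))
    {
        /* Somebody else dequeued us; wait until we are actually woken up. */
        for (;;)
        {
            PGSemaphoreLock(&proc->sem, false);
            if (!proc->lwWaiting)
                break;
            extraWaits++;
        }
        lock->releaseOK = true;
        result = false;     /* we did not get the lock on the first attempt */
    }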
3.
LWLockAcquireCommon()
{
..
    /*
     * Ok, at this point we couldn't grab the lock on the first try. We
     * cannot simply queue ourselves to the end of the list and wait to be
     * woken up because by now the lock could long have been released.
     * Instead add us to the queue and try to grab the lock again. If we
     * suceed we need to revert the queuing and be happy, otherwise we
     * recheck the lock. If we still couldn't grab it, we know that the
     * other lock will see our queue entries when releasing since they
     * existed before we checked for the lock.
     */

    /* add to the queue */
    LWLockQueueSelf(l, mode);

    /* we're now guaranteed to be woken up if necessary */
    mustwait = LWLockAttemptLock(l, mode, false, &potentially_spurious);
..
}

a. From the above code and comments alone, it is not quite clear why the
second attempt is important unless somebody thinks hard about it or refers
to your comments in the *Notes* section at the top of the file. I think
it's better to indicate that in some way, so that a code reader knows to
refer to the Notes section or wherever you are planning to keep those
comments.
b. There is a typo in the above comment: suceed/succeed.

4.
LWLockAcquireCommon()
{
..
    if (!LWLockDequeueSelf(l))
    {
        for (;;)
        {
            PGSemaphoreLock(&proc->sem, false);
            if (!proc->lwWaiting)
                break;
            extraWaits++;
        }
        lock->releaseOK = true;
..
}

Setting releaseOK in the above context might not be required, because if
control reaches this part of the code, it will not retry the acquisition
another time.

5.
LWLockWaitForVar()
{
..
    else
        mustwait = false;

    if (!mustwait)
        break;
..
}

I think we can directly break in the else part of the above code.

6.
LWLockWaitForVar()
{
..
    /*
     * Quick test first to see if it the slot is free right now.
     *
     * XXX: the caller uses a spinlock before this,...
     */
    if (pg_atomic_read_u32(&lock->lockcount) == 0)
        return true;
}

Is the part of the comment that refers to the spinlock still relevant
after using atomic ops?

7.
LWLockWaitForVar()
{
..
    /*
     * Add myself to wait queue. Note that this is racy, somebody else
     * could wakeup before we're finished queuing.
     * NB: We're using nearly the same twice-in-a-row lock acquisition
     * protocol as LWLockAcquire(). Check its comments for details.
     */
    LWLockQueueSelf(l, LW_WAIT_UNTIL_FREE);

    /* we're now guaranteed to be woken up if necessary */
    mustwait = LWLockAttemptLock(l, LW_EXCLUSIVE, false,
                                 &potentially_spurious);
}

Why is it important to attempt the lock after queuing in this case? Can't
we just re-check the exclusive lock as is done before queuing?

8.
LWLockWaitForVar()
{
..
        PRINT_LWDEBUG("LWLockAcquire undo queue", lock, mode);
        break;
    }
    else
    {
        PRINT_LWDEBUG("LWLockAcquire waiting 4", lock, mode);
    }
..
}

a. I think instead of LWLockAcquire, we should use LWLockWaitForVar here.
b. Isn't it better to use LOG_LWDEBUG instead of PRINT_LWDEBUG(), as
PRINT_LWDEBUG() is generally used in this file at the entry of functions
to log info about locks?

9.
LWLockUpdateVar()
{
    /* Acquire mutex.  Time spent holding mutex should be short! */
    SpinLockAcquire(&lock->mutex);
..
}

The current code has an Assert for the exclusive lock which is missing in
the patch; is there any reason for removing the Assert?

10.
LWLockRelease(LWLock *l)
{
..
    if (l == held_lwlocks[i].lock)
    {
        mode = held_lwlocks[i].mode;
..
}

It seems that mode has been added to held_lwlocks so it can be used in
LWLockRelease(). If so, can we deduce the same from lockcount instead?

11.
LWLockRelease()
{
..
    PRINT_LWDEBUG("LWLockRelease", lock, mode);
}

Shouldn't this be at the beginning of the LWLockRelease function rather
than after processing the held_lwlocks array?

12.
#ifdef LWLOCK_DEBUG
    lock->owner = MyProc;
#endif

Shouldn't it be reset in LWLockRelease?
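For example, something along these lines (just a sketch; I am assuming the
owner field is used only for the debugging output):

    /* in LWLockRelease(), after removing the lock from held_lwlocks[] */
#ifdef LWLOCK_DEBUG
    lock->owner = NULL;     /* we no longer own this lock */
#endif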
13.
 * This protects us against both problems from above:
 * 1) Nobody can release too quick, before we're queued, since after Phase 2 since we're
 *    already queued.

The second *since* seems to be a typo.

 * 2) If somebody spuriously got blocked from acquiring the lock, they will
 *    get queued in Phase 2 and we can wake them up if neccessary or they will
 *    have gotten the lock in Phase 2.

In the above line, I think the second mention of *Phase 2* should be
*Phase 3*.

> > > Somewhat unrelated:
> > I think it will also maintain that the wokedup process won't stall for
> > very long time, because if we wake new waiters, then previously woked
> > process can again enter into wait queue and similar thing can repeat
> > for long time.
>
> I don't think it effectively does that - newly incoming lockers ignore
> the queue and just acquire the lock. Even if there's some other backend
> scheduled to wake up. And shared locks can be acquired when there's
> exclusive locks waiting.

They ignore the queue, but I think they won't wake up new waiters unless
some previously woken waiter again tries to acquire the lock (as that will
set releaseOK). I am not sure how much such a restriction helps, but I
still think it reduces the chance of the woken process getting stalled.

> I think both are actually critical for performance... Otherwise even a
> only lightly contended lock would require scheduler activity when a
> processes tries to lock something twice. Given the frequency we acquire
> some locks with that'd be disastrous...

Do you have any suggestion for how both behaviours can be retained?

Note - There is still more to review in this patch; however, I feel it is
a good idea to start some tests/performance tests of it. If you agree, I
will start the same with the updated patch (incorporating the conclusions
of the current review comments).

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com