Hi,

On 2016-04-03 16:47:49 +0530, Dilip Kumar wrote:
> 6. With Head+ pinunpin-cas-8 +
> 0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect performance is
> almost same as with
> Head+pinunpin-cas-8, only sometime performance at 128 client is low
> (~250,000 instead of 650,000)

Hm, interesting. I suspect that's because of the missing exponential backoff in my
experimental patch. If you apply the attached patch on top of that
(it requires infrastructure from pinunpin), how does performance develop?

Regards,

Andres
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index ec6baf6..4216be5 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -858,11 +858,15 @@ LWLockWaitListLock(LWLock *lock)
 	{
 		if (old_state & LW_FLAG_LOCKED)
 		{
-			/* FIXME: add exponential backoff */
-			pg_spin_delay();
-			old_state = pg_atomic_read_u32(&lock->state);
+			SpinDelayStatus delayStatus = init_spin_delay((void*)&lock->state);
+			while (old_state & LW_FLAG_LOCKED)
+			{
+				perform_spin_delay(&delayStatus);
+				old_state = pg_atomic_read_u32(&lock->state);
+			}
+			finish_spin_delay(&delayStatus);
 #ifdef LWLOCK_STATS
-			delays++;
+			delays += delayStatus.delays;
 #endif
 		}
 		else
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)