On 27 July 2014 23:19, Thomas Munro <mu...@ip9.org> wrote:
> On the subject of isolation tests, I think skip-locked.spec is only
> producing schedules that reach the third of the three 'return
> HeapTupleWouldBlock' statements in heap_lock_tuple.  I will follow up
> with some more thorough isolation tests in the next week or so to
> cover the other two, and some other scenarios and interactions with
> other features.

Now with extra isolation tests so that all three code branches that can
skip rows are covered.  While developing the specs, I temporarily added
some logging lines to double-check that each permutation reaches the
expected branch.  They change the output and are not part of the patch
-- attached separately.
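
For anyone trying it out, here is a minimal sketch of the intended
queue-consumer usage, borrowing the queue table from the isolation
specs (the processing step is only illustrative):

  CREATE TABLE queue (
    id     int  PRIMARY KEY,
    data   text NOT NULL,
    status text NOT NULL
  );
  INSERT INTO queue VALUES (1, 'foo', 'NEW'), (2, 'bar', 'NEW');

  -- Each consumer claims the first row no other transaction has locked;
  -- rows locked by concurrent consumers are skipped rather than waited
  -- on, and an empty result means there is currently nothing to claim.
  BEGIN;
  SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
  -- ... process the claimed row, e.g. set status = 'DONE' ...
  COMMIT;
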
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 231dc6a..0469705 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -45,7 +45,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="parameter">expression</replac
     [ LIMIT { <replaceable class="parameter">count</replaceable> | ALL } ]
     [ OFFSET <replaceable class="parameter">start</replaceable> [ ROW | ROWS ] ]
     [ FETCH { FIRST | NEXT } [ <replaceable class="parameter">count</replaceable> ] { ROW | ROWS } ONLY ]
-    [ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT ] [...] ]
+    [ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT | SKIP LOCKED ] [...] ]
 
 <phrase>where <replaceable class="parameter">from_item</replaceable> can be one of:</phrase>
 
@@ -1283,7 +1283,7 @@ FETCH { FIRST | NEXT } [ <replaceable class="parameter">count</replaceable> ] {
     The locking clause has the general form
 
 <synopsis>
-FOR <replaceable>lock_strength</> [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT ]
+FOR <replaceable>lock_strength</> [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT | SKIP LOCKED ]
 </synopsis>
 
     where <replaceable>lock_strength</> can be one of
@@ -1359,11 +1359,17 @@ KEY SHARE
 
    <para>
     To prevent the operation from waiting for other transactions to commit,
-    use the <literal>NOWAIT</> option.  With <literal>NOWAIT</>, the statement
-    reports an error, rather than waiting, if a selected row
-    cannot be locked immediately.  Note that <literal>NOWAIT</> applies only
-    to the row-level lock(s) &mdash; the required <literal>ROW SHARE</literal>
-    table-level lock is still taken in the ordinary way (see
+    use either the <literal>NOWAIT</> or <literal>SKIP LOCKED</literal>
+    option.  With <literal>NOWAIT</>, the statement reports an error, rather
+    than waiting, if a selected row cannot be locked immediately.
+    With <literal>SKIP LOCKED</literal>, any selected rows that cannot be
+    immediately locked are skipped.  Skipping locked rows provides an
+    inconsistent view of the data, so this is not suitable for general purpose
+    work, but can be used to avoid lock contention with multiple consumers
+    accessing a queue-like table.  Note that <literal>NOWAIT</>
+    and <literal>SKIP LOCKED</literal> apply only to the row-level lock(s)
+    &mdash; the required <literal>ROW SHARE</literal> table-level lock is
+    still taken in the ordinary way (see
     <xref linkend="mvcc">).  You can use
     <xref linkend="sql-lock">
     with the <literal>NOWAIT</> option first,
@@ -1386,14 +1392,14 @@ KEY SHARE
    </para>
 
    <para>
-    Multiple locking
-    clauses can be written if it is necessary to specify different locking
-    behavior for different tables.  If the same table is mentioned (or
-    implicitly affected) by more than one locking clause,
-    then it is processed as if it was only specified by the strongest one.
-    Similarly, a table is processed
-    as <literal>NOWAIT</> if that is specified in any of the clauses
-    affecting it.
+    Multiple locking clauses can be written if it is necessary to specify
+    different locking behavior for different tables.  If the same table is
+    mentioned (or implicitly affected) by more than one locking clause, then
+    it is processed as if it was only specified by the strongest one.
+    Similarly, a table is processed as <literal>NOWAIT</> if that is specified
+    in any of the clauses affecting it.  Otherwise, it is processed
+    as <literal>SKIP LOCKED</literal> if that is specified in any of the
+    clauses affecting it.
    </para>
 
    <para>
@@ -1930,9 +1936,9 @@ SELECT distributors.* WHERE distributors.name = 'Westward';
     <productname>PostgreSQL</productname> allows it in any <command>SELECT</>
     query as well as in sub-<command>SELECT</>s, but this is an extension.
     The <literal>FOR NO KEY UPDATE</>, <literal>FOR SHARE</> and
-    <literal>FOR KEY SHARE</> variants,
-    as well as the <literal>NOWAIT</> option,
-    do not appear in the standard.
+    <literal>FOR KEY SHARE</> variants, as well as the <literal>NOWAIT</>
+    and <literal>SKIP LOCKED</literal> options, do not appear in the
+    standard.
    </para>
   </refsect2>
 
diff --git a/doc/src/sgml/sql.sgml b/doc/src/sgml/sql.sgml
index ba92607..57396d7 100644
--- a/doc/src/sgml/sql.sgml
+++ b/doc/src/sgml/sql.sgml
@@ -863,7 +863,7 @@ SELECT [ ALL | DISTINCT [ ON ( <replaceable class="PARAMETER">expression</replac
     [ ORDER BY <replaceable class="parameter">expression</replaceable> [ ASC | DESC | USING <replaceable class="parameter">operator</replaceable> ] [ NULLS { FIRST | LAST } ] [, ...] ]
     [ LIMIT { <replaceable class="PARAMETER">count</replaceable> | ALL } ]
     [ OFFSET <replaceable class="PARAMETER">start</replaceable> ]
-    [ FOR { UPDATE | SHARE } [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT ] [...] ]
+    [ FOR { UPDATE | SHARE } [ OF <replaceable class="parameter">table_name</replaceable> [, ...] ] [ NOWAIT | SKIP LOCKED ] [...] ]
 </synopsis>
     </para>
 
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 35f9404..51e4d28 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -4091,7 +4091,7 @@ get_mxact_status_for_lock(LockTupleMode mode, bool is_update)
  *	cid: current command ID (used for visibility test, and stored into
  *		tuple's cmax if lock is successful)
  *	mode: indicates if shared or exclusive tuple lock is desired
- *	nowait: if true, ereport rather than blocking if lock not available
+ *	wait_policy: whether to block, ereport or skip if lock not available
  *	follow_updates: if true, follow the update chain to also lock descendant
  *		tuples.
  *
@@ -4104,6 +4104,7 @@ get_mxact_status_for_lock(LockTupleMode mode, bool is_update)
  *	HeapTupleMayBeUpdated: lock was successfully acquired
  *	HeapTupleSelfUpdated: lock failed because tuple updated by self
  *	HeapTupleUpdated: lock failed because tuple updated by other xact
+ *	HeapTupleWouldBlock: lock couldn't be acquired and wait_policy is skip
  *
  * In the failure cases, the routine fills *hufd with the tuple's t_ctid,
  * t_xmax (resolving a possible MultiXact, if necessary), and t_cmax
@@ -4115,7 +4116,7 @@ get_mxact_status_for_lock(LockTupleMode mode, bool is_update)
  */
 HTSU_Result
 heap_lock_tuple(Relation relation, HeapTuple tuple,
-				CommandId cid, LockTupleMode mode, bool nowait,
+				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_updates,
 				Buffer *buffer, HeapUpdateFailureData *hufd)
 {
@@ -4218,7 +4219,11 @@ l3:
 		 */
 		if (!have_tuple_lock)
 		{
-			if (nowait)
+			if (wait_policy == LockWaitBlock)
+			{
+				LockTupleTuplock(relation, tid, mode);
+			}
+			else if (wait_policy == LockWaitError)
 			{
 				if (!ConditionalLockTupleTuplock(relation, tid, mode))
 					ereport(ERROR,
@@ -4226,8 +4231,11 @@ l3:
 					errmsg("could not obtain lock on row in relation \"%s\"",
 						   RelationGetRelationName(relation))));
 			}
-			else
-				LockTupleTuplock(relation, tid, mode);
+			else /* wait_policy == LockWaitSkip */
+			{
+				if (!ConditionalLockTupleTuplock(relation, tid, mode))
+					return HeapTupleWouldBlock;
+			}
 			have_tuple_lock = true;
 		}
 
@@ -4429,7 +4437,13 @@ l3:
 					elog(ERROR, "invalid lock mode in heap_lock_tuple");
 
 				/* wait for multixact to end */
-				if (nowait)
+				if (wait_policy == LockWaitBlock)
+				{
+					MultiXactIdWait((MultiXactId) xwait, status, infomask,
+									relation, &tuple->t_data->t_ctid,
+									XLTW_Lock, NULL);
+				}
+				else if (wait_policy == LockWaitError)
 				{
 					if (!ConditionalMultiXactIdWait((MultiXactId) xwait,
 												  status, infomask, relation,
@@ -4440,10 +4454,17 @@ l3:
 								 errmsg("could not obtain lock on row in relation \"%s\"",
 										RelationGetRelationName(relation))));
 				}
-				else
-					MultiXactIdWait((MultiXactId) xwait, status, infomask,
-									relation, &tuple->t_data->t_ctid,
-									XLTW_Lock, NULL);
+				else /* wait_policy == LockWaitSkip */
+				{
+					if (!ConditionalMultiXactIdWait((MultiXactId) xwait,
+													status, infomask, relation,
+													&tuple->t_data->t_ctid,
+													XLTW_Lock, NULL))
+					{
+						UnlockTupleTuplock(relation, tid, mode);
+						return HeapTupleWouldBlock;
+					}
+				}
 
 				/* if there are updates, follow the update chain */
 				if (follow_updates &&
@@ -4489,7 +4510,12 @@ l3:
 			else
 			{
 				/* wait for regular transaction to end */
-				if (nowait)
+				if (wait_policy == LockWaitBlock)
+				{
+					XactLockTableWait(xwait, relation, &tuple->t_data->t_ctid,
+									  XLTW_Lock);
+				}
+				else if (wait_policy == LockWaitError)
 				{
 					if (!ConditionalXactLockTableWait(xwait))
 						ereport(ERROR,
@@ -4497,9 +4523,14 @@ l3:
 								 errmsg("could not obtain lock on row in relation \"%s\"",
 										RelationGetRelationName(relation))));
 				}
-				else
-					XactLockTableWait(xwait, relation, &tuple->t_data->t_ctid,
-									  XLTW_Lock);
+				else /* wait_policy == LockWaitSkip */
+				{
+					if (!ConditionalXactLockTableWait(xwait))
+					{
+						UnlockTupleTuplock(relation, tid, mode);
+						return HeapTupleWouldBlock;
+					}
+				}
 
 				/* if there are updates, follow the update chain */
 				if (follow_updates &&
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 9bf0098..f4c0ffa 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -2706,7 +2706,7 @@ ltrmark:;
 		tuple.t_self = *tid;
 		test = heap_lock_tuple(relation, &tuple,
 							   estate->es_output_cid,
-							   lockmode, false /* wait */ ,
+							   lockmode, LockWaitBlock,
 							   false, &buffer, &hufd);
 		switch (test)
 		{
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 072c7df..0f8e357 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -830,7 +830,7 @@ InitPlan(QueryDesc *queryDesc, int eflags)
 		erm->prti = rc->prti;
 		erm->rowmarkId = rc->rowmarkId;
 		erm->markType = rc->markType;
-		erm->noWait = rc->noWait;
+		erm->waitPolicy = rc->waitPolicy;
 		ItemPointerSetInvalid(&(erm->curCtid));
 		estate->es_rowMarks = lappend(estate->es_rowMarks, erm);
 	}
@@ -2012,7 +2012,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 			 */
 			test = heap_lock_tuple(relation, &tuple,
 								   estate->es_output_cid,
-								   lockmode, false /* wait */ ,
+								   lockmode, LockWaitBlock,
 								   false, &buffer, &hufd);
 			/* We now have two pins on the buffer, get rid of one */
 			ReleaseBuffer(buffer);
diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c
index 298d4b4..45a0304 100644
--- a/src/backend/executor/nodeLockRows.c
+++ b/src/backend/executor/nodeLockRows.c
@@ -133,11 +133,15 @@ lnext:
 
 		test = heap_lock_tuple(erm->relation, &tuple,
 							   estate->es_output_cid,
-							   lockmode, erm->noWait, true,
+							   lockmode, erm->waitPolicy, true,
 							   &buffer, &hufd);
 		ReleaseBuffer(buffer);
 		switch (test)
 		{
+			case HeapTupleWouldBlock:
+				/* couldn't lock tuple in SKIP LOCKED mode */
+				goto lnext;
+
 			case HeapTupleSelfUpdated:
 
 				/*
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index 3088578..e37d3bb 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -959,7 +959,7 @@ _copyPlanRowMark(const PlanRowMark *from)
 	COPY_SCALAR_FIELD(prti);
 	COPY_SCALAR_FIELD(rowmarkId);
 	COPY_SCALAR_FIELD(markType);
-	COPY_SCALAR_FIELD(noWait);
+	COPY_SCALAR_FIELD(waitPolicy);
 	COPY_SCALAR_FIELD(isParent);
 
 	return newnode;
@@ -2071,7 +2071,7 @@ _copyRowMarkClause(const RowMarkClause *from)
 
 	COPY_SCALAR_FIELD(rti);
 	COPY_SCALAR_FIELD(strength);
-	COPY_SCALAR_FIELD(noWait);
+	COPY_SCALAR_FIELD(waitPolicy);
 	COPY_SCALAR_FIELD(pushedDown);
 
 	return newnode;
@@ -2452,7 +2452,7 @@ _copyLockingClause(const LockingClause *from)
 
 	COPY_NODE_FIELD(lockedRels);
 	COPY_SCALAR_FIELD(strength);
-	COPY_SCALAR_FIELD(noWait);
+	COPY_SCALAR_FIELD(waitPolicy);
 
 	return newnode;
 }
diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c
index 1b07db6..5e915ba 100644
--- a/src/backend/nodes/equalfuncs.c
+++ b/src/backend/nodes/equalfuncs.c
@@ -2286,7 +2286,7 @@ _equalLockingClause(const LockingClause *a, const LockingClause *b)
 {
 	COMPARE_NODE_FIELD(lockedRels);
 	COMPARE_SCALAR_FIELD(strength);
-	COMPARE_SCALAR_FIELD(noWait);
+	COMPARE_SCALAR_FIELD(waitPolicy);
 
 	return true;
 }
@@ -2382,7 +2382,7 @@ _equalRowMarkClause(const RowMarkClause *a, const RowMarkClause *b)
 {
 	COMPARE_SCALAR_FIELD(rti);
 	COMPARE_SCALAR_FIELD(strength);
-	COMPARE_SCALAR_FIELD(noWait);
+	COMPARE_SCALAR_FIELD(waitPolicy);
 	COMPARE_SCALAR_FIELD(pushedDown);
 
 	return true;
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index e686a6c..e5d8502 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -836,7 +836,7 @@ _outPlanRowMark(StringInfo str, const PlanRowMark *node)
 	WRITE_UINT_FIELD(prti);
 	WRITE_UINT_FIELD(rowmarkId);
 	WRITE_ENUM_FIELD(markType, RowMarkType);
-	WRITE_BOOL_FIELD(noWait);
+	WRITE_ENUM_FIELD(waitPolicy, LockWaitPolicy);
 	WRITE_BOOL_FIELD(isParent);
 }
 
@@ -2136,7 +2136,7 @@ _outLockingClause(StringInfo str, const LockingClause *node)
 
 	WRITE_NODE_FIELD(lockedRels);
 	WRITE_ENUM_FIELD(strength, LockClauseStrength);
-	WRITE_BOOL_FIELD(noWait);
+	WRITE_ENUM_FIELD(waitPolicy, LockWaitPolicy);
 }
 
 static void
@@ -2326,7 +2326,7 @@ _outRowMarkClause(StringInfo str, const RowMarkClause *node)
 
 	WRITE_UINT_FIELD(rti);
 	WRITE_ENUM_FIELD(strength, LockClauseStrength);
-	WRITE_BOOL_FIELD(noWait);
+	WRITE_ENUM_FIELD(waitPolicy, LockWaitPolicy);
 	WRITE_BOOL_FIELD(pushedDown);
 }
 
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 69d9989..e3a12e9 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -320,7 +320,7 @@ _readRowMarkClause(void)
 
 	READ_UINT_FIELD(rti);
 	READ_ENUM_FIELD(strength, LockClauseStrength);
-	READ_BOOL_FIELD(noWait);
+	READ_ENUM_FIELD(waitPolicy, LockWaitPolicy);
 	READ_BOOL_FIELD(pushedDown);
 
 	READ_DONE();
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index e1480cd..cd61309 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -2229,7 +2229,7 @@ preprocess_rowmarks(PlannerInfo *root)
 				newrc->markType = ROW_MARK_KEYSHARE;
 				break;
 		}
-		newrc->noWait = rc->noWait;
+		newrc->waitPolicy = rc->waitPolicy;
 		newrc->isParent = false;
 
 		prowmarks = lappend(prowmarks, newrc);
@@ -2257,7 +2257,7 @@ preprocess_rowmarks(PlannerInfo *root)
 			newrc->markType = ROW_MARK_REFERENCE;
 		else
 			newrc->markType = ROW_MARK_COPY;
-		newrc->noWait = false;	/* doesn't matter */
+		newrc->waitPolicy = LockWaitBlock;	/* doesn't matter */
 		newrc->isParent = false;
 
 		prowmarks = lappend(prowmarks, newrc);
diff --git a/src/backend/optimizer/prep/prepsecurity.c b/src/backend/optimizer/prep/prepsecurity.c
index dd7f900..2c5c85d 100644
--- a/src/backend/optimizer/prep/prepsecurity.c
+++ b/src/backend/optimizer/prep/prepsecurity.c
@@ -233,19 +233,19 @@ expand_security_qual(PlannerInfo *root, List *tlist, int rt_index,
 				{
 					case ROW_MARK_EXCLUSIVE:
 						applyLockingClause(subquery, 1, LCS_FORUPDATE,
-										   rc->noWait, false);
+										   rc->waitPolicy, false);
 						break;
 					case ROW_MARK_NOKEYEXCLUSIVE:
 						applyLockingClause(subquery, 1, LCS_FORNOKEYUPDATE,
-										   rc->noWait, false);
+										   rc->waitPolicy, false);
 						break;
 					case ROW_MARK_SHARE:
 						applyLockingClause(subquery, 1, LCS_FORSHARE,
-										   rc->noWait, false);
+										   rc->waitPolicy, false);
 						break;
 					case ROW_MARK_KEYSHARE:
 						applyLockingClause(subquery, 1, LCS_FORKEYSHARE,
-										   rc->noWait, false);
+										   rc->waitPolicy, false);
 						break;
 					case ROW_MARK_REFERENCE:
 					case ROW_MARK_COPY:
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 0410fdd..69756e4 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -1389,7 +1389,7 @@ expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti)
 			newrc->prti = rti;
 			newrc->rowmarkId = oldrc->rowmarkId;
 			newrc->markType = oldrc->markType;
-			newrc->noWait = oldrc->noWait;
+			newrc->waitPolicy = oldrc->waitPolicy;
 			newrc->isParent = false;
 
 			root->rowMarks = lappend(root->rowMarks, newrc);
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index fb6c44c..dc9296e 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -2358,7 +2358,7 @@ transformLockingClause(ParseState *pstate, Query *qry, LockingClause *lc,
 	allrels = makeNode(LockingClause);
 	allrels->lockedRels = NIL;	/* indicates all rels */
 	allrels->strength = lc->strength;
-	allrels->noWait = lc->noWait;
+	allrels->waitPolicy = lc->waitPolicy;
 
 	if (lockedRels == NIL)
 	{
@@ -2373,12 +2373,12 @@ transformLockingClause(ParseState *pstate, Query *qry, LockingClause *lc,
 			{
 				case RTE_RELATION:
 					applyLockingClause(qry, i,
-									   lc->strength, lc->noWait, pushedDown);
+									   lc->strength, lc->waitPolicy, pushedDown);
 					rte->requiredPerms |= ACL_SELECT_FOR_UPDATE;
 					break;
 				case RTE_SUBQUERY:
 					applyLockingClause(qry, i,
-									   lc->strength, lc->noWait, pushedDown);
+									   lc->strength, lc->waitPolicy, pushedDown);
 
 					/*
 					 * FOR UPDATE/SHARE of subquery is propagated to all of
@@ -2425,13 +2425,13 @@ transformLockingClause(ParseState *pstate, Query *qry, LockingClause *lc,
 					{
 						case RTE_RELATION:
 							applyLockingClause(qry, i,
-											   lc->strength, lc->noWait,
+											   lc->strength, lc->waitPolicy,
 											   pushedDown);
 							rte->requiredPerms |= ACL_SELECT_FOR_UPDATE;
 							break;
 						case RTE_SUBQUERY:
 							applyLockingClause(qry, i,
-											   lc->strength, lc->noWait,
+											   lc->strength, lc->waitPolicy,
 											   pushedDown);
 							/* see comment above */
 							transformLockingClause(pstate, rte->subquery,
@@ -2499,7 +2499,8 @@ transformLockingClause(ParseState *pstate, Query *qry, LockingClause *lc,
  */
 void
 applyLockingClause(Query *qry, Index rtindex,
-				   LockClauseStrength strength, bool noWait, bool pushedDown)
+				   LockClauseStrength strength, LockWaitPolicy waitPolicy,
+				   bool pushedDown)
 {
 	RowMarkClause *rc;
 
@@ -2516,15 +2517,18 @@ applyLockingClause(Query *qry, Index rtindex,
 		 * a shared and exclusive lock at the same time; it'll end up being
 		 * exclusive anyway.)
 		 *
-		 * We also consider that NOWAIT wins if it's specified both ways. This
-		 * is a bit more debatable but raising an error doesn't seem helpful.
-		 * (Consider for instance SELECT FOR UPDATE NOWAIT from a view that
-		 * internally contains a plain FOR UPDATE spec.)
+		 * We also consider that NOWAIT wins if it is specified multiple ways,
+		 * otherwise SKIP LOCKED wins. This is a bit more debatable but
+		 * raising an error doesn't seem helpful.  (Consider for instance
+		 * SELECT FOR UPDATE NOWAIT from a view that internally contains a
+		 * plain FOR UPDATE spec.)
 		 *
 		 * And of course pushedDown becomes false if any clause is explicit.
 		 */
 		rc->strength = Max(rc->strength, strength);
-		rc->noWait |= noWait;
+		StaticAssertStmt(LockWaitError > LockWaitSkip, "LockWaitPolicy order");
+		StaticAssertStmt(LockWaitSkip > LockWaitBlock, "LockWaitPolicy order");
+		rc->waitPolicy = Max(rc->waitPolicy, waitPolicy);
 		rc->pushedDown &= pushedDown;
 		return;
 	}
@@ -2533,7 +2537,7 @@ applyLockingClause(Query *qry, Index rtindex,
 	rc = makeNode(RowMarkClause);
 	rc->rti = rtindex;
 	rc->strength = strength;
-	rc->noWait = noWait;
+	rc->waitPolicy = waitPolicy;
 	rc->pushedDown = pushedDown;
 	qry->rowMarks = lappend(qry->rowMarks, rc);
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index a113809..b90ead8 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -284,6 +284,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <boolean>	opt_force opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
 				opt_nowait opt_if_exists opt_with_data
+%type <ival>	opt_nowait_or_skip
 
 %type <list>	OptRoleList AlterOptRoleList
 %type <defelt>	CreateOptRoleElem AlterOptRoleElem
@@ -577,7 +578,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	LABEL LANGUAGE LARGE_P LAST_P LATERAL_P
 	LEADING LEAKPROOF LEAST LEFT LEVEL LIKE LIMIT LISTEN LOAD LOCAL
-	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P
+	LOCALTIME LOCALTIMESTAMP LOCATION LOCK_P LOCKED
 
 	MAPPING MATCH MATERIALIZED MAXVALUE MINUTE_P MINVALUE MODE MONTH_P MOVE
 
@@ -601,7 +602,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 
 	SAVEPOINT SCHEMA SCROLL SEARCH SECOND_P SECURITY SELECT SEQUENCE SEQUENCES
 	SERIALIZABLE SERVER SESSION SESSION_USER SET SETOF SHARE
-	SHOW SIMILAR SIMPLE SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
+	SHOW SIMILAR SIMPLE SKIP SMALLINT SNAPSHOT SOME STABLE STANDALONE_P START
 	STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P STRIP_P SUBSTRING
 	SYMMETRIC SYSID SYSTEM_P
 
@@ -9217,6 +9218,12 @@ opt_nowait:	NOWAIT							{ $$ = TRUE; }
 			| /*EMPTY*/						{ $$ = FALSE; }
 		;
 
+opt_nowait_or_skip:
+			NOWAIT							{ $$ = LockWaitError; }
+			| SKIP LOCKED					{ $$ = LockWaitSkip; }
+			| /*EMPTY*/						{ $$ = LockWaitBlock; }
+		;
+
 
 /*****************************************************************************
  *
@@ -9858,12 +9865,12 @@ for_locking_items:
 		;
 
 for_locking_item:
-			for_locking_strength locked_rels_list opt_nowait
+			for_locking_strength locked_rels_list opt_nowait_or_skip
 				{
 					LockingClause *n = makeNode(LockingClause);
 					n->lockedRels = $2;
 					n->strength = $1;
-					n->noWait = $3;
+					n->waitPolicy = $3;
 					$$ = (Node *) n;
 				}
 		;
@@ -12992,6 +12999,7 @@ unreserved_keyword:
 			| LOCAL
 			| LOCATION
 			| LOCK_P
+			| LOCKED
 			| MAPPING
 			| MATCH
 			| MATERIALIZED
@@ -13074,6 +13082,7 @@ unreserved_keyword:
 			| SHARE
 			| SHOW
 			| SIMPLE
+			| SKIP
 			| SNAPSHOT
 			| STABLE
 			| STANDALONE_P
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index e6c5530..287e710 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -62,7 +62,8 @@ static void rewriteValuesRTE(RangeTblEntry *rte, Relation target_relation,
 static void rewriteTargetListUD(Query *parsetree, RangeTblEntry *target_rte,
 					Relation target_relation);
 static void markQueryForLocking(Query *qry, Node *jtnode,
-				  LockClauseStrength strength, bool noWait, bool pushedDown);
+								LockClauseStrength strength, LockWaitPolicy waitPolicy, 
+								bool pushedDown);
 static List *matchLocks(CmdType event, RuleLock *rulelocks,
 		   int varno, Query *parsetree);
 static Query *fireRIRrules(Query *parsetree, List *activeRIRs,
@@ -1481,7 +1482,7 @@ ApplyRetrieveRule(Query *parsetree,
 	 */
 	if (rc != NULL)
 		markQueryForLocking(rule_action, (Node *) rule_action->jointree,
-							rc->strength, rc->noWait, true);
+							rc->strength, rc->waitPolicy, true);
 
 	return parsetree;
 }
@@ -1499,7 +1500,8 @@ ApplyRetrieveRule(Query *parsetree,
  */
 static void
 markQueryForLocking(Query *qry, Node *jtnode,
-					LockClauseStrength strength, bool noWait, bool pushedDown)
+					LockClauseStrength strength, LockWaitPolicy waitPolicy,
+					bool pushedDown)
 {
 	if (jtnode == NULL)
 		return;
@@ -1510,15 +1512,15 @@ markQueryForLocking(Query *qry, Node *jtnode,
 
 		if (rte->rtekind == RTE_RELATION)
 		{
-			applyLockingClause(qry, rti, strength, noWait, pushedDown);
+			applyLockingClause(qry, rti, strength, waitPolicy, pushedDown);
 			rte->requiredPerms |= ACL_SELECT_FOR_UPDATE;
 		}
 		else if (rte->rtekind == RTE_SUBQUERY)
 		{
-			applyLockingClause(qry, rti, strength, noWait, pushedDown);
+			applyLockingClause(qry, rti, strength, waitPolicy, pushedDown);
 			/* FOR UPDATE/SHARE of subquery is propagated to subquery's rels */
 			markQueryForLocking(rte->subquery, (Node *) rte->subquery->jointree,
-								strength, noWait, true);
+								strength, waitPolicy, true);
 		}
 		/* other RTE types are unaffected by FOR UPDATE */
 	}
@@ -1528,14 +1530,14 @@ markQueryForLocking(Query *qry, Node *jtnode,
 		ListCell   *l;
 
 		foreach(l, f->fromlist)
-			markQueryForLocking(qry, lfirst(l), strength, noWait, pushedDown);
+			markQueryForLocking(qry, lfirst(l), strength, waitPolicy, pushedDown);
 	}
 	else if (IsA(jtnode, JoinExpr))
 	{
 		JoinExpr   *j = (JoinExpr *) jtnode;
 
-		markQueryForLocking(qry, j->larg, strength, noWait, pushedDown);
-		markQueryForLocking(qry, j->rarg, strength, noWait, pushedDown);
+		markQueryForLocking(qry, j->larg, strength, waitPolicy, pushedDown);
+		markQueryForLocking(qry, j->rarg, strength, waitPolicy, pushedDown);
 	}
 	else
 		elog(ERROR, "unrecognized node type: %d",
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 7237e5d..6e41cbd 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -4446,8 +4446,10 @@ get_select_query_def(Query *query, deparse_context *context,
 			appendStringInfo(buf, " OF %s",
 							 quote_identifier(get_rtable_name(rc->rti,
 															  context)));
-			if (rc->noWait)
+			if (rc->waitPolicy == LockWaitError)
 				appendStringInfoString(buf, " NOWAIT");
+			else if (rc->waitPolicy == LockWaitSkip)
+				appendStringInfoString(buf, " SKIP LOCKED");
 		}
 	}
 
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 493839f..7f7166d 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -19,6 +19,7 @@
 #include "nodes/primnodes.h"
 #include "storage/bufpage.h"
 #include "storage/lock.h"
+#include "utils/lockwaitpolicy.h"
 #include "utils/relcache.h"
 #include "utils/snapshot.h"
 
@@ -144,7 +145,7 @@ extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			CommandId cid, Snapshot crosscheck, bool wait,
 			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
-				CommandId cid, LockTupleMode mode, bool nowait,
+				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
 				Buffer *buffer, HeapUpdateFailureData *hufd);
 extern void heap_inplace_update(Relation relation, HeapTuple tuple);
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index b271f21..39d2c10 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -429,7 +429,7 @@ typedef struct ExecRowMark
 	Index		prti;			/* parent range table index, if child */
 	Index		rowmarkId;		/* unique identifier for resjunk columns */
 	RowMarkType markType;		/* see enum in nodes/plannodes.h */
-	bool		noWait;			/* NOWAIT option */
+	LockWaitPolicy waitPolicy;	/* NOWAIT and SKIP LOCKED */
 	ItemPointerData curCtid;	/* ctid of currently locked tuple, if any */
 } ExecRowMark;
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 8364bef..b1b36c1 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -23,6 +23,7 @@
 #include "nodes/bitmapset.h"
 #include "nodes/primnodes.h"
 #include "nodes/value.h"
+#include "utils/lockwaitpolicy.h"
 
 /* Possible sources of a Query */
 typedef enum QuerySource
@@ -630,7 +631,7 @@ typedef struct LockingClause
 	NodeTag		type;
 	List	   *lockedRels;		/* FOR [KEY] UPDATE/SHARE relations */
 	LockClauseStrength strength;
-	bool		noWait;			/* NOWAIT option */
+	LockWaitPolicy	waitPolicy;	/* NOWAIT and SKIP LOCKED */
 } LockingClause;
 
 /*
@@ -975,7 +976,7 @@ typedef struct RowMarkClause
 	NodeTag		type;
 	Index		rti;			/* range table index of target relation */
 	LockClauseStrength strength;
-	bool		noWait;			/* NOWAIT option */
+	LockWaitPolicy waitPolicy;	/* NOWAIT and SKIP LOCKED */
 	bool		pushedDown;		/* pushed down from higher query level? */
 } RowMarkClause;
 
diff --git a/src/include/nodes/plannodes.h b/src/include/nodes/plannodes.h
index 3b9c683..c174fb6 100644
--- a/src/include/nodes/plannodes.h
+++ b/src/include/nodes/plannodes.h
@@ -17,6 +17,7 @@
 #include "access/sdir.h"
 #include "nodes/bitmapset.h"
 #include "nodes/primnodes.h"
+#include "utils/lockwaitpolicy.h"
 
 
 /* ----------------------------------------------------------------
@@ -831,7 +832,7 @@ typedef struct PlanRowMark
 	Index		prti;			/* range table index of parent relation */
 	Index		rowmarkId;		/* unique identifier for resjunk columns */
 	RowMarkType markType;		/* see enum above */
-	bool		noWait;			/* NOWAIT option */
+	LockWaitPolicy waitPolicy;  /* NOWAIT and SKIP LOCKED options */
 	bool		isParent;		/* true if this is a "dummy" parent entry */
 } PlanRowMark;
 
diff --git a/src/include/parser/analyze.h b/src/include/parser/analyze.h
index 370a445..383df42 100644
--- a/src/include/parser/analyze.h
+++ b/src/include/parser/analyze.h
@@ -39,6 +39,7 @@ extern bool analyze_requires_snapshot(Node *parseTree);
 extern char *LCS_asString(LockClauseStrength strength);
 extern void CheckSelectLocking(Query *qry, LockClauseStrength strength);
 extern void applyLockingClause(Query *qry, Index rtindex,
-				   LockClauseStrength strength, bool noWait, bool pushedDown);
+							   LockClauseStrength strength,
+							   LockWaitPolicy waitPolicy, bool pushedDown);
 
 #endif   /* ANALYZE_H */
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index b52e507..8d0bfb2 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -230,6 +230,7 @@ PG_KEYWORD("localtime", LOCALTIME, RESERVED_KEYWORD)
 PG_KEYWORD("localtimestamp", LOCALTIMESTAMP, RESERVED_KEYWORD)
 PG_KEYWORD("location", LOCATION, UNRESERVED_KEYWORD)
 PG_KEYWORD("lock", LOCK_P, UNRESERVED_KEYWORD)
+PG_KEYWORD("locked", LOCKED, UNRESERVED_KEYWORD)
 PG_KEYWORD("mapping", MAPPING, UNRESERVED_KEYWORD)
 PG_KEYWORD("match", MATCH, UNRESERVED_KEYWORD)
 PG_KEYWORD("materialized", MATERIALIZED, UNRESERVED_KEYWORD)
@@ -343,6 +344,7 @@ PG_KEYWORD("share", SHARE, UNRESERVED_KEYWORD)
 PG_KEYWORD("show", SHOW, UNRESERVED_KEYWORD)
 PG_KEYWORD("similar", SIMILAR, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("simple", SIMPLE, UNRESERVED_KEYWORD)
+PG_KEYWORD("skip", SKIP, UNRESERVED_KEYWORD)
 PG_KEYWORD("smallint", SMALLINT, COL_NAME_KEYWORD)
 PG_KEYWORD("snapshot", SNAPSHOT, UNRESERVED_KEYWORD)
 PG_KEYWORD("some", SOME, RESERVED_KEYWORD)
diff --git a/src/include/utils/lockwaitpolicy.h b/src/include/utils/lockwaitpolicy.h
new file mode 100644
index 0000000..600e9b0
--- /dev/null
+++ b/src/include/utils/lockwaitpolicy.h
@@ -0,0 +1,30 @@
+/*-------------------------------------------------------------------------
+ *
+ * lockwaitpolicy.h
+ *	  Header file for the LockWaitPolicy enum, which is needed in
+ *	  several modules (parser, planner, executor, heap manager).
+ *
+ * Copyright (c) 2014, PostgreSQL Global Development Group
+ *
+ * src/include/utils/lockwaitpolicy.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef LOCKWAITPOLICY_H
+#define LOCKWAITPOLICY_H
+
+/*
+ * Policy for what to do when a row lock cannot be obtained immediately.
+ * Order is important -- see applyLockingClause.
+ */
+typedef enum
+{
+	/* Wait for the lock to become available */
+	LockWaitBlock = 1,
+	/* SELECT FOR UPDATE SKIP LOCKED, skipping rows that can't be locked */
+	LockWaitSkip = 2,
+	/* SELECT FOR UPDATE NOWAIT, abandoning the transaction */
+	LockWaitError = 3
+} LockWaitPolicy;
+
+#endif   /* LOCKWAITPOLICY_H */
diff --git a/src/include/utils/snapshot.h b/src/include/utils/snapshot.h
index d8e8b35..53e474f 100644
--- a/src/include/utils/snapshot.h
+++ b/src/include/utils/snapshot.h
@@ -104,7 +104,8 @@ typedef enum
 	HeapTupleInvisible,
 	HeapTupleSelfUpdated,
 	HeapTupleUpdated,
-	HeapTupleBeingUpdated
+	HeapTupleBeingUpdated,
+	HeapTupleWouldBlock	/* can be returned by heap_lock_tuple */
 } HTSU_Result;
 
 #endif   /* SNAPSHOT_H */
diff --git a/src/test/isolation/expected/skip-locked-2.out b/src/test/isolation/expected/skip-locked-2.out
new file mode 100644
index 0000000..9240543
--- /dev/null
+++ b/src/test/isolation/expected/skip-locked-2.out
@@ -0,0 +1,49 @@
+Parsed test spec with 2 sessions
+
+starting permutation: s1a s2a s2b s1b s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s2a s1a s2b s1b s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s2a s2b s1a s1b s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: COMMIT;
+step s2c: COMMIT;
diff --git a/src/test/isolation/expected/skip-locked-3.out b/src/test/isolation/expected/skip-locked-3.out
new file mode 100644
index 0000000..fa8fe87
--- /dev/null
+++ b/src/test/isolation/expected/skip-locked-3.out
@@ -0,0 +1,19 @@
+Parsed test spec with 3 sessions
+
+starting permutation: s1a s2a s3a s1b s2b s3b
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE LIMIT 1; <waiting ...>
+step s3a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: COMMIT;
+step s2a: <... completed>
+id             data           status         
+
+1              foo            NEW            
+step s2b: COMMIT;
+step s3b: COMMIT;
diff --git a/src/test/isolation/expected/skip-locked.out b/src/test/isolation/expected/skip-locked.out
new file mode 100644
index 0000000..f9b9cf2
--- /dev/null
+++ b/src/test/isolation/expected/skip-locked.out
@@ -0,0 +1,401 @@
+Parsed test spec with 2 sessions
+
+starting permutation: s1a s1b s1c s2a s2b s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+
+starting permutation: s1a s1b s2a s1c s2b s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+
+starting permutation: s1a s1b s2a s2b s1c s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s1a s1b s2a s2b s2c s1c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s1a s2a s1b s1c s2b s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+
+starting permutation: s1a s2a s1b s2b s1c s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s1a s2a s1b s2b s2c s1c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s1a s2a s2b s1b s1c s2c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s1a s2a s2b s1b s2c s1c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s1a s2a s2b s2c s1b s1c
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+
+starting permutation: s2a s1a s1b s1c s2b s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+
+starting permutation: s2a s1a s1b s2b s1c s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s2a s1a s1b s2b s2c s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s2a s1a s2b s1b s1c s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s2a s1a s2b s1b s2c s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s2a s1a s2b s2c s1b s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+
+starting permutation: s2a s2b s1a s1b s1c s2c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1c: COMMIT;
+step s2c: COMMIT;
+
+starting permutation: s2a s2b s1a s1b s2c s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1c: COMMIT;
+
+starting permutation: s2a s2b s1a s2c s1b s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+2              bar            NEW            
+step s2c: COMMIT;
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
+
+starting permutation: s2a s2b s2c s1a s1b s1c
+step s2a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s2c: COMMIT;
+step s1a: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1b: SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1;
+id             data           status         
+
+1              foo            NEW            
+step s1c: COMMIT;
diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule
index 36acec1..2b5cabe 100644
--- a/src/test/isolation/isolation_schedule
+++ b/src/test/isolation/isolation_schedule
@@ -25,3 +25,6 @@ test: propagate-lock-delete
 test: drop-index-concurrently-1
 test: alter-table-1
 test: timeouts
+test: skip-locked
+test: skip-locked-2
+test: skip-locked-3
diff --git a/src/test/isolation/specs/skip-locked-2.spec b/src/test/isolation/specs/skip-locked-2.spec
new file mode 100644
index 0000000..9d70b7b
--- /dev/null
+++ b/src/test/isolation/specs/skip-locked-2.spec
@@ -0,0 +1,39 @@
+# Test SKIP LOCKED with multixact locks.
+
+setup
+{
+  CREATE TABLE queue (
+	id	int		PRIMARY KEY,
+	data			text	NOT NULL,
+	status			text	NOT NULL
+  );
+  INSERT INTO queue VALUES (1, 'foo', 'NEW'), (2, 'bar', 'NEW');
+}
+
+teardown
+{
+  DROP TABLE queue;
+}
+
+session "s1"
+setup		{ BEGIN; }
+step "s1a"	{ SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1; }
+step "s1b"	{ COMMIT; }
+
+session "s2"
+setup		{ BEGIN; }
+step "s2a"	{ SELECT * FROM queue ORDER BY id FOR SHARE SKIP LOCKED LIMIT 1; }
+step "s2b"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s2c"	{ COMMIT; }
+
+# s1 and s2 both get a SHARE lock, creating a multixact lock, then s2
+# tries to upgrade to UPDATE but skips the record because it can't
+# acquire the multixact lock
+permutation "s1a" "s2a" "s2b" "s1b" "s2c"
+# the same, but with the SHARE locks acquired in a different order, so
+# s2 again skips because it can't acquire the multixact lock
+permutation "s2a" "s1a" "s2b" "s1b" "s2c"
+# s2 acquires SHARE then UPDATE, then s1 tries to acquire SHARE but
+# can't, so it skips the first record because it can't acquire a
+# regular lock
+permutation "s2a" "s2b" "s1a" "s1b" "s2c"
\ No newline at end of file
diff --git a/src/test/isolation/specs/skip-locked-3.spec b/src/test/isolation/specs/skip-locked-3.spec
new file mode 100644
index 0000000..34e0f3d
--- /dev/null
+++ b/src/test/isolation/specs/skip-locked-3.spec
@@ -0,0 +1,34 @@
+# Test SKIP LOCKED with tuple locks.
+
+setup
+{
+  CREATE TABLE queue (
+	id	int		PRIMARY KEY,
+	data			text	NOT NULL,
+	status			text	NOT NULL
+  );
+  INSERT INTO queue VALUES (1, 'foo', 'NEW'), (2, 'bar', 'NEW');
+}
+
+teardown
+{
+  DROP TABLE queue;
+}
+
+session "s1"
+setup		{ BEGIN; }
+step "s1a"	{ SELECT * FROM queue ORDER BY id FOR UPDATE LIMIT 1; }
+step "s1b"	{ COMMIT; }
+
+session "s2"
+setup		{ BEGIN; }
+step "s2a"	{ SELECT * FROM queue ORDER BY id FOR UPDATE LIMIT 1; }
+step "s2b"	{ COMMIT; }
+
+session "s3"
+setup		{ BEGIN; }
+step "s3a"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s3b"	{ COMMIT; }
+
+# s3 skips to the second record due to the tuple lock held by s2
+permutation "s1a" "s2a" "s3a" "s1b" "s2b" "s3b"
\ No newline at end of file
diff --git a/src/test/isolation/specs/skip-locked.spec b/src/test/isolation/specs/skip-locked.spec
new file mode 100644
index 0000000..3565963
--- /dev/null
+++ b/src/test/isolation/specs/skip-locked.spec
@@ -0,0 +1,28 @@
+# Test SKIP LOCKED when regular row locks can't be acquired.
+
+setup
+{
+  CREATE TABLE queue (
+	id	int		PRIMARY KEY,
+	data			text	NOT NULL,
+	status			text	NOT NULL
+  );
+  INSERT INTO queue VALUES (1, 'foo', 'NEW'), (2, 'bar', 'NEW');
+}
+
+teardown
+{
+  DROP TABLE queue;
+}
+
+session "s1"
+setup		{ BEGIN; }
+step "s1a"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s1b"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s1c"	{ COMMIT; }
+
+session "s2"
+setup		{ BEGIN; }
+step "s2a"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s2b"	{ SELECT * FROM queue ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1; }
+step "s2c"	{ COMMIT; }
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 51e4d28..881d997 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -4234,7 +4234,10 @@ l3:
 			else /* wait_policy == LockWaitSkip */
 			{
 				if (!ConditionalLockTupleTuplock(relation, tid, mode))
+				{
+					elog(WARNING, "skipping because cannot obtain tuple lock");
 					return HeapTupleWouldBlock;
+				}
 			}
 			have_tuple_lock = true;
 		}
@@ -4462,6 +4465,7 @@ l3:
 													XLTW_Lock, NULL))
 					{
 						UnlockTupleTuplock(relation, tid, mode);
+						elog(WARNING, "skipping because cannot obtain multi-xact lock");
 						return HeapTupleWouldBlock;
 					}
 				}
@@ -4528,6 +4532,7 @@ l3:
 					if (!ConditionalXactLockTableWait(xwait))
 					{
 						UnlockTupleTuplock(relation, tid, mode);
+						elog(WARNING, "skipping because cannot obtain regular lock");
 						return HeapTupleWouldBlock;
 					}
 				}