Hello again!

Here is the third version of the patch for pgbench, revised following Fabien Coelho's comments. As in the previous version, transactions with serialization and deadlock failures are rolled back and retried until they either end successfully or reach the maximum number of tries.
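
For example (an illustrative invocation; the database name and run parameters are arbitrary), the retry path can be exercised by running the built-in script under the serializable isolation level and allowing several tries per transaction:

  PGOPTIONS='-c default_transaction_isolation=serializable' \
      pgbench --max-tries=100 -c 10 -T 60 bench

With the default of --max-tries=1 such transactions are simply reported as failed.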

Differences from the previous version:
* Some code cleanup :) In particular, the Variables structure for managing client variables and only one new TAP tests file (as recommended here [1] and here [2]).
* There's no error if the last transaction in the script is not completed. However, transactions started in previous scripts and/or not ending in the current script are not rolled back and retried after a failure. Such a script try is reported as failed because it contains a failure that was not rolled back and retried.
* Usually the retries and/or failures are printed only if they are non-zero. In the transaction/aggregation logs the failures are always printed, and the retries are printed if max_tries is greater than 1. This keeps the log format uniform throughout the run (see the sample log lines below).
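
For illustration, here are two per-transaction log lines in the new format, taken from the example in the updated documentation (run with --max-tries=10); the last two columns are the serialization and deadlock retry counters:

  3 0 47423 0 1499414498 34501 4 0
  0 0 failed 0 1499414498 84905 10 0

The first transaction succeeded after 4 serialization retries; the second one is reported as failed because it did not end successfully within the maximum number of tries.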

Patch is attached. Any suggestions are welcome!

[1] https://www.postgresql.org/message-id/alpine.DEB.2.20.1707121338090.12795%40lancre
[2] https://www.postgresql.org/message-id/alpine.DEB.2.20.1707121142300.12795%40lancre

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
From 0ee37aaaa2e93b8d7017563d2f2f55357c39c08a Mon Sep 17 00:00:00 2001
From: Marina Polyakova <m.polyak...@postgrespro.ru>
Date: Fri, 21 Jul 2017 17:57:58 +0300
Subject: [PATCH v3] Pgbench Retry transactions with serialization or deadlock
 errors

Now transactions with serialization or deadlock failures can be rolled back and
retried again and again until they end successfully or their number of tries
reaches the maximum. You can set the maximum number of tries with the
appropriate benchmarking option (--max-tries); the default value is 1. If there
are retries and/or failures, their statistics are printed in the progress
reports, in the transaction/aggregation logs, and at the end with the other
results (in total and per script). A transaction failure is reported only if the
last try of this transaction fails. Retries and/or failures are also printed
per command together with the average latencies if you use the appropriate
benchmarking option (--report-per-command, -r) and their total number is not
zero.

Note that transactions started in previous scripts and/or not ending in the
current script are not rolled back and retried after a failure. Such a script
try is reported as failed because it contains a failure that was not rolled
back and retried.
---
 doc/src/sgml/ref/pgbench.sgml                      | 240 +++++-
 src/bin/pgbench/pgbench.c                          | 872 +++++++++++++++++----
 .../t/002_serialization_and_deadlock_failures.pl   | 459 +++++++++++
 3 files changed, 1412 insertions(+), 159 deletions(-)
 create mode 100644 src/bin/pgbench/t/002_serialization_and_deadlock_failures.pl

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 64b043b..3bbeec5 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -49,6 +49,7 @@
 
 <screen>
 transaction type: &lt;builtin: TPC-B (sort of)&gt;
+transaction maximum tries number: 1
 scaling factor: 10
 query mode: simple
 number of clients: 10
@@ -59,7 +60,7 @@ tps = 85.184871 (including connections establishing)
 tps = 85.296346 (excluding connections establishing)
 </screen>
 
-  The first six lines report some of the most important parameter
+  The first seven lines report some of the most important parameter
   settings.  The next line reports the number of transactions completed
   and intended (the latter being just the product of number of clients
   and number of transactions per client); these will be equal unless the run
@@ -436,22 +437,33 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
         Show progress report every <replaceable>sec</> seconds.  The report
         includes the time since the beginning of the run, the tps since the
         last report, and the transaction latency average and standard
-        deviation since the last report.  Under throttling (<option>-R</>),
-        the latency is computed with respect to the transaction scheduled
-        start time, not the actual transaction beginning time, thus it also
-        includes the average schedule lag time.
+        deviation since the last report.  If any transactions ended with
+        serialization/deadlock failures since the last report, they are
+        also reported here as failed (see
+        <xref linkend="failures-and-retries"
+        endterm="failures-and-retries-title"> for more information).  Under
+        throttling (<option>-R</>), the latency is computed with respect to the
+        transaction scheduled start time, not the actual transaction beginning
+        time, thus it also includes the average schedule lag time.  If any
+        transactions have been rolled back and retried after a
+        serialization/deadlock failure since the last report, the report also
+        includes the total number of retries of such transactions (this
+        requires the <option>--max-tries</> option).
        </para>
       </listitem>
      </varlistentry>
 
      <varlistentry>
       <term><option>-r</option></term>
-      <term><option>--report-latencies</option></term>
+      <term><option>--report-per-command</option></term>
       <listitem>
        <para>
-        Report the average per-statement latency (execution time from the
-        perspective of the client) of each command after the benchmark
-        finishes.  See below for details.
+        Report the following statistics for each command after the benchmark
+        finishes: the average per-statement latency (execution time from the
+        perspective of the client), the number of serialization failures and
+        retries, and the number of deadlock failures and retries. Note that the
+        report contains failures only if the total number of failures for all
+        scripts is not zero; the same applies to retries. See below for details.
        </para>
       </listitem>
      </varlistentry>
@@ -496,6 +508,15 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
        </para>
 
        <para>
+        Transactions with serialization or deadlock failures (or with both of
+        them if the script used contains several transactions; see
+        <xref linkend="transactions-and-scripts"
+        endterm="transactions-and-scripts-title"> for more information) are
+        marked separately and, as for skipped transactions, their time is not
+        reported.
+       </para>
+
+       <para>
         A high schedule lag time is an indication that the system cannot
         process transactions at the specified rate, with the chosen number of
         clients and threads. When the average transaction execution time is
@@ -590,6 +611,32 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
      </varlistentry>
 
      <varlistentry>
+      <term><option>--max-tries=<replaceable>tries_number</></option></term>
+      <listitem>
+       <para>
+        Set the maximum number of tries for transactions with
+        serialization/deadlock failures. Default is 1.
+       </para>
+       <note>
+         <para>
+         Be careful when repeating transactions that contain shell commands.
+         Unlike SQL commands, the results of shell commands are not rolled
+         back, except for the variable value set by the
+         <command>\setshell</command> command. If a shell command fails, its
+         client is aborted without restarting.
+         </para>
+       </note>
+       <note>
+         <para>
+         Transactions started in previous scripts and/or not ending in the
+         current script are not rolled back and retried after a failure.
+         Such a script try is reported as failed.
+         </para>
+       </note>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
       <term><option>--progress-timestamp</option></term>
       <listitem>
        <para>
@@ -693,8 +740,8 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
  <refsect1>
   <title>Notes</title>
 
- <refsect2>
-  <title>What is the <quote>Transaction</> Actually Performed in <application>pgbench</application>?</title>
+ <refsect2 id="transactions-and-scripts">
+  <title id="transactions-and-scripts-title">What is the <quote>Transaction</> Actually Performed in <application>pgbench</application>?</title>
 
   <para>
    <application>pgbench</> executes test scripts chosen randomly
@@ -1148,7 +1195,7 @@ END;
    The format of the log is:
 
 <synopsis>
-<replaceable>client_id</> <replaceable>transaction_no</> <replaceable>time</> <replaceable>script_no</> <replaceable>time_epoch</> <replaceable>time_us</> <optional> <replaceable>schedule_lag</replaceable> </optional>
+<replaceable>client_id</> <replaceable>transaction_no</> <replaceable>time</> <replaceable>script_no</> <replaceable>time_epoch</> <replaceable>time_us</> <optional> <replaceable>schedule_lag</replaceable> </optional> <optional> <replaceable>serialization_retries</replaceable> <replaceable>deadlock_retries</replaceable> </optional>
 </synopsis>
 
    where
@@ -1169,6 +1216,14 @@ END;
    When both <option>--rate</> and <option>--latency-limit</> are used,
    the <replaceable>time</> for a skipped transaction will be reported as
    <literal>skipped</>.
+   <replaceable>serialization_retries</> and <replaceable>deadlock_retries</>
+   are the sums of all the retries after the corresponding failures during the
+   current script execution. They are only present when the maximum number of
+   tries for transactions is more than 1 (<option>--max-tries</>).
+   If the transaction ended with a serialization/deadlock failure, its
+   <replaceable>time</> will be reported as <literal>failed</> (see
+   <xref linkend="failures-and-retries" endterm="failures-and-retries-title">
+   for more information).
   </para>
 
   <para>
@@ -1198,6 +1253,22 @@ END;
   </para>
 
   <para>
+   Example with failures and retries (the maximum number of tries is 10):
+<screen>
+3 0 47423 0 1499414498 34501 4 0
+3 1 8333 0 1499414498 42848 1 0
+3 2 8358 0 1499414498 51219 1 0
+4 0 72345 0 1499414498 59433 7 0
+1 3 41718 0 1499414498 67879 5 0
+1 4 8416 0 1499414498 76311 1 0
+3 3 33235 0 1499414498 84469 4 0
+0 0 failed 0 1499414498 84905 10 0
+2 0 failed 0 1499414498 86248 10 0
+3 4 8307 0 1499414498 92788 1 0
+</screen>
+  </para>
+
+  <para>
    When running a long test on hardware that can handle a lot of transactions,
    the log files can become very large.  The <option>--sampling-rate</> option
    can be used to log only a random sample of transactions.
@@ -1212,7 +1283,7 @@ END;
    format is used for the log files:
 
 <synopsis>
-<replaceable>interval_start</> <replaceable>num_transactions</> <replaceable>sum_latency</> <replaceable>sum_latency_2</> <replaceable>min_latency</> <replaceable>max_latency</> <optional> <replaceable>sum_lag</> <replaceable>sum_lag_2</> <replaceable>min_lag</> <replaceable>max_lag</> <optional> <replaceable>skipped</> </optional> </optional>
+<replaceable>interval_start</> <replaceable>num_transactions</> <replaceable>sum_latency</> <replaceable>sum_latency_2</> <replaceable>min_latency</> <replaceable>max_latency</> <replaceable>failures</> <optional> <replaceable>sum_lag</> <replaceable>sum_lag_2</> <replaceable>min_lag</> <replaceable>max_lag</> <optional> <replaceable>skipped</> </optional> </optional> <optional> <replaceable>serialization_retries</> <replaceable>deadlock_retries</> </optional>
 </synopsis>
 
    where
@@ -1226,7 +1297,11 @@ END;
    transaction latencies within the interval,
    <replaceable>min_latency</> is the minimum latency within the interval,
    and
-   <replaceable>max_latency</> is the maximum latency within the interval.
+   <replaceable>max_latency</> is the maximum latency within the interval,
+   and <replaceable>failures</> is the number of transactions that ended with
+   serialization/deadlock failures within the interval (see
+   <xref linkend="failures-and-retries" endterm="failures-and-retries-title">
+   for more information).
    The next fields,
    <replaceable>sum_lag</>, <replaceable>sum_lag_2</>, <replaceable>min_lag</>,
    and <replaceable>max_lag</>, are only present if the <option>--rate</>
@@ -1234,21 +1309,26 @@ END;
    They provide statistics about the time each transaction had to wait for the
    previous one to finish, i.e. the difference between each transaction's
    scheduled start time and the time it actually started.
-   The very last field, <replaceable>skipped</>,
+   The next field, <replaceable>skipped</>,
    is only present if the <option>--latency-limit</> option is used, too.
    It counts the number of transactions skipped because they would have
    started too late.
+   The very last fields, <replaceable>serialization_retries</> and
+   <replaceable>deadlock_retries</>, are the sums of all the retries after the
+   corresponding failures within the interval. They are only present when the
+   maximum number of tries for transactions is more than 1
+   (<option>--max-tries</>).
    Each transaction is counted in the interval when it was committed.
   </para>
 
   <para>
    Here is some example output:
 <screen>
-1345828501 5601 1542744 483552416 61 2573
-1345828503 7884 1979812 565806736 60 1479
-1345828505 7208 1979422 567277552 59 1391
-1345828507 7685 1980268 569784714 60 1398
-1345828509 7073 1979779 573489941 236 1411
+1345828501 5601 1542744 483552416 61 2573 0
+1345828503 7884 1979812 565806736 60 1479 0
+1345828505 7208 1979422 567277552 59 1391 0
+1345828507 7685 1980268 569784714 60 1398 0
+1345828509 7073 1979779 573489941 236 1411 0
 </screen></para>
 
   <para>
@@ -1260,13 +1340,51 @@ END;
  </refsect2>
 
  <refsect2>
-  <title>Per-Statement Latencies</title>
+  <title>Per-Statement Report</title>
 
   <para>
-   With the <option>-r</> option, <application>pgbench</> collects
-   the elapsed transaction time of each statement executed by every
-   client.  It then reports an average of those values, referred to
-   as the latency for each statement, after the benchmark has finished.
+   With the <option>-r</> option, <application>pgbench</> collects the following
+   statistics for each statement:
+   <itemizedlist>
+     <listitem>
+       <para>
+         the elapsed transaction time of each statement; <application>pgbench</>
+         reports an average of those values, referred to as the latency for each
+         statement;
+       </para>
+     </listitem>
+     <listitem>
+       <para>
+         the number of serialization and deadlock failures that were not retried
+         (see <xref linkend="failures-and-retries"
+         endterm="failures-and-retries-title"> for more information);
+       </para>
+       <note>
+         <para>The total sum of per-command failures can be greater than the
+         number of failed transactions. See
+         <xref linkend="transactions-and-scripts"
+         endterm="transactions-and-scripts-title"> for more information.
+         </para>
+       </note>
+     </listitem>
+     <listitem>
+       <para>
+         the number of retries when there was a serialization/deadlock failure
+         in this command; they are reported as serialization/deadlock retries,
+         respectively.
+       </para>
+     </listitem>
+   </itemizedlist>
+
+   <note>
+     <para>
+     The report contains failures only if the total number of failures for all
+     scripts is not zero; the same applies to retries.
+     </para>
+   </note>
+
+   All values are computed for each statement executed by every client and are
+   reported after the benchmark has finished.
   </para>
 
   <para>
@@ -1274,6 +1392,7 @@ END;
 <screen>
 starting vacuum...end.
 transaction type: &lt;builtin: TPC-B (sort of)&gt;
+transaction maximum tries number: 1
 scaling factor: 1
 query mode: simple
 number of clients: 10
@@ -1298,10 +1417,51 @@ script statistics:
         0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
         1.212  END;
 </screen>
+
+   Another example of output for the default script using the serializable
+   default transaction isolation level (<command>PGOPTIONS='-c
+   default_transaction_isolation=serializable' pgbench ...</command>):
+<screen>
+starting vacuum...end.
+transaction type: &lt;builtin: TPC-B (sort of)&gt;
+transaction maximum tries number: 100
+scaling factor: 1
+query mode: simple
+number of clients: 10
+number of threads: 1
+number of transactions per client: 1000
+number of transactions actually processed: 10000/10000
+number of failures: 3493 (34.930 %)
+number of retries: 449743 (serialization: 449743, deadlocks: 0)
+latency average = 211.539 ms
+latency stddev = 354.318 ms
+tps = 29.310488 (including connections establishing)
+tps = 29.310885 (excluding connections establishing)
+script statistics:
+ - statement latencies in milliseconds, serialization failures and retries,
+   deadlock failures and retries:
+  0.004     0       0  0  0  \set aid random(1, 100000 * :scale)
+  0.001     0       0  0  0  \set bid random(1, 1 * :scale)
+  0.001     0       0  0  0  \set tid random(1, 10 * :scale)
+  0.001     0       0  0  0  \set delta random(-5000, 5000)
+  0.452     0       0  0  0  BEGIN;
+  1.080     0       1  0  0  UPDATE pgbench_accounts
+                             SET abalance = abalance + :delta WHERE aid = :aid;
+  0.853     0       1  0  0  SELECT abalance FROM pgbench_accounts
+                             WHERE aid = :aid;
+  1.028  3455  436867  0  0  UPDATE pgbench_tellers
+                             SET tbalance = tbalance + :delta WHERE tid = :tid;
+  0.860    38   12836  0  0  UPDATE pgbench_branches
+                             SET bbalance = bbalance + :delta WHERE bid = :bid;
+  1.027     0       0  0  0  INSERT INTO pgbench_history
+                                    (tid, bid, aid, delta, mtime)
+                             VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
+  1.147     0      38  0  0  END;
+</screen>
   </para>
 
   <para>
-   If multiple script files are specified, the averages are reported
+   If multiple script files are specified, all statistics are reported
    separately for each script file.
   </para>
 
@@ -1315,6 +1475,34 @@ script statistics:
   </para>
  </refsect2>
 
+ <refsect2 id="failures-and-retries">
+  <title id="failures-and-retries-title">Serialization/deadlock failures and retries</title>
+
+  <para>
+   Transactions with serialization or deadlock failures are rolled back and
+   repeated again and again until they end successfully or their number of
+   tries reaches the maximum (to change this maximum use the benchmarking
+   option <option>--max-tries</>). If the last try of a transaction fails, this
+   transaction is reported as failed. Note that transactions started in
+   previous scripts and/or not ending in the current script are not rolled
+   back and retried after a failure. Such a script try is reported as failed
+   because it contains a failure that was not rolled back and retried.
+   Latencies are not computed for failed transactions and commands. The
+   latency of a successful transaction includes the entire time of transaction
+   execution, including rollbacks and retries.
+  </para>
+
+  <para>
+   The main report contains the number of failed transactions if it is not zero
+   (see <xref linkend="transactions-and-scripts"
+   endterm="transactions-and-scripts-title"> for more information). If the
+   total number of retries is not zero, the main report also contains it and
+   the number of retries after each kind of failure (this requires the
+   <option>--max-tries</> option). The per-statement report contains failures
+   only if the main report contains them too; the same applies to retries.
+  </para>
+ </refsect2>
+
  <refsect2>
   <title>Good Practices</title>
 
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 4d364a1..0a5a0d9 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -58,6 +58,9 @@
 
 #include "pgbench.h"
 
+#define ERRCODE_IN_FAILED_SQL_TRANSACTION  "25P02"
+#define ERRCODE_T_R_SERIALIZATION_FAILURE  "40001"
+#define ERRCODE_T_R_DEADLOCK_DETECTED  "40P01"
 #define ERRCODE_UNDEFINED_TABLE  "42P01"
 
 /*
@@ -174,8 +177,12 @@ bool		progress_timestamp = false; /* progress report with Unix time */
 int			nclients = 1;		/* number of clients */
 int			nthreads = 1;		/* number of threads */
 bool		is_connect;			/* establish connection for each transaction */
-bool		is_latencies;		/* report per-command latencies */
+bool		report_per_command = false;	/* report per-command latencies, retries
+										 * and failures without retrying */
 int			main_pid;			/* main process id used in log filename */
+int			max_tries = 1;		/* maximum number of tries to run the
+								 * transaction with serialization or deadlock
+								 * failures */
 
 char	   *pghost = "";
 char	   *pgport = "";
@@ -223,6 +230,16 @@ typedef struct SimpleStats
 } SimpleStats;
 
 /*
+ * Data structure to hold retries after failures.
+ */
+typedef struct Retries
+{
+	int64		serialization;	/* number of retries after serialization
+								 * failures */
+	int64		deadlocks;		/* number of retries after deadlock failures */
+} Retries;
+
+/*
  * Data structure to hold various statistics: per-thread and per-script stats
  * are maintained and merged together.
  */
@@ -232,11 +249,43 @@ typedef struct StatsData
 	int64		cnt;			/* number of transactions */
 	int64		skipped;		/* number of transactions skipped under --rate
 								 * and --latency-limit */
+	Retries		retries;
+	int64		failures;		/* number of transactions that were not retried
+								 * after a serialization or a deadlock
+								 * failure */
 	SimpleStats latency;
 	SimpleStats lag;
 } StatsData;
 
 /*
+ * Data structure for client variables.
+ */
+typedef struct Variables
+{
+	Variable   *array;			/* array of variable definitions */
+	int			nvariables;		/* number of variables */
+	bool		vars_sorted;	/* are variables sorted by name? */
+} Variables;
+
+/*
+ * Data structure for repeating a transaction from the beginning with the same
+ * parameters.
+ */
+typedef struct RetryState
+{
+	/*
+	 * Command number in script; -1 if there have been no transactions yet or
+	 * we continue the transaction block from the previous scripts
+	 */
+	int			command;
+
+	int			retries;
+
+	unsigned short random_state[3];	/* random seed */
+	Variables   variables;		/* client variables */
+} RetryState;
+
+/*
  * Connection state machine states.
  */
 typedef enum
@@ -287,6 +336,19 @@ typedef enum
 	CSTATE_END_COMMAND,
 
 	/*
+	 * States for transactions with serialization or deadlock failures.
+	 *
+	 * First, report the failure in CSTATE_FAILURE. Then, if we need to end the
+	 * failed transaction block, go to states CSTATE_START_COMMAND ->
+	 * CSTATE_WAIT_RESULT -> CSTATE_END_COMMAND with the appropriate command.
+	 * After that, go to CSTATE_RETRY. If we can repeat the failed transaction,
+	 * set the same parameters for the transaction execution as in the previous
+	 * tries. Otherwise, go to the next command after the failed transaction.
+	 */
+	CSTATE_FAILURE,
+	CSTATE_RETRY,
+
+	/*
 	 * CSTATE_END_TX performs end-of-transaction processing.  Calculates
 	 * latency, and logs the transaction.  In --connect mode, closes the
 	 * current connection.  Chooses the next script to execute and starts over
@@ -311,14 +373,13 @@ typedef struct
 	PGconn	   *con;			/* connection handle to DB */
 	int			id;				/* client No. */
 	ConnectionStateEnum state;	/* state machine's current state. */
+	unsigned short random_state[3];	/* separate randomness for each client */
 
 	int			use_file;		/* index in sql_script for this client */
 	int			command;		/* command number in script */
 
 	/* client variables */
-	Variable   *variables;		/* array of variable definitions */
-	int			nvariables;		/* number of variables */
-	bool		vars_sorted;	/* are variables sorted by name? */
+	Variables   variables;
 
 	/* various times about current transaction */
 	int64		txn_scheduled;	/* scheduled start time of transaction (usec) */
@@ -328,6 +389,15 @@ typedef struct
 
 	bool		prepared[MAX_SCRIPTS];	/* whether client prepared the script */
 
+	/* for repeating transactions with serialization or deadlock failures: */
+	bool		in_transaction_block;	/* are we in transaction block? */
+	bool		end_failed_transaction_block; /* are we ending the failed
+											   * transaction block? */
+	RetryState  retry_state;
+	Retries		retries;
+	bool		failure;		/* if there was a serialization or a deadlock
+								 * failure without retrying */
+
 	/* per client collected stats */
 	int64		cnt;			/* transaction count */
 	int			ecnt;			/* error count */
@@ -342,7 +412,6 @@ typedef struct
 	pthread_t	thread;			/* thread handle */
 	CState	   *state;			/* array of CState */
 	int			nstate;			/* length of state[] */
-	unsigned short random_state[3]; /* separate randomness for each thread */
 	int64		throttle_trigger;	/* previous/next throttling (us) */
 	FILE	   *logfile;		/* where to log, or NULL */
 
@@ -382,6 +451,17 @@ typedef struct
 	char	   *argv[MAX_ARGS]; /* command word list */
 	PgBenchExpr *expr;			/* parsed expression, if needed */
 	SimpleStats stats;			/* time spent in this command */
+	Retries		retries;
+	int64		serialization_failures;	/* number of serialization failures that
+										 * were not retried */
+	int64		deadlock_failures;	/* number of deadlock failures that were not
+									 * retried */
+
+	/* for repeating transactions with serialization and deadlock failures: */
+	bool		is_transaction_block_begin;	/* does the command syntactically
+											 * start a transaction block? */
+	int			transaction_block_end;	/* nearest command number to complete
+										 * the transaction block or -1 */
 } Command;
 
 typedef struct ParsedScript
@@ -445,6 +525,17 @@ static const BuiltinScript builtin_script[] =
 	}
 };
 
+/*
+ * Status of failures during script execution.
+ */
+typedef enum FailureStatus
+{
+	SERIALIZATION_FAILURE,
+	DEADLOCK_FAILURE,
+	IN_FAILED_TRANSACTION,
+	FAILURE_STATUS_ANOTHER		/* another failure or no failure */
+} FailureStatus;
+
 
 /* Function prototypes */
 static void setIntValue(PgBenchValue *pv, int64 ival);
@@ -504,7 +595,7 @@ usage(void)
 		   "                           protocol for submitting queries (default: simple)\n"
 		   "  -n, --no-vacuum          do not run VACUUM before tests\n"
 		   "  -P, --progress=NUM       show thread progress report every NUM seconds\n"
-		   "  -r, --report-latencies   report average latency per command\n"
+		   "  -r, --report-per-command report latencies, failures and retries per command\n"
 		   "  -R, --rate=NUM           target rate in transactions per second\n"
 		   "  -s, --scale=NUM          report this scale factor in output\n"
 		   "  -t, --transactions=NUM   number of transactions each client runs (default: 10)\n"
@@ -513,6 +604,7 @@ usage(void)
 		   "  --aggregate-interval=NUM aggregate data over NUM seconds\n"
 		   "  --log-prefix=PREFIX      prefix for transaction time log file\n"
 		   "                           (default: \"pgbench_log\")\n"
+		   "  --max-tries=NUM          max number of tries to run transaction (default: 1)\n"
 		   "  --progress-timestamp     use Unix epoch timestamps for progress\n"
 		   "  --sampling-rate=NUM      fraction of transactions to log (e.g., 0.01 for 1%%)\n"
 		   "\nCommon options:\n"
@@ -624,7 +716,7 @@ gotdigits:
 
 /* random number generator: uniform distribution from min to max inclusive */
 static int64
-getrand(TState *thread, int64 min, int64 max)
+getrand(CState *st, int64 min, int64 max)
 {
 	/*
 	 * Odd coding is so that min and max have approximately the same chance of
@@ -635,7 +727,7 @@ getrand(TState *thread, int64 min, int64 max)
 	 * protected by a mutex, and therefore a bottleneck on machines with many
 	 * CPUs.
 	 */
-	return min + (int64) ((max - min + 1) * pg_erand48(thread->random_state));
+	return min + (int64) ((max - min + 1) * pg_erand48(st->random_state));
 }
 
 /*
@@ -644,7 +736,7 @@ getrand(TState *thread, int64 min, int64 max)
  * value is exp(-parameter).
  */
 static int64
-getExponentialRand(TState *thread, int64 min, int64 max, double parameter)
+getExponentialRand(CState *st, int64 min, int64 max, double parameter)
 {
 	double		cut,
 				uniform,
@@ -654,7 +746,7 @@ getExponentialRand(TState *thread, int64 min, int64 max, double parameter)
 	Assert(parameter > 0.0);
 	cut = exp(-parameter);
 	/* erand in [0, 1), uniform in (0, 1] */
-	uniform = 1.0 - pg_erand48(thread->random_state);
+	uniform = 1.0 - pg_erand48(st->random_state);
 
 	/*
 	 * inner expression in (cut, 1] (if parameter > 0), rand in [0, 1)
@@ -667,7 +759,7 @@ getExponentialRand(TState *thread, int64 min, int64 max, double parameter)
 
 /* random number generator: gaussian distribution from min to max inclusive */
 static int64
-getGaussianRand(TState *thread, int64 min, int64 max, double parameter)
+getGaussianRand(CState *st, int64 min, int64 max, double parameter)
 {
 	double		stdev;
 	double		rand;
@@ -695,8 +787,8 @@ getGaussianRand(TState *thread, int64 min, int64 max, double parameter)
 		 * are expected in (0, 1] (see
 		 * http://en.wikipedia.org/wiki/Box_muller)
 		 */
-		double		rand1 = 1.0 - pg_erand48(thread->random_state);
-		double		rand2 = 1.0 - pg_erand48(thread->random_state);
+		double		rand1 = 1.0 - pg_erand48(st->random_state);
+		double		rand2 = 1.0 - pg_erand48(st->random_state);
 
 		/* Box-Muller basic form transform */
 		double		var_sqrt = sqrt(-2.0 * log(rand1));
@@ -723,7 +815,7 @@ getGaussianRand(TState *thread, int64 min, int64 max, double parameter)
  * will approximate a Poisson distribution centered on the given value.
  */
 static int64
-getPoissonRand(TState *thread, int64 center)
+getPoissonRand(CState *st, int64 center)
 {
 	/*
 	 * Use inverse transform sampling to generate a value > 0, such that the
@@ -732,7 +824,7 @@ getPoissonRand(TState *thread, int64 center)
 	double		uniform;
 
 	/* erand in [0, 1), uniform in (0, 1] */
-	uniform = 1.0 - pg_erand48(thread->random_state);
+	uniform = 1.0 - pg_erand48(st->random_state);
 
 	return (int64) (-log(uniform) * ((double) center) + 0.5);
 }
@@ -777,6 +869,25 @@ mergeSimpleStats(SimpleStats *acc, SimpleStats *ss)
 }
 
 /*
+ * Initialize the given Retries struct to all zeroes
+ */
+static void
+initRetries(Retries *retries)
+{
+	memset(retries, 0, sizeof(Retries));
+}
+
+/*
+ * Merge two Retries objects
+ */
+static void
+mergeRetries(Retries *acc, Retries *retries)
+{
+	acc->serialization += retries->serialization;
+	acc->deadlocks += retries->deadlocks;
+}
+
+/*
  * Initialize a StatsData struct to mostly zeroes, with its start time set to
  * the given value.
  */
@@ -786,24 +897,37 @@ initStats(StatsData *sd, time_t start_time)
 	sd->start_time = start_time;
 	sd->cnt = 0;
 	sd->skipped = 0;
+	sd->failures = 0;
+	initRetries(&sd->retries);
 	initSimpleStats(&sd->latency);
 	initSimpleStats(&sd->lag);
 }
 
 /*
- * Accumulate one additional item into the given stats object.
+ * Accumulate the basic counters (including retries), regardless of whether the
+ * transaction was skipped or failed.
  */
 static void
-accumStats(StatsData *stats, bool skipped, double lat, double lag)
+accumMainStats(StatsData *stats, bool skipped, bool failure, Retries *retries)
 {
 	stats->cnt++;
-
 	if (skipped)
-	{
-		/* no latency to record on skipped transactions */
 		stats->skipped++;
-	}
-	else
+	else if (failure)
+		stats->failures++;
+	mergeRetries(&stats->retries, retries);
+}
+
+/*
+ * Accumulate one additional item into the given stats object.
+ */
+static void
+accumStats(StatsData *stats, bool skipped, bool failure, double lat, double lag,
+		   Retries *retries)
+{
+	accumMainStats(stats, skipped, failure, retries);
+
+	if (!skipped && !failure)
 	{
 		addToSimpleStats(&stats->latency, lat);
 
@@ -936,39 +1060,39 @@ compareVariableNames(const void *v1, const void *v2)
 
 /* Locate a variable by name; returns NULL if unknown */
 static Variable *
-lookupVariable(CState *st, char *name)
+lookupVariable(Variables *variables, char *name)
 {
 	Variable	key;
 
 	/* On some versions of Solaris, bsearch of zero items dumps core */
-	if (st->nvariables <= 0)
+	if (variables->nvariables <= 0)
 		return NULL;
 
 	/* Sort if we have to */
-	if (!st->vars_sorted)
+	if (!variables->vars_sorted)
 	{
-		qsort((void *) st->variables, st->nvariables, sizeof(Variable),
-			  compareVariableNames);
-		st->vars_sorted = true;
+		qsort((void *) variables->array, variables->nvariables,
+			  sizeof(Variable), compareVariableNames);
+		variables->vars_sorted = true;
 	}
 
 	/* Now we can search */
 	key.name = name;
 	return (Variable *) bsearch((void *) &key,
-								(void *) st->variables,
-								st->nvariables,
+								(void *) variables->array,
+								variables->nvariables,
 								sizeof(Variable),
 								compareVariableNames);
 }
 
 /* Get the value of a variable, in string form; returns NULL if unknown */
 static char *
-getVariable(CState *st, char *name)
+getVariable(Variables *variables, char *name)
 {
 	Variable   *var;
 	char		stringform[64];
 
-	var = lookupVariable(st, name);
+	var = lookupVariable(variables, name);
 	if (var == NULL)
 		return NULL;			/* not found */
 
@@ -1041,11 +1165,11 @@ isLegalVariableName(const char *name)
  * Returns NULL on failure (bad name).
  */
 static Variable *
-lookupCreateVariable(CState *st, const char *context, char *name)
+lookupCreateVariable(Variables *variables, const char *context, char *name)
 {
 	Variable   *var;
 
-	var = lookupVariable(st, name);
+	var = lookupVariable(variables, name);
 	if (var == NULL)
 	{
 		Variable   *newvars;
@@ -1062,23 +1186,24 @@ lookupCreateVariable(CState *st, const char *context, char *name)
 		}
 
 		/* Create variable at the end of the array */
-		if (st->variables)
-			newvars = (Variable *) pg_realloc(st->variables,
-											  (st->nvariables + 1) * sizeof(Variable));
+		if (variables->array)
+			newvars = (Variable *) pg_realloc(
+								variables->array,
+								(variables->nvariables + 1) * sizeof(Variable));
 		else
 			newvars = (Variable *) pg_malloc(sizeof(Variable));
 
-		st->variables = newvars;
+		variables->array = newvars;
 
-		var = &newvars[st->nvariables];
+		var = &newvars[variables->nvariables];
 
 		var->name = pg_strdup(name);
 		var->value = NULL;
 		/* caller is expected to initialize remaining fields */
 
-		st->nvariables++;
+		variables->nvariables++;
 		/* we don't re-sort the array till we have to */
-		st->vars_sorted = false;
+		variables->vars_sorted = false;
 	}
 
 	return var;
@@ -1087,12 +1212,13 @@ lookupCreateVariable(CState *st, const char *context, char *name)
 /* Assign a string value to a variable, creating it if need be */
 /* Returns false on failure (bad name) */
 static bool
-putVariable(CState *st, const char *context, char *name, const char *value)
+putVariable(Variables *variables, const char *context, char *name,
+			const char *value)
 {
 	Variable   *var;
 	char	   *val;
 
-	var = lookupCreateVariable(st, context, name);
+	var = lookupCreateVariable(variables, context, name);
 	if (!var)
 		return false;
 
@@ -1110,12 +1236,12 @@ putVariable(CState *st, const char *context, char *name, const char *value)
 /* Assign a numeric value to a variable, creating it if need be */
 /* Returns false on failure (bad name) */
 static bool
-putVariableNumber(CState *st, const char *context, char *name,
+putVariableNumber(Variables *variables, const char *context, char *name,
 				  const PgBenchValue *value)
 {
 	Variable   *var;
 
-	var = lookupCreateVariable(st, context, name);
+	var = lookupCreateVariable(variables, context, name);
 	if (!var)
 		return false;
 
@@ -1131,12 +1257,13 @@ putVariableNumber(CState *st, const char *context, char *name,
 /* Assign an integer value to a variable, creating it if need be */
 /* Returns false on failure (bad name) */
 static bool
-putVariableInt(CState *st, const char *context, char *name, int64 value)
+putVariableInt(Variables *variables, const char *context, char *name,
+			   int64 value)
 {
 	PgBenchValue val;
 
 	setIntValue(&val, value);
-	return putVariableNumber(st, context, name, &val);
+	return putVariableNumber(variables, context, name, &val);
 }
 
 static char *
@@ -1181,7 +1308,7 @@ replaceVariable(char **sql, char *param, int len, char *value)
 }
 
 static char *
-assignVariables(CState *st, char *sql)
+assignVariables(Variables *variables, char *sql)
 {
 	char	   *p,
 			   *name,
@@ -1202,7 +1329,7 @@ assignVariables(CState *st, char *sql)
 			continue;
 		}
 
-		val = getVariable(st, name);
+		val = getVariable(variables, name);
 		free(name);
 		if (val == NULL)
 		{
@@ -1217,12 +1344,13 @@ assignVariables(CState *st, char *sql)
 }
 
 static void
-getQueryParams(CState *st, const Command *command, const char **params)
+getQueryParams(Variables *variables, const Command *command,
+			   const char **params)
 {
 	int			i;
 
 	for (i = 0; i < command->argc - 1; i++)
-		params[i] = getVariable(st, command->argv[i + 1]);
+		params[i] = getVariable(variables, command->argv[i + 1]);
 }
 
 /* get a value as an int, tell if there is a problem */
@@ -1593,7 +1721,7 @@ evalFunc(TState *thread, CState *st,
 				if (func == PGBENCH_RANDOM)
 				{
 					Assert(nargs == 2);
-					setIntValue(retval, getrand(thread, imin, imax));
+					setIntValue(retval, getrand(st, imin, imax));
 				}
 				else			/* gaussian & exponential */
 				{
@@ -1615,7 +1743,7 @@ evalFunc(TState *thread, CState *st,
 						}
 
 						setIntValue(retval,
-									getGaussianRand(thread, imin, imax, param));
+									getGaussianRand(st, imin, imax, param));
 					}
 					else		/* exponential */
 					{
@@ -1628,7 +1756,7 @@ evalFunc(TState *thread, CState *st,
 						}
 
 						setIntValue(retval,
-									getExponentialRand(thread, imin, imax, param));
+									getExponentialRand(st, imin, imax, param));
 					}
 				}
 
@@ -1664,7 +1792,7 @@ evaluateExpr(TState *thread, CState *st, PgBenchExpr *expr, PgBenchValue *retval
 			{
 				Variable   *var;
 
-				if ((var = lookupVariable(st, expr->u.variable.varname)) == NULL)
+				if ((var = lookupVariable(&st->variables, expr->u.variable.varname)) == NULL)
 				{
 					fprintf(stderr, "undefined variable \"%s\"\n",
 							expr->u.variable.varname);
@@ -1697,7 +1825,7 @@ evaluateExpr(TState *thread, CState *st, PgBenchExpr *expr, PgBenchValue *retval
  * Return true if succeeded, or false on error.
  */
 static bool
-runShellCommand(CState *st, char *variable, char **argv, int argc)
+runShellCommand(Variables *variables, char *variable, char **argv, int argc)
 {
 	char		command[SHELL_COMMAND_SIZE];
 	int			i,
@@ -1728,7 +1856,7 @@ runShellCommand(CState *st, char *variable, char **argv, int argc)
 		{
 			arg = argv[i] + 1;	/* a string literal starting with colons */
 		}
-		else if ((arg = getVariable(st, argv[i] + 1)) == NULL)
+		else if ((arg = getVariable(variables, argv[i] + 1)) == NULL)
 		{
 			fprintf(stderr, "%s: undefined variable \"%s\"\n",
 					argv[0], argv[i]);
@@ -1791,7 +1919,7 @@ runShellCommand(CState *st, char *variable, char **argv, int argc)
 				argv[0], res);
 		return false;
 	}
-	if (!putVariableInt(st, "setshell", variable, retval))
+	if (!putVariableInt(variables, "setshell", variable, retval))
 		return false;
 
 #ifdef DEBUG
@@ -1817,7 +1945,7 @@ commandFailed(CState *st, char *message)
 
 /* return a script number with a weighted choice. */
 static int
-chooseScript(TState *thread)
+chooseScript(CState *st)
 {
 	int			i = 0;
 	int64		w;
@@ -1825,7 +1953,7 @@ chooseScript(TState *thread)
 	if (num_scripts == 1)
 		return 0;
 
-	w = getrand(thread, 0, total_weight - 1);
+	w = getrand(st, 0, total_weight - 1);
 	do
 	{
 		w -= sql_script[i++].weight;
@@ -1845,7 +1973,7 @@ sendCommand(CState *st, Command *command)
 		char	   *sql;
 
 		sql = pg_strdup(command->argv[0]);
-		sql = assignVariables(st, sql);
+		sql = assignVariables(&st->variables, sql);
 
 		if (debug)
 			fprintf(stderr, "client %d sending %s\n", st->id, sql);
@@ -1857,7 +1985,7 @@ sendCommand(CState *st, Command *command)
 		const char *sql = command->argv[0];
 		const char *params[MAX_ARGS];
 
-		getQueryParams(st, command, params);
+		getQueryParams(&st->variables, command, params);
 
 		if (debug)
 			fprintf(stderr, "client %d sending %s\n", st->id, sql);
@@ -1891,7 +2019,7 @@ sendCommand(CState *st, Command *command)
 			st->prepared[st->use_file] = true;
 		}
 
-		getQueryParams(st, command, params);
+		getQueryParams(&st->variables, command, params);
 		preparedStatementName(name, st->use_file, st->command);
 
 		if (debug)
@@ -1919,14 +2047,14 @@ sendCommand(CState *st, Command *command)
  * of delay, in microseconds.  Returns true on success, false on error.
  */
 static bool
-evaluateSleep(CState *st, int argc, char **argv, int *usecs)
+evaluateSleep(Variables *variables, int argc, char **argv, int *usecs)
 {
 	char	   *var;
 	int			usec;
 
 	if (*argv[1] == ':')
 	{
-		if ((var = getVariable(st, argv[1] + 1)) == NULL)
+		if ((var = getVariable(variables, argv[1] + 1)) == NULL)
 		{
 			fprintf(stderr, "%s: undefined variable \"%s\"\n",
 					argv[0], argv[1]);
@@ -1951,6 +2079,67 @@ evaluateSleep(CState *st, int argc, char **argv, int *usecs)
 	return true;
 }
 
+/* make a deep copy of variables array */
+static void
+copyVariables(Variables *destination_vars, const Variables *source_vars)
+{
+	Variable   *destination = destination_vars->array;
+	Variable   *current_destination;
+	const Variable *source = source_vars->array;
+	const Variable *current_source;
+	int			nvariables = source_vars->nvariables;
+
+	for (current_destination = destination;
+		 current_destination - destination < destination_vars->nvariables;
+		 ++current_destination)
+	{
+		pg_free(current_destination->name);
+		pg_free(current_destination->value);
+	}
+
+	destination_vars->array = pg_realloc(destination_vars->array,
+										 sizeof(Variable) * nvariables);
+	destination = destination_vars->array;
+
+	for (current_source = source, current_destination = destination;
+		 current_source - source < nvariables;
+		 ++current_source, ++current_destination)
+	{
+		current_destination->name = pg_strdup(current_source->name);
+		if (current_source->value)
+			current_destination->value = pg_strdup(current_source->value);
+		else
+			current_destination->value = NULL;
+		current_destination->is_numeric = current_source->is_numeric;
+		current_destination->num_value = current_source->num_value;
+	}
+
+	destination_vars->nvariables = nvariables;
+	destination_vars->vars_sorted = source_vars->vars_sorted;
+}
+
+/*
+ * Returns true if the given status is a serialization or deadlock failure.
+ */
+static bool
+anyFailure(FailureStatus status)
+{
+	return status == SERIALIZATION_FAILURE || status == DEADLOCK_FAILURE;
+}
+
+/*
+ * Returns true if the failure can be retried.
+ */
+static bool
+canRetry(CState *st)
+{
+	Command    *command = sql_script[st->use_file].commands[st->command];
+
+	return (!(st->in_transaction_block && command->transaction_block_end < 0) &&
+			st->retry_state.command >= 0 &&
+			st->retry_state.retries + 1 < max_tries);
+}
+
 /*
  * Advance the state machine of a connection, if possible.
  */
@@ -1962,6 +2151,7 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 	instr_time	now;
 	bool		end_tx_processed = false;
 	int64		wait;
+	FailureStatus failure_status;
 
 	/*
 	 * gettimeofday() isn't free, so we get the current timestamp lazily the
@@ -1990,7 +2180,7 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 				 */
 			case CSTATE_CHOOSE_SCRIPT:
 
-				st->use_file = chooseScript(thread);
+				st->use_file = chooseScript(st);
 
 				if (debug)
 					fprintf(stderr, "client %d executing script \"%s\"\n", st->id,
@@ -2017,7 +2207,7 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 				 * away.
 				 */
 				Assert(throttle_delay > 0);
-				wait = getPoissonRand(thread, throttle_delay);
+				wait = getPoissonRand(st, throttle_delay);
 
 				thread->throttle_trigger += wait;
 				st->txn_scheduled = thread->throttle_trigger;
@@ -2049,7 +2239,7 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 					{
 						processXactStats(thread, st, &now, true, agg);
 						/* next rendez-vous */
-						wait = getPoissonRand(thread, throttle_delay);
+						wait = getPoissonRand(st, throttle_delay);
 						thread->throttle_trigger += wait;
 						st->txn_scheduled = thread->throttle_trigger;
 					}
@@ -2102,6 +2292,11 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 					memset(st->prepared, 0, sizeof(st->prepared));
 				}
 
+				/* reset transaction variables to default values */
+				st->retry_state.command = -1;
+				initRetries(&st->retries);
+				st->failure = false;
+
 				/*
 				 * Record transaction start time under logging, progress or
 				 * throttling.
@@ -2143,10 +2338,38 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 				}
 
 				/*
+				 * The failure status will be changed in CSTATE_WAIT_RESULT if
+				 * there is a serialization/deadlock failure or we continue the
+				 * failed transaction block.  It is initialized here because
+				 * the meta commands don't use CSTATE_WAIT_RESULT.
+				 */
+				failure_status = FAILURE_STATUS_ANOTHER;
+
+				if (command->type == SQL_COMMAND &&
+					!st->in_transaction_block &&
+					st->retry_state.command < st->command)
+				{
+					/*
+					 * This is the first try of the transaction that begins at
+					 * the current command.  Remember its parameters in case we
+					 * have to repeat it later.
+					 */
+					st->retry_state.command = st->command;
+					st->retry_state.retries = 0;
+					memcpy(st->retry_state.random_state, st->random_state,
+						   sizeof(unsigned short) * 3);
+					copyVariables(&st->retry_state.variables, &st->variables);
+				}
+
+				if (command->is_transaction_block_begin &&
+					!st->in_transaction_block)
+					st->in_transaction_block = true;
+
+				/*
 				 * Record statement start time if per-command latencies are
 				 * requested
 				 */
-				if (is_latencies)
+				if (report_per_command)
 				{
 					if (INSTR_TIME_IS_ZERO(now))
 						INSTR_TIME_SET_CURRENT(now);
@@ -2192,7 +2415,7 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 						 */
 						int			usec;
 
-						if (!evaluateSleep(st, argc, argv, &usec))
+						if (!evaluateSleep(&st->variables, argc, argv, &usec))
 						{
 							commandFailed(st, "execution of meta-command 'sleep' failed");
 							st->state = CSTATE_ABORTED;
@@ -2219,7 +2442,8 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 								break;
 							}
 
-							if (!putVariableNumber(st, argv[0], argv[1], &result))
+							if (!putVariableNumber(&st->variables, argv[0],
+												   argv[1], &result))
 							{
 								commandFailed(st, "assignment of meta-command 'set' failed");
 								st->state = CSTATE_ABORTED;
@@ -2228,7 +2452,9 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 						}
 						else if (pg_strcasecmp(argv[0], "setshell") == 0)
 						{
-							bool		ret = runShellCommand(st, argv[1], argv + 2, argc - 2);
+							bool		ret = runShellCommand(&st->variables,
+															  argv[1], argv + 2,
+															  argc - 2);
 
 							if (timer_exceeded) /* timeout */
 							{
@@ -2248,7 +2474,9 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 						}
 						else if (pg_strcasecmp(argv[0], "shell") == 0)
 						{
-							bool		ret = runShellCommand(st, NULL, argv + 1, argc - 1);
+							bool		ret = runShellCommand(&st->variables,
+															  NULL, argv + 1,
+															  argc - 1);
 
 							if (timer_exceeded) /* timeout */
 							{
@@ -2283,37 +2511,81 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 				 * Wait for the current SQL command to complete
 				 */
 			case CSTATE_WAIT_RESULT:
-				command = sql_script[st->use_file].commands[st->command];
-				if (debug)
-					fprintf(stderr, "client %d receiving\n", st->id);
-				if (!PQconsumeInput(st->con))
-				{				/* there's something wrong */
-					commandFailed(st, "perhaps the backend died while processing");
-					st->state = CSTATE_ABORTED;
-					break;
-				}
-				if (PQisBusy(st->con))
-					return;		/* don't have the whole result yet */
-
-				/*
-				 * Read and discard the query result;
-				 */
-				res = PQgetResult(st->con);
-				switch (PQresultStatus(res))
 				{
-					case PGRES_COMMAND_OK:
-					case PGRES_TUPLES_OK:
-					case PGRES_EMPTY_QUERY:
+					ExecStatusType result_status;
+					char	   *sqlState;
+
+					command = sql_script[st->use_file].commands[st->command];
+					if (debug)
+						fprintf(stderr, "client %d receiving\n", st->id);
+					if (!PQconsumeInput(st->con))
+					{				/* there's something wrong */
+						commandFailed(st, "perhaps the backend died while processing");
+						st->state = CSTATE_ABORTED;
+						break;
+					}
+					if (PQisBusy(st->con))
+						return;		/* don't have the whole result yet */
+
+					/*
+					 * Read and discard the query result;
+					 */
+					res = PQgetResult(st->con);
+					result_status = PQresultStatus(res);
+					sqlState = PQresultErrorField(res, PG_DIAG_SQLSTATE);
+					failure_status = FAILURE_STATUS_ANOTHER;
+					if (sqlState) {
+						if (strcmp(sqlState,
+								   ERRCODE_T_R_SERIALIZATION_FAILURE) == 0)
+							failure_status = SERIALIZATION_FAILURE;
+						else if (strcmp(sqlState,
+										ERRCODE_T_R_DEADLOCK_DETECTED) == 0)
+							failure_status = DEADLOCK_FAILURE;
+						else if (strcmp(sqlState,
+										ERRCODE_IN_FAILED_SQL_TRANSACTION) == 0)
+							failure_status = IN_FAILED_TRANSACTION;
+					}
+
+					if (debug)
+					{
+						if (anyFailure(failure_status))
+							fprintf(stderr, "client %d got a %s failure (try %d/%d)\n",
+									st->id,
+									(failure_status == SERIALIZATION_FAILURE ?
+									 "serialization" :
+									 "deadlock"),
+									st->retry_state.retries + 1,
+									max_tries);
+						else if (failure_status == IN_FAILED_TRANSACTION)
+							fprintf(stderr, "client %d in the failed transaction\n",
+									st->id);
+					}
+
+					/*
+					 * All is ok if one of the following conditions is
+					 * satisfied:
+					 * - there's no failure;
+					 * - there is a serialization/deadlock failure (these
+					 * failures will be processed later);
+					 * - we continue the failed transaction block (move on to
+					 * the next command).
+					 */
+					if (result_status == PGRES_COMMAND_OK ||
+						result_status == PGRES_TUPLES_OK ||
+						result_status == PGRES_EMPTY_QUERY ||
+						failure_status != FAILURE_STATUS_ANOTHER)
+					{
 						/* OK */
 						PQclear(res);
 						discard_response(st);
 						st->state = CSTATE_END_COMMAND;
-						break;
-					default:
+					}
+					else
+					{
 						commandFailed(st, PQerrorMessage(st->con));
 						PQclear(res);
 						st->state = CSTATE_ABORTED;
-						break;
+					}
 				}
 				break;
 
@@ -2337,12 +2609,20 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 				 */
 			case CSTATE_END_COMMAND:
 
+				/* process the serialization/deadlock failure if we have it */
+				if (anyFailure(failure_status))
+				{
+					st->state = CSTATE_FAILURE;
+					break;
+				}
+
 				/*
 				 * command completed: accumulate per-command execution times
 				 * in thread-local data structure, if per-command latencies
 				 * are requested.
 				 */
-				if (is_latencies)
+				if (report_per_command &&
+					failure_status != IN_FAILED_TRANSACTION)
 				{
 					if (INSTR_TIME_IS_ZERO(now))
 						INSTR_TIME_SET_CURRENT(now);
@@ -2354,8 +2634,135 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 									 INSTR_TIME_GET_DOUBLE(st->stmt_begin));
 				}
 
-				/* Go ahead with next command */
-				st->command++;
+				if (st->in_transaction_block &&
+					command->transaction_block_end == st->command)
+					st->in_transaction_block = false;
+
+				if (st->end_failed_transaction_block)
+				{
+					/*
+					 * The failed transaction block has ended. Retry it if
+					 * possible.
+					 */
+					st->end_failed_transaction_block = false;
+					st->state = CSTATE_RETRY;
+				}
+				else
+				{
+					/* Go ahead with next command */
+					st->command++;
+					st->state = CSTATE_START_COMMAND;
+				}
+
+				break;
+
+				/*
+				 * Report about failure and end the failed transaction block.
+				 */
+			case CSTATE_FAILURE:
+
+				if (canRetry(st))
+				{
+					/*
+					 * The failed transaction will be retried. So accumulate
+					 * the retry for the command and for the current script
+					 * execution.
+					 */
+					if (failure_status == SERIALIZATION_FAILURE)
+					{
+						st->retries.serialization++;
+						if (report_per_command)
+							command->retries.serialization++;
+					}
+					else
+					{
+						st->retries.deadlocks++;
+						if (report_per_command)
+							command->retries.deadlocks++;
+					}
+				}
+				else
+				{
+					/*
+					 * We will not be able to retry this failed transaction.
+					 * So accumulate the failure for the command and for the
+					 * current script execution.
+					 */
+					st->failure = true;
+					if (report_per_command)
+					{
+						if (failure_status == SERIALIZATION_FAILURE)
+							command->serialization_failures++;
+						else
+							command->deadlock_failures++;
+					}
+				}
+
+				if (st->in_transaction_block)
+				{
+					if (command->transaction_block_end >= 0)
+					{
+						if (st->command == command->transaction_block_end)
+						{
+							/*
+							 * The failed transaction block has ended. Retry
+							 * it if possible.
+							 */
+							st->in_transaction_block = false;
+							st->state = CSTATE_RETRY;
+						}
+						else
+						{
+							/* end the failed transaction block */
+							st->command = command->transaction_block_end;
+							st->end_failed_transaction_block = true;
+							st->state = CSTATE_START_COMMAND;
+						}
+					}
+					else
+					{
+						/*
+						 * There is no transaction block end later in this
+						 * script. We are in a failed transaction block and
+						 * all subsequent commands will fail, so end the
+						 * current script execution.
+						 */
+						st->state = CSTATE_END_TX;
+					}
+				}
+				else
+				{
+					/* retry the failed transaction if possible */
+					st->state = CSTATE_RETRY;
+				}
+
+				break;
+
+				/*
+				 * Retry the failed transaction if possible.
+				 */
+			case CSTATE_RETRY:
+
+				if (canRetry(st))
+				{
+					st->retry_state.retries++;
+					if (debug)
+						fprintf(stderr, "client %d repeats the failed transaction (try %d/%d)\n",
+								st->id,
+								st->retry_state.retries + 1,
+								max_tries);
+
+					st->command = st->retry_state.command;
+					memcpy(st->random_state, st->retry_state.random_state,
+						   sizeof(unsigned short) * 3);
+					copyVariables(&st->variables, &st->retry_state.variables);
+				}
+				else
+				{
+					/* Go ahead with next command */
+					st->command++;
+				}
+
 				st->state = CSTATE_START_COMMAND;
 				break;
 
@@ -2372,7 +2779,8 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 					per_script_stats || use_log)
 					processXactStats(thread, st, &now, false, agg);
 				else
-					thread->stats.cnt++;
+					accumMainStats(&thread->stats, false, st->failure,
+								   &st->retries);
 
 				if (is_connect)
 				{
@@ -2446,7 +2854,7 @@ doLog(TState *thread, CState *st,
 	 * to the random sample.
 	 */
 	if (sample_rate != 0.0 &&
-		pg_erand48(thread->random_state) > sample_rate)
+		pg_erand48(st->random_state) > sample_rate)
 		return;
 
 	/* should we aggregate the results or not? */
@@ -2462,13 +2870,14 @@ doLog(TState *thread, CState *st,
 		while (agg->start_time + agg_interval <= now)
 		{
 			/* print aggregated report to logfile */
-			fprintf(logfile, "%ld " INT64_FORMAT " %.0f %.0f %.0f %.0f",
+			fprintf(logfile, "%ld " INT64_FORMAT " %.0f %.0f %.0f %.0f " INT64_FORMAT,
 					(long) agg->start_time,
 					agg->cnt,
 					agg->latency.sum,
 					agg->latency.sum2,
 					agg->latency.min,
-					agg->latency.max);
+					agg->latency.max,
+					agg->failures);
 			if (throttle_delay)
 			{
 				fprintf(logfile, " %.0f %.0f %.0f %.0f",
@@ -2479,6 +2888,10 @@ doLog(TState *thread, CState *st,
 				if (latency_limit)
 					fprintf(logfile, " " INT64_FORMAT, agg->skipped);
 			}
+			if (max_tries > 1)
+				fprintf(logfile, " " INT64_FORMAT " " INT64_FORMAT,
+						agg->retries.serialization,
+						agg->retries.deadlocks);
 			fputc('\n', logfile);
 
 			/* reset data and move to next interval */
@@ -2486,7 +2899,7 @@ doLog(TState *thread, CState *st,
 		}
 
 		/* accumulate the current transaction */
-		accumStats(agg, skipped, latency, lag);
+		accumStats(agg, skipped, st->failure, latency, lag, &st->retries);
 	}
 	else
 	{
@@ -2498,12 +2911,20 @@ doLog(TState *thread, CState *st,
 			fprintf(logfile, "%d " INT64_FORMAT " skipped %d %ld %ld",
 					st->id, st->cnt, st->use_file,
 					(long) tv.tv_sec, (long) tv.tv_usec);
+		else if (st->failure)
+			fprintf(logfile, "%d " INT64_FORMAT " failed %d %ld %ld",
+					st->id, st->cnt, st->use_file,
+					(long) tv.tv_sec, (long) tv.tv_usec);
 		else
 			fprintf(logfile, "%d " INT64_FORMAT " %.0f %d %ld %ld",
 					st->id, st->cnt, latency, st->use_file,
 					(long) tv.tv_sec, (long) tv.tv_usec);
 		if (throttle_delay)
 			fprintf(logfile, " %.0f", lag);
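+		/* As above, log the retries only when retrying is enabled. */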
+		if (max_tries > 1)
+			fprintf(logfile, " " INT64_FORMAT " " INT64_FORMAT,
+					st->retries.serialization,
+					st->retries.deadlocks);
 		fputc('\n', logfile);
 	}
 }
@@ -2523,7 +2944,7 @@ processXactStats(TState *thread, CState *st, instr_time *now,
 	if ((!skipped) && INSTR_TIME_IS_ZERO(*now))
 		INSTR_TIME_SET_CURRENT(*now);
 
-	if (!skipped)
+	if (!skipped && !st->failure)
 	{
 		/* compute latency & lag */
 		latency = INSTR_TIME_GET_MICROSEC(*now) - st->txn_scheduled;
@@ -2532,21 +2953,23 @@ processXactStats(TState *thread, CState *st, instr_time *now,
 
 	if (progress || throttle_delay || latency_limit)
 	{
-		accumStats(&thread->stats, skipped, latency, lag);
+		accumStats(&thread->stats, skipped, st->failure, latency, lag,
+				   &st->retries);
 
 		/* count transactions over the latency limit, if needed */
 		if (latency_limit && latency > latency_limit)
 			thread->latency_late++;
 	}
 	else
-		thread->stats.cnt++;
+		accumMainStats(&thread->stats, skipped, st->failure, &st->retries);
 
 	if (use_log)
 		doLog(thread, st, agg, skipped, latency, lag);
 
 	/* XXX could use a mutex here, but we choose not to */
 	if (per_script_stats)
-		accumStats(&sql_script[st->use_file].stats, skipped, latency, lag);
+		accumStats(&sql_script[st->use_file].stats, skipped, st->failure,
+				   latency, lag, &st->retries);
 }
 
 
@@ -2985,6 +3408,11 @@ process_sql_command(PQExpBuffer buf, const char *source)
 	my_command->type = SQL_COMMAND;
 	my_command->argc = 0;
 	initSimpleStats(&my_command->stats);
+	initRetries(&my_command->retries);
+	my_command->serialization_failures = 0;
+	my_command->deadlock_failures = 0;
+	my_command->is_transaction_block_begin = false;
+	my_command->transaction_block_end = -1;
 
 	/*
 	 * If SQL command is multi-line, we only want to save the first line as
@@ -3054,6 +3482,11 @@ process_backslash_command(PsqlScanState sstate, const char *source)
 	my_command->type = META_COMMAND;
 	my_command->argc = 0;
 	initSimpleStats(&my_command->stats);
+	initRetries(&my_command->retries);
+	my_command->serialization_failures = 0;
+	my_command->deadlock_failures = 0;
+	my_command->is_transaction_block_begin = false;
+	my_command->transaction_block_end = -1;
 
 	/* Save first word (command name) */
 	j = 0;
@@ -3185,6 +3618,60 @@ process_backslash_command(PsqlScanState sstate, const char *source)
 }
 
 /*
+ * Returns a copy of the given command in which every contiguous run of
+ * whitespace characters is replaced by a single space.
+ *
+ * Returns a malloc'd string.
+ */
+static char *
+normalize_whitespaces(const char *command)
+{
+	const char *ptr = command;
+	char	   *buffer = pg_malloc(strlen(command) + 1);
+	int			length = 0;
+
+	while (*ptr)
+	{
+		while (*ptr && !isspace((unsigned char) *ptr))
+			buffer[length++] = *(ptr++);
+		if (isspace((unsigned char) *ptr))
+		{
+			buffer[length++] = ' ';
+			while (isspace((unsigned char) *ptr))
+				ptr++;
+		}
+	}
+	buffer[length] = '\0';
+
+	return buffer;
+}
+
+/*
+ * Returns true if the given command syntactically ends a transaction block
+ * (whether a transaction block is actually open is not checked here).
+ */
+static bool
+is_transaction_block_end(const char *command_text)
+{
+	bool		result = false;
+	char	   *command = normalize_whitespaces(command_text);
+
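+	/*
+	 * END, COMMIT (but not COMMIT PREPARED), ROLLBACK (but not ROLLBACK
+	 * PREPARED or ROLLBACK TO) and PREPARE TRANSACTION all end a transaction
+	 * block; the extra checks avoid matching a PREPARE of a statement named
+	 * "transaction".
+	 */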
+	if (pg_strncasecmp(command, "end", 3) == 0 ||
+		(pg_strncasecmp(command, "commit", 6) == 0 &&
+		 pg_strncasecmp(command, "commit prepared", 15) != 0) ||
+		(pg_strncasecmp(command, "rollback", 8) == 0 &&
+		 pg_strncasecmp(command, "rollback prepared", 17) != 0 &&
+		 pg_strncasecmp(command, "rollback to", 11) != 0) ||
+		(pg_strncasecmp(command, "prepare transaction ", 20) == 0 &&
+		 pg_strncasecmp(command, "prepare transaction (", 21) != 0 &&
+		 pg_strncasecmp(command, "prepare transaction as ", 23) != 0))
+		result = true;
+
+	pg_free(command);
+	return result;
+}
+
+/*
  * Parse a script (either the contents of a file, or a built-in script)
  * and add it to the list of scripts.
  */
@@ -3196,6 +3683,7 @@ ParseScript(const char *script, const char *desc, int weight)
 	PQExpBufferData line_buf;
 	int			alloc_num;
 	int			index;
+	int			last_transaction_block_end = -1;
 
 #define COMMANDS_ALLOC_NUM 128
 	alloc_num = COMMANDS_ALLOC_NUM;
@@ -3238,6 +3726,9 @@ ParseScript(const char *script, const char *desc, int weight)
 		command = process_sql_command(&line_buf, desc);
 		if (command)
 		{
+			char	   *command_text = command->argv[0];
+			int			cur_index;
+
 			ps.commands[index] = command;
 			index++;
 
@@ -3247,6 +3738,25 @@ ParseScript(const char *script, const char *desc, int weight)
 				ps.commands = (Command **)
 					pg_realloc(ps.commands, sizeof(Command *) * alloc_num);
 			}
+
+			/* check if the command syntactically starts a transaction block */
+			if (pg_strncasecmp(command_text, "begin", 5) == 0 ||
+				pg_strncasecmp(command_text, "start", 5) == 0)
+				command->is_transaction_block_begin = true;
+
+			/* check if the command syntactically ends a transaction block */
+			if (is_transaction_block_end(command_text))
+			{
+				/*
+				 * Remember this command as the transaction block end for
+				 * itself and for all earlier commands of the block, so that
+				 * a failure in any of them knows which command ends the
+				 * block.
+				 */
+				for (cur_index = last_transaction_block_end + 1;
+					 cur_index < index;
+					 cur_index++)
+					ps.commands[cur_index]->transaction_block_end = index - 1;
+				last_transaction_block_end = index - 1;
+			}
 		}
 
 		/* If we reached a backslash, process that */
@@ -3484,6 +3994,15 @@ printSimpleStats(char *prefix, SimpleStats *ss)
 	printf("%s stddev = %.3f ms\n", prefix, 0.001 * stddev);
 }
 
+/*
+ * Return the sum of all retries.
+ */
+static int64
+getAllRetries(Retries *retries)
+{
+	return retries->serialization + retries->deadlocks;
+}
+
 /* print out results */
 static void
 printResults(TState *threads, StatsData *total, instr_time total_time,
@@ -3492,6 +4011,8 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 	double		time_include,
 				tps_include,
 				tps_exclude;
+	int64		all_failures = total->failures;
+	int64		all_retries = getAllRetries(&total->retries);
 
 	time_include = INSTR_TIME_GET_DOUBLE(total_time);
 	tps_include = total->cnt / time_include;
@@ -3501,6 +4022,7 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 	/* Report test parameters. */
 	printf("transaction type: %s\n",
 		   num_scripts == 1 ? sql_script[0].desc : "multiple scripts");
+	printf("transaction maximum tries number: %d\n", max_tries);
 	printf("scaling factor: %d\n", scale);
 	printf("query mode: %s\n", QUERYMODE[querymode]);
 	printf("number of clients: %d\n", nclients);
@@ -3522,6 +4044,16 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 	if (total->cnt <= 0)
 		return;
 
+	if (all_failures > 0)
+		printf("number of failures: " INT64_FORMAT " (%.3f %%)\n",
+			   all_failures, (100.0 * all_failures / total->cnt));
+
+	if (all_retries > 0)
+		printf("number of retries: " INT64_FORMAT " (serialization: " INT64_FORMAT ", deadlocks: " INT64_FORMAT ")\n",
+			   all_retries,
+			   total->retries.serialization,
+			   total->retries.deadlocks);
+
 	if (throttle_delay && latency_limit)
 		printf("number of transactions skipped: " INT64_FORMAT " (%.3f %%)\n",
 			   total->skipped,
@@ -3557,13 +4089,14 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 	printf("tps = %f (excluding connections establishing)\n", tps_exclude);
 
 	/* Report per-script/command statistics */
-	if (per_script_stats || latency_limit || is_latencies)
+	if (per_script_stats || latency_limit || report_per_command)
 	{
 		int			i;
 
 		for (i = 0; i < num_scripts; i++)
 		{
 			if (num_scripts > 1)
+			{
 				printf("SQL script %d: %s\n"
 					   " - weight: %d (targets %.1f%% of total)\n"
 					   " - " INT64_FORMAT " transactions (%.1f%% of total, tps = %f)\n",
@@ -3573,8 +4106,23 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 					   sql_script[i].stats.cnt,
 					   100.0 * sql_script[i].stats.cnt / total->cnt,
 					   sql_script[i].stats.cnt / time_include);
+
+				if (all_failures > 0)
+					printf(" - number of failures: " INT64_FORMAT " (%.3f %%)\n",
+						   sql_script[i].stats.failures,
+						   (100.0 * sql_script[i].stats.failures /
+							sql_script[i].stats.cnt));
+
+				if (all_retries > 0)
+					printf(" - number of retries: " INT64_FORMAT " (serialization: " INT64_FORMAT ", deadlocks: " INT64_FORMAT ")\n",
+						   getAllRetries(&sql_script[i].stats.retries),
+						   sql_script[i].stats.retries.serialization,
+						   sql_script[i].stats.retries.deadlocks);
+			}
 			else
+			{
 				printf("script statistics:\n");
+			}
 
 			if (latency_limit)
 				printf(" - number of transactions skipped: " INT64_FORMAT " (%.3f%%)\n",
@@ -3585,20 +4133,45 @@ printResults(TState *threads, StatsData *total, instr_time total_time,
 			if (num_scripts > 1)
 				printSimpleStats(" - latency", &sql_script[i].stats.latency);
 
-			/* Report per-command latencies */
-			if (is_latencies)
+			/*
+			 * Report per-command statistics: latencies, retries after
+			 * serialization or deadlock failures, and failures that were
+			 * not retried.
+			 */
+			if (report_per_command)
 			{
 				Command   **commands;
 
-				printf(" - statement latencies in milliseconds:\n");
+				printf(" - statement latencies in milliseconds");
+				if (all_failures > 0 && all_retries > 0)
+					printf(", serialization failures and retries, deadlock failures and retries");
+				else if (all_failures > 0 || all_retries > 0)
+					printf(", serialization and deadlock %s",
+						   (all_failures > 0 ? "failures" : "retries"));
+				printf(":\n");
 
 				for (commands = sql_script[i].commands;
 					 *commands != NULL;
 					 commands++)
-					printf("   %11.3f  %s\n",
+				{
+					printf("   %11.3f",
 						   1000.0 * (*commands)->stats.sum /
-						   (*commands)->stats.count,
-						   (*commands)->line);
+						   (*commands)->stats.count);
+					if (all_failures > 0 && all_retries > 0)
+						printf("  %25" INT64_MODIFIER "d  %25" INT64_MODIFIER "d  %25" INT64_MODIFIER "d  %25" INT64_MODIFIER "d",
+							   (*commands)->serialization_failures,
+							   (*commands)->retries.serialization,
+							   (*commands)->deadlock_failures,
+							   (*commands)->retries.deadlocks);
+					else if (all_failures > 0 || all_retries > 0)
+						printf("  %25" INT64_MODIFIER "d  %25" INT64_MODIFIER "d",
+							   (all_failures > 0 ?
+								(*commands)->serialization_failures :
+								(*commands)->retries.serialization),
+							   (all_failures > 0 ?
+								(*commands)->deadlock_failures :
+								(*commands)->retries.deadlocks));
+					printf("  %s\n", (*commands)->line);
+				}
 			}
 		}
 	}
@@ -3627,7 +4200,7 @@ main(int argc, char **argv)
 		{"progress", required_argument, NULL, 'P'},
 		{"protocol", required_argument, NULL, 'M'},
 		{"quiet", no_argument, NULL, 'q'},
-		{"report-latencies", no_argument, NULL, 'r'},
+		{"report-per-command", no_argument, NULL, 'r'},
 		{"rate", required_argument, NULL, 'R'},
 		{"scale", required_argument, NULL, 's'},
 		{"select-only", no_argument, NULL, 'S'},
@@ -3645,6 +4218,7 @@ main(int argc, char **argv)
 		{"aggregate-interval", required_argument, NULL, 5},
 		{"progress-timestamp", no_argument, NULL, 6},
 		{"log-prefix", required_argument, NULL, 7},
+		{"max-tries", required_argument, NULL, 8},
 		{NULL, 0, NULL, 0}
 	};
 
@@ -3710,6 +4284,7 @@ main(int argc, char **argv)
 
 	state = (CState *) pg_malloc(sizeof(CState));
 	memset(state, 0, sizeof(CState));
+	state->retry_state.command = -1;
 
 	while ((c = getopt_long(argc, argv, "ih:nvp:dqb:SNc:j:Crs:t:T:U:lf:D:F:M:P:R:L:", long_options, &optindex)) != -1)
 	{
@@ -3787,7 +4362,7 @@ main(int argc, char **argv)
 			case 'r':
 				benchmarking_option_set = true;
 				per_script_stats = true;
-				is_latencies = true;
+				report_per_command = true;
 				break;
 			case 's':
 				scale_given = true;
@@ -3881,7 +4456,7 @@ main(int argc, char **argv)
 					}
 
 					*p++ = '\0';
-					if (!putVariable(&state[0], "option", optarg, p))
+					if (!putVariable(&state[0].variables, "option", optarg, p))
 						exit(1);
 				}
 				break;
@@ -3991,6 +4566,16 @@ main(int argc, char **argv)
 				benchmarking_option_set = true;
 				logfile_prefix = pg_strdup(optarg);
 				break;
+			case 8:
+				benchmarking_option_set = true;
+				max_tries = atoi(optarg);
+				if (max_tries <= 0)
+				{
+					fprintf(stderr, "invalid maximum number of tries: \"%s\"\n",
+							optarg);
+					exit(1);
+				}
+				break;
 			default:
 				fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
 				exit(1);
@@ -4123,19 +4708,19 @@ main(int argc, char **argv)
 			int			j;
 
 			state[i].id = i;
-			for (j = 0; j < state[0].nvariables; j++)
+			for (j = 0; j < state[0].variables.nvariables; j++)
 			{
-				Variable   *var = &state[0].variables[j];
+				Variable   *var = &state[0].variables.array[j];
 
 				if (var->is_numeric)
 				{
-					if (!putVariableNumber(&state[i], "startup",
+					if (!putVariableNumber(&state[i].variables, "startup",
 										   var->name, &var->num_value))
 						exit(1);
 				}
 				else
 				{
-					if (!putVariable(&state[i], "startup",
+					if (!putVariable(&state[i].variables, "startup",
 									 var->name, var->value))
 						exit(1);
 				}
@@ -4143,6 +4728,18 @@ main(int argc, char **argv)
 		}
 	}
 
+	/* set random seed */
+	INSTR_TIME_SET_CURRENT(start_time);
+	srandom((unsigned int) INSTR_TIME_GET_MICROSEC(start_time));
+
+	/* set random states for clients */
+	for (i = 0; i < nclients; i++)
+	{
+		state[i].random_state[0] = random();
+		state[i].random_state[1] = random();
+		state[i].random_state[2] = random();
+	}
+
 	if (debug)
 	{
 		if (duration <= 0)
@@ -4204,11 +4801,11 @@ main(int argc, char **argv)
 	 * :scale variables normally get -s or database scale, but don't override
 	 * an explicit -D switch
 	 */
-	if (lookupVariable(&state[0], "scale") == NULL)
+	if (lookupVariable(&state[0].variables, "scale") == NULL)
 	{
 		for (i = 0; i < nclients; i++)
 		{
-			if (!putVariableInt(&state[i], "startup", "scale", scale))
+			if (!putVariableInt(&state[i].variables, "startup", "scale", scale))
 				exit(1);
 		}
 	}
@@ -4217,11 +4814,11 @@ main(int argc, char **argv)
 	 * Define a :client_id variable that is unique per connection. But don't
 	 * override an explicit -D switch.
 	 */
-	if (lookupVariable(&state[0], "client_id") == NULL)
+	if (lookupVariable(&state[0].variables, "client_id") == NULL)
 	{
 		for (i = 0; i < nclients; i++)
 		{
-			if (!putVariableInt(&state[i], "startup", "client_id", i))
+			if (!putVariableInt(&state[i].variables, "startup", "client_id", i))
 				exit(1);
 		}
 	}
@@ -4243,10 +4840,6 @@ main(int argc, char **argv)
 	}
 	PQfinish(con);
 
-	/* set random seed */
-	INSTR_TIME_SET_CURRENT(start_time);
-	srandom((unsigned int) INSTR_TIME_GET_MICROSEC(start_time));
-
 	/* set up thread data structures */
 	threads = (TState *) pg_malloc(sizeof(TState) * nthreads);
 	nclients_dealt = 0;
@@ -4259,9 +4852,6 @@ main(int argc, char **argv)
 		thread->state = &state[nclients_dealt];
 		thread->nstate =
 			(nclients - nclients_dealt + nthreads - i - 1) / (nthreads - i);
-		thread->random_state[0] = random();
-		thread->random_state[1] = random();
-		thread->random_state[2] = random();
 		thread->logfile = NULL; /* filled in later */
 		thread->latency_late = 0;
 		initStats(&thread->stats, 0);
@@ -4340,6 +4930,8 @@ main(int argc, char **argv)
 		mergeSimpleStats(&stats.lag, &thread->stats.lag);
 		stats.cnt += thread->stats.cnt;
 		stats.skipped += thread->stats.skipped;
+		stats.failures += thread->stats.failures;
+		mergeRetries(&stats.retries, &thread->stats.retries);
 		latency_late += thread->latency_late;
 		INSTR_TIME_ADD(conn_total_time, thread->conn_time);
 	}
@@ -4613,6 +5205,8 @@ threadRun(void *arg)
 				/* generate and show report */
 				StatsData	cur;
 				int64		run = now - last_report;
+				int64		failures,
+							retries;
 				double		tps,
 							total_run,
 							latency,
@@ -4639,6 +5233,8 @@ threadRun(void *arg)
 					mergeSimpleStats(&cur.lag, &thread[i].stats.lag);
 					cur.cnt += thread[i].stats.cnt;
 					cur.skipped += thread[i].stats.skipped;
+					cur.failures += thread[i].stats.failures;
+					mergeRetries(&cur.retries, &thread[i].stats.retries);
 				}
 
 				total_run = (now - thread_start) / 1000000.0;
@@ -4650,6 +5246,9 @@ threadRun(void *arg)
 				stdev = 0.001 * sqrt(sqlat - 1000000.0 * latency * latency);
 				lag = 0.001 * (cur.lag.sum - last.lag.sum) /
 					(cur.cnt - last.cnt);
+				failures = cur.failures - last.failures;
+				retries = getAllRetries(&cur.retries) -
+					getAllRetries(&last.retries);
 
 				if (progress_timestamp)
 				{
@@ -4672,6 +5271,9 @@ threadRun(void *arg)
 						"progress: %s, %.1f tps, lat %.3f ms stddev %.3f",
 						tbuf, tps, latency, stdev);
 
+				if (failures > 0)
+					fprintf(stderr, ", " INT64_FORMAT " failed" , failures);
+
 				if (throttle_delay)
 				{
 					fprintf(stderr, ", lag %.3f ms", lag);
@@ -4679,6 +5281,10 @@ threadRun(void *arg)
 						fprintf(stderr, ", " INT64_FORMAT " skipped",
 								cur.skipped - last.skipped);
 				}
+
+				if (retries > 0)
+					fprintf(stderr, ", " INT64_FORMAT " retries", retries);
+
 				fprintf(stderr, "\n");
 
 				last = cur;
diff --git a/src/bin/pgbench/t/002_serialization_and_deadlock_failures.pl b/src/bin/pgbench/t/002_serialization_and_deadlock_failures.pl
new file mode 100644
index 0000000..4849aee
--- /dev/null
+++ b/src/bin/pgbench/t/002_serialization_and_deadlock_failures.pl
@@ -0,0 +1,459 @@
+use strict;
+use warnings;
+
+use Config;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 57;
+
+use constant
+{
+	READ_COMMITTED  => 0,
+	REPEATABLE_READ => 1,
+	SERIALIZABLE    => 2,
+};
+
+my @isolation_level_sql = ('read committed', 'repeatable read', 'serializable');
+my @isolation_level_shell = (
+	'read\\ committed',
+	'repeatable\\ read',
+	'serializable');
+
+# Test concurrent updates of a table row under different default transaction
+# isolation levels.
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+$node->safe_psql('postgres',
+    'CREATE UNLOGGED TABLE xy (x integer, y integer); '
+  . 'INSERT INTO xy VALUES (1, 2), (2, 3);');
+
+my $script_serialization = $node->basedir . '/pgbench_script_serialization';
+append_to_file($script_serialization,
+		"BEGIN;\n"
+	  . "\\set delta random(-5000, 5000)\n"
+	  . "UPDATE xy SET y = y + :delta WHERE x = 1;\n"
+	  . "END;");
+
+my $script_deadlocks1 = $node->basedir . '/pgbench_script_deadlocks1';
+append_to_file($script_deadlocks1,
+		"BEGIN;\n"
+	  . "\\set delta1 random(-5000, 5000)\n"
+	  . "\\set delta2 random(-5000, 5000)\n"
+	  . "UPDATE xy SET y = y + :delta1 WHERE x = 1;\n"
+	  . "SELECT pg_sleep(20);\n"
+	  . "UPDATE xy SET y = y + :delta2 WHERE x = 2;\n"
+	  . "END;");
+
+my $script_deadlocks2 = $node->basedir . '/pgbench_script_deadlocks2';
+append_to_file($script_deadlocks2,
+		"BEGIN;\n"
+	  . "\\set delta1 random(-5000, 5000)\n"
+	  . "\\set delta2 random(-5000, 5000)\n"
+	  . "UPDATE xy SET y = y + :delta2 WHERE x = 2;\n"
+	  . "UPDATE xy SET y = y + :delta1 WHERE x = 1;\n"
+	  . "END;");
+
+sub test_pgbench_serialization_failures
+{
+	my ($isolation_level) = @_;
+
+	my $isolation_level_sql = $isolation_level_sql[$isolation_level];
+	my $isolation_level_shell = $isolation_level_shell[$isolation_level];
+
+	local $ENV{PGPORT} = $node->port;
+	local $ENV{PGOPTIONS} =
+		"-c default_transaction_isolation=" . $isolation_level_shell;
+	print "# PGOPTIONS: " . $ENV{PGOPTIONS} . "\n";
+
+	my ($h_psql, $in_psql, $out_psql);
+	my ($h_pgbench, $in_pgbench, $out_pgbench, $err_pgbench);
+
+	# Open the psql session and run the parallel transaction:
+	print "# Starting psql\n";
+	$h_psql = IPC::Run::start [ 'psql' ], \$in_psql, \$out_psql;
+
+	$in_psql = "begin;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /BEGIN/;
+
+	$in_psql = "update xy set y = y + 1 where x = 1;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /UPDATE 1/;
+
+	# Start pgbench:
+	my @command = (
+		qw(pgbench --no-vacuum --debug --file),
+		$script_serialization);
+	print "# Running: " . join(" ", @command) . "\n";
+	$h_pgbench = IPC::Run::start \@command, \$in_pgbench, \$out_pgbench,
+	  \$err_pgbench;
+
+	# Let pgbench run the update command in the transaction:
+	sleep 10;
+
+	# In psql, commit the transaction and end the session:
+	$in_psql = "end;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /COMMIT/;
+
+	$in_psql = "\\q\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() while length $in_psql;
+
+	$h_psql->finish();
+
+	# Get pgbench results
+	$h_pgbench->pump() until length $out_pgbench;
+	$h_pgbench->finish();
+
+	# On Windows, the exit status of the process is returned directly as the
+	# process's exit code, while on Unix, it's returned in the high bits
+	# of the exit code (see WEXITSTATUS macro in the standard <sys/wait.h>
+	# header file). IPC::Run's result function always returns exit code >> 8,
+	# assuming the Unix convention, which will always return 0 on Windows as
+	# long as the process was not terminated by an exception. To work around
+	# that, use $h->full_result on Windows instead.
+	my $result =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h_pgbench->full_results)[0]
+	  : $h_pgbench->result(0);
+
+	# Check pgbench results
+	ok(!$result, "@command exit code 0");
+
+	like($out_pgbench,
+		qr{processed: 10/10},
+		"concurrent update: $isolation_level_sql: check processed transactions");
+
+	my $regex =
+		($isolation_level == READ_COMMITTED)
+	  ? qr{^((?!number of failures)(.|\n))*$}
+	  : qr{number of failures: [1-9]\d* \([1-9]\d*\.\d* %\)};
+
+	like($out_pgbench,
+		$regex,
+		"concurrent update: $isolation_level_sql: check failures");
+
+	$regex =
+		($isolation_level == READ_COMMITTED)
+	  ? qr{^((?!client 0 got a serialization failure \(try 1/1\))(.|\n))*$}
+	  : qr{client 0 got a serialization failure \(try 1/1\)};
+
+	like($err_pgbench,
+		$regex,
+		"concurrent update: $isolation_level_sql: check serialization failure");
+}
+
+sub test_pgbench_serialization_failures_retry
+{
+	my ($isolation_level) = @_;
+
+	my $isolation_level_sql = $isolation_level_sql[$isolation_level];
+	my $isolation_level_shell = $isolation_level_shell[$isolation_level];
+
+	local $ENV{PGPORT} = $node->port;
+	local $ENV{PGOPTIONS} =
+		"-c default_transaction_isolation=" . $isolation_level_shell;
+	print "# PGOPTIONS: " . $ENV{PGOPTIONS} . "\n";
+
+	my ($h_psql, $in_psql, $out_psql);
+	my ($h_pgbench, $in_pgbench, $out_pgbench, $stderr);
+
+	# Open the psql session and run the parallel transaction:
+	print "# Starting psql\n";
+	$h_psql = IPC::Run::start [ 'psql' ], \$in_psql, \$out_psql;
+
+	$in_psql = "begin;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /BEGIN/;
+
+	$in_psql = "update xy set y = y + 1 where x = 1;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /UPDATE 1/;
+
+	# Start pgbench:
+	my @command = (
+		qw(pgbench --no-vacuum --debug --max-tries 2 --file),
+		$script_serialization);
+	print "# Running: " . join(" ", @command) . "\n";
+	$h_pgbench = IPC::Run::start \@command, \$in_pgbench, \$out_pgbench,
+	  \$stderr;
+
+	# Let pgbench run the update command in the transaction:
+	sleep 10;
+
+	# In psql, commit the transaction and end the session:
+	$in_psql = "end;\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() until $out_psql =~ /COMMIT/;
+
+	$in_psql = "\\q\n";
+	print "# Running in psql: " . join(" ", $in_psql);
+	$h_psql->pump() while length $in_psql;
+
+	$h_psql->finish();
+
+	# Get pgbench results
+	$h_pgbench->pump() until length $out_pgbench;
+	$h_pgbench->finish();
+
+	# On Windows, the exit status of the process is returned directly as the
+	# process's exit code, while on Unix, it's returned in the high bits
+	# of the exit code (see WEXITSTATUS macro in the standard <sys/wait.h>
+	# header file). IPC::Run's result function always returns exit code >> 8,
+	# assuming the Unix convention, which will always return 0 on Windows as
+	# long as the process was not terminated by an exception. To work around
+	# that, use $h->full_result on Windows instead.
+	my $result =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h_pgbench->full_results)[0]
+	  : $h_pgbench->result(0);
+
+	# Check pgbench results
+	ok(!$result, "@command exit code 0");
+
+	like($out_pgbench,
+		qr{processed: 10/10},
+		"concurrent update with retrying: "
+	  . $isolation_level_sql
+	  . ": check processed transactions");
+
+	like($out_pgbench,
+		qr{^((?!number of failures)(.|\n))*$},
+		"concurrent update with retrying: "
+	  . $isolation_level_sql
+	  . ": check failures");
+
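+	# The retried transaction must re-execute \set delta and, since the
+	# client's random state is restored on retry, produce the same value
+	# (backreference \g1 below).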
+	my $pattern =
+		"client 0 sending UPDATE xy SET y = y \\+ (-?\\d+) WHERE x = 1;\n"
+	  . "(client 0 receiving\n)+"
+	  . "client 0 got a serialization failure \\(try 1/2\\)\n"
+	  . "client 0 sending END;\n"
+	  . "\\g2+"
+	  . "client 0 repeats the failed transaction \\(try 2/2\\)\n"
+	  . "client 0 sending BEGIN;\n"
+	  . "\\g2+"
+	  . "client 0 executing \\\\set delta\n"
+	  . "client 0 sending UPDATE xy SET y = y \\+ \\g1 WHERE x = 1;";
+
+	like($stderr,
+		qr{$pattern},
+		"concurrent update with retrying: "
+	  . $isolation_level_sql
+	  . ": check the retried transaction");
+}
+
+sub test_pgbench_deadlock_failures
+{
+	my ($isolation_level) = @_;
+
+	my $isolation_level_sql = $isolation_level_sql[$isolation_level];
+	my $isolation_level_shell = $isolation_level_shell[$isolation_level];
+
+	local $ENV{PGPORT} = $node->port;
+	local $ENV{PGOPTIONS} =
+		"-c default_transaction_isolation=" . $isolation_level_shell;
+	print "# PGOPTIONS: " . $ENV{PGOPTIONS} . "\n";
+
+	my ($h1, $in1, $out1, $err1);
+	my ($h2, $in2, $out2, $err2);
+
+	# Run first pgbench
+	my @command1 = (
+		qw(pgbench --no-vacuum --debug --transactions 1 --file),
+		$script_deadlocks1);
+	print "# Running: " . join(" ", @command1) . "\n";
+	$h1 = IPC::Run::start \@command1, \$in1, \$out1, \$err1;
+
+	# Let pgbench run the first update command in the transaction:
+	sleep 10;
+
+	# Run second pgbench
+	my @command2 = (
+		qw(pgbench --no-vacuum --debug --transactions 1 --file),
+		$script_deadlocks2);
+	print "# Running: " . join(" ", @command2) . "\n";
+	$h2 = IPC::Run::start \@command2, \$in2, \$out2, \$err2;
+
+	# Get all pgbench results
+	$h1->pump() until length $out1;
+	$h1->finish();
+
+	$h2->pump() until length $out2;
+	$h2->finish();
+
+	# On Windows, the exit status of the process is returned directly as the
+	# process's exit code, while on Unix, it's returned in the high bits
+	# of the exit code (see WEXITSTATUS macro in the standard <sys/wait.h>
+	# header file). IPC::Run's result function always returns exit code >> 8,
+	# assuming the Unix convention, which will always return 0 on Windows as
+	# long as the process was not terminated by an exception. To work around
+	# that, use $h->full_result on Windows instead.
+	my $result1 =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h1->full_results)[0]
+	  : $h1->result(0);
+
+	my $result2 =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h2->full_results)[0]
+	  : $h2->result(0);
+
+	# Check all pgbench results
+	ok(!$result1, "@command1 exit code 0");
+	ok(!$result2, "@command2 exit code 0");
+
+	like($out1,
+		qr{processed: 1/1},
+		"concurrent deadlock update: "
+	  . $isolation_level_sql
+	  . ": pgbench 1: check processed transactions");
+	like($out2,
+		qr{processed: 1/1},
+		"concurrent deadlock update: "
+	  . $isolation_level_sql
+	  . ": pgbench 2: check processed transactions");
+
+	# Either the first or the second pgbench run should get a deadlock failure
+	like($out1 . $out2,
+		qr{number of failures: 1 \(100\.000 %\)},
+		"concurrent deadlock update: "
+	  . $isolation_level_sql
+	  . ": check failures");
+
+	like($err1 . $err2,
+		qr{client 0 got a deadlock failure \(try 1/1\)},
+		"concurrent deadlock update: "
+	  . $isolation_level_sql
+	  . ": check deadlock failure");
+}
+
+sub test_pgbench_deadlock_failures_retry
+{
+	my ($isolation_level) = @_;
+
+	my $isolation_level_sql = $isolation_level_sql[$isolation_level];
+	my $isolation_level_shell = $isolation_level_shell[$isolation_level];
+
+	local $ENV{PGPORT} = $node->port;
+	local $ENV{PGOPTIONS} =
+		"-c default_transaction_isolation=" . $isolation_level_shell;
+
+	my ($h1, $in1, $out1, $err1);
+	my ($h2, $in2, $out2, $err2);
+
+	# Run first pgbench
+	my @command1 = (
+		qw(pgbench --no-vacuum --debug --transactions 1 --max-tries 2 --file),
+		$script_deadlocks1);
+	print "# Running: " . join(" ", @command1) . "\n";
+	$h1 = IPC::Run::start \@command1, \$in1, \$out1, \$err1;
+
+	# Let pgbench run the first update command in the transaction:
+	sleep 10;
+
+	# Run second pgbench
+	my @command2 = (
+		qw(pgbench --no-vacuum --debug --transactions 1 --max-tries 2 --file),
+		$script_deadlocks2);
+	print "# Running: " . join(" ", @command2) . "\n";
+	$h2 = IPC::Run::start \@command2, \$in2, \$out2, \$err2;
+
+	# Get all pgbench results
+	$h1->pump() until length $out1;
+	$h1->finish();
+
+	$h2->pump() until length $out2;
+	$h2->finish();
+
+	# On Windows, the exit status of the process is returned directly as the
+	# process's exit code, while on Unix, it's returned in the high bits
+	# of the exit code (see WEXITSTATUS macro in the standard <sys/wait.h>
+	# header file). IPC::Run's result function always returns exit code >> 8,
+	# assuming the Unix convention, which will always return 0 on Windows as
+	# long as the process was not terminated by an exception. To work around
+	# that, use $h->full_result on Windows instead.
+	my $result1 =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h1->full_results)[0]
+	  : $h1->result(0);
+
+	my $result2 =
+	    ($Config{osname} eq "MSWin32")
+	  ? ($h2->full_results)[0]
+	  : $h2->result(0);
+
+	# Check all pgbench results
+	ok(!$result1, "@command1 exit code 0");
+	ok(!$result2, "@command2 exit code 0");
+
+	like($out1,
+		qr{processed: 1/1},
+		"concurrent deadlock update with retrying: "
+	  . $isolation_level_sql
+	  . ": pgbench 1: check processed transactions");
+	like($out2,
+		qr{processed: 1/1},
+		"concurrent deadlock update with retrying: "
+	  . $isolation_level_sql
+	  . ": pgbench 2: check processed transactions");
+
+	like($out1 . $out2,
+		qr{^((?!number of failures)(.|\n))*$},
+		"concurrent deadlock update with retrying: "
+	  . $isolation_level_sql
+	  . ": check failures");
+
+	# Either the first or the second pgbench run should get a deadlock failure
+	like($err1 . $err2,
+		qr{client 0 got a deadlock failure \(try 1/2\)},
+		"concurrent deadlock update with retrying: "
+	  . $isolation_level_sql
+	  . ": check deadlock failure");
+
+	if ($isolation_level == READ_COMMITTED)
+	{
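+		# Check in the debug output that the failed transaction block is
+		# ended, then retried from BEGIN with the same :delta1 and :delta2
+		# values (backreferences \g1 and \g5) on the same rows.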
+		my $pattern =
+			"client 0 sending UPDATE xy SET y = y \\+ (-?\\d+) WHERE x = (\\d);\n"
+		  . "(client 0 receiving\n)+"
+		  . "(|client 0 sending SELECT pg_sleep\\(20\\);\n)"
+		  . "\\g3*"
+		  . "client 0 sending UPDATE xy SET y = y \\+ (-?\\d+) WHERE x = (\\d);\n"
+		  . "\\g3+"
+		  . "client 0 got a deadlock failure \\(try 1/2\\)\n"
+		  . "client 0 sending END;\n"
+		  . "\\g3+"
+		  . "client 0 repeats the failed transaction \\(try 2/2\\)\n"
+		  . "client 0 sending BEGIN;\n"
+		  . "\\g3+"
+		  . "client 0 executing \\\\set delta1\n"
+		  . "client 0 executing \\\\set delta2\n"
+		  . "client 0 sending UPDATE xy SET y = y \\+ \\g1 WHERE x = \\g2;\n"
+		  . "\\g3+"
+		  . "\\g4"
+		  . "\\g3*"
+		  . "client 0 sending UPDATE xy SET y = y \\+ \\g5 WHERE x = \\g6;\n";
+
+		like($err1 . $err2,
+			qr{$pattern},
+			"concurrent deadlock update with retrying: "
+		  . $isolation_level_sql
+		  . ": check the retried transaction");
+	}
+}
+
+test_pgbench_serialization_failures(READ_COMMITTED);
+test_pgbench_serialization_failures(REPEATABLE_READ);
+test_pgbench_serialization_failures(SERIALIZABLE);
+
+test_pgbench_serialization_failures_retry(REPEATABLE_READ);
+test_pgbench_serialization_failures_retry(SERIALIZABLE);
+
+test_pgbench_deadlock_failures(READ_COMMITTED);
+test_pgbench_deadlock_failures(REPEATABLE_READ);
+test_pgbench_deadlock_failures(SERIALIZABLE);
+
+test_pgbench_deadlock_failures_retry(READ_COMMITTED);
+test_pgbench_deadlock_failures_retry(REPEATABLE_READ);
+test_pgbench_deadlock_failures_retry(SERIALIZABLE);
-- 
1.9.1
