Try this, in postgresql.conf on the standby:

max_standby_archive_delay = 600s        # max delay before canceling queries
                                        # when reading WAL from archive;
                                        # -1 allows indefinite delay
max_standby_streaming_delay = 600s      # max delay before canceling queries
                                        # when reading streaming WAL;
                                        # -1 allows indefinite delay
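
These settings only need a reload to take effect; a minimal sketch, assuming a superuser session on the standby:

-- after editing postgresql.conf on the standby
SELECT pg_reload_conf();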

or, to pause WAL replay for the duration of the export, try
pg_xlog_replay_pause()
pg_xlog_replay_resume()
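
A minimal sketch of that approach (assuming a superuser session on the standby; keep in mind that replication lag grows for as long as replay stays paused):

-- on the standby, before starting the long COPY
SELECT pg_xlog_replay_pause();

-- run the export here ...

-- once the export has finished
SELECT pg_xlog_replay_resume();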

On Fri, Nov 8, 2013 at 10:06 AM, Victor Hooi <victorh...@yahoo.com> wrote:

> Hi,
>
> We're using psycopg2 with COPY to dump CSV output from a large query.
>
> The actual SELECT query itself is large (both in the number of
> records/columns and in the width of the column values), but it still
> completes in under a minute on the server.
>
> However, if you then use a COPY with it, it will often time out.
>
> We're using psycopg2 to run the command; the traceback we get is something
> like:
>
> Traceback (most recent call last):
>   File "foo.py", line 259, in <module>
>     jobs[job].run_all()
>   File "foo.py", line 127, in run_all
>     self.export_to_csv()
>   File "foo.py", line 168, in export_to_csv
>     cur.copy_expert(self.export_sql_statement, f)
> psycopg2.extensions.TransactionRollbackError: canceling statement due to
> conflict with recovery
>  DETAIL:  User was holding shared buffer pin for too long.
>
> My question is: what are some simple ways to chunk up the query?
>
> Could we pull down a list of all the ids (an auto-incrementing int), split
> it into ranges, and then use a WHERE clause to run multiple COPY commands,
> one per range?
>
> Or would it be better to use LIMIT/OFFSET to break it up? I'm not sure how
> we'd figure out when we'd reached the end of the result set, though (apart
> from just counting the results?).
>
> Or are there other approaches you guys could recommend?
>
> Cheers,
> Victor
>
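
For the id-range chunking you describe, a rough sketch with psycopg2 might look like the following (the table/column names, connection string, and chunk size are placeholders; it assumes an integer primary key "id"):

import psycopg2

CHUNK_SIZE = 100000  # width of each id range per COPY; tune to taste

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

# Find the id range once, then export it in slices so that no single COPY
# runs long enough to conflict with recovery on the standby.
cur.execute("SELECT min(id), max(id) FROM big_table")
min_id, max_id = cur.fetchone()

if min_id is not None:  # table is non-empty
    with open("output.csv", "w") as f:
        start = min_id
        while start <= max_id:
            end = start + CHUNK_SIZE - 1
            # start/end are integers we computed ourselves, so plain
            # string formatting is safe here.
            sql = ("COPY (SELECT * FROM big_table "
                   "WHERE id BETWEEN %d AND %d ORDER BY id) "
                   "TO STDOUT WITH CSV" % (start, end))
            cur.copy_expert(sql, f)
            start = end + 1

cur.close()
conn.close()

Each COPY is short-lived, so it is far less likely that any one statement holds a buffer pin past max_standby_*_delay and gets cancelled.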
