Tom,
Yes, that is what I mean. It is what pg_dump uses to get things
synchronized. It seems to me a clear marker that the same task is using
more than one connection to accomplish the one job.
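For context, the synchronization mechanism referred to here is PostgreSQL's exported-snapshot facility, which parallel pg_dump uses so that every worker sees the same consistent view of the database. A rough sketch (the snapshot identifier below is the illustrative one from the PostgreSQL documentation; real values differ per session):

```sql
-- Leader connection: open a repeatable-read transaction and export
-- its snapshot so other sessions can adopt it.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();  -- returns a name such as '00000003-0000001B-1'

-- Each worker connection then adopts that same snapshot, so all
-- connections observe one consistent point-in-time view:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
```

The shared snapshot name is what ties the worker connections together, which is the "clear marker" of one job spanning several connections.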
On 08/09/2016 6:34 PM, "Tom Lane" wrote:
> Lucas writes:
> > The queue jumping logic can not use the distributed transaction id?
> If we had such a thing as a distributed transaction id, maybe the
> answer could be yes. We don't.
>
> I did wonder whether using a shared snapshot might be a workable proxy
> for that, but haven't ...
> > I agree. It is an ugly hack.
> >
> > But to me, the reduced window for failure is important. And that way a
> > failure will happen right away and be reported to my operators as soon
> > as possible.
> >
> > The queue jumping logic can not use the distributed transaction id?
> > In my logic, if a connection requests a ...
> > Lucas writes:
> > > I made a small modification in pg_dump to prevent parallel backup
> > > failures due to exclusive lock requests made by other tasks.
> > > The modification I made takes shared locks for each parallel backup
> > > worker at the very beginning of the job. That way, any other ...
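For readers following the thread: the per-table locks being discussed are the ACCESS SHARE locks that parallel pg_dump workers take, using NOWAIT so that a conflicting (queued) exclusive lock request makes the worker fail immediately rather than deadlock. A minimal sketch of the idea, with the table name purely illustrative:

```sql
-- Early in each worker's transaction, take a shared lock on every table
-- it will dump. NOWAIT makes the request fail at once if an exclusive
-- lock is held or queued, instead of waiting behind it.
LOCK TABLE public.some_table IN ACCESS SHARE MODE NOWAIT;
```

Taking all of these locks at the very beginning of the job, as the quoted patch proposes, shrinks the window during which another session's exclusive-lock request can appear between a worker's start and its lock acquisition.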