Re: [HACKERS] Preventing deadlock on parallel backup

2016-09-08 Thread Lucas
Tom,

Yes, that is what I mean. It is what pg_dump uses to get things synchronized. It
seems to me a clear marker that the same task is using more than one
connection to accomplish a single job.
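
For reference, the synchronized-snapshot mechanism parallel pg_dump relies on
looks roughly like this (the snapshot identifier shown here is made up):

-- Leader connection: open a transaction and export its snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();     -- returns an identifier, e.g. '000004A1-1'

-- Each worker connection: adopt the leader's snapshot before doing anything else.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '000004A1-1';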

On 08/09/2016 6:34 PM, "Tom Lane"  wrote:

> Lucas  writes:
> > Couldn't the queue-jumping logic use the distributed transaction id?
>
> If we had such a thing as a distributed transaction id, maybe the
> answer could be yes.  We don't.
>
> I did wonder whether using a shared snapshot might be a workable proxy
> for that, but haven't pursued it.
>
> regards, tom lane
>


Re: [HACKERS] Preventing deadlock on parallel backup

2016-09-08 Thread Tom Lane
Lucas  writes:
> Couldn't the queue-jumping logic use the distributed transaction id?

If we had such a thing as a distributed transaction id, maybe the
answer could be yes.  We don't.

I did wonder whether using a shared snapshot might be a workable proxy
for that, but haven't pursued it.

regards, tom lane


Re: [HACKERS] Preventing deadlock on parallel backup

2016-09-08 Thread Lucas
I agree, it is an ugly hack.

But to me, the reduced window for failure is important. And that way a
failure will happen right away and can be reported to my operators as soon as
possible.

Couldn't the queue-jumping logic use the distributed transaction id?

In my logic, if a connection requests a shared lock that is already granted
to another connection in the same distributed transaction, it should be
granted right away... does that make sense?

On 08/09/2016 4:15 PM, "Tom Lane"  wrote:

> Lucas  writes:
> > I made a small modification in pg_dump to prevent parallel backup failures
> > due to exclusive lock requests made by other tasks.
>
> > The modification I made takes shared locks for each parallel backup worker
> > at the very beginning of the job. That way, any other job that attempts to
> > acquire exclusive locks will wait for the backup to finish.
>
> I do not think this would eliminate the problem; all it's doing is making
> the window for trouble a bit narrower.  Also, it implies taking out many
> locks that would never be used, since no worker process will be touching
> all of the tables.
>
> I think a real solution involves teaching the backend to allow a worker
> process to acquire a lock as long as its master already has the same lock.
> There's already queue-jumping logic of that sort in the lock manager, but
> it doesn't fire because we don't see that there's a potential deadlock.
> What needs to be worked out, mostly, is how we can do that without
> creating security hazards (since the backend would have to accept a
> command enabling this behavior from the client).  Maybe it's good enough
> to insist that leader and follower be the same user ID, or maybe not.
>
> There are some related problems in parallel query, for which AFAIK we just
> have an ugly kluge solution at the moment.  It'd be better if there were a clear
> model of when to allow a parallel worker to get a lock out-of-turn.
>
> regards, tom lane
>


Re: [HACKERS] Preventing deadlock on parallel backup

2016-09-08 Thread Tom Lane
Lucas  writes:
> I made a small modification in pg_dump to prevent parallel backup failures
> due to exclusive lock requests made by other tasks.

> The modification I made takes shared locks for each parallel backup worker
> at the very beginning of the job. That way, any other job that attempts to
> acquire exclusive locks will wait for the backup to finish.

I do not think this would eliminate the problem; all it's doing is making
the window for trouble a bit narrower.  Also, it implies taking out many
locks that would never be used, since no worker process will be touching
all of the tables.

I think a real solution involves teaching the backend to allow a worker
process to acquire a lock as long as its master already has the same lock.
There's already queue-jumping logic of that sort in the lock manager, but
it doesn't fire because we don't see that there's a potential deadlock.
What needs to be worked out, mostly, is how we can do that without
creating security hazards (since the backend would have to accept a
command enabling this behavior from the client).  Maybe it's good enough
to insist that leader and follower be the same user ID, or maybe not.

There are some related problems in parallel query, for which AFAIK we just
have an ugly kluge solution at the moment.  It'd be better if there were a clear
model of when to allow a parallel worker to get a lock out-of-turn.

regards, tom lane


[HACKERS] Preventing deadlock on parallel backup

2016-09-08 Thread Lucas
People,

I made a small modification in pg_dump to prevent parallel backup failures
due to exclusive lock requests made by other tasks.
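
To make the failure mode concrete, here it is reduced to plain SQL (the table
name is made up and the error text is approximate):

-- Session 1 (pg_dump leader): already holds ACCESS SHARE on the table.
BEGIN;
LOCK TABLE public.orders IN ACCESS SHARE MODE;

-- Session 2 (some other task): queues an ACCESS EXCLUSIVE request behind it.
ALTER TABLE public.orders ADD COLUMN note text;   -- blocks, waiting for session 1

-- Session 3 (pg_dump worker): its lock request cannot be granted immediately,
-- because session 2's stronger request is already queued, so NOWAIT fails.
LOCK TABLE public.orders IN ACCESS SHARE MODE NOWAIT;
-- ERROR: could not obtain lock on relation "orders"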

The modification I made takes shared locks for each parallel backup worker
at the very beginning of the job. That way, any other job that attempts to
acquire exclusive locks will wait for the backup to finish.
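
Seen from the other side, once a worker holds its ACCESS SHARE locks, a
statement like the following (table name made up) simply waits until the
backup's transaction ends, rather than making a later worker lock request fail:

ALTER TABLE public.orders ADD COLUMN note text;   -- waits for the backup to finish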

In my case, each server was taking a day to complete the backup; now with
parallel backup, one is taking 3 hours and the others less than an hour.

The code below is not very elegant, but it works for me. My wishlist for
the backup is:

1) replace the plpgsql with C code that reads the backup TOC and assembles
the lock commands;
2) add a timeout to the lock acquisition (see the sketch just after this
list);
3) broadcast the end of copy to every worker in order to release the locks
as early as possible;
4) create a monitor thread that prioritizes a copy job when an exclusive
lock is requested on its table;
5) grant a lock to a connection of the same distributed transaction if it is
already held by another connection of that transaction. Is there some side
effect I can't see in that?
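
For item 2, a minimal sketch of what I have in mind, assuming the existing
lock_timeout setting is acceptable here (table name made up):

-- Give up on a lock after 60 seconds instead of blocking the worker forever.
SET lock_timeout = '60s';
LOCK TABLE public.orders IN ACCESS SHARE MODE;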

Items 1 to 4 are within my capabilities and I may do them in the future. Item 5
is too advanced for me and I do not dare to mess with something so fundamental
right now.

Is anyone else working on that?

In parallel.c, in RunWorker(...), add:

PQExpBuffer query;
PGresult   *res;

query = createPQExpBuffer();

/*
 * Take an ACCESS SHARE lock on every user table up front.  NOWAIT makes the
 * worker fail immediately if a conflicting lock request is already queued,
 * instead of deadlocking against the leader later on.
 */
appendPQExpBufferStr(query,
    "do language 'plpgsql' $$ "
    "declare "
    "  x record; "
    "begin "
    "  for x in select * from pg_tables "
    "    where schemaname not in ('pg_catalog', 'information_schema') loop "
    "    raise info 'lock table %.%', x.schemaname, x.tablename; "
    "    execute 'LOCK TABLE ' || quote_ident(x.schemaname) || '.' "
    "      || quote_ident(x.tablename) || ' IN ACCESS SHARE MODE NOWAIT'; "
    "  end loop; "
    "end "
    "$$");

res = PQexec(AH->connection, query->data);

if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
    exit_horribly(modulename,
                  "could not lock the tables to begin the work\n");

PQclear(res);
destroyPQExpBuffer(query);