On Fri, Oct 14, 2016 at 2:52 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Thu, Oct 13, 2016 at 6:33 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> If we agree that the above is a problematic case, then some of the
>> options to solve it could be: (a) vacuum should not wait for a cleanup
>> lock and instead just give up and start again, which I think is a bad
>> idea; (b) don't allow taking a lock of higher granularity after the
>> scan is suspended, though I am not sure that is feasible; (c) document
>> the above danger, which sounds okay on the grounds that nobody has
>> reported the problem till now
>
> I don't think any of these sound particularly good.
>

Tom has suggested something similar to approach (b) in his mail [1]:
basically, reject commands like TRUNCATE, REINDEX, etc. if the current
transaction is already holding the table open.  If we want to go that
way, I think it might be better to reject any command that requires a
lock level higher than the one the current transaction already holds on
the table.  Tom also suggested a few more ways to resolve such deadlocks
in that thread, but I think we never implemented those.
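
To make the scenario concrete, here is a rough sketch of the kind of
sequence we are talking about (the table and cursor names are made up,
and whether a given vacuum code path actually blocks this way depends
on whether it needs the cleanup lock unconditionally, so treat this as
an illustration of the lock ordering rather than a tested repro):

    -- Session 1: a suspended scan keeps a buffer of t pinned
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM t;
    FETCH 10 FROM c;   -- scan is suspended here, buffer pin is held

    -- Session 2: vacuum acquires its table lock, then blocks waiting
    -- for a cleanup lock on the pinned buffer; this wait is not
    -- visible to the deadlock detector
    VACUUM t;

    -- Session 1 again: ask for a stronger lock on the same table
    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;
    -- Session 1 now waits for session 2's table lock, while session 2
    -- waits for session 1's buffer pin: an undetected deadlock

With approach (b), the LOCK TABLE (or TRUNCATE/REINDEX) in session 1
would be rejected because the transaction already has the table open
with a weaker lock.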

One point to consider here is whether we really need to invent some way
to make hash indexes immune to this problem when it can occur for other
index types, or even for the heap.  Even if we do want to do something,
I think the solution has to be common to all of them.


[1] - https://www.postgresql.org/message-id/21534.1200956...@sss.pgh.pa.us

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

