ns due to client application shutdown, then the client OS should itself
properly close that connection, and therefore this patch will detect
such situations even without keepalives configured.
--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
> On 14 May 2019, at 12:53, Stas Kelvich wrote:
>
> Hi,
>
> This is attempt number N+1 to relax the checks for temporary table access
> in a transaction that is going to be prepared.
>
Konstantin Knizhnik did an off-list review of this patch and spotted a
few problems.
table during the
current transaction, so during commit only tables from that hash will be
truncated. That way ON COMMIT DELETE tables in a backend will not prevent
read-only access to some other table in that backend.
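To make the scheme concrete, here is a minimal sketch of the idea, with illustrative names only (the real patch would presumably use a dynahash keyed by relation OID):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch, not patch code: remember which temporary tables were touched
 * during the current transaction in a small set, and at commit truncate
 * only those, instead of every ON COMMIT DELETE table in the backend. */

#define MAX_TRACKED 64

static unsigned accessed_temp_rels[MAX_TRACKED];
static size_t n_accessed;

/* Call from table-access paths when a temp relation is opened. */
void note_temp_rel_accessed(unsigned relid)
{
    for (size_t i = 0; i < n_accessed; i++)
        if (accessed_temp_rels[i] == relid)
            return;                     /* already tracked */
    assert(n_accessed < MAX_TRACKED);
    accessed_temp_rels[n_accessed++] = relid;
}

/* Call at commit: copy out the relations that need truncation and reset
 * the per-transaction state.  Returns the number of ids in out[]. */
size_t temp_rels_to_truncate(unsigned *out)
{
    size_t n = n_accessed;
    for (size_t i = 0; i < n; i++)
        out[i] = accessed_temp_rels[i];
    n_accessed = 0;
    return n;
}
```

A read-only transaction that never calls note_temp_rel_accessed() gets an empty set at commit, which is the point of the change.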
Any thoughts?
Hi, hackers.
It seems that heapam.c:3082 calls XLogRegisterData() with an argument
allocated on the stack, but the following call to XLogInsert() happens
after the end of that variable's scope.
The issue was spotted by clang's AddressSanitizer. Fix attached.
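For readers who don't have the WAL code paged in: XLogRegisterData() only records a pointer to the data, and the bytes are copied later, by XLogInsert(), so the registered buffer must stay in scope in between. A toy model of that contract (names and implementation here are illustrative, not the real API):

```c
#include <assert.h>
#include <string.h>

/* Toy register-then-insert API: register_data() just stores the pointer
 * (no copy), insert_record() dereferences it later.  Any buffer passed
 * to register_data() must therefore outlive the matching insert_record()
 * call; a block-scoped stack variable does not. */

static const char *registered_data;
static size_t registered_len;

void register_data(const char *data, size_t len)
{
    registered_data = data;             /* caller keeps ownership */
    registered_len = len;
}

size_t insert_record(char *dst, size_t dstsize)
{
    assert(registered_len < dstsize);
    memcpy(dst, registered_data, registered_len);
    dst[registered_len] = '\0';
    return registered_len;
}

/* The reported bug shape, as pseudocode:
 *
 *     {
 *         SomeStruct tmp = ...;        -- block-scoped
 *         register_data((char *) &tmp, sizeof(tmp));
 *     }                                -- tmp's lifetime ends here
 *     insert_record(...);              -- reads dead stack memory
 *
 * The fix is to declare tmp in a scope enclosing the insert call. */
```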
> On 31 Jan 2019, at 18:42, Andres Freund wrote:
>
> Hi,
>
> On 2018-11-30 16:00:17 +0300, Stas Kelvich wrote:
>>> On 29 Nov 2018, at 18:21, Dmitry Dolgov <9erthali...@gmail.com> wrote:
>>> Is there any resulting patch where the ideas how to implement
though the first was [BERN83], but he actually references a
bunch of previous articles, and [REED78] is one of them) was actually about
distributed transactions and uses more or less the same approach, with
"pseudo-time" in their terminology, to order transactions and assign
snapshots.
[HARD17] https://dl.acm.org/citation.cfm?id=3055548
[REED78] https://dl.acm.org/citation.cfm?id=889815
[BERN83] https://dl.acm.org/citation.cfm?id=319998
> With this patch, can we start a remote transaction at READ COMMITTED
> with an imported global snapshot if the local transaction started at
> READ COMMITTED?
In theory it is possible; one just needs to send a new snapshot before each
statement. With some amount of careful work it is poss
atch set and that it will
be possible to address that later (in the long run such a connection will be
needed anyway, at least for deadlock detection). However, if you think that
the current behavior + STO analog isn't good enough, then I'm ready to pursue
that track.
It's not obvious how to integrate that into postgres_fdw. Probably
that will require a bidirectional connection between postgres_fdw nodes
(also, distributed deadlock detection will be easy with such a connection).
lue that was
global_snapshot_xmin seconds ago, and we have a mapping from time (or
GlobalCSN) to globalxmin for each second in this range. So when
some backend imports a global snapshot with some GlobalCSN, that
GlobalCSN is mapped to an xmin and this xmin is set as Proc->xmin.
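A toy version of such a per-second map, with made-up names and sizes (the real patch presumably keeps this in shared memory and indexes by GlobalCSN rather than a plain second counter):

```c
#include <assert.h>

/* Illustrative ring buffer: for each of the last N seconds remember the
 * oldest running xmin at that moment, so an imported snapshot's CSN
 * (modelled here as just a second timestamp) can be mapped back to a
 * safe Proc->xmin.  Snapshots older than the window are refused. */

#define XMIN_MAP_SECONDS 16

static unsigned xmin_by_second[XMIN_MAP_SECONDS];
static unsigned latest_second;

/* Record the current oldest xmin, once per second. */
void record_xmin(unsigned second, unsigned oldest_xmin)
{
    xmin_by_second[second % XMIN_MAP_SECONDS] = oldest_xmin;
    latest_second = second;
}

/* Map a snapshot's second back to an xmin; 0 means "too old, refuse". */
unsigned xmin_for_snapshot(unsigned snapshot_second)
{
    if (snapshot_second > latest_second ||
        latest_second - snapshot_second >= XMIN_MAP_SECONDS)
        return 0;
    return xmin_by_second[snapshot_second % XMIN_MAP_SECONDS];
}
```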
> On 3 May 2018, at 18:28, Masahiko Sawada wrote:
>
> On Wed, May 2, 2018 at 1:27 AM, Stas Kelvich wrote:
>> 1) To achieve commit atomicity across different nodes an intermediate step
>> is introduced: at first the running transaction is marked as InDoubt on all
>> nodes,
>>
> On 2 May 2018, at 05:58, Peter Eisentraut
> wrote:
>
> On 5/1/18 12:27, Stas Kelvich wrote:
>> Clock-SI is described in [5] and here I provide a small overview, which
>> supposedly should be enough to catch the idea. Assume that each node runs
>> Commit
). And clock time is supposedly more or less the same
on different nodes under normal conditions. But correctness here will not
depend on the degree of clock synchronisation; only the performance of
global transactions will.
rupts and
therefore can cancel the backend or throw an error before GXact clean-up.
Other similar places, like CommitTransaction and PrepareTransaction, have
such hold-interrupts sections.
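For context, a hold-interrupts section is essentially a counter that CHECK_FOR_INTERRUPTS() consults: interrupts raised while the counter is nonzero stay pending until it drops back to zero. A toy model of that mechanism (not the real macros; simplified to return a flag where the real code would ereport):

```c
#include <assert.h>

/* Toy hold-interrupts machinery with illustrative names. */

static int interrupt_holdoff_count;
static int cancel_pending;

void hold_interrupts(void)   { interrupt_holdoff_count++; }
void resume_interrupts(void) { assert(interrupt_holdoff_count > 0);
                               interrupt_holdoff_count--; }
void raise_cancel(void)      { cancel_pending = 1; }

/* CHECK_FOR_INTERRUPTS() analog: returns 1 if a pending cancel was
 * acted on, 0 if nothing pending or interrupts are currently held. */
int check_for_interrupts(void)
{
    if (cancel_pending && interrupt_holdoff_count == 0)
    {
        cancel_pending = 0;
        return 1;
    }
    return 0;
}
```

Wrapping GXact clean-up in hold_interrupts()/resume_interrupts() guarantees a cancel arriving mid-cleanup is deferred rather than leaving the GXact half-released.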
> On 25 Apr 2018, at 17:55, John Naylor wrote:
>
> On 4/25/18, Stas Kelvich wrote:
>>> On 25 Apr 2018, at 17:18, Tom Lane wrote:
>>> I think we should rewrite
>>> both of them to use the Catalog.pm infrastructure.
>>
>> Okay, seems re
> On 25 Apr 2018, at 17:18, Tom Lane wrote:
> I think we should rewrite
> both of them to use the Catalog.pm infrastructure.
Okay, seems reasonable. I'll put the shared code in Catalog.pm and
update the patch.
Hm, I attached the patch in the first message, but it seems that my mail
client again messed with the attachment. However, the archive caught it:
https://www.postgresql.org/message-id/attachment/60920/0001-Rewrite-unused_oids-in-perl.patch
{*.*", but it seems easier for future use to just
rewrite unused_oids in Perl to match duplicate_oids. Also add an in-place
complaint about duplicates instead of running uniq over the oids array.
replication between postgres-10 and
postgres-with-2pc-decoding will be broken. So ISTM it's better to set
LOGICALREP_IS_COMMIT to zero and change the flag-checking rules to
accommodate that.
additional regular and tap
>> tests that we have added as part of this patch.
>>
>
> PFA, latest version of this patch.
>
> This latest version takes care of the abort-while-decoding issue along
> with additional test cases and documentation changes.
>
>
oth
cases the patch works.
Thanks!
at having a busy loop is the best idea out of the several discussed.
I thought about a small sleep at the bottom of that loop if we reached the
topmost transaction, but taking into account the low probability of that
event, maybe it is just faster to busy-wait.
Also, some clarifying comment in the code would be nice.
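To make the trade-off concrete, here is a hedged sketch of such a loop: pure spinning on the expected fast path, with a yield hook (standing in for a short sleep) once a spin budget is exhausted. Names and structure are mine, not the patch's:

```c
#include <assert.h>
#include <stddef.h>

/* Spin until cond() holds.  After spin_limit failed checks, call
 * yield_fn() (e.g. a microsecond sleep) before each further check.
 * Returns how many times we yielded: 0 on the expected fast path. */
int wait_for(int (*cond)(void), int spin_limit, void (*yield_fn)(void))
{
    int yields = 0;
    int budget = spin_limit;

    while (!cond())
    {
        if (--budget <= 0)
        {
            if (yield_fn != NULL)
                yield_fn();
            yields++;
        }
    }
    return yields;
}

/* Demo condition that becomes true on the third check. */
static int demo_ticks;
static int demo_cond(void)  { return ++demo_ticks >= 3; }
void demo_reset(void)       { demo_ticks = 0; }
```

With a generous spin budget the slow path (and its sleep) is never taken, which matches the argument that the rare case does not justify paying for a sleep in the common one.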
ock.
The probability of that crash can be significantly increased by adding a
sleep between xid generation and lock insertion in AssignTransactionId().
AssignTransactionId.patch
ningTransactionData it is possible to have a custom lock there. In
this case GetRunningTransactionData will hold three locks simultaneously,
since it already holds ProcArrayLock and XidGenLock =)
Any better ideas?
xltw_fix.diff