pgsql: Add GUC checks for ssl_min_protocol_version and ssl_max_protocol

2020-01-17 Thread Michael Paquier
Add GUC checks for ssl_min_protocol_version and ssl_max_protocol_version

Setting incorrect protocol bounds in the SSL context leads to confusing
error messages generated by OpenSSL which are hard to act on.  New checks
are added within the GUC machinery to improve the user experience: they
apply to any SSL implementation, not only OpenSSL, and doing the checks
beforehand avoids the creation of an SSL context during a reload (or
startup) which we know could never be used anyway.

Backpatch down to 12, where those parameters were introduced by
commit e73e67c.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200114035420.ge1...@paquier.xyz
Backpatch-through: 12

Branch
--
REL_12_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/ac2dcca5dfe62177fd871a8f4f71430a1c92382c

Modified Files
--
src/backend/utils/misc/guc.c   | 51 --
src/test/ssl/t/001_ssltests.pl | 20 -
src/test/ssl/t/SSLServer.pm|  2 +-
3 files changed, 69 insertions(+), 4 deletions(-)
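The ordering check described above can be modeled in a few lines. The
following is an illustrative Python sketch of the validation logic only;
the function name and version table are hypothetical, not PostgreSQL's
actual C code in guc.c:

```python
# Ordered protocol versions; "" means "no bound", mirroring how an
# empty GUC setting leaves that side of the range open.
_PROTOCOL_ORDER = {"": 0, "TLSv1": 1, "TLSv1.1": 2, "TLSv1.2": 3, "TLSv1.3": 4}

def check_ssl_protocol_bounds(min_version: str, max_version: str) -> None:
    """Raise ValueError if the configured bounds can never match.

    This is the kind of cross-parameter check done in the GUC layer:
    rejecting an inverted range up front, before any SSL context is
    built, yields a clear error instead of an opaque OpenSSL failure.
    """
    lo = _PROTOCOL_ORDER[min_version]
    hi = _PROTOCOL_ORDER[max_version]
    # Only compare when both bounds are explicitly set; an empty
    # setting leaves that side unbounded.
    if min_version and max_version and lo > hi:
        raise ValueError(
            f'invalid SSL protocol bounds: minimum "{min_version}" '
            f'is newer than maximum "{max_version}"'
        )

check_ssl_protocol_bounds("TLSv1.2", "TLSv1.3")   # OK
check_ssl_protocol_bounds("TLSv1.2", "")          # OK: unbounded maximum
try:
    check_ssl_protocol_bounds("TLSv1.3", "TLSv1.1")
except ValueError as e:
    print(e)
```

The payoff is the same as in the commit: the misconfiguration is caught
at parameter-validation time, not deep inside the SSL library.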



pgsql: Add GUC checks for ssl_min_protocol_version and ssl_max_protocol

2020-01-17 Thread Michael Paquier
Add GUC checks for ssl_min_protocol_version and ssl_max_protocol_version

Setting incorrect protocol bounds in the SSL context leads to confusing
error messages generated by OpenSSL which are hard to act on.  New checks
are added within the GUC machinery to improve the user experience: they
apply to any SSL implementation, not only OpenSSL, and doing the checks
beforehand avoids the creation of an SSL context during a reload (or
startup) which we know could never be used anyway.

Backpatch down to 12, where those parameters were introduced by
commit e73e67c.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200114035420.ge1...@paquier.xyz
Backpatch-through: 12

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/41aadeeb124ee5f8e7d154a16a74d53286882b74

Modified Files
--
src/backend/utils/misc/guc.c   | 51 --
src/test/ssl/t/001_ssltests.pl | 20 -
2 files changed, 68 insertions(+), 3 deletions(-)



Re: pgsql: Add a non-strict version of jsonb_set

2020-01-17 Thread Tom Lane
Andrew Dunstan  writes:
>> On Jan 17, 2020, at 12:44 PM, Tom Lane  wrote:
>>> Shoulda been a catversion bump in here, if only for protocol's sake.

> I'd love to have a git pre-commit hook that would warn about this, it
> seems to happen several times a year, and I know I've transgressed
> more than once. Not sure what the rules should be, something like if
> you changed src/include/catalog/* but not
> src/include/catalog/catversion.h ?

Meh.  I think that would lead to forced catversion bumps even when
not necessary (ex: when just correcting description strings).
The cure could easily be worse than the disease.

In reality, the only reason for repeated catversion bumps during
development is to warn fellow developers that they have to do
an initdb after a git pull.  That's certainly a valuable courtesy,
but the sky generally isn't going to fall if you forget.

I'd be okay with a hook that had a way to override it ("yes,
I know what I'm doing, this doesn't require a catversion change").
But there's no way to do that, is there?

regards, tom lane
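The hook Andrew sketches, with the override Tom asks about, could look
roughly like the following. This is a hypothetical Python pre-commit
hook, not an official PostgreSQL tool; the environment variable name is
invented, and git's own `--no-verify` already provides a blunter
override for any pre-commit hook:

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: complain when files under
# src/include/catalog/ are staged without a catversion.h change.
# Per Tom's point, some catalog edits (e.g. description strings)
# legitimately need no bump, so an escape hatch is provided.
import os
import subprocess
import sys

CATVERSION = "src/include/catalog/catversion.h"

def needs_catversion_bump(changed_files):
    """True if catalog files changed but catversion.h did not."""
    touched_catalog = any(
        f.startswith("src/include/catalog/") and f != CATVERSION
        for f in changed_files
    )
    return touched_catalog and CATVERSION not in changed_files

def main():
    # Override: "yes, I know what I'm doing" (invented variable name).
    if os.environ.get("PG_SKIP_CATVERSION_CHECK"):
        return 0
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if needs_catversion_bump(staged):
        print("warning: catalog headers changed without a catversion bump",
              file=sys.stderr)
        return 1  # nonzero blocks the commit; bypass with --no-verify
    return 0

# To install: save as .git/hooks/pre-commit, mark executable, and
# call sys.exit(main()) under "if __name__ == '__main__':".
print(needs_catversion_bump(["src/include/catalog/pg_proc.dat"]))
```

Since hooks are per-clone and never pushed, each committer would have to
opt in, which also answers the worry about forcing unnecessary bumps.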




Re: pgsql: Add a non-strict version of jsonb_set

2020-01-17 Thread Andrew Dunstan
On Fri, Jan 17, 2020 at 1:50 PM Andrew Dunstan  wrote:
>
> > On Jan 17, 2020, at 12:44 PM, Tom Lane  wrote:
> >
> > Andrew Dunstan  writes:
> >> Add a non-strict version of jsonb_set
> >
> > Shoulda been a catversion bump in here, if only for protocol's sake.
> >
> > (A useful rule of thumb is "if you won't pass the regression tests
> > without doing an initdb, there should be a catversion change".)
> >
> >
>
> Argh! Will fix when back at my desk
>


I'd love to have a git pre-commit hook that would warn about this, it
seems to happen several times a year, and I know I've transgressed
more than once. Not sure what the rules should be, something like if
you changed src/include/catalog/* but not
src/include/catalog/catversion.h ?

cheers

andrew

-- 
Andrew Dunstan            https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




pgsql: Avoid full scan of GIN indexes when possible

2020-01-17 Thread Alexander Korotkov
Avoid full scan of GIN indexes when possible

The strategy of a GIN index scan is driven by the opclass-specific
extract_query method.  This method can report that the needed search mode
is GIN_SEARCH_MODE_ALL.  This mode means that a matching tuple may
contain none of the extracted entries.  A simple example is the '!term'
tsquery, which doesn't need any term to exist in the matching tsvector.

In order to handle such a scan key, GIN calculates a virtual entry
containing the TIDs of all entries of the attribute.  In effect this is a
full scan of the index attribute.  That is typically very slow, but it
allows GIN to handle some queries correctly.  However, the current
algorithm calculates such a virtual entry for each GIN_SEARCH_MODE_ALL
scan key, even when there are several of them for the same attribute.
This is clearly not optimal.

This commit improves the situation by introducing "exclude only" scan
keys.  Such scan keys cannot return a set of matching TIDs; they can only
filter the TIDs produced by normal scan keys.  Therefore, each attribute
must have at least one normal scan key, while the rest may be "exclude
only" when their search mode is GIN_SEARCH_MODE_ALL.

The same optimization might be applied to the whole scan rather than per
attribute, but that runs into the problem of eliminating NULL values,
with a trade-off between multiple possible ways to handle it.  We will
probably want to do that later using some cost-based decision algorithm.

Discussion: 
https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/4b754d6c16e16cc1a1adf12ab0f48603069a0efd

Modified Files
--
contrib/pg_trgm/expected/pg_trgm.out  | 101 ++
contrib/pg_trgm/sql/pg_trgm.sql   |  27 +++
src/backend/access/gin/ginget.c   |  84 --
src/backend/access/gin/ginscan.c  | 127 +++-
src/backend/utils/adt/selfuncs.c  |  37 --
src/include/access/gin_private.h  |  14 
src/test/regress/expected/gin.out | 131 +-
src/test/regress/expected/tsearch.out |  35 +
src/test/regress/sql/gin.sql  |  91 ++-
src/test/regress/sql/tsearch.sql  |   9 +++
10 files changed, 579 insertions(+), 77 deletions(-)
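The "exclude only" idea above can be illustrated with a toy model. The
Python sketch below is not GIN's actual implementation; it only shows
why filtering candidates produced by normal keys avoids materializing a
virtual all-TIDs entry for every GIN_SEARCH_MODE_ALL key:

```python
def scan(normal_keys, exclude_only_keys):
    """normal_keys: list of TID sets that must all match (AND semantics),
    each driven by real index entries.
    exclude_only_keys: predicates a candidate TID must satisfy; they can
    only filter, never produce candidates of their own."""
    assert normal_keys, "each attribute needs at least one normal key"
    candidates = set.intersection(*normal_keys)
    return {tid for tid in candidates
            if all(pred(tid) for pred in exclude_only_keys)}

# Example: tsquery 'foo & !bar' -- 'foo' is a normal key, while '!bar'
# becomes an exclude-only key that filters out TIDs containing 'bar',
# instead of forcing a full scan of the attribute.
foo_tids = {1, 2, 3, 5}
bar_tids = {2, 5}
result = scan([foo_tids], [lambda tid: tid not in bar_tids])
print(sorted(result))  # [1, 3]
```

A bare '!bar' query (no normal key at all) still needs the expensive
virtual-entry path, which is why the commit requires one normal key per
attribute.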



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL9_5_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/3964722780d811430521b6051bc350ead03fb708

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 83 -
src/include/nodes/execnodes.h   | 11 -
src/test/regress/expected/subselect.out | 27 +++
src/test/regress/sql/subselect.sql  | 14 ++
4 files changed, 101 insertions(+), 34 deletions(-)
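The startup-versus-per-row split the commit adopts can be sketched as
follows. This is a rough Python model, not the executor's C code; for
brevity the stand-in for expression initialization also evaluates the
expressions, and all names are invented:

```python
class ValuesScan:
    """Model of the fix: VALUES rows containing SubPlans are compiled
    once at plan startup, so their subexpressions hook into executor
    state exactly once; all other rows keep the cheap per-row path
    whose memory can be reclaimed after each row."""

    def __init__(self, rows, contains_subplan):
        self.rows = rows
        # Plan startup: pre-compile only the SubPlan-bearing rows.
        self.precompiled = {
            i: self._compile(row)
            for i, row in enumerate(rows) if contains_subplan(row)
        }

    def _compile(self, row):
        # Stand-in for expression initialization; callables model
        # SubPlan-like expressions needing setup before evaluation.
        return [cell() if callable(cell) else cell for cell in row]

    def emit(self):
        for i, row in enumerate(self.rows):
            if i in self.precompiled:
                yield self.precompiled[i]   # initialized at startup
            else:
                yield self._compile(row)    # per-row, memory reclaimable

rows = [(1, "a"), (lambda: 40 + 2, "b"), (3, "c")]
scan = ValuesScan(rows, contains_subplan=lambda r: any(callable(c) for c in r))
print(list(scan.emit()))  # [[1, 'a'], [42, 'b'], [3, 'c']]
```

As the commit message notes, giving up per-row memory reclamation only
for the (rare) SubPlan-bearing rows costs little in practice.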



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL9_6_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/45f03cfa56c88a3c662436027f076ad2b667287e

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 83 -
src/include/nodes/execnodes.h   | 11 -
src/test/regress/expected/subselect.out | 27 +++
src/test/regress/sql/subselect.sql  | 14 ++
4 files changed, 101 insertions(+), 34 deletions(-)



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/41c6f9db25b5e3a8bb8afbb7d6715cff541fd41e

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 110 
src/include/nodes/execnodes.h   |  10 ++-
src/test/regress/expected/subselect.out |  27 
src/test/regress/sql/subselect.sql  |  14 
4 files changed, 118 insertions(+), 43 deletions(-)



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL_11_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/d8e877b869cb5dc33a8d96218115fc12e66b73d4

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 110 
src/include/nodes/execnodes.h   |  11 +++-
src/test/regress/expected/subselect.out |  27 
src/test/regress/sql/subselect.sql  |  14 
4 files changed, 119 insertions(+), 43 deletions(-)



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/167fd022ff3377f18b84de194ee728c91921dc3f

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 83 -
src/include/nodes/execnodes.h   | 11 -
src/test/regress/expected/subselect.out | 27 +++
src/test/regress/sql/subselect.sql  | 14 ++
4 files changed, 101 insertions(+), 34 deletions(-)



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL_12_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/2e2646060e18461de24e3585344095664dc7727b

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 110 
src/include/nodes/execnodes.h   |  11 +++-
src/test/regress/expected/subselect.out |  27 
src/test/regress/sql/subselect.sql  |  14 
4 files changed, 119 insertions(+), 43 deletions(-)



pgsql: Repair more failures with SubPlans in multi-row VALUES lists.

2020-01-17 Thread Tom Lane
Repair more failures with SubPlans in multi-row VALUES lists.

Commit 9b63c13f0 turns out to have been fundamentally misguided:
the parent node's subPlan list is by no means the only way in which
a child SubPlan node can be hooked into the outer execution state.
As shown in bug #16213 from Matt Jibson, we can also get short-lived
tuple table slots added to the outer es_tupleTable list.  At this point
I have little faith that there aren't other possible connections as
well; the long time it took to notice this problem shows that this
isn't a heavily-exercised situation.

Therefore, revert that fix, returning to the coding that passed a
NULL parent plan pointer down to the transiently-built subexpressions.
That gives us a pretty good guarantee that they won't hook into the
outer executor state in any way.  But then we need some other solution
to make SubPlans work.  Adopt the solution speculated about in the
previous commit's log message: do expression initialization at plan
startup for just those VALUES rows containing SubPlans, abandoning the
goal of reclaiming memory intra-query for those rows.  In practice it
seems unlikely that queries containing a vast number of VALUES rows
would be using SubPlans in them, so this should not give up much.

(BTW, this test case also refutes my claim in connection with the prior
commit that the issue only arises with use of LATERAL.  That was just
wrong: some variants of SubLink always produce SubPlans.)

As with previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/16213-871ac3bc208ec...@postgresql.org

Branch
--
REL9_4_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/eb9d1f0504a64aeae2b91279bc59e2649d35b4b0

Modified Files
--
src/backend/executor/nodeValuesscan.c   | 83 -
src/include/nodes/execnodes.h   | 11 -
src/test/regress/expected/subselect.out | 27 +++
src/test/regress/sql/subselect.sql  | 14 ++
4 files changed, 101 insertions(+), 34 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL9_6_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/cdb14154bb00e711152f4011417b3e44ea85adea

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)
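The eager final_lsn update can be modeled simply. The following is a
simplified Python sketch, not reorderbuffer.c itself; field and method
names are illustrative:

```python
class ReorderBufferTXN:
    """Model of the fix: final_lsn advances as each change is spilled,
    so cleanup of spilled files knows its iteration endpoint even if
    the transaction vanishes without an XACT_ABORT record (e.g. the
    server crashed mid-transaction)."""

    def __init__(self, xid):
        self.xid = xid
        self.final_lsn = None
        self.spilled = []

    def spill_change(self, lsn, change):
        self.spilled.append((lsn, change))
        # The eager update: every spill advances final_lsn.
        if self.final_lsn is None or lsn > self.final_lsn:
            self.final_lsn = lsn

    def cleanup_range(self):
        """Endpoint pair that ReorderBufferRestoreCleanup-style code
        would use to iterate over spilled files."""
        first = self.spilled[0][0] if self.spilled else None
        return (first, self.final_lsn)

txn = ReorderBufferTXN(xid=1234)
for lsn in (0x100, 0x180, 0x200):
    txn.spill_change(lsn, change=f"row@{lsn:#x}")
# Even if the server crashes here, with no abort record ever written,
# final_lsn is already valid for cleanup after restart.
print([hex(x) for x in txn.cleanup_range()])  # ['0x100', '0x200']
```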



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL9_5_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/58997ace5b372cc137770292f462d5b8854c832d

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL_11_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/fe955ebee0f206117634521778efe10608cb6552

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/15cac3a523cc06dba1331635f3f67445fa202a44

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL_12_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/bc2140627ff14c207a0af990b8ea3860e188e6b1

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/e3154aae3c2604633a4ce8ffedf542999565c787

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Set ReorderBufferTXN->final_lsn more eagerly

2020-01-17 Thread Alvaro Herrera
Set ReorderBufferTXN->final_lsn more eagerly

... specifically, set it incrementally as each individual change is
spilled down to disk.  This way, it is set correctly when the
transaction disappears without trace, ie. without leaving an XACT_ABORT
wal record.  (This happens when the server crashes midway through a
transaction.)

Failing to have final_lsn prevents ReorderBufferRestoreCleanup() from
working, since it needs the final_lsn in order to know the endpoint of
its iteration through spilled files.

Commit df9f682c7bf8 already tried to fix the problem, but it didn't set
the final_lsn in all cases.  Revert that, since it's no longer needed.

Author: Vignesh C
Reviewed-by: Amit Kapila, Dilip Kumar
Discussion: 
https://postgr.es/m/caldanm2clk+k9jdwjyst0spbgg5aqdvhut0jbkyx_hdae0j...@mail.gmail.com

Branch
--
REL9_4_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/20a1dc1e311d795fa37e5e4bd4f3d49157d78dba

Modified Files
--
src/backend/replication/logical/reorderbuffer.c | 29 ++---
src/include/replication/reorderbuffer.h |  7 +++---
2 files changed, 16 insertions(+), 20 deletions(-)



pgsql: Allocate freechunks bitmap as part of SlabContext

2020-01-17 Thread Tomas Vondra
Allocate freechunks bitmap as part of SlabContext

The bitmap used by SlabCheck to cross-check free chunks in a block used
to be allocated for each SlabCheck call, and was never freed. The memory
leak could be fixed by simply adding a pfree call, but it's actually a
bad idea to do any allocations in SlabCheck at all as it assumes the
state of the memory management as a whole is sane.

So instead we allocate the bitmap as part of SlabContext, which means
we don't need to do any allocations in SlabCheck and the bitmap goes
away together with the SlabContext.

Backpatch to 10, where the Slab context was introduced.

Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: 
https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/a801452c9e3008b473265cc690077244b65635dc

Modified Files
--
src/backend/utils/mmgr/slab.c | 34 ++
1 file changed, 26 insertions(+), 8 deletions(-)
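The allocate-once pattern the commit applies can be sketched as follows.
This is a Python analogue of the C change, with invented names; the
point is that the consistency check reuses a bitmap whose lifetime is
tied to the context, so the check itself performs no allocations:

```python
class SlabContext:
    """Model of the fix: the free-chunk cross-check bitmap is allocated
    once, together with the context, instead of on every check -- so
    there is no per-check leak, and a routine that is verifying memory
    sanity does not itself depend on a working allocator."""

    def __init__(self, chunks_per_block):
        self.chunks_per_block = chunks_per_block
        # One allocation, freed together with the context.
        self.freechunks = bytearray(chunks_per_block)

    def check_block(self, free_chunk_indexes, expected_nfree):
        """SlabCheck-style verification, reusing the bitmap in place."""
        for i in range(self.chunks_per_block):   # reset, no allocation
            self.freechunks[i] = 0
        for idx in free_chunk_indexes:
            assert not self.freechunks[idx], "chunk on freelist twice"
            self.freechunks[idx] = 1
        assert sum(self.freechunks) == expected_nfree, "free count mismatch"

ctx = SlabContext(chunks_per_block=8)
ctx.check_block(free_chunk_indexes=[0, 3, 7], expected_nfree=3)
ctx.check_block(free_chunk_indexes=[1], expected_nfree=1)  # bitmap reused
print("checks passed")
```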



pgsql: Allocate freechunks bitmap as part of SlabContext

2020-01-17 Thread Tomas Vondra
Allocate freechunks bitmap as part of SlabContext

The bitmap used by SlabCheck to cross-check free chunks in a block used
to be allocated for each SlabCheck call, and was never freed. The memory
leak could be fixed by simply adding a pfree call, but it's actually a
bad idea to do any allocations in SlabCheck at all as it assumes the
state of the memory management as a whole is sane.

So instead we allocate the bitmap as part of SlabContext, which means
we don't need to do any allocations in SlabCheck and the bitmap goes
away together with the SlabContext.

Backpatch to 10, where the Slab context was introduced.

Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: 
https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de

Branch
--
REL_12_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/162c951dfe8f0a894f2832e04aacfc3a0a7bf50c

Modified Files
--
src/backend/utils/mmgr/slab.c | 28 +---
1 file changed, 21 insertions(+), 7 deletions(-)



pgsql: Allocate freechunks bitmap as part of SlabContext

2020-01-17 Thread Tomas Vondra
Allocate freechunks bitmap as part of SlabContext

The bitmap used by SlabCheck to cross-check free chunks in a block used
to be allocated for each SlabCheck call, and was never freed. The memory
leak could be fixed by simply adding a pfree call, but it's actually a
bad idea to do any allocations in SlabCheck at all as it assumes the
state of the memory management as a whole is sane.

So instead we allocate the bitmap as part of SlabContext, which means
we don't need to do any allocations in SlabCheck and the bitmap goes
away together with the SlabContext.

Backpatch to 10, where the Slab context was introduced.

Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: 
https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de

Branch
--
REL_11_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/8c37e4469d13e34d80b6f561f617bd4bfe339c6c

Modified Files
--
src/backend/utils/mmgr/slab.c | 28 +---
1 file changed, 21 insertions(+), 7 deletions(-)



pgsql: Allocate freechunks bitmap as part of SlabContext

2020-01-17 Thread Tomas Vondra
Allocate freechunks bitmap as part of SlabContext

The bitmap used by SlabCheck to cross-check free chunks in a block used
to be allocated for each SlabCheck call, and was never freed. The memory
leak could be fixed by simply adding a pfree call, but it's actually a
bad idea to do any allocations in SlabCheck at all as it assumes the
state of the memory management as a whole is sane.

So instead we allocate the bitmap as part of SlabContext, which means
we don't need to do any allocations in SlabCheck and the bitmap goes
away together with the SlabContext.

Backpatch to 10, where the Slab context was introduced.

Author: Tomas Vondra
Reported-by: Andres Freund
Reviewed-by: Tom Lane
Backpatch-through: 10
Discussion: 
https://www.postgresql.org/message-id/20200116044119.g45f7pmgz4jmodxj%40alap3.anarazel.de

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/543852fd8bf0adc56192aeb25ff83f1a12c30368

Modified Files
--
src/backend/utils/mmgr/slab.c | 28 +---
1 file changed, 21 insertions(+), 7 deletions(-)