On 30 November 2017 at 14:48, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Nov 29, 2017 at 8:27 PM, Amit Langote
> <langote_amit...@lab.ntt.co.jp> wrote:
>> To be accurate, as also noted in the commit message of the patch that I
>> sent, authors of this patch
For me, reading the subject line of the commit I'd have expected a doc
change, or improved/new code comments.
This is really more "Disallow mixed temp/permanent partitioned hierarchies".
"Clarify" does not really involve a change of behaviour. It's an
explanation of w
as done for a reason and that I just
didn't understand what that reason was. I don't recall any comments to
explain the reason why we build two RangeTblEntries for each
partitioned table.
In light of what Amit has highlighted, I'm still standing by the v3
patch assuming the typo is fixed.
--
nd/optimizer/prep/prepunion.c.gcov.html
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
moved. That ought to be tested, particularly as Amit
> mentions that there could be improvements with moving it around in
> future versions.
Oh okay. Yeah, you can hit that with a partitionless sub-partitioned table.
I've added a test in the attached v4.
--
David Rowley
On 4 July 2018 at 13:48, Michael Paquier wrote:
> So at the end I have dropped the table from the test, and pushed the
> patch to HEAD and REL_11_STABLE. Thanks David for the patch, and others
> for the reviews.
Thanks for pushing it.
--
David Rowley h
On 20 February 2018 at 09:40, Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
> Modified Files
> --
> doc/src/sgml/ddl.sgml | 9 +-
Attached is a very small fix to a small error this patch created in the docs.
--
David Rowley
On 25 July 2018 at 05:11, Andres Freund wrote:
> Pushed. Not sure if any of those do enough control flow analysis to
> even consider those blocks reachable? But anyway, doesn't hurt.
Thanks. MSVC was producing a warning.
--
David Rowley http://www.2ndQuadra
On 23 July 2018 at 10:30, Andres Freund wrote:
> Hand code string to integer conversion for performance.
This could do with the attached to silence the compiler warnings from
compilers that don't understand ereport(ERROR) does not return.
--
David Rowley http://
ilers don't
quite know about that yet. The attached compiles fine for me on a
windows machine.
Changing "restrict" to "__restrict" also works, so it might,
longer-term, be worth some configure test and a PG_RESTRICT macro so we
can allow this, assuming there are performance g
On 11 April 2018 at 18:58, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 10 April 2018 at 08:55, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Alvaro Herrera <alvhe...@alvh.no-ip.org> writes:
>>> David Rowley wrote:
>>>> Okay, I've written an
Filter: ((a >= $1) AND (a <= $2) AND (b < 4))
>
Reading code it looks like a bug in choose_next_subplan_for_worker():
The following needs to be changed for this patch:
/* Advance to next plan. */
pstate->pa_next_plan++;
I'll think a bit harder about the bes
On 9 April 2018 at 13:03, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 9 April 2018 at 09:54, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> BTW, pademelon just exhibited a different instability in this test:
>>
>> ***
>> /home/bfarm/bf-data/HEA
+1 for a new field for this and making ON CONFLICT use it.
ntuples2 seems fine. If we make it too specific then we'll end up with
lots more than we need.
I don't think re-using the filter counters are very good when it's not
for filtering.
MERGE was probably just following the example made by ON C
On 9 April 2018 at 15:03, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 9 April 2018 at 13:03, David Rowley <david.row...@2ndquadrant.com> wrote:
> Okay, I've written and attached a fix for this. I'm not 100% certain
> that this is the cause of the problem on pa
On 8 April 2018 at 14:56, David Rowley <david.row...@2ndquadrant.com> wrote:
> It happens 12 or 13 times on my machine, then does not happen again
> for 60 seconds, then happens again.
Setting autovacuum_naptime to 10 seconds makes it occur in 10 second
intervals...
--
On 8 April 2018 at 15:02, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 8 April 2018 at 14:56, David Rowley <david.row...@2ndquadrant.com> wrote:
>> It happens 12 or 13 times on my machine, then does not happen again
>> for 60 seconds, then happens again.
>
hy
we're doing set enable_indexonlyscan = off;
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
runtime_pruning_make_tests_stable_v2.patch
Description: Binary data
On 10 April 2018 at 08:55, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvhe...@alvh.no-ip.org> writes:
>> David Rowley wrote:
>>> Okay, I've written and attached a fix for this. I'm not 100% certain
>>> that this is the cause of the problem o
are other examples in that file with the switch
(part_scheme->strategy), these are not using enums. I'd have to assume
that these must be different because of that.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 24 April 2018 at 13:50, Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
> David Rowley wrote:
>> On 24 April 2018 at 03:12, Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
>> > Remove useless default clause in switch
>> >
>> > The switch covers a
On 18 April 2018 at 07:26, Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
> David Rowley wrote:
>
>> I've made another pass over the nodeAppend.c code and I'm unable to
>> see what might cause this, although I did discover a bug where
>> first_partial_plan is not set
n't cost much performance wise, but it may mislead someone
into thinking they can add some other condition there to skip
partitions.
The attached removes it.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & S
hem in
https://www.postgresql.org/message-id/camyn-kcq+fdlusen+tmukpn5ygqcykkr266tyenjjod_wt-...@mail.gmail.com
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
y issues
where cached plans are not invalidated correctly?
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
ced in pg_global tablespace
A patch to fix is attached.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
disallow_partitioned_indexes_from_being_put_in_pg_global.patch
Description: Binary data
On Sat, 30 Mar 2019 at 20:25, Peter Eisentraut wrote:
> src/backend/utils/cache/lsyscache.c | 33 +
This change has caused a new compiler warning for compilers that don't
understand that elog(ERROR) can't return.
The attached fixes.
--
David Rowley http://
ver-the-less.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
fixup_sizeof_relid_map.patch
Description: Binary data
ked by whatever whenever statement is defined.
I might be missing something here, but if you're changing the
signature of output_simple_statement(), wouldn't you also need to
change all calls to it too? ... including the ones in preproc.y?
--
David Rowley http://www.
Fix incorrect index behavior in COPY FROM with partitioned tables
86b85044e rewrote how COPY FROM works to allow multiple tuple buffers to
exist at once, thus allowing multi-inserts to be used in more cases with
partitioned tables. That commit neglected to update the estate's
Fix confusing NOTICE text in REINDEX CONCURRENTLY
When performing REINDEX TABLE CONCURRENTLY, if all of the table's indexes
could not be reindexed, a NOTICE message claimed that the table had no
indexes. This was confusing, so let's change the NOTICE text to something
less confusing.
In
Fix incorrect parameter name in comment
Author: Antonin Houska
Discussion: https://postgr.es/m/22370.1559293357@localhost
Branch
--
master
Details
---
https://git.postgresql.org/pg/commitdiff/72b6223f766d6ba9076d7b1ebdf05df75e83ba5c
Modified Files
--
Docs: concurrent builds of partitioned indexes are not supported
Document that CREATE INDEX CONCURRENTLY is not currently supported for
indexes on partitioned tables.
Discussion:
https://postgr.es/m/cakjs1f_cerd2z9l21q8ogld4tgh7yw1z9mathtso13sxvg-...@mail.gmail.com
Backpatch-through: 11
Branch
doc: Add best practises section to partitioning docs
A few questionable partitioning designs have been cropping up lately
around the mailing lists. Generally, these cases have been partitioning
using too many partitions which have caused performance or OOM problems for
the users.
Since we have
doc: Fix grammatical error in partitioning docs
Reported-by: Amit Langote
Discussion:
https://postgr.es/m/ca+hiwqgzfkki0tkbgypr2_5qrrabhzop47ap1brluoukfqd...@mail.gmail.com
Backpatch-through: 10
Branch
--
REL_10_STABLE, master
Details
---
Don't remove surplus columns from GROUP BY for inheritance parents
d4c3a156c added code to remove columns that were not part of a table's
PRIMARY KEY constraint from the GROUP BY clause when all the primary key
columns were present in the group by. This is fine to do since we know
that there
Fix RANGE partition pruning with multiple boolean partition keys
match_clause_to_partition_key incorrectly would return
PARTCLAUSE_UNSUPPORTED if a bool qual could not be matched to the current
partition key. This was a problem, as it caused the calling function to
discard the qual and not try
Use appendStringInfoString and appendPQExpBufferStr where possible
This changes various places where appendPQExpBuffer was used in places
where it was possible to use appendPQExpBufferStr, and likewise for
appendStringInfo and appendStringInfoString. This is really just a
stylistic improvement,
Fix missing call to table_finish_bulk_insert during COPY
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM. However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition. Before
On Tue, 2 Jul 2019 at 01:24, David Rowley wrote:
> src/backend/commands/copy.c | 21 ++---
> 1 file changed, 14 insertions(+), 7 deletions(-)
Looking at buildfarm now.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Tr
Remove surplus call to table_finish_bulk_insert
4de60244e added the call to table_finish_bulk_insert to the
CopyMultiInsertBufferCleanup function. We use a CopyMultiInsertBuffer even
for non-partitioned tables, so having the cleanup do that meant we would
call table_finish_bulk_insert twice when
On Tue, 2 Jul 2019 at 03:33, Tom Lane wrote:
>
> David Rowley writes:
> > Remove surplus call to table_finish_bulk_insert
>
> This still blows up immediately with -DRELCACHE_FORCE_RELEASE
> (cf prion).
>
> I think at this point you should revert and try again later
Revert fix missing call to table_finish_bulk_insert during COPY
This reverts commits 4de60244e and b2d69806d. Further thought is
required to make this work properly.
Branch
--
master
Details
---
https://git.postgresql.org/pg/commitdiff/f5db56fc4d6e95c582b61c99328ea0702b869fa0
Modified
condition which was introduced by 100340e2d.
However, we're speeding up much more than just that here.
Author: David Rowley, Tom Lane
Reviewed-by: Tom Lane, Tomas Vondra
Discussion: https://postgr.es/m/6970.1545327...@sss.pgh.pa.us
Branch
--
master
Details
---
https://git.postgresql.org/pg
Adjust overly strict Assert
3373c7155 changed how we determine EquivalenceClasses for relations and
added an Assert to ensure all relations mentioned in each EC's ec_relids
were RELOPT_BASERELs. However, the join removal code may remove a LEFT
JOIN and since it does not clean up EC members
Make better use of the new List implementation in a couple of places
In nodeAppend.c and nodeMergeAppend.c there were some foreach loops which
looped over the list of subplans and only performed any work if the
subplan index was found in a Bitmapset. With the old linked list
implementation of
Use appendBinaryStringInfo in more places where the length is known
When we already know the length that we're going to append, then it
makes sense to use appendBinaryStringInfo instead of
appendStringInfoString so that the append can be performed with a simple
memcpy() using a known length
Fix missing calls to table_finish_bulk_insert during COPY, take 2
86b85044e abstracted calls to heap functions in COPY FROM to support a
generic table AM. However, when performing a copy into a partitioned
table, this commit neglected to call table_finish_bulk_insert for each
partition. Before
Fix possible crash with GENERATED ALWAYS columns
In some corner cases, this could also lead to corrupted values being
included in the tuple.
Users who are concerned that they are affected by this should first
upgrade and then perform a base backup of their database and restore onto
an off-line
Remove unneeded constraint dependency tracking
It was previously thought that remove_useless_groupby_columns() needed to
keep track of which constraints the generated plan depended upon, however,
this is unnecessary. The confusion likely arose regarding this because of
Add functions to calculate the next power of 2
There are many areas in the code where we need to determine the next
highest power of 2 of a given number. We tend to always do that in an
ad-hoc way each time, generally with some tight for loop which performs a
bitshift left once per loop and goes
Modify various power 2 calculations to use new helper functions
First pass of modifying various places that obtain the next power of 2 of
a number and make them use the new functions added in pg_bitutils.h
instead.
This also removes the _hash_log2() function. There are no longer any
callers in
Modify additional power 2 calculations to use new helper functions
2nd pass of modifying various places which obtain the next power
of 2 of a number and make them use the new functions added in
f0705bb62.
In passing, also modify num_combinations(). This can be implemented
using simple
Trigger autovacuum based on number of INSERTs
Traditionally autovacuum has only ever invoked a worker based on the
estimated number of dead tuples in a table and for anti-wraparound
purposes. For the latter, with certain classes of tables such as
insert-only tables, anti-wraparound vacuums could
Attempt to fix unstable regression tests
b07642dbc added code to trigger autovacuums based on the number of
inserts into a table. This seems to have caused some regression test
results to destabilize. I suspect this is due to autovacuum triggering a
vacuum sometime after the test's ANALYZE run
On Tue, 31 Mar 2020 at 15:55, Tom Lane wrote:
> I've been trying to reproduce this by dint of running just the stats_ext
> script, over and over in a loop. I've not had any success on fast
> machines, but on a slow one (florican's host) I got this after a few
> hundred iterations:
I've had a 13
On Wed, 1 Apr 2020 at 13:00, Tom Lane wrote:
>
> David Rowley writes:
> > On Tue, 31 Mar 2020 at 15:55, Tom Lane wrote:
> >> Now this *IS* autovacuum interference, but it's hardly autovacuum's fault:
> >> the test script is supposing that autovac won't come i
Attempt to fix unstable regression tests, take 2
Following up on 2dc16efed, petalura has suffered some additional
failures in stats_ext which again appear to be around the timing of an
autovacuum during the test, causing instability in the row estimates.
Again, let's fix this by explicitly
Attempt to stabilize partitionwise_aggregate test
In b07642dbc, we added code to trigger autovacuums based on the number of
INSERTs into a table. This seems to have caused some destabilization of
the regression tests. Likely this is due to an autovacuum triggering
mid-test and (per theory from Tom
Remove bogus Assert in foreign key cloning code
This Assert was trying to ensure that the number of columns in the foreign
key being cloned was the same as the number of attributes in the parentRel. Of
course, it's perfectly valid to have columns in the table which are not
part of the foreign key
as
fast as the original qsort method even when the page just has a few
tuples. As the number of tuples becomes larger the new method maintains
its performance whereas the original qsort method became much slower when
the number of tuples on the page became large.
Author: David Rowley
Reviewed
Fix compiler warning
Introduced in 0aa8f7640.
MSVC warned about performing 32-bit bit shifting when it appeared like we
might like a 64-bit result. We did, but it just so happened that none of
the calls to this function could have caused the 32-bit shift to overflow.
Here we just cast the
, but further searching by me found significantly more
places that deserved the same treatment.
Author: Zhijie Hou, David Rowley
Discussion:
https://postgr.es/m/cb172cf4361e4c7ba7167429070979d4@G08CNEXMBPEKD05.g08.fujitsu.local
Branch
--
master
Details
---
https://git.postgresql.org/pg
Relax some asserts in merge join costing code
In the planner, it was possible, given an extreme enough case containing a
large number of joins, for the number of estimated rows to become infinite.
This could cause problems in initial_cost_mergejoin() where we perform
some calculations based on
Prevent overly large and NaN row estimates in relations
Given a query with enough joins, it was possible that, after multiplying
the row estimates with the join selectivity, the estimated number of rows
would exceed the limits of the double data type
and become infinite.
To
Fixup some misusages of bms_num_members()
It's a bit inefficient to test if a Bitmapset is empty by counting all the
members and seeing if that number is zero. It's much better just to use
bms_is_empty(). Likewise for checking if there are at least two members,
just use bms_membership(), which
On Wed, 19 Aug 2020 at 12:37, Andres Freund wrote:
>
> Hi,
>
> On 2020-08-18 19:55:50 -0400, Tom Lane wrote:
> > > I'm inclined to just make ClearTransaction take an exclusive lock - the
> > > rest of the 2PC operations are so heavyweight that I can't imagine
> > > making a difference. When I
Fix a few typos in JIT comments and README
Reviewed-by: Abhijit Menon-Sen
Reviewed-by: Andres Freund
Discussion:
https://postgr.es/m/CAApHDvobgmCs6CohqhKTUf7D8vffoZXQTCBTERo9gbOeZmvLTw%40mail.gmail.com
Backpatch-through: 11, where JIT was added
Branch
--
REL_11_STABLE, REL_12_STABLE, REL_13_STABLE, master
Details
---
just replace the call within each macro to use
list_nth_cell().
For the llast*() case we require a new list_last_cell() inline function to
get away from the multiple evaluation hazard that we'd get if we fetched
->length on the macro's parameter.
Author: David Rowley
Reviewed-by: Tom L
Doc: Improve clarity on partitioned table limitations
Explicitly mention that primary key constraints are also included in the
limitation that the constraint columns must be a superset of the partition key
columns.
Wording suggestion from Tom Lane.
Discussion:
this, but in the
general case, none of these lists are likely to be very large, so the
lookup was probably never that expensive anyway. However, some of the
calls are in fairly hot code paths, e.g. process_equivalence(). So any
small gains there are useful.
Author: Zhijie Hou and David Rowley
Discussion:
https
Fix incorrect parameter name in a function header comment
Author: Zhijie Hou
Discussion:
https://postgr.es/m/14cd74ea00204cc8a7ea5d738ac82cd1@G08CNEXMBPEKD05.g08.fujitsu.local
Backpatch-through: 12, where the mistake was introduced
Branch
--
master, REL_12_STABLE, REL_13_STABLE
Details
---
Fix bogus EXPLAIN output for Hash Aggregate
9bdb300de modified the EXPLAIN output for Hash Aggregate to show details
from parallel workers. However, it neglected to consider that a given
parallel worker may not have assisted with the given Hash Aggregate. This
can occur when workers fail to start
Use int64 instead of long in incremental sort code
64-bit Windows has 4-byte long values, which are not suitable for tracking
disk space usage in the incremental sort code. Let's just make all these
fields int64s.
Author: James Coleman
Discussion: