pgsql: RLS comment fixes.
RLS comment fixes.

The comments in get_policies_for_relation() say that CREATE POLICY does not support defining restrictive policies. This is no longer true, starting from PG10.

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/a1187c4efe024e5a1f5cf9221b13a3f3226d92c5

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
pgsql: RLS comment fixes.
RLS comment fixes.

The comments in get_policies_for_relation() say that CREATE POLICY does not support defining restrictive policies. This is no longer true, starting from PG10.

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/9c55391f0f277318c754f89950e65363ede4136e

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
Re: pgsql: Provide separate header file for built-in float types
On 29 July 2018 at 02:43, Tomas Vondra wrote:
> Provide separate header file for built-in float types

It looks like this commit broke float8_accum() and float4_accum() when not invoked as aggregates (transdatums is no longer being populated). Possibly it would be worth having regression tests for those, to be safe.

Regards,
Dean
pgsql: Guard against rare RAND_bytes() failures in pg_strong_random().
Guard against rare RAND_bytes() failures in pg_strong_random().

When built using OpenSSL, pg_strong_random() uses RAND_bytes() to generate the random number. On very rare occasions that can fail, if its PRNG has not been seeded with enough data. Additionally, once it does fail, all subsequent calls will also fail until more seed data is added. Since this is required during backend startup, this can result in all new backends failing to start until a postmaster restart.

Guard against that by checking the state of OpenSSL's PRNG using RAND_status(), and if necessary (very rarely), seeding it using RAND_poll().

Back-patch to v10, where pg_strong_random() was introduced.

Dean Rasheed and Michael Paquier.

Discussion: https://postgr.es/m/CAEZATCXMtxbzSAvyKKk5uCRf9pNt4UV%2BF_5v%3DgLfJUuPxU4Ytg%40mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/8f6ce7fb090a674f18b72e89a2b868fe1343fe8f

Modified Files
--------------
src/port/pg_strong_random.c | 29 +
1 file changed, 29 insertions(+)
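The guard described above is small enough to sketch. Below is a hypothetical Python stand-in for the control flow (the real code is C against OpenSSL; `rand_status`, `rand_poll`, and `rand_bytes` here are toy substitutes for RAND_status(), RAND_poll(), and RAND_bytes(), with `os.urandom` playing the PRNG):

```python
import os

# Hypothetical stand-ins for the OpenSSL calls named in the commit:
# RAND_status() reports whether the PRNG is sufficiently seeded, and
# RAND_poll() asks it to gather fresh entropy from the OS.
_seeded = False

def rand_status():
    return _seeded

def rand_poll():
    global _seeded
    _seeded = True  # pretend the OS supplied enough entropy

def rand_bytes(n):
    return os.urandom(n) if _seeded else None

def pg_strong_random(nbytes):
    """Mirror the guard the commit adds: check the PRNG's state up front,
    and (rarely) reseed it, rather than letting RAND_bytes() fail and
    poison every subsequent call."""
    if not rand_status():
        rand_poll()
        if not rand_status():
            return None  # still unseeded: report failure to the caller
    return rand_bytes(nbytes)
```

The point of the fix is the up-front RAND_status() check: without it, one unseeded RAND_bytes() failure at backend startup would repeat on every new backend until the postmaster restarts.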
pgsql: Guard against rare RAND_bytes() failures in pg_strong_random().
Guard against rare RAND_bytes() failures in pg_strong_random().

When built using OpenSSL, pg_strong_random() uses RAND_bytes() to generate the random number. On very rare occasions that can fail, if its PRNG has not been seeded with enough data. Additionally, once it does fail, all subsequent calls will also fail until more seed data is added. Since this is required during backend startup, this can result in all new backends failing to start until a postmaster restart.

Guard against that by checking the state of OpenSSL's PRNG using RAND_status(), and if necessary (very rarely), seeding it using RAND_poll().

Back-patch to v10, where pg_strong_random() was introduced.

Dean Rasheed and Michael Paquier.

Discussion: https://postgr.es/m/CAEZATCXMtxbzSAvyKKk5uCRf9pNt4UV%2BF_5v%3DgLfJUuPxU4Ytg%40mail.gmail.com

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/1f919e663ee20430dfde5614a025ac2dbd0f4bfb

Modified Files
--------------
src/port/pg_strong_random.c | 29 +
1 file changed, 29 insertions(+)
pgsql: Guard against rare RAND_bytes() failures in pg_strong_random().
Guard against rare RAND_bytes() failures in pg_strong_random().

When built using OpenSSL, pg_strong_random() uses RAND_bytes() to generate the random number. On very rare occasions that can fail, if its PRNG has not been seeded with enough data. Additionally, once it does fail, all subsequent calls will also fail until more seed data is added. Since this is required during backend startup, this can result in all new backends failing to start until a postmaster restart.

Guard against that by checking the state of OpenSSL's PRNG using RAND_status(), and if necessary (very rarely), seeding it using RAND_poll().

Back-patch to v10, where pg_strong_random() was introduced.

Dean Rasheed and Michael Paquier.

Discussion: https://postgr.es/m/CAEZATCXMtxbzSAvyKKk5uCRf9pNt4UV%2BF_5v%3DgLfJUuPxU4Ytg%40mail.gmail.com

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/821200405cc3f25fda28c5f58d17d640e25559b8

Modified Files
--------------
src/port/pg_strong_random.c | 29 +
1 file changed, 29 insertions(+)
pgsql: Improve ANALYZE's strategy for finding MCVs.
Improve ANALYZE's strategy for finding MCVs.

Previously, a value was included in the MCV list if its frequency was 25% larger than the estimated average frequency of all nonnull values in the table. For uniform distributions, that can lead to values being included in the MCV list and significantly overestimated on the basis of relatively few (sometimes just 2) instances being seen in the sample. For non-uniform distributions, it can lead to too few values being included in the MCV list, since the overall average frequency may be dominated by a small number of very common values, while the remaining values may still have a large spread of frequencies, causing both substantial overestimation and underestimation of the remaining values. Furthermore, increasing the statistics target may have little effect because the overall average frequency will remain relatively unchanged.

Instead, populate the MCV list with the largest set of common values that are statistically significantly more common than the average frequency of the remaining values. This takes into account the variance of the sample counts, which depends on the counts themselves and on the proportion of the table that was sampled. As a result, it constrains the relative standard error of estimates based on the frequencies of values in the list, reducing the chances of too many values being included. At the same time, it allows more values to be included, since the MCVs need only be more common than the remaining non-MCVs, rather than the overall average. Thus it tends to produce fewer MCVs than the previous code for uniform distributions, and more for non-uniform distributions, reducing estimation errors in both cases. In addition, the algorithm responds better to increasing the statistics target, allowing more values to be included in the MCV list when more of the table is sampled.

Jeff Janes, substantially modified by me. Reviewed by John Naylor and Tomas Vondra.

Discussion: https://postgr.es/m/CAMkU=1yvdGvW9TmiLAhz2erFnvnPFYHbOZuO+a=4dvkzpuq...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/b5db1d93d2a6e2d3186f8798a5d06e07b7536a1d

Modified Files
--------------
src/backend/commands/analyze.c | 216 ++---
1 file changed, 158 insertions(+), 58 deletions(-)
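The selection rule described above can be sketched roughly as follows. This is a simplified Python reconstruction of the idea, not the actual analyze.c code: the function name is mine, and the sampling-error term uses a hypergeometric variance (sampling without replacement) as an assumption about how "statistically significantly more common" is judged.

```python
import math

def choose_num_mcvs(counts, samplerows, totalrows, ndistinct):
    """Given per-value sample counts sorted in descending order, keep the
    largest prefix whose least-common member is still significantly more
    common than the average of the values left out of the MCV list.
    Simplified sketch of the commit's approach, for illustration only."""
    num_mcv = len(counts)
    while num_mcv > 0:
        # Average per-value selectivity of the remaining (non-MCV) values.
        selec = 1.0 - sum(counts[:num_mcv]) / samplerows
        other_distinct = ndistinct - num_mcv
        if other_distinct > 1:
            selec /= other_distinct
        # Sampling error of the last candidate's count, modelling the
        # sample as drawn without replacement (hypergeometric variance),
        # so the error shrinks as more of the table is sampled.
        N, n = float(totalrows), float(samplerows)
        K = N * counts[num_mcv - 1] / n
        variance = n * K * (N - K) * (N - n) / (N * N * (N - 1))
        if counts[num_mcv - 1] > selec * samplerows + 2 * math.sqrt(variance):
            break  # significantly more common than the rest: keep prefix
        num_mcv -= 1
    return num_mcv
```

For a skewed sample (two dominant values plus a long tail), this keeps only the genuinely common values, whereas comparing against the overall average frequency could misjudge both ends.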
pgsql: Improve the accuracy of floating point statistical aggregates.
Improve the accuracy of floating point statistical aggregates.

When computing statistical aggregates like variance, the common schoolbook algorithm which computes the sum of the squares of the values and subtracts the square of the mean can lead to a large loss of precision when using floating point arithmetic, because the difference between the two terms is often very small relative to the terms themselves.

To avoid this, re-work these aggregates to use the Youngs-Cramer algorithm, which is a proven, numerically stable algorithm that directly aggregates the sum of the squares of the differences of the values from the mean in a single pass over the data.

While at it, improve the test coverage to test the aggregate combine functions used during parallel aggregation.

Per report and suggested algorithm from Erich Schubert. Patch by me, reviewed by Madeleine Thompson.

Discussion: https://postgr.es/m/153313051300.1397.9594490737341194...@wrigleys.postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/e954a727f0c8872bf5203186ad0f5312f6183746

Modified Files
--------------
src/backend/utils/adt/float.c | 732 ---
src/test/regress/expected/aggregates.out | 128 ++
src/test/regress/sql/aggregates.sql | 41 ++
3 files changed, 639 insertions(+), 262 deletions(-)
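The Youngs-Cramer recurrence mentioned above is compact enough to show. A minimal Python sketch (the transition state mirrors the N/Sx/Sxx accumulators the commit message describes; this is an illustration, not the float.c code):

```python
def youngs_cramer_variance(values):
    """Single-pass sample variance via the Youngs-Cramer recurrence:
    maintain the count N, the running sum Sx, and Sxx, the sum of squared
    deviations from the running mean, updated incrementally.  This avoids
    the catastrophic cancellation of the schoolbook
    sum-of-squares-minus-squared-mean method."""
    n = 0
    sx = 0.0   # running sum of the values
    sxx = 0.0  # running sum of squared deviations from the mean
    for x in values:
        n += 1
        sx += x
        if n > 1:
            tmp = x * n - sx
            sxx += tmp * tmp / (n * (n - 1.0))
    if n < 2:
        return None
    return sxx / (n - 1)  # sample variance
```

Because Sxx accumulates squared deviations directly, no large near-equal terms are ever subtracted, which is what makes the method numerically stable.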
pgsql: Perform RLS subquery checks as the right user when going via a v
Perform RLS subquery checks as the right user when going via a view.

When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b17...@postgresql.org

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/157dcf534f8e12486d425d6c0d111c065fbbb841

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8
src/test/regress/expected/rowsecurity.out | 27 +++
src/test/regress/sql/rowsecurity.sql | 26 ++
3 files changed, 61 insertions(+)
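The essence of the fix is that the "check as this user" stamp must reach every range-table entry, including those buried in subqueries of the policy quals. A toy Python sketch of that propagation (the `RTE`/`Query` classes are hypothetical miniatures of PostgreSQL's node types, and `set_rule_check_as_user` only mimics the recursion the real setRuleCheckAsUser() performs):

```python
class RTE:
    """Hypothetical miniature of a range-table entry: a referenced
    relation, possibly containing a whole subquery of its own."""
    def __init__(self, relname, subquery=None):
        self.relname = relname
        self.subquery = subquery
        self.check_as_user = None  # whose privileges to check with

class Query:
    def __init__(self, rtes):
        self.rtes = rtes

def set_rule_check_as_user(query, userid):
    """Stamp the checking user on every RTE, recursing into subqueries so
    that tables referenced inside policy-qual subqueries are also checked
    as the view owner -- the propagation step the buggy code skipped."""
    for rte in query.rtes:
        rte.check_as_user = userid
        if rte.subquery is not None:
            set_rule_check_as_user(rte.subquery, userid)
```

Before the fix, the equivalent of the recursive call was effectively missing for RLS policy quals, so nested references fell back to being checked as the querying user instead of the view owner.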
pgsql: Perform RLS subquery checks as the right user when going via a v
Perform RLS subquery checks as the right user when going via a view.

When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b17...@postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/e2d28c0f404713f564dc2250646551c75172f17b

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8
src/test/regress/expected/rowsecurity.out | 27 +++
src/test/regress/sql/rowsecurity.sql | 26 ++
3 files changed, 61 insertions(+)
pgsql: Perform RLS subquery checks as the right user when going via a v
Perform RLS subquery checks as the right user when going via a view.

When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b17...@postgresql.org

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/db4bc99948bf746c7035b49cc581eb1c44a30788

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8
src/test/regress/expected/rowsecurity.out | 27 +++
src/test/regress/sql/rowsecurity.sql | 26 ++
3 files changed, 61 insertions(+)
pgsql: Perform RLS subquery checks as the right user when going via a v
Perform RLS subquery checks as the right user when going via a view.

When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b17...@postgresql.org

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/2e606d0ad7984c15f323eacfc333990b93683cc7

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8
src/test/regress/expected/rowsecurity.out | 31 +++
src/test/regress/sql/rowsecurity.sql | 31 +++
3 files changed, 70 insertions(+)
pgsql: Perform RLS subquery checks as the right user when going via a v
Perform RLS subquery checks as the right user when going via a view.

When accessing a table with RLS via a view, the RLS checks are performed as the view owner. However, the code neglected to propagate that to any subqueries in the RLS checks. Fix that by calling setRuleCheckAsUser() for all RLS policy quals and withCheckOption checks for RTEs with RLS.

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b17...@postgresql.org

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/1b9a2f458b36747db07aff6631ade56c24cfda03

Modified Files
--------------
src/backend/rewrite/rowsecurity.c | 8
src/test/regress/expected/rowsecurity.out | 27 +++
src/test/regress/sql/rowsecurity.sql | 26 ++
3 files changed, 61 insertions(+)
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/133923aeb1b60f594382f199dc4b62ee8b864a5b

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 100 +++---
src/test/regress/expected/updatable_views.out | 37 --
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 107 insertions(+), 44 deletions(-)
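The "pick out just the simple Vars" step can be sketched in a few lines. This is an illustrative Python miniature, not the rewriter code: `Var`/`TargetEntry` are stand-ins for PostgreSQL's node types, and the function name is mine.

```python
from collections import namedtuple

# Hypothetical miniatures of the rewriter's node types: a Var references
# column `varattno` of range-table entry `varno`; a TargetEntry assigns
# an expression to target-relation attribute `resno`.
Var = namedtuple("Var", ["varno", "varattno"])
TargetEntry = namedtuple("TargetEntry", ["resno", "expr"])

def values_column_attnos(targetlist, values_varno, numcols):
    """Instead of assuming the targetlist lines up one-to-one with the
    VALUES RTE's columns, scan it for entries that are simple Vars
    referencing that RTE, and record which target attribute each VALUES
    column feeds.  Columns left at 0 have no simple-Var assignment, so a
    DEFAULT item appearing there would be an error."""
    attnos = [0] * numcols
    for tle in targetlist:
        if isinstance(tle.expr, Var) and tle.expr.varno == values_varno:
            attnos[tle.expr.varattno - 1] = tle.resno
    return attnos
```

This is robust against the three failure modes the message lists: extra default entries, reordered view columns, and merged array/composite assignments all simply fail the simple-Var test and drop out of the mapping.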
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/ed4653db8ca7a70ba7a4d329a44812893f8e59c2

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 98 ---
src/test/regress/expected/updatable_views.out | 37 --
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 106 insertions(+), 43 deletions(-)
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/431471e1f2a2e324b0d701c4d4fddd44f96b397a

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 94 +--
src/test/regress/expected/updatable_views.out | 37 +--
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 105 insertions(+), 40 deletions(-)
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0a08446308c8387144e69a99bf738e204f11fc70

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 100 +++---
src/test/regress/expected/updatable_views.out | 37 --
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 107 insertions(+), 44 deletions(-)
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/556fdd49c9e003d96b28c987d533dcd4238d0cd4

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 100 +++---
src/test/regress/expected/updatable_views.out | 37 --
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 107 insertions(+), 44 deletions(-)
pgsql: Further fixing for multi-row VALUES lists for updatable views.
Further fixing for multi-row VALUES lists for updatable views.

Previously, rewriteTargetListIU() generated a list of attribute numbers from the targetlist, which were passed to rewriteValuesRTE(), which expected them to contain the same number of entries as there are columns in the VALUES RTE, and to be in the same order. That was fine when the target relation was a table, but for an updatable view it could be broken in at least three different ways --- rewriteTargetListIU() could insert additional targetlist entries for view columns with defaults, the view columns could be in a different order from the columns of the underlying base relation, and targetlist entries could be merged together when assigning to elements of an array or composite type. As a result, when recursing to the base relation, the list of attribute numbers generated from the rewritten targetlist could no longer be relied upon to match the columns of the VALUES RTE. We got away with that prior to 41531e42d3 because it used to always be the case that rewriteValuesRTE() did nothing for the underlying base relation, since all DEFAULTS had already been replaced when it was initially invoked for the view, but that was incorrect because it failed to apply defaults from the base relation.

Fix this by examining the targetlist entries more carefully and picking out just those that are simple Vars referencing the VALUES RTE. That's sufficient for the purposes of rewriteValuesRTE(), which is only responsible for dealing with DEFAULT items in the VALUES RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching simple-Var-assignment in the targetlist is an error which we complain about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer separation of concerns between the 2 functions. There is no need for rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that it doesn't support array element and field assignments --- that hasn't been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the pre-9.6 branches, which don't support array element and field assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/6ccb97337326db2a57f65b866d0679308a66a42a

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 98 ---
src/test/regress/expected/updatable_views.out | 37 --
src/test/regress/sql/updatable_views.sql | 14
3 files changed, 106 insertions(+), 43 deletions(-)
pgsql: Fix DEFAULT-handling in multi-row VALUES lists for updatable vie
Fix DEFAULT-handling in multi-row VALUES lists for updatable views.

INSERT ... VALUES for a single VALUES row is implemented differently from a multi-row VALUES list, which causes inconsistent behaviour in the way that DEFAULT items are handled. In particular, when inserting into an auto-updatable view on top of a table with a column default, a DEFAULT item in a single VALUES row gets correctly replaced with the table column's default, but for a multi-row VALUES list it is replaced with NULL.

Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the VALUES list untouched if the target relation is an auto-updatable view and has no column default, deferring DEFAULT-expansion until the query against the base relation is rewritten. For all other types of target relation, including tables and trigger- and rule-updatable views, we must continue to replace DEFAULT items with NULL in the absence of a column default.

This is somewhat complicated by the fact that if an auto-updatable view has DO ALSO rules attached, the VALUES lists for the product queries need to be handled differently from the original query, since the product queries need to act like rule-updatable views whereas the original query has auto-updatable view semantics.

Back-patch to all supported versions.

Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/2b1971c0318bae45317386877621642cd5ce7849

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 139 +--
src/test/regress/expected/updatable_views.out | 153 ++
src/test/regress/sql/updatable_views.sql | 88 +++
3 files changed, 369 insertions(+), 11 deletions(-)
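The per-item decision rule described above can be captured as a small function. A hedged Python sketch (the function and its parameters are my illustration of the commit message's rule, not the rewriteValuesRTE() code; "DEFAULT" and "NULL" strings stand in for the rewriter's node replacements):

```python
def rewrite_default_item(col_default, target_is_auto_updatable_view):
    """Sketch of the DEFAULT-handling rule from the commit message: use
    the column's default expression when one exists; otherwise, replace
    the DEFAULT item with NULL for tables and trigger/rule-updatable
    views, but leave it untouched for an auto-updatable view, so it can
    be expanded later when the query is rewritten against the base
    relation (which may supply its own column default)."""
    if col_default is not None:
        return col_default        # the relation defines a default: use it
    if target_is_auto_updatable_view:
        return "DEFAULT"          # defer: keep the item for the recursive rewrite
    return "NULL"                 # no default anywhere: NULL is final
```

Deferring (rather than nulling) in the auto-updatable-view case is exactly what makes the multi-row path agree with the single-row path, which already picked up the base table's default.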
pgsql: Fix DEFAULT-handling in multi-row VALUES lists for updatable vie
Fix DEFAULT-handling in multi-row VALUES lists for updatable views.

INSERT ... VALUES for a single VALUES row is implemented differently from a multi-row VALUES list, which causes inconsistent behaviour in the way that DEFAULT items are handled. In particular, when inserting into an auto-updatable view on top of a table with a column default, a DEFAULT item in a single VALUES row gets correctly replaced with the table column's default, but for a multi-row VALUES list it is replaced with NULL.

Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the VALUES list untouched if the target relation is an auto-updatable view and has no column default, deferring DEFAULT-expansion until the query against the base relation is rewritten. For all other types of target relation, including tables and trigger- and rule-updatable views, we must continue to replace DEFAULT items with NULL in the absence of a column default.

This is somewhat complicated by the fact that if an auto-updatable view has DO ALSO rules attached, the VALUES lists for the product queries need to be handled differently from the original query, since the product queries need to act like rule-updatable views whereas the original query has auto-updatable view semantics.

Back-patch to all supported versions.

Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.

Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/41531e42d34f4aca117d343b5e40f3f757dec5fe

Modified Files
--------------
src/backend/rewrite/rewriteHandler.c | 139 +--
src/test/regress/expected/updatable_views.out | 153 ++
src/test/regress/sql/updatable_views.sql | 88 +++
3 files changed, 369 insertions(+), 11 deletions(-)
pgsql: Fix DEFAULT-handling in multi-row VALUES lists for updatable vie
Fix DEFAULT-handling in multi-row VALUES lists for updatable views. INSERT ... VALUES for a single VALUES row is implemented differently from a multi-row VALUES list, which causes inconsistent behaviour in the way that DEFAULT items are handled. In particular, when inserting into an auto-updatable view on top of a table with a column default, a DEFAULT item in a single VALUES row gets correctly replaced with the table column's default, but for a multi-row VALUES list it is replaced with NULL. Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the VALUES list untouched if the target relation is an auto-updatable view and has no column default, deferring DEFAULT-expansion until the query against the base relation is rewritten. For all other types of target relation, including tables and trigger- and rule-updatable views, we must continue to replace DEFAULT items with NULL in the absence of a column default. This is somewhat complicated by the fact that if an auto-updatable view has DO ALSO rules attached, the VALUES lists for the product queries need to be handled differently from the original query, since the product queries need to act like rule-updatable views whereas the original query has auto-updatable view semantics. Back-patch to all supported versions. Reported by Roger Curley (bug #15623). Patch by Amit Langote and me. Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org Branch -- REL9_4_STABLE Details --- https://git.postgresql.org/pg/commitdiff/5a73edf050ed6740fc085cad05a2e7af9fb1f06e Modified Files -- src/backend/rewrite/rewriteHandler.c | 138 +-- src/test/regress/expected/updatable_views.out | 153 ++ src/test/regress/sql/updatable_views.sql | 88 +++ 3 files changed, 368 insertions(+), 11 deletions(-)
pgsql: Fix DEFAULT-handling in multi-row VALUES lists for updatable vie
Fix DEFAULT-handling in multi-row VALUES lists for updatable views. INSERT ... VALUES for a single VALUES row is implemented differently from a multi-row VALUES list, which causes inconsistent behaviour in the way that DEFAULT items are handled. In particular, when inserting into an auto-updatable view on top of a table with a column default, a DEFAULT item in a single VALUES row gets correctly replaced with the table column's default, but for a multi-row VALUES list it is replaced with NULL. Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the VALUES list untouched if the target relation is an auto-updatable view and has no column default, deferring DEFAULT-expansion until the query against the base relation is rewritten. For all other types of target relation, including tables and trigger- and rule-updatable views, we must continue to replace DEFAULT items with NULL in the absence of a column default. This is somewhat complicated by the fact that if an auto-updatable view has DO ALSO rules attached, the VALUES lists for the product queries need to be handled differently from the original query, since the product queries need to act like rule-updatable views whereas the original query has auto-updatable view semantics. Back-patch to all supported versions. Reported by Roger Curley (bug #15623). Patch by Amit Langote and me. Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/d31e421d7df17052c3b48363eeb50e5a736833bd Modified Files -- src/backend/rewrite/rewriteHandler.c | 139 +-- src/test/regress/expected/updatable_views.out | 153 ++ src/test/regress/sql/updatable_views.sql | 88 +++ 3 files changed, 369 insertions(+), 11 deletions(-)
pgsql: Fix DEFAULT-handling in multi-row VALUES lists for updatable vie
Fix DEFAULT-handling in multi-row VALUES lists for updatable views. INSERT ... VALUES for a single VALUES row is implemented differently from a multi-row VALUES list, which causes inconsistent behaviour in the way that DEFAULT items are handled. In particular, when inserting into an auto-updatable view on top of a table with a column default, a DEFAULT item in a single VALUES row gets correctly replaced with the table column's default, but for a multi-row VALUES list it is replaced with NULL. Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the VALUES list untouched if the target relation is an auto-updatable view and has no column default, deferring DEFAULT-expansion until the query against the base relation is rewritten. For all other types of target relation, including tables and trigger- and rule-updatable views, we must continue to replace DEFAULT items with NULL in the absence of a column default. This is somewhat complicated by the fact that if an auto-updatable view has DO ALSO rules attached, the VALUES lists for the product queries need to be handled differently from the original query, since the product queries need to act like rule-updatable views whereas the original query has auto-updatable view semantics. Back-patch to all supported versions. Reported by Roger Curley (bug #15623). Patch by Amit Langote and me. Discussion: https://postgr.es/m/15623-5d67a46788ec8...@postgresql.org Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/fbec6fa38ade6fd0c4c5c5984f723ba351a44e85 Modified Files -- src/backend/rewrite/rewriteHandler.c | 139 +-- src/test/regress/expected/updatable_views.out | 153 ++ src/test/regress/sql/updatable_views.sql | 88 +++ 3 files changed, 369 insertions(+), 11 deletions(-)
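The inconsistency being fixed can be reproduced with a minimal sketch (table and view names are illustrative, not taken from the commit):

```sql
-- Base table with a column default; a simple (auto-updatable) view on
-- top, where the view itself defines no column defaults.
CREATE TABLE base_tbl (a int, b text DEFAULT 'base default');
CREATE VIEW rw_view AS SELECT a, b FROM base_tbl;

-- Single-row VALUES: DEFAULT is correctly replaced with the base
-- table's column default.
INSERT INTO rw_view VALUES (1, DEFAULT);

-- Multi-row VALUES list: before this fix, each DEFAULT was replaced
-- with NULL instead of 'base default'; after the fix, DEFAULT-expansion
-- is deferred until the query is rewritten against base_tbl.
INSERT INTO rw_view VALUES (2, DEFAULT), (3, DEFAULT);

SELECT * FROM base_tbl ORDER BY a;
```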
Re: pgsql: Drop test user when done with it.
On Mon, 24 Jun 2019 at 17:37, Tom Lane wrote:
>
> Drop test user when done with it.
>
> Commit d7f8d26d9 added a test case that created a user, but forgot
> to drop it again. This is no good; for one thing, it causes repeated
> "make installcheck" runs to fail.
>

Ah, I see ... yes, my bad. Thanks for fixing.

Regards,
Dean
pgsql: Add security checks to the multivariate MCV estimation code.
Add security checks to the multivariate MCV estimation code.

The multivariate MCV estimation code may run user-defined operators on
the values in the MCV list, which means that those operators may
potentially leak the values from the MCV list. Guard against leaking
data to unprivileged users by checking that the user has SELECT
privileges on the table or all of the columns referred to by the
statistics.

Additionally, if there are any securityQuals on the RTE (either due to
RLS policies on the table, or accessing the table via a security barrier
view), not all rows may be visible to the current user, even if they
have table or column privileges. Thus we further insist that the
operator be leakproof in this case.

Dean Rasheed, reviewed by Tomas Vondra.

Discussion: https://postgr.es/m/CAEZATCUhT9rt7Ui=Vdx4N==VV5XOK5dsXfnGgVOz_JhAicB=z...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/d7f8d26d9f4c0a574250ec53a03b3dc08d13796c

Modified Files
--------------
src/backend/statistics/extended_stats.c | 74 +
src/test/regress/expected/stats_ext.out | 60 ++
src/test/regress/sql/stats_ext.sql      | 60 ++
3 files changed, 186 insertions(+), 8 deletions(-)
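The kind of statistics object the new checks guard can be sketched as follows (table and statistics names are illustrative):

```sql
-- Two correlated columns; a per-column view of the data would badly
-- misestimate clauses combining them.
CREATE TABLE t (a int, b int);
INSERT INTO t SELECT i % 25, i % 25 FROM generate_series(1, 10000) g(i);

-- Multivariate MCV statistics on (a, b); ANALYZE builds the MCV list,
-- which stores actual values sampled from the table.
CREATE STATISTICS t_mcv_stats (mcv) ON a, b FROM t;
ANALYZE t;

-- Estimating this clause runs the '=' operators over values in the MCV
-- list, so the planner now requires SELECT privilege on t (or on
-- columns a and b), and leakproof operators when the RTE carries
-- securityQuals.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 1;
```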
pgsql: Fix security checks for selectivity estimation functions with RLS
Fix security checks for selectivity estimation functions with RLS.

In commit e2d4ef8de8, security checks were added to prevent
user-supplied operators from running over data from pg_statistic unless
the user has table or column privileges on the table, or the operator is
leakproof. For a table with RLS, however, checking for table or column
privileges is insufficient, since that does not guarantee that the user
has permission to view all of the column's data.

Fix this by also checking for securityQuals on the RTE, and insisting
that the operator be leakproof if there are any. Thus the leakproofness
check will only be skipped if there are no securityQuals and the user
has table or column privileges on the table -- i.e., only if we know
that the user has access to all the data in the column.

Back-patch to 9.5 where RLS was added.

Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.

Security: CVE-2019-10130

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0027ee3c52d824764f7f8391edb2695c3d2b424f

Modified Files
--------------
src/backend/utils/adt/selfuncs.c          | 21 +++--
src/test/regress/expected/rowsecurity.out | 21 +
src/test/regress/sql/rowsecurity.sql      | 20
3 files changed, 56 insertions(+), 6 deletions(-)
pgsql: Fix security checks for selectivity estimation functions with RLS

Fix security checks for selectivity estimation functions with RLS.

Security: CVE-2019-10130

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/9408028305ac86d9913eaea49442cd7fdfa8aaca

Modified Files
--------------
src/backend/utils/adt/selfuncs.c          | 21 +++--
src/test/regress/expected/rowsecurity.out | 21 +
src/test/regress/sql/rowsecurity.sql      | 20
3 files changed, 56 insertions(+), 6 deletions(-)

pgsql: Fix security checks for selectivity estimation functions with RLS

Fix security checks for selectivity estimation functions with RLS.

Security: CVE-2019-10130

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/1aebfbea83c4a3e1a0aba4b0910135dc5a45666c

Modified Files
--------------
src/backend/utils/adt/selfuncs.c          | 21 +++--
src/test/regress/expected/rowsecurity.out | 21 +
src/test/regress/sql/rowsecurity.sql      | 20
3 files changed, 56 insertions(+), 6 deletions(-)

pgsql: Fix security checks for selectivity estimation functions with RLS

Fix security checks for selectivity estimation functions with RLS.

Security: CVE-2019-10130

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/93c36145ad10f3b4795d8e411720ae55180ad274

Modified Files
--------------
src/backend/utils/adt/selfuncs.c          | 21 +++--
src/test/regress/expected/rowsecurity.out | 21 +
src/test/regress/sql/rowsecurity.sql      | 21 +
3 files changed, 57 insertions(+), 6 deletions(-)

pgsql: Fix security checks for selectivity estimation functions with RLS

Fix security checks for selectivity estimation functions with RLS.

Security: CVE-2019-10130

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/085d64d9d236315dd646f42e195f822aeb15c54e

Modified Files
--------------
src/backend/utils/adt/selfuncs.c          | 21 +++--
src/test/regress/expected/rowsecurity.out | 21 +
src/test/regress/sql/rowsecurity.sql      | 20
3 files changed, 56 insertions(+), 6 deletions(-)
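The RLS scenario the fix addresses can be sketched like this (role, table, and policy names are illustrative):

```sql
-- A table with an RLS policy: role alice holds SELECT privilege, but
-- the policy means she cannot necessarily see every row, nor every
-- value captured in pg_statistic's sample of the column.
CREATE TABLE accounts (id int, owner text);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON accounts FOR SELECT
  USING (owner = current_user);
GRANT SELECT ON accounts TO alice;

-- Planning as alice: the RTE carries securityQuals, so the selectivity
-- code now also insists that the operator be leakproof before it is run
-- over pg_statistic data, even though alice has table privileges.
SET ROLE alice;
EXPLAIN SELECT * FROM accounts WHERE owner = 'alice';
```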
pgsql: Use checkAsUser for selectivity estimator checks, if it's set.
Use checkAsUser for selectivity estimator checks, if it's set.

In examine_variable() and examine_simple_variable(), when checking the
user's table and column privileges to determine whether to grant access
to the pg_statistic data, use checkAsUser for the privilege checks, if
it's set. This will be the case if we're accessing the table via a view,
to indicate that we should perform privilege checks as the view owner
rather than the current user.

This change makes this planner check consistent with the check in the
executor, so the planner will be able to make use of statistics if the
table is accessible via the view. This fixes a performance regression
introduced by commit e2d4ef8de8, which affects queries against
non-security barrier views in the case where the user doesn't have
privileges on the underlying table, but the view owner does.

Note that it continues to provide the same safeguards controlling access
to pg_statistic for direct table access (in which case checkAsUser won't
be set) and for security barrier views, because of the nearby checks on
rte->security_barrier and rte->securityQuals.

Back-patch to all supported branches because e2d4ef8de8 was.

Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/a0905056fd6b0927dd33f185adc9e7503515fc0d

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 20 +++--
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 101 insertions(+), 15 deletions(-)
pgsql: Use checkAsUser for selectivity estimator checks, if it's set.

Use checkAsUser for selectivity estimator checks, if it's set.

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/98dad4cd48e362090b30187441e8c116afb74f58

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 20 +++--
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 101 insertions(+), 15 deletions(-)

pgsql: Use checkAsUser for selectivity estimator checks, if it's set.

Use checkAsUser for selectivity estimator checks, if it's set.

Branch
------
REL9_4_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/3c0098519da6b601df878e326eed701201ba

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 34 +++
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 112 insertions(+), 18 deletions(-)

pgsql: Use checkAsUser for selectivity estimator checks, if it's set.

Use checkAsUser for selectivity estimator checks, if it's set.

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/71185228cce85ccdfb2beeebb851e829b1a7a751

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 20 +++--
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 101 insertions(+), 15 deletions(-)

pgsql: Use checkAsUser for selectivity estimator checks, if it's set.

Use checkAsUser for selectivity estimator checks, if it's set.

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/01256815a6304c114c47c2ca70fb752a3ec81ac3

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 20 +++--
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 101 insertions(+), 15 deletions(-)

pgsql: Use checkAsUser for selectivity estimator checks, if it's set.

Use checkAsUser for selectivity estimator checks, if it's set.

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/ca74e3e0faadd5731c1bd9e6da21f11162a125f2

Modified Files
--------------
src/backend/utils/adt/selfuncs.c         | 20 +++--
src/test/regress/expected/privileges.out | 71
src/test/regress/sql/privileges.sql      | 25 +--
3 files changed, 101 insertions(+), 15 deletions(-)
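The regression being fixed can be sketched like this (table, view, and role names are illustrative):

```sql
-- The view owner has privileges on the table; app_user is granted
-- access only through the view.
CREATE TABLE big_tbl (a int);
INSERT INTO big_tbl SELECT i FROM generate_series(1, 100000) g(i);
ANALYZE big_tbl;

CREATE VIEW v AS SELECT a FROM big_tbl;  -- ordinary, non-security-barrier
GRANT SELECT ON v TO app_user;           -- no grant on big_tbl itself

-- Before the fix, planning this as app_user ignored pg_statistic
-- because the privilege check used the current user; with checkAsUser
-- set on the view's RTE, the check runs as the view owner and the
-- planner can use big_tbl's statistics.
SET ROLE app_user;
EXPLAIN SELECT * FROM v WHERE a < 1000;
```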
pgsql: Fix intermittent self-test failures caused by the stats_ext test
Fix intermittent self-test failures caused by the stats_ext test.

Commit d7f8d26d9 added new tests to the stats_ext regression test that
included creating a view in the public schema, without realising that
the stats_ext test runs in the same parallel group as the rules test,
which makes doing that unsafe. This led to intermittent failures of the
rules test on the buildfarm, although I wasn't able to reproduce that
locally. Fix by creating the view in a different schema.

Tomas Vondra and Dean Rasheed, report and diagnosis by Thomas Munro.

Discussion: https://postgr.es/m/ca+hukgkx9hfzrya7rqzamre07l4hzicc-no_b3tajpiukyl...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/3d9a3ef5cbfc70bd2802c3f0da3fbc4e5abdf377

Modified Files
--------------
src/test/regress/expected/stats_ext.out | 42 ++---
src/test/regress/sql/stats_ext.sql      | 40 ---
2 files changed, 44 insertions(+), 38 deletions(-)
pgsql: Fix intermittent self-test failures caused by the stats_ext test

Fix intermittent self-test failures caused by the stats_ext test.

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/5576cbc8ff1f8b34e519e54ef43de68ed1b2f93e

Modified Files
--------------
src/test/regress/expected/stats_ext.out | 42 ++---
src/test/regress/sql/stats_ext.sql      | 40 ---
2 files changed, 44 insertions(+), 38 deletions(-)
pgsql: Fix corner-case loss of precision in numeric ln().
Fix corner-case loss of precision in numeric ln().

When deciding on the local rscale to use for the Taylor series
expansion, ln_var() neglected to account for the fact that the result is
subsequently multiplied by a factor of 2^(nsqrt+1), where nsqrt is the
number of square root operations performed in the range reduction step,
which can be as high as 22 for very large inputs. This could result in a
loss of precision, particularly when combined with large rscale values,
for which a large number of Taylor series terms is required (up to
around 400).

Fix by computing a few extra digits in the Taylor series, based on the
weight of the multiplicative factor log10(2^(nsqrt+1)). It remains to be
proven whether or not the other 8 extra digits used for the Taylor
series are appropriate, but this at least deals with the obvious
oversight of failing to account for the effects of the final
multiplication.

Per report from Justin AnyhowStep. Reviewed by Tom Lane.

Discussion: https://postgr.es/m/16280-279f299d9c06e...@postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/43a899f41f46918a0bf442edb091b08c214c68f8

Modified Files
--------------
src/backend/utils/adt/numeric.c           | 10 +-
src/test/regress/expected/numeric_big.out | 19 +++
src/test/regress/sql/numeric_big.sql      | 14 ++
3 files changed, 42 insertions(+), 1 deletion(-)
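The affected path can be exercised with a very large input, which forces many square-root range-reduction steps (the input value is illustrative; the bug only showed up in the trailing digits of the result):

```sql
-- ln() of a huge numeric: ln_var() repeatedly takes square roots to
-- bring the argument close to 1 (nsqrt can reach around 22 here),
-- evaluates the Taylor series, then multiplies back by 2^(nsqrt+1).
-- The fix adds extra working digits to cover log10(2^(nsqrt+1)).
SELECT ln(9.9e999::numeric);
```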
pgsql: Add functions gcd() and lcm() for integer and numeric types.
Add functions gcd() and lcm() for integer and numeric types.

These compute the greatest common divisor and least common multiple of a
pair of numbers using the Euclidean algorithm.

Vik Fearing, reviewed by Fabien Coelho.

Discussion: https://postgr.es/m/adbd3e0b-e3f1-5bbc-21db-03caf1cef...@2ndquadrant.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/13661ddd7eaec7e2809ff5c29fc14653b6161036

Modified Files
--------------
doc/src/sgml/func.sgml                | 34 +++
src/backend/utils/adt/int.c           | 126 +
src/backend/utils/adt/int8.c          | 126 +
src/backend/utils/adt/numeric.c       | 171 ++
src/include/catalog/catversion.h      |   2 +-
src/include/catalog/pg_proc.dat       |  20
src/test/regress/expected/int4.out    |  46 +
src/test/regress/expected/int8.out    |  46 +
src/test/regress/expected/numeric.out |  44 +
src/test/regress/sql/int4.sql         |  25 +
src/test/regress/sql/int8.sql         |  25 +
src/test/regress/sql/numeric.sql      |  25 +
12 files changed, 689 insertions(+), 1 deletion(-)
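A quick illustration of the new functions (the results follow directly from the Euclidean algorithm):

```sql
SELECT gcd(54, 24);  -- 6
SELECT lcm(54, 24);  -- 216, i.e. 54 * 24 / gcd(54, 24)

-- The functions are overloaded for int, bigint and numeric:
SELECT gcd(54::numeric, 24::numeric);  -- 6
SELECT gcd(0, 0);                      -- 0, by convention
```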
pgsql: Make rewriter prevent auto-updates on views with conditional INSTEAD rules
Make rewriter prevent auto-updates on views with conditional INSTEAD
rules.

A view with conditional INSTEAD rules and no unconditional INSTEAD rules
or INSTEAD OF triggers is not auto-updatable. Previously we relied on a
check in the executor to catch this, but that's problematic since the
planner may fail to properly handle such a query and thus return a
particularly unhelpful error to the user, before reaching the executor
check.

Instead, trap this in the rewriter and report the correct error there.
Doing so also allows us to include more useful error detail than the
executor check can provide. This doesn't change the existing behaviour
of updatable views; it merely ensures that useful error messages are
reported when a view isn't updatable.

Per report from Pengzhou Tang, though not adopting that suggested fix.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=tryr4kn8_3_...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/d751ba523546df2b2709c1ffd4d12d6a25e25bf6

Modified Files
--------------
src/backend/executor/execMain.c               |  8 ++--
src/backend/rewrite/rewriteHandler.c          | 60 ---
src/test/regress/expected/updatable_views.out | 21 ++
src/test/regress/sql/updatable_views.sql      | 14 +++
4 files changed, 94 insertions(+), 9 deletions(-)
pgsql: Make rewriter prevent auto-updates on views with conditional INSTEAD rules

Make rewriter prevent auto-updates on views with conditional INSTEAD
rules.

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/fd87262de55e402c72d4bdacee830838735b1f27

Modified Files
--------------
src/backend/executor/execMain.c               |  8 ++--
src/backend/rewrite/rewriteHandler.c          | 60 ---
src/test/regress/expected/updatable_views.out | 21 ++
src/test/regress/sql/updatable_views.sql      | 14 +++
4 files changed, 94 insertions(+), 9 deletions(-)

pgsql: Make rewriter prevent auto-updates on views with conditional INSTEAD rules

Make rewriter prevent auto-updates on views with conditional INSTEAD
rules.

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/fd5476b79c2ff63dd32e154198f8201553180e58

Modified Files
--------------
src/backend/executor/execMain.c               |  8 ++--
src/backend/rewrite/rewriteHandler.c          | 60 ---
src/test/regress/expected/updatable_views.out | 21 ++
src/test/regress/sql/updatable_views.sql      | 14 +++
4 files changed, 94 insertions(+), 9 deletions(-)

pgsql: Make rewriter prevent auto-updates on views with conditional INSTEAD rules

Make rewriter prevent auto-updates on views with conditional INSTEAD
rules.

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/9bdb1f0e37d4f65cdc32cec1563744dc92213d71

Modified Files
--------------
src/backend/executor/execMain.c               |  8 ++--
src/backend/rewrite/rewriteHandler.c          | 60 ---
src/test/regress/expected/updatable_views.out | 21 ++
src/test/regress/sql/updatable_views.sql      | 14 +++
4 files changed, 94 insertions(+), 9 deletions(-)

pgsql: Make rewriter prevent auto-updates on views with conditional INSTEAD rules

Make rewriter prevent auto-updates on views with conditional INSTEAD
rules.

Branch
------
REL9_5_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/bb09a9414f1e149aac6064d217b5d5cab8f13723

Modified Files
--------------
src/backend/executor/execMain.c               |  8 ++--
src/backend/rewrite/rewriteHandler.c          | 60 ---
src/test/regress/expected/updatable_views.out | 21 ++
src/test/regress/sql/updatable_views.sql      | 14 +++
4 files changed, 94 insertions(+), 9 deletions(-)
pgsql: Make rewriter prevent auto-updates on views with conditional INS
Make rewriter prevent auto-updates on views with conditional INSTEAD rules. A view with conditional INSTEAD rules and no unconditional INSTEAD rules or INSTEAD OF triggers is not auto-updatable. Previously we relied on a check in the executor to catch this, but that's problematic since the planner may fail to properly handle such a query and thus return a particularly unhelpful error to the user, before reaching the executor check. Instead, trap this in the rewriter and report the correct error there. Doing so also allows us to include more useful error detail than the executor check can provide. This doesn't change the existing behaviour of updatable views; it merely ensures that useful error messages are reported when a view isn't updatable. Per report from Pengzhou Tang, though not adopting that suggested fix. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=tryr4kn8_3_...@mail.gmail.com Branch -- REL9_4_STABLE Details --- https://git.postgresql.org/pg/commitdiff/9be6fcb3e4e21c5d745dba314a451dc40f6f388b Modified Files -- src/backend/executor/execMain.c | 8 ++-- src/backend/rewrite/rewriteHandler.c | 60 --- src/test/regress/expected/updatable_views.out | 21 ++ src/test/regress/sql/updatable_views.sql | 14 +++ 4 files changed, 94 insertions(+), 9 deletions(-)
pgsql: Make rewriter prevent auto-updates on views with conditional INS
Make rewriter prevent auto-updates on views with conditional INSTEAD rules. A view with conditional INSTEAD rules and no unconditional INSTEAD rules or INSTEAD OF triggers is not auto-updatable. Previously we relied on a check in the executor to catch this, but that's problematic since the planner may fail to properly handle such a query and thus return a particularly unhelpful error to the user, before reaching the executor check. Instead, trap this in the rewriter and report the correct error there. Doing so also allows us to include more useful error detail than the executor check can provide. This doesn't change the existing behaviour of updatable views; it merely ensures that useful error messages are reported when a view isn't updatable. Per report from Pengzhou Tang, though not adopting that suggested fix. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=tryr4kn8_3_...@mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/353cd826f44537871063c6525dc0e108c974a68c Modified Files -- src/backend/executor/execMain.c | 8 ++-- src/backend/rewrite/rewriteHandler.c | 60 --- src/test/regress/expected/updatable_views.out | 21 ++ src/test/regress/sql/updatable_views.sql | 14 +++ 4 files changed, 94 insertions(+), 9 deletions(-)
pgsql: Improve the performance and accuracy of numeric sqrt() and ln().
Improve the performance and accuracy of numeric sqrt() and ln(). Instead of using Newton's method to compute numeric square roots, use the Karatsuba square root algorithm, which performs better for numbers of all sizes. In practice, this is 3-5 times faster for inputs with just a few digits and up to around 10 times faster for larger inputs. Also, the new algorithm guarantees that the final digit of the result is correctly rounded, since it computes an integer square root with truncation, containing at least 1 extra decimal digit before rounding. The former algorithm would occasionally round the wrong way because it rounded both the intermediate and final results. In addition, arrange for sqrt_var() to explicitly support negative rscale values (rounding before the decimal point). This allows the argument reduction phase of ln_var() to be optimised for large inputs, since it only needs to compute square roots with a few more digits than the final ln() result, rather than computing all the digits before the decimal point. For very large inputs, this can be many thousands of times faster. In passing, optimise div_var_fast() in a couple of places where it was doing unnecessary work. Patch by me, reviewed by Tom Lane and Tels. Discussion: https://postgr.es/m/caezatcv1a7+jd3p30zu31kjaxeseyon3v9d6tyegpxcq3cq...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/4083f445c0cbda2eacb6b17297f9d555397e5eaa Modified Files -- src/backend/utils/adt/numeric.c | 585 +++--- src/test/regress/expected/numeric.out | 51 +++ src/test/regress/sql/numeric.sql | 13 + 3 files changed, 600 insertions(+), 49 deletions(-)
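For readers unfamiliar with the Karatsuba square root algorithm mentioned above, here is a sketch of the divide-and-conquer recursion (Zimmermann's algorithm) on plain Python integers. This is an illustration of the technique only, not the numeric.c implementation, which works on arrays of NBASE digits; the normalization shift and function name are mine.

```python
def karatsuba_sqrtrem(n):
    """Divide-and-conquer integer square root with remainder:
    returns (s, r) with s = floor(sqrt(n)) and r = n - s*s."""
    if n < 4:
        s = int(n > 0)
        return s, n - s * s
    bl = n.bit_length()
    if bl % 4 in (1, 2):
        # Normalize so the top quarter of the bits is large enough for the
        # recursion's correctness bound; floor(sqrt(n)) == floor(sqrt(4n)) >> 1.
        s = karatsuba_sqrtrem(n << 2)[0] >> 1
        return s, n - s * s
    l = (bl + 3) // 4           # bits per quarter
    b = 1 << l
    a0 = n & (b - 1)            # lowest quarter
    a1 = (n >> l) & (b - 1)     # second quarter
    s1, r1 = karatsuba_sqrtrem(n >> (2 * l))   # sqrt of the high half
    q, u = divmod(r1 * b + a1, 2 * s1)
    s = s1 * b + q
    r = u * b + a0 - q * q
    if r < 0:                   # at most one correction step is needed
        r += 2 * s - 1
        s -= 1
    return s, r
```

The recursion operates on half the digits at each level, which is where the speedup over digit-serial or Newton-style methods comes from; the exact remainder r is what lets the caller round the final digit correctly, as the commit message describes.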
pgsql: Prevent functional dependency estimates from exceeding column es
Prevent functional dependency estimates from exceeding column estimates. Formerly we applied a functional dependency "a => b with dependency degree f" using the formula P(a,b) = P(a) * [f + (1-f)*P(b)] This leads to the possibility that the combined selectivity P(a,b) could exceed P(b), which is not ideal. The addition of support for IN and OR clauses (commits 8f321bd16c and ccaa3569f5) would seem to make this more likely, since the user-supplied values in such clauses are not necessarily compatible with the functional dependency. Mitigate this by using the formula P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b) instead, which guarantees that the combined selectivity is less than each column's individual selectivity. Logically, this modifies the part of the formula that accounts for dependent rows to handle cases where P(a) > P(b), whilst not changing the second term which accounts for independent rows. Additionally, this refactors the way that functional dependencies are applied, so now dependencies_clauselist_selectivity() estimates both the implying clauses and the implied clauses for each functional dependency (formerly only the implied clauses were estimated), and now all clauses for each attribute are taken into account (formerly only one clause for each implied attribute was estimated). This removes the previously built-in assumption that only equality clauses will be seen, which is no longer true, and opens up the possibility of applying functional dependencies to more general clauses. Patch by me, reviewed by Tomas Vondra. Discussion: https://postgr.es/m/CAEZATCXaNFZyOhR4XXAfkvj1tibRBEjje6ZbXwqWUB_tqbH%3Drw%40mail.gmail.com Discussion: https://postgr.es/m/20200318002946.6dvblukm3cfmgir2%40development Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/87779aa47463d0fb3b4743a7d5c9534994bf7c98 Modified Files -- src/backend/statistics/dependencies.c | 349 ++-- src/test/regress/expected/stats_ext.out | 18 +- src/test/regress/sql/stats_ext.sql | 8 +- 3 files changed, 266 insertions(+), 109 deletions(-)
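The two selectivity formulas above are easy to compare side by side. A minimal sketch (function names are mine, for illustration):

```python
def dep_sel_old(f, p_a, p_b):
    # Former formula: P(a,b) = P(a) * [f + (1-f) * P(b)]
    # Can exceed P(b) when P(a) > P(b) and the dependency degree f is high.
    return p_a * (f + (1 - f) * p_b)

def dep_sel_new(f, p_a, p_b):
    # New formula: P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)
    # The first term handles the dependent fraction of rows, the second the
    # independent fraction; the result never exceeds either column estimate.
    return f * min(p_a, p_b) + (1 - f) * p_a * p_b
```

For example, with f = 0.9, P(a) = 0.5 and P(b) = 0.01, the old formula gives 0.4505, far above P(b), while the new formula gives 0.0095, safely below both column estimates.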
pgsql: Improve estimation of OR clauses using multiple extended statist
Improve estimation of OR clauses using multiple extended statistics. When estimating an OR clause using multiple extended statistics objects, treat the estimates for each set of clauses for each statistics object as independent of one another. The overlap estimates produced for each statistics object do not apply to clauses covered by other statistics objects. Dean Rasheed, reviewed by Tomas Vondra. Discussion: https://postgr.es/m/CAEZATCW=J65GUFm50RcPv-iASnS2mTXQbr=cfbvwrvhflj_...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/88b0898fe35a5a0325fca21bd4f3ed6dffb364c1 Modified Files -- src/backend/statistics/extended_stats.c | 25 + src/test/regress/expected/stats_ext.out | 2 +- 2 files changed, 18 insertions(+), 9 deletions(-)
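The independence rule described above combines per-statistics-object OR estimates the same way selectivities of independent predicates are combined: s = s1 + s2 - s1*s2, applied pairwise. A minimal sketch (the helper name is mine):

```python
def or_combine_independent(estimates):
    """Combine OR-clause selectivity estimates from separate statistics
    objects as if they were independent of one another."""
    total = 0.0
    for s in estimates:
        # P(A or B) = P(A) + P(B) - P(A)*P(B) under independence
        total = total + s - total * s
    return total
```

For instance, two objects each estimating 0.5 combine to 0.75, and any estimate of 1.0 saturates the result at 1.0, as one would expect.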
pgsql: Improve estimation of ANDs under ORs using extended statistics.
Improve estimation of ANDs under ORs using extended statistics. Formerly, extended statistics only handled clauses that were RestrictInfos. However, the restrictinfo machinery doesn't create sub-AND RestrictInfos for AND clauses underneath OR clauses. Therefore teach extended statistics to handle bare AND clauses, looking for compatible RestrictInfo clauses underneath them. Dean Rasheed, reviewed by Tomas Vondra. Discussion: https://postgr.es/m/CAEZATCW=J65GUFm50RcPv-iASnS2mTXQbr=cfbvwrvhflj_...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/4f5760d4afa9423fe4d38e4cbec48bf5e793e7e5 Modified Files -- src/backend/optimizer/path/clausesel.c | 21 + src/backend/statistics/extended_stats.c | 25 + src/test/regress/expected/stats_ext.out | 4 ++-- 3 files changed, 48 insertions(+), 2 deletions(-)
pgsql: Improve estimation of OR clauses using extended statistics.
Improve estimation of OR clauses using extended statistics. Formerly we only applied extended statistics to an OR clause as part of the clauselist_selectivity() code path for an OR clause appearing in an implicitly-ANDed list of clauses. This meant that it could only use extended statistics if all sub-clauses of the OR clause were covered by a single extended statistics object. Instead, teach clause_selectivity() how to apply extended statistics to an OR clause by handling its ORed list of sub-clauses in a similar manner to an implicitly-ANDed list of sub-clauses, but with different combination rules. This allows one or more extended statistics objects to be used to estimate all or part of the list of sub-clauses. Any remaining sub-clauses are then treated as if they are independent. Additionally, to avoid double-application of extended statistics, this introduces "extended" versions of clause_selectivity() and clauselist_selectivity(), which include an option to ignore extended statistics. This replaces the old clauselist_selectivity_simple() function which failed to completely ignore extended statistics when called from the extended statistics code. A known limitation of the current infrastructure is that an AND clause under an OR clause is not treated as compatible with extended statistics (because we don't build RestrictInfos for such sub-AND clauses). Thus, for example, "(a=1 AND b=1) OR (a=2 AND b=2)" will currently be treated as two independent AND clauses (each of which may be estimated using extended statistics), but extended statistics will not currently be used to account for any possible overlap between those clauses. Improving that is left as a task for the future. Original patch by Tomas Vondra, with additional improvements by me. Discussion: https://postgr.es/m/20200113230008.g67iyk4cs3xbnjju@development Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/25a9e54d2db31b8031c2d8166114b187e8347098 Modified Files -- src/backend/optimizer/path/clausesel.c | 301 +++ src/backend/statistics/dependencies.c| 4 +- src/backend/statistics/extended_stats.c | 217 +++- src/backend/statistics/mcv.c | 169 - src/include/optimizer/optimizer.h| 18 +- src/include/statistics/extended_stats_internal.h | 15 ++ src/include/statistics/statistics.h | 3 +- src/test/regress/expected/stats_ext.out | 189 +- src/test/regress/sql/stats_ext.sql | 85 ++- 9 files changed, 798 insertions(+), 203 deletions(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- REL_12_STABLE Details --- https://git.postgresql.org/pg/commitdiff/740780a6157f0688829082dec793fce6153a36a3 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/f0b9bada56f801fc9e70befb7206a72d5444eb8e Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/fead67c24ada8c6a4b661dec6f159dca1447e3d8 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- REL_13_STABLE Details --- https://git.postgresql.org/pg/commitdiff/5777b6ea29a581f073c80ae48931adadcbc268d4 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/ab042d0108232d4b8b6a2b104a466a4bb2f6b9f1 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Add an explicit cast to double when using fabs().
Add an explicit cast to double when using fabs(). Commit bc43b7c2c0 used fabs() directly on an int variable, which apparently requires an explicit cast on some platforms. Per buildfarm. Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/0097193b17dd8e1dd85347007da249ed4c395eb5 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
pgsql: Fix numeric_power() when the exponent is INT_MIN.
Fix numeric_power() when the exponent is INT_MIN. In power_var_int(), the computation of the number of significant digits to use in the computation used log(Abs(exp)), which isn't safe because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs() instead of Abs(), so that the exponent is cast to a double before the absolute value is taken. Back-patch to 9.6, where this was introduced (by 7d9a4737c2). Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xc6cgw...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/bc43b7c2c06c32264efe79d0b86abd41236f1d5b Modified Files -- src/backend/utils/adt/numeric.c | 2 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 8 insertions(+), 1 deletion(-)
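The Abs(INT_MIN) hazard above is a classic two's-complement edge case: -INT_MIN is not representable in a 32-bit int, so negation wraps back to INT_MIN and log() is then handed a negative value. A small model of the behaviour (the wrap-simulating helper is mine; Python ints don't overflow, so the wraparound has to be simulated explicitly):

```python
import math

INT_MIN = -2**31

def c_abs32(x):
    # Model C's Abs() on a 32-bit two's-complement int: take the absolute
    # value, then wrap the result back into the signed 32-bit range.
    # For x == INT_MIN, -x == 2**31 is out of range and wraps to INT_MIN.
    r = -x if x < 0 else x
    return (r + 2**31) % 2**32 - 2**31

# c_abs32(INT_MIN) is still INT_MIN, so log(Abs(exp)) would take the log of
# a negative number.  fabs() converts to double first, and 2**31 is exactly
# representable as a double, so math.fabs(INT_MIN) == 2.0**31 is safe.
```

This is why the fix casts to double (via fabs()) before taking the absolute value rather than trying to fix Abs() itself.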
pgsql: Fix numeric_power() when the exponent is INT_MIN.
Fix numeric_power() when the exponent is INT_MIN. In power_var_int(), the computation of the number of significant digits to use in the computation used log(Abs(exp)), which isn't safe because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs() instead of Abs(), so that the exponent is cast to a double before the absolute value is taken. Back-patch to 9.6, where this was introduced (by 7d9a4737c2). Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xc6cgw...@mail.gmail.com Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/9a299dff25a5f945cc2ee0c00b705857f966e169 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 8 insertions(+), 1 deletion(-)
pgsql: Fix numeric_power() when the exponent is INT_MIN.
Fix numeric_power() when the exponent is INT_MIN. In power_var_int(), the computation of the number of significant digits to use in the computation used log(Abs(exp)), which isn't safe because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs() instead of Abs(), so that the exponent is cast to a double before the absolute value is taken. Back-patch to 9.6, where this was introduced (by 7d9a4737c2). Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xc6cgw...@mail.gmail.com Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/160a9e425f18b248c87130fec9f04b77dceb60a3 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 8 insertions(+), 1 deletion(-)
pgsql: Fix numeric_power() when the exponent is INT_MIN.
Fix numeric_power() when the exponent is INT_MIN. In power_var_int(), the computation of the number of significant digits to use in the computation used log(Abs(exp)), which isn't safe because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs() instead of Abs(), so that the exponent is cast to a double before the absolute value is taken. Back-patch to 9.6, where this was introduced (by 7d9a4737c2). Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xc6cgw...@mail.gmail.com Branch -- REL_13_STABLE Details --- https://git.postgresql.org/pg/commitdiff/e15c384d7acaa2d7d967f2d8feb6bb0d3b793b3d Modified Files -- src/backend/utils/adt/numeric.c | 2 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 8 insertions(+), 1 deletion(-)
pgsql: Fix numeric_power() when the exponent is INT_MIN.
Fix numeric_power() when the exponent is INT_MIN. In power_var_int(), the computation of the number of significant digits to use in the computation used log(Abs(exp)), which isn't safe because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs() instead of Abs(), so that the exponent is cast to a double before the absolute value is taken. Back-patch to 9.6, where this was introduced (by 7d9a4737c2). Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xc6cgw...@mail.gmail.com Branch -- REL_12_STABLE Details --- https://git.postgresql.org/pg/commitdiff/258b7700551c4fc01ecd55d7ead4085184d4dce3 Modified Files -- src/backend/utils/adt/numeric.c | 2 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 8 insertions(+), 1 deletion(-)
pgsql: Prevent numeric overflows in parallel numeric aggregates.
Prevent numeric overflows in parallel numeric aggregates. Formerly various numeric aggregate functions supported parallel aggregation by having each worker convert partial aggregate values to Numeric and use numeric_send() as part of serializing their state. That's problematic, since the range of Numeric is smaller than that of NumericVar, so it's possible for it to overflow (on either side of the decimal point) in cases that would succeed in non-parallel mode. Fix by serializing NumericVars instead, to avoid the overflow risk and ensure that parallel and non-parallel modes work the same. A side benefit is that this improves the efficiency of the serialization/deserialization code, which can make a noticeable difference to performance with large numbers of parallel workers. No back-patch due to risk from changing the binary format of the aggregate serialization states, as well as lack of prior field complaints and low probability of such overflows in practice. Patch by me. Thanks to David Rowley for review and performance testing, and Ranier Vilela for an additional suggestion. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/f025f2390e13d7da69da595086fb982bbaf7f329 Modified Files -- src/backend/utils/adt/numeric.c | 255 -- src/test/regress/expected/numeric.out | 50 +++ src/test/regress/sql/numeric.sql | 36 + 3 files changed, 203 insertions(+), 138 deletions(-)
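The shape of the fix above is to serialize the wider internal representation (NumericVar's int weight, display scale and base-10000 digit array) directly, rather than first squeezing the partial state into the narrower Numeric format. A loose sketch of the idea, not PostgreSQL's actual wire format (the field layout and helper names here are mine):

```python
import struct

def serialize_var(weight, dscale, digits):
    # Pack the internal representation directly: 32-bit weight and dscale,
    # a 16-bit digit count, then the base-10000 digits.  A weight like
    # -70000 is fine here but would not fit the on-disk Numeric header.
    return struct.pack(f'>iih{len(digits)}H', weight, dscale, len(digits), *digits)

def deserialize_var(buf):
    # Reverse of serialize_var(): fixed 10-byte header, then the digits.
    weight, dscale, ndigits = struct.unpack_from('>iih', buf)
    digits = list(struct.unpack_from(f'>{ndigits}H', buf, offset=10))
    return weight, dscale, digits
```

Because the round trip preserves the full internal range, a parallel worker's extreme intermediate state can no longer overflow during serialization, which is the property the commit is after.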
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/e7fc488ad67caaad33f6d5177081884495cb81cb Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
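The behaviour change can be modelled with exact rationals and a small maximum scale. This is a hypothetical helper for illustration only, using a stand-in limit of 4 fractional digits where numeric's real limit is 16383:

```python
from fractions import Fraction

MAX_SCALE = 4  # stand-in for numeric's 16383-digit fractional limit

def mul_round(a, b):
    # Multiply exactly, then round (half-up, for non-negative inputs) to
    # MAX_SCALE fractional digits instead of raising an overflow error.
    exact = Fraction(a) * Fraction(b)
    units = exact * 10**MAX_SCALE
    rounded = (2 * units.numerator + units.denominator) // (2 * units.denominator)
    return Fraction(rounded, 10**MAX_SCALE)
```

With this model, 0.001 * 0.001 rounds to exactly zero at scale 4 (the analogue of the 1e-16000 * 1e-16000 case discussed below), while results that fit within the scale, such as 0.03 * 0.2 = 0.006, are still returned exactly.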
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/f8abf6944b12c9ed01062a5163385f6c49f0f6ef Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL_13_STABLE Details --- https://git.postgresql.org/pg/commitdiff/f23a9b8a49af1ef89dd65399867ca21318ae6b8e Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/54a0ec1bd6d5f46938b8da7bbcf7c3cb26b8638d Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL_12_STABLE Details --- https://git.postgresql.org/pg/commitdiff/357b66ef9470bee657198b4bdb26c98c89e56459 Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL_14_STABLE Details --- https://git.postgresql.org/pg/commitdiff/06883d58ff29cf4fb8c32fd13ce9947796b9fb0f Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
pgsql: Fix numeric_mul() overflow due to too many digits after decimal
Fix numeric_mul() overflow due to too many digits after decimal point. This fixes an overflow error when using the numeric * operator if the result has more than 16383 digits after the decimal point by rounding the result. Overflow errors should only occur if the result has too many digits *before* the decimal point. Discussion: https://postgr.es/m/caezatcumefwcrq2dnzzprj5+6lfn85jyidoqm+ucsxhb9u2...@mail.gmail.com Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/5763ef42c1872031eb71be23e1ae909548ac5409 Modified Files -- src/backend/utils/adt/numeric.c | 10 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 2 ++ 3 files changed, 17 insertions(+), 1 deletion(-)
Re: pgsql: Fix numeric_mul() overflow due to too many digits after decimal
On Sat, 10 Jul 2021 at 16:01, Tom Lane wrote: > > I think this needs a bit more thought. Before, a case like > select 1e-16000 * 1e-16000; > produced > ERROR: value overflows numeric format > Now you get an exact zero (with a lot of trailing zeroes, but still > it's just zero). Doesn't that represent catastrophic loss of > precision? Hmm, "overflow" isn't a great result for that case either. Zero is the closest we can get to the exact result with a fixed number of digits after the decimal point. > In general, I'm disturbed that we just threw away the previous > promise that numeric multiplication results were exact. That > seems like a pretty fundamental property --- which is stated > in so many words in the manual, btw --- and I'm not sure I want > to give it up. Perhaps we should amend the statement about numeric multiplication to say that it's exact within the limits of the numeric type's supported scale, which we also document in the manual as 16383. That seems a lot better than throwing an overflow error for a result that isn't very big, which limits what's possible with numeric multiplication to much less than 16383 digits. Regards, Dean
pgsql: Fix pgbench permute tests.
Fix pgbench permute tests. One of the tests for the pgbench permute() function added by 6b258e3d68 fails on some 32-bit platforms, due to variations in the floating point computations in getrand(). The remaining tests give sufficient coverage, so just remove the failing test. Reported by Christoph Berg. Analysis by Thomas Munro and Tom Lane. Based on patch by Fabien Coelho. Discussion: https://postgr.es/m/ykqnuoyv63grj...@msg.df7cb.de Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/0f516d039d8023163e82fa51104052306068dd69 Modified Files -- src/bin/pgbench/t/001_pgbench_with_server.pl | 11 --- 1 file changed, 11 deletions(-)
pgsql: pgbench: Function to generate random permutations.
pgbench: Function to generate random permutations. This adds a new function, permute(), that generates pseudorandom permutations of arbitrary sizes. This can be used to randomly shuffle a set of values to remove unwanted correlations. For example, permuting the output from a non-uniform random distribution so that all the most common values aren't collocated, allowing more realistic tests to be performed. Formerly, hash() was recommended for this purpose, but that suffers from collisions that might alter the distribution, so recommend permute() for this purpose instead. Fabien Coelho and Hironobu Suzuki, with additional hacking by me. Reviewed by Thomas Munro, Alvaro Herrera and Muhammad Usama. Discussion: https://postgr.es/m/alpine.DEB.2.21.1807280944370.5142@lancre Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/6b258e3d688db14aadb58dde2a72939362310684 Modified Files -- doc/src/sgml/ref/pgbench.sgml| 81 ++--- src/bin/pgbench/exprparse.y | 17 src/bin/pgbench/pgbench.c| 131 +++ src/bin/pgbench/pgbench.h| 3 +- src/bin/pgbench/t/001_pgbench_with_server.pl | 43 + src/bin/pgbench/t/002_pgbench_no_server.pl | 10 ++ 6 files changed, 273 insertions(+), 12 deletions(-)
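A pseudorandom permutation of an arbitrary (non-power-of-two) size is the interesting part of permute(). One standard way to build one is cycle-walking: apply a keyed bijection on the enclosing power-of-two range repeatedly until the value lands back inside [0, size). The sketch below illustrates that general technique with an arbitrary mixer of my own choosing; it is not pgbench's actual algorithm or its mixing function:

```python
def permute(i, size, seed):
    """Deterministic pseudorandom permutation of [0, size): maps each
    i mod size to a distinct value in [0, size)."""
    assert size > 0
    k = max(size - 1, 1).bit_length()   # smallest k with size <= 2**k
    mask = (1 << k) - 1
    x = i % size
    while True:
        for r in range(4):
            # Affine step with an odd multiplier is a bijection mod 2**k,
            # and x ^= x >> s is a bijection too, so each round permutes
            # the full power-of-two range.
            x = (x * 0x9E3779B1 + seed + r) & mask
            x ^= x >> (k // 2 + 1)
        if x < size:
            # Cycle-walking: restricting a bijection on [0, 2**k) to its
            # first in-range successor yields a permutation of [0, size).
            return x
```

Since size > 2**(k-1), at least half of the power-of-two range is in bounds, so the walk terminates quickly on average; the loop is guaranteed to terminate because the orbit of x eventually returns to its in-range starting point.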
pgsql: Improve reporting of "conflicting or redundant options" errors.
Improve reporting of "conflicting or redundant options" errors. When reporting "conflicting or redundant options" errors, try to ensure that errposition() is used, to help the user identify the offending option. Formerly, errposition() was invoked in less than 60% of cases. This patch raises that to over 90%, but there remain a few places where the ParseState is not readily available. Using errdetail() might improve the error in such cases, but that is left as a task for the future.

Additionally, since this error is thrown from over 100 places in the codebase, introduce a dedicated function to throw it, reducing code duplication.

Extracted from a slightly larger patch by Vignesh C. Reviewed by Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter Smith, Daniel Gustafsson, Julien Rouhaud and me.

Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gfrjti9rofynqs...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/2bfb50b3df11399ed80347dd03bfaf8cd5acf962

Modified Files
--------------
src/backend/catalog/aclchk.c | 11 +--
src/backend/commands/copy.c | 59 ---
src/backend/commands/dbcommands.c | 70 --
src/backend/commands/define.c | 12
src/backend/commands/extension.c | 20 ++
src/backend/commands/foreigncmds.c | 18 ++---
src/backend/commands/functioncmds.c | 53 --
src/backend/commands/publicationcmds.c | 27 ---
src/backend/commands/sequence.c | 45 +++-
src/backend/commands/subscriptioncmds.c | 66 +++--
src/backend/commands/typecmds.c | 31 +++-
src/backend/commands/user.c | 112 +++--
src/backend/parser/parse_utilcmd.c | 4 +-
src/backend/tcop/utility.c | 20 +++---
src/include/commands/defrem.h | 7 +-
src/include/commands/publicationcmds.h | 4 +-
src/include/commands/subscriptioncmds.h | 4 +-
src/include/commands/typecmds.h | 2 +-
src/include/commands/user.h | 2 +-
src/test/regress/expected/copy2.out | 2 +
src/test/regress/expected/foreign_data.out | 4 ++
src/test/regress/expected/publication.out | 2 +
22 files changed, 180 insertions(+), 395 deletions(-)
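The "dedicated function" idea above can be pictured with a small sketch. This is Python, purely illustrative of the refactoring pattern and not PostgreSQL's C API; the helper and field names here are hypothetical. The point is that every caller raises the same error through one function that carries the offending option's position, the way errposition() lets the server point at the bad option in the query text.

```python
# Illustrative sketch (not PostgreSQL's C code): centralize the
# "conflicting or redundant options" error in one hypothetical helper
# so every caller reports the offending option's position consistently.

class ConflictingOptionError(Exception):
    def __init__(self, option, position):
        super().__init__(
            f'conflicting or redundant options: "{option}" at character {position}')
        self.option = option
        self.position = position

def error_conflicting_option(defel):
    """Hypothetical stand-in for a dedicated error-throwing function."""
    raise ConflictingOptionError(defel["name"], defel["location"])

def parse_options(defelems):
    """Collect option name/value pairs, rejecting duplicates."""
    seen = {}
    for defel in defelems:
        if defel["name"] in seen:
            error_conflicting_option(defel)  # one place, one message
        seen[defel["name"]] = defel["value"]
    return seen
```

With over 100 call sites, routing them all through one helper both shrinks the code and guarantees the error text and position handling never drift apart.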
pgsql: Improve numeric_power() tests for large integer powers.
Improve numeric_power() tests for large integer powers. Two of the tests added by 4dd5ce2fd fail on buildfarm member castoroides, though it's not clear why. Improve the tests to report the actual values produced, if they're not what was expected. Apply to v11 only for now, until it's clearer what's going on.

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/434ddfb79afea5c515f17fc5febac15afc26d017

Modified Files
--------------
src/test/regress/expected/numeric.out | 8
src/test/regress/sql/numeric.sql | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL_14_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0d6b87497e32a5b15be8fa3247b8fae48358bb1b

Modified Files
--------------
src/backend/utils/adt/numeric.c | 81 ---
src/test/regress/expected/numeric.out | 55
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 15 deletions(-)
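The local_rscale fix described above amounts to clamping the significant-digit count at zero before the guard digits are added. A minimal Python sketch of the arithmetic (not the actual C code; the constant 1000 and the 8 guard digits are taken from the commit message):

```python
NUMERIC_MAX_DISPLAY_SCALE = 1000  # rscale is clipped to this maximum

def local_rscale_old(rscale, val, ln_dweight):
    # Pre-fix formula: rscale + val is meant to be the number of
    # significant digits, but it can go negative when the estimated
    # result weight (val) is below -1000, losing all precision.
    return rscale + val - ln_dweight + 8

def local_rscale_new(rscale, val, ln_dweight):
    # Post-fix: force the significant-digit count to be nonnegative.
    # Zero is fine; the +8 guard digits still ensure an accurate result.
    sig_digits = max(rscale + val, 0)
    return sig_digits - ln_dweight + 8
```

For a result weight well below -1000 (say rscale=1000, val=-2000), the old formula yields a negative working rscale, which is how 0.9999...^(huge power) could come back as 1 instead of 0; the clamped version keeps the eight extra digits of working precision.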
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/dcd0ab6729350c68681f115434054490f523956a

Modified Files
--------------
src/backend/utils/adt/numeric.c | 82 ---
src/test/regress/expected/numeric.out | 55 +++
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 16 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4dd5ce2fd9b9b2134886fee7fe4cb36d1988adb2

Modified Files
--------------
src/backend/utils/adt/numeric.c | 81 ---
src/test/regress/expected/numeric.out | 55
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 15 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/39b0bda67007468ded0a2f236672a02c836ab759

Modified Files
--------------
src/backend/utils/adt/numeric.c | 82 ---
src/test/regress/expected/numeric.out | 55 +++
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 16 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL_13_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/053ec4e0c4e8027bc084d25a846e35ba84ae966c

Modified Files
--------------
src/backend/utils/adt/numeric.c | 82 ---
src/test/regress/expected/numeric.out | 55 +++
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 16 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/5cf350ce02278da488e9eb4825a573d86d6d9fcc

Modified Files
--------------
src/backend/utils/adt/numeric.c | 82 ---
src/test/regress/expected/numeric.out | 55 +++
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 16 deletions(-)
pgsql: Fix corner-case errors and loss of precision in numeric_power().
Fix corner-case errors and loss of precision in numeric_power(). This fixes a couple of related problems that arise when raising numbers to very large powers.

Firstly, when raising a negative number to a very large integer power, the result should be well-defined, but the previous code would only cope if the exponent was small enough to go through power_var_int(). Otherwise it would throw an internal error, attempting to take the logarithm of a negative number. Fix this by adding suitable handling to the general case in power_var() to cope with negative bases, checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute value is slightly less than 1 to a very large power, the result should approach zero as the power is increased. However, in some cases, for sufficiently large powers, this would lose all precision and return 1 instead of 0. This was due to the way that the local_rscale was being calculated for the final full-precision calculation:

    local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the number of significant digits required in the result ("val" being the estimated result weight). However, this failed to account for the fact that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE (1000), and the result weight might be less than -1000, causing their sum to be negative, leading to a loss of precision. Fix this by forcing the number of significant digits calculated to be nonnegative. It's OK for it to be zero (when the result weight is less than -1000), since the local_rscale value then includes a few extra digits to ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(), so that they consistently return zero for cases like this where the result is indistinguishable from zero. Some paths through this code already returned zero in such cases, but others were throwing overflow errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6su0bc...@mail.gmail.com

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/5c62920fa938644935418226872b360f90e74b62

Modified Files
--------------
src/backend/utils/adt/numeric.c | 82 ---
src/test/regress/expected/numeric.out | 55 +++
src/test/regress/sql/numeric.sql | 11 +
3 files changed, 132 insertions(+), 16 deletions(-)
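The negative-base handling described in the numeric_power() commits above can be sketched numerically: the logarithm of a negative number is undefined, so for a negative base the exponent must be an integer, in which case |base|^n is computed through the exp/ln path and the sign is reattached from the exponent's parity. A hedged Python illustration of the idea, not the actual power_var() code (it uses floats where PostgreSQL uses arbitrary-precision numerics):

```python
import math

def power_with_negative_base(base, exponent):
    """Sketch of the general-case handling: for a negative base the
    exponent must be an integer; compute exp(y * ln|x|) and negate
    the result when that integer exponent is odd."""
    if base < 0:
        if exponent != int(exponent):
            raise ValueError(
                "a negative number raised to a non-integer power yields a complex result")
        sign = -1 if int(exponent) % 2 else 1
        return sign * math.exp(exponent * math.log(-base))
    return math.exp(exponent * math.log(base))
```

Before the fix, only the power_var_int() fast path handled this case, so a negative base with a large integer exponent fell through to the general path and hit the undefined logarithm.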
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL_13_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/a72ad63154256bd3adf1c885f8c3371e9e8ce67a

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 33 ++
src/test/regress/sql/numeric.sql | 8 +
3 files changed, 78 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/226ec49ffd78c0f246da8fceb3094991dd2302ff

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 32 +
src/test/regress/sql/numeric.sql | 7
3 files changed, 76 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/661558bc35c7d0cd10640e7a2a065391ec769a7d

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 33 ++
src/test/regress/sql/numeric.sql | 8 +
3 files changed, 78 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL_11_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/4851940a5c000b2b2504d7add13dfbd463c1c59c

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 33 ++
src/test/regress/sql/numeric.sql | 8 +
3 files changed, 78 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL9_6_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/ed3e1663c031e82def8c50ef31ee704777059459

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 33 ++
src/test/regress/sql/numeric.sql | 8 +
3 files changed, 78 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/43644bd3b234091c4bfad0bf6d7d88f90c52aaf5

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 33 ++
src/test/regress/sql/numeric.sql | 8 +
3 files changed, 78 insertions(+), 29 deletions(-)
pgsql: Fix division-by-zero error in to_char() with 'EEEE' format.
Fix division-by-zero error in to_char() with 'EEEE' format. This fixes a long-standing bug when using to_char() to format a numeric value in scientific notation -- if the value's exponent is less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its input by 10^exp, which it produced using power_var_int(). However, the underflow test in power_var_int() causes it to return zero if the result scale is too small. That's not a problem for power_var_int()'s only other caller, power_var(), since that limits the rscale to 1000, but in get_str_from_var_sci() the exponent can be much smaller, requiring a much larger rscale.

Fix by introducing a new function to compute 10^exp directly, with no rscale limit. This also allows 10^exp to be computed more efficiently, without any numeric multiplication, division or rounding.

Discussion: https://postgr.es/m/caezatcwhojfh4whaqgukbe8d5jnhb8ytzeml-pnrx+kctym...@mail.gmail.com

Branch
------
REL_14_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/ecbdbdfd9056a55e38f7922565c7ce46b1321907

Modified Files
--------------
src/backend/utils/adt/numeric.c | 66 ---
src/test/regress/expected/numeric.out | 32 +
src/test/regress/sql/numeric.sql | 7
3 files changed, 76 insertions(+), 29 deletions(-)
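The "compute 10^exp directly" idea above can be sketched as pure digit placement: PostgreSQL numerics store base-10000 digits with a weight, so a power of ten is a single digit at the right weight, with no multiplication, division or rounding and no rscale limit. A simplified Python model of that representation (illustrative only, not the actual C function):

```python
NBASE = 10000   # PostgreSQL numeric stores base-10000 digits
DEC_DIGITS = 4  # decimal digits per NBASE digit

def power_ten(exp):
    """Build 10^exp directly as (digits, weight): one NBASE digit.
    Nothing is multiplied or rounded, so arbitrarily small exponents
    (e.g. below -1001) work, unlike the power_var_int() path."""
    weight, rem = divmod(exp, DEC_DIGITS)  # exp = 4*weight + rem, 0 <= rem < 4
    return [10 ** rem], weight

def to_float(digits, weight):
    """Helper to check the sketch against ordinary floats."""
    return sum(d * float(NBASE) ** (weight - i) for i, d in enumerate(digits))
```

For example, 10^5 becomes the single digit 10 at weight 1 (10 * 10000^1), and 10^-3 becomes the digit 10 at weight -1 (10 / 10000), so get_str_from_var_sci() can divide by 10^exp for any exponent the value can carry.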
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us

Branch
------
REL_13_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/da188b993450c824b03e7dca18c27e9f4c04754f

Modified Files
--------------
src/backend/utils/adt/numeric.c | 22 ++--
src/test/regress/expected/numeric.out | 65 ++-
src/test/regress/sql/numeric.sql | 22 +---
3 files changed, 85 insertions(+), 24 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us

Branch
------
REL_10_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/f20b6bc96b78d50b597e28a22f58bb9240f948f9

Modified Files
--------------
src/backend/utils/adt/numeric.c | 26 +-
src/test/regress/expected/numeric.out | 65 ++-
src/test/regress/sql/numeric.sql | 22 +---
3 files changed, 84 insertions(+), 29 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us

Branch
------
REL_14_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0325565702d8fd18ba66cdeaad4d7f43744525b2

Modified Files
--------------
src/backend/utils/adt/numeric.c | 22 ++--
src/test/regress/expected/numeric.out | 65 ++-
src/test/regress/sql/numeric.sql | 22 +---
3 files changed, 85 insertions(+), 24 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/cc4420f8829ddbc852e8c98781f09401ef26031d

Modified Files
--------------
src/backend/utils/adt/numeric.c | 22 ++--
src/test/regress/expected/numeric.out | 65 ++-
src/test/regress/sql/numeric.sql | 22 +---
3 files changed, 85 insertions(+), 24 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug. Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong. While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose. Per buildfarm via Tom Lane, reviewed by Tom Lane. Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us Branch -- REL9_6_STABLE Details --- https://git.postgresql.org/pg/commitdiff/5b7b92ad74f2eeb4ac166bb45be31df95e574b3b Modified Files -- src/backend/utils/adt/numeric.c | 26 +- src/test/regress/expected/numeric.out | 65 ++- src/test/regress/sql/numeric.sql | 22 +--- 3 files changed, 84 insertions(+), 29 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug. Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong. While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose. Per buildfarm via Tom Lane, reviewed by Tom Lane. Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/7a9c9acfe5de25126f20720d31c8b066ac7c8588 Modified Files -- src/backend/utils/adt/numeric.c | 26 ++-- src/test/regress/expected/numeric.out | 57 --- src/test/regress/sql/numeric.sql | 18 +-- 3 files changed, 78 insertions(+), 23 deletions(-)
pgsql: Adjust the integer overflow tests in the numeric code.
Adjust the integer overflow tests in the numeric code. Formerly, the numeric code tested whether an integer value of a larger type would fit in a smaller type by casting it to the smaller type and then testing if the reverse conversion produced the original value. That's perfectly fine, except that it caused a test failure on buildfarm animal castoroides, most likely due to a compiler bug. Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That matches existing code in other places, such as int84(), which is more widely tested, and so is less likely to go wrong. While at it, add regression tests covering the numeric-to-int8/4/2 conversions, and adjust the recently added tests to the style of 434ddfb79a (on the v11 branch) to make failures easier to diagnose. Per buildfarm via Tom Lane, reviewed by Tom Lane. Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us Branch -- master Details --- https://git.postgresql.org/pg/commitdiff/2642df9fac09540c761441edd9bdd0a72c62f39c Modified Files -- src/backend/utils/adt/numeric.c | 22 ++-- src/test/regress/expected/numeric.out | 65 ++- src/test/regress/sql/numeric.sql | 22 +--- 3 files changed, 85 insertions(+), 24 deletions(-)
pgsql: Fix corner-case loss of precision in numeric_power().
Fix corner-case loss of precision in numeric_power(). This fixes a loss of precision that occurs when the first input is very close to 1, so that its logarithm is very small. Formerly, during the initial low-precision calculation to estimate the result weight, the logarithm was computed to a local rscale that was capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be as close as 1e-16383 to 1, hence its logarithm may be as small as 1e-16383, and so the local rscale needs to be allowed to exceed 16383, otherwise all precision is lost, leading to a poor choice of rscale for the full-precision calculation. Fix this by removing the cap on the local rscale during the initial low-precision calculation, as we already do in the full-precision calculation. This doesn't change the fact that the initial calculation is a low-precision approximation, computing the logarithm to around 8 significant digits, which is very fast, especially when the base is very close to 1. Patch by me, reviewed by Alvaro Herrera. Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com Branch -- REL_11_STABLE Details --- https://git.postgresql.org/pg/commitdiff/b2a0f1673371fd0156685b44b06a818609ed7c95 Modified Files -- src/backend/utils/adt/numeric.c | 6 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 12 insertions(+), 1 deletion(-)
pgsql: Fix corner-case loss of precision in numeric_power().
Fix corner-case loss of precision in numeric_power(). This fixes a loss of precision that occurs when the first input is very close to 1, so that its logarithm is very small. Formerly, during the initial low-precision calculation to estimate the result weight, the logarithm was computed to a local rscale that was capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be as close as 1e-16383 to 1, hence its logarithm may be as small as 1e-16383, and so the local rscale needs to be allowed to exceed 16383, otherwise all precision is lost, leading to a poor choice of rscale for the full-precision calculation. Fix this by removing the cap on the local rscale during the initial low-precision calculation, as we already do in the full-precision calculation. This doesn't change the fact that the initial calculation is a low-precision approximation, computing the logarithm to around 8 significant digits, which is very fast, especially when the base is very close to 1. Patch by me, reviewed by Alvaro Herrera. Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com Branch -- REL_10_STABLE Details --- https://git.postgresql.org/pg/commitdiff/4853baaaca4a3469247027f7dc86b516e35f7c0a Modified Files -- src/backend/utils/adt/numeric.c | 6 +- src/test/regress/expected/numeric.out | 6 ++ src/test/regress/sql/numeric.sql | 1 + 3 files changed, 12 insertions(+), 1 deletion(-)