Re: [Suspect SPAM] Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Amit Langote
Thank you Marina for the report and Michael for following up.

On 2018/05/07 16:56, Michael Paquier wrote:
> On Mon, May 07, 2018 at 10:37:10AM +0900, Michael Paquier wrote:
>> On Fri, May 04, 2018 at 12:32:23PM +0300, Marina Polyakova wrote:
>>> I got a similar server crash as in [1] on the master branch since the commit
>>> 9fdb675fc5d2de825414e05939727de8b120ae81 when the assertion fails because
>>> the second argument ScalarArrayOpExpr is not a Const or an ArrayExpr, but is
>>> an ArrayCoerceExpr (see [2]):
>>
>> Indeed, I can see the crash.  I have been playing with this stuff and I
>> am in the middle of writing the patch, but let's track this properly for
>> now.
> 
> So the problem appears when an expression needs to use
> COERCION_PATH_ARRAYCOERCE for a type coercion from one type to another,
> which requires an execution state to be able to build the list of
> elements.  The clause matching happens at planning state, so while there
> are surely cases where this could be improved depending on the
> expression type, I propose to just discard all clauses which do not
> match OpExpr and ArrayExpr for now, as per the attached.  It would be
> definitely a good practice to have a default code path returning
> PARTCLAUSE_UNSUPPORTED where the element list cannot be built.
> 
> Thoughts?

I have to agree to go with this conservative approach for now.  Although
we might be able to evaluate the array elements by applying the coercion
specified by ArrayCoerceExpr, let's save that as an improvement to be
pursued later.

FWIW, constraint exclusion wouldn't prune in this case either (that is, if
you try this example with PG 10 or using HEAD as of the parent of
9fdb675fc5), but it doesn't crash like the new pruning code does.
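For readers following along, a hypothetical sketch (not the reproducer
from [2]) of a clause shape that hands the pruning code an
ArrayCoerceExpr: an array coercion whose per-element conversion is only
stable, such as text[] to date[], cannot be folded into a Const at plan
time.

CREATE TABLE dp (d date) PARTITION BY LIST (d);
CREATE TABLE dp1 PARTITION OF dp FOR VALUES IN ('2018-05-08');
-- text[] -> date[] goes through the (stable) date input function, so the
-- ScalarArrayOpExpr keeps an ArrayCoerceExpr as its array argument
SELECT * FROM dp WHERE d = ANY ('{2018-05-08}'::text[]::date[]);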

Thanks again.

Regards,
Amit




Re: Expression errors with "FOR UPDATE" and postgres_fdw with partition wise join enabled.

2018-05-08 Thread Ashutosh Bapat
On Tue, May 1, 2018 at 5:00 PM, Etsuro Fujita wrote:
> (2018/04/27 14:40), Ashutosh Bapat wrote:
>>
>> Here's updated patch set.
>
>
> Thanks for updating the patch!  Here are my review comments on patch
> 0003-postgres_fdw-child-join-with-ConvertRowtypeExprs-cau.patch:
>
> * This assertion in deparseConvertRowtypeExpr wouldn't always be true
> because of cases like ALTER FOREIGN TABLE DROP COLUMN plus ALTER FOREIGN
> TABLE ADD COLUMN:
>
> +   Assert(parent_desc->natts == child_desc->natts);
>
> Here is such an example:

>
> I think we should remove that assertion.

Removed.

>
> * But I don't think that change is enough.  Here is another oddity with that
> assertion removed.

>
> To fix this, I think we should skip the deparseColumnRef processing for
> dropped columns in the parent table.

Done.

> I have changed the test file a bit to test these scenarios.

Thanks for testing.

>
> * I think it would be better to do EXPLAIN with the VERBOSE option to the
> queries this patch added to the regression tests, to see if
> ConvertRowtypeExprs are correctly deparsed in the remote queries.

The reason VERBOSE option was omitted for partition-wise join was to
avoid repeated and excessive output for every child-join. I think that
still applies.

PFA patch-set with the fixes.

I also noticed that the new function deparseConvertRowtypeExpr does not
quite follow the naming convention of the other deparseFoo functions: Foo
is usually the type of the node the parser would produce when the SQL
string produced by that function is parsed. That doesn't hold for the SQL
string produced for a ConvertRowtypeExpr, but then naming it a generic
deparseRowExpr() wouldn't be correct either. And then there are functions
like deparseColumnRef which may produce a variety of SQL strings that get
parsed into different node types, e.g. a whole-row reference would produce
ROW(...), which gets parsed as a RowExpr. Please let me know if you have
any suggestions for the name.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


expr_ref_error_pwj_v5.tar.gz
Description: GNU Zip compressed data


Setting libpq TCP keepalive parameters from environment

2018-05-08 Thread Oleksandr Shulgin
Hi Hackers,

I didn't find the original discussion which led to introduction of the
client-side set of keepalive parameters back in [1].

The issue I'm facing is that it doesn't seem to be possible to set these
parameters from environment variables.  The block of option
definitions [2] added for these parameters doesn't specify any
corresponding env var names.

I wonder if this is an oversight, or whether there was a conscious
decision not to allow it.

To give a specific example, I have a (legacy) Python script which talks to
the database in two ways: by directly using psycopg2.connect(host=..., )
and also by running some shell commands with psql's \copy + gzip, mainly
for convenience.

Now when I needed to tune the keepalives_idle value, I had to include it
everywhere a database connection is made:
- For psycopg2 it's straightforward: you just add an extra keyword
parameter to connect().
- For psql it's trickier, since the only way to achieve this seems to be
to pack it together with -d, as: -d 'dbname=mydb keepalives_idle=NNN'.

In case of a binary that would amount to recompiling, I guess.  I have no
permission to tweak sysctl directly on the host, unfortunately.

It would be much more convenient to just set the environment variable when
running the script and let it affect the whole process and its children.

Would a patch be welcome?

Regards,
--
Alex

[1] https://www.postgresql.org/message-id/flat/20100623215414.053427541D4%40cvs.postgresql.org
[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-connect.c;h=a7e969d7c1cecdc8743c43cea099068196a4a5fe;hb=HEAD#l251


Re: Porting PG Extension from UNIX to Windows

2018-05-08 Thread Pavlo Golub
Greetings, Alexander.

You wrote 08.05.2018, 9:42:

> 25.04.2018 11:45, insaf.k wrote:
>> I've done some research regarding compiling in Windows. I am not sure
>> in what way I should compile the extension. AFAIK, Visual Studio is
>> not POSIX compliant and so I'll have to rewrite all those POSIX calls
>> using Windows API. So it's better to compile the extension in Cygwin.
>>
>> I have some questions regarding compiling the extension in Cygwin. Do
>> I have to build PG in Cygwin, if I want to compile the extension in
>> Cygwin?
>
> I think it might depend on the extension, but we have managed to use a
> mingw-compiled extension with a VS-compiled PG. Please look at the demo
> script: https://pastebin.com/3jQahYNe
> If you have any questions, please don't hesitate to ask.

Cool idea.

- Why are you using x86 version of MSYS2?
- And why don't you use just msys2-x86_64-latest.tar.xz instead of
exactly named one?

> Best regards,
> --
> Alexander Lakhin
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company



-- 
Kind regards,
 Pavlo  mailto:pavlo.go...@cybertec.at




Re: Porting PG Extension from UNIX to Windows

2018-05-08 Thread Alexander Lakhin

Hello Pavlo,
08.05.2018 11:19, Pavlo Golub wrote:

> Cool idea.
>
> - Why are you using x86 version of MSYS2?
We build PostgresPro for x86 and x64, so I chose to use the x86 version
as a common denominator, to be able to run the same script on 32-bit
Windows.
(When running on 64-bit Windows, either version (x86/x64) of PostgresPro
could be installed, so the target is determined by the bitness of the
installed postgres.exe.)

> - And why don't you use just msys2-x86_64-latest.tar.xz instead of
> exactly named one?
I don't remember whether I tried and had troubles with the 'latest', but
I prefer to use some fixed version and move to a newer one in a
controlled manner.


Best regards,

--
Alexander Lakhin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: doc fixes: vacuum_cleanup_index_scale_factor

2018-05-08 Thread Alexander Korotkov
Hi, Justin!

Thank you for revising documentation patch.

On Mon, May 7, 2018 at 7:55 PM, Justin Pryzby  wrote:

> On Mon, May 07, 2018 at 07:26:25PM +0300, Alexander Korotkov wrote:
> > Hi!
> >
> > I've revised docs and comments, and also made some fixes in the code.
> > See the attached patchset.
> >
> > * 0004-btree-cleanup-docs-comments-fixes.patch
> > Documentation and comment improvements from Justin Pryzby
> > revised by me.
>
> 2nd iteration:
>
> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
> index eabe2a9235..785ecf922a 100644
> --- a/doc/src/sgml/config.sgml
> +++ b/doc/src/sgml/config.sgml
> @@ -1893,15 +1893,35 @@ include_dir 'conf.d'
>
>
> 
> -When no tuples were deleted from the heap, B-tree indexes might still
> -be scanned during VACUUM cleanup stage by two
> -reasons.  The first reason is that B-tree index contains deleted pages
> -which can be recycled during cleanup.  The second reason is that B-tree
> -index statistics is stalled.  The criterion of stalled index statistics
> -is number of inserted tuples since previous statistics collection
> -is greater than vacuum_cleanup_index_scale_factor
> -fraction of total number of heap tuples.
> +When no tuples were deleted from the heap, B-tree indexes are still
> +scanned during VACUUM cleanup stage unless two
> +conditions are met: the index contains no deleted pages which can be
> +recycled during cleanup; and, the index statistics are not stale.
> +In order to detect stale index statistics, number of total heap tuples
> should say: "THE number"
>
> +during previous statistics collection is memorized in the index
> s/memorized/stored/
>
> +meta-page.  Once number number of inserted tuples since previous
> Should say "Once the number of inserted tuples..."
>
> +statistics collection is more than
> +vacuum_cleanup_index_scale_factor fraction of
> +number of heap tuples memorized in the meta-page, index statistics is
> s/memorized/stored/
>
> +considered to be stalled.  Note, that number of heap tuples is written
> "THE number"
> s/stalled/stale/
>
> +to the meta-page at the first time when no dead tuples are found
> remove "at"
>
> +during VACUUM cycle.  Thus, skip of B-tree index
> I think should say: "Thus, skipping of the B-tree index scan"
>
> +scan during cleanup stage is only possible in second and subsequent
> s/in/when/
>
> +
> +Zero value of vacuum_cleanup_index_scale_factor
> I would say "A zero value of ..."
>

I've applied all the changes you suggested.  Please find them in the
attached patchset.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


0001-vacuum-cleanup-index-scale-factor-user-set-2.patch
Description: Binary data


0002-vacuum-cleanup-index-scale-factor-tab-complete-2.patch
Description: Binary data


0003-btree-cleanup-condition-fix-2.patch
Description: Binary data


0004-btree-cleanup-docs-comments-fixes-2.patch
Description: Binary data


Re: [Suspect SPAM] Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Michael Paquier
On Tue, May 08, 2018 at 04:07:41PM +0900, Amit Langote wrote:
> I have to agree to go with this conservative approach for now.  Although
> we might be able to evaluate the array elements by applying the coercion
> specified by ArrayCoerceExpr, let's save that as an improvement to be
> pursued later.

Thanks for confirming.  Yes, non-volatile functions would be actually
safe, and we'd need to be careful about NULL handling as well, but
that's definitely out of scope for v11.

> FWIW, constraint exclusion wouldn't prune in this case either (that is, if
> you try this example with PG 10 or using HEAD as of the parent of
> 9fdb675fc5), but it doesn't crash like the new pruning code does.

Yeah, I have noticed that.
--
Michael


signature.asc
Description: PGP signature


Re: Optimize usage of immutable functions as relation

2018-05-08 Thread Aleksandr Parfenov
Hello Andrew,

Thank you for the review of the patch.

On Fri, 04 May 2018 08:37:31 +0100
Andrew Gierth  wrote:
> From an implementation point of view your patch is obviously broken in
> many ways (starting with not checking varattno anywhere, and not
> actually checking anywhere if the expression is volatile).

The actual check of whether the expression is volatile or not is done
inside evaluate_function(), which is called through a few more functions
from eval_const_expressions(). If the expression is volatile,
eval_const_expressions() will return a FuncExpr node, otherwise a Const.
It also checks whether the arguments are immutable or not.

I agree about varattno; it should be checked. Even in the case where the
SRF is not replaced, it is better to be sure that the Var points to the
first (and only) attribute.

> But more importantly the plans that you showed seem _worse_ in that
> you've created extra calls to to_tsquery even though the query has
> been written to try and avoid that.
> 
> I think you need to take a step back and explain more precisely what
> you think you're trying to do and what the benefit (and cost) is.

Sure, the first version is a kind of prototype, an attempt to improve the
execution of a certain type of queries. The best solution I see is to use
the result of the function where it is required and to remove the nested
loop in the case of immutable functions. That way the planner can choose
a better plan when the function result is used for tuple selection or as
a join filter. But it will increase planning time due to the execution of
immutable functions.

It looks like I did something wrong with the plans in the example,
because the attached patch replaces the function-relation usage with the
result value, not the function call. But the nested loop is still in the
plan.

I'm open to any suggestions/notes/criticism about the idea and approach.

-- 
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



Re: Optimize usage of immutable functions as relation

2018-05-08 Thread Andrew Gierth
> "Aleksandr" == Aleksandr Parfenov  writes:

 >> From an implementation point of view your patch is obviously broken
 >> in many ways (starting with not checking varattno anywhere, and not
 >> actually checking anywhere if the expression is volatile).

 Aleksandr> The actual check of whether the expression is volatile or
 Aleksandr> not is done inside evaluate_function(), which is called
 Aleksandr> through a few more functions from eval_const_expressions().
 Aleksandr> If the expression is volatile, eval_const_expressions() will
 Aleksandr> return a FuncExpr node, otherwise a Const. It also checks
 Aleksandr> whether the arguments are immutable or not.

You're missing a ton of other possible cases, of which by far the most
notable is function inlining: eval_const_expressions will inline even a
volatile function and return a new expression tree (which could be
almost anything depending on the function body).

 Aleksandr> I agree about varattno; it should be checked. Even in the
 Aleksandr> case where the SRF is not replaced, it is better to be sure
 Aleksandr> that the Var points to the first (and only) attribute.

It's not a matter of "better", but of basic correctness. Functions can
return composite values with multiple columns.

-- 
Andrew (irc:RhodiumToad)



Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2018-05-08 Thread Alvaro Herrera
Fabien COELHO wrote:

> I think that I'll have time for a round of review in the first half of July.
> Providing a rebased patch before then would be nice.

Note that even in the absence of a rebased patch, you can apply it to an
older checkout if you have some limited window of time for a review.

Looking over the diff, I find that this patch tries to do too much and
needs to be split up.  At a minimum there is a preliminary patch that
introduces the error reporting stuff (errstart etc); there are other
thread-related changes (for example to the random generation functions)
that probably belong in a separate one too.  Not sure if there are other
smaller patches hidden inside the rest.

On elog/errstart: we already have a convention for what ereport() calls
look like; I suggest to use that instead of inventing your own.  With
that, is there a need for elog()?  In the backend we have it because
$HISTORY but there's no need for that here -- I propose to lose elog()
and use only ereport everywhere.  Also, I don't see that you need
errmsg_internal() at all; let's lose it too.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Bug Report: Error caused due to wrong ordering of filters

2018-05-08 Thread Ekta Khanna
Hello PGSQL Hackers,

We have come across the following issue on Postgres REL_10_STABLE. Below is
the repro:

CREATE TABLE foo (a int, b text);
INSERT INTO foo VALUES (1, '3');
SELECT * FROM (SELECT * FROM foo WHERE length(b) = 8) x
WHERE to_date(x.b, 'YYYYMMDD') > '2018-05-04';
ERROR:  source string too short for "YYYY" formatting field
DETAIL:  Field requires 4 characters, but only 1 remain.
HINT:  If your source string is not fixed-width, try using the "FM" modifier.

On looking at the explain plan, we see that the order of the clauses is
reversed due to the costing of clauses in the function
order_qual_clauses(). Below is the plan:

*Actual Plan:*
                      QUERY PLAN
------------------------------------------------------------
 Seq Scan on foo  (cost=0.00..35.40 rows=2 width=36)
   Filter: ((to_date(b, 'YYYYMMDD'::text) > '2018-05-04'::date) AND (length(b) = 8))
(2 rows)

The expected plan would execute the qual from the FROM clause before the
qual in the WHERE clause:

*Plan expected:*
                      QUERY PLAN
------------------------------------------------------------
 Seq Scan on foo  (cost=0.00..35.40 rows=2 width=36)
   Filter: ((length(b) = 8) AND (to_date(b, 'YYYYMMDD'::text) > '2018-05-04'::date))
(2 rows)

Has anyone come across a similar issue?
In the plan, we see that the planner merges the quals from the FROM
clause and the WHERE clause into the same restrictinfo list. Is this the
expected behavior?


Thanks & Regards,
Ekta & Sam


Re: doc fixes: vacuum_cleanup_index_scale_factor

2018-05-08 Thread Justin Pryzby
3rd iteration; thanks for bearing with me.

On Tue, May 08, 2018 at 12:35:00PM +0300, Alexander Korotkov wrote:
> Hi, Justin!
> 
> Thank you for revising documentation patch.
> 
> On Mon, May 7, 2018 at 7:55 PM, Justin Pryzby  wrote:


+In order to detect stale index statistics, the number of total heap
+tuples during previous statistics collection is stored in the index
+meta-page.

Consider removing: "during previous statistics collection"
Or: "during THE previous statistics collection"

+ Once the number of inserted tuples since previous

since THE previous

+statistics collection is more than
+vacuum_cleanup_index_scale_factor fraction of

Since the multiplier can be greater than 1, should we say "multiple" instead of
fraction?
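To make the semantics concrete, a sketch assuming the user-settable GUC
from this patchset: if the meta-page recorded 1000 heap tuples at the
last statistics collection, then after

SET vacuum_cleanup_index_scale_factor = 0.1;

the statistics count as stale (and the cleanup-stage index scan is not
skipped) once more than 100 tuples have been inserted since then.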

+during VACUUM cycle.  Thus, skipping of the B-tree
+index scan during cleanup stage is only possible when second and
+subsequent VACUUM cycles detecting no dead tuples.

Change "detecting" to "detect".  Or maybe just "find"

Justin



Re: Bug Report: Error caused due to wrong ordering of filters

2018-05-08 Thread Andrew Gierth
> "Ekta" == Ekta Khanna  writes:

 Ekta> Hello PGSQL Hackers,
 
 Ekta> We have come across the following issue on Postgres
 Ekta> REL_10_STABLE. Below is the repro:
 [...]
 Ekta> In the plan, we see that planner merges the quals from FROM
 Ekta> clause and the WHERE clause in the same RESTRICTINFO. Is this the
 Ekta> expected behavior?

Yes, it's entirely expected. You CANNOT make assumptions about the order
of evaluation of quals; the planner will rearrange them freely, even
across subquery boundaries (where the semantics allow).

You can do this:

  WHERE CASE WHEN length(b) = 8
             THEN to_date(b, 'YYYYMMDD') > '2018-05-04'
             ELSE false END

since one of the few guarantees about execution order is that a CASE
will evaluate its condition tests before any non-constant subexpressions
in the corresponding THEN clause.

(Another method is to put an OFFSET 0 in the subquery, but that's more
of a hack)
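For completeness, a sketch of that variant against the reported query,
with OFFSET 0 fencing the subquery so its length() qual is applied first:

SELECT *
FROM (SELECT * FROM foo WHERE length(b) = 8 OFFSET 0) x
WHERE to_date(x.b, 'YYYYMMDD') > '2018-05-04';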

-- 
Andrew (irc:RhodiumToad)



Re: Built-in connection pooling

2018-05-08 Thread Konstantin Knizhnik



On 05.05.2018 00:54, Merlin Moncure wrote:

> On Fri, May 4, 2018 at 2:25 PM, Robert Haas wrote:
>> On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure wrote:
>>> If we are breaking the 1:1 backend:session relationship, what controls
>>> would we have to manage resource consumption?
>>
>> I mean, if you have a large number of sessions open, it's going to
>> take more memory in any design.  If there are multiple sessions per
>> backend, there may be some possibility to save memory by allocating it
>> per-backend rather than per-session; it shouldn't be any worse than if
>> you didn't have pooling in the first place.
>
> It is absolutely worse, or at least can be.  plpgsql plan caches can
> be GUC dependent due to search_path; you might get a different plan
> depending on which tables resolve into the function.  You might
> rightfully regard this as an edge case but there are other 'leakages',
> for example, sessions with different planner settings obviously ought
> not to share backend plans.  Point being, there are many
> interdependent things in the session that will make it difficult to
> share some portions but not others.


Right now, in my built-in connection pool implementation there is a
shared prepared-statement cache for all sessions in one backend, but each
session effectively has its own set of prepared statements: I just append
the session identifier to the prepared statement name to make it unique.
So there is no problem with different execution plans for different
clients caused by specific GUC settings (like enable_seqscan or
max_parallel_workers_per_gather).
But the primary reason for this behavior is to avoid prepared-statement
name conflicts between different clients.
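For illustration, with hypothetical names: if two pooled sessions with
ids 7 and 8 each run PREPARE q AS ..., the backend-level cache ends up
holding the equivalent of

PREPARE q_7 AS SELECT * FROM t WHERE id = $1;  -- session 7's "q"
PREPARE q_8 AS SELECT * FROM t WHERE id = $1;  -- session 8's "q"

so each client sees only its own "q".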


From my point of view, there are very few cases where using
client-specific plans makes any sense.
In most cases the requirement is quite the opposite: I want to be able to
prepare an execution plan (using hints, which Postgres lacks, GUCs,
adjusted statistics, ...) which will be used by all clients.
The most natural and convenient way to achieve that is a shared plan
cache.
But a shared plan cache is a different story, not directly related to
connection pooling.



> Point being, there are many interdependent things in the session that
> will make it difficult to share some portions but not others.


I do not see many such things... Yes, GUCs can affect behavior within a
session, but GUCs are now supported: each session can have its own set of
GUCs.
Prepared plans may depend on GUCs, but they are also private to each
session now. What else?


And in any case, with an external connection pooler you are not able to
use session semantics at all: GUCs, prepared statements, temporary
tables, advisory locks, ...
With the built-in connection pooler you can use sessions, with some
restrictions (lack of advisory locks, for example). That is better than
nothing, isn't it?





--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: Having query cache in core

2018-05-08 Thread Robert Haas
On Mon, May 7, 2018 at 2:32 PM, Pavel Stehule  wrote:
> For interactive applications, the plan cache is interesting only for one
> subset of queries.
>
> @1 There are long queries - the planning time is not significant
> (although it can be high), and then the plan cache is not important
> @2 there are fast queries with fast planning time - usually we don't
> need a plan cache either
> @3 there are fast queries with long planning time - and then the plan
> cache is very interesting - it can be difficult to separate this case
> from @1.

That's not my experience.  I agree that plan caching isn't important
for long-running queries, but I think it *is* potentially important
for fast queries with fast planning time.  Even when the planning time
is fast, it can be a significant percentage of the execution time.
Not long ago, we had a case at EDB where a customer was getting custom
plans instead of generic plans and that resulted in a significant
reduction in TPS.
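A sketch of the kind of case meant here (illustrative table; the
plancache plans the first five executions of a prepared statement with
custom plans before it considers a generic plan):

PREPARE q(int) AS SELECT abalance FROM pgbench_accounts WHERE aid = $1;
EXECUTE q(1);  -- planned afresh: custom plan
EXECUTE q(2);  -- ditto, for the first few executions
-- once the generic plan looks no worse than the custom ones it is
-- reused, and the per-execution planning cost disappears from the TPS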

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: perlcritic and perltidy

2018-05-08 Thread Peter Eisentraut
On 5/6/18 12:13, Andrew Dunstan wrote:
> Essentially it adds some vertical whitespace to structures so that the
> enclosing braces etc appear on their own lines. A very typical change
> looks like this:
> 
> - { code  => $code,
> + {
> +   code  => $code,
>     ucs   => $ucs,
>     comment   => $rest,
>     direction => $direction,
>     f => $in_file,
> -   l => $. };
> +   l => $.
> + };

The proposed changes certainly match the style we use in C better, which
is what some of the other settings were also informed by.  So I'm in
favor of the changes -- for braces.

For parentheses, I'm not sure whether this is a good idea:

diff --git a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
index 2971e64..0d3184c 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_EUC_CN.pl
@@ -40,8 +40,11 @@ while (<$in>)
next if (($code & 0xFF) < 0xA1);
next
  if (
-   !( $code >= 0xA100 && $code <= 0xA9FF
-   || $code >= 0xB000 && $code <= 0xF7FF));
+   !(
+  $code >= 0xA100 && $code <= 0xA9FF
+   || $code >= 0xB000 && $code <= 0xF7FF
+   )
+ );

next if ($code >= 0xA2A1 && $code <= 0xA2B0);
next if ($code >= 0xA2E3 && $code <= 0xA2E4);

In a manual C-style indentation, this would just be

next if (!($code >= 0xA100 && $code <= 0xA9FF
   || $code >= 0xB000 && $code <= 0xF7FF));

but somehow the indent runs have managed to spread this compact
expression over the entire screen.

Can we have separate settings for braces and parentheses?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: parallel.sgml for Gather with InitPlans

2018-05-08 Thread Robert Haas
On Mon, May 7, 2018 at 11:34 PM, Amit Kapila  wrote:
> Is this correct?  See below example:

That's not a counterexample to what I wrote.  When parallelism is
used, the InitPlan has to be attached to a parallel-restricted node,
and it is: Gather.  It's true that in the serial plan it's attached to
the Seq Scan, but that doesn't prove anything.  Saying that something
is parallel-restricted is a statement about where parallelism can be
used; it says nothing about what happens without parallelism.

> I think we can cover InitPlan and Subplans that can be parallelized in
> a separate section "Parallel Subplans" or some other heading.  I think
> as of now we have enabled parallel subplans and initplans in a limited
> but useful set of cases (as per the TPC-H benchmark) and it might be
> good to cover them in a separate section.  I can come up with an
> initial patch (or I can review it if you write the patch) if you and
> or others think that makes sense.

We could go that way, but what I wrote is short and -- I think -- accurate.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: perlcritic and perltidy

2018-05-08 Thread Stephen Frost
Greetings,

* Michael Paquier (mich...@paquier.xyz) wrote:
> On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:
> > While I appreciate the support, I'm not sure that you're actually
> > agreeing with me..  I was arguing that braces should be on their own
> > line and therefore there would be a new line for the brace.
> > Specifically, when moving lines between hashes, it's annoying to have to
> > also worry about if the line being copied/moved has braces at the end or
> > not- much easier if they don't and the braces are on their own line.
> 
> I should have read that twice.  Yes we are not on the same line.  Even
> if a brace is on a different line, per your argument it would still be
> nicer to add a comma at the end of each last element of a hash or an
> array, which is what you have done in the tests of pg_dump, but not
> something that the proposed patch does consistently.  If the formatting
> is automated, the way chosen does not matter much, but the extra last
> comma should be consistently present as well?

Yes, that would be nice as well, as you'd be able to move entries around
more easily that way.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: perlcritic and perltidy

2018-05-08 Thread David Steele
On 5/8/18 8:11 AM, Stephen Frost wrote:
> Greetings,
> 
> * Michael Paquier (mich...@paquier.xyz) wrote:
>> On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:
>>> While I appreciate the support, I'm not sure that you're actually
>>> agreeing with me..  I was arguing that braces should be on their own
>>> line and therefore there would be a new line for the brace.
>>> Specifically, when moving lines between hashes, it's annoying to have to
>>> also worry about if the line being copied/moved has braces at the end or
>>> not- much easier if they don't and the braces are on their own line.
>>
>> I should have read that twice.  Yes we are not on the same line.  Even
>> if a brace is on a different line, per your argument it would still be
>> nicer to add a comma at the end of each last element of a hash or an
>> array, which is what you have done in the tests of pg_dump, but not
>> something that the proposed patch does consistently.  If the formatting
>> is automated, the way chosen does not matter much, but the extra last
>> comma should be consistently present as well?
> 
> Yes, that would be nice as well, as you'd be able to move entries around
> more easily that way.

I'm a fan of the final comma as it makes diffs less noisy.

Regards,
-- 
-David
da...@pgmasters.net



signature.asc
Description: OpenPGP digital signature


Re: MAP syntax for arrays

2018-05-08 Thread Ildar Musin

Hello Tom, Ashutosh,

On 07.05.2018 18:16, Tom Lane wrote:

> Ashutosh Bapat writes:
>> Is there a way we can improve unnest() and array_agg() to match
>> the performance you have specified by, let's say, optimizing
>> specially the cases when those two are used together? Identifying
>> that may take some work, but it will not require introducing new
>> syntax.
>
> +1.  The first thing I thought on seeing this proposal was "I wonder
> how long it will be before the SQL committee introduces some syntax
> that uses the MAP keyword and breaks this".
>
> ISTM the planner could be taught to notice the combination of unnest
> and array_agg and produce a special output plan from that.
>
> It is, however, fair to wonder whether this is worth our time to
> optimize.  I've not noticed a lot of people complaining about
> performance of this sort of thing, at least not since we fixed
> array_agg to not be O(N^2).


The main point of this patch was about convenience; the performance
thing came out later just as a side effect :) Many users are familiar
with the "map/reduce/filter" concept that many languages (not only
functional ones) utilize, and my thought was that it would be great to
have those in postgres too.

Apparently there is also a case when unnest/array_agg may not produce
the result we're looking for. I mean multidimensional arrays. E.g.

select array_agg(x * 2)
from unnest(array[[1,2],[3,4],[5,6]]) as x;
    array_agg
-----------------
 {2,4,6,8,10,12}
(1 row)

select map(x * 2 for x in array[[1,2],[3,4],[5,6]]);
       ?column?
-----------------------
 {{2,4},{6,8},{10,12}}
(1 row)

array_agg produces plain arrays no matter what the input was.

There is a new version of the patch in the attachment which introduces
arbitrary per-element expressions (instead of a single function call). So
now the user can specify a placeholder representing one element of the
array and use it in the expression, like the following:

select map (pow(x, 2) - 1 for x in array[1,2,3]);
   ?column?
---------------
 {1,3,7,15,31}
(1 row)


--
Ildar Musin
i.mu...@postgrespro.ru
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index e284fd7..85d76b2 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -886,6 +886,56 @@ ExecInitExprRec(Expr *node, ExprState *state,
 break;
 			}
 
+		case T_MapExpr:
+			{
+MapExpr	   *map = (MapExpr *) node;
+ExprState  *elemstate;
+Oid			resultelemtype;
+
+ExecInitExprRec(map->arrexpr, state, resv, resnull);
+
+resultelemtype = get_element_type(map->resulttype);
+if (!OidIsValid(resultelemtype))
+	ereport(ERROR,
+			(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			 errmsg("target type is not an array")));
+
+/* Construct a sub-expression for the per-element expression */
+elemstate = makeNode(ExprState);
+elemstate->expr = map->elemexpr;
+elemstate->parent = state->parent;
+elemstate->ext_params = state->ext_params;
+elemstate->innermost_caseval = (Datum *) palloc(sizeof(Datum));
+elemstate->innermost_casenull = (bool *) palloc(sizeof(bool));
+
+ExecInitExprRec(map->elemexpr, elemstate,
+&elemstate->resvalue, &elemstate->resnull);
+
+/* Append a DONE step and ready the subexpression */
+scratch.opcode = EEOP_DONE;
+ExprEvalPushStep(elemstate, &scratch);
+ExecReadyExpr(elemstate);
+
+scratch.opcode = EEOP_MAP;
+scratch.d.map.elemexprstate = elemstate;
+scratch.d.map.resultelemtype = resultelemtype;
+
+if (elemstate)
+{
+	/* Set up workspace for array_map */
+	scratch.d.map.amstate =
+		(ArrayMapState *) palloc0(sizeof(ArrayMapState));
+}
+else
+{
+	/* Don't need workspace if there's no subexpression */
+	scratch.d.map.amstate = NULL;
+}
+
+ExprEvalPushStep(state, &scratch);
+break;
+			}
+
 		case T_OpExpr:
 			{
 OpExpr	   *op = (OpExpr *) node;
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index 9d6e25a..b2cbc45 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -328,6 +328,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
 		&&CASE_EEOP_FUNCEXPR_STRICT,
 		&&CASE_EEOP_FUNCEXPR_FUSAGE,
 		&&CASE_EEOP_FUNCEXPR_STRICT_FUSAGE,
+		&&CASE_EEOP_MAP,
 		&&CASE_EEOP_BOOL_AND_STEP_FIRST,
 		&&CASE_EEOP_BOOL_AND_STEP,
 		&&CASE_EEOP_BOOL_AND_STEP_LAST,
@@ -699,6 +700,13 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
 			EEO_NEXT();
 		}
 
+		EEO_CASE(EEOP_MAP)
+		{
+			ExecEvalMapExpr(state, op, econtext);
+
+			EEO_NEXT();
+		}
+
 		/*
 		 * If any of its clauses is FALSE, an AND's result is FALSE regardless
 		 * of the states of the rest of the clauses, so we can stop evaluating
@@ -2230,6 +2238,33 @@ ExecEvalFuncExprStrictFusage(ExprState *state, ExprEvalStep *op,
 }
 
 /*
+ * Evaluate a MapExpr expression.
+ *
+ * Source array is in step's result variable.
+ */
+void
+ExecEvalMapExpr(ExprState *s

Re: MAP syntax for arrays

2018-05-08 Thread Ildar Musin


On 08.05.2018 15:49, Ildar Musin wrote:

> select map (pow(x, 2) - 1 for x in array[1,2,3]);


Sorry, the example should be:

select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);
   ?column?
---------------
 {1,3,7,15,31}
(1 row)

--
Ildar Musin
i.mu...@postgrespro.ru



Re: [HACKERS] Parallel Append implementation

2018-05-08 Thread Robert Haas
On Tue, May 8, 2018 at 12:10 AM, Thomas Munro wrote:
> It's not a scan, it's not a join and it's not an aggregation so I
> think it needs to be in a new <sect2> at the same level as those
> others.  It's a different kind of thing.

I'm a little skeptical about that idea because I'm not sure it's
really in the same category as far as importance is concerned, but I
don't have a better idea.  Here's a patch.  I'm worried this is too
much technical jargon, but I don't know how to explain it any more
simply.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


parallel-append-doc.patch
Description: Binary data


Re: MAP syntax for arrays

2018-05-08 Thread Chapman Flack
On 05/08/2018 08:57 AM, Ildar Musin wrote:
> 
> select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);

I wonder how efficient an implementation would be possible strictly
as a function, without grammar changes?

While PostgreSQL certainly has extensions to and divergences from
standard SQL syntax, some historical and some recent, it seems like
there ought to be a highish bar for adding new ones; or, looking at it
another way, has this feature been proposed to the SQL committee?
It doesn't sound like a bad idea, and supporting new syntax for it
would be an easier call it if it were known to be in an SQL draft
somewhere.

-Chap



Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Robert Haas
On Mon, May 7, 2018 at 12:54 PM, Юрий Соколов  wrote:
>> Even if we have that, or something with similar effects, it's still
>> desirable to avoid bumping the usage count multiple times for accesses
>> that happen close together in time.  I don't really agree with Yura
>> Sokolov's proposal for how to prevent that problem, because during
>> periods when no buffer eviction is happening it basically throws out
>> all of the data: no items are replaced, and the usage count can't be
>> bumped again until NBlocks/32 items are replaced.  That's bad.
>
> That is not as bad as it seems at first glance: during periods when
> no buffer eviction is happening, all information is almost irrelevant,
> because it is naturally outdated. And due to the choice of "batch" size (25),
> there is always a window between "NBlocks/32 items replaced" and
> "this item is considered for replacement": even if all items in
> 25/32*NBlocks had a non-zero usage count, then there are at least
> 7/32*NBlocks to consider before this item could be replaced, during
> which the usage count can be incremented.

I don't agree that the information is almost irrelevant.  If we have a
working set that starts out fitting within shared_buffers and then
grows larger, we want to know which parts of the data were being
accessed frequently just prior to the point where we started evicting
things.  Otherwise we basically evict at random for a while, and
that's not good.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Should we add GUCs to allow partition pruning to be disabled?

2018-05-08 Thread Justin Pryzby
On Mon, May 07, 2018 at 06:00:59PM +1200, David Rowley wrote:
> Many thanks for reviewing this.

2nd round - from the minimalist department:

+partitions which cannot possibly contain any matching records.
maybe: partitions which cannot match any records.

+   
+Partition pruning done during execution can be performed at any of the
+following times:

remove "done"?

+   number of partitions which were removed during this phase of pruning by
remove "of prunning"

Justin



Re: cannot drop replication slot if server is running in single-user mode

2018-05-08 Thread Robert Haas
On Thu, Mar 29, 2018 at 5:51 PM, Andres Freund  wrote:
>> > 2018-03-06 13:20:24.391 GMT [14869] ERROR:  epoll_ctl() failed: Bad file
>> > descriptor
>>
>> I can confirm this bug exists in single-user mode.
>
> I'm not sure we need to do anything about this, personally. This seems
> like a fairly rare thing to do in a mode that's definitely not intended
> to be general purpose.

Mmmph.  I don't really think it's possible to view a user-visible
EBADF as anything other than a coding error.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: MAP syntax for arrays

2018-05-08 Thread Chapman Flack
On 05/08/2018 09:19 AM, Chapman Flack wrote:
> 
> While PostgreSQL certainly has extensions to and divergences from
> standard SQL syntax, some historical and some recent, it seems like
> there ought to be a highish bar for adding new ones; or, looking at it
> another way, has this feature been proposed to the SQL committee?
> It doesn't sound like a bad idea, and supporting new syntax for it
> would be an easier call if it were known to be in an SQL draft
> somewhere.

There seems to have been some work at Databricks adding higher-order
function syntax to their SQL; they've chosen the name 'transform'
for 'map', and also provided 'exists', 'filter', and 'aggregate'.

https://databricks.com/blog/2017/05/24/working-with-nested-data-using-higher-order-functions-in-sql-on-databricks.html

-Chap



Re: Cast jsonb to numeric, int, float, bool

2018-05-08 Thread Teodor Sigaev

How about "cannot cast jsonb $json_type to $sql_type" where $json_type
is the type inside the jsonb (e.g. string, number, boolean, array,
object)?


Yes, that sounds pretty good.


Does anybody have any objections to the patch?
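For example, with the patch applied (the first case is taken from the
updated regression expectations below; the second follows from the same
logic):

select '[]'::jsonb::bool;
ERROR:  cannot cast jsonb array to boolean
select '{"a": 1}'::jsonb::int4;
ERROR:  cannot cast jsonb object to int4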

--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index 9d2b89f90c..9f72984fac 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -1857,7 +1857,7 @@ jsonb_object_agg_finalfn(PG_FUNCTION_ARGS)
 /*
  * Extract scalar value from raw-scalar pseudo-array jsonb.
  */
-static JsonbValue *
+static bool
 JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 {
 	JsonbIterator *it;
@@ -1865,7 +1865,11 @@ JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 	JsonbValue	tmp;
 
 	if (!JsonContainerIsArray(jbc) || !JsonContainerIsScalar(jbc))
-		return NULL;
+	{
+		/* inform caller about actual type of container */
+		res->type = (JsonContainerIsArray(jbc)) ? jbvArray : jbvObject;
+		return false;
+	}
 
 	/*
 	 * A root scalar is stored as an array of one element, so we get the array
@@ -1887,7 +1891,40 @@ JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 	tok = JsonbIteratorNext(&it, &tmp, true);
 	Assert(tok == WJB_DONE);
 
-	return res;
+	return true;
+}
+
+/*
+ * Map jsonb types to human-readable names
+ */
+static char*
+JsonbTypeName(enum jbvType type)
+{
+	static struct
+	{
+		enum jbvType	type;
+		char		   *typename;
+	}
+		names[] =
+	{
+		{ jbvNull, "null" },
+		{ jbvString, "string" },
+		{ jbvNumeric, "numeric" },
+		{ jbvBool, "boolean" },
+		{ jbvArray, "array" },
+		{ jbvObject, "object" },
+		{ jbvBinary, "array or object" }
+	};
+	int i;
+
+	for (i = 0; i < lengthof(names); i++)
+		if (names[i].type == type)
+			return names[i].typename;
+
+	return "unknown";
+}
[...]
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvBool)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be boolean")));
+ errmsg("cannot cast jsonb %s to boolean", JsonbTypeName(v.type;
 
 	PG_FREE_IF_COPY(in, 0);
 
@@ -1916,7 +1953,7 @@ jsonb_numeric(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to numeric", JsonbTypeName(v.type;
 
 	/*
 	 * v.val.numeric points into jsonb body, so we need to make a copy to
@@ -1939,7 +1976,7 @@ jsonb_int2(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to int2", JsonbTypeName(v.type;
 
 	retValue = DirectFunctionCall1(numeric_int2,
    NumericGetDatum(v.val.numeric));
@@ -1959,7 +1996,7 @@ jsonb_int4(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to int4", JsonbTypeName(v.type;
 
 	retValue = DirectFunctionCall1(numeric_int4,
    NumericGetDatum(v.val.numeric));
@@ -1979,7 +2016,7 @@ jsonb_int8(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to int8", JsonbTypeName(v.type;
 
 	retValue = DirectFunctionCall1(numeric_int8,
    NumericGetDatum(v.val.numeric));
@@ -1999,7 +2036,7 @@ jsonb_float4(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to float4", JsonbTypeName(v.type;
 
 	retValue = DirectFunctionCall1(numeric_float4,
    NumericGetDatum(v.val.numeric));
@@ -2019,7 +2056,7 @@ jsonb_float8(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
 		ereport(ERROR,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+ errmsg("cannot cast jsonb %s to float8", JsonbTypeName(v.type;
 
 	retValue = DirectFunctionCall1(numeric_float8,
    NumericGetDatum(v.val.numeric));
diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out
index f705417b09..0be1fc97fc 100644
--- a/src/test/regress/expected/jsonb.out
+++ b/src/test/regress/expected/jsonb.out
@@ -4321,7 +4321,7 @@ select 'true'::jsonb::bool;
 (1 row)
 
 select '[]'::jsonb::bool;
-ERROR:  jsonb value must be boolean
+ERROR:  cannot cast jsonb array to boolean
 select '1.0'::jsonb::float;
  float8 
 
@@ -4329,7 +4329,7 @@ select '1.0'::jsonb::float;
 (1 row)
 
 select '[1.0]'::jsonb::float;
-ERROR:  jsonb value must be num

Re: Exposing guc_malloc/strdup/realloc for plugins?

2018-05-08 Thread Euler Taveira
2018-05-08 3:46 GMT-03:00 Michael Paquier :
> While hacking on an extension, I ended up doing things similar
> to guc_malloc & friends for the allocation of a GUC parameter, for malloc
> portability.  While it's not really a big deal to copy/paste this
> code, I am wondering if it would make sense to expose them to extension
> developers.  Please see the attached for the simple idea.
>
Although I haven't needed similar code in my extensions, code reuse is
always a good goal.


-- 
   Euler Taveira                  Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



Re: perlcritic and perltidy

2018-05-08 Thread Andrew Dunstan


On 05/08/2018 08:31 AM, David Steele wrote:
> On 5/8/18 8:11 AM, Stephen Frost wrote:
>> Greetings,
>>
>> * Michael Paquier (mich...@paquier.xyz) wrote:
>>> On Sun, May 06, 2018 at 09:14:06PM -0400, Stephen Frost wrote:
 While I appreciate the support, I'm not sure that you're actually
 agreeing with me..  I was arguing that braces should be on their own
 line and therefore there would be a new line for the brace.
 Specifically, when moving lines between hashes, it's annoying to have to
 also worry about if the line being copied/moved has braces at the end or
 not- much easier if they don't and the braces are on their own line.
>>> I should have read that twice.  Yes we are not on the same line.  Even
>>> if a brace is on a different line, per your argument it would still be
>>> nicer to add a comma at the end of each last element of a hash or an
>>> array, which is what you have done in the tests of pg_dump, but not
>>> something that the proposed patch does consistently.  If the formatting
>>> is automated, the way chosen does not matter much, but the extra last
>>> comma should be consistently present as well?
>> Yes, that would be nice as well, as you'd be able to move entries around
>> more easily that way.
> I'm a fan of the final comma as it makes diffs less noisy.
>


Me too.


AFAICT there is no perltidy setting to add them where they are missing.
There is a perlcritic setting to detect them in lists, however. Here is
the output:

andrew@emma:pg_head (master)$ { find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print; \
    find . -type f -perm -100 -exec file {} \; -print | \
    egrep -i ':.*perl[0-9]*\>' | cut -d: -f1; } | sort -u | \
    xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas
./src/backend/catalog/Catalog.pm: List declaration without trailing
comma at line 30, column 23.  See page 17 of PBP.  (Severity: 1)
./src/backend/catalog/genbki.pl: List declaration without trailing comma
at line 242, column 19.  See page 17 of PBP.  (Severity: 1)
./src/backend/catalog/genbki.pl: List declaration without trailing comma
at line 627, column 20.  See page 17 of PBP.  (Severity: 1)
./src/backend/utils/mb/Unicode/UCS_to_most.pl: List declaration without
trailing comma at line 23, column 16.  See page 17 of PBP.  (Severity: 1)
./src/backend/utils/mb/Unicode/UCS_to_SJIS.pl: List declaration without
trailing comma at line 21, column 19.  See page 17 of PBP.  (Severity: 1)
./src/interfaces/ecpg/preproc/check_rules.pl: List declaration without
trailing comma at line 38, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma
at line 1468, column 14.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma
at line 1657, column 16.  See page 17 of PBP.  (Severity: 1)
./src/test/perl/PostgresNode.pm: List declaration without trailing comma
at line 1697, column 12.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without
trailing comma at line 119, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without
trailing comma at line 125, column 20.  See page 17 of PBP.  (Severity: 1)
./src/test/recovery/t/003_recovery_targets.pl: List declaration without
trailing comma at line 131, column 20.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at
line 22, column 28.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at
line 81, column 17.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at
line 653, column 14.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Install.pm: List declaration without trailing comma at
line 709, column 15.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma
at line 43, column 24.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma
at line 55, column 29.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma
at line 59, column 31.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma
at line 75, column 25.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/Mkvcbuild.pm: List declaration without trailing comma
at line 623, column 15.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma
at line 102, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma
at line 121, column 13.  See page 17 of PBP.  (Severity: 1)
./src/tools/msvc/vcregress.pl: List declaration without trailing comma
at line 

Re: MAP syntax for arrays

2018-05-08 Thread Peter Eisentraut
On 5/8/18 09:19, Chapman Flack wrote:
> On 05/08/2018 08:57 AM, Ildar Musin wrote:
>>
>> select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);
> 
> I wonder how efficient an implementation would be possible strictly
> as a function, without grammar changes?

Yeah, you can pass a function to another function (using regprocedure or
just oid), so this should be possible entirely in user space.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: MAP syntax for arrays

2018-05-08 Thread Alvaro Herrera
Peter Eisentraut wrote:
> On 5/8/18 09:19, Chapman Flack wrote:
> > On 05/08/2018 08:57 AM, Ildar Musin wrote:
> >>
> >> select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);
> > 
> > I wonder how efficient an implementation would be possible strictly
> > as a function, without grammar changes?
> 
> Yeah, you can pass a function to another function (using regprocedure or
> just oid), so this should be possible entirely in user space.

How would you invoke it?  It seems you'd be forced to use EXECUTE in a
plpgsql function, or a C function.
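For illustration, a user-space sketch along those lines (a hypothetical
map() helper; assumptions: one-argument functions only, and the result
element type equals the input element type):

CREATE FUNCTION map(fn regprocedure, arr anyarray)
RETURNS anyarray LANGUAGE plpgsql AS $$
DECLARE
    result arr%TYPE;
BEGIN
    -- EXECUTE is needed because plpgsql cannot call a function by oid
    -- directly; regproc gives a (schema-qualified) name to splice in
    EXECUTE format('SELECT array_agg(%s(x)) FROM unnest($1) AS x',
                   fn::oid::regproc)
       INTO result
      USING arr;
    RETURN result;
END
$$;

-- usage: SELECT map('upper(text)'::regprocedure, ARRAY['a','b']);  -- {A,B}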

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Юрий Соколов
2018-05-08 16:21 GMT+03:00 Robert Haas :
>
> On Mon, May 7, 2018 at 12:54 PM, Юрий Соколов wrote:
> >> Even if we have that, or something with similar effects, it's still
> >> desirable to avoid bumping the usage count multiple times for accesses
> >> that happen close together in time.  I don't really agree with Yura
> >> Sokolov's proposal for how to prevent that problem, because during
> >> periods when no buffer eviction is happening it basically throws out
> >> all of the data: no items are replaced, and the usage count can't be
> >> bumped again until NBlocks/32 items are replaced.  That's bad.
> >
> > That is not as bad as it seems at first glance: during periods when
> > no buffer eviction is happening, all information is almost irrelevant,
> > because it is naturally outdated. And due to the choice of "batch"
> > size (25), there is always a window between "NBlocks/32 items
> > replaced" and "this item is considered for replacement": even if all
> > items in 25/32*NBlocks had a non-zero usage count, then there are at
> > least 7/32*NBlocks to consider before this item could be replaced,
> > during which the usage count can be incremented.
>
> I don't agree that the information is almost irrelevant.  If we have a
> working set that starts out fitting within shared_buffers and then
> grows larger, we want to know which parts of the data were being
> accessed frequently just prior to the point where we started evicting
> things.

"just prior to the point" means we have to have machinery for information
expiration without eviction. For example, same "clock hand" should walk
over all buffers continiously, and decrement "usage count", but without
eviction performed. Right?

As alternative, some approximation of Working Set algorithm could be used:
- on every access "access time" should be written to item,
- items accessed before some "expiration interval" are considered for
expiration.
This way, there is also fresh information about usage even without any
expiration performed yet.

"usage count" still have to be used to track access frequency, but it
should not be just incremented:
- there should be formula for "corrected count", that depends on
  last access time, written usage count and over-all system access rate.
- increment should look like:
  usage_count = corrected_count(cur_time, access_time, usage_count,
access_rate) + increment(cur_time, access_time, access_rate)
- probably, expiration should rely on "corrected_count" either,
  ie evict if "corrected_count" == 0

To smooth access spikes, new values for "usage count" and "access time"
should be written not on every access, but once in some period.

Oh, looks like I'm inventing another kind of bicycle :-(
and this one is unlikely to be good.

With regards,
Sokolov Yura


Re: MAP syntax for arrays

2018-05-08 Thread Andreas Karlsson

On 05/08/2018 02:49 PM, Ildar Musin wrote:

> The main point of this patch was about convenience; the performance
> thing came out later just as a side effect :) Many users are familiar
> with the "map/reduce/filter" concept that many languages (not only
> functional ones) utilize, and my thought was that it would be great to
> have those in postgres too.


Map is a feature I have quite often wished to have, but I am not sure it 
is quite useful enough to be worth adding our own proprietary syntax. It 
would be a pain if the SQL committee started using MAP for something.


Andreas



Re: Cast jsonb to numeric, int, float, bool

2018-05-08 Thread Tom Lane
Teodor Sigaev  writes:
> Does anybody have an objections to patch?

1) Does this really pass muster from the translatability standpoint?
I doubt it.  I'd expect the translation of "cannot cast jsonb string
to int4" to use a translated equivalent of "string", but this code will
not do that.  You can't really fix it by gettext'ing the result of
JsonbTypeName, either, because then you're breaking the rule about not
assembling translated strings from components.

2) Our usual convention for type names that are written in error messages
is to use the SQL-standard names, that is "integer" and "double precision"
and so on.  For instance, float8in and int4in do not use the internal type
names in their messages:

regression=# select 'bogus'::float8;
ERROR:  invalid input syntax for type double precision: "bogus"
LINE 1: select 'bogus'::float8;
   ^
regression=# select 'bogus'::int4;
ERROR:  invalid input syntax for integer: "bogus"
LINE 1: select 'bogus'::int4;
   ^

So I think you made the wrong choices in jsonb_int4 etc.

regards, tom lane



Re: cannot drop replication slot if server is running in single-user mode

2018-05-08 Thread Tom Lane
Robert Haas  writes:
> On Thu, Mar 29, 2018 at 5:51 PM, Andres Freund  wrote:
>> I'm not sure we need to do anything about this, personally. This seems
>> like a fairly rare thing to do in a mode that's definitely not intended
>> to be general purpose.

> Mmmph.  I don't really think it's possible to view a user-visible
> EBADF as anything other than a coding error.

If we think this is worth spending any code at all on, what I'd suggest
is to reject replication-related commands at some early stage if not
IsUnderPostmaster.

regards, tom lane



Re: cannot drop replication slot if server is running in single-user mode

2018-05-08 Thread Alvaro Herrera
Robert Haas wrote:
> On Thu, Mar 29, 2018 at 5:51 PM, Andres Freund  wrote:
> >> > 2018-03-06 13:20:24.391 GMT [14869] ERROR:  epoll_ctl() failed: Bad file
> >> > descriptor
> >>
> >> I can confirm this bug exists in single-user mode.
> >
> > I'm not sure we need to do anything about this, personally. This seems
> > like a fairly rare thing to do in a mode that's definitely not intended
> > to be general purpose.
> 
> Mmmph.  I don't really think it's possible to view a user-visible
> EBADF as anything other than a coding error.

IMO the problem is not the user-visible EBADF -- the problem is that the
user might be attempting to clean up from some previous mistake by
removing a replication slot.  Using single-user mode might be a strange
tool, but it's not completely unreasonable.  Is it really all that
difficult to fix slot acquisition for it?

I agree with Tom that for any other operation we can just reject it
early if not under postmaster, but dropping a slot seems a special case.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: MAP syntax for arrays

2018-05-08 Thread Andrew Gierth
> "Andreas" == Andreas Karlsson  writes:

 Andreas> It would be a pain if the SQL committee started using MAP for
 Andreas> something.

They already did - MAP is a non-reserved keyword in sql2016, used at
least with . Can't see any obvious
conflict with use in expressions, but I haven't checked all the
references.

-- 
Andrew (irc:RhodiumToad)



Re: MAP syntax for arrays

2018-05-08 Thread Peter Eisentraut
On 5/8/18 10:18, Alvaro Herrera wrote:
> Peter Eisentraut wrote:
>> On 5/8/18 09:19, Chapman Flack wrote:
>>> On 05/08/2018 08:57 AM, Ildar Musin wrote:

 select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);
>>>
>>> I wonder how efficient an implementation would be possible strictly
>>> as a function, without grammar changes?
>>
>> Yeah, you can pass a function to another function (using regprocedure or
>> just oid), so this should be possible entirely in user space.
> 
> How would you invoke it?  It seems you'd be forced to use EXECUTE in a
> plpgsql function, or a C function.

Yes, I was thinking about a C function.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: cannot drop replication slot if server is running in single-user mode

2018-05-08 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Robert Haas wrote:
> > On Thu, Mar 29, 2018 at 5:51 PM, Andres Freund  wrote:
> > >> > 2018-03-06 13:20:24.391 GMT [14869] ERROR:  epoll_ctl() failed: Bad 
> > >> > file
> > >> > descriptor
> > >>
> > >> I can confirm this bug exists in single-user mode.
> > >
> > > I'm not sure we need to do anything about this, personally. This seems
> > > like a fairly rare thing to do in a mode that's definitely not intended
> > > to be general purpose.
> > 
> > Mmmph.  I don't really think it's possible to view a user-visible
> > EBADF as anything other than a coding error.
> 
> IMO the problem is not the user-visible EBADF -- the problem is that the
> user might be attempting to clean up from some previous mistake by
> removing a replication slot.  Using single-user mode might be a strange
> tool, but it's not completely unreasonable.  Is it really all that
> difficult to fix slot acquisition for it?

Here's a patch (against pg10) to fix this problem.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From 59eb9253797776f21267d35b608582b5fd3ff67c Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Tue, 8 May 2018 11:48:56 -0300
Subject: [PATCH 1/2] don't try cond variables if single user mode

---
 src/backend/replication/slot.c | 28 ++--
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index a8a16f55e9..a43f402214 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -351,20 +351,28 @@ retry:
 		if (s->in_use && strcmp(name, NameStr(s->data.name)) == 0)
 		{
 			/*
-			 * This is the slot we want.  We don't know yet if it's active, so
-			 * get ready to sleep on it in case it is.  (We may end up not
-			 * sleeping, but we don't want to do this while holding the
-			 * spinlock.)
+			 * This is the slot we want; check if it's active under some other
+			 * process.  In single user mode, we don't need this check.
 			 */
-			ConditionVariablePrepareToSleep(&s->active_cv);
+			if (IsUnderPostmaster)
+			{
+				/*
+				 * Get ready to sleep on it in case it is active.  (We may end
+				 * up not sleeping, but we don't want to do this while holding
+				 * the spinlock.)
+				 */
+				ConditionVariablePrepareToSleep(&s->active_cv);
 
-			SpinLockAcquire(&s->mutex);
+				SpinLockAcquire(&s->mutex);
 
-			active_pid = s->active_pid;
-			if (active_pid == 0)
-				active_pid = s->active_pid = MyProcPid;
+				active_pid = s->active_pid;
+				if (active_pid == 0)
+					active_pid = s->active_pid = MyProcPid;
 
-			SpinLockRelease(&s->mutex);
+				SpinLockRelease(&s->mutex);
+			}
+			else
+				active_pid = MyProcPid;
 			slot = s;
 
 			break;
-- 
2.11.0

>From 008ec88995b2ae1d851033222bfbd8dfebd7a368 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Tue, 8 May 2018 11:49:08 -0300
Subject: [PATCH 2/2] simplify baroque coding

---
 src/backend/replication/slot.c | 34 +++---
 1 file changed, 11 insertions(+), 23 deletions(-)

diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index a43f402214..664bf2e3ad 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -99,7 +99,6 @@ ReplicationSlot *MyReplicationSlot = NULL;
 int			max_replication_slots = 0;	/* the maximum number of replication
 										 * slots */
 
-static void ReplicationSlotDropAcquired(void);
 static void ReplicationSlotDropPtr(ReplicationSlot *slot);
 
 /* internal persistency functions */
@@ -434,7 +433,8 @@ ReplicationSlotRelease(void)
 	 * fail, all that may happen is an incomplete cleanup of the on-disk
 	 * data.
 	 */
-	ReplicationSlotDropAcquired();
+	ReplicationSlotDropPtr(MyReplicationSlot);
+	MyReplicationSlot = NULL;
 }
 
 /*
@@ -520,23 +520,10 @@ ReplicationSlotDrop(const char *name, bool nowait)
 
 	ReplicationSlotAcquire(name, nowait);
 
-	ReplicationSlotDropAcquired();

SQL:2011 Valid-Time Support

2018-05-08 Thread Paul Howells
Hello All,

I am sure this has been discussed somewhere but I have not found anything
specific in the archives.

Has there been, or is there, any current effort to implement SQL:2011
valid-time support in Postgres?  I understand that there have been some
efforts to implement valid-time support in the past, but now that there is
a published standard I think it would be valuable to try to obtain some
level of compatibility.   Is this something the community has decided
against?

Ultimately I am wondering if this is something that would be worth me
spending my time on.

Thanks
Paul


Re: MAP syntax for arrays

2018-05-08 Thread Tom Lane
Peter Eisentraut  writes:
> On 5/8/18 10:18, Alvaro Herrera wrote:
>> How would you invoke it?  It seems you'd be forced to use EXECUTE in a
>> plpgsql function, or a C function.

> Yes, I was thinking about a C function.

The thing actually implementing MAP would presumably be in C,
so this doesn't seem like a problem technically.  But having to
create a function seems like a big usability stumbling block,
probably a big enough one to make the "select array_agg(expression)
from unnest(something)" approach more attractive.

I do see the usability benefit of a dedicated MAP syntax --- I'm
just afraid of getting out in front of the SQL committee on such
things.  I doubt that it's enough nicer than the sub-select way
to justify risking future standards-compliance issues.

Realistically, we're talking about this:

select a, b, (select array_agg(x*2) from unnest(arraycol) x)
from ...

versus something on the order of this:

select a, b, map(x*2 over x from arraycol)
from ...

Yeah, it's a bit shorter, but not that much ... and there's a lot
more you can do with the sub-select syntax, eg add a WHERE filter.
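
For instance, filtering is just a small edit to the sub-select form:

select a, b, (select array_agg(x*2) from unnest(arraycol) x where x > 0)
from ...

while the dedicated syntax would need its own extension for that.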

regards, tom lane



Re: MAP syntax for arrays

2018-05-08 Thread Ildar Musin



On 08.05.2018 17:15, Peter Eisentraut wrote:

On 5/8/18 09:19, Chapman Flack wrote:

On 05/08/2018 08:57 AM, Ildar Musin wrote:


select map (pow(2, x) - 1 for x in array[1,2,3,4,5]);


I wonder how efficient an implementation would be possible
strictly as a function, without grammar changes?


Yeah, you can pass a function to another function (using regprocedure
or just oid), so this should be possible entirely in user space.



The problem with this approach is that the extension would either have to
provide a single map() function whose input and output types are anyarray,
which cannot be used when the user needs to map, say, int[] to text[]; or,
the other way, there would have to be a whole set of map() functions for
different input/output types.

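For example, a single polymorphic declaration (just a sketch; the
signature is hypothetical):

create function map(f regprocedure, a anyarray) returns anyarray ...;

forces the output element type to match the input's, so mapping
length() over a text[] to get an int[] cannot even be declared with
this one signature.
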
Another thing is that this approach is not as versatile, since the user
needs to create a function before running map, while with the proposed
patch they could run an arbitrary expression over the array directly.

--
Ildar Musin
i.mu...@postgrespro.ru



Re: Cast jsonb to numeric, int, float, bool

2018-05-08 Thread Tom Lane
I wrote:
> 1) Does this really pass muster from the translatability standpoint?
> I doubt it.

After further thought about that, it seems that what we typically don't
try to translate is SQL-standard type names, that is, error messages
along the line of "blah blah blah type %s" are considered fine.  So
the problem here is that you factorized the error reporting poorly.
I think you want the callers to look like

if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
cannotCastJsonbValue(v.type, "double precision");

where the subroutine contains the whole ereport() call, and its lookup
table entries are e.g.

gettext_noop("cannot cast jsonb string to type %s")

regards, tom lane



Re: MAP syntax for arrays

2018-05-08 Thread Alvaro Herrera
Andrew Gierth wrote:
> > "Andreas" == Andreas Karlsson  writes:
> 
>  Andreas> It would be a pain if the SQL committee started using MAP for
>  Andreas> something.
> 
> They already did - MAP is a non-reserved keyword in sql2016, used at
> least with . Can't see any obvious
> conflict with use in expressions, but I haven't checked all the
> references.

Ah, so in SQL2011 (and 2016) there already is a designator for routines,
which was a thing we were lacking previously, as I recall.

 <specific routine designator> ::=
     SPECIFIC <routine type> <specific name>
   | <routine type> <member name> [ FOR <schema-resolved user-defined type name> ]

 <routine type> ::=
     ROUTINE
   | FUNCTION
   | PROCEDURE
   | [ INSTANCE | STATIC | CONSTRUCTOR ] METHOD

 <member name> ::=
     <member name alternatives> [ <data type list> ]

 <member name alternatives> ::=
     <schema qualified routine name>
   | <method name>

 <data type list> ::=
     <left paren> [ <data type> [ { <comma> <data type> }... ] ] <right paren>


[elsewhere]

 <method name> ::= <identifier>

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: SQL:2011 Valid-Time Support

2018-05-08 Thread Tom Lane
Paul Howells  writes:
> Has there been or is there any current effort to implement SQL:2011
> valid-time support in Postgres?

Searching the archives, I can only find "valid-time" appearing in these
threads related to temporal query processing:

https://www.postgresql.org/message-id/flat/200702100020.28893.wt%40penguintechs.org
https://www.postgresql.org/message-id/flat/CALNdv1h7TUP24Nro53KecvWB2kwA67p%2BPByDuP6_1GeESTFgSA%40mail.gmail.com

but perhaps that's talking about something different?  Neither of
those threads were discussing features that are already in the
standard, as far as I could see.

Anyway, if it's in the standard, then at least in principle we're open
to it.  There'd probably be some questions about the amount of added
complexity and whether the feature is worth supporting.  I'd suggest
trying to get some community buy-in by circulating a design document
on -hackers before you write any code.

regards, tom lane



Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Vladimir Sitnikov
> Oh, looks like I'm inventing another kind of bicycle :-(

Do you think you could capture a trace or two from a more-or-less
representative application/database?

Discussion of algorithms makes little sense as we all lack traces to
compare/validate.

Vladimir


Re: perlcritic and perltidy

2018-05-08 Thread Alvaro Herrera
Andrew Dunstan wrote:

> Yes. there are separate settings for the three types of brackets. Here's
> what happens if we restrict the vertical tightness settings to parentheses.
> 
> I think that's an unambiguous improvement.

LGTM.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Jesper Pedersen

On 05/08/2018 11:35 AM, Vladimir Sitnikov wrote:

Oh, looks like I'm inventing another kind of bicycle :-(


Do you think you could capture a trace or two from a more-or-less
representative application/database?

Discussion of algorithms makes little sense as we all lack traces to
compare/validate.



I have work loads that I can repeat, so I can help with testing.

Best regards,
 Jesper



Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Vladimir Sitnikov
>I have work loads that I can repeat, so I can help with testing.

That would be great.

Do you think you could use DTrace to capture the trace?
For instance, https://github.com/vlsi/pgsqlstat/blob/pgsqlio/pgsqlio

Vladimir


Re: Cast jsonb to numeric, int, float, bool

2018-05-08 Thread Teodor Sigaev

1) Does this really pass muster from the translatability standpoint?
I doubt it.

Huh, I missed that.


I think you want the callers to look like

if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
cannotCastJsonbValue(v.type, "double precision");

where the subroutine contains the whole ereport() call, and its lookup
table entries are e.g.

gettext_noop("cannot cast jsonb string to type %s")

Thanks for your idea, patch is attached
--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index 9d2b89f90c..3ef71bbade 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -1857,7 +1857,7 @@ jsonb_object_agg_finalfn(PG_FUNCTION_ARGS)
 /*
  * Extract scalar value from raw-scalar pseudo-array jsonb.
  */
-static JsonbValue *
+static bool
 JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 {
 	JsonbIterator *it;
@@ -1865,7 +1865,11 @@ JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 	JsonbValue	tmp;
 
 	if (!JsonContainerIsArray(jbc) || !JsonContainerIsScalar(jbc))
-		return NULL;
+	{
+		/* inform caller about actual type of container */
+		res->type = (JsonContainerIsArray(jbc)) ? jbvArray : jbvObject;
+		return false;
+	}
 
 	/*
 	 * A root scalar is stored as an array of one element, so we get the array
@@ -1887,7 +1891,40 @@ JsonbExtractScalar(JsonbContainer *jbc, JsonbValue *res)
 	tok = JsonbIteratorNext(&it, &tmp, true);
 	Assert(tok == WJB_DONE);
 
-	return res;
+	return true;
+}
+
+/*
+ * Emit correct, translable cast error message
+ */
+static void
+cannotCastJsonbValue(enum jbvType type, const char *sqltype)
+{
+	static struct
+	{
+		enum jbvType	type;
+		char		   *msg;
+	}
+		messages[] =
+	{
+		{ jbvNull,		gettext_noop("cannot cast jsonb null to type %s") },
+		{ jbvString,	gettext_noop("cannot cast jsonb string to type %s") },
+		{ jbvNumeric,	gettext_noop("cannot cast jsonb numeric to type %s") },
+		{ jbvBool,		gettext_noop("cannot cast jsonb boolean to type %s") },
+		{ jbvArray,		gettext_noop("cannot cast jsonb array to type %s") },
+		{ jbvObject,	gettext_noop("cannot cast jsonb object to type %s") },
+		{ jbvBinary,	gettext_noop("cannot cast jsonb array or object to type %s") }
+	};
+	int i;
+
+	for (i = 0; i < lengthof(messages); i++)
+		if (messages[i].type == type)
+			ereport(ERROR,
+					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+					 errmsg(messages[i].msg, sqltype)));
+
+	elog(ERROR, "unknown jsonb type: %d", (int) type);
+}
 
@@ ... @@ jsonb_bool(PG_FUNCTION_ARGS)
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvBool)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be boolean")));
+		cannotCastJsonbValue(v.type, "boolean");
 
 	PG_FREE_IF_COPY(in, 0);
 
@@ -1914,9 +1949,7 @@ jsonb_numeric(PG_FUNCTION_ARGS)
 	Numeric		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "numeric");
 
 	/*
 	 * v.val.numeric points into jsonb body, so we need to make a copy to
@@ -1937,9 +1970,7 @@ jsonb_int2(PG_FUNCTION_ARGS)
 	Datum		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "smallint");
 
 	retValue = DirectFunctionCall1(numeric_int2,
    NumericGetDatum(v.val.numeric));
@@ -1957,9 +1988,7 @@ jsonb_int4(PG_FUNCTION_ARGS)
 	Datum		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "integer");
 
 	retValue = DirectFunctionCall1(numeric_int4,
    NumericGetDatum(v.val.numeric));
@@ -1977,9 +2006,7 @@ jsonb_int8(PG_FUNCTION_ARGS)
 	Datum		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "bigint");
 
 	retValue = DirectFunctionCall1(numeric_int8,
    NumericGetDatum(v.val.numeric));
@@ -1997,9 +2024,7 @@ jsonb_float4(PG_FUNCTION_ARGS)
 	Datum		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "real");
 
 	retValue = DirectFunctionCall1(numeric_float4,
    NumericGetDatum(v.val.numeric));
@@ -2017,9 +2042,7 @@ jsonb_float8(PG_FUNCTION_ARGS)
 	Datum		retValue;
 
 	if (!JsonbExtractScalar(&in->root, &v) || v.type != jbvNumeric)
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("jsonb value must be numeric")));
+		cannotCastJsonbValue(v.type, "double precision");
 
 	retValue = DirectFunctionCall1(numeric_float8,
    NumericGetDatum(v.val.numeric));
diff --git a/src/test/r

Re: SQL:2011 Valid-Time Support

2018-05-08 Thread Peter Eisentraut
On 5/8/18 11:31, Tom Lane wrote:
> Paul Howells  writes:
>> Has there been or is there any current effort to implement SQL:2011
>> valid-time support in Postgres?
> 
> Searching the archives, I can only find "valid-time" appearing in these
> threads related to temporal query processing:
> 
> https://www.postgresql.org/message-id/flat/200702100020.28893.wt%40penguintechs.org

That looks like a pre-standardization variant of the same idea.

> https://www.postgresql.org/message-id/flat/CALNdv1h7TUP24Nro53KecvWB2kwA67p%2BPByDuP6_1GeESTFgSA%40mail.gmail.com

I think those are operators that work on top of having the valid-time
available, but don't do anything about making that time available.

> Anyway, if it's in the standard, then at least in principle we're open
> to it.  There'd probably be some questions about the amount of added
> complexity and whether the feature is worth supporting.  I'd suggest
> trying to get some community buy-in by circulating a design document
> on -hackers before you write any code.

I think there is some interest, so it's worth proceeding like that.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: perlcritic and perltidy

2018-05-08 Thread Peter Eisentraut
On 5/8/18 11:39, Alvaro Herrera wrote:
> Andrew Dunstan wrote:
> 
>> Yes. there are separate settings for the three types of brackets. Here's
>> what happens if we restrict the vertical tightness settings to parentheses.
>>
>> I think that's an unambiguous improvement.
> 
> LGTM.

Yes, that looks better.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: perlcritic and perltidy

2018-05-08 Thread Andrew Dunstan


On 05/08/2018 10:06 AM, Andrew Dunstan wrote:
> { find . -type f -a \( -name
> '*.pl' -o -name '*.pm' \) -print; find . -type f -perm -100
> -exec file {} \; -print    | egrep -i
> ':.*perl[0-9]*\>'    | cut -d: -f1; } | sort -u  |
> xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas



Here's a diff fixing all the places it found. At this stage I don't
think it's worth it. If someone wants to write a perlcritic policy that
identifies missing trailing commas reasonably comprehensively, we can
look again. Otherwise we should just clean them up as we come across them.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
index f387c86..ac19682 100644
--- a/src/backend/catalog/Catalog.pm
+++ b/src/backend/catalog/Catalog.pm
@@ -34,7 +34,7 @@ sub ParseHeader
 		'Oid'   => 'oid',
 		'NameData'  => 'name',
 		'TransactionId' => 'xid',
-		'XLogRecPtr'=> 'pg_lsn');
+		'XLogRecPtr'=> 'pg_lsn',);
 
 	my %catalog;
 	my $declaring_attributes = 0;
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
index fb61db0..0a7d433 100644
--- a/src/backend/catalog/genbki.pl
+++ b/src/backend/catalog/genbki.pl
@@ -245,7 +245,7 @@ my %lookup_kind = (
 	pg_operator => \%operoids,
 	pg_opfamily => \%opfoids,
 	pg_proc => \%procoids,
-	pg_type => \%typeoids);
+	pg_type => \%typeoids,);
 
 
 # Open temp files
@@ -631,7 +631,7 @@ sub gen_pg_attribute
 { name => 'cmin', type => 'cid' },
 { name => 'xmax', type => 'xid' },
 { name => 'cmax', type => 'cid' },
-{ name => 'tableoid', type => 'oid' });
+{ name => 'tableoid', type => 'oid' },);
 			foreach my $attr (@SYS_ATTRS)
 			{
 $attnum--;
diff --git a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
index a50f6f3..6d40d68 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_SJIS.pl
@@ -21,7 +21,7 @@ my $mapping = read_source("CP932.TXT");
 my @reject_sjis = (
 	0xed40 .. 0xeefc, 0x8754 .. 0x875d, 0x878a, 0x8782,
 	0x8784,   0xfa5b,   0xfa54, 0x8790 .. 0x8792,
-	0x8795 .. 0x8797, 0x879a .. 0x879c);
+	0x8795 .. 0x8797, 0x879a .. 0x879c,);
 
 foreach my $i (@$mapping)
 {
diff --git a/src/backend/utils/mb/Unicode/UCS_to_most.pl b/src/backend/utils/mb/Unicode/UCS_to_most.pl
index 4453449..26fd15d 100755
--- a/src/backend/utils/mb/Unicode/UCS_to_most.pl
+++ b/src/backend/utils/mb/Unicode/UCS_to_most.pl
@@ -47,7 +47,7 @@ my %filename = (
 	'ISO8859_16' => '8859-16.TXT',
 	'KOI8R'  => 'KOI8-R.TXT',
 	'KOI8U'  => 'KOI8-U.TXT',
-	'GBK'=> 'CP936.TXT');
+	'GBK'=> 'CP936.TXT',);
 
 # make maps for all encodings if not specified
 my @charsets = (scalar(@ARGV) > 0) ? @ARGV : sort keys(%filename);
diff --git a/src/interfaces/ecpg/preproc/check_rules.pl b/src/interfaces/ecpg/preproc/check_rules.pl
index 6c8b004..566de5d 100644
--- a/src/interfaces/ecpg/preproc/check_rules.pl
+++ b/src/interfaces/ecpg/preproc/check_rules.pl
@@ -43,7 +43,7 @@ my %replace_line = (
 	  => 'CREATE OptTemp TABLE create_as_target AS EXECUTE prepared_name execute_param_clause',
 
 	'PrepareStmtPREPAREnameprep_type_clauseASPreparableStmt' =>
-	  'PREPARE prepared_name prep_type_clause AS PreparableStmt');
+	  'PREPARE prepared_name prep_type_clause AS PreparableStmt',);
 
 my $block= '';
 my $yaccmode = 0;
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 53efb57..6e3a62f 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1470,7 +1470,7 @@ sub lsn
 		'flush'   => 'pg_current_wal_flush_lsn()',
 		'write'   => 'pg_current_wal_lsn()',
 		'receive' => 'pg_last_wal_receive_lsn()',
-		'replay'  => 'pg_last_wal_replay_lsn()');
+		'replay'  => 'pg_last_wal_replay_lsn()',);
 
 	$mode = '' if !defined($mode);
 	croak "unknown mode for 'lsn': '$mode', valid modes are "
@@ -1657,7 +1657,7 @@ sub slot
 	my @columns = (
 		'plugin', 'slot_type',  'datoid', 'database',
 		'active', 'active_pid', 'xmin',   'catalog_xmin',
-		'restart_lsn');
+		'restart_lsn',);
 	return $self->query_hash(
 		'postgres',
 		"SELECT __COLUMNS__ FROM pg_catalog.pg_replication_slots WHERE slot_name = '$slot_name'",
@@ -1696,7 +1696,7 @@ sub pg_recvlogical_upto
 
 	my @cmd = (
 		'pg_recvlogical', '-S', $slot_name, '--dbname',
-		$self->connstr($dbname));
+		$self->connstr($dbname),);
 	push @cmd, '--endpos', $endpos;
 	push @cmd, '-f', '-', '--no-loop', '--start';
 
diff --git a/src/test/recovery/t/003_recovery_targets.pl b/src/test/recovery/t/003_recovery_targets.pl
index 824fa4d..dec17d0 100644
--- a/src/test/recovery/t/003_recovery_targets.pl
+++ b/src/test/recovery/t/003_recovery_targets.pl
@@ -119,19 +119,19 @@ test_recovery_standby('LS

Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Alvaro Herrera
Michael Paquier wrote:

> So the problem appears when an expression needs to use
> COERCION_PATH_ARRAYCOERCE for a type coercion from one type to another,
> which requires an execution state to be able to build the list of
> elements.  The clause matching happens at planning state, so while there
> are surely cases where this could be improved depending on the
> expression type, I propose to just discard all clauses which do not
> match OpExpr and ArrayExpr for now, as per the attached.  It would be
> definitely a good practice to have a default code path returning
> PARTCLAUSE_UNSUPPORTED where the element list cannot be built.
> 
> Thoughts?

I found a related crash and I'm investigating it further.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: perlcritic and perltidy

2018-05-08 Thread Stephen Frost
Greetings,

* Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> On 05/08/2018 10:06 AM, Andrew Dunstan wrote:
> > { find . -type f -a \( -name
> > '*.pl' -o -name '*.pm' \) -print; find . -type f -perm -100
> > -exec file {} \; -print    | egrep -i
> > ':.*perl[0-9]*\>'    | cut -d: -f1; } | sort -u  |
> > xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas
> 
> Here's a diff of all the places it found fixed. At this stage I don't
> think it's worth it. If someone wants to write a perlcritic policy that
> identifies missing trailing commas reasonably comprehensively, we can
> look again. Otherwise we should just clean them up as we come across them.
[...]
> diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
> index f387c86..ac19682 100644
> --- a/src/backend/catalog/Catalog.pm
> +++ b/src/backend/catalog/Catalog.pm
> @@ -34,7 +34,7 @@ sub ParseHeader
>   'Oid'   => 'oid',
>   'NameData'  => 'name',
>   'TransactionId' => 'xid',
> - 'XLogRecPtr'=> 'pg_lsn');
> + 'XLogRecPtr'=> 'pg_lsn',);
>  
>   my %catalog;
>   my $declaring_attributes = 0;

There's not much point adding the ',' unless you're also putting the
');' on the next line, is there..?

Or is that going to be handled in a follow-up patch?

Thanks!

Stephen




Re: [HACKERS] Clock with Adaptive Replacement

2018-05-08 Thread Jesper Pedersen

On 05/08/2018 11:49 AM, Vladimir Sitnikov wrote:

I have work loads that I can repeat, so I can help with testing.


That would be great.

Do you think you could use DTrace to capture the trace?
For instance, https://github.com/vlsi/pgsqlstat/blob/pgsqlio/pgsqlio



DTrace or BPF would be ok.

Best regards,
 Jesper




Re: perlcritic and perltidy

2018-05-08 Thread Alvaro Herrera
Stephen Frost wrote:

> * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
>
> > -   'XLogRecPtr'=> 'pg_lsn');
> > +   'XLogRecPtr'=> 'pg_lsn',);
>
> There's not much point adding the ',' unless you're also putting the
> ');' on the next line, is there..?
> 
> Or is that going to be handled in a follow-up patch?

IMO we should classify this one as pointless uglification and move on.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: perlcritic and perltidy

2018-05-08 Thread Stephen Frost
Greetings,

* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
> 
> > * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> >
> > > - 'XLogRecPtr'=> 'pg_lsn');
> > > + 'XLogRecPtr'=> 'pg_lsn',);
> >
> > There's not much point adding the ',' unless you're also putting the
> > ');' on the next line, is there..?
> > 
> > Or is that going to be handled in a follow-up patch?
> 
> IMO we should classify this one as pointless uglification and move on.

I'm all for the change if we actually get to a result where the lines
can be moved and you don't have to go muck with the extra stuff at the
end of the line (add a comma, or remove a comma, remove or add the
parens, etc).  If we aren't going all the way to get to that point then
I tend to agree that it's not a useful change to make.

Thanks!

Stephen




Re: perlcritic and perltidy

2018-05-08 Thread Andrew Dunstan


On 05/08/2018 12:51 PM, Stephen Frost wrote:
> Greetings,
>
> * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
>> On 05/08/2018 10:06 AM, Andrew Dunstan wrote:
>>> { find . -type f -a \( -name
>>> '*.pl' -o -name '*.pm' \) -print; find . -type f -perm -100
>>> -exec file {} \; -print    | egrep -i
>>> ':.*perl[0-9]*\>'    | cut -d: -f1; } | sort -u  |
>>> xargs perlcritic --quiet --single CodeLayout::RequireTrailingCommas
>> Here's a diff of all the places it found fixed. At this stage I don't
>> think it's worth it. If someone wants to write a perlcritic policy that
>> identifies missing trailing commas reasonably comprehensively, we can
>> look again. Otherwise we should just clean them up as we come across them.
> [...]
>> diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm
>> index f387c86..ac19682 100644
>> --- a/src/backend/catalog/Catalog.pm
>> +++ b/src/backend/catalog/Catalog.pm
>> @@ -34,7 +34,7 @@ sub ParseHeader
>>  'Oid'   => 'oid',
>>  'NameData'  => 'name',
>>  'TransactionId' => 'xid',
>> -'XLogRecPtr'=> 'pg_lsn');
>> +'XLogRecPtr'=> 'pg_lsn',);
>>  
>>  my %catalog;
>>  my $declaring_attributes = 0;
> There's not much point adding the ',' unless you're also putting the
> ');' on the next line, is there..?


No, not really.

>
> Or is that going to be handled in a follow-up patch?
>


No, the current proposal is to keep the vertical tightness settings for
parentheses, which is precisely this set of cases, because otherwise
there are some ugly code effects (see Peter's email upthread).

So I think we're all in agreement to forget this trailing comma thing.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: perlcritic and perltidy

2018-05-08 Thread Stephen Frost
Andrew,

* Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> On 05/08/2018 12:51 PM, Stephen Frost wrote:
> > * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> > There's not much point adding the ',' unless you're also putting the
> > ');' on the next line, is there..?
> 
> No, not really.
> 
> > Or is that going to be handled in a follow-up patch?
> 
> No, the current proposal is to keep the vertical tightness settings for
> parentheses, which is precisely this set of cases, because otherwise
> there are some ugly code effects (see Peter's email upthread).
> 
> So I think we're all in agreement to forget this trailing comma thing.

Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
have them since those will end up on their own line, right?

Thanks!

Stephen




Re: perlcritic and perltidy

2018-05-08 Thread Andrew Dunstan


On 05/08/2018 01:18 PM, Stephen Frost wrote:
> Andrew,
>
> * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
>> On 05/08/2018 12:51 PM, Stephen Frost wrote:
>>> * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
>>> There's not much point adding the ',' unless you're also putting the
>>> ');' on the next line, is there..?
>> No, not really.
>>
>>> Or is that going to be handled in a follow-up patch?
>> No, the current proposal is to keep the vertical tightness settings for
>> parentheses, which is precisely this set of cases, because otherwise
>> there are some ugly code effects (see Peter's email upthread).
>>
>> So I think we're all in agreement to forget this trailing comma thing.
> Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
> have them since those will end up on their own line, right?
>


Yes, but there isn't a perlcritic policy I can find that detects them
reliably. If you know of one we can revisit it. Specifically, the one
from the Pulp collection called RequireTrailingCommaAtNewline didn't
work very well when I tried it.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: perlcritic and perltidy

2018-05-08 Thread Stephen Frost
Andrew,

* Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> On 05/08/2018 01:18 PM, Stephen Frost wrote:
> > * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> >> On 05/08/2018 12:51 PM, Stephen Frost wrote:
> >>> * Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> >>> There's not much point adding the ',' unless you're also putting the
> >>> ');' on the next line, is there..?
> >> No, not really.
> >>
> >>> Or is that going to be handled in a follow-up patch?
> >> No, the current proposal is to keep the vertical tightness settings for
> >> parentheses, which is precisely this set of cases, because otherwise
> >> there are some ugly code effects (see Peter's email upthread).
> >>
> >> So I think we're all in agreement to forget this trailing comma thing.
> > Well, agreed, for parentheses, but for curly-brace blocks, it'd be nice to
> > have them since those will end up on their own line, right?
> 
> Yes, but there isn't a perlcritic policy I can find that detects them
> reliably. If you know of one we can revisit it. Specifically, the one
> from the Pulp collection called RequireTrailingCommaAtNewline didn't
> work very well when I tried it.

Ok, perhaps we can't automate/enforce it, but if everyone is agreed on
it then we should at least consider it something of a policy and, as you
said up-thread, clean things up as we come to them.  I'd love to clean
up the pg_dump regression tests in such a way as to make them simpler to
work with in the future, as long as we're agreed on it and we don't end
up getting complaints from perlcritic/perltidy or having the changes end
up being removed.

Thanks!

Stephen




perlcritic script

2018-05-08 Thread Andrew Dunstan


Here's a small patch to add a script to call perlcritic, in the same way
that we have a script to call perltidy. It also includes a perlcriticrc
file containing a policy to allow octal constants with leading zeros.
That's the only core severity 5 policy we are currently not in compliance
with.


We should probably look at being rather more aggressive with perlcritic.
I've made the buildfarm client code compliant with some exceptions down
to severity level 3. Here are the profile exceptions:

[-Variables::RequireLocalizedPunctuationVars]
[TestingAndDebugging::ProhibitNoWarnings]
allow = once
[-InputOutput::RequireBriefOpen]
[-Subroutines::RequireArgUnpacking]
[-RegularExpressions::RequireExtendedFormatting]
[-Variables::ProhibitPackageVars]
[-ErrorHandling::RequireCarping]
[-ValuesAndExpressions::ProhibitComplexVersion]
[InputOutput::ProhibitBacktickOperators]
only_in_void_context = 1
[-Modules::ProhibitExcessMainComplexity]
[-Subroutines::ProhibitExcessComplexity]
[-ValuesAndExpressions::ProhibitImplicitNewlines]
[-ControlStructures::ProhibitCascadingIfElse]
[-ControlStructures::ProhibitNegativeExpressionsInUnlessAndUntilConditions]
[-ErrorHandling::RequireCheckingReturnValueOfEval ]
[-BuiltinFunctions::ProhibitComplexMappings]

There are also 21 places in the code with "no critic" markings at
severity 3 and above.


cheers


andrew


-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

>From c63ed411fb28d7422fc5a34ddae6d8c6fbf2666f Mon Sep 17 00:00:00 2001
From: Andrew Dunstan 
Date: Mon, 7 May 2018 15:55:25 -0400
Subject: [PATCH] Add a script and a config file to run perlcritic

---
 src/tools/pgperlcritic/perlcriticrc | 12 
 src/tools/pgperlcritic/pgperlcritic | 26 ++
 2 files changed, 38 insertions(+)
 create mode 100644 src/tools/pgperlcritic/perlcriticrc
 create mode 100755 src/tools/pgperlcritic/pgperlcritic

diff --git a/src/tools/pgperlcritic/perlcriticrc b/src/tools/pgperlcritic/perlcriticrc
new file mode 100644
index 000..5ff92cb
--- /dev/null
+++ b/src/tools/pgperlcritic/perlcriticrc
@@ -0,0 +1,12 @@
+##
+#
+# src/tools/pgperlcritic/perlcriticrc
+#
+# config  file for perlcritic for Postgres project
+#
+#
+
+severity = 5
+
+# allow octal constants with leading zeros
+[-ValuesAndExpressions::ProhibitLeadingZeros]
diff --git a/src/tools/pgperlcritic/pgperlcritic b/src/tools/pgperlcritic/pgperlcritic
new file mode 100755
index 000..0449871
--- /dev/null
+++ b/src/tools/pgperlcritic/pgperlcritic
@@ -0,0 +1,26 @@
+#!/bin/sh
+
+# src/tools/pgperlcritic/pgperlcritic
+
+test -f src/tools/pgperlcritic/perlcriticrc || {
+	echo could not find src/tools/pgperlcritic/perlcriticrc
+	exit 1
+	}
+
+set -e
+
+# set this to override default perlcritic program:
+PERLCRITIC=${PERLCRITIC:-perlcritic}
+
+# locate all Perl files in the tree
+{
+	# take all .pl and .pm files
+	find . -type f -a \( -name '*.pl' -o -name '*.pm' \) -print
+	# take executable files that file(1) thinks are perl files
+	find . -type f -perm -100 -exec file {} \; -print |
+	egrep -i ':.*perl[0-9]*\>' |
+	cut -d: -f1
+} |
+sort -u |
+xargs $PERLCRITIC --quiet --profile=src/tools/pgperlcritic/perlcriticrc
+
-- 
2.9.5



Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2018-05-08 Thread Fabien COELHO


Hello Alvaro,


I think that I'll have time for a round of review in the first half of July.
Providing a rebased patch before then would be nice.



Note that even in the absence of a rebased patch, you can apply to an
older checkout if you have some limited window of time for a review.


Yes, sure. I'd like to bring this feature to a committable state, so it will 
have to be rebased at some point anyway.



Looking over the diff, I find that this patch tries to do too much and
needs to be split up.


Yep, I agree that it would help the reviewing process. On the other hand I 
have bad memories about maintaining dependent patches which interfere 
significantly. Maybe that will not be the case with this feature.


Thanks for the advice.

--
Fabien.



Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2018-05-08 Thread Alvaro Herrera
Hello,

Fabien COELHO wrote:

> > Looking over the diff, I find that this patch tries to do too much and
> > needs to be split up.
> 
> Yep, I agree that it would help the reviewing process. On the other hand I
> have bad memories about maintaining dependent patches which interfere
> significantly.

Sure.  I suggest not posting these patches separately -- instead, post
as a series of commits in a single email, attaching files from "git
format-patch".

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Cast jsonb to numeric, int, float, bool

2018-05-08 Thread Tom Lane
Teodor Sigaev  writes:
> Thanks for your idea, patch is attached

Looks mostly fine from here.  A couple nitpicks:

* s/translable/translatable/

* Personally I'd try harder to make the lookup table constant, that is

+   static const struct
+   {
+   enum jbvTypetype;
+   const char  *msg;
+   }
+   messages[] =
+   {

in hopes that it'd end up in the text segment.

regards, tom lane



Re: perlcritic script

2018-05-08 Thread Peter Eisentraut
On 5/8/18 13:57, Andrew Dunstan wrote:
> + # take executable files that file(1) thinks are perl files
> + find . -type f -perm -100 -exec file {} \; -print |
> + egrep -i ':.*perl[0-9]*\>' |

How portable is that?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Cutting initdb's runtime (Perl question embedded)

2018-05-08 Thread Tom Lane
Resurrecting this old thread ...

I decided it'd be interesting to re-examine where initdb's runtime
is going, seeing that we just got done with a lot of bootstrap data
restructuring.  I stuck some timing code into initdb, and got
results like this:

creating directory /home/postgres/testversion/data ... ok
elapsed = 0.256 msec
creating subdirectories ... ok
elapsed = 2.385 msec
selecting default max_connections ... 100
elapsed = 13.528 msec
selecting default shared_buffers ... 128MB
elapsed = 13.699 msec
selecting dynamic shared memory implementation ... posix
elapsed = 0.129 msec
elapsed = 281.335 msec in select_default_timezone
creating configuration files ... ok
elapsed = 1.319 msec
running bootstrap script ... ok
elapsed = 162.143 msec
performing post-bootstrap initialization ... ok
elapsed = 832.569 msec

Sync to disk skipped.

real0m1.316s
user0m0.941s
sys 0m0.395s

(I'm using "initdb -N" because the cost of the sync step is so
platform-dependent, and it's not interesting anyway for buildfarm
or make check-world testing.  Also, I rearranged the code slightly
so that select_default_timezone could be timed separately from the
rest of the "creating configuration files" step.)

In trying to break down the "post-bootstrap initialization" step
a bit further, I soon realized that trying to time the sub-steps from
initdb is useless.  initdb is just shoving bytes down the pipe as
fast as the kernel will let it; it has no idea how long it's taking
the backend to do any one query or queries.  So I ended up adding
"-c log_statement=all -c log_min_duration_statement=0" to the
backend_options, and digging query durations out of the log output.
I got these totals for the major steps in the post-boot run:

pg_authid setup: 0.909 ms
pg_depend setup: 64.980 ms
system views: 106.221 ms
pg_description: 39.665 ms
pg_collation: 65.162 ms
conversions: 72.024 ms
text search: 29.454 ms
init-acl hacking: 14.339 ms
information schema: 188.497 ms
plpgsql: 2.531 ms
analyze/vacuum/additional db creation: 171.762 ms

So the conversions don't look nearly as interesting as Andreas
suggested upthread.  Pushing them into .bki format would at best
save ~ 70 ms out of 1300.  Which is not nothing, but it's not
going to change the world either.

Really the only thing here that jumps out as being unduly expensive for
what it's doing is select_default_timezone.  That is, and always has been,
a brute-force algorithm; I wonder if there's a way to do better?  We can
probably guess that every non-Windows platform is using the IANA timezone
data these days.  If there were some way to extract the name of the active
timezone setting directly, we wouldn't have to try to reverse-engineer it.
But I don't know of any portable way :-(
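
(The closest thing I can think of is decidedly non-portable: on many
modern Linux systems /etc/localtime is a symlink into the IANA tree, so
something like the sketch below could short-circuit the search where it
applies.  Purely an illustration, not a proposal:)

#include <string.h>
#include <unistd.h>

/*
 * Return the zone name if /etc/localtime is a symlink into a
 * .../zoneinfo/ tree, else NULL; caller falls back to the existing
 * brute-force scan.
 */
static const char *
guess_system_timezone(char *buf, size_t buflen)
{
	ssize_t		len = readlink("/etc/localtime", buf, buflen - 1);
	const char *p;

	if (len < 0)
		return NULL;
	buf[len] = '\0';
	p = strstr(buf, "/zoneinfo/");
	return p ? p + strlen("/zoneinfo/") : NULL;
}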

regards, tom lane



Re: Global snapshots

2018-05-08 Thread Stas Kelvich


> On 7 May 2018, at 20:04, Robert Haas  wrote:
> 
> But what happens if a transaction starts on node A at time T0 but
> first touches node B at a much later time T1, such that T1 - T0 >
> global_snapshot_defer_time?
> 

Such a transaction will get a "global snapshot too old" error.

In principle such behaviour can be avoided by calculating the oldest
global csn among all cluster nodes, so that the oldest xmin on a
particular node is held back only when there is some open old
transaction on another node. That is easy to do from the global
snapshot point of view, but it's not obvious how to integrate it into
postgres_fdw. Probably that will require a bidirectional connection
between postgres_fdw nodes (which would also make distributed deadlock
detection easy).

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: perlcritic script

2018-05-08 Thread Tom Lane
Peter Eisentraut  writes:
> On 5/8/18 13:57, Andrew Dunstan wrote:
>> +# take executable files that file(1) thinks are perl files
>> +find . -type f -perm -100 -exec file {} \; -print |
>> +egrep -i ':.*perl[0-9]*\>' |

> How portable is that?

Well, it's the same code that's in pgperltidy ... but I agree that
it's making a lot of assumptions about the behavior of file(1).

regards, tom lane



Re: perlcritic script

2018-05-08 Thread Peter Eisentraut
On 5/8/18 16:51, Tom Lane wrote:
> Peter Eisentraut  writes:
>> On 5/8/18 13:57, Andrew Dunstan wrote:
>>> +   # take executable files that file(1) thinks are perl files
>>> +   find . -type f -perm -100 -exec file {} \; -print |
>>> +   egrep -i ':.*perl[0-9]*\>' |
> 
>> How portable is that?
> 
> Well, it's the same code that's in pgperltidy ... but I agree that
> it's making a lot of assumptions about the behavior of file(1).

OK, but then it's not a problem for this thread.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Parallel Append implementation

2018-05-08 Thread Thomas Munro
On Wed, May 9, 2018 at 1:15 AM, Robert Haas  wrote:
> On Tue, May 8, 2018 at 12:10 AM, Thomas Munro
>  wrote:
>> It's not a scan, it's not a join and it's not an aggregation so I
>> think it needs to be in a new  as the same level as those
>> others.  It's a different kind of thing.
>
> I'm a little skeptical about that idea because I'm not sure it's
> really in the same category as far as importance is concerned, but I
> don't have a better idea.  Here's a patch.  I'm worried this is too
> much technical jargon, but I don't know how to explain it any more
> simply.

+scanning them more than once would preduce duplicate results.  Plans that

s/preduce/produce/

+Append or MergeAppend plan node.
vs.
+Append of regular Index Scan plans; each

I think we should standardise on Foo Bar,
FooBar or foo bar when
discussing executor nodes on this page.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: perlcritic script

2018-05-08 Thread Tels
Moin,

On Tue, May 8, 2018 5:03 pm, Peter Eisentraut wrote:
> On 5/8/18 16:51, Tom Lane wrote:
>> Peter Eisentraut  writes:
>>> On 5/8/18 13:57, Andrew Dunstan wrote:
 +  # take executable files that file(1) thinks are perl files
 +  find . -type f -perm -100 -exec file {} \; -print |
 +  egrep -i ':.*perl[0-9]*\>' |
>>
>>> How portable is that?
>>
>> Well, it's the same code that's in pgperltidy ... but I agree that
>> it's making a lot of assumptions about the behavior of file(1).
>
> OK, but then it's not a problem for this thread.

If I'm not mistaken, the first line in the "find" code could be more
compact like so:

 find . -type f -iname '*.p[lm]'

(-print is the default, and the -name argument takes a pattern anyway. And
IMHO it could be "-iname" so we catch "test.PM", too?)

Also, "-print" does not handle filenames with newlines well, so "-print0"
should be used; however, this can be tricky when the next step isn't xargs,
but sort. Looking at the man page, on my system this would be:

 find . -type f -name '*.p[lm]' -print0 | sort -u -z | xargs -0 ...

Not sure if that is more, or less, portable than the original -print
variant, tho.

Best regards,

Tels




Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Alvaro Herrera
So I found that this query also crashed (using your rig),

create table coercepart (a varchar) partition by list (a);
create table coercepart_ab partition of coercepart for values in ('ab');
create table coercepart_bc partition of coercepart for values in ('bc');
create table coercepart_cd partition of coercepart for values in ('cd');
explain (costs off) select * from coercepart where a ~ any ('{ab}');

The reason for this crash is that gen_partprune_steps_internal() is
unable to generate any steps for the clause -- which is natural, since
the operator is not in a btree opclass.  There are various callers
of gen_partprune_steps_internal that are aware that it could return an
empty set of steps, but this one in match_clause_to_partition_key for
the ScalarArrayOpExpr case was being a bit too optimistic.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index f954b92a6b..f8aaccfa18 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -571,8 +571,9 @@ get_matching_partitions(PartitionPruneContext *context, List *pruning_steps)
  * For BoolExpr clauses, we recursively generate steps for each argument, and
  * return a PartitionPruneStepCombine of their results.
  *
- * The generated steps are added to the context's steps list.  Each step is
- * assigned a step identifier, unique even across recursive calls.
+ * The return value is a list of the steps generated, which are also added to
+ * the context's steps list.  Each step is assigned a step identifier, unique
+ * even across recursive calls.
  *
  * If we find clauses that are mutually contradictory, or a pseudoconstant
  * clause that contains false, we set *contradictory to true and return NIL
@@ -1599,6 +1600,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
 		List	   *elem_exprs,
 				   *elem_clauses;
 		ListCell   *lc1;
+		bool		contradictory;
 
 		if (IsA(leftop, RelabelType))
 			leftop = ((RelabelType *) leftop)->arg;
@@ -1617,7 +1619,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
 		 * Only allow strict operators.  This will guarantee nulls are
 		 * filtered.
 		 */
-		if (!op_strict(saop->opno))
+		if (!op_strict(saop_op))
 			return PARTCLAUSE_UNSUPPORTED;
 
 		/* Useless if the array has any volatile functions. */
@@ -1690,7 +1692,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
 				elem_exprs = lappend(elem_exprs, elem_expr);
 			}
 		}
-		else
+		else if (IsA(rightop, ArrayExpr))
 		{
 			ArrayExpr  *arrexpr = castNode(ArrayExpr, rightop);
 
@@ -1704,6 +1706,11 @@ match_clause_to_partition_key(RelOptInfo *rel,
 
 			elem_exprs = arrexpr->elements;
 		}
+		else
+		{
+			/* Give up on any other clause types. */
+			return PARTCLAUSE_UNSUPPORTED;
+		}
 
 		/*
 		 * Now generate a list of clauses, one for each array element, of the
@@ -1722,36 +1729,21 @@ match_clause_to_partition_key(RelOptInfo *rel,
 		}
 
 		/*
-		 * Build a combine step as if for an OR clause or add the clauses to
-		 * the end of the list that's being processed currently.
+		 * If we have an ANY clause and multiple elements, first turn the list
+		 * of clauses into an OR expression.
 		 */
 		if (saop->useOr && list_length(elem_clauses) > 1)
-		{
-			Expr	   *orexpr;
-			bool		contradictory;
+			elem_clauses = list_make1(makeBoolExpr(OR_EXPR, elem_clauses, -1));
 
-			orexpr = makeBoolExpr(OR_EXPR, elem_clauses, -1);
-			*clause_steps =
-				gen_partprune_steps_internal(context, rel, list_make1(orexpr),
-											 &contradictory);
-			if (contradictory)
-				return PARTCLAUSE_MATCH_CONTRADICT;
-
-			Assert(list_length(*clause_steps) == 1);
-			return PARTCLAUSE_MATCH_STEPS;
-		}
-		else
-		{
-			bool		contradictory;
-
-			*clause_steps =
-				gen_partprune_steps_internal(context, rel, elem_clauses,
-

Re: Setting libpq TCP keepalive parameters from environment

2018-05-08 Thread Craig Ringer
On 8 May 2018 at 16:15, Oleksandr Shulgin  wrote:
> Hi Hackers,
>
> I didn't find the original discussion which led to introduction of the
> client-side set of keepalive parameters back in [1].
>
> The issue I'm facing is that it doesn't seem to be possible to set these
> parameters from the environment variables.  The block of option
> definitions[2] added for these parameters doesn't specify any corresponding
> env var names.
>
> I wonder if this is an oversight or was there a conscious decision not to
> allow it?
>
> To give a specific example, I have a (legacy) Python script which talks to
> the database in two ways: by directly using psycopg2.connect(host=..., ) and
> also by running some shell commands with psql's \copy + gzip, mainly for
> convenience.
>
> Now when I needed to tune the keepalives_idle value, I had to include it
> everywhere a database connection is made:
> - For psycopg2 it's straightforward: you just add an extra keyword parameter
> to connect().
> - For psql it's trickier, since the only way to achieve this seems to pack
> it together with -d as: -d 'dbname=mydb keepalives_idle=NNN'.
>
> In case of a binary that would amount to recompiling, I guess.  I have no
> permission to tweak sysctl directly on the host, unfortunately.
>
> It would be much more convenient to just set the environment variable when
> running the script and let it affect the whole process and its children.
>
> Would a patch be welcome?

I can't really comment on that part, but:

PGOPTIONS="-c tcp_keepalives_count=5 -c tcp_keepalives_interval=60"
psql 'host=localhost'

should enable server-side keepalives. Unsure how much that helps you
if you need client-side keepalives too.
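
Another indirect route that already works from the environment is a
libpq service file, since a service entry can carry any connection
option (keepalives_idle included, if I recall correctly):

# ~/.pg_service.conf
[mydb]
host=localhost
dbname=mydb
keepalives_idle=30

and then:

PGSERVICE=mydb psql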

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Having query cache in core

2018-05-08 Thread Tatsuo Ishii
> On 07/05/18 05:47, Tom Lane wrote:
>> Tatsuo Ishii  writes:
>>> Does anybody think having in-memory query result cache in core is a
>>> good idea?
>> No.
> 
> Agreed.
> 
> You could probably write an extension for that, though. I think the
> planner hook and custom scans give you enough flexibility to do that
> without modifying the server code.

I have simulated the idea, and I wonder how to implement the query
result cache on streaming standby servers: because no DML/DDL is
executed on standby servers, there is no way to invalidate the query
cache there. Probably the only way to allow the query cache to be used
is:

1) Have the primary server perform the query cache invalidation.

2) Keep the cache storage on an external cache server such as
   memcached.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: Having query cache in core

2018-05-08 Thread Michael Paquier
On Wed, May 09, 2018 at 09:04:04AM +0900, Tatsuo Ishii wrote:
> I have simulated the idea, and I wonder how to implement the query
> result cache on streaming standby servers: because no DML/DDL is
> executed on standby servers, there is no way to invalidate the query
> cache there. Probably the only way to allow the query cache to be used
> is:
> 
> 1) Have the primary server perform the query cache invalidation.
> 
> 2) Keep the cache storage on an external cache server such as
>    memcached.

Or a hook within the REDO loop, called for each record? You could take
any actions needed with that, mostly by tracking heap records.
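
For illustration only -- nothing like this exists today, and the names
are made up:

/* hypothetical hook, called once per record from the main redo loop */
typedef void (*redo_record_hook_type) (XLogReaderState *record);
extern redo_record_hook_type redo_record_hook;

static void
querycache_redo_hook(XLogReaderState *record)
{
	/* invalidate cached results for relations touched by heap records */
	if (XLogRecGetRmid(record) == RM_HEAP_ID ||
		XLogRecGetRmid(record) == RM_HEAP2_ID)
	{
		RelFileNode rnode;

		if (XLogRecGetBlockTag(record, 0, &rnode, NULL, NULL))
			querycache_invalidate_rel(rnode);	/* extension side, made up */
	}
}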
--
Michael




Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Amit Langote
Hi.

On 2018/05/09 7:05, Alvaro Herrera wrote:
> So I found that this query also crashed (using your rig),
> 
> create table coercepart (a varchar) partition by list (a);
> create table coercepart_ab partition of coercepart for values in ('ab');
> create table coercepart_bc partition of coercepart for values in ('bc');
> create table coercepart_cd partition of coercepart for values in ('cd');
> explain (costs off) select * from coercepart where a ~ any ('{ab}');
> 
> The reason for this crash is that gen_partprune_steps_internal() is
> unable to generate any steps for the clause -- which is natural, since
> the operator is not in a btree opclass.  There are various callers
> of gen_partprune_steps_internal that are aware that it could return an
> empty set of steps, but this one in match_clause_to_partition_key for
> the ScalarArrayOpExpr case was being a bit too optimistic.

Yeah, good catch!  That fixes the crash, but looking around that code a
bit, it seems that we shouldn't even have gotten up to the point you're
proposing to fix.  If saop_op is not in the partitioning opfamily, it
should have bailed out much sooner, that is, here:

/*
 * In case of NOT IN (..), we get a '<>', which we handle if list
 * partitioning is in use and we're able to confirm that its negator
 * is a btree equality operator belonging to the partitioning operator
 * family.
 */
if (!op_in_opfamily(saop_op, partopfamily))
{


negator = get_negator(saop_op);
if (OidIsValid(negator) && op_in_opfamily(negator, partopfamily))
{

}
+   else
+   /* otherwise, unsupported! */
+   return PARTCLAUSE_UNSUPPORTED;

Let me propose that we also have this along with the rest of the changes
you made in that part of the function.  So, attached is an updated patch.

Thanks,
Amit
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index f954b92a6b..be9ea8a6db 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -571,8 +571,9 @@ get_matching_partitions(PartitionPruneContext *context, List *pruning_steps)
  * For BoolExpr clauses, we recursively generate steps for each argument, and
  * return a PartitionPruneStepCombine of their results.
  *
- * The generated steps are added to the context's steps list.  Each step is
- * assigned a step identifier, unique even across recursive calls.
+ * The return value is a list of the steps generated, which are also added to
+ * the context's steps list.  Each step is assigned a step identifier, unique
+ * even across recursive calls.
  *
  * If we find clauses that are mutually contradictory, or a pseudoconstant
  * clause that contains false, we set *contradictory to true and return NIL
@@ -1599,6 +1600,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
List   *elem_exprs,
   *elem_clauses;
ListCell   *lc1;
+   bool        contradictory;
 
if (IsA(leftop, RelabelType))
leftop = ((RelabelType *) leftop)->arg;
@@ -1617,7 +1619,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
 * Only allow strict operators.  This will guarantee nulls are
 * filtered.
 */
-   if (!op_strict(saop->opno))
+   if (!op_strict(saop_op))
return PARTCLAUSE_UNSUPPORTED;
 
/* Useless if the array has any volatile functions. */
@@ -1650,6 +1652,9 @@ match_clause_to_partition_key(RelOptInfo *rel,
if (strategy != BTEqualStrategyNumber)
return PARTCLAUSE_UNSUPPORTED;
}
+   else
+   /* otherwise, unsupported! */
+   return PARTCLAUSE_UNSUPPORTED;
}
 
/*
@@ -1690,7 +1695,7 @@ match_clause_to_partition_key(RelOptInfo *rel,
elem_exprs = lappend(elem_exprs, elem_expr);
}
}
-   else
+   else if (IsA(rightop, ArrayExpr))
{
ArrayExpr  *arrexpr = castNode(ArrayExpr, rightop);
 
@@ -1704,6 +1709,11 @@ match_clause_to_partition_key(RelOptInfo *rel,
 
elem_exprs = arrexpr->elements;
}
+   else
+   {
+   /* Give up on any other clause types. */
+   return PARTCLAUSE_UNSUPPORTED;
+   }
 
/*
 * Now generate a list of clauses, one for each array element, of the
@@ -1722,36 +1732,21 @@ match_clause_to_partition_key(RelOptInfo *rel,
}
 
/*
-* Build a combine step as if for an OR clause or add the clauses to
- 

Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Michael Paquier
On Tue, May 08, 2018 at 07:05:46PM -0300, Alvaro Herrera wrote:
> The reason for this crash is that gen_partprune_steps_internal() is
> unable to generate any steps for the clause -- which is natural, since
> the operator is not in a btree opclass.  There are various callers
> of gen_partprune_steps_internal that are aware that it could return an
> empty set of steps, but this one in match_clause_to_partition_key for
> the ScalarArrayOpExpr case was being a bit too optimistic.

Indeed.

While looking at this code, is there any reason to not make
gen_partprune_steps static?  This is only used in partprune.c for now,
so the intention is to make it available for future patches?
--
Michael


signature.asc
Description: PGP signature


Re: Oddity in tuple routing for foreign partitions

2018-05-08 Thread Etsuro Fujita

(2018/05/03 9:29), Robert Haas wrote:

On Wed, May 2, 2018 at 7:06 AM, Etsuro Fujita
  wrote:

Here is a small patch to remove a no-longer-needed cast in
postgresBeginForeignInsert().


Committed.


Thanks!

Best regards,
Etsuro Fujita



Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Amit Langote
On 2018/05/09 11:20, Michael Paquier wrote:
> While looking at this code, is there any reason to not make
> gen_partprune_steps static?  This is only used in partprune.c for now,
> so the intention is to make it available for future patches?

Yeah, making it static might be a good idea.  I had made it externally
visible, because I was under the impression that the runtime pruning
related code would want to call it from elsewhere within the planner.
But, instead it introduced a make_partition_pruneinfo() which in turn
calls gen_partprune_steps.

Thanks,
Amit




Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread David Rowley
On 9 May 2018 at 14:29, Amit Langote  wrote:
> On 2018/05/09 11:20, Michael Paquier wrote:
>> While looking at this code, is there any reason to not make
>> gen_partprune_steps static?  This is only used in partprune.c for now,
>> so the intention is to make it available for future patches?
>
> Yeah, making it static might be a good idea.  I had made it externally
> visible, because I was under the impression that the runtime pruning
> related code would want to call it from elsewhere within the planner.
> But, instead it introduced a make_partition_pruneinfo() which in turn
> calls gen_partprune_steps.

Yeah. Likely left over from when run-time pruning was generating the
steps during execution rather than during planning.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



RE: Having query cache in core

2018-05-08 Thread Tsunakawa, Takayuki
From: Robert Haas [mailto:robertmh...@gmail.com]
> That's not my experience.  I agree that plan caching isn't important
> for long-running queries, but I think it *is* potentially important
> for fast queries with fast planning time.  Even when the planning time
> is fast, it can be a significant percentage of the execution time.
> Not long ago, we had a case at EDB where a customer was getting custom
> plans instead of generic plans and that resulted in a significant
> reduction in TPS.

I have the same experience with our customers.  Their batch applications
suffered poor performance compared with Oracle and SQL Server.  Those apps
repeatedly issued simple SELECT statements, each retrieving a single row.
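
As a rough illustration of the pattern (table name hypothetical), a
prepared statement lets the server cache the plan, and after several
executions it may switch to a reusable generic plan:

PREPARE get_item (int) AS
    SELECT * FROM items WHERE item_id = $1;
EXECUTE get_item(42);   -- custom plan built per call at first
EXECUTE get_item(43);   -- later executions may reuse a cached generic plan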

Regards
Takayuki Tsunakawa




Re: Should we add GUCs to allow partition pruning to be disabled?

2018-05-08 Thread David Rowley
Thanks for reviewing again.

On 9 May 2018 at 01:32, Justin Pryzby  wrote:
> On Mon, May 07, 2018 at 06:00:59PM +1200, David Rowley wrote:
>> Many thanks for reviewing this.
>
> 2nd round - from the minimalist department:
>
> +partitions which cannot possibly contain any matching records.
> maybe: partitions which cannot match any records.

I don't think that's an improvement.  I don't think there's such a
thing as "partitions which match records".  A partition can contain a
record; it never matches one.

> +   
> +Partition pruning done during execution can be performed at any of the
> +following times:
>
> remove "done"?

Removed.

> +   number of partitions which were removed during this phase of pruning by
> remove "of pruning"

Removed.

v3 attached.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


further_enable_partition_pruning_doc_updates_v3.patch
Description: Binary data


Re: parallel.sgml for Gather with InitPlans

2018-05-08 Thread Amit Kapila
On Tue, May 8, 2018 at 5:27 PM, Robert Haas  wrote:
> On Mon, May 7, 2018 at 11:34 PM, Amit Kapila  wrote:
>> I think we can cover InitPlan and Subplans that can be parallelized in
>> a separate section "Parallel Subplans" or some other heading.  I think
>> as of now we have enabled parallel subplans and initplans in a
>> limited, but useful cases (as per TPC-H benchmark) and it might be
>> good to cover them in a separate section.  I can come up with an
>> initial patch (or I can review it if you write the patch) if you and
>> or others think that makes sense.
>
> We could go that way, but what I wrote is short and -- I think -- accurate.
>

Okay, thinking about it again after your explanation, it appears
correct to me, but it was not apparent on the first read.  Other
alternatives could be (a) "Evaluation of initplan" or (b) "Execution
of initplan".  I think it makes sense to add what you have written, or
one of the alternatives I suggested, as you deem most appropriate.  A
detailed explanation can always be written as a separate patch.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: Should we add GUCs to allow partition pruning to be disabled?

2018-05-08 Thread Amit Langote
Hi David.

Thanks for addressing my comments.

On 2018/05/07 15:00, David Rowley wrote:
> v2 patch is attached.

Looks good to me.

Thanks,
Amit




Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Amit Langote
On 2018/05/09 11:31, David Rowley wrote:
> On 9 May 2018 at 14:29, Amit Langote  wrote:
>> On 2018/05/09 11:20, Michael Paquier wrote:
>>> While looking at this code, is there any reason to not make
>>> gen_partprune_steps static?  This is only used in partprune.c for now,
>>> so the intention is to make it available for future patches?
>>
>> Yeah, making it static might be a good idea.  I had made it externally
>> visible, because I was under the impression that the runtime pruning
>> related code would want to call it from elsewhere within the planner.
>> But, instead it introduced a make_partition_pruneinfo() which in turn
>> calls gen_partprune_steps.
> 
> Yeah. Likely left over from when run-time pruning was generating the
> steps during execution rather than during planning.

Here is a patch that does that.

Thanks,
Amit
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index f954b92a6b..f1f7b2dea9 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -116,6 +116,8 @@ typedef struct PruneStepResult
 } PruneStepResult;
 
 
+static List *gen_partprune_steps(RelOptInfo *rel, List *clauses,
+   bool *contradictory);
 static List *gen_partprune_steps_internal(GeneratePruningStepsContext *context,
 RelOptInfo *rel, List *clauses,
 bool *contradictory);
@@ -355,7 +357,7 @@ make_partition_pruneinfo(PlannerInfo *root, List *partition_rels,
  * If the clauses in the input list are contradictory or there is a
  * pseudo-constant "false", *contradictory is set to true upon return.
  */
-List *
+static List *
 gen_partprune_steps(RelOptInfo *rel, List *clauses, bool *contradictory)
 {
GeneratePruningStepsContext context;
diff --git a/src/include/partitioning/partprune.h b/src/include/partitioning/partprune.h
index c9fe95dc30..3d114b4c71 100644
--- a/src/include/partitioning/partprune.h
+++ b/src/include/partitioning/partprune.h
@@ -67,7 +67,5 @@ extern List *make_partition_pruneinfo(PlannerInfo *root, List *partition_rels,
 extern Relids prune_append_rel_partitions(RelOptInfo *rel);
 extern Bitmapset *get_matching_partitions(PartitionPruneContext *context,
List *pruning_steps);
-extern List *gen_partprune_steps(RelOptInfo *rel, List *clauses,
-   bool *contradictory);
 
 #endif /* PARTPRUNE_H */


Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Michael Paquier
On Wed, May 09, 2018 at 02:01:26PM +0900, Amit Langote wrote:
> On 2018/05/09 11:31, David Rowley wrote:
>> On 9 May 2018 at 14:29, Amit Langote  wrote:
>>> On 2018/05/09 11:20, Michael Paquier wrote:
>>>> While looking at this code, is there any reason to not make
>>>> gen_partprune_steps static?  This is only used in partprune.c for now,
>>>> so the intention is to make it available for future patches?
>>>
>>> Yeah, making it static might be a good idea.  I had made it externally
>>> visible, because I was under the impression that the runtime pruning
>>> related code would want to call it from elsewhere within the planner.
>>> But, instead it introduced a make_partition_pruneinfo() which in turn
>>> calls gen_partprune_steps.
>> 
>> Yeah. Likely left over from when run-time pruning was generating the
>> steps during execution rather than during planning.
> 
> Here is a patch that does that.

Thanks, Amit.

Alvaro, could you also consider the patch I posted here?
https://www.postgresql.org/message-id/20180424012042.gd1...@paquier.xyz

This removes a useless default clause in partprune.c; it got forgotten
in the crowd.  I am attaching it again here, and it can simply be
applied on top of the rest.
--
Michael
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index f8844ef2eb..cbbb4c1827 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -2950,10 +2950,6 @@ perform_pruning_combine_step(PartitionPruneContext *context,
 }
 			}
 			break;
-
-		default:
-			elog(ERROR, "invalid pruning combine op: %d",
- (int) cstep->combineOp);
 	}
 
 	return result;


signature.asc
Description: PGP signature


Setting libpq TCP keepalive parameters from environment

2018-05-08 Thread Oleksandr Shulgin
Hi Hackers,

I didn't find the original discussion which led to the introduction of the
client-side set of keepalive parameters back in [1].

The issue I'm facing is that it doesn't seem possible to set these
parameters from environment variables.  The block of option
definitions [2] added for these parameters doesn't specify any
corresponding env var names.
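
For context, each entry in that array can name an environment variable in
its second field, and the keepalive entries currently pass NULL there.  A
fix would presumably just fill those in; something like this (the variable
name PGKEEPALIVESIDLE is my own invention, not an existing variable):

/* sketch against src/interfaces/libpq/fe-connect.c */
{"keepalives_idle", "PGKEEPALIVESIDLE", NULL, NULL,
    "TCP-Keepalives-Idle", "", 10,
    offsetof(struct pg_conn, keepalives_idle)},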

I wonder if this is an oversight, or was there a conscious decision not to
allow it?

To give a specific example, I have a (legacy) Python script which talks to
the database in two ways: by directly using psycopg2.connect(host=..., )
and also by running some shell commands with psql's \copy + gzip, mainly
for convenience.

Now when I needed to tune the keepalives_idle value, I had to include it
everywhere a database connection is made:
- For psycopg2 it's straightforward: you just add an extra keyword
parameter to connect().
- For psql it's trickier, since the only way to achieve this seems to be
to pack it together with -d, as: -d 'dbname=mydb keepalives_idle=NNN'.

In the case of a binary, that would amount to recompiling, I guess.  I have
no permission to tweak sysctl directly on the host, unfortunately.

It would be much more convenient to just set the environment variable when
running the script and let it affect the whole process and its children.

Would a patch be welcome?

Regards,
--
Alex

[1] https://www.postgresql.org/message-id/flat/20100623215414.053427541D4%40cvs.postgresql.org
[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-connect.c;h=a7e969d7c1cecdc8743c43cea099068196a4a5fe;hb=HEAD#l251


Re: Porting PG Extension from UNIX to Windows

2018-05-08 Thread Pavlo Golub
Greetings, Alexander.

You wrote 08.05.2018, 9:42:

> 25.04.2018 11:45, insaf.k wrote:
>> I've done some research regarding compiling in Windows.  I am not sure
>> in what way I should compile the extension.  AFAIK, Visual Studio is
>> not POSIX compliant and so I'll have to rewrite all those POSIX calls
>> using Windows API.  So it's better to compile the extension in Cygwin.
>>
>> I have some questions regarding compiling the extension in Cygwin.  Do
>> I have to build PG in Cygwin, if I want to compile the extension in
>> Cygwin?
>
> I think it might depend on the extension, but we have managed to use a
> mingw-compiled extension with VS-compiled PG.  Please look at the demo
> script:
> https://pastebin.com/3jQahYNe
> If you have any questions, please don't hesitate to ask.

Cool idea.

- Why are you using the x86 version of MSYS2?
- And why don't you use just msys2-x86_64-latest.tar.xz instead of an
exactly named one?

> Best regards,
> --
> Alexander Lakhin
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company



-- 
Kind regards,
 Pavlo  mailto:pavlo.go...@cybertec.at




Re: Porting PG Extension from UNIX to Windows

2018-05-08 Thread Alexander Lakhin

Hello Pavlo,
08.05.2018 11:19, Pavlo Golub wrote:

> Cool idea.
>
> - Why are you using the x86 version of MSYS2?
We build PostgresPro for x86 and x64, so I chose the x86 version as a
common denominator, to be able to run the same script on 32-bit Windows.
(When running on 64-bit Windows, either (x86/x64) version of PostgresPro
could be installed, so the target is determined by the bitness of the
installed postgres.exe.)

> - And why don't you use just msys2-x86_64-latest.tar.xz instead of an
> exactly named one?
I don't remember whether I tried and had trouble with 'latest', but I
prefer to use a fixed version and move to a newer one in a controlled
manner.


Best regards,

--
Alexander Lakhin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: doc fixes: vacuum_cleanup_index_scale_factor

2018-05-08 Thread Alexander Korotkov
Hi, Justin!

Thank you for revising documentation patch.

On Mon, May 7, 2018 at 7:55 PM, Justin Pryzby  wrote:

> On Mon, May 07, 2018 at 07:26:25PM +0300, Alexander Korotkov wrote:
> > Hi!
> >
> > I've revised docs and comments, and also made some fixes in the code.
> > See the attached patchset.
> >
> > * 0004-btree-cleanup-docs-comments-fixes.patch
> > Documentation and comment improvements from Justin Pryzby
> > revised by me.
>
> 2nd iteration:
>
> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
> index eabe2a9235..785ecf922a 100644
> --- a/doc/src/sgml/config.sgml
> +++ b/doc/src/sgml/config.sgml
> @@ -1893,15 +1893,35 @@ include_dir 'conf.d'
>
>
> 
> -When no tuples were deleted from the heap, B-tree indexes might still
> -be scanned during VACUUM cleanup stage by two
> -reasons.  The first reason is that B-tree index contains deleted pages
> -which can be recycled during cleanup.  The second reason is that B-tree
> -index statistics is stalled.  The criterion of stalled index statistics
> -is number of inserted tuples since previous statistics collection
> -is greater than vacuum_cleanup_index_scale_factor
> -fraction of total number of heap tuples.
> +When no tuples were deleted from the heap, B-tree indexes are still
> +scanned during VACUUM cleanup stage unless two
> +conditions are met: the index contains no deleted pages which can be
> +recycled during cleanup; and, the index statistics are not stale.
> +In order to detect stale index statistics, number of total heap tuples
> should say: "THE number"
>
> +during previous statistics collection is memorized in the index
> s/memorized/stored/
>
> +meta-page.  Once number number of inserted tuples since previous
> Should say "Once the number of inserted tuples..."
>
> +statistics collection is more than
> +vacuum_cleanup_index_scale_factor fraction of
> +number of heap tuples memorized in the meta-page, index statistics is
> s/memorized/stored/
>
> +considered to be stalled.  Note, that number of heap tuples is written
> "THE number"
> s/stalled/stale/
>
> +to the meta-page at the first time when no dead tuples are found
> remove "at"
>
> +during VACUUM cycle.  Thus, skip of B-tree index
> I think should say: "Thus, skipping of the B-tree index scan"
>
> +scan during cleanup stage is only possible in second and subsequent
> s/in/when/
>
> +   
> +Zero value of vacuum_cleanup_index_scale_factor
> I would say "A zero value of ..."
>

I've applied all the changes you suggested.  Please find them in the
attached patchset.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


0001-vacuum-cleanup-index-scale-factor-user-set-2.patch
Description: Binary data


0002-vacuum-cleanup-index-scale-factor-tab-complete-2.patch
Description: Binary data


0003-btree-cleanup-condition-fix-2.patch
Description: Binary data


0004-btree-cleanup-docs-comments-fixes-2.patch
Description: Binary data


Re: [Suspect SPAM] Re: [HACKERS] path toward faster partition pruning

2018-05-08 Thread Michael Paquier
On Tue, May 08, 2018 at 04:07:41PM +0900, Amit Langote wrote:
> I have to agree to go with this conservative approach for now.  Although
> we might be able to evaluate the array elements by applying the coercion
> specified by ArrayCoerceExpr, let's save that as an improvement to be
> pursued later.

Thanks for confirming.  Yes, non-volatile functions would actually be
safe, and we'd need to be careful about NULL handling as well, but
that's definitely out of scope for v11.

> FWIW, constraint exclusion wouldn't prune in this case either (that is, if
> you try this example with PG 10 or using HEAD as of the parent of
> 9fdb675fc5), but it doesn't crash like the new pruning code does.

Yeah, I have noticed that.
--
Michael


signature.asc
Description: PGP signature


Re: Optimze usage of immutable functions as relation

2018-05-08 Thread Aleksandr Parfenov
Hello Andrew,

Thank you for the review of the patch.

On Fri, 04 May 2018 08:37:31 +0100
Andrew Gierth  wrote:
> From an implementation point of view your patch is obviously broken in
> many ways (starting with not checking varattno anywhere, and not
> actually checking anywhere if the expression is volatile).

The actual check of whether the expression is volatile is done inside
evaluate_function(), which is called through a few more functions from
eval_const_expressions().  If the expression is volatile,
eval_const_expressions() will return a FuncExpr node, and a Const
otherwise.  It also checks whether the arguments are immutable.

I agree about varattno; it should be checked.  Even when the SRF is not
replaced, it is better to be sure that the Var points to the first (and
only) attribute.

> But more importantly the plans that you showed seem _worse_ in that
> you've created extra calls to to_tsquery even though the query has
> been written to try and avoid that.
> 
> I think you need to take a step back and explain more precisely what
> you think you're trying to do and what the benefit (and cost) is.

Sure, the first version is a kind of prototype, an attempt to improve
the execution of a certain type of queries.  The best solution I see is
to use the result of the function where it is required and to remove the
nested loop in the case of immutable functions.  That way the planner
can choose a better plan if the function result is used for selecting
tuples or as a join filter, though it will increase planning time due to
the execution of immutable functions during planning.
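
To make the target concrete, the kind of query under discussion looks
roughly like this (table and column names are hypothetical):

-- An immutable function used as a relation; folding it to a constant
-- would let the planner drop the nested loop.
SELECT d.title
FROM docs d,
     to_tsquery('english', 'fast & pruning') q
WHERE d.body_tsv @@ q;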

It looks like I did something wrong with the plans in the example,
because the attached patch replaces the function-as-relation usage with
the result value, not the function call.  But the nested loop is still
in the plan.

I'm open to any suggestions or criticism about the idea and approach.

-- 
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



Re: Optimze usage of immutable functions as relation

2018-05-08 Thread Andrew Gierth
> "Aleksandr" == Aleksandr Parfenov  writes:

 >> From an implementation point of view your patch is obviously broken
 >> in many ways (starting with not checking varattno anywhere, and not
 >> actually checking anywhere if the expression is volatile).

 Aleksandr> The actual check of whether the expression is volatile is
 Aleksandr> done inside evaluate_function(), which is called through a
 Aleksandr> few more functions from eval_const_expressions().  If the
 Aleksandr> expression is volatile, eval_const_expressions() will return
 Aleksandr> a FuncExpr node, and a Const otherwise.  It also checks
 Aleksandr> whether the arguments are immutable.

You're missing a ton of other possible cases, of which by far the most
notable is function inlining: eval_const_expressions will inline even a
volatile function and return a new expression tree (which could be
almost anything depending on the function body).

 Aleksandr> I agree about varattno; it should be checked.  Even when the
 Aleksandr> SRF is not replaced, it is better to be sure that the Var
 Aleksandr> points to the first (and only) attribute.

It's not a matter of "better", but of basic correctness. Functions can
return composite values with columns.
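
A quick illustration (hypothetical names): selecting the second column of
a composite-returning function yields a Var with varattno = 2, so code
that silently assumes varattno = 1 is simply wrong.

CREATE TYPE pair AS (x int, y int);
CREATE FUNCTION make_pair() RETURNS pair
    IMMUTABLE LANGUAGE sql AS $$ SELECT ROW(1, 2)::pair $$;
SELECT p.y FROM make_pair() p;   -- Var references attribute 2, not 1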

-- 
Andrew (irc:RhodiumToad)



Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2018-05-08 Thread Alvaro Herrera
Fabien COELHO wrote:

> I think that I'll have time for a round of review in the first half of July.
> Providing a rebased patch before then would be nice.

Note that even in the absence of a rebased patch, you can apply to an
older checkout if you have some limited window of time for a review.

Looking over the diff, I find that this patch tries to do too much and
needs to be split up.  At a minimum there is a preliminary patch that
introduces the error reporting stuff (errstart etc); there are other
thread-related changes (for example to the random generation functions)
that probably belong in a separate one too.  Not sure if there are other
smaller patches hidden inside the rest.

On elog/errstart: we already have a convention for what ereport() calls
look like; I suggest to use that instead of inventing your own.  With
that, is there a need for elog()?  In the backend we have it because
$HISTORY but there's no need for that here -- I propose to lose elog()
and use only ereport everywhere.  Also, I don't see that you need
errmsg_internal() at all; let's lose it too.
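
For reference, the existing backend convention looks like this (a sketch;
pgbench would presumably grow a frontend equivalent of these macros rather
than use the backend ones, and the variable name is made up):

ereport(ERROR,
        (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
         errmsg("invalid weight: \"%s\"", weight_str)));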

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Bug Report: Error caused due to wrong ordering of filters

2018-05-08 Thread Ekta Khanna
Hello PGSQL Hackers,

We have come across the following issue on Postgres REL_10_STABLE.  Below
is the repro:

CREATE TABLE foo (a int, b text);
INSERT INTO foo VALUES (1, '3');
SELECT * FROM (SELECT * FROM foo WHERE length(b) = 8) x
WHERE to_date(x.b, 'YYYYMMDD') > '2018-05-04';
ERROR:  source string too short for "YYYY" formatting field
DETAIL:  Field requires 4 characters, but only 1 remain.
HINT:  If your source string is not fixed-width, try using the "FM" modifier.

Looking at the explain plan, we see that the order of the clauses is
reversed due to the costing of clauses in order_qual_clauses().  Below is
the plan:

*Actual Plan:*
                      QUERY PLAN
------------------------------------------------------
 Seq Scan on foo  (cost=0.00..35.40 rows=2 width=36)
   Filter: ((to_date(b, 'YYYYMMDD'::text) > '2018-05-04'::date) AND (length(b) = 8))
(2 rows)

The expected plan would execute the qual from the FROM clause before
executing the qual in the WHERE clause:

*Plan expected:*
                      QUERY PLAN
------------------------------------------------------
 Seq Scan on foo  (cost=0.00..35.40 rows=2 width=36)
   Filter: ((length(b) = 8) AND (to_date(b, 'YYYYMMDD'::text) > '2018-05-04'::date))
(2 rows)

Has anyone come across a similar issue?
In the plan, we see that the planner merges the quals from the FROM clause
and the WHERE clause into the same RestrictInfo list.  Is this the expected
behavior?
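
As a side note, we can avoid the error by forcing the evaluation order
with a CASE expression, which the planner does not reorder around (a
sketch of the workaround):

SELECT *
FROM foo
WHERE CASE WHEN length(b) = 8
           THEN to_date(b, 'YYYYMMDD') > '2018-05-04'
           ELSE false
      END;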


Thanks & Regards,
Ekta & Sam

