Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Ashutosh Sharma
Hi,

Please find the test results for the following set of combinations taken at
128 client counts:

1) Unpatched master, default *_flush_after:  TPS = 10925.882396

2) Unpatched master, *_flush_after=0:  TPS = 18613.343529

3) That line removed with #if 0, default *_flush_after:  TPS = 9856.809278

4) That line removed with #if 0, *_flush_after=0:  TPS = 18158.648023

Here, "That line" refers to AddWaitEventToSet(FeBeWaitSet,
WL_POSTMASTER_DEATH, -1, NULL, NULL); in pq_init().

Please note that earlier I had taken readings with the data directory and
pg_xlog directory at the same location on HDD, but this time I have moved
pg_xlog to SSD before taking the readings. With pg_xlog and the data
directory at the same location on HDD I was seeing much lower performance;
for example, for the "That line removed with #if 0, *_flush_after=0" case
I was getting 7367.709378 tps.


Also, the commit ID at which I took the above readings, along with the
pgbench commands used, is given below:

commit 8a13d5e6d1bb9ff9460c72992657077e57e30c32
Author: Tom Lane 
Date:   Wed May 11 17:06:53 2016 -0400

Fix infer_arbiter_indexes() to not barf on system columns.

*Non Default settings and test*:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
checkpoint_completion_target=0.9 &

./pgbench -i -s 1000 postgres

./pgbench -c 128 -j 128 -T 1800 -M prepared postgres

With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com

On Thu, May 12, 2016 at 9:22 AM, Robert Haas  wrote:

> On Wed, May 11, 2016 at 12:51 AM, Ashutosh Sharma 
> wrote:
> > I am extremely sorry for the delayed response.  As suggested by you, I
> have
> > taken the performance readings at 128 client counts after making the
> > following two changes:
> >
> > 1). Removed AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL,
> > NULL); from pq_init(). Below is the git diff for the same.
> >
> > diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
> > index 8d6eb0b..399d54b 100644
> > --- a/src/backend/libpq/pqcomm.c
> > +++ b/src/backend/libpq/pqcomm.c
> > @@ -206,7 +206,9 @@ pq_init(void)
> > AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE,
> > MyProcPort->sock,
> >   NULL, NULL);
> > AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
> > +#if 0
> > AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL,
> NULL);
> > +#endif
> >
> > 2). Disabled the guc vars "bgwriter_flush_after",
> "checkpointer_flush_after"
> > and "backend_flush_after" by setting them to zero.
> >
> > After doing the above two changes, below are the readings I got for 128
> > client counts:
> >
> > CASE : Read-Write Tests when data exceeds shared buffers.
> >
> > Non Default settings and test
> > ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> > max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB
> -c
> > checkpoint_completion_target=0.9 &
> >
> > ./pgbench -i -s 1000 postgres
> >
> > ./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
> >
> > Run1 : tps = 9690.678225
> > Run2 : tps = 9904.320645
> > Run3 : tps = 9943.547176
> >
> > Please let me know if I need to take readings with other client counts as
> > well.
>
> Can you please take four new sets of readings, like this:
>
> - Unpatched master, default *_flush_after
> - Unpatched master, *_flush_after=0
> - That line removed with #if 0, default *_flush_after
> - That line removed with #if 0, *_flush_after=0
>
> 128 clients is fine.  But I want to see four sets of numbers that were
> all taken by the same person at the same time using the same script.
>
> Thanks,
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Error during restore - dump taken with pg_dumpall -c option

2016-05-12 Thread Fabrízio de Royes Mello
On Thursday, May 12, 2016, Rushabh Lathia <
rushabh.lat...@gmail.com> wrote:

>
> On the master branch, when we do pg_dumpall with the -c option, I can see
> that it also dumps "DROP ROLE pg_signal_backend", which seems wrong,
> because when you restore the dump, it throws an error:
> "ERROR:  cannot drop role pg_signal_backend because it is required by the
> database system".
>
>
> dumpRoles() in pg_dumpall.c does have logic to not dump "CREATE ROLE" if
> the role name starts with "pg_", but a similar check is missing in the
> dropRoles() function.
>
> PFA a patch to fix the problem in the same way it's handled in
> dumpRoles().
>
>
Shouldn't this logic be applied only to versions >= 9.6? I ask because the
patch adds a special query filtering rolname !~ '^pg_' and then checks it
again using strcmp before emitting the DROP ROLE output. Is this the
intended behavior?


-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Timbira: http://www.timbira.com.br
>> Blog: http://fabriziomello.github.io
>> Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello
>> Github: http://github.com/fabriziomello


Re: [HACKERS] Incremental refresh of materialized view - Patch

2016-05-12 Thread Kevin Grittner
On Thu, May 12, 2016 at 1:05 AM, hari.prasath  wrote:

>   However, if the same methods in matview.c,
> OpenMatViewIncrementalMaintenance & CloseMatViewIncrementalMaintenance,
> are made extern, it is possible to do DML from patches like the one I am
> building now.
>
>   Is there any other way of doing DML operations on materialized views
> from a patch?

If you want to maintain materialized data using application-level
DML, you should use a table, not a materialized view.  Once there
is built-in incremental maintenance, allowing DML will just make
the contents of a materialized view completely unreliable.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 8:39 AM, Ashutosh Sharma  wrote:
> Please find the test results for the following set of combinations taken at
> 128 client counts:
>
> 1) Unpatched master, default *_flush_after :  TPS = 10925.882396
>
> 2) Unpatched master, *_flush_after=0 :  TPS = 18613.343529
>
> 3) That line removed with #if 0, default *_flush_after :  TPS = 9856.809278
>
> 4) That line removed with #if 0, *_flush_after=0 :  TPS = 18158.648023

I'm getting increasingly unhappy about the checkpoint flush control.
I saw major regressions on my parallel COPY test, too:

http://www.postgresql.org/message-id/ca+tgmoyouqf9cgcpgygngzqhcy-gcckryaqqtdu8kfe4n6h...@mail.gmail.com

That was a completely different machine (POWER7 instead of Intel,
lousy disks instead of good ones) and a completely different workload.
Considering these results, I think there's now plenty of evidence to
suggest that this feature is going to be horrible for a large number
of users.  A 45% regression on pgbench is horrible.  (Nobody wants to
take even a 1% hit for snapshot too old, right?)  Sure, it might not
be that way for every user on every Linux system, and I'm sure it
performed well on the systems where Andres benchmarked it, or he
wouldn't have committed it.  But our goal can't be to run well only on
the newest hardware with the least-buggy kernel...

> Here, That line points to "AddWaitEventToSet(FeBeWaitSet,
> WL_POSTMASTER_DEATH, -1, NULL, NULL); in pq_init()."

Given the above results, it's not clear whether that is making things
better or worse.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 11:13 AM, Andres Freund  wrote:
> Could you run this one with a number of different backend_flush_after
> settings?  I'm suspecting the primary issue is that the default is too low.

What values do you think would be good to test?  Maybe provide 3 or 4
suggested values to try?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Academic help for Postgres

2016-05-12 Thread Michael Banck
On Thu, May 12, 2016 at 08:57:34AM +0800, Craig Ringer wrote:
> On 11 May 2016 at 22:20, Bruce Momjian  wrote:
> > I am giving a keynote at an IEEE database conference in Helsinki next
> > week (http://icde2016.fi/).  (Yes, I am not attending PGCon Ottawa
> > because I accepted the Helsinki conference invitation before the PGCon
> > Ottawa date was changed from June to May).
> >
> > As part of the keynote, I would like to mention areas where academia can
> > help us.  The topics I can think of are:
> >
> > Any others?
> >
> 
> When publishing work, publish source code somewhere stable that won't just
> vanish. And build on the latest stable release, don't build your prototype
> on Pg 8.0. Don't just publish a tarball with no information about what
> revision it's based on, publish a git tree or a patch series.
> 
> While academic prototype source is rarely usable directly, it can serve a
> valuable role in helping to understand the changes that were made,
> reproducing results, exploring further related work, etc.
> 
> Include your dummy data or data generators, setup scripts, etc.

That is all sound advice, but if they do all of the above, they should
also make sure the source (or parts of it) is potentially usable by the
project, i.e. under (joint?) PGDG copyright, if their academic institution
allows that.


Michael




Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Andres Freund
On 2016-05-12 10:49:06 -0400, Robert Haas wrote:
> On Thu, May 12, 2016 at 8:39 AM, Ashutosh Sharma  
> wrote:
> > Please find the test results for the following set of combinations taken at
> > 128 client counts:
> >
> > 1) Unpatched master, default *_flush_after :  TPS = 10925.882396
> >
> > 2) Unpatched master, *_flush_after=0 :  TPS = 18613.343529
> >
> > 3) That line removed with #if 0, default *_flush_after :  TPS = 9856.809278
> >
> > 4) That line removed with #if 0, *_flush_after=0 :  TPS = 18158.648023
> 
> I'm getting increasingly unhappy about the checkpoint flush control.
> I saw major regressions on my parallel COPY test, too:

Yes, I'm concerned too.

The workload in this thread is a bit of an "artificial" workload (all
data is constantly updated, doesn't fit into shared_buffers, fits into
the OS page cache), and only measures throughput, not latency.  But I
agree that that's way too large a regression to accept, and that there's
a significant number of machines with way undersized shared_buffers
values.


> http://www.postgresql.org/message-id/ca+tgmoyouqf9cgcpgygngzqhcy-gcckryaqqtdu8kfe4n6h...@mail.gmail.com
> 
> That was a completely different machine (POWER7 instead of Intel,
> lousy disks instead of good ones) and a completely different workload.
> Considering these results, I think there's now plenty of evidence to
> suggest that this feature is going to be horrible for a large number
> of users.  A 45% regression on pgbench is horrible.

I asked you over there whether you could benchmark with just different
values for backend_flush_after... I chose the current value because it
gives the best latency / most consistent throughput numbers, but 128kB
isn't a large window.  I suspect we might need to disable backend-guided
flushing if that's not sufficient :(


> > Here, That line points to "AddWaitEventToSet(FeBeWaitSet,
> > WL_POSTMASTER_DEATH, -1, NULL, NULL); in pq_init()."
> 
> Given the above results, it's not clear whether that is making things
> better or worse.

Yeah, me neither. I think it's doubtful that you'd see a performance
difference due to the original ac1d7945f866b1928c2554c0f80fd52d7f92,
independent of the WaitEventSet stuff, at these throughput rates.

Greetings,

Andres Freund




Re: [HACKERS] Change error code for hstore syntax error

2016-05-12 Thread Robert Haas
On Mon, May 9, 2016 at 1:42 PM, Sherrylyn Branchaw  wrote:
> I'm attaching a revised patch; please let me know if there are any other
> issues before I submit to the commitfest.

Submitting to the CommitFest is how you make sure that someone looks
at the patch to see if any issues exist.  Obviously, if someone
reviews before then, that's great, but you can't count on it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Andres Freund
On 2016-05-12 11:27:31 -0400, Robert Haas wrote:
> On Thu, May 12, 2016 at 11:13 AM, Andres Freund  wrote:
> > Could you run this one with a number of different backend_flush_after
> > settings?  I'm suspecting the primary issue is that the default is too low.
> 
> What values do you think would be good to test?  Maybe provide 3 or 4
> suggested values to try?

0 (disabled), 16 (current default), 32, 64, 128, 256?

I'm suspecting that only backend_flush_after has these negative
performance implications at this point.  One path is to increase that
option's default value; another is to disable backend-guided flushing by
default and add a strong hint that, if you care about predictable
throughput, you might want to enable it.

Greetings,

Andres Freund




Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Tom Lane
Robert Haas  writes:
> I could be wrong, but I thought that the target list for an expression
> would always contain only Vars at this stage.  Non-default tlists get
> injected at the end of scan/join planning.  Am I wrong?

Target list for a relation, you mean?  See relation.h:

 *  reltarget - Default Path output tlist for this rel; normally contains
 *  Var and PlaceHolderVar nodes for the values we need to
 *  output from this relation.
 *  List is in no particular order, but all rels of an
 *  appendrel set must use corresponding orders.
 *  NOTE: in an appendrel child relation, may contain
 *  arbitrary expressions pulled up from a subquery!

regards, tom lane




Re: [HACKERS] Change error code for hstore syntax error

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 11:46 AM, Sherrylyn Branchaw
 wrote:
> Understood. It's just that Tom had already replied, so I wanted to give him
> first crack at it, if, say, I had grossly misinterpreted his suggestions.
> The guide on submitting a patch advises taking silence as consent within
> reason. Since there's been enough silence, I was planning on submitting to
> the commitfest soon.

No sweat.  Just didn't want you to take silence as opposition.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Minor documentation patch

2016-05-12 Thread Peter Eisentraut

On 5/11/16 7:00 AM, Martín Marqués wrote:

> Yesterday I was going over some consultancy work and went to check the
> syntax for CREATE FUNCTION, particularly the SECURITY DEFINER part.
>
> Reading there I saw a paragraph with a sentence that wasn't very clear
> at first.
>
> The patch's description gives a better idea of the change and how I got
> there, and I believe it gives better meaning to the sentence in question.
>
> I applied the same change to another part which had the same phrase.


committed

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [COMMITTERS] pgsql: Add TAP tests for pg_dump

2016-05-12 Thread Robert Haas
On Mon, May 9, 2016 at 11:58 AM, Stephen Frost  wrote:
> Further, this test framework was under discussion on-list and commented
> on by at least one other committer prior to being committed.  It was not
> entirely without review.

No, just almost entirely without review.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Andres Freund
Hi,

On 2016-05-12 18:09:07 +0530, Ashutosh Sharma wrote:
> Please find the test results for the following set of combinations taken at
> 128 client counts:

Thanks.


> 1) Unpatched master, default *_flush_after:  TPS = 10925.882396

Could you run this one with a number of different backend_flush_after
settings?  I'm suspecting the primary issue is that the default is too low.

Greetings,

Andres Freund




Re: [HACKERS] "Allow usage of huge maintenance_work_mem for GIN build" patch

2016-05-12 Thread Robert Haas
On Fri, May 6, 2016 at 7:58 PM, Peter Geoghegan  wrote:
> I noticed that commit 30bb26b5 ("Allow usage of huge
> maintenance_work_mem for GIN build") made the following modification:
>
> --- a/src/include/access/gin_private.h
> +++ b/src/include/access/gin_private.h
> @@ -903,7 +903,7 @@ typedef struct GinEntryAccumulator
>  typedef struct
>  {
> GinState   *ginstate;
> -   longallocatedMemory;
> +   SizeallocatedMemory;
> GinEntryAccumulator *entryallocator;
> uint32  eas_used;
> RBTree *tree;
>
> Are you sure this is safe, Teodor? I don't have time to study the
> patch in detail, but offhand I think that it might have been better to
> make allocatedMemory of type int64, just like the tuplesort.c memory
> accounting variables are post-MaxAllocHuge. It's not obvious to me
> that this variable isn't allowed to occasionally become negative, just
> like in tuplesort.c. It looks like that *might* be true -- ginbulk.c
> may let allocatedMemory go negative for a period, which would now be
> broken.
>
> If you did make this exact error, you would not be the first. If it
> isn't actually broken, perhaps you should still make this change,
> simply on general principle. I'd like to hear other opinions on that,
> though.

I've added this to the open items list.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 7:46 AM, Amit Kapila  wrote:
> On further analysis, I think I know what is going on in the original bug
> report.  We add the Vars (build_base_rel_tlists) and PlaceholderVars
> (add_placeholders_to_base_rels()) to each relation's (RelOptInfo) target
> during query_planner(), and the subplans are added as PlaceHolderVars in
> target expressions.  Now, while considering whether a particular rel can
> be parallel in set_rel_consider_parallel(), we don't check the target
> expressions before allowing the relation for parallelism.  I think we can
> prohibit the relation from being considered for parallelism if its target
> expressions contain any parallel-restricted clause.  A fix along those
> lines is attached to this mail.
>
> Thanks to Dilip Kumar for helping me in narrowing down this particular
> problem.  We were not able to generate the exact test, but I think the above
> theory is sufficient to prove that it can cause a problem as seen in the
> original bug report.

I could be wrong, but I thought that the target list for an expression
would always contain only Vars at this stage.  Non-default tlists get
injected at the end of scan/join planning.  Am I wrong?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Change error code for hstore syntax error

2016-05-12 Thread Sherrylyn Branchaw
> Submitting to the CommitFest is how you make sure that someone looks
> at the patch to see if any issues exist.  Obviously, if someone
> reviews before then, that's great, but you can't count on it.

Understood. It's just that Tom had already replied, so I wanted to give him
first crack at it, if, say, I had grossly misinterpreted his suggestions.
The guide on submitting a patch advises taking silence as consent within
reason. Since there's been enough silence, I was planning on submitting to
the commitfest soon.

On Thu, May 12, 2016 at 11:35 AM, Robert Haas  wrote:

> On Mon, May 9, 2016 at 1:42 PM, Sherrylyn Branchaw 
> wrote:
> > I'm attaching a revised patch; please let me know if there are any other
> > issues before I submit to the commitfest.
>
> Submitting to the CommitFest is how you make sure that someone looks
> at the patch to see if any issues exist.  Obviously, if someone
> reviews before then, that's great, but you can't count on it.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Use %u to print user mapping's umid and userid

2016-05-12 Thread Tom Lane
Robert Haas  writes:
> My suggestion is that we switch from using a List to marshal the data
> to using an ExtensibleNode.  An advantage of that is that we'd have
> some in-core test coverage for the ExtensibleNode stuff.  In theory it
> ought to be simpler and less messy, too, but I guess we'll find out.

Seems like a good idea, or at least one worth trying.

> Regardless of what approach we take, I disagree that this needs to be
> fixed in 9.6.

Agreed.  This is only a cosmetic issue, and it's only going to be visible
to a very small group of people, so we can leave it alone until 9.7.

(FWIW, now that we've put in the list_make5 macros, I'd vote against
taking them out, independently of what happens in postgres_fdw.
Somebody else will need them someday, or indeed might already be
using them in some non-core extension.)

regards, tom lane




Re: [HACKERS] alter table alter column ... (larger type) ... when there are dependent views

2016-05-12 Thread Robert Haas
On Tue, May 10, 2016 at 11:22 PM, Tom Lane  wrote:
> You should look at the code in ALTER TABLE that tries to rebuild index
> definitions during ALTER COLUMN TYPE, and see if that can be adapted
> to updating views.

I think the problems are almost entirely different.  In the case of
ALTER TABLE, we just need to know whether the modified data will still
index in the same way.  There are rules for that.  In the case of
views, the problem has more to do with potential POLA violations.
What the user will want (I think) is for the dependent view to end up
in the same state that it would have ended up in if the CREATE VIEW
command had been issued after the ALTER TABLE command.  But I'm pretty
sure we lack the information to do that in all cases.

We could try some hack, though.  We could say that when you alter the
table (with some special option), the view definition gets modified by
inserting a cast.  That is, if v.a gets changed from varchar(20) to
varchar(40), we rewrite the view definition so that wherever there was
previously a reference to v.a, it gets replaced with a reference to
(v.a::varchar(40)).  That might lead to hideous results in the face of
multiple ALTER TABLE commands, though, and it's doubtful whether it is
really the behavior the user wants.  What the user actually wants, I
think, is for the type of the view column to change from varchar(20)
to varchar(40) when the underlying table is altered.  That, however,
seems like a Pandora's box.  At least inserting casts would let the
ALTER TABLE succeed, and then you could fix the view afterward with
CREATE OR REPLACE VIEW.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Does Type Have = Operator?

2016-05-12 Thread Tom Lane
"David E. Wheeler"  writes:
> Some might argue that it ought to compare JSON objects, effectively be the 
> equivalent of ::jsonb = ::jsonb, rather than ::text = ::text. But as Andrew 
> points out to me offlist, “if that's what they want why aren't they using 
> jsonb in the first place?”

> So I think that, up to the introduction of JSONB, it was important not to
> take sides one way or the other by putting a JSON = operator in core. But
> now that we have JSONB, perhaps it makes sense to finally take sides and
> introduce a JSON = operator that does plain text comparison. Thoughts?

Meh.  Right now, if you want to compare values of type JSON, you have to
either cast them to text or to jsonb, and that effectively declares which
comparison semantics you want.  I'm not sure that prejudging that is a
good thing for us to do, especially when the argument that text semantics
are what you would probably want is so weak.

Andrew mentions in the extension you pointed to that providing a default
comparison operator would enable people to do UNION, DISTINCT, etc on JSON
columns without thinking about it.  I'm not convinced that "without
thinking about it" is a good thing here.  But if we were going to enable
that, I'd feel better about making it default to jsonb semantics ...

regards, tom lane




Re: [HACKERS] Perf Benchmarking and regression.

2016-05-12 Thread Fabien COELHO



> > I'm getting increasingly unhappy about the checkpoint flush control.
> > I saw major regressions on my parallel COPY test, too:
>
> Yes, I'm concerned too.


A few thoughts:

 - focusing on raw tps is not a good idea, because it may be a lot of tps
   followed by a sync panic, with an unresponsive database. I wish the
   performance reports would include some indication of the distribution
   (e.g. min/q1/median/q3/max tps per second seen, standard deviation),
   not just the final "tps" figure.

 - checkpoint flush control (checkpoint_flush_after) should almost always
   be beneficial because it flushes sorted data. I would be surprised
   to see significant regressions with this on. A lot of tests showed
   maybe improved tps, but mostly greatly improved performance stability:
   a database that was unresponsive 60% of the time (60% of the per-second
   samples show very low or zero tps) becomes consistently responsive.

 - the other flush controls ({backend,bgwriter}_flush_after) may just
   increase random writes, so they are riskier in nature because the data
   is not sorted, and they may or may not be a good idea depending on
   detailed conditions. A "parallel copy" would be just such a special IO
   load that degrades performance under these settings.

   Maybe these two should be disabled by default because they lead to
   possibly surprising regressions?

 - for any particular load, the admin can decide to disable these if
   they think it is better not to flush. Also, as suggested by Andres,
   with 128 parallel queries the default value may not be appropriate
   at all.

--
Fabien.




Re: [HACKERS] alter table alter column ... (larger type) ... when there are dependent views

2016-05-12 Thread Tom Lane
Robert Haas  writes:
> On Tue, May 10, 2016 at 11:22 PM, Tom Lane  wrote:
>> You should look at the code in ALTER TABLE that tries to rebuild index
>> definitions during ALTER COLUMN TYPE, and see if that can be adapted
>> to updating views.

> I think the problems are almost entirely different.  In the case of
> ALTER TABLE, we just need to know whether the modified data will still
> index in the same way.  There are rules for that.

Not sure I buy that argument; we really only try to make a similar index
on the new column.  An example is that you can use ALTER COLUMN TYPE
to change a text column to timestamp, and if there's an index on the
column it just gets replaced by one using timestamp_ops, never mind that
the sort order and equality rules will be substantially different.

> In the case of
> views, the problem has more to do with potential POLA violations.

Indeed, but that's true for indexes as well.

> ... What the user actually wants, I
> think, is for the type of the view column to change from varchar(20)
> to varchar(40) when the underlying table is altered.  That, however,
> seems like a Pandora's box.

Well, for one thing it would require recursive updates of indirectly
dependent views.  I think that's perfectly do-able, if you have something
that does what you want on the directly dependent view in the first place.
But it certainly raises the stakes in terms of the amount of damage an
ill-considered ALTER could do.

In any case, it would be a good idea to try to sketch out a spec for
the behavior you want before any code gets written.

regards, tom lane




Re: [HACKERS] Use %u to print user mapping's umid and userid

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 12:18 AM, Etsuro Fujita
 wrote:
> I think if scanning a foreign join, the user mapping is still valid at
> execution, and that is ensured by RevalidateCachedQuery, IIUC.

Yes, we added special machinery for that, along the lines of what is
also done for RLS.

But I have to say I don't like this patch very much.  The problem here
is that we want to store an unsigned integer in a node tree, but
makeInteger() takes a long, and there's no other similar node that
accepts an OID instead.  Your solution to that problem is not to store
the data at all, which seems like letting the tail wag the dog.  Maybe
the performance difference in this case is minor and maybe it's not,
but that isn't really the point.  The point is that storing an OID is
a pretty reasonable thing to want to do, and we should find a way to
do it instead of working around the problem.

My suggestion is that we switch from using a List to marshal the data
to using an ExtensibleNode.  An advantage of that is that we'd have
some in-core test coverage for the ExtensibleNode stuff.  In theory it
ought to be simpler and less messy, too, but I guess we'll find out.

Regardless of what approach we take, I disagree that this needs to be
fixed in 9.6.  The only people who are going to be hurt by this are
people who are using postgres_fdw and reading node-tree dumps and have
an OID counter that advances past 2 billion.  And those people are
going to be pretty rare, and they'll just have to live with it.  It's
more important to keep the code stable than to fix a minor issue like
this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Does Type Have = Operator?

2016-05-12 Thread Fabrízio de Royes Mello
On Wed, May 11, 2016 at 9:09 PM, David E. Wheeler 
wrote:
>
> On May 11, 2016, at 11:01 AM, Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:
>
> > I know... but you can do that just in case the current behaviour fails
by catching it with "begin...exception...", so you'll minimize the lookups
on the catalog.
>
> Yeah, I guess. Honestly 90% of this issue would go away for me if there
was a `json = json` operator. I know there are a couple different ways to
interpret JSON equality, though.
>

Yeah... it's ugly, but you can do something like this:

CREATE OR REPLACE FUNCTION json_equals_to_json(first JSON, second JSON)
RETURNS boolean AS
$$
BEGIN
   RETURN first::TEXT IS NOT DISTINCT FROM second::TEXT;
END
$$
LANGUAGE plpgsql IMMUTABLE;

CREATE OPERATOR = (
   LEFTARG = json,
   RIGHTARG = json,
   PROCEDURE = json_equals_to_json
);
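The same helper can also be written in plain SQL rather than plpgsql — a sketch, not exhaustively tested; note that it compares the raw text form, so key order and whitespace differences make values unequal:

```sql
-- Sketch: text-based JSON equality without the plpgsql round-trip.
-- Caveat: '{"a":1,"b":2}' and '{"b":2,"a":1}' are NOT equal here,
-- because the stored text representation is compared verbatim.
CREATE OR REPLACE FUNCTION json_equals_to_json(first JSON, second JSON)
RETURNS boolean AS
$$
  SELECT first::TEXT IS NOT DISTINCT FROM second::TEXT;
$$
LANGUAGE sql IMMUTABLE;
```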


Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Timbira: http://www.timbira.com.br
>> Blog: http://fabriziomello.github.io
>> Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello
>> Github: http://github.com/fabriziomello


Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 11:48 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> I could be wrong, but I thought that the target list for an expression
>> would always contain only Vars at this stage.  Non-default tlists get
>> injected at the end of scan/join planning.  Am I wrong?
>
> Target list for a relation, you mean?  See relation.h:
>
>  *  reltarget - Default Path output tlist for this rel; normally contains
>  *  Var and PlaceHolderVar nodes for the values we need to
>  *  output from this relation.
>  *  List is in no particular order, but all rels of an
>  *  appendrel set must use corresponding orders.
>  *  NOTE: in an appendrel child relation, may contain
>  *  arbitrary expressions pulled up from a subquery!

Err, wow.  That makes my head hurt.  Can you explain why this case
only arises for appendrel children, and not for plain rels?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Does Type Have = Operator?

2016-05-12 Thread David E. Wheeler
On May 12, 2016, at 11:19 AM, Fabrízio de Royes Mello  
wrote:

> Yeah.. it's ugly but you can do something like that:

I could, but I won’t, since this is pgTAP and users of the library might have 
defined their own json operators. 

Andrew Dunstan has done the yeoman’s work of creating such operators, BTW:

  https://bitbucket.org/adunstan/jsoncmp

Some might argue that it ought to compare JSON objects, effectively be the 
equivalent of ::jsonb = ::jsonb, rather than ::text = ::text. But as Andrew 
points out to me offlist, “if that's what they want why aren't they using jsonb 
in the first place?”

So I think that, up to the introduction of JSONB, it was important not to side 
one way or the other by putting a JSON = operator in core. But now that we have 
JSONB, perhaps it makes sense to finally take sides and introduce a JSON = that 
does plain text comparison. Thoughts?
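To make the trade-off concrete — a sketch assuming the stock casts and jsonb comparison behavior:

```sql
-- Text semantics: key order and whitespace are significant.
SELECT '{"a":1, "b":2}'::json::text = '{"b":2, "a":1}'::json::text;  -- false

-- jsonb semantics: structural equality; object key order is irrelevant.
SELECT '{"a":1, "b":2}'::jsonb = '{"b":2, "a":1}'::jsonb;            -- true
```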

Best,

David

smime.p7s
Description: S/MIME cryptographic signature


Re: [HACKERS] Use %u to print user mapping's umid and userid

2016-05-12 Thread Robert Haas
On Thu, May 12, 2016 at 2:29 PM, Alvaro Herrera
 wrote:
> Robert Haas wrote:
>> My suggestion is that we switch from using a List to marshal the data
>> to using an ExtensibleNode.  An advantage of that is that we'd have
>> some in-core test coverage for the ExtensibleNode stuff.  In theory it
>> ought to be simpler and less messy, too, but I guess we'll find out.
>
> So the data in the list has a certain specific meaning according to its
> position within the list?  And the enum being modified by this patch,
> corresponds to knowledge of what each element in the list is?

Right.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Use %u to print user mapping's umid and userid

2016-05-12 Thread Alvaro Herrera
Robert Haas wrote:

> My suggestion is that we switch from using a List to marshal the data
> to using an ExtensibleNode.  An advantage of that is that we'd have
> some in-core test coverage for the ExtensibleNode stuff.  In theory it
> ought to be simpler and less messy, too, but I guess we'll find out.

So the data in the list has a certain specific meaning according to its
position within the list?  And the enum being modified by this patch,
corresponds to knowledge of what each element in the list is?  This
seems a bit odd.  I agree that something more similar to a struct would
be more appropriate.  Maybe there are other ways, but ExtensibleNode
seems like a reasonable tool to use here.

+1 to having in-core use case for ExtensibleNode too.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Tom Lane
Robert Haas  writes:
>> Target list for a relation, you mean?  See relation.h:
>> 
>> *  reltarget - Default Path output tlist for this rel; normally contains
>> *  Var and PlaceHolderVar nodes for the values we need to
>> *  output from this relation.
>> *  List is in no particular order, but all rels of an
>> *  appendrel set must use corresponding orders.
>> *  NOTE: in an appendrel child relation, may contain
>> *  arbitrary expressions pulled up from a subquery!

> Err, wow.  That makes my head hurt.  Can you explain why this case
> only arises for appendrel children, and not for plain rels?

Well, plain rels only output Vars ;-)

But consider an appendrel representing

(SELECT x+1 FROM t1 UNION ALL SELECT y+2 FROM t2) ss(a)

The RTE for ss will have a reltarget list containing just "a".
Once we pull up the subqueries, the reltarget lists for the two child
appendrel members will need to contain "x+1" and "y+2" in order to be
equivalent to the parent's reltarget list.  See set_append_rel_size(),
which does that transformation.
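The transformation is visible from SQL — a sketch (the exact plan shape may vary by version and settings):

```sql
CREATE TABLE t1 (x int);
CREATE TABLE t2 (y int);

-- After subquery pull-up, the two appendrel children must emit the
-- expressions "x + 1" and "y + 2" so that their output lines up with
-- the parent's single column "a":
EXPLAIN (VERBOSE, COSTS OFF)
SELECT a FROM (SELECT x+1 FROM t1 UNION ALL SELECT y+2 FROM t2) ss(a);
```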

This doesn't happen with ordinary subquery flattening because there
isn't a RelOptInfo corresponding to an ordinary subquery that's been
pulled up into the parent query.

regards, tom lane




[HACKERS] Error during restore - dump taken with pg_dumpall -c option

2016-05-12 Thread Rushabh Lathia
On the master branch, when we do pg_dumpall with the -c option, I can see
that it also dumps "DROP ROLE pg_signal_backend", which seems wrong,
because restoring the dump then throws an error:
"ERROR:  cannot drop role pg_signal_backend because it is required by the
database system".


dumpRoles() in pg_dumpall.c has logic to not dump "CREATE ROLE" if the
role name starts with "pg_", but a similar check is missing from the
dropRoles() function.

PFA a patch to fix the problem in the same way it is handled in
dumpRoles().

Thanks,


-- 
Rushabh Lathia
www.EnterpriseDB.com
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 694bd1e..fe1d8ba 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -605,7 +605,13 @@ dropRoles(PGconn *conn)
 	int			i_rolname;
 	int			i;
 
-	if (server_version >= 80100)
+	if (server_version >= 90600)
+		res = executeQuery(conn,
+		   "SELECT rolname "
+		   "FROM pg_authid "
+		   "WHERE rolname !~ '^pg_' "
+		   "ORDER BY 1");
+	else if (server_version >= 80100)
 		res = executeQuery(conn,
 		   "SELECT rolname "
 		   "FROM pg_authid "
@@ -630,6 +636,13 @@ dropRoles(PGconn *conn)
 
 		rolename = PQgetvalue(res, i, i_rolname);
 
+		if (strncmp(rolename,"pg_",3) == 0)
+		{
+			fprintf(stderr, _("%s: role name starting with \"pg_\" skipped (%s)\n"),
+	progname, rolename);
+			continue;
+		}
+
 		fprintf(OPF, "DROP ROLE %s%s;\n",
 if_exists ? "IF EXISTS " : "",
 fmtId(rolename));



[HACKERS] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi

During migration, I am facing an issue with NULL concatenation in plpgsql:
concatenating NULL at any position with text/varchar sets the entire string
result to NULL.


*In Oracle:*

declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100) := 'EFGH';
txt VARCHAR2(100) := NULL;
begin
  txt:= txt1 || txt2 || txt3;
  dbms_output.put_line (txt);
end;
/

ABCDEFGH   *===> return value*



*In Postgres*

do $$
declare
txt1 text := 'ABCD';
txt2 text := NULL;
txt3 text := 'EFGH';
txt text := NULL;
begin
txt:= txt1 || txt2 || txt3;
raise notice '%', txt;
end$$ language plpgsql;

NOTICE:  <NULL>   *===> return value*


SQL-Server also does same like Oracle

Is there an alternate way to get the same behavior in PostgreSQL?

Please

Thanks
Sridhar
OpenText


Re: [HACKERS] Declarative partitioning

2016-05-12 Thread Amit Langote

Hi,

On 2016/05/12 17:42, Sameer Thakur-2 wrote:
> Hello Amit,
> In the example 
>> create table part201606week4 partition of parted 
>> for values start (2016, 6, 2) end (2016, 6, 29);
> 
> seems to be a typo

Oops, there indeed is.

create table part201606week4 partition of parted
for values start (2016, 6, 2) end (2016, 6, 29);
create table part201606week4 partition of parted
for values start (2016, 6, 22) end (2016, 6, 29);

While copy-pasting part201606week4's definition from a previous line, I
ended up pasting it twice and then edited the second one into the correct
definition.

Thanks,
Amit






Re: [HACKERS] Declarative partitioning

2016-05-12 Thread Sameer Thakur-2
Hello Amit,
In the example 
>create table part201606week4 partition of parted 
>for values start (2016, 6, 2) end (2016, 6, 29);

seems to be a typo
regards
Sameer 



--
View this message in context: 
http://postgresql.nabble.com/Declarative-partitioning-tp5862462p5903204.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] NULL concatenation

2016-05-12 Thread Pavel Stehule
Hi

2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally :

> Hi
>
> In migration, am facing issue with NULL concatenation in plpgsql,
> by concatenating NULL between any where/position to Text / Varchar, the
> total string result is setting value to NULL
>
>
> *In Oracle:*
>
> declare
> txt1 VARCHAR2(100) := 'ABCD';
> txt2 VARCHAR2(100) := NULL;
> txt3 VARCHAR2(100) := 'EFGH';
> txt VARCHAR2(100) := NULL;
> begin
>   txt:= txt1 || txt2 || txt3;
>   dbms_output.put_line (txt);
> end;
> /
>
> abcdefgh   *===>return value*
>
>
>
> *In Postgres*
>
> do $$
> declare
> txt1 text := 'ABCD';
> txt2 text := NULL;
> txt3 text := 'EFGH';
> txt text := NULL;
> begin
> txt:= txt1 || txt2 || txt3;
> raise notice '%', txt;
> end$$ language plpgsql;
>
> NOTICE:*===> return value*
>
>
> SQL-Server also does same like Oracle
>
> Is there any way alternate we have for same behavior in PostgreSQL
>

use the concat() function:
http://www.postgresql.org/docs/9.5/static/functions-string.html

 postgres=# select concat('AHOJ', NULL,'XXX');
 concat
---------
 AHOJXXX
(1 row)
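concat_ws() behaves the same way when a separator is wanted, and coalesce() on each nullable piece is the portable spelling — a quick sketch:

```sql
-- concat() skips NULL arguments entirely:
SELECT concat('ABCD', NULL, 'EFGH');            -- ABCDEFGH

-- concat_ws() does the same, adding a separator between non-NULL pieces:
SELECT concat_ws('-', 'ABCD', NULL, 'EFGH');    -- ABCD-EFGH

-- Portable alternative: neutralize each NULL with coalesce():
SELECT 'ABCD' || coalesce(NULL::text, '') || 'EFGH';   -- ABCDEFGH
```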

Regards

Pavel


> Please
>
> Thanks
> Sridhar
> OpenText
>
>


Re: [HACKERS] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Thanks Pavel

Great !!

I was thinking both || and CONCAT do the same

Thanks again

-
Sridhar
OpenText


On Thu, May 12, 2016 at 2:22 PM, Pavel Stehule 
wrote:

> Hi
>
> 2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally 
> :
>
>> Hi
>>
>> In migration, am facing issue with NULL concatenation in plpgsql,
>> by concatenating NULL between any where/position to Text / Varchar, the
>> total string result is setting value to NULL
>>
>>
>> *In Oracle:*
>>
>> declare
>> txt1 VARCHAR2(100) := 'ABCD';
>> txt2 VARCHAR2(100) := NULL;
>> txt3 VARCHAR2(100) := 'EFGH';
>> txt VARCHAR2(100) := NULL;
>> begin
>>   txt:= txt1 || txt2 || txt3;
>>   dbms_output.put_line (txt);
>> end;
>> /
>>
>> abcdefgh   *===>return value*
>>
>>
>>
>> *In Postgres*
>>
>> do $$
>> declare
>> txt1 text := 'ABCD';
>> txt2 text := NULL;
>> txt3 text := 'EFGH';
>> txt text := NULL;
>> begin
>> txt:= txt1 || txt2 || txt3;
>> raise notice '%', txt;
>> end$$ language plpgsql;
>>
>> NOTICE:*===> return value*
>>
>>
>> SQL-Server also does same like Oracle
>>
>> Is there any way alternate we have for same behavior in PostgreSQL
>>
>
> use function concat
> http://www.postgresql.org/docs/9.5/static/functions-string.html
>
>  postgres=# select concat('AHOJ', NULL,'XXX');
>  concat
> -
>  AHOJXXX
> (1 row)
>
> Regards
>
> Pavel
>
>
>> Please
>>
>> Thanks
>> Sridhar
>> OpenText
>>
>>
>


[HACKERS] Incremental refresh of materialized view - Patch

2016-05-12 Thread hari.prasath
Hi all

  I am building a patch to refresh a materialized view incrementally from the
change set decoded from WAL using logical decoding.

  As of now I am able to generate the changes that have to be applied to the
materialized view, but it is not possible to do any DML operations on
matviews.

  DML operations are allowed only during a concurrent refresh of a matview.

  However, if the matview.c methods OpenMatViewIncrementalMaintenance and
CloseMatViewIncrementalMaintenance are made extern, it is possible to do DML
from patches like the one I am building now.

  Is there any other way of doing DML operations on materialized views from
a patch?

  Correct me if I am wrong.

cheers

- Harry







Re: [HACKERS] Academic help for Postgres

2016-05-12 Thread konstantin knizhnik

On May 12, 2016, at 6:16 AM, Rajeev rastogi wrote:

> On 11 May 2016 19:50, Bruce Momjian Wrote:
> 
> 
>> I am giving a keynote at an IEEE database conference in Helsinki next
>> week (http://icde2016.fi/).  (Yes, I am not attending PGCon Ottawa
>> because I accepted the Helsinki conference invitation before the PGCon
>> Ottawa date was changed from June to May).
>> 
>> As part of the keynote, I would like to mention areas where academia can
>> help us.  The topics I can think of are:
>> 
>>  Query optimization
>>  Optimizer statistics
>>  Indexing structures
>>  Reducing function call overhead
>>  CPU locality
>>  Sorting
>>  Parallelism
>>  Sharding
>> 
>> Any others?
> 
> How about?
> 1. Considering NUMA aware architecture.
> 2. Optimizer tuning as per new hardware trends.
> 3. More effective version of Join algorithms (e.g. Compare to traditional 
> "build and then probe" mechanism of Hash Join, now there is pipelining Hash 
> join where probe and build both happens together).

Interesting article about optimal joins: http://arxiv.org/pdf/1203.1952v1.pdf


> 
> Thanks and Regards,
> Kumar Rajeev Rastogi
> 
> 
> 





Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-05-12 Thread Joshua Drake

On Apr 30, 2016 2:07 PM, Oleg Bartunov  wrote:
>
>
>
> On Fri, Apr 29, 2016 at 7:40 PM, Joshua D. Drake  wrote:
>>
> I'd not limit this to companies; individual developers are highly welcome. I'm afraid there are some.
>  
Oh, absolutely. I was just pointing out how a lot of companies are hoarding talent internally for no productive purpose.
Jd
>>
>>
>>
>> Sincerely,
>>
>> JD
>>
>>
>> -- 
>> Command Prompt, Inc.                  http://the.postgres.company/
>>                         +1-503-667-4564
>> PostgreSQL Centered full stack support, consulting and development.
>> Everyone appreciates your honesty, until you are honest with them.
>
>



Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Amit Kapila
On Fri, May 13, 2016 at 9:43 AM, Amit Kapila 
wrote:
>
> On Thu, May 12, 2016 at 11:37 PM, Tom Lane  wrote:
> >
> > Robert Haas  writes:
> > >> Target list for a relation, you mean?  See relation.h:
> > >>
> > >> *  reltarget - Default Path output tlist for this rel; normally
contains
> > >> *  Var and PlaceHolderVar nodes for the values we
need to
> > >> *  output from this relation.
> > >> *  List is in no particular order, but all rels of an
> > >> *  appendrel set must use corresponding orders.
> > >> *  NOTE: in an appendrel child relation, may contain
> > >> *  arbitrary expressions pulled up from a subquery!
> >
> > > Err, wow.  That makes my head hurt.  Can you explain why this case
> > > only arises for appendrel children, and not for plain rels?
> >
> > Well, plain rels only output Vars ;-)
> >
>
> Does this mean that base rels can't contain PlaceHolderVars?  Isn't it
possible in below code:
>

Here, by base rels I mean plain rels.

It might be that I am missing something, but if we debug the serial plan
for the original query [1] for which this issue is reported, we have noticed
that PlaceHolderVars that contain subplans are added to base rels whose
RTE kind is RTE_RELATION.


[1] - select ref_68.is_trigger_deletable as c0, (select d from
public.renamecolumnchild limit 1 offset 15) as c1,
ref_68.is_trigger_insertable_into as c2, 3 as c3 from
information_schema.role_udt_grants as ref_67 left join
information_schema.views as ref_68 on (ref_67.grantee =
ref_68.table_catalog ) where true limit 138;

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Amit Kapila
On Thu, May 12, 2016 at 11:37 PM, Tom Lane  wrote:
>
> Robert Haas  writes:
> >> Target list for a relation, you mean?  See relation.h:
> >>
> >> *  reltarget - Default Path output tlist for this rel; normally
contains
> >> *  Var and PlaceHolderVar nodes for the values we need
to
> >> *  output from this relation.
> >> *  List is in no particular order, but all rels of an
> >> *  appendrel set must use corresponding orders.
> >> *  NOTE: in an appendrel child relation, may contain
> >> *  arbitrary expressions pulled up from a subquery!
>
> > Err, wow.  That makes my head hurt.  Can you explain why this case
> > only arises for appendrel children, and not for plain rels?
>
> Well, plain rels only output Vars ;-)
>

Does this mean that base rels can't contain PlaceHolderVars?  Isn't it
possible in the code below:

query_planner()
{
..
/*
* Now distribute "placeholders" to base rels as needed.  This has to be
* done after join removal because removal could change whether a
* placeholder is evaluatable at a base rel.
*/
add_placeholders_to_base_rels(root);
..
}

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [GENERAL] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi Adam

we need simple concatenation of all the variables (whose values may be NULL
or valid values, depending on the functional process);

coalesce provides different functionality

Thanks
Sridhar
OpenText


On Thu, May 12, 2016 at 4:56 PM, Adam Pearson <
adam.pear...@realisticgames.co.uk> wrote:

> Hello Sridhar,
>
>   Have you tried the 'coalesce' function to handle the nulls?
>
>
> Kind Regards,
>
> Adam Pearson
> --
> *From:* pgsql-general-ow...@postgresql.org <
> pgsql-general-ow...@postgresql.org> on behalf of Sridhar N Bamandlapally <
> sridhar@gmail.com>
> *Sent:* 12 May 2016 09:47
> *To:* PG-General Mailing List; PostgreSQL-hackers
> *Subject:* [GENERAL] NULL concatenation
>
> Hi
>
> In migration, am facing issue with NULL concatenation in plpgsql,
> by concatenating NULL between any where/position to Text / Varchar, the
> total string result is setting value to NULL
>
>
> *In Oracle:*
>
> declare
> txt1 VARCHAR2(100) := 'ABCD';
> txt2 VARCHAR2(100) := NULL;
> txt3 VARCHAR2(100) := 'EFGH';
> txt VARCHAR2(100) := NULL;
> begin
>   txt:= txt1 || txt2 || txt3;
>   dbms_output.put_line (txt);
> end;
> /
>
> abcdefgh   *===>return value*
>
>
>
> *In Postgres*
>
> do $$
> declare
> txt1 text := 'ABCD';
> txt2 text := NULL;
> txt3 text := 'EFGH';
> txt text := NULL;
> begin
> txt:= txt1 || txt2 || txt3;
> raise notice '%', txt;
> end$$ language plpgsql;
>
> NOTICE:*===> return value*
>
>
> SQL-Server also does same like Oracle
>
> Is there any way alternate we have for same behavior in PostgreSQL
>
> Please
>
> Thanks
> Sridhar
> OpenText
>
>


Re: [HACKERS] Does Type Have = Operator?

2016-05-12 Thread Andrew Dunstan



On 05/12/2016 03:02 PM, Tom Lane wrote:

"David E. Wheeler"  writes:

Some might argue that it ought to compare JSON objects, effectively be the 
equivalent of ::jsonb = ::jsonb, rather than ::text = ::text. But as Andrew 
points out to me offlist, “if that's what they want why aren't they using 
jsonb in the first place?”
So I think that, up to the introduction of JSONB, it was important not to side 
one way or the other by putting a JSON = operator in core. But now that we have 
JSONB, perhaps it makes sense to finally take sides and introduce a JSON = that 
does plain text comparison. Thoughts?

Meh.  Right now, if you want to compare values of type JSON, you have to
either cast them to text or to jsonb, and that effectively declares which
comparison semantics you want.  I'm not sure that prejudging that is a
good thing for us to do, especially when the argument that text semantics
are what you would probably want is so weak.

Andrew mentions in the extension you pointed to that providing a default
comparison operator would enable people to do UNION, DISTINCT, etc on JSON
columns without thinking about it.  I'm not convinced that "without
thinking about it" is a good thing here.  But if we were going to enable
that, I'd feel better about making it default to jsonb semantics ...




I think you've been a little liberal with quoting the docs ;-) The 
reason I made it an extension is precisely because it's not 
unambiguously clear what json equality should mean.


cheers

andrew




Re: [HACKERS] Keeping CURRENT_DATE and similar constructs in original format

2016-05-12 Thread David G. Johnston
On Thursday, May 12, 2016, Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Thursday, May 12, 2016, Tom Lane  wrote:
> >> (I'm not particularly in love with the node type name
> >> ValueFunction; anybody got a better idea?)
>
> > SQL99DateTimeFunction (or roughly whenever they were introduced)?
>
> Some of them aren't datetime-related, though.  I thought about
> NiladicFunction but it seemed maybe too technical.
>
>
The time ones taking precision confuse things a bit but my first reaction
was positive.  It is readily grepable.  I'd rather have that over
ValueFunction.

David J.




[HACKERS] Keeping CURRENT_DATE and similar constructs in original format

2016-05-12 Thread Tom Lane
I got annoyed again about a minor issue I've complained about before,
and this time decided to do something about it.  The issue is that gram.y
translates a number of argument-less SQL constructs, such as CURRENT_DATE,
into very implementation-specific things such as 'now'::text::date.  There
are several reasons not to like that:

* It exposes what should be implementation details in reverse-listed views
and rules.  That's bad for us because it reduces our freedom to improve
the implementation, and it's bad for users because it looks ugly, and is
harder to understand (since it's not what you wrote to start with), and it
creates unnecessary lock-in to Postgres.

* Actually, we're exposing implementation details even without looking
at reverse listings:

regression=# select current_time;
   timetz   
--------------------
 17:46:11.589945-04
(1 row)

Where did that column heading come from?  It appears because the topmost
node in what "current_time" expands to is a cast to timetz.  If you're
not aware of that, it doesn't exactly meet the POLA.

* Performance is fairly bad, because of the need to parse the 'now' string
each time.  A quick experiment puts the cost of now() at about 60ns on my
machine, while current_timestamp(6) takes over 500ns to deliver the same
result.  Admittedly, this is probably not a hot-button for most users,
but it's not good.
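For anyone wanting to reproduce the comparison, a rough sketch (absolute numbers will differ by machine):

```sql
-- Both expressions yield the transaction start time, so within one
-- transaction they are equal; only the evaluation path differs:
SELECT now() = current_timestamp;  -- true

-- Crude micro-benchmark of the per-call overhead:
EXPLAIN ANALYZE SELECT now() FROM generate_series(1, 1000000);
EXPLAIN ANALYZE SELECT current_timestamp(6) FROM generate_series(1, 1000000);
```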

* Including a constant in the translated construct requires ugly hacks for
pg_stat_statements, cf commit 69c7a9838c82bbfd.


So what I've wanted to do for some time is invent a new expression node
type that represents any one of these functions and can be reverse-listed
in the same format that the input had.  The attached proposed patch does
that.  (I'm not particularly in love with the node type name
ValueFunction; anybody got a better idea?)

Obviously this is 9.7 material; I'm posting it now just so I can add
it to the next CF and thereby not forget about it.


By the by, a scan through gram.y reveals other stuff we aren't trying
to reverse-list in original form:

a_expr AT TIME ZONE a_expr
LIKE, ILIKE, SIMILAR TO
OVERLAPS
BETWEEN
COLLATION FOR '(' a_expr ')'
EXTRACT '(' extract_list ')'
OVERLAY '(' overlay_list ')'
POSITION '(' position_list ')'
SUBSTRING '(' substr_list ')'
TREAT '(' a_expr AS Typename ')'
TRIM '(' BOTH trim_list ')'
TRIM '(' LEADING trim_list ')'
TRIM '(' TRAILING trim_list ')'
TRIM '(' trim_list ')'

Each of these gets converted to some PG-specific function or operator
name, and then will get reverse-listed using that name and ordinary
function or operator syntax, rather than using the SQL-approved special
syntax.  I'm less excited about doing something about these cases,
because (1) they aren't exposing implementation details in any real way,
and (2) in most of these cases, the SQL-approved syntax is just randomly
inconsistent with anything else.  But perhaps somebody else would want
to think about changing that.  (Note that I do think we need to handle
BETWEEN better, in particular to avoid multiple-evaluation risks, but
that's a separate matter.)

regards, tom lane



value-function-1.patch.gz
Description: value-function-1.patch.gz



Re: [HACKERS] Does Type Have = Operator?

2016-05-12 Thread David E. Wheeler
On May 12, 2016, at 12:02 PM, Tom Lane  wrote:

> Andrew mentions in the extension you pointed to that providing a default
> comparison operator would enable people to do UNION, DISTINCT, etc on JSON
> columns without thinking about it.  I'm not convinced that "without
> thinking about it" is a good thing here.  But if we were going to enable
> that, I'd feel better about making it default to jsonb semantics ...

If you want the JSONB semantics, why wouldn’t you use JSONB instead of JSON?

Best,

David



smime.p7s
Description: S/MIME cryptographic signature


Re: [HACKERS] Keeping CURRENT_DATE and similar constructs in original format

2016-05-12 Thread Tom Lane
"David G. Johnston"  writes:
> On Thursday, May 12, 2016, Tom Lane  wrote:
>> (I'm not particularly in love with the node type name
>> ValueFunction; anybody got a better idea?)

> SQL99DateTimeFunction (or roughly whenever they were introduced)?

Some of them aren't datetime-related, though.  I thought about
NiladicFunction but it seemed maybe too technical.

> I agree with the premise.  I took notice of it recently in explain output
> on these lists using current_date.  That example read like
> ('now'::cstring)::date which was really odd since I was at least expecting
> text as the intermediate cast...

Yeah, that's another fun thing: the reverse listing currently differs
depending on whether you're looking at an expression tree that's been
through const-folding.  It didn't use to --- looks like the mention
of cstring started in 9.2.

regards, tom lane




[HACKERS] Keeping CURRENT_DATE and similar constructs in original format

2016-05-12 Thread David G. Johnston
On Thursday, May 12, 2016, Tom Lane  wrote:

>
> So what I've wanted to do for some time is invent a new expression node
> type that represents any one of these functions and can be reverse-listed
> in the same format that the input had.  The attached proposed patch does
> that.  (I'm not particularly in love with the node type name
> ValueFunction; anybody got a better idea?)
>
>
SQL99DateTimeFunction (or roughly whenever they were introduced)?

I agree with the premise.  I took notice of it recently in explain output
on these lists using current_date.  That example read like
('now'::cstring)::date which was really odd since I was at least expecting
text as the intermediate cast...

David J.


Re: [HACKERS] silent data loss with ext4 / all current versions

2016-05-12 Thread Michael Paquier
On Thu, May 12, 2016 at 2:58 PM, Michael Paquier
 wrote:
> On Mon, Mar 28, 2016 at 8:25 AM, Andres Freund  wrote:
>> I've also noticed that
>
> Coming back to this issue because...
>
>> a) pg_basebackup doesn't do anything about durability (it probably needs
>>a very similar patch to the one pg_rewind just received).
>
> I think that one of the QE tests running here just got bitten by that.
> A base backup was taken with pg_basebackup, and more or less right after
> that the VM was powered off. The trick is that for pg_basebackup we
> cannot rely on initdb: pg_basebackup is a client-side utility. In most
> of the PG packages (Fedora, RHEL) it is shipped in the client-side
> package, where initdb is not. So it seems to me that the correct fix is
> not to use initdb -S, but to have copies of fsync_parent_path(),
> durable_rename() and fsync_fname_ext() in streamutil.c, and then reuse
> them for both pg_receivexlog and pg_basebackup. At least that's less
> risky for the back branches.
>
>> b) nor does pg_dump[all]
>
> I have not hacked that up yet, but I would think we would need one
> extra copy of some of those fsync_* routines in src/bin/pg_dump/.
> There is another thread for that already... On master I guess we'd end
> up with something centralized in src/common/, but let's close the
> existing holes first.
>
>> So we're going to have another round of fsync stuff in the next set of
>> releases anyway...
>
> The sooner the better, I think. Anyone caring about this problem is
> currently limited to running initdb -S after calling pg_basebackup or
> pg_dump. That's a solution, though the flushes should really be
> performed inside each utility.

And actually this won't get far if there is no equivalent of walkdir(),
i.e. if the fsync() calls are not applied recursively. On master, at
least, the refactoring had better be done cleanly first... For the back
branches, we could just have a recursive routine like fsync_recursively()
that walks through the paths, and keep it in src/bin/pg_basebackup.
Andres, do you think this should go into fe_utils or src/common/? I'd
tend to think the latter is more suitable, as there is an equivalent in
the backend. An even simpler approach would be to fsync() each written
file individually, but that would hurt performance.

Thoughts from others?
-- 
Michael




Re: [HACKERS] [GENERAL] NULL concatenation

2016-05-12 Thread Adam Pearson
Hello Sridhar,

  Have you tried the 'coalesce' function to handle the nulls?
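For the record, a self-contained illustration of the two standard workarounds in plain SQL (no plpgsql needed):

```sql
-- coalesce() substitutes a replacement value for NULL before concatenating:
SELECT 'ABCD' || coalesce(NULL, '') || 'EFGH';   -- ABCDEFGH

-- concat() (available since 9.1) ignores NULL arguments altogether:
SELECT concat('ABCD', NULL, 'EFGH');             -- ABCDEFGH
```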


Kind Regards,

Adam Pearson


From: pgsql-general-ow...@postgresql.org  
on behalf of Sridhar N Bamandlapally 
Sent: 12 May 2016 09:47
To: PG-General Mailing List; PostgreSQL-hackers
Subject: [GENERAL] NULL concatenation

Hi

During a migration, I am facing an issue with NULL concatenation in
plpgsql: concatenating a NULL at any position into a text/varchar
expression sets the entire string result to NULL.


In Oracle:

declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100) := 'EFGH';
txt VARCHAR2(100) := NULL;
begin
  txt:= txt1 || txt2 || txt3;
  dbms_output.put_line (txt);
end;
/

ABCDEFGH   ===> return value



In Postgres

do $$
declare
txt1 text := 'ABCD';
txt2 text := NULL;
txt3 text := 'EFGH';
txt text := NULL;
begin
txt:= txt1 || txt2 || txt3;
raise notice '%', txt;
end$$ language plpgsql;

NOTICE:===> return value


SQL Server also behaves like Oracle here.

Is there any alternative way to get the same behavior in PostgreSQL?

Thanks
Sridhar
OpenText



Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-12 Thread Amit Kapila
On Sat, May 7, 2016 at 6:37 PM, Amit Kapila  wrote:
>
> On Fri, May 6, 2016 at 8:45 AM, Tom Lane  wrote:
> >
> > Andreas Seltenreich  writes:
> > > when fuzz testing master as of c1543a8, parallel workers trigger the
> > > following assertion in ExecInitSubPlan every couple hours.
> > > TRAP: FailedAssertion("!(list != ((List *) ((void *)0)))", File:
"list.c", Line: 390)
> > > Sample backtraces of a worker and leader below, plan of leader
attached.
> > > The collected queries don't seem to reproduce it.
> >
> > Odd.  My understanding of the restrictions on parallel query is that
> > anything involving a SubPlan ought not be parallelized;
> >
>
> Subplan references are considered parallel-restricted, so a parallel
plan can be generated if there are subplans in a query, but they shouldn't
be pushed to workers.  I have tried a somewhat simpler example to see if we
push down anything parallel-restricted to workers in the case of joins, and
it turned out there are cases where that can happen.  Consider the below
example:
>
>
>
> From the above output it is clear that the parallel-restricted function
is pushed down below the Gather node.  I found that although we have
carefully avoided pushing the pathtarget below a GatherPath in
apply_projection_to_path() if the pathtarget contains any parallel-unsafe
or parallel-restricted clause, we also separately try to apply the
pathtarget to the partial path list, which doesn't seem to be the correct
way even if it is required.  I think this was added during the parallel
aggregate patch, and it seems to me it is no longer required after the
GatherPath-related changes in apply_projection_to_path().
>
> After applying the attached patch, parallel-restricted clauses are no
longer added below a Gather path.
>
> Now back to the original bug: if you look at the plan file attached to
the original bug report, the subplan is pushed below the Gather node in the
target list, but not to the immediate join; rather, one more level down, to
the SeqScan path.  I am still not sure how it managed to push the
restricted clauses down to that level.
>

On further analysis, I think I know what is going on in the original bug
report.  We add the Vars (build_base_rel_tlists()) and PlaceHolderVars
(add_placeholders_to_base_rels()) to each relation's (RelOptInfo) target
during query_planner(), and the subplans are added as PlaceHolderVars in
the target expressions.  Now, while considering whether a particular rel
can be parallel in set_rel_consider_parallel(), we don't check the target
expressions before allowing the relation for parallelism.  I think we can
prohibit the relation from being considered for parallelism if its target
expressions contain any parallel-restricted clause.  A fix along those
lines is attached to this mail.
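For illustration only, a hypothetical sketch of the class of query involved (the function and table names are made up; this is not the sqlsmith reproducer, which could not be regenerated):

```sql
-- A parallel-restricted function: safe in the leader, not in workers.
CREATE FUNCTION restricted_fn(int) RETURNS int
    LANGUAGE sql PARALLEL RESTRICTED
    AS 'SELECT $1 + 1';

-- Per the analysis above, a target expression like this was not checked
-- by set_rel_consider_parallel(), so before the fix it could end up
-- evaluated below the Gather node, i.e. inside parallel workers:
EXPLAIN (VERBOSE)
SELECT restricted_fn(a.x)
FROM ta a JOIN tb b ON a.x = b.x;
```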

Thanks to Dilip Kumar for helping me narrow down this particular problem.
We were not able to generate an exact test, but I think the above theory is
sufficient to show that it can cause the problem seen in the original bug
report.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


prohibit_parallel_clause_below_rel_v1.patch
Description: Binary data
