Re: [HACKERS] to_date_valid()

2016-07-02 Thread Jaime Casanova
On 2/7/2016 20:33, "Euler Taveira" wrote:
>
> On 02-07-2016 22:04, Andreas 'ads' Scherbaum wrote:
> > The attached patch adds a new function "to_date_valid()" which will
> > validate the date and return an error if the input and output date do
> > not match. Tests included, documentation update as well.
> >
> Why don't you add a third parameter (say, validate = true | false)
> instead of creating another function? The new parameter could default to
> false to not break compatibility.
>

Shouldn't we fix this instead? Sounds like a bug to me. We don't usually
want to be bug compatible so it doesn't matter if we break something.

--
Jaime Casanova    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Bug in batch tuplesort memory CLUSTER case (9.6 only)

2016-07-02 Thread Peter Geoghegan
On Sat, Jul 2, 2016 at 3:17 PM, Robert Haas  wrote:
> In the interest of clarity, I was not intending to say that there
> should be a regression test in the patch.  I was intending to say that
> there should be a test case with the bug report.  I'm not opposed to
> adding a regression test, and I like the idea of attempting to do so
> while requiring only a relatively small amount of data by changing
> maintenance_work_mem, but that wasn't the target at which I was
> aiming.  Nevertheless, carry on.

How do you feel about adding testing to tuplesort.c not limited to
hitting this bug (when Valgrind memcheck is used)?

Are you satisfied that I have adequately described steps to reproduce?

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread David G. Johnston
On Sat, Jul 2, 2016 at 8:31 PM, Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:

>
>
> On Saturday, July 2, 2016, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Sat, Jul 2, 2016 at 12:55 PM, Marko Tiikkaja  wrote:
>>
>>>
>>> What I would prefer is something like this:
>>>
>>> CREATE TABLE foo(
>>>   f1 int NOT NULL COMMENT
>>> 'the first field',
>>>   f2 int NOT NULL COMMENT
>>> 'the second field',
>>> ...
>>> );
>>>
>>> which would ensure the comments are both next to the field definition
>>> they're documenting and that they make it all the way to the database. I
>>> looked into the biggest products, and MySQL supports this syntax.  I
>>> couldn't find any similar syntax in any other product.
>>>
>>>
>> ​+1 for the idea - though restricting it to columns would not be ideal.
>>
>>
>> CREATE TABLE name
>> COMMENT IS
>> 'Table Comment Here'
>> (
>> col1 serial COMMENT IS 'Place comment here'
>> )​;
>>
>>
> And what about the other CREATE statements? IMHO if we follow this path
> then we should add COMMENT to all CREATE statements and perhaps also to
> ALTER. Of course in a set of small patches to make the reviewers' lives
> easier.
>
>
​I should have made it clear I didn't expect TABLE to be the only object
but rather was using it as an example of how we could/should do this
generally for top-level objects (e.g., table) and sub-objects (e.g.,
column)​.

David J.


Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-07-02 Thread David Steele
On 7/2/16 6:32 PM, Michael Paquier wrote:
> On Sun, Jul 3, 2016 at 4:54 AM, Heikki Linnakangas  wrote:
>
>> Michael, do you plan to submit a new version of this patch set for the next
>> commitfest? I'd like to get this committed early in the 9.7 release cycle,
>> so that we have time to work on all the add-on stuff before the release.
> 
> Thanks. That's good news! Yes, I am still on track to submit a patch for CF1.

And I'm on board for reviews, testing, and whatever else I can help with.

-- 
-David
da...@pgmasters.net




Re: [HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread David Fetter
On Sat, Jul 02, 2016 at 01:06:49PM -0400, David G. Johnston wrote:
> On Sat, Jul 2, 2016 at 12:55 PM, Marko Tiikkaja  wrote:
> 
> >
> > What I would prefer is something like this:
> >
> > CREATE TABLE foo(
> >   f1 int NOT NULL COMMENT
> > 'the first field',
> >   f2 int NOT NULL COMMENT
> > 'the second field',
> > ...
> > );
> >
> > which would ensure the comments are both next to the field definition
> > they're documenting and that they make it all the way to the database. I
> > looked into the biggest products, and MySQL supports this syntax.  I
> > couldn't find any similar syntax in any other product.
> >
> >
> ​+1 for the idea - though restricting it to columns would not be ideal.

+1 for adding it to all the CREATEs whose objects support COMMENT.

Might something like

CREATE ... [WITH (COMMENT $$Big honking comment here$$)]

for the explicit CREATE cases and something like

CREATE TABLE foo(
id SERIAL PRIMARY KEY WITH (COMMENT 'Generated primary key, best find a 
natural one, too'),
t TEXT NOT NULL WITH (COMMENT 'Really?  A single-letter name?!?'),
...
)

for cases where the CREATE isn't part of the syntax help alleviate the
keyword issue?

I suggested doing it this way because where there's one thing, in this
case a COMMENT, it's reasonable to expect that there will be others, and
this syntax makes adding them simpler.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] to_date_valid()

2016-07-02 Thread Euler Taveira
On 02-07-2016 22:04, Andreas 'ads' Scherbaum wrote:
> The attached patch adds a new function "to_date_valid()" which will
> validate the date and return an error if the input and output date do
> not match. Tests included, documentation update as well.
> 
Why don't you add a third parameter (say, validate = true | false)
instead of creating another function? The new parameter could default to
false to not break compatibility.


-- 
   Euler Taveira   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento




Re: [HACKERS] Docs, backups, and MS VSS

2016-07-02 Thread Craig Ringer
On 2 July 2016 at 22:42, Bruce Momjian  wrote:

>
> > I suspect, but cannot prove, that it is also safe to snapshot pg_xlog on a
> > separate filesystem if and only if you take the datadir snapshot before the
> > pg_xlog snapshot and you have wal_keep_segments high enough to ensure that
> > WAL needed by the redo checkpoint in the datadir snapshot is not removed. I
> > wouldn't want to do this, and certainly not document it, since it's way saner
> > to use pg_start_backup() etc.
>
> Yes, I have wanted to document that WAL-at-the-end is sufficient for
> non-atomic snapshots assuming the needed WAL is there.  However, even if
> the WAL is backed up, it doesn't mean we are going to read it during
> crash recovery, i.e. we only read from the last checkpoint or something
> like that.  I have no idea how to tell people when this is safe.
>
> My simplistic idea would be to tell people to run a checkpoint right
> before all the snapshots are taken, but even that doesn't seem 100%
> safe.  This needs someone who understands the WAL and how to tell people
> a totally safe procedure.
>

The main thing is to provide an easy way to get the filenames of all the
archives that are required. pg_start_backup() and pg_stop_backup() provide
the range of LSNs required to restore, but you have to - correctly -
convert them into xlog file names and copy everything in that range not
just the start and end. There's no simple way to ask PostgreSQL for the
file-list via a query, since we lack arithmetic operators for pg_lsn or any
sort of pg_xlogfile_name_next function or similar. You can easily get the
start and end filenames using pg_xlogfile_name() and rely on lexical
comparison of filenames, but that's way less convenient than just getting a
file-list you can feed into rsync / tar / whatever.

It's too complicated. Yes, most users should just use pg_basebackup, but
that doesn't play well with snapshots etc. A pg_xlogfile_name_range
function would be a real help I think, so you could psql -qAt a simple
query to get all the xlogs you must copy from the saved LSNs reported by
pg_start_backup() and pg_stop_backup(). Especially if the docs incorporated
a sample script, including a test that marks the backup aborted/failed if
there are any missing files.
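As a sketch of the arithmetic a hypothetical pg_xlogfile_name_range function
would have to do, here is some illustrative Python (not a proposed
implementation). It assumes the default 16MB segment size and a single
timeline; the function names are made up for the example:

```python
SEG_SIZE = 16 * 1024 * 1024  # default WAL segment size in 9.x


def parse_lsn(lsn):
    # "X/Y" notation -> 64-bit byte position in the WAL stream
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


def xlog_file_name(tli, segno):
    # Mirrors XLogFileName(): timeline, "log" and "seg" parts,
    # eight uppercase hex digits each (256 segments per xlogid at 16MB)
    return "%08X%08X%08X" % (tli, segno // 0x100, segno % 0x100)


def xlog_file_range(start_lsn, stop_lsn, tli=1):
    # Every segment file between the LSNs reported by
    # pg_start_backup() and pg_stop_backup(), inclusive.
    first = parse_lsn(start_lsn) // SEG_SIZE
    last = parse_lsn(stop_lsn) // SEG_SIZE
    return [xlog_file_name(tli, n) for n in range(first, last + 1)]


print(xlog_file_range("0/1699B98", "0/3012345"))
```

The output list could be fed straight into rsync / tar / whatever, which is
exactly the convenience missing today.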




-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] to_date_valid()

2016-07-02 Thread Andreas 'ads' Scherbaum


Hello,

we have customers complaining that to_date() accepts invalid dates, and 
returns a different date instead. This is a known issue:


http://sql-info.de/postgresql/notes/to_date-to_timestamp-gotchas.html

On the other hand this leads to wrong dates when loading dates into the 
database, because the database happily accepts invalid dates and ends up 
writing something completely different into the table.


The attached patch adds a new function "to_date_valid()" which will 
validate the date and return an error if the input and output date do 
not match. Tests included, documentation update as well.
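To illustrate the round-trip check the new function performs (this Python
sketch is only illustrative, not part of the patch; lenient_to_date is a
hypothetical helper that mimics the day-overflow behavior described above):

```python
from datetime import date, timedelta


def lenient_to_date(text):
    # Mimics to_date()'s lenient behavior: an out-of-range day
    # component is rolled over into the next month instead of
    # raising an error.
    y, m, d = (int(p) for p in text.split("-"))
    return date(y, m, 1) + timedelta(days=d - 1)


def to_date_valid(text):
    # The validation idea: parse, format back, and require that
    # input and output match exactly; otherwise raise an error.
    parsed = lenient_to_date(text)
    if parsed.strftime("%Y-%m-%d") != text:
        raise ValueError("date/time field value out of range: " + text)
    return parsed


print(lenient_to_date("2016-02-30"))  # 2016-03-01: silently shifted
print(to_date_valid("2016-02-29"))    # 2016-02-29: valid leap day
```

The same round trip is what lets the patched function reject "2016-02-30"
instead of quietly storing March 1st.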


--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project


to_date_valid.patch.gz
Description: application/gzip



Re: [HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread Fabrízio de Royes Mello
On Saturday, July 2, 2016, David G. Johnston
wrote:

> On Sat, Jul 2, 2016 at 12:55 PM, Marko Tiikkaja  > wrote:
>
>>
>> What I would prefer is something like this:
>>
>> CREATE TABLE foo(
>>   f1 int NOT NULL COMMENT
>> 'the first field',
>>   f2 int NOT NULL COMMENT
>> 'the second field',
>> ...
>> );
>>
>> which would ensure the comments are both next to the field definition
>> they're documenting and that they make it all the way to the database. I
>> looked into the biggest products, and MySQL supports this syntax.  I
>> couldn't find any similar syntax in any other product.
>>
>>
> ​+1 for the idea - though restricting it to columns would not be ideal.
>
>
> CREATE TABLE name
> COMMENT IS
> 'Table Comment Here'
> (
> col1 serial COMMENT IS 'Place comment here'
> )​;
>
>
And what about the other CREATE statements? IMHO if we follow this path
then we should add COMMENT to all CREATE statements and perhaps also to
ALTER. Of course in a set of small patches to make the reviewers' lives
easier.

Regards,





-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Timbira: http://www.timbira.com.br
>> Blog: http://fabriziomello.github.io
>> Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello
>> Github: http://github.com/fabriziomello


Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-07-02 Thread Michael Paquier
On Sun, Jul 3, 2016 at 4:54 AM, Heikki Linnakangas  wrote:
> I took a quick look at the patch set now again, and except that it needs to
> have the multiple password verifier support refactored out, I think it's in
> a pretty good shape. I don't like the pg_upgrade changes and its support
> function, that also seems like an orthogonal or add-on feature that would be
> better discussed separately. I think pg_upgrade should just do the upgrade
> with as little change to the system as possible, and let the admin
> reset/rehash/deprecate the passwords separately, when she wants to switch
> all users to SCRAM. So I suggest that we rip out those changes from the
> patch set as well.

That's also what I recall from the consensus at PGCon: focus only
on the protocol addition and storage of the SCRAM verifier. It was not
mentioned directly, but that's what I guess should be done. So no
complaints here.

> In related news, RFC 7677 that describes a new SCRAM-SHA-256 authentication
> mechanism, was published in November 2015. It's identical to SCRAM-SHA-1,
> which is what this patch set implements, except that SHA-1 has been replaced
> with SHA-256. Perhaps we should forget about SCRAM-SHA-1 and jump straight
> to SCRAM-SHA-256.

That's worth considering. I don't think switching to that is very complicated.

> RFC 7677 also adds some verbiage, in response to vulnerabilities that have
> been found with the "tls-unique" channel binding mechanism:
>
>>To be secure, either SCRAM-SHA-256-PLUS and SCRAM-SHA-1-PLUS MUST be
>>used over a TLS channel that has had the session hash extension
>>[RFC7627] negotiated, or session resumption MUST NOT have been used.
>
> So that doesn't affect details of the protocol per se, but once we implement
> channel binding, we need to check for those conditions somehow (or make sure
> that OpenSSL checks for them).

Yes.

> Michael, do you plan to submit a new version of this patch set for the next
> commitfest? I'd like to get this committed early in the 9.7 release cycle,
> so that we have time to work on all the add-on stuff before the release.

Thanks. That's good news! Yes, I am still on track to submit a patch for CF1.
-- 
Michael




Re: [HACKERS] Bug in batch tuplesort memory CLUSTER case (9.6 only)

2016-07-02 Thread Robert Haas
On Fri, Jul 1, 2016 at 11:37 PM, Noah Misch  wrote:
> I don't know, either.  You read both Robert and me indicating that this bug
> fix will be better with a test, and nobody has said that a test-free fix is
> optimal.  Here's your chance to obliterate that no-tests precedent.

In the interest of clarity, I was not intending to say that there
should be a regression test in the patch.  I was intending to say that
there should be a test case with the bug report.  I'm not opposed to
adding a regression test, and I like the idea of attempting to do so
while requiring only a relatively small amount of data by changing
maintenance_work_mem, but that wasn't the target at which I was
aiming.  Nevertheless, carry on.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-07-02 Thread Robert Haas
On Sat, Jul 2, 2016 at 3:20 PM, Kevin Grittner  wrote:
> On Sat, Jul 2, 2016 at 1:29 PM, Noah Misch  wrote:
>> On Fri, Jul 01, 2016 at 09:00:45AM -0500, Kevin Grittner wrote:
>
>>> I have been looking at several possible fixes, and weighing the
>>> pros and cons of each.  I expect to post a patch later today.
>>
>> This PostgreSQL 9.6 open item is past due for your status update.
>> Kindly send a status update within 24 hours, and include a date for
>> your subsequent status update.  Refer to the policy on open item
>> ownership:
>> http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>
> Attached is a patch which fixes this issue, which I will push
> Monday unless there are objections.

Considering that (1) this was posted on a weekend and (2) that Monday
is also a US holiday and (3) that we are not about to wrap a release,
I think you should postpone the proposed commit date by a few days to
allow time for review.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-07-02 Thread Heikki Linnakangas
So, the consensus so far seems to be: We don't want the support for 
multiple password verifiers per user. At least not yet. Let's get SCRAM 
working first, in a way that a user can only have SCRAM or an MD5 hash 
stored in the database, not both. We can add support for multiple 
verifiers per user, password aging, etc. later. Hopefully we'll make 
some progress on those before 9.7 is released, too, but let's treat them 
as separate issues and focus on SCRAM.


I took a quick look at the patch set now again, and except that it needs 
to have the multiple password verifier support refactored out, I think 
it's in a pretty good shape. I don't like the pg_upgrade changes and its 
support function, that also seems like an orthogonal or add-on feature 
that would be better discussed separately. I think pg_upgrade should 
just do the upgrade with as little change to the system as possible, and 
let the admin reset/rehash/deprecate the passwords separately, when she 
wants to switch all users to SCRAM. So I suggest that we rip out those 
changes from the patch set as well.


In related news, RFC 7677 that describes a new SCRAM-SHA-256 
authentication mechanism, was published in November 2015. It's identical 
to SCRAM-SHA-1, which is what this patch set implements, except that 
SHA-1 has been replaced with SHA-256. Perhaps we should forget about 
SCRAM-SHA-1 and jump straight to SCRAM-SHA-256.
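To make the SHA-1 to SHA-256 swap concrete: the verifier derivation is the
same PBKDF2-based construction in both RFCs, only parameterized by the hash
function. A hedged Python sketch (names follow RFC 5802 / RFC 7677
terminology; the salt and iteration count here are arbitrary examples, not
values PostgreSQL would use):

```python
import hashlib
import hmac


def scram_stored_key(password, salt, iterations, digest="sha256"):
    # RFC 5802 / RFC 7677:
    #   SaltedPassword := Hi(password, salt, i)   (Hi is PBKDF2)
    #   ClientKey      := HMAC(SaltedPassword, "Client Key")
    #   StoredKey      := H(ClientKey)
    salted = hashlib.pbkdf2_hmac(digest, password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", digest).digest()
    return hashlib.new(digest, client_key).digest()


salt = b"0123456789abcdef"  # example only; the server picks a random salt
key = scram_stored_key("pencil", salt, 4096)
print(len(key))  # 32 bytes with SHA-256; SHA-1 would give 20
```

Since only the digest name changes between the two mechanisms, jumping
straight to SCRAM-SHA-256 costs little extra implementation work.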


RFC 7677 also adds some verbiage, in response to vulnerabilities that 
have been found with the "tls-unique" channel binding mechanism:



   To be secure, either SCRAM-SHA-256-PLUS and SCRAM-SHA-1-PLUS MUST be
   used over a TLS channel that has had the session hash extension
   [RFC7627] negotiated, or session resumption MUST NOT have been used.


So that doesn't affect details of the protocol per se, but once we 
implement channel binding, we need to check for those conditions somehow 
(or make sure that OpenSSL checks for them).


Michael, do you plan to submit a new version of this patch set for the 
next commitfest? I'd like to get this committed early in the 9.7 release 
cycle, so that we have time to work on all the add-on stuff before the 
release.


- Heikki





[HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-07-02 Thread Kevin Grittner
On Sat, Jul 2, 2016 at 1:29 PM, Noah Misch  wrote:
> On Fri, Jul 01, 2016 at 09:00:45AM -0500, Kevin Grittner wrote:

>> I have been looking at several possible fixes, and weighing the
>> pros and cons of each.  I expect to post a patch later today.
>
> This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com

Attached is a patch which fixes this issue, which I will push
Monday unless there are objections.

The problem to be fixed is this:

TOAST values can be pruned or vacuumed away while the heap still
has references to them, but the visibility data is such that you
should not be able to see the referencing heap tuple once the TOAST
value is old enough to clean up.  When the old_snapshot_threshold
is set to allow early pruning, early cleanup of the TOAST values
could occur while a connection can still see the heap row, and the
"snapshot too old" error might not be triggered.  (In practice,
it's fairly hard to hit that, but a test case will be included in a
bit.)  It would even be possible to have an overlapping transaction
which is old enough to create a new value with the old OID after it
is removed, which might even be of a different data type.  The
gymnastics required to hit that are too daunting to have created a
test case, but it seems possible.

The possible fixes considered were these:

(1)  Always vacuum the heap before its related TOAST table.
(2)  Same as (1) but only when old_snapshot_threshold >= 0.
(3)  Allow the special snapshot used for TOAST access to generate
the "snapshot too old" error, so that the modified page from the
pruning/vacuuming (along with other conditions) would cause that
rather than something suggesting corrupt internal structure.
(4)  When looking to read a toasted value for a tuple past the
early pruning horizon, if the value was not found consider it a
"snapshot too old" error.
(5)  Don't vacuum or prune a TOAST table except as part of the heap
vacuum when early pruning is enabled.
(6)  Don't allow early vacuuming/pruning of TOAST values except as
part of the vacuuming of the related heap.

It became evident pretty quickly that the HOT pruning of TOAST
values should not do early cleanup, based on practical concerns of
coordinating that with the heap cleanup for any of the above
options.  What's more, since we don't allow updating of tuples
holding TOAST values, HOT pruning seems to be of dubious value on a
TOAST table in general -- but removing that would be the matter for
a separate patch.  Anyway, this patch includes a small hunk of code
(two lines) to avoid early HOT pruning for TOAST tables.

For the vacuuming, option (6) seems a clear winner, and that is
what this patch implements.  A TOAST table can still be vacuumed on
its own, but in that case it will not use old_snapshot_threshold to
try to do any early cleanup.  We were already normally vacuuming
the TOAST table whenever we vacuumed the related heap; in such a
case it uses the "oldestXmin" used for the heap to vacuum the TOAST
table.  The other options either could not limit errors to cases
when they were really needed or had to pass through way too much
information through many layers to know what actions to take when.

Option (6) basically conditions the call to try to use a more
aggressive cleanup threshold on whether the relation is a TOAST
relation and a flag indicating whether we are in a particular
vacuum function based on the recursive call made from heap vacuum
to cover its TOAST table.  Not the most elegant code, but fairly
straightforward.

The net result is that, like existing production versions, we can
have heap rows pointing to missing TOAST values, but only when
those heap rows are not visible to anyone.

Test case (adapted from one provided by Andres Freund):

-- START WITH:
--   autovacuum = off
--   old_snapshot_threshold = 1

-- connection 1
SHOW autovacuum;
SHOW old_snapshot_threshold;
DROP TABLE IF EXISTS toasted;
CREATE TABLE toasted(largecol text);
INSERT INTO toasted SELECT string_agg(random()::text, '-') FROM
generate_series(1, 1000);
BEGIN;
DELETE FROM toasted;

-- connection 2
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT hashtext(largecol), length(largecol) FROM toasted;

-- connection 1
COMMIT;

-- connection 2
SELECT hashtext(largecol), length(largecol) FROM toasted;

-- connection 1
SELECT hashtext(largecol), length(largecol) FROM toasted;

-- connection 1
/* wait for snapshot threshold to be passed */
SELECT oid FROM pg_class WHERE relname = 'toasted';
VACUUM VERBOSE pg_toast.pg_toast_?;
SELECT hashtext(largecol), length(largecol) FROM toasted;

-- connection 2
SELECT hashtext(largecol), length(largecol) FROM toasted;

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: [HACKERS] dumping database privileges broken in 9.6

2016-07-02 Thread Stephen Frost
Noah, all,

On Saturday, July 2, 2016, Noah Misch  wrote:

> On Wed, Jun 29, 2016 at 11:50:17AM -0400, Stephen Frost wrote:
> > * Peter Eisentraut (peter.eisentr...@2ndquadrant.com )
> wrote:
> > > Do this:
> > >
> > > CREATE DATABASE test1;
> > > REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;
> > >
> > > Run pg_dumpall.
> > >
> > > In 9.5, this produces
> > >
> > > CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> > > REVOKE ALL ON DATABASE test1 FROM PUBLIC;
> > > REVOKE ALL ON DATABASE test1 FROM peter;
> > > GRANT ALL ON DATABASE test1 TO peter;
> > > GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;
> > >
> > > In 9.6, this produces only
> > >
> > > CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> > > GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;
> > > GRANT ALL ON DATABASE test1 TO peter;
> > >
> > > Note that the REVOKE statements are missing.  This does not
> > > correctly recreate the original state.
> >
> > I see what happened here, the query in dumpCreateDB() needs to be
> > adjusted to pull the default information to then pass to
> > buildACLComments(), similar to how the objects handled by pg_dump work.
> > The oversight was in thinking that databases didn't have any default
> > rights granted, which clearly isn't correct.
> >
> > I'll take care of that in the next day or so and add an appropriate
> > regression test.
>
> This PostgreSQL 9.6 open item is past due for your status update.
> Kindly send a status update within 24 hours, and include a date for
> your subsequent status update.  Refer to the policy on open item
> ownership:
>
> http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>

I will work on it tomorrow, but as a deadline for the next status update
I'll say Tuesday (which I expect is when I'll commit the fix), as it's a
holiday weekend in the US.

Thanks!

Stephen


[HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-07-02 Thread Noah Misch
On Fri, Jul 01, 2016 at 09:00:45AM -0500, Kevin Grittner wrote:
> On Fri, Jul 1, 2016 at 7:17 AM, Robert Haas  wrote:
> > On Fri, Jul 1, 2016 at 2:48 AM, Andres Freund  wrote:
> >>> This PostgreSQL 9.6 open item is past due for your status update.
> >>> Kindly send a status update within 24 hours, and include a date for
> >>> your subsequent status update.  Refer to the policy on open item
> >>> ownership:
> >>> http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> >>
> >> IIRC Kevin is out of the office this week, so this'll have to wait till
> >> next week.
> >
> > No, he's back since Tuesday - it was last week that he was out.  I
> > spoke with him yesterday about this and he indicated that he had been
> > thinking about it and had several ideas about how to fix it.  I'm not
> > sure why he hasn't posted here yet.
> 
> I have been looking at several possible fixes, and weighing the
> pros and cons of each.  I expect to post a patch later today.

This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com




Re: [HACKERS] dumping database privileges broken in 9.6

2016-07-02 Thread Noah Misch
On Wed, Jun 29, 2016 at 11:50:17AM -0400, Stephen Frost wrote:
> * Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote:
> > Do this:
> > 
> > CREATE DATABASE test1;
> > REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;
> > 
> > Run pg_dumpall.
> > 
> > In 9.5, this produces
> > 
> > CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> > REVOKE ALL ON DATABASE test1 FROM PUBLIC;
> > REVOKE ALL ON DATABASE test1 FROM peter;
> > GRANT ALL ON DATABASE test1 TO peter;
> > GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;
> > 
> > In 9.6, this produces only
> > 
> > CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
> > GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;
> > GRANT ALL ON DATABASE test1 TO peter;
> > 
> > Note that the REVOKE statements are missing.  This does not
> > correctly recreate the original state.
> 
> I see what happened here, the query in dumpCreateDB() needs to be
> adjusted to pull the default information to then pass to
> buildACLComments(), similar to how the objects handled by pg_dump work.
> The oversight was in thinking that databases didn't have any default
> rights granted, which clearly isn't correct.
> 
> I'll take care of that in the next day or so and add an appropriate
> regression test.

This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com




Re: [HACKERS] Statistics Injection

2016-07-02 Thread Vladimir Sitnikov
> The problem is that, even if I set the reltuples and relpages of my
> choice, when I run the EXPLAIN clause for a query in which the
> 'newTable' is involved (e.g. EXPLAIN SELECT * FROM newTable), I get
> the same cost and row estimation.
> Could anyone help me with that?

There's a pg_dbms_stats extension that enables you to override/freeze
the statistics: https://github.com/ossc-db/pg_dbms_stats

Vladimir



Re: [HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread David G. Johnston
On Sat, Jul 2, 2016 at 12:55 PM, Marko Tiikkaja  wrote:

>
> What I would prefer is something like this:
>
> CREATE TABLE foo(
>   f1 int NOT NULL COMMENT
> 'the first field',
>   f2 int NOT NULL COMMENT
> 'the second field',
> ...
> );
>
> which would ensure the comments are both next to the field definition
> they're documenting and that they make it all the way to the database. I
> looked into the biggest products, and MySQL supports this syntax.  I
> couldn't find any similar syntax in any other product.
>
>
​+1 for the idea - though restricting it to columns would not be ideal.


CREATE TABLE name
COMMENT IS
'Table Comment Here'
(
col1 serial COMMENT IS 'Place comment here'
)​;

David J.


[HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread Marko Tiikkaja

Hi,

Currently we have CREATE TABLE statements in a git repository that look 
roughly like this:


CREATE TABLE foo(
  -- the first field
  f1 int NOT NULL,
  -- the second field
  f2 int NOT NULL,
...
);

But the problem is that those comments don't obviously make it all the 
way to the database, so e.g.  \d+ tblname  won't show you that precious 
information.  If you want them to make it all the way to the database, 
you'd have to add COMMENT ON statements *after* the CREATE TABLE, which 
means that either column comments have to be maintained twice, or the 
CREATE TABLE statement won't have them, so you have to go back and forth 
in your text editor to see the comments.  Both solutions are suboptimal.


What I would prefer is something like this:

CREATE TABLE foo(
  f1 int NOT NULL COMMENT
'the first field',
  f2 int NOT NULL COMMENT
'the second field',
...
);

which would ensure the comments are both next to the field definition 
they're documenting and that they make it all the way to the database. 
I looked into the biggest products, and MySQL supports this syntax.  I 
couldn't find any similar syntax in any other product.


The downside is that this would require us to make COMMENT a fully 
reserved keyword, which would quite likely break at least one 
application out in the wild.  Another option would be to make the syntax 
something like  [ COLUMN COMMENT '...' ], but that's not exactly a 
beautiful solution either.


I still think this would be a really valuable feature if we can come up 
with a decent syntax for it.  Does anyone have any ideas?  Or does 
anybody want to shoot this proposal down right off the bat?



.m


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Typo Patch

2016-07-02 Thread CharSyam
Hi,

I fixed some typos and attached a patch for them.
Thanks.

I only changed comments in src/backend/utils/adt/tsvector_op.c


typos.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.6beta2: query failure with 'cache lookup failed for type 0'

2016-07-02 Thread Tom Lane
Stefan Huehner  writes:
> On Sat, Jul 02, 2016 at 11:35:52AM -0400, Tom Lane wrote:
>> Hmm, I can't reproduce this on HEAD, but it doesn't seem to look like
>> anything we've fixed post-beta2.  Do you need to put any particular data
>> into the tables?  Are you running with any nondefault configuration
>> parameters?

> No data at all is needed in the table.

Ah, I found it: need to ANALYZE the tables.  Then I get

TRAP: FailedAssertion("!(((bool) ((aggtranstype) != ((Oid) 0))))", File:
"nodeAgg.c", Line: 2698)

Looks like planner is somehow forgetting to assign aggtranstype for the
aggregate.  Will trace it down, thanks for the report!

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.6beta2: query failure with 'cache lookup failed for type 0'

2016-07-02 Thread Christoph Berg
Re: Stefan Huehner 2016-07-02 <20160702160042.ga11...@huehner.biz>
> No data at all is needed in the table.
> In fact, just creating a database plus those 3 objects is enough to
> reproduce it.

Confirmed here on Debian unstable amd64, beta2.

FEHLER:  XX000: cache lookup failed for type 0
ORT:  get_typlenbyval, lsyscache.c:1976

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.6beta2: query failure with 'cache lookup failed for type 0'

2016-07-02 Thread Stefan Huehner
On Sat, Jul 02, 2016 at 11:35:52AM -0400, Tom Lane wrote:
> Stefan Huehner  writes:
> > re-testing our application Openbravo on 9.6beta2 i found the following 
> > query failing to run with
> > ERROR: cache lookup failed for type 0
> 
> Hmm, I can't reproduce this on HEAD, but it doesn't seem to look like
> anything we've fixed post-beta2.  Do you need to put any particular data
> into the tables?  Are you running with any nondefault configuration
> parameters?

No data at all is needed in the table.
In fact, just creating a database plus those 3 objects is enough to reproduce it.

Also I did a test build of HEAD (commit:
b54f7a9ac9646845138f6851fdf3097e22daa383)

and got the same failure.

./configure --prefix=/home/huehner/oss/postgresql/git/install
make -j8 install
bin/initdb -D pg_data
bin/postmaster -D pg_data -p 

So the question is: what in my env is triggering it?

Running intel 64bit debian/unstable here.

Stefan



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.6beta2: query failure with 'cache lookup failed for type 0'

2016-07-02 Thread Tom Lane
Stefan Huehner  writes:
> re-testing our application Openbravo on 9.6beta2 i found the following query 
> failing to run with
> ERROR: cache lookup failed for type 0

Hmm, I can't reproduce this on HEAD, but it doesn't seem to look like
anything we've fixed post-beta2.  Do you need to put any particular data
into the tables?  Are you running with any nondefault configuration
parameters?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] 9.6beta2: query failure with 'cache lookup failed for type 0'

2016-07-02 Thread Stefan Huehner
Hello,
While re-testing our application Openbravo on 9.6beta2, I found the following
query failing to run with

ERROR: cache lookup failed for type 0

Tested on 9.6beta2
Specifically debian package version  '9.6~beta2-1.pgdg+1' from 
apt.postgresql.org

SELECT 
  SUM(C_ORDERLINE.LINENETAMT),
  C_CURRENCY_SYMBOL2 (SUM(C_ORDERLINE.LINENETAMT)) 
  
  FROM C_ORDER, C_ORDERLINE
  WHERE  C_ORDER.C_ORDER_ID = C_ORDERLINE.C_ORDER_ID   
  GROUP BY C_ORDER.DOCUMENTNO
 ORDER BY C_ORDER.DOCUMENTNO;

Note: the query was slimmed down manually from a bigger query while still
reproducing the issue, which explains the (now) useless PL function.

To reproduce in new empty database:
CREATE OR REPLACE FUNCTION public.c_currency_symbol2(p_amount numeric)
  RETURNS character varying AS
$BODY$ DECLARE 
BEGIN
  RETURN p_amount;
END ; $BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

create table c_order (c_order_id varchar(32) primary key, documentno 
varchar(60));

create table c_orderline (c_orderline_id varchar(32) primary key, c_order_id 
varchar(32), linenetamt numeric, c_currency_id varchar(32));

Cutting anything more out of the query seems to make it no longer trigger the 
problem.

Changing the 'c_orderline' CREATE TABLE statement to omit the last column 
'c_currency_id' (which is not even referenced in the query) also makes the 
issue no longer reproducible.

Regards,
Stefan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Leaking memory in text_overlay function

2016-07-02 Thread Dirk Rudolph
Well, that's good to know. It just seemed curious that, in my case, I only saw
this in this particular function.

Anyway, I will consider handling the memory the same way, if that is the
recommendation.

Many thanks.

/Closed

On Sat, Jul 2, 2016 at 4:45 PM, Tom Lane  wrote:

> Dirk Rudolph  writes:
> > while implementing my own C function, I noticed that some memory is
> not freed by the text_overlay function in varlena.c
>
> By and large, that's intentional.  SQL-called functions normally run
> in short-lived memory contexts, so that any memory they don't bother to
> pfree will be cleaned up automatically at the end of the tuple cycle.
> If we did not follow that approach, there would need to be many thousands
> more explicit pfree calls than there are today, and the system would be
> net slower overall because multiple retail pfree's are slower than a
> MemoryContextReset.
>
> There are places where it's important to avoid leaking memory, certainly.
> But I don't think this is one of them, unless you can demonstrate a
> scenario with query-lifespan or worse leakage.
>
> > Particularly I mean both substrings allocated by text_substring
> (according to the documentation of text_substring "The result is always a
> freshly palloc'd datum.") and one of the concatenation results. I’m aware
> of the MemoryContext being deleted in my case but IMHO builtin functions
> should be memory safe.
>
> That is not the project policy.
>
> regards, tom lane
>



-- 

Dirk Rudolph | Senior Software Engineer

Netcentric AG

M: +41 79 642 37 11
D: +49 174 966 84 34

dirk.rudo...@netcentric.biz | www.netcentric.biz


Re: [HACKERS] Forthcoming SQL standards about JSON and Multi-Dimensional Arrays (FYI)

2016-07-02 Thread Tom Lane
Andreas Karlsson  writes:
> Has any of the major PostgreSQL companies looked into sending members to 
> the ISO committee? It would be nice if we could be represented.

Peter E. had observer status at one point, don't know if he still does.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in batch tuplesort memory CLUSTER case (9.6 only)

2016-07-02 Thread Tom Lane
Peter Geoghegan  writes:
> On Fri, Jul 1, 2016 at 8:37 PM, Noah Misch  wrote:
>> PostgreSQL is open to moving features from zero test coverage to nonzero test
>> coverage.  The last several releases have each done so.  Does that
>> sufficiently clarify the policy landscape?

> Not quite, no. There are costs associated with adding regression tests
> that exercise external sorting. What are the costs, and how are they
> felt? How can we add more extensive test coverage without burdening
> those that run extremely slow buildfarm animals, for example?

As the owner of several slow buildfarm critters, and as somebody who runs
the regression tests manually many times on a typical work day, I do have
concerns about that ;-).  I agree that improving code coverage of our
tests is a good goal, but I think we need to take steps to make sure that
we don't increase the runtime of the regression tests too much.

I suggest that we should not take the existing code behavior as something
immutable if it makes it hard to create tests.  Peter and I already
discussed changing the behavior of hash CREATE INDEX to make its sort code
path more easily reachable, and perhaps there are similar things that
could be done elsewhere.  For instance, I do not believe that the minimum
value of maintenance_work_mem was ever graven on a stone tablet; if it
would help to reduce that, let's reduce it.
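
To make that concrete, here is a sketch of the kind of compact test a lower
limit would enable; the table size and memory setting are illustrative, not
proposals:

```sql
-- Sketch: drive tuplesort through its external-sort path in CLUSTER
-- while keeping the data set small (sizes here are illustrative).
SET maintenance_work_mem = '1MB';   -- the current minimum
CREATE TABLE sort_test AS
  SELECT g AS id, md5(g::text) AS payload
  FROM generate_series(1, 100000) g;
CREATE INDEX sort_test_payload_idx ON sort_test (payload);
CLUSTER sort_test USING sort_test_payload_idx;
```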

> I think that a new tuplesort.sql test file is where a test like this
> belongs (not in the existing cluster.sql test file).

If you are thinking of setting it up like numeric_big, I'm not terribly
excited about that, because to the best of my recollection numeric_big
has never identified a bug.  It doesn't get run often enough to have
much chance of that.

It's possible that it'd be sensible to have a mechanism like that but
make it opt-out rather than opt-in; that is, the buildfarm critters
would run all tests by default, but owners of slow animals could set
some flag to skip longer tests.  Don't know how much new infrastructure
that would take, or whether it's worth the trouble.  But I generally
don't believe that long tests are good just for being long.  If something
is slow enough that people start taking measures to avoid running it,
that's a net loss not a net win.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Leaking memory in text_overlay function

2016-07-02 Thread Tom Lane
Dirk Rudolph  writes:
> while implementing my own C function, I noticed that some memory is not 
> freed by the text_overlay function in varlena.c

By and large, that's intentional.  SQL-called functions normally run
in short-lived memory contexts, so that any memory they don't bother to
pfree will be cleaned up automatically at the end of the tuple cycle.
If we did not follow that approach, there would need to be many thousands
more explicit pfree calls than there are today, and the system would be
net slower overall because multiple retail pfree's are slower than a
MemoryContextReset.
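
A minimal sketch of that pattern, with illustrative names (the context name
and function are made up, but the memutils.h API is real):

```c
/* Sketch of the pattern: do all per-call palloc()s in a short-lived
 * context and free it in one call, instead of many retail pfree()s. */
#include "postgres.h"
#include "utils/memutils.h"

static void
per_call_work(void)
{
    MemoryContext scratch;
    MemoryContext oldcxt;

    scratch = AllocSetContextCreate(CurrentMemoryContext,
                                    "per-call scratch",
                                    ALLOCSET_DEFAULT_MINSIZE,
                                    ALLOCSET_DEFAULT_INITSIZE,
                                    ALLOCSET_DEFAULT_MAXSIZE);
    oldcxt = MemoryContextSwitchTo(scratch);

    /* ... palloc() freely here; no pfree() calls needed ... */

    MemoryContextSwitchTo(oldcxt);
    MemoryContextDelete(scratch);   /* one call frees everything */
}
```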

There are places where it's important to avoid leaking memory, certainly.
But I don't think this is one of them, unless you can demonstrate a
scenario with query-lifespan or worse leakage.

> Particularly I mean both substrings allocated by text_substring 
> (according to the documentation of text_substring "The result is always a 
> freshly palloc'd datum.") and one of the concatenation results. I’m aware 
> of the MemoryContext being deleted in my case but IMHO builtin functions 
> should be memory safe.

That is not the project policy.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Docs, backups, and MS VSS

2016-07-02 Thread Bruce Momjian
On Sat, Jul  2, 2016 at 10:31:32PM +0800, Craig Ringer wrote:
> - Microsoft VSS is NOT safe, as it fails point 2. It is atomic only on a
> per-file level. You MUST use pg_start_backup() and pg_stop_backup() with WAL
> archiving or automated copy of the extra WAL if you use MS VSS. Most Windows
> backup products use MS VSS internally. You must ensure they have dedicated
> PostgreSQL backup support, using pg_basebackup, pg_dump/pg_restore, or
> pg_start_backup()/pg_stop_backup().

Yes, it would be good to point out that per-file snapshots are insufficient.

> - LVM is safe
> 
> - BTRFS should be fine
> 
> - Most SAN snapshots are fine, but verify with your vendor
> 
> 
> I suspect, but cannot prove, that it is also safe to snapshot pg_xlog on a
> separate filesystem if and only if you take the datadir snapshot before the
> pg_xlog snapshot and you have wal_keep_segments high enough to ensure that WAL
> needed by the redo checkpoint in the datadir snapshot is not removed. I
> wouldn't want to do this, and certainly not document it, since it's way saner
> to use pg_start_backup() etc.

Yes, I have wanted to document that WAL-at-the-end is sufficient for
non-atomic snapshots assuming the needed WAL is there.  However, even if
the WAL is backed up, it doesn't mean we are going to read it during
crash recovery, i.e. we only read from the last checkpoint or something
like that.  I have no idea how to tell people when this is safe.

My simplistic idea would be to tell people to run a checkpoint right
before all the snapshots are taken, but even that doesn't seem 100%
safe.  This needs someone who understands the WAL and how to tell people
a totally safe procedure.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Statistics Injection

2016-07-02 Thread Tom Lane
Victor Giannakouris - Salalidis  writes:
> For some research purposes, I am trying to modify the existing statistics
> of some tables in the catalogs in order to change the execution plan,
> experiment with the EXPLAIN call etc.

> Concretely, what I'd like to do is to create a "fake" table with a schema
> of my choice (that's the easy part) and then modify the
> statistics(particularly, the number of tuples and the number of pages).

> Firstly, I create an empty table (CREATE TABLE newTable()) and then I
> update the pg_class table as well (UPDATE pg_class SET relpages = #pages
> WHERE relname='newTable').

> The problem is that, even if I set the reltuples and relpages of my choice,
> when I run the EXPLAIN clause for a query in which 'newTable' is
> involved (e.g. EXPLAIN SELECT * FROM newTable), I get the same cost and
> row estimation.

You can't really do it like that, because the planner always looks at
the true relation size (RelationGetNumberOfBlocks()).  It uses
reltuples/relpages as an estimate of tuple density, not as hard numbers.
The reason for this is to cope with any table growth that may have
occurred since the last VACUUM/ANALYZE.

You could modify the code in plancat.c to change that, or you could
plug into the get_relation_info_hook to tweak the constructed
RelOptInfo before anything is done with it.
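
As a rough sketch of the second suggestion, an extension could install the
hook and overwrite the size estimates. The hook type and RelOptInfo fields
are real (declared in optimizer/plancat.h); the extension name, function
names, and numbers below are made up for illustration:

```c
/* Illustrative extension sketch: override planner size estimates
 * via get_relation_info_hook. */
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/plancat.h"

PG_MODULE_MAGIC;

static get_relation_info_hook_type prev_hook = NULL;

static void
fake_stats_relation_info(PlannerInfo *root, Oid relid,
                         bool inhparent, RelOptInfo *rel)
{
    if (prev_hook)
        prev_hook(root, relid, inhparent, rel);

    /* Pretend the relation is large, whatever its true size. */
    rel->pages = 100000;        /* fake relpages */
    rel->tuples = 10000000;     /* fake reltuples */
}

void
_PG_init(void)
{
    prev_hook = get_relation_info_hook;
    get_relation_info_hook = fake_stats_relation_info;
}
```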

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Docs, backups, and MS VSS

2016-07-02 Thread Craig Ringer
Hi all

I just noticed that the Pg docs on backups don't discuss what kind of
snapshots are safe for use without a pg_start_backup() and pg_stop_backup()
then copying the extra WAL.

I'd like to remedy that. My understanding is that it's safe to use a
filesystem or block device level snapshot without a pg_start_backup() and
pg_stop_backup() if:

1. The snapshot includes the entire PostgreSQL data directory including all
tablespaces and pg_xlog, i.e. everything is on one filesystem or block
device;

2. The snapshot mechanism guarantees an atomic snapshot, such that every
part of the filesystem or block device is snapshotted consistently at the
same effective moment in time.

This allows PostgreSQL to treat recovery from a snapshot just like recovery
from a crash or hard reset.

I'd like to document these conditions, and note that:

- Microsoft VSS is NOT safe, as it fails point 2. It is atomic only on a
per-file level. You MUST use pg_start_backup() and pg_stop_backup() with
WAL archiving or automated copy of the extra WAL if you use MS VSS. Most
Windows backup products use MS VSS internally. You must ensure they have
dedicated PostgreSQL backup support, using pg_basebackup,
pg_dump/pg_restore, or pg_start_backup()/pg_stop_backup().

- LVM is safe

- BTRFS should be fine

- Most SAN snapshots are fine, but verify with your vendor


I suspect, but cannot prove, that it is also safe to snapshot pg_xlog on a
separate filesystem if and only if you take the datadir snapshot before the
pg_xlog snapshot and you have wal_keep_segments high enough to ensure that
WAL needed by the redo checkpoint in the datadir snapshot is not removed.
I wouldn't want to do this, and certainly not document it, since it's way
saner to use pg_start_backup() etc.
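
For comparison, the fallback procedure when snapshots are not atomic is
roughly the following; the snapshot command and data directory path are
placeholders, not real tools:

```shell
# Hypothetical sketch of a snapshot backup when atomicity is not
# guaranteed (e.g. MS VSS); paths and snapshot tool are placeholders.
psql -U postgres -c "SELECT pg_start_backup('fs-snapshot', true);"
take_filesystem_snapshot /var/lib/postgresql/9.6/main
psql -U postgres -c "SELECT pg_stop_backup();"
# pg_stop_backup() waits for the required WAL to be archived; keep that
# WAL with the snapshot so recovery can reach a consistent state.
```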

Reasonable? Will write the SGML if there's broad agreement here that it's
desirable.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] Leaking memory in text_overlay function

2016-07-02 Thread Dirk Rudolph
Hi,

while implementing my own C function, I noticed that some memory is not freed 
by the text_overlay function in varlena.c.

Particularly I mean both substrings allocated by text_substring (according to 
the documentation of text_substring, "The result is always a freshly palloc'd 
datum.") and one of the concatenation results. I'm aware of the MemoryContext 
being deleted in my case, but IMHO builtin functions should be memory safe 
(or at least the others I used are).

See attached a patch that fixes that against HEAD.

Cheers,

Dirk Rudolph | Senior Software Engineer

Netcentric AG

M: +41 79 642 37 11
D: +49 174 966 84 34

dirk.rudo...@netcentric.biz | www.netcentric.biz



text_overlay_pfree.patch
Description: Binary data


Re: [HACKERS] OpenSSL 1.1 breaks configure and more

2016-07-02 Thread Christoph Berg
Re: Andreas Karlsson 2016-07-02 
> On 07/01/2016 11:41 AM, Christoph Berg wrote:
> > thanks for the patches. I applied all three patches on top of HEAD
> > (10c0558f). The server builds and passes "make check", pgcrypto still
> > needs work, though:
> 
> Thanks, I had forgotten pgcrypto.

pgcrypto works now as well, thanks!

Re: Alvaro Herrera 2016-07-02 <20160702002846.GA376611@alvherre.pgsql>
> Generally, version number tests sprinkled all over the place are not
> terribly nice.  I think it would be better to get configure to define a
> symbol like HAVE_BIO_METH_NEW.  Not sure about the other hunks in this
> patch; perhaps HAVE_BIO_SET_DATA, and #define both those macros if not.

OTOH, these symbols are strictly version-dependent on OpenSSL; it's not
like one of the symbols would appear or disappear for other reasons
(like a different TLS implementation, or a different operating system).

Once we decide (in 10 years?) that the minimum supported OpenSSL
version is >= 1.1, we can just drop the version checks. If these are
converted to feature tests now, it will be much harder to remember at
which point they can be dropped.

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Broken handling of lwlocknames.h

2016-07-02 Thread Christoph Berg
Re: Tom Lane 2016-07-01 <26357.1467400...@sss.pgh.pa.us>
> I made some mostly-cosmetic changes to this and pushed it.

I confirm that Debian's out-of-tree python3 build works now when
invoked directly in the relevant plpython/hstore_plpython
subdirectories. Thanks!

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in batch tuplesort memory CLUSTER case (9.6 only)

2016-07-02 Thread Jim Nasby

On 7/2/16 12:40 AM, Peter Geoghegan wrote:

We should be more ambitious about adding test coverage to tuplesort.c.


IMHO, s/ to tuplesort.c//, at least for the buildfarm.

TAP tests don't run by default, so maybe that's the place to start 
"going crazy" with adding more testing. Though I do think something 
that's sorely needed is the ability to test stuff at the C level. 
Sometimes SQL is just too high a level to verify things.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Statistics Injection

2016-07-02 Thread Victor Giannakouris - Salalidis
Hello,

For some research purposes, I am trying to modify the existing statistics
of some tables in the catalogs in order to change the execution plan,
experiment with the EXPLAIN call etc.

Concretely, what I'd like to do is to create a "fake" table with a schema
of my choice (that's the easy part) and then modify the
statistics(particularly, the number of tuples and the number of pages).

Firstly, I create an empty table (CREATE TABLE newTable()) and then I
update the pg_class table as well (UPDATE pg_class SET relpages = #pages
WHERE relname='newTable').

The problem is that, even if I set the reltuples and relpages of my choice,
when I run the EXPLAIN clause for a query in which 'newTable' is
involved (e.g. EXPLAIN SELECT * FROM newTable), I get the same cost and
row estimation.

Could anyone help me with that?

Thank you in advance,

Victor Giannakouris