Re: [HACKERS] New Python vs. old PG on raccoon and jaguarundi

2014-12-20 Thread Christian Ullrich
* From: Noah Misch [mailto:n...@leadboat.com]

 Buildfarm member jaguarundi, which has Python 3.4, activated --with-python
 for REL9_1_STABLE as of its 2014-12-15 run.  Please remove --with-python
 or test against an older Python.  It already omits --with-python for
 REL9_0_STABLE.

Done; sorry for the noise.

-- 
Christian





Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-20 Thread Ali Akbar
2014-12-16 11:01 GMT+07:00 Ali Akbar the.ap...@gmail.com:



 2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com:


 2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:
 Just quickly looking through the patch.

 The patch does not implement the check for not creating a new context
 in initArrayResultArr. I think we should implement it there too, for
 consistency (and to prevent future problems).



Testing the performance with your query looks promising: the speedup is
between 12% and 15%.

Because I'm using a 32-bit system, setting work_mem to 1024GB failed:

 ERROR:  1073741824 is outside the valid range for parameter work_mem (64
 .. 2097151)
 STATEMENT:  SET work_mem = '1024GB';
 psql:/media/truecrypt1/oss/postgresql/postgresql/../patches/array-agg.sql:20:
 ERROR:  1073741824 is outside the valid range for parameter work_mem (64
 .. 2097151)


Maybe because of that, in the large-groups test the speedup is awesome:

 master: 16,819 ms

with patch: 1,720 ms

Looks like with master, postgres resorts to disk, but with the patch it fits
in memory.

Note: I haven't tested the large dataset.

As expected, testing array_agg(anyarray), the performance is still the
same, because the subcontext change isn't implemented there (test script
modified from Tomas', attached).

I implemented the subcontext checking in initArrayResultArr by changing the
v3 patch like this:

 +++ b/src/backend/utils/adt/arrayfuncs.c
 @@ -4797,10 +4797,11 @@ initArrayResultArr(Oid array_type, Oid element_type,
                     MemoryContext rcontext, bool subcontext)
  {
     ArrayBuildStateArr *astate;
 -   MemoryContext arr_context;
 +   MemoryContext arr_context = rcontext;   /* by default use the parent ctx */

     /* Make a temporary context to hold all the junk */
 -   arr_context = AllocSetContextCreate(rcontext,
 +   if (subcontext)
 +       arr_context = AllocSetContextCreate(rcontext,
                                         "accumArrayResultArr",
                                         ALLOCSET_DEFAULT_MINSIZE,
                                         ALLOCSET_DEFAULT_INITSIZE,

Testing the performance, it got the same 12%~15% speedup. Good. (patch attached)


Looking at the modification in the accumArrayResult* functions, I'm not
really comfortable with:

1. Code that calls accumArrayResult* after explicitly calling
initArrayResult* must always pass subcontext, but it has no effect.
2. All existing code that calls accumArrayResult must be changed.

 Just an idea: why don't we minimize the change in the API like this:

1. Add the parameter bool subcontext only in the initArrayResult*
functions, not in accumArrayResult*.
2. Code that does not want to create a subcontext must call
initArrayResult* explicitly.

 Other code that calls accumArrayResult directly then only needs changes in
 the call to makeArrayResult* (with the release=true parameter). In places
 where we don't want to create a subcontext (as in array_agg_transfn),
 modify the code to use initArrayResult* before calling accumArrayResult*,
 as in the sketch below.
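A minimal sketch of that second pattern in array_agg_transfn, assuming the
subcontext flag exists only in initArrayResult* and accumArrayResult keeps
its current signature (this mirrors a hunk in the v5 patch later in this
digest):

    if (PG_ARGISNULL(0))
        /* first call: build the state directly in the aggregate context,
           without a private subcontext */
        state = initArrayResult(arg1_typeid, aggcontext, false);
    else
        state = (ArrayBuildState *) PG_GETARG_POINTER(0);

    /* accumArrayResult is unchanged */
    state = accumArrayResult(state, elem, PG_ARGISNULL(1),
                             arg1_typeid, aggcontext);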

 What do you think?



Regarding your concern about calling initArrayResult* with subcontext=false
while calling makeArrayResult* with release=true:

   Also, it seems that using 'subcontext=false' and then 'release=true'
   would be a bug. Maybe it would be appropriate to store the
   'subcontext' value into the ArrayBuildState and then throw an error
   if makeArrayResult* is called with (release=true && subcontext=false).


Yes, I think we should do that to minimize unexpected coding errors. In
makeArrayResult*, I think it's better not to throw an error, but to use an
assertion:

 Assert(release == false || astate->subcontext == true);


Regards,
-- 
Ali Akbar


array-agg-anyarray.sql
Description: application/sql
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 18ae318..48e66bf 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1271,13 +1271,13 @@ dblink_get_connections(PG_FUNCTION_ARGS)
 			/* stash away current value */
 			astate = accumArrayResult(astate,
 	  CStringGetTextDatum(hentry->name),
-	  false, TEXTOID, CurrentMemoryContext);
+	  false, TEXTOID, CurrentMemoryContext, true);
 		}
 	}
 
 	if (astate)
 		PG_RETURN_ARRAYTYPE_P(makeArrayResult(astate,
-			  CurrentMemoryContext));
+			  CurrentMemoryContext, true));
 	else
 		PG_RETURN_NULL();
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index c16b38e..bd5eb32 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -671,7 +671,7 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 /* No match, so keep old option */
 astate = accumArrayResult(astate, oldoptions[i],
 		  false, TEXTOID,
-		  CurrentMemoryContext);
+		  CurrentMemoryContext, true);
 			}
 		}
 	}
@@ -758,12 +758,12 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 
 			astate = 

Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-20 Thread Martijn van Oosterhout
On Fri, Dec 19, 2014 at 05:32:43PM -0800, Peter Geoghegan wrote:
  Most people would list the columns, but if there is a really bizarre
  constraint, with non-default opclasses, or an exclusion constraint, it's
  probably been given a name that you could use.
 
 What I find curious about the opclass thing is: when do you ever have
 an opclass that has a different idea of equality than the default
 opclass for the type? In other words, when is B-Tree strategy number 3
 not actually '=' in practice, for *any* B-Tree opclass? It certainly
 doesn't appear to be so for any shipped opclasses - the shipped
 non-default B-Tree opclasses only serve to provide alternative notions
 of sort order, never of equality.

Well, in theory you could build a case-insensitive index on a text
column. You could argue that the column should have been defined as
citext in the first place, but it might not be, for various reasons.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [HACKERS] pg_basebackup vs. Windows and tablespaces

2014-12-20 Thread Amit Kapila
On Wed, Dec 17, 2014 at 11:32 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
 On Tue, Dec 16, 2014 at 10:11 PM, Heikki Linnakangas 
hlinnakan...@vmware.com wrote:
 
  On 12/16/2014 06:30 PM, Andrew Dunstan wrote:
 
  I'm not clear why human readability is the major criterion here. As for
  that, it will be quite difficult for a human to distinguish a name with
  a space at the end from one without. I really think a simple encoding
  scheme would be much the best.

 Yeah, that could work, but we need the special encoding mainly for
 newlines; the others would work with the current patch. However, it might
 be worth doing it for all kinds of spaces. Currently it just reads the
 line up to the newline using fscanf, but if we use a special encoding, we
 might need to read the file character by character and check for a newline
 without a backslash (or other special encoding character); do you have
 something like that in mind?

 Another thing is that we need to take care to encode/decode the link
 path for the tar format, as the plain format might already be working.


The attached patch handles newlines and the other characters that are
allowed in a tablespace path. As we need an escape character only for
newlines, I have added one only for newlines. So after the patch, the
tablespace_map file will look like below for different kinds of paths; as
you can see, for tablespace id 16393, whose path contains newlines, there
is an additional escape sequence \ before each newline, whereas other
paths containing spaces work as they are (a decoding sketch follows the
example).

16391 /home/akapila/mywork/workspace_pg/master/tbs1
16393 /home/akapila/mywork/workspace_pg/master/tbs\
a\
b\

16392 /home/akapila/mywork/workspace_pg/master/tbs   2
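For illustration only - this is not the attached patch's actual code, and
the helper name is made up - a minimal C sketch of how a reader of
tablespace_map could decode one entry, treating a backslash immediately
before a newline as an embedded newline:

    #include <stdbool.h>
    #include <stdio.h>

    /* Read one tablespace_map line into buf, decoding "\<newline>". */
    static bool
    read_tablespace_map_line(FILE *fp, char *buf, size_t buflen)
    {
        size_t  n = 0;
        int     c;

        while ((c = fgetc(fp)) != EOF)
        {
            if (c == '\\')
            {
                int     next = fgetc(fp);

                if (next == '\n')
                    c = '\n';           /* escaped newline: part of the path */
                else if (next != EOF)
                    ungetc(next, fp);   /* lone backslash stays literal */
            }
            else if (c == '\n')
                break;                  /* unescaped newline ends the entry */

            if (n < buflen - 1)
                buf[n++] = c;
        }
        buf[n] = '\0';
        return (n > 0);                 /* false on an empty trailing line */
    }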

So with this, I have handled all the review comments raised for this patch,
and it is ready for review. As the status of this patch was changed from
Ready for Committer to Waiting on Author, ideally I think it should go back
to Ready for Committer; however, as I am not very sure about this point, I
will change it to Needs Review (correct me if I am wrong).

Summary of the latest changes:
1. Changed the file name from symlink_label to tablespace_map, and changed
it everywhere in comments and variable names.
2. This feature will be supported on both Windows and Linux; the
tablespace_map file will be generated on both to restore tablespace links
during archive recovery.
3. Handling of special characters in the tablespace path name.
4. Updated the docs.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


extend_basebackup_to_include_symlink_v6.patch
Description: Binary data



Re: [HACKERS] Re: [pgsql-pkg-debian] Updated libpq5 packages cause connection errors on postgresql 9.2

2014-12-20 Thread Magnus Hagander
On Fri, Dec 19, 2014 at 3:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Magnus Hagander mag...@hagander.net writes:
  On Fri, Dec 19, 2014 at 11:52 AM, Christoph Berg c...@df7cb.de wrote:
  Googling for digest too big for rsa key seems to indicate that this
  problem occurs when you are using (client?) certificates with short
  RSA keys. 512 bits is most often cited in the problem reports,
  something like 768 is around the minimum size that works, and of
  course, anything smaller than 1024 or really 1536 (or 2048) bits is
  too small for today's crypto standards.
 
  So the question here is if this is also the problem you saw - are you
  using client or server certificates with short keys?
 
  What this explanation doesn't explain is why the problem occurs with
  9.4's libpq5 while it works with 9.3's. The libssl version used for
  building these packages should really be the same, 9.3.5-2.pgdg70+1
  was built just two days ago as well.

  Some googling shows that this could be because it's negotiating TLS 1.2,
  which the key is just too small for. And we did change that in 9.4 -
  commit 326e1d73c476a0b5061ef00134bdf57aed70d5e7 disabled SSL in favor of
  always using TLS for security reasons.

 Hm ... the 9.4 release notes fail to describe that change adequately, and
 certainly don't mention that it would have any compatibility implications.
 Guess that needs to be fixed.  Does anyone know offhand what the change in
 the minimum key length is across SSL/TLS versions, exactly?


I haven't seen a specific number; it might depend on exactly which cipher
is negotiated. See for example
http://openssl.6102.n7.nabble.com/What-is-the-reason-for-error-quot-SSL-negotiation-failed-error-04075070-rsa-routines-RSA-sign-digest-td43953.html

All references I have found say at least 1024 bits is safe and 512 is
broken, but there are points in between that apparently work in some cases
only.

I think if we say "use 1024 bits or more" we err on the safe side.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Table-level log_autovacuum_min_duration

2014-12-20 Thread Michael Paquier
On Thu, Dec 18, 2014 at 11:15 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Michael Paquier wrote:
 Hi all,

 Today I spent a bit of time looking at the activity of autovacuum for
 one particularly bloated table. log_autovacuum_min_duration was
 enabled and set to a high value, but even with that I had to deal with
 some spam from the jobs of other tables. It would be cool to have the
 possibility of doing some filtering, IMO. So, what about having a relopt
 to control log_autovacuum_min_duration? RelationData->rd_options is
 readily accessible for a given relation in the code paths of vacuum
 and analyze.

 +1

While I have this idea in mind, I looked at the code to see how feasible
it would be to tune log_autovacuum_min_duration in the code paths of
analyze and vacuum. First, I think that it would be good to have
parameters for both parent relations and their toast relations, similarly
to the other autovacuum parameters; a sketch of such an entry follows...
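For concreteness, here is a hypothetical sketch of what the new entry could
look like, modeled on the existing intRelOpts[] entries in
src/backend/access/common/reloptions.c (the limits and wording are
placeholders, not a settled design):

    {
        {
            "log_autovacuum_min_duration",
            "Sets the minimum execution time above which autovacuum actions "
            "will be logged",
            RELOPT_KIND_HEAP | RELOPT_KIND_TOAST
        },
        -1, -1, INT_MAX
    },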

But now, here are some things to consider if we directly use the
reloptions available in RelationData:
- If the toast.autovacuum_* parameters are not set, toast relations
inherit values from their parent relation. Looking at autovacuum.c,
which is the only place where autovacuum options are handled, we keep
a hash table storing the toast -> parent relation mapping. Using that,
it is easy to fetch, for a given toast relation, the relopts of its
parent. Note this is strictly located in the autovacuum code path, so
to let vacuum and analyze know the custom value of
log_autovacuum_min_duration, with the inheritance property kept, we
would need to pass some extra values from autovacuum to the calls of
vacuum().
- It would not be possible to log autovacuum and analyze being skipped
when the lock is not available, because in that case the Relation
cannot be safely fetched, so there are no reloptions directly
available. This is, for example, this kind of message:
skipping analyze of foo --- lock not available

Both those things could be solved by passing a value through
VacuumStmt, like what we do when passing a value for
multixact_freeze_min_age, or with an extra argument to vacuum(), for
example. Now I am not sure if it is worth it, and we could live as
well with a parameter that does not support the parent relation ->
toast inheritance, so a log value set for a toast relation and its
parent would share no dependency. In short, that's a trade-off between
code simplicity and usability.

Thoughts?
-- 
Michael




Re: [HACKERS] POLA violation with \c service=

2014-12-20 Thread David Fetter
On Fri, Dec 19, 2014 at 07:03:36PM -0700, David Johnston wrote:
 On Wed, Dec 17, 2014 at 8:25 AM, David Fetter [via PostgreSQL] 
 ml-node+s1045698n5831124...@n5.nabble.com wrote:
 
  On Wed, Dec 17, 2014 at 08:14:04AM -0500, Andrew Dunstan wrote:
 
  
   On 12/17/2014 04:11 AM, Heikki Linnakangas wrote:
   On 12/17/2014 10:03 AM, Albe Laurenz wrote:
   David Fetter wrote:
    I've noticed that psql's \c function handles service= requests in a
    way that I can only characterize as broken.
   
   This came up in the context of connecting to a cloud hosting service
   named after warriors or a river or something, whose default hostnames
   are long, confusing, and easy to typo, so I suspect that service= may
   come up more often going forward than it has until now.
   
   For example, when I try to use
   
   \c service=foo
   
   It will correctly figure out which database I'm trying to connect to,
   but fail to notice that it's on a different host, port, etc., and
   hence fail to connect with a somewhat unhelpful error message.
   
   I can think of a few approaches for fixing this:
   
   0.  Leave it broken.
   1.  Disable service= requests entirely in \c context, and error out
   if attempted.
   2.  Ensure that \c actually uses all of the available information.
   
   Is there another one I missed?
   
   If not, which of the approaches seems reasonable?
   
    #2 is the correct solution, #1 a band aid.
    
    It would be handy if \c service=foo actually worked. We should do #3:
    if the database name is actually a connection string, or a service
    specification, it should not re-use the hostname and port from the
    previous connection, but use the values from the connection string or
    service file.
  
  
    Yeah, that's the correct solution. It should not be terribly difficult
    to create a test for a conninfo string in the dbname parameter. That's
    what libpq does, after all. We certainly don't want psql to have to
    try to interpret the service file. psql just needs to let libpq do its
    work in this situation.
 
  Letting libpq handle this is the only sane plan for fixing it.  I'm
  looking into that today.
 
 
 On a tangentially related note: it is not outside the realm of possibility
 that a user would want one pg_service entry to reference another one:

You want entries in the service system to be able to reference other
entries, setting defaults, for example?  Interesting idea.  As you
say, it's tangential to this.

The bug I found - and I'm increasingly convinced it's a bug whose fix
should be back-patched - is that psql fails to let libpq do its job
with the extant service system, or more precisely lets it do only part
of its job, leading to broken behavior. A sketch of the kind of test
involved follows below.
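A minimal sketch of the libpq-based test this fix would involve; the helper
name is hypothetical, but PQconninfoParse is the real libpq function for
parsing conninfo strings:

    #include <stdbool.h>
    #include <libpq-fe.h>

    /*
     * Return true if dbname is really a conninfo string (key=value pairs
     * such as "service=foo") or a connection URI, in which case psql
     * should hand it to libpq untouched instead of reusing the previous
     * host and port.
     */
    static bool
    is_connection_string(const char *dbname)
    {
        char               *errmsg = NULL;
        PQconninfoOption   *opts = PQconninfoParse(dbname, &errmsg);

        if (opts == NULL)
        {
            PQfreemem(errmsg);      /* a plain database name fails to parse */
            return false;
        }
        PQconninfoFree(opts);
        return true;
    }

psql could then skip reusing the prior connection parameters whenever this
returns true, and let libpq resolve everything itself.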

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Noah Misch
On Sat, Dec 20, 2014 at 05:14:03PM +0100, CSPUG wrote:
 On 20.12.2014 07:39, Noah Misch wrote:
  Buildfarm members magpie, treepie and fulmar went absent on
  2014-10-29. Since returning on 2014-11-16, they have consistently
  failed with 'initdb: invalid locale name cs_CZ.WIN-1250'. No
  commits in that period readily explain a regression in this area. Did
  the underlying system configurations change?
 
 I'm pretty sure it's because of broken locales at the system level. It was
 working fine, and I haven't done any substantial changes to the system
 except for occasional yum updates, so I'm unaware of what went wrong :(
 
 The issue looks like this:
 
 # locale -a | grep en_US
 en_US
 en_US.iso88591
 en_US.iso885915
 en_US.utf8
 
 # locale en_US
 locale: unknown name en_US

The NAME argument to the locale tool is something like LC_PAPER, not a
locale name.  Use LANG=en_US locale LC_NUMERIC to test locale loading.

 The only reason I can think of is that some of the updates required
 a reboot, and I haven't done that because that would kill all the VMs
 running on that HW, including the one with CLOBBER_CACHE_RECURSIVE
 tests. And that'd throw away tests that have been running for ~3 months.
 
 I've disabled the three animals (magpie, fulmar, treepie) for now, because
 there's no point in running the tests until the locale issues are fixed. If
 anyone has an idea of what might be wrong, let me know.

Those animals have been successfully completing initdb for several locales,
including en_US, before failing at cs_CZ.WIN-1250.  You could disable just the
cs_CZ.WIN-1250 steps.  A CentOS 6.6 system here also lacks such a locale:

$ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
.

-1
46
0
ANSI_X3.4-1968
$ locale -a|grep cs_
cs_CZ
cs_CZ.iso88592
cs_CZ.utf8

Perhaps an update made the system stricter about rejecting unknown locales.




Re: [HACKERS] [PATCH] HINT: pg_hba.conf changed since last config reload

2014-12-20 Thread Steve Singer

On 12/19/2014 10:41 AM, Alex Shulgin wrote:

I don't think so.  The scenario this patch relies on assumes that the
DBA will remember to look in the log if something goes wrong, and in
your case there would be a message like the following:

WARNING:  pg_hba.conf not reloaded

So an extra hint about file timestamp is unneeded.


That makes sense to me.
I haven't found any new issues with this patch.
I think it is ready for a committer.




--
Alex








Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 On Sat, Dec 20, 2014 at 05:14:03PM +0100, CSPUG wrote:
 On 20.12.2014 07:39, Noah Misch wrote:
 Buildfarm members magpie, treepie and fulmar went absent on
 2014-10-29. Since returning on 2014-11-16, they have consistently
 failed with 'initdb: invalid locale name cs_CZ.WIN-1250'. No
 commits in that period readily explain a regression in this area. Did
 the underlying system configurations change?

That's what I'd been assuming, but I had not got round to inquiring.

 I'm pretty sure it's because of broken locales at the system level. It was
 working fine, and I haven't done any substantial changes to the system
 except for occasional yum updates, so I'm unaware of what went wrong :(
 
 The only reason I can think of is that some of the updates required
 a reboot, and I haven't done that because that would kill all the VMs
 running on that HW, including the one with CLOBBER_CACHE_RECURSIVE
 tests. And that'd throw away tests that have been running for ~3 months.
 
 I've disabled the three animals (magpie, fulmar, treepie) for now, because
 there's no point in running the tests until the locale issues are fixed. If
 anyone has an idea of what might be wrong, let me know.

 Those animals have been successfully completing initdb for several locales,
 including en_US, before failing at cs_CZ.WIN-1250.  You could disable just the
 cs_CZ.WIN-1250 steps.  A CentOS 6.6 system here also lacks such a locale:

 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_ALL to default locale: No such file or directory

FWIW, an actual RHEL 6.6 system doesn't like that either; at least not
with a fairly minimal set of locales installed.

 Perhaps an update made the system stricter about rejecting unknown locales.

Red Hat released RHEL 6.6 on Oct 14.  I am not sure how long it took the
CentOS crew to follow suit, but it's well within the realm of plausibility
that when these buildfarm critters went offline it was associated with a
system update from 6.5 to 6.6 ... which was a *major* update.  (On my
machine, something like 480 out of 1500 packages were updated.)  It's easy
to believe that either the glibc locale code is now stricter, or that Red
Hat reshuffled the packaging a bit and the missing locale is now in a
new package that didn't get installed.

Personally I'd not have tried to run such a system without rebooting ...
it's unlikely that not rebooting has anything directly to do with this
problem, but I would not be surprised by issues elsewhere.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Pavel Stehule
2014-12-20 17:48 GMT+01:00 Noah Misch n...@leadboat.com:

 On Sat, Dec 20, 2014 at 05:14:03PM +0100, CSPUG wrote:
  On 20.12.2014 07:39, Noah Misch wrote:
   Buildfarm members magpie, treepie and fulmar went absent on
   2014-10-29. Since returning on 2014-11-16, they have consistently
   failed with 'initdb: invalid locale name cs_CZ.WIN-1250'. No
   commits in that period readily explain a regression in this area. Did
   the underlying system configurations change?
 
  I'm pretty sure it's because of broken locales at the system level. It
  was working fine, and I haven't done any substantial changes to the
  system except for occasional yum updates, so I'm unaware of what went
  wrong :(
 
  The issue looks like this:
 
  # locale -a | grep en_US
  en_US
  en_US.iso88591
  en_US.iso885915
  en_US.utf8
 
  # locale en_US
  locale: unknown name en_US

 The NAME argument to the locale tool is something like LC_PAPER, not a
 locale name.  Use LANG=en_US locale LC_NUMERIC to test locale loading.

  The only reason I can think of is that some of the updates required
  a reboot, and I haven't done that because that would kill all the VMs
  running on that HW, including the one with CLOBBER_CACHE_RECURSIVE
  tests. And that'd throw away tests that have been running for ~3 months.
 
  I've disabled the three animals (magpie, fulmar, treepie) for now, because
  there's no point in running the tests until the locale issues are fixed.
  If anyone has an idea of what might be wrong, let me know.

 Those animals have been successfully completing initdb for several locales,
 including en_US, before failing at cs_CZ.WIN-1250.  You could disable just
 the cs_CZ.WIN-1250 steps.  A CentOS 6.6 system here also lacks such a locale:

 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC


It is a Microsoft encoding - it is not available on Linux

Regards

Pavel


 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_ALL to default locale: No such file or directory
 .

 -1
 46
 0
 ANSI_X3.4-1968
 $ locale -a|grep cs_
 cs_CZ
 cs_CZ.iso88592
 cs_CZ.utf8

 Perhaps an update made the system stricter about rejecting unknown locales.





Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tomas Vondra
On 20.12.2014 18:13, Pavel Stehule wrote:
 
 2014-12-20 17:48 GMT+01:00 Noah Misch n...@leadboat.com

 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 
 
  It is a Microsoft encoding - it is not available on Linux

Not true. It is available on Linux, and the regression tests were
running with it for a long time (essentially from the moment magpie was
added to the buildfarm).

Tomas




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Pavel Stehule
2014-12-20 18:17 GMT+01:00 Tomas Vondra t...@fuzzy.cz:

 On 20.12.2014 18:13, Pavel Stehule wrote:
 
  2014-12-20 17:48 GMT+01:00 Noah Misch n...@leadboat.com
 
  $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 
 
   It is a Microsoft encoding - it is not available on Linux

 Not true. It is available on Linux, and the regression tests were
 running with it for a long time (essentially from the moment magpie was
 added to the buildfarm).


aha

I am sorry for my mistake

Pavel



 Tomas





Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 On 20.12.2014 18:13, Pavel Stehule wrote:
 It is Microsoft encoding, - it is not available on Linux

 Not true. It is available on Linux, and the regression tests were
 running with it for a long time (essentially from the moment magpie was
 added to the buildfarm).

Well, you had it at one time, but it sure doesn't seem to be there now.

I poked around in the RHEL 6.6 release notes, and found this possibly
relevant tidbit:

 mingw component, BZ#1063396

 Following the deprecation of Matahari packages in Red Hat
 Enterprise Linux 6.3, at which time the mingw packages were noted
 as deprecated, and the subsequent removal of Matahari packages
 from Red Hat Enterprise Linux 6.4, the mingw packages are now
 being removed from Red Hat Enterprise Linux 6.6.

 The mingw packages will no longer be shipped in future Red Hat
 Enterprise Linux 6 minor releases, nor will they receive
 security-related updates. Consequently, users are advised to
 uninstall any earlier releases of the mingw packages from their
 Red Hat Enterprise Linux 6 systems.

It seems plausible that a WIN-1250 locale would have been something
that would've been supplied by mingw rather than being part of either
glibc-common or any official locale package.  I'm not sure how that
translates to it actively disappearing from your machine --- as the note
says, you're advised to uninstall mingw, but I don't think the 6.6
update would have removed it automatically.  Still, the outlines of a
theory are starting to come into focus.

Immediate recommendation is to restart the buildfarm critters with the
missing locale removed from their configurations.  If you can find the
locale again somewhere, by all means re-enable it, but better less testing
than no testing in the meantime.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Andrew Dunstan


On 12/20/2014 12:32 PM, Tom Lane wrote:



Immediate recommendation is to restart the buildfarm critters with the
missing locale removed from their configurations.  If you can find the
locale again somewhere, by all means re-enable it, but better less testing
than no testing in the meantime.





Yeah, all they need to do is alter the locales setting in the config 
file. If they want to force new runs, they can use 
https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Tips_and_Tricks



cheers

andrew




Re: hash_create API changes (was Re: [HACKERS] speedup tidbitmap patch: hash BlockNumber)

2014-12-20 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-12-19 22:03:55 -0600, Jim Nasby wrote:
 What I am thinking is not using all of those fields in their raw form to
 calculate the hash value. IE: something analogous to:
 hash_any(SharedBufHash, ((rot(forkNum, 2) | dbNode) ^ relNode) << 32 | blockNum)

 perhaps that actual code wouldn't work, but I don't see why we couldn't do
 something similar... am I missing something?

 I don't think that'd improve anything. Jenkins' hash does have quite good
 mixing properties; I don't believe that the above would improve the
 quality of the hash.

I think what Jim is suggesting is to intentionally degrade the quality of
the hash in order to let it be calculated a tad faster.  We could do that
but I doubt it would be a win, especially in systems with lots of buffers.
IIRC, when we put in Jenkins hashing to replace the older homebrew hash
function, it improved performance even though the hash itself was slower.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tomas Vondra
On 20.12.2014 17:48, Noah Misch wrote:
 On Sat, Dec 20, 2014 at 05:14:03PM +0100, CSPUG wrote:
 On 20.12.2014 07:39, Noah Misch wrote:
 Buildfarm members magpie, treepie and fulmar went absent on
 2014-10-29. Since returning on 2014-11-16, they have consistently
 failed with 'initdb: invalid locale name cs_CZ.WIN-1250'. No
 commits in that period readily explain a regression in this area. Did
 the underlying system configurations change?

 I'm pretty sure it's because of broken locales at the system level. It was
 working fine, and I haven't done any substantial changes to the system
 except for occasional yum updates, so I'm unaware of what went wrong :(

 The issue looks like this:

 # locale -a | grep en_US
 en_US
 en_US.iso88591
 en_US.iso885915
 en_US.utf8

 # locale en_US
 locale: unknown name en_US
 
 The NAME argument to the locale tool is something like LC_PAPER, not a
 locale name.  Use LANG=en_US locale LC_NUMERIC to test locale loading.

D'oh! That kinda explains the strange `locale` failures, of course ...

But still, the three animals (magpie, fulmar, treepie) are all running
with these locales:

   C en_US
   cs_CZ.UTF-8 cs_CZ.ISO-8859-2 cs_CZ.WIN-1250
   sk_SK.UTF-8 sk_SK.ISO-8859-2 sk_SK.WIN-1250

and the only one missing (so the 'locale LC_NUMERIC' failed) was
sk_SK.WIN-1250. I've fixed that, and now all the locales work fine.

That however does not explain why the tests are failing with
cs_CZ.WIN-1250 because that locale was working for some time.

 
 The only reason I can think of is that some of the updates required
 a reboot, and I haven't done that because that would kill all the VMs
 running on that HW, including the one with CLOBBER_CACHE_RECURSIVE
 tests. And that'd throw away tests that have been running for ~3 months.

 I've disabled the three animals (magpie, fulmar, treepie) for now, because
 there's no point in running the tests until the locale issues are fixed. If
 anyone has an idea of what might be wrong, let me know.
 
 Those animals have been successfully completing initdb for several locales,
 including en_US, before failing at cs_CZ.WIN-1250.  You could disable just the
 cs_CZ.WIN-1250 steps.  A CentOS 6.6 system here also lacks such a locale:
 
 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 locale: Cannot set LC_CTYPE to default locale: No such file or directory
 locale: Cannot set LC_MESSAGES to default locale: No such file or directory
 locale: Cannot set LC_ALL to default locale: No such file or directory
 .
 
 -1
 46
 0
 ANSI_X3.4-1968
 $ locale -a|grep cs_
 cs_CZ
 cs_CZ.iso88592
 cs_CZ.utf8
 
 Perhaps an update made the system stricter about rejecting unknown locales.

I believe the locale system (at the OS level) works just like before. I
remember I had to manually create the locales while initially setting up
the animals. Then, ~2 months ago, something happened (I assume a yum
update) and some of the locales disappeared. But I have recreated them,
except for sk_SK.WIN-1250. And yet the tests fail because of
cs_CZ.WIN-1250, which does exist.

Attached is a log of

for l in cs_CZ.UTF-8 sk_SK.UTF-8 cs_CZ.ISO-8859-2 sk_SK.ISO-8859-2
cs_CZ.WIN-1250 sk_SK.WIN-1250; do
  echo $l; LANG=$l locale LC_NUMERIC;
done;

showing that all the locales exist. However when I tried to initialize a
cluster with cs_CZ.WIN-1250, I got an error like this:

[pgbuild@regular-builds ~]$ locale
LANG=cs_CZ.WIN-1250
LC_CTYPE=cs_CZ.WIN-1250
LC_NUMERIC=cs_CZ.WIN-1250
LC_TIME=cs_CZ.WIN-1250
LC_COLLATE=cs_CZ.WIN-1250
LC_MONETARY=cs_CZ.WIN-1250
LC_MESSAGES=cs_CZ.WIN-1250
LC_PAPER=cs_CZ.WIN-1250
LC_NAME=cs_CZ.WIN-1250
LC_ADDRESS=cs_CZ.WIN-1250
LC_TELEPHONE=cs_CZ.WIN-1250
LC_MEASUREMENT=cs_CZ.WIN-1250
LC_IDENTIFICATION=cs_CZ.WIN-1250
LC_ALL=
[pgbuild@regular-builds ~]$ pg_ctl -D tmp-data init
The files belonging to this database system will be owned by user pgbuild.
This user must also own the server process.

The database cluster will be initialized with locale cs_CZ.WIN-1250.
could not determine encoding for locale cs_CZ.WIN-1250: codeset is
ANSI_X3.4-1968
initdb: could not find suitable encoding for locale cs_CZ.WIN-1250
Rerun initdb with the -E option.
Try initdb --help for more information.
pg_ctl: database system initialization failed

So apparently the locale does exist, but we're unable to cope with it
for some reason ...
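To see what initdb is reacting to, here is a small standalone C sketch
(assuming glibc) of the codeset query that initdb effectively performs;
under the broken locale definition it prints ANSI_X3.4-1968 instead of
CP1250:

    #include <langinfo.h>
    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        if (setlocale(LC_CTYPE, "cs_CZ.WIN-1250") == NULL)
        {
            fprintf(stderr, "locale cs_CZ.WIN-1250 not installed\n");
            return 1;
        }
        /* initdb derives the database encoding from this codeset name */
        printf("codeset: %s\n", nl_langinfo(CODESET));
        return 0;
    }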

Tomas
cs_CZ.UTF-8
,
 
3;3
44
160
UTF-8
sk_SK.UTF-8
,
 
3;3
44
160
UTF-8
cs_CZ.ISO-8859-2
,
�
3;3
44
160
ISO-8859-2
sk_SK.ISO-8859-2
,
�
3;3
44
160
ISO-8859-2
cs_CZ.WIN-1250
,
 
3;3
44
160
ANSI_X3.4-1968
sk_SK.WIN-1250
,
 
3;3
44
160
ANSI_X3.4-1968


Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tomas Vondra
On 20.12.2014 18:32, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 On 20.12.2014 18:13, Pavel Stehule wrote:
 It is Microsoft encoding, - it is not available on Linux
 
 Not true. It is available on Linux, and the regression tests were 
 running with it for a long time (essentially from the moment magpie
 was added to the buildfarm).
 
 Well, you had it at one time, but it sure doesn't seem to be there now.

 I poked around in the RHEL 6.6 release notes, and found this possibly
 relevant tidbit:

Ah, apparently I upgraded this VM to 6.6 - that wasn't exactly planned.

 
mingw component, BZ#1063396
 
Following the deprecation of Matahari packages in Red Hat
Enterprise Linux 6.3, at which time the mingw packages were noted
as deprecated, and the subsequent removal of Matahari packages
from Red Hat Enterprise Linux 6.4, the mingw packages are now
being removed from Red Hat Enterprise Linux 6.6.
 
The mingw packages will no longer be shipped in future Red Hat
Enterprise Linux 6 minor releases, nor will they receive
security-related updates. Consequently, users are advised to
uninstall any earlier releases of the mingw packages from their
Red Hat Enterprise Linux 6 systems.
 
 It seems plausible that a WIN-1250 locale would have been something 
 that would've been supplied by mingw rather than being part of
 either glibc-common or any official locale package. I'm not sure how
 that translates to it actively disappearing from your machine --- as
 the note says, you're advised to uninstall mingw, but I don't think
 the 6.6 update would have removed it automatically. Still, the
 outlines of a theory are starting to come into focus.

I don't think so. As I mentioned in the previous post, I can still
create the locale, but initdb still fails with an error about the
ANSI_X3.4-1968 encoding.

But clearly, something changed between RH 6.5 and 6.6, because on 6.5 I
get this:

$ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
,
�
3;3
44
160
CP1250

while on 6.6 I get this:

$ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
,

3;3
44
160
ANSI_X3.4-1968

So yeah, the locale definition did change a bit - the encoding is
different, and we can't cope with it.

 Immediate recommendation is to restart the buildfarm critters with
 the missing locale removed from their configurations. If you can find
 the locale again somewhere, by all means re-enable it, but better
 less testing than no testing in the meantime.

I've removed cs_CZ.WIN-1250 and sk_SK.WIN-1250 from the config. Let's
see how that works.


Tomas




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 I believe the locale system (at the OS level) works just like before. I
 remember I had to manually create the locales while initially setting up
 the animals. Then, ~2 months ago, something happened (I assume a yum
 update) and some of the locales disappeared. But I have recreated them,
 except for sk_SK.WIN-1250. And yet the tests fail because of
 cs_CZ.WIN-1250, which does exist.

I am betting that you recreated them differently from before.

 However when I tried to initialize a
 cluster with cs_CZ.WIN-1250, I got an error like this:

 [pgbuild@regular-builds ~]$ pg_ctl -D tmp-data init
 The files belonging to this database system will be owned by user pgbuild.
 This user must also own the server process.

 The database cluster will be initialized with locale cs_CZ.WIN-1250.
 could not determine encoding for locale cs_CZ.WIN-1250: codeset is
 ANSI_X3.4-1968
 initdb: could not find suitable encoding for locale cs_CZ.WIN-1250

Locale cs_CZ.WIN-1250 is evidently marked with a codeset property of
ANSI_X3.4-1968 (which means old-school US-ASCII).  That's certainly
wrong.  I believe the correct thing would be CP1250.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 But clearly, something changed between RH 6.5 and 6.6, because on 6.5 I
 get this:

 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 ,
 �
 3;3
 44
 160
 CP1250

 while on 6.6 I get this:

 $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
 ,

 3;3
 44
 160
 ANSI_X3.4-1968

That's certainly broken.  The entire point of having a cs_CZ.WIN-1250
locale (as opposed to cs_CZ.something-else) would be to specify a
codeset corresponding to WIN-1250.  Our code recognizes the spelling
CP1250 for that.  It's possible there are other spellings we should
recognize, but ANSI_X3.4-1968 is certainly not one.  As I said, that
just means ASCII, so it's completely useless for determining which
ASCII-superset encoding is wanted.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tomas Vondra
On 20.12.2014 19:05, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 I believe the locale system (at the OS level) works just like before. I
 remember I had to manually create the locales while initially setting up
 the animals. Then, ~2 months ago something happened (I asssume a yum
 update) and some of the locales disappeared. But I have recreated them,
 except for sk_SK.WIN-1250. But the tests fail because of cs_CZ.WIN-1250
 which does exist.
 
 I am betting that you recreated them differently from before.

And you're probably right. Apparently, I recreated them like this:

  $ localedef -v -c -i cs_CZ -f WIN-1250 cs_CZ.WIN-1250

but the correct way seems to be this:

  $ localedef -v -c -i cs_CZ -f CP1250 cs_CZ.WIN-1250

 However when I tried to initialize a
 cluster with cs_CZ.WIN-1250, I got an error like this:
 
 [pgbuild@regular-builds ~]$ pg_ctl -D tmp-data init
 The files belonging to this database system will be owned by user pgbuild.
 This user must also own the server process.
 
 The database cluster will be initialized with locale cs_CZ.WIN-1250.
 could not determine encoding for locale cs_CZ.WIN-1250: codeset is
 ANSI_X3.4-1968
 initdb: could not find suitable encoding for locale cs_CZ.WIN-1250
 
 Locale cs_CZ.WIN-1250 is evidently marked with a codeset property of
 ANSI_X3.4-1968 (which means old-school US-ASCII).  That's certainly
 wrong.  I believe the correct thing would be CP1250.

Yes. I fixed the locales and added them back to the buildfarm client
configuration.

regards
Tomas Vondra




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 On 20.12.2014 19:05, Tom Lane wrote:
 I am betting that you recreated them differently from before.

 And you're probably right. Apparently, I recreated them like this:

   $ localedef -v -c -i cs_CZ -f WIN-1250 cs_CZ.WIN-1250

 but the correct way seems to be this:

   $ localedef -v -c -i cs_CZ -f CP1250 cs_CZ.WIN-1250

Interesting.  Apparently, instead of failing outright on an unrecognized
charmap name, localedef just substituted ASCII and plowed ahead.  Bad dog.

regards, tom lane




Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread CSPUG
On 20.12.2014 07:39, Noah Misch wrote:
 Buildfarm members magpie, treepie and fulmar went absent on
 2014-10-29. Since returning on 2014-11-16, they have consistently
 failed with 'initdb: invalid locale name cs_CZ.WIN-1250'. No
 commits in that period readily explain a regression in this area. Did
 the underlying system configurations change?

I'm pretty sure it's because of broken locales at the system level. It was
working fine, and I haven't done any substantial changes to the system
except for occasional yum updates, so I'm unaware of what went wrong :(

The issue looks like this:

# locale -a | grep en_US
en_US
en_US.iso88591
en_US.iso885915
en_US.utf8

# locale en_US
locale: unknown name en_US

The only reason I can think of is that some of the updates required
a reboot, and I haven't done that because that would kill all the VMs
running on that HW, including the one with CLOBBER_CACHE_RECURSIVE
tests. And that'd throw away tests that have been running for ~3 months.

I've disabled the three animals (magpie, fulmar, treepie) for now, because
there's no point in running the tests until the locale issues are fixed. If
anyone has an idea of what might be wrong, let me know.

Tomas


Re: [HACKERS] Initdb-cs_CZ.WIN-1250 buildfarm failures

2014-12-20 Thread Tomas Vondra
On 20.12.2014 19:35, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 On 20.12.2014 19:05, Tom Lane wrote:
 I am betting that you recreated them differently from before.
 
 And you're probably right. Apparently, I recreated them like this:
 
   $ localedef -v -c -i cs_CZ -f WIN-1250 cs_CZ.WIN-1250
 
 but the correct way seems to be this:
 
   $ localedef -v -c -i cs_CZ -f CP1250 cs_CZ.WIN-1250
 
 Interesting.  Apparently, instead of failing outright on an unrecognized
 charmap name, localedef just substituted ASCII and plowed ahead.  Bad dog.

Not really. It's rather about the abusive owner of the dog, using '-c' to
force the dog to create the locale even when there are warnings:

  # localedef -i cs_CZ -f WIN-1250 cs_CZ.WIN-1250
  character map file `WIN-1250' not found: No such file or directory
  no output file produced because warnings were issued

In my defense, I've been using verbose mode, and that produces a lot of
warnings like 'non-symbolic character value should not be used' (which get
ignored in non-verbose mode), and thus I missed the one important one.

regards
Tomas




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-20 Thread Tomas Vondra
Hi!

First of all, thanks for the review - the insights and comments are
spot-on. More comments below.

On 20.12.2014 09:26, Ali Akbar wrote:
 
 2014-12-16 11:01 GMT+07:00 Ali Akbar the.ap...@gmail.com:
 
 
 
  2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com:
 
 
  2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:
  Just quickly looking through the patch.
  
  The patch does not implement the check for not creating a new
  context in initArrayResultArr. I think we should implement it
  there too, for consistency (and to prevent future problems).

You're right that initArrayResultArr was missing the code deciding
whether to create a subcontext or reuse the parent one, and the fix you
proposed (i.e. reusing code from initArrayResult) is IMHO the right one.

 Testing the performance with your query looks promising: the speedup is
 between 12% and 15%.
 
 Because I'm using a 32-bit system, setting work_mem to 1024GB failed:
 
 ERROR:  1073741824 is outside the valid range for parameter
 work_mem (64 .. 2097151)
 STATEMENT:  SET work_mem = '1024GB';
 
 psql:/media/truecrypt1/oss/postgresql/postgresql/../patches/array-agg.sql:20:
 ERROR:  1073741824 is outside the valid range for parameter
 work_mem (64 .. 2097151)

Yes, that's pretty clearly because of the 2GB limit on 32-bit systems.

 Maybe because of that, in the large-groups test the speedup is awesome:
 
 master: 16,819 ms
 
 with patch: 1,720 ms

Probably. It's difficult to say without explain plans or something, but
it's probably using a different plan (e.g. group aggregate).

 Looks like with master, postgres resorts to disk, but with the patch it
 fits in memory.

I'd bet that's not postgres but the system using swap (because postgres
allocates a lot of memory).

 Note: I haven't tested the large dataset.
 
 As expected, testing array_agg(anyarray), the performance is still the
 same, because the subcontext change isn't implemented there (test script
 modified from Tomas', attached).
 
 I implemented the subcontext checking in initArrayResultArr by changing
 the v3 patch like this:
 
 +++ b/src/backend/utils/adt/arrayfuncs.c
 @@ -4797,10 +4797,11 @@ initArrayResultArr(Oid array_type, Oid element_type,
                     MemoryContext rcontext, bool subcontext)
  {
     ArrayBuildStateArr *astate;
 -   MemoryContext arr_context;
 +   MemoryContext arr_context = rcontext;   /* by default use the parent ctx */
 
     /* Make a temporary context to hold all the junk */
 -   arr_context = AllocSetContextCreate(rcontext,
 +   if (subcontext)
 +       arr_context = AllocSetContextCreate(rcontext,
                                         "accumArrayResultArr",
                                         ALLOCSET_DEFAULT_MINSIZE,
                                         ALLOCSET_DEFAULT_INITSIZE,
 
 
 Testing the performance, it got the same 12%~15% speedup. Good. (patch attached)

Nice, and it's consistent with my measurements on scalar values.


 Looking at the modification in the accumArrayResult* functions, I'm not
 really comfortable with:
 
  1. Code that calls accumArrayResult* after explicitly calling
     initArrayResult* must always pass subcontext, but it has no
     effect.
  2. All existing code that calls accumArrayResult must be changed.
 
 Just an idea: why don't we minimize the change in the API like this:
 
  1. Add the parameter bool subcontext only in the initArrayResult*
     functions, not in accumArrayResult*.
  2. Code that does not want to create a subcontext must call
     initArrayResult* explicitly.
 
 Other code that calls accumArrayResult directly then only needs changes
 in the call to makeArrayResult* (with the release=true parameter). In
 places where we don't want to create a subcontext (as in
 array_agg_transfn), modify the code to use initArrayResult* before
 calling accumArrayResult*.
 
 What do you think?

I think it's an interesting idea.

I've been considering this before, when thinking about the best way to
keep the calls to the various methods consistent (eg. enforcing the use
of release=true only with subcontexts).

What I ended up doing (see the v4 patch attached) is that I

  (1) added 'private_cxt' flag to the ArrayBuildState[Arr] struct,
  tracking whether there's a private memory context

  (2) rolled back all the API changes, except for the initArray*
  methods (as you proposed)

This has the positive benefit that it allows checking consistency of the
calls - you can still do

   initArrayResult(..., subcontext=false)
   ...
   makeArrayResult(..., release=true)

but it won't reset the memory context, and with assert-enabled build it
will actually fail.
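The check itself can be a one-line assertion, equivalent to the
commented-out assert in the attached v5 patch:

    Assert(!release || astate->private_cxt);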

Another positive benefit is that this won't break the code unless it
uses the new API. This is a 

Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-20 Thread Tomas Vondra
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).

kind regards
Tomas Vondra
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index d9faf20..9c97755 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -262,7 +262,7 @@ ExecScanSubPlan(SubPlanState *node,
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-	CurrentMemoryContext);
+	CurrentMemoryContext, true);
 
 	/*
 	 * We are probably in a short-lived expression-evaluation context. Switch
@@ -964,7 +964,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-	CurrentMemoryContext);
+	CurrentMemoryContext, true);
 
 	/*
 	 * Must switch to per-query memory context.
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index 50ea4d2..f434fdd 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -498,8 +498,13 @@ array_agg_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_transfn called in non-aggregate context");
 	}
 
-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildState *) PG_GETARG_POINTER(0);
+	if (PG_ARGISNULL(0))
+		state = initArrayResult(arg1_typeid, aggcontext, false);
+	else
+		state = (ArrayBuildState *) PG_GETARG_POINTER(0);
+
 	elem = PG_ARGISNULL(1) ? (Datum) 0 : PG_GETARG_DATUM(1);
+
 	state = accumArrayResult(state,
 			 elem,
 			 PG_ARGISNULL(1),
@@ -573,7 +578,22 @@ array_agg_array_transfn(PG_FUNCTION_ARGS)
 		elog(ERROR, "array_agg_array_transfn called in non-aggregate context");
 	}
 
-	state = PG_ARGISNULL(0) ? NULL : (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
+	if (PG_ARGISNULL(0))
+	{
+		Oid			element_type = get_element_type(arg1_typeid);
+
+		if (!OidIsValid(element_type))
+			ereport(ERROR,
+					(errcode(ERRCODE_DATATYPE_MISMATCH),
+					 errmsg("data type %s is not an array type",
+							format_type_be(arg1_typeid))));
+
+		state = initArrayResultArr(arg1_typeid, element_type, aggcontext, false);
+	}
+	else
+		state = (ArrayBuildStateArr *) PG_GETARG_POINTER(0);
+
 	state = accumArrayResultArr(state,
 PG_GETARG_DATUM(1),
 PG_ARGISNULL(1),
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index 933c6b0..7a14a71 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -4617,22 +4617,24 @@ array_insert_slice(ArrayType *destArray,
  * were no elements.  This is preferred if an empty array is what you want.
  */
 ArrayBuildState *
-initArrayResult(Oid element_type, MemoryContext rcontext)
+initArrayResult(Oid element_type, MemoryContext rcontext, bool subcontext)
 {
 	ArrayBuildState *astate;
-	MemoryContext arr_context;
+	MemoryContext arr_context = rcontext;	/* by default use the parent ctx */
 
 	/* Make a temporary context to hold all the junk */
-	arr_context = AllocSetContextCreate(rcontext,
-		"accumArrayResult",
-		ALLOCSET_DEFAULT_MINSIZE,
-		ALLOCSET_DEFAULT_INITSIZE,
-		ALLOCSET_DEFAULT_MAXSIZE);
+	if (subcontext)
+		arr_context = AllocSetContextCreate(rcontext,
+			"accumArrayResult",
+			ALLOCSET_DEFAULT_MINSIZE,
+			ALLOCSET_DEFAULT_INITSIZE,
+			ALLOCSET_DEFAULT_MAXSIZE);
 
 	astate = (ArrayBuildState *)
 		MemoryContextAlloc(arr_context, sizeof(ArrayBuildState));
 	astate->mcontext = arr_context;
-	astate->alen = 64;			/* arbitrary starting array size */
+	astate->private_cxt = subcontext;
+	astate->alen = 8;			/* arbitrary starting array size */
 	astate->dvalues = (Datum *)
 		MemoryContextAlloc(arr_context, astate->alen * sizeof(Datum));
 	astate->dnulls = (bool *)
@@ -4666,7 +4668,7 @@ accumArrayResult(ArrayBuildState *astate,
 	if (astate == NULL)
 	{
 		/* First time through --- initialize */
-		astate = initArrayResult(element_type, rcontext);
+		astate = initArrayResult(element_type, rcontext, true);
 	}
 	else
 	{
@@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
 
 	MemoryContextSwitchTo(oldcontext);
 
+	/* we can only release the context if it's a private one. */
+	// Assert(!(release && !astate->private_cxt));
+
 	/* Clean up all the junk */
 	if (release)
 		MemoryContextDelete(astate-mcontext);
@@ -4791,22 +4796,25 @@ makeMdArrayResult(ArrayBuildState *astate,
  *	rcontext is where to keep working state
  */
 ArrayBuildStateArr *
-initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext)
+initArrayResultArr(Oid array_type, Oid element_type, MemoryContext rcontext,
+   bool subcontext)
 {
 	ArrayBuildStateArr *astate;
-	MemoryContext arr_context;
+	MemoryContext arr_context = rcontext;   /* by 

Re: hash_create API changes (was Re: [HACKERS] speedup tidbitmap patch: hash BlockNumber)

2014-12-20 Thread Jim Nasby

On 12/20/14, 11:51 AM, Tom Lane wrote:

Andres Freund and...@2ndquadrant.com writes:

On 2014-12-19 22:03:55 -0600, Jim Nasby wrote:

What I am thinking is not using all of those fields in their raw form to 
calculate the hash value. IE: something analogous to:
hash_any(SharedBufHash, ((rot(forkNum, 2) | dbNode) ^ relNode) << 32 | blockNum)

perhaps that actual code wouldn't work, but I don't see why we couldn't do 
something similar... am I missing something?



I don't think that'd improve anything. Jenkins' hash does have quite good
mixing properties; I don't believe that the above would improve the
quality of the hash.


I think what Jim is suggesting is to intentionally degrade the quality of
the hash in order to let it be calculated a tad faster.  We could do that
but I doubt it would be a win, especially in systems with lots of buffers.
IIRC, when we put in Jenkins hashing to replace the older homebrew hash
function, it improved performance even though the hash itself was slower.


Right. Now that you mention it, I vaguely recall the discussions about changing 
the hash function to reduce collisions.

I'll still take a look at fast-hash, but it's looking like there may not be 
anything we can do here unless we change how we identify relation files 
(combining dbid, tablespace id, fork number and file id, at least for 
searching). If we had 64-bit hash support then maybe that'd be a significant 
win, since you wouldn't need to hash at all. But that certainly doesn't seem to 
be low-hanging fruit to me...
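
To make the two options concrete, here is a self-contained sketch; the
cheap mixing formula is purely illustrative, not a proposal, and only the
BufferTag/RelFileNode field names are taken from the headers:

#include "postgres.h"
#include "access/hash.h"			/* hash_any() */
#include "storage/buf_internals.h"	/* BufferTag */

/* Status quo: full Jenkins hash over the entire 20-byte tag. */
static uint32
buftag_hash_full(const BufferTag *tag)
{
	return DatumGetUInt32(hash_any((const unsigned char *) tag,
								   sizeof(BufferTag)));
}

/* The "pre-mix the fields, hash less" idea sketched above. */
static uint32
buftag_hash_cheap(const BufferTag *tag)
{
	uint64		key;

	/* fold relation identity into the high word, block number into the low */
	key = (uint64) (tag->rnode.spcNode ^ tag->rnode.dbNode ^
					tag->rnode.relNode ^ ((uint32) tag->forkNum << 30)) << 32;
	key |= tag->blockNum;

	/* cheap final mix so both words influence the bucket number */
	return (uint32) key ^ (uint32) (key >> 32);
}

As Tom notes, the cheap variant trades hash quality for speed, and past
experience suggests the extra collisions cost more than the hashing saves.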
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST kNN search queue (Re: KNN-GiST with recheck)

2014-12-20 Thread Peter Geoghegan
On Wed, Dec 17, 2014 at 6:07 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 How about adding a src/backend/lib/README for that, per attached?

I took a quick look at this. Some observations:

* I like the idea of adding a .../lib README. However, the README
fails to note that ilist.c implements an *integrated* list, unlike the
much more prevalent cell-based List structure. It should note that,
since that's the whole point of ilist.c.

* pairingheap_remove() is technically dead code. It still makes sense
that you'd have it in this patch, but I think there's an argument for
not including it at all on the theory that if you need to use it you
should use a different data structure. After all, the actual
(non-amortized) complexity of that operation is O(n) [1], and if
remove operations are infrequent as we might expect, that might be the
more important consideration. As long as you are including
pairingheap_remove(), though, why is the local variable prev_ptr a
pointer to a pointer to a pairingheap_node, rather than just a pointer
to a pairingheap_node? (See the generic unlink sketch after this list.)

* Similarly, the function-like macro pairingheap_reset() doesn't seem
to pull its weight. Why does it exist alongside pairingheap_free()?
I'm not seeing a need to re-use a heap like that.

* "Assert(!pairingheap_is_empty(heap))" appears in a few places.
You're basically asserting that a pointer isn't null, often
immediately before dereferencing the pointer. This seems to be of
questionable value.

* I think that the indentation of code could use some tweaking.

* More comments, please. In particular, comment the struct fields in
pairingheap_node. There are various blocks of code that could use at
least an additional terse comment, too.

* You talked about tuplesort.c integration. In order for that to
happen, I think the comparator logic should know less about min-heaps.
This should formally be a max-heap, with the ability to provide
customizations only encapsulated in the comparator (like inverting the
comparison logic to get a min-heap, or like custom NULLs first/last
behavior). So IMV this comment should be more generic/anticipatory:

+ /*
+  * For a max-heap, the comparator must return <0 iff a < b, 0 iff a == b,
+  * and >0 iff a > b.  For a min-heap, the conditions are reversed.
+  */
+ typedef int (*pairingheap_comparator) (const pairingheap_node *a,
const pairingheap_node *b, void *arg);

I think the functions should be called pairing_max_heap* for this
reason, too. Although that isn't consistent with binaryheap.c, so I
guess this whole argument is a non-starter. (A reversed-comparator sketch
follows this list.)

* We should just move rbtree.c to .../lib. We're not using CVS anymore
-- the history will be magically preserved.
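
To illustrate the prev_ptr question generically (plain C, not
pairingheap.c's actual fields): with a pointer-to-pointer, unlinking from a
singly linked list needs no special case for the first element, because
*prev_ptr is simply "the link to rewrite", whether it lives in the list
head or in the previous node:

typedef struct Node
{
	struct Node *next;
} Node;

static void
remove_node(Node **head, Node *target)
{
	Node	  **prev_ptr = head;

	while (*prev_ptr != target)
		prev_ptr = &(*prev_ptr)->next;

	*prev_ptr = target->next;	/* same statement for head and interior nodes */
}

And a hypothetical reversed comparator built on the typedef quoted above
(ItemNode and item_cmp are illustrative names, not from the patch):
returning the negated ordering turns the max-heap into a min-heap, which is
how a kNN search would keep the nearest item on top:

typedef struct ItemNode
{
	pairingheap_node ph_node;	/* embedded heap link; must come first */
	double		distance;		/* sort key */
} ItemNode;

static int
item_cmp(const pairingheap_node *a, const pairingheap_node *b, void *arg)
{
	const ItemNode *ia = (const ItemNode *) a;
	const ItemNode *ib = (const ItemNode *) b;

	/* reversed: report the smaller distance as the "greater" element */
	if (ia->distance < ib->distance)
		return 1;
	if (ia->distance > ib->distance)
		return -1;
	return 0;
}
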

Anyway, to get to the heart of the matter: in general, I think the
argument for the patch is sound. It's not a stellar improvement, but
it's worthwhile. That's all I have for now...

[1] https://www.cise.ufl.edu/~sahni/dsaac/enrich/c13/pairing.htm
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-20 Thread Peter Geoghegan
On Sat, Dec 20, 2014 at 2:16 AM, Martijn van Oosterhout
klep...@svana.org wrote:
 What I find curious about the opclass thing is: when do you ever have
 an opclass that has a different idea of equality than the default
 opclass for the type? In other words, when is B-Tree strategy number 3
not actually '=' in practice, for *any* B-Tree opclass? Certainly, no
shipped opclass appears to work that way - the shipped non-default
B-Tree opclasses only serve to provide alternative notions of sort
order, and never of equals.

 Well, in theory you could build a case-insensitive index on a text
 column. You could argue that the column should have been defined as
 citext in the first place, but it might not for various reasons.

That generally works in other systems by having a case-insensitive
collation. I don't know if that implies that non-bitwise-identical
items can be equal according to the equals operator in those other
systems. There aren't too many examples of that happening in general
(I can only think of citext and numeric offhand), presumably because
it necessitates a normalization process (such as lower-casing in the
case of citext) within the hash opclass support function 1, a process
best avoided.

citext is an interesting precedent that supports my argument above,
because citext demonstrates that we preferred to create a new type
rather than a new non-default opclass (with a non-'=' equals
operator) when time came to introduce a new concept of equals (and
not merely a new, alternative sort order). Again, this is surely due
to the system dependency on the default B-Tree opclass for the
purposes of GROUP BY and DISTINCT, whose behavior sort ordering
doesn't necessarily enter into at all.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-20 Thread Peter Geoghegan
On Thu, Dec 18, 2014 at 9:31 AM, Peter Geoghegan p...@heroku.com wrote:
 So I think there needs to be some kind of logic to de-recognize the table
 alias foo.

 Once I rewrote the query to use TARGET and EXCLUDED correctly, I've put this
 through an adaptation of my usual torture test, and it ran fine until
 wraparound shutdown.  I'll poke at it more later.

 Oops. I agree with your diagnosis, and will circle around to fix that
 bug in the next revision

Attached patch fixes the bug. I'm not delighted about the idea of
cutting off parent parse state (the parse state of the insert) within
transformUpdateStmt() only once we've used the parent state to
establish that this is a speculative/auxiliary update, but it's
probably the path of least resistance here.

When this is rolled into the next version, there will be a testcase.

Thanks
-- 
Peter Geoghegan
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index e037c0f..8c25e82 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -2006,6 +2006,16 @@ transformUpdateStmt(ParseState *pstate, UpdateStmt *stmt)
 		qry->hasModifyingCTE = pstate->p_hasModifyingCTE;
 	}
 
+	/*
+	 * Having established that this is a speculative insertion's auxiliary
+	 * update, do not allow the query to access parent parse state.  This is a
+	 * simple way of making parent RTEs invisible -- otherwise, the parent's
+	 * target could spuriously become visible were the query to reference the
+	 * original target table name rather than the TARGET.* alias.
+	 */
+	if (pstate->p_is_speculative)
+		pstate->parentParseState = NULL;
+
 	qry->resultRelation = setTargetTable(pstate, stmt->relation,
 								  interpretInhOption(stmt->relation->inhOpt),
 		 true,

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-12-20 Thread Adam Brightwell
Alvaro and Stephen,

I propose this patch on top of Adam's v5.  Also included is a full patch
 against master.


I have attached an updated patch for review
(role-attribute-bitmask-v7.patch).

This patch incorporates the 'v5a' patch proposed by Alvaro, an input
validation (Assert) check in 'check_role_attribute', and the documentation
updates requested by Stephen.
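
For reference, a sketch of the kind of check and Assert being described
(the RoleAttr typedef and ROLE_ATTR_* values here are illustrative, not the
patch's actual definitions): rolattr is a bigint bitmask, and the Assert
validates that the caller tests exactly one attribute bit at a time.

#include "postgres.h"

typedef int64 RoleAttr;

#define ROLE_ATTR_SUPERUSER		((RoleAttr) 1 << 0)
#define ROLE_ATTR_INHERIT		((RoleAttr) 1 << 1)
#define ROLE_ATTR_CREATEROLE	((RoleAttr) 1 << 2)

static bool
check_role_attribute(RoleAttr attributes, RoleAttr attribute)
{
	/* exactly one bit may be tested: nonzero and a power of two */
	Assert(attribute != 0 && (attribute & (attribute - 1)) == 0);

	return (attributes & attribute) != 0;
}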

Thanks,
Adam


-- 
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index 9ceb96b..9470916
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***************
*** 1391,1479 ****
   </row>
 
   <row>
!   <entry><structfield>rolsuper</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>Role has superuser privileges</entry>
   </row>
 
   <row>
!   <entry><structfield>rolinherit</structfield></entry>
!   <entry><type>bool</type></entry>
!   <entry>Role automatically inherits privileges of roles it is a
!    member of</entry>
   </row>
 
   <row>
!   <entry><structfield>rolcreaterole</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>Role can create more roles</entry>
   </row>
 
   <row>
!   <entry><structfield>rolcreatedb</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>Role can create databases</entry>
   </row>
 
   <row>
!   <entry><structfield>rolcatupdate</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>
     Role can update system catalogs directly.  (Even a superuser cannot do
     this unless this column is true)
    </entry>
   </row>
 
   <row>
!   <entry><structfield>rolcanlogin</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>
     Role can log in. That is, this role can be given as the initial
     session authorization identifier
    </entry>
   </row>
 
   <row>
!   <entry><structfield>rolreplication</structfield></entry>
!   <entry><type>bool</type></entry>
    <entry>
     Role is a replication role. That is, this role can initiate streaming
     replication (see <xref linkend="streaming-replication">) and set/unset
     the system backup mode using <function>pg_start_backup</> and
     <function>pg_stop_backup</>
    </entry>
   </row>
 
   <row>
!   <entry><structfield>rolconnlimit</structfield></entry>
!   <entry><type>int4</type></entry>
!   <entry>
!    For roles that can log in, this sets maximum number of concurrent
!    connections this role can make.  -1 means no limit.
!   </entry>
!  </row>
! 
!  <row>
!   <entry><structfield>rolpassword</structfield></entry>
!   <entry><type>text</type></entry>
    <entry>
!    Password (possibly encrypted); null if none.  If the password
!    is encrypted, this column will begin with the string <literal>md5</>
!    followed by a 32-character hexadecimal MD5 hash.  The MD5 hash
!    will be of the user's password concatenated to their user name.
!    For example, if user <literal>joe</> has password <literal>xyzzy</>,
!    <productname>PostgreSQL</> will store the md5 hash of
!    <literal>xyzzyjoe</>.  A password that does not follow that
!    format is assumed to be unencrypted.
    </entry>
   </row>
 
-  <row>
-   <entry><structfield>rolvaliduntil</structfield></entry>
-   <entry><type>timestamptz</type></entry>
-   <entry>Password expiry time (only used for password authentication);
-    null if no expiration</entry>
-  </row>
  </tbody>
 </tgroup>
</table>
--- 1391,1524 ----
   </row>
 
   <row>
!   <entry><structfield>rolattr</structfield></entry>
!   <entry><type>bigint</type></entry>
!   <entry>
!    Role attributes; see <xref linkend="catalog-rolattr-bitmap-table"> and
!    <xref linkend="sql-createrole"> for details
!   </entry>
!  </row>
! 
!  <row>
!   <entry><structfield>rolconnlimit</structfield></entry>
!   <entry><type>int4</type></entry>
!   <entry>
!    For roles that can log in, this sets maximum number of concurrent
!    connections this role can make.  -1 means no limit.
!   </entry>
!  </row>
! 
!  <row>
!   <entry><structfield>rolpassword</structfield></entry>
!   <entry><type>text</type></entry>
!   <entry>
!    Password (possibly encrypted); null if none.  If the password
!    is encrypted, this column will begin with the string <literal>md5</>
!    followed by a 32-character hexadecimal MD5 hash.  The MD5 hash
!    will be of the user's password concatenated to their user name.
!    For example, if user <literal>joe</> has password <literal>xyzzy</>,
!    <productname>PostgreSQL</> will store the md5 hash of
!    <literal>xyzzyjoe</>.  A password that does not follow that
!    format is assumed to be unencrypted.
!   </entry>
!  </row>
! 
!  <row>
!   <entry><structfield>rolvaliduntil</structfield></entry>
!   <entry><type>timestamptz</type></entry>
!   <entry>Password expiry time (only used for password authentication);
!    null if no 

Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-20 Thread Peter Geoghegan
On Tue, Dec 16, 2014 at 12:18 PM, Stephen Frost sfr...@snowman.net wrote:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
 So my two cents is that when considering a qualified name, this patch
 should take levenshtein distance across the two components equally.
 There's no good reason to suppose that typos will attack one name
 component more (nor less) than the other.

 Agreed (since it seems like folks are curious for the opinion's of
 mostly bystanders).

 +1 to the above for my part.

Okay, then. Attached patch implements this scheme. It is identical to
the previous revision, except that iff there was an alias specified
and that alias does not match the correct name (alias/table name) of
the RTE currently under consideration, we charge the distance between
the differing aliases rather than a fixed distance of 1.

-- 
Peter Geoghegan
From 91087191ba49e5fed7fdfa43f98deb009c2b3e0e Mon Sep 17 00:00:00 2001
From: Peter Geoghegan p...@heroku.com
Date: Wed, 12 Nov 2014 15:31:37 -0800
Subject: [PATCH] Levenshtein distance column HINT

Add a new HINT -- a guess as to what column the user might have intended
to reference, to be shown in various contexts where an
ERRCODE_UNDEFINED_COLUMN error is raised.  The user will see this HINT
when he or she fat-fingers a column reference in an ad-hoc SQL query, or
incorrectly pluralizes or fails to pluralize a column reference, or
incorrectly omits or includes an underscore or other punctuation
character.

The HINT suggests a column in the range table with the lowest
Levenshtein distance, or the tied-for-best pair of matching columns in
the event of there being exactly two equally likely candidates (these
may come from multiple RTEs, or the same RTE).  Limiting to two the
number of cases where multiple equally likely suggestions are all
offered at once (i.e.  giving no hint when the number of equally likely
candidates exceeds two) is a measure against suggestions that are of low
quality in an absolute sense.

A further, final measure is taken against suggestions that are of low
absolute quality:  If the distance exceeds a normalized distance
threshold, no suggestion is given.
---
 src/backend/parser/parse_expr.c   |   9 +-
 src/backend/parser/parse_func.c   |   2 +-
 src/backend/parser/parse_relation.c   | 325 +++---
 src/backend/utils/adt/levenshtein.c   |   9 +
 src/include/parser/parse_relation.h   |  20 +-
 src/test/regress/expected/alter_table.out |   3 +
 src/test/regress/expected/join.out|  38 
 src/test/regress/sql/join.sql |  24 +++
 8 files changed, 392 insertions(+), 38 deletions(-)

diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index 4a8aaf6..a77a3a0 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -621,7 +621,8 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref)
 colname = strVal(field2);
 
 /* Try to identify as a column of the RTE */
-				node = scanRTEForColumn(pstate, rte, colname, cref->location);
+				node = scanRTEForColumn(pstate, rte, colname, cref->location,
+										NULL);
 if (node == NULL)
 {
 	/* Try it as a function call on the whole row */
@@ -666,7 +667,8 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref)
 colname = strVal(field3);
 
 /* Try to identify as a column of the RTE */
-				node = scanRTEForColumn(pstate, rte, colname, cref->location);
+				node = scanRTEForColumn(pstate, rte, colname, cref->location,
+										NULL);
 if (node == NULL)
 {
 	/* Try it as a function call on the whole row */
@@ -724,7 +726,8 @@ transformColumnRef(ParseState *pstate, ColumnRef *cref)
 colname = strVal(field4);
 
 /* Try to identify as a column of the RTE */
-				node = scanRTEForColumn(pstate, rte, colname, cref->location);
+				node = scanRTEForColumn(pstate, rte, colname, cref->location,
+										NULL);
 if (node == NULL)
 {
 	/* Try it as a function call on the whole row */
diff --git a/src/backend/parser/parse_func.c b/src/backend/parser/parse_func.c
index 9ebd3fd..472e15e 100644
--- a/src/backend/parser/parse_func.c
+++ b/src/backend/parser/parse_func.c
@@ -1779,7 +1779,7 @@ ParseComplexProjection(ParseState *pstate, char *funcname, Node *first_arg,
 									 ((Var *) first_arg)->varno,
 									 ((Var *) first_arg)->varlevelsup);
 		/* Return a Var if funcname matches a column, else NULL */
-		return scanRTEForColumn(pstate, rte, funcname, location);
+		return scanRTEForColumn(pstate, rte, funcname, location, NULL);
 	}
 
 	/*
diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c
index 478584d..e6adee1 100644
--- a/src/backend/parser/parse_relation.c
+++ b/src/backend/parser/parse_relation.c
@@ -15,6 +15,7 @@
 #include "postgres.h"
 
 #include <ctype.h>
+#include <limits.h>
 
 #include "access/htup_details.h"
 #include "access/sysattr.h"
@@ -520,6 +521,69 @@ GetCTEForRTE(ParseState *pstate, RangeTblEntry *rte, int 

Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-20 Thread Peter Geoghegan
On Sat, Dec 20, 2014 at 3:17 PM, Peter Geoghegan p...@heroku.com wrote:
 Attached patch implements this scheme.

I had another thought: NAMEDATALEN + 1 is a better representation of
infinity for matching purposes than INT_MAX. I probably should have
made that change, too. It would then not have been necessary to
#include <limits.h>. I think that this is a useful
belt-and-suspenders precaution against integer overflow. It almost
certainly won't matter, since it's very unlikely that the best match
within an RTE will end up being a dropped column, but we might as well
do it that way (Levenshtein distance is costed in multiples of code
point changes, but the maximum density is 1 byte per codepoint).
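
To make the candidate-tracking rule concrete, a sketch under the scheme
described in the patch message (the struct and function names here are
illustrative; NAMEDATALEN comes from pg_config_manual.h):

#include "postgres.h"

#define MATCH_INFINITY	(NAMEDATALEN + 1)	/* instead of INT_MAX */

typedef struct FuzzyMatchState
{
	int			distance;	/* best distance so far; MATCH_INFINITY initially */
	int			nmatches;	/* how many columns tie at that distance */
	const char *first;		/* best candidate */
	const char *second;		/* runner-up when exactly two are tied */
} FuzzyMatchState;

static void
consider_column(FuzzyMatchState *state, const char *colname, int distance)
{
	if (distance < state->distance)
	{
		/* a new outright winner forgets any previous tie */
		state->distance = distance;
		state->nmatches = 1;
		state->first = colname;
		state->second = NULL;
	}
	else if (distance == state->distance)
	{
		state->nmatches++;
		if (state->nmatches == 2)
			state->second = colname;
	}
}

The caller then hints only when nmatches is 1 or 2 and the best distance
passes the normalized-quality threshold; three or more ties, or a distance
still at MATCH_INFINITY, yield no hint.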

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-20 Thread Alvaro Herrera
Tomas Vondra wrote:
 Attached is v5 of the patch, fixing an error with releasing a shared
 memory context (invalid flag values in a few calls).

The functions that gain a new argument should get their comment updated,
to explain what the new argument is for.

Also, what is it with this hunk?

 @@ -4768,6 +4770,9 @@ makeMdArrayResult(ArrayBuildState *astate,
  
   MemoryContextSwitchTo(oldcontext);
  
 + /* we can only release the context if it's a private one. */
 + // Assert(! (release && !astate->private_cxt));
 +
   /* Clean up all the junk */
   if (release)
  	MemoryContextDelete(astate->mcontext);

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel Seq Scan

2014-12-20 Thread Amit Kapila
On Fri, Dec 19, 2014 at 6:21 PM, Stephen Frost sfr...@snowman.net wrote:

 Amit,

 * Amit Kapila (amit.kapil...@gmail.com) wrote:
  1. Parallel workers help a lot when there is an expensive qualification
  to be evaluated; the more expensive the qualification, the better the
  results.

 I'd certainly hope so. ;)

  2. It works well for low-selectivity quals, and as the selectivity
  increases, the benefit tends to go down due to the additional tuple
  communication cost between workers and the master backend.

 I'm a bit sad to hear that the communication between workers and the
 master backend is already being a bottleneck.  Now, that said, the box
 you're playing with looks to be pretty beefy and therefore the i/o
 subsystem might be particularly good, but generally speaking, it's a lot
 faster to move data in memory than it is to pull it off disk, and so I
 wouldn't expect the tuple communication between processes to really be
 the bottleneck...


The main reason for the higher cost of tuple communication is that, at
this moment, I have used an approach to passing the tuples that is
comparatively less error prone and can be used with the existing FE/BE
protocol.

To explain in brief, what happens here is that currently the worker
backend gets the tuple from the page, deforms it, and sends the values to
the master backend via the message queue; the master backend then forms
the tuple and sends it to the upper layer, which deforms it again via
slot_getallattrs(slot) before sending it to the frontend.  The benefit of
this approach is that it works with the current protocol message ('D')
and with our current executor code.

Now there could be couple of ways with which we can reduce the tuple
communication overhead.

a. Instead of passing the value array, just pass the tuple id, but retain
the buffer pin till the master backend reads the tuple based on that id.
This has the side effect that we have to retain the buffer pin for a
longer period of time, but again that might not be a problem in
real-world usage of parallel query.

b. Instead of passing the value array, pass the tuple directly, which the
master backend could propagate to the upper layer as-is; or otherwise
change some code in the master backend so that it can propagate the tuple
array received via the shared memory queue directly to the frontend.
Basically, save the one extra cycle of forming/deforming the tuple.

Both of these need a new message type, and handling for it in the
executor code.

Having said that, I think we can try to optimize this in multiple
ways; however, each needs additional mechanism and changes in the
executor code, which is error prone and doesn't seem important at this
stage, where we want the basic feature to work.


  3. After certain point, increasing having more number of workers won't
  help and rather have negative impact, refer Test-4.

 Yes, I see that too and it's also interesting- have you been able to
 identify why?  What is the overhead (specifically) which is causing
 that?


I think there are mainly two things that can lead to a benefit
from employing parallel workers:
a. Better use of available I/O bandwidth
b. Better use of available CPUs by doing expression evaluation
in multiple workers.

The simple theory here is that there has to be a certain limit
(in terms of the number of parallel workers) up to which there can
be a benefit due to both of the above points, and after which there
will be overhead (setting up many workers even though they
are not required; additional waiting by the master backend
for non-helping workers to finish their work; and, if there
are not enough CPUs available, other effects as well, such as
overusing the I/O channel, which might degrade performance
rather than improve it).

In the above tests, it seems to me that the maximum benefit due to
'a' is realized up to 4~8 workers, and the maximum benefit due to
'b' depends upon the complexity (time to evaluate) of the expression.
That is the reason why we can see benefits in Test-1 ~ Test-3 above
8 parallel workers as well, whereas for Test-4 to Test-6 the benefit
peaks at 8 workers, after which there is either no improvement or
degradation, due to one or more of the reasons explained in the
previous paragraph.

I think an important point, which you mentioned as well, is
that there should be a reasonably good cost model that can
account for some or all of these things, so that by using parallel
query a user can achieve the benefit it provides and won't have
to pay the cost where there is little or no benefit.
I am not sure that in the first cut we can come up with a highly
robust cost model, but it should not be so weak that most
of the time the user has to find the right tuning based on the
parameters we are going to add.  Based on my understanding and by
referring to existing literature, I will try to come up with the
cost model, and then we can have a discussion, if required, about
whether it is good enough for the first cut or not.
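
As a purely illustrative first cut (none of these names are existing
PostgreSQL costs or GUCs), the trade-offs above could be modeled as:

static double
parallel_seqscan_cost(double io_cost,			/* read the whole relation */
					  double qual_cost,			/* evaluate quals on all tuples */
					  double ntuples,			/* tuples scanned */
					  double selectivity,		/* fraction passing the qual */
					  double startup_per_worker,	/* per-worker setup cost */
					  double cost_per_sent_tuple,	/* queue transfer cost */
					  int nworkers)
{
	/* I/O and qual evaluation are shared by the master and its workers... */
	double		shared_work = (io_cost + qual_cost) / (nworkers + 1);

	/* ...but every qualifying tuple pays to cross the shared-memory queue */
	double		communication = ntuples * selectivity * cost_per_sent_tuple;

	return nworkers * startup_per_worker + shared_work + communication;
}

This reproduces the observed shape: expensive quals and low selectivity
favor more workers, while the per-worker startup term eventually dominates
and the curve turns over.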

  I think, as discussed previously, we need to introduce 2 additional cost
  variables (parallel_startup_cost,