Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-09-15 Thread samthakur74
 You have added this email to the commit fest, but it contains no patch.

Please add the email with the actual patch.

 I hope it's attached now!

  Maybe the author should be
 given a chance to update the patches, though, because they are quite
 old.

 I did connect with Daniel and he did have some improvement ideas. I am not
sure when they could be implemented. Since we have an interest in the
current version of the patch, which needed documentation, I tried to
complete that.
Thank you,
Sameer


pg_stat_statements-identification-v4.patch.gz (8K) 
http://postgresql.1045698.n5.nabble.com/attachment/5770937/0/pg_stat_statements-identification-v4.patch.gz





Re: [HACKERS] Minor inheritance/check bug: Inconsistent behavior

2013-09-15 Thread Amit Kapila
Bruce Momjian wrote:
On Sun, Jun 30, 2013 at 06:57:10AM +, Amit kapila wrote:
  I have done the initial analysis and prepared a patch, don't know if
  anything more I can do until
  someone can give any suggestions to further proceed on this bug.

  So, I guess we never figured this out.

 I can submit this bug-fix for the next commitfest if there is no objection to
 doing so.
 What is your opinion?

 Yes, good idea.

I have rebased the patch against head and added a test case to validate it.
I will upload this patch to the commitfest.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


sys_col_constr_v1.patch
Description: Binary data



Re: [HACKERS] GIN improvements part 1: additional information

2013-09-15 Thread Alexander Korotkov
On Sat, Jun 29, 2013 at 12:56 PM, Heikki Linnakangas 
hlinnakan...@vmware.com wrote:

 There's a few open questions:

 1. How are we going to handle pg_upgrade? It would be nice to be able to
 read the old page format, or convert on-the-fly. OTOH, if it gets too
 complicated, might not be worth it. The indexes are much smaller with the
 patch, so anyone using GIN probably wants to rebuild them anyway, sooner or
 later. Still, I'd like to give it a shot.

 2. The patch introduces a small fixed 32-entry index into the packed
 items. Is that an optimal number?

 3. I'd like to see some performance testing of insertions, deletions, and
 vacuum. I suspect that maintaining the 32-entry index might be fairly
 expensive, as it's rewritten on every update to a leaf page.


It appears that maintaining the 32-entry index is really expensive, because it
required re-decoding the whole page. This issue is fixed in the attached version
of the patch by introducing incremental updating of that index. Benchmarks will
be posted later today.
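
[Editorial note: to make the structure under discussion concrete, here is a rough
sketch of the kind of per-page "skip index" being described. Every name and the
exact layout below are invented for illustration; nothing here is taken from
gin-packed-postinglists-3.patch.]

/* Illustration only -- not from the patch. */
#include "postgres.h"
#include "storage/itemptr.h"

#define LEAF_SKIP_ENTRIES 32

typedef struct LeafSkipEntry
{
	ItemPointerData firstItem;	/* first TID of one packed segment */
	uint16		segOffset;		/* byte offset of that segment's data */
} LeafSkipEntry;

/*
 * Binary-search the (up to) 32 skip entries for the segment that could
 * contain "item".  A lookup then decodes only that segment, and an
 * incremental insertion only re-encodes that segment and adjusts the
 * later offsets, instead of re-decoding the whole page.
 */
static int
leaf_find_segment(LeafSkipEntry *skip, int nentries, ItemPointer item)
{
	int			lo = 0,
				hi = nentries - 1;

	while (lo < hi)
	{
		int			mid = (lo + hi + 1) / 2;

		if (ItemPointerCompare(&skip[mid].firstItem, item) <= 0)
			lo = mid;
		else
			hi = mid - 1;
	}
	return lo;
}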

--
With best regards,
Alexander Korotkov.


gin-packed-postinglists-3.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-09-15 Thread Peter Geoghegan
On Sat, Sep 14, 2013 at 1:57 AM, Greg Stark st...@mit.edu wrote:
 It seems to me that the nature of the problem is that there will unavoidably
 be a nexus between the two parts of the code here. We can try to isolate it
 as much as possible but we're going to need a bit of a compromise.

Exactly. That's why all the proposals with the exception of this one
have to date involved unacceptable bloating - that's how they try and
span the nexus.

I'll find it very difficult to accept any implementation that is going
to bloat things even worse than our upsert looping example. The only
advantage of such an implementation over the upsert example is that
it'll avoid burning through subxacts. The main reason I don't want to
take that approach is that I know it won't be accepted, because it's a
disaster. That's why the people that proposed this in various forms
down through the years haven't gone and implemented it themselves. I
do not accept that all of this is like the general situation with row
locks. I do not think that the big costs of having many dead
duplicates in a unique index can be overlooked (or perhaps the cost of
cleaning them up eagerly, which is something I'd also expect to work
very badly). That's something that's going to reverberate all over the
place. Imagine a simple, innocent looking pattern that resulted in
there being unique indexes that became hugely bloated. It's not hard.

What I will concede (what I have conceded, actually) is that it would
be better if the locks were more granular. Now, I'm not so much
concerned about concurrent inserters inserting values that just so
happen to be values that were locked. It's more the case that I'm
worried about inserters blocking on other values that are incidentally
locked despite not already existing, that would go on the locked page
or maybe a later page. In particular, I'm concerned about the impact
on SERIAL primary key columns. Not exactly an uncommon case (though
one I'd already thought to optimize by locking last).

What I think might actually work acceptably is if we were to create an
SLRU that kept track of value-locks per buffer. The challenge there
would be to have regular unique index inserters care about them, while
having little to no impact on their regular performance. This might be
possible by having them check the buffer for external value locks in
the SLRU immediately after exclusive locking the buffer - usually that
only has to happen once per index tuple insertion (assuming no
duplicates necessitate retry). If they find their value in the SLRU,
they do something like unlock and block on the other xact and restart.
Now, obviously a lot of the details would have to be worked out, but
it seems possible.
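
[Editorial note: a rough sketch of the flow described above, to make it concrete.
ValueLockSlruLookup() is invented here; LockBuffer(), BufferGetBlockNumber() and
XactLockTableWait() are existing primitives. This is not an actual patch.]

/* Hypothetical helper -- ValueLockSlruLookup() does not exist anywhere. */
static bool
check_external_value_locks(Relation rel, Buffer buf, ScanKey itup_scankey)
{
	TransactionId locker;

	/* caller has just taken BUFFER_LOCK_EXCLUSIVE on buf */
	if (!ValueLockSlruLookup(rel, BufferGetBlockNumber(buf),
							 itup_scankey, &locker))
		return true;			/* common case: no external value locks here */

	/* conflict: release our lock, wait on the locking xact, have caller retry */
	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
	XactLockTableWait(locker);
	return false;
}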

In order for any of this to really be possible, there'd have to be
some concession made to my position, as Greg mentions here. In other
words, I'd need buy-in for the general idea of holding locks in shared
memory from indexes across heap tuple insertion (subject to a sound
deadlock analysis, of course). Some modest compromises may need to be
made around interruptibility. I'd also probably need agreement that
it's okay that value locks can not last more than an instant (they
cannot be held indefinitely pending the end of a transaction). This
isn't something that I imagine to be too controversial, because it's
true today for a single unique index. As I've already outlined, anyone
waiting on another transaction with a would-be duplicate to commit has
very few guarantees about the order that it'll get its second shot
relative to the order in which it initially queued up behind the successful but
not-yet-committed inserter.

-- 
Peter Geoghegan




Re: [HACKERS] record identical operator

2013-09-15 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 If matview refreshs weren't using plain SQL and thus wouldn't
 require exposing that operator to SQL I wouldn't have a problem
 with this...

If RMVC were the end of the story, it might be worth building up a
mass of execution nodes directly, although it would be hard to see
how we could make the right planning choices (e.g., MergeJoin
versus HashJoin) that way.  But the whole incremental maintenance
area, to have any chance of working accurately and without an
endless stream of bugs, needs to be based on relational algebra.
There needs to be a way to express that in a much higher level
language than execution node creation.  If it doesn't use SQL we
would need to invent a relational language very much like it, which
would be silly when we have a perfectly good language we can
already use.  The sky is blue; let's move on.

The test for identical records will be needed in SQL if we want to
have these matview features.  We could limit use of that to
contexts where MatViewIncrementalMaintenanceIsEnabled(), but I
don't see the point.  If someone uses an undocumented operator, and
uses it inappropriately, they may get a surprising result.  We
already have undocumented operators to support record comparison
and pattern opclasses, and if people use those they may get
surprising results.  I don't recall any reports of problems from
people actually trying to do so.

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] plpgsql.print_strict_params

2013-09-15 Thread Marko Tiikkaja

On 15/09/2013 04:02, Peter Eisentraut wrote:

On Fri, 2013-09-13 at 23:56 +0200, Marko Tiikkaja wrote:

Attached is a patch for optionally printing more information on STRICT
failures in PL/PgSQL:


Please fix the tabs in the SGML files.


Oops.  Thanks.

Attached an updated patch to fix the tabs and to change this to a 
compile-time option.  Now it's not possible to flexibly disable the 
feature during runtime, but I think that's fine.



Regards,
Marko Tiikkaja


*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
***
*** 1077,1082  END;
--- 1077,1105 
  </para>
  
  <para>
+  If <literal>print_strict_params</> is enabled for the function,
+  you will get information about the parameters passed to the
+  query in the <literal>DETAIL</> part of the error message produced
+  when the requirements of STRICT are not met.  You can change this
+  setting on a system-wide basis by setting
+  <varname>plpgsql.print_strict_params</>, though only subsequent
+  function compilations will be affected.  You can also enable it
+  on a per-function basis by using a compiler option:
+ <programlisting>
+ CREATE FUNCTION get_userid(username text) RETURNS int
+ AS $$
+ #print_strict_params on
+ DECLARE
+ userid int;
+ BEGIN
+ SELECT users.userid INTO STRICT userid WHERE users.username = get_userid.username;
+ RETURN userid;
+ END
+ $$ LANGUAGE plpgsql;
+ </programlisting>
+ </para>
+ 
+ <para>
   For <command>INSERT</>/<command>UPDATE</>/<command>DELETE</> with
   <literal>RETURNING</>, <application>PL/pgSQL</application> reports
   an error for more than one returned row, even when
*** a/src/pl/plpgsql/src/pl_comp.c
--- b/src/pl/plpgsql/src/pl_comp.c
***
*** 351,356  do_compile(FunctionCallInfo fcinfo,
--- 351,357 
	function->fn_cxt = func_cxt;
	function->out_param_varno = -1; /* set up for no OUT param */
	function->resolve_option = plpgsql_variable_conflict;
+	function->print_strict_params = plpgsql_print_strict_params;
  
	if (is_dml_trigger)
		function->fn_is_trigger = PLPGSQL_DML_TRIGGER;
***
*** 847,852  plpgsql_compile_inline(char *proc_source)
--- 848,854 
	function->fn_cxt = func_cxt;
	function->out_param_varno = -1; /* set up for no OUT param */
	function->resolve_option = plpgsql_variable_conflict;
+	function->print_strict_params = plpgsql_print_strict_params;
  
	plpgsql_ns_init();
	plpgsql_ns_push(func_name);
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
***
*** 139,144  static void plpgsql_estate_setup(PLpgSQL_execstate *estate,
--- 139,150 
				 ReturnSetInfo *rsi);
  static void exec_eval_cleanup(PLpgSQL_execstate *estate);
  
+ static char *exec_get_query_params(PLpgSQL_execstate *estate,
+					   const PLpgSQL_expr *expr);
+ static char *exec_get_dynquery_params(PLpgSQL_execstate *estate,
+					   const PreparedParamsData *ppd);
+ 
+ 
  static void exec_prepare_plan(PLpgSQL_execstate *estate,
				  PLpgSQL_expr *expr, int cursorOptions);
  static bool exec_simple_check_node(Node *node);
***
*** 3226,3231  exec_prepare_plan(PLpgSQL_execstate *estate,
--- 3232,3310 
	exec_simple_check_plan(expr);
  }
  
+ static char *
+ exec_get_query_params(PLpgSQL_execstate *estate,
+					  const PLpgSQL_expr *expr)
+ {
+	int			paramno;
+	int			dno;
+	StringInfoData paramstr;
+	Bitmapset  *tmpset;
+ 
+	if (!expr->paramnos)
+		return "(no parameters)";
+ 
+	initStringInfo(&paramstr);
+	tmpset = bms_copy(expr->paramnos);
+	paramno = 1;
+	while ((dno = bms_first_member(tmpset)) >= 0)
+	{
+		Datum		paramdatum;
+		Oid			paramtypeid;
+		bool		paramisnull;
+		int32		paramtypmod;
+		PLpgSQL_var *curvar;
+ 
+		curvar = (PLpgSQL_var *) estate->datums[dno];
+ 
+		exec_eval_datum(estate, (PLpgSQL_datum *) curvar, &paramtypeid,
+						&paramtypmod, &paramdatum, &paramisnull);
+ 
+		if (paramno > 1)
+			appendStringInfo(&paramstr, ", ");
+ 
+		if (paramisnull)
+			appendStringInfo(&paramstr, "%s = NULL", curvar->refname);
+		else
+		{
+			char *value = convert_value_to_string(estate, paramdatum, paramtypeid);
+			appendStringInfo(&paramstr, "%s = '%s'", curvar->refname, value);
+		}
+ 
+		paramno++;
+	}
+	bms_free(tmpset);
+ 
+	return paramstr.data;
+ }
+ 
+ static char *
+ 

Re: [HACKERS] PL/pgSQL, RAISE and error context

2013-09-15 Thread Marko Tiikkaja

On 15/09/2013 13:50, I wrote:

On 15/09/2013 04:05, Peter Eisentraut wrote:

On Sat, 2013-09-14 at 04:58 +0200, Marko Tiikkaja wrote:

The attached patch (based on Pavel's patch) changes the default to be
slightly more verbose (the CONTEXT lines which were previously
omitted
will be visible), but adds a new PGVerbosity called COMPACT which
suppresses CONTEXT in non-error messages.  Now DEFAULT will be more
useful when debugging PL/PgSQL, and people who are annoyed by the new
behaviour can use the COMPACT mode.


Your patch fails the regression tests.


Attached is an updated patch with the regression test fixes.  No other
changes included.


Hmm.  I just noticed there's something weird going on in the select_view 
test.  I'm investigating this currently.



Regards,
Marko Tiikkaja




Re: [HACKERS] PL Code Archive Proposal

2013-09-15 Thread Peter Eisentraut
You added this to the commit fest, but there is no patch attached.  I
don't think you have a patch yet.






Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Peter Eisentraut
On Sat, 2013-09-14 at 22:15 +0200, Dimitri Fontaine wrote:
 
 This proposal comes with no patch because I think we are able to
 understand it without that, so that it would only be a waste of
 everybody's time to attach code for a random solution on the list here
 to that email.

It shouldn't be in the commit fest if it has no patch.





Re: [HACKERS] logical changeset generation v6

2013-09-15 Thread Peter Eisentraut
What's with 0001-Improve-regression-test-for-8410.patch?  Did you mean
to include that?





Re: [HACKERS] GUC for data checksums

2013-09-15 Thread Andres Freund
On 2013-09-15 03:34:53 +0200, Bernd Helmle wrote:
 
 
 --On 15. September 2013 00:25:34 +0200 Andres Freund
 and...@2ndquadrant.com wrote:
 
 Looks like a good idea to me. The implementation looks sane as well,
 except that I am not sure if we really need to introduce that faux
 variable. If the variable cannot be set and we have a SHOW hook, do we
 need it?
 
  It's along the lines of the other informational variables like block_size
  et al. Do you want a function instead, or what's your intention?

Well, you've added a data_checksums variable that won't ever get used,
right? You can't set the variable and the show hook doesn't actually use
it.
The reason you presumably did so is that there is no plain variable that
contains information about data checksums, we first need to read the
control file to know whether it's enabled and GUCs are initialized way
earlier than that.

A quick look unfortunately shows that there's no support for GUCs
without an actual underlying variable, so unless somebody adds that,
there doesn't seem to be much choice.

I think a comment documenting that the data_checksums variable is not
actually used would be appropriate.
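
[Editorial note: roughly what such a read-only GUC looks like, as a sketch of the
approach under discussion rather than the submitted patch: the backing variable
exists only because guc.c requires one, and SHOW gets its value from a show hook
that consults the control file via DataChecksumsEnabled().]

/* Sketch only; details differ from the actual patch. */
static bool data_checksums;		/* never consulted -- see show hook below */

static const char *
show_data_checksums(void)
{
	return DataChecksumsEnabled() ? "on" : "off";
}

/* entry in guc.c's ConfigureNamesBool[] */
{
	{"data_checksums", PGC_INTERNAL, PRESET_OPTIONS,
		gettext_noop("Shows whether data checksums are turned on for this cluster."),
		NULL,
		GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
	},
	&data_checksums,
	false,
	NULL, NULL, show_data_checksums
},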

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6

2013-09-15 Thread Andres Freund
On 2013-09-15 10:03:54 -0400, Peter Eisentraut wrote:
 What's with 0001-Improve-regression-test-for-8410.patch?  Did you mean
 to include that?

Gah, no. That's already committed and unrelated. Stupid wildcard.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Dimitri Fontaine
Peter Eisentraut pete...@gmx.net writes:
 It shouldn't be in the commit fest if it has no patch.

What should I do if my goal is to get community consensus on the best
way to solve a problem, and want to start the discussion with some
proposals?

My understanding is that a Commit Fest is mainly about Reviewing, that's
why I still added an entry for two designs that I need feedback on
before actually coding a solution.

Writing the code is the easiest part of those proposals, but that's only
true as soon as we decide what code we should be writing.

Regards,
-- 
Dimitri Fontaine                     06 63 07 10 78
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-09-15 Thread Greg Stark
On 15 Sep 2013 10:19, Peter Geoghegan p...@heroku.com wrote:

 On Sat, Sep 14, 2013 at 1:57 AM, Greg Stark st...@mit.edu wrote:
  It seems to me that the nature of the problem is that there will
unavoidably
  be a nexus between the two parts of the code here. We can try to
isolate it
  as much as possible but we're going to need a bit of a compromise.

 In order for any of this to really be possible, there'd have to be
 some concession made to my position, as Greg mentions here. In other
 words, I'd need buy-in for the general idea of holding locks in shared
 memory from indexes across heap tuple insertion (subject to a sound
 deadlock analysis, of course).

Actually that wasn't what I meant by that.

What I meant is that there is going to be some code coupling between the
executor and the btree code. That's purely a question of code structure, and
will be true regardless of the algorithm you settle on.

What I was suggesting was an API for a function that would encapsulate that
coupling. The executor would call this function which would promise to
obtain all the locks needed for both operations or give up. Effectively it
would be a special btree operation which would have special knowledge of
the executor only in that it knows that being able to get a lock on two
heap buffers is something the executor needs sometimes.

I'm not sure this fits well with your syntax since it assumes the update
will happen at the same time as the index lookup but as I said I haven't
read your patch, maybe it's not incompatible. I'm writing all this on my
phone so it's mostly just pie in the sky brainstorming. I'm sorry if it's
entirely irrelevant.


Re: [HACKERS] Assertions in PL/PgSQL

2013-09-15 Thread Peter Eisentraut
On Sat, 2013-09-14 at 20:47 +0200, Marko Tiikkaja wrote:
 Attached is a patch for supporting assertions in PL/PgSQL.  These are 
 similar to the Assert() backend macro: they can be disabled during 
 compile time, but when enabled, abort execution if the passed expression 
 is not true.

Doesn't build:

pl_exec.c: In function ‘exec_stmt_assert’:
pl_exec.c:3647:58: error: ‘ERRCODE_ASSERTION_FAILURE’ undeclared (first use in 
this function)
pl_exec.c:3647:58: note: each undeclared identifier is reported only once for 
each function it appears in






Re: [HACKERS] Assertions in PL/PgSQL

2013-09-15 Thread Marko Tiikkaja

On 2013-09-15 16:34, Peter Eisentraut wrote:

On Sat, 2013-09-14 at 20:47 +0200, Marko Tiikkaja wrote:

Attached is a patch for supporting assertions in PL/PgSQL.  These are
similar to the Assert() backend macro: they can be disabled during
compile time, but when enabled, abort execution if the passed expression
is not true.


Doesn't build:


Ugh.  Accidentally edited an auto-generated file.  Fixed in the 
attached, thanks!



Regards,
Marko Tiikkaja
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
***
*** 3528,3533  RAISE unique_violation USING MESSAGE = 'Duplicate user ID: ' || user_id;
--- 3528,3596 
  </para>
 </note>
  
+   <sect2 id="plpgsql-assert">
+    <title>Assertions</title>
+ 
+    <para>
+     <literal>Assertions</literal> provide a way to check that the
+     internal state of a function is as expected.  For example:
+ <programlisting>
+ CREATE FUNCTION delete_user(username text) RETURNS VOID AS $$
+ BEGIN
+   DELETE FROM users WHERE users.username = delete_user.username;
+   ASSERT FOUND;
+ END
+ $$ LANGUAGE plpgsql;
+ 
+ SELECT delete_user('mia');
+ ERROR:  Assertion on line 4 failed
+ CONTEXT:  PL/pgSQL function delete_user(text) line 4 at ASSERT
+ </programlisting>
+ 
+     One could implement the equivalent functionality with a conditional
+     RAISE EXCEPTION statement, but assertions have two major differences:
+ <itemizedlist>
+  <listitem>
+   <para>
+    They're a lot faster to write than the equivalent IF .. THEN RAISE EXCEPTION .. END IF constructs.
+   </para>
+  </listitem>
+  <listitem>
+   <para>
+    They can be (and are by default) disabled in production
+    environments.  A disabled assertion only incurs a negligible
+    compile-time overhead and no execution time overhead, so you
+    can write complex checks for development environments without
+    having to worry about performance.
+   </para>
+  </listitem>
+ </itemizedlist>
+    </para>
+ 
+    <para>
+     The configuration parameter <varname>plpgsql.enable_assertions</>
+     controls whether assertions are enabled.  Note that the value of
+     this parameter only affects subsequent compilations of
+     <application>PL/pgSQL</> functions, but not statements already
+     compiled in the current session.
+    </para>
+ 
+    <para>
+     It is also possible to enable assertions in a single function
+     (possibly overriding the global setting) by using a compile
+     option:
+ <programlisting>
+ CREATE FUNCTION delete_user(username text) RETURNS VOID AS $$
+ #enable_assertions
+ BEGIN
+   DELETE FROM users WHERE users.username = delete_user.username;
+   ASSERT FOUND;
+ END
+ $$ LANGUAGE plpgsql;
+ </programlisting>
+    </para>
+   </sect2>
+ 
   </sect1>
  
  <sect1 id="plpgsql-trigger">
*** a/src/backend/utils/errcodes.txt
--- b/src/backend/utils/errcodes.txt
***
*** 454,459  P0000    E    ERRCODE_PLPGSQL_ERROR                             plp
--- 454,460 
  P0001    E    ERRCODE_RAISE_EXCEPTION                                 raise_exception
  P0002    E    ERRCODE_NO_DATA_FOUND                                   no_data_found
  P0003    E    ERRCODE_TOO_MANY_ROWS                                   too_many_rows
+ P0004    E    ERRCODE_ASSERTION_FAILURE                               assertion_failure
  
  Section: Class XX - Internal Error
  
*** a/src/pl/plpgsql/src/pl_comp.c
--- b/src/pl/plpgsql/src/pl_comp.c
***
*** 351,356  do_compile(FunctionCallInfo fcinfo,
--- 351,357 
	function->fn_cxt = func_cxt;
	function->out_param_varno = -1; /* set up for no OUT param */
	function->resolve_option = plpgsql_variable_conflict;
+	function->enable_assertions = plpgsql_enable_assertions;
  
	if (is_dml_trigger)
		function->fn_is_trigger = PLPGSQL_DML_TRIGGER;
***
*** 847,852  plpgsql_compile_inline(char *proc_source)
--- 848,854 
	function->fn_cxt = func_cxt;
	function->out_param_varno = -1; /* set up for no OUT param */
	function->resolve_option = plpgsql_variable_conflict;
+	function->enable_assertions = plpgsql_enable_assertions;
  
	plpgsql_ns_init();
	plpgsql_ns_push(func_name);
*** a/src/pl/plpgsql/src/pl_exec.c
--- b/src/pl/plpgsql/src/pl_exec.c
***
*** 133,138  static int exec_stmt_dynexecute(PLpgSQL_execstate *estate,
--- 133,140 
				 PLpgSQL_stmt_dynexecute *stmt);
  static int exec_stmt_dynfors(PLpgSQL_execstate *estate,
				 PLpgSQL_stmt_dynfors *stmt);
+ static int exec_stmt_assert(PLpgSQL_execstate *estate,
+				 PLpgSQL_stmt_assert *stmt);
  
  static void plpgsql_estate_setup(PLpgSQL_execstate *estate,
				 PLpgSQL_function 

Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Peter Eisentraut
On Sun, 2013-09-15 at 16:09 +0200, Dimitri Fontaine wrote:
 Peter Eisentraut pete...@gmx.net writes:
  It shouldn't be in the commit fest if it has no patch.
 
 What should I do if my goal is to get community consensus on the best
 way to solve a problem, and want to start the discussion with some
 proposals?

Post it to the pgsql-hackers list.

 My understanding is that a Commit Fest is mainly about Reviewing, that's
 why I still added an entry for two designs that I need feedback on
 before actually coding a solution.
 
 Writing the code is the easiest part of those proposals, but that's only
 true as soon as we decide what code we should be writing.

I understand why using the commit fest process is attractive for this,
because it enables you to force the issue.  But the point of the commit
fest is to highlight patches whose discussion has mostly concluded and
get them committed.  If we add general discussion to the commit fest,
it'll just become a mirror of the mailing list, and then we'll need yet
another level of process to isolate the ready patches from that.





Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-09-15 Thread Kevin Grittner
Peter Geoghegan p...@heroku.com wrote:

 There is no reason to call CheckForSerializableConflictIn() with
 the additional locks held either. After all, For a heap insert,
 we only need to check for table-level SSI locks.

You're only talking about not covering that call with a *new*
LWLock, right?  We put some effort into making sure that such calls
were only inside of LWLocks which were needed for correctness.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Marko Tiikkaja

On 2013-09-15 16:51, Peter Eisentraut wrote:

On Sun, 2013-09-15 at 16:09 +0200, Dimitri Fontaine wrote:

My understanding is that a Commit Fest is mainly about Reviewing, that's
why I still added an entry for two designs that I need feedback on
before actually coding a solution.

Writing the code is the easiest part of those proposals, but that's only
true as soon as we decide what code we should be writing.


I understand why using the commit fest process is attractive for this,
because it enables you to force the issue.  But the point of the commit
fest is to highlight patches whose discussion has mostly concluded and
get them committed.  If we add general discussion to the commit fest,
it'll just become a mirror of the mailing list, and then we'll need yet
another level of process to isolate the ready patches from that.


I have one item like this in the current commit fest.  I wrote a PoC 
patch, but that's just a bad excuse to get around the issue that we 
don't really want just RFCs on there.


The problem is when you post an idea requesting comments on -HACKERS, 
and nobody or only one person answers despite efforts to try and keep 
the discussion alive and/or revive it.  What should one do in that case? 
 Writing a patch just to throw it away later because something's 
fundamentally broken (or unacceptable) seems silly if people could have 
just looked at the original -HACKERS post and said "this can't possibly 
work with your current design".



Regards,
Marko Tiikkaja




Re: [HACKERS] logical changeset generation v6

2013-09-15 Thread Peter Eisentraut
On Sat, 2013-09-14 at 22:49 +0200, Andres Freund wrote:
 Attached you can find the newest version of the logical changeset
 generation patchset.

You probably have bigger things to worry about, but please check the
results of cpluspluscheck, because some of the header files don't
include header files they depend on.

(I guess that's really pgcompinclude's job to find out, but
cpluspluscheck seems to be easier to use.)






Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-09-15 Thread Andres Freund
On 2013-09-15 02:19:41 -0700, Peter Geoghegan wrote:
 On Sat, Sep 14, 2013 at 1:57 AM, Greg Stark st...@mit.edu wrote:
  It seems to me that the nature of the problem is that there will unavoidably
  be a nexus between the two parts of the code here. We can try to isolate it
  as much as possible but we're going to need a bit of a compromise.
 
 Exactly. That's why all the proposals with the exception of this one
 have to date involved unacceptable bloating - that's how they try and
 span the nexus.

 I'll find it very difficult to accept any implementation that is going
 to bloat things even worse than our upsert looping example.

How would any even halfway sensible example cause *more* bloat than the
upsert looping thing?
I'll concede that bloat is something to be aware of, but just because
it's *an* issue, it's not *the* only issue.

All the solutions I can think of or have heard of that have a chance
of producing additional bloat also have a good chance of cleaning up the
additional bloat.

In the promises approach you simply can mark the promise index tuples
as LP_DEAD in the IGNORE case if you've found a conflicting tuple. In
the OR UPDATE case you can immediately reuse them. There's no heap
bloat. The logic for dead items already exists in nbtree, so that's not
too much complication. The case where that doesn't work is when postgres
dies in between or we're signalled to abort. But that produces bloat for
normal DML anyway. Any vacuum or insert can check whether the promise
xid has committed and remove the promise otherwise.

In the proposals that involve just inserting the heaptuple and then
handle the uniqueness violation when inserting the index tuples you can
immediately mark the index tuples as dead and mark it as prunable.


 The only  advantage of such an implementation over the upsert example is that
 it'll avoid burning through subxacts. The main reason I don't want to
 take that approach is that I know it won't be accepted, because it's a
 disaster. That's why the people that proposed this in various forms
 down through the years haven't gone and implemented it themselves. I
 do not accept that all of this is like the general situation with row
 locks.

The primary advantage will be that it's actually usable by users without
massive overhead in writing dozens of functions.

I don't think the bloat issue had much to do with the feature not
getting implemented so far. It's that nobody was willing to do the work
and endure the discussions around it. And I definitely applaud you for
finally tackling the issue despite that.

 I do not think that the big costs of having many dead
 duplicates in a unique index can be overlooked

Why would there be so many duplicate index tuples? The primary user of
this is going to be UPSERT. In case there's a conflicting tuple, there
is going to be a new tuple version. Which will need a new index entry
quite often. If there's no conflict, we will insert anyway.
So, there's the case of UPSERTs that could be done as HOT updates
because there's enough space on the page and none of the indexes
actually have changed. As explained above, we can simply mark the index
tuple as dead in that case (don't even need an exclusive lock for that,
if done right).

 (or perhaps the cost of
 cleaning them up eagerly, which is something I'd also expect to work
 very badly).

Why? Remember the page you did the insert to, do a _bt_moveright() to
catch any intervening splits. Mark the item as dead. Done. The next insert will
repack the page if necessary (cf. _bt_findinsertloc).
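
[Editorial note: sketched out, that cleanup step might look roughly like the
following. The notion of a "promise" tuple and this helper are hypothetical;
ItemIdMarkDead(), BTP_HAS_GARBAGE and MarkBufferDirtyHint() are the existing
nbtree/bufmgr pieces. Not from any posted patch.]

/* Hypothetical helper; assumes the caller re-located the promise tuple
 * (following any page split with _bt_moveright()) and holds a pin and
 * lock on buf. */
static void
mark_promise_tuple_dead(Buffer buf, OffsetNumber offnum)
{
	Page		page = BufferGetPage(buf);
	BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);

	ItemIdMarkDead(PageGetItemId(page, offnum));
	opaque->btpo_flags |= BTP_HAS_GARBAGE;

	/* hint-style change; the next _bt_findinsertloc() can reclaim the space */
	MarkBufferDirtyHint(buf, true);
}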

 What I will concede (what I have conceded, actually) is that it would
 be better if the locks were more granular. Now, I'm not so much
 concerned about concurrent inserters inserting values that just so
 happen to be values that were locked. It's more the case that I'm
 worried about inserters blocking on other values that are incidentally
 locked despite not already existing, that would go on the locked page
 or maybe a later page. In particular, I'm concerned about the impact
 on SERIAL primary key columns. Not exactly an uncommon case (though
 one I'd already thought to optimize by locking last).

Yes, I think that's the primary issue from a scalability and performance
POV. Locking entire ranges of values, potentially even on inner pages
(because you otherwise would have to split) isn't going to work.

 What I think might actually work acceptably is if we were to create an
 SLRU that kept track of value-locks per buffer. The challenge there
 would be to have regular unique index inserters care about them, while
 having little to no impact on their regular performance. This might be
 possible by having them check the buffer for external value locks in
 the SLRU immediately after exclusive locking the buffer - usually that
 only has to happen once per index tuple insertion (assuming no
 duplicates necessitate retry). If they find their value in the SLRU,
 they do something like unlock and block on the other xact and 

Re: [HACKERS] Minmax indexes

2013-09-15 Thread Peter Eisentraut
On Sat, 2013-09-14 at 21:14 -0300, Alvaro Herrera wrote:
 Here's a reviewable version of what I've dubbed Minmax indexes.

Please fix duplicate OID 3177.





Re: [HACKERS] logical changeset generation v6

2013-09-15 Thread Andres Freund
On 2013-09-15 11:20:20 -0400, Peter Eisentraut wrote:
 On Sat, 2013-09-14 at 22:49 +0200, Andres Freund wrote:
  Attached you can find the newest version of the logical changeset
  generation patchset.
 
 You probably have bigger things to worry about, but please check the
 results of cpluspluscheck, because some of the header files don't
 include header files they depend on.

Hm. I tried to get that right, but it's been a while since I last
checked. I don't regularly use cpluspluscheck because it doesn't work in
VPATH builds... We really need to fix that.

I'll push a fix for that to the git tree, don't think that's worth a
resend in itself.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Commitfest II, 9.4 edition

2013-09-15 Thread David Fetter
has begun!

New hackers, this is your chance to contribute by doing the first few
steps of review.  Seasoned hackers, this is a way to help the whole
project along.  You know the drill.

Too many patches have "Nobody" today.  That's the first thing we'll
need to fix, one way or another.

Let's make this one great!

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Andres Freund
On 2013-09-15 17:03:10 +0200, Marko Tiikkaja wrote:
 On 2013-09-15 16:51, Peter Eisentraut wrote:
 On Sun, 2013-09-15 at 16:09 +0200, Dimitri Fontaine wrote:
 My understanding is that a Commit Fest is mainly about Reviewing, that's
 why I still added an entry for two designs that I need feedback on
 before actually coding a solution.
 
 Writing the code is the easiest part of those proposals, but that's only
 true as soon as we decide what code we should be writing.
 
 I understand why using the commit fest process is attractive for this,
 because it enables you to force the issue.  But the point of the commit
 fest is to highlight patches whose discussion has mostly concluded and
 get them committed.  If we add general discussion to the commit fest,
 it'll just become a mirror of the mailing list, and then we'll need yet
 another level of process to isolate the ready patches from that.
 
 I have one item like this in the current commit fest.  I wrote a PoC patch,
 but that's just a bad excuse to get around the issue that we don't really
 want just RFCs on there.

 The problem is when you post an idea requesting comments on -HACKERS, and
 nobody or only one person answers despite efforts to try and keep the
 discussion alive and/or revive it.  What should one do in that case?

Adding it to the CF in that case seems like an acceptable emergency
measure in the case that nobody has replied to a proposal in a couple of
days. But afaics, that's not the case with the patches that Peter is
complaining about. This issue certainly hasn't had a lack of comments
and the archive proposal is completely new, so I see where Peter is
coming from and I tend to agree.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] pg_upgrade: Split off pg_fatal() from pg_log()

2013-09-15 Thread Marko Tiikkaja

Hi Peter,

On 2013-09-13 04:50, Peter Eisentraut wrote:

The experiences with elog() and ereport() have shown that having one
function that can return or not depending on some log level parameter
isn't a good idea when you want to communicate well with the compiler.
In pg_upgrade, there is a similar case with the pg_log() function.
Since that isn't a public API, I'm proposing to change it and introduce
a separate function pg_fatal() for those cases where the calls don't
return.
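
[Editorial note: for context, a minimal sketch of the split being proposed,
assuming pg_upgrade's existing eLogType/PG_FATAL/MAX_STRING machinery; details
may differ from the actual patch.]

/* Sketch only; the real routing of the message is elided. */
static void
pg_log_v(eLogType type, const char *fmt, va_list args)
{
	char		message[MAX_STRING];

	vsnprintf(message, sizeof(message), fmt, args);

	/* ... write "message" to the log file and/or terminal, based on type ... */
}

void
pg_log(eLogType type, const char *fmt, ...)
{
	va_list		args;

	va_start(args, fmt);
	pg_log_v(type, fmt, args);
	va_end(args);
}

void
pg_fatal(const char *fmt, ...)
{
	va_list		args;

	va_start(args, fmt);
	pg_log_v(PG_FATAL, fmt, args);
	va_end(args);

	printf("Failure, exiting\n");
	exit(1);					/* the compiler can now see this never returns */
}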


I think the reasoning behind this patch is sound.  However, I would like 
to raise a couple of small questions:


  1) Is there a reason for the fmt string not being const char*?  You 
changed it for pg_log_v(), but not for pg_log() and pg_fatal().
  2) Allowing PG_FATAL to be passed to pg_log() seems weird, but I 
don't feel strongly about that.


Other than that, the patch looks good to me.


Regards,
Marko Tiikkaja




Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Dimitri Fontaine
Peter Eisentraut pete...@gmx.net writes:
 Post it to the pgsql-hackers list.

Well. I think I just did ;-)

 I understand why using the commit fest process is attractive for this,
 because it enables you to force the issue.  But the point of the commit

It enables me to have a slight chance of seeing the effort happen within
the current release development time frame, 9.4. The next commitfest is
going to happen in November; if I don't have code to submit by then,
the next release will still have no solution to offer.

 fest is to highlight patches whose discussion has mostly concluded and
 get them committed.  If we add general discussion to the commit fest,
 it'll just become a mirror of the mailing list, and then we'll need yet
 another level of process to isolate the ready patches from that.

Well, even with what I said before, you're completely right in this
phrasing of the situation, so I've just removed my two entries with no
patch from the current CF.

With some luck I will be able to revive the discussions and reach some
consensus in time to have code written for the Open CF 2013-11.

Regards,
-- 
Dimitri Fontaine                     06 63 07 10 78
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] PL Code Archive Proposal

2013-09-15 Thread Dimitri Fontaine
Peter Eisentraut pete...@gmx.net writes:
 You added this to the commit fest, but there is no patch attached.  I
 don't think you have a patch yet.

My intention is to work on a patch only after getting some feedback and
possibly a consensus, if enough people are interested in fixing the
problem.

Meanwhile, entry removed from current CF.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Proposal: json_populate_record and nested json objects

2013-09-15 Thread Andrew Dunstan


On 09/14/2013 10:27 PM, chris travers wrote:

Hi all;
Currently json_populate_record and json_populate_recordset cannot work 
with nested json objects.  This creates two fundamental problems when 
trying to use JSON as an interface format.
The first problem is you can't easily embed a json data type in an 
json object and have it populate a record.  This means that storing 
extended attributes in the database is somewhat problematic if you 
accept the whole row in as a json object.
The second problem is that nested data structures and json don't go 
together well.  You can't have  a composite type which has as an 
attribute an array of another composite type and populate this from a 
json object.  This makes json largely an alternative to hstore for 
interfaces in its current shape.

I would propose handling the json_populate_record and friends as such:
1. Don't throw errors initially as a pre-check if the json object is 
nested.
2. If one comes to a nested fragment, check the attribute type it is 
going into first.

2.1 If it is a json type, put the nested fragment there.
2.2 If it is a composite type (i.e. anything in pg_class), push it 
through another json_populate_record run
2.3 If it is neither, then see if a json::[type] cast exists, if 
so call it.

2.4 Otherwise raise an exception
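
[Editorial note: a hypothetical sketch of the per-attribute dispatch in steps
2.1-2.4 above. populate_record_from_json() is invented, and step 2.3's
json::[type] cast lookup is simplified here to the type's input function; this
is not the shape of any actual patch.]

/* Illustration only. */
static Datum
populate_one_attribute(Oid atttypid, int32 atttypmod, char *json_fragment)
{
	if (atttypid == JSONOID)
	{
		/* 2.1: a nested fragment lands in a json column unchanged */
		return DirectFunctionCall1(json_in, CStringGetDatum(json_fragment));
	}
	else if (type_is_rowtype(atttypid))
	{
		/* 2.2: recurse through the same machinery (hypothetical helper) */
		return populate_record_from_json(atttypid, json_fragment);
	}
	else
	{
		/* 2.3/2.4: fall back to the type's input function, which errors
		 * out for values the target type cannot parse */
		Oid			typinput;
		Oid			typioparam;

		getTypeInputInfo(atttypid, &typinput, &typioparam);
		return OidInputFunctionCall(typinput, json_fragment, typioparam, atttypmod);
	}
}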
I have a few questions before I go on to look at creating a patch.
1.  Are there any problems anyone spots with this approach?
2.  Is anyone working on something like this?
3.  Would it be preferable to build something like this first as an 
extension (perhaps with different function names) or first as a patch?





Well, you could fairly easily build it as an extension as a POC. The 
main point of the API this is built on was to allow for extensions.


The logic changes might be a bit tricky. I'll be interested to see what 
you come up with.


If we're going to do this we should make these handle arrays as well as 
objects.


cheers

andrew






[HACKERS] [Review] Effectiveness of enable_material = off

2013-09-15 Thread Andrew Gierth
Before getting to the administrivia of the patch review, I think it's
worth a bit of analysis on why the planner behaves as it does.

The example query:

explain analyze select * from
( select oid, * from pg_class order by oid) as c
join
( select * from pg_attribute a order by attrelid) as a
on c.oid = a.attrelid;

This produces a plan that looks like it ought not to need
materialization, since the inner path is just an indexscan. However,
the reason why it does so is this: at the time that the mergejoin is
being costed, the inner path is not just an indexscan, but rather a
SubqueryScan node which will be optimized out later (in setrefs).

In this type of situation the effect of enable_material=false with
this patch will obviously be to force it to use another join type if
possible, and it strikes me that this may actually be somewhat _less_
useful than the existing behaviour where enable_material only disables
performance-optimization uses of Materialize. (One can after all use
enable_mergejoin to force another join type.)

So on balance I don't see a strong reason to accept the patch; I'm not
convinced that it's not worse than the current behaviour. Anyone have
strong opinions on this?

Administrivia:

The patch applies cleanly with patch but not with git apply, since it
has a spurious 'new file mode' line; how was it prepared? (there is a
thread discussing this problem)

No tests, but I wouldn't think any are needed for this.

No doc patch, which I don't think is OK; the current wording of the
docs seems more descriptive of the current behaviour, and at least a
small change to the wording ought to be considered if the patch is to
be accepted.

The code passes all tests and has the expected effect on plans, but
there is a slightly unexpected effect on the explain output; the
penalty cost is applied to the mergejoin and not to the materialize
node itself. Fixing this in create_mergejoin_plan would at least make
the explain output look less surprising.

Patch state - waiting for author, though I suggest getting more
buy-in on accepting the change before spending time on the docs or
costing issue.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] Assertions in PL/PgSQL

2013-09-15 Thread Jaime Casanova
On 14/09/2013 15:25, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 There is a significant issue - a new reserved keyword. There is a high
probability that a lot of users have functions named assert.

 I like this functionality, but I dislike any compatibility break for a
feature that can be implemented as an extension without any loss of
compatibility or loss of functionality.

 So can you redesign this without new keyword?


Hi,

If using ASSERT as a keyword is not acceptable (not that I agree, but just in
case), what about using RAISE EXCEPTION WHEN (condition)?

--
Jaime Casanova
2ndQuadrant: Your PostgreSQL partner


Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Jeff Janes
On Sun, Sep 15, 2013 at 6:51 AM, Peter Eisentraut pete...@gmx.net wrote:

 On Sat, 2013-09-14 at 22:15 +0200, Dimitri Fontaine wrote:
 
  This proposal comes with no patch because I think we are able to
  understand it without that, so that it would only be a waste of
  everybody's time to attach code for a random solution on the list here
  to that email.

 It shouldn't be in the commit fest if it has no patch.


I thought the general recommendation was the opposite, that planning and
road maps should be submitted for review before non-trivial coding is
started; and that despite the name the commitfest is the best way that this
is done.   Of course now I can't find the hackers thread where this
recommendation was made...

Cheers,

Jeff


Re: [HACKERS] Assertions in PL/PgSQL

2013-09-15 Thread Marko Tiikkaja

On 2013-09-15 23:23, Jaime Casanova wrote:

If using ASSERT as keyword is not acceptable, not that i agree but in case.
What about using RAISE EXCEPTION WHEN (condition)


I think it would be extremely surprising if a command like that got 
optimized away based on a GUC, so I don't think that would be a good idea.



Regards,
Marko Tiikkaja




Re: [HACKERS] Assertions in PL/PgSQL

2013-09-15 Thread Jaime Casanova
On 15/09/2013 17:13, Marko Tiikkaja ma...@joh.to wrote:

 On 2013-09-15 23:23, Jaime Casanova wrote:

 If using ASSERT as keyword is not acceptable, not that i agree but in
case.
 What about using RAISE EXCEPTION WHEN (condition)


 I think it would be extremely surprising if a command like that got
optimized away based on a GUC, so I don't think that would be a good idea.



Ah! Good point, didn't think of that.

--
Jaime Casanova
2ndQuadrant: Your PostgreSQL partner


Re: [HACKERS] record identical operator

2013-09-15 Thread Noah Misch
On Sat, Sep 14, 2013 at 08:58:32PM +0200, Andres Freund wrote:
 On 2013-09-14 11:25:52 -0700, Kevin Grittner wrote:
  Andres Freund and...@2ndquadrant.com wrote:
   But both arrays don't have the same binary representation since
   the former has a null bitmap, the latter not. So, if you had a
   composite type like (int4[]) and would compare that without
   invoking operators you'd return something false in some cases
   because of the null bitmaps.
  
  Not for the = operator.  The new identical operator would find
  them to not be identical, though.
 
 Yep. And I think that's a problem if exposed to SQL. People won't
 understand the hazards and end up using it because its faster or
 somesuch.

The important question is whether to document the new operator and/or provide
it under a guessable name.  If we give the operator a weird name, don't
document it, and put an internal use only comment in the catalogs, that is
essentially as good as hiding this feature at the SQL level.

I'm of two minds on that question.  On the one hand, MV maintenance is hardly
the first use case for an identity operator.  Any replication system or user
space materialized view implementation might want this.  On the other hand,
offering it for the record type exclusively is surprising.  It's also
surprising how records with different numbers of dropped columns can be found
identical, even though a record column within the top-level record is not
permitted to vary that way.

Supposing a decision to document the operator, a second question is whether
=== is the right name:

On Thu, Sep 12, 2013 at 03:27:27PM -0700, Kevin Grittner wrote:
 The identical (===) and not identical (!==) operator names were
 chosen because of a vague similarity to the exactly equals
 concepts in JavaScript and PHP, which use that name.  The semantics
 aren't quite the same, but it seemed close enough not to be too
 surprising.

Maybe.  If we were mimicking the JavaScript/PHP operator of the same name,
'1.0'::numeric === '1.00'::numeric would return true, and '1'::int4 ===
'1'::int2 would return false.  The patch submitted returns false for the first
example and raises a "cannot compare dissimilar column types" error when
comparing an int4 record column to an int2 record column.

 I think, introducing a noticeable amount of infrastructure for this just
 because of citext is a bad idea.
 At some point we need to replace citext with proper case-insensitive
 collation support - then it really might become necessary.

citext is just one example.  Others include '1.0'::numeric = '1.00'::numeric
and '30 day'::interval = '1 mon'::interval.

  (2)  Require every data type which can be used in a matview to
  implement some new operator or function for identical.  Perhaps
  that could be mitigated to only implementat it if equal values can
  have user-visible differences.
 
 That basically would require adding a new member to btree opclasses that
 btrees don't need themselves... Hm.

That's the wrong way to model it.  Identity is a particular kind of equality,
which is to say a particular hash opclass.  It so happens that memcmp() is the
logical way to implement most/all identity operators, so we can get a btree
opclass as well.

Type-specific identity operators seem like overkill, anyway.  If we find that
meaningless variations in a particular data type are causing too many false
non-matches for the generic identity operator, the answer is to make the
functions generating datums of that type settle on a canonical form.  That
would be the solution for your example involving array null bitmaps.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Where to load modules from?

2013-09-15 Thread Andrew Dunstan


On 09/15/2013 05:52 PM, Jeff Janes wrote:
On Sun, Sep 15, 2013 at 6:51 AM, Peter Eisentraut pete...@gmx.net 
mailto:pete...@gmx.net wrote:


On Sat, 2013-09-14 at 22:15 +0200, Dimitri Fontaine wrote:

 This proposal comes with no patch because I think we are able to
 understand it without that, so that it would only be a waste of
 everybody's time to attach code for a random solution on the
list here
 to that email.

It shouldn't be in the commit fest if it has no patch.


I thought the general recommendation was the opposite, that planning 
and road maps should be submitted for review before non-trivial coding 
is started; and that despite the name the commitfest is the best way 
that this is done. Of course now I can't find the hackers thread where 
this recommendation was made...





It is unquestionably correct that roadmaps and planning should be made 
available for review and discussion. But the assertion that this should 
be done via the commitfest is not. The commitfest app has never been for 
anything other than code, that I am aware of, and I am quite sure you 
will find fierce resistance to any notion that design discussions should 
take place anywhere but on this mailing list.


cheers

andrew




[HACKERS] Re: [HACKERS] Re: [HACKERS] Re: [HACKERS] Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

2013-09-15 Thread Jeff Janes
On Fri, Sep 6, 2013 at 11:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Robert Haas robertmh...@gmail.com writes:
  Sure, but the point is that 5. is not the same as 5.000 today.  If
  you start whacking this around you'll be changing that behavior, I
  think.

 Yeah.  And please note that no matter what the OP may think, a lot of
 people *do* consider that there's a useful distinction between 5.000
 and 5. --- it might indicate the number of significant digits in
 a measurement, for example.  I do not see us abandoning that just to make
 certain cases of ALTER TABLE faster.


But note that the current behavior is worse in this regard.  If you specify
a scale of 4 at the column level, then it is not possible to distinguish
between 5.000 and 5. on a per-value basis within that column.  If the
scale at the column level was taken as the maximum scale, not the only
allowed one, then that distinction could be recorded. That behavior seems
more sensible to me (metrologically speaking, regardless of alter table
performance aspects), but I don't see how to get there from here with
acceptable compatibility breakage.

My lesson from going over this thread is, just use numeric, not
numeric(x,y), unless you are storing currency or need compatibility with a
different database system (in which case, good luck with that).

Cheers,

Jeff


Re: [HACKERS] record identical operator

2013-09-15 Thread Hannu Krosing
On 09/15/2013 01:35 PM, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:

 If matview refreshs weren't using plain SQL and thus wouldn't
 require exposing that operator to SQL I wouldn't have a problem
 with this...
 If RMVC were the end of the story, it might be worth building up a
 mass of execution nodes directly, although it would be hard to see
 how we could make the right planning choices (e.g., MergeJoin
 versus HashJoin) that way.  But the whole incremental maintenance
 area, to have any chance of working accurately and without an
 endless stream of bugs, needs to be based on relational algebra.
 There needs to be a way to express that in a much higher level
 language than execution node creation.  If it doesn't use SQL we
 would need to invent a relational language very much like it, which
 would be silly when we have a perfectly good language we can
 already use.  The sky is blue; let's move on.

 The test for identical records will be needed in SQL if we want to
 have these matview features.  We could limit use of that to
 contexts where MatViewIncrementalMaintenanceIsEnabled(), but I
 don't see the point.  If someone uses an undocumented operator, and
 uses it inappropriately, they may get a surprising result.
Just remember to document it as undocumented so
people will know not to use it ;)

Lots of people were bitten when (undocumented) hash
functions were changed thus breaking hash-based partitioning.

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] Commitfest II, 9.4 edition

2013-09-15 Thread Ian Lawrence Barwick
2013/9/16 David Fetter da...@fetter.org:
 has begun!

 New hackers, this is your chance to contribute by doing the first few
 steps of review.  Seasoned hackers, this is a way to help the whole
 project along.  You know the drill.

 Too many patches have Nobody today.  That's the first thing we'll
 need to fix, one way or another.

 Let's make this one great!

And for those of us who haven't really participated before, a couple
of useful links ;)

  https://commitfest.postgresql.org/action/commitfest_view/inprogress
  http://wiki.postgresql.org/wiki/Reviewing_a_Patch

It might be worth adding a link on the developer page (
http://www.postgresql.org/developer/ ),
as the existence of commitfests is otherwise very well hidden (I ended
up googling
the link).

Regards

Ian Barwick




Re: [HACKERS] Proposal: json_populate_record and nested json objects

2013-09-15 Thread Chris Travers


 On 15 September 2013 at 18:42 Andrew Dunstan and...@dunslane.net wrote:



 On 09/14/2013 10:27 PM, chris travers wrote:

 Well, you could fairly easily build it as an extension as a POC. The
 main point of the API this is built on was to allow for extensions.

 The logic changes might be a bit tricky. I'll be interested to see what
 you come up with.

 If we're going to do this we should make these handle arrays as well as
 objects.

Yes, arrays are necessary for my use case.  I probably should have been explicit
that I needed that too.
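
For example, the kind of call I have in mind would look something like this
(type names here are invented purely for illustration):

  CREATE TYPE inner_t AS (a int, b text);
  CREATE TYPE outer_t AS (x int, nested inner_t, tags text[]);

  SELECT * FROM json_populate_record(NULL::outer_t,
    '{"x": 1, "nested": {"a": 2, "b": "two"}, "tags": ["t1", "t2"]}');

with the nested object unpacked into the inner_t column and the JSON array
into the text[] column, rather than coming back as NULL or an error.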

Best Wishes,
Chris Travers

 cheers

 andrew


Best Wishes,
Chris Travers
http://www.2ndquadrant.com
PostgreSQL Services, Training, and Support

Re: [HACKERS] [PoC] pgstattuple2: block sampling to reduce physical read

2013-09-15 Thread Satoshi Nagayasu

(2013/09/15 11:07), Peter Eisentraut wrote:

On Sat, 2013-09-14 at 16:18 +0900, Satoshi Nagayasu wrote:

I'm looking forward to seeing more feedback on this approach,
in terms of design and performance improvement.
So, I have submitted this for the next CF.


Your patch fails to build:

pgstattuple.c: In function ‘pgstat_heap_sample’:
pgstattuple.c:737:13: error: ‘SnapshotNow’ undeclared (first use in this 
function)
pgstattuple.c:737:13: note: each undeclared identifier is reported only once 
for each function it appears in


Thanks for checking. Fixed to eliminate SnapshotNow.
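
For reference, the sampling variant is called the same way as the existing
pgstattuple() function, e.g.:

  SELECT * FROM pgstattuple2('pg_catalog.pg_class');

It returns a single row with the same columns, but the numbers are
estimated from a sample of up to SAMPLE_SIZE (currently 3000) blocks
rather than from a full scan of the relation.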

Regards,
--
Satoshi Nagayasu sn...@uptime.jp
Uptime Technologies, LLC. http://www.uptime.jp
diff --git a/contrib/pgstattuple/pgstattuple--1.1--1.2.sql 
b/contrib/pgstattuple/pgstattuple--1.1--1.2.sql
index 2783a63..8ebec6f 100644
--- a/contrib/pgstattuple/pgstattuple--1.1--1.2.sql
+++ b/contrib/pgstattuple/pgstattuple--1.1--1.2.sql
@@ -37,3 +37,17 @@ CREATE FUNCTION pg_relpages(IN relname regclass)
 RETURNS BIGINT
 AS 'MODULE_PATHNAME', 'pg_relpagesbyid'
 LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstattuple2(IN relname regclass,
+OUT table_len BIGINT,  -- physical table length in bytes
+OUT tuple_count BIGINT,-- number of live tuples
+OUT tuple_len BIGINT,  -- total tuples length in bytes
+OUT tuple_percent FLOAT8,  -- live tuples in %
+OUT dead_tuple_count BIGINT,   -- number of dead tuples
+OUT dead_tuple_len BIGINT, -- total dead tuples length in bytes
+OUT dead_tuple_percent FLOAT8, -- dead tuples in %
+OUT free_space BIGINT, -- free space in bytes
+OUT free_percent FLOAT8)   -- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuple2'
+LANGUAGE C STRICT;
+
diff --git a/contrib/pgstattuple/pgstattuple--1.2.sql 
b/contrib/pgstattuple/pgstattuple--1.2.sql
index e5fa2f5..963eb00 100644
--- a/contrib/pgstattuple/pgstattuple--1.2.sql
+++ b/contrib/pgstattuple/pgstattuple--1.2.sql
@@ -77,3 +77,17 @@ CREATE FUNCTION pg_relpages(IN relname regclass)
 RETURNS BIGINT
 AS 'MODULE_PATHNAME', 'pg_relpagesbyid'
 LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstattuple2(IN relname regclass,
+OUT table_len BIGINT,  -- physical table length in bytes
+OUT tuple_count BIGINT,-- number of live tuples
+OUT tuple_len BIGINT,  -- total tuples length in bytes
+OUT tuple_percent FLOAT8,  -- live tuples in %
+OUT dead_tuple_count BIGINT,   -- number of dead tuples
+OUT dead_tuple_len BIGINT, -- total dead tuples length in bytes
+OUT dead_tuple_percent FLOAT8, -- dead tuples in %
+OUT free_space BIGINT, -- free space in bytes
+OUT free_percent FLOAT8)   -- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuple2'
+LANGUAGE C STRICT;
+
diff --git a/contrib/pgstattuple/pgstattuple.c 
b/contrib/pgstattuple/pgstattuple.c
index f9ba0a6..2b29d44 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -41,9 +41,22 @@ PG_MODULE_MAGIC;
 
 PG_FUNCTION_INFO_V1(pgstattuple);
 PG_FUNCTION_INFO_V1(pgstattuplebyid);
+PG_FUNCTION_INFO_V1(pgstattuple2);
 
 extern Datum pgstattuple(PG_FUNCTION_ARGS);
 extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
+extern Datum pgstattuple2(PG_FUNCTION_ARGS);
+
+#define SAMPLE_SIZE 3000
+
+typedef struct pgstattuple_block_stats
+{
+   uint64  tuple_count;
+   uint64  tuple_len;
+   uint64  dead_tuple_count;
+   uint64  dead_tuple_len;
+   uint64  free_space; /* free/reusable space in bytes 
*/
+} pgstattuple_block_stats;
 
 /*
  * struct pgstattuple_type
@@ -66,8 +79,9 @@ typedef void (*pgstat_page) (pgstattuple_type *, Relation, 
BlockNumber,
 
 static Datum build_pgstattuple_type(pgstattuple_type *stat,
   FunctionCallInfo fcinfo);
-static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
+static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo, bool 
enable_sample);
 static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
+static Datum pgstat_heap_sample(Relation rel, FunctionCallInfo fcinfo);
 static void pgstat_btree_page(pgstattuple_type *stat,
  Relation rel, BlockNumber blkno,
  BufferAccessStrategy bstrategy);
@@ -81,6 +95,11 @@ static Datum pgstat_index(Relation rel, BlockNumber start,
 pgstat_page pagefn, FunctionCallInfo fcinfo);
 static void pgstat_index_page(pgstattuple_type *stat, Page page,
  OffsetNumber minoff, OffsetNumber maxoff);
+static void compute_parameters(pgstattuple_block_stats *block_stats,
+  BlockNumber sample_size, BlockNumber nblocks,
+  uint64 *tuple_count, uint64 *tuple_len,
+  uint64 *dead_tuple_count, 

Re: [HACKERS] [PATCH] bitmap indexes

2013-09-15 Thread Jaime Casanova
On Sat, Sep 14, 2013 at 1:14 PM, Abhijit Menon-Sen a...@2ndquadrant.com wrote:
 Hi.

 This is a cleaned-up and rebased version of the bitmap index patch from
 Gavin Sherry, later revised by Gianni Ciolli and Gabriele Bartolini, and
 others including Daniel Bausch.


Hi Abhijit,

Please, in the next update, consider these messages I'm getting when
compiling with your patch.

bitmapxlog.c: In function ‘bitmap_xlog_cleanup’:
bitmapxlog.c:658:32: warning: ‘reln’ may be used uninitialized in this
function [-Wuninitialized]
selfuncs.c: In function ‘bmcostestimate’:
selfuncs.c:7327:13: warning: unused variable ‘indexCorrelation’
[-Wunused-variable]
selfuncs.c:7326:15: warning: unused variable ‘indexSelectivity’
[-Wunused-variable]
selfuncs.c:7325:11: warning: unused variable ‘indexTotalCost’
[-Wunused-variable]
selfuncs.c:7324:11: warning: unused variable ‘indexStartupCost’
[-Wunused-variable]


Also, there are 2 regression tests failing (attached regression.diffs)

And this error when trying to generate the docs:

openjade:bitmap.sgml:123:85:X: reference to non-existent ID
SQL-CREATEINDEX-TITLE


And finally, I was exercising the feature in some ways and got a
crash when creating an index concurrently (attached
index_failure.txt). It wasn't just a crash: I couldn't start up the
server again afterwards.

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157
postgres=# create index concurrently  idx_t1_i on t1 using bitmap (i);
TRAP: FailedAssertion(!(GetTopTransactionIdIfAny() == ((TransactionId) 0)), 
File: index.c, Line: 3011)
The connection to the server was lost. Attempting reset: LOG:  server 
process (PID 20955) was terminated by signal 6: Aborted
DETAIL:  Failed process was running: create index concurrently  idx_t1_i on t1 
using bitmap (i);
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the 
current transaction and exit, because another server process exited abnormally 
and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
FATAL:  the database system is in recovery mode
Failed.
Time: 239.212 ms
! LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2013-09-15 23:21:15 ECT
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 0/AA9C4E8
PANIC:  _bitmap_xlog_insert_last_bitmapwords: VMI block not found: 5
CONTEXT:  xlog redo insert words in a not-last bitmap page: rel 1663/12937/70025
LOG:  startup process (PID 24399) was terminated by signal 6: Aborted
LOG:  aborting startup due to startup process failure



regression.diffs
Description: Binary data



[HACKERS] \h open

2013-09-15 Thread Oleg Bartunov
I noticed there is nothing available in the built-in psql help about the
OPEN command. Is that intentional?

postgres=# \h open
No help available for open.
Try \h with no arguments to see available help.

Oleg


Re: [HACKERS] git apply vs patch -p1

2013-09-15 Thread Ashutosh Bapat
On Sun, Sep 15, 2013 at 12:38 AM, Andres Freund and...@2ndquadrant.com wrote:

 On 2013-09-14 15:03:52 -0400, Andrew Dunstan wrote:
 
  On 09/14/2013 02:37 PM, Josh Berkus wrote:
  Folks,
  
  Lately I've been running into a lot of reports of false conflicts
  reported by git apply.  The most recent one was the points patch,
  which git apply rejected for completely ficticious reasons (it claimed
  that the patch was trying to create a new file where a file already
  existed, which it wasn't).
  
  I think we should modify the patch review and developer instructions to
  recommend always using patch -p1 (or -p0, depending), even if the patch
  was produced with git diff.
  
  Thoughts?
  
 
 
  FWIW that's what I invariably use.
 
  You do have to be careful to git-add/git-rm any added/deleted files,
 which
  git-apply does for you (as well as renames) - I've been caught by that a
  couple of times.

 git reset?


git reset wouldn't remove the files that were newly added by the patch,
would it?


 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] GIN improvements part 1: additional information

2013-09-15 Thread Peter Eisentraut
Please fix compiler warnings:

gindatapage.c: In function ‘dataPlaceToPage’:
gindatapage.c:706:24: warning: unused variable ‘pageBackup’ [-Wunused-variable]
gindatapage.c: In function ‘updateItemIndexes’:
gindatapage.c:1182:6: warning: variable ‘totalsize’ set but not used 
[-Wunused-but-set-variable]
gindatapage.c: In function ‘dataPlaceToPage’:
gindatapage.c:633:28: warning: ‘startI’ may be used uninitialized in this 
function [-Wuninitialized]
gindatapage.c:617:21: note: ‘startI’ was declared here






Re: [HACKERS] PL/pgSQL, RAISE and error context

2013-09-15 Thread Peter Eisentraut
On Sun, 2013-09-15 at 14:28 +0200, Marko Tiikkaja wrote:
 On 15/09/2013 13:58, I wrote:
  Hmm.  I just noticed there's something weird going on in the select_view
  test.  I'm investigating this currently.
 
 Seems that there's some magic going on and I overwrote the expected 
 results of the wrong file.  However, I can't figure out how one is 
 supposed to be getting the output of expected/select_views.out, nor do I 
 find this documented anywhere (I know xml has a similar thing so I tried 
 grepping around for XML, to no avail).
 
 Here's an updated patch, but I think expected/select_views.out is still 
 broken.

Your patch still fails the plperl regression tests.

I don't see a failure with select_views.  Your issue might be that you
updated expected/select_views_1.out but not expected/select_views.out.
This documentation might help:
http://www.postgresql.org/docs/devel/static/regress-variant.html





Re: [HACKERS] [PATCH] bitmap indexes

2013-09-15 Thread Abhijit Menon-Sen
Hi Jaime.

At 2013-09-15 23:32:11 -0500, ja...@2ndquadrant.com wrote:

 bitmapxlog.c:658:32: warning: ‘reln’ may be used uninitialized in this
 function [-Wuninitialized]

I added an XXX comment about this one, will investigate and fix.

Will look into the other errors as well, many thanks for the report.

-- Abhijit




Re: [HACKERS] pg_system_identifier()

2013-09-15 Thread Satoshi Nagayasu
Hi,

I'm catching up with the discussion as a reviewer...

(2013/08/27 5:35), Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
 On 2013-08-26 10:10:54 -0700, Josh Berkus wrote:
 I'm going to reverse my vote, and vote against this patch.  The reason
 why is that I think we should instead have a function:

 pg_controldata(parameter text)

 ... which would report *all* strings in pg_controldata.  Hence, you'd do
 instead:

 pg_controldata('system identifier')

 This will hopefully spare us from 15 patches incrementally adding all of
 the individual items in controldata.
 
 If anything but the proposed feature, it should be an SRF - passing in
 text parameters isn't very discoverable.
 
 I'm not pleased with the idea that we'd have to dumb all the relevant
 datatypes down to text so that we could push them through this single
 function.
 
 Also, what about i18n?  pg_controldata localizes all the strings it
 prints, but I doubt that's a great idea for either the input or the
 output of this proposed function.

How about adding a new system view, backed by a new function that returns
a single pg_controldata value in text type, and using a cast for
each column in the view definition?

CREATE VIEW pg_catalog.pg_controldata AS
  SELECT pg_controldata('control_version')::integer AS control_version,
 pg_controldata('catalog_version')::integer AS catalog_version,
 pg_controldata('system_identifier')::bigint AS system_identifier,
 ...
 pg_controldata('next_xlog_file')::char(25) AS next_xlog_file,
 ...
 pg_controldata('encoding')::text AS encoding;

The view works like an SRF, allowing us to retrieve all the values of
pg_controldata, with appropriate types, in a single record:

  select * from pg_catalog.pg_controldata;

To get the system identifier value:

  select system_identifier from pg_catalog.pg_controldata;

Regards,
-- 
Satoshi Nagayasu sn...@uptime.jp
Uptime Technologies, LLC. http://www.uptime.jp




Re: [HACKERS] Hstore: Query speedups with Gin index

2013-09-15 Thread David Fetter
Blake,

I've taken the liberty of adding this patch to the current Commitfest.

In future, please continue to send patches to this thread, and also add
them to the commitfest application once you have a message ID for them :)

Cheers,
David.
On Mon, Sep 09, 2013 at 09:55:01AM -0500, Blake Smith wrote:
 Thanks for getting back to me about this change Oleg. I took your advice
 and reworked the patch by adding a new hstore gin opclass
 (gin_hstore_combined_ops) and leaving the functionality of the default
 hstore gin opclass the same. This should prevent the on-disk compatibility
 issues from the first patch, and allow users to select the different
 indexing method when they build the index. The hstore regression suite is
 passing for me locally with the --enable-cassert configure flag. Please let
 me know what you think and if there is any other work that would need to be
 done (style cleanups, updating documentation, etc) to get this merged.
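 
 For example (index and table names below are just placeholders), the new
 opclass is picked when the index is built:
 
   CREATE INDEX hstore_combined_idx ON some_table
   USING gin (attrs gin_hstore_combined_ops);
 
 whereas building the index without naming an opclass, USING gin (attrs),
 still gets the default opclass and the old behavior.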
 
 Thanks!
 
 Blake
 
 
 
 
 
 
 On Fri, Sep 6, 2013 at 1:47 PM, Oleg Bartunov obartu...@gmail.com wrote:
 
  Blake,
 
  I think it's better to implement this patch as a separate opclass, so
  users will have option to choose indexing.
 
  Oleg
 
 
  On Tue, Sep 3, 2013 at 6:24 PM, Blake Smith blakesmi...@gmail.com wrote:
 
  Thanks for the feedback everyone. I've attached the patch that we are now
  running in production to service our hstore include queries. We rebuilt the
  index to account for the on-disk incompatibility. I've submitted the patch
  to commitfest here:
  https://commitfest.postgresql.org/action/patch_view?id=1203
 
  Michael: I don't have a formal benchmark, but several of our worst
  queries went from 10-20 seconds per query down to 50-400 ms. These are
  numbers we've seen when testing real production queries against our
  production dataset with real world access patterns.
  Oleg: Thanks for your thoughts on this change. As for the spgist / gin
  work you're doing, is there anything you need help with or are you still in
  the research phase? I'd love to help get something more robust merged into
  mainline if you think there's collaborative work to be done (even if it's
  only user testing).
 
  Thanks,
 
  Blake
 
 
 
 
  On Wed, Aug 28, 2013 at 12:40 PM, Andres Freund 
  and...@2ndquadrant.com wrote:
 
  On 2013-08-28 13:31:22 -0400, Bruce Momjian wrote:
   On Sun, Aug 25, 2013 at 10:11:50PM -0400, Tom Lane wrote:
Michael Paquier michael.paqu...@gmail.com writes:
 On Thu, Aug 22, 2013 at 11:55 PM, Blake Smith 
  blakesmi...@gmail.com wrote:
 The combined entry is used to support contains (@) queries,
  and the key
 only item is used to support key contains (?) queries. This
  change seems
 to help especially with hstore keys that have high cardinalities.
  Downsides
 of this change is that it requires an index rebuild, and the
  index will be
 larger in size.
   
 Index rebuild would be a problem only for minor releases,
   
That's completely false; people have expected major releases to be
on-disk-compatible for several years now.  While there probably will
  be
future releases in which we are willing to break storage
  compatibility,
a contrib module doesn't get to dictate that.
   
What might be a practical solution, especially if this isn't always a
win (which seems likely given the index-bloat risk), is to make
  hstore
offer two different GIN index opclasses, one that works the
  traditional
way and one that works this way.
   
Another thing that needs to be taken into account here is Oleg and
Teodor's in-progress work on extending hstore:
https://www.pgcon.org/2013/schedule/events/518.en.html
I'm not sure if this patch would conflict with that at all, but it
needs to be considered.
  
   We can disallow in-place upgrades for clusters that use certain contrib
   modules --- we have done that in the past.
 
  But that really cannot be acceptable for hstore. The probably most
  widely used extension there is.
 
  Greetings,
 
  Andres Freund
 
  --
   Andres Freund http://www.2ndQuadrant.com/
   PostgreSQL Development, 24x7 Support, Training & Services
 
 
 
 
  --
  Blake Smith
  http://blakesmith.me
  @blakesmith
 
 
  --
  Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-hackers
 
 
 
 
 
 -- 
 Blake Smith
 http://blakesmith.me
 @blakesmith


 
 -- 
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

