Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2014-02-27 Thread Simon Riggs
On 26 February 2014 07:32, Simon Riggs si...@2ndquadrant.com wrote:

 * Are you sure AlterConstraint is generally safe without an AEL? It
   should be safe to change whether something is deferred, but not
   necessarily whether it's deferrable?

 We change the lock levels for individual commands. This patch provides
 some initial settings and infrastructure.

 It is possible you are correct that changing the deferrability is not
 safe without an AEL. That was enabled for the first time in this
 release in a patch added by me after this patch was written. I will
 think on that and change, if required.

Thoughts...

It would be a problem to change a DEFERRABLE constraint into a NOT
DEFERRABLE constraint concurrently with a transaction that was
currently deferring its constraint checks. It would not be a problem
to go in the other direction, relaxing the constraint attributes.

The patch uses ShareRowExclusive for AlterConstraint, so no writes are
possible concurrently with the table being ALTERed, hence the problem
situation cannot arise.
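
To make that concrete, here is a sketch of the interleaving in question (table and constraint names are made up):

   -- session 1
   BEGIN;
   SET CONSTRAINTS ALL DEFERRED;
   INSERT INTO orders VALUES (1, 42);   -- FK check on orders_customer_fk is now deferred

   -- session 2, concurrently
   ALTER TABLE orders ALTER CONSTRAINT orders_customer_fk NOT DEFERRABLE;

   -- session 1
   COMMIT;                              -- the deferred check fires here

With ShareRowExclusiveLock, session 2's ALTER blocks behind session 1's
RowExclusiveLock until that transaction ends, so the deferred check and the
constraint change cannot overlap.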

So in my understanding, no issue is possible.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2014-02-27 Thread Christian Kruse
Hi,

On 27/02/14 08:35, Christian Kruse wrote:
 Hi Peter,

Sorry, Stephen of course; it was definitely too early.

Best regards,

-- 
 Christian Kruse   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2014-02-27 Thread Simon Riggs
On 26 February 2014 15:25, Andres Freund and...@2ndquadrant.com wrote:

   * Why does ChangeOwner need AEL?
 
  Ownership affects privileges, which includes SELECTs, hence AEL.
 
  So?

 That reply could be added to any post. Please explain your concern.

 I don't understand why that means it needs an AEL. After all,
 e.g. changes in table inheritance do *not* require an AEL. I think it's
 perfectly ok to not go for the minimally required locklevel for all
 subcommands, but then it should be commented as such and not with
 "change visible to SELECT" where other operations that do so as well
 require lower locklevels.

Those are two separate cases, with separate lock levels, so that
argument doesn't hold.

My understanding of the argument as to why Inheritance doesn't need
AEL is that adding/removing children is akin to inserting or deleting
rows from the parent.

Removing SELECT privilege while running a SELECT would be a different
matter.  This is all a matter of definition; we can make up any rules
we like. Doing so is IMHO a separate patch and not something to hold
up the main patch.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Fabien COELHO


Hello Tom.


I just wasted some time puzzling over strange results from pgbench.
I eventually realized that I'd been testing against the wrong server,
because rather than -p 65432 I'd typed -P 65432, thereby invoking
the recently added --progress option.  pgbench has no way to know that
that isn't what I meant; the fact that both switches take integer
arguments doesn't help.


ISTM that this is an unfortunate but unlikely mistake, as -p is used in 
all postgresql commands to signify the port number (psql, pg_dump, 
pg_basebackup, createdb, ...).



To fix this, I propose removing the -P short form and only allowing the
long --progress form.


I do not think that such a fix is really needed. This logic would lead to 
removing many short options from many commands in postgresql and 
elsewhere: -t/-T in pgbench, -s/-S in psql, and so on, and -l/-L, -r/-R, -s/-S 
in ls...


Moreover, I use -P more often than -p :-)

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Fabien COELHO


Hello Tom,


Meh.  A progress-reporting feature has use when the tool is working
towards completion of a clearly defined task.  In the case of pgbench,
if you told it to run for -T 60 seconds rather than -T 10 seconds,
that's probably because you don't trust a 10-second average to be
sufficiently reproducible.


The motivations for the progress option are:

(1) to check (rather than blindly trust) the performance stability, especially 
as the warm-up time can be very long. See for instance my blog post:


http://blog.coelho.net/database/2013/08/14/postgresql-warmup/

a scale-100 read-only pgbench run on a standard HDD requires 18 minutes 
to reach the performance steady state, and the performance is multiplied 
by 120 along the way, mostly in the last 2 minutes. In my experience, 10- 
and 60-second runs are equally ridiculously short for real benchmarks. 
When I am running a bench for 30 minutes, I like to have some output 
before the end of the command to know what is going on.


(2) when reporting performance figures, benchmark rules usually require 
that the detailed performance over the whole run is also reported, not 
just the final average, so as to rule out warm-up or other unexpected 
and transient effects.


(3) another use case for the option is to run with --rate (to target some 
tps you expect on your system) and then to run other commands in parallel 
(say pg_dump, pg_basebackup...) to check the impact they have on performance.


I do agree that having a report every second on a 10-second run is not very 
useful, but that is not the use case.


So I'm not real sure that reporting averages over shorter intervals is 
all that useful; especially not if it takes cycles out of pgbench, which 
itself is often a bottleneck.


If you do not ask for it, it does not harm the performance significantly.


I could see some value in a feature that computed shorter-interval TPS
averages and then did some further arithmetic, like measuring the standard
deviation of the shorter-interval averages to assess how much noise there
will be in the full-run average.


I do not understand. pgbench -P does report the standard deviation as 
well as the client side latency. Without this option pgbench is a black 
box.


But that's not what this does, and if it did do that, reporting 
progress would not be what I'd see as its main purpose.


This is for benchmarking. It is really reporting progress towards the 
performance steady state, not progress towards task completion.

Maybe a better name could have been thought of.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2014-02-27 Thread Simon Riggs
On 26 February 2014 15:25, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-02-26 15:15:00 +, Simon Riggs wrote:
 On 26 February 2014 13:38, Andres Freund and...@2ndquadrant.com wrote:
  Hi,
 
  On 2014-02-26 07:32:45 +, Simon Riggs wrote:
   * This definitely should include isolationtester tests actually
 performing concurrent ALTER TABLEs. All that's currently there is
 tests that the locklevel isn't too high, but not that it actually 
   works.
 
  There is no concurrent behaviour here, hence no code that would be
  exercised by concurrent tests.
 
  Huh? There's most definitely new concurrent behaviour. Previously no
  other backends could have a relation open (and locked) while it got
  altered (which then sends out relcache invalidations). That's something
  that should be tested.

 It has been. High volume concurrent testing has been performed, per
 Tom's original discussion upthread, but that's not part of the test
 suite.

 Yea, that's not what I am looking for.

 For other tests I have no guide as to how to write a set of automated
 regression tests. Anything could cause a failure, so I'd need to write
 an infinite set of tests to prove there is no bug *somewhere*. How
 many tests are required? 0, 1, 3, 30?

 I think some isolationtester tests for the most important changes in
 lock levels are appropriate. Say, create a PRIMARY KEY, DROP INHERIT,
  ...  while a query is in progress in another session.

OK, I'll work on some tests.
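
As a rough sketch of the kind of interleaving such a test would pin down
(plain SQL here rather than an isolationtester spec; table name made up):

   -- session 1
   BEGIN;
   SELECT count(*) FROM accounts;   -- takes AccessShareLock and opens the rel

   -- session 2
   ALTER TABLE accounts ADD PRIMARY KEY (id);

   -- session 1
   SELECT count(*) FROM accounts;   -- re-reads after any relcache invalidation
   COMMIT;

Whether session 2 blocks, and what session 1 sees afterwards, is exactly what
the isolation output would record.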

v18 attached, with v19 coming soon

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


reduce_lock_levels.v18.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Wed, Feb 26, 2014 at 7:42 PM, Andrew Dunstan and...@dunslane.net wrote:
 The jsonb set will get larger as time goes on. I don't think either of you
 are thinking very clearly about how we would do this. Extensions can't call
 each other's code. So the whole notion we have here of sharing the tree-ish
 data representation and a lot of the C API would go out the window, unless
 you want to shoehorn jsonb into hstore. Frankly, we'll look silly with json
 as a core type and the more capable jsonb not.

When are you going to add more jsonb functions? ISTM that you have a
bunch of new ones right here (i.e. the hstore functions and
operators). Why not add those ones right now?

I don't understand why you'd consider it to be a matter of shoehorning
jsonb into hstore (and yes, that is what I was suggesting). jsonb is a
type with an implicit cast to hstore that is binary coercible both
ways. Oleg and Teodor had at one point considered having the output
format controlled entirely by a GUC, so there'd be no new jsonb type
at all. While I'm not asserting that you should definitely not
structure things this way (i.e. have substantial in-core changes), it
isn't obvious to me why this can't work as an extension, especially if
doing everything as part of an extension helps the implementation.
Please point out anything that I may have missed.
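
(For what it's worth, a pair of binary-coercible casts of that sort would be
declared roughly as follows; this is a sketch of the idea, not necessarily
what either patch ships:)

   CREATE CAST (jsonb AS hstore) WITHOUT FUNCTION AS IMPLICIT;
   CREATE CAST (hstore AS jsonb) WITHOUT FUNCTION;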

Speaking from a Heroku perspective, I know the company places a huge
value on jsonb. However, I believe it matters not a whit to adoption
whether or not it's an extension, except insofar as having it be an
extension helps the implementation effort (that is, that it helps
there be something to adopt), or hinders that effort.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Kyotaro HORIGUCHI
Thank you for your work polishing this patch.

At a glance, I have no obvious objection to this new patch.

I agree this can be committed if you think it is ready, except
for the issue newly revealed by this patch. Though could you let
me have some more time to examine it by myself and fully
understand the changes you made?


At Wed, 26 Feb 2014 23:29:35 -0500, Noah Misch wrote
  Finally, the result we got has proved not to be a problem.
 
 The first union branch should return two rows, and the second union branch
 should return one row, for a total of three.  In any case, I see later in your
 mail that you fixed this.

I see. I had confirmed that *after* fixing the duplicate rows, and then
misunderstood your point.

 The larger point is that this patch has no business
 changing the output rows of *any* query.  Its goal is to pick a more-efficient
 plan for arriving at the same answer.  If there's a bug in our current output
 for some query, that's a separate discussion from the topic of this thread.

Totally agreed.

  Second, about the crash in this sql,
  
   select parent from parent union all select parent from parent;
  
  The whole-row reference (varattno = 0) is ignored, which makes the index
  into the child's translated_vars list invalid (-1).  I completely
  overlooked it, despite having referred to it myself before.

  Unfortunately ConvertRowtypeExpr currently prevents appendrels from being
  removed, and such a case doesn't seem to arise very often, so I decided
  to exclude the case.
 
  +   /*
  +    * Appendrels which does whole-row-var conversion cannot be
  +    * removed. ConvertRowtypeExpr can convert only RELs which can
  +    * be referred to using relid.
 
 We have parent and child relids, so it is not clear to me how imposing that
 restriction helps us.  I replaced transvars_merge_mutator() with a call to
 adjust_appendrel_attrs().  This reduces code duplication, and it handles
 whole-row references.

Thank you, I didn't grasp them as a whole..

 (I don't think the other nodes adjust_appendrel_attrs()
 can handle matter to this caller.  translated_vars will never contain join
 tree nodes, and I doubt it could contain a PlaceHolderVar with phrels
 requiring translation.)
 
 The central design question for this patch seems to be how to represent, in
 the range table, the fact that we expanded an inheritance parent despite its
 children ending up as appendrel children of a freestanding UNION ALL.  The v6
 patch detaches the original RTE from the join tree and clears its inh flag.
 This breaks sepgsql_dml_privileges(), which looks for RTE_RELATION with inh =
 true and consults selinux concerning every child table.

Hmm. That had escaped my notice. I need a bit more time to
understand this.

 We could certainly
 change the way sepgsql discovers inheritance hierarchies, but nothing clearly
 better comes to mind.  I selected the approach of preserving the RTE's inh
 flag, removing the AppendRelInfo connecting that RTE to its enclosing UNION
 ALL, and creating no AppendRelInfo children for that RTE.

OK, I did it that way because I was not certain whether removing
detached AppendRelInfos was safe. Your approach is far smarter than
mine.

 An alternative was
 to introduce a new RTE flag, say append.  An inheritance parent under a
 UNION ALL would have append = false, inh = true; other inheritance parent RTEs
 would have append = true, inh = true; an RTE for UNION ALL itself would have
 append = true, inh = false.

I think that kind of complexity is not necessary for this issue.

   The attached two patches are rebased onto current 9.4dev HEAD, and
   make check at the topmost directory and in src/test/isolation passes
   without error. One bug was found and fixed along the way: an
   assertion failure caused by a probably unexpected type conversion
   introduced by collapse_appendrels(), which leads to an implicit
   whole-row cast from some valid reltype to the invalid reltype (0)
   in adjust_appendrel_attrs_mutator().
  
  What query demonstrated this bug in the previous type 2/3 patches?

I would still like to know the answer to the above question.
  
  I remembered after some struggles. It failed during 'make check',
  on the following query in inherit.sql.
 
  [details]
 
 Interesting.  Today, the parent_reltype and child_reltype fields of
 AppendRelInfo are either both valid or both invalid.  Your v6 patch allowed us
 to have a valid child_reltype with an invalid parent_reltype.  At the moment,
 we can't benefit from just one valid reltype.  I restored the old invariant.

Ok, I understood it.

 If the attached patch version looks reasonable, I will commit it.
 
 
 Incidentally, I tried adding an assertion that append_rel_list does not show
 one appendrel as a direct child of another.  The following query, off-topic
 for the patch at hand, triggered that assertion:
 
 SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0
 UNION ALL
 SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0;

[HACKERS] What behavior is in this loop?

2014-02-27 Thread KONDO Mitsumasa
Hi,

I found an interesting for-and-while loop in WaitForWALToBecomeAvailable() in
xlog.c. Can you explain this behavior?

for (;;)
{
~
} while (StandbyMode)

I confirmed that this code compiles without problems in gcc :)

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for inet datatypes

2014-02-27 Thread Emre Hasegeli
2014-02-24 17:55, Bruce Momjian br...@momjian.us:

 pg_upgrade has _never_ modified the old cluster, and I am hesitant to do
 that now.  Can we make the changes in the new cluster, or in pg_dump
 when in binary upgrade mode?

It should be possible to mark the new operator class as not the default
in the new cluster, before the restore. After the restore, pg_upgrade can
upgrade the btree_gist extension and set the operator class back as the
default. pg_upgrade suggests re-initdb'ing the new cluster if it fails,
anyway. Do you think that is a better solution? I think it would be more
complicated to do in pg_dump when in binary upgrade mode.
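
The demote/promote step could be as simple as a catalog tweak along these
lines (a sketch only; the opclass name here is a placeholder):

   -- in the new cluster, before the restore
   UPDATE pg_catalog.pg_opclass SET opcdefault = false
    WHERE opcname = 'inet_ops';

   -- after the restore, once btree_gist has been upgraded
   UPDATE pg_catalog.pg_opclass SET opcdefault = true
    WHERE opcname = 'inet_ops';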


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: show relation and tuple infos of a lock to acquire

2014-02-27 Thread Christian Kruse
Hi,

On 25/02/14 16:11, Robert Haas wrote:
 On Mon, Feb 24, 2014 at 10:13 AM, Christian Kruse
 christ...@2ndquadrant.com wrote:
  To be honest, I don't like the idea of setting up this error context
  only for log_lock_wait messages. This sounds unnecessarily complex to me,
  and I think the few cases where this message doesn't add any value (and
  thus is useless) don't justify such complexity.
 
 Reading this over, I'm not sure I understand why this is a CONTEXT at
 all and not just a DETAIL for the particular error message that it's
 supposed to be decorating.  Generally CONTEXT should be used for
 information that will be relevant to all errors in a given code path,
 and DETAIL for extra information specific to a particular error.

Because there is more than one scenario where this context is useful,
not just log_lock_wait messages.

 If we're going to stick with CONTEXT, we could rephrase it like this:
 
 CONTEXT: while attempting to lock tuple (1,2) in relation with OID 3456
 
 or when the relation name is known:
 
 CONTEXT: while attempting to lock tuple (1,2) in relation public.foo

Accepted. Patch attached.

  Displaying the whole tuple doesn't seem to be the right way
  to get debug information, especially since in this case we are
  already displaying the tuple offset (ctid), which uniquely
  identifies a tuple. It seems to me that it is sufficient to display
  the tuple's unique value, if present.
  I understand that there is no clear issue here, so maybe if others also
  share their opinion it will be quite easy to make a call.
 
 I wouldn't be inclined to dump the whole tuple under any
 circumstances.  That could be a lot more data than what you want
 dumped in your log.  The PK could already be somewhat unreasonably
 large, but the whole tuple could be a lot more unreasonably large.

Well, as I already stated: we don't. I copied the behavior we use in
CHECK constraints (ExecBuildSlotValueDescription()).

Best regards,

-- 
 Christian Kruse   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index a771ccb..a04414e 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -107,6 +107,8 @@ static TransactionId MultiXactIdGetUpdateXid(TransactionId xmax,
 		uint16 t_infomask);
 static void MultiXactIdWait(MultiXactId multi, MultiXactStatus status,
 int *remaining, uint16 infomask);
+static void MultiXactIdWaitWithErrorContext(Relation rel, HeapTuple tup, MultiXactId multi,
+	MultiXactStatus status, int *remaining, uint16 infomask);
 static bool ConditionalMultiXactIdWait(MultiXactId multi,
 		   MultiXactStatus status, int *remaining,
 		   uint16 infomask);
@@ -2703,8 +2705,8 @@ l1:
 		if (infomask & HEAP_XMAX_IS_MULTI)
 		{
 			/* wait for multixact */
-			MultiXactIdWait((MultiXactId) xwait, MultiXactStatusUpdate,
-			NULL, infomask);
+			MultiXactIdWaitWithErrorContext(relation, tp, (MultiXactId) xwait,
+	  MultiXactStatusUpdate, NULL, infomask);
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
 			/*
@@ -2730,7 +2732,7 @@ l1:
 		else
 		{
 			/* wait for regular transaction to end */
-			XactLockTableWait(xwait);
+			XactLockTableWaitWithInfo(relation, tp, xwait);
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
 			/*
@@ -3255,8 +3257,8 @@ l2:
 			int			remain;
 
 			/* wait for multixact */
-			MultiXactIdWait((MultiXactId) xwait, mxact_status, &remain,
-			infomask);
+			MultiXactIdWaitWithErrorContext(relation, oldtup,
+	   (MultiXactId) xwait, mxact_status, &remain, infomask);
 			LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
 			/*
@@ -3330,7 +3332,7 @@ l2:
 			else
 			{
 /* wait for regular transaction to end */
-XactLockTableWait(xwait);
+XactLockTableWaitWithInfo(relation, oldtup, xwait);
 LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
 /*
@@ -4398,7 +4400,8 @@ l3:
 		RelationGetRelationName(relation;
 }
 else
-	MultiXactIdWait((MultiXactId) xwait, status, NULL, infomask);
+	MultiXactIdWaitWithErrorContext(relation, tuple,
+(MultiXactId) xwait, status, NULL, infomask);
 
 /* if there are updates, follow the update chain */
 if (follow_updates &&
@@ -4453,7 +4456,7 @@ l3:
 		RelationGetRelationName(relation;
 }
 else
-	XactLockTableWait(xwait);
+	XactLockTableWaitWithInfo(relation, tuple, xwait);
 
 /* if there are updates, follow the update chain */
 if (follow_updates &&
@@ -5140,7 +5143,7 @@ l4:
 	if (needwait)
 	{
 		LockBuffer(buf, BUFFER_LOCK_UNLOCK);
-		XactLockTableWait(members[i].xid);
+		XactLockTableWaitWithInfo(rel, mytup, members[i].xid);
 		pfree(members);
 		goto l4;
 	}
@@ -5200,7 +5203,7 @@ l4:
 if (needwait)
 {
 	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
-	XactLockTableWait(rawxmax);
+	XactLockTableWaitWithInfo(rel, mytup, 

Re: [HACKERS] extension_control_path

2014-02-27 Thread Dimitri Fontaine
Stephen Frost sfr...@snowman.net writes:
 I'm a bit confused here- above you '+1'd packagers/sysadmins, but then
 here you are saying that hackers will be setting it?  Also, it strikes

Well I was then talking about how it works today, as in PostgreSQL 9.1,
9.2 and 9.3, and most certainly 9.4 as we're not trying to change
anything on that front.

 me as a terrible idea to ship absolute object file names (which I assume
 you mean to include path, given you say 'absolute') unless you're an

I agree, that's why my current design also needs cooperation on the
backend side of things, to implement what you're calling here relocation
of the files. Now that I read your comments, we might be able to
implement something really simple and have something in core…

Please see attached patch, tested and documented.

 doc/src/sgml/extend.sgml |  7 ++
 src/backend/commands/extension.c | 39 +++-
 2 files changed, 45 insertions(+), 1 deletion(-)

 Presumably, that's what you'd want to set both the control path and the
 dynamic extension path to- a directory of control files and a directory
 of .so's, or perhaps one combined directory of both, for the simplest
 setup.  If you're working with a directory-per-package, then wouldn't
 you want to have everything for that package in that package's directory
 and then only have to add all those directories to one place in
 postgresql.conf?

That's a fair-enough observation, that targets a use case where you're
using the feature without the extra software. I also note that it could
simplify said software a little bit.

What about allowing a control file like this:

   # hstore extension
   comment = 'data type for storing sets of (key, value) pairs'
   default_version = '1.3'
   directory = 'local/hstore-new'
   module_pathname = '$directory/hstore'
   relocatable = true

The way directory is currently parsed, relative pathnames are allowed and
are resolved relative to SHAREDIR, which is where we find the extension/
main directory, where extension control files currently live.

With such a feature, we would allow module_pathname to reuse the same
location as where we're going to find auxilliary control files and
scripts.

 My questions about this are mostly covered above, but I did want to get
 clarification- is this going to be on a per-system basis, as in, when
 the package is installed through your tool, it's going to go figure out
 where the package got installed to and rewrite the control file?  Seems
 like a lot of work if you're going to have to add that directory to the
 postgresql.conf path for the control file anyway to then *also* have to
 hack up the control file itself.

Given module_pathname = '$directory/xxx' the extension is now fully
relocatable and the tool doesn't need to put in any other effort than
hacking the control file *at build time*.

See the attached patch that implements the idea.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

*** a/doc/src/sgml/extend.sgml
--- b/doc/src/sgml/extend.sgml
***
*** 462,467 
--- 462,474 
  FUNCTION/ commands for C-language functions, so that the script
  files do not need to hard-wire the name of the shared library.
     </para>
+    <para>
+     The macro <literal>$directory</literal> is supported when found at the
+     very start of the value of this parameter. When
+     used, <literal>$directory</literal> is then substituted with
+     the <literal>directory</literal> control parameter value by
+     PostgreSQL.
+    </para>
    </listitem>
   </varlistentry>
  
*** a/src/backend/commands/extension.c
--- b/src/backend/commands/extension.c
***
*** 432,437  get_extension_script_filename(ExtensionControlFile *control,
--- 432,470 
  	return result;
  }
  
+ /*
+  * Substitute for any macros appearing in the given string.
+  * Result is always freshly palloc'd.
+  */
+ static char *
+ substitute_directory_macro(const char *directory, const char *module_pathname)
+ {
+ 	const char *sep_ptr;
+ 
+ 	AssertArg(module_pathname != NULL);
+ 
+ 	/* Currently, we only recognize $directory at the start of the string */
+ 	if (module_pathname[0] != '$')
+ 		return pstrdup(module_pathname);
+ 
+ 	if ((sep_ptr = first_dir_separator(module_pathname)) == NULL)
+ 		sep_ptr = module_pathname + strlen(module_pathname);
+ 
+ 	/* Accept $libdir, just return module_pathname as is then */
+ 	if (strlen("$libdir") == sep_ptr - module_pathname &&
+ 		strncmp(module_pathname, "$libdir", strlen("$libdir")) == 0)
+ 		return pstrdup(module_pathname);
+ 
+ 	if (strlen("$directory") != sep_ptr - module_pathname ||
+ 		strncmp(module_pathname, "$directory", strlen("$directory")) != 0)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INVALID_NAME),
+ 				 errmsg("invalid macro module_pathname in: %s",
+ 						module_pathname)));
+ 
+ 	return psprintf("%s%s", directory, sep_ptr);
+ }
+ 
  
  /*
   * Parse contents of 

Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Hannu Krosing
On 02/26/2014 09:17 AM, Christophe Pettus wrote:
 On Feb 25, 2014, at 1:57 PM, Hannu Krosing ha...@2ndquadrant.com wrote:

 It is not in any specs, but nevertheless all major imlementations do it and
 some code depends on it.
 I have no doubt that some code depends on it, but all major implementations 
 is 
 too strong a statement.  BSON, in particular, does not have stable field 
 order.
First, BSON is not JSON :)

And I do not really see how they don't preserve the field order - the
structure is pretty similar to tnetstrings, just a binary concatenation of
datums with a bit more types.

It is possible that some functions on BSON do not preserve it for some
reason ...


Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] What behavior is in this loop?

2014-02-27 Thread Heikki Linnakangas

On 02/27/2014 12:38 PM, KONDO Mitsumasa wrote:

I found an interesting for-and-while loop in WaitForWALToBecomeAvailable() in
xlog.c. Can you explain this behavior?

for (;;)
{
~
} while (StandbyMode)

I confirmed that this code compiles without problems in gcc :)


Oh wow :-). That's clearly a thinko, although harmless in this case. 
Looking at the git history, I made that mistake in commit abf5c5c9a. 
Before that, there was no while.


That's easier to understand with some extra formatting. That's two 
loops, like this:


/* loop 1 */
for (;;)
{
  ...
}

/* loop2 */
while(StandbyMode);

The second loop is obviously completely pointless. Thankfully, there 
are no breaks inside the first loop (the ones within the 
switch-statements don't count), so the endless while-loop is never reached.


I'll go fix that... Thanks for the report!

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Robert Haas
On Wed, Feb 26, 2014 at 10:42 PM, Andrew Dunstan and...@dunslane.net wrote:
 Why can't this whole thing be shipped as an extension?   It might well
 be more convenient to have the whole thing packaged as an extension
 than to have parts of it in core and parts of it not in core.

 That's a good question. I think having everything in contrib would
 make it easier to resolve the disconnect between jsonb and hstore. As
 things stand, there is a parallel set of functions and operators for
 hstore and jsonb, with the former set much larger than the latter. I'm
 not terribly happy with that.

 The jsonb set will get larger as time goes on. I don't think either of you
 are thinking very clearly about how we would do this. Extensions can't call
 each other's code. So the whole notion we have here of sharing the tree-ish
 data representation and a lot of the C API would go out the window, unless
 you want to shoehorn jsonb into hstore. Frankly, we'll look silly with json
 as a core type and the more capable jsonb not.

It's not very clear to me why we think it's a good idea to share the
tree-ish representation between json and hstore.  In deference to your
comments that this has been very publicly discussed over quite a
considerable period, I went back and tried to find the email in which
the drivers for that design decision were laid out.  I can find no
such email; in fact, the first actual nested hstore patch I can find
is from January 13th and the first jsonb patch I can find is from
February 9th.  Neither contains anything much more than the patch
itself, without anything at all describing the design, let alone
explaining why it was chosen.  And although there are earlier mentions
of both nested hstore and jsonb, there's nothing that says, OK, this
is why we're doing it that way.  Or if there is, I couldn't find it.

So I tried to tease it out from looking at the patches.  As nearly as
I can tell, the reason for making jsonb use hstore's binary format is
because then we can build indexes on jsonbfield::hstore, and the
actual type conversion will be a no-op; and the reason for upgrading
hstore to allow nested keys is so that jsonb can map onto it.  So from
where I sit this whole thing looks like a very complicated exercise to
try to reuse parts of the existing hstore opclasses until such time as
jsonb has opclasses of its own.  But if, as Josh postulates, those
opclasses are going to materialize within a matter of months, then the
whole need for these things to share the same binary format is going
to go away before 9.4 is even out the door.  That may not be a good
enough reason to tie these things together inextricably.  Once jsonb
has its own opclasses, it can ship as a standalone data type without
needing to depend on hstore or anything else.

I may well be missing some other benefit here, so please feel free to
enlighten me.

 Not to mention that if at this stage people suddenly decide we should change
 direction on a course that has been very publicly discussed over quite a
 considerable period, and for which Teodor and I and others have put in a
 great deal of work, I at least am going to be extremely annoyed (note the
 characteristic Australian used of massive understatement.)

Unless I've missed some emails sent earlier than the dates noted
above, which is possible, the comments by myself and others on this
thread ought to be regarded as timely review.  The basic problem here
is that this patch wasn't timely submitted, still doesn't seem to be
very done, and it's getting rather late.  We therefore face the usual
problem of deciding whether to commit something that we might regret
later.  If jsonb turns out to be the wrong solution to the json problem,
will there be community support for adding a jsonc type next year?  I
bet not.  You may think this is most definitely the right direction to
go and you may even be right, but our ability to maneuver and back out
of things goes down to nearly zero once a release goes out the door,
so I think it's entirely appropriate to question whether we're
charting the best possible course.  But I certainly understand the
annoyance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dump pageinspect to 1.2: page_header with pg_lsn datatype

2014-02-27 Thread Michael Paquier
On Wed, Feb 26, 2014 at 5:45 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas escribió:
 On Mon, Feb 24, 2014 at 9:34 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  Yeah, erroring out seems good enough.  Particularly if you add a hint
  saying please upgrade the extension.

 In past instances where this has come up, we have actually made the
 .so backward-compatible.  See pg_stat_statements in particular.  I'd
 prefer to keep to that precedent here.

 But pg_stat_statement is a user tool which is expected to have lots of
 use, and backwards compatibility concerns because of people who might
 have written tools on top of it.  Not so with pageinspect.  I don't
 think we need to put in the same amount of effort.  (Even though,
 really, it's probably not all that difficult to support both cases.  I
 just don't see the point.)
Actually, after a little bit of hacking I noticed that supporting both is
about as complicated as supporting only pg_lsn... Here is, for example, how
I can get page_header to work across versions:
-   snprintf(lsnchar, sizeof(lsnchar), "%X/%X",
-            (uint32) (lsn >> 32), (uint32) lsn);

-   values[0] = CStringGetTextDatum(lsnchar);
+   /*
+    * Do some version-related checks. pageinspect >= 1.2 uses pg_lsn
+    * instead of text when using this function for the LSN field.
+    */
+   if (tupdesc->attrs[0]->atttypid == TEXTOID)
+   {
+       char        lsnchar[64];
+       snprintf(lsnchar, sizeof(lsnchar), "%X/%X",
+                (uint32) (lsn >> 32), (uint32) lsn);
+       values[0] = CStringGetTextDatum(lsnchar);
+   }
+   else
+       values[0] = LSNGetDatum(lsn);
In this case an upgraded 9.4 server using pageinspect 1.1 or older
simply goes through the text datatype path... You can find that in the
patch attached.
Regards,
-- 
Michael
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
index 0e267eb..ee78cb2 100644
--- a/contrib/pageinspect/Makefile
+++ b/contrib/pageinspect/Makefile
@@ -4,8 +4,8 @@ MODULE_big	= pageinspect
 OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
 
 EXTENSION = pageinspect
-DATA = pageinspect--1.1.sql pageinspect--1.0--1.1.sql \
-	pageinspect--unpackaged--1.0.sql
+DATA = pageinspect--1.2.sql pageinspect--1.0--1.1.sql \
+	pageinspect--1.1--1.2.sql pageinspect--unpackaged--1.0.sql
 
 ifdef USE_PGXS
 PG_CONFIG = pg_config
diff --git a/contrib/pageinspect/pageinspect--1.1--1.2.sql b/contrib/pageinspect/pageinspect--1.1--1.2.sql
new file mode 100644
index 000..5e23ca4
--- /dev/null
+++ b/contrib/pageinspect/pageinspect--1.1--1.2.sql
@@ -0,0 +1,18 @@
+/* contrib/pageinspect/pageinspect--1.1--1.2.sql */
+
+-- complain if script is sourced in psql, rather than via ALTER EXTENSION
+\echo Use ALTER EXTENSION pageinspect UPDATE TO 1.2 to load this file. \quit
+
+DROP FUNCTION page_header(bytea);
+CREATE FUNCTION page_header(IN page bytea,
+OUT lsn pg_lsn,
+OUT checksum smallint,
+OUT flags smallint,
+OUT lower smallint,
+OUT upper smallint,
+OUT special smallint,
+OUT pagesize smallint,
+OUT version smallint,
+OUT prune_xid xid)
+AS 'MODULE_PATHNAME', 'page_header'
+LANGUAGE C STRICT;
diff --git a/contrib/pageinspect/pageinspect--1.1.sql b/contrib/pageinspect/pageinspect--1.1.sql
deleted file mode 100644
index 22a47d5..000
--- a/contrib/pageinspect/pageinspect--1.1.sql
+++ /dev/null
@@ -1,107 +0,0 @@
-/* contrib/pageinspect/pageinspect--1.1.sql */
-
--- complain if script is sourced in psql, rather than via CREATE EXTENSION
-\echo Use CREATE EXTENSION pageinspect to load this file. \quit
-
---
--- get_raw_page()
---
-CREATE FUNCTION get_raw_page(text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION get_raw_page(text, text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page_fork'
-LANGUAGE C STRICT;
-
---
--- page_header()
---
-CREATE FUNCTION page_header(IN page bytea,
-OUT lsn text,
-OUT checksum smallint,
-OUT flags smallint,
-OUT lower smallint,
-OUT upper smallint,
-OUT special smallint,
-OUT pagesize smallint,
-OUT version smallint,
-OUT prune_xid xid)
-AS 'MODULE_PATHNAME', 'page_header'
-LANGUAGE C STRICT;
-
---
--- heap_page_items()
---
-CREATE FUNCTION heap_page_items(IN page bytea,
-OUT lp smallint,
-OUT lp_off smallint,
-OUT lp_flags smallint,
-OUT lp_len smallint,
-OUT t_xmin xid,
-OUT t_xmax xid,
-OUT t_field3 int4,
-OUT t_ctid tid,
-OUT t_infomask2 integer,
-OUT t_infomask integer,
-OUT t_hoff smallint,
-OUT t_bits text,
-OUT t_oid oid)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'heap_page_items'
-LANGUAGE C STRICT;
-
---
--- bt_metap()
---
-CREATE FUNCTION bt_metap(IN relname text,
-OUT magic int4,
-OUT version int4,
-OUT root int4,
-OUT level int4,
-OUT fastroot int4,
-OUT fastlevel int4)
-AS 'MODULE_PATHNAME', 

[HACKERS] Defining macro LSNOID for pg_lsn in pg_type.h

2014-02-27 Thread Michael Paquier
Hi all,

When working on the pg_lsn datatype, we did not actually create a
#define macro for its OID in pg_type.h, even though one would be useful
for extension developers. The simple patch attached corrects that by
naming the macro LSNOID.
Regards,
-- 
Michael
diff --git a/src/include/catalog/pg_type.h b/src/include/catalog/pg_type.h
index 8ea9ceb..4c060f5 100644
--- a/src/include/catalog/pg_type.h
+++ b/src/include/catalog/pg_type.h
@@ -580,6 +580,7 @@ DATA(insert OID = 2951 ( _uuid			PGNSP PGUID -1 f b A f t \054 0 2950 0 array_in
 /* pg_lsn */
 DATA(insert OID = 3220 ( pg_lsn			PGNSP PGUID 8 FLOAT8PASSBYVAL b U t t \054 0 0 3221 pg_lsn_in pg_lsn_out pg_lsn_recv pg_lsn_send - - - d p f 0 -1 0 0 _null_ _null_ _null_ ));
 DESCR(PostgreSQL LSN datatype);
+#define LSNOID			3220
 DATA(insert OID = 3221 ( _pg_lsn			PGNSP PGUID -1 f b A f t \054 0 3220 0 array_in array_out array_recv array_send - - array_typanalyze d x f 0 -1 0 0 _null_ _null_ _null_ ));
 
 /* text search */

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Alvaro Herrera
Fabien COELHO wrote:

 I just wasted some time puzzling over strange results from pgbench.
 I eventually realized that I'd been testing against the wrong server,
 because rather than -p 65432 I'd typed -P 65432, thereby invoking
 the recently added --progress option.  pgbench has no way to know that
 that isn't what I meant; the fact that both switches take integer
 arguments doesn't help.
 
 ISTM that this is an unfortunate but unlikely mistake, as -p is
 used in all postgresql commands to signify the port number (psql,
 pg_dump, pg_basebackup, createdb, ...).

Plus other tools already use -P for progress, such as rsync.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another possible corruption bug in 9.3.2 or possibly a known MultiXact problem?

2014-02-27 Thread Alvaro Herrera
Andres Freund wrote:
 On 2014-02-26 18:18:05 -0300, Alvaro Herrera wrote:
  Andres Freund wrote:
  
   static void
   heap_xlog_lock(XLogRecPtr lsn, XLogRecord *record)
   {
   ...
   HeapTupleHeaderClearHotUpdated(htup);
    HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
   HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
   /* Make sure there is no forward chain link in t_ctid */
    htup->t_ctid = xlrec->target.tid;
   ...
   }
  
  I think the fix is to reset HOT_UPDATED and t_ctid only if the infomask
  says the tuple is LOCKED_ONLY, per the attached patch.
 
 Looks good to me.

Thanks, pushed.

Greg, Peter, if you could update your standbys to the current HEAD of
REL9_3_STABLE for the affected apps and verify the problem no longer
shows up in a reasonable timeframe, it would be great.  (I'm assuming
you saw this happen repeatedly.)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 Fabien COELHO wrote:
 I just wasted some time puzzling over strange results from pgbench.
 I eventually realized that I'd been testing against the wrong server,
 because rather than -p 65432 I'd typed -P 65432, thereby invoking
 the recently added --progress option.  pgbench has no way to know that
 that isn't what I meant; the fact that both switches take integer
 arguments doesn't help.

 ISTM that this is an unfortunate but unlikely mistake, as -p is
 used in all postgresql commands to signify the port number (psql,
 pg_dump, pg_basebackup, createdb, ...).

 Plus other tools already use -P for progress, such as rsync.

Yeah, but they don't make -P take an integer argument.  It's that
little frammish that makes this problem significant.

I don't object to having the --progress switch.  I just think we
could live without a short form for it.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Noah Misch
On Thu, Feb 27, 2014 at 07:30:57PM +0900, Kyotaro HORIGUCHI wrote:
 Thank you for the labor for polishing this patch.
 
 I have no obvious objection at a glance on this new patch.
 
 I agree to commit this if you this is pertinent to commit except
 for the issue newly revealed by this patch. Though could you let
 me have some more time to examine this by myself and fully
 understand the changes what you made?

Yes; waiting several days is no problem.

 At Wed, 26 Feb 2014 23:29:35 -0500, Noah Misch wrote
  An alternative was
  to introduce a new RTE flag, say append.  An inheritance parent under a
  UNION ALL would have append = false, inh = true; other inheritance parent 
  RTEs
  would have append = true, inh = true; an RTE for UNION ALL itself would have
  append = true, inh = false.
 
 I think that kind of complexity is not necessary for this issue.

Agreed.

  Incidentally, I tried adding an assertion that append_rel_list does not show
  one appendrel as a direct child of another.  The following query, off-topic
  for the patch at hand, triggered that assertion:
  
  SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0
  UNION ALL
  SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0;
 
 This seems not to crash unless this patch is applied, but it doesn't
 seem to be a bug in itself.

To my knowledge, the query does not crash the server under any patch provided
on this thread.

 I think it should be cured along with
 this patch even if it is not an issue introduced by this patch.

That would require changing rather different code, probably something in the
vicinity of pull_up_subqueries().  I'll leave it for another patch.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Changeset Extraction v7.6.1

2014-02-27 Thread Andres Freund
On 2014-02-24 12:50:03 -0500, Robert Haas wrote:
 On Mon, Feb 24, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com wrote:
  On 2014-02-15 17:29:04 -0500, Robert Haas wrote:
  On Fri, Feb 14, 2014 at 4:55 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
 
  +   /*
  +* XXX: It's impolite to ignore our argument and keep decoding 
  until the
  +* current position.
  +*/
 
  Eh, what?
 
  So, the background here is that I was thinking of allowing to specify a
  limit for the number of returned rows. For the sql interface that sounds
  like a good idea. I am just not so sure anymore that allowing to specify
  a LSN as a limit is sufficient. Maybe simply allow to limit the number
  of changes and check every time a transaction has been replayed?
 
 The last idea there seems like pretty sound, but ...
 
  It's all trivial codewise, I am just wondering about the interface most
  users would want.
 
 ...I can't swear it meets this criterion.

So, it's now:
CREATE OR REPLACE FUNCTION pg_logical_slot_get_changes(
IN slotname name, IN upto_lsn pg_lsn, IN upto_nchanges int, VARIADIC 
options text[] DEFAULT '{}',
OUT location pg_lsn, OUT xid xid, OUT data text)
RETURNS SETOF RECORD
LANGUAGE INTERNAL
VOLATILE ROWS 1000 COST 1000
AS 'pg_logical_slot_get_changes';

A non-null upto_lsn allows limiting based on the LSN, and similarly with
upto_nchanges.
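
For example, to pull at most 100 changes from a slot regardless of how far it
could decode (slot name made up):

   SELECT location, xid, data
     FROM pg_logical_slot_get_changes('regression_slot', NULL, 100);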

Makes sense?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for inet datatypes

2014-02-27 Thread Florian Pflug
On Feb27, 2014, at 11:39 , Emre Hasegeli e...@hasegeli.com wrote:
 2014-02-24 17:55, Bruce Momjian br...@momjian.us:
 
 pg_upgrade has _never_ modified the old cluster, and I am hesitant to do
 that now.  Can we make the changes in the new cluster, or in pg_dump
 when in binary upgrade mode?
 
 It can be possible to update the new operator class in the new cluster
 as not default, before restore. After the restore, pg_upgrade can upgrade
 the btree_gist extension and reset the operator class as the default.
 pg_upgrade suggests to re-initdb the new cluster if it fails, anyway. Do
 you think it is a better solution? I think it will be more complicated
 to do in pg_dump when in binary upgrade mode.

Maybe I'm missing something, but isn't the gist of the problem here that
pg_dump won't explicitly state the operator class if it's the default?

If so, can't we just make pg_dump always spell out the operator class
explicitly? Then changing the default will never change the meaning of
database dumps, so upgraded clusters will simply continue to use the
old (now non-default) opclass, no?
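
(That is, instead of relying on the default, the dump would always spell it
out, along the lines of the following; text_pattern_ops is used here purely as
a familiar stand-in:)

   CREATE INDEX t_txt_idx ON public.t USING btree (txt text_pattern_ops);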

best regards,
Florian Pflug



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Changeset Extraction v7.6.1

2014-02-27 Thread Robert Haas
On Thu, Feb 27, 2014 at 11:06 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-02-24 12:50:03 -0500, Robert Haas wrote:
 On Mon, Feb 24, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2014-02-15 17:29:04 -0500, Robert Haas wrote:
  On Fri, Feb 14, 2014 at 4:55 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
 
  +   /*
  +* XXX: It's impolite to ignore our argument and keep decoding 
  until the
  +* current position.
  +*/
 
  Eh, what?
 
  So, the background here is that I was thinking of allowing to specify a
  limit for the number of returned rows. For the sql interface that sounds
  like a good idea. I am just not so sure anymore that allowing to specify
  a LSN as a limit is sufficient. Maybe simply allow to limit the number
   of changes and check every time a transaction has been replayed?

 The last idea there seems like pretty sound, but ...

  It's all trivial codewise, I am just wondering about the interface most
  users would want.

 ...I can't swear it meets this criterion.

 So, it's now:
 CREATE OR REPLACE FUNCTION pg_logical_slot_get_changes(
 IN slotname name, IN upto_lsn pg_lsn, IN upto_nchanges int, VARIADIC 
 options text[] DEFAULT '{}',
 OUT location pg_lsn, OUT xid xid, OUT data text)
 RETURNS SETOF RECORD
 LANGUAGE INTERNAL
 VOLATILE ROWS 1000 COST 1000
 AS 'pg_logical_slot_get_changes';

 if nonnull upto_lsn allows limiting based on the lsn, similar with
 upto_nchanges.

 Makes sense?

Time will tell, but it seems plausible to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Changeset Extraction v7.8

2014-02-27 Thread Andres Freund
Hi,

Attached you can find version 7.8 of this patchset. Changes since 7.7
include:
* Signature changes of the SQL changeset SRFs to support limits based on
  LSN and/or number of returned rows (pg_logical_slot_get_changes() et
  al) and to make parameter passing optional (by adding a DEFAULT '{}'
  to the variadic argument)
* heap_page_prune_opt() now decides itself which horizon to use,
  removing a good amount of duplicated logic
* GetOldestXmin() now has a Relation parameter that can be NULL instead
  of the former allDbs (existing in master) and systable (just this
  branch) parameters, also removing code duplication.
* pg_create_logical_replication_slot() is now defined in slotfuncs.c
* a fair number of cosmetic and comment changes

The open issues that I know of are:
* do we modify struct SnapshotData to be polymorphic based on some tag
  or move comments there?
* How/whether to change the exclusive lock on the ProcArrayLock in
  CreateInitDecodingContext()

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for inet datatypes

2014-02-27 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 Maybe I'm missing something, but isn't the gist of the problem here that
 pg_dump won't explicitly state the operator class if it's the default?

That's not a bug, it's a feature, for much the same reasons that pg_dump
tries to minimize explicit schema-qualification.

At least, it's a feature for ordinary dumps.  I'm not sure whether it
might be a good idea for binary_upgrade dumps.  That would depend on
our policy about whether a new opclass has to have a new (and necessarily
weird) name.  If we try to make the new opclass still have the nicest
name then it won't help at all for pg_dump to dump the old name.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Changeset Extraction v7.8

2014-02-27 Thread Thom Brown
On 27 February 2014 16:56, Andres Freund and...@2ndquadrant.com wrote:

 Hi,

 Attached you can find version 7.8 of this patcheset. Changes since 7.7
 include:


Try again? :)

-- 
Thom


Re: [HACKERS] GiST support for inet datatypes

2014-02-27 Thread Florian Pflug
On Feb27, 2014, at 17:56 , Tom Lane t...@sss.pgh.pa.us wrote:
 Florian Pflug f...@phlo.org writes:
 Maybe I'm missing something, but isn't the gist of the problem here that
 pg_dump won't explicitly state the operator class if it's the default?
 
 That's not a bug, it's a feature, for much the same reasons that pg_dump
 tries to minimize explicit schema-qualification.

I fail to see the value in this for opclasses. It's certainly nice for
schema qualifications, because dumping one schema and restoring into a
different schema is probably quite common. But how often does anyone dump
a database and restore it into a database with a different default opclass
for some type?

Or is the idea just to keep the dump as readable as possible?

 At least, it's a feature for ordinary dumps.  I'm not sure whether it
 might be a good idea for binary_upgrade dumps.  That would depend on
 our policy about whether a new opclass has to have a new (and necessarily
 weird) name.  If we try to make the new opclass still have the nicest
 name then it won't help at all for pg_dump to dump the old name.

Well, given the choice between not ever being able to change the default
opclass of a type, and not being able to re-use an old opclass' name,
I'd pick the latter. Especially because for default opclasses, users don't
usually have to know the name anyway.

best regards,
Florian Pflug



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for inet datatypes

2014-02-27 Thread Tom Lane
Florian Pflug f...@phlo.org writes:
 On Feb27, 2014, at 17:56 , Tom Lane t...@sss.pgh.pa.us wrote:
 That's not a bug, it's a feature, for much the same reasons that pg_dump
 tries to minimize explicit schema-qualification.

 I fail to see the value in this for opclasses. It's certainly nice for
 schema qualifications, because dumping one schema and restoring into a
 different schema is probably quite common.

The value in it is roughly the same as the reason we don't include a
version number when dumping CREATE EXTENSION.  If you had a default
opclass in the source database, you probably want a default opclass
in the destination, even if that's not bitwise the same as what you
had before.  The implication is that you want best practice for the
current PG version.

Another reason for not doing it is that unique-constraint syntax doesn't
even support it.  Constraints *have to* use the default opclass.
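
(That is, the constraint form offers no place to name an opclass, while the
equivalent index form does; a sketch with a made-up table:)

   -- constraint form: no opclass can be given, the default is used
   ALTER TABLE t ADD CONSTRAINT t_col_key UNIQUE (col);

   -- index form: an opclass can be spelled out
   CREATE UNIQUE INDEX t_col_idx ON t (col text_pattern_ops);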

 But how often does anyone dump
 a database and restore it into a database with a different default opclass
 for some type?

Indeed.  The root of the problem here is that we've never really thought
about changing a type's default opclass before.  I don't think that a
two-line change in pg_dump fixes all the issues this will bring up.

I remind you also that even if you had a 100% bulletproof argument for
changing the behavior now, it won't affect what's in existing dump files.
The only real wiggle room we have is to change the --binary-upgrade
behavior, since we can plausibly insist that people use a current pg_dump
while doing an upgrade.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Fabien COELHO



ISTM that this is an unfortunate but unlikely mistake, as -p is
used in all postgresql commands to signify the port number (psql,
pg_dump, pg_basebackup, createdb, ...).



Plus other tools already use -P for progress, such as rsync.


Yeah, but they don't make -P take an integer argument.  It's that
little frammish that makes this problem significant.


I do not see a strong case to make options with arguments case insensitive 
as a general rule. If this is done for -p/-P, why not -t/-T?


If you really feel you must remove -P, please replace it with another 
one-letter option; I use this option nearly every time I run pgbench.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread David E . Wheeler
On Feb 27, 2014, at 3:54 AM, Robert Haas robertmh...@gmail.com wrote:

 It's not very clear to me why we think it's a good idea to share the
 tree-ish representation between json and hstore.  In deference to your
 comments that this has been very publicly discussed over quite a
 considerable period, I went back and tried to find the email in which
 the drivers for that design decision were laid out.  I can find no
 such email; in fact, the first actual nested hstore patch I can find
 is from January 13th and the first jsonb patch I can find is from
 February 9th.  Neither contains anything much more than the patch
 itself, without anything at all describing the design, let alone
 explaining why it was chosen.  And although there are earlier mentions
 of both nested hstore and jsonb, there's nothing that says, OK, this
 is why we're doing it that way.  Or if there is, I couldn't find it.

FWIW, It was discussed quite a bit in meatspace, at the PGCon unconference last 
spring.

 Unless I've missed some emails sent earlier than the dates noted
 above, which is possible, the comments by myself and others on this
 thread ought to be regarded as timely review.  The basic problem here
 is that this patch wasn't timely submitted, still doesn't seem to be
 very done, and it's getting rather late.

The hstore patch landed in the Nov/Dec patch fest, sent to the list on Nov 12. 
The discussion that led to the decision to implement jsonb was carried out for 
the week after that. Here’s the thread:

  http://www.postgresql.org/message-id/528274f3.3060...@sigaev.ru

There was also quite a bit of discussion that week in the “additional json 
functionality” thread.

  http://www.postgresql.org/message-id/528274d0.7070...@dunslane.net

I submitted a review of hstore2, adding documentation, on Dec 20. Andrew got 
the patch updated with jsonb type, per discussion, and based on a first cut by 
Teodor, in January, I forget when. v7 was sent to the list on Jan 29. So while 
some stuff has been added a bit late, it was based on discussion and the 
example of hstore's code.

I think you might have missed quite a bit of the earlier discussion because it 
was in an hstore thread, not a JSON or JSONB thread.

 We therefore face the usual
 problem of deciding whether to commit something that we might regret
 later.  If jsonb turns out to the wrong solution to the json problem,
 will there be community support for adding a jsonc type next year? I
 bet not.  

Bit of a red herring, that. You could make that argument about just about *any* 
data type. I realize it's more loaded for object data types, but personally I 
have a hard time imagining something other than a text-based type or a binary 
type. There was disagreement as to whether the binary type should replace the 
text type, and the consensus of the discussion was to have both. (And then we 
had 10,000 messages bike-shedding the name of the binary type, naturally.)

 You may think this is most definitely the right direction to
 go and you may even be right, but our ability to maneuver and back out
 of things goes down to nearly zero once a release goes out the door,
 so I think it's entirely appropriate to question whether we're
 charting the best possible course.  But I certainly understand the
 annoyance.

Like the hstore type, the jsonb type has a version bit, so if we decide to 
change its representation to make it more efficient in the future, we will be 
able to do so without having to introduce a new type. Maybe someday we will 
want a completely different JSON implementation based on genetic mappings or 
quantum superpositions or something, but I would not hold up the ability to 
improve the speed of accessing values, let alone full path indexing via GIN 
indexing, because we might want to do something different in the future. 
Besides, hstore has proved itself pretty well over time, so I think it’s pretty 
safe to adopt its implementation to make an awesome jsonb type.

Best,

David



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread Tom Lane
Fabien COELHO coe...@cri.ensmp.fr writes:
 Yeah, but they don't make -P take an integer argument.  It's that
 little frammish that makes this problem significant.

 I do not see a strong case to make options with arguments case insensitive 
 as a general rule. If this is done for -p/-P, why not -t/-T?

I have not proposed fooling with -t/-T; people are used to that one,
and it's a core part of what pgbench does, so it's reasonable to expect
that people are familiar with it.  It helps also that -t and -T do
somewhat related things, ie constrain the length of the test --- so even
if you pick the wrong one, you still get behavior that's somewhat sane.

 If you really feel you must remove -P, please replace it by another
 one-letter option; I use this option nearly every time I run pgbench.

Meh.  If I thought -P would be that popular, I'd expect people to get
used to the issue.  I don't believe it though.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pgbench help message fix

2014-02-27 Thread Fabien COELHO


A very minor fix to pgbench --help which is missing the expected 
argument for the -t option.


--
Fabien.diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index a836acf..7c1e59e 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -368,7 +368,7 @@ usage(void)
 		 -R, --rate=NUM   target rate in transactions per second\n
 		 -s, --scale=NUM  report this scale factor in output\n
 		 -S, --select-onlyperform SELECT-only transactions\n
-		 -t, --transactions   number of transactions each client runs (default: 10)\n
+		 -t, --transactions=NUM   number of transactions each client runs (default: 10)\n
 		 -T, --time=NUM   duration of benchmark test in seconds\n
 		 -v, --vacuum-all vacuum all four standard tables before tests\n
 		 --aggregate-interval=NUM aggregate data over NUM seconds\n

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] VACUUM FULL/CLUSTER doesn't update pg_class's pg_class.relfrozenxid

2014-02-27 Thread Andres Freund
Hi,

As Robert previously complained, a database-wide VACUUM FULL now (as of
3cff1879f8d03) reliably increases the relfrozenxid for all tables but
pg_class itself. That's a bit sad because it means doing a VACUUM FULL
won't help in an anti-wraparound scenario.

The reason for that is explained by the following comment:
/*
 * Update the tuples in pg_class --- unless the target relation of the
 * swap is pg_class itself.  In that case, there is zero point in making
 * changes because we'd be updating the old data that we're about to throw
 * away.  Because the real work being done here for a mapped relation is
 * just to change the relation map settings, it's all right to not update
 * the pg_class rows in this case.
 */

I think the easiest fix for that is to update pg_class' relfrozenxid in
finish_heap_swap() after the indexes have been rebuilt; that's just a
couple of lines. There are more complex solutions that'd avoid the need
for that special case, but I think it's sufficient. A patch doing that is
attached.

Note that VACUUM FULL will still require more xids than a plain VACUUM,
but it scales linearly with the number of relations, so I have a hard
time seeing that as problematic.
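
For anyone who wants to verify this, the before/after state is visible with
a couple of trivial queries (just an illustration, not part of the patch):

-- pg_class's own horizons, before and after: VACUUM FULL pg_class;
SELECT relname, relfrozenxid, relminmxid FROM pg_class WHERE relname = 'pg_class';
-- the database-wide horizons that anti-wraparound processing looks at
SELECT datname, datfrozenxid, datminmxid FROM pg_database WHERE datname = current_database();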

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From cc8943822e18f283af01c1f14489f7bd9a2abede Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Thu, 27 Feb 2014 19:00:11 +0100
Subject: [PATCH] Increase relfrozenxid even when doing a VACUUM FULL on
 pg_class.

Previously VACUUM FULL (and CLUSTER) didn't update pg_class's own
relfrozenxid because the place doing so only has convenient (as in
indexed) access to the old heap, not to the new rebuilt heap. Fix that
by adding a special case updating pg_class's relfrozenxid after the
indexes have been rebuilt.

That's useful because now a database VACUUM FULL reliably increases
the database's datfrozenxid (and datminmxid, although that's often
less critical).
---
 src/backend/commands/cluster.c | 37 -
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 8b18e4a..c478ba5 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -1269,7 +1269,8 @@ swap_relation_files(Oid r1, Oid r2, bool target_is_pg_class,
 	 * changes because we'd be updating the old data that we're about to throw
 	 * away.  Because the real work being done here for a mapped relation is
 	 * just to change the relation map settings, it's all right to not update
-	 * the pg_class rows in this case.
+	 * the pg_class rows in this case. The most important changes will instead
+	 * be performed later, in finish_heap_swap() itself.
 	 */
 	if (!target_is_pg_class)
 	{
@@ -1504,6 +1505,40 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
 		reindex_flags |= REINDEX_REL_CHECK_CONSTRAINTS;
 	reindex_relation(OIDOldHeap, reindex_flags);
 
+	/*
+	 * If the relation being rebuilt is pg_class, swap_relation_files()
+	 * couldn't update pg_class's own pg_class entry (check comments in
+	 * swap_relation_files()), thus relfrozenxid was not updated. That's
+	 * annoying because a potential reason for doing a VACUUM FULL is an
+	 * imminent or actual anti-wraparound shutdown.  So, now that we can
+	 * access the new relation using its indices, update relfrozenxid.
+	 * pg_class doesn't have a toast relation, so we don't need to update
+	 * the corresponding toast relation. Note that there's little point in
+	 * moving all relfrozenxid updates here since swap_relation_files()
+	 * needs to write to pg_class for non-mapped relations anyway.
+	 */
+	if (OIDOldHeap == RelationRelationId)
+	{
+		Relation	relRelation;
+		HeapTuple	reltup;
+		Form_pg_class relform;
+
+		relRelation = heap_open(RelationRelationId, RowExclusiveLock);
+
+		reltup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(OIDOldHeap));
+		if (!HeapTupleIsValid(reltup))
+			elog(ERROR, "cache lookup failed for relation %u", OIDOldHeap);
+		relform = (Form_pg_class) GETSTRUCT(reltup);
+
+		relform->relfrozenxid = frozenXid;
+		relform->relminmxid = cutoffMulti;
+
+		simple_heap_update(relRelation, reltup->t_self, reltup);
+		CatalogUpdateIndexes(relRelation, reltup);
+
+		heap_close(relRelation, RowExclusiveLock);
+	}
+
 	/* Destroy new heap with old filenode */
 	object.classId = RelationRelationId;
 	object.objectId = OIDNewHeap;
-- 
1.8.3.251.g1462b67


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Josh Berkus
On 02/27/2014 01:56 AM, Peter Geoghegan wrote:
 I don't understand why you'd consider it to be a matter of shoehorning
 jsonb into hstore (and yes, that is what I was suggesting).

Because the course Andrew is following is the one which *this list*
decided on in CF3, no matter that people who participated in that
discussion seem to have collective amnesia.  There was a considerable
amount of effort involved in implementing things this way, so if Hackers
suddenly want to retroactively change a collective decision, I think
they should be prepared to pitch in and help implement the changed plan.

One of the issues there is that, due to how we handle types, a type
which has been available as an extension can never ever become a core
type because it breaks upgrading, per the discussion about hstore2.  For
better or for worse, we chose to make json-text a core type when it was
introduced (and XML before it, although that was before CREATE
EXTENSION).  This means that, if we have jsonb as an extension, we'll
eventually be in the position where the recommended json type with all
the features is an extension, whereas the legacy json type is in core.

However, we had this discussion already in November-December, which
resulted in the current patch.  Now you and Robert want to change the
rules on Andrew, which means Andrew is ready to quit, and we go another
year without JSON indexing.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-advocacy] GSoC 2014 - mentors, students and admins

2014-02-27 Thread Thom Brown
On 25 February 2014 13:28, Andreas 'ads' Scherbaum adsm...@wars-nicht.de wrote:


 Hi,


 On 01/28/2014 06:46 PM, Atri Sharma wrote:

 On Tue, Jan 28, 2014 at 11:04 PM, Thom Brown t...@linux.com wrote:

 Hi all,

 Application to Google Summer of Code 2014 can be made as of next
 Monday (3rd Feb), and then there will be a 12 day window in which to
 submit an application.

 I'd like to gauge interest from both mentors and students as to
 whether we'll want to do this.

 And I'd be fine with being admin again this year, unless there's
 anyone else who would like to take up the mantle?

 Who would be up for mentoring this year?  And are there any project
 ideas folk would like to suggest?

 I would like to bring up the addition to MADLIB algorithms again this
 year.


 I've spoken with the MADlib team at goivotal and they are ok to support
 this proposal. Therefore I offer to mentor this.


Are there any more prospective mentors?  We'll want some folk to act as
back-up mentors too to ensure projects can still be completed should any
mentor become unavailable.

-- 
Thom


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Andrew Dunstan


On 02/26/2014 05:45 PM, Andres Freund wrote:

On 2014-02-26 16:23:12 -0500, Andrew Dunstan wrote:

On 02/10/2014 09:11 PM, Andres Freund wrote:

Is it just me or is jsonapi.h not very well documented?


What about it do you think is missing? In any case, it's hardly relevant to
this patch, so I'll take that as obiter dicta.

It's relevant insofar as I tried to understand it, to understand
whether this patch's usage is sensible.

On a quick reread of the header, what I am missing is:
* what's semstate in JsonSemAction? Private data?
* what's object_start and object_field_start? Presumably object vs
   keypair? Why not use element as ifor the array?
* scalar_action is called for which types of tokens?
* what's exactly the meaning of the isnull parameter for ofield_action
   and aelem_action?
* How is one supposed to actually access data in the callbacks, not
   obvious for all the callbacks.
* are scalar callbacks triggered for object keys, object/array values?
...



You realize that this API dates from 9.3 and has been used in numerous 
extensions, right? So the names are pretty well fixed, for good or ill.


semstate is private data. This is at least implied:

 * parse_json will parse the string in the lex calling the
 * action functions in sem at the appropriate points. It is
 * up to them to keep what state they need  in semstate. If they
 * need access to the state of the lexer, then its pointer
 * should be passed to them as a member of whatever semstate
 * points to.

object_start is called, as its name suggests, at the start of an object. 
object_field_start is called at the start of a key/value pair.


isnull is true iff the value in question is a json null.

The scalar action is not called for object keys, but is called for scalar 
object values or array elements, in fact for any value that's not an 
object or array (i.e. for a (non-key) string, number, true, false, null).


You access json fragments by pulling them from the lexical object. 
jsonfuncs.c is chock full of examples.
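
To illustrate the above, here is a minimal, untested sketch of a
callback-driven caller, written against my reading of the 9.3-era
jsonapi.h declarations (the state struct and function names here are made
up for the example):

#include "postgres.h"
#include "utils/jsonapi.h"

/* private state handed back to every callback via sem.semstate */
typedef struct KeyCountState
{
	JsonLexContext *lex;		/* kept around in case callbacks need the lexer */
	int			nkeys;			/* number of object keys seen, at any depth */
} KeyCountState;

/* object_field_start: called once per key/value pair, fname is the key */
static void
count_ofield_start(void *state, char *fname, bool isnull)
{
	KeyCountState *ks = (KeyCountState *) state;

	ks->nkeys++;				/* isnull says whether the value is a json null */
}

static int
count_object_keys(text *json)
{
	KeyCountState state;
	JsonLexContext *lex = makeJsonLexContext(json, true);
	JsonSemAction sem;

	memset(&sem, 0, sizeof(sem));	/* callbacks we don't care about stay NULL */
	state.lex = lex;
	state.nkeys = 0;
	sem.semstate = (void *) &state;
	sem.object_field_start = count_ofield_start;

	pg_parse_json(lex, &sem);		/* walks the document, firing our callback */
	return state.nkeys;
}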


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pglz_decompress()

2014-02-27 Thread Hadi Moshayedi
Hello,

The comments in pg_lzcompress.c say that:

 * If VARSIZE(x) == rawsize + sizeof(PGLZ_Header), then the data
 * is stored uncompressed as plain bytes. Thus, the decompressor
 * simply copies rawsize bytes from the location after the
 * header to the destination.

But pglz_decompress doesn't seem to check explicitly for the VARSIZE(x) ==
rawsize + sizeof(PGLZ_Header) condition. Is it the caller's responsibility to
check for this condition before calling pglz_decompress(), or does it
happen somehow in pglz_decompress()?

Thanks,
  -- Hadi


Re: [HACKERS] [pgsql-advocacy] GSoC 2014 - mentors, students and admins

2014-02-27 Thread David Fetter
On Thu, Feb 27, 2014 at 07:54:13PM +, Thom Brown wrote:
 On 25 February 2014 13:28, Andreas 'ads' Scherbaum 
 adsm...@wars-nicht.de wrote:
  I've spoken with the MADlib team at goivotal and they are ok to
  support this proposal. Therefore I offer to mentor this.
 
 Are there any more prospective mentors?  We'll want some folk to act
 as back-up mentors too to ensure projects can still be completed
 should any mentor become unavailable.

For MADlib, no.  Are you asking for mentors in general?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-advocacy] GSoC 2014 - mentors, students and admins

2014-02-27 Thread Thom Brown
On 27 February 2014 21:08, David Fetter da...@fetter.org wrote:

 On Thu, Feb 27, 2014 at 07:54:13PM +, Thom Brown wrote:
  On 25 February 2014 13:28, Andreas 'ads' Scherbaum 
 adsm...@wars-nicht.de wrote:
   I've spoken with the MADlib team at goivotal and they are ok to
   support this proposal. Therefore I offer to mentor this.
 
  Are there any more prospective mentors?  We'll want some folk to act
  as back-up mentors too to ensure projects can still be completed
  should any mentor become unavailable.

 For MADlib, no.  Are you asking for mentors in general?


Ah yes, I should clarify.  Yes, mentors in general.

-- 
Thom


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Merlin Moncure
On Thu, Feb 27, 2014 at 1:11 PM, Josh Berkus j...@agliodbs.com wrote:
 However, we had this discussion already in November-December, which
 resulted in the current patch.  Now you and Robert want to change the
 rules on Andrew, which means Andrew is ready to quit, and we go another
 year without JSON indexing.

How we got here is not the point.  All that matters is what's going to
happen from here.  Here are the facts as I see them:

1) we've worked ourselves into a situation where we're simultaneously
developing two APIs that do essentially exactly the same thing (hstore
and jsonb).   Text json is not the problem and is irrelevant to the
discussion.

2) The decision to do that was made a long time ago.  I complained as
loudly as my mousy no-programming-only-griping voice would allow here:
http://postgresql.1045698.n5.nabble.com/JSON-Function-Bike-Shedding-tp5744932p5746152.html.
 The decision was made (and Robert cast one of the deciding votes in
support of that decision) to bifurcate hstore/json.  I firmly believe
that was a mistake but there's no point in revisiting it. Done is
done.

3) In its current state jsonb is not very useful and we have to
recognize that; it optimizes text json but OTOH covers, maybe 30-40%
of what hstore offers.  In particular, it's missing manipulation and
GIST/GIN.  The stuff it does offer however is how Andrew, Josh and
others perceive the API will be used and I defer to them with the
special exception of deserialization (the mirror of to_json) which is
currently broken or near-useless in all three types.  Andrew
recognized that and has suggested a fix; even then to me it only
matters to the extent that the API is clean and forward compatible.

Here are the options on the table:
a) Push everything to 9.5 and introduce out of core hstore2/jsonb
extensions to meet market demand.  Speaking practically, 'out of core'
translates to 'can't be used' for most industrial IT shops.  I hate
this option but recognize it's the only choice if the code isn't ready
in time.

b) Accept hstore2 but push jsonb on the premise they should be married
in some way or that jsonb simply isn't ready.  I'm not a fan of this
option either unless Andrew specifically thinks it's a good idea.  The
stuff that is there seems to work pretty well (again, except
deserialization which I haven't tested recently) and the jsonb
patterns that are in place have some precedent in terms of the text
json type.

c) Accept hstore2 and jsonb as in-core extensions (assuming code
worthiness).  Since extensions can't call into each other (this really
ought to be solved at some point) this means a lot of code copy/pasto.
  The main advantage here is that it reduces the penalty of failure
and avoids pollution of the public schema.  I did not find the
rationale upthread that there was a stigma to in-core extensions in
any way convincing.  In fact I'd go further and suggest that we really
ought to have a project policy to have all non-SQL standard functions,
operators and types as extensions from here on out.  Each in-core type
introduction after having introduced the extension system has left me
scratching my head.

d) The status quo.  This essentially means we'll have to liberally
document how things are (to avoid confusing our hapless users) and
take Andrew at his word that a separate extension will materialize
making jsonb more broadly useful.  The main concern here is that the
market will vote with their feet and adopt hstore API style broadly,
sticking us with a bunch of marginally used functions in the public
namespace to support forever.

My personal preference is c) but am perfectly ok with d), particularly
if there was more visibility into the long term planning.  Good
documentation will help either way and that's why I signed up for it.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 11:11 AM, Josh Berkus j...@agliodbs.com wrote:
 Because the course Andrew is following is the one which *this list*
 decided on in CF3, no matter that people who participated in that
 discussion seem to have collective amnesia.  There was a considerable
 amount of effort involved in implementing things this way, so if Hackers
 suddenly want to retroactively change a collective decision, I think
 they should be prepared to pitch in and help implement the changed plan.

I think you've completely misunderstood my remarks. For the most part
I agree that there are advantages to having hstore and jsonb share the
same tree representation, and this may be where Robert and I differ.
My concern is: Having gone to that considerable amount of effort, why
on earth does jsonb not get the benefit of the hstore stuff
immediately, since it's virtually the same thing? What is it that
we're actually being asked to wait for?

Let me be more concrete about what my concern is right now:

postgres=# select '{"foo":{"bar":"yellow"}}'::jsonb || '{}'::jsonb;
         ?column?
--------------------------
 "foo"=>{"bar"=>"yellow"}
(1 row)

I put in jsonb, but got out hstore, for this totally innocent use of
the concatenation operator. Now, maybe the answer here is that we
require people to cast for this kind of thing while using jsonb. The
problems I see with that are:

1. It's pretty ugly, in a way that people that care about jsonb are
particularly unlikely to find acceptable. When you add GIN/GiST
compatibility to the mix, it gets uglier.

2. Don't we already have a much simpler way of casting from hstore to json?

 One of the issues there is that, due to how we handle types, a type
 which has been available as an extension can never ever become a core
 type because it breaks upgrading, per the discussion about hstore2.  For
 better or for worse, we chose to make json-text a core type when it was
 introduced (and XML before it, although that was before CREATE
 EXTENSION).  This means that, if we have jsonb as an extension, we'll
 eventually be in the position where the recommended json type with all
 the features is an extension, whereas the legacy json type is in core.

I take issue with characterizing the original json type as legacy
(it's json, not anything else - jsonb isn't quite json, much like
BSON), but leaving that aside: So? I mean, really: what are the
practical consequences of packing everything as an extension? I can
see some benefits to doing it, but like Robert I have a harder time
seeing a cost.

To be clear: I would really like for jsonb to have parity with hstore.
I don't understand how you can argue for it being unfortunate that the
original json may occupy a privileged position as a core type over
jsonb on the one hand, while not also taking issue with jsonb clearly
playing second fiddle to hstore. Wasn't the whole point of their
sharing a binary representation that that didn't have to happen?

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Backup throttling

2014-02-27 Thread Alvaro Herrera
Antonin Houska escribió:

  Why did you choose bytes per second as a valid rate which we can specify?
  Since the minimum rate is 32kB, isn't it better to use KB per second for 
  that?
  If we do that, we can easily increase the maximum rate from 1GB to very 
  large
  number in the future if required.
 
 The attached version addresses all the comments above.

I pushed this patch with a few further tweaks.  In your changes to
address the above point, you made the suffix mandatory in the
pg_basebackup -r option.  This seemed a strange restriction, so I
removed it.  It seems more user-friendly to me to accept the value as
being expressed in kilobytes per second without requiring the suffix to
be there; the 'k' suffix is then also accepted and has no effect.  I
amended the docs to say that also.

If you or others feel strongly about this, we can still tweak it, of
course.

I also moved the min/max #defines to replication/basebackup.h, and
included that file in pg_basebackup.c.  This avoids the duplicated
values.  That file is okay to be included there.

  If WL_POSTMASTER_DEATH is triggered, we should exit immediately like
  other process does? This is not a problem of this patch. This problem exists
  also in current master. But ISTM it's better to solve that together. 
  Thought?
 
 Once we're careful about not missing signals, I think PM death should be
 noticed too. The backup functionality itself would probably manage to
 finish without postmaster, however it's executed under walsender process.
 
 Question is where !PostmasterIsAlive() check should be added. I think it
 should go to the main loop of perform_base_backup(), but that's probably
 not in the scope of this patch.

Feel free to submit patches about this.

Thanks for your patch, and the numerous reviewers who took part.

-- 
 Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another possible corruption bug in 9.3.2 or possibly a known MultiXact problem?

2014-02-27 Thread Greg Stark
On Thu, Feb 27, 2014 at 2:34 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Greg, Peter, if you could update your standbys to the current HEAD of
 REL9_3_STABLE for the affected apps and verify the problem no longer
 shows up in a reasonable timeframe, it would be great.  (I'm assuming
 you saw this happen repeatedly.)

Actually I can do better. I restored the same base backup and applied
the same period of logs and can confirm that the rows in question now
appear normally:

=# select * from (select
(heap_page_items(get_raw_page('users',13065))).*) as x where lp
in(2,3,4,7);

 lp | lp_off | lp_flags | lp_len | t_xmin  | t_xmax  | t_field3 |  t_ctid   | t_infomask2 | t_infomask | t_hoff |
----+--------+----------+--------+---------+---------+----------+-----------+-------------+------------+--------+---
  2 |   3424 |        1 |    232 | 5943845 |  728896 |        0 | (13065,3) |       16416 |       4419 |     32 |
  3 |   3152 |        1 |    272 | 5943849 | 5943879 |        0 | (13065,4) |       49184 |       9475 |     32 |
  4 |   2864 |        1 |    287 | 5943879 | 5943880 |        0 | (13065,7) |       49184 |       9475 |     32 |
  7 |   2576 |        1 |    287 | 5943880 |       0 |        0 | (13065,7) |       32800 |      10499 |     32 |



-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 If the attached patch version looks reasonable, I will commit it.

The test case is completely bogus, as the query explained is significantly
different from the query executed.  I'm not sure whether you can just
remove the extra ORDER BY column without getting machine-dependent
results, though.

More generally, I'm afraid the whole approach is probably wrong, or at
least not solving all problems in this area, because of this:

 Incidentally, I tried adding an assertion that append_rel_list does not show
 one appendrel as a direct child of another.  The following query, off-topic
 for the patch at hand, triggered that assertion:
 SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0
 UNION ALL
 SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0;

That's not off topic at all; it shows that there's not been any effort
to date to flatten appendrel membership, and therefore this partial
implementation is going to miss some cases.  It doesn't help to merge an
inheritance-based appendrel into its parent if the query ORDER BY is still
a level or two above that due to UNION ALLs.

I wonder whether we should consider adding a pass to flatten any nested
appendrels after we're done creating them all.  Or alternatively, perhaps
rather than changing the representation invariant, we need to take a
harder look at why ordering info isn't getting pushed down through
appendrels.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Tom Lane
BTW, isn't the proposed change to the comments for adjust_appendrel_attrs
just wrong?  If it's correct, why doesn't the Assert(!IsA(node, SubLink))
therein fire?  (AFAICS, the existing comment is correct: we don't use
this function until after expression preprocessing is complete.)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another possible corruption bug in 9.3.2 or possibly a known MultiXact problem?

2014-02-27 Thread Greg Stark
Though I notice something I can't understand here.

After activating the new clone, subsequent attempts to select rows from
the page bump the LSN, presumably due to touching hint bits (since the
prune xid hasn't changed). But the checksum hasn't changed even after
running CHECKPOINT.

How is it possible for the LSN to get updated without changing the checksum?

postgres=# select (page_header(get_raw_page('users',13065))).* ;
     lsn     | checksum | flags | lower | upper | special | pagesize | version | prune_xid
-------------+----------+-------+-------+-------+---------+----------+---------+-----------
 FD/330EC998 |   -25547 |     1 |   152 |  2576 |    8192 |     8192 |       4 |   5638282
(1 row)

postgres=# select (page_header(get_raw_page('users',13065))).* ;
     lsn     | checksum | flags | lower | upper | special | pagesize | version | prune_xid
-------------+----------+-------+-------+-------+---------+----------+---------+-----------
 FD/33140160 |   -25547 |     1 |   152 |  2576 |    8192 |     8192 |       4 |   5638282
(1 row)

postgres=# select (page_header(get_raw_page('users',13065))).* ;
     lsn     | checksum | flags | lower | upper | special | pagesize | version | prune_xid
-------------+----------+-------+-------+-------+---------+----------+---------+-----------
 FD/350016E8 |   -25547 |     1 |   152 |  2576 |    8192 |     8192 |       4 |   5638282
(1 row)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg_dump reporting version of server & pg_dump as comments in the output

2014-02-27 Thread Wang, Jing
Enclosed is the patch to implement the requirement that pg_dump should
report the version of server & pg_dump as comments in the output.

 

[Benefit]

By running head on the pg_dump output, you can readily discover what
version of PostgreSQL was used to generate that dump. Very useful,
especially for mouldy old database dumps.

The benefit of this requirement is to let the user clearly see which
server version the dump output file was taken from and which server
version it will be restored into, and to handle the problems of
incompatible versions more easily.

 

[Analysis]

Using pg_dump, the data can be dumped into a file with the format set to
'c', 't' or plain text. In the existing version, the version of server &
pg_dump is already there when the file format is 'c' or 't'. Even for the
plain-text format, the version of server & pg_dump is already there if
'--verbose' is used with pg_dump. However, '--verbose' also leads to many
other prints which are not always required.

 

So the requirement is to dump the version of server & pg_dump as a comment
into the plain-text output file even without using the '--verbose'
option.

 

[Solution details]

The main change is in the pg_backup_archiver.c file: in the function
'RestoreArchive', the version of server & pg_dump is currently only
printed when the '--verbose' option is used.  Now we just let the
printing happen even without the '--verbose' option.
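
To make that concrete, the change essentially amounts to dropping the
verbose test around the existing output; roughly (a sketch only, assuming
the archive handle carries the versions in AH->archiveRemoteVersion and
AH->archiveDumpVersion, as the current '--verbose' code path uses):

    /* in RestoreArchive(), pg_backup_archiver.c */
    ahprintf(AH, "--\n-- PostgreSQL database dump\n--\n\n");

    /* previously emitted only under: if (AH->public.verbose) */
    if (AH->archiveRemoteVersion)
        ahprintf(AH, "-- Dumped from database version %s\n",
                 AH->archiveRemoteVersion);
    if (AH->archiveDumpVersion)
        ahprintf(AH, "-- Dumped by pg_dump version %s\n",
                 AH->archiveDumpVersion);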

 

[what is changed when applying the patch]

1. The output file created by pg_dump with the format set to 'plain text'
and without the '--verbose' option will include the version of server &
pg_dump. One example is as follows:

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

...

 

2. The output file created by pg_dumpall with the format set to 'plain text'
and without the '--verbose' option will include the version of server &
pg_dump. The example is as follows:

 

--

-- PostgreSQL database cluster dump

--

 

SET default_transaction_read_only = off;

 

...

 

\connect connectdb

 

SET default_transaction_read_only = off;

 

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

 

...

 

\connect postgres

 

SET default_transaction_read_only = off;

 

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

SET check_function_bodies = false;

 

 

3. The version of server and pg_dump will be dumped into the output
file. The output file is created by the following command:

pg_restore inputFile -f output.sql 

 

One example is as following:

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

...

 

 

Kind regards

Jing



[HACKERS] Windows exit code 128 ... it's baaack

2014-02-27 Thread Tom Lane
I looked at the postmaster log for the ongoing issue on narwhal
(to wit, that the contrib/dblink test dies the moment it tries
to do anything dblink-y), and looky here what the postmaster
has logged:

[530fc965.bac:2] LOG:  server process (PID 2144) exited with exit code 128
[530fc965.bac:3] DETAIL:  Failed process was running: SELECT *
FROM dblink('dbname=contrib_regression','SELECT * FROM foo') AS t(a 
int, b text, c text[])
WHERE t.a  7;
[530fc965.bac:4] LOG:  server process (PID 2144) exited with exit code 0
[530fc965.bac:5] LOG:  terminating any other active server processes

The double report of the same process exiting can be attributed to
the kluge at postmaster.c lines 2906..2926 (in HEAD): that code
logs an ERROR_WAIT_NO_CHILDREN (128) exit and then resets the exitstatus
to zero.  Further down, where we realize the process failed to disable
its deadman switch, we report the phony exit 0 status and begin the
system reset cycle.

Now, back in the 2010 thread where we agreed to put in the ignore-128
kluge, it was asserted that all known cases of this exit code were
irreproducible Windows flakiness occurring at process start or exit.
This is evidently neither start nor exit, but likely is being provoked
by trying to load libpq into the backend.  And it appears to be 100.00%
reproducible on narwhal.

Seems worth poking into a little closer.  Not by me, though; I don't
do Windows.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 3:54 AM, Robert Haas robertmh...@gmail.com wrote:
 So I tried to tease it out from looking at the patches.  As nearly as
 I can tell, the reason for making jsonb use hstore's binary format is
 because then we can build indexes on jsonbfield::hstore, and the
 actual type conversion will be a no-op; and the reason for upgrading
 hstore to allow nested keys is so that jsonb can map onto it.

I think that a typed, nested hstore has considerable independent
value, and would have had just the same value 10 years ago, before
JSON existed. I'm told that broadly speaking most people would prefer
the interface to speak JSON, and I'd like to give people what they
want, but that's as far as it goes. While I see problems with some
aspects of the patches as implemented, I think that the reason that
the two types share a binary format is that they're basically the same
thing. It might be that certain facets of the nested hstore
implementation reflect a need to accommodate jsonb, but there are no
ones that I'm currently aware of that I find at all objectionable.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_dump reporting version of server & pg_dump as comments in the output

2014-02-27 Thread Wang, Jing
Sorry for missing the patch file in the original email.  Enclosed please
find it.

 

Jing Wang

Fujitsu Australia

 

 

From: Arulappan, Arul Shaji 
Sent: Friday, 28 February 2014 11:21 AM
To: Wang, Jing
Subject: RE: [HACKERS] pg_dump reporting version of server & pg_dump as
comments in the output
Importance: High

 

Jing, you missed the patch attachment.

 

Rgds,

Arul Shaji

 

 

From: pgsql-hackers-ow...@postgresql.org [
mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Wang, Jing
Sent: Friday, 28 February 2014 11:11 AM
To: pgsql-hackers@postgresql.org
Subject: [HACKERS] pg_dump reporting version of server & pg_dump as
comments in the output

 

Enclosed is the patch to implement the requirement that pg_dump should
report the version of server & pg_dump as comments in the output.

 

[Benefit]

By running head on the pg_dump output, you can readily discover what
version of PostgreSQL was used to generate that dump. Very useful,
especially for mouldy old database dumps.

The benefit of this requirement is to let the user clearly see which
server version the dump output file was taken from and which server
version it will be restored into, and to handle the problems of
incompatible versions more easily.

 

[Analysis]

Using pg_dump, the data can be dumped into a file with the format set to
'c', 't' or plain text. In the existing version, the version of server &
pg_dump is already there when the file format is 'c' or 't'. Even for the
plain-text format, the version of server & pg_dump is already there if
'--verbose' is used with pg_dump. However, '--verbose' also leads to many
other prints which are not always required.

 

So the requirement is to dump the version of server & pg_dump as a comment
into the plain-text output file even without using the '--verbose'
option.

 

[Solution details]

The main change is in the pg_backup_archiver.c file: in the function
'RestoreArchive', the version of server & pg_dump is currently only
printed when the '--verbose' option is used.  Now we just let the
printing happen even without the '--verbose' option.

 

[what is changed when applying the patch]

1. The output file created by pg_dump with the format set to 'plain text'
and without the '--verbose' option will include the version of server &
pg_dump. One example is as follows:

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

...

 

2. The output file created by pg_dumpall with the format set to 'plain text'
and without the '--verbose' option will include the version of server &
pg_dump. The example is as follows:

 

--

-- PostgreSQL database cluster dump

--

 

SET default_transaction_read_only = off;

 

...

 

\connect connectdb

 

SET default_transaction_read_only = off;

 

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

 

...

 

\connect postgres

 

SET default_transaction_read_only = off;

 

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

SET check_function_bodies = false;

 

 

3. The version of server and pg_dump will be dumped into the output
file. The output file is created by the following command:

pg_restore inputFile -f output.sql 

 

One example is as following:

--

-- PostgreSQL database dump

--

 

-- Dumped from database version 9.2.4

-- Dumped by pg_dump version 9.4devel

 

SET statement_timeout = 0;

SET lock_timeout = 0;

SET client_encoding = 'UTF8';

SET standard_conforming_strings = on;

...

 

 

Kind regards

Jing



pg_dump.patch
Description: pg_dump.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 1:28 PM, Merlin Moncure mmonc...@gmail.com wrote:
 3) In its current state jsonb is not very useful and we have to
 recognize that; it optimizes text json but OTOH covers, maybe 30-40%
 of what hstore offers.  In particular, it's missing manipulation and
 GIST/GIN.  The stuff it does offer however is how Andrew, Josh and
 others perceive the API will be used and I defer to them with the
 special exception of deserialization (the mirror of to_json) which is
 currently broken or near-useless in all three types.  Andrew
 recognized that and has suggested a fix; even then to me it only
 matters to the extent that the API is clean and forward compatible.

It's missing manipulation (in the sense that the implicit cast
sometimes produces surprising results, in particular for operators
that return hstore), but it isn't really missing GiST/GIN support as
compared to hstore, AFAICT:

postgres=# select * from foo;
               i
-------------------------------
 {"foo": {"bar": "yellow"}}
 {"foozzz": {"bar": "orange"}}
 {"foozzz": {"bar": "orange"}}
(3 rows)

postgres=# select * from foo where i ? 'foo';
              i
----------------------------
 {"foo": {"bar": "yellow"}}
(1 row)

postgres=# explain analyze select * from foo where i ? 'foo';
  QUERY PLAN
---
 Bitmap Heap Scan on foo  (cost=12.00..16.01 rows=1 width=32) (actual
time=0.051..0.051 rows=1 loops=1)
   Recheck Cond: ((i)::hstore ? 'foo'::text)
   Heap Blocks: exact=1
   ->  Bitmap Index Scan on hidxb  (cost=0.00..12.00 rows=1 width=0)
(actual time=0.041..0.041 rows=1 loops=1)
 Index Cond: ((i)::hstore ? 'foo'::text)
 Planning time: 0.172 ms
 Total runtime: 0.128 ms
(7 rows)

Now, it's confusing that it has to go through hstore, perhaps, but
that's hardly all that bad in and of itself. It may be a matter of
reconsidering how to make the two work together. Certainly, queries
like the following fail, because the parser thinks the rhs string is
an hstore literal, not a jsonb literal:

postgres=# select * from foo where i @> '{"foo":4}';
ERROR:  42601: bad hstore representation
LINE 1: select * from foo where i @> '{"foo":4}';
 ^
DETAIL:  syntax error, unexpected STRING_P, expecting '}' or ',' at end of input
LOCATION:  hstore_yyerror, hstore_scan.l:172

Other than that, I'm not sure in what sense you consider that jsonb is
missing GIN/GiST. If you mean that it doesn't have some of the
capabilities that I believe are planned for the VODKA infrastructure
[1], which one might hope to have immediately available to index this
new nested structure, that is hardly a criticism of jsonb in
particular.

[1] http://www.pgcon.org/2014/schedule/events/696.en.html
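
For reference, the table and expression index behind the plan above were
created along these lines in my session (reconstructed from the plan; the
only assumption is the implicit jsonb-to-hstore cast the patch provides):

postgres=# create table foo (i jsonb);
postgres=# create index hidxb on foo using gin (((i)::hstore));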

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Proposal/design feedback needed: Providing catalog view to pg_hba.conf file

2014-02-27 Thread Prabakaran, Vaishnavi
Hi All,

 

I would like to propose adding a new catalog view for the pg_hba.conf file
contents. The aim of this proposal is to present a new view,
pg_settings_hba, to the database administrator for viewing the
pg_hba.conf contents.

 

Currently, to view the pg_hba.conf file contents, the DB admin has to access
the file on the database server to read the settings.  With large or
multiple hba files, finding the appropriate hba rules that are actually
loaded can be difficult and time-consuming.

 

The advantage of having this pg_settings_hba view is that the admin can
check which hba rules are loaded at runtime via the database connection
itself.  It thereby becomes easy for the admin to see all the users with
their privileges in a single view and manage them.

 

Since exposing this view to everyone poses a security problem, access to
this view will be limited to superusers.

As a first step, I am proposing only SELECT support for this new view.
Later, based on your feedback, I would like to add UPDATE/DELETE
support to this view as well.

 

Here is the brief design of the proposal:

1.   Create a new view pg_settings_hba in system_views.sql.

Structure of new view:

 

Column            Type
---------------   -------
connection_type   text
databases         text[]
roles             text[]
socket_Address    text
socket_Mask       text
compare_Method    text
hostName          text
authMethod        text
linenumber        integer

 

2.   Grant SELECT permission on this view to the superuser.

3.   Add a new function in guc.c (and in hba.c, to load data from the
parsed hba lines) to create the tuple descriptor. The CREATE VIEW command
in system_views.sql will make use of this new function in guc.c to build
the view (see the sketch below).

 

Input for this view is taken from parsed hba lines and not from files
directly. 
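
To illustrate steps 1 and 3, here is a minimal sketch of what the view
definition could look like (the set-returning function pg_hba_settings()
and its exact output columns are hypothetical, merely mirroring the
structure above):

CREATE VIEW pg_settings_hba AS
    SELECT connection_type, databases, roles,
           socket_address, socket_mask, compare_method,
           hostname, authmethod, linenumber
    FROM pg_hba_settings();

REVOKE ALL ON pg_settings_hba FROM PUBLIC;   -- step 2: superuser-only access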

 

Any comments or feedback on this proposal?

 

 

 

 

Thanks & Regards,

Vaishnavi

 



Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Josh Berkus
On 02/27/2014 01:28 PM, Merlin Moncure wrote:
 How we got here is not the point.  All that matters is what's going to
 happen from here.  Here are the facts as I see them:

Well, it certainly matters if we want it in this release.

As far as I can tell, moving jsonb to contrib basically requires
rewriting a bunch of code, without actually fixing any of the bugs which
have been discussed in the more technical reviews.  I'm really unclear
what, at this point, moving jsonb to /contrib would improve.

On 02/27/2014 04:27 PM, Peter Geoghegan wrote:
 I think that a typed, nested hstore has considerable independent
 value, and would have had just the same value 10 years ago, before
 JSON existed. I'm told that broadly speaking most people would prefer
 the interface to speak JSON, and I'd like to give people what they
 want, but that's as far as it goes. While I see problems with some
 aspects of the patches as implemented, I think that the reason that
 the two types share a binary format is that they're basically the same
 thing. It might be that certain facets of the nested hstore
 implementation reflect a need to accommodate jsonb, but there are no
 ones that I'm currently aware of that I find at all objectionable.

We discussed this with Oleg & Teodor at pgCon 2013.  From the
perspective of several of us, we were mystified as to why hstore2 has
its own syntax at all; that is, why not just implement the JSONish
syntax?  Their answer was to provide a smooth upgrade path to existing
hstore users, which makes sense.  This was also the reason for not
making hstore a core type.

But again ... we discussed all of this at pgCon and in
November-December.  It's not like the people on this thread now weren't
around for both of those discussions.

And it's not just that broadly speaking most people would prefer
the interface to speak JSON; it's that a JSONish interface for indexed
hierarchical data is a Big Feature which will drive adoption among web
developers, and hstore2 without JSON support simply is not.  At trade
shows and developer conferences, I get more questions about PostgreSQL's
JSON support than I do for any new feature since streaming replication.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 5:31 PM, Peter Geoghegan p...@heroku.com wrote:

 Now, it's confusing that it has to go through hstore, perhaps, but
 that's hardly all that bad in and of itself.

Yes, it is.  It strikes me as irrational to have jsonb depend on hstore.  Let's 
be honest with ourselves: if we were starting over, we wouldn't start by 
creating our own proprietary hierarchical type and then making the hierarchical 
type everyone else uses depend on it.  hstore exists because json didn't.  But 
json does now, and we shouldn't create a jsonb dependency on hstore.

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Craig Ringer
On 02/28/2014 09:54 AM, Josh Berkus wrote:
 On 02/27/2014 01:28 PM, Merlin Moncure wrote:
 How we got here is not the point.  All that matters is what's going to
 happen from here.  Here are the facts as I see them:
 
 Well, it certainly matters if we want it in this release.
 
 As far as I can tell, moving jsonb to contrib basically requires
 rewriting a bunch of code, without actually fixing any of the bugs which
 have been discussed in the more technical reviews.  I'm really unclear
 what, at this point, moving jsonb to /contrib would improve.

It'd also make it a lot harder to use in other extensions, something
that's already an issue with hstore.

It should be a core type.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 6:02 PM, Craig Ringer cr...@2ndquadrant.com wrote:
 It'd also make it a lot harder to use in other extensions, something
 that's already an issue with hstore.

What do you mean?


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
 On Thu, Feb 27, 2014 at 6:02 PM, Craig Ringer cr...@2ndquadrant.com wrote:
  It'd also make it a lot harder to use in other extensions, something
  that's already an issue with hstore.
 
 What do you mean?

Extensions can't depend on other extensions directly- hence you can't
write an extension that depends on hstore, which sucks.  It'd be
preferable to not have that issue w/ json/jsonb/whatever.

Yes, it'd be nice to solve that problem, but I don't see it happening in
the next few weeks...

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Auto-tuning work_mem and maintenance_work_mem

2014-02-27 Thread Craig Ringer
On 02/18/2014 12:19 AM, Andres Freund wrote:
 On 2014-02-16 21:26:47 -0500, Robert Haas wrote:
 I don't think anyone objected to increasing the defaults for work_mem
 and maintenance_work_mem by 4x, and a number of people were in favor,
 so I think we should go ahead and do that.  If you'd like to do the
 honors, by all means!
 
 Actually, I object to increasing work_mem by default. In my experience
 most of the untuned servers are backing some kind of web application and
 often run with far too many connections. Increasing work_mem for those
 is dangerous.

Good point. Especially with pagination involved. Those OFFSET 4
LIMIT 100 queries can be a killer.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 6:08 PM, Stephen Frost sfr...@snowman.net wrote:
 On Thu, Feb 27, 2014 at 6:02 PM, Craig Ringer cr...@2ndquadrant.com wrote:
  It'd also make it a lot harder to use in other extensions, something
  that's already an issue with hstore.

 What do you mean?

 Extensions can't depend on other extensions directly- hence you can't
 write an extension that depends on hstore, which sucks.  It'd be
  preferable to not have that issue w/ json/jsonb/whatever.

I think it depends on what you mean by depend. The earthdistance
extension requires 'cube', for example, a data type cube for
representing multidimensional cubes. Although I am aware of the
lengths that drivers like psycopg2 go to to support hstore because
it's an extension, which is undesirable.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
 On Thu, Feb 27, 2014 at 6:08 PM, Stephen Frost sfr...@snowman.net wrote:
  Extensions can't depend on other extensions directly- hence you can't
  write an extension that depends on hstore, which sucks.  It'd be
  preferable to not have that issue w/ json/jsonb/whatever.
 
 I think it depends of what you mean by depend. The earthdistance
 extension requires 'cube', for example, a data type cube for
 representing multidimensional cubes. Although I am aware of the
 lengths that drivers like psycopg2 go to to support hstore because
 it's an extension, which is undesirable.

What earthdistance does is simply use the 'cube' data type- that's quite
different from needing to be able to make calls from one .so into the
other .so directly.  With earthdistance/cube, everything goes through
PG.
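
For example, a minimal sketch (coordinates are illustrative; earthdistance's
control file simply lists cube as a required extension, so it is created first):

create extension cube;
create extension earthdistance;

-- ll_to_earth() returns an 'earth' value (a domain over cube); the call is
-- resolved through the ordinary SQL/fmgr layer, not by one .so calling
-- into the other directly.
select earth_distance(ll_to_earth(37.77, -122.42),
                      ll_to_earth(40.71, -74.01));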

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] tests for client programs

2014-02-27 Thread Peter Eisentraut
Updated patch.  Changes:

- added documentation
- avoid port conflicts with running instances
- added tests for pg_basebackup -T
- removed TODO tests for rejected pg_basebackup feature

A test on Windows would be nice.  Otherwise we'll let the buildfarm do it.
From 8205e58442720965c98794cb2f234c46b70dada7 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut pete...@gmx.net
Date: Thu, 27 Feb 2014 21:39:35 -0500
Subject: [PATCH v3] Add TAP tests for client programs

---
 GNUmakefile.in |   4 +-
 configure  |  47 +++
 configure.in   |   5 +
 doc/src/sgml/installation.sgml |   3 +-
 doc/src/sgml/regress.sgml  |  28 
 src/Makefile.global.in |  19 ++-
 src/bin/initdb/.gitignore  |   2 +
 src/bin/initdb/Makefile|   7 +
 src/bin/initdb/t/001_initdb.pl |  37 +
 src/bin/pg_basebackup/.gitignore   |   2 +
 src/bin/pg_basebackup/Makefile |   6 +
 src/bin/pg_basebackup/t/010_pg_basebackup.pl   |  91 
 src/bin/pg_basebackup/t/020_pg_receivexlog.pl  |   8 ++
 src/bin/pg_config/.gitignore   |   1 +
 src/bin/pg_config/Makefile |   6 +
 src/bin/pg_config/t/001_pg_config.pl   |  12 ++
 src/bin/pg_controldata/.gitignore  |   1 +
 src/bin/pg_controldata/Makefile|   6 +
 src/bin/pg_controldata/t/001_pg_controldata.pl |  14 ++
 src/bin/pg_ctl/.gitignore  |   1 +
 src/bin/pg_ctl/Makefile|   6 +
 src/bin/pg_ctl/t/001_start_stop.pl |  25 
 src/bin/pg_ctl/t/002_status.pl |  19 +++
 src/bin/scripts/.gitignore |   2 +
 src/bin/scripts/Makefile   |   7 +
 src/bin/scripts/t/010_clusterdb.pl |  18 +++
 src/bin/scripts/t/011_clusterdb_all.pl |   9 ++
 src/bin/scripts/t/020_createdb.pl  |  16 +++
 src/bin/scripts/t/030_createlang.pl|  18 +++
 src/bin/scripts/t/040_createuser.pl|  26 
 src/bin/scripts/t/050_dropdb.pl|  16 +++
 src/bin/scripts/t/060_droplang.pl  |  15 ++
 src/bin/scripts/t/070_dropuser.pl  |  16 +++
 src/bin/scripts/t/080_pg_isready.pl|  15 ++
 src/bin/scripts/t/090_reindexdb.pl |  21 +++
 src/bin/scripts/t/091_reindexdb_all.pl |  11 ++
 src/bin/scripts/t/100_vacuumdb.pl  |  17 +++
 src/bin/scripts/t/101_vacuumdb_all.pl  |   9 ++
 src/test/perl/TestLib.pm   | 186 +
 39 files changed, 748 insertions(+), 4 deletions(-)
 create mode 100644 src/bin/initdb/t/001_initdb.pl
 create mode 100644 src/bin/pg_basebackup/t/010_pg_basebackup.pl
 create mode 100644 src/bin/pg_basebackup/t/020_pg_receivexlog.pl
 create mode 100644 src/bin/pg_config/t/001_pg_config.pl
 create mode 100644 src/bin/pg_controldata/t/001_pg_controldata.pl
 create mode 100644 src/bin/pg_ctl/t/001_start_stop.pl
 create mode 100644 src/bin/pg_ctl/t/002_status.pl
 create mode 100644 src/bin/scripts/t/010_clusterdb.pl
 create mode 100644 src/bin/scripts/t/011_clusterdb_all.pl
 create mode 100644 src/bin/scripts/t/020_createdb.pl
 create mode 100644 src/bin/scripts/t/030_createlang.pl
 create mode 100644 src/bin/scripts/t/040_createuser.pl
 create mode 100644 src/bin/scripts/t/050_dropdb.pl
 create mode 100644 src/bin/scripts/t/060_droplang.pl
 create mode 100644 src/bin/scripts/t/070_dropuser.pl
 create mode 100644 src/bin/scripts/t/080_pg_isready.pl
 create mode 100644 src/bin/scripts/t/090_reindexdb.pl
 create mode 100644 src/bin/scripts/t/091_reindexdb_all.pl
 create mode 100644 src/bin/scripts/t/100_vacuumdb.pl
 create mode 100644 src/bin/scripts/t/101_vacuumdb_all.pl
 create mode 100644 src/test/perl/TestLib.pm

diff --git a/GNUmakefile.in b/GNUmakefile.in
index a573880..69e0824 100644
--- a/GNUmakefile.in
+++ b/GNUmakefile.in
@@ -66,9 +66,9 @@ check check-tests: all
 check check-tests installcheck installcheck-parallel installcheck-tests:
 	$(MAKE) -C src/test/regress $@
 
-$(call recurse,check-world,src/test src/pl src/interfaces/ecpg contrib,check)
+$(call recurse,check-world,src/test src/pl src/interfaces/ecpg contrib src/bin,check)
 
-$(call recurse,installcheck-world,src/test src/pl src/interfaces/ecpg contrib,installcheck)
+$(call recurse,installcheck-world,src/test src/pl src/interfaces/ecpg contrib src/bin,installcheck)
 
 GNUmakefile: GNUmakefile.in $(top_builddir)/config.status
 	./config.status $@
diff --git a/configure b/configure
index 122ace7..e0dbdfe 100755
--- a/configure
+++ b/configure
@@ -627,6 +627,7 @@ ac_includes_default=\
 
 ac_subst_vars='LTLIBOBJS
 vpath_build
+PROVE
 OSX
 XSLTPROC
 COLLATEINDEX
@@ -14350,6 +14351,52 @@ fi
 done
 
 
+#
+# Check for test tools
+#
+for ac_prog in prove
+do
+  # 

Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Noah Misch
On Thu, Feb 27, 2014 at 05:47:16PM -0500, Tom Lane wrote:
 BTW, isn't the proposed change to the comments for adjust_appendrel_attrs
 just wrong?  If it's correct, why doesn't the Assert(!IsA(node, SubLink))
 therein fire?  (AFAICS, the existing comment is correct: we don't use
 this function until after expression preprocessing is complete.)

The comment change is accurate; expand_inherited_tables(), which precedes
subplan conversion, now calls adjust_appendrel_attrs().  I neglected to remove
the SubLink assert or test a scenario actually entailing a SubLink in
translated_vars.  Thanks for spotting the problem.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Noah Misch
On Thu, Feb 27, 2014 at 05:33:47PM -0500, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
  If the attached patch version looks reasonable, I will commit it.
 
 The test case is completely bogus, as the query explained is significantly
 different from the query executed.  I'm not sure whether you can just
 remove the extra ORDER BY column without getting machine-dependent
 results, though.

Each query tests something slightly different.  The EXPLAIN verifies that we
can MergeAppend given this query structure, and the plain SELECT verifies that
any join tree contortions we made to achieve that do not change the answer.

 More generally, I'm afraid the whole approach is probably wrong, or at
 least not solving all problems in this area, because of this:
 
  Incidentally, I tried adding an assertion that append_rel_list does not show
  one appendrel as a direct child of another.  The following query, off-topic
  for the patch at hand, triggered that assertion:
  SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0
  UNION ALL
  SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0;
 
 That's not off topic at all; it shows that there's not been any effort
 to date to flatten appendrel membership, and therefore this partial
 implementation is going to miss some cases.  It doesn't help to merge an
 inheritance-based appendrel into its parent if the query ORDER BY is still
 a level or two above that due to UNION ALLs.

Nonetheless, I find it reasonable to accept a patch that makes
expand_inherited_tables() avoid introducing nested appendrels.  Making
pull_up_subqueries() do the same can be a separate effort.  There will still
be a pile of ways to stifle MergeAppend, including everything that makes
is_simple_union_all() return false.  Having said that, both you and Kyotaro
sound keen to achieve an appendrel flatness invariant in a single patch.  If
that's the consensus, I'm fine with bouncing this back so it can happen.

 I wonder whether we should consider adding a pass to flatten any nested
 appendrels after we're done creating them all.

We did consider that upthread.  It's a valid option, but I remain more
inclined to teach pull_up_subqueries() to preserve flatness just like
expand_inherited_tables() will.

 Or alternatively, perhaps
 rather than changing the representation invariant, we need to take a
 harder look at why ordering info isn't getting pushed down through
 appendrels.

That could facilitate MergeAppend in considerably more places, such as UNION
ALL containing non-simple branches.  I don't have much of a sense concerning
what such a patch would entail.
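
(For reference, the shape of query that benefits is roughly the sketch below
-- table names are illustrative -- where a flat appendrel lets the planner
use a MergeAppend over the children's indexes for the ORDER BY/LIMIT instead
of appending everything and sorting:

create table t1 (id int primary key);
create table t1_child (primary key (id)) inherits (t1);
create table t2 (id int primary key);

select id from t1
union all
select id from t2
order by id
limit 10;
)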

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench help message fix

2014-02-27 Thread Peter Eisentraut
On 2/27/14, 12:53 PM, Fabien COELHO wrote:
 A very minor fix to pgbench --help which is missing the expected
 argument for the -t option.

done



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Eisentraut
On 2/27/14, 2:11 PM, Josh Berkus wrote:
 This means that, if we have jsonb as an extension, we'll
 eventually be in the position where the recommended json type with all
 the features is an extension, whereas the legacy json type is in core.

Well that wouldn't be a new situation.  Compare geometry types vs
postgis, inet vs ip4(r).  It's not bad being an extension.  You can
iterate faster and don't have to discuss so much. ;-)



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 5:54 PM, Josh Berkus j...@agliodbs.com wrote:
 And it's not just that broadly speaking most people would prefer
 the interface to speak JSON; it's that a JSONish interface for indexed
 heirachical data is a Big Feature which will drive adoption among web
 developers, and hstore2 without JSON support simply is not.  At trade
 shows and developer conferences, I get more questions about PostgreSQL's
 JSON support than I do for any new feature since streaming replication.

I work for Heroku; believe me, I get it. I'd go along with abandoning
nested hstore as a user-visible thing if I thought it bought jsonb
something and I thought we could, but I have doubts about that.

I understand why the nested hstore approach was taken. It isn't that
desirable to maintain something like a jsonb in parallel, while also
having the old key/value, untyped hstore. They are still fairly
similar as these things go. Robert said something about re-using op
classes rather than waiting for new op classes to be developed, but
why do we need to wait? These ones look like they work fine - what
will be better about the ones we develop later that justifies their
independent existence? Why should we believe that they won't just be
copied and pasted? The major problem is that conceptually, hstore
owns them (which is at least in part due to old hstore code rather
than nested hstore code), and so we need a better way to make that
work. We need some commonality and variability analysis, because
duplicating large amounts of hstore isn't very appealing.

 As far as I can tell, moving jsonb to contrib basically requires
 rewriting a bunch of code, without actually fixing any of the bugs which
 have been discussed in the more technical reviews.  I'm really unclear
 what, at this point, moving jsonb to /contrib would improve.

These are all of the additions to core, excluding regression tests and docs:

***SNIP***
 src/backend/catalog/system_views.sql  |8 +
 src/backend/utils/adt/Makefile|2 +-
 src/backend/utils/adt/json.c  |   44 ++--
 src/backend/utils/adt/jsonb.c |  455 
 src/backend/utils/adt/jsonb_support.c | 1268

 src/backend/utils/adt/jsonfuncs.c | 1159
++---
 src/include/catalog/pg_cast.h |4 +
 src/include/catalog/pg_operator.h |   12 +
 src/include/catalog/pg_proc.h |   44 +++-
 src/include/catalog/pg_type.h |6 +
 src/include/funcapi.h |9 +
 src/include/utils/json.h  |   15 ++
 src/include/utils/jsonapi.h   |8 +-
 src/include/utils/jsonb.h |  245 +
 **SNIP**

It's not immediately obvious to me why moving that into contrib
requires much work at all (relatively speaking), especially since
that's where much of it came from to begin with, although I grant that
I don't grok the patch.

Here is the line of reasoning that suggests to me that putting jsonb
in contrib is useful:

* It is not desirable to maintain some amount of common code between
hstore (as it exists today) and jsonb. This is of course a question of
degree (not an absolute), so feel free to call me out on the details
here, but I'm of the distinct impression that jsonb doesn't have that
much of an independent existence from hstore - what you could loosely
call the jsonb parts includes in no small part historic hstore code,
and not just new nested hstore code (that could reasonably be broken
out if we decided to jettison nested hstore as a user-visible thing
and concentrated on jsonb alone, as you would have us do). In other
words, Oleg and Teodor built nested hstore on hstore because of
practical considerations, and not just because they were attached to
hstore's perl-like syntax. They didn't start from scratch because that
was harder, or didn't make sense.

* We can't throw hstore users under the bus. It has to stay in contrib
for various reasons.

* It hardly makes any sense to have an in-core jsonb if it comes with
no batteries included. You need to install hstore for this jsonb
implementation to be of *any* use anyway. When you don't have the
extension installed, expect some really confusing error messages when
you go to create a GIN index. jsonb is no use on its own; why not just
make it all or nothing?
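
To make that concrete, the confusing failure mode is roughly the following
(a sketch only; the table name is illustrative and the exact message depends
on what the patch installs in core):

create table docs (body jsonb);

-- without the hstore extension providing the opclasses, this fails with
-- something like:
--   ERROR:  data type jsonb has no default operator class for access method "gin"
create index on docs using gin (body);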


Another way of resolving this tension might be to push a lot more of
hstore into core than is presently proposed, but that seems like a
more difficult solution with little to no upside.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Eisentraut
On 2/26/14, 10:42 PM, Andrew Dunstan wrote:
 Extensions can't call each other's code.

That's not necessarily so.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extension_control_path

2014-02-27 Thread Stephen Frost
* Dimitri Fontaine (dimi...@2ndquadrant.fr) wrote:
 Stephen Frost sfr...@snowman.net writes:
  me as a terrible idea to ship absolute object file names (which I assume
  you mean to include path, given you say 'absolute') unless you're an
 
 I agree, that's why my current design also needs cooperation on the
 backend side of things, to implement what you're calling here relocation
 of the files. Now that I read your comments, we might be able to
 implement something really simple and have something in core…

I didn't really expect this to be a huge issue or hard problem to
implement.. :)

 Please see attached patch, tested and documented.

On a quick glance-over, it looks like a reasonable implementation to me.

 What about allowing a control file like this:
 
# hstore extension
comment = 'data type for storing sets of (key, value) pairs'
default_version = '1.3'
directory = 'local/hstore-new'
module_pathname = '$directory/hstore'
relocatable = true

Interesting idea.  I'm a *little* concerned that re-using '$directory'
there might confuse people into thinking that any values in the control
file could be substituted in a similar way though.  Would there really
be much difference between that and '$ctldir' or something?

 The current way directory is parsed, relative pathnames are allowed and
 will be resolved in SHAREDIR, which is where we find the extension/ main
 directory, where currently live extension control files.

Sure, though it's unlikely to be used much there, as it's managed by the
packagers.

 With such a feature, we would allow module_pathname to reuse the same
 location as where we're going to find auxilliary control files and
 scripts.

Right- they'd be able to have everything in a single directory,
presumably one where they're doing development or where some other
installers installs to.

 Given module_pathname = '$directory/xxx' the extension is now fully
 relocatable and the tool doesn't need to put in any other effort than
 hacking the control file *at build time*.

Right.

Peter, any thoughts on this approach..?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Eisentraut
On 2/27/14, 9:08 PM, Stephen Frost wrote:
 Extensions can't depend on other extensions directly- hence you can't
 write an extension that depends on hstore, which sucks.

Sure they can, see transforms.

(Or if you disagree, download that patch and demo it, because I'd like
to know. ;-) )



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Andrew Dunstan


On 02/27/2014 10:09 PM, Peter Geoghegan wrote:


* It hardly makes any sense to have an in-core jsonb if it comes with
no batteries included. You need to install hstore for this jsonb
implementation to be of *any* use anyway.



This is complete nonsense. Right out of the box today a considerable 
number of the json operations are likely to be considerable faster.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extension_control_path

2014-02-27 Thread Peter Eisentraut
On 2/27/14, 6:04 AM, Dimitri Fontaine wrote:
 What about allowing a control file like this:
 
# hstore extension
comment = 'data type for storing sets of (key, value) pairs'
default_version = '1.3'
directory = 'local/hstore-new'
module_pathname = '$directory/hstore'
relocatable = true
 
 The current way directory is parsed, relative pathnames are allowed and
 will be resolved in SHAREDIR, which is where we find the extension/ main
 directory, where currently live extension control files.
 
 With such a feature, we would allow module_pathname to reuse the same
 location as where we're going to find auxilliary control files and
 scripts.

If I understand this correctly, then installing an extension in a
nonstandard directory would require editing (or otherwise changing) the
control file.

That doesn't seem very attractive.  In fact, it would fail my main use
case for all of this, which is being able to test extensions before
installing them.

I think we should get rid of the module_pathname business, and
extensions' SQL files should just refer to the base file name and rely
on the dynamic library path to find the files.  What would we lose if we
did that?
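
To illustrate (function and library names here are purely illustrative): an
extension script today usually goes through MODULE_PATHNAME, which the control
file expands to a concrete path, whereas the alternative is a bare library
name resolved via dynamic_library_path:

-- with module_pathname = '$libdir/my_ext' substituted into the script:
create function my_func(int) returns int
    as '$libdir/my_ext', 'my_func' language c strict;

-- relying on the dynamic library path instead:
create or replace function my_func(int) returns int
    as 'my_ext', 'my_func' language c strict;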



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 7:27 PM, Andrew Dunstan and...@dunslane.net wrote:
 On 02/27/2014 10:09 PM, Peter Geoghegan wrote:

 * It hardly makes any sense to have an in-core jsonb if it comes with
 no batteries included. You need to install hstore for this jsonb
 implementation to be of *any* use anyway.



 This is complete nonsense. Right out of the box today a considerable number
 of the json operations are likely to be considerable faster.

We need the hstore operator classes to have something interesting.
That's what those people at trade shows and developer conferences that
Josh refers to actually care about. But in any case, even that's kind
of beside the point.

I'm hearing a lot about how important jsonb is, but not much on how to
make the simple jsonb cases that are currently broken (as illustrated
by my earlier examples [1], [2]) work. Surely you'd agree that those
are problematic. We need a better solution than an implicit cast. What
do you propose? I think we might be able to fix at least some things
with judicious use of function overloading, or we could if it didn't
seem incongruous to have to do so given the role of the hstore module
in the extant patch.

[1] 
http://www.postgresql.org/message-id/CAM3SWZR2mWUNFoQdWQmEsJsvaEBqq6jhfCM1Wevwc7r=tpf...@mail.gmail.com

[2] 
http://www.postgresql.org/message-id/cam3swzslybxywh6p2pghhfgzmzkhqbwkfr83mrzqvsoyqfb...@mail.gmail.com
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Stephen Frost
Peter,

* Peter Eisentraut (pete...@gmx.net) wrote:
 On 2/27/14, 9:08 PM, Stephen Frost wrote:
  Extensions can't depend on other extensions directly- hence you can't
  write an extension that depends on hstore, which sucks.
 
 Sure they can, see transforms.
 
 (Or if you disagree, download that patch and demo it, because I'd like
 to know. ;-) )

The issue is if there's a direct reference from one extension to another
extension- we're talking C level function call here.  If the extensions
aren't loaded in the correct order then you'll run into problems.  I've
not tried to work out getting one to actually link to the other, so
they're pulled in together, but it doesn't strike me as great answer
either.  Then there's the questions around versioning, etc...

Presumably, using shared_preload_libraries would work to get the .so's
loaded in the right order, but it doesn't strike me as appropriate to
require that.

And, for my 2c, I'd like to see jsonb as a built-in type *anyway*.  Even
if it's possible to fight with things and make inter-extension
dependency work, it's not trivial and would likely discourage new
developers trying to use it.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 8:05 PM, Stephen Frost sfr...@snowman.net wrote:
 And, for my 2c, I'd like to see jsonb as a built-in type *anyway*.  Even
 if it's possible to fight with things and make inter-extension
 dependency work, it's not trivial and would likely discourage new
 developers trying to use it.

I'm not advocating authoring two extensions. I am tentatively
suggesting that we look at one extension for everything. That may well
be the least worst thing.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 8:07 PM, Peter Geoghegan p...@heroku.com wrote:
 On Thu, Feb 27, 2014 at 8:05 PM, Stephen Frost sfr...@snowman.net wrote:
 And, for my 2c, I'd like to see jsonb as a built-in type *anyway*.  Even
 if it's possible to fight with things and make inter-extension
 dependency work, it's not trivial and would likely discourage new
 developers trying to use it.

 I'm not advocating authoring two extensions. I am tentatively
 suggesting that we look at one extension for everything. That may well
 be the least worst thing.

(Not that it's clear that you imagined I was, but I note it all the same).


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Joshua D. Drake


On 02/27/2014 05:54 PM, Josh Berkus wrote:






And it's not just that broadly speaking most people would prefer
the interface to speak JSON; it's that a JSONish interface for indexed
heirachical data is a Big Feature which will drive adoption among web
developers, and hstore2 without JSON support simply is not.  At trade
shows and developer conferences, I get more questions about PostgreSQL's
JSON support than I do for any new feature since streaming replication.



Just to back this up. This is not anecdotal. I have multiple customers 
performing very large development projects right now. Every single one 
of them is interested in the pros/cons of using PostgreSQL and JSON.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 8:04 PM, Peter Geoghegan p...@heroku.com wrote:

 I'm hearing a lot about how important jsonb is, but not much on how to
 make the simple jsonb cases that are currently broken (as illustrated
 by my earlier examples [1], [2]) work.

Surely, the answer is to define a jsonb || jsonb (and likely the other 
combinatorics of json and jsonb), along with the appropriate GIN and GiST 
interfaces for jsonb.  Why would that not work?

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 8:23 PM, Christophe Pettus x...@thebuild.com wrote:
 On Feb 27, 2014, at 8:04 PM, Peter Geoghegan p...@heroku.com wrote:

 I'm hearing a lot about how important jsonb is, but not much on how to
 make the simple jsonb cases that are currently broken (as illustrated
 by my earlier examples [1], [2]) work.

 Surely, the answer is to define a jsonb || jsonb (and likely the other 
 combinatorics of json and jsonb), along with the appropriate GIN and GiST 
 interfaces for jsonb.  Why would that not work?

I'm not the one opposed to putting jsonb stuff in the hstore module!


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 8:31 PM, Peter Geoghegan p...@heroku.com wrote:

 On Thu, Feb 27, 2014 at 8:23 PM, Christophe Pettus x...@thebuild.com wrote:
 Surely, the answer is to define a jsonb || jsonb (and likely the other 
 combinatorics of json and jsonb), along with the appropriate GIN and GiST 
 interfaces for jsonb.  Why would that not work?
 
 I'm not the one opposed to putting jsonb stuff in the hstore module!

My proposal is that we break the dependencies of jsonb (at least, at the 
user-visible level) on hstore2, thus allowing it in core successfully. jsonb || 
jsonb returning hstore seems like a bug to me, not a feature we should be 
supporting.

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extension_control_path

2014-02-27 Thread Stephen Frost
* Peter Eisentraut (pete...@gmx.net) wrote:
 On 2/27/14, 6:04 AM, Dimitri Fontaine wrote:
  What about allowing a control file like this:
  
 # hstore extension
 comment = 'data type for storing sets of (key, value) pairs'
 default_version = '1.3'
 directory = 'local/hstore-new'
 module_pathname = '$directory/hstore'
 relocatable = true
  
  The current way directory is parsed, relative pathnames are allowed and
  will be resolved in SHAREDIR, which is where we find the extension/ main
  directory, where currently live extension control files.
  
  With such a feature, we would allow module_pathname to reuse the same
  location as where we're going to find auxilliary control files and
  scripts.
 
 If I understand this correctly, then installing an extension in a
 nonstandard directory would require editing (or otherwise changing) the
 control file.

It depends on what's in the control file.  What this would do is give
developers another option for where the .so resides that means a
directory relative to the control file, which could be quite handy.

 That doesn't seem very attractive.  In fact, it would fail my main use
 case for all of this, which is being able to test extensions before
 installing them.

Not sure why that wouldn't work...?

 I think we should get rid of the module_pathname business, and
 extensions' SQL files should just refer to the base file name and rely
 on the dynamic library path to find the files.  What would we lose if we
 did that?

As I pointed out up-thread, you'd have to keep adding more and more
directories to both the control-file paths to search and the
dynamic-shared-libraries path.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Craig Ringer
On 02/28/2014 12:43 PM, Christophe Pettus wrote:
 My proposal is that we break the dependencies of jsonb (at least, at the 
 user-visible level) on hstore2, thus allowing it in core successfully. jsonb 
 || jsonb returning hstore seems like a bug to me, not a feature we should be 
 supporting.

Urgh, really?

That's not something I'd be excited to be stuck with into the future.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 8:43 PM, Christophe Pettus x...@thebuild.com wrote:
 I'm not the one opposed to putting jsonb stuff in the hstore module!

 My proposal is that we break the dependencies of jsonb (at least, at the 
 user-visible level) on
 hstore2, thus allowing it in core successfully. jsonb || jsonb returning 
 hstore seems like a bug
 to me, not a feature we should be supporting.

Of course it's a bug.

The only problem with that is now you have to move the implementation
of ||, plus a bunch of other hstore operators into core. That seems
like a more difficult direction to move in from a practical
perspective, and I'm not sure that you won't hit a snag elsewhere. But
you must do this in order to make what you describe work; obviously
you can't break jsonb's dependency on hstore if users must have hstore
installed to get a || operator. In short, jsonb and hstore are tied at
the hip (which I don't think is unreasonable), and if you insist on
having one in core, you almost need to have both there (with hstore
proper perhaps just consisting of stub functions and io routines).

I don't understand the aversion to putting jsonb in the hstore
extension. What's wrong with having the code live in an extension,
really? I suppose that putting it in core would be slightly preferable
given the strategic importance of jsonb, but it's not something that
I'd weigh too highly. Right now, I'm much more concerned about finding
*some* way of integrating jsonb that is broadly acceptable.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 9:12 PM, Craig Ringer cr...@2ndquadrant.com wrote:

 On 02/28/2014 12:43 PM, Christophe Pettus wrote:
 My proposal is that we break the dependencies of jsonb (at least, at the 
 user-visible level) on hstore2, thus allowing it in core successfully. jsonb 
 || jsonb returning hstore seems like a bug to me, not a feature we should be 
 supporting.
 
 Urgh, really?
 
 That's not something I'd be excited to be stuck with into the future.

The reason that we're even here is that there's no jsonb || jsonb operator (or 
the other operators that one would expect).

If you try || without the hstore, you get an error, of course:

postgres=# select '{"foo":{"bar":"yellow"}}'::jsonb || '{}'::jsonb;
ERROR:  operator does not exist: jsonb || jsonb
LINE 1: select '{"foo":{"bar":"yellow"}}'::jsonb || '{}'::jsonb;
                                                 ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts.


The reason it works with hstore installed is that there's an implicit cast from 
hstore to jsonb:

postgres=# create extension hstore;
CREATE EXTENSION
postgres=# select '{"foo":{"bar":"yellow"}}'::jsonb || '{}'::jsonb;
         ?column?          
---------------------------
 "foo"=>{"bar"=>"yellow"}
(1 row)

--

But I think we're piling broken on broken here.  Just creating an appropriate 
jsonb || jsonb operator solves this problem.  That seems the clear route 
forward.
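
For what it's worth, the shape of such an operator is the usual one (purely a
hypothetical sketch: jsonb_concat is an illustrative name, not something the
posted patch provides, and an in-core version would live in
pg_proc.h/pg_operator.h rather than SQL):

create function jsonb_concat(jsonb, jsonb) returns jsonb
    as '$libdir/jsonb_extra', 'jsonb_concat' language c immutable strict;

create operator || (leftarg = jsonb, rightarg = jsonb, procedure = jsonb_concat);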

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 9:28 PM, Peter Geoghegan p...@heroku.com wrote:

 The only problem with that is now you have to move the implementation
 of ||, plus a bunch of other hstore operators into core. That seems
 like a more difficult direction to move in from a practical
 perspective, and I'm not sure that you won't hit a snag elsewhere.

Implementing operators for new types in PostgreSQL is pretty well-trod ground.  
I really don't know what snags we might hit.

 I suppose that putting it in core would be slightly preferable
 given the strategic importance of jsonb, but it's not something that
 I'd weigh too highly.

I'm completely unsure how to parse the idea that something is strategically 
important but we shouldn't put it in core.  If json was important enough to 
make it into core, jsonb certainly is.

Honestly, I really don't understand the resistance to putting jsonb in core.  
There are missing operators, yes; that's a very straight-forward hole to plug.

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Kyotaro HORIGUCHI
Hello,

At Thu, 27 Feb 2014 21:53:52 -0500, Noah Misch wrote
 On Thu, Feb 27, 2014 at 05:33:47PM -0500, Tom Lane wrote:
  Noah Misch n...@leadboat.com writes:
   If the attached patch version looks reasonable, I will commit it.
  
  The test case is completely bogus, as the query explained is significantly
  different from the query executed.  I'm not sure whether you can just
  remove the extra ORDER BY column without getting machine-dependent
  results, though.
 
 Each query tests something slightly different.  The EXPLAIN verifies that we
 can MergeAppend given this query structure, and the plain SELECT verifies that
 any join tree contortions we made to achieve that do not change the answer.

I think Tom said that the second query can yield the same result
even if the 2nd column in the ORDER BY is removed, so its pertinence
seems doubtful. Actually, the altered query gave me the same
result on my machine (CentOS 6.5/x86-64).

  More generally, I'm afraid the whole approach is probably wrong, or at
  least not solving all problems in this area, because of this:
  
   Incidentally, I tried adding an assertion that append_rel_list does not 
   show
   one appendrel as a direct child of another.  The following query, 
   off-topic
   for the patch at hand, triggered that assertion:
   SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0
   UNION ALL
   SELECT 0 FROM (SELECT 0 UNION ALL SELECT 0) t0;
  
  That's not off topic at all; it shows that there's not been any effort
  to date to flatten appendrel membership, and therefore this partial
  implementation is going to miss some cases.  It doesn't help to merge an
  inheritance-based appendrel into its parent if the query ORDER BY is still
  a level or two above that due to UNION ALLs.
 
 Nonetheless, I find it reasonable to accept a patch that makes
 expand_inherited_tables() avoid introducing nested appendrels.  Making
 pull_up_subqueries() do the same can be a separate effort.  There will still
 be a pile of ways to stifle MergeAppend, including everything that makes
 is_simple_union_all() return false.  Having said that, both you and Kyotaro
 sound keen to achieve an appendrel flatness invariant in a single patch.  If
 that's the consensus, I'm fine with bouncing this back so it can happen.

I understand what that query causes and still don't have a definite
opinion on whether it is better to fix them individually or in one
go. Nonetheless, I'd feel more at ease flattening them altogether
regardless of their origin.

  I wonder whether we should consider adding a pass to flatten any nested
  appendrels after we're done creating them all.
 
 We did consider that upthread.  It's a valid option, but I remain more
 inclined to teach pull_up_subqueries() to preserve flatness just like
 expand_inherited_tables() will.

Yes, the old dumped version of the typ2 patch did so. It flattened
the appendrel tree for the query properly. Let me hear the reason
you prefer to do so.

  Or alternatively, perhaps
  rather than changing the representation invariant, we need to take a
  harder look at why ordering info isn't getting pushed down through
  appendrels.
 
 That could facilitate MergeAppend in considerably more places, such as UNION
 ALL containing non-simple branches.  I don't have much of a sense concerning
 what such a patch would entail.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2014-02-27 Thread Kyotaro HORIGUCHI
Sorry, I ran the wrong test.

   Noah Misch n...@leadboat.com writes:
If the attached patch version looks reasonable, I will commit it.
   
   The test case is completely bogus, as the query explained is significantly
   different from the query executed.  I'm not sure whether you can just
   remove the extra ORDER BY column without getting machine-dependent
   results, though.
  
  Each query tests something slightly different.  The EXPLAIN verifies that we
  can MergeAppend given this query structure, and the plain SELECT verifies 
  that
  any join tree contortions we made to achieve that do not change the answer.
 
 I think Tom said that the second query can yield the same result
 even if the 2nd column in orderby is removed so the pertinence of
 it seems doubtful. Actually the altered query gave me the same
 result on my workset (CentOS6.5/x86-64).

No, no! It actually returned a *different* result. I
accidentally(?) added 'desc' to the '2'. Please forget it.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 9:35 PM, Christophe Pettus x...@thebuild.com wrote:
 The only problem with that is now you have to move the implementation
 of ||, plus a bunch of other hstore operators into core. That seems
 like a more difficult direction to move in from a practical
 perspective, and I'm not sure that you won't hit a snag elsewhere.

 Implementing operators for new types in PostgreSQL is pretty well-trod 
 ground.  I really don't know what snags we might hit.

I don't find that very reassuring.

 I suppose that putting it in core would be slightly preferable
 given the strategic importance of jsonb, but it's not something that
 I'd weigh too highly.
 I'm completely unsure how to parse the idea that something is strategically 
 important but we shouldn't put it in core.  If json was important enough to 
 make it into core, jsonb certainly is.

That is completely orthogonal to everything I've said. To be clear:
I'm not suggesting that we don't put jsonb in core because it's not
important enough - it has nothing to do with that whatsoever - and
besides, I don't understand why an extension is seen as not befitting
of a more important feature.

 Honestly, I really don't understand the resistance to putting jsonb in core.  
 There are missing operators, yes; that's a very straight-forward hole to plug.

You are basically suggesting putting all of hstore in core, because
jsonb and hstore are approximately the same thing. That seems quite a
bit more controversial than putting everything in the hstore
extension. I doubt that you can reasonably take any half measure
between those two extremes, and one seems a lot less controversial
than the other. This patch already seems controversial enough to me.
It's as simple as that.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Andres Freund
On 2014-02-27 22:10:22 -0500, Peter Eisentraut wrote:
 On 2/26/14, 10:42 PM, Andrew Dunstan wrote:
  Extensions can't call each other's code.
 
 That's not necessarily so.

I don't think we have portable infrastructure to do it properly yet,
without a detour via the fmgr. If I am wrong, what's the infrastructure?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another possible corruption bug in 9.3.2 or possibly a known MultiXact problem?

2014-02-27 Thread Andres Freund
On 2014-02-27 23:41:08 +, Greg Stark wrote:
 Though I notice something I can't understand here.
 
 After activating the new clone subsequent attempts to select rows from
 the page bump the LSN, presumably due to touching hint bits (since the
 prune xid hasn't changed). But the checksum hasn't changed even after
 running CHECKPOINT.

Are you running with full_page_writes=off?

Only delete and update do a PageSetPrunable(), so prune_xid not being
changed doesn't say much...

 How is it possible for the LSN to get updated without changing the checksum?

Generally the LSN is computed when writing, not when a buffer is
modified, so that's not particularly surprising. It'd be interesting to
see what the records are that end on those LSNs.

It'd probably nice to add the capability to dump records that end in a
particular location to pg_xlogdump...
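
(As an aside, an easy way to watch this from SQL is pageinspect -- the
relation name and block number below are illustrative:

create extension pageinspect;

-- page_header() shows the page LSN along with the rest of the page header,
-- so you can see whether a write actually stamped a new LSN on the page
select * from page_header(get_raw_page('some_table', 0));
)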

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 9:59 PM, Peter Geoghegan p...@heroku.com wrote:

 I don't find that very reassuring.

Obviously, we have to try it, and that will decide it.

 I don't understand why an extension is seen as not befitting
 of a more important feature.

contrib/ is considered a secondary set of features; I routinely get pushback 
from clients about using hstore because it's not in core, and they are thus 
suspicious of it.  The educational project required to change that far exceeds 
any technical work we are talking about here.  There's a very large 
presentational difference between having a feature in contrib/ and in core, at 
the minimum, setting aside the technical issues (such as the 
extensions-calling-extensions problem).

We have an existence proof of this already: if there was absolutely no 
difference between having things being in contrib/ and being in core, full text 
search would still be in contrib/.

 You are basically suggesting putting all of hstore in core, because
 jsonb and hstore are approximately the same thing. That seem quite a
 bit more controversial than putting everything in the hstore
 extension.

Well, controversy is just a way of saying there are people who don't like the 
idea, and I get that.  But I don't see the basis for the dislike.

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Heikki Linnakangas

On 02/28/2014 09:02 AM, Christophe Pettus wrote:

contrib/ is considered a secondary set of features; I routinely get pushback 
from clients about using hstore because it's not in core, and they are thus 
suspicious of it.  The educational project required to change that far exceeds 
any technical work we are talking about here..  There's a very large 
presentational difference between having a feature in contrib/ and in core, at 
the minimum, setting aside the technical issues (such as the 
extensions-calling-extensions problem).

We have an existence proof of this already: if there was absolutely no 
difference between having things being in contrib/ and being in core, full text 
search would still be in contrib/.


Although presentation was probably the main motivation for moving 
full-text search into core, there were good technical reasons for that 
too. Full-text search in contrib had a bunch of catalog-like tables to 
store the dictionaries etc., and cumbersome functions to manipulate them. 
When it was moved into core, we created new SQL commands for that stuff, 
which is much clearer. The json stuff doesn't have that; it would be well 
suited to be an extension from a technical point of view.


(This is not an opinion statement on what I think we should do. I 
haven't been following the discussion, so I'm going to just whine 
afterwards ;-) )


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 11:02 PM, Christophe Pettus x...@thebuild.com wrote:
 On Feb 27, 2014, at 9:59 PM, Peter Geoghegan p...@heroku.com wrote:
 I don't find that very reassuring.

 Obviously, we have to try it, and that will decide it.

I don't think that's obvious at all. Anyone is free to spend their
time however they please, but personally I don't think that that's a
wise use of anyone's time.

 contrib/ is considered a secondary set of features; I routinely get pushback 
 from clients about using hstore because it's not in core, and they are thus 
 suspicious of it.  The educational project required to change that far 
 exceeds any technical work we are talking about here..  There's a very large 
 presentational difference between having a feature in contrib/ and in core, 
 at the minimum, setting aside the technical issues (such as the 
 extensions-calling-extensions problem).

There are no technical issues of any real consequence in this specific instance.

 We have an existence proof of this already: if there was absolutely no 
 difference between having things being in contrib/ and being in core, full 
 text search would still be in contrib/.

I never said there was no difference, and whatever difference exists
varies considerably, as Heikki points out. I myself want to move
pg_stat_statements to core, for example, for exactly one very specific
reason: so that I can reserve a small amount of shared memory by
default so that it can be enabled without a restart at short notice.

 You are basically suggesting putting all of hstore in core, because
 jsonb and hstore are approximately the same thing. That seem quite a
 bit more controversial than putting everything in the hstore
 extension.

 Well, controversy is just a way of saying there are people who don't like 
 the idea, and I get that.  But I don't see the basis for the dislike.

Yes, people who have the ability to block the feature entirely. I am
attempting to build consensus by reaching a compromise that weighs
everyone's concerns.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Request improve pg_stat_statements module

2014-02-27 Thread pgsql-kr
I patched the pg_stat_statements module to add one column
and sent it to the author, but
I received a rejection mail because of an unknown user :(

so I am posting to this mailing list.

I need the last execution time of each query, because I want to analyse
queries ordered by their most recent execution.

The patch code is below;
please review it, and
I hope it can be applied in the next version.

--- diff begin
--- ../pg_stat_statements.orig/pg_stat_statements.c	2014-02-18 04:29:55.0 +0900
+++ pg_stat_statements.c	2014-02-28 15:34:38.0 +0900
@@ -59,6 +59,7 @@
 #include "storage/spin.h"
 #include "tcop/utility.h"
 #include "utils/builtins.h"
+#include "utils/timestamp.h"
 
 
 PG_MODULE_MAGIC;
@@ -116,6 +117,7 @@
 	double		blk_read_time;	/* time spent reading, in msec */
 	double		blk_write_time;	/* time spent writing, in msec */
 	double		usage;			/* usage factor */
+	TimestampTz	last_executed_timestamp;	/* last executed timestamp of query */
 } Counters;
 
 /*
@@ -1043,6 +1045,8 @@
 		e->counters.blk_read_time += INSTR_TIME_GET_MILLISEC(bufusage->blk_read_time);
 		e->counters.blk_write_time += INSTR_TIME_GET_MILLISEC(bufusage->blk_write_time);
 		e->counters.usage += USAGE_EXEC(total_time);
+		/* last executed timestamp */
+		e->counters.last_executed_timestamp = GetCurrentTimestamp();
 
 		SpinLockRelease(&e->mutex);
 	}
@@ -1069,7 +1073,8 @@
 }
 
 #define PG_STAT_STATEMENTS_COLS_V1_0	14
-#define PG_STAT_STATEMENTS_COLS		18
+#define PG_STAT_STATEMENTS_COLS_V1_1	18
+#define PG_STAT_STATEMENTS_COLS		19
 
 /*
  * Retrieve statement statistics.
@@ -1087,6 +1092,7 @@
 	HASH_SEQ_STATUS hash_seq;
 	pgssEntry  *entry;
 	bool		sql_supports_v1_1_counters = true;
+	bool		sql_supports_v1_2_counters = true;
 
 	if (!pgss || !pgss_hash)
 		ereport(ERROR,
@@ -1107,8 +1113,12 @@
 	/* Build a tuple descriptor for our result type */
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
-	if (tupdesc->natts == PG_STAT_STATEMENTS_COLS_V1_0)
+	if (tupdesc->natts == PG_STAT_STATEMENTS_COLS_V1_0){
 		sql_supports_v1_1_counters = false;
+		sql_supports_v1_2_counters = false;
+	}
+	if (tupdesc->natts == PG_STAT_STATEMENTS_COLS_V1_1)
+		sql_supports_v1_2_counters = false;
 
 	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
@@ -1185,8 +1195,15 @@
 			values[i++] = Float8GetDatumFast(tmp.blk_read_time);
 			values[i++] = Float8GetDatumFast(tmp.blk_write_time);
 		}
+		// last_executed_timestamp
+		if (sql_supports_v1_2_counters)
+			values[i++] = TimestampTzGetDatum(tmp.last_executed_timestamp);
+
 
-		Assert(i == (sql_supports_v1_1_counters ?
-					 PG_STAT_STATEMENTS_COLS : PG_STAT_STATEMENTS_COLS_V1_0));
+		if(sql_supports_v1_2_counters)
+			Assert(i == PG_STAT_STATEMENTS_COLS);
+		else
+			Assert(i == (sql_supports_v1_1_counters ?
+						 PG_STAT_STATEMENTS_COLS : PG_STAT_STATEMENTS_COLS_V1_0));
 
 		tuplestore_putvalues(tupstore, tupdesc, values, nulls);

-- end of diff
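
For illustration, with the new column exposed the intended usage would be
something like this (a sketch only; it also assumes the extension's SQL script
is updated to add the column to the pg_stat_statements view, which the C diff
above does not include):

select query, calls, total_time, last_executed_timestamp
  from pg_stat_statements
 order by last_executed_timestamp desc
 limit 20;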




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Christophe Pettus

On Feb 27, 2014, at 11:15 PM, Peter Geoghegan p...@heroku.com wrote:

 I don't think that's obvious at all. Anyone is free to spend their
 time however they please, but personally I don't think that that's a
 wise use of anyone's time.

I believe you are misunderstanding me.  If there are actual technical problems 
or snags to migrating jsonb into core with full operator and index support, 
then the way we find out is to do the implementation, unless you know of a 
specific technical holdup already.

 There are no technical issues of any real consequence in this specific 
 instance.

There was no technical reason that json couldn't have been an extension, 
either, but there were very compelling presentational reasons to have it in 
core.  jsonb has exactly the same presentational issues.

 Yes, people who have the ability to block the feature entirely. I am
 attempting to build consensus by reaching a compromise that weighs
 everyone's concerns.

The thing I still haven't heard is why jsonb in core is a bad idea, except that 
it is too much code.  Is that the objection?

--
-- Christophe Pettus
   x...@thebuild.com



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unfortunate choice of short switch name in pgbench

2014-02-27 Thread KONDO Mitsumasa
(2014/02/28 2:39), Tom Lane wrote:
 Fabien COELHO coe...@cri.ensmp.fr writes:
 Yeah, but they don't make -P take an integer argument.  It's that
 little frammish that makes this problem significant.
 
 I do not see a strong case to make options with arguments case insensitive
 as a general rule. If this is done for -p/-P, why not -t/-T?
I'll say the same thing. And if we remove the -P short option in pgbench, it
means that -P with an integer argument will be forbidden in a postgres command.
Surely, we don't want that.

 If you really fell you must remove -P, please replace it by another
 one-letter, I use this option nearly everytime a run pgbench.
 
 Meh.  If I thought -P would be that popular, I'd expect people to get
 used to the issue.  I don't believe it though.
At least, a user who is interested in postgres performance tuning (including
kernel options, etc.) will often use this option. I recommended this feature
because we can see bottlenecks which we have not seen before :) I believe you
will also come to like it more and more as you use it.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb and nested hstore

2014-02-27 Thread Peter Geoghegan
On Thu, Feb 27, 2014 at 11:36 PM, Christophe Pettus x...@thebuild.com wrote:
 There was no technical reason that json couldn't have been an extension, 
 either, but there were very compelling presentational reasons to have it in 
 core.  jsonb has exactly the same presentational issues.

There were also no compelling reasons why json should have been an
extension. The two situations are not at all comparable. In any case,
no author of this patch has proposed any solution to the casting
problems described with using the jsonb type with the new (and
existing) hstore operators. For that reason, I won't comment further
on this until I hear a more concrete proposal.

 Yes, people who have the ability to block the feature entirely. I am
 attempting to build consensus by reaching a compromise that weighs
 everyone's concerns.

 The thing I still haven't heard is why jsonb in core is a bad idea, except 
 that it is too much code.  Is that the objection?

I suspect that it's going to be considered odd to have code in core
that considers compatibility with earlier versions of hstore, back
when it was an extension, with calling stub functions, for one thing.
Having hstore be almost but not quite in core may be seen as a
contortion. Is that really the conversation you'd prefer to have at
this late stage? In any case, as I say, if that's the patch that
Andres or Oleg or Teodor really want to submit, then by all means let
them submit it. I maintain that the *current* state of affairs, where
jsonb isn't sure if it's in core or is an extension will not fly.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers