Re: [HACKERS] pg_restore -t should match views, matviews, and foreign tables

2015-04-09 Thread Craig Ringer
On 8 April 2015 at 05:05, David G. Johnston david.g.johns...@gmail.com
wrote:

 On Tue, Apr 7, 2015 at 1:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Peter Eisentraut pete...@gmx.net writes:
  On 3/31/15 11:01 PM, Craig Ringer wrote:
  this patch adds support for views, foreign tables, and materialised
  views to the pg_restore -t flag.

  I think this is a good change.  Any concerns?

 Are we happy with pg_dump/pg_restore not distinguishing these objects
 by type?  It seems rather analogous to letting ALTER TABLE work on views
 etc.  Personally I'm fine with this, but certainly some people have
 complained about that approach so far as ALTER is concerned.  (But the
 implication would be that we'd need four distinct switches, which is
 not an outcome I favor.)


 The pg_dump documentation for the equivalent -t switch states:

 "Dump only tables (or views or sequences or foreign tables) matching
 table"

 Does pg_dump need to be updated to address materialized views here?


The pg_dump code handles materialized views, the docs weren't updated. I
added mention of them in the next rev of the patch to pg_restore.


 Does pg_restore need to be updated to address sequences here?


I'd be against that if pg_dump didn't already behave the same way. Given
that, yes, I think so.


 ISTM that the two should mirror each other.


Ideally, yes, but the differences go much deeper than this.

To get the equivalent of:

pg_restore -n myschema -t sometable

in pg_dump you need:

pg_dump -t \"myschema\".\"sometable\"

pg_dump -n myschema -t sometable is **not** equivalent. In fact, the -n is
ignored, and -t will match using the search_path.

So they're never really going to be the same, just similar enough to catch
people out most of the time.

I think you're right that sequences should be included by pg_restore since
they are by pg_dump, though. So v3 patch attached.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From dc8985d4aa5e995e5107fe9dcb65ec061dc378eb Mon Sep 17 00:00:00 2001
From: Craig Ringer cr...@2ndquadrant.com
Date: Wed, 1 Apr 2015 10:46:29 +0800
Subject: [PATCH] pg_restore -t should select views, matviews, and foreign
 tables

Currently pg_restore's '-t' option selects only tables, not other
relations. It should be able to match anything that behaves like
a relation in the relation namespace, anything that's interchangeable
with a table, including:

* Normal relations
* Views
* Materialized views
* Foreign tables
* Sequences

Sequences are matched because pg_dump -t matches them, even though
their status as relations is really just an implementation detail.

Indexes are not matched because pg_dump -t doesn't match them, and
because they aren't really relations.

TOAST tables aren't matched; they're an implementation detail.

See:
  http://www.postgresql.org/message-id/camsr+ygj50tvtvk4dbp66gajeoc0kap6kxfehaom+neqmhv...@mail.gmail.com
---
 doc/src/sgml/ref/pg_dump.sgml|  2 +-
 doc/src/sgml/ref/pg_restore.sgml | 25 ++---
 src/bin/pg_dump/pg_backup_archiver.c |  7 ++-
 3 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index a6e7b08..7f7da9e 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -501,7 +501,7 @@ PostgreSQL documentation
   <term><option>--table=<replaceable class="parameter">table</replaceable></option></term>
   <listitem>
    <para>
-    Dump only tables (or views or sequences or foreign tables) matching
+    Dump only tables (or views, sequences, foreign tables or materialized views) matching
     <replaceable class="parameter">table</replaceable>.  Multiple tables
     can be selected by writing multiple <option>-t</> switches.  Also, the
     <replaceable class="parameter">table</replaceable> parameter is
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 9f8dc00..9119e3e 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -405,9 +405,28 @@
   <term><option>--table=<replaceable class="parameter">table</replaceable></option></term>
   <listitem>
    <para>
-    Restore definition and/or data of named table only. Multiple tables
-    may be specified with multiple <option>-t</> switches. This can be
-    combined with the <option>-n</option> option to specify a schema.
+    Restore definition and/or data of the named table (or other relation)
+    only. This flag matches views, materialized views and foreign tables as
+    well as ordinary tables. Multiple relations may be specified with
+    multiple <option>-t</> switches. This can be combined with the
+    <option>-n</option> option to specify a schema.
+    <note>
+     <para>
+      When <literal>-t</literal> is specified,
+      <application>pg_restore</application> makes no attempt to restore any
+      other database objects that the 

Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Michael Paquier
On Thu, Apr 9, 2015 at 5:12 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Thu, Apr 9, 2015 at 4:30 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Mon, Apr 6, 2015 at 11:02 AM, Petr Jelinek p...@2ndquadrant.com wrote:

 On 06/04/15 14:30, Petr Jelinek wrote:

 On 06/04/15 11:02, Simon Riggs wrote:

 Are we ready for a final detailed review and commit?


 I plan to send v12 in the evening with some additional changes that came
 up from Amit's comments + some improvements to error reporting. I think
 it will be ready then.


 Ok so here it is.

 Changes vs v11:
 - changed input parameter list to expr_list
 - improved error reporting, particularly when input parameters are wrong
 - fixed SELECT docs to show correct syntax and mention that there can be
 more sampling methods
 - added name of the sampling method to the explain output - I don't like
 the code much there as it has to look into RTE but on the other hand I don't
 want to create new scan node just so it can hold the name of the sampling
 method for explain
 - made views containing TABLESAMPLE clause not autoupdatable
 - added PageIsAllVisible() check before trying to check for tuple
 visibility
 - some typo/white space fixes



 Compiler warnings:
 explain.c: In function 'ExplainNode':
 explain.c:861: warning: 'sname' may be used uninitialized in this function


 Docs spellings:

 "PostgreSQL distrribution"  extra r.

 "The optional parameter REPEATABLE acceps"   accepts.  But I don't know that
 'accepts' is the right word.  It makes the seed value sound optional to
 REPEATABLE.

 "each block having same chance"  should have "the" before "same".

 "Both of those sampling methods currently"  I think it should be "these"
 not "those", as this sentence is immediately after their introduction, not
 at a distance.

 "...tuple contents and decides if to return in, or zero if none"  Something
 here is confusing. "return it", not "return in"?

 Other comments:

 Do we want tab completions for psql?  (I will never remember how to spell
 BERNOULLI).

 Yes. I think so.

 Needs a Cat version bump.

 The committer who will pick up this patch will normally do it.

 Patch 1 is simple enough and looks fine to me.

 Regarding patch 2...
 I found for now some typos:
 +  <title><structname>pg_tabesample_method</structname></title>
 +<productname>PostgreSQL</productname> distrribution:

 Also, I am wondering if the sampling logic based on block analysis is
 actually correct, for example for now this fails and I think that we
 should support it:
 =# with query_select as (select generate_series(1, 10) as a) select
 query_select.a from query_select tablesample system (100.0/11)
 REPEATABLE ();
 ERROR:  42P01: relation query_select does not exist

 How does the SQL spec define exactly TABLESAMPLE? Shouldn't we get a
 sample from a result set?
 Thoughts?

Just to be clear, the example above being misleading... Doing table
sampling using SYSTEM at physical level makes sense. In this case I
think that we should properly error out when trying to use this method
on something not present at physical level. But I am not sure that
this restriction applies to BERNOUILLI: you may want to apply it on
other things than physical relations, like views or results of WITH
clauses. Also, based on the fact that we support custom sampling
methods, I think that it should be up to the sampling method to define
on what kind of objects it supports sampling, and where it supports
sampling fetching, be it page-level fetching or analysis from an
existing set of tuples. Looking at the patch, TABLESAMPLE is just
allowed on tables and matviews, this limitation is too restrictive
IMO.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Row security violation error is misleading

2015-04-09 Thread Craig Ringer
On 9 April 2015 at 01:30, Dean Rasheed dean.a.rash...@gmail.com wrote:


 That doesn't match what the code currently does:

  * Also, allow extensions to add their own policies.
  *
  * Note that, as with the internal policies, if multiple policies are
  * returned then they will be combined into a single expression with
  * all of them OR'd together.  However, to avoid the situation of an
  * extension granting more access to a table than the internal policies
  * would allow, the extension's policies are AND'd with the internal
  * policies.  In other words - extensions can only provide further
  * filtering of the result set (or further reduce the set of records
  * allowed to be added).

 which seems reasonable, and means that if there are both internal and
 external policies, an allow all external policy would be a no-op.


Great, I'm glad to see that they're ANDed now.

I wasn't caught up with the current state of this. At some earlier point
policies from hooks were being ORed, which made mandatory access control
extensions impossible.

(I need to finish reading threads before replying).

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] Revisiting Re: BUG #8532: postgres fails to start with timezone-data =2013e

2015-04-09 Thread Ian Stakenvicius
Hey all -- so I know that Gentoo Linux is likely the only platform this
bug occurs under, but I got annoyed enough with it that I decided to
write a patch to fix this issue once and for all (or at least, help keep
it from happening).

The thread in question actually dealt with crashing on startup in
postgresql-9.1 and earlier, but all versions including the latest still
suffer from the inability to load timezone data via the pg_timezone_*
tables if /usr/share/zoneinfo contains filesystem loops.

To that end, the following helps resolve this issue by ensuring
single-level filesystem loops are detected and skipped.  The top half of
the patch only applies to postgresql-9.1 and earlier, while the second
half applies to all versions.

(hopefully the patch attached properly)

Regards,
Ian
--- a/src/timezone/pgtz.c	2015-02-02 15:45:23.0 -0500
+++ b/src/timezone/pgtz.c	2015-04-07 14:21:22.341832190 -0400
@@ -586,6 +586,12 @@
 		if (direntry->d_name[0] == '.')
 			continue;
 
+		/* if current working directory has the same name as current direntry name,
+		 * then skip as this is a recursive fs loop
+		 */
+		if (strncmp(direntry->d_name,tzdirsub,strlen(direntry->d_name)) == 0)
+			continue;
+
 		snprintf(tzdir + tzdir_orig_len, MAXPGPATH - tzdir_orig_len,
 				 "/%s", direntry->d_name);
 
@@ -1615,6 +1621,13 @@
 		if (direntry->d_name[0] == '.')
 			continue;
 
+		/* copy current working directory so that there is no risk of modification by basename(),
+		 * and compare to current direntry name; skip if they are the same as this is a recursive fs loop
+		 */
+		snprintf(fullname, MAXPGPATH, "%s", dir->dirname[dir->depth]);
+		if (strncmp(direntry->d_name,basename(fullname),strlen(direntry->d_name)) == 0)
+			continue;
+
 		snprintf(fullname, MAXPGPATH, "%s/%s",
 				 dir->dirname[dir->depth], direntry->d_name);
 		if (stat(fullname, &statbuf) != 0)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] libpq's multi-threaded SSL callback handling is busted

2015-04-09 Thread Jan Urbański

Peter Eisentraut writes:

 On 2/12/15 7:28 AM, Jan Urbański wrote:
 +#if OPENSSL_VERSION_NUMBER < 0x1000
 +/* OpenSSL 1.0.0 deprecates the CRYPTO_set_id_callback function and 
 provides a
 + * default implementation, so there's no need for our own. */

 I have some additional concerns about this.  It is true that OpenSSL
 1.0.0 deprecates CRYPTO_set_id_callback(), but it replaces it with
 CRYPTO_THREADID_set_callback().  There is no indication that you don't
 need to set a callback anymore.  The man page
 (https://www.openssl.org/docs/crypto/threads.html) still says you need
 to set two callbacks, and points to the new interface.

Truly, no good deed can ever go unpunished.

I spent around an hour tracking down why setting both callbacks
for OpenSSL >= 1.0.0 brought back the PHP segfaults. Turns out, in OpenSSL
there's *no way* to unregister a callback registered with
CRYPTO_THREADID_set_callback()!

Observe: 
https://github.com/openssl/openssl/blob/35a1cc90bc1795e8893c11e442790ee7f659fffb/crypto/thr_id.c#L174-L180

Once you set a callback, game over - unloading your library will cause a
segfault. I cannot express how joyful I felt when I discovered it.

 I suggest you remove this part from your patch and submit a separate
 patch for consideration if you want to.

At this point I will propose an even simpler patch (attached). I gave up on
trying to use the more modern CRYPTO_THREADID_* functions.

Right now, HEAD libpq won't compile against OpenSSL compiled with
OPENSSL_NO_DEPRECATED, which I think is fine, given the lack of complaints. So
let's just keep using the older, saner functions and ignore the THREADID crap.

By the way, might I take the opportunity to broach the subject of backpatching
this change, should it get accepted?

Cheers,
Jan

diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index a32af34..02c9177 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -806,9 +806,12 @@ pgtls_init(PGconn *conn)
 
 		if (ssl_open_connections++ == 0)
 		{
-			/* These are only required for threaded libcrypto applications */
-			CRYPTO_set_id_callback(pq_threadidcallback);
-			CRYPTO_set_locking_callback(pq_lockingcallback);
+			/* These are only required for threaded libcrypto applications, but
+			 * make sure we don't stomp on them if they're already set. */
+			if (CRYPTO_get_id_callback() == NULL)
+CRYPTO_set_id_callback(pq_threadidcallback);
+			if (CRYPTO_get_locking_callback() == NULL)
+CRYPTO_set_locking_callback(pq_lockingcallback);
 		}
 	}
 #endif   /* ENABLE_THREAD_SAFETY */
@@ -885,9 +888,12 @@ destroy_ssl_system(void)
 
 	if (pq_init_crypto_lib && ssl_open_connections == 0)
 	{
-		/* No connections left, unregister libcrypto callbacks */
-		CRYPTO_set_locking_callback(NULL);
-		CRYPTO_set_id_callback(NULL);
+		/* No connections left, unregister libcrypto callbacks, if no one
+		 * registered different ones in the meantime. */
+		if (CRYPTO_get_id_callback() == pq_threadidcallback)
+			CRYPTO_set_id_callback(NULL);
+		if (CRYPTO_get_locking_callback() == pq_lockingcallback)
+			CRYPTO_set_locking_callback(NULL);
 
 		/*
 		 * We don't free the lock array or the SSL_context. If we get another

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSL information view

2015-04-09 Thread Magnus Hagander
On Thu, Apr 9, 2015 at 3:20 PM, Andres Freund and...@anarazel.de wrote:

 Hi,

 On 2015-04-09 13:31:55 +0200, Magnus Hagander wrote:
  +  <row>
  +   <entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
  +   <entry>One row per connection (regular and replication), showing
 information about
  +    SSL used on this connection.
  +    See <xref linkend="pg-stat-ssl-view"> for details.
  +   </entry>
  +  </row>
  +

 I kinda wonder why this even separate from pg_stat_activity, at least
 from the POV of the function gathering the result. This way you have to
 join between pg_stat_activity and pg_stat_ssl which will mean that the
 two don't necessarily correspond to each other.


To keep from cluttering pg_stat_activity for the majority of users, who
are the ones not actually using SSL.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Apr 9, 2015 2:20 AM, Robert Haas robertmh...@gmail.com wrote:
 +1.

 Is that a +1 for naming it "moved", or for not having it? :-)

 I can definitely go with "moved". But I would like to keep it - the reason
 for having it in the first place is to make the history of the patch follow
 along when it goes to the next cf. If we don't have the move option, I
 think it's likely that we'll be back to the same patch having multiple
 completely unrelated entries in different cfs.

The problem with the whole thing is that you're asking the person doing
the "returned" marking to guess whether the patch will be resubmitted in
a future CF.

The right workflow here, IMO, is that a patch should be marked "returned" or
"rejected", full stop; and then when/if the author submits a new version for
a future CF, there should be a way *at that time* to re-link the email
thread into that future CF.

"Moved" is really only applicable, I think, for cases where we punt a
patch to the next CF for lack of time.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Row security violation error is misleading

2015-04-09 Thread Dean Rasheed
On 8 April 2015 at 16:27, Stephen Frost sfr...@snowman.net wrote:
 * Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 I actually re-used the sql status code 42501 -
 ERRCODE_INSUFFICIENT_PRIVILEGE for a RLS check failure because of the
 parallel with permissions checks, but I quite like Craig's idea of
 inventing a new status code for this, so that it can be more easily
 distinguished from a lack of GRANTed privileges.

 As I mentioned to Kevin, I'm not sure that this is really a useful
 distinction.  I'm quite curious if other systems provide that
 distinction between grant violations and policy violations.  If they do
 then that would certainly bolster the argument to provide the
 distinction in PG.


OK, on further reflection I think that's probably right.

ERRCODE_INSUFFICIENT_PRIVILEGE is certainly more appropriate than
anything based on a WCO violation, because it reflects the fact that
the current user isn't allowed to perform the insert/update, but
another user might be allowed, so this is a privilege problem, not a
data error.

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] raw output from copy

2015-04-09 Thread Pavel Stehule
Hi

This thread ended without any real work being done. I have a real use case:
exporting an XML doc in a non-UTF8 encoding.

http://www.postgresql.org/message-id/16174.1319228...@sss.pgh.pa.us

I propose to implement new format option RAW like Tom proposed.

It requires a one-row, one-column result, and the result is just the raw
binary data, with no length header.
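
As a sketch only, usage might look something like this (hypothetical table
and column names, and the RAW format itself is of course not implemented yet):

COPY (SELECT xml_doc FROM docs WHERE id = 1) TO '/tmp/doc.xml' (FORMAT raw);

The point is just that the inner query must yield exactly one row and one
column, and the output file would contain its bytes verbatim - no header,
no size, no escaping.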

Objections? Ideas?

Regards

Pavel


Re: [HACKERS] libpq's multi-threaded SSL callback handling is busted

2015-04-09 Thread Peter Eisentraut
On 2/12/15 7:28 AM, Jan Urbański wrote:
 +#if OPENSSL_VERSION_NUMBER < 0x1000
 +/* OpenSSL 1.0.0 deprecates the CRYPTO_set_id_callback function and provides 
 a
 + * default implementation, so there's no need for our own. */

I have some additional concerns about this.  It is true that OpenSSL
1.0.0 deprecates CRYPTO_set_id_callback(), but it replaces it with
CRYPTO_THREADID_set_callback().  There is no indication that you don't
need to set a callback anymore.  The man page
(https://www.openssl.org/docs/crypto/threads.html) still says you need
to set two callbacks, and points to the new interface.

It is true that there is a fallback implementation for some platforms,
but there is no indication that one is invited to rely on those.  Let's
keep in mind that libpq is potentially used on obscure platforms, so I'd
rather stick with the documented approaches.

I suggest you remove this part from your patch and submit a separate
patch for consideration if you want to.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Row security violation error is misleading

2015-04-09 Thread Craig Ringer
On 8 April 2015 at 19:52, Dean Rasheed dean.a.rash...@gmail.com wrote:


 2). In prepend_row_security_policies(), I think it is better to have
 any table RLS policies applied before any hook policies, so that a
 hook cannot be used to bypass built-in RLS.


A hook really has to be able to ensure that built-in RLS cannot bypass the
hook's policies, too, i.e. the hook policy *must* return true for the row
to be visible.

This is necessary for mandatory access control hooks, which need to be able
to say permit if and only if...

I'll take a closer look at this.


 3). The infinite recursion detection in fireRIRrules() didn't properly
 manage the activeRIRs list in the case of WCOs, so it would
 incorrectly report infinite recusion if the same relation with RLS
 appeared more than once in the rtable, for example UPDATE t ... FROM
 t 


I'm impressed you found that one.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] SSL information view

2015-04-09 Thread Andres Freund
On 2015-04-09 15:56:00 +0200, Magnus Hagander wrote:
 On Thu, Apr 9, 2015 at 3:20 PM, Andres Freund and...@anarazel.de wrote:
 
  Hi,
 
  On 2015-04-09 13:31:55 +0200, Magnus Hagander wrote:
   +  <row>
   +   <entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
   +   <entry>One row per connection (regular and replication), showing
  information about
   +    SSL used on this connection.
   +    See <xref linkend="pg-stat-ssl-view"> for details.
   +   </entry>
   +  </row>
   +
 
  I kinda wonder why this even separate from pg_stat_activity, at least
  from the POV of the function gathering the result. This way you have to
  join between pg_stat_activity and pg_stat_ssl which will mean that the
  two don't necessarily correspond to each other.
 
 
 To keep from cluttering pg_stat_activity for the majority of users, who
 are the ones not actually using SSL.

I'm not sure that's actually a problem. But even if it is, it seems a bit
better to return the data for both views from one SRF and just define
the views differently. That way there's a way to query without the
danger of matching the wrong rows between pg_stat_activity & pg_stat_ssl
due to pid reuse.
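
For illustration, a rough sketch of that - with a purely hypothetical SRF
name and column list - just to show both views being defined over the same
function:

-- hypothetical: one SRF feeds both views, so the rows always line up
CREATE VIEW pg_stat_ssl AS
    SELECT s.pid, s.ssl, s.ssl_version, s.ssl_cipher, s.ssl_bits,
           s.ssl_compression, s.ssl_clientdn
    FROM pg_stat_get_activity_with_ssl(NULL) AS s;

pg_stat_activity would then be defined over the same function, selecting the
non-SSL columns, so a join between the two can never pair up different
backends.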

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] FPW compression leaks information

2015-04-09 Thread Heikki Linnakangas
Now that we have compression of full-page images in WAL, it can be used 
to leak sensitive information you're not supposed to see. Somewhat 
similar to the recent BREACH and CRIME attacks on SSL, if you can insert 
into a table, the compression ratio gives you a hint of how similar the 
existing data on the page is to the data you inserted.


That might seem abstract, so let's consider a concrete example. Imagine 
that Eve wants to know the password of another user. She can't read 
pg_authid directly, but she can change her own password, which causes an 
update in pg_authid. The procedure is to:


1. Perform a checkpoint.
2. Call pg_current_xlog_insert_location() to get current WAL position.
3. Change password.
4. Call pg_current_xlog_insert_location() again, subtract the previous 
position to get the difference.


Repeat that thousands of times with different passwords, and record the 
WAL record size of each attempt. The smaller the WAL size, the more 
common characters that guess had with the victim password.
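
For illustration, the measurement step can be done from psql with the
existing functions, something along these lines ("eve" and the guess are of
course hypothetical, and a checkpoint must have happened in between so that
the update writes a fresh full-page image):

SELECT pg_current_xlog_insert_location() AS start \gset
ALTER ROLE eve PASSWORD 'guess0001';
SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(), :'start')
       AS wal_bytes;

The smaller wal_bytes comes out, the better the guess compressed against the
existing page contents.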


The passwords are (usually) stored as MD5 hashes, but the procedure 
works with those too. Each attempt will give a hint on how many common 
bytes the attempt had with the victim hash.


There are some complications that make this difficult to perform in 
practice. A regular user can't perform a checkpoint directly, she'll 
have to generate a lot of other activity to trigger one, or wait until 
one happens naturally. The measurement of the WAL record size might be 
conflated by other concurrent activity that wrote WAL. And you cannot 
force your own pg_authid row to the same page as that of a chosen victim 
- so you're limited to guessing victims that happen to be on the same page.


All in all, this is a bit clumsy and very time-consuming to pull off in 
practice, but it's possible at least if the conditions are just right.


What should we do about this? Make it configurable on a per-table basis? 
Disable FPW compression on system tables? Disable FPW on tables you 
don't have SELECT access to? Add a warning to the docs?


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Andrew Dunstan


On 04/09/2015 09:09 AM, Magnus Hagander wrote:




Moved is really only applicable, I think, for cases where we punt a
patch to the next CF for lack of time.


Well, that's basically what "returned with feedback" is now, so I
guess that one should just be renamed in that case. And we add a new
"returned with feedback" that closes out the patch and doesn't move it
anywhere. Which is pretty similar to the suggestion earlier in this
thread except it also swaps the two names.




I think we should keep it, and see how it works in practice. I'd prefer
a name like "deferred" to "moved" - the latter is a workflow process
rather than a patch status, ISTM.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Andres Freund
On 2015-04-09 15:09:55 +0200, Magnus Hagander wrote:
 If we just link the email thread, that would mean we lose all those
 precious annotations we just added support for. Is that really what you
 meant? We also lose all history of a patch, and can't see that a previous
 version existed in a previous commitfest, without manually checking each
 and every one. But if that's a history we don't *want*, that's of course
 doable, but it seems wrong to me?

It'd be better if we kept them, but it's not *that* important imo. But
if the (documented) workflow would be to go to the old cf and click the
'move to next CF' button that'd not be a problem anyway.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT IGNORE (and UPDATE) 3.0

2015-04-09 Thread Peter Geoghegan
On Wed, Mar 18, 2015 at 2:59 AM, Dean Rasheed dean.a.rash...@gmail.com wrote:
 Yes, I read that, and I agree with the intention to not leak data
 according to both the INSERT and UPDATE policies, however...

 You're seeing a failure that applies to the target tuple of the UPDATE
 (the tuple that we can't leak the contents of). I felt it was best to
 check all policies against the target/existing tuple, including both
 WITH CHECK OPTIONS and USING quals (which are both enforced).


 I think that's an incorrect implementation of the RLS UPDATE policy.
 The WITH CHECK quals of a RLS policy are intended to be applied to the
 NEW data, not the existing data. This patch is applying the WITH CHECK
 quals to both the existing and NEW tuples, which runs counter to the
 way RLS polices are normally enforced, and I think that will just lead
 to confusion.

The big idea (the fine details of which Stephen appeared to be in
tentative agreement with [1]) is that an UPSERT is a hybrid between an
INSERT and an UPDATE, and not simply an INSERT and separate UPDATE
tied together. So at the very least the exact behavior of such a
hybrid cannot be assumed to be any particular way just from
generalizing from known behaviors for INSERT and UPDATE (which is a
usability issue, since the fine details of RLS are already complex
enough without UPSERT).

The INSERT policies are only enforced when a tuple is inserted
because, when the alternative path isn't taken then it's really just
an INSERT.

For the UPDATE path, where the stickiness/hybridness begins, we
enforce the target tuple passes both INSERT policies, and UPDATE
policies (including USING quals as WCO). The theory here is that if
you're entitled to INSERT it, you ought to be entitled to INSERT the
existing tuple in order to take the UPDATE path. And we bunch the
UPDATE USING quals + WCO for the sake of (conceptual, not
implementation) simplicity - they're already all WCO at that point.

Finally, the final tuple (generated using the EXCLUDED and TARGET
tuples, from the UPDATE) must pass the UPDATE WCO (including any that
originated as USING quals, a distinction that no longer exists) as
well as INSERT policies. If you couldn't INSERT the tuple in the first
place (when there wasn't a conflict), then why should you be able to
UPSERT it? This is substantively the same row, no? You (the user)
are tacitly asserting that you don't care about whether the INSERT or
UPDATE path is taken anyway, so why should you care? Surely you'd want
this to fail as early as possible, rather than leaving it to chance. I
really do expect that people are only going to do simple
transformations in their UPDATE handler (e.g. ON CONFLICT UPDATE set
count = TARGET.count + EXCLUDED.count), so in practice it usually
doesn't matter.

Note that other systems that I looked at don't even support RLS with
SQL MERGE at all. So we have no precedent to consider that I'm aware
of, other than simply not supporting RLS, which would not be
outrageous IMV. I felt, given the ambiguity about how this should
differ from ordinary INSERTs + UPDATEs, that something quite
restrictive but not entirely restrictive (not supporting RLS, just
throwing an error all the time) was safest. In any case I doubt that
this will actually come up all that often.

 The problem with that is that the user will see errors saying that the
 data violates the RLS WITH CHECK policy, when they might quite
 reasonably argue that it doesn't. That's not really being
 conservative. I'd argue it's a bug.

Again, I accept that that's a valid interpretation of it. I have my
own opinion, but I will take the path of least resistance on this
point. What do other people think?

I'd appreciate it if you explicitly outlined what policies you feel
should be enforce at each of the 3 junctures within an UPSERT (post
INSERT, pre-UPDATE, post-UPDATE). I would also like you to be very
explicit about whether or not RLS WITH CHECK policies and USING quals
(presumably enforced as RLS WITH CHECK policies) from both INSERT and
UPDATE policies should be enforced at each point. In particular,
should I ever not treat RLS WCO and USING quals equivalently? (recall
that we don't want to elide an UPDATE silently, which makes much less
sense for UPSERT...I had assumed that whatever else, we'd always treat
WCO and USING quals from UPDATE (and ALL) policies equivalently, but
perhaps not).

Alternatively, perhaps you'd prefer to state your objection in terms
of the exact modifications you'd make to the above outline of the
existing behavior. I don't think I totally follow what you're saying
yet (which is the problem with being cleverer generally!). It is easy
to explain: The insert path is the same as always. Otherwise, both the
before and after tuple have all RLS policies (including USING quals)
enforced as WCOs. I think that it might be substantially easier to
explain that than to explain what you have in mind...let's see.

Thanks

[1] 

Re: [HACKERS] Shouldn't CREATE TABLE LIKE copy the relhasoids property?

2015-04-09 Thread Bruce Momjian
On Wed, Jan 14, 2015 at 07:29:24PM -0500, Tom Lane wrote:
 dst1 doesn't get an OID column:
 
 regression=# create table src1 (f1 int) with oids;
 CREATE TABLE
 regression=# create table dst1 (like src1);
 CREATE TABLE
 regression=# \d+ src1
                          Table "public.src1"
  Column |  Type   | Modifiers | Storage | Stats target | Description
 --------+---------+-----------+---------+--------------+-------------
  f1     | integer |           | plain   |              |
 Has OIDs: yes
 
 regression=# \d+ dst1
                          Table "public.dst1"
  Column |  Type   | Modifiers | Storage | Stats target | Description
 --------+---------+-----------+---------+--------------+-------------
  f1     | integer |           | plain   |              |
 
 
 If you don't find that problematic, how about this case?
 
 regression=# create table src2 (f1 int, primary key(oid)) with oids;
 CREATE TABLE
 regression=# create table dst2 (like src2 including indexes);
 ERROR:  column "oid" named in key does not exist

I have developed the attached patch to fix this.  The code was basically
confused because setting cxt.hasoids had no effect, and the LIKE
relation was never checked.  

The fix is to default cxt.hasoids to false, set it to true if the LIKE
relation has oids, and add WITH OIDS to the CREATE TABLE statement, if
necessary.  It also honors WITH/WITHOUT OIDS specified literally in the
CREATE TABLE clause because the first specification is honored, and we
only append WITH OIDS if the LIKE table has oids.
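
For example, with the patch applied the original test case should behave like
this (a sketch of the intended behaviour, not committed output):

CREATE TABLE src1 (f1 int) WITH OIDS;
CREATE TABLE dst1 (LIKE src1);               -- now also gets "Has OIDs: yes"
CREATE TABLE dst2 (LIKE src1) WITHOUT OIDS;  -- an explicit WITHOUT OIDS still wins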

Should this be listed in the release notes as a backward-incompatibility?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
new file mode 100644
index 1fc8c2c..405c347
*** a/src/backend/parser/parse_utilcmd.c
--- b/src/backend/parser/parse_utilcmd.c
*** transformCreateStmt(CreateStmt *stmt, co
*** 222,228 
  	cxt.blist = NIL;
  	cxt.alist = NIL;
  	cxt.pkey = NULL;
! 	cxt.hasoids = interpretOidsOption(stmt->options, true);
  
  	Assert(!stmt->ofTypename || !stmt->inhRelations);	/* grammar enforces */
  
--- 222,228 
  	cxt.blist = NIL;
  	cxt.alist = NIL;
  	cxt.pkey = NULL;
! 	cxt.hasoids = false;
  
  	Assert(!stmt->ofTypename || !stmt->inhRelations);	/* grammar enforces */
  
*** transformCreateStmt(CreateStmt *stmt, co
*** 281,286 
--- 281,290 
  	 * Output results.
  	 */
  	stmt->tableElts = cxt.columns;
+ 	/* add WITH OIDS, if necessary */
+ 	if (!interpretOidsOption(stmt->options, true) && cxt.hasoids)
+ 		stmt->options = lappend(stmt->options, makeDefElem("oids",
+ (Node *)makeInteger(TRUE)));
  	stmt->constraints = cxt.ckconstraints;
  
  	result = lappend(cxt.blist, stmt);
*** transformTableLikeClause(CreateStmtConte
*** 849,854 
--- 853,860 
  		}
  	}
  
+ 	cxt->hasoids = relation->rd_rel->relhasoids;
+ 
  	/*
  	 * Copy CHECK constraints if requested, being careful to adjust attribute
  	 * numbers so they match the child.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSL information view

2015-04-09 Thread Andres Freund
Hi,

On 2015-04-09 13:31:55 +0200, Magnus Hagander wrote:
 +  <row>
 +   <entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
 +   <entry>One row per connection (regular and replication), showing information about
 +    SSL used on this connection.
 +    See <xref linkend="pg-stat-ssl-view"> for details.
 +   </entry>
 +  </row>
 + 

I kinda wonder why this even separate from pg_stat_activity, at least
from the POV of the function gathering the result. This way you have to
join between pg_stat_activity and pg_stat_ssl which will mean that the
two don't necessarily correspond to each other.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore -t should match views, matviews, and foreign tables

2015-04-09 Thread Craig Ringer
On 8 April 2015 at 04:33, Tom Lane t...@sss.pgh.pa.us wrote:

 Peter Eisentraut pete...@gmx.net writes:
  On 3/31/15 11:01 PM, Craig Ringer wrote:
  this patch adds support for views, foreign tables, and materialised
  views to the pg_restore -t flag.

  I think this is a good change.  Any concerns?

 Are we happy with pg_dump/pg_restore not distinguishing these objects
 by type?  It seems rather analogous to letting ALTER TABLE work on views
 etc.  Personally I'm fine with this, but certainly some people have
 complained about that approach so far as ALTER is concerned.  (But the
 implication would be that we'd need four distinct switches, which is
 not an outcome I favor.)



My reasoning was that these are all relations that, as far as SELECT et al
are concerned, can be interchangeable.

I guess this is more like the ALTER TABLE case though - if you pg_restore
-t a view, you don't get the data from any table(s) it depends on. So
substituting a table for a view won't be transparent to the user anyway.

I mostly just don't see the point of requiring multiple flags for things
that are all in the same namespace. It'll mean new flags each time we add
some new object type, more logic in apps that invoke pg_restore, etc, and
for what seems like no meaningful gain. We'll just land up with "No table
'viewblah' matched, did you mean -V 'viewblah'?"


 Also, I think you missed MATERIALIZED VIEW DATA.


Thanks, amended.


 Also, shouldn't there be a documentation update?


Yes. Again, amended.

I've also added mention of materialized views to the pg_dump docs for
--table, which omitted them.

(It's rather unfortunate that pg_restore's -t is completely different to
pg_dump's -t . Fixing that would involve implementing wildcard search
support in pg_restore and would break backward compatibility, though).

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From bccc5623f39a40a7ba3f63b3dcaf902259ad485c Mon Sep 17 00:00:00 2001
From: Craig Ringer cr...@2ndquadrant.com
Date: Wed, 1 Apr 2015 10:46:29 +0800
Subject: [PATCH] pg_restore -t should select views, matviews, and foreign
 tables

Currently pg_restore's '-t' option selects only tables, not other
relations. It should be able to match anything that behaves like
a relation in the relation namespace, anything that's interchangeable
with a table, including:

* Normal relations
* Views
* Materialized views
* Foreign tables

Sequences are not matched. They're in the relation namespace, but
only as an implementation detail. A separate option to selectively
dump sequences should be added so that there's no BC break if
they later become non-class objects.

Indexes are also not matched; again, a different option should be
added for them.

TOAST tables aren't matched; they're an implementation detail.

See:
  http://www.postgresql.org/message-id/camsr+ygj50tvtvk4dbp66gajeoc0kap6kxfehaom+neqmhv...@mail.gmail.com
---
 doc/src/sgml/ref/pg_dump.sgml|  2 +-
 doc/src/sgml/ref/pg_restore.sgml | 25 ++---
 src/bin/pg_dump/pg_backup_archiver.c |  6 +-
 3 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index a6e7b08..7f7da9e 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -501,7 +501,7 @@ PostgreSQL documentation
   <term><option>--table=<replaceable class="parameter">table</replaceable></option></term>
   <listitem>
    <para>
-    Dump only tables (or views or sequences or foreign tables) matching
+    Dump only tables (or views, sequences, foreign tables or materialized views) matching
     <replaceable class="parameter">table</replaceable>.  Multiple tables
     can be selected by writing multiple <option>-t</> switches.  Also, the
     <replaceable class="parameter">table</replaceable> parameter is
diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 9f8dc00..9119e3e 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -405,9 +405,28 @@
   <term><option>--table=<replaceable class="parameter">table</replaceable></option></term>
   <listitem>
    <para>
-    Restore definition and/or data of named table only. Multiple tables
-    may be specified with multiple <option>-t</> switches. This can be
-    combined with the <option>-n</option> option to specify a schema.
+    Restore definition and/or data of the named table (or other relation)
+    only. This flag matches views, materialized views and foreign tables as
+    well as ordinary tables. Multiple relations may be specified with
+    multiple <option>-t</> switches. This can be combined with the
+    <option>-n</option> option to specify a schema.
+    <note>
+     <para>
+      When <literal>-t</literal> is specified,
+      <application>pg_restore</application> makes no attempt to restore any
+      other database objects that the 

Re: [HACKERS] psql showing owner in \dT

2015-04-09 Thread Magnus Hagander
On Thu, Apr 9, 2015 at 3:27 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:

 Magnus Hagander wrote:
  After running into the need twice now - is there a particular reason why
 we
  don't have psql showing the owner of a type, at least in \dT+?

 Can't think of anything ...

  If not, how about attached trivial patch?

 Owner should normally be printed before ACL, no?


That's probably correct.

Done that way and pushed.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] FPW compression leaks information

2015-04-09 Thread Stephen Frost
* Heikki Linnakangas (hlinn...@iki.fi) wrote:
 On 04/09/2015 06:28 PM, Stephen Frost wrote:
 * Heikki Linnakangas (hlinn...@iki.fi) wrote:
 What should we do about this? Make it configurable on a per-table
 basis? Disable FPW compression on system tables? Disable FPW on
 tables you don't have SELECT access to? Add a warning to the docs?
 
 REVOKE EXECUTE ON FUNCTION pg_current_xlog_insert_location() FROM public;
 
 Yeah, that's one option. Will also have to revoke access to
 pg_stat_replication and anything else that might indirectly give
 away the current WAL position.

Sure, though pg_stat_replication already only returns the PID and no
details, unless you're a superuser.

Thanks,

Stephen




Re: [HACKERS] Shouldn't CREATE TABLE LIKE copy the relhasoids property?

2015-04-09 Thread Alvaro Herrera
Bruce Momjian wrote:

 Should this be listed in the release notes as a backward-incompatibility?

Isn't this a backpatchable bug fix?

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Alvaro Herrera
Magnus Hagander wrote:
 On Thu, Apr 9, 2015 at 2:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

  The right workflow here, IMO, is that a patch should be marked
  returned or rejected, full stop; and then when/if the author submits
  a new version for a future CF, there should be a way *at that time*
  to re-link the email thread into that future CF.
 
 If we just link the email thread, that would mean we lose all those
 precious annotations we just added support for. Is that really what
 you meant? We also lose all history of a patch, and can't see that a
 previous version existed in a previous commitfest, without manually
 checking each and every one. But if that's a history we don't *want*,
 that's of course doable, but it seems wrong to me?
 
 I'm not necessarily saying that what we have now is right, but just
 giving up on the history completely doesn't seem like a very good
 workflow to me.
 
 We could always tell those people to go back and find your old patch
 and re-open it, but in fairness, are people likely to actually do
 that?

I think it's convenient if the submitter can go to a previous commitfest
and set an RwF entry as again needing review in the open commitfest.
That would keep the CF-app-history intact.  This should probably only be
allowed for patches that are either RwF or Rejected, and only in
commitfests that are already closed (perhaps allow it for the commitfest
in progress also?).

  Moved is really only applicable, I think, for cases where we punt
  a patch to the next CF for lack of time.
 
 Well, that's basically what returned with feedback is now, so I
 guess that one should just be renamed in that case.

Yes, keeping the current behavior under the name "Moved to next CF" seems
good to me.

 And we add a new "returned with feedback" that closes out the patch
 and doesn't move it anywhere.

Sounds good.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql showing owner in \dT

2015-04-09 Thread Magnus Hagander
After running into the need twice now - is there a particular reason why we
don't have psql showing the owner of a type, at least in \dT+?

If not, how about attached trivial patch?
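
(In the meantime the owner can of course be pulled straight from the
catalogs - a minimal sketch, without \dT's usual filtering of array and
composite types:

SELECT n.nspname AS "Schema", t.typname AS "Name",
       pg_catalog.pg_get_userbyid(t.typowner) AS "Owner"
FROM pg_catalog.pg_type t
     JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');

but that's no substitute for having it in \dT+.)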

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
*** a/src/bin/psql/describe.c
--- b/src/bin/psql/describe.c
***
*** 533,538  describeTypes(const char *pattern, bool verbose, bool showSystem)
--- 533,544 
  		printACLColumn(&buf, "t.typacl");
  		appendPQExpBufferStr(&buf, ",\n  ");
  	}
+ 	if (verbose)
+ 	{
+ 		appendPQExpBuffer(&buf,
+ 						  "  pg_catalog.pg_get_userbyid(t.typowner) AS \"%s\",\n",
+ 						  gettext_noop("Owner"));
+ 	}
  
  	appendPQExpBuffer(&buf,
  					  "  pg_catalog.obj_description(t.oid, 'pg_type') as \"%s\"\n",

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Possible gaps/garbage in the output of XLOG reader

2015-04-09 Thread Antonin Houska
While playing with xlogreader, I was lucky enough to see one of the many
record validations fail. After having some fun with gdb, I found out that
in some cases the reader does not enforce enough data to be in state->readBuf
before copying into state->readRecordBuf starts. This should not happen if the
callback always reads XLOG_BLCKSZ bytes, but in fact only *reqLen* is the
mandatory size of the chunk delivered.

There are probably various ways to fix this problem. Attached is what I did in
my environment. I hit the problem on 9.4.1, but the patch seems to apply to
master too.

-- 
Antonin Houska
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 474137a..e6ebd9d 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -313,7 +313,21 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 		goto err;
 	}
 
-	len = XLOG_BLCKSZ - RecPtr % XLOG_BLCKSZ;
+	/* Bytes of the current record residing on the current page. */
+	len = Min(XLOG_BLCKSZ - targetRecOff, total_len);
+
+	/*
+	 * Nothing beyond the record header is guaranteed to be in state->readBuf
+	 * so far.
+	 */
+	if (readOff < targetRecOff + len)
+	{
+		readOff = ReadPageInternal(state, targetPagePtr, targetRecOff + len);
+
+		if (readOff < 0)
+			goto err;
+	}
+
 	if (total_len > len)
 	{
 		/* Need to reassemble record */
@@ -322,9 +336,11 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 		char	   *buffer;
 		uint32		gotlen;
 
+		Assert(readOff == targetRecOff + len);
+		Assert(readOff == XLOG_BLCKSZ);
+
 		/* Copy the first fragment of the record from the first page. */
-		memcpy(state->readRecordBuf,
-			   state->readBuf + RecPtr % XLOG_BLCKSZ, len);
+		memcpy(state->readRecordBuf, state->readBuf + targetRecOff, len);
 		buffer = state->readRecordBuf + len;
 		gotlen = len;
 
@@ -413,20 +429,16 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 	}
 	else
 	{
-		/* Wait for the record data to become available */
-		readOff = ReadPageInternal(state, targetPagePtr,
- Min(targetRecOff + total_len, XLOG_BLCKSZ));
-		if (readOff < 0)
-			goto err;
+		Assert(readOff >= targetRecOff + len);
 
 		/* Record does not cross a page boundary */
 		if (!ValidXLogRecord(state, record, RecPtr))
 			goto err;
 
-		state->EndRecPtr = RecPtr + MAXALIGN(total_len);
+		state->EndRecPtr = RecPtr + MAXALIGN(len);
 
 		state->ReadRecPtr = RecPtr;
-		memcpy(state->readRecordBuf, record, total_len);
+		memcpy(state->readRecordBuf, record, len);
 	}
 
 	/*

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Peter Eisentraut
On 4/9/15 5:02 AM, Michael Paquier wrote:
 Just to be clear, the example above being misleading... Doing table
 sampling using SYSTEM at physical level makes sense. In this case I
 think that we should properly error out when trying to use this method
 on something not present at physical level. But I am not sure that
 this restriction applies to BERNOUILLI: you may want to apply it on
 other things than physical relations, like views or results of WITH
 clauses. Also, based on the fact that we support custom sampling
 methods, I think that it should be up to the sampling method to define
 on what kind of objects it supports sampling, and where it supports
 sampling fetching, be it page-level fetching or analysis from an
 existing set of tuples. Looking at the patch, TABLESAMPLE is just
 allowed on tables and matviews, this limitation is too restrictive
 IMO.

In the SQL standard, the TABLESAMPLE clause is attached to a table
expression (table primary), which includes table functions,
subqueries, CTEs, etc.  In the proposed patch, it is attached to a table
name, allowing only an ONLY clause.  So this is a significant deviation.
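
To illustrate the difference with made-up table names: the patch accepts only
the first form below, while the standard's placement on a table primary would
also allow something like the second:

SELECT * FROM orders TABLESAMPLE BERNOULLI (10);

SELECT * FROM (SELECT * FROM orders WHERE shipped) AS o
    TABLESAMPLE BERNOULLI (10);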

Obviously, doing block sampling on a physical table is a significant use
case, but we should be clear about which restrictions and tradeoffs we
are making now and in the future, especially if we are going to present
extension interfaces.  The fact that physical tables are interchangeable
with other relation types, at least in data-reading contexts, is a
feature worth preserving.

It may be worth thinking about some examples of other sampling methods,
in order to get a better feeling for whether the interfaces are appropriate.

Earlier in the thread, someone asked about supporting specifying a
number of rows instead of percents.  While not essential, that seems
pretty useful, but I wonder how that could be implemented later on if we
take the approach that the argument to the sampling method can be an
arbitrary quantity that is interpreted only by the method.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSL information view

2015-04-09 Thread Magnus Hagander
On Wed, Dec 17, 2014 at 9:19 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 11/19/2014 02:36 PM, Magnus Hagander wrote:

 +   /* Create or attach to the shared SSL status buffers */
 +   size = mul_size(NAMEDATALEN, MaxBackends);
 +   BackendSslVersionBuffer = (char *)
 +   ShmemInitStruct("Backend SSL Version Buffer", size,
 &found);
 +
 +   if (!found)
 +   {
 +   MemSet(BackendSslVersionBuffer, 0, size);
 +
 +   /* Initialize st_ssl_version pointers. */
 +   buffer = BackendSslVersionBuffer;
 +   for (i = 0; i < MaxBackends; i++)
 +   {
 +   BackendStatusArray[i].st_ssl_version = buffer;
 +   buffer += NAMEDATALEN;
 +   }
 +   }
 +
 +   size = mul_size(NAMEDATALEN, MaxBackends);
 +   BackendSslCipherBuffer = (char *)
 +   ShmemInitStruct("Backend SSL Cipher Buffer", size,
 &found);
 +
 +   if (!found)
 +   {
 +   MemSet(BackendSslCipherBuffer, 0, size);
 +
 +   /* Initialize st_ssl_cipher pointers. */
 +   buffer = BackendSslCipherBuffer;
 +   for (i = 0; i < MaxBackends; i++)
 +   {
 +   BackendStatusArray[i].st_ssl_cipher = buffer;
 +   buffer += NAMEDATALEN;
 +   }
 +   }
 +
 +   size = mul_size(NAMEDATALEN, MaxBackends);
 +   BackendSslClientDNBuffer = (char *)
 +   ShmemInitStruct("Backend SSL Client DN Buffer", size,
 &found);
 +
 +   if (!found)
 +   {
 +   MemSet(BackendSslClientDNBuffer, 0, size);
 +
 +   /* Initialize st_ssl_clientdn pointers. */
 +   buffer = BackendSslClientDNBuffer;
 +   for (i = 0; i < MaxBackends; i++)
 +   {
 +   BackendStatusArray[i].st_ssl_clientdn = buffer;
 +   buffer += NAMEDATALEN;
 +   }
 +   }


 This pattern gets a bit tedious. We do that already for application_names,
 client hostnames, and activity status but this adds three more such
 strings. Why are these not just regular char arrays in PgBackendStatus
 struct, anyway? The activity status is not, because its size is
 configurable with the pgstat_track_activity_query_size GUC, but all those
 other things are fixed-size.

 Also, it would be nice if you didn't allocate the memory for all those SSL
 strings, when SSL is disabled altogether. Perhaps put the SSL-related
 information into a separate struct:

 struct
 {
 /* Information about SSL connection */
 int st_ssl_bits;
 bool    st_ssl_compression;
 char    st_ssl_version[NAMEDATALEN];  /* MUST be null-terminated */
 char    st_ssl_cipher[NAMEDATALEN];   /* MUST be null-terminated */
 char    st_ssl_clientdn[NAMEDATALEN]; /* MUST be null-terminated */
 } PgBackendSSLStatus;

 Those structs could be allocated like you allocate the string buffers now,
 with a pointer to that struct from PgBackendStatus. When SSL is disabled,
 the structs are not allocated and the pointers in PgBackendStatus structs
 are NULL.



Finally, I found time to do this. PFA a new version of this patch.

It takes into account the changes suggested by Heikki and Alex (minus the
renaming of fields - I think that's a separate thing to do, and we should
stick to existing naming conventions for now - but I changed the order of
the fields). Also the documentation changes suggested by Peter (but still
not the contrib/sslinfo part, as that should be a separate patch - but I
can look at that once we agree on this one). And resolves the inevitable
oid conflict for a patch that's been delayed that long.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
***
*** 300,305  postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
--- 300,313 
    </entry>
   </row>
  
+  <row>
+   <entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
+   <entry>One row per connection (regular and replication), showing information about
+    SSL used on this connection.
+    See <xref linkend="pg-stat-ssl-view"> for details.
+   </entry>
+  </row>
+ 
  </tbody>
 </tgroup>
 </table>
***
*** 825,830  postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
--- 833,907 
 listed; no information is available about downstream standby servers.
   </para>
  
+   <table id="pg-stat-ssl-view" xreflabel="pg_stat_ssl">
+    <title><structname>pg_stat_ssl</structname> View</title>
+    <tgroup cols="3">
+     <thead>
+     <row>
+       <entry>Column</entry>
+       <entry>Type</entry>
+       <entry>Description</entry>
+      </row>
+     </thead>
+
+    <tbody>
+     <row>
+      

Re: [HACKERS] FPW compression leaks information

2015-04-09 Thread Heikki Linnakangas

On 04/09/2015 06:28 PM, Stephen Frost wrote:

* Heikki Linnakangas (hlinn...@iki.fi) wrote:

What should we do about this? Make it configurable on a per-table
basis? Disable FPW compression on system tables? Disable FPW on
tables you don't have SELECT access to? Add a warning to the docs?


REVOKE EXECUTE ON FUNCTION pg_current_xlog_insert_location() FROM public;


Yeah, that's one option. Will also have to revoke access to 
pg_stat_replication and anything else that might indirectly give away 
the current WAL position.


- Heikki




Re: [HACKERS] Making src/test/ssl more robust

2015-04-09 Thread Michael Paquier
On Wed, Apr 8, 2015 at 9:57 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 I noticed two things while looking at the SSL test suite:
 1) When running the tests, some logs are generated in client-log, but
 this log file has no entry in .gitignore... A patch is attached.
 2) cp is used with a wildcard and system_or_bail in ServerSetup.pm:
   system_or_bail "cp ssl/server-*.crt '$tempdir'/pgdata";
   system_or_bail "cp ssl/server-*.key '$tempdir'/pgdata";
   system_or_bail "chmod 0600 '$tempdir'/pgdata/server-*.key";
   system_or_bail "cp ssl/root+client_ca.crt '$tempdir'/pgdata";
   system_or_bail "cp ssl/root+client.crl '$tempdir'/pgdata";
 This does not look very portable to me. Wouldn't it be better to use
 glob to get a list of the files and then copy each matching entry?
 Thoughts?

Here are patches on top of those words. Instead of cp, glob is used
with File::Copy and File::Basename to make the code more portable,
something useful for Windows for example where cp is not directly
available.
-- 
Michael
From 47b832a1fee639e49dd0065da710b064275423c4 Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Thu, 9 Apr 2015 15:47:42 +
Subject: [PATCH 1/2] Ignore content generated by TAP test suite of SSL

---
 src/test/ssl/.gitignore | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 src/test/ssl/.gitignore

diff --git a/src/test/ssl/.gitignore b/src/test/ssl/.gitignore
new file mode 100644
index 000..61352c1
--- /dev/null
+++ b/src/test/ssl/.gitignore
@@ -0,0 +1,3 @@
+# Generated by tests
+/client-log
+/tmp_check/
-- 
2.3.5

From 12ce72438f609fed0a22248ab0945466252dc936 Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Thu, 9 Apr 2015 16:02:08 +
Subject: [PATCH 2/2] Make TAP tests of SSL more portable by avoiding cp

Instead, glob takes care of the wildcard, and is used with File::Copy
to copy the set of files necessary for the tests in PGDATA.
---
 src/test/ssl/ServerSetup.pm | 25 -
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/src/test/ssl/ServerSetup.pm b/src/test/ssl/ServerSetup.pm
index 1579dc9..ee4daf0 100644
--- a/src/test/ssl/ServerSetup.pm
+++ b/src/test/ssl/ServerSetup.pm
@@ -19,6 +19,8 @@ package ServerSetup;
 use strict;
 use warnings;
 use TestLib;
+use File::Basename;
+use File::Copy;
 use Test::More;
 
 use Exporter 'import';
@@ -26,6 +28,20 @@ our @EXPORT = qw(
   configure_test_server_for_ssl switch_server_cert
 );
 
+# Copy a set of files, taking into account wildcards
+sub copy_files
+{
+	my $orig = shift;
+	my $dest = shift;
+
+	my @orig_files = glob $orig;
+	foreach my $orig_file (@orig_files)
+	{
+		my $base_file = basename($orig_file);
+		copy($orig_file, "$dest/$base_file") or die "Could not copy $orig_file to $dest";
+	}
+}
+
 sub configure_test_server_for_ssl
 {
   my $tempdir = $_[0];
@@ -48,13 +64,12 @@ sub configure_test_server_for_ssl
 
   close CONF;
 
-
   # Copy all server certificates and keys, and client root cert, to the data dir
-  system_or_bail "cp ssl/server-*.crt '$tempdir'/pgdata";
-  system_or_bail "cp ssl/server-*.key '$tempdir'/pgdata";
+  copy_files("ssl/server-*.crt", "$tempdir/pgdata");
+  copy_files("ssl/server-*.key", "$tempdir/pgdata");
   system_or_bail "chmod 0600 '$tempdir'/pgdata/server-*.key";
-  system_or_bail "cp ssl/root+client_ca.crt '$tempdir'/pgdata";
-  system_or_bail "cp ssl/root+client.crl '$tempdir'/pgdata";
+  copy_files("ssl/root+client_ca.crt", "$tempdir/pgdata");
+  copy_files("ssl/root+client.crl", "$tempdir/pgdata");
 
   # Only accept SSL connections from localhost. Our tests don't depend on this
   # but seems best to keep it as narrow as possible for security reasons.
-- 
2.3.5




Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Petr Jelinek

On 09/04/15 11:37, Simon Riggs wrote:

On 9 April 2015 at 04:52, Simon Riggs si...@2ndquadrant.com wrote:


TABLESAMPLE BERNOULLI could work in this case, or any other non-block
based sampling mechanism. Whether it does work yet is another matter.

This query should be part of the test suite and should generate a
useful message or work correctly.


The SQL Standard does allow the WITH query given. It makes no mention
of the obvious point that SYSTEM-defined mechanisms might not work,
but that is for the implementation to define, AFAICS.


Yes, the SQL Standard allows this, and the reason it doesn't define what 
happens with SYSTEM is that it doesn't actually define how SYSTEM should 
behave at all, except that it should return approximately the given 
percentage of rows; the actual behavior is left to the DBMS. The reason 
other databases like MSSQL or DB2 have chosen block sampling here is that 
block sampling makes the most sense (and is fastest) on tables, and those 
databases don't support TABLESAMPLE on anything else at all.




On balance, in this release, I would be happier to exclude sampled
results from queries, and only allow sampling against base tables.



I think so too, considering how late in the last CF we are. Especially 
given my note about MSSQL and DB2 above.


In any case I don't see any fundamental issues with extending the 
current implementation to support subqueries. I think most of the work 
there is actually in the parser/analyzer and the planner. When the source 
of the data is a subquery, the sampling methods will simply not receive 
requests for the next blockid and for tupleids within that block; if they 
want to support a subquery as the source of sampling, they will have to 
provide the examinetuple interface (which is already there and optional; 
the test/example custom sampling method uses it).
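
To make the distinction concrete (the table name is made up, and this assumes
the syntax from the patch), block-level SYSTEM sampling only makes sense
against a base relation, while BERNOULLI decides row by row:

  CREATE TABLE sample_target AS SELECT generate_series(1, 100000) AS a;
  -- SYSTEM: picks whole blocks, so it is tied to base tables
  SELECT count(*) FROM sample_target TABLESAMPLE SYSTEM (1);
  -- BERNOULLI: keeps each row with the given probability
  SELECT count(*) FROM sample_target TABLESAMPLE BERNOULLI (1) REPEATABLE (42);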


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Michael Paquier
On Thu, Apr 9, 2015 at 5:52 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 9 April 2015 at 04:12, Michael Paquier michael.paqu...@gmail.com wrote:

 Also, I am wondering if the sampling logic based on block analysis is
 actually correct, for example for now this fails and I think that we
 should support it:
 =# with query_select as (select generate_series(1, 10) as a) select
 query_select.a from query_select tablesample system (100.0/11)
 REPEATABLE ();
 ERROR:  42P01: relation query_select does not exist

 How does the SQL spec define exactly TABLESAMPLE? Shouldn't we get a
 sample from a result set?
 Thoughts?

 Good catch. The above query doesn't make any sense.

 TABLESAMPLE SYSTEM implies system-defined sampling mechanism, which is
 block level sampling. So any block level sampling method should be
 barred from operating on a result set in this way... i.e. should
 generate an ERROR inappropriate sampling method specified

 TABLESAMPLE BERNOULLI could work in this case, or any other non-block
 based sampling mechanism. Whether it does work yet is another matter.

 This query should be part of the test suite and should generate a
 useful message or work correctly.

Ahah, you just beat me on that ;) See a more precise reply below.
-- 
Michael




Re: [HACKERS] Making src/test/ssl more robust

2015-04-09 Thread Heikki Linnakangas

On 04/09/2015 10:06 AM, Michael Paquier wrote:

On Wed, Apr 8, 2015 at 9:57 PM, Michael Paquier
michael.paqu...@gmail.com wrote:

I noticed two things while looking at the SSL test suite:
1) When running the tests, some logs are generated in client-log, but
this log file has no entry in .gitignore... A patch is attached.
2) cp is used with a wildcard and system_or_bail in ServerSetup.pm:
   system_or_bail "cp ssl/server-*.crt '$tempdir'/pgdata";
   system_or_bail "cp ssl/server-*.key '$tempdir'/pgdata";
   system_or_bail "chmod 0600 '$tempdir'/pgdata/server-*.key";
   system_or_bail "cp ssl/root+client_ca.crt '$tempdir'/pgdata";
   system_or_bail "cp ssl/root+client.crl '$tempdir'/pgdata";
This does not look very portable to me. Wouldn't it be better to use
glob to get a list of the files and then copy each matching entry?
Thoughts?


Here are patches on top of those words. Instead of cp, glob is used
with File::Copy and File::Basename to make the code more portable,
something useful for Windows for example where cp is not directly
available.


Pushed, thanks!

- Heikki





Re: [HACKERS] How about to have relnamespace and relrole?

2015-04-09 Thread Kyotaro HORIGUCHI
Hello, sorry for the absence. I changed the regnamespace's
behavior as the same as the other reg* types. And I attached a
patch as separate one that fixes regroleout to do the same as the
other reg* types, because I have

0001-Add-regrole_v6.patch : fix regnamespace to behave as the
same as the other reg* types.

0001-Make-regnamespace-behave-as-the-same-as-other-reg-ty.patch:
  Diff from v5 to v6, only for convenience.

0001-Fix-regroleout-s-behavior-following-other-out-functi.patch:
  Fixes regroleout so as to behave as the same as other reg*
  types, but might be a bit too large.


At Wed, 01 Apr 2015 14:46:01 -0400, Andrew Dunstan and...@dunslane.net wrote 
in 551c3ce9.4080...@dunslane.net
  But in any case I cannot see a reason to treat regnamespace
  differently
  from the existing types on this point.
 
  
 
 Good points.
 
 I agree re namespace. And I also agree that shared dependency support
 is not worth the trouble, especially not just to support regrole. I'm
 not sure that's a reason to reject regrole entirely, though. However,
 I also think there is a significantly less compelling case for it than
 for regnamespace, based on the number of times I have wanted each.
 
 Anybody else have thoughts on this?

Yeah, that's my thinko. regnamespace in the attached patch
behaves the same as the other reg* types. While fixing it, I
found a bug where regnamespaceout returns NULL for a removed
schema name. I fixed that, too.

In addition, regroleout raises an exception for invalid role
OIDs. This is unwanted behavior, but GetUserNameFromId needs
one extra parameter, 'noerr', to fix it. That fix is
attached as a separate patch.
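
For reference, the kind of usage these alias types are meant to enable looks
like this (the role name is made up):

  SELECT 'pg_catalog'::regnamespace;    -- schema name resolved to its OID
  SELECT 'alice'::regrole;              -- role name resolved to its OID
  SELECT relname FROM pg_class
   WHERE relnamespace = 'pg_catalog'::regnamespace;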

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index da1f25f..6d1f02d 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -4321,7 +4321,8 @@ SET xmloption TO { DOCUMENT | CONTENT };
 an object identifier.  There are also several alias types for
 typeoid/: typeregproc/, typeregprocedure/,
 typeregoper/, typeregoperator/, typeregclass/,
-typeregtype/, typeregconfig/, and typeregdictionary/.
+typeregtype/, typeregrole/, typeregnamespace/, 
+typeregconfig/, and typeregdictionary/.
 xref linkend=datatype-oid-table shows an overview.
/para
 
@@ -4431,6 +4432,20 @@ SELECT * FROM pg_attribute
/row
 
row
+entrytyperegrole//entry
+entrystructnamepg_authid//entry
+entryrole name/entry
+entryliteralsmithee//entry
+   /row
+
+   row
+entrytyperegnamespace//entry
+entrystructnamepg_namespace//entry
+entrynamespace name/entry
+entryliteralpg_catalog//entry
+   /row
+
+   row
 entrytyperegconfig//entry
 entrystructnamepg_ts_config//entry
 entrytext search configuration/entry
@@ -4448,29 +4463,37 @@ SELECT * FROM pg_attribute
 /table
 
para
-All of the OID alias types accept schema-qualified names, and will
-display schema-qualified names on output if the object would not
-be found in the current search path without being qualified.
-The typeregproc/ and typeregoper/ alias types will only
-accept input names that are unique (not overloaded), so they are
-of limited use; for most uses typeregprocedure/ or
+All of the OID alias types for objects grouped by namespace accept
+schema-qualified names, and will display schema-qualified names on output
+if the object would not be found in the current search path without being
+qualified.  The typeregproc/ and typeregoper/ alias types will
+only accept input names that are unique (not overloaded), so they are of
+limited use; for most uses typeregprocedure/ or
 typeregoperator/ are more appropriate.  For typeregoperator/,
 unary operators are identified by writing literalNONE/ for the unused
 operand.
/para
 
+   para An additional property of most of the OID alias types is the
+creation of dependencies.  If a constant of one of these types
+appears in a stored expression (such as a column default
+expression or view), it creates a dependency on the referenced
+object.  For example, if a column has a default expression
+literalnextval('my_seq'::regclass)/,
+productnamePostgreSQL/productname understands that the default
+expression depends on the sequence literalmy_seq/; the system
+will not let the sequence be dropped without first removing the
+default expression. typeregrole/ is the only exception for the
+property. Constants of this type is not allowed in such
+expressions.  /para
+
+   note
para
-An additional property of the OID alias types is the creation of
-dependencies.  If a
-constant of one of these types appears in a stored expression
-(such as a column default expression or view), it creates a dependency
-on the 

Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Simon Riggs
On 9 April 2015 at 04:52, Simon Riggs si...@2ndquadrant.com wrote:

 TABLESAMPLE BERNOULLI could work in this case, or any other non-block
 based sampling mechanism. Whether it does work yet is another matter.

 This query should be part of the test suite and should generate a
 useful message or work correctly.

The SQL Standard does allow the WITH query given. It makes no mention
of the obvious point that SYSTEM-defined mechanisms might not work,
but that is for the implementation to define, AFAICS.

The SQL Standard goes on to talk about possibly non-deterministic
issues. Which in Postgres relates to the point that the results of a
SampleScan will never be IMMUTABLE. That raises the possibility of
planner issues. We must, for example, never do inner join removal on a
sampled relation - we don't do that yet, but something to watch for.
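
For example (with made-up tables), a future inner-join-removal optimization
must not drop the join here, because the sample makes the inner side
non-deterministic even if a foreign key would otherwise guarantee exactly one
match per row:

  SELECT orders.*
    FROM orders
    JOIN customers TABLESAMPLE BERNOULLI (10)
      ON customers.id = orders.customer_id;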

On balance, in this release, I would be happier to exclude sampled
results from queries, and only allow sampling against base tables.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services




Re: [HACKERS] Proposal : REINDEX xxx VERBOSE

2015-04-09 Thread Sawada Masahiko
On Thu, Apr 9, 2015 at 1:14 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Apr 8, 2015 at 10:53 PM, Sawada Masahiko sawada.m...@gmail.com 
 wrote:
 On Wed, Apr 8, 2015 at 1:09 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Apr 8, 2015 at 1:57 AM, Sawada Masahiko sawada.m...@gmail.com 
 wrote:
 On Wed, Apr 8, 2015 at 1:11 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:

 On Tue, Apr 7, 2015 at 1:04 PM, Sawada Masahiko sawada.m...@gmail.com
 wrote:

 On Tue, Apr 7, 2015 at 10:12 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 
  On Tue, Apr 7, 2015 at 7:22 AM, Sawada Masahiko sawada.m...@gmail.com
  wrote:
 
  Thank you for your reviewing.
  I modified the patch and attached latest version patch(v7).
  Please have a look it.
 
 
  Looks good to me. Attached patch (v8) just fix a tab indentation in
  gram.y.
 

 I had forgotten fix a tab indentation, sorry.
 Thank you for reviewing!
 It looks good to me too.
 Can this patch be marked as Ready for Committer?


 +1


 Changed status to Ready for Committer.

 The patch adds new syntax like REINDEX ... WITH VERBOSE, i.e., () is not
 added after WITH clause. Did we reach the consensus about this syntax?
 The last email from Robert just makes me think that () should be added
 into the syntax.


 Thank you for taking time for this patch!

 I removed the FORCE option from REINDEX, so you would need to update the 
 patch.

Thanks.
I will change the patch with this change.

 This was quite a complicated issue since we already have a lot of
 command styles currently.
 We can think of many styles from various perspectives: kind of DDL, new
 or old command, maintenance command. And each command has its own
 story.
 I believe we reached consensus on this style at least once
 (please see the previous discussion), but we might need to discuss it
 more...

 Okay, another question is that; WITH must be required whenever the options
 are specified? Or should it be abbreviatable?

In the previous discussion, the WITH clause was always required for the
VERBOSE option. I don't think we should allow abbreviating the WITH
clause for now. Also, now that the FORCE option has been removed, we
could bring back the idea of putting VERBOSE after the object name, like
CLUSTER. (INDEX, TABLE, etc.)
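
To make the alternatives concrete, the spellings discussed so far look roughly
like this (illustrative only, with a made-up table name; the final syntax is
still to be decided):

  REINDEX TABLE my_table WITH VERBOSE;    -- the patch as submitted
  REINDEX TABLE my_table WITH (VERBOSE);  -- with () added after WITH
  REINDEX TABLE my_table VERBOSE;         -- VERBOSE after the object name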

Regards,

---
Sawada Masahiko




Re: [HACKERS] Row security violation error is misleading

2015-04-09 Thread Stephen Frost
* Craig Ringer (cr...@2ndquadrant.com) wrote:
 On 9 April 2015 at 01:30, Dean Rasheed dean.a.rash...@gmail.com wrote:
  That doesn't match what the code currently does:

Ah, right.

   * Also, allow extensions to add their own policies.
   *
   * Note that, as with the internal policies, if multiple policies are
   * returned then they will be combined into a single expression with
   * all of them OR'd together.  However, to avoid the situation of an
   * extension granting more access to a table than the internal policies
   * would allow, the extension's policies are AND'd with the internal
   * policies.  In other words - extensions can only provide further
   * filtering of the result set (or further reduce the set of records
   * allowed to be added).
 
  which seems reasonable, and means that if there are both internal and
  external policies, an allow all external policy would be a no-op.
 
 Great, I'm glad to see that they're ANDed now.
 
 I wasn't caught up with the current state of this. At some earlier point
 policies from hooks were being ORed, which made mandatory access control
 extensions impossible.

That's what I had been recalling also.  I'm certainly on-board with
wanting to support MAC, but I'm wondering what we're going to do when we
add support for restrictive policies.  We'd certainly want extensions
to be able to provide both kinds and we will need to make sure they are
added correctly, with all of the restrictive policies being combined
and applied together, and then the permissive policies similarly
combined (but with OR's).

Thoughts?

Thanks!

Stephen




Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)

2015-04-09 Thread Kouhei Kaigai
 On 2015/04/09 10:48, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
 * merge_fpinfo()
  It seems to me fpinfo-rows should be joinrel-rows, and
  fpinfo-width also should be joinrel-width.
  No need to have special intelligence here, isn't it?
 
 
  Oops. They are a vestige of my struggle, which disabled the SELECT
  clause optimization (omit unused columns).  Now width and rows are
  inherited from joinrel.  Besides that, fdw_startup_cost and
  fdw_tuple_cost seem wrong, so I fixed them to use a simple sum, not
  an average.
 
  Does fpinfo->fdw_startup_cost represent the cost to open a connection to the
  remote PostgreSQL server?
 
  postgres_fdw.c:1757 says as follows:
 
 /*
  * Add some additional cost factors to account for connection overhead
  * (fdw_startup_cost), transferring data across the network
  * (fdw_tuple_cost per retrieved row), and local manipulation of the data
  * (cpu_tuple_cost per retrieved row).
  */
 
  If so, does a ForeignScan that involves 100 underlying relations incur 100
  times the network overhead on startup? Probably not.
  I think an average is better than a sum, and the max of them would reflect
  the cost more correctly.
 
 In my current opinion, no. Though I remember that I've written such comments
 before :P.
 
 Connection establishment occurs only once for the very first access to the 
 server,
 so in the use cases with long-lived session (via psql, connection pooling, 
 etc.),
 taking connection overhead into account *every time* seems too pessimistic.
 
 Instead, for practical cases, fdw_startup_cost should account for the overhead
 of query construction and of getting the first response (hopefully excluding
 the retrieval of the actual data).  These overheads are visible in the order
 of milliseconds.  I’m not sure how much is appropriate for the default, but
 100 seems not so bad.
 
 Anyway, fdw_startup_cost is a per-server setting, the same as fdw_tuple_cost,
 and it should not be modified according to the width of the result, so using
 fpinfo_o->fdw_startup_cost would be OK.

Indeed, I forgot the connection cache mechanism. As long as we define
fdw_startup_cost as you mentioned, it seems to me your logic is heuristically
reasonable.
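
For context, both of these are ordinary per-server options of postgres_fdw,
e.g. (server name and connection details are illustrative):

  CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'remote.example.com', dbname 'postgres',
               fdw_startup_cost '100', fdw_tuple_cost '0.01');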

  Also, fdw_tuple_cost introduces the cost of data transfer over the network.
  I think a weighted average is the best strategy, like:

    fpinfo->fdw_tuple_cost =
        (fpinfo_o->width / (fpinfo_o->width + fpinfo_i->width)) *
            fpinfo_o->fdw_tuple_cost +
        (fpinfo_i->width / (fpinfo_o->width + fpinfo_i->width)) *
            fpinfo_i->fdw_tuple_cost;
 
  That's just my suggestion. Please apply the best way you thought.
 
 I can’t agree with that strategy, because 1) a width of 0 causes a per-tuple
 cost of 0, and 2) fdw_tuple_cost never varies within a foreign server.  Using
 fpinfo_o->fdw_tuple_cost (it must be identical to fpinfo_i->fdw_tuple_cost)
 seems reasonable.  Thoughts?

OK, you are right.

I think it is time to hand over the patch reviewing to committers.
So, let me mark it ready for committers.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com



Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Michael Paquier
On Thu, Apr 9, 2015 at 4:30 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Mon, Apr 6, 2015 at 11:02 AM, Petr Jelinek p...@2ndquadrant.com wrote:

 On 06/04/15 14:30, Petr Jelinek wrote:

 On 06/04/15 11:02, Simon Riggs wrote:

 Are we ready for a final detailed review and commit?


 I plan to send v12 in the evening with some additional changes that came
 up from Amit's comments + some improvements to error reporting. I think
 it will be ready then.


 Ok so here it is.

 Changes vs v11:
 - changed input parameter list to expr_list
 - improved error reporting, particularly when input parameters are wrong
 - fixed SELECT docs to show correct syntax and mention that there can be
 more sampling methods
 - added name of the sampling method to the explain output - I don't like
 the code much there as it has to look into RTE but on the other hand I don't
 want to create new scan node just so it can hold the name of the sampling
 method for explain
 - made views containing TABLESAMPLE clause not autoupdatable
 - added PageIsAllVisible() check before trying to check for tuple
 visibility
 - some typo/white space fixes



 Compiler warnings:
 explain.c: In function 'ExplainNode':
 explain.c:861: warning: 'sname' may be used uninitialized in this function


 Docs spellings:

 PostgreSQL distrribution  extra r.

 The optional parameter REPEATABLE acceps   accepts.  But I don't know that
 'accepts' is the right word.  It makes the seed value sound optional to
 REPEATABLE.

 each block having same chance  should have the before same.

 Both of those sampling methods currently  I think it should be these
 not those, as this sentence is immediately after their introduction, not
 at a distance.

 ...tuple contents and decides if to return in, or zero if none  Something
 here is confusing. return it, not return in?

 Other comments:

 Do we want tab completions for psql?  (I will never remember how to spell
 BERNOULLI).

Yes. I think so.

 Needs a Cat version bump.

The committer who will pick up this patch will normally do it.

Patch 1 is simple enough and looks fine to me.

Regarding patch 2...
I found for now some typos:
+  titlestructnamepg_tabesample_method/structname/title
+productnamePostgreSQL/productname distrribution:

Also, I am wondering if the sampling logic based on block analysis is
actually correct, for example for now this fails and I think that we
should support it:
=# with query_select as (select generate_series(1, 10) as a) select
query_select.a from query_select tablesample system (100.0/11)
REPEATABLE ();
ERROR:  42P01: relation query_select does not exist

How does the SQL spec define exactly TABLESAMPLE? Shouldn't we get a
sample from a result set?
Thoughts?
-- 
Michael




Re: [HACKERS] Row security violation error is misleading

2015-04-09 Thread Craig Ringer
On 9 April 2015 at 14:56, Dean Rasheed dean.a.rash...@gmail.com wrote:

 On 8 April 2015 at 16:27, Stephen Frost sfr...@snowman.net wrote:
  * Dean Rasheed (dean.a.rash...@gmail.com) wrote:
  I actually re-used the sql status code 42501 -
  ERRCODE_INSUFFICIENT_PRIVILEGE for a RLS check failure because of the
  parallel with permissions checks, but I quite like Craig's idea of
  inventing a new status code for this, so that it can be more easily
  distinguished from a lack of GRANTed privileges.
 
  As I mentioned to Kevin, I'm not sure that this is really a useful
  distinction.  I'm quite curious if other systems provide that
  distinction between grant violations and policy violations.  If they do
  then that would certainly bolster the argument to provide the
  distinction in PG.
 

 OK, on further reflection I think that's probably right.

 ERRCODE_INSUFFICIENT_PRIVILEGE is certainly more appropriate than
 anything based on a WCO violation, because it reflects the fact that
 the current user isn't allowed to perform the insert/update, but
 another user might be allowed, so this is a privilege problem, not a
 data error.


I'd be OK with that too. Reusing WCO's code for something that isn't really
with check option at all was my concern, really.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] SSL information view

2015-04-09 Thread Magnus Hagander
On Thu, Apr 9, 2015 at 5:46 PM, Andres Freund and...@anarazel.de wrote:

 On 2015-04-09 15:56:00 +0200, Magnus Hagander wrote:
  On Thu, Apr 9, 2015 at 3:20 PM, Andres Freund and...@anarazel.de
 wrote:
 
   Hi,
  
   On 2015-04-09 13:31:55 +0200, Magnus Hagander wrote:
+  row
+
  
 entrystructnamepg_stat_ssl/indextermprimarypg_stat_ssl/primary/indexterm/entry
+   entryOne row per connection (regular and replication),
 showing
   information about
+SSL used on this connection.
+See xref linkend=pg-stat-ssl-view for details.
+   /entry
+  /row
+
  
   I kinda wonder why this even separate from pg_stat_activity, at least
   from the POV of the function gathering the result. This way you have to
   join between pg_stat_activity and pg_stat_ssl which will mean that the
   two don't necessarily correspond to each other.
  
 
  To keep from cluttering  pg_stat_activity for the majority of users who
  are the ones not actually using SSL.

 I'm not sure that's actually a problem. But even if, it seems a bit
 better to return the data for both views from one SRF and just define
 the views differently. That way there's a way to query without the
 danger of matching the wrong rows between pg_stat_activity & stat_ssl
 due to pid reuse.


Ah, now I see your point.

Attached is a patch which does this, at least the way I think you meant.
And also fixes a nasty crash bug in the previous one that for some reason
my testing missed (can't set a pointer to null and then expect to use it
later, no... When it's in shared memory, it survives into a new backend.)
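
As a usage sketch based on the columns documented in the patch, the SSL details
can then be matched up with the rest of the connection information like this:

  SELECT a.pid, a.usename, a.client_addr, s.version, s.cipher, s.bits
    FROM pg_stat_activity a
    JOIN pg_stat_ssl s USING (pid)
   WHERE s.ssl;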

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
***
*** 300,305  postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
--- 300,313 
/entry
   /row
  
+  row
+   entrystructnamepg_stat_ssl/indextermprimarypg_stat_ssl/primary/indexterm/entry
+   entryOne row per connection (regular and replication), showing information about
+SSL used on this connection.
+See xref linkend=pg-stat-ssl-view for details.
+   /entry
+  /row
+ 
  /tbody
 /tgroup
/table
***
*** 825,830  postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
--- 833,907 
 listed; no information is available about downstream standby servers.
/para
  
+   table id=pg-stat-ssl-view xreflabel=pg_stat_ssl
+titlestructnamepg_stat_ssl/structname View/title
+tgroup cols=3
+ thead
+ row
+   entryColumn/entry
+   entryType/entry
+   entryDescription/entry
+  /row
+ /thead
+ 
+tbody
+ row
+  entrystructfieldpid//entry
+  entrytypeinteger//entry
+  entryProcess ID of a backend or WAL sender process/entry
+ /row
+ row
+  entrystructfieldssl//entry
+  entrytypeboolean//entry
+  entryTrue if SSL is used on this connection/entry
+ /row
+ row
+  entrystructfieldversion//entry
+  entrytypetext//entry
+  entryVersion of SSL in use, or NULL if SSL is not in use
+   on this connection/entry
+ /row
+ row
+  entrystructfieldcipher//entry
+  entrytypetext//entry
+  entryName of SSL cipher in use, or NULL if SSL is not in use
+   on this connection/entry
+ /row
+ row
+  entrystructfieldbits//entry
+  entrytypeinteger//entry
+  entryNumber of bits in the encryption algorithm used, or NULL
+  if SSL is not used on this connection/entry
+ /row
+ row
+  entrystructfieldcompression//entry
+  entrytypeboolean//entry
+  entryTrue if SSL compression is in use, false if not,
+   or NULL if SSL is not in use on this connection/entry
+ /row
+ row
+  entrystructfieldclientdn//entry
+  entrytypetext//entry
+  entryDistinguished Name (DN) field from the client certificate
+   used, or NULL if no client certificate was supplied or if SSL
+   is not in use on this connection. This field is truncated if the
+   DN field is longer than symbolNAMEDATALEN/symbol (64 characters
+   in a standard build)
+  /entry
+ /row
+/tbody
+/tgroup
+   /table
+ 
+   para
+The structnamepg_stat_ssl/structname view will contain one row per
+backend or WAL sender process, showing statistics about SSL usage on
+this connection. It can be joined to structnamepg_stat_activity/structname
+or structnamepg_stat_replication/structname on the
+structfieldpid/structfield column to get more details about the
+connection.
+   /para
+ 
  
table id=pg-stat-archiver-view xreflabel=pg_stat_archiver
 titlestructnamepg_stat_archiver/structname View/title
*** a/src/backend/catalog/system_views.sql
--- b/src/backend/catalog/system_views.sql
***
*** 646,651  CREATE VIEW pg_stat_replication AS
--- 

Re: [HACKERS] GUC context information in the document.

2015-04-09 Thread Kyotaro HORIGUCHI
Hello, thank you for the comment.

At Tue, 31 Mar 2015 15:07:08 -0400, Tom Lane t...@sss.pgh.pa.us wrote in 
24663.1427828...@sss.pgh.pa.us
 Kyotaro HORIGUCHI horiguchi.kyot...@lab.ntt.co.jp writes:
  If I'm not missing anything, putting stereotyped information about
  GUC contexts like the following would be usable.
 
  shared_buffers (integer), (effective after server restart)
 
  log_destination (string), (effective after config reload)
 
  log_min_duration_statement (integer), (effective in-session, superuser 
  only)
 
  DateStyle (string), (effective in-session)
 
  What do you think about this?
 
 TBH, those don't seem like improvements over the existing boilerplate
 texts, particularly not the last two.
 
 I follow the general idea of getting rid of the boilerplate sentences
 in favor of an annotation similar to the variable datatype notations;
 but such annotations would have to be *very* carefully wordsmithed
 to be both precise and understandable yet brief enough to fit ... and
 these are not.  I'm not sure such a goal is possible at all.

Exactly. Inappropriate wording results in simply moving the
problem to another place, and I know I am completely not fit for
that work.

 If we were to go in this direction, I'd favor just annotating with
 the same context keywords that we already expose to users in the
 pg_settings view, ie more like
 
   shared_buffers (integer, postmaster context)

This was a choice that came to me, but I feel it is a little difficult
for users to recognize. But since any possible (simple) wording
won't tell precisely what it means, and it would be easy for
developers to maintain, it seems the best way for now.
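
For reference, those context keywords are already visible to users, e.g.:

  SELECT name, context FROM pg_settings
   WHERE name IN ('shared_buffers', 'log_destination',
                  'log_min_duration_statement', 'DateStyle');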

 and then we'd need some introductory text in section 18.1 that defines
 these keywords.  Maybe we could move the text about them that's currently
 associated with the pg_settings view (section 48.69 ATM).
 
 But TBH, I'm not sure that anything like this would reduce the number
 of questions.  It's basically relying on the assumption that people would
 read section 18.1 before asking, and that's a shaky assumption.

Mmm. I completely agree with you about the first question for the
questioner. I hope that no second question comes (for that
questioner, at least) if we could say "Hem, the answer to your
question is written here in the documentation." But on the other
hand, I personally think that it is very similar to "Hem, the SQL
statement below gives you the answer to your question."

OK, this proposal doesn't seem to be getting much approval. I'll try
the second way above for the time being.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Magnus Hagander
On Apr 9, 2015 2:20 AM, Robert Haas robertmh...@gmail.com wrote:

 On Apr 9, 2015, at 1:08 AM, Andres Freund and...@anarazel.de wrote:
  I'm not convinced we really need a version that closes and moves an
entry. But if we indeed want it we can just name it "moved".

 +1.

Is that a +1 for naming it "moved", or for not having it? :-)

I can definitely go with "moved". But I would like to keep it - the reason
for having it in the first place is to make the history of the patch follow
along when it goes to the next cf. If we don't have the move option, I
think it's likely that we'll be back to the same patch having multiple
completely unrelated entries in different cfs.

/Magnus


Re: [HACKERS] Parallel Seq Scan

2015-04-09 Thread David Rowley
On 9 April 2015 at 00:12, Amit Kapila amit.kapil...@gmail.com wrote:

 On Wed, Apr 8, 2015 at 3:30 PM, David Rowley dgrowle...@gmail.com wrote:
 
  On 8 April 2015 at 15:46, Amit Kapila amit.kapil...@gmail.com wrote:
 
  I think there is always a chance that resources (like parallel-workers)
  won't be available at run-time even if we decide about them at
  executor-start phase unless we block it for that node's usage and OTOH
  if we block it (by allocating) those resources during executor-start
 phase
  then we might end up blocking it too early or may be they won't even get
  used if we decide not to execute that node.  On that basis, it seems to
  me current strategy is not bad where we decide during planning time and
  later during execution time if not all resources (particularly
 parallel-workers)
  are not available, then we use only the available one's to execute the
 plan.
  Going forward, I think we can improve the same if we decide not to
 shutdown
  parallel workers till postmaster shutdown once they are started and
  then just allocate them during executor-start phase.
 
 
 
  Yeah, but what about when workers are not available in cases when the
 plan was only a win because the planner thought there would be lots of
 workers... There could have been a more optimal serial plan already thrown
 out by the planner which is no longer available to the executor.
 

 That could also happen even if we decide in executor-start phase.


Yes this is true, but if we already have the most optimal serial plan, then
there's no issue.


 I agree that there is a chance of loss in case appropriate resources
 are not available during execution, but the same is true for work_mem
 as well for a non-parallel plan.  I think we need some advanced way
 to handle the case when resources are not available during execution
 by either re-planing the statement or by some other way, but that can
 also be done separately.


There was some talk of re-planning queries over on the Removing INNER JOINs
thread:
http://www.postgresql.org/message-id/CA+TgmoaHi8tq7haZCf46O_NUHT8w=p0z_n59dc0yojfmucs...@mail.gmail.com

Regards

David Rowley


Re: [HACKERS] psql showing owner in \dT

2015-04-09 Thread Alvaro Herrera
Magnus Hagander wrote:
 After running into the need twice now - is there a particular reason why we
 don't have psql showing the owner of a type, at least in \dT+?

Can't think of anything ...

 If not, how about attached trivial patch?

Owner should normally be printed before ACL, no?


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] NOT NULL markings for BKI columns

2015-04-09 Thread Andres Freund
 Specifically, this code chunk:
 
 +   if (defined $attopt)
 +   {
 +   if ($attopt eq 'PG_FORCE_NULL')
 +   {
 +   $row{'forcenull'} = 1;
 +   }
 +   elsif ($attopt eq 'BKI_FORCE_NOT_NULL')
 +   {
 +   $row{'forcenotnull'} = 1;
 +   }
 +   else
 +   {
 +   die "unknown column option $attopt on column $attname"
 +   }
 +   }
 

 We should have BKI_FORCE_NULL instead of PG_FORCE_NULL in the comparison.

Ick. Thanks. Fixed.

Just out of interest and if you can answer: What are you using it for? I
guess it's AS?

Greetings,

Andres Freund




Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread Magnus Hagander
On Thu, Apr 9, 2015 at 2:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Magnus Hagander mag...@hagander.net writes:
  On Apr 9, 2015 2:20 AM, Robert Haas robertmh...@gmail.com wrote:
  +1.

  Is that at +1 for naming it moved, or for not having it? :-)

  I can definitely go with moved. Buy I would like to keep it - the reason
  for having it in the first place is to make the history of the patch
 follow
  along when it goes to the next cf. If we don't have the move option, I
  think it's likely that we'll be back to the same patch having multiple
  completely unrelated entries in different cfs.

 The problem with the whole thing is that you're asking the person doing
 the returned marking to guess whether the patch will be resubmitted in
 a future CF.

 The right workflow here, IMO, is that a patch should be marked returned or
 rejected, full stop; and then when/if the author submits a new version for
 a future CF, there should be a way *at that time* to re-link the email
 thread into that future CF.


If we just link the email thread, that would mean we lose all those
precious annotations we just added support for. Is that really what you
meant? We also lose all history of a patch, and can't see that a previous
version existed in a previous commitfest, without manually checking each
and every one. But if that's a history we don't *want*, that's of course
doable, but it seems wrong to me?

I'm not necessarily saying that what we have now is right, but just giving
up on the history completely doesn't seem like a very good workflow to me.

We could always tell those people to go back and find your old patch and
re-open it, but in fairness, are people likely to actually do that?


Moved is really only applicable, I think, for cases where we punt a
 patch to the next CF for lack of time.


Well, that's basically what returned with feedback is now, so I guess
that one should just be renamed in that case. And we add a new returned
with feedback that closes out the patch and doesn't move it anywhere.
Which is pretty similar to the suggestion earlier in this thread except it
also swaps the two names.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] FPW compression leaks information

2015-04-09 Thread Stephen Frost
* Heikki Linnakangas (hlinn...@iki.fi) wrote:
 What should we do about this? Make it configurable on a per-table
 basis? Disable FPW compression on system tables? Disable FPW on
 tables you don't have SELECT access to? Add a warning to the docs?

REVOKE EXECUTE ON FUNCTION pg_current_xlog_insert_location() FROM public;

Thanks,

Stephen




Re: [HACKERS] rejected vs returned with feedback in new CF app

2015-04-09 Thread David G. Johnston
On Thursday, April 9, 2015, Magnus Hagander mag...@hagander.net wrote:

 On Thu, Apr 9, 2015 at 2:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Magnus Hagander mag...@hagander.net writes:
  On Apr 9, 2015 2:20 AM, Robert Haas robertmh...@gmail.com wrote:
  +1.

  Is that a +1 for naming it "moved", or for not having it? :-)

  I can definitely go with "moved". But I would like to keep it - the reason
  for having it in the first place is to make the history of the patch
 follow
  along when it goes to the next cf. If we don't have the move option, I
  think it's likely that we'll be back to the same patch having multiple
  completely unrelated entries in different cfs.

 The problem with the whole thing is that you're asking the person doing
 the returned marking to guess whether the patch will be resubmitted in
 a future CF.

 The right workflow here, IMO, is that a patch should be marked returned or
 rejected, full stop; and then when/if the author submits a new version for
 a future CF, there should be a way *at that time* to re-link the email
 thread into that future CF.


 If we just link the email thread, that would mean we lose all those
 precious annotations we just added support for. Is that really what you
 meant? We also lose all history of a patch, and can't see that a previous
 version existed in a previous commitfest, without manually checking each
 and every one. But if that's a history we don't *want*, that's of course
 doable, but it seems wrong to me?

 I'm not necessarily saying that what we have now is right, but just giving
 up on the history completely doesn't seem like a very good workflow to me.

 We could always tell those people to go back and find your old patch and
 re-open it, but in fairness, are people likely to actually do that?


 Moved is really only applicable, I think, for cases where we punt a
 patch to the next CF for lack of time.


 Well, that's basically what returned with feedback is now, so I guess
 that one should just be renamed in that case. And we add a new returned
 with feedback that closes out the patch and doesn't move it anywhere.
 Which is pretty similar to the suggestion earlier in this thread except it
 also swaps the two names.


Can we create a fake CF time period into which all of these waiting on
author entries can be placed and readily browsed/found instead of leaving
them in whatever CF they happened to stall in?

David J.


Re: [HACKERS] How about to have relnamespace and relrole?

2015-04-09 Thread Kyotaro HORIGUCHI
I sent the previous mail unfinished.

At Thu, 09 Apr 2015 17:25:10 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
horiguchi.kyot...@lab.ntt.co.jp wrote in 
20150409.172510.29010318.horiguchi.kyot...@lab.ntt.co.jp
 Hello, sorry for the absence. I changed the regnamespace's
 behavior as the same as the other reg* types. And I attached a
 patch as separate one that fixes regroleout to do the same as the
 other reg* types, because I have

because I doubt that it is an appropriate way to do so.

 0001-Add-regrole_v6.patch : fix regnamespace to behave as the
 same as the other reg* types.
 
 0001-Make-regnamespace-behave-as-the-same-as-other-reg-ty.patch:
   Diff from v5 to v6, only for convenience.
 
 0001-Fix-regroleout-s-behavior-following-other-out-functi.patch:
   Fixes regroleout so as to behave as the same as other reg*
   types, but might be a bit too large.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Shouldn't CREATE TABLE LIKE copy the relhasoids property?

2015-04-09 Thread Bruce Momjian
On Thu, Apr  9, 2015 at 12:32:23PM -0300, Alvaro Herrera wrote:
 Bruce Momjian wrote:
 
  Should this be listed in the release notes as a backward-incompatibility?
 
 Isn't this a backpatchable bug fix?

Uh, I don't think so.  I think users are used to the existing behavior
and changing it on them will cause more harm than good.  Also, we have
had zero field reports about this problem.

The updated attached patch handles cases where default_with_oids = true.
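
A sketch of the behavior the patch is aiming for (table names are hypothetical):

  CREATE TABLE src (a int) WITH OIDS;
  CREATE TABLE dst (LIKE src);  -- with the patch, dst picks up relhasoids from src
  SELECT relname, relhasoids FROM pg_class WHERE relname IN ('src', 'dst');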

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
new file mode 100644
index 1fc8c2c..6a72f15
*** a/src/backend/parser/parse_utilcmd.c
--- b/src/backend/parser/parse_utilcmd.c
***
*** 56,61 
--- 56,62 
  #include "rewrite/rewriteManip.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
+ #include "utils/guc.h"
  #include "utils/lsyscache.h"
  #include "utils/rel.h"
  #include "utils/syscache.h"
*** transformCreateStmt(CreateStmt *stmt, co
*** 222,228 
  	cxt.blist = NIL;
  	cxt.alist = NIL;
  	cxt.pkey = NULL;
! 	cxt.hasoids = interpretOidsOption(stmt->options, true);
  
  	Assert(!stmt->ofTypename || !stmt->inhRelations);	/* grammar enforces */
  
--- 223,229 
  	cxt.blist = NIL;
  	cxt.alist = NIL;
  	cxt.pkey = NULL;
! 	cxt.hasoids = default_with_oids;
  
  	Assert(!stmt->ofTypename || !stmt->inhRelations);	/* grammar enforces */
  
*** transformCreateStmt(CreateStmt *stmt, co
*** 281,286 
--- 282,298 
  	 * Output results.
  	 */
  	stmt->tableElts = cxt.columns;
+ 	/*
+ 	 * Add WITH/WITHOUT OIDS, if necessary.  A literal statement-specified
+ 	 * WITH/WITHOUT OIDS will still take precedence because the first
+ 	 * matching oids in options is used.
+ 	 */
+ 	if (!interpretOidsOption(stmt->options, true) && cxt.hasoids)
+ 		stmt->options = lappend(stmt->options, makeDefElem("oids",
+ (Node *)makeInteger(TRUE)));
+ 	if (interpretOidsOption(stmt->options, true) && !cxt.hasoids)
+ 		stmt->options = lappend(stmt->options, makeDefElem("oids",
+ (Node *)makeInteger(FALSE)));
  	stmt->constraints = cxt.ckconstraints;
  
  	result = lappend(cxt.blist, stmt);
*** transformTableLikeClause(CreateStmtConte
*** 849,854 
--- 861,868 
  		}
  	}
  
+ 	cxt->hasoids = relation->rd_rel->relhasoids;
+ 
  	/*
  	 * Copy CHECK constraints if requested, being careful to adjust attribute
  	 * numbers so they match the child.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TABLESAMPLE patch

2015-04-09 Thread Simon Riggs
On 9 April 2015 at 04:12, Michael Paquier michael.paqu...@gmail.com wrote:

 Also, I am wondering if the sampling logic based on block analysis is
 actually correct, for example for now this fails and I think that we
 should support it:
 =# with query_select as (select generate_series(1, 10) as a) select
 query_select.a from query_select tablesample system (100.0/11)
 REPEATABLE ();
 ERROR:  42P01: relation query_select does not exist

 How does the SQL spec define exactly TABLESAMPLE? Shouldn't we get a
 sample from a result set?
 Thoughts?

Good catch. The above query doesn't make any sense.

TABLESAMPLE SYSTEM implies system-defined sampling mechanism, which is
block level sampling. So any block level sampling method should be
barred from operating on a result set in this way... i.e. should
generate an ERROR inappropriate sampling method specified

TABLESAMPLE BERNOULLI could work in this case, or any other non-block
based sampling mechanism. Whether it does work yet is another matter.

This query should be part of the test suite and should generate a
useful message or work correctly.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services

