Re: [HACKERS] psql - add special variable to reflect the last query status

2017-06-16 Thread Fabien COELHO



I don't have any other comments. The implementation is trivial. I reran all
the tests and they passed.

I'll mark this patch as ready for committers.


Oops, I just noticed a stupid confusion on my part which got through: I
was setting "ERROR" to "success", inverting the expected boolean value.


Here is a fixed version.
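
For reference, here is a minimal usage sketch, driving psql from a shell
script with the patch applied (the regression tests below exercise the
same behaviour):

psql -X <<'EOF'
SELECT 1 AS x UNION SELECT 2;
\echo 'status:' :STATUS 'rows:' :ROW_COUNT
SELECT 1 UNION;
\if :ERROR
  \echo 'failed with' :ERROR_CODE ':' :ERROR_MESSAGE
\endif
EOF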

--
Fabien.
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e6eba21..18c3e7e 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3450,6 +3450,35 @@ bar
   
 
   
+   ERROR
+   
+
+ Whether the last query failed, as a boolean.
+
+   
+  
+
+  
+   ERROR_CODE
+   
+
+ The error code associated with the last query, or
+ 0 if no error occurred.
+
+   
+  
+
+  
+   ERROR_MESSAGE
+   
+
+ If the last query failed, the associated error message,
+ otherwise an empty string.
+
+   
+  
+
+  
 FETCH_COUNT
 
 
@@ -3654,6 +3683,24 @@ bar
   
 
   
+   STATUS
+   
+
+ Text status of the last query.
+
+   
+  
+
+  
+   ROW_COUNT
+   
+
+ How many rows were returned or affected by the last query.
+
+   
+  
+
+  
 QUIET
 
 
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index a2f1259..e717159 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -1213,6 +1213,62 @@ PrintQueryResults(PGresult *results)
 	return success;
 }
 
+/*
+ * Set special variables for "front door" queries
+ * - STATUS: last query status
+ * - ERROR: TRUE/FALSE, whether an error occurred
+ * - ERROR_CODE: code if an error occurred, or "0"
+ * - ERROR_MESSAGE: message if an error occurred, or ""
+ * - ROW_COUNT: how many rows were returned or affected, or "0"
+ */
+static void
+SetResultVariables(PGresult *results)
+{
+	bool			success;
+	ExecStatusType	restat = PQresultStatus(results);
+	char 		   *code = PQresultErrorField(results, PG_DIAG_SQLSTATE);
+	char 		   *mesg = PQresultErrorField(results, PG_DIAG_MESSAGE_PRIMARY);
+
+	SetVariable(pset.vars, "STATUS", PQresStatus(restat) + strlen("PGRES_"));
+	SetVariable(pset.vars, "ERROR_CODE", code ? code : "0");
+	SetVariable(pset.vars, "ERROR_MESSAGE", mesg ? mesg : "");
+
+	/* check all possible PGRES_ */
+	switch (restat)
+	{
+		case PGRES_EMPTY_QUERY:
+		case PGRES_TUPLES_OK:
+		case PGRES_COMMAND_OK:
+		case PGRES_COPY_OUT:
+		case PGRES_COPY_IN:
+		case PGRES_COPY_BOTH:
+		case PGRES_SINGLE_TUPLE:
+			success = true;
+			break;
+		case PGRES_BAD_RESPONSE:
+		case PGRES_NONFATAL_ERROR:
+		case PGRES_FATAL_ERROR:
+			success = false;
+			break;
+		default:
+			/* dead code */
+			success = false;
+			psql_error("unexpected PQresultStatus: %d\n", restat);
+			break;
+	}
+
+	if (success)
+	{
+		char   *ntuples = PQcmdTuples(results);
+		SetVariable(pset.vars, "ROW_COUNT", *ntuples ? ntuples : "0");
+		SetVariable(pset.vars, "ERROR", "FALSE");
+	}
+	else
+	{
+		SetVariable(pset.vars, "ROW_COUNT", "0");
+		SetVariable(pset.vars, "ERROR", "TRUE");
+	}
+}
 
 /*
  * SendQuery: send the query string to the backend
@@ -1346,6 +1402,9 @@ SendQuery(const char *query)
 			elapsed_msec = INSTR_TIME_GET_MILLISEC(after);
 		}
 
+		/* set special variables to reflect the result status */
+		SetResultVariables(results);
+
 		/* but printing results isn't: */
 		if (OK && results)
 			OK = PrintQueryResults(results);
diff --git a/src/test/regress/expected/psql.out b/src/test/regress/expected/psql.out
index d602aee..c3972a6 100644
--- a/src/test/regress/expected/psql.out
+++ b/src/test/regress/expected/psql.out
@@ -2904,6 +2904,69 @@ bar 'bar' "bar"
 	\echo 'should print #8-1'
 should print #8-1
 \endif
+-- special result variables
+-- 2 rows select
+SELECT 1 AS stuff UNION SELECT 2;
+ stuff 
+-------
+     1
+     2
+(2 rows)
+
+\if :ERROR
+  \echo 'MUST NOT SHOW'
+\else
+  \echo 'ERROR is FALSE as expected'
+ERROR is FALSE as expected
+\endif
+\echo 'status:' :STATUS
+status: TUPLES_OK
+\echo 'error code:' :ERROR_CODE
+error code: 0
+\echo 'error message:' :ERROR_MESSAGE
+error message: 
+\echo 'number of rows:' :ROW_COUNT
+number of rows: 2
+-- syntax error
+SELECT 1 UNION;
+ERROR:  syntax error at or near ";"
+LINE 1: SELECT 1 UNION;
+                      ^
+\echo 'status:' :STATUS
+status: FATAL_ERROR
+\if :ERROR
+  \echo 'ERROR is TRUE as expected'
+ERROR is TRUE as expected
+\else
+  \echo 'MUST NOT SHOW'
+\endif
+\echo 'error code:' :ERROR_CODE
+error code: 42601
+\echo 'error message:' :ERROR_MESSAGE
+error message: syntax error at or near ";"
+\echo 'number of rows:' :ROW_COUNT
+number of rows: 0
+-- empty query
+;
+\echo 'status:' :STATUS
+status: EMPTY_QUERY
+\echo 'error code:' :ERROR_CODE
+error code: 0
+\echo 'error message:' :ERROR_MESSAGE
+error message: 
+\echo 'number of rows:' 

Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Amit Kapila
On Sat, Jun 17, 2017 at 6:57 AM, Peter Eisentraut
 wrote:
> On 6/16/17 10:12, Peter Eisentraut wrote:
>> On 6/16/17 06:30, Amit Kapila wrote:
>>> How will this compare UTF-8 strings in UTF-8 encoding?  It seems to me
>>> that ideally, it should use ucol_strcollUTF8 to compare the same,
>>> however, with patch, it will always ucol_strcoll as we never define
>>> HAVE_UCOL_STRCOLLUTF8 flag on Windows.
>>
>> We have a configure check for that, but I don't know how to replicate
>> that on Windows.
>>
>> If ucol_strcollUTF8 is not available, we have code to convert to UTF-16.
>>  This is the same code that is used for non-Windows.
>
> After thinking about this some more, I have committed a change to define
> HAVE_UCOL_STRCOLLUTF8 on Windows unconditionally.  Until someone figures
> out a different way, I think it's better that users of newish versions
> of ICU get the newer/better behavior, and users of older versions can
> file a bug.
>

Yeah, we can take that stand, or maybe document it as well, but I am not
sure that is the best way to deal with it.  I have just posted one way
to determine whether the ICU library has support for ucol_strcollUTF8;
see if that sounds like a way forward to you.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Amit Kapila
On Fri, Jun 16, 2017 at 7:42 PM, Peter Eisentraut
 wrote:
> On 6/16/17 06:30, Amit Kapila wrote:
>> How will this compare UTF-8 strings in UTF-8 encoding?  It seems to me
>> that ideally, it should use ucol_strcollUTF8 to compare the same,
>> however, with patch, it will always ucol_strcoll as we never define
>> HAVE_UCOL_STRCOLLUTF8 flag on Windows.
>
> We have a configure check for that, but I don't know how to replicate
> that on Windows.
>

I think we can find out in which version of ICU this function was
released and then determine the installed version.  We already determine
the MSVC version on Windows (refer to DetermineVisualStudioVersion in
src/tools/msvc/VSObjectFactory.pm).  I think we can determine the ICU
version with something like "uconv -V", which gives output like "uconv
v2.1  ICU 53.1".
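
Roughly the kind of check I have in mind, sketched in shell just to
illustrate the idea (the real test would live in the MSVC build scripts;
the >= 50 cutoff corresponds to ucol_strcollUTF8 being available from
ICU 50 onwards):

icu_version=$(uconv -V 2>/dev/null | sed 's/.* ICU //')   # e.g. "53.1"
icu_major=${icu_version%%.*}
if [ -n "$icu_major" ] && [ "$icu_major" -ge 50 ]; then
    echo "ICU $icu_version: ucol_strcollUTF8 should be available"
fi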

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Amit Kapila
On Fri, Jun 16, 2017 at 11:03 PM, Sergey Burladyan  wrote:
> Bruce Momjian  writes:
>
>> On Fri, Jun 16, 2017 at 08:10:13PM +0530, Amit Kapila wrote:
>> > On Fri, Jun 16, 2017 at 7:03 AM, Sergey Burladyan  
>> > wrote:
>> > > Bruce Momjian  writes:
>> > >
>> > >> !  against the old primary and standby clusters.  Verify that the
>> > >> !  Latest checkpoint location values match in all 
>> > >> clusters.
>> > >
>> > > For "Log-Shipping only" standby server this cannot be satisfied, because
>> > > last WAL from master (with shutdown checkpoint) never archived.
>> > >
>> >
>> > Yeah, we have ensured that all the transactions before shutdown
>> > checkpoint got archived.  It is done in commit
>> > 2e6107cb621d003dcab0df53ac8673ea67c4e467.  However, it is not clear to
>> > me neither it is mentioned in comments why we have done it that way.
>>
>> Yes, I am confused why Sergey doesn't see that behavior.
>

The behavior reported by Sergey is what is expected, i.e., the last file
in which the shutdown checkpoint record is written won't be archived, and
there is a reason behind that.  We always perform the shutdown checkpoint
(which writes the shutdown checkpoint record) after requesting an xlog
switch.  Any record written after the xlog switch won't be archived unless
it is so big that it consumes a complete xlog segment.

> I think this last new switched WAL with shutdown checkpoint record is
> incomplete and it does not marked as *.ready in pg_xlog/archive_status/
> and not archived.
>

Yes, that's true and is expected behavior.
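
If anyone wants to see this for themselves on a stopped pre-10 primary
(where the WAL directory is still named pg_xlog), a quick sketch:

# the shutdown checkpoint location, from the control file:
pg_controldata "$PGDATA" | grep 'Latest checkpoint location'
# the segment holding that record normally has no .ready marker here,
# i.e. it was never handed over to the archiver:
ls "$PGDATA"/pg_xlog/archive_status/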

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Ashutosh Sharma
Hi,

On Sat, Jun 17, 2017 at 6:57 AM, Peter Eisentraut
 wrote:
> On 6/16/17 10:12, Peter Eisentraut wrote:
>> On 6/16/17 06:30, Amit Kapila wrote:
>>> How will this compare UTF-8 strings in UTF-8 encoding?  It seems to me
>>> that ideally, it should use ucol_strcollUTF8 to compare the same,
>>> however, with patch, it will always ucol_strcoll as we never define
>>> HAVE_UCOL_STRCOLLUTF8 flag on Windows.
>>
>> We have a configure check for that, but I don't know how to replicate
>> that on Windows.
>>
>> If ucol_strcollUTF8 is not available, we have code to convert to UTF-16.
>>  This is the same code that is used for non-Windows.
>
> After thinking about this some more, I have committed a change to define
> HAVE_UCOL_STRCOLLUTF8 on Windows unconditionally.

Surprisingly, I am not able to find this commit on the master branch.

Until someone figures
> out a different way, I think it's better that users of newish versions
> of ICU get the newer/better behavior, and users of older versions can
> file a bug.  The alternative is that we forget about this and we keep
> using the old code path indefinitely.

Well, it will work for users who are using ICU version >= 50, but not
for older ICU versions.  So, we will have to figure out a way to either
detect the availability of ucol_strcollUTF8() on Windows, or perhaps the
ICU version itself, and set the HAVE_UCOL_STRCOLLUTF8 flag accordingly.
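
One simple manual check is to look at the ICU version the build is
pointed at, e.g. via the version macro in ICU's headers, and only define
the flag for ICU >= 50 (the header path here is installation-specific):

grep 'define U_ICU_VERSION_MAJOR_NUM' /path/to/icu/include/unicode/uvernum.h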

-- 
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com

>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Piotr Stefaniak  writes:
> On 2017-06-17 00:02, Tom Lane wrote:
>> What I'm testing with right now has just four differences from your repo:

> There are also the "portability fixes" and they're the main problem.

Fair enough.

> I've simply removed things like capsicum or __FBSDID() because I thought
> it wouldn't be a problem since Postgres will have its own copy of indent
> anyway (so that its behavior is not a moving target). I can ifdef-out
> them instead of removing entirely, I just didn't think it was important
> anymore.

We should be able to deal with those via some #define hackery, no?

> I expect to be in trouble for replacing err() and errx(), though.

Understood.  I think we could deal with this by providing err() and
errx() in a support file that would be part of our distribution but
not yours.

regards, tom lane




Re: [HACKERS] how are the rpms configured that are available in PostgreSQL RPM Building Project - Yum Repository

2017-06-16 Thread David G. Johnston
On Fri, Jun 16, 2017 at 12:40 PM, Cook, Malcolm  wrote:

> Hi,
>
>
>
> I am referring to the contents of https://yum.postgresql.org/
> (specifically version 9.6 RPMs for CentOS 7)
>
>
>
> More specifically I wonder if they are configured with: --with-python
> --with-tcl --with-pam --with-ldap
>
>
>
> And do they install the Additional Supplied Modules (
> https://www.postgresql.org/docs/9.1/static/contrib.html ) as provided
> when performing a
>
>
>
>make install world
>
>
>
> Please advise if my possibly novice question should better be posed
> elsewhere.  Or if I should expect to find answers in the fine manual.  I
> looked but did not see…
>

This may provide the info you need.  Note you'll actually need an
installed version to run it against.​

https://www.postgresql.org/docs/current/static/app-pgconfig.html

This will show what extensions are available in said cluster.

https://www.postgresql.org/docs/current/static/view-pg-available-extensions.html
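
For example, against the PGDG 9.6 packages (the binaries usually land
under /usr/pgsql-9.6; adjust the path for your installation):

/usr/pgsql-9.6/bin/pg_config --configure | tr ' ' '\n' | grep -E 'python|tcl|pam|ldap'
psql -d postgres -c 'SELECT name, default_version FROM pg_available_extensions ORDER BY name;'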

David J.


Re: [HACKERS] [BUGS] BUG #14699: Statement trigger and logical replication

2017-06-16 Thread Thomas Munro
On Sat, Jun 17, 2017 at 1:22 PM, Andres Freund  wrote:
> On 2017-06-16 21:08:44 -0400, Peter Eisentraut wrote:
>> On 6/16/17 09:13, Константин Евтеев wrote:
>> > 2017-06-13 5:57 GMT+03:00 Peter Eisentraut
>> > > > >:
>> >
>> > I think this is all working correctly and as intended.
>> >
>> > But then, why data copy for init logical replication fires statement
>> > trigger. Maybe it is also not needed?
>> > Or this feature needs to be mentioned in documentation?
>>
>> I don't know.  Hackers?
>>
>> The issue is that the logical replication initial data copy fires a
>> statement trigger for INSERT, because it's implemented as a COPY internally.
>>
>> By contrast, the normal apply worker does not fire any statement
>> triggers (because they are not "statements").
>>
>> We could adjust one or the other or leave it as is.
>
> Leave it as is.

I also noticed this while working on the open items for transition
tables.  One of my patches adds a comment to execReplication.c about
the need to do a bit more work if/when statement triggers are fired
here in future.  I assume that we'll eventually want statement
triggers to fire if eager incremental matviews depend on that (as is
proposed), or if that set-oriented FK check idea goes somewhere.
Transition tables make statement triggers a lot more powerful
(something like batch-mode after row triggers).
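
As a small illustration of that last point, with made-up names and
PG 10 syntax:

psql <<'EOF'
CREATE TABLE t (a int);
CREATE FUNCTION t_summary() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'statement inserted % row(s)', (SELECT count(*) FROM new_rows);
  RETURN NULL;
END $$;
CREATE TRIGGER t_stmt AFTER INSERT ON t
  REFERENCING NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE PROCEDURE t_summary();
INSERT INTO t SELECT generate_series(1, 3);
EOF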

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Peter Eisentraut
On 6/16/17 10:12, Peter Eisentraut wrote:
> On 6/16/17 06:30, Amit Kapila wrote:
>> How will this compare UTF-8 strings in UTF-8 encoding?  It seems to me
>> that ideally, it should use ucol_strcollUTF8 to compare the same,
>> however, with patch, it will always ucol_strcoll as we never define
>> HAVE_UCOL_STRCOLLUTF8 flag on Windows.
> 
> We have a configure check for that, but I don't know how to replicate
> that on Windows.
> 
> If ucol_strcollUTF8 is not available, we have code to convert to UTF-16.
>  This is the same code that is used for non-Windows.

After thinking about this some more, I have committed a change to define
HAVE_UCOL_STRCOLLUTF8 on Windows unconditionally.  Until someone figures
out a different way, I think it's better that users of newish versions
of ICU get the newer/better behavior, and users of older versions can
file a bug.  The alternative is that we forget about this and we keep
using the old code path indefinitely.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [BUGS] BUG #14699: Statement trigger and logical replication

2017-06-16 Thread Andres Freund
On 2017-06-16 21:08:44 -0400, Peter Eisentraut wrote:
> On 6/16/17 09:13, Константин Евтеев wrote:
> > 2017-06-13 5:57 GMT+03:00 Peter Eisentraut
> >  > >:
> > 
> > I think this is all working correctly and as intended.
> > 
> > But then, why data copy for init logical replication fires statement
> > trigger. Maybe it is also not needed?
> > Or this feature needs to be mentioned in documentation?
> 
> I don't know.  Hackers?
> 
> The issue is that the logical replication initial data copy fires a
> statement trigger for INSERT, because it's implemented as a COPY internally.
> 
> By contrast, the normal apply worker does not fire any statement
> triggers (because they are not "statements").
> 
> We could adjust one or the other or leave it as is.

Leave it as is.

- Andres




Re: [HACKERS] [BUGS] BUG #14699: Statement trigger and logical replication

2017-06-16 Thread Peter Eisentraut
On 6/16/17 09:13, Константин Евтеев wrote:
> 2017-06-13 5:57 GMT+03:00 Peter Eisentraut
>  >:
> 
> I think this is all working correctly and as intended.
> 
> But then, why data copy for init logical replication fires statement
> trigger. Maybe it is also not needed?
> Or this feature needs to be mentioned in documentation?

I don't know.  Hackers?

The issue is that the logical replication initial data copy fires a
statement trigger for INSERT, because it's implemented as a COPY internally.

By contrast, the normal apply worker does not fire any statement
triggers (because they are not "statements").

We could adjust one or the other or leave it as is.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Peter Eisentraut
On 6/16/17 10:59, Tatsuo Ishii wrote:
>> Docs stated "Publications can choose to limit the changes they produce to
>> any combination of INSERT, UPDATE, and DELETE". It is clear that only those
>> DMLs are supported.
> 
> What about COPY?

Sure, added note about that as well.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Tatsuo Ishii
>> Some of that information was sprinkled around, but I have now added a
>> new section that collects them all in one place.  (section 31.4)
> 
> 
> Shouldn't we mention that COPY is supported?

I think any commands that are not mentioned in the section are
considered to be supported.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] RLS policy not getting honored while pg_dump on declarative partition

2017-06-16 Thread Stephen Frost
Greetings,

* Rushabh Lathia (rushabh.lat...@gmail.com) wrote:
> While doing some testing I noticed that the RLS policy is not getting honored
> while running pg_dump on a declarative partition.
> 
> I can understand that while doing SELECT on individual child
> table, policy of parent is not getting applied. But is this desirable
> behaviour? I think for partitions, any policy on the root table should
> get redirected to the child, thoughts?
> 
> If current behaviour is desirable then at least we should document this.

The current behaviour matches how the GRANT system works: unless it's
been changed as part of the partitioning patches, we don't check the
privileges on the parent to see if an individual has access to the
child.

I think we could certainly consider if this behavior is desirable in a
system which includes partitioning instead of inheritance, but if we
wish to do so then I think we should be considering if the GRANT system
should also be changed as I do feel the two should be consistent.

Thinking it through a bit though, I would imagine someone certainly
might want to GRANT access to a given partition and not others, though
that could actually be done with an appropriate RLS policy on the
parent, but until we improve the performance of constraint exclusion (or
change entirely how all of that works with partitions...), I'm not sure
that's a practical answer in all cases.  It might also be the case that
one would wish for different policies to be used when a user is
accessing a table directly vs. going through the parent.
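
To make the behaviour under discussion concrete, a sketch with made-up
names (the policy applies when querying through the parent, but not when
a non-owner hits the partition directly):

psql <<'EOF'
CREATE ROLE app_user;
CREATE TABLE measurements (id int, val text) PARTITION BY RANGE (id);
CREATE TABLE measurements_p1 PARTITION OF measurements
  FOR VALUES FROM (1) TO (1000);
GRANT SELECT ON measurements, measurements_p1 TO app_user;
ALTER TABLE measurements ENABLE ROW LEVEL SECURITY;
CREATE POLICY visible ON measurements USING (id < 10);
INSERT INTO measurements SELECT g, 'x' FROM generate_series(1, 100) g;
SET ROLE app_user;
SELECT count(*) FROM measurements;     -- filtered by the parent's policy
SELECT count(*) FROM measurements_p1;  -- parent's policy not applied here
EOF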

Thanks!

Stephen




[HACKERS] how are the rpms configured that are available in PostgreSQL RPM Building Project - Yum Repository

2017-06-16 Thread Cook, Malcolm
Hi,

I am referring to the contents of https://yum.postgresql.org/ (specifically
version 9.6 RPMs for CentOS 7).

More specifically, I wonder if they are configured with: --with-python --with-tcl
--with-pam --with-ldap

And do they install the Additional Supplied Modules 
(https://www.postgresql.org/docs/9.1/static/contrib.html ) as provided when 
performing a

   make install world

Thank you!

Please advise if my possibly novice question should better be posed elsewhere.  
Or if I should expect to find answers in the fine manual.  I looked but did not 
see…

~Malcolm Cook


[HACKERS] Incorrect comment in 001_ssltests.pl

2017-06-16 Thread Michael Paquier
Hi all,

I have noticed the following thing:
--- a/src/test/ssl/t/001_ssltests.pl
+++ b/src/test/ssl/t/001_ssltests.pl
@@ -34,8 +34,6 @@ sub run_test_psql
 # The first argument is a (part of a) connection string, and it's also printed
 # out as the test case name. It is appended to $common_connstr global variable,
 # which also contains a libpq connection string.
-#
-# The second argument is a hostname to connect to.
 sub test_connect_ok
 {
my $connstr = $_[0]

But test_connect_ok and test_connect_fails do not have a second
argument as the hostname is appended directly into $common_connstr.

(I am thinking as well that run_test_psql, test_connect_ok and
test_connect_fails ought to be moved to ServerSetup.pm so as other SSL
tests can take advantage of them, but that's another story for another
day.)

Thanks,
-- 
Michael
diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl
index f8847e3932..32df273929 100644
--- a/src/test/ssl/t/001_ssltests.pl
+++ b/src/test/ssl/t/001_ssltests.pl
@@ -34,8 +34,6 @@ sub run_test_psql
 # The first argument is a (part of a) connection string, and it's also printed
 # out as the test case name. It is appended to $common_connstr global variable,
 # which also contains a libpq connection string.
-#
-# The second argument is a hostname to connect to.
 sub test_connect_ok
 {
 	my $connstr = $_[0];



Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Piotr Stefaniak
On 2017-06-17 00:02, Tom Lane wrote:
> Piotr Stefaniak  writes:
>> On 2017-06-16 21:56, Tom Lane wrote:
>>> Unless Piotr objects, I propose to add another switch to bsdindent
>>> that selects this behavior, and then we can drop entab, removing
>>> another impediment to getting pgindent working.
> 
>> I understand the reasoning, but this is a very specific need and I think
>> not at all universal for anyone else in the future. One of the bugs
>> listed in indent's manpage is that it "has more switches than ls(1)". So
>> currently I'm against pushing an option for the above upstream, to the
>> FreeBSD repository.
> 
>> Why not add this to the already non-empty list of custom patches?
> 
> Umm ... I thought the idea was to get to the point where the list of
> custom patches *is* empty.  Except for carrying our own Makefile of
> course.  I'd be sad if we needed a fork just for this.
> 
> What I'm testing with right now has just four differences from your repo:

There are also the "portability fixes" and they're the main problem.

I've simply removed things like capsicum or __FBSDID() because I thought
it wouldn't be a problem since Postgres will have its own copy of indent
anyway (so that its behavior is not a moving target). I can ifdef-out
them instead of removing entirely, I just didn't think it was important
anymore.

I expect to be in trouble for replacing err() and errx(), though.


> 1. This workaround for what I believe you agree is a bug:
> 
> - ps.in_decl = ps.decl_on_line = ps.last_token != type_def;
> + ps.in_decl = ps.decl_on_line = true;

That will need a proper fix...

> 2. The long-lines adjustment I just sent you a patch for.

That looks very good.

> 3. The tab-vs-space difference under discussion here.

I can be convinced to make it another option upstream. But I dislike it
nevertheless.

> 4. A temporary hack affecting the indentation of comments on the same line
> (forcing them to a multiple of 8 spaces even though tabsize is 4).  I have
> every intention of dropping that one later; I just don't want to deal with
> comment reindentation at the same time as these other things.

Great!



Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Piotr Stefaniak  writes:
> On 2017-06-16 21:56, Tom Lane wrote:
>> Unless Piotr objects, I propose to add another switch to bsdindent
>> that selects this behavior, and then we can drop entab, removing
>> another impediment to getting pgindent working.

> I understand the reasoning, but this is a very specific need and I think
> not at all universal for anyone else in the future. One of the bugs
> listed in indent's manpage is that it "has more switches than ls(1)". So
> currently I'm against pushing an option for the above upstream, to the
> FreeBSD repository.

> Why not add this to the already non-empty list of custom patches?

Umm ... I thought the idea was to get to the point where the list of
custom patches *is* empty.  Except for carrying our own Makefile of
course.  I'd be sad if we needed a fork just for this.

What I'm testing with right now has just four differences from your repo:

1. This workaround for what I believe you agree is a bug:

-   ps.in_decl = ps.decl_on_line = ps.last_token != type_def;
+   ps.in_decl = ps.decl_on_line = true;

2. The long-lines adjustment I just sent you a patch for.

3. The tab-vs-space difference under discussion here.

4. A temporary hack affecting the indentation of comments on the same line
(forcing them to a multiple of 8 spaces even though tabsize is 4).  I have
every intention of dropping that one later; I just don't want to deal with
comment reindentation at the same time as these other things.

regards, tom lane




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Piotr Stefaniak  writes:
> On 2017-06-16 20:11, Tom Lane wrote:
>> I assume though that Piotr wants an option to preserve that behavior.
>> I'm happy to write up a patch for bsdindent that adds a switch
>> controlling this, but is there any rhyme or reason to the way its
>> switches are named?

> I don't want to preserve the current behavior at all, but I might need
> to add an option for choosing one or the other if users of FreeBSD
> indent protest.

> I don't have a good name for it. The best I can do is -lpl ("-lp long
> lines too").  Can I see the patch?

Here's a patch.  An alternative switch name might be -lpa ("-lp always")
but I'm not set on that.

regards, tom lane

diff -pudr indent-curr/args.c indent-lpl/args.c
--- indent-curr/args.c	2017-06-16 11:06:53.329712682 -0400
+++ indent-lpl/args.c	2017-06-16 17:43:56.473230024 -0400
@@ -125,6 +125,7 @@ struct pro {
 {"i", PRO_INT, 8, 0, _size},
 {"lc", PRO_INT, 0, 0, _comment_max_col},
 {"ldi", PRO_INT, -1, 0, _decl_indent},
+{"lpl", PRO_BOOL, false, ON, _to_parens_always},
 {"lp", PRO_BOOL, true, ON, _to_parens},
 {"l", PRO_INT, 78, 0, _col},
 {"nbacc", PRO_BOOL, false, OFF, _around_conditional_compilation},
@@ -143,6 +144,7 @@ struct pro {
 {"nfc1", PRO_BOOL, true, OFF, _col1_comments},
 {"nfcb", PRO_BOOL, true, OFF, _block_comments},
 {"nip", PRO_BOOL, true, OFF, _parameters},
+{"nlpl", PRO_BOOL, false, OFF, _to_parens_always},
 {"nlp", PRO_BOOL, true, OFF, _to_parens},
 {"npcs", PRO_BOOL, false, OFF, _calls_space},
 {"npro", PRO_SPECIAL, 0, IGN, 0},
diff -pudr indent-curr/indent.1 indent-lpl/indent.1
--- indent-curr/indent.1	2017-06-16 17:18:05.697722416 -0400
+++ indent-lpl/indent.1	2017-06-16 17:26:53.203823690 -0400
@@ -74,6 +74,7 @@
 .Op Fl \ Ns Ar n
 .Op Fl \ Ns Ar n
 .Op Fl \ | Fl nlp
+.Op Fl \ | Fl nlpl
 .Op Fl npro
 .Op Fl P Ns Ar file
 .Op Fl pcs | Fl npcs
@@ -388,6 +389,19 @@ p1\ =\ first_procedure(second_procedure(
 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ third_procedure(p4,
 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p5));
 .Ed
+.It Fl \ , nlpl
+With
+.Fl \ ,
+code surrounded by parentheses in continuation lines is lined up even if it
+would extend past the right margin.
+With
+.Fl \
+(the default), such a line that would extend past the right margin is moved
+left to keep it within the margin, if that does not require placing it to
+the left of the prevailing indentation level.
+These switches have no effect if
+.Fl nlp
+is selected.
 .It Fl npro
 Causes the profile files,
 .Sq Pa ./.indent.pro
diff -pudr indent-curr/indent.c indent-lpl/indent.c
--- indent-curr/indent.c	2017-06-13 11:53:59.474524563 -0400
+++ indent-lpl/indent.c	2017-06-16 17:29:11.924267996 -0400
@@ -160,6 +160,7 @@ main(int argc, char **argv)
 #ifdef undef
 max_col = 78;		/* -l78 */
 lineup_to_parens = 1;	/* -lp */
+lineup_to_parens_always = 0;	/* -nlpl */
 ps.ljust_decl = 0;		/* -ndj */
 ps.com_ind = 33;		/* -c33 */
 star_comment_cont = 1;	/* -sc */
diff -pudr indent-curr/indent_globs.h indent-lpl/indent_globs.h
--- indent-curr/indent_globs.h	2017-06-16 11:06:53.329712682 -0400
+++ indent-lpl/indent_globs.h	2017-06-16 17:30:14.664826384 -0400
@@ -185,6 +185,8 @@ int continuation_indent;/* set t
  * code and continuation lines */
 int lineup_to_parens;	/* if true, continued code within parens will
  * be lined up to the open paren */
+int lineup_to_parens_always;	/* if true, do not attempt to keep
+	 * lined-up code within the margin */
 int Bill_Shannon;	/* true iff a blank should always be inserted
  * after sizeof */
 int blanklines_after_declarations_at_proctop;	/* This is vaguely
diff -pudr indent-curr/io.c indent-lpl/io.c
--- indent-curr/io.c	2017-06-13 11:53:59.475524587 -0400
+++ indent-lpl/io.c	2017-06-16 17:31:11.233230896 -0400
@@ -221,6 +221,8 @@ compute_code_target(void)
 	if (!lineup_to_parens)
 	target_col += continuation_indent
 		* (2 * continuation_indent == ps.ind_size ? 1 : ps.paren_level);
+	else if (lineup_to_parens_always)
+	target_col = paren_target;
 	else {
 	int w;
 	int t = paren_target;



Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Euler Taveira
2017-06-16 11:03 GMT-03:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

>
>
> Some of that information was sprinkled around, but I have now added a
> new section that collects them all in one place.  (section 31.4)


Shouldn't we mention that COPY is supported?


-- 
   Euler Taveira   Timbira -
http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Piotr Stefaniak
On 2017-06-16 20:11, Tom Lane wrote:
> Andres Freund  writes:
>> On 2017-06-16 13:44:30 -0400, Bruce Momjian wrote:
>>> Yes, it is all about <80 column output.  The current pgindent does
>>> everything possible to accomplish that --- the question is whether we
>>> want uglier code to do it.
> 
>> For me personally the misindentation is way uglier than a too long line.

> 
> I assume though that Piotr wants an option to preserve that behavior.
> I'm happy to write up a patch for bsdindent that adds a switch
> controlling this, but is there any rhyme or reason to the way its
> switches are named?

I don't want to preserve the current behavior at all, but I might need
to add an option for choosing one or the other if users of FreeBSD
indent protest.

I don't have a good name for it. The best I can do is -lpl ("-lp long
lines too").  Can I see the patch?



Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 04:44:46PM -0400, Bruce Momjian wrote:
> Yes, that is _exactly_ the right place to look.  Only in PG 10 do we
> restart the new cluster to invalidate hash indexes.  In previous
> releases we didn't do the restart.
> 
> That didn't matter with the old rsync instructions, but now that we have
> removed the start/stop before rsync step, the final WAL status of
> pg_upgrade matters.
> 
> I suggest applying the attached patch

Sorry, I meant to say, I suggest applying the attached patch to all
Postgres versions, of course modified.  While the rsync instructions
only appear in PG 9.5+, the instructions work for any supported version of
Postgres, so we should allow it to continue working, even if the updated
instructions are used.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Piotr Stefaniak
On 2017-06-16 21:56, Tom Lane wrote:
> One other thing I'd like to do while we're changing this stuff is
> to get rid of the need for entab/detab.  Right now, after doing
> all the other work, my copy of pgindent is running the code through
> detab and then entab so as to match the old decisions about how to
> represent whitespace (ie, as spaces or tabs).  This is grotty as
> can be.  I managed to tweak bsdindent so that its output matches
> what entab would do, by dint of the attached patch, which implements
> the rule "use a space instead of a tab if the tab would only move
> one column and we don't need another tab after it".  (I think entab
> is being weird with the second half of that rule, but if I remove it,
> I get circa a thousand lines of invisible whitespace changes; probably
> better not to deal with those.  With no patch at all, just letting
> bsdindent do what it does now, there's circa ten thousand changed lines.)
> 
> Unless Piotr objects, I propose to add another switch to bsdindent
> that selects this behavior, and then we can drop entab, removing
> another impediment to getting pgindent working.

I understand the reasoning, but this is a very specific need and I think
not at all universal for anyone else in the future. One of the bugs
listed in indent's manpage is that it "has more switches than ls(1)". So
currently I'm against pushing an option for the above upstream, to the
FreeBSD repository.

Why not add this to the already non-empty list of custom patches?




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 03:56:47PM -0400, Tom Lane wrote:
> can be.  I managed to tweak bsdindent so that its output matches
> what entab would do, by dint of the attached patch, which implements
> the rule "use a space instead of a tab if the tab would only move
> one column and we don't need another tab after it".  (I think entab
> is being weird with the second half of that rule, but if I remove it,
> I get circa a thousand lines of invisible whitespace changes; probably
> better not to deal with those.  With no patch at all, just letting
> bsdindent do what it does now, there's circa ten thousand changed lines.)

Yeah, entab was designed to do that, via this C comment:

/*
 * Is the next character going to be a tab?  We do tab
 * replacement in the current spot if the next char is
 * going to be a tab and ignore min_spaces.
 */


-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Sergey Burladyan
Bruce Momjian  writes:

> On Fri, Jun 16, 2017 at 04:33:16AM +0300, Sergey Burladyan wrote:
> > Bruce Momjian  writes:
> > > ! 
> > > !  Also, if upgrading standby servers, change wal_level
> > > !  to replica in the postgresql.conf file on
> > > !  the new cluster.
> > >   
> > >  
> > 
> > I am not sure how this help.
> > 
> > wal_level is reset by pg_resetxlog during pg_upgrade, so it does not
> > depend on postgresql.conf. After pg_upgrade wal_level always is
> > 'minimal', that is why you must start and stop new master before rsync:
> > 
> >  output 
> > $ "$bin"/pg_controldata "$ver" | grep wal_level
> > wal_level setting:replica
> > 
> > $ "$bin"/pg_resetwal "$ver"
> > Write-ahead log reset
> > 
> > $ "$bin"/pg_controldata "$ver" | grep wal_level
> > wal_level setting:minimal
> > 
>
> Yes, I see that, but pg_resetxlog is run _before_ the _new_ cluster is
> started for the last time, so in my testing the wal_level at the end of
> pg_upgrade matches the value in postgresql.conf, e.g. "replica".  For
> example:
>
>   Upgrade Complete
>   
>   Optimizer statistics are not transferred by pg_upgrade so,
>   once you start the new server, consider running:
>   ./analyze_new_cluster.sh
>   
>   Running this script will delete the old cluster's data files:
>   ./delete_old_cluster.sh
>
>   $ pg_controldata /u/pg/data/ | grep wal_level
>   wal_level setting:replica
>
> The way pg_upgrade uses rsync, the standby never needs to replay the WAL
> when it starts up because we already copied the changed system tables
> and hard linked the user data files.

Oh, it is my mistake: I had not run the test script completely for current
git master. In git master it works as expected, but not in previous versions.
I used this test script and got this result:
9.2 -> master: wal_level setting:replica
9.2 -> 9.6: wal_level setting:minimal
9.2 -> 9.5: wal_level setting:minimal
9.2 -> 9.4: Current wal_level setting:minimal

I also save strace for pg_upgrade:
=== 9.6 ===
pg_resetxlog", ["/home/sergey/inst/pg9.6/bin/pg_resetxlog", "-l", 
"00010002", "9.6"],
pg_ctl", ["/home/sergey/inst/pg9.6/bin/pg_ctl", "-w", "-l", 
"pg_upgrade_server.log", "-D", "9.6",
pg_ctl", ["/home/sergey/inst/pg9.6/bin/pg_ctl", "-w", "-D", "9.6", "-o", "", 
"-m", "smart", "stop"],
pg_resetxlog", ["/home/sergey/inst/pg9.6/bin/pg_resetxlog", "-o", "16393", 
"9.6"], [/* 68 vars */]) = 0
===

This last exec of pg_resetxlog is for setting the next OID;
it is from src/bin/pg_upgrade/pg_upgrade.c:149

=== master ===
pg_resetwal", ["/home/sergey/inst/pg-master/bin/pg_resetwal", "-l", 
"00010002", "master"],
pg_ctl", ["/home/sergey/inst/pg-master/bin/pg_ctl", "-w", "-l", 
"pg_upgrade_server.log", "-D", "master",
pg_ctl", ["/home/sergey/inst/pg-master/bin/pg_ctl", "-w", "-D", "master", "-o", 
"", "-m", "smart", "stop"],
pg_resetwal", ["/home/sergey/inst/pg-master/bin/pg_resetwal", "-o", "16393", 
"master"], [/* 70 vars */]) = 0
pg_ctl", ["/home/sergey/inst/pg-master/bin/pg_ctl", "-w", "-l", 
"pg_upgrade_server.log", "-D", "master",
pg_ctl", ["/home/sergey/inst/pg-master/bin/pg_ctl", "-w", "-D", "master", "-o", 
"", "-m", "smart", "stop"],
==

From git master, pg_upgrade restarts the new master again after
pg_resetwal -o, as you said.

It is from src/bin/pg_upgrade/check.c:176
void
issue_warnings(void)
{
	/* Create dummy large object permissions for old < PG 9.0? */
	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
	{
		start_postmaster(&new_cluster, true);
		new_9_0_populate_pg_largeobject_metadata(&new_cluster, false);
		stop_postmaster(false);
	}

	/* Reindex hash indexes for old < 10.0 */
	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 906)
	{
		start_postmaster(&new_cluster, true);
		old_9_6_invalidate_hash_indexes(&new_cluster, false);
		stop_postmaster(false);
	}
}

-- 
Sergey Burladyan



sh_wfR3JYaTl.sh
Description: Bourne shell script



Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
One other thing I'd like to do while we're changing this stuff is
to get rid of the need for entab/detab.  Right now, after doing
all the other work, my copy of pgindent is running the code through
detab and then entab so as to match the old decisions about how to
represent whitespace (ie, as spaces or tabs).  This is grotty as
can be.  I managed to tweak bsdindent so that its output matches
what entab would do, by dint of the attached patch, which implements
the rule "use a space instead of a tab if the tab would only move
one column and we don't need another tab after it".  (I think entab
is being weird with the second half of that rule, but if I remove it,
I get circa a thousand lines of invisible whitespace changes; probably
better not to deal with those.  With no patch at all, just letting
bsdindent do what it does now, there's circa ten thousand changed lines.)

Unless Piotr objects, I propose to add another switch to bsdindent
that selects this behavior, and then we can drop entab, removing
another impediment to getting pgindent working.

regards, tom lane
diff -ur /home/postgres/freebsd_indent/indent.c ./indent.c
--- /home/postgres/freebsd_indent/indent.c	2017-06-13 11:53:59.474524563 -0400
+++ ./indent.c	2017-06-16 15:41:53.515416614 -0400
@@ -1275,7 +1275,7 @@
 
 	CHECK_SIZE_CODE(cur_dec_ind / tabsize);
 	while ((tpos = tabsize * (1 + pos / tabsize)) <= cur_dec_ind) {
-	*e_code++ = '\t';
+	*e_code++ = (tpos > pos + 1 || cur_dec_ind >= tpos + tabsize) ? '\t' : ' ';
 	pos = tpos;
 	}
 }
diff -ur /home/postgres/freebsd_indent/io.c ./io.c
--- /home/postgres/freebsd_indent/io.c	2017-06-13 11:53:59.475524587 -0400
+++ ./io.c	2017-06-16 15:42:47.686762023 -0400
@@ -399,7 +399,7 @@
 	int tcur;
 
 	while ((tcur = tabsize * (1 + (curr - 1) / tabsize) + 1) <= target) {
-	putc('\t', output);
+	putc((tcur > curr + 1 || target >= tcur + tabsize) ? '\t' : ' ', output);
 	curr = tcur;
 	}
 }



Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 11:54:06AM -0700, Andres Freund wrote:
> On 2017-06-16 14:42:38 -0400, Bruce Momjian wrote:
> > On Fri, Jun 16, 2017 at 02:23:00PM -0400, Tom Lane wrote:
> > > Well, that's something we need to discuss.  I originally argued for
> > > back-patching the new rules, whatever they are (ie, run the new
> > > pgindent on the back branches whenever we've agreed that the dust
> > > has settled).  But I'm starting to realize that that's likely to
> > > be horrid for anyone who's carrying out-of-tree patches, as I know
> > > a lot of packagers do for instance.  We have to trade off our own
> > > inconvenience in making back-patches against inconvenience to
> > > people who are maintaining private patchsets.
> > 
> > Can't they sync up to just before our pgindent commit and run pgindent
> > on their own code base?
> 
> That doesn't really help that much if you have a series of patches that
> you want to keep independent, e.g. because you might want to submit to
> postgres.  And you'll also get a bunch of annoying to resolve merge
> conflicts, even if they're easier to resolve with that methodology.

I think we have to ask how much we want to make things easier for people
with modified but continually-updated Postgres trees vs. our
community-tree developers.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Andres Freund
On 2017-06-16 14:42:38 -0400, Bruce Momjian wrote:
> On Fri, Jun 16, 2017 at 02:23:00PM -0400, Tom Lane wrote:
> > Well, that's something we need to discuss.  I originally argued for
> > back-patching the new rules, whatever they are (ie, run the new
> > pgindent on the back branches whenever we've agreed that the dust
> > has settled).  But I'm starting to realize that that's likely to
> > be horrid for anyone who's carrying out-of-tree patches, as I know
> > a lot of packagers do for instance.  We have to trade off our own
> > inconvenience in making back-patches against inconvenience to
> > people who are maintaining private patchsets.
> 
> Can't they sync up to just before our pgindent commit and run pgindent
> on their own code base?

That doesn't really help that much if you have a series of patches that
you want to keep independent, e.g. because you might want to submit to
postgres.  And you'll also get a bunch of annoying to resolve merge
conflicts, even if they're easier to resolve with that methodology.

- Andres




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 02:23:00PM -0400, Tom Lane wrote:
> Well, that's something we need to discuss.  I originally argued for
> back-patching the new rules, whatever they are (ie, run the new
> pgindent on the back branches whenever we've agreed that the dust
> has settled).  But I'm starting to realize that that's likely to
> be horrid for anyone who's carrying out-of-tree patches, as I know
> a lot of packagers do for instance.  We have to trade off our own
> inconvenience in making back-patches against inconvenience to
> people who are maintaining private patchsets.

Can't they sync up to just before our pgindent commit and run pgindent
on their own code base?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-06-16 13:34:01 -0400, Tom Lane wrote:
>> I do intend to apply the diffs to HEAD in multiple steps, just to
>> make them more reviewable.  But I think we should probably absorb
>> all the changes we want into v10, not leave some for later cycles.

> Btw, how much are you planning to backpatch these?

Well, that's something we need to discuss.  I originally argued for
back-patching the new rules, whatever they are (ie, run the new
pgindent on the back branches whenever we've agreed that the dust
has settled).  But I'm starting to realize that that's likely to
be horrid for anyone who's carrying out-of-tree patches, as I know
a lot of packagers do for instance.  We have to trade off our own
inconvenience in making back-patches against inconvenience to
people who are maintaining private patchsets.

One idea that occurs to me after a few minutes' thought is to
announce that we will reindent the back branches, but not till
around the time of v10 final release.  Once v10 is out, anybody
who's carrying a private patchset will be needing to think about
rebasing it on top of reindented code anyway, so dealing with that
in the back branches at the same time might be a bit less work.

Or we could leave the back branches alone and anticipate five
years worth of pain in back-patching.  I don't find that very
appetizing personally, but it might be the easiest sell to the
majority of the community, since very few of us do back-patching
work on a regular basis.

regards, tom lane




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-06-16 13:44:30 -0400, Bruce Momjian wrote:
>> Yes, it is all about <80 column output.  The current pgindent does
>> everything possible to accomplish that --- the question is whether we
>> want uglier code to do it.

> For me personally the misindentation is way uglier than a too long line.

I'm coming around to that opinion too.  We have many source lines that
are a bit too long, or a lot too long if someone decided they didn't
want to split an error message across lines.  pgindent "fixes" that
in some places but not others (if it would have to go left of the
prevailing statement indent, it gives up and indents to the paren level
anyway).  On balance it's just weird.  Better to indent normally and
let the programmer decide if she wants to break the lines differently
to keep them from wrapping.

I assume though that Piotr wants an option to preserve that behavior.
I'm happy to write up a patch for bsdindent that adds a switch
controlling this, but is there any rhyme or reason to the way its
switches are named?

regards, tom lane




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Andres Freund
On 2017-06-16 13:34:01 -0400, Tom Lane wrote:
> > I could live with both of these proposed
> > changes, the selection of the changes you posted looks like it could be
> > improved by code changes, but that's obviously a large amount of work.
> 
> In the end, the only thing that fixes this sort of stuff is to be more
> rigid about making the code fit into 80 columns to begin with.  I get
> the impression though that a lot of people work in editor windows that
> are wider than that, so the code looks fine to them when it slops over
> a bit.

That, but maybe also that it's often a slightly too long line vs. a weird
multiline mess.  A good number of things pgindent indents weirdly can be
prevented by just not adding a line break, which isn't a great fix...


> > At this point however I wonder whether just moving to the new tool on
> > its own wouldn't be a big enough change - we could just delay that
> > decision until we've got the rest done at least.
> 
> I'm torn between that approach and "let's just have one big flag day
> and get it over with".

I don't have a strong opinion on this.


> I think having the rules incrementally changing from one release to
> the next will be a huge headache.

Yea, I was more thinking of getting the new indent in, and then making
the followup decisions a few days after.


> I do intend to apply the diffs to HEAD in multiple steps, just to
> make them more reviewable.  But I think we should probably absorb
> all the changes we want into v10, not leave some for later cycles.

Btw, how much are you planning to backpatch these?

Greetings,

Andres Freund




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Andres Freund
On 2017-06-16 13:44:30 -0400, Bruce Momjian wrote:
> On Fri, Jun 16, 2017 at 01:34:01PM -0400, Tom Lane wrote:
> > > I could live with both of these proposed
> > > changes, the selection of the changes you posted looks like it could be
> > > improved by code changes, but that's obviously a large amount of work.
> > 
> > In the end, the only thing that fixes this sort of stuff is to be more
> > rigid about making the code fit into 80 columns to begin with.  I get
> > the impression though that a lot of people work in editor windows that
> > are wider than that, so the code looks fine to them when it slops over
> > a bit.
> 
> Yes, it is all about <80 column output.  The current pgindent does
> everything possible to accomplish that --- the question is whether we
> want uglier code to do it.

For me personally the misindentation is way uglier than a too long line.
I think a number of those long-lines are there because pgindent
sometimes re-indents lines that are continuations of previous ones
pretty far, making it hard to reduce indentation.

- Andres




Re: [HACKERS] pg_waldump command line arguments

2017-06-16 Thread Andres Freund
On 2017-06-15 17:08:23 -0400, Robert Haas wrote:
> pg_waldump --help claims that you run it like this:
> 
> Usage:
>   pg_waldump [OPTION]... [STARTSEG [ENDSEG]]
> 
> And https://www.postgresql.org/docs/10/static/pgwaldump.html agrees.
> Since square brackets indicate optional arguments, this sort of makes
> it sound like running pg_waldump with no arguments ought to work.  But
> it doesn't:

Well, not really; it indicates that positional arguments are allowed,
but not required.  You can get by with just -s / -e, which are sometimes
important, e.g. if you want to look at multiple timelines.
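
For instance, something along these lines works without naming segment
files at all (paths and LSNs are purely illustrative):

pg_waldump -p "$PGDATA/pg_wal" -s 0/15D69D8 -e 0/16B0000
# or summarize a single segment's record mix:
pg_waldump -z "$PGDATA/pg_wal/000000010000000000000001"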


> A slightly broader concern is whether we need to require the start
> position at all.  It seems like one could locate the WAL directory
> using the existing logic, then search for the earliest file.

"earliest file" isn't actually that trivial to determine if there's
timelines etc. But leaving that aside, it'll be frequently so much data
that'll be output, that it'd make the output pretty much useless, no?  I
think if we were to add a bit more magic, it'd make more sense to parse
pg_control and start at the last flushed point nof WAL forward,
especially with -f.


> It might be a little unclear what "earliest" means when multiple
> timelines are present, but I bet we could come up with some behavior
> that would be convenient for most users.  It would be quite handy to
> be able to run this without arguments (or just with -z) and have it
> process all the WAL files that you've got on hand.

With -z I agree, probably best by parsing pg_control and parsing
[checkpoint - 1, minRecoveryPoint) or such.

I'm willing to review some patches here, but I don't plan to personally
work on patches around this...

Greetings,

Andres Freund




Re: [HACKERS] Why forcing Hot_standby_feedback to be enabled when creating a logical decoding slot on standby

2017-06-16 Thread Andres Freund
Hi,

On 2017-06-16 06:31:03 +, sanyam jain wrote:
> Doesn't an XLogRecord carry the full information to be decoded in itself? If so, a
> VACUUM should not be a problem in decoding?

First: Please don't full quote emails.
Secondly: You've not actually explained what you want to do, nor what
your precise question itself is. Nor why Michael or my answer wasn't
sufficient.

Logical decoding looks at the catalogs for metadata - otherwise the WAL
would need to contain a lot more information and thus be more voluminous
- so the records do *not* contain the "full information".
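
To make that concrete, here is a minimal sketch using the in-tree
test_decoding plugin (it assumes wal_level = logical and a free replication
slot; the slot and table names are made up). The decoded output contains
relation and column names, which the output plugin resolves through the
catalogs rather than reading them from the WAL records:

    SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');
    CREATE TABLE demo (id int PRIMARY KEY, val text);
    INSERT INTO demo VALUES (1, 'one');
    SELECT data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);
    -- typically shows something like:
    --   table public.demo: INSERT: id[integer]:1 val[text]:'one'
    SELECT pg_drop_replication_slot('demo_slot');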

Please start at the beginning and explain what you're trying to do
(since you apparently did some hacking to get a slot on a standby) and
what you're trying to achieve, instead of one-sentence questions.

- Andres




Re: [HACKERS] Making server name part of the startup message

2017-06-16 Thread Andres Freund
On 2017-06-15 09:43:13 -0400, Tom Lane wrote:
> Satyanarayana Narlapuram  writes:
> > As a cloud service, Azure Database for PostgreSQL uses a gateway proxy to 
> > route connections to a node hosting the actual server. To do that, the 
> > proxy needs to know the name of the server it tries to locate. As a 
> > work-around we currently overload the username parameter to pass in the 
> > server name using username@servername convention. It is purely a convention 
> > that our customers need to follow and understand. We would like to extend 
> > the PgSQL connection protocol to add an optional parameter for the server 
> > name to help with this scenario.
> 
> We don't actually have any concept of a server name at the moment,
> and it isn't very clear what introducing that concept would buy.
> Please explain.

cluster_name could be what's meant?

- Andres




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 01:34:01PM -0400, Tom Lane wrote:
> > I could live with both of these proposed
> > changes, the selection of the changes you posted looks like it could be
> > improved by code changes, but that's obviously a large amount of work.
> 
> In the end, the only thing that fixes this sort of stuff is to be more
> rigid about making the code fit into 80 columns to begin with.  I get
> the impression though that a lot of people work in editor windows that
> are wider than that, so the code looks fine to them when it slops over
> a bit.

Yes, it is all about <80 column output.  The current pgindent does
everything possible to accomplish that --- the question is whether we
want uglier code to do it.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2017-06-16 Thread Kevin Grittner
On Fri, Jun 16, 2017 at 5:31 AM, Marina Polyakova
 wrote:

> And thank you very much for your explanation of how and why transactions with
> failures should be retried! I'll try to implement all of it.

To be clear, part of "retrying from the beginning" means that if a
result from one statement is used to determine the content of (or
whether to run) a subsequent statement, that first statement must be
run in the new transaction and its results evaluated again to
determine what to use for the later statement.  You can't simply
replay the statements that were run during the first try.  For
examples that help give a feel for why that is, see:

https://wiki.postgresql.org/wiki/SSI
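
As a hedged illustration of that point (the table and values are
hypothetical): when an earlier result drives a later statement, the retry has
to re-execute the whole sequence, because the re-run SELECT may now see
different data:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT balance FROM accounts WHERE id = 1;        -- suppose this returns 100
    -- client-side logic decides what to do based on that result ...
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;  -- may fail with SQLSTATE 40001 (serialization_failure);
             -- if it does, start over from BEGIN and re-run the SELECT,
             -- rather than replaying the UPDATE built from stale results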

--
Kevin Grittner
VMware vCenter Server
https://www.vmware.com/




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Andres Freund  writes:
> I think the current logic is pretty horrible, primarily because it's so
> hard to get to manually.

Yes, I think that's really the big argument against it: no editor on
the face of the planet will indent code that way to start with.

> I could live with both of these proposed
> changes, the selection of the changes you posted looks like it could be
> improved by code changes, but that's obviously a large amount of work.

In the end, the only thing that fixes this sort of stuff is to be more
rigid about making the code fit into 80 columns to begin with.  I get
the impression though that a lot of people work in editor windows that
are wider than that, so the code looks fine to them when it slops over
a bit.

> At this point however I wonder whether just moving to the new tool on
> its own wouldn't be a big enough change - we could just delay that
> decision until we've got the rest done at least.

I'm torn between that approach and "let's just have one big flag day
and get it over with".  I think having the rules incrementally changing
from one release to the next will be a huge headache.

I do intend to apply the diffs to HEAD in multiple steps, just to
make them more reviewable.  But I think we should probably absorb
all the changes we want into v10, not leave some for later cycles.

regards, tom lane




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Sergey Burladyan
Bruce Momjian  writes:

> On Fri, Jun 16, 2017 at 08:10:13PM +0530, Amit Kapila wrote:
> > On Fri, Jun 16, 2017 at 7:03 AM, Sergey Burladyan  
> > wrote:
> > > Bruce Momjian  writes:
> > >
> > >> !  against the old primary and standby clusters.  Verify that the
> > >> !  Latest checkpoint location values match in all clusters.
> > >
> > > For "Log-Shipping only" standby server this cannot be satisfied, because
> > > last WAL from master (with shutdown checkpoint) never archived.
> > >
> > 
> > Yeah, we have ensured that all the transactions before shutdown
> > checkpoint got archived.  It is done in commit
> > 2e6107cb621d003dcab0df53ac8673ea67c4e467.  However, it is neither clear to
> > me nor mentioned in the comments why we have done it that way.
>
> Yes, I am confused why Sergey doesn't see that behavior.

I think this last switched WAL segment, containing the shutdown checkpoint
record, is incomplete, so it is not marked as *.ready in
pg_xlog/archive_status/ and is not archived.

-- 
Sergey Burladyan




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Andres Freund
Hi,

On 2017-06-16 13:10:50 -0400, Tom Lane wrote:
> I experimented with disabling that logic and just always aligning
> to the paren indentation.  That fixes the weird cases with continued
> string literals, but it also makes for a heck of a lot of other changes.
> The full diff is too big to post here, but I've attached a selection
> of diff hunks to give you an idea.  I'm not really sure if I like this
> better than pgindent's traditional behavior --- but it's arguably less
> confusing.
> 
> An intermediate position that we could consider is to disable the back-off
> logic only when the line starts with a string literal.  I haven't actually
> coded this but it looks like it would be easy, if grotty.

I think the current logic is pretty horrible, primarily because it's so
hard to get to manually.  I could live with both of these proposed
changes, the selection of the changes you posted looks like it could be
improved by code changes, but that's obviously a large amount of work.
The heuristic also seems to make sense.

At this point however I wonder whether just moving to the new tool on
its own wouldn't be a big enough change - we could just delay that
decision until we've got the rest done at least.

- Andres




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
There was some discussion upthread about how we'd like pgindent not to
do weird things with string literals that wrap around the end of the
line a little bit.  I looked into that and found that it's actually
a generic behavior for any line that's within parentheses: normally,
such a line will get lined up with the parens, like this:

foobar(baz,
   baz2,
   baz3,
   ...

but if the line would wrap when indented that much, and backing off
lets it not wrap, then it backs off.

I experimented with disabling that logic and just always aligning
to the paren indentation.  That fixes the weird cases with continued
string literals, but it also makes for a heck of a lot of other changes.
The full diff is too big to post here, but I've attached a selection
of diff hunks to give you an idea.  I'm not really sure if I like this
better than pgindent's traditional behavior --- but it's arguably less
confusing.

An intermediate position that we could consider is to disable the back-off
logic only when the line starts with a string literal.  I haven't actually
coded this but it looks like it would be easy, if grotty.

Or we could leave it alone.

Thoughts?

regards, tom lane

diff -ur pgsql/contrib/amcheck/verify_nbtree.c pgsql-dup/contrib/amcheck/verify_nbtree.c
--- pgsql/contrib/amcheck/verify_nbtree.c	2017-06-16 12:31:36.900504080 -0400
+++ pgsql-dup/contrib/amcheck/verify_nbtree.c	2017-06-16 12:35:21.042052427 -0400
@@ -240,8 +240,8 @@
 		ereport(ERROR,
 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
  errmsg("cannot access temporary tables of other sessions"),
-			 errdetail("Index \"%s\" is associated with temporary relation.",
-	   RelationGetRelationName(rel;
+ errdetail("Index \"%s\" is associated with temporary relation.",
+		   RelationGetRelationName(rel;
 
 	if (!IndexIsValid(rel->rd_index))
 		ereport(ERROR,
@@ -411,12 +411,12 @@
 ereport(ERROR,
 		(errcode(ERRCODE_INDEX_CORRUPTED),
 		 errmsg("block %u fell off the end of index \"%s\"",
-			 current, RelationGetRelationName(state->rel;
+current, RelationGetRelationName(state->rel;
 			else
 ereport(DEBUG1,
 		(errcode(ERRCODE_NO_DATA),
 		 errmsg("block %u of index \"%s\" ignored",
-			 current, RelationGetRelationName(state->rel;
+current, RelationGetRelationName(state->rel;
 			goto nextpage;
 		}
 		else if (nextleveldown.leftmost == InvalidBlockNumber)
@@ -433,14 +433,14 @@
 if (!P_LEFTMOST(opaque))
 	ereport(ERROR,
 			(errcode(ERRCODE_INDEX_CORRUPTED),
-		   errmsg("block %u is not leftmost in index \"%s\"",
-			 current, RelationGetRelationName(state->rel;
+			 errmsg("block %u is not leftmost in index \"%s\"",
+	current, RelationGetRelationName(state->rel;
 
 if (level.istruerootlevel && !P_ISROOT(opaque))
 	ereport(ERROR,
 			(errcode(ERRCODE_INDEX_CORRUPTED),
-		  errmsg("block %u is not true root in index \"%s\"",
-			 current, RelationGetRelationName(state->rel;
+			 errmsg("block %u is not true root in index \"%s\"",
+	current, RelationGetRelationName(state->rel;
 			}
 
 			/*
@@ -488,7 +488,7 @@
 	 errmsg("left link/right link pair in index \"%s\" not in agreement",
 			RelationGetRelationName(state->rel)),
 	 errdetail_internal("Block=%u left block=%u left link from block=%u.",
-  current, leftcurrent, opaque->btpo_prev)));
+		current, leftcurrent, opaque->btpo_prev)));
 
 		/* Check level, which must be valid for non-ignorable page */
 		if (level.level != opaque->btpo.level)
@@ -497,7 +497,7 @@
 	 errmsg("leftmost down link for level points to block in index \"%s\" whose level is not one level down",
 			RelationGetRelationName(state->rel)),
 	 errdetail_internal("Block pointed to=%u expected level=%u level in pointed to block=%u.",
- current, level.level, opaque->btpo.level)));
+		current, level.level, opaque->btpo.level)));
 
 		/* Verify invariants for page -- all important checks occur here */
 		bt_target_page_check(state);
@@ -508,8 +508,8 @@
 		if (current == leftcurrent || current == opaque->btpo_prev)
 			ereport(ERROR,
 	(errcode(ERRCODE_INDEX_CORRUPTED),
-			  errmsg("circular link chain found in block %u of index \"%s\"",
-	 current, RelationGetRelationName(state->rel;
+	 errmsg("circular link chain found in block %u of index \"%s\"",
+			current, RelationGetRelationName(state->rel;
 
 		leftcurrent = current;
 		current = opaque->btpo_next;
@@ -665,17 +665,17 @@
 	(errcode(ERRCODE_INDEX_CORRUPTED),
 	 errmsg("item order invariant violated for index \"%s\"",
 			RelationGetRelationName(state->rel)),
-			   errdetail_internal("Lower index tid=%s (points to %s tid=%s) "
-  "higher index tid=%s (points to %s tid=%s) "
-  "page lsn=%X/%X.",
-  itid,
-  P_ISLEAF(topaque) ? "heap" 

[HACKERS] Re: [HACKERS] Re: GiST API Adancement

2017-06-16 Thread Andrew Borodin
2017-06-16 17:06 GMT+03:00 Tom Lane :
> Yuan Dong  writes:
>> From: Andrew Borodin 
>> I think there is one more aspect of development: backward
>> compatibility. It's impossible to update all existing extensions, and this
>> feature is not so major that we can ignore them.
>
> I should still maintain the original API of GiST after the modification.
>
> To put a little context on that: we have always felt that it's okay
> to require extension authors to make small adjustments when going to
> a new major release.

Then maybe it is worth considering a more general API advancement at once?
Here's my wish list:
1. A choose-subtree function as an alternative to penalty
2. Bulk load support
3. A better split framework: right now many extensions pick two seeds and
spread split candidates among them, some with subtle but noisy
mistakes
4. A better way of comparing keys (this thread, actually)
5. Splitting into many portions

Best regards, Andrey Borodin.




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 08:10:13PM +0530, Amit Kapila wrote:
> On Fri, Jun 16, 2017 at 7:03 AM, Sergey Burladyan  wrote:
> > Bruce Momjian  writes:
> >
> >> !  against the old primary and standby clusters.  Verify that the
> >> !  Latest checkpoint location values match in all clusters.
> >
> > For "Log-Shipping only" standby server this cannot be satisfied, because
> > last WAL from master (with shutdown checkpoint) never archived.
> >
> 
> Yeah, we have ensured that all the transactions before shutdown
> checkpoint got archived.  It is done in commit
> 2e6107cb621d003dcab0df53ac8673ea67c4e467.  However, it is neither clear to
> me nor mentioned in the comments why we have done it that way.

Yes, I am confused why Sergey doesn't see that behavior.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 04:33:16AM +0300, Sergey Burladyan wrote:
> Bruce Momjian  writes:
> > ! 
> > !  Also, if upgrading standby servers, change wal_level
> > !  to replica in the postgresql.conf file on
> > !  the new cluster.
> >   
> >  
> 
> I am not sure how this help.
> 
> wal_level is reset by pg_resetxlog during pg_upgrade, so it does not
> depend on postgresql.conf. After pg_upgrade, wal_level is always
> 'minimal', which is why you must start and stop the new master before rsync:
> 
>  output 
> $ "$bin"/pg_controldata "$ver" | grep wal_level
> wal_level setting:replica
> 
> $ "$bin"/pg_resetwal "$ver"
> Write-ahead log reset
> 
> $ "$bin"/pg_controldata "$ver" | grep wal_level
> wal_level setting:minimal
> 

Yes, I see that, but pg_resetxlog is run _before_ the _new_ cluster is
started for the last time, so in my testing the wal_level at the end of
pg_upgrade matches the value in postgresql.conf, e.g. "replica".  For
example:

Upgrade Complete

Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
./delete_old_cluster.sh

$ pg_controldata /u/pg/data/ | grep wal_level
wal_level setting:replica

The way pg_upgrade uses rsync, the standby never needs to replay the WAL
when it starts up because we already copied the changed system tables
and hard linked the user data files.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] ASOF join

2017-06-16 Thread David Fetter
On Fri, Jun 16, 2017 at 11:51:34AM +1200, Thomas Munro wrote:
> On Fri, Jun 16, 2017 at 4:20 AM, Konstantin Knizhnik
>  wrote:
> > I wonder if there has been any discussion of or attempts to add an ASOF join to Postgres
> > (sorry, maybe there is a better term for it; I am referring to the KDB definition:
> > http://code.kx.com/wiki/Reference/aj ).
> 
> Interesting idea.  Also in Pandas:
> 
> http://pandas.pydata.org/pandas-docs/version/0.19.0/generated/pandas.merge_asof.html#pandas.merge_asof
> 
> I suppose you could write a function that pulls tuples out of a bunch
> of cursors and zips them together like this, as a kind of hand-coded
> special merge join "except that we match on nearest key rather than
> equal keys" (as they put it).
> 
> I've written code like this before in a trading context, where we
> called that 'previous tick interpolation', and in a scientific context
> where other kinds of interpolation were called for (so not really
> matching a tuple but synthesising one if no exact match).  If you view
> the former case as a kind of degenerate case of interpolation then it
> doesn't feel like a "join" as we know it, but clearly it is.  I had
> never considered before that such things might belong inside the
> database as a kind of join operator.

If you turn your head sideways, it's very similar to the range merge
join Jeff Davis proposed.  https://commitfest.postgresql.org/14/1106/
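
For what it's worth, this can already be spelled (if not efficiently) with a
lateral subquery; a rough sketch assuming hypothetical tables
trades(ts, sym, price) and quotes(ts, sym, bid), matching each trade with the
latest quote at or before its timestamp:

    SELECT t.ts, t.sym, t.price, q.bid
    FROM trades t
    LEFT JOIN LATERAL (
        SELECT q2.bid
        FROM quotes q2
        WHERE q2.sym = t.sym AND q2.ts <= t.ts
        ORDER BY q2.ts DESC
        LIMIT 1
    ) q ON true;

A dedicated ASOF join operator would presumably produce the same result in a
single ordered pass rather than one probe per outer row.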

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] WIP: Data at rest encryption

2017-06-16 Thread Bruce Momjian
On Fri, Jun 16, 2017 at 11:06:39AM +0300, Konstantin Knizhnik wrote:
> Encryption is much easier to implement than compression, because it is not
> changing page size. So I do not see any "complexity and flexibility
> challenges" here.
> Just for reference I attached to this mail our own encryption patch. I do

I didn't see you using CPU AES instructions, which can improve
performance by 3-10x.  Is there a reason?

> The Postgres buffer manager interface significantly simplifies integration of
> encryption and compression. There is actually a single path through which data
> is fetched from and stored to the disk.
> The most obvious and natural solution is to decompress/decrypt data when it
> is read from the disk into the page pool and compress/encrypt it when it is
> written back. Taking into account that memory is cheap now and many databases
> can completely fit in memory, storing pages in the buffer cache in plain
> (decompressed/decrypted) format allows minimizing the overhead of
> compression/encryption and its influence on performance. For read-only
> queries working with cached data, performance will be exactly the same as
> without encryption/compression.
> Write speed for encrypted pages will certainly be slightly worse, but
> encryption speed is still much higher than disk I/O speed.

Good point.

> I do not think that a pluggable storage API is the right approach to integrating
> compression and especially encryption. It is better to plug encryption in
> between the buffer manager and the storage device,
> allowing it to be used with any storage implementation. Also, it is not clear to
> me whether encryption of WAL can be provided using a pluggable storage API.

Yes, you are completely correct.  I withdraw my suggestion of doing it
as plugin storage.

> The last discussed question is whether it is necessary to encrypt temporary
> data (BufFile). In our solution we encrypt only the main fork of non-system
> relations and do not encrypt temporary relations. This may mean that some
> secret data is stored on disk in non-encrypted format. But
> accessing this data is not trivial. You cannot just copy/steal the disk, open
> the database and do "select * from SecretTable": you have to extract the data
> from the raw files yourself. So it looks like it is better to allow the user to
> choose whether to encrypt temporary data or not.

If we go forward with in-db encryption, I think we are going to have to
have a discussion about what parts of PGDATA need to be encrypted,
i.e., I don't think pg_clog needs encryption.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] WIP: Data at rest encryption

2017-06-16 Thread Bruce Momjian
On Thu, Jun 15, 2017 at 08:08:05PM -0400, Bruce Momjian wrote:
> On Thu, Jun 15, 2017 at 04:56:36PM -0700, Andres Freund wrote:
> > how few concerns about this feature's complexity / maintainability
> > impact have been raised.
> 
> Yeah, I guess we will just have to wait to see it since other people are
> excited about it.  My concern is code complexity and usability
> challenges, vs punting the problem to the operating system, though
> admittedly there are some cases where that is not possible.

I know some OS's can create file systems inside files.  Can you encrypt
such file storage as non-root?  I assume that is just too odd.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Tatsuo Ishii
> The docs state "Publications can choose to limit the changes they produce to
> any combination of INSERT, UPDATE, and DELETE". It is clear that only those
> DML operations are supported.

What about COPY?

> However, we should mention that large objects are not
> supported.

Right.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] Preliminary results for proposed new pgindent implementation

2017-06-16 Thread Tom Lane
Peter Eisentraut  writes:
> On 5/19/17 13:31, Alvaro Herrera wrote:
>> I favor having indent in a separate repository in our Git server, for
>> these reasons

> I am also in favor of that.

>> 0. it's under our control (so we can change rules as we see fit)
>> 1. we can have Piotr as a committer there
>> 2. we can use the same pgindent version for all Pg branches

> 3. We can use pgindent for external code.

Now that we've about reached the point of actually making the change,
we need to come to a resolution on where we're keeping the new indent
code.  I thought that Alvaro's point 1 above (we can give Piotr a
commit bit) was the only really compelling argument for putting it
into a separate repo rather than into our main tree.  In other aspects
that's a loser --- in particular, it would be hard to have different
indent versions for different PG branches, if we chose to run things
that way.  However, I gather from Piotr's recent remarks[1] that
he's not actually excited about doing continuing maintenance on
indent, so that advantage now seems illusory.  In any case we'd
need to keep such a repo pretty well locked down: if it's changing,
and different developers pull from it at different times, then
we're going to have people working with different indent behaviors,
which will make nobody happy.

So I'm back to the position that we ought to stick the indent
code under src/tools/ in our main repo.  Is anyone really
seriously against that?

regards, tom lane

[1] 
https://www.postgresql.org/message-id/VI1PR03MB119959F4B65F000CA7CD9F6BF2CC0%40VI1PR03MB1199.eurprd03.prod.outlook.com




Re: [HACKERS] Broken hint bits (freeze)

2017-06-16 Thread Amit Kapila
On Fri, Jun 16, 2017 at 7:03 AM, Sergey Burladyan  wrote:
> Bruce Momjian  writes:
>
>> !  against the old primary and standby clusters.  Verify that the
>> !  Latest checkpoint location values match in all clusters.
>
> For "Log-Shipping only" standby server this cannot be satisfied, because
> last WAL from master (with shutdown checkpoint) never archived.
>

Yeah, we have ensured that all the transactions before shutdown
checkpoint got archived.  It is done in commit
2e6107cb621d003dcab0df53ac8673ea67c4e467.  However, it is neither clear to
me nor mentioned in the comments why we have done it that way.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Tatsuo Ishii
> On 6/16/17 03:00, Tatsuo Ishii wrote:
>> Maybe I am missing something, but I could not find any description
>> that logical replication does not support large objects and TRUNCATE
>> in the doc.  Do we want to add them to the doc as the restrictions of
>> the logical replication?
> 
> Some of that information was sprinkled around, but I have now added a
> new section that collects them all in one place.  (section 31.4)

Great. Thanks!
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Peter Eisentraut
On 6/16/17 06:30, Amit Kapila wrote:
> How will this compare UTF-8 strings in a UTF-8 encoding?  It seems to me
> that ideally it should use ucol_strcollUTF8 to compare them;
> however, with the patch, it will always use ucol_strcoll, as we never define
> the HAVE_UCOL_STRCOLLUTF8 flag on Windows.

We have a configure check for that, but I don't know how to replicate
that on Windows.

If ucol_strcollUTF8 is not available, we have code to convert to UTF-16.
 This is the same code that is used for non-Windows.

> We have some multi-byte tests
> in the src/test/mb directory; see if we can use those to verify these
> changes.  I admit that I have not tried to execute those on Windows,
> so I have no idea if those even work.

There is a test specifically for ICU, which you can run with

make check EXTRA_TESTS=collate.icu.utf8

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Peter Eisentraut
On 6/15/17 13:48, Ashutosh Sharma wrote:
>> Maybe just
>>
>> diff --git a/src/backend/utils/adt/varlena.c 
>> b/src/backend/utils/adt/varlena.c
>> index a0dd391f09..2506f4eeb8 100644
>> --- a/src/backend/utils/adt/varlena.c
>> +++ b/src/backend/utils/adt/varlena.c
>> @@ -1433,7 +1433,7 @@ varstr_cmp(char *arg1, int len1, char *arg2, int len2, 
>> Oid collid)
>>
>>  #ifdef WIN32
>> /* Win32 does not have UTF-8, so we need to map to UTF-16 */
>> -   if (GetDatabaseEncoding() == PG_UTF8)
>> +   if (GetDatabaseEncoding() == PG_UTF8 && (!mylocale || 
>> mylocale->provider == COLLPROVIDER_LIBC))
>> {
>> int a1len;
>> int a2len;
> 
> Oh, yes, this looks like the simplest and possibly the ideal way to
> fix the issue. Attached is the patch. Thanks for the inputs.

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: GiST API Adancement

2017-06-16 Thread Tom Lane
Yuan Dong  writes:
> From: Andrew Borodin 
>> I think there is one more aspect of development: backward
>> compatibility. It's impossible to update all existing extensions, and this
>> feature is not so major that we can ignore them.

> I should still maintain the original API of GiST after the modification.

To put a little context on that: we have always felt that it's okay
to require extension authors to make small adjustments when going to
a new major release.  For instance, adding a new parameter to a
globally visible function is fine, especially if callers can just
pass NULL or some such to get the old behavior.  So in the context
here, you shouldn't feel compelled to come up with a bizarre API
design just to preserve exact compatibility of old code.  You should
indeed think about reducing the amount of work that extension
authors have to do to update, but that doesn't have to mean "zero".
Also, it's wise to make sure that any places where code changes
have to be made will result in compile errors if the change isn't
made.

regards, tom lane




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Peter Eisentraut
On 6/16/17 03:00, Tatsuo Ishii wrote:
> Maybe I am missing something, but I could not find any description
> that logical replication does not support large objects and TRUNCATE
> in the doc.  Do we want to add them to the doc as the restrictions of
> the logical replication?

Some of that information was sprinkled around, but I have now added a
new section that collects them all in one place.  (section 31.4)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Euler Taveira
2017-06-16 4:00 GMT-03:00 Tatsuo Ishii :

> Maybe I am missing something, but I could not find any description
> that logical replication does not support large objects and TRUNCATE
> in the doc.  Do we want to add them to the doc as the restrictions of
> the logical replication?
>

The docs state "Publications can choose to limit the changes they produce to
any combination of INSERT, UPDATE, and DELETE". It is clear that only those
DML operations are supported. However, we should mention that large objects are not
supported.
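
As a concrete (hedged) illustration of that wording, a publication can be
restricted to a subset of those operations, and anything outside them, such
as TRUNCATE or large-object changes, is simply not published in this release;
the publication name below is made up:

    CREATE PUBLICATION pub_ins_upd FOR ALL TABLES
        WITH (publish = 'insert, update');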


-- 
   Euler Taveira   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consulting, Development, 24x7 Support and Training



Re: [HACKERS] Shortened URLs for commit messages

2017-06-16 Thread Bruce Momjian
On Thu, Jun 15, 2017 at 01:05:19PM -0400, Bruce Momjian wrote:
> On Tue, May 23, 2017 at 11:25:07PM -0400, Bruce Momjian wrote:
> > I have written the following sed script to convert regular Postgres
> > email message URLs to their shorter form for commit messages:
> > 
> >  sed 
> > 's;http\(s\?\)://www\.postgresql\.org/message-id/;http\1://postgr.es/m/;gi'
> > 
> > in case this is helpful to anyone.
> 
> Oh, here's another one.  I use an optional "Discussion:" tag in my
> commit messages. This sed script converts a message-id into a proper
> URL:
> 
>   sed '/http/!s;^\(Discussion: *\)\(.*\)$;\1https://postgr.es/m/\2;'

Oh, here is a fixed version that requires an @ sign, which all message
id's have:

sed '/http/!s;^\(Discussion: *\)\(.*@.*\)$;\1https://postgr.es/m/\2;'

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




[HACKERS] Re: GiST API Adancement

2017-06-16 Thread Yuan Dong
Hi Andrey,


Thank you for your suggestion.



I should still maintain the original API of GiST after the modification.


--

Dong




From: Andrew Borodin 
Sent: June 16, 2017 6:24:03
To: Yuan Dong
Cc: pgsql-hackers@postgresql.org
Subject: Re: GiST API Adancement

Hi, Dong!

2017-06-15 21:19 GMT+05:00 Yuan Dong :
> I'm going to hack on my own. With the help of Andrew Borodin, I want to
> start the project by adding a third state to the collision check. The third
> state is:
>  the subtree is totally within the query. In this case, a GiST scan can scan down
> the subtree without key checks.

That's fine!

> After reading some code, I get this plan to modify the code:
>
> 1. Modify the consistent function of datatypes supported by GiST.
>
> 1.1 Start with cube: g_cube_consistent should return a third state when
> calling g_cube_internal_consistent. Inspired by Andrew Borodin, I have two
> solutions: 1) modify the return value, 2) modify a reference-type parameter.
>
> 2. Modify gistindex_keytest in gistget.c to return multiple states
>
> 2.1 Need to declare a new enum type (or define values) in gist_private.h or
> somewhere
>
> 3. Modify gistScanPage in gistget.c to deal with the extra state
>
> 4. Add a state to mark that the nodes under this GISTSearchItem are all within
> the query
>
> 4.1 Two ways to do this: 1) add a state to GISTSearchItem (preferred) 2)
> somewhere else to record all items of this kind
>
> 4.2 We may use the block number as the key
>
> 4.3 Next time gistScanPage meets this item, just find all the leaves
> directly (needs a new function)
>
> After this, I shall start to benchmark the improvement and edit the code of
> other datatypes.
>
> Hope you hackers can give me some suggestions~

I think there is one more aspect of development: backward
compatibility. It's impossible to update all existing extensions, and this
feature is not so major that we can ignore them.

Though for benchmarking purposes backward compatibility can be omitted.

Best regards, Andrey Borodin


Re: [HACKERS] Bug in ExecModifyTable function and trigger issues for foreign tables

2017-06-16 Thread Etsuro Fujita

On 2017/06/16 19:26, Ashutosh Bapat wrote:

On Fri, Jun 16, 2017 at 3:35 PM, Etsuro Fujita

Ashutosh mentioned his concern about what I proposed above before [2], but
I'm not sure we should address that.  And there have been no opinions from
him (or anyone else) since then.  So, I'd like to leave that for committer
(ie, +1 for Ready for Committer).


That issue has not been addressed. The reason stated was that it would
make the code complicated. But I have not had a chance to look at how
complicated it would be and assess for myself whether that's worth the
trouble.
I have to admit that what I proposed upthread is a quick-and-dirty
kluge.  One thing I thought of to address your concern was to move
rewriteTargetListUD entirely from the rewriter to the planner when doing
inherited UPDATE/DELETE, but I'm not sure that's a good idea, because at
least I think that would need a lot more changes to the rewriter.


Best regards,
Etsuro Fujita





Re: [HACKERS] Bug in ExecModifyTable function and trigger issues for foreign tables

2017-06-16 Thread Etsuro Fujita

On 2017/06/16 19:26, Ashutosh Bapat wrote:

Also, I don't see any discussion about my concern [3] about a parent
with children from multiple foreign servers with different FDWs. So, I am
not sure whether the patch really fixes the problem in its entirety.


The patch would allow child tables to be on different foreign servers
with different FDWs, since it applies rewriteTargetListUD to each child
table when generating a modified query with that child table as the
target in inheritance_planner.  I didn't add any regression tests for that,
though.


Best regards,
Etsuro Fujita





Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2017-06-16 Thread Marina Polyakova

>> P.S. Does this use case (do not retry transaction with serialization or
>> deadlock failure) is most interesting or failed transactions should be
>> retried (and how much times if there seems to be no hope of success...)?
>
> I can't quite parse that sentence, could you restate?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.


As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded.  And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.


Yes, that is what I meant, thank you!


I think it's pretty obvious that transactions that failed with
some serializability problem should be retried.


Thank you for your vote :)

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2017-06-16 Thread Marina Polyakova
P.S. Does this use case (do not retry transaction with serialization 
or
deadlock failure) is most interesting or failed transactions should 
be
retried (and how much times if there seems to be no hope of 
success...)?


I can't quite parse that sentence, could you restate?


The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.  Most people who use serializable transactions (at
least in my experience) run through a framework that does that
automatically, regardless of what client code initiated the
transaction.  These retries are generally hidden from the client
code -- it just looks like the transaction took a bit longer.
Sometimes people will have a limit on the number of retries.  I
never used such a limit and never had a problem, because our
implementation of serializable transactions will not throw a
serialization failure error until one of the transactions involved
in causing it has successfully committed -- meaning that the retry
can only hit this again on a *new* set of transactions.

Essentially, the transaction should only count toward the TPS rate
when it eventually completes without a serialization failure.

Marina, did I understand you correctly?


Álvaro Herrera, in the next message of this thread, understood my text
correctly:



As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded.  And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.


And thank you very much for your explanation of how and why transactions
with failures should be retried! I'll try to implement all of it.


--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] Getting server crash on Windows when using ICU collation

2017-06-16 Thread Amit Kapila
On Thu, Jun 15, 2017 at 11:18 PM, Ashutosh Sharma  wrote:
> Hi,
>
> On Thu, Jun 15, 2017 at 8:36 PM, Peter Eisentraut
>  wrote:
>> On 6/12/17 00:38, Ashutosh Sharma wrote:
>>> PFA patch that fixes the issue described in above thread. As mentioned
>>> in the above thread, the crash is basically happening in varstr_cmp()
>>> function and  it's  only happening on Windows because in varstr_cmp(),
>>> if the collation provider is ICU, we don't even think of calling ICU
>>> functions to compare the string. Infact, we directly attempt to call
>>> the OS function wsccoll*() which is not expected. Thanks.
>>
>> Maybe just
>>
>> diff --git a/src/backend/utils/adt/varlena.c 
>> b/src/backend/utils/adt/varlena.c
>> index a0dd391f09..2506f4eeb8 100644
>> --- a/src/backend/utils/adt/varlena.c
>> +++ b/src/backend/utils/adt/varlena.c
>> @@ -1433,7 +1433,7 @@ varstr_cmp(char *arg1, int len1, char *arg2, int len2, 
>> Oid collid)
>>
>>  #ifdef WIN32
>> /* Win32 does not have UTF-8, so we need to map to UTF-16 */
>> -   if (GetDatabaseEncoding() == PG_UTF8)
>> +   if (GetDatabaseEncoding() == PG_UTF8 && (!mylocale || 
>> mylocale->provider == COLLPROVIDER_LIBC))
>> {
>> int a1len;
>> int a2len;
>
> Oh, yes, this looks like the simplest and possibly the ideal way to
> fix the issue. Attached is the patch. Thanks for the inputs.
>

How will this compare UTF-8 strings in a UTF-8 encoding?  It seems to me
that ideally it should use ucol_strcollUTF8 to compare them;
however, with the patch, it will always use ucol_strcoll, as we never define
the HAVE_UCOL_STRCOLLUTF8 flag on Windows.  We have some multi-byte tests
in the src/test/mb directory; see if we can use those to verify these
changes.  I admit that I have not tried to execute those on Windows,
so I have no idea if those even work.
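
For anyone who wants to poke at this by hand, a minimal (hypothetical) way to
reach the ICU comparison path in a UTF-8 database is something like the
following; the collation name and locale are chosen only for illustration:

    CREATE COLLATION de_icu (provider = icu, locale = 'de-DE');
    SELECT 'straße' < 'strasse' COLLATE de_icu;
    SELECT * FROM (VALUES ('b'), ('a'), ('ä')) AS v(s) ORDER BY s COLLATE de_icu;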


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors

2017-06-16 Thread Marina Polyakova

Hi,


Hello!


I think that's a good idea and sorely needed.


Thanks, I'm very glad to hear it!

- if there were these failures during script execution, this "transaction" is
marked appropriately in the logs;
- numbers of "transactions" with these failures are printed in progress, in
aggregation logs, and at the end with the other results (all and for each
script);
aggregation logs and in the end with other results (all and for each
script);


I guess that'll include a "rolled-back %" or "retried %" somewhere?


Not exactly, see documentation:

+   If transaction has serialization / deadlock failure or them both (the latter
+   is possible if the used script contains several transactions; see
+   for more information), its time will be reported as serialization failure /
+   deadlock failure /
+   serialization and deadlock failures appropriately.

+   Example with serialization, deadlock and both these failures:
+
+1 128 24968 0 1496759158 426984
+0 129 serialization failure 0 1496759158 427023
+3 129 serialization failure 0 1496759158 432662
+2 128 serialization failure 0 1496759158 432765
+0 130 deadlock failure 0 1496759159 460070
+1 129 serialization failure 0 1496759160 485188
+2 129 serialization and deadlock failures 0 1496759160 485339
+4 130 serialization failure 0 1496759160 485465
+

I have understood from proposals in later messages of this thread that the
most interesting case is to retry the failed transaction. Do you think it's
better to write, for example, "rolled-back after % retries (serialization
failure)" or "time (retried % times, serialization and deadlock
failures)"?



Advanced options:
- mostly for testing built-in scripts: you can set the default 
transaction

isolation level by the appropriate benchmarking option (-I);


I'm less convinced of the need for that; you can already set arbitrary
connection options with
PGOPTIONS='-c default_transaction_isolation=serializable' pgbench


Oh, thanks, I forgot about it =[
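
(For completeness, a hedged note: the same default can also be set on the
server side so every new pgbench connection picks it up; the database and
role names below are made up for illustration.)

    ALTER DATABASE pgbench_db SET default_transaction_isolation = 'serializable';
    -- or for a particular role
    ALTER ROLE pgbench_user SET default_transaction_isolation = 'serializable';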

P.S. Does this use case (do not retry transaction with serialization 
or

deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of 
success...)?



I can't quite parse that sentence, could you restate?


Álvaro Herrera, later in this thread, understood my text correctly:


As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded.  And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.


With his explanation, has my text become clearer?

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] Bug in ExecModifyTable function and trigger issues for foreign tables

2017-06-16 Thread Ashutosh Bapat
On Fri, Jun 16, 2017 at 3:35 PM, Etsuro Fujita
 wrote:
> On 2017/06/16 0:05, Ildus Kurbangaliev wrote:
>
>
> I wrote:
>
> One approach I came up with to fix this issue is to rewrite the
> targetList entries of an inherited UPDATE/DELETE properly using
> rewriteTargetListUD, when generating a plan for each child table
> in inheritance_planner.  Attached is a WIP patch for that.  Maybe
> I am missing something, though.


 While updating the patch, I noticed the patch rewrites the UPDATE
 targetList incorrectly in some cases; rewrite_inherited_tlist I
 added to adjust_appendrel_attrs (1) removes all junk items from the
 targetList and (2) adds junk items for the child table using
 rewriteTargetListUD, but it's wrong to drop all junk items in cases
 where there are junk items for some other reasons than
 rewriteTargetListUD.  Consider junk items containing MULTIEXPR
 SubLink.  One way I came up with to fix this is to change (1) to
 only remove junk items with resname; since junk items added by
 rewriteTargetListUD should have resname (note: we would need
 resname to call ExecFindJunkAttributeInTlist at execution time!)
 while other junk items wouldn't have resname (see
 transformUpdateTargetList), we could correctly replace junk items
 added by rewriteTargetListUD for the parent with ones for the
 child, by that change.  I might be missing something, though.
 Comments welcome.
>>>
>>>
>>> I updated the patch that way.  Please find attached an updated
>>> version.
>>>
>>> Other changes:
>>> * Moved the initialization for "tupleid" I added in ExecModifyTable
>>> as discussed before, which I think is essentially the same as
>>> proposed by Ildus in [1], since I think that would be more consistent
>>> with "oldtuple".
>>> * Added regression tests.
>>>
>>> Anyway I'll add this to the next commitfest.
>
>
>> Checked the latest patch. Looks good.
>> Shouldn't this patch be backported to 9.6 and 10beta? The bug
>> affects them too.
>
>
> Thank you for the review!
>
> The bug is in foreign table inheritance, which was supported in 9.5, so I
> think this patch should be backported to 9.5.
>
> Ashutosh mentioned his concern about what I proposed above before [2], but
> I'm not sure we should address that.  And there have been no opinions from
> him (or anyone else) since then.  So, I'd like to leave that for committer
> (ie, +1 for Ready for Committer).

That issue has not been addressed. The reason stated was that it would
make the code complicated. But I have not had a chance to look at how
complicated it would be and assess for myself whether that's worth the
trouble. There was another issue:

Also, I don't see any discussion about my concern [3] about a parent
with children from multiple foreign servers with different FDWs. So, I am
not sure whether the patch really fixes the problem in its entirety.

[3] 
https://www.postgresql.org/message-id/cafjfprfq1pkcjnqfvop_bpjfc7or3596nqvvfbgaidezqb4...@mail.gmail.com

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




[HACKERS] GSoC 2017 : Patch for predicate locking in Gist index

2017-06-16 Thread Shubham Barai
Hi, hackers!

I have created my first patch for predicate locking in the GiST index. It
includes a test for verification of serialization failures and a test to
check for false positives.
I am submitting my patch a little late because there were some issues with
"make check" that I was trying to solve. Now, the patch passes all existing
tests.

Regards,
Shubham






0001-Predicate-Locking-in-Gist-index.patch
Description: Binary data



Re: [HACKERS] Bug in ExecModifyTable function and trigger issues for foreign tables

2017-06-16 Thread Etsuro Fujita

On 2017/06/16 0:05, Ildus Kurbangaliev wrote:

I wrote:

One approach I came up with to fix this issue is to rewrite the
targetList entries of an inherited UPDATE/DELETE properly using
rewriteTargetListUD, when generating a plan for each child table
in inheritance_planner.  Attached is a WIP patch for that.  Maybe
I am missing something, though.


While updating the patch, I noticed the patch rewrites the UPDATE
targetList incorrectly in some cases; rewrite_inherited_tlist I
added to adjust_appendrel_attrs (1) removes all junk items from the
targetList and (2) adds junk items for the child table using
rewriteTargetListUD, but it's wrong to drop all junk items in cases
where there are junk items for some other reasons than
rewriteTargetListUD.  Consider junk items containing MULTIEXPR
SubLink.  One way I came up with to fix this is to change (1) to
only remove junk items with resname; since junk items added by
rewriteTargetListUD should have resname (note: we would need
resname to call ExecFindJunkAttributeInTlist at execution time!)
while other junk items wouldn't have resname (see
transformUpdateTargetList), we could correctly replace junk items
added by rewriteTargetListUD for the parent with ones for the
child, by that change.  I might be missing something, though.
Comments welcome.


I updated the patch that way.  Please find attached an updated
version.

Other changes:
* Moved the initialization for "tupleid" I added in ExecModifyTable
as discussed before, which I think is essentially the same as
proposed by Ildus in [1], since I think that would be more consistent
with "oldtuple".
* Added regression tests.

Anyway I'll add this to the next commitfest.



Checked the latest patch. Looks good.
Shouldn't this patch be backported to 9.6 and 10beta? The bug
affects them too.


Thank you for the review!

The bug is in foreign table inheritance, which was supported in 9.5, so 
I think this patch should be backported to 9.5.


Ashutosh mentioned his concern about what I proposed above before [2], 
but I'm not sure we should address that.  And there have been no 
opinions from him (or anyone else) since then.  So, I'd like to leave 
that for committer (ie, +1 for Ready for Committer).


Attached is a slightly-updated version; I renamed some variables used in 
rewrite_inherited_tlist() to match other existing code in prepunion.c 
and revised some comments a bit.  I didn't make any functional changes, 
so I'll keep this Ready for Committer.


Best regards,
Etsuro Fujita

[2] 
https://www.postgresql.org/message-id/CAFjFpRfTpamoUz6fNyk6gPh%3DecfBJjbUHJNKbDxscpyPJ3FfjQ%40mail.gmail.com
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***
*** 6924,6929  update bar set f2 = f2 + 100 returning *;
--- 6924,7038 
7 | 277
  (6 rows)
  
+ -- Test that UPDATE/DELETE with inherited target works with row-level triggers
+ CREATE TRIGGER trig_row_before
+ BEFORE UPDATE OR DELETE ON bar2
+ FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo');
+ CREATE TRIGGER trig_row_after
+ AFTER UPDATE OR DELETE ON bar2
+ FOR EACH ROW EXECUTE PROCEDURE trigger_data(23,'skidoo');
+ explain (verbose, costs off)
+ update bar set f2 = f2 + 100;
+   QUERY PLAN  

+ 
--
+  Update on public.bar
+Update on public.bar
+Foreign Update on public.bar2
+  Remote SQL: UPDATE public.loct2 SET f2 = $2 WHERE ctid = $1 RETURNING 
f1, f2, f3
+->  Seq Scan on public.bar
+  Output: bar.f1, (bar.f2 + 100), bar.ctid
+->  Foreign Scan on public.bar2
+  Output: bar2.f1, (bar2.f2 + 100), bar2.f3, bar2.ctid, bar2.*
+  Remote SQL: SELECT f1, f2, f3, ctid FROM public.loct2 FOR UPDATE
+ (9 rows)
+ 
+ update bar set f2 = f2 + 100;
+ NOTICE:  trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2
+ NOTICE:  OLD: (3,333,33),NEW: (3,433,33)
+ NOTICE:  trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2
+ NOTICE:  OLD: (4,344,44),NEW: (4,444,44)
+ NOTICE:  trig_row_before(23, skidoo) BEFORE ROW UPDATE ON bar2
+ NOTICE:  OLD: (7,277,77),NEW: (7,377,77)
+ NOTICE:  trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2
+ NOTICE:  OLD: (3,333,33),NEW: (3,433,33)
+ NOTICE:  trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2
+ NOTICE:  OLD: (4,344,44),NEW: (4,444,44)
+ NOTICE:  trig_row_after(23, skidoo) AFTER ROW UPDATE ON bar2
+ NOTICE:  OLD: (7,277,77),NEW: (7,377,77)
+ explain (verbose, costs off)
+ update bar set (f2,f1) = (select f1, f2 from bar2 where f3 = 77) where f1 = 7;
+   QUERY PLAN  
 
+ 
---
+  Update on public.bar
+Update on public.bar
+Foreign Update on public.bar2
+  Remote SQL: UPDATE public.loct2 SET 

Re: [HACKERS] Restrictions of logical replication

2017-06-16 Thread Aleksander Alekseev
Hi Tatsuo,

On Fri, Jun 16, 2017 at 04:00:56PM +0900, Tatsuo Ishii wrote:
> Maybe I am missing something, but I could not find any description
> that logical replication does not support large objects and TRUNCATE
> in the doc.  Do we want to add them to the doc as the restrictions of
> the logical replication?

I knew about TRUNCATE and it most definitely should be documented if
it's not yet. And what about large objects?

-- 
Best regards,
Aleksander Alekseev




Re: [HACKERS] pg_waldump command line arguments

2017-06-16 Thread Ashutosh Bapat
On Fri, Jun 16, 2017 at 2:38 AM, Robert Haas  wrote:
>
> A slightly broader concern is whether we need to require the start
> position at all.  It seems like one could locate the WAL directory
> using the existing logic, then search for the earliest file.  It might
> be a little unclear what "earliest" means when multiple timelines are
> present, but I bet we could come up with some behavior that would be
> convenient for most users.

We already have some default behaviour defined
--
-t timeline
--timeline=timeline

Timeline from which to read log records. The default is to use the
value in startseg, if that is specified; otherwise, the default is 1.
--

So, if startseg is not provided, choose the earliest file on the
default timeline (or on the timeline given by -t, when specified).

> It would be quite handy to be able to run
> this without arguments (or just with -z) and have it process all the
> WAL files that you've got on hand.
>

+1.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-06-16 Thread Masahiko Sawada
On Thu, Jun 15, 2017 at 10:22 AM, Masahiko Sawada  wrote:
> On Thu, Jun 15, 2017 at 7:35 AM, Petr Jelinek
>  wrote:
>> On 13/06/17 21:49, Peter Eisentraut wrote:
>>> On 6/13/17 02:33, Noah Misch wrote:
> Steps to reproduce -
> X cluster -> create 100 tables , publish all tables  (create publication 
> pub
> for all tables);
> Y Cluster -> create 100 tables ,create subscription(create subscription 
> sub
> connection 'user=centos host=localhost' publication pub;
> Y cluster ->drop subscription - drop subscription sub;
>
> check the log file on Y cluster.
>
> Sometimes, I have seen this error at the psql prompt, and the drop
> subscription operation failed on the first attempt.
>
> postgres=# drop subscription sub;
> ERROR:  tuple concurrently updated
> postgres=# drop subscription sub;
> NOTICE:  dropped replication slot "sub" on publisher
> DROP SUBSCRIPTION

 [Action required within three days.  This is a generic notification.]
>>>
>>> It's being worked on.  Let's see by Thursday.
>>>
>>

I've reviewed these patches. 0001 patch conflicts with commit
a571c7f661a7b601aafcb12196d004cdb8b8cb23.

>> Attached fixes it (it was mostly about order of calls). I also split the
>> SetSubscriptionRelState into 2 separate interface while I was changing
>> it, because now that the update_only bool was added it has become quite
>> strange to have single interface for what is basically two separate
>> functions.

+1 from me, too.
A subscription relation state may already have been removed by the time
we try to update it. SetSubscriptionRelState didn't emit an error in
that case, but with this patch we end up with one. Since we shouldn't
ignore such an error in UpdateSubscriptionRelState, I'd say the error
message could let the user know about that possibility.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Adding support for Default partition in partitioning

2017-06-16 Thread Kyotaro HORIGUCHI
Hello, I'd like to review this, but it doesn't apply to master any
more, as Robert said; in particular, the interface of
predicate_implied_by is changed by the suggested commit.

Anyway, I have some comments on this patch with fresh eyes.  I trust
the basic design, so my comments below are from a rather micro
viewpoint.

At Thu, 15 Jun 2017 16:01:53 +0900, Amit Langote 
 wrote in 

> Oops, I meant to send one more comment.
> 
> On 2017/06/15 15:48, Amit Langote wrote:
> > BTW, I noticed the following in 0002
> +  errmsg("there exists a default 
> partition for table \"%s\", cannot
> add a new partition",
> 
> This error message style seems novel to me.  I'm not sure about the best
> message text here, but maybe: "cannot add new partition to table \"%s\"
> with default partition"
> 
> Note that the comment applies to both DefineRelation and
> ATExecAttachPartition.

- Considering how canSkipPartConstraintValidation is called, I
  *think* that RelationProvenValid() would be a better name.  (But
  this may disappear after rebasing.)

- 0002 changes the interface of get_qual_for_list but leaves
  get_qual_for_range alone.  Anyway, get_qual_for_range will have
  to do a similar thing soon.

- In check_new_partition_bound, "overlap" and "with" are completely
  correlated with each other: "with > -1" means "overlap = true", so
  "overlap" is not really needed.  ("with" would better be named
  "overlap_with" or something if we remove "overlap".)

- The error message of check_default_allows_bound is below.

  "updated partition constraint for default partition \"%s\"
   would be violated by some row"

  This looks like an analog of validateCheckConstraint, but as I
  understand it this function is called only when a new partition is
  added, so the message would be hard to relate to that situation.
  How about:

  "the default partition contains rows that should be in
   the new partition: \"%s\""

  or something like that?  (See the SQL sketch after these comments
  for the situation I mean.)

- In check_default_allows_bound, the iteration over partitions is
  quite similar to what validateCheckConstraint does. Can we
  somehow share validateCheckConstraint with this function?

- In the same function, skipping RELKIND_PARTITIONED_TABLE is
  okay, but silently ignoring RELKIND_FOREIGN_TABLE doesn't seem
  good. I think at least some warning should be emitted.

  "Skipping foreign tables in the default partition. It might
   contain rows that should be in the new partition."  (Needs to
   prevent multiple warnings in a single call, maybe.)

- In the same function, the following condition seems somewhat
  strange in comparison to validateCheckConstraint.

> if (partqualstate && ExecCheck(partqualstate, econtext))

  partqualstate won't be NULL as long as partition_constraint is
  valid, and (I believe) an invalid constraint results in an error
  from ExecPrepareExpr.  Therefore the 'if (partqualstate' check is
  useless.

- In gram.y, the nonterminal for the list-spec clause is still
  "ForValues", which seems somewhat strange.  partition_spec or
  something would be better.

- This is not part of this patch, but in ruleutils.c the error for
  an unknown partitioning strategy is emitted as follows.

>   elog(ERROR, "unrecognized partition strategy: %d",
>(int) strategy);

  The cast is added because the strategy is a char; I suppose that
  is because the strategy can be unprintable.  I'd like to see a
  comment if that is the reason.
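
  To illustrate the situation the error message above is about, a rough
  SQL sketch (I am using "CREATE TABLE ... PARTITION OF ... DEFAULT"
  purely for illustration; the exact syntax is whatever the patch
  settles on):

  CREATE TABLE t (a int) PARTITION BY LIST (a);
  CREATE TABLE t_def PARTITION OF t DEFAULT;
  INSERT INTO t VALUES (42);   -- the row lands in the default partition
  CREATE TABLE t_42 PARTITION OF t FOR VALUES IN (42);
  -- must fail: the default partition already holds rows for value 42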


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Data at rest encryption

2017-06-16 Thread Konstantin Knizhnik



On 16.06.2017 03:08, Bruce Momjian wrote:


Yeah, I guess we will just have to wait to see it since other people are
excited about it.  My concern is code complexity and usability
challenges, vs punting the problem to the operating system, though
admittedly there are some cases where that is not possible.



Let me also share my opinion about encryption and compression support at 
the database level.
PostgresPro Enterprise supports both; I gave a presentation about it at 
PGconf 2016 in Tallinn.
I was a little surprised that there were more questions about encryption 
than about compression.
Right now we have several customers who are using compression and none 
who use encryption (simply because they do not need to protect their 
databases). But I am absolutely sure that there are many Postgres users 
whose first need is to protect their data.


Encryption is much easier to implement than compression, because it does 
not change the page size, so I do not see any "complexity and 
flexibility challenges" here.
Just for reference I have attached our own encryption patch to this 
mail. I do not want to propose it as an alternative to Aasma's patch: it 
is less flexible and doesn't support encryption of WAL, only encryption 
of relation data. It also doesn't allow custom encryption libraries: the 
AES implementation is embedded. The encryption cipher is taken from an 
environment variable. At the Tallinn conference I was told about a 
possible security issue with passing the key through an environment 
variable: it is possible to inspect the server's environment variables 
using a plpython/plperl stored procedure.
This is why we unset this environment variable after reading it. I am 
not an expert in security, but I do not know of other issues with this 
solution.


Concerning the question of whether to implement compression/encryption 
at the database level or rely on the OS, my opinion is that there are 
many scenarios where it is not possible or not desirable to use OS-level 
encryption/protection. This first of all includes cloud installations 
and embedded applications.  I do not want to repeat arguments already 
mentioned in this thread.
But the fact is that there are many people who really need 
compression/encryption support and cannot, or do not want to, delegate 
these aspects to the OS. Almost all DBMSes support compression and 
encryption, so the lack of these features in Postgres can definitely not 
be considered an advantage.


The Postgres buffer manager interface significantly simplifies the 
integration of encryption and compression: there is essentially a single 
path through which data is fetched from and stored to disk.
The most obvious and natural solution is to decompress/decrypt data when 
it is read from disk into the page pool and compress/encrypt it when it 
is written back. Taking into account that memory is cheap now and many 
databases fit completely in memory, storing pages in the buffer cache in 
plain (decompressed/decrypted) form minimizes the overhead of 
compression/encryption and its influence on performance. For read-only 
queries working with cached data, performance will be exactly the same 
as without encryption/compression.
Write speed for encrypted pages will certainly be slightly worse, but 
encryption is still much faster than disk I/O.


So I do not think that it is really necessary to support encrypting only 
particular tables while storing "non-secret" data in plain, unencrypted 
form. It would not bring a noticeable performance improvement, but it 
would complicate the implementation and increase the possibility of 
leaking secure data.


I do not think that the pluggable storage API is the right approach for 
integrating compression, and especially encryption. It is better to plug 
encryption in between the buffer manager and the storage device, 
allowing it to be used with any storage implementation. It is also not 
clear to me whether encryption of WAL could be provided through the 
pluggable storage API.


The last question discussed is whether it is necessary to encrypt 
temporary data (BufFile). In our solution we encrypt only the main fork 
of non-system relations and do not encrypt temporary relations. This may 
mean that some secret data is stored on disk in unencrypted form, but 
accessing that data is not trivial: you cannot just copy or steal the 
disk, open the database and do "select * from SecretTable"; you would 
have to extract the data from the raw files yourself. So it looks like 
it is better to let the user choose whether to encrypt temporary data or 
not.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/storage/file/Makefile b/src/backend/storage/file/Makefile
index d2198f2..9492662 100644
--- a/src/backend/storage/file/Makefile
+++ b/src/backend/storage/file/Makefile
@@ -12,6 +12,6 @@ subdir = src/backend/storage/file
 top_builddir = ../../../..
 include $(top_builddir)/src/Makefile.global
 
-OBJS = fd.o buffile.o copydir.o reinit.o
+OBJS 

[HACKERS] RLS policy not getting honored while pg_dump on declarative partition

2017-06-16 Thread Rushabh Lathia
While doing some testing I noticed that an RLS policy is not getting
honored while running pg_dump on a declarative partition.

I can understand that when doing a SELECT on an individual child
table, the policy of the parent is not applied. But is this desirable
behaviour? I think for partitions, any policy on the root table should
be redirected to the child, thoughts?

If the current behaviour is desirable then at least we should document this.

Consider the below test:

\c postgres rushabh

CREATE USER rls_test_user1;

CREATE TABLE tp_sales
(
visibility VARCHAR(30),
sales_region   VARCHAR(30)
) PARTITION BY LIST (sales_region);

create table tp_sales_p_india  partition of tp_sales for values in
('INDIA');
create table tp_sales_p_rest  partition of tp_sales for values in ('REST');

insert into tp_sales values ( 'hidden', 'INDIA');
insert into tp_sales values ( 'visible', 'INDIA');
insert into tp_sales values ( 'hidden', 'REST');
insert into tp_sales values ( 'visible', 'REST');

GRANT SELECT ON tp_sales to rls_test_user1;
GRANT SELECT ON tp_sales_p_india to rls_test_user1;
GRANT SELECT ON tp_sales_p_rest to rls_test_user1;

ALTER TABLE tp_sales ENABLE ROW LEVEL SECURITY;

CREATE POLICY dump_p1 ON tp_sales FOR ALL USING (visibility = 'visible');

\c - rls_test_user1

-- SELECT honors the policy
SELECT * FROM tp_sales;
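
For instance (hypothetical extra step, not part of the attached test):

-- SELECT on the child table directly bypasses the parent's policy
SELECT * FROM tp_sales_p_india;   -- also returns the 'hidden' row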

When we run pg_dump as user rls_test_user1, we can see the hidden
rows in the pg_dump output.

./db/bin/pg_dump -U rls_test_user1 postgres --inserts

Attaching the dump output.


Thanks,
Rushabh Lathia
www.EnterpriseDB.com
--
-- PostgreSQL database dump
--

-- Dumped from database version 10beta1
-- Dumped by pg_dump version 10beta1

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: postgres; Type: COMMENT; Schema: -; Owner: rushabh
--

COMMENT ON DATABASE postgres IS 'default administrative connection database';


--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;


--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';


SET search_path = public, pg_catalog;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: tp_sales; Type: TABLE; Schema: public; Owner: rushabh
--

CREATE TABLE tp_sales (
visibility character varying(30),
sales_region character varying(30)
)
PARTITION BY LIST (sales_region);


ALTER TABLE tp_sales OWNER TO rushabh;

--
-- Name: tp_sales_p_india; Type: TABLE; Schema: public; Owner: rushabh
--

CREATE TABLE tp_sales_p_india PARTITION OF tp_sales
FOR VALUES IN ('INDIA');


ALTER TABLE tp_sales_p_india OWNER TO rushabh;

--
-- Name: tp_sales_p_rest; Type: TABLE; Schema: public; Owner: rushabh
--

CREATE TABLE tp_sales_p_rest PARTITION OF tp_sales
FOR VALUES IN ('REST');


ALTER TABLE tp_sales_p_rest OWNER TO rushabh;

--
-- Data for Name: tp_sales_p_india; Type: TABLE DATA; Schema: public; Owner: rushabh
--

INSERT INTO tp_sales_p_india VALUES ('hidden', 'INDIA');
INSERT INTO tp_sales_p_india VALUES ('visible', 'INDIA');


--
-- Data for Name: tp_sales_p_rest; Type: TABLE DATA; Schema: public; Owner: rushabh
--

INSERT INTO tp_sales_p_rest VALUES ('hidden', 'REST');
INSERT INTO tp_sales_p_rest VALUES ('visible', 'REST');


--
-- Name: tp_sales dump_p1; Type: POLICY; Schema: public; Owner: rushabh
--

CREATE POLICY dump_p1 ON tp_sales USING (((visibility)::text = 'visible'::text));


--
-- Name: tp_sales; Type: ROW SECURITY; Schema: public; Owner: rushabh
--

ALTER TABLE tp_sales ENABLE ROW LEVEL SECURITY;

--
-- Name: tp_sales; Type: ACL; Schema: public; Owner: rushabh
--

GRANT SELECT ON TABLE tp_sales TO rls_test_user1;


--
-- Name: tp_sales_p_india; Type: ACL; Schema: public; Owner: rushabh
--

GRANT SELECT ON TABLE tp_sales_p_india TO rls_test_user1;


--
-- Name: tp_sales_p_rest; Type: ACL; Schema: public; Owner: rushabh
--

GRANT SELECT ON TABLE tp_sales_p_rest TO rls_test_user1;


--
-- PostgreSQL database dump complete
--


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical replication: \dRp+ and "for all tables"

2017-06-16 Thread Masahiko Sawada
On Thu, Jun 15, 2017 at 11:49 PM, Peter Eisentraut
 wrote:
> On 6/10/17 02:02, Jeff Janes wrote:
>> On Fri, Jun 9, 2017 at 10:20 PM, Masahiko Sawada > > wrote:
>>
>> On Sat, Jun 10, 2017 at 7:29 AM, Jeff Janes > > wrote:
>> > If I create a publication "for all tables", \dRp+ doesn't indicate it 
>> is for
>> > all tables, it just gives a list of the tables.
>> >
>> > So it doesn't distinguish between a publication specified to be for all
>> > tables (which will be dynamic regarding future additions), and one 
>> which
>> > just happens to include all the table which currently exist.
>> >
>> > That seems unfortunate.  Should the "for all tables" be included as 
>> another
>> > column in \dRp and \dRp+, or at least as a footnote tag in \dRp+ ?
>> >
>>
>> +1. I was thinking the same. Attached patch adds "All Tables" column
>> to both \dRp and \dRp+.
>>
>>
>> Looks good to me.  Attached with regression test expected output  changes.
>
> I have committed your patch and removed the "Tables" footer for
> all-tables publications, as was discussed later in the thread.

Thank you!
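
For the archives, after this commit \dRp for an all-tables publication
shows something along these lines (layout reproduced from memory, owner
name is just an example, so details may differ slightly):

CREATE PUBLICATION pub_all FOR ALL TABLES;
\dRp
                          List of publications
  Name   | Owner | All tables | Inserts | Updates | Deletes
---------+-------+------------+---------+---------+---------
 pub_all | jeff  | t          | t       | t       | t
(1 row)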

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support to COMMENT ON DATABASE CURRENT_DATABASE

2017-06-16 Thread Surafel Temesgen
On Mon, Jun 5, 2017 at 4:09 AM, Jing Wang  wrote:

> Hi all,
>
> The attached patch is to support the feature "COMMENT ON DATABASE
> CURRENT_DATABASE". The solution is based on the previous discussion in [2] .
>

Your patch doesn't cover security labels on databases, which have a
similar issue. Also, consider dividing the patch into two: one for
adding CURRENT_DATABASE as a database specifier, and the other for
adding database-level information to the pg_dump output in a way that
allows loading a dump into a differently named database.
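
For reference, the statements the two pieces would need to handle are of
this shape (sketch only, assuming CURRENT_DATABASE is accepted wherever
a database name is expected):

COMMENT ON DATABASE CURRENT_DATABASE IS 'some comment';
SECURITY LABEL ON DATABASE CURRENT_DATABASE IS 'some label';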

Regards,

Surafel


[HACKERS] Restrictions of logical replication

2017-06-16 Thread Tatsuo Ishii
Maybe I am missing something, but I could not find any description
that logical replication does not support large objects and TRUNCATE
in the doc.  Do we want to add them to the doc as the restrictions of
the logical replication?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal psql \gdesc

2017-06-16 Thread Fabien COELHO


Hello Pavel,


new update - rebase, changed message


I did yet another rebase of your patch after Tom alphabetically ordered 
backslash commands. Here is the result.
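
As a usage reminder, the intended behaviour is along these lines
(illustrative session; the column headers "Name" and "Type" are the
ones used by the patch):

SELECT 1 AS one, 'foo'::text AS two \gdesc
 Name | Type
------+---------
 one  | integer
 two  | text
(2 rows)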


--
Fabien.

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e6eba21..833460e 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1943,6 +1943,15 @@ Tue Oct 26 21:40:57 CEST 1999
 
   
 
+  
+\gdesc
+
+
+Show the description of the result of the current query buffer without
+actually executing it, by considering it a prepared statement.
+
+
+  
 
   
 \gexec
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index b3263a9..a1c8e1d 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -88,6 +88,7 @@ static backslashResult exec_command_errverbose(PsqlScanState scan_state, bool ac
 static backslashResult exec_command_f(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_g(PsqlScanState scan_state, bool active_branch,
 			   const char *cmd);
+static backslashResult exec_command_gdesc(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_gexec(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_gset(PsqlScanState scan_state, bool active_branch);
 static backslashResult exec_command_help(PsqlScanState scan_state, bool active_branch);
@@ -337,6 +338,8 @@ exec_command(const char *cmd,
 		status = exec_command_f(scan_state, active_branch);
 	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0)
 		status = exec_command_g(scan_state, active_branch, cmd);
+	else if (strcmp(cmd, "gdesc") == 0)
+		status = exec_command_gdesc(scan_state, active_branch);
 	else if (strcmp(cmd, "gexec") == 0)
 		status = exec_command_gexec(scan_state, active_branch);
 	else if (strcmp(cmd, "gset") == 0)
@@ -1328,6 +1331,25 @@ exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd)
 }
 
 /*
+ * \gdesc -- describe query result
+ */
+static backslashResult
+exec_command_gdesc(PsqlScanState scan_state, bool active_branch)
+{
+	backslashResult status = PSQL_CMD_SKIP_LINE;
+
+	if (active_branch)
+	{
+		pset.gdesc_flag = true;
+		status = PSQL_CMD_SEND;
+	}
+	else
+		ignore_slash_filepipe(scan_state);
+
+	return status;
+}
+
+/*
  * \gexec -- send query and execute each field of result
  */
 static backslashResult
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index a2f1259..5baf372 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -1323,7 +1323,88 @@ SendQuery(const char *query)
 		}
 	}
 
-	if (pset.fetch_count <= 0 || pset.gexec_flag ||
+	if (pset.gdesc_flag)
+	{
+		/*
+		 * An unnamed prepared statement is used. It is not possible to
+		 * create an unnamed prepared statement from psql user space, so
+		 * there should not be any conflict. At the moment it is not
+		 * possible to deallocate this prepared statement, so it lives
+		 * until the end of the session or until the next \gdesc call.
+		 */
+		results = PQprepare(pset.db, "", query, 0, NULL);
+		if (PQresultStatus(results) != PGRES_COMMAND_OK)
+		{
+			psql_error("%s", PQerrorMessage(pset.db));
+			ClearOrSaveResult(results);
+			ResetCancelConn();
+			goto sendquery_cleanup;
+		}
+		PQclear(results);
+
+		results = PQdescribePrepared(pset.db, "");
+		OK = ProcessResult(&results);
+		if (OK && results)
+		{
+			if (PQnfields(results) > 0)
+			{
+PQExpBufferData		buf;
+int		i;
+
+initPQExpBuffer(&buf);
+
+printfPQExpBuffer(&buf,
+	  "SELECT name AS \"%s\", pg_catalog.format_type(tp, tpm) AS \"%s\"\n"
+	  "FROM (VALUES",
+	  gettext_noop("Name"),
+	  gettext_noop("Type"));
+
+for(i = 0; i< PQnfields(results); i++)
+{
+	char	*name;
+	char	*escname;
+	size_t		name_length;
+
+	if (i > 0)
+		appendPQExpBufferStr(&buf, ",");
+
+	name = PQfname(results, i);
+	name_length = strlen(name);
+	escname = PQescapeLiteral(pset.db, name, name_length);
+
+	if (escname == NULL)
+	{
+		psql_error("%s", PQerrorMessage(pset.db));
+		PQclear(results);
+		termPQExpBuffer(&buf);
+		goto sendquery_cleanup;
+	}
+
+	appendPQExpBuffer(&buf, "(%s, %d, %d)",
+	  escname, PQftype(results,i), PQfmod(results,i));
+	PQfreemem(escname);
+}
+
+appendPQExpBuffer(&buf, ") s (name, tp, tpm)");
+PQclear(results);
+
+results = PQexec(pset.db, buf.data);
+OK = ProcessResult(&results);
+
+if (OK && results)
+	OK = PrintQueryResults(results);
+
+termPQExpBuffer(&buf);
+			}
+			else
+fprintf(pset.queryFout, _("The result has no columns or the command has no result.\n"));
+		}
+
+		ClearOrSaveResult(results);
+		ResetCancelConn();
+		results = NULL;			/* PQclear(NULL) does nothing */
+	}
+	else if (pset.fetch_count <= 0 || pset.gexec_flag ||
 		pset.crosstab_flag || !is_select_command(query))
 	{
 		/* Default fetch-it-all-and-print mode */
@@ 

Re: [HACKERS] Why forcing Hot_standby_feedback to be enabled when creating a logical decoding slot on standby

2017-06-16 Thread sanyam jain
Doesn't an XLogRecord carry in itself all the information needed to
decode it? If so, why would a VACUUM be a problem for decoding?

Thanks
Sanyam Jain



From: Michael Paquier 
Sent: Monday, June 12, 2017 6:52:06 AM
To: sanyam jain
Cc: Pg Hackers
Subject: Re: [HACKERS] Why forcing Hot_standby_feedback to be enabled when 
creating a logical decoding slot on standby

On Mon, Jun 12, 2017 at 3:16 PM, sanyam jain  wrote:
> I have created a logical decoding slot on a standby but i haven't enabled
> Hot_standby_feedback.What are the test cases where this setup will fail?

hot_standby_feedback needs to be enabled at all times for logical
decoding on a standby, so that rows which are still needed for the
decoding are not removed; a VACUUM passing by with too high an xmin
would otherwise cause inconsistent decoded data.
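
In practice that means doing something like this on the standby before
creating the slot (illustrative sketch only):

ALTER SYSTEM SET hot_standby_feedback = on;
SELECT pg_reload_conf();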
--
Michael


Re: [HACKERS] GiST API Advancement

2017-06-16 Thread Andrew Borodin
Hi, Dong!

2017-06-15 21:19 GMT+05:00 Yuan Dong :
> I'm going to hack on my own. With the help of Andrew Borodin, I want to
> start the project with adding a third state to collision check. The third
> state is that:
>  subtree is totally within the query. In this case, GiST scan can scan down
> subtree without key checks.

That's fine!

> After reading some code, I get this plan to modify the code:
>
> 1. Modify the consistent function of datatypes supported by GiST.
>
> 1.1 Start with cube, g_cube_consistent should return a third state when
> calling g_cube_internal_consistent. Inspired by Andrew Borodin, I have two
> solutions, 1)modify return value, 2) modify a reference type parameter.
>
> 2. Modify the gistindex_keytest in gistget.c to return multiple states
>
> 2.1 Need declare a new enum type(or define values) in gist_private.h or
> somewhere
>
> 3. Modify the gitsScanPage in gistget.c to deal with the extra state
>
> 4. Add a state to mark the nodes under this GISTSearchItem are all with in
> the query
>
> 4.1 Two ways to do this: 1) add a state to GISTSearchItem (prefered) 2)
> Somewhere else to record all this kind of items
>
> 4.2 We may use the block number as the key
>
> 4.3 Next time when the gistScanPage met this item, just find all the leaves
> directly(need a new function)
>
> After this, I shall start to benchmark the improvement and edit the code of
> other datatypes.
>
> Hope you hackers can give me some suggestions~

I think there is one more aspect to the development: backward
compatibility. It's impossible to update all existing extensions, and
this feature is not major enough to justify ignoring them.

Though for benchmarking purposes backward compatibility can be omitted.

Best regards, Andrey Borodin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers