[HACKERS] Assignment of valid collation for SET operations on queries with UNKNOWN types.

2016-11-16 Thread Rahila Syed
Following UNION of two queries with constant literals runs successfully.

CASE 1:
postgres=# SELECT 'abc' UNION SELECT 'bcd' ;
 ?column?
----------
 abc
 bcd
(2 rows)

whereas when these literals are part of a view, the UNION fails.

CASE 2:
postgres=# create view v as select 'abc' a;
2016-11-16 15:28:48 IST WARNING:  column "a" has type "unknown"
2016-11-16 15:28:48 IST DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "a" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
CREATE VIEW

postgres=# create view v1 as select 'bcd' a;
2016-11-16 15:28:56 IST WARNING:  column "a" has type "unknown"
2016-11-16 15:28:56 IST DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "a" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
CREATE VIEW

postgres=# select a from v UNION select a from v1;
2016-11-16 15:25:28 IST ERROR:  could not determine which collation to use
for string comparison
2016-11-16 15:25:28 IST HINT:  Use the COLLATE clause to set the collation
explicitly.
2016-11-16 15:25:28 IST STATEMENT:  select a from v UNION select a from v1;
ERROR:  could not determine which collation to use for string comparison
HINT:  Use the COLLATE clause to set the collation explicitly.

When a UNION of queries with constant literals, as in CASE 1, is allowed,
shouldn't a UNION of queries whose literals come from a view, as in CASE 2,
be allowed as well?

In transformSetOperationTree, while determining the result type of the
merged output columns, if both the left and right column types are UNKNOWN,
the result type is resolved to TEXT.

The difference in behaviour between the two cases arises because the result
collation assigned in CASE 2 is not valid.

When the left and right inputs are literal constants (i.e. UNKNOWN) as in
CASE 1, the collation of the result column is correctly assigned a valid
value. Whereas when the left and right inputs are columns of UNKNOWN type,
as in CASE 2, the result collation is InvalidOid.

So if we ensure assignment of a valid collation when both the left and
right inputs are UNKNOWN, the above can be resolved.
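As a rough illustration of the rule described above, here is a minimal
Python sketch -- not PostgreSQL source; the type names and the collation
constants are stand-ins for the backend's type and collation OIDs -- of how
the merged column's type and collation would be resolved once the fix is in
place:

```python
UNKNOWN = "unknown"
TEXT = "text"
INVALID_COLLATION = 0     # stand-in for InvalidOid
DEFAULT_COLLATION = 100   # stand-in for a valid default collation OID

def resolve_setop_column(left_type, left_coll, right_type, right_coll):
    """Resolve one merged output column's (type, collation) pair."""
    if left_type == UNKNOWN and right_type == UNKNOWN:
        # Both inputs UNKNOWN: the result type becomes TEXT.  The proposed
        # fix is to also assign a valid collation here instead of InvalidOid.
        return TEXT, DEFAULT_COLLATION
    # Otherwise keep the known side's type and collation (simplified).
    if left_type != UNKNOWN:
        return left_type, left_coll
    return right_type, right_coll

# CASE 2 inputs: both view columns are UNKNOWN with no collation.
print(resolve_setop_column(UNKNOWN, INVALID_COLLATION,
                           UNKNOWN, INVALID_COLLATION))
# → ('text', 100)
```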

Attached WIP patch does that. Kindly let me know your opinion.

Thank you,
Rahila Syed


invalid_collation_error.patch
Description: application/download

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Erik Rijkers

On 2016-11-17 02:15, Peter Eisentraut wrote:

On 11/16/16 1:14 PM, Erik Rijkers wrote:

real    5m21.348s  -- for 'make -j 8 html'
versus
real    1m8.502s   -- for 'make -j 8 oldhtml'

Centos 6.6 - I suppose it's getting a bit old; I don't know whether that's
the cause of the discrepancy with others' measurements.


I tested the build on a variety of operating systems, including that
one, with different tool chain versions and I am getting consistent
performance.  So the above is unclear to me at the moment.

For the heck of it, run this

xsltproc --nonet --stringparam pg.version '10devel'  stylesheet.xsl
postgres.xml

to make sure it's not downloading something from the network.


$ time xsltproc --nonet --stringparam pg.version '10devel' stylesheet.xsl postgres.xml

real    5m43.776s

$ ( cd /home/aardvark/pg_stuff/pg_sandbox/pgsql.HEAD/doc/src/sgml; time make oldhtml )

real    1m14.152s

(I did clean out in between)







Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Craig Ringer
On 17 November 2016 at 12:23, Michael Paquier  wrote:
> On Wed, Nov 16, 2016 at 6:54 PM, Craig Ringer  wrote:
>> In all seriousness, though, let's query the buildfarm database for Perl
>> versions. Let it answer.
>
> castoroides, Solaris 10 and perl 5.8.4, no?
> https://www.postgresql.org/message-id/20150415053842.ga2948...@tornado.leadboat.com

Thanks for checking.

Can we just add the README text with Perl 5.8.4 then? Or 5.8.0 or
3.005 or whatever version is determined to be the oldest needed?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Exclude pg_largeobject form pg_dump

2016-11-16 Thread Amul Sul
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   tested, passed
Spec compliant:   not tested
Documentation:tested, passed

Patch v6 looks good to me, passing to committer.

Thanks !

The new status of this patch is: Ready for Committer



Re: [HACKERS] [bug fix] Cascading standby cannot catch up and get stuck emitting the same message repeatedly

2016-11-16 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Amit Kapila
> I think it is the beginning of the segment (aka the first page of the segment), even
> the comment indicates the same.
> 
> /*
> * Whenever switching to a new WAL segment, we read the first page of
> * the file and validate its header, even if that's not where the
> * target record is. ...
> ..
> */
> 
> However, on again looking at the code, it seems that part of code behaves
> similarly for both 9.2 and 9.3.

Yes, the code behaves similarly in 9.2 and later.  FYI, ValidXLogHeader() is 
called at two sites.  The earlier one checks the first page of a segment when 
the real target page is different, and the latter one checks any page including 
the first page of a segment.


> ..Because node3 found a WAL
> !  * record fragment at the end of segment 10, it expects to find the
> !  * remaining fragment at the beginning of WAL segment 11 streamed from
> !  * node2. But there was a fragment of a different WAL record, because
> !  * node2 overwrote a different WAL record at the end of segment 10 across
> !  * to 11.
> 
> How does node3 ensure that the fragment of WAL in segment 11 is different?
> Isn't it possible that when node2 overwrites the last record in WAL segment
> 10, it writes a record of slightly different contents but which is of the
> same size as an original record in WAL segment 10?

That case is detected by checking the CRC value in the XLOG record header.
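To illustrate why the header CRC catches this case, here is a small sketch.
Note that PostgreSQL uses its own CRC-32C implementation over the XLOG
record contents; `zlib.crc32` is only a stand-in here, and the record bytes
are invented for the example:

```python
import zlib

# Even when node2 overwrites the tail of segment 10 with a record of
# exactly the same length, the CRC stored in the new record's header will
# not match the bytes node3 expected, so the mismatch is detected.
old_record = b"COMMIT xid=1234 data=aaaa"
new_record = b"COMMIT xid=5678 data=bbbb"   # same length, different contents
assert len(old_record) == len(new_record)

crc_old = zlib.crc32(old_record)
crc_new = zlib.crc32(new_record)
assert crc_old != crc_new   # the header CRC distinguishes the two records
```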

Regards
Takayuki Tsunakawa




Re: [HACKERS] Push down more full joins in postgres_fdw

2016-11-16 Thread Etsuro Fujita

On 2016/11/16 18:14, Ashutosh Bapat wrote:

(1) You added the
following comments to deparseSelectSql:

+   /*
+    * For a non-base relation, use the input tlist. If a base relation is
+    * being deparsed as a subquery, use the input tlist, if the caller has
+    * passed one. The column aliases of the subquery are crafted based on
+    * the input tlist. If tlist is NIL, there will not be any expression
+    * referring to any of the columns of the base relation being deparsed.
+    * Hence it doesn't matter whether the base relation is being deparsed
+    * as a subquery or not.
+    */

The last sentence seems confusing to me.  My understanding is: if a base
relation has tlist=NIL, then the base relation *must* be deparsed as
ordinary, not as a subquery.  Maybe I'm missing something, though.  (I left
that as-is, but I think we need to reword that to be more clear, at least.)



Hmm, I agree. I think the comment should just say, "Use tlist to
create the SELECT clause if one has been provided. For a base relation
with tlist = NIL, check attrs_needed information.". Does that sound
good?


I don't think that is 100% correct, because (1) tlist can be NIL for a join
relation, as you pointed out upthread, but we need to use
deparseExplicitTargetList, so the first sentence is not completely correct,
and (2) we need to check attrs_needed information not only for a baserel
but also for an otherrel, so the second sentence is not completely correct,
either.  How about this instead?:


/*
 * For a join relation or an upper relation, use deparseExplicitTargetList.
 * Likewise, for a base relation that is being deparsed as a subquery, in
 * which case the caller would have passed tlist that is non-NIL, use that
 * function.  Otherwise, use deparseTargetList.
 */


(3) I don't think we need this in isSubqueryExpr, so I removed it from the
patch:

+   /* Keep compiler happy. */
+   return false;



Doesn't that cause a compiler warning saying "non-void function returning
nothing" or something like that? Actually, the "if
(bms_is_member(node->varno, outerrel->relids))" branch always ends with a
"return". Hence we don't need to encapsulate the code in an "else" block
with else { }; it could be taken outside.


Yeah, I think so too, but I like the "if () { } else { }" coding.  That 
coding can be found in other places in core, eg, 
operator_same_subexprs_lookup.



OK, I changed isSubqueryExpr to is_subquery_expr; I kept referring to "expr"
because I think we would soon extend that function so that it can handle
PHVs, as I said upthread.  For getSubselectAliasInfo, I changed the name to
get_subselect_alias_id, because (1) the word "alias" seems general and (2)
the "for_var" part would make the name a bit long.



is_subquery_expr(Var *node) -- that looks odd. Either it should be
is_subquery_var(Var * ...) OR it should be is_subquery_expr(Expr * ...).
I would prefer the first one, since that's what the current patch is
doing. When we introduce PHVs, we may change it, if required.


OK, I used is_subquery_var().


Done.  I modified the patch as proposed; create the tlist by
build_tlist_to_deparse in foreign_join_ok, if needed, and search the tlist
by tlist_member.  I also added a new member "tlist" to PgFdwRelationInfo to
save the tlist created in foreign_join_ok.



Instead of adding a new member, you might want to reuse grouped_tlist
by renaming it.


Done.


Another idea on the "tlist" member would be to save a tlist created for
EXPLAIN into that member in estimate_path_cost_size, so that we don't need
to generate the tlist again in postgresGetForeignPlan when
use_remote_estimate=true.  But I'd like to leave that for another patch.



I think that will happen automatically, while deparsing, whether for
EXPLAIN or for actual execution.


Really?  Anyway, I'd like to leave that as-is.

Please find attached a new version of the patch.

Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/deparse.c
--- b/contrib/postgres_fdw/deparse.c
***
*** 109,114  typedef struct deparse_expr_cxt
--- 109,116 
  /* Handy macro to add relation name qualification */
  #define ADD_REL_QUALIFIER(buf, varno)	\
  		appendStringInfo((buf), "%s%d.", REL_ALIAS_PREFIX, (varno))
+ #define SS_TAB_ALIAS_PREFIX	"s"
+ #define SS_COL_ALIAS_PREFIX	"c"
  
  /*
   * Functions to determine whether an expression can be evaluated safely on
***
*** 167,172  static void appendConditions(List *exprs, deparse_expr_cxt *context);
--- 169,180 
  static void deparseFromExprForRel(StringInfo buf, PlannerInfo *root,
  	RelOptInfo *joinrel, bool use_alias, List **params_list);
  static void deparseFromExpr(List *quals, deparse_expr_cxt *context);
+ static void deparseRangeTblRef(StringInfo buf, PlannerInfo *root, RelOptInfo *foreignrel,
+    bool make_subquery, List **params_list);
+ static void appendSubselectAlias(StringInfo buf, int tabno, int ncols);
+ static void get_subselect_alias_id

Re: [HACKERS] Remove the comment on the countereffectiveness of large shared_buffers on Windows

2016-11-16 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Amit Kapila
> > I think the reason why increasing shared_buffers didn't give better
> performance for read-only tests than you expect is that the relation files
> are cached in the filesystem cache.  The purpose of this verification is
> to know that the effective upper limit is not 512MB (which is too small
> now), and I think the purpose is achieved.  There may be another threshold,
> say 32GB or 128GB, over which the performance degrades due to PostgreSQL
> implementation, but that's another topic which also applies to other OSes.
> >
> 
> If we don't get any benefit by increasing the shared_buffers on windows,
> then what advantage do you see in recommending higher value?

No, I'm not recommending a higher value, just removing the doubtful
sentences about the 512MB upper limit.  The advantage is that eliminating
this sentence gives users a chance to try the best setting.



> I generally run it for 20 to 30 mins for read-write tests.  Also, to ensure
> consistent data, please consider changing following parameters in
> postgresql.conf checkpoint_timeout = 35 minutes or so, min_wal_size = 5GB
> or so, max_wal_size = 20GB or so and checkpoint_completion_target=0.9.
> 
> Apart from above, ensure to run manual checkpoint (checkpoint command) after
> each test.

Thank you, I'll try the read-write test with these settings on the weekend, 
when my PC is available.  I understood that your intention is to avoid being 
affected by checkpointing and WAL segment creation.

Regards
Takayuki Tsunakawa




Re: [HACKERS] Unlogged tables cleanup

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 7:09 PM, Robert Haas  wrote:
> On Wed, Nov 16, 2016 at 3:54 PM, Michael Paquier
>  wrote:
>> Indeed I missed this comment block. Please let me suggest the following 
>> instead:
>>  /*
>>   * Set up an init fork for an unlogged table so that it can be correctly
>> - * reinitialized on restart.  Since we're going to do an immediate sync, we
>> - * only need to xlog this if archiving or streaming is enabled.  And the
>> - * immediate sync is required, because otherwise there's no guarantee that
>> - * this will hit the disk before the next checkpoint moves the redo pointer.
>> + * reinitialized on restart.  An immediate sync is required even if the
>> + * page has been logged, because the write did not go through
>> + * shared_buffers and therefore a concurrent checkpoint may have moved
>> + * the redo pointer past our xlog record.
>>   */
>
> Hmm.  Well, that deletes the comment that's no longer true, but it
> doesn't replace it with any explanation of why we also need to WAL-log
> it unconditionally, and I think that explanation is not entirely
> trivial?

OK, the original code does not give any special reason either
regarding why doing so is safe for archiving or streaming :)

More seriously, if there could be more details regarding that, I would
think that we could say something like "logging the init fork is
mandatory in any case to ensure its on-disk presence at recovery
replay, even on non-default tablespaces whose base locations are
deleted and re-created from scratch when the WAL record in charge of
creating the tablespace gets replayed". The problem shows up because
of tablespaces being deleted at replay at the end... So perhaps this
makes sense. What do you think?
-- 
Michael




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 8:04 PM, Michael Paquier
 wrote:
> In the current set of patches, the sha2 functions would not get used
> until the main patch for SCRAM gets committed so that's a couple of
> steps and many months ahead.. And --as-needed/--no-as-needed are not
> supported in macos. So I would believe that the best route is just to
> use this patch with the way it does things, and once SCRAM gets in we
> could switch the build into more appropriate linking. At least that's
> far less ugly than having fake objects in the backend code. Of course
> a comment in pgcrypto's Makefile would be appropriate.

Or a comment within an "ifeq ($(PORTNAME), darwin)" block containing the
additional objects, to make clear that this is specific to OS X only.
Other ideas are welcome.
-- 
Michael




Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 6:54 PM, Craig Ringer  wrote:
> In all seriousness, though, let's query the buildfarm database for Perl
> versions. Let it answer.

castoroides, Solaris 10 and perl 5.8.4, no?
https://www.postgresql.org/message-id/20150415053842.ga2948...@tornado.leadboat.com
-- 
Michael




Re: [HACKERS] Re: [sqlsmith] FailedAssertion("!(XLogCtl->Insert.exclusiveBackup)", File: "xlog.c", Line: 10200)

2016-11-16 Thread Michael Paquier
On Mon, Nov 7, 2016 at 6:18 PM, Kyotaro HORIGUCHI
 wrote:
> I will mark this as "Ready for Committer".

I have just noticed that Robert switched this patch to "needs
review" by mistake (I think there was a mix-up with another patch),
so I have switched it back to "Ready for committer". I agree with
Horiguchi-san that this seems appropriate, as the patch got some review
and some love.
-- 
Michael




Re: [HACKERS] Fun fact about autovacuum and orphan temp tables

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 5:24 PM, Haribabu Kommi
 wrote:
> This is a gentle reminder.
>
> you assigned as reviewer to the current patch in the 11-2016 commitfest.
> But you haven't shared your review yet. Can you please try to share your
> views
> about the patch. This will help us in smoother operation of commitfest.

Thanks for the reminder.

> Michael had sent an updated patch based on some discussion.
> Please Ignore if you already shared your review.

Hm. Thinking about that again, having a GUC to control the handling of
orphaned temp tables in autovacuum is overkill (who is going to go into
this level of tuning!?), and we had better do something more aggressive,
as there have been cases of users complaining about dangling temp
tables. I suspect the use case where people would like to have a look
at orphaned temp tables after a backend crash is not that common; at
least one method would be to disable autovacuum after a crash if such
monitoring is necessary. Tom has already stated upthread that the
patch to wildly remove locks is not acceptable, and he's clearly
right.

So the best move would be really to make the removal of orphaned temp
tables more aggressive, and not bother about having a GUC to control
that. The patch sent in
https://www.postgresql.org/message-id/cab7npqsrywaz1i12mpkh06_roo3ifgcgr88_aex1oeg2r4o...@mail.gmail.com
does so, so I am marking the CF entry as ready for committer for this
patch to attract some committer's attention on the matter.
-- 
Michael




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 6:51 PM, Robert Haas  wrote:
> So, it seems that the linker is willing to drop archive members if the
> entire .o file is unused, but not individual symbols.  That explains why
> Michael thinks we need to do something special here, because with his
> 0001 patch, nothing in the new sha2(_openssl).c file would immediately
> be used in the backend.  And indeed I see now that my earlier testing
> was done incorrectly, and pgcrypto does in fact fail to build under my
> proposal.  Oops.

Ah, thanks! I did not notice that before in configure.in:
if test "$PORTNAME" = "darwin"; then
  PGAC_PROG_CC_LDFLAGS_OPT([-Wl,-dead_strip_dylibs], $link_test_func)
elif test "$PORTNAME" = "openbsd"; then
  PGAC_PROG_CC_LDFLAGS_OPT([-Wl,-Bdynamic], $link_test_func)
else
  PGAC_PROG_CC_LDFLAGS_OPT([-Wl,--as-needed], $link_test_func)
fi

In the current set of patches, the sha2 functions would not get used
until the main patch for SCRAM gets committed so that's a couple of
steps and many months ahead.. And --as-needed/--no-as-needed are not
supported in macos. So I would believe that the best route is just to
use this patch with the way it does things, and once SCRAM gets in we
could switch the build into more appropriate linking. At least that's
far less ugly than having fake objects in the backend code. Of course
a comment in pgcrypto's Makefile would be appropriate.
-- 
Michael




Re: [HACKERS] pg_dump versus rules, once again

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 10:14 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Wed, Nov 16, 2016 at 10:00 PM, Tom Lane  wrote:
>>> The changes in pg_backup_archiver.c would have to be back-patched
>>> into all versions supporting --if-exists, so that they don't fail
>>> on dump archives produced by patched versions.
>
>> Even if you patch future minor releases, past minor releases are still
>> going to exist out there in the wild for a long, long time.
>
> Yeah, but it would only matter if you try to use pg_restore --clean 
> --if-exists
> with an archive file that happens to contain a view that has this issue.
> Such cases would previously have failed anyway, because of precisely
> the bug at issue ... and there aren't very many of them, or we'd have
> noticed the problem before.  So I don't feel *too* bad about this,
> I just want to make sure we have a solution available.

Right, OK.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Improve OOM handling in pg_locale.c

2016-11-16 Thread Mithun Cy
On Thu, Oct 13, 2016 at 1:40 PM, Michael Paquier 
wrote:
> I am attaching that to the next CF.

I have tested this patch. Now we error out as OOM instead of crash.

postgres=# SELECT '12.34'::money;
ERROR:  out of memory
LINE 1: SELECT '12.34'::money;


One thing you might need to reconsider is the removal of the memory-leak
comments. There is still a leak if there is an error while encoding in
db_encoding_strdup, unless you want to catch those errors with a
TRY()/CATCH() and then free the memory.

-* localeconv()'s results.  Note that if we were to fail within this
-* sequence before reaching "CurrentLocaleConvAllocated = true", we could
-* leak some memory --- but not much, so it's not worth agonizing over.

Rest all LGTM.
-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] pg_dump versus rules, once again

2016-11-16 Thread Tom Lane
Robert Haas  writes:
> On Wed, Nov 16, 2016 at 10:00 PM, Tom Lane  wrote:
>> The changes in pg_backup_archiver.c would have to be back-patched
>> into all versions supporting --if-exists, so that they don't fail
>> on dump archives produced by patched versions.

> Even if you patch future minor releases, past minor releases are still
> going to exist out there in the wild for a long, long time.

Yeah, but it would only matter if you try to use pg_restore --clean --if-exists
with an archive file that happens to contain a view that has this issue.
Such cases would previously have failed anyway, because of precisely
the bug at issue ... and there aren't very many of them, or we'd have
noticed the problem before.  So I don't feel *too* bad about this,
I just want to make sure we have a solution available.

regards, tom lane




Re: [HACKERS] Unlogged tables cleanup

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 3:54 PM, Michael Paquier
 wrote:
> On Wed, Nov 16, 2016 at 11:45 AM, Robert Haas  wrote:
>> The header comment for heap_create_init_fork() says this:
>>
>> /*
>>  * Set up an init fork for an unlogged table so that it can be correctly
>>  * reinitialized on restart.  Since we're going to do an immediate sync, we
>>  * only need to xlog this if archiving or streaming is enabled.  And the
>>  * immediate sync is required, because otherwise there's no guarantee that
>>  * this will hit the disk before the next checkpoint moves the redo pointer.
>>  */
>>
>> Your patch causes the code not to match the comment any more.  And the
>> comment explains why at the time I wrote this code I thought it was OK
>> to have the XLogIsNeeded() test in there, so it clearly needs
>> updating.
>
> Indeed I missed this comment block. Please let me suggest the following 
> instead:
>  /*
>   * Set up an init fork for an unlogged table so that it can be correctly
> - * reinitialized on restart.  Since we're going to do an immediate sync, we
> - * only need to xlog this if archiving or streaming is enabled.  And the
> - * immediate sync is required, because otherwise there's no guarantee that
> - * this will hit the disk before the next checkpoint moves the redo pointer.
> + * reinitialized on restart.  An immediate sync is required even if the
> + * page has been logged, because the write did not go through
> + * shared_buffers and therefore a concurrent checkpoint may have moved
> + * the redo pointer past our xlog record.
>   */

Hmm.  Well, that deletes the comment that's no longer true, but it
doesn't replace it with any explanation of why we also need to WAL-log
it unconditionally, and I think that explanation is not entirely
trivial?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_dump versus rules, once again

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 10:00 PM, Tom Lane  wrote:
> The changes in pg_backup_archiver.c would have to be back-patched
> into all versions supporting --if-exists, so that they don't fail
> on dump archives produced by patched versions.

Even if you patch future minor releases, past minor releases are still
going to exist out there in the wild for a long, long time.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_dump versus rules, once again

2016-11-16 Thread Tom Lane
I wrote:
> We've talked before about replacing this _RETURN-rule business with
> CREATE OR REPLACE VIEW, ie the idea would be for pg_dump to first emit
> a dummy view with the right column names/types, say
> CREATE VIEW vv AS SELECT null::int AS f1, null::text AS f2;
> and then later when it's safe, emit CREATE OR REPLACE VIEW with the view's
> real query.

Here's a proposed patch for that.  It turns out there are some other
kluges that can be gotten rid of while we're at it: no longer need the
idea of reloptions being attached to a rule, for instance.

The changes in pg_backup_archiver.c would have to be back-patched
into all versions supporting --if-exists, so that they don't fail
on dump archives produced by patched versions.  We could possibly
put the rest only into HEAD; I remain of two minds about that.

regards, tom lane
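The archiver-side transformation touched by the patch below -- locate the
"DROP <type>" marker in the recorded drop statement, splice in IF EXISTS,
and fall back to emitting the statement unchanged when the marker is absent
(e.g. when the "drop" is really a CREATE OR REPLACE VIEW command) -- can be
sketched roughly as follows. This is an illustrative stand-in, not pg_dump
source; `add_if_exists` is a hypothetical helper:

```python
def add_if_exists(drop_stmt, desc):
    """Splice ' IF EXISTS' after the 'DROP <desc>' marker, if present."""
    marker = "DROP " + desc
    pos = drop_stmt.find(marker)
    if pos < 0:
        # Marker not found: warn and emit the unmodified command,
        # as the patched archiver does.
        return drop_stmt
    end = pos + len(marker)
    return drop_stmt[:end] + " IF EXISTS" + drop_stmt[end:]

print(add_if_exists("DROP TABLE public.t;", "TABLE"))
# → DROP TABLE IF EXISTS public.t;
```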

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index b938d79..b89bd99 100644
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
*** RestoreArchive(Archive *AHX)
*** 521,527 
  		 * knows how to do it, without depending on
  		 * te->dropStmt; use that.  For other objects we need
  		 * to parse the command.
- 		 *
  		 */
  		if (strncmp(te->desc, "BLOB", 4) == 0)
  		{
--- 521,526 
*** RestoreArchive(Archive *AHX)
*** 529,538 
  		}
  		else
  		{
- 			char		buffer[40];
- 			char	   *mark;
  			char	   *dropStmt = pg_strdup(te->dropStmt);
! 			char	   *dropStmtPtr = dropStmt;
  			PQExpBuffer ftStmt = createPQExpBuffer();
  
  			/*
--- 528,535 
  		}
  		else
  		{
  			char	   *dropStmt = pg_strdup(te->dropStmt);
! 			char	   *dropStmtOrig = dropStmt;
  			PQExpBuffer ftStmt = createPQExpBuffer();
  
  			/*
*** RestoreArchive(Archive *AHX)
*** 549,566 
  			/*
  			 * ALTER TABLE..ALTER COLUMN..DROP DEFAULT does
  			 * not support the IF EXISTS clause, and therefore
! 			 * we simply emit the original command for such
! 			 * objects. For other objects, we need to extract
! 			 * the first part of the DROP which includes the
! 			 * object type. Most of the time this matches
  			 * te->desc, so search for that; however for the
  			 * different kinds of CONSTRAINTs, we know to
  			 * search for hardcoded "DROP CONSTRAINT" instead.
  			 */
! 			if (strcmp(te->desc, "DEFAULT") == 0)
  appendPQExpBufferStr(ftStmt, dropStmt);
  			else
  			{
  if (strcmp(te->desc, "CONSTRAINT") == 0 ||
   strcmp(te->desc, "CHECK CONSTRAINT") == 0 ||
  	strcmp(te->desc, "FK CONSTRAINT") == 0)
--- 546,573 
  			/*
  			 * ALTER TABLE..ALTER COLUMN..DROP DEFAULT does
  			 * not support the IF EXISTS clause, and therefore
! 			 * we simply emit the original command for DEFAULT
! 			 * objects (modulo the adjustment made above).
! 			 *
! 			 * If we used CREATE OR REPLACE VIEW as a means of
! 			 * quasi-dropping an ON SELECT rule, that should
! 			 * be emitted unchanged as well.
! 			 *
! 			 * For other object types, we need to extract the
! 			 * first part of the DROP which includes the
! 			 * object type.  Most of the time this matches
  			 * te->desc, so search for that; however for the
  			 * different kinds of CONSTRAINTs, we know to
  			 * search for hardcoded "DROP CONSTRAINT" instead.
  			 */
! 			if (strcmp(te->desc, "DEFAULT") == 0 ||
! strncmp(dropStmt, "CREATE OR REPLACE VIEW", 22) == 0)
  appendPQExpBufferStr(ftStmt, dropStmt);
  			else
  			{
+ char		buffer[40];
+ char	   *mark;
+ 
  if (strcmp(te->desc, "CONSTRAINT") == 0 ||
   strcmp(te->desc, "CHECK CONSTRAINT") == 0 ||
  	strcmp(te->desc, "FK CONSTRAINT") == 0)
*** RestoreArchive(Archive *AHX)
*** 570,588 
  			 te->desc);
  
  mark = strstr(dropStmt, buffer);
- Assert(mark != NULL);
  
! *mark = '\0';
! appendPQExpBuffer(ftStmt, "%s%s IF EXISTS%s",
!   dropStmt, buffer,
!   mark + strlen(buffer));
  			}
  
  			ahprintf(AH, "%s", ftStmt->data);
  
  			destroyPQExpBuffer(ftStmt);
! 
! 			pg_free(dropStmtPtr);
  		}
  	}
  }
--- 577,604 
  			 te->desc);
  
  mark = strstr(dropStmt, buffer);
  
! if (mark)
! {
! 	*mark = '\0';
! 	appendPQExpBuffer(ftStmt, "%s%s IF EXISTS%s",
! 	  dropStmt, buffer,
! 	  mark + strlen(buffer));
! }
! else
! {
! 	/* complain and emit unmodified command */
! 	write_msg(modulename,
! 			  "WARNING: could not find where to insert IF EXISTS in statement \"%s\"\n",
! 			  dropStmtOrig);
! 

Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Kyotaro HORIGUCHI
At Thu, 17 Nov 2016 10:42:54 +0800, Craig Ringer  wrote 
in 
> On 17 November 2016 at 10:31, Kyotaro HORIGUCHI
>  wrote:
> 
> >
> > I vote +1 to upgrading perl, but I'm not sure if all supporting
> > platforms other than linux support the version of perl.
> >
> > Anyway ./configure is saying as the following.
> >
> > | *** The installed version of Perl, $PERL, is too old to use with 
> > PostgreSQL.
> > | *** Perl version 5.8 or later is required, but this is 
> > $pgac_perl_version." >&5
> >
> > If TAP test requires 5.8.8, the whole build sequence should
> > follow that. Or at least ./configure --enable-tap-tests should
> > check that.
> 
> I wrote 5.8.8 because that's what we've always discussed before and I
> could honestly not imagine it mattering whether we require 10-year or
> 15-year old Perl. Especially for the TAP tests, which are new, and
> optional.
> 
> I really don't think it matters if the TAP tests use a slightly newer
> Perl. They're optional and not enabled by default. Can we just

If so, why explicitly require 5.8.8? I think it is because the
'slight' difference actually prevents the tests from running.

> document this please? I didn't think a four-line docs patch to an
> optional, non-default, new test suite would require this kind of
> discussion.

My only concern is the fact that 'make check-world' runs the TAP
tests when --enable-tap-tests is given (according to
release-9.4.sgml; I haven't tried it myself). I would completely
agree with you if they didn't run as part of the top-level 'make'.
 
> But sure, if it's easier, we can have 5.8.0 in the README. What's five
> extra years matter anyway? Hey, while we're at it, lets change Pg to
> build on Borland C and require K&R style!

That seems an extreme comparison. And I *believe* it is written
somewhere that Pg requires some version of the C standard.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 9:00 PM, Tsunakawa, Takayuki
 wrote:
> Do we really want to enable libpq failover against pre-V10 servers?  I don't 
> think so, as libpq is a part of PostgreSQL and libpq failover is a new 
> feature in PostgreSQL 10.  At least, as one user, I don't want PostgreSQL to 
> sacrifice another round trip to establish a connection.  As a developer, I 
> don't want libpq code more complex than necessary (the proposed patch adds a 
> new state to the connection state machine.)  And I think it's natural for the 
> server to return the server attribute (primary/standby, writable, etc.) as a 
> response to the Startup message like server_version, 
> standard_conforming_strings and server_encoding.

Well, generally speaking, a new feature that works against older
servers is better than one that doesn't.  Of course, if that entails
making other compromises then you have to decide whether it's worth
it, but SELECT pg_is_in_recovery() and SHOW transaction_read_only
exist in older versions so if we pick either of those methods then it
will just work.  If we decide to invent some completely new method of
distinguishing masters from standbys, then it might not, but that
would be a drawback of such a choice, not a benefit.
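The client-side decision logic being discussed is simple enough to sketch. Below is a minimal, hypothetical host-selection loop; `probe` stands in for opening a connection and running SHOW transaction_read_only (or SELECT pg_is_in_recovery()), and is not libpq's actual API:

```python
def pick_host(hosts, probe, want_writable=True):
    """Return the first acceptable host from `hosts`, or None.

    `probe(host)` models connecting and asking the server whether it is
    read-only; it returns True for a read-only node and raises
    ConnectionError when the host is unreachable.
    """
    for host in hosts:
        try:
            read_only = probe(host)
        except ConnectionError:
            continue  # host is down: move on to the next candidate
        if read_only and want_writable:
            continue  # standby (or forced read-only): keep looking
        return host
    return None
```

One consequence of keying on transaction_read_only: forcing default_transaction_read_only = on makes an otherwise writable node report read-only, which gives administrators a way to steer traffic off a node.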

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
> Hmm, let's go back to the JDBC method, then.  "show transaction_read_only"
> will return true on a standby, but presumably also on any other non-writable
> node.  You could even force it to be true artificially if you wanted to
> force traffic off of a node, using ALTER {SYSTEM|USER ...|DATABASE ..} SET
> default_transaction_read_only = on
> 
> I think that would address Alvaro's concern, and it's nicer anyway if libpq
> and JDBC are doing the same thing.

If you prefer consistency between libpq and JDBC, then we could correct JDBC.  
People here should know the server state well, and be able to figure out a good 
specification.

Regards
Takayuki Tsunakawa





Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Craig Ringer
On 17 November 2016 at 10:42, Craig Ringer  wrote:

> But sure, if it's easier, we can have 5.8.0 in the README. What's five
> extra years matter anyway? Hey, while we're at it, lets change Pg to
> build on Borland C and require K&R style!

Sorry. That was unnecessary. I should've had the sense to save that as
a draft and come back later.

In all seriousness, though, lets query the buildfarm database for Perl
versions. Let it answer.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 7:36 PM, Andres Freund  wrote:
> With -Wl,--as-needed the linker will dismiss unused symbols found in a
> static library. Maybe that's the difference?

The man page says that --as-needed modifies the behavior
for dynamic libraries, not static ones.  If there is any such effect
on static libraries, it is undocumented.  Here is the text:

LD> This option affects ELF DT_NEEDED tags for dynamic libraries mentioned
LD> on the command line after the --as-needed option. Normally the linker will
LD> add a DT_NEEDED tag for each dynamic library mentioned on the
LD> command line, regardless of whether the library is actually needed or not.
LD> --as-needed causes a DT_NEEDED tag to only be emitted for a library
LD> that at that point in the link satisfies a non-weak undefined symbol
LD> reference from a regular object file or, if the library is not found in the
LD> DT_NEEDED lists of other needed libraries, a non-weak undefined symbol
LD> reference from another needed dynamic library. Object files or libraries
LD> appearing on the command line after the library in question do not affect
LD> whether the library is seen as needed. This is similar to the rules for
LD> extraction of object files from archives. --no-as-needed restores the
LD> default behaviour.

Some experimentation on my Mac reveals that my previous statement
about how this works was incorrect.  See attached patch for what I
tried.  What I find is:

1. If I create an additional source file in src/common containing a
completely unused symbol (wunk) it appears in the nm output for
libpgcommon_srv.a but not in the nm output for the postgres binary.

2. If I add an additional function to an existing source file in
src/common containing a completely unused symbol (quux) it appears in
the nm output for both libpgcommon_srv.a and also in the nm output for
the postgres binary.

3. If I create an additional source file in src/backend containing a
completely unused symbol (blarfle) it appears in the nm output for the
postgres binary.

So, it seems that the linker is willing to drop archive members if the
entire .o file is unused, but not individual symbols.  That explains why
Michael thinks we need to do something special here, because with his
0001 patch, nothing in the new sha2(_openssl).c file would immediately
be used in the backend.  And indeed I see now that my earlier testing
was done incorrectly, and pgcrypto does in fact fail to build under my
proposal.  Oops.

But I think that's a temporary thing.  As soon as the backend is using
the sha2 routines for anything (which is the point, right?) the build
changes become unnecessary.  For example, if I apply this patch:

--- a/src/backend/lib/binaryheap.c
+++ b/src/backend/lib/binaryheap.c
@@ -305,3 +305,7 @@ sift_down(binaryheap *heap, int node_off)
node_off = swap_off;
}
 }
+
+#include "common/sha2.h"
+extern void ugh(void);
+void ugh(void) { pg_sha224_init(NULL); }

...then the backend ends up sucking in everything in sha2.c and the
pgcrypto build works again.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/common/Makefile b/src/common/Makefile
index 03dfaa1..f84264a 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -46,7 +46,7 @@ OBJS_COMMON = config_info.o controldata_utils.o exec.o ip.o keywords.o \
 
 OBJS_FRONTEND = $(OBJS_COMMON) fe_memutils.o file_utils.o restricted_token.o
 
-OBJS_SRV = $(OBJS_COMMON:%.o=%_srv.o)
+OBJS_SRV = $(OBJS_COMMON:%.o=%_srv.o) wunk.o
 
 all: libpgcommon.a libpgcommon_srv.a
 
diff --git a/src/common/ip.c b/src/common/ip.c
index 797d910..d517802 100644
--- a/src/common/ip.c
+++ b/src/common/ip.c
@@ -258,3 +258,11 @@ getnameinfo_unix(const struct sockaddr_un * sa, int salen,
 	return 0;
 }
 #endif   /* HAVE_UNIX_SOCKETS */
+
+extern void quux(void);
+
+void
+quux(void)
+{
+	/* quux */
+}
diff --git a/src/common/wunk.c b/src/common/wunk.c
new file mode 100644
index 000..2db667c
--- /dev/null
+++ b/src/common/wunk.c
@@ -0,0 +1,7 @@
+extern void wunk(void);
+
+void
+wunk(void)
+{
+	/* wunk */
+}



Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Craig Ringer
On 17 November 2016 at 10:31, Kyotaro HORIGUCHI
 wrote:

>
> I vote +1 to upgrading perl, but I'm not sure if all supporting
> platforms other than linux support the version of perl.
>
> Anyway ./configure is saying as the following.
>
> | *** The installed version of Perl, $PERL, is too old to use with PostgreSQL.
> | *** Perl version 5.8 or later is required, but this is $pgac_perl_version." 
> >&5
>
> If TAP test requires 5.8.8, the whole build sequence should
> follow that. Or at least ./configure --enable-tap-tests should
> check that.

I wrote 5.8.8 because that's what we've always discussed before and I
could honestly not imagine it mattering whether we require 10-year or
15-year old Perl. Especially for the TAP tests, which are new, and
optional.

I really don't think it matters if the TAP tests use a slightly newer
Perl. They're optional and not enabled by default. Can we just
document this please? I didn't think a four-line docs patch to an
optional, non-default, new test suite would require this kind of
discussion.

But sure, if it's easier, we can have 5.8.0 in the README. What's five
extra years matter anyway? Hey, while we're at it, lets change Pg to
build on Borland C and require K&R style!

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Kyotaro HORIGUCHI
Hello,

At Thu, 17 Nov 2016 10:00:53 +0800, Craig Ringer  wrote 
in 
> On 17 November 2016 at 00:01, Michael Paquier  
> wrote:
> > On Tue, Nov 15, 2016 at 11:32 PM, Noah Misch  wrote:
> >> On Wed, Nov 16, 2016 at 12:48:03PM +0800, Craig Ringer wrote:
> >>> --- a/src/test/perl/README
> >>> +++ b/src/test/perl/README
> >>> @@ -64,3 +64,20 @@ For available PostgreSQL-specific test methods and 
> >>> some example tests read the
> >>>  perldoc for the test modules, e.g.:
> >>>
> >>>  perldoc src/test/perl/PostgresNode.pm
> >>> +
> >>> +Required Perl
> >>> +-
> >>> +
> >>> +Tests must run on perl 5.8.8 and newer. perlbrew is a good way to obtain
> >>
> >> Tests must run on Perl 5.8.0 and newer.
> 
> Why? We've always discussed 5.8.8 before. That's already a full 10 years old.
> 
> 5.8.0 is from *2002*. Are you running any 15-year-old systems you
> can't update to a *patch release* of the same perl major?
> 
> 
> gendef.pl says it needs 5.8.0 or newer with "use 5.8.0" but that's the
> only relevant thing I can find, and it's not relevant to the TAP tests
> anyway.
> 
> BTW, here's my older approach, with a dockerfile, before I became
> aware of perlbrew:
> 
> https://www.postgresql.org/message-id/camsr+ygr6pu-guyp-ft98xwxasc9n6j-awzaqxvw_+p3rtc...@mail.gmail.com
> 
> 
> 5.8.8 is in CentOS 5.
> 
> Debian Lenny (6) has 5.14.2 Wheezy (5) has 5.10. Etch (4) has 5.8.8.
> Etch came out in early 2007. Even Sarge had 5.8.4.
> 
> Ubuntu 10.04 Lucid (old-lts) had 5.10.1-8ubuntu2 .
> 
> So unless we care about Debian 3 Sarge or source builds done more than
> 10 years ago, 5.8.8 is more than good enough.

I vote +1 to upgrading Perl, but I'm not sure whether all supported
platforms other than Linux provide that version of Perl.

Anyway, ./configure says the following.

| *** The installed version of Perl, $PERL, is too old to use with PostgreSQL.
| *** Perl version 5.8 or later is required, but this is $pgac_perl_version." 
>&5

If the TAP tests require 5.8.8, the whole build sequence should
follow that. Or at least ./configure --enable-tap-tests should
check that.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tatsuo Ishii
> In my understanding pg_is_in_recovery() returns true if it's a standby node.
> However, even if it returns other than true, the server is not necessarily
> a primary. Even it's not configured as a streaming replication primary,
> it returns other than true.
> 
> So if your intention is finding a primary, I am not sure if
> pg_is_in_recovery() is the best solution.

Yes, I don't think pg_is_in_recovery() is the best, but there doesn't seem to 
be a better solution.  pg_is_in_recovery(), as its name clearly suggests, 
returns true if the server is performing recovery.  For example, it returns 
true if hot_standby=on is present in postgresql.conf and recovery from a 
backup is in progress, even though the server is not a standby.

Regards
Takayuki Tsunakawa





Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Craig Ringer
On 17 November 2016 at 00:01, Michael Paquier  wrote:
> On Tue, Nov 15, 2016 at 11:32 PM, Noah Misch  wrote:
>> On Wed, Nov 16, 2016 at 12:48:03PM +0800, Craig Ringer wrote:
>>> --- a/src/test/perl/README
>>> +++ b/src/test/perl/README
>>> @@ -64,3 +64,20 @@ For available PostgreSQL-specific test methods and some 
>>> example tests read the
>>>  perldoc for the test modules, e.g.:
>>>
>>>  perldoc src/test/perl/PostgresNode.pm
>>> +
>>> +Required Perl
>>> +-
>>> +
>>> +Tests must run on perl 5.8.8 and newer. perlbrew is a good way to obtain
>>
>> Tests must run on Perl 5.8.0 and newer.

Why? We've always discussed 5.8.8 before. That's already a full 10 years old.

5.8.0 is from *2002*. Are you running any 15-year-old systems you
can't update to a *patch release* of the same perl major?


gendef.pl says it needs 5.8.0 or newer with "use 5.8.0" but that's the
only relevant thing I can find, and it's not relevant to the TAP tests
anyway.

BTW, here's my older approach, with a dockerfile, before I became
aware of perlbrew:

https://www.postgresql.org/message-id/camsr+ygr6pu-guyp-ft98xwxasc9n6j-awzaqxvw_+p3rtc...@mail.gmail.com


5.8.8 is in CentOS 5.

Debian Lenny (6) has 5.14.2, Wheezy (5) has 5.10, and Etch (4) has 5.8.8.
Etch came out in early 2007. Even Sarge had 5.8.4.

Ubuntu 10.04 Lucid (old-lts) had 5.10.1-8ubuntu2 .

So unless we care about Debian 3 Sarge or source builds done more than
10 years ago, 5.8.8 is more than good enough.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
> On Mon, Nov 14, 2016 at 8:09 PM, Tsunakawa, Takayuki
>  wrote:
> > From: pgsql-hackers-ow...@postgresql.org
> >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Mithun Cy
> >> Thanks, my concern is suppose you have 3 server in cluster A(new
> >> version), B(new version), C(old version). If we implement as above
> >> only new servers will send ParameterStatus message to indicate what
> >> type of server we are connected. Server C will not send same. So we
> >> will not be able to use new feature "failover to new master" for such
> a kind of cluster.
> >
> > No, the streaming replication requires the same major release for all
> member servers, so there's no concern about the mixed-version cluster.
> 
> True, but there is a concern about a newer libpq connecting to older servers.
> If we mimic what JDBC is already doing, we'll be compatible and you'll be
> able to use this feature with a v10 libpq without worrying about whether
> the target server is also v10.  If we invent something new on the server
> side, then you'll need to be sure you have both a v10 libpq and v10 server.

Do we really want to enable libpq failover against pre-V10 servers?  I don't 
think so, as libpq is a part of PostgreSQL and libpq failover is a new feature 
in PostgreSQL 10.  At least, as one user, I don't want PostgreSQL to sacrifice 
another round trip to establish a connection.  As a developer, I don't want 
libpq code more complex than necessary (the proposed patch adds a new state to 
the connection state machine.)  And I think it's natural for the server to 
return the server attribute (primary/standby, writable, etc.) as a response to 
the Startup message like server_version, standard_conforming_strings and 
server_encoding.

Regards
Takayuki Tsunakawa





Re: [HACKERS] [PATCH] Better logging of COPY queries if log_statement='all'

2016-11-16 Thread Haribabu Kommi
Hi Dmitry,

This is a gentle reminder.

You are assigned as a reviewer for this patch in the 11-2016 commitfest,
but you haven't shared your review yet. Can you please share your views
about the patch? This will help the commitfest run more smoothly.

Some people are against the current patch's approach. If you can share
your views on the use case and so on, it would be helpful. If you, like
the others, disagree with the approach, it may be better to reject the
patch.

Please ignore this if you have already shared your review.


Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] Fun fact about autovacuum and orphan temp tables

2016-11-16 Thread Haribabu Kommi
Hi Dilip,

This is a gentle reminder.

You are assigned as a reviewer for this patch in the 11-2016 commitfest,
but you haven't shared your review yet. Can you please share your views
about the patch? This will help the commitfest run more smoothly.

Michael has sent an updated patch based on some discussion.

Please ignore this if you have already shared your review.


Regards,
Hari Babu
Fujitsu Australia


[HACKERS] partial documentation builds

2016-11-16 Thread Peter Eisentraut
Here is a tip for building the documentation faster during development.
With the new toolchain, you can build only a part of the documentation,
like this:

make html XSLTPROCFLAGS='--stringparam rootid pageinspect'

where "pageinspect" is some XML id (see the top of pageinspect.sgml in
this example).  This will build only that part of the documentation,
which is much faster than the full build, but still in the proper
context so that links and section numbering work correctly.

Here is an example of integrating this into an editor:

(defun top-level-id ()
  "Return top-level XML/SGML id"
  (save-excursion
    (goto-char (point-min))
    (if (re-search-forward "^<[a-z0-9]+ id=\"\\([a-z-]+\\)\"")
        (match-string 1))))

(defun compile-html-of-this ()
  (interactive)
  (let ((id (top-level-id)))
    (when id
      (compile (format "make html XSLTPROCFLAGS='--stringparam rootid %s'" id)))))

(defun browse-html-of-this ()
  (interactive)
  (let ((id (top-level-id)))
    (when id
      (browse-url-of-file (concat (file-name-directory buffer-file-name)
                                  "html/" id ".html")))))


-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 1:14 PM, Erik Rijkers wrote:
> real5m21.348s  -- for 'make -j 8 html'
> versus
> real1m8.502s   -- for 'make -j 8 oldhtml'
> 
> Centos 6.6 - I suppose it's getting a bit old, I don't know if that's 
> the cause of the discrepancy with other's measurements.

I tested the build on a variety of operating systems, including that
one, with different tool chain versions and I am getting consistent
performance.  So the above is unclear to me at the moment.

For the heck of it, run this

xsltproc --nonet --stringparam pg.version '10devel'  stylesheet.xsl
postgres.xml

to make sure it's not downloading something from the network.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 1:23 PM, Alvaro Herrera wrote:
> Now admittedly this conversion didn't do one bit towards the goal I
> wanted to achieve: that each doc source file ended up as a valid XML
> file that could be processed separately with tools like xml2po.  They
> are still SGML only -- in particular no doctype declaration and
> incomplete closing tags.

Yes, that is one of the upcoming steps.  But we need to do the current
thing first.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] amcheck (B-Tree integrity checking tool)

2016-11-16 Thread Thomas Munro
On Thu, Sep 8, 2016 at 6:44 AM, Peter Geoghegan  wrote:
> On Fri, Sep 2, 2016 at 11:19 AM, Kevin Grittner  wrote:
>> IMV the process is to post a patch to this list to certify that it
>> is yours to contribute and free of IP encumbrances that would
>> prevent us from using it.  I will wait for that post.
>
> I attach my V3

+ * Ideally, we'd compare every item in the index against every other
+ * item in the index, and not trust opclass obedience of the transitive
+ * law to bridge the gap between children and their grandparents (as
+ * well as great-grandparents, and so on).  We don't go to those
+ * lengths because that would be prohibitively expensive, and probably
+ * not markedly more effective in practice.
+ */

I skimmed the Modern B-Tree Techniques paper recently, and there was a
section on "the cousin problem" when verifying btrees, which this
comment reminded me of.  I tried to come up with an example of a pair
of characters being switched in a collation that would introduce
undetectable corruption:

T1.  Order   = a < b < c < d

     Btree   =     [a|c]
                  /     \
                [a] --- [c]
                 |       |
                [b] --- [d]

T2.  Order   = a < c < b < d

Now c and b have been switched around.  Won't amcheck still pass since
we only verify immediate downlinks and siblings?  Yet searches for b
will take a wrong turn at the root.  Do I have this right?  I wonder
if there is a way to verify that each page's high key < the 'next' key
for all ancestors.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Andres Freund
Hi,

On 2016-11-16 19:29:41 -0500, Robert Haas wrote:
> On Wed, Nov 16, 2016 at 6:56 PM, Michael Paquier
>  wrote:
> > On Wed, Nov 16, 2016 at 11:24 AM, Robert Haas  wrote:
> >> diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
> >> index 805db76..ddb0183 100644
> >> --- a/contrib/pgcrypto/Makefile
> >> +++ b/contrib/pgcrypto/Makefile
> >> @@ -1,6 +1,6 @@
> >>  # contrib/pgcrypto/Makefile
> >>
> >> -INT_SRCS = md5.c sha1.c sha2.c internal.c internal-sha2.c blf.c 
> >> rijndael.c \
> >> +INT_SRCS = md5.c sha1.c internal.c internal-sha2.c blf.c rijndael.c \
> >>  fortuna.c random.c pgp-mpi-internal.c imath.c
> >>  INT_TESTS = sha2
> >
> > I would like to do so. And while Linux is happy with that, macOS is
> > not, this results in linking resolution errors when compiling the
> > library.
>
> Well, I'm running macOS and it worked for me.  TBH, I don't even quite
> understand how it could NOT work.  What makes the symbols provided by
> libpgcommon any different from any other symbols that are part of the
> binary?  How could one set work and the other set fail?  I can
> understand how there might be some problem if the backend were
> dynamically linked libpgcommon, but it's not.  It's doing this:

With -Wl,--as-needed the linker will dismiss unused symbols found in a
static library. Maybe that's the difference?

Andres




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 6:56 PM, Michael Paquier
 wrote:
> On Wed, Nov 16, 2016 at 11:24 AM, Robert Haas  wrote:
>> diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
>> index 805db76..ddb0183 100644
>> --- a/contrib/pgcrypto/Makefile
>> +++ b/contrib/pgcrypto/Makefile
>> @@ -1,6 +1,6 @@
>>  # contrib/pgcrypto/Makefile
>>
>> -INT_SRCS = md5.c sha1.c sha2.c internal.c internal-sha2.c blf.c rijndael.c \
>> +INT_SRCS = md5.c sha1.c internal.c internal-sha2.c blf.c rijndael.c \
>>  fortuna.c random.c pgp-mpi-internal.c imath.c
>>  INT_TESTS = sha2
>
> I would like to do so. And while Linux is happy with that, macOS is
> not, this results in linking resolution errors when compiling the
> library.

Well, I'm running macOS and it worked for me.  TBH, I don't even quite
understand how it could NOT work.  What makes the symbols provided by
libpgcommon any different from any other symbols that are part of the
binary?  How could one set work and the other set fail?  I can
understand how there might be some problem if the backend were
dynamically linked libpgcommon, but it's not.  It's doing this:

gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -g -O2 -Wall -Werror -L../../src/port -L../../src/common
-Wl,-dead_strip_dylibs  -Wall -Werror   access/brin/brin.o [many more
.o files omitted for brevity] utils/fmgrtab.o
../../src/timezone/localtime.o ../../src/timezone/strftime.o
../../src/timezone/pgtz.o ../../src/port/libpgport_srv.a
../../src/common/libpgcommon_srv.a -lm -o postgres

As I understand it, listing the .a file on the linker command line
like that is exactly equivalent to listing out each individual .o file
that is part of that static library.  There shouldn't be any
difference in how a symbol that's provided by one of the .o files
looks vs. how a symbol that's provided by one of the .a files looks.
Let's test it.

[rhaas pgsql]$ nm src/backend/postgres | grep -E 'GetUserIdAndContext|psprintf'
0001003d71d0 T _GetUserIdAndContext
00010040f160 T _psprintf

So... how would the dynamic loader know that it was supposed to find
the first one and fail to find the second one?  More to the point,
it's clear that it DOES find the second one on every platform in the
buildfarm, because adminpack, dblink, pageinspect, and pgstattuple all
use psprintf without the push-ups you are proposing to undertake here.
pg_md5_encrypt is used by passwordcheck, and forkname_to_number is
used by pageinspect and pg_prewarm.  It all just works.  No special
magic required.

> Yes we could do that for consistency with the other nix platforms. But
> is that really necessary as libpgcommon already has those objects?

The point is that *postgres* already has those objects.  You don't
need to include them twice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 11:24 AM, Robert Haas  wrote:
> diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
> index 805db76..ddb0183 100644
> --- a/contrib/pgcrypto/Makefile
> +++ b/contrib/pgcrypto/Makefile
> @@ -1,6 +1,6 @@
>  # contrib/pgcrypto/Makefile
>
> -INT_SRCS = md5.c sha1.c sha2.c internal.c internal-sha2.c blf.c rijndael.c \
> +INT_SRCS = md5.c sha1.c internal.c internal-sha2.c blf.c rijndael.c \
>  fortuna.c random.c pgp-mpi-internal.c imath.c
>  INT_TESTS = sha2

I would like to do so. And while Linux is happy with that, macOS is
not: this results in link resolution errors when compiling the
library.

> And for Mkvcbuild.pm I think you could just do this:
>
> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
> index de764dd..1993764 100644
> --- a/src/tools/msvc/Mkvcbuild.pm
> +++ b/src/tools/msvc/Mkvcbuild.pm
> @@ -114,6 +114,15 @@ sub mkvcbuild
>md5.c pg_lzcompress.c pgfnames.c psprintf.c relpath.c rmtree.c
>string.c username.c wait_error.c);
>
> +if ($solution->{options}->{openssl})
> +{
> +push(@pgcommonallfiles, 'sha2_openssl.c');
> +}
> +else
> +{
> +push(@pgcommonallfiles, 'sha2.c');
> +}
> +
>  our @pgcommonfrontendfiles = (
>  @pgcommonallfiles, qw(fe_memutils.c file_utils.c
>restricted_token.c));
> @@ -422,7 +431,7 @@ sub mkvcbuild
>  {
>  $pgcrypto->AddFiles(
>  'contrib/pgcrypto',   'md5.c',
> -'sha1.c', 'sha2.c',
> +'sha1.c',
>  'internal.c', 'internal-sha2.c',
>  'blf.c',  'rijndael.c',
>  'fortuna.c',  'random.c',
>
> Is there some reason that won't work?

Yes, we could do that for consistency with the other *nix platforms. But
is that really necessary, as libpgcommon already has those objects?
-- 
Michael




Re: [HACKERS] Improve OOM handling in pg_locale.c

2016-11-16 Thread Haribabu Kommi
Hi Mithun,

This is a gentle reminder.

You are assigned as a reviewer to the current patch in the 11-2016
commitfest, but you haven't shared your review yet. Could you please share
your views on the patch? This will help the commitfest run more smoothly.

Please ignore this if you have already shared your review.

Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-16 Thread Haribabu Kommi
Hi Tomas and Gerdan,

This is a gentle reminder.

You are both assigned as reviewers to the current patch in the 11-2016
commitfest, but you haven't shared any reviews yet. Could you please share
your views on the patch? This will help the commitfest run more smoothly.

Please ignore this if you have already shared your review.

Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] commitfest 2016-11 status summary

2016-11-16 Thread Haribabu Kommi
Hi All,


The commitfest status summary after one week of progress:

Needs review: 76
Waiting on author: 16
Ready for Committer: 16
Committed: 32
Moved to next CF: 0
Rejected: 4
Returned with feedback: 3
TOTAL: 147


Overall progress of completion - 26%.
Week-1 progress of completion - 9%.
Week-2 progress of completion - 3%.

Progress this week is slow compared to the earlier weeks.


There are patches with the reviewer assigned, but no review is shared yet.
I will try to ask the reviewer in the corresponding thread for a review.

There are still some patches with no reviewer assigned. Please consider
reviewing some of them. The review can cover code, testing, or whatever
you are comfortable with. We need your help.


Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] WIP: About CMake v2

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 2:14 PM, Mark Kirkwood
 wrote:
> I see there are some patches for the putenv issue with Visual studio 2013 in
> progress on this list - it is probably best to work with the author to see
> if 2015 has any new issues and keep all changes for that *out* of the cmake
> patches.

I don't recall all the details here, but no wrappers should be needed,
particularly on Windows where we already do that:
src/include/port/win32.h:#define putenv(x) pgwin32_putenv(x)
-- 
Michael




Re: [HACKERS] WIP: About CMake v2

2016-11-16 Thread Mark Kirkwood
I see there are some patches for the putenv issue with Visual studio 
2013 in progress on this list - it is probably best to work with the 
author to see if 2015 has any new issues and keep all changes for that 
*out* of the cmake patches.



regards


Mark


On 16/11/16 21:22, Yury Zhuravlev wrote:
I made this small wrapper specifically for MSVC 2015 without Service
Packs, because Postgres macros were in conflict with MS internal
functions. After some time and some MSVC updates, Michael Paquier could
not reproduce my problem, but I keep this patch to avoid problems in the
future. I can check the old behavior again and revert all changes if
needed, and of course I plan to make a separate patch for these changes.

Thanks!






Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Tom Lane
Erik Rijkers  writes:
> On 2016-11-16 21:59, Peter Eisentraut wrote:
>> I have committed another patch to improve the build performance a bit.
>> Could you check again?

> It is indeed better (three minutes off, nice) but still:
> real5m21.348s  -- for 'make -j 8 html'
> versus
> real1m8.502s   -- for 'make -j 8 oldhtml'

Yeah, I get about the same.

> CentOS 6.6 - I suppose it's getting a bit old; I don't know if that's
> the cause of the discrepancy with others' measurements.

... and on the same toolchain.  Probably the answer is "install a newer
toolchain", but from what I understand, there's a whole lot of work there
if your platform vendor doesn't supply it already packaged.

regards, tom lane




Re: [HACKERS] Hash Indexes

2016-11-16 Thread Robert Haas
On Sat, Nov 12, 2016 at 12:49 AM, Amit Kapila  wrote:
> You are right and I have changed the code as per your suggestion.

So...

+/*
+ * We always maintain the pin on bucket page for whole scan operation,
+ * so releasing the additional pin we have acquired here.
+ */
+if ((*opaquep)->hasho_flag & LH_BUCKET_PAGE)
+_hash_dropbuf(rel, *bufp);

This relies on the page contents to know whether we took a pin; that
seems like a bad plan.  We need to know intrinsically whether we took
a pin.

+ * If the bucket split is in progress, then we need to skip tuples that
+ * are moved from old bucket.  To ensure that vacuum doesn't clean any
+ * tuples from old or new buckets till this scan is in progress, maintain
+ * a pin on both of the buckets.  Here, we have to be cautious about

It wouldn't be a problem if VACUUM removed tuples from the new bucket,
because they'd have to be dead anyway.   It also wouldn't be a problem
if it removed tuples from the old bucket that were actually dead.  The
real issue isn't vacuum anyway, but the process of cleaning up after a
split.  We need to hold the pin so that tuples being moved from the
old bucket to the new bucket by the split don't get removed from the
old bucket until our scan is done.

+old_blkno = _hash_get_oldblock_from_newbucket(rel,
opaque->hasho_bucket);

Couldn't you pass "bucket" here instead of "opaque->hasho_bucket"?  I
feel like I'm repeating this ad nauseum, but I really think it's bad
to rely on the special space instead of our own local variables!

-/* we ran off the end of the bucket without finding a match */
+/*
+ * We ran off the end of the bucket without finding a match.
+ * Release the pin on bucket buffers.  Normally, such pins are
+ * released at end of scan, however scrolling cursors can
+ * reacquire the bucket lock and pin in the same scan multiple
+ * times.
+ */
 *bufP = so->hashso_curbuf = InvalidBuffer;
 ItemPointerSetInvalid(current);
+_hash_dropscanbuf(rel, so);

I think this comment is saying that we'll release the pin on the
primary bucket page for now, and then reacquire it later if the user
reverses the scan direction.  But that doesn't sound very safe,
because the bucket could be split in the meantime and the order in
which tuples are returned could change.  I think we want that to
remain stable within a single query execution.

+_hash_readnext(rel, &buf, &page, &opaque,
+   (opaque->hasho_flag & LH_BUCKET_PAGE) ? true : false);

Same comment: don't rely on the special space to figure this out.
Keep track.  Also != 0 would be better than ? true : false.

+/*
+ * setting hashso_skip_moved_tuples to false
+ * ensures that we don't check for tuples that are
+ * moved by split in old bucket and it also
+ * ensures that we won't retry to scan the old
+ * bucket once the scan for same is finished.
+ */
+so->hashso_skip_moved_tuples = false;

I think you've got a big problem here.  Suppose the user starts the
scan in the new bucket and runs it forward until they end up in the
old bucket.  Then they turn around and run the scan backward.  When
they reach the beginning of the old bucket, they're going to stop, not
move back to the new bucket, AFAICS.  Oops.

_hash_first() has a related problem: a backward scan starts at the end
of the new bucket and moves backward, but it should start at the end
of the old bucket, and then when it reaches the beginning, flip to the
new bucket and move backward through that one.  Otherwise, a backward
scan and a forward scan don't return tuples in opposite order, which
they should.

I think what you need to do to fix both of these problems is a more
thorough job gluing the two buckets together.  I'd suggest that the
responsibility for switching between the two buckets should probably
be given to _hash_readprev() and _hash_readnext(), because every place
that needs to advance to the next or previous page cares about this.
Right now you are trying to handle it mostly in the functions that call
those functions, but that is prone to errors of omission.

Also, I think that so->hashso_skip_moved_tuples is badly designed.
There are two separate facts you need to know: (1) whether you are
scanning a bucket that was still being populated at the start of your
scan and (2) if yes, whether you are scanning the bucket being
populated or whether you are instead scanning the corresponding "old"
bucket.  You're trying to keep track of that using one Boolean, but
one Boolean only has two states and there are three possible states
here.

+if (H_BUCKET_BEING_SPLIT(pag

Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Alvaro Herrera
Peter Eisentraut wrote:

> > This xslt build  takes  8+ minutes, compared to barely 1 minute for 
> > 'oldhtml'.
> 
> I have committed another patch to improve the build performance a bit.
> Could you check again?

After the optimization, on my laptop it takes 2:31 with the new system
and 1:58 with the old one.  If it can be made faster, all the better,
but at this level I'm okay.

Now admittedly this conversion didn't do one bit towards the goal I
wanted to achieve: that each doc source file ended up as a valid XML
file that could be processed separately with tools like xml2po.  They
are still SGML only -- in particular no doctype declaration and
incomplete closing tags.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Erik Rijkers

On 2016-11-16 21:59, Peter Eisentraut wrote:

On 11/16/16 6:29 AM, Erik Rijkers wrote:


This xslt build  takes  8+ minutes, compared to barely 1 minute for
'oldhtml'.


I have committed another patch to improve the build performance a bit.
Could you check again?


It is indeed better (three minutes off, nice) but still:
real5m21.348s  -- for 'make -j 8 html'
versus
real1m8.502s   -- for 'make -j 8 oldhtml'

CentOS 6.6 - I suppose it's getting a bit old; I don't know if that's
the cause of the discrepancy with others' measurements.


Obviously as long as 'oldhtml' is possible I won't complain.

thanks,

Erik Rijkers









Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 6:46 AM, Tom Lane wrote:
> What was the improvement we were hoping for, again?

Get off an ancient and unmaintained tool chain.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 12:38 PM, Alvaro Herrera wrote:
> "make check" still uses DSSSL though.  Is that intentional?  Is it going
> to be changed?

It doesn't use DSSSL.  It uses nsgmls to parse the SGML, which is a
different thing that will be addressed in a separate step.

So, yes, but later.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] amcheck (B-Tree integrity checking tool)

2016-11-16 Thread Andres Freund
Hi,


I think the patch could use a pgindent run.

On 2016-09-07 11:44:01 -0700, Peter Geoghegan wrote:
> diff --git a/contrib/amcheck/amcheck--1.0.sql 
> b/contrib/amcheck/amcheck--1.0.sql
> new file mode 100644
> index 000..ebbd6ac
> --- /dev/null
> +++ b/contrib/amcheck/amcheck--1.0.sql
> @@ -0,0 +1,20 @@
> +/* contrib/amcheck/amcheck--1.0.sql */
> +
> +-- complain if script is sourced in psql, rather than via CREATE EXTENSION
> +\echo Use "CREATE EXTENSION amcheck" to load this file. \quit
> +
> +--
> +-- bt_index_check()
> +--
> +CREATE FUNCTION bt_index_check(index regclass)
> +RETURNS VOID
> +AS 'MODULE_PATHNAME', 'bt_index_check'
> +LANGUAGE C STRICT;
> +
> +--
> +-- bt_index_parent_check()
> +--
> +CREATE FUNCTION bt_index_parent_check(index regclass)
> +RETURNS VOID
> +AS 'MODULE_PATHNAME', 'bt_index_parent_check'
> +LANGUAGE C STRICT;

I'd really want a function that runs all checks on a table.


> +#define CHECK_SUPERUSER() { \
> + if (!superuser()) \
> + ereport(ERROR, \
> + 
> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), \
> +  (errmsg("must be superuser to use 
> verification functions"; }

Why is this a macro?


> +/*
> + * Callers to verification functions should never receive a false positive
> + * indication of corruption.  Therefore, when using amcheck functions for
> + * stress testing, it may be useful to temporally change the CORRUPTION 
> elevel
> + * to PANIC, to immediately halt the server in the event of detecting an
> + * invariant condition violation.  This may preserve more information about 
> the
> + * nature of the underlying problem.  Note that modifying the CORRUPTION
> + * constant to be an elevel < ERROR is not well tested.
> + */
> +#ifdef NOT_USED
> +#define CORRUPTION   PANIC
> +#define CONCERN  WARNING
> +#define POSITION NOTICE
> +#else
> +#define CORRUPTION   ERROR
> +#define CONCERN  DEBUG1
> +#define POSITION DEBUG2
> +#endif

Hm, if we want that - and it doesn't seem like a bad idea - I think we
should be make it available without recompiling.


> +/*
> + * A B-Tree cannot possibly have this many levels, since there must be one
> + * block per level, which is bound by the range of BlockNumber:
> + */
> +#define InvalidBtreeLevel((uint32) InvalidBlockNumber)

Hm, I think it'd be, for the long term, better if we'd move the btree
check code to amcheck_nbtree.c or such.


> +Datum
> +bt_index_parent_check(PG_FUNCTION_ARGS)
> +{
> + Oid indrelid = PG_GETARG_OID(0);
> + Oid heapid;
> + Relationindrel;
> + Relationheaprel;
> +
> + CHECK_SUPERUSER();
> +
> + /*
> +  * We must lock table before index to avoid deadlocks.  However, if the
> +  * passed indrelid isn't an index then IndexGetRelation() will fail.
> +  * Rather than emitting a not-very-helpful error message, postpone
> +  * complaining, expecting that the is-it-an-index test below will fail.
> +  */
> + heapid = IndexGetRelation(indrelid, true);
> + if (OidIsValid(heapid))
> + heaprel = heap_open(heapid, ShareLock);
> + else
> + heaprel = NULL;
> +
> + /*
> +  * Open the target index relations separately (like relation_openrv(), 
> but
> +  * with heap relation locked first to prevent deadlocking).  In hot 
> standby
> +  * mode this will raise an error.
> +  */
> + indrel = index_open(indrelid, ExclusiveLock);

Theoretically we should recheck that the oids still match, theoretically
the locking here allows the index->table mapping to change. It's not a
huge window tho...


> + /* Check for active uses of the index in the current transaction */
> + CheckTableNotInUse(indrel, "bt_index_parent_check");

Why do we actually care?



> +static void
> +btree_index_checkable(Relation rel)
> +{
> + if (rel->rd_rel->relkind != RELKIND_INDEX ||
> + rel->rd_rel->relam != BTREE_AM_OID)
> + ereport(ERROR,
> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
> +  errmsg("only nbtree access method indexes are 
> supported"),
> +  errdetail("Relation \"%s\" is not a B-Tree 
> index.",
> +
> RelationGetRelationName(rel;
> +
> + if (RELATION_IS_OTHER_TEMP(rel))
> + ereport(ERROR,
> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
> +  errmsg("cannot access temporary tables of 
> other sessions"),
> +  errdetail("Index \"%s\" is associated with 
> temporary relation.",
> +
> RelationGetRelationName(rel;
> +
> + if (!rel->rd_index->indisready)
> + ereport(ERROR,
>

Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 6:09 AM, Magnus Hagander wrote:
> Btw., shouldn't the output web site pages have encoding declarations?
> 
> That gets sent in the http header, doesn't it? 

That's probably alright, but it would be nicer if the documents were
self-contained.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Unlogged tables cleanup

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 11:45 AM, Robert Haas  wrote:
> The header comment for heap_create_init_fork() says this:
>
> /*
>  * Set up an init fork for an unlogged table so that it can be correctly
>  * reinitialized on restart.  Since we're going to do an immediate sync, we
>  * only need to xlog this if archiving or streaming is enabled.  And the
>  * immediate sync is required, because otherwise there's no guarantee that
>  * this will hit the disk before the next checkpoint moves the redo pointer.
>  */
>
> Your patch causes the code not to match the comment any more.  And the
> comment explains why at the time I wrote this code I thought it was OK
> to have the XLogIsNeeded() test in there, so it clearly needs
> updating.

Indeed I missed this comment block. Please let me suggest the following instead:
 /*
  * Set up an init fork for an unlogged table so that it can be correctly
- * reinitialized on restart.  Since we're going to do an immediate sync, we
- * only need to xlog this if archiving or streaming is enabled.  And the
- * immediate sync is required, because otherwise there's no guarantee that
- * this will hit the disk before the next checkpoint moves the redo pointer.
+ * reinitialized on restart.  An immediate sync is required even if the
+ * page has been logged, because the write did not go through
+ * shared_buffers and therefore a concurrent checkpoint may have moved
+ * the redo pointer past our xlog record.
  */
-- 
Michael
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index 0946aa2..c9c7049 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -164,13 +164,12 @@ blbuildempty(Relation index)
 	metapage = (Page) palloc(BLCKSZ);
 	BloomFillMetapage(index, metapage);
 
-	/* Write the page.  If archiving/streaming, XLOG it. */
+	/* Write the page and log it. */
 	PageSetChecksumInplace(metapage, BLOOM_METAPAGE_BLKNO);
 	smgrwrite(index->rd_smgr, INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,
 			  (char *) metapage, true);
-	if (XLogIsNeeded())
-		log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
-	BLOOM_METAPAGE_BLKNO, metapage, false);
+	log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
+BLOOM_METAPAGE_BLKNO, metapage, false);
 
 	/*
 	 * An immediate sync is required even if we xlog'd the page, because the
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..624aa84 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -242,13 +242,12 @@ btbuildempty(Relation index)
 	metapage = (Page) palloc(BLCKSZ);
 	_bt_initmetapage(metapage, P_NONE, 0);
 
-	/* Write the page.  If archiving/streaming, XLOG it. */
+	/* Write the page and log it */
 	PageSetChecksumInplace(metapage, BTREE_METAPAGE);
 	smgrwrite(index->rd_smgr, INIT_FORKNUM, BTREE_METAPAGE,
 			  (char *) metapage, true);
-	if (XLogIsNeeded())
-		log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
-	BTREE_METAPAGE, metapage, false);
+	log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
+BTREE_METAPAGE, metapage, false);
 
 	/*
 	 * An immediate sync is required even if we xlog'd the page, because the
diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c
index 01c8d21..8ac3b00 100644
--- a/src/backend/access/spgist/spginsert.c
+++ b/src/backend/access/spgist/spginsert.c
@@ -161,13 +161,12 @@ spgbuildempty(Relation index)
 	page = (Page) palloc(BLCKSZ);
 	SpGistInitMetapage(page);
 
-	/* Write the page.  If archiving/streaming, XLOG it. */
+	/* Write the page and log it. */
 	PageSetChecksumInplace(page, SPGIST_METAPAGE_BLKNO);
 	smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_METAPAGE_BLKNO,
 			  (char *) page, true);
-	if (XLogIsNeeded())
-		log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
-	SPGIST_METAPAGE_BLKNO, page, false);
+	log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
+SPGIST_METAPAGE_BLKNO, page, false);
 
 	/* Likewise for the root page. */
 	SpGistInitPage(page, SPGIST_LEAF);
@@ -175,9 +174,8 @@ spgbuildempty(Relation index)
 	PageSetChecksumInplace(page, SPGIST_ROOT_BLKNO);
 	smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_ROOT_BLKNO,
 			  (char *) page, true);
-	if (XLogIsNeeded())
-		log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
-	SPGIST_ROOT_BLKNO, page, true);
+	log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
+SPGIST_ROOT_BLKNO, page, true);
 
 	/* Likewise for the null-tuples root page. */
 	SpGistInitPage(page, SPGIST_LEAF | SPGIST_NULLS);
@@ -185,9 +183,8 @@ spgbuildempty(Relation index)
 	PageSetChecksumInplace(page, SPGIST_NULL_BLKNO);
 	smgrwrite(index->rd_smgr, INIT_FORKNUM, SPGIST_NULL_BLKNO,
 			  (char *) page, true);
-	if (XLogIsNeeded())
-		log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
-	SPGIST_NULL_BLKNO, page, true);
+	log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,
+SPGIST_NULL_BLKNO, page, true);
 
 	/*
 	 * An

Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 6:29 AM, Erik Rijkers wrote:
> On 2016-11-16 08:06, Peter Eisentraut wrote:
>> Build HTML documentation using XSLT stylesheets by default
>>
>> The old DSSSL build is still available for a while using the make 
>> target
>> "oldhtml".
> 
> This xslt build  takes  8+ minutes, compared to barely 1 minute for 
> 'oldhtml'.

I have committed another patch to improve the build performance a bit.
Could you check again?

On my machine and on the build farm, the performance now almost matches
the DSSSL build.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-11-16 Thread Christian Ullrich
* Noah Misch wrote:

> On Tue, Apr 26, 2016 at 07:39:29PM +0200, Christian Ullrich
> wrote:

> > * Christian Ullrich wrote:

> Patch 1 looks good, except that it should be three patches:
> 
> - cosmetic parts: change whitespace and s/0/NULL/
> - remove CloseHandle() call
> - probe for debug CRT modules, not just release CRT modules

Attached as 0001, 0002, 0003, in that order.

0004 is what used to be 0002; it disables caching of "DLL not
loaded" results.

> I recommend also moving the SetEnvironmentVariable() call before
> the putenv calls.  That way, if a CRT loads while pgwin32_putenv()
> is executing, the newly-loaded CRT will always have the latest
> value.

Agreed, attached as 0005.

0006 was previously known as 0004, removing all caching.

> > Even with patch 0004, there is still a race condition between
> > the main thread and a theoretical additional thread created by
> > some other component that unloads some CRT between
> > GetProcAddress() and the _putenv() call, but that is hardly
> > fixable.
> 
> I think you can fix it by abandoning GetModuleHandle() in favor
> of GetModuleHandleEx() + GetProcessAddress() + FreeLibrary(). 

Attached as 0007.

> > There is another possible fix, ugly as sin, if we really need
> > to keep the whole environment update machinery *and* cannot do 
> > the full loop each time. Patch 0003 pins each CRT when we see 
> > it for the first time.

This is now 0008.

Patch topology: master --- 1 .. 5 --- 6 --- 7
  \
   `- 8

> I prefer the simplicity of abandoning the cache (patch 4), if it
> performs decently.  Would you compare the performance of patch 1,
> patches 1+2+3, and patches 1+2+4?  This should measure the right 
> thing (after substituting @libdir@ for your environment):

Tested with release builds; Core i7-6700K (quad/HT; 4000 MHz).
I did three runs each, and they were always within 0.5 % of each
other's run time.

- patch 1 (now 1-3): 24 μs/iteration (24 s for 1e6)
- patch 1+2+3 (now 1-5 + 8): 29 μs/iteration
- patch 1+2+4 (now 1-7): 30 μs/iteration

I also did a debug build with 1+2+4 that came in at 84 μs/iteration.

--
Christian

... now how do I get all the dangling debris out of the repo ...


0008-getmodulehandleex-pin.patch
Description: 0008-getmodulehandleex-pin.patch


0001-whitespace.patch
Description: 0001-whitespace.patch


0002-closehandle.patch
Description: 0002-closehandle.patch


0003-debug-crt.patch
Description: 0003-debug-crt.patch


0004 no-caching-notfound.patch
Description: 0004 no-caching-notfound.patch


0005-reorder-update.patch
Description: 0005-reorder-update.patch


0006-no-caching-at-all.patch
Description: 0006-no-caching-at-all.patch


0007-getmodulehandleex-correctness.patch
Description: 0007-getmodulehandleex-correctness.patch



[HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Alvaro Herrera
Peter Eisentraut wrote:
> Build HTML documentation using XSLT stylesheets by default

"make check" still uses DSSSL though.  Is that intentional?  Is it going
to be changed?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Unlogged tables cleanup

2016-11-16 Thread Robert Haas
On Thu, Nov 10, 2016 at 9:25 PM, Michael Paquier
 wrote:
> On Thu, Nov 10, 2016 at 10:52 PM, Kuntal Ghosh
>  wrote:
>> On Thu, Nov 10, 2016 at 3:42 PM, Michael Paquier
>>  wrote:
>>> Nah. Looking at the code the fix is quite obvious.
>>> heap_create_init_fork() is checking for XLogIsNeeded() to decide if
>>> the INIT forknum should be logged or not. But this is wrong, it needs
>>> to be done unconditionally to ensure that the relation gets created at
>>> replay.
>>
>> I think that we should also update other *buildempty() functions as well.
>> For example, if the table has a primary key, we'll encounter the error again
>> for btree index.
>
> Good point. I just had a look at all the AMs: bloom, btree and SP-gist
> are plainly broken. The others are fine. Attached is an updated patch.
>
> Here are the tests I have done with wal_level = minimal to test all the AMs:
> \! rm -rf /Users/mpaquier/Desktop/tbsp/PG*
> create extension bloom;
> create extension btree_gist;
> create extension btree_gin;
> create tablespace popo location '/Users/mpaquier/Desktop/tbsp';
> set default_tablespace = popo;
> create unlogged table foo (a int);
> insert into foo values (generate_series(1,1));
> create index foo_bt on foo(a);
> create index foo_bloom on foo using bloom(a);
> create index foo_gin on foo using gin (a);
> create index foo_gist on foo using gist (a);
> create index foo_brin on foo using brin (a);
> create unlogged table foo_geo (a box);
> insert into foo_geo values ('(1,2),(2,3)');
> create index foo_spgist ON foo_geo using spgist(a);
>
> -- crash PG
> -- Insert some data
> insert into foo values (100);
> insert into foo_geo values ('(50,12),(312,3)');
>
> This should create 8 INIT forks, 6 for the indexes and 2 for the
> tables. On HEAD, only 3 are getting created and with the patch all of
> them are.

The header comment for heap_create_init_fork() says this:

/*
 * Set up an init fork for an unlogged table so that it can be correctly
 * reinitialized on restart.  Since we're going to do an immediate sync, we
 * only need to xlog this if archiving or streaming is enabled.  And the
 * immediate sync is required, because otherwise there's no guarantee that
 * this will hit the disk before the next checkpoint moves the redo pointer.
 */

Your patch causes the code not to match the comment any more.  And the
comment explains why at the time I wrote this code I thought it was OK
to have the XLogIsNeeded() test in there, so it clearly needs
updating.
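
As a side note on the quoted test: one hedged way to see which init forks
should exist is to compute their expected on-disk paths from SQL, assuming
the usual convention that the init fork sits next to the main fork with an
"_init" suffix:

```sql
-- List the expected init-fork paths for the unlogged relations
-- created by the quoted test script.  pg_relation_filepath() returns
-- the main fork's path relative to the data directory.
SELECT c.relname,
       pg_relation_filepath(c.oid) || '_init' AS expected_init_fork
FROM pg_class c
WHERE c.relname IN ('foo', 'foo_geo', 'foo_bt', 'foo_bloom',
                    'foo_gin', 'foo_gist', 'foo_brin', 'foo_spgist')
ORDER BY c.relname;
```

Eight rows should come back, matching the eight init forks described above;
whether each file actually survives a crash is what the patch is about.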

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] pg_dump versus rules, once again

2016-11-16 Thread Tom Lane
I looked into the problem reported at
https://www.postgresql.org/message-id/b3690957-fd8c-6def-d3ec-e589887dd0f1%40codata.eu

It's easy to reproduce.  Given this simple database:

create table tt (f1 int primary key, f2 text);
create view vv as select * from tt group by f1;

pg_dump with the --clean option will generate

DROP RULE "_RETURN" ON public.vv;

which the backend rejects:

ERROR:  cannot drop rule _RETURN on view vv because view vv requires it
HINT:  You can drop view vv instead.

The reason for this is that because the view is dependent on tt's primary
key constraint (since it omits an otherwise-required "GROUP BY f2"),
pg_dump has a circular dependency to deal with: it wants to create the
view pre-data, but the view definition won't work until after the pkey has
been created post-data.

Our longtime solution to circularities involving views is to break the
view into CREATE TABLE and then CREATE RULE "_RETURN", exploiting a
horribly ancient backwards-compatibility hack in the backend that will
turn an empty table into a view if it gets a command to create an ON
SELECT rule for it.  That's fine until you add --clean to the mix, which
causes pg_dump to blindly emit a DROP RULE and then DROP TABLE.  Lose.

One way to fix this would be to add code to the backend so that
DROP RULE "_RETURN" converts the view back into a table, but ick.

We've talked before about replacing this _RETURN-rule business with
CREATE OR REPLACE VIEW, ie the idea would be for pg_dump to first emit
a dummy view with the right column names/types, say

CREATE VIEW vv AS SELECT null::int AS f1, null::text AS f2;

and then later when it's safe, emit CREATE OR REPLACE VIEW with the view's
real query.  The main point of this according to past discussion would be
to eliminate dump files' dependency on the _RETURN-rule implementation of
views, so that someday in the far future we could change that if we
wished.  However, if that were how pg_dump dealt with circular view
dependencies, then it would not take much more code to emit

CREATE OR REPLACE VIEW vv AS SELECT null::int AS f1, null::text AS f2;

as a substitute for the DROP RULE "_RETURN" step in a --clean script.
(Later, after we'd gotten rid of whatever was circularly depending on
that, we would emit DROP VIEW vv.)
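Concretely, for the tt/vv example above, the --clean portion of the dump could
then open with something like this (a sketch of the proposed output, not what
pg_dump emits today):

```sql
-- Replace the view body with a dummy query of the same column
-- names/types; this severs the dependency on tt's primary key:
CREATE OR REPLACE VIEW public.vv AS
    SELECT null::int AS f1, null::text AS f2;
-- The ordinary drops can now proceed:
DROP VIEW public.vv;
DROP TABLE public.tt;
```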

So I'm thinking of going and doing this.  Any objections?

Although this is a bug fix, I'm not sure whether to consider
back-patching.  The case isn't that hard to work around -- either ignore
the error, or change your view to spell out its GROUP BY in full.
But it'd be annoying to hit this during pg_upgrade for instance.

CREATE OR REPLACE VIEW has existed since 7.3, so we're not creating much
of a portability problem at the server end if we make this change.
However, I notice that the kluge that was added to RestoreArchive() for
--if-exists will dump core (Assert failure or null pointer dereference)
if an archived dropStmt isn't what it expects.  I think that's broken
anyway, but it'd become actively broken as soon as we start handling views
this way, so we'd need to back-patch at least some change there.
Probably it's sufficient to teach that code to do nothing to statements
it doesn't recognize.

BTW, the ability to create a view that has this hazard has existed since
9.1.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Remove the comment on the countereffectiveness of large shared_buffers on Windows

2016-11-16 Thread Robert Haas
On Fri, Nov 11, 2016 at 10:03 PM, Amit Kapila  wrote:
> Right, but for other platforms the recommendation seems to be 25% of
> RAM; can we safely say that for Windows as well?  As per the test results
> in this thread, it seems read-write performance degrades when
> shared_buffers is increased from 12.5% to 25%.  Since the test
> was run for a short duration, that degradation could be just
> run-to-run variation, which is why I suggested doing a few more tests.

Really, 25% of RAM with no upper limit?  I've usually heard 25% of RAM
with a limit of 8GB, or less.  I suspect that the performance for
large values of shared_buffers has improved in recent releases, but
there's one huge reason for not going too high: you're going to get
double buffering between shared_buffers and the OS cache, and the more
you raise shared_buffers, the worse that double-buffering is going to
get.  Now, on the flip side, on a write-heavy workload, raising
shared_buffers will reduce the rate at which dirty buffers are evicted
which may save substantial amounts of I/O.  But if you have, say, a
200GB database and 128GB of RAM, would you really set shared_buffers to 32GB?
I wouldn't.
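If one did want to follow that rule of thumb, the setting itself is simple
(the values here are illustrative assumptions, not benchmarked
recommendations):

```sql
-- On a 128GB machine: 25% would be 32GB, but cap well below that to
-- limit double buffering against the OS page cache.
ALTER SYSTEM SET shared_buffers = '8GB';
-- Note: shared_buffers requires a server restart to take effect.
```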

We're not really going to get out from under these issues until we
rewrite the system not to depend on buffered I/O, but I haven't gotten
around to that yet. :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] proposal: psql \setfileref

2016-11-16 Thread Pavel Stehule
2016-11-16 17:58 GMT+01:00 Tom Lane :

> Pavel Stehule  writes:
> > Daniel's proposal has important issues:
>
> > 1. You need to store and hold the content in memory.
> > 2. You cannot use tab completion - without it this feature loses usability.
> > 3. You have to do two steps.
> > 4. It depends on the OS.
>
> I think you're putting *way* too much emphasis on tab completion of the
> filename.  That's a nice-to-have, but it should not cause us to contort

> the feature design to get it.
>

I am sorry, I cannot agree. When you cannot use tab completion in
interactive mode, you lose valuable help.


>
> I'm not especially impressed by objections 3 and 4, either.  Point #1
> has some merit, but since the wire protocol is going to limit us to
> circa 1GB anyway, it doesn't seem fatal.
>

There is no "cat" on MS Windows. For relatively basic functionality you
would have to switch commands between platforms.


>
> What I like about Daniel's proposal is that because it adds two separate
> new behaviors, it can be used for more things than just "interpolate a
> file".  What I don't like is that we're still stuck in the land of textual
> interpolation into query strings, which as you noted upthread is not
> very user-friendly for long values.  I thought there was some value in
> your original idea of having a way to get psql to use extended query mode
> with the inserted value being sent as a parameter.  But again, I'd rather
> see that decoupled as a separate feature and not tightly tied to the
> use-case of interpolating a file.
>

With the content-commands syntax I am able to control it simply - there is
room for inventing new features.

My patch has the full functionality of Daniel's proposal:

\set xxx {rb somefile} - this is already supported in my last patch

Currently I work only with real files, but the patch can easily be extended
to work with named pipes, or anything else. There is room for extension.


>
> Maybe using extended mode could be driven off a setting rather than being
> identified by syntax?


It is possible, but I don't think it is user friendly - which is worse:
using a special syntax, or hunting for some psql setting?



> There doesn't seem to be much reason why you'd want
> some of the :-interpolated values in a query to be inlined and others sent
> as parameters.  Also, for testing purposes, it'd be mighty nice to have a
> way of invoking extended query mode in psql with a clear way to define
> which things are sent as parameters.  But I don't want to have to make
> separate files to contain the values being sent.
>
> regards, tom lane
>


Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 1:53 PM, Michael Paquier
 wrote:
>> Yeah, I don't see a point to that.
>
> OK, by doing so here is what I have. The patch generated by
> format-patch, as well as diffs generated by git diff -M are reduced
> and the patch gets half in size. They could be reduced more by adding
> at the top of sha2.c a couple of defines to map the old SHAXXX_YYY
> variables with their PG_ equivalents, but that does not seem worth it
> to me, and diffs are listed line by line.

All right, this version is much easier to review.  I am a bit puzzled,
though.  It looks like src/common will include sha2.o if built without
OpenSSL and sha2_openssl.o if built with OpenSSL.  So far, so good.
One would think, then, that pgcrypto would not need to worry about
these functions any more because libpgcommon_srv.a is linked into the
server, so any references to those symbols would presumably just work.
However, that's not what you did.  On Windows, you added a dependency
on libpgcommon which I think is unnecessary because that stuff is
already linked into the server.  On non-Windows systems, however, you
have instead taught pgcrypto to copy the source file it needs from
src/common and recompile it.  I don't understand why you need to do
any of that, or why it should be different on Windows vs. non-Windows.
So I think that the changes for the pgcrypto Makefile could just look
like this:

diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
index 805db76..ddb0183 100644
--- a/contrib/pgcrypto/Makefile
+++ b/contrib/pgcrypto/Makefile
@@ -1,6 +1,6 @@
 # contrib/pgcrypto/Makefile

-INT_SRCS = md5.c sha1.c sha2.c internal.c internal-sha2.c blf.c rijndael.c \
+INT_SRCS = md5.c sha1.c internal.c internal-sha2.c blf.c rijndael.c \
 fortuna.c random.c pgp-mpi-internal.c imath.c
 INT_TESTS = sha2

And for Mkvcbuild.pm I think you could just do this:

diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index de764dd..1993764 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -114,6 +114,15 @@ sub mkvcbuild
   md5.c pg_lzcompress.c pgfnames.c psprintf.c relpath.c rmtree.c
   string.c username.c wait_error.c);

+if ($solution->{options}->{openssl})
+{
+push(@pgcommonallfiles, 'sha2_openssl.c');
+}
+else
+{
+push(@pgcommonallfiles, 'sha2.c');
+}
+
 our @pgcommonfrontendfiles = (
 @pgcommonallfiles, qw(fe_memutils.c file_utils.c
   restricted_token.c));
@@ -422,7 +431,7 @@ sub mkvcbuild
 {
 $pgcrypto->AddFiles(
 'contrib/pgcrypto',   'md5.c',
-'sha1.c', 'sha2.c',
+'sha1.c',
 'internal.c', 'internal-sha2.c',
 'blf.c',  'rijndael.c',
 'fortuna.c',  'random.c',

Is there some reason that won't work?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 12:00 PM, Catalin Iacob  wrote:
> On Wed, Nov 16, 2016 at 2:50 PM, Robert Haas  wrote:
>> Hmm, let's go back to the JDBC method, then.  "show
>> transaction_read_only" will return true on a standby, but presumably
>> also on any other non-writable node.  You could even force it to be
>> true artificially if you wanted to force traffic off of a node, using
>> ALTER {SYSTEM|USER ...|DATABASE ..} SET default_transaction_read_only
>> = on
>>
>> I think that would address Alvaro's concern, and it's nicer anyway if
>> libpq and JDBC are doing the same thing.
>
> Not sure I agree that using this is a good idea in the first place.
>
> But if we end up using this, I really think the docs should be very
> explicit about what's implemented and not just say master/any. With
> the master/any docs in the patch I would be *very* surprised if a
> master is skipped only because it was configured with
> default_transaction_read_only = on.

It seems like it is going to be difficult to please everyone here
100%, because there are multiple conflicting priorities.  But we can
definitely document whatever choices we make.
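For reference, forcing a node to report itself read-only under that scheme
could look like this (a sketch; any of the ALTER variants Robert lists would
work):

```sql
-- Make new sessions default to read-only transactions, so drivers
-- checking "show transaction_read_only" will route writes elsewhere:
ALTER SYSTEM SET default_transaction_read_only = on;
SELECT pg_reload_conf();
-- A new session then reports 'on':
SHOW transaction_read_only;
```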

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Michael Paquier
On Wed, Nov 16, 2016 at 4:46 AM, Robert Haas  wrote:
> On Tue, Nov 15, 2016 at 5:12 PM, Michael Paquier
>  wrote:
>> On Tue, Nov 15, 2016 at 12:40 PM, Robert Haas  wrote:
>>> On Tue, Nov 15, 2016 at 2:24 PM, Michael Paquier
>>>  wrote:
 How do you plug in that with OpenSSL? Are you suggesting to use a set
 of undef definitions in the new header in the same way as pgcrypto is
 doing, which is rather ugly? Because that's what the deal is about in
 this patch.
>>>
>>> Perhaps that justifies renaming them -- although I would think the
>>> fact that they are static would prevent conflicts -- but why reorder
>>> them and change variable names?
>>
>> Yeah... Perhaps I should not have done that, which was just for
>> consistency's sake, and even if the new reordering makes more sense
>> actually...
>
> Yeah, I don't see a point to that.

OK, by doing so here is what I have. The patch generated by
format-patch, as well as diffs generated by git diff -M are reduced
and the patch gets half in size. They could be reduced more by adding
at the top of sha2.c a couple of defines to map the old SHAXXX_YYY
variables with their PG_ equivalents, but that does not seem worth it
to me, and diffs are listed line by line.
-- 
Michael
From 3171c40390703e9b12f97e25914f31accf480a52 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Wed, 16 Nov 2016 10:48:42 -0800
Subject: [PATCH] Refactor SHA2 functions and move them to src/common/

This way both frontends and backends can refer to them if needed. These
functions are taken from pgcrypto, which now fetches the source files it
needs directly from src/common/ when compiling its library.

A new, more PG-like interface is designed for these SHA2 functions,
allowing linking to either OpenSSL or the in-core code taken from KAME
as needed, which is the most flexible solution.
---
 contrib/pgcrypto/.gitignore |   4 +
 contrib/pgcrypto/Makefile   |   5 +-
 contrib/pgcrypto/fortuna.c  |  12 +--
 contrib/pgcrypto/internal-sha2.c|  82 +++
 contrib/pgcrypto/sha2.h | 100 --
 src/common/Makefile |   6 ++
 {contrib/pgcrypto => src/common}/sha2.c | 174 +---
 src/common/sha2_openssl.c   | 102 +++
 src/include/common/sha2.h   | 115 +
 src/tools/msvc/Mkvcbuild.pm |  22 ++--
 10 files changed, 388 insertions(+), 234 deletions(-)
 delete mode 100644 contrib/pgcrypto/sha2.h
 rename {contrib/pgcrypto => src/common}/sha2.c (82%)
 create mode 100644 src/common/sha2_openssl.c
 create mode 100644 src/include/common/sha2.h

diff --git a/contrib/pgcrypto/.gitignore b/contrib/pgcrypto/.gitignore
index 5dcb3ff..30619bf 100644
--- a/contrib/pgcrypto/.gitignore
+++ b/contrib/pgcrypto/.gitignore
@@ -1,3 +1,7 @@
+# Source file copied from src/common
+/sha2.c
+/sha2_openssl.c
+
 # Generated subdirectories
 /log/
 /results/
diff --git a/contrib/pgcrypto/Makefile b/contrib/pgcrypto/Makefile
index 805db76..4085abb 100644
--- a/contrib/pgcrypto/Makefile
+++ b/contrib/pgcrypto/Makefile
@@ -4,7 +4,7 @@ INT_SRCS = md5.c sha1.c sha2.c internal.c internal-sha2.c blf.c rijndael.c \
 		fortuna.c random.c pgp-mpi-internal.c imath.c
 INT_TESTS = sha2
 
-OSSL_SRCS = openssl.c pgp-mpi-openssl.c
+OSSL_SRCS = openssl.c pgp-mpi-openssl.c sha2_openssl.c
 OSSL_TESTS = sha2 des 3des cast5
 
 ZLIB_TST = pgp-compression
@@ -59,6 +59,9 @@ SHLIB_LINK += $(filter -leay32, $(LIBS))
 SHLIB_LINK += -lws2_32
 endif
 
+sha2.c sha2_openssl.c: % : $(top_srcdir)/src/common/%
+	rm -f $@ && $(LN_S) $< .
+
 rijndael.o: rijndael.tbl
 
 rijndael.tbl:
diff --git a/contrib/pgcrypto/fortuna.c b/contrib/pgcrypto/fortuna.c
index 5028203..ba74db6 100644
--- a/contrib/pgcrypto/fortuna.c
+++ b/contrib/pgcrypto/fortuna.c
@@ -34,9 +34,9 @@
 #include 
 #include 
 
+#include "common/sha2.h"
 #include "px.h"
 #include "rijndael.h"
-#include "sha2.h"
 #include "fortuna.h"
 
 
@@ -112,7 +112,7 @@
 #define CIPH_BLOCK		16
 
 /* for internal wrappers */
-#define MD_CTX			SHA256_CTX
+#define MD_CTX			pg_sha256_ctx
 #define CIPH_CTX		rijndael_ctx
 
 struct fortuna_state
@@ -154,22 +154,22 @@ ciph_encrypt(CIPH_CTX * ctx, const uint8 *in, uint8 *out)
 static void
 md_init(MD_CTX * ctx)
 {
-	SHA256_Init(ctx);
+	pg_sha256_init(ctx);
 }
 
 static void
 md_update(MD_CTX * ctx, const uint8 *data, int len)
 {
-	SHA256_Update(ctx, data, len);
+	pg_sha256_update(ctx, data, len);
 }
 
 static void
 md_result(MD_CTX * ctx, uint8 *dst)
 {
-	SHA256_CTX	tmp;
+	pg_sha256_ctx	tmp;
 
 	memcpy(&tmp, ctx, sizeof(*ctx));
-	SHA256_Final(dst, &tmp);
+	pg_sha256_final(&tmp, dst);
 	px_memset(&tmp, 0, sizeof(tmp));
 }
 
diff --git a/contrib/pgcrypto/internal-sha2.c b/contrib/pgcrypto/internal-sha2.c
index 55ec7e1..e06f554 100644
--- a/contrib/pgcrypto/internal-sha2.c
+++ b/contrib/pgcrypto/internal-sha2.c
@@ -33,8 +33,8 @@
 
 #include 
 
+#include "common/sha2.h"
 #inc

Re: [HACKERS] Avoiding pin scan during btree vacuum

2016-11-16 Thread Alvaro Herrera
I am ready now to backpatch this to 9.4 and 9.5; here are the patches.
They are pretty similar, but some adjustments were needed due to XLog
format changes in 9.5.  (I kept most of Simon's original commit
message.)

This patch has survived in master for a long time, and in released 9.6
for a couple of months now.  We have a couple of customers running with
this patch installed also (who make heavy use of their standbys),
without reported problems.  Moreover, the next minors for 9.4 and 9.5
are scheduled for February 2017, so unless some major security problem
pops up that prompts a more urgent update, we have three months to go
before this is released to the masses running 9.4 and 9.5; if a bug
crops up in the meantime, we have plenty of time to get it fixed.

While reviewing this I noticed a small thinko in the README, which I'm
going to patch in 9.6 and master thusly (with the same commit message):

diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README
index 067d15c..a3f11da 100644
--- a/src/backend/access/nbtree/README
+++ b/src/backend/access/nbtree/README
@@ -521,11 +521,12 @@ because it allows running applications to continue while 
the standby
 changes state into a normally running server.
 
 The interlocking required to avoid returning incorrect results from
-MVCC scans is not required on standby nodes. That is because
+non-MVCC scans is not required on standby nodes. That is because
 HeapTupleSatisfiesUpdate(), HeapTupleSatisfiesSelf(),
 HeapTupleSatisfiesDirty() and HeapTupleSatisfiesVacuum() are only
 ever used during write transactions, which cannot exist on the standby.
-This leaves HeapTupleSatisfiesMVCC() and HeapTupleSatisfiesToast().
+MVCC scans are already protected by definition, so HeapTupleSatisfiesMVCC()
+is not a problem.  That leaves concern only for HeapTupleSatisfiesToast().
 HeapTupleSatisfiesToast() doesn't use MVCC semantics, though that's
 because it doesn't need to - if the main heap row is visible then the
 toast rows will also be visible. So as long as we follow a toast

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From caaba7a1db9d9981c8f93b6dda7d33979164845d Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Tue, 15 Nov 2016 22:26:19 -0300
Subject: [PATCH] Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to
require complex interlocking that matched the requirements on the
master. This required an O(N) operation that became a significant
problem with large indexes, causing replication delays of seconds or in
some cases minutes while the XLOG_BTREE_VACUUM was replayed.

This commit skips the “pin scan” that was previously required, by
observing in detail when and how it is safe to do so, with full
documentation. The pin scan is skipped only in replay; the VACUUM code
path on master is not touched here.

No tests included. Manual tests, using an additional patch to view WAL
records and their timing, have shown that the change in WAL records and
their handling successfully reduces replication delay.

This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375
by Simon Riggs, to branches 9.4 and 9.5.  No further backpatch is
possible because this depends on catalog scans being MVCC.  I (Álvaro)
additionally fixed a slight problem in the README, which explains why
this touches the 9.6 and master branches.
---
 src/backend/access/nbtree/README| 22 ++
 src/backend/access/nbtree/nbtree.c  | 15 +++
 src/backend/access/nbtree/nbtxlog.c | 23 +--
 src/include/access/nbtree.h |  6 --
 4 files changed, 58 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/nbtree/README b/src/backend/access/nbtree/README
index 4820f76..997d272 100644
--- a/src/backend/access/nbtree/README
+++ b/src/backend/access/nbtree/README
@@ -486,6 +486,28 @@ normal running after recovery has completed. This is a key 
capability
 because it allows running applications to continue while the standby
 changes state into a normally running server.
 
+The interlocking required to avoid returning incorrect results from
+non-MVCC scans is not required on standby nodes. That is because
+HeapTupleSatisfiesUpdate(), HeapTupleSatisfiesSelf(),
+HeapTupleSatisfiesDirty() and HeapTupleSatisfiesVacuum() are only
+ever used during write transactions, which cannot exist on the standby.
+MVCC scans are already protected by definition, so HeapTupleSatisfiesMVCC()
+is not a problem.  That leaves concern only for HeapTupleSatisfiesToast().
+HeapTupleSatisfiesToast() doesn't use MVCC semantics, though that's
+because it doesn't need to - if the main heap row is visible then the
+toast rows will also be visible. So as long as we follow a toast
+pointer from a visi

Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Catalin Iacob
On Wed, Nov 16, 2016 at 2:50 PM, Robert Haas  wrote:
> Hmm, let's go back to the JDBC method, then.  "show
> transaction_read_only" will return true on a standby, but presumably
> also on any other non-writable node.  You could even force it to be
> true artificially if you wanted to force traffic off of a node, using
> ALTER {SYSTEM|USER ...|DATABASE ..} SET default_transaction_read_only
> = on
>
> I think that would address Alvaro's concern, and it's nicer anyway if
> libpq and JDBC are doing the same thing.

Not sure I agree that using this is a good idea in the first place.

But if we end up using this, I really think the docs should be very
explicit about what's implemented and not just say master/any. With
the master/any docs in the patch I would be *very* surprised if a
master is skipped only because it was configured with
default_transaction_read_only = on.




Re: [HACKERS] quieting DEBUG3

2016-11-16 Thread Alvaro Herrera
Robert Haas wrote:

> Right: me either.  So, here is a series of three patches.

+1 to the general idea of the three patches.  I didn't review nor test
them in detail.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal: psql \setfileref

2016-11-16 Thread Tom Lane
Pavel Stehule  writes:
> Daniel's proposal has important issues:

> 1. You need to store and hold the content in memory.
> 2. You cannot use tab completion - without it this feature loses usability.
> 3. You have to do two steps.
> 4. It depends on the OS.

I think you're putting *way* too much emphasis on tab completion of the
filename.  That's a nice-to-have, but it should not cause us to contort
the feature design to get it.

I'm not especially impressed by objections 3 and 4, either.  Point #1
has some merit, but since the wire protocol is going to limit us to
circa 1GB anyway, it doesn't seem fatal.

What I like about Daniel's proposal is that because it adds two separate
new behaviors, it can be used for more things than just "interpolate a
file".  What I don't like is that we're still stuck in the land of textual
interpolation into query strings, which as you noted upthread is not
very user-friendly for long values.  I thought there was some value in
your original idea of having a way to get psql to use extended query mode
with the inserted value being sent as a parameter.  But again, I'd rather
see that decoupled as a separate feature and not tightly tied to the
use-case of interpolating a file.

Maybe using extended mode could be driven off a setting rather than being
identified by syntax?  There doesn't seem to be much reason why you'd want
some of the :-interpolated values in a query to be inlined and others sent
as parameters.  Also, for testing purposes, it'd be mighty nice to have a
way of invoking extended query mode in psql with a clear way to define
which things are sent as parameters.  But I don't want to have to make
separate files to contain the values being sent.

regards, tom lane




Re: [HACKERS] quieting DEBUG3

2016-11-16 Thread Robert Haas
On Wed, Oct 28, 2015 at 3:10 PM, Alvaro Herrera
 wrote:
> Robert Haas wrote:
>
>> Another point I want to reiterate - because nobody seems to be
>> addressing it - is that some of these messages are totally useless.  I
>> grant that printing the transaction state (XIDs, CIDs, etc.) is
>> useful.  But does anybody really think that it's useful for every
>> statement to *additionally* generate DEBUG:  CommitTransactionCommand?
>>  Who looks at that?  What value does it have?  We do not print a
>> message when any other function that is called for every query is
>> entered - why that one?
>
> No, it is useless, let's get rid of it.  Maybe it was a useful debugging
> tool when postgres.c was being developed, but it's not useful now and
> instead very bothersome.
>
>> Whether we adjust the log levels of the messages we have or not, we
>> surely ought to get rid of the ones that are useless clutter.
>
> Agreed.  I liked your proposal for reduction of transaction state
> printout to a single, denser line.
>
>> Can anyone think of a single instance in which that particular message
>> has been useful in debugging ... anything?
>
> Not I.

Right: me either.  So, here is a series of three patches.

0001 completely removes the debug messages for
StartTransactionCommand, CommitTransactionCommand, ProcessQuery, and
ProcessUtility.  AFAICS, those are entirely log spam.  Nobody ever
wants to see them; they are a pure waste of cycles.

0002 adjusts the ShowTransactionState() output.  Every call to
ShowTransactionState() currently produces N + 1 lines of log output,
where N is the number of entries on the transaction state stack.  With
this patch, each call to ShowTransactionState() produces N lines of
output.  All of the same information is still printed.  In the common
case where N = 1, this reduces the number of lines of output by 50%
without losing any information.  The individual lines are also a bit
shorter, again without removing any information, but just tightening
up the format.

0003 reduces the ShowTransactionState() output from DEBUG3 to DEBUG5.
I did find this output helpful in some of the parallel query development,
but I think the need to look at these messages is going to be
uncommon.  Developers might care, but not often, and users never will.
Even after 0002 the log volume from turning this on is very high, so I
think it makes sense to push this down to a lower level.
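For anyone who does want to see these messages, the relevant knobs are the
standard logging GUCs (shown here as a sketch):

```sql
-- Show DEBUG output in your own session:
SET client_min_messages = debug5;
-- Or send it to the server log for all sessions:
ALTER SYSTEM SET log_min_messages = 'debug5';
SELECT pg_reload_conf();
```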

If anybody objects to these, please say which specific patch you
object to and for what reason.  Last year's discussion veered off into
a discussion of what the general policy for setting DEBUGn levels
ought to be, which didn't reach any conclusion, but I don't think that
should bar clear improvements.  I think that all of these are
improvements, and that 0001 and 0002 are particularly clear-cut.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


0001-Remove-uninformative-DEBUG-messages.patch
Description: binary/octet-stream


0002-Tighten-up-debugging-messages-that-print-the-transac.patch
Description: binary/octet-stream


0003-Reduce-transaction-status-debugging-messages-to-DEBU.patch
Description: binary/octet-stream



Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-16 Thread Karl O. Pinc
On Mon, 7 Nov 2016 23:29:28 +0100
Gilles Darold  wrote:

> Here is the v13 of the patch, ...

Attached is a doc patch to apply on top of v13.

It adds a lot more detail regarding just what is
in the current_logfiles file and when it's
in current_logfiles.  I'd like review of both the
language and the accuracy, but I'd also like to
confirm that we really want the documented
behavior regarding what's there at backend startup
vs. what's there during normal runtime.  Seems
ok the way it is to me but...

This patch is also starting to put a lot of information
inside a graphical table.  I'm fine with this but
it's a bit of a departure from the other cells of
the table so maybe somebody else has a better
suggestion.

Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein


patch_pg_current_logfile-v13.diff.detail_docs
Description: Binary data



Re: [HACKERS] proposal: psql \setfileref

2016-11-16 Thread Pavel Stehule
2016-11-16 16:59 GMT+01:00 Robert Haas :

> On Tue, Nov 15, 2016 at 11:48 AM, Pavel Stehule 
> wrote:
> >> For text contents, we already have this (pasted right from the doc):
> >>
> >> testdb=> \set content `cat my_file.txt`
> >> testdb=> INSERT INTO my_table VALUES (:'content');
> >>
> >> Maybe we might look at why it doesn't work for binary string and fix
> that?
> >>
> >> I can think of three reasons:
> >>
> >> - psql use C-style '\0' terminated string implying no nul byte in
> >> variables.
> >> That could potentially be fixed by tweaking the data structures and
> >> accessors.
> >>
> >> - the backtick operator trims ending '\n' from the result of the command
> >> (like the shell), but we could add a variant that doesn't have this
> >> behavior.
> >>
> >> - we don't have "interpolate as binary", an operator that will
> >> essentially apply PQescapeByteaConn rather than PQescapeStringConn.
> >>
> >> For example, I'd suggest this syntax:
> >>
> >> -- slurp a file into a variable, with no translation whatsoever
> >> \set content ``cat my_binary_file``
> >>
> >> -- interpolate as binary (backquotes instead of quotes)
> >> INSERT INTO my_table VALUES(:`content`);
> >>
> >> If we had something like this, would we still need filerefs? I feel like
> >> the point of filerefs is mainly to work around lack of support for
> >> binary in variables, but maybe supporting the latter directly would
> >> be an easier change to accept.
> >
> > I dropped the fileref concept - see Tom's objection. I know I could
> > hack ``, but I would not do it. It is used for shell (system) calls, and
> > that functionality can depend on the OS. The proposed content commands
> > can be extended further, and they do not depend on the OS. Another issue
> > with your proposal is that tab completion (of the file path) is hard
> > to support.
>
> On the other hand, it seems like you've invented a whole new system of
> escaping and interpolation that is completely disconnected from the
> one we already have.  psql is already full of features that nobody
> knows about identified by incomprehensible character combinations.
> Daniel's suggestion would at least be more similar to what already
> exists.
>
>
Daniel's proposal has important issues:

1. You need to store and hold the content in memory.
2. You cannot use tab completion - without it this feature loses usability.
3. You have to do two steps.
4. It depends on the OS.

Regards

Pavel




> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-11-16 Thread Michael Paquier
On Tue, Nov 15, 2016 at 11:32 PM, Noah Misch  wrote:
> On Wed, Nov 16, 2016 at 12:48:03PM +0800, Craig Ringer wrote:
>> --- a/src/test/perl/README
>> +++ b/src/test/perl/README
>> @@ -64,3 +64,20 @@ For available PostgreSQL-specific test methods and some 
>> example tests read the
>>  perldoc for the test modules, e.g.:
>>
>>  perldoc src/test/perl/PostgresNode.pm
>> +
>> +Required Perl
>> +-
>> +
>> +Tests must run on perl 5.8.8 and newer. perlbrew is a good way to obtain
>
> Tests must run on Perl 5.8.0 and newer.

Hm? Recalling the previous discussions, I thought that 5.8.8 was the
minimum supported version. That's also the oldest version with perldoc,
which is kind of useful. Anyway, it would be nice to mention the
minimum requirements directly in src/test/perl/README.
-- 
Michael




Re: [HACKERS] proposal: psql \setfileref

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 11:48 AM, Pavel Stehule  wrote:
>> For text contents, we already have this (pasted right from the doc):
>>
>> testdb=> \set content `cat my_file.txt`
>> testdb=> INSERT INTO my_table VALUES (:'content');
>>
>> Maybe we might look at why it doesn't work for binary strings and fix that?
>>
>> I can think of three reasons:
>>
>> - psql uses C-style '\0'-terminated strings, implying no NUL bytes in
>> variables.
>> That could potentially be fixed by tweaking the data structures and
>> accessors.
>>
>> - the backtick operator trims ending '\n' from the result of the command
>> (like the shell), but we could add a variant that doesn't have this
>> behavior.
>>
>> - we don't have "interpolate as binary", an operator that will
>> essentially apply PQescapeByteaConn rather than PQescapeStringConn.
>>
>> For example, I'd suggest this syntax:
>>
>> -- slurp a file into a variable, with no translation whatsoever
>> \set content ``cat my_binary_file``
>>
>> -- interpolate as binary (backquotes instead of quotes)
>> INSERT INTO my_table VALUES(:`content`);
>>
>> If we had something like this, would we still need filerefs? I feel like
>> the point of filerefs is mainly to work around lack of support for
>> binary in variables, but maybe supporting the latter directly would
>> be an easier change to accept.
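
To illustrate the distinction being drawn between string and binary
interpolation, here is a rough Python sketch (not psql's actual
implementation; the function names are invented) of the two escaping
styles: PQescapeStringConn-style quoting of a text literal versus
emitting a bytea hex literal, which has no trouble with NUL bytes.

```python
def escape_string_literal(text: str) -> str:
    # String interpolation (:'var'): double embedded single quotes,
    # roughly what PQescapeStringConn does for a text literal.
    return "'" + text.replace("'", "''") + "'"

def escape_bytea_literal(data: bytes) -> str:
    # Binary interpolation (the proposed :`var`): emit a bytea hex
    # literal, roughly what PQescapeByteaConn produces. NUL bytes,
    # which C-style '\0'-terminated strings cannot hold, are fine here.
    return "'\\x" + data.hex() + "'"

print(escape_string_literal("it's"))          # 'it''s'
print(escape_bytea_literal(b"\x00\xffabc"))   # '\x00ff616263'
```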
>
> I abandoned the fileref concept; see Tom's objection. I know I could hack
> ``, but I would rather not. It is used for shell (system) calls, and that
> functionality can depend on the OS used. The proposed content commands can
> be extended further, and they don't depend on the OS. Another issue with
> your proposal is that tab completion (for file paths) is hard to support.

On the other hand, it seems like you've invented a whole new system of
escaping and interpolation that is completely disconnected from the
one we already have.  psql is already full of features that nobody
knows about identified by incomprehensible character combinations.
Daniel's suggestion would at least be more similar to what already
exists.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Use of pg_proc.probin is legal?

2016-11-16 Thread Tom Lane
Kohei KaiGai  writes:
> On the other hands, interpret_AS_clause() raises an ereport if SQL
> function tries to use probin except
> for C-language. Is it illegal for other languages to use probin field
> to store something useful?

Well, there's no convention about how to use it.

> In my case, the PL/CUDA language allows defining an SQL function with a
> CUDA code block.
> It stores the raw CUDA source code in pg_proc.prosrc and its intermediate
> representation in pg_proc.probin, which is automatically constructed in
> the validator callback of the language handler.

I have precisely zero sympathy for such a kluge.  The validator exists
to validate, it is not supposed to modify the pg_proc row.

We could imagine extending the PL API to allow storage of a compiled
version in probin, but overloading the validator isn't the way to do
that IMO.  I'd prefer to see a separate "compile" function for it.
Existence of a compile function could be the trigger that instructs,
eg, pg_dump not to include the probin value in the dump.

(There once was a LANCOMPILER option in the CREATE LANGUAGE syntax,
which I imagine was meant to do something like this, but it was never
fully implemented and we got rid of it years ago.)

The bigger question though is whether it's really worth the trouble.
All existing PLs deal with this by caching compiled (to one degree
or another) representations in process memory.  If you keep it in
probin you can save some compilation time once per session, but on the
other hand you're limited to representations that can fit in a flat blob.
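
The per-session caching Tom describes can be sketched, purely
illustratively and with invented names, as a memo table keyed by the
function's OID, living in per-backend process memory:

```python
# Hypothetical sketch of how a PL handler caches compiled function
# bodies in process memory. compile_fn stands in for whatever
# (possibly expensive) compilation the PL performs.

_compiled_cache = {}  # per-backend memory; lives for the session

def get_compiled(fn_oid, source, compile_fn):
    entry = _compiled_cache.get(fn_oid)
    if entry is None:
        entry = compile_fn(source)  # pay compilation cost once per session
        _compiled_cache[fn_oid] = entry
    return entry

calls = []
def compile_fn(src):
    calls.append(src)       # record each real compilation
    return ("ir", src)

get_compiled(42, "code", compile_fn)
get_compiled(42, "code", compile_fn)
assert len(calls) == 1      # compiled only once per session
```

Storing the result in probin instead would save this once-per-session cost, at the price Tom notes of a flat-blob representation.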

regards, tom lane




Re: [HACKERS] WIP: About CMake v2

2016-11-16 Thread Robert Haas
On Tue, Nov 8, 2016 at 9:12 PM, Michael Paquier
 wrote:
> On Wed, Nov 9, 2016 at 7:54 AM, Craig Ringer
>  wrote:
>> On 9 Nov. 2016 06:37, "Yury Zhuravlev"  wrote:
>>> This approach I see only in Postgres project and not fully understood.
>>> Can you explain me more what reasons led to this approach?
>>
>> It's predictable. The default has the same result for everyone. I quite like
>> it myself.
>
> +1. Let's tell the system what we want it to do and not let it guess
> what we'd like done, or it will get harder to test and develop code
> for all kinds of code paths with #ifdefs. That's one step away from
> Skynet.

Exaggerate much?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel execution and prepared statements

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 3:57 PM, Tobias Bussmann  wrote:
> As the patch in [1] targeting the execution of the plan in ExecutePlan
> depending on the destination was declined, I hacked around a bit to find
> another way to use parallel mode with SQL prepared statements while
> disabling parallel execution in the case of a non-read-only execution. For
> this I used the already present test for an existing intoClause in
> ExecuteQuery to set the parallelModeNeeded flag of the prepared statement.
> This results in a non-parallel execution of the parallel plan, as we see
> with a non-zero fetch count used with the extended query protocol.
> Although this patch seems to work in my tests, I'm by no means confident
> that this is a proper way of handling the situation in question.

Yeah, we could do something like this, perhaps not in exactly this
way, but I'm not sure it's a good idea to just execute the parallel
plan without workers.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 10:16 AM, Andrew Dunstan  wrote:
> On the buildfarm crake has gone from about 2 minutes to about 3.5 minutes to
> run "make doc". That's not good but it's not an eight-fold increase either.

On my MacBook, "time make docs" as of e36ddab11735052841b4eff96642187ec9a8a7bc:

real    2m17.871s
user    2m15.505s
sys     0m2.238s

And as of 4ecd1974377ffb4d6d72874ba14fcd23965b1792:

real    1m47.696s
user    1m47.085s
sys     0m1.145s

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel execution and prepared statements

2016-11-16 Thread Tobias Bussmann
> Can you once test by just passing CURSOR_OPT_PARALLEL_OK in
> PrepareQuery() and run the tests by having forc_parallel_mode=regress?
> It seems to me that the code in the planner is changed for setting a
> parallelModeNeeded flag, so it might work as it is.

Do you mean running a "make installcheck" with "force_parallel_mode=regress" in 
postgresql.conf? I did so with just CURSOR_OPT_PARALLEL_OK in PrepareQuery 
(like the original commit 57a6a72b) and still got 3 failed tests, all with 
CREATE TABLE .. AS EXECUTE .. . With my patch applied, all tests were 
successful.





Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Robert Haas
On Wed, Nov 16, 2016 at 9:46 AM, Tom Lane  wrote:
> Erik Rijkers  writes:
>> This xslt build  takes  8+ minutes, compared to barely 1 minute for
>> 'oldhtml'.
>
> I'm just discovering the same.
>
>> I'd say that is a strong disadvantage.
>
> I'd say that is flat out unacceptable.  I won't ever use this toolchain
> if it's that much slower than the old way.  What was the improvement
> we were hoping for, again?

Gosh, and I thought the existing toolchain was already ridiculously
slow.  Couldn't somebody write a Perl script that generated the HTML
documentation from the SGML in, like, a second?  I mean, we're
basically just mapping one set up markup tags to another set of markup
tags.  And splitting up some files for the HTML version.  And adding
some boilerplate.  But none of that sounds like it should be all that
hard.

I am reminded of the saying that XML is like violence -- if it doesn't
solve your problem, you're not using enough of it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Andrew Dunstan



On 11/16/2016 09:46 AM, Tom Lane wrote:

Erik Rijkers  writes:

This xslt build  takes  8+ minutes, compared to barely 1 minute for
'oldhtml'.

I'm just discovering the same.


I'd say that is a strong disadvantage.

I'd say that is flat out unacceptable.  I won't ever use this toolchain
if it's that much slower than the old way.  What was the improvement
we were hoping for, again?






On the buildfarm crake has gone from about 2 minutes to about 3.5 
minutes to run "make doc". That's not good but it's not an eight-fold 
increase either.


cheers

andrew





Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Tom Lane
Erik Rijkers  writes:
> This xslt build  takes  8+ minutes, compared to barely 1 minute for 
> 'oldhtml'.

I'm just discovering the same.

> I'd say that is a strong disadvantage.

I'd say that is flat out unacceptable.  I won't ever use this toolchain
if it's that much slower than the old way.  What was the improvement
we were hoping for, again?

regards, tom lane




[HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Erik Rijkers

On 2016-11-16 08:06, Peter Eisentraut wrote:

Build HTML documentation using XSLT stylesheets by default

The old DSSSL build is still available for a while using the make 
target

"oldhtml".


This xslt build  takes  8+ minutes, compared to barely 1 minute for 
'oldhtml'.


I'd say that is a strong disadvantage.

I hope 'for a while' will mean 'for a long time to come' or even 
'forever.'


Erik Rijkers








Re: [HACKERS] switching documentation build to XSLT

2016-11-16 Thread Peter Eisentraut
On 11/10/16 5:49 AM, Peter Eisentraut wrote:
> We are now proposing that we change the way the HTML documentation is
> built from jade/openjade+docbook-dsssl to xsltproc+docbook-xsl.

> The actual patch to make this change is attached.  For the build
> process, nothing changes, e.g., 'make' or 'make html' will still have
> the same purposes.

This has been committed.

If you find any changes in the output that bother you, let pgsql-docs know.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Peter Eisentraut
On 11/16/16 1:38 AM, Magnus Hagander wrote:
> AFAICT this is because the output is now UTF8 and it used to be LATIN1.
> The current output actually has it in the html tags that it's utf8, but
> since the old one had no tags specifying its encoding we hardcoded it
> to LATIN1.

The old output has this:



This has always been the case, AFAICT.

Btw., shouldn't the output web site pages have encoding declarations?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Magnus Hagander
On Wed, Nov 16, 2016 at 3:02 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 11/16/16 1:38 AM, Magnus Hagander wrote:
> > AFAICT this is because the output is now UTF8 and it used to be LATIN1.
> > The current output actually has it in the html tags that it's utf8, but
> > since the old one had no tags specifying its encoding we hardcoded it
> > to LATIN1.
>
> The old output has this:
>
> 
>
> This has always been the case, AFAICT.
>

Oh, it's there. It's just not on one line and not at the beginning, so I
missed it :)


> Btw., shouldn't the output web site pages have encoding declarations?
>

That gets sent in the http header, doesn't it?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Kevin Grittner
[moving this branch of discussion to pgsql-jdbc]

On Tue, Nov 15, 2016 at 10:31 PM, Mithun Cy  wrote:

> JDBC is sending "show transaction_read_only" to find whether it
> is master or not.

If true, that's insane.  That can be different on each connection
to the cluster and can change tens of thousands of times per second
on any connection!

I know of one large shop that sets default_transaction_read_only =
true because 99% of their transactions are read only and they use
serializable transactions -- which run faster and with less
contention when transactions which don't need to write are flagged
as read only.  It seems safer to them to only turn off the read
only property for transactions which might need to write.

> Victor's patch also started with it, but he later changed it to
> pg_is_in_recovery, as that appeared more appropriate for identifying
> the master / slave.

I don't know whether that is ideal, but it is sure a lot better
than something which can change with every transaction -- or even
within a transaction (in one direction).

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 11:31 PM, Mithun Cy  wrote:
>> > So I am tempted to just
>> > hold my nose and hard-code the SQL as JDBC is presumably already
>> doing.
>
> JDBC is sending "show transaction_read_only" to find whether it is master or
> not.
> Victor's patch also started with it, but he later changed it to
> pg_is_in_recovery, as that appeared more appropriate for identifying
> the master / slave.

Hmm, let's go back to the JDBC method, then.  "show
transaction_read_only" will return true on a standby, but presumably
also on any other non-writable node.  You could even force it to be
true artificially if you wanted to force traffic off of a node, using
ALTER {SYSTEM|USER ...|DATABASE ..} SET default_transaction_read_only
= on

I think that would address Alvaro's concern, and it's nicer anyway if
libpq and JDBC are doing the same thing.
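
As a hypothetical sketch of the client-side logic being discussed (not
libpq's or JDBC's actual code; `run_show` is an invented stand-in for
opening a connection and issuing the query), probing each host with
"show transaction_read_only" and taking the first one that reports
writable might look like:

```python
from typing import Callable, Iterable, Optional

def pick_writable_host(hosts: Iterable[str],
                       run_show: Callable[[str], str]) -> Optional[str]:
    """Return the first host whose "show transaction_read_only" is 'off'.

    run_show(host) stands in for connecting to host and issuing
    "show transaction_read_only"; it returns 'on' or 'off'.
    """
    for host in hosts:
        try:
            if run_show(host) == "off":
                return host
        except OSError:
            continue  # host unreachable, try the next one
    return None

# A node with default_transaction_read_only = on reports 'on' and is
# skipped, which is the "force traffic off a node" behavior described.
fake = {"standby1": "on", "primary": "off"}
print(pick_writable_host(["standby1", "primary"], fake.__getitem__))  # primary
```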

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel execution and prepared statements

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 10:41 AM, Albe Laurenz  wrote:
> Tobias Bussmann has discovered an oddity with prepared statements.
>
> Parallel scan is used with prepared statements, but only if they have
> been created with protocol V3 "Parse".
> If a prepared statement has been prepared with the SQL statement PREPARE,
> it will never use a parallel scan.
>
> I guess that is an oversight in commit 57a6a72b, right?
> PrepareQuery in commands/prepare.c should call CompleteCachedPlan
> with cursor options CURSOR_OPT_PARALLEL_OK, just like
> exec_prepare_message in tcop/postgres.c does.
>
> The attached patch fixes the problem for me.

Actually, commit 57a6a72b made this change, and then 7bea19d0 backed
it out again because it turned out to break things.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel execution and prepared statements

2016-11-16 Thread Amit Kapila
On Wed, Nov 16, 2016 at 7:10 PM, Amit Kapila  wrote:
> On Wed, Nov 16, 2016 at 2:27 AM, Tobias Bussmann  wrote:
>> As the patch in [1] targeting the execution of the plan in ExecutePlan
>> depending on the destination was declined, I hacked around a bit to find
>> another way to use parallel mode with SQL prepared statements while
>> disabling parallel execution in the case of a non-read-only execution.
>> For this I used the already present test for an existing intoClause in
>> ExecuteQuery to set the parallelModeNeeded flag of the prepared statement.
>> This results in a non-parallel execution of the parallel plan, as we see
>> with a non-zero fetch count used with the extended query protocol.
>> Although this patch seems to work in my tests, I'm by no means confident
>> that this is a proper way of handling the situation in question.
>>
>
> Can you once test by just passing CURSOR_OPT_PARALLEL_OK in
> PrepareQuery() and run the tests by having forc_parallel_mode=regress?

Typo.
/forc_parallel_mode/force_parallel_mode


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Parallel execution and prepared statements

2016-11-16 Thread Amit Kapila
On Wed, Nov 16, 2016 at 2:27 AM, Tobias Bussmann  wrote:
> As the patch in [1] targeting the execution of the plan in ExecutePlan
> depending on the destination was declined, I hacked around a bit to find
> another way to use parallel mode with SQL prepared statements while
> disabling parallel execution in the case of a non-read-only execution. For
> this I used the already present test for an existing intoClause in
> ExecuteQuery to set the parallelModeNeeded flag of the prepared statement.
> This results in a non-parallel execution of the parallel plan, as we see
> with a non-zero fetch count used with the extended query protocol.
> Although this patch seems to work in my tests, I'm by no means confident
> that this is a proper way of handling the situation in question.
>

Can you once test by just passing CURSOR_OPT_PARALLEL_OK in
PrepareQuery() and run the tests by having forc_parallel_mode=regress?
It seems to me that the code in the planner is changed for setting a
parallelModeNeeded flag, so it might work as it is.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Snapshot too old logging

2016-11-16 Thread Kevin Grittner
On Wed, Nov 16, 2016 at 6:12 AM, Magnus Hagander  wrote:
> On Tue, Nov 15, 2016 at 9:23 PM, Alvaro Herrera  
> wrote:

>> Can this happen for relation types other than tables, say materialized
>> views?  (Your suggested wording omits relation type so it wouldn't be
>> affected, but it's worth considering I think.)
>
>
> I'm fairly certain it can hit other things, including indexes and definitely
> matviews, but I won't say I'm 100% sure :) The check is at block level. Does
> errtable() work for that? (I've never used it, and it seems it's only
> actually used in a single place in the codebase.)

Yes, it can happen for indexes and for materialized views.

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 5:12 PM, Michael Paquier
 wrote:
> On Tue, Nov 15, 2016 at 12:40 PM, Robert Haas  wrote:
>> On Tue, Nov 15, 2016 at 2:24 PM, Michael Paquier
>>  wrote:
>>> How do you plug in that with OpenSSL? Are you suggesting to use a set
>>> of undef definitions in the new header in the same way as pgcrypto is
>>> doing, which is rather ugly? Because that's what the deal is about in
>>> this patch.
>>
>> Perhaps that justifies renaming them -- although I would think the
>> fact that they are static would prevent conflicts -- but why reorder
>> them and change variable names?
>
> Yeah... Perhaps I should not have done that, which was just for
> consistency's sake, and even if the new reordering makes more sense
> actually...

Yeah, I don't see a point to that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Snapshot too old logging

2016-11-16 Thread Robert Haas
On Tue, Nov 15, 2016 at 2:25 PM, Magnus Hagander  wrote:
> On Tue, Nov 15, 2016 at 8:22 PM, Robert Haas  wrote:
>>
>> On Tue, Nov 15, 2016 at 2:21 PM, Kevin Grittner  wrote:
>> > On Tue, Nov 15, 2016 at 12:43 PM, Robert Haas 
>> > wrote:
>> >
>> >> I think it would be better not to include either the snapshot or the
>> >> block number, and just find some way to reword the error message so
>> >> that it mentions which relation was involved without implying that all
>> >> access to the relation would necessarily fail.  For example:
>> >>
>> >> ERROR: snapshot too old
>> >> DETAIL: One or more rows required by this query have already been
>> >> removed from "%s".
>> >
>> > That particular language would be misleading.  All we know about
>> > the page is that it was modified since the referencing (old)
>> > snapshot was taken.  We don't know in what way it was
>> > modified, so we must assume that it *might* have been pruned of
>> > rows that the snapshot should still be able to see.
>>
>> Oh, yeah.  So maybe "may have already been removed".
>
>
> Just to be clear, you're suggesting 'One or more rows may have already been
> removed from "%s"?

I think I was suggesting: One or more rows required by this query may
already have been removed from "%s".

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Snapshot too old logging

2016-11-16 Thread Magnus Hagander
On Tue, Nov 15, 2016 at 9:23 PM, Alvaro Herrera 
wrote:

> Magnus Hagander wrote:
> > On Tue, Nov 15, 2016 at 8:22 PM, Robert Haas 
> wrote:
> >
> > > On Tue, Nov 15, 2016 at 2:21 PM, Kevin Grittner 
> wrote:
>
> > > > That particular language would be misleading.  All we know about
> > > > the page is that it was modified since the referencing (old)
> > > > snapshot was taken.  We don't know in what way it was
> > > > modified, so we must assume that it *might* have been pruned of
> > > > rows that the snapshot should still be able to see.
> > >
> > > Oh, yeah.  So maybe "may have already been removed".
> >
> > Just to be clear, you're suggesting 'One or more rows may have already
> been
> > removed from "%s"?
>
> Focusing on the relation itself for a second, I think the name should be
> schema-qualified.  What about using errtable()?
>
> Can this happen for relation types other than tables, say materialized
> views?  (Your suggested wording omits relation type so it wouldn't be
> affected, but it's worth considering I think.)
>

I'm fairly certain it can hit other things, including indexes and
definitely matviews, but I won't say I'm 100% sure :) The check is at block
level. Does errtable() work for that? (I've never used it, and it seems
it's only actually used in a single place in the codebase.)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Performance degradation in Bitmapscan (commit 75ae538bc3168bf44475240d4e0487ee2f3bb376)

2016-11-16 Thread Dilip Kumar
On Wed, Nov 16, 2016 at 12:58 AM, Andres Freund  wrote:
> It's not really related to lossy pages, it's just that due to deletions
> / insertions a lot more "shapes" of the hashtable are hit.
Okay..
>
> I suspect that this is with parallelism disabled? Without that the query
> ends up using a parallel sequential scan for me.
>

It's with max_parallel_workers_per_gather=2. I've always noticed that Q6
takes a parallel seq scan only for
max_parallel_workers_per_gather=4 or more.
>
> I've a working fix for this, and for a similar issue Robert found. I'm
> still playing around with it, but basically the fix is to make the
> growth policy a bit more adaptive.

Okay.. Thanks.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pg_hba_file_settings view patch

2016-11-16 Thread Ashutosh Bapat
Running make check with this patch shows server crashes; regression.out
attached. I ran make check after a clean build, and tried building again
after re-running configure, but the problem is always reproducible. Do
you see this problem?

Also the patch has a whitespace error:
git diff --check
src/backend/utils/init/postinit.c:729: space before tab in indent.
+   /*

On Thu, Nov 10, 2016 at 11:40 AM, Haribabu Kommi
 wrote:
>
>
> On Mon, Nov 7, 2016 at 3:36 PM, Michael Paquier 
> wrote:
>>
>> On Mon, Nov 7, 2016 at 12:36 PM, Haribabu Kommi
>>  wrote:
>> > The added regression test fails for the cases where the server is loaded
>> > with
>> > different pg_hba.conf rules during installcheck verification. Updated
>> > patch
>> > is
>> > attached with removing those tests.
>>
>> That's not a full review as I just glanced at this patch a couple of
>> seconds...
>>
>>  #include "utils/guc.h"
>> +#include "utils/jsonb.h"
>>  #include "utils/lsyscache.h"
>> You don't need to include this header when using arrays.
>
>
> Thanks for the review. Fixed in the updated patch with
> additional error messages are also added.
>
>>
>> Implementing a test case is possible as well using the TAP
>> infrastructure. You may want to look at it and help folks testing the
>> patch more easily with a set of configurations in pg_hba.conf that
>> cover a maximum of code paths in your patch.
>
>
> Added a tap test under src/test folder to cover maximum code changes.
>
> Regards,
> Hari Babu
> Fujitsu Australia
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company





Re: [HACKERS] Fractal tree indexing

2016-11-16 Thread Andrew Borodin
Hi hackers!

Here is a prototype of procrastinating GiST. Or lazy GiST. Or buffered GiST.
Indeed, there is nothing fractal about it.

The concept is that leaf tuples stall on internal pages. From time to time
they are pushed down in batches. This code is far from perfect; I've coded
this only for research purposes.
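
As a toy illustration of this buffering scheme (invented names, nothing
like the actual GiST code), stalling inserts in a per-node buffer and
pushing a full buffer one level down in a batch can be sketched as:

```python
class Node:
    def __init__(self, capacity=4, children=None):
        self.buffer = []                 # leaf tuples stalled at this node
        self.capacity = capacity
        self.children = children or []   # empty list => leaf page

    def insert(self, tup, choose):
        # New tuples stall in this node's buffer instead of descending.
        self.buffer.append(tup)
        if len(self.buffer) > self.capacity and self.children:
            self.flush(choose)

    def flush(self, choose):
        # Push the whole batch one level down; recursive inserts flush
        # lower levels in turn when their buffers overflow.
        batch, self.buffer = self.buffer, []
        for tup in batch:
            self.children[choose(tup)].insert(tup, choose)

# Two leaves under a root; route tuples by parity.
root = Node(capacity=2, children=[Node(), Node()])
for i in range(6):
    root.insert(i, lambda t: t % 2)
```

The batched descent amortizes the cost of locating a downlink over many tuples, which is where the claimed cache friendliness comes from.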

I have tested the code with insertion of random 3D cubes. On 1 million
insertions classic GiST is 8% faster, at 3M performance is equal, and at
30M lazy GiST is 12% faster.
I've tested it with an SSD disk; maybe with an HDD the difference would be
more significant.
The test scenario is here: https://github.com/x4m/postgres_g/blob/bufdev/test.sql

In theory, this is a cache-friendly implementation (though not cache
oblivious) and it should be faster even for small datasets without heavy
disk work. But it has the extra cost of copying a batch of tuples into a
palloc'd array; I couldn't avoid that.

This is just a proof of concept for performance measurements:
1. make check fails for two tests
2. WAL is broken
3. the code is a mess in some places

I'm not sure 12% of performance is worth it. I'll appreciate any ideas on
what to do next.

Best regards, Andrey Borodin.

2013-02-13 22:54 GMT+05:00 Simon Riggs :

> On 13 February 2013 16:48, Heikki Linnakangas 
> wrote:
> > On 13.02.2013 18:20, Tom Lane wrote:
> >>
> >> Heikki Linnakangas  writes:
> >>>
> >>> The basic idea of a fractal tree index is to attach a buffer to every
> >>> non-leaf page. On insertion, instead of descending all the way down to
> >>> the correct leaf page, the new tuple is put on the buffer at the root
> >>> page. When that buffer fills up, all the tuples in the buffer are
> >>> cascaded down to the buffers on the next level pages. And recursively,
> >>> whenever a buffer fills up at any level, it's flushed to the next
> level.
> >>
> >>
> >> [ scratches head... ]  What's "fractal" about that?  Or is that just a
> >> content-free marketing name for this technique?
> >
> >
> > I'd call it out as a marketing name. I guess it's fractal in the sense
> that
> > all levels of the tree can hold "leaf tuples" in the buffers; the
> structure
> > looks the same no matter how deep you zoom, like a fractal.. But
> "Buffered"
> > would be more appropriate IMO.
>
> I hope for their sake there is more to it than that. It's hard to see
> how buffering can be patented.
>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
>
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index b8aa9bc..29770d2 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -265,6 +265,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE 
*giststate,
boolis_rootsplit;
int npage;
 
+
is_rootsplit = (blkno == GIST_ROOT_BLKNO);
 
/*
@@ -524,6 +525,8 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE 
*giststate,
gistfillbuffer(page, itup, ntup, InvalidOffsetNumber);
}
 
+   GistPageGetOpaque(page)->nlazy=1; //DO NOT FORGET: remark pages 
after split
+
MarkBufferDirty(buffer);
 
if (BufferIsValid(leftchildbuf))
@@ -589,6 +592,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE 
*giststate,
  * this routine assumes it is invoked in a short-lived memory context,
  * so it does not bother releasing palloc'd allocations.
  */
+
 void
 gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
 {
@@ -597,7 +601,6 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, 
GISTSTATE *giststate)
GISTInsertStack firststack;
GISTInsertStack *stack;
GISTInsertState state;
-   boolxlocked = false;
 
memset(&state, 0, sizeof(GISTInsertState));
state.freespace = freespace;
@@ -610,6 +613,21 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, 
GISTSTATE *giststate)
firststack.downlinkoffnum = InvalidOffsetNumber;
state.stack = stack = &firststack;
 
+
+   gistdoinsertloop(r, itup, freespace, giststate, stack, state, NULL);
+}
+
+void
+gistdoinsertloop(Relation r, IndexTuple itup, Size freespace,
+GISTSTATE *giststate, GISTInsertStack *givenstack,
+GISTInsertState state, GISTInsertStack *nolazy)
+{
+   ItemId  iid;
+   IndexTuple  idxtuple;
+
+   boolxlocked = false;
+   GISTInsertStack *stack = givenstack;
+
+
+
/*
 * Walk down along the path of smallest penalty, updating the parent
 * pointers with the key we're inserting as we go. If we crash in the
@@ -685,9 +703,86 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, 
GISTSTATE *giststate)
GISTInsertStack *item;
OffsetNumber downlinkoffnum;
 
+
+
+   if (stack != nolazy)
+   {
+   

Re: [HACKERS] Gather Merge

2016-11-16 Thread Rushabh Lathia
On Mon, Nov 14, 2016 at 3:51 PM, Thomas Munro  wrote:

> On Sat, Nov 12, 2016 at 1:56 AM, Rushabh Lathia
>  wrote:
> > On Fri, Nov 4, 2016 at 8:30 AM, Thomas Munro <
> thomas.mu...@enterprisedb.com>
> > wrote:
> >> + * Portions Copyright (c) 1996-2015, PostgreSQL Global Development
> Group
> >> + * Portions Copyright (c) 1994, Regents of the University of California
> >>
> >> Shouldn't this say just "(c) 2016, PostgreSQL Global Development
> >> Group"?
> >
> > Fixed.
>
> The year also needs updating to 2016 in nodeGatherMerge.h.
>

Oops sorry, fixed now.


>
> >> + /* Per-tuple heap maintenance cost */
> >> + run_cost += path->path.rows * comparison_cost * 2.0 * logN;
> >>
> >> Why multiply by two?  The comment above this code says "about log2(N)
> >> comparisons to delete the top heap entry and another log2(N)
> >> comparisons to insert its successor".  In fact gather_merge_getnext
> >> calls binaryheap_replace_first, which replaces the top element without
> >> any comparisons at all and then performs a sift-down in log2(N)
> >> comparisons to find its new position.  There is no per-tuple "delete"
> >> involved.  We "replace" the top element with the value it already had,
> >> just to trigger the sift-down, because we know that our comparator
> >> function might have a new opinion of the sort order of this element.
> >> Very clever!  The comment and the 2.0 factor in cost_gather_merge seem
> >> to be wrong though -- or am I misreading the code?
> >>
> > See cost_merge_append.
>
> That just got tweaked in commit 34ca0905.
>

Fixed.
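
For what it's worth, the replace-without-delete trick Thomas describes can be seen in a toy min-heap. This is a standalone sketch, not the actual binaryheap.c code: replace_first() overwrites the root in place and restores heap order with a single root-to-leaf sift-down, so the per-tuple work is one ~log2(N) pass, with no separate O(log N) delete step to justify a 2.0 factor:

```c
/*
 * Toy analogue of binaryheap_replace_first: overwrite the root and
 * sift down once.  Returns the number of levels descended, which is
 * bounded by log2(n).
 */
static int
replace_first(int *h, int n, int newval)
{
    int     i = 0;
    int     levels = 0;

    h[0] = newval;
    for (;;)
    {
        int     smallest = i;
        int     l = 2 * i + 1;
        int     r = 2 * i + 2;

        if (l < n && h[l] < h[smallest])
            smallest = l;
        if (r < n && h[r] < h[smallest])
            smallest = r;
        if (smallest == i)
            break;              /* heap property restored */

        int     tmp = h[i];

        h[i] = h[smallest];
        h[smallest] = tmp;
        i = smallest;
        levels++;
    }
    return levels;
}
```

Charging comparison_cost * logN per tuple (without the 2.0) would match this single-pass behaviour.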


>
> > Looking at the plan I realize that this is happening because wrong
> costing
> > for Gather Merge. Here in the plan we can see the row estimated by
> > Gather Merge is wrong. This is because earlier patch GM was considering
> > rows = subpath->rows, which is not true as subpath is partial path. So
> > we need to multiple it with number of worker. Attached patch also fixed
> > this issues. I also run the TPC-H benchmark with the patch and results
> > are same as earlier.
>
> In create_grouping_paths:
> +   double  total_groups = gmpath->rows *
> gmpath->parallel_workers;
>
> This hides a variable of the same name in the enclosing scope.  Maybe
> confusing?
>
> In some other places like create_ordered_paths:
> +   double  total_groups = path->rows * path->parallel_workers;
>
> Though it probably made sense to use this variable name in
> create_grouping_paths, wouldn't total_rows be better here?
>
>
Initially I just copied it from the other places. I agree with you that in
create_ordered_paths, total_rows makes more sense.


> It feels weird to be working back to a total row count estimate from
> the partial one by simply multiplying by path->parallel_workers.
> Gather Merge will underestimate the total rows when parallel_workers <
> 4, if using partial row estimates ultimately from  cost_seqscan which
> assume some leader contribution.  I don't have a better idea though.
> Reversing cost_seqscan's logic certainly doesn't seem right.  I don't
> know how to make them agree on the leader's contribution AND give
> principled answers, since there seems to be some kind of cyclic
> dependency in the costing logic (cost_seqscan really needs to be given
> a leader contribution estimate from its superpath which knows whether
> it will allow the leader to pull tuples greedily/fairly or not, but
> that superpath hasn't been created yet; cost_gather_merge needs the
> row count from its subpath).  Or maybe I'm just confused.
>
>
Yes, I agree with you. But we can't really make changes to cost_seqscan.
Another option I can think of is to calculate the rows for Gather Merge
using the reverse of the formula used in cost_seqscan. Then we can
completely remove the rows argument from create_gather_merge_path()
and, inside create_gather_merge_path(), calculate total_rows using the
same formula as cost_seqscan. This is working fine, but I'm not quite
sure about the approach, so I attached that part of the changes as a
separate patch. Any suggestions?
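
For reference, the "reverse formula" would undo the division cost_seqscan applies. This standalone sketch mirrors the 9.6-era heuristic, where the leader's contribution tapers off by 30% per worker and disappears at 4 or more workers; the function names here are made up, not the actual planner API:

```c
/*
 * Mirrors the leader-contribution heuristic cost_seqscan uses when it
 * divides total rows among parallel workers: the leader also consumes
 * tuples, but its share shrinks as workers are added and is gone
 * entirely at 4+ workers.
 */
static double
parallel_divisor(int parallel_workers)
{
    double  divisor = (double) parallel_workers;
    double  leader_contribution = 1.0 - 0.3 * parallel_workers;

    if (leader_contribution > 0.0)
        divisor += leader_contribution;
    return divisor;
}

/* Reverse the division to recover a total-row estimate for Gather Merge. */
static double
estimate_total_rows(double partial_rows, int parallel_workers)
{
    return partial_rows * parallel_divisor(parallel_workers);
}
```

This makes concrete why simply multiplying by parallel_workers underestimates the total when parallel_workers < 4: the divisor the partial estimate came from was larger than the worker count.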


-- 
Rushabh Lathia
www.EnterpriseDB.com


gather_merge_v4_minor_changes.patch
Description: application/download


gm_v4_plus_rows_estimate.patch
Description: application/download

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Magnus Hagander
This seems to have broken our website build a bit. If you check
https://www.postgresql.org/docs/devel/static/index.html, you'll notice a
bunch of bad characters.

AFAICT this is because the output is now UTF8 where it used to be LATIN1. The
current output actually declares in its html meta tag that it's UTF8, but since
the old one had no tag specifying its encoding we hardcoded it to LATIN1.

I assume we should expect it to always be UTF8 from now on, and just find a
way for the docs loader script for the website to properly detect when we
switched over, probably by just looking for that specific meta tag.
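
A hypothetical sketch of what that loader-side detection could look like; the function name and the LATIN1 fallback policy are assumptions for illustration, not anything the website code actually does:

```c
#include <string.h>

/*
 * Hypothetical docs-loader helper: trust a page's own charset
 * declaration when present; pages from the old DSSSL build carry
 * none, so fall back to the LATIN1 we used to hardcode.
 */
static const char *
guess_docs_encoding(const char *html)
{
    if (strstr(html, "charset=utf-8") != NULL ||
        strstr(html, "charset=UTF-8") != NULL)
        return "UTF8";
    return "LATIN1";
}
```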

> Build HTML documentation using XSLT stylesheets by default
>
> The old DSSSL build is still available for a while using the make target
> "oldhtml".
>
> Branch
> --
> master
>
> Details
> ---
> http://git.postgresql.org/pg/commitdiff/e36ddab11735052841b4eff9664218
> 7ec9a8a7bc
>
> Modified Files
> --
> doc/src/sgml/Makefile   |  8 
> doc/src/sgml/stylesheet.css | 50 +-
> ---
> 2 files changed, 23 insertions(+), 35 deletions(-)
>
>
> --
> Sent via pgsql-committers mailing list (pgsql-committ...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-committers
>


Re: [HACKERS] Push down more full joins in postgres_fdw

2016-11-16 Thread Ashutosh Bapat
Thanks.

> except for few things; (1) You added the
> following comments to deparseSelectSql:
>
> +   /*
> +* For a non-base relation, use the input tlist. If a base relation
> is
> +* being deparsed as a subquery, use input tlist, if the caller has
> passed
> +* one. The column aliases of the subquery are crafted based on the
> input
> +* tlist. If tlist is NIL, there will not be any expression
> referring to
> +* any of the columns of the base relation being deparsed. Hence it
> doesn't
> +* matter whether the base relation is being deparsed as subquery or
> not.
> +*/
>
> The last sentence seems confusing to me.  My understanding is: if a base
> relation has tlist=NIL, then the base relation *must* be deparsed as
> ordinary, not as a subquery.  Maybe I'm missing something, though.  (I left
> that as-is, but I think we need to reword that to be more clear, at least.)
>

Hmm, I agree. I think the comment should just say: "Use tlist to
create the SELECT clause if one has been provided. For a base relation
with tlist = NIL, check attrs_needed information." Does that sound
good?

> (2) You added the following comments to deparseRangeTblRef:
>
>> +  * If make_subquery is true, deparse the relation as a subquery.
>> Otherwise,
>> +  * deparse it as relation name with alias.
>
> The second sentence seems confusing to me, too, because it looks like the
> relation being deparsed is assumed to be a base relation, but the relation
> can be a join relation, which might join base relations, lower join
> relations, and/or lower subqueries.  So, I modified the sentence a bit.
>

OK.

> (3) I don't think we need this in isSubqueryExpr, so I removed it from the
> patch:
>
> +   /* Keep compiler happy. */
> +   return false;
>

Doesn't that cause a compiler warning, saying "non-void function
returning nothing" or something like that? Actually, the "if
(bms_is_member(node->varno, outerrel->relids))" block always ends with
a "return". Hence we don't need to encapsulate the following code in
an else { } block; it could be taken outside.
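
To illustrate the shape being discussed, with a hypothetical stand-in for the bms_is_member() test (this is not the patch's actual code): once the if branch always returns, the else wrapper can be dropped, and the final return covers the remaining path, so there is nothing left for a trailing "keep compiler happy" return to do:

```c
#include <stdbool.h>

/*
 * Hypothetical stand-in for isSubqueryExpr's bms_is_member() check.
 * The "found" branch always returns, so the rest of the function needs
 * no else {} wrapper, and every path still ends in a return.
 */
static bool
is_subquery_var_demo(int varno, const int *outer_relids, int nrelids)
{
    for (int i = 0; i < nrelids; i++)
    {
        if (outer_relids[i] == varno)
            return true;        /* Var belongs to the outer relation */
    }

    /* formerly wrapped in else { }; hoisted out, still always reached */
    return false;
}
```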


>
>
> OK, I changed isSubqueryExpr to is_subquery_expr; I kept to refer to expr
> because I think we would soon extend that function so that it can handle
> PHVs, as I said upthread.  For getSubselectAliasInfo, I changed the name to
> get_subselect_alias_id, because (1) the word "alias" seems general and (2)
> the "for_var" part would make the name a bit long.

is_subquery_expr(Var *node -- that looks odd. Either it should be
is_subquery_var(Var * ...) or it should be is_subquery_expr(Expr * ...).
I would prefer the first one, since that's what the current patch is
doing. When we introduce PHVs, we may change it, if required.


>
>
> Done.  I modified the patch as proposed; create the tlist by
> build_tlist_to_deparse in foreign_join_ok, if needed, and search the tlist
> by tlist_member.  I also added a new member "tlist" to PgFdwRelationInfo to
> save the tlist created in foreign_join_ok.
>

Instead of adding a new member, you might want to reuse grouped_tlist
by renaming it.

> Another idea on the "tlist" member would be to save a tlist created for
> EXPLAIN into that member in estimate_patch_cost_size, so that we don't need
> to generate the tlist again in postgresGetForeignPlan, when
> use_remote_estimate=true.  But I'd like to leave that for another patch.

I think that will happen automatically, while deparsing, whether for
EXPLAIN or for actual execution.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: About CMake v2

2016-11-16 Thread Yury Zhuravlev

Mark Kirkwood wrote:
Yeah, there seems to be a lot of these. Looking through them 
almost all concern the addition of piece of code to wrap putenv. 
e.g:
I made this small wrapper specially for MSVC 2015 without Service Packs,
because postgres macros were in conflict with MS internal functions.
After some time and some MSVC updates, Michael Paquier could not
reproduce my problem, but I keep this patch to avoid problems in the
future.
I can check the old behavior again and revert all changes if needed, and
of course I have plans to make a separate patch for these changes.


Thanks!
--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: About CMake v2

2016-11-16 Thread Yury Zhuravlev

Mark Kirkwood wrote:
Actually, it was not that tricky to separate out the cmake only 
changes, and test this on unmodified sources. It appears to work 
fine for me - passes 'make check' (needs the v1_1 incremental 
patch applied of course). The Patch is attached. I wonder if the 
original had some changes for building under latest 
Windows...(I'm using Ubuntu 16.10, with cmake 3.5).


Thanks for all your work! Can you make a pull request here: 
https://github.com/stalkerg/postgres_cmake


I have rebased (merged) onto master and made another important fix.

--
Yury Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers