Re: [HACKERS] ToDo: API for SQL statement execution other than SPI

2016-06-28 Thread Craig Ringer
On 29 June 2016 at 13:55, Pavel Stehule  wrote:

> Hi
>
> I am writing two background workers - autoreindex and scheduler. In both I
> need to execute queries from the top level. I had to write redundant code
> (autoreindex_execute_sql_command in
> https://github.com/okbob/autoreindex/blob/master/utils.c).
> The same code is in pglogical. Some statements - like VACUUM or REINDEX
> CONCURRENTLY - cannot be called from SPI.
>
> It would be nice to have this function in core.
>

I strongly agree. In particular, we need something that can clean up and
recover from ERRORs. Right now you have to borrow a bunch of code from
PostgresMain.



-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Rename max_parallel_degree?

2016-06-28 Thread Amit Kapila
On Wed, Jun 29, 2016 at 11:54 AM, Julien Rouhaud
 wrote:
> Or should we allow setting it to -1 for instance to disable the limit?
>

By disabling the limit, do you mean to say that only
max_parallel_workers_per_gather will determine the number of workers, or
something else?  If the former, then I am not sure it is a good idea,
because it can confuse the user about how the two parameters work
together.



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Rename max_parallel_degree?

2016-06-28 Thread Julien Rouhaud
On 29/06/2016 06:29, Amit Kapila wrote:
> On Wed, Jun 29, 2016 at 2:57 AM, Julien Rouhaud
>  wrote:
>>
>> Thanks a lot for the help!
>>
>> PFA v6 which should fix all the issues mentioned.
> 
> Couple of minor suggestions.
> 
> - .  Note that the requested
> + , limited by
> + .  Note that the requested
> 
> Typo.
> /linked/linkend
> 

Oops, fixed.

> You can always find such mistakes by doing make check in doc/src/sgml/
> 

I wasn't aware of that, it's really a nice thing to know, thanks!

> + /*
> + * We need a memory barrier here to make sure the above test doesn't get
> + * reordered
> + */
> + pg_read_barrier();
> 
> /memory barrier/read barrier
> 

fixed

> + if (max_parallel_workers == 0)
> + {
> + ereport(elevel,
> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> + errmsg("background worker \"%s\": cannot request parallel worker if
> no parallel worker allowed",
> 
> " ..no parallel worker is allowed".  'is' seems to be missing.
> 

fixed

> 
>>  Also, on second thought I didn't add the extra hint about
>> max_worker_processes in the max_parallel_workers paragraph; since that
>> line would duplicate the preceding paragraph, it seemed better to leave
>> the text as is.
>>
> 
not a big problem, we can leave it for the committer to decide. However,
just by reading the description of max_parallel_workers, a user could set
its value higher than max_worker_processes, which we don't want.
> 

Right.  On the other hand I'm not sure that's really an issue, because
such a case is handled in the code, and setting max_parallel_workers way
above max_worker_processes could be a way to configure it as unlimited.
Or should we allow setting it to -1 for instance to disable the limit?

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 061697b..3a47421 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2005,7 +2005,8 @@ include_dir 'conf.d'
  Sets the maximum number of workers that can be started by a single
  Gather node.  Parallel workers are taken from the
  pool of processes established by
- .  Note that the requested
+ , limited by
+ .  Note that the requested
  number of workers may not actually be available at runtime.  If this
  occurs, the plan will run with fewer workers than expected, which may
  be inefficient.  The default value is 2.  Setting this value to 0
@@ -2014,6 +2015,21 @@ include_dir 'conf.d'

   
 
+  
+   max_parallel_workers (integer)
+   
+max_parallel_workers configuration 
parameter
+   
+   
+   
+
+ Sets the maximum number of workers that the system can support for
+ parallel queries.  The default value is 4.  Setting this value to 0
+ disables parallel query execution.
+
+   
+  
+
   
backend_flush_after (integer)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 088700e..ea7680b 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -452,7 +452,8 @@ LaunchParallelWorkers(ParallelContext *pcxt)
snprintf(worker.bgw_name, BGW_MAXLEN, "parallel worker for PID %d",
 MyProcPid);
worker.bgw_flags =
-   BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION;
+   BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION
+   | BGWORKER_IS_PARALLEL_WORKER;
worker.bgw_start_time = BgWorkerStart_ConsistentState;
worker.bgw_restart_time = BGW_NEVER_RESTART;
worker.bgw_main = ParallelWorkerMain;
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 2e4b670..e1da5f9 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -724,9 +724,11 @@ create_plain_partial_paths(PlannerInfo *root, RelOptInfo *rel)
}
 
/*
-* In no case use more than max_parallel_workers_per_gather workers.
+* In no case use more than max_parallel_workers or
+* max_parallel_workers_per_gather workers.
 */
-   parallel_workers = Min(parallel_workers, max_parallel_workers_per_gather);
+   parallel_workers = Min(max_parallel_workers, Min(parallel_workers,
+   max_parallel_workers_per_gather));
 
/* If any limit was set to zero, the user doesn't want a parallel scan. */
if (parallel_workers <= 0)
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 8c1dccc..6cb2f4e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -113,6 +113,7 @@ int effective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;
 
 Cost   disable_cost = 1.0e10;
 
+int  

Re: [HACKERS] Gin index on array of uuid

2016-06-28 Thread Oleg Bartunov
On Wed, Jun 29, 2016 at 6:17 AM, M Enrique 
wrote:

> What's a good source code entry point to review how this is working for
> anyarray currently? I am new to the postgres code. I spent some time
> looking for it but all I found is the following (which I have not been able
> to decipher yet).
>

Look on https://commitfest.postgresql.org/4/145/


>
> [image: pasted1]
>
> Thank you,
> Enrique
>
>
>
> On Tue, Jun 21, 2016 at 12:20 PM Tom Lane  wrote:
>
>> Enrique MailingLists  writes:
>> > Currently creating an index on an array of UUID involves defining an
>> > operator class. I was wondering if this would be a valid request to add
>> as
>> > part of the uuid-ossp extension? This seems like a reasonable operator
>> to
>> > support as a default for UUIDs.
>>
>> This makes me itch, really, because if we do this then we should logically
>> do it for every other add-on type.
>>
>> It seems like we are not that far from being able to have just one GIN
>> opclass on "anyarray".  The only parts of this declaration that are
>> UUID-specific are the comparator function and the storage type, both of
>> which could be gotten without that much trouble, one would think.
>>
>> > Any downsides to adding this as a default?
>>
>> Well, it'd likely break things at dump/reload time for people who had
>> already created a competing "default for _uuid" opclass manually.  I'm not
>> entirely sure, but possibly replacing the core opclasses with a single one
>> that is "default for anyarray" could avoid such failures.  We'd have to
>> figure out ambiguity resolution rules.
>>
>> regards, tom lane
>>
>


Re: [HACKERS] make clean didn't clean up files generated from *.(y|l)

2016-06-28 Thread Kouhei Kaigai
> Kouhei Kaigai  writes:
> > I tried to build the latest master branch just after the switch from
> > REL9_5_STABLE and "make clean", however, repl_gram.c was not cleaned
> > up correctly. So, my problem is that repl_gram.l was the latest version,
> > but the compiler saw the repl_gram.c generated from the v9.5 source.
> > ...
> > Probably, we have to add explicit cleanup of these auto-generated files
> > in the Makefiles.
> 
> "make clean" absolutely should NOT remove that file; not even "make
> distclean" should, because we ship it in tarballs.  Likewise for the other
> bison product files you mention, as well as a boatload of other derived
> files.
> 
> If you want to checkout a different release branch in the same working
> directory, I'd suggest "make maintainer-clean" or "git clean -dfx" first.
> (Personally I don't ever do that --- it's much easier to maintain a
> separate workdir per branch.)
> 
> Having said that, switching to a different branch should have resulted in
> repl_gram.l being updated by git, and thereby acquiring a new file mod
> date; so I don't understand why make wouldn't have chosen to rebuild
> repl_gram.c.  Can you provide a reproducible sequence that makes this
> happen?
>
Ah, I may have performed an incorrect operation just before switching branches.

$ cd ~/source/pgsql <-- REL9_5_STABLE; already built
$ git checkout master
$ cp -r ~/source/pgsql ~/repo/pgsql-kg
$ cd ~/repo/pgsql-kg
$ ./configure
$ make clean
$ make  <-- repl_gram.c raised an error

~/source/pgsql is a copy of the community branch, with none of my own
modifications. To keep it clean, I copied the entire repository to another
directory, but the cp command updated the file modification timestamps.
That may be the reason why repl_gram.c was not rebuilt.

Sorry for the noise.
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 





[HACKERS] ToDo: API for SQL statement execution other than SPI

2016-06-28 Thread Pavel Stehule
Hi

I am writing two background workers - autoreindex and scheduler. In both I
need to execute queries from the top level. I had to write redundant code
(autoreindex_execute_sql_command in
https://github.com/okbob/autoreindex/blob/master/utils.c).
The same code is in pglogical. Some statements - like VACUUM or REINDEX
CONCURRENTLY - cannot be called from SPI.

It would be nice to have this function in core.

Regards

Pavel


Re: [HACKERS] Reviewing freeze map code

2016-06-28 Thread Masahiko Sawada
On Fri, Jun 24, 2016 at 11:04 AM, Amit Kapila  wrote:
> On Fri, Jun 24, 2016 at 4:33 AM, Andres Freund  wrote:
>> On 2016-06-23 18:59:57 -0400, Alvaro Herrera wrote:
>>> Andres Freund wrote:
>>>
>>> > I'm looking into three approaches right now:
>>> >
>>> > 3) Use WAL logging for the already_marked = true case.
>>>
>>>
>>> > 3) This approach so far seems the best. It's possible to reuse the
>>> > xl_heap_lock record (in an afaics backwards compatible manner), and in
>>> > most cases the overhead isn't that large.  It's of course annoying to
>>> > emit more WAL, but it's not that big an overhead compared to extending a
>>> > file, or to toasting.  It's also by far the simplest fix.
>>>
>
> +1 for proceeding with Approach-3.
>
>>> I suppose it's fine if we crash midway from emitting this wal record and
>>> the actual heap_update one, since the xmax will appear to come from an
>>> aborted xid, right?
>>
>> Yea, that should be fine.
>>
>>
>>> I agree that the overhead is probably negligible, considering that this
>>> only happens when toast is invoked.  It's probably not as great when the
>>> new tuple goes to another page, though.
>>
>> I think it has to happen in both cases unfortunately. We could try to
>> add some optimizations (e.g. only release lock & WAL log if the target
>> page, via fsm, is before the current one), but I don't really want to go
>> there in the back branches.
>>
>
> You are right, I think we can try such an optimization in Head and
> that too if we see a performance hit with adding this new WAL in
> heap_update.
>
>

+1 for approach #3; a draft patch for it is attached.
I think the attached patch fixes this problem, but please let me know
if it is not what you had in mind.

Regards,

--
Masahiko Sawada
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 57da57a..2f3fd83 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3923,6 +3923,28 @@ l2:
 
 	if (need_toast || newtupsize > pagefree)
 	{
+		/*
+		 * To prevent data corruption from other backends updating the old
+		 * tuple after we release the buffer, we need to WAL-log that the
+		 * xmax of the old tuple is set, and to clear the visibility map
+		 * bits if needed, before releasing the buffer. We can reuse
+		 * xl_heap_lock for this purpose. It should be fine even if we
+		 * crash midway between this section and the actual update later,
+		 * since the xmax will appear to come from an aborted xid.
+		 */
+		START_CRIT_SECTION();
+
+		/* Clear PD_ALL_VISIBLE flags */
+		if (PageIsAllVisible(BufferGetPage(buffer)))
+		{
+			all_visible_cleared = true;
+			PageClearAllVisible(BufferGetPage(buffer));
+			visibilitymap_clear(relation, BufferGetBlockNumber(buffer),
+vmbuffer);
+		}
+
+		MarkBufferDirty(buffer);
+
 		/* Clear obsolete visibility flags ... */
 		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
@@ -3936,6 +3958,26 @@ l2:
 		/* temporarily make it look not-updated */
 		oldtup.t_data->t_ctid = oldtup.t_self;
 		already_marked = true;
+
+		if (RelationNeedsWAL(relation))
+		{
+			xl_heap_lock xlrec;
+			XLogRecPtr recptr;
+
+			XLogBeginInsert();
+			XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+			xlrec.offnum = ItemPointerGetOffsetNumber(&oldtup.t_self);
+			xlrec.locking_xid = xid;
+			xlrec.infobits_set = compute_infobits(oldtup.t_data->t_infomask,
+  oldtup.t_data->t_infomask2);
+			XLogRegisterData((char *) &xlrec, SizeOfHeapLock);
+			recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_LOCK);
+			PageSetLSN(page, recptr);
+		}
+
+		END_CRIT_SECTION();
+
 		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 
 		/*



Re: [HACKERS] Documentation fixes for pg_visibility

2016-06-28 Thread Michael Paquier
On Tue, Jun 28, 2016 at 7:05 AM, Robert Haas  wrote:
> On Mon, Jun 27, 2016 at 5:56 PM, Michael Paquier
>  wrote:
>>> Under what circumstances would you wish to check only one page of a 
>>> relation?
>>
>> What I'd like to be able to do is to stop scanning the relation once
>> one defective tuple has been found: if there is at least one problem,
>> the whole vm needs to be rebuilt anyway. So this function could be
>> wrapped in a plpgsql function for example. It is more flexible than
>> directly modifying this function so that it stops at the first problem
>> found.
>
> I think most likely the best way to handle this is teach VACUUM to do
> PageClearAllVisible() and visibilitymap_clear() on any page where
> VM_ALL_FROZEN(onerel, blkno, &vmbuffer) && !all_frozen.  This would go
> well with the existing code to clear incorrectly-set visibility map
> bits, and it would allow VACUUM (DISABLE_PAGE_SKIPPING) to serve the
> purpose you're talking about here, but more efficiently.

Ah, I see. So your suggestion is to do this job in lazy_scan_heap()
when scanning each block, and then to issue a WARNING and clear the
visibility map. Indeed that's better. I guess I need to take a closer
look at vacuumlazy.c. See attached for example, but that's perhaps not
something to have in 9.6 as that's more a micro-optimization than
anything else.
-- 
Michael


vm-all-frozen-check.patch
Description: invalid/octet-stream



Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-28 Thread Ashutosh Bapat
On Tue, Jun 28, 2016 at 12:52 PM, Etsuro Fujita  wrote:

> On 2016/06/28 15:23, Ashutosh Bapat wrote:
>
>> The wording "column "whole-row reference ..." doesn't look good.
>> A whole-row reference is not a column. The error context itself should be
>> "whole row reference for foreign table ft1".
>>
>
> Ah, you are right.  Please find attached an updated version.
>
>
This looks good to me. Regression tests pass.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] make clean didn't clean up files generated from *.(y|l)

2016-06-28 Thread Tom Lane
Kouhei Kaigai  writes:
> I tried to build the latest master branch just after the switch from
> REL9_5_STABLE and "make clean", however, repl_gram.c was not cleaned
> up correctly. So, my problem is that repl_gram.l was the latest version,
> but the compiler saw the repl_gram.c generated from the v9.5 source.
> ...
> Probably, we have to add explicit cleanup of these auto-generated files
> in the Makefiles.

"make clean" absolutely should NOT remove that file; not even "make
distclean" should, because we ship it in tarballs.  Likewise for the other
bison product files you mention, as well as a boatload of other derived
files.

If you want to checkout a different release branch in the same working
directory, I'd suggest "make maintainer-clean" or "git clean -dfx" first.
(Personally I don't ever do that --- it's much easier to maintain a
separate workdir per branch.)

Having said that, switching to a different branch should have resulted in
repl_gram.l being updated by git, and thereby acquiring a new file mod
date; so I don't understand why make wouldn't have chosen to rebuild
repl_gram.c.  Can you provide a reproducible sequence that makes this
happen?

regards, tom lane




Re: [HACKERS] Rename max_parallel_degree?

2016-06-28 Thread Amit Kapila
On Wed, Jun 29, 2016 at 2:57 AM, Julien Rouhaud
 wrote:
>
> Thanks a lot for the help!
>
> PFA v6 which should fix all the issues mentioned.

Couple of minor suggestions.

- .  Note that the requested
+ , limited by
+ .  Note that the requested

Typo.
/linked/linkend

You can always find such mistakes by doing make check in doc/src/sgml/

+ /*
+ * We need a memory barrier here to make sure the above test doesn't get
+ * reordered
+ */
+ pg_read_barrier();

/memory barrier/read barrier

+ if (max_parallel_workers == 0)
+ {
+ ereport(elevel,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("background worker \"%s\": cannot request parallel worker if
no parallel worker allowed",

" ..no parallel worker is allowed".  'is' seems to be missing.


>  Also, on second thought I didn't add the extra hint about
> max_worker_processes in the max_parallel_workers paragraph; since that
> line would duplicate the preceding paragraph, it seemed better to leave
> the text as is.
>

not a big problem, we can leave it for the committer to decide. However,
just by reading the description of max_parallel_workers, a user could set
its value higher than max_worker_processes, which we don't want.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] make clean didn't clean up files generated from *.(y|l)

2016-06-28 Thread Kouhei Kaigai
Hello,

I got the build error below. It complains that K_RESERVE_WAL is not
defined; however, it should not be a problem that was simply overlooked
for a long time.

I tried to build the latest master branch just after the switch from
REL9_5_STABLE and "make clean", however, repl_gram.c was not cleaned
up correctly. So, my problem is that repl_gram.l was the latest version,
but the compiler saw the repl_gram.c generated from the v9.5 source.

I could see the similar problems at:
  src/backend/replication/repl_gram.c
  src/interfaces/ecpg/preproc/pgc.c
  src/bin/pgbench/exprparse.c
  src/bin/pgbench/exprscan.c
  src/pl/plpgsql/src/pl_gram.c

(*) At least, these files raised a build error.

Probably, we have to add explicit cleanup of these auto-generated files
in the Makefiles.

Thanks,


gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -O0 -g -I. -I. 
-I../../../src/include -D_GNU_SOURCE   -c -o repl_gram.o repl_gram.c
In file included from repl_gram.y:320:0:
repl_scanner.l: In function 'replication_yylex':
repl_scanner.l:98:10: error: 'K_RESERVE_WAL' undeclared (first use in this 
function)
 RESERVE_WAL   { return K_RESERVE_WAL; }
  ^
repl_scanner.l:98:10: note: each undeclared identifier is reported only once 
for each function it appears in
make[3]: *** [repl_gram.o] Error 1
make[3]: Leaving directory `/home/kaigai/repo/pgsql/src/backend/replication'
make[2]: *** [replication-recursive] Error 2
make[2]: Leaving directory `/home/kaigai/repo/pgsql/src/backend'
make[1]: *** [all-backend-recurse] Error 2
make[1]: Leaving directory `/home/kaigai/repo/pgsql/src'
make: *** [all-src-recurse] Error 2


--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 






Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Michael Paquier
On Wed, Jun 29, 2016 at 12:23 PM, Alvaro Herrera
 wrote:
> Michael Paquier wrote:
>> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
>>  wrote:
>
>> > I have already edited the patch following some of these ideas.  Will
>> > post a new version later.
>>
>> Cool, thanks.
>
> Here it is.  I found it was annoying to maintain the function return
> tupdesc in two places (pg_proc.h and the function code itself), so I
> changed that too.

Overall this looks fine to me. Good catch handling NULL results of
walrcv_get_conninfo.

+   appendPQExpBuffer(&buf, "%s=%s ",
+ conn_opt->keyword,
+ obfuscate ? "" : conn_opt->val)
This would add an extra space at the end of the string unconditionally.
What about checking whether buf->len == 0, and using "%s=%s" in that case
and " %s=%s" otherwise?

Do we want to do something for back-branches regarding the presence of
the connection string in shared memory? The only invasive point is the
addition of the interface routine to get back the obfuscated
connection string from libpqwalreceiver. That's a private interface in
the backend, but perhaps it would be a problem to change that in a
minor release?
-- 
Michael




Re: [HACKERS] How to kill a Background worker and Its metadata

2016-06-28 Thread Craig Ringer
On 28 June 2016 at 08:28, Akash Agrawal  wrote:

> I've handled the SIGTERM signal. pg_terminate_backend sends a SIGTERM
> signal to the backend process identified by process ID, and after this
> call I can see in my logs that the background worker gets terminated.
>
> Yet, I am only able to register the first 8 background workers. I am
> using select worker_spi1_launch(1) to launch one each time. This is why
> I guess there is some metadata maintained which has to be deleted.
>

(Please reply below other posts, not above)

The bgworker API currently offers no way to enumerate bgworkers or
unregister them from the outside. The only way to unregister a dynamic
bgworker is to:

* proc_exit(0) from within the worker; or

* register it with BGW_NO_RESTART so it doesn't auto-restart in the
first place.

This is a deficiency in the bgworker API, but there are workarounds in
place and other things are more important for now. Just make sure your
workers proc_exit(0) on SIGTERM, or don't register them as auto-restarting.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist

2016-06-28 Thread Rushabh Lathia
Thanks, Tom.

I tested with the latest commit and the tests are running fine.

On Tue, Jun 28, 2016 at 8:14 PM, Tom Lane  wrote:

> Rushabh Lathia  writes:
> > SELECT setval('s', max(100)) from tab;
> > ERROR:  ORDER/GROUP BY expression not found in targetlist
>
> Fixed, thanks for the report!
>
> regards, tom lane
>



-- 
Rushabh Lathia


Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Alvaro Herrera
Michael Paquier wrote:
> On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
>  wrote:

> > I have already edited the patch following some of these ideas.  Will
> > post a new version later.
> 
> Cool, thanks.

Here it is.  I found it was annoying to maintain the function return
tupdesc in two places (pg_proc.h and the function code itself), so I
changed that too.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index bd7bb77..a8b8bb0 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -1302,6 +1302,14 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
  text
  Replication slot name used by this WAL receiver
 
+
+ conn_info
+ text
+ 
+  Connection string used by this WAL receiver,
+  with security-sensitive fields obfuscated.
+ 
+


   
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 272c02f..f52de3a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -681,7 +681,8 @@ CREATE VIEW pg_stat_wal_receiver AS
 s.last_msg_receipt_time,
 s.latest_end_lsn,
 s.latest_end_time,
-s.slot_name
+s.slot_name,
+s.conn_info
 FROM pg_stat_get_wal_receiver() s
 WHERE s.pid IS NOT NULL;
 
diff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
index b61e39d..2a10c56 100644
--- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
+++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
@@ -20,6 +20,7 @@
 #include 
 
 #include "libpq-fe.h"
+#include "pqexpbuffer.h"
 #include "access/xlog.h"
 #include "miscadmin.h"
 #include "replication/walreceiver.h"
@@ -47,6 +48,7 @@ static char *recvBuf = NULL;
 
 /* Prototypes for interface functions */
 static void libpqrcv_connect(char *conninfo);
+static char *libpqrcv_get_conninfo(void);
 static void libpqrcv_identify_system(TimeLineID *primary_tli);
 static void libpqrcv_readtimelinehistoryfile(TimeLineID tli, char **filename, char **content, int *len);
 static bool libpqrcv_startstreaming(TimeLineID tli, XLogRecPtr startpoint,
@@ -74,6 +76,7 @@ _PG_init(void)
 		walrcv_disconnect != NULL)
 		elog(ERROR, "libpqwalreceiver already loaded");
 	walrcv_connect = libpqrcv_connect;
+	walrcv_get_conninfo = libpqrcv_get_conninfo;
 	walrcv_identify_system = libpqrcv_identify_system;
 	walrcv_readtimelinehistoryfile = libpqrcv_readtimelinehistoryfile;
 	walrcv_startstreaming = libpqrcv_startstreaming;
@@ -118,6 +121,54 @@ libpqrcv_connect(char *conninfo)
 }
 
 /*
+ * Return a user-displayable conninfo string.  Any security-sensitive fields
+ * are obfuscated.
+ */
+static char *
+libpqrcv_get_conninfo(void)
+{
+	PQconninfoOption *conn_opts;
+	PQconninfoOption *conn_opt;
+	PQExpBufferData	buf;
+	char	   *retval;
+
+	Assert(streamConn != NULL);
+
+	initPQExpBuffer(&buf);
+	conn_opts = PQconninfo(streamConn);
+
+	if (conn_opts == NULL)
+		ereport(ERROR,
+(errmsg("could not parse connection string: %s",
+						_("out of memory"))));
+
+	/* build a clean connection string from pieces */
+	for (conn_opt = conn_opts; conn_opt->keyword != NULL; conn_opt++)
+	{
+		bool	obfuscate;
+
+		/* Skip debug and empty options */
+		if (strchr(conn_opt->dispchar, 'D') ||
+			conn_opt->val == NULL ||
+			conn_opt->val[0] == '\0')
+			continue;
+
+		/* Obfuscate security-sensitive options */
+		obfuscate = strchr(conn_opt->dispchar, '*') != NULL;
+
+		appendPQExpBuffer(&buf, "%s=%s ",
+		  conn_opt->keyword,
+		  obfuscate ? "" : conn_opt->val);
+	}
+
+	PQconninfoFree(conn_opts);
+
+	retval = PQExpBufferDataBroken(buf) ? NULL : pstrdup(buf.data);
+	termPQExpBuffer(&buf);
+	return retval;
+}
+
+/*
  * Check that primary's system identifier matches ours, and fetch the current
  * timeline ID of the primary.
  */
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index ce311cb..04569c2 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -75,6 +75,7 @@ bool		hot_standby_feedback;
 
 /* libpqreceiver hooks to these when loaded */
 walrcv_connect_type walrcv_connect = NULL;
+walrcv_get_conninfo_type walrcv_get_conninfo = NULL;
 walrcv_identify_system_type walrcv_identify_system = NULL;
 walrcv_startstreaming_type walrcv_startstreaming = NULL;
 walrcv_endstreaming_type walrcv_endstreaming = NULL;
@@ -192,6 +193,7 @@ void
 WalReceiverMain(void)
 {
 	char		conninfo[MAXCONNINFO];
+	char	   *tmp_conninfo;
 	char		slotname[NAMEDATALEN];
 	XLogRecPtr	startpoint;
 	TimeLineID	startpointTLI;
@@ -282,7 +284,9 @@ WalReceiverMain(void)
 
 	/* Load the libpq-specific functions */
 	load_file("libpqwalreceiver

Re: [HACKERS] Gin index on array of uuid

2016-06-28 Thread M Enrique
What's a good source code entry point to review how this is working for
anyarray currently? I am new to the postgres code. I spent some time
looking for it but all I found is the following (which I have not been able
to decipher yet).

[image: pasted1]

Thank you,
Enrique



On Tue, Jun 21, 2016 at 12:20 PM Tom Lane  wrote:

> Enrique MailingLists  writes:
> > Currently creating an index on an array of UUID involves defining an
> > operator class. I was wondering if this would be a valid request to add
> as
> > part of the uuid-ossp extension? This seems like a reasonable operator to
> > support as a default for UUIDs.
>
> This makes me itch, really, because if we do this then we should logically
> do it for every other add-on type.
>
> It seems like we are not that far from being able to have just one GIN
> opclass on "anyarray".  The only parts of this declaration that are
> UUID-specific are the comparator function and the storage type, both of
> which could be gotten without that much trouble, one would think.
>
> > Any downsides to adding this as a default?
>
> Well, it'd likely break things at dump/reload time for people who had
> already created a competing "default for _uuid" opclass manually.  I'm not
> entirely sure, but possibly replacing the core opclasses with a single one
> that is "default for anyarray" could avoid such failures.  We'd have to
> figure out ambiguity resolution rules.
>
> regards, tom lane
>


[HACKERS] dumping database privileges broken in 9.6

2016-06-28 Thread Peter Eisentraut

Do this:

CREATE DATABASE test1;
REVOKE CONNECT ON DATABASE test1 FROM PUBLIC;

Run pg_dumpall.

In 9.5, this produces

CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
REVOKE ALL ON DATABASE test1 FROM PUBLIC;
REVOKE ALL ON DATABASE test1 FROM peter;
GRANT ALL ON DATABASE test1 TO peter;
GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;

In 9.6, this produces only

CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = peter;
GRANT TEMPORARY ON DATABASE test1 TO PUBLIC;
GRANT ALL ON DATABASE test1 TO peter;

Note that the REVOKE statements are missing.  This does not correctly 
recreate the original state.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] parallel workers and client encoding

2016-06-28 Thread Peter Eisentraut

On 6/27/16 5:37 PM, Robert Haas wrote:

Please find attached a patch for a proposed alternative approach.
This does the following:

1. When the client_encoding GUC is changed in the worker,
SetClientEncoding() is not called.


I think this could be a problem, because then the client encoding in the 
background worker process is inherited from the postmaster, which could 
in theory be anything.  I think you need to set it at least once to the 
correct value.



Thus, GetClientEncoding() in the
worker always returns the database encoding, regardless of how the
client encoding is set.  This is better than your approach of calling
SetClientEncoding() during worker startup, I think, because the worker
could call a parallel-safe function with SET clause, and one of the
GUCs temporarily set could be client_encoding.  That would be stupid,
but somebody could do it.


I think if we're worried about this, then this should be an error, but 
that's a minor concern.


I realize that we don't have a good mechanism in the GUC code to 
distinguish these two situations.


Then again, this shouldn't be so much different in concept from the 
restoring of GUC variables in the EXEC_BACKEND case.  There is special 
code in set_config_option() to bypass some of the rules in that case. 
RestoreGUCState() should be able to get the same sort of pass.


Also, set_config_option() knows something about what settings are 
allowed in a parallel worker, so I wonder if setting client_encoding 
would even work in spite of that?



2. A new function pq_getmsgrawstring() is added.  This is like
pq_getmsgstring() but it does no encoding conversion.
pq_parse_errornotice() is changed to use pq_getmsgrawstring() instead
of pq_getmsgstring().  Because of (1), when the leader receives an
ErrorResponse or NoticeResponse from the worker, it will not have been
subject to encoding conversion; because of this item, the leader will
not try to convert it either when initially parsing it.  So the extra
encoding round-trip is avoided.


I like that.


3. The changes for NotifyResponse which you proposed are included
here, but with the modification that pq_getmsgrawstring() is used.


and that.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Michael Paquier
On Wed, Jun 29, 2016 at 6:42 AM, Alvaro Herrera
 wrote:
> Michael Paquier wrote:
>
>> I have been thinking more about that, and came up with the following
>> idea... We do not want to link libpq directly to the server, so let's
>> add a new routine to libpqwalreceiver that builds an obfuscated
>> connection string and let's have walreceiver.c save it in shared
>> memory. Then pg_stat_wal_receiver just makes use of this string. This
>> results in a rather light patch, proposed as attached. Connection URIs
>> get as well translated as connection strings via PQconninfo(), then
>> the new interface routine of libpqwalreceiver looks at dispchar to
>> determine if it should dump a field or not and obfuscates it with more
>> or less ''.
>
> Seems a reasonable idea to me, but some details seem a bit strange:
>
> * Why obfuscate debug options instead of skipping them?

Those are hidden in postgres_fdw, and 'D' marks options only used for
debugging purposes or options that should not be shown. That's why I
did so.

> * why not use PQExpBuffer?

Yes, that would be better.

> * Why have the return param be an output argument instead of a plain
>   return value? i.e. static char *libpqrcv_get_conninfo(void).

Oh, yes. That's something I forgot to change. We cannot be completely
sure that the connstr will fit in MAXCONNINFO, so it makes little
sense to store the result in a pre-allocated string.

> On the security aspect of "conninfo" itself, which persists in shared
> memory: do we absolutely need to keep that data? In my reading of the
> code, it's only used once to establish the initial connection to the
> walsender, and then never afterwards.  We could remove the disclosure by
> the simple expedient of overwriting that struct member with the
> obfuscated one, right after establishing that connection.  Then we don't
> need an additional struct member safe_conninfo.  Is there a reason why
> this wouldn't work?

[Wait a minute...]
I don't see why that would not work. By reading the code we do not
reattempt a connection, and leave WalReceiverMain if there is a
disconnection.

> I have already edited the patch following some of these ideas.  Will
> post a new version later.

Cool, thanks.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Forthcoming SQL standards about JSON and Multi-Dimensional Arrays (FYI)

2016-06-28 Thread Stefan Keller
Hi,

FYI: I'd just like to point you to the following two forthcoming standard
parts from the "ISO/IEC JTC 1/SC 32" committee: one on JSON, and one on
"Multi-Dimensional Arrays" (SQL/MDA).

They define some things differently from what is already in PG. See also
Peter Baumann's slides [1] and e.g. [2]

:Stefan

[1] 
https://www.unibw.de/inf4/professors/geoinformatics/agile-2016-workshop-gis-with-nosql
[2] http://jtc1sc32.org/doc/N2501-2550/32N2528-WG3-Tutorial-Opening-Plenary.pdf


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Alvaro Herrera
Michael Paquier wrote:

> I have been thinking more about that, and came up with the following
> idea... We do not want to link libpq directly to the server, so let's
> add a new routine to libpqwalreceiver that builds an obfuscated
> connection string and let's have walreceiver.c save it in shared
> memory. Then pg_stat_wal_receiver just makes use of this string. This
> results in a rather light patch, proposed as attached. Connection URIs
> get as well translated as connection strings via PQconninfo(), then
> the new interface routine of libpqwalreceiver looks at dispchar to
> determine if it should dump a field or not and obfuscates it with more
> or less ''.

Seems a reasonable idea to me, but some details seem a bit strange:

* Why obfuscate debug options instead of skipping them?
* why not use PQExpBuffer?
* Why have the return param be an output argument instead of a plain
  return value? i.e. static char *libpqrcv_get_conninfo(void).

On the security aspect of "conninfo" itself, which persists in shared
memory: do we absolutely need to keep that data?  In my reading of the
code, it's only used once to establish the initial connection to the
walsender, and then never afterwards.  We could remove the disclosure by
the simple expedient of overwriting that struct member with the
obfuscated one, right after establishing that connection.  Then we don't
need an additional struct member safe_conninfo.  Is there a reason why
this wouldn't work?

I have already edited the patch following some of these ideas.  Will
post a new version later.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Should phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue') be true ?

2016-06-28 Thread Oleg Bartunov
On Tue, Jun 28, 2016 at 7:00 PM, Oleg Bartunov  wrote:
> On Tue, Jun 28, 2016 at 9:32 AM, Noah Misch  wrote:
>> On Sun, Jun 26, 2016 at 10:22:26PM -0400, Noah Misch wrote:
>>> On Wed, Jun 15, 2016 at 11:08:54AM -0400, Noah Misch wrote:
>>> > On Wed, Jun 15, 2016 at 03:02:15PM +0300, Teodor Sigaev wrote:
>>> > > On Wed, Jun 15, 2016 at 02:54:33AM -0400, Noah Misch wrote:
>>> > > > On Mon, Jun 13, 2016 at 10:44:06PM -0400, Noah Misch wrote:
>>> > > > > On Fri, Jun 10, 2016 at 03:10:40AM -0400, Noah Misch wrote:
>>> > > > > > [Action required within 72 hours.  This is a generic 
>>> > > > > > notification.]
>>> > > > > >
>>> > > > > > The above-described topic is currently a PostgreSQL 9.6 open 
>>> > > > > > item.  Teodor,
>>> > > > > > since you committed the patch believed to have created it, you 
>>> > > > > > own this open
>>> > > > > > item.  If some other commit is more relevant or if this does not 
>>> > > > > > belong as a
>>> > > > > > 9.6 open item, please let us know.  Otherwise, please observe the 
>>> > > > > > policy on
>>> > > > > > open item ownership[1] and send a status update within 72 hours 
>>> > > > > > of this
>>> > > > > > message.  Include a date for your subsequent status update.  
>>> > > > > > Testers may
>>> > > > > > discover new open items at any time, and I want to plan to get 
>>> > > > > > them all fixed
>>> > > > > > well in advance of shipping 9.6rc1.  Consequently, I will 
>>> > > > > > appreciate your
>>> > > > > > efforts toward speedy resolution.  Thanks.
>>> > > > > >
>>> > > > > > [1] 
>>> > > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>>> > > > >
>>> > > > > This PostgreSQL 9.6 open item is past due for your status update.  
>>> > > > > Kindly send
>>> > > > > a status update within 24 hours, and include a date for your 
>>> > > > > subsequent status
>>> > > > > update.  Refer to the policy on open item ownership:
>>> > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>>> > > >
>>> > > >IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long 
>>> > > >past due
>>> > > >for your status update.  Please reacquaint yourself with the policy on 
>>> > > >open
>>> > > >item ownership[1] and then reply immediately.  If I do not hear from 
>>> > > >you by
>>> > > >2016-06-16 07:00 UTC, I will transfer this item to release management 
>>> > > >team
>>> > > >ownership without further notice.
>>> > > >
>>> > > >[1] 
>>> > > >http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>>> > >
>>> > > I'm working on it right now.
>>> >
>>> > That is good news, but it is not a valid status update.  In particular, it
>>> > does not specify a date for your next update.
>>>
>>> You still have not delivered the status update due thirteen days ago.  If I 
>>> do
>>> not hear from you a fully-conforming status update by 2016-06-28 03:00 UTC, 
>>> or
>>> if this item ever again becomes overdue for a status update, I will transfer
>>> the item to release management team ownership.
>>
>> This PostgreSQL 9.6 open item now needs a permanent owner.  Would any other
>> committer like to take ownership?  I see Teodor committed some things 
>> relevant
>> to this item just today, so the task may be as simple as verifying that those
>> commits resolve the item.  If this role interests you, please read this 
>> thread
>> and the policy linked above, then send an initial status update bearing a 
>> date
>> for your subsequent status update.  If the item does not have a permanent
>> owner by 2016-07-01 07:00 UTC, I will resolve the item by reverting all 
>> phrase
>> search commits.
>
> Teodor pushed three patches, two of them fix the issues discussed in
> this topic (working with duplicates and disable fallback to & for
> stripped tsvector)
>  and the one about precedence of phrase search tsquery operator, which
> was discussed in separate thread
> (https://www.postgresql.org/message-id/flat/576AB63C.7090504%40sigaev.ru#576ab63c.7090...@sigaev.ru)
>
> They all look good, but need a small documentation patch. I will provide
> it later.

I attached a little documentation patch to textsearch.sgml.

>
>
>
>>
>> Thanks,
>> nm
--- textsearch.sgml	2016-06-29 00:21:53.0 +0300
+++ /Users/postgres/textsearch.sgml.new	2016-06-29 00:06:36.0 +0300
@@ -358,14 +358,18 @@
 SELECT phraseto_tsquery('cats ate rats');
phraseto_tsquery
 ---
- ( 'cat' <-> 'ate' ) <-> 'rat'
+ 'cat' <-> 'ate' <-> 'rat'
 
 SELECT phraseto_tsquery('the cats ate the rats');
phraseto_tsquery
 ---
- ( 'cat' <-> 'ate' ) <2> 'rat'
+ 'cat' <-> 'ate' <2> 'rat'
 

+   
+ The precedence of tsquery operators is as follows: |, &, 
+ <->, !.
+   
   
 
   
@@ -923,7 +927,7 @@
 SELECT phraseto_tsquery('english', 'The Fat & Rats:C');
   phraseto_tsquery
 -
- ( 'fat' <-> 'rat' ) <-> 'c'
+ 'fat' <-> 'rat' <-> 'c'

Re: [HACKERS] Rename max_parallel_degree?

2016-06-28 Thread Julien Rouhaud
On 28/06/2016 04:44, Amit Kapila wrote:
> On Mon, Jun 27, 2016 at 10:35 PM, Julien Rouhaud
>>
>> There's already a pg_memory_barrier() call in
>> BackgroundWorkerStateChange(), to avoid reordering the notify_pid load.
>> Couldn't we use it to also make sure the parallel_terminate_count
>> increment happens before the slot->in_use store?
>>
> 
> Yes, that is enough, as memory barrier ensures that both loads and
> stores are completed before any loads and stores that are after
> barrier.
> 
>>  I guess that a write
>> barrier will be needed in ForgetBackgroundWorker().
>>
> 
> Yes.
> 
 2.
 + if (parallel && (BackgroundWorkerData->parallel_register_count -
 +
 BackgroundWorkerData->parallel_terminate_count) >=
 +
 max_parallel_workers)
 + {
 + LWLockRelease(BackgroundWorkerLock);
 + return
 false;
 + }
 +

 I think we need a read barrier here, so that this check doesn't get
 reordered with the for loop below it.
>>
>> You mean between the end of this block and the for loop?
>>
> 
> Yes.
> 
  Also, see if you find the code
 more readable by moving the after && part of check to next line.
>>
>> I think I'll just pgindent the file.
>>
> 
> make sense.
> 
> 

Thanks a lot for the help!

PFA v6 which should fix all the issues mentioned.  Also, on second
thought I didn't add the extra hint about max_worker_processes in the
max_parallel_workers paragraph: since that line would duplicate the
preceding paragraph, it seemed better to leave the text as is.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a82bf06..6812b0d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2009,7 +2009,8 @@ include_dir 'conf.d'
  Sets the maximum number of workers that can be started by a single
  Gather node.  Parallel workers are taken from the
  pool of processes established by
- .  Note that the requested
+ , limited by
+ .  Note that the requested
  number of workers may not actually be available at runtime.  If this
  occurs, the plan will run with fewer workers than expected, which may
  be inefficient.  The default value is 2.  Setting this value to 0
@@ -2018,6 +2019,21 @@ include_dir 'conf.d'

   
 
+  
+   max_parallel_workers (integer)
+   
+max_parallel_workers configuration parameter
+   
+   
+   
+
+ Sets the maximum number of workers that the system can support for
+ parallel queries.  The default value is 4.  Setting this value to 0
+ disables parallel query execution.
+
+   
+  
+
   
backend_flush_after (integer)

diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 088700e..ea7680b 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -452,7 +452,8 @@ LaunchParallelWorkers(ParallelContext *pcxt)
snprintf(worker.bgw_name, BGW_MAXLEN, "parallel worker for PID %d",
 MyProcPid);
worker.bgw_flags =
-   BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION;
+   BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION
+   | BGWORKER_IS_PARALLEL_WORKER;
worker.bgw_start_time = BgWorkerStart_ConsistentState;
worker.bgw_restart_time = BGW_NEVER_RESTART;
worker.bgw_main = ParallelWorkerMain;
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 2e4b670..e1da5f9 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -724,9 +724,11 @@ create_plain_partial_paths(PlannerInfo *root, RelOptInfo *rel)
}
 
/*
-* In no case use more than max_parallel_workers_per_gather workers.
+* In no case use more than max_parallel_workers or
+* max_parallel_workers_per_gather workers.
 */
-   parallel_workers = Min(parallel_workers, max_parallel_workers_per_gather);
+   parallel_workers = Min(max_parallel_workers, Min(parallel_workers,
+   max_parallel_workers_per_gather));
 
    /* If any limit was set to zero, the user doesn't want a parallel scan. */
if (parallel_workers <= 0)
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 8c1dccc..6cb2f4e 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -113,6 +113,7 @@ int effective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;
 
 Cost   disable_cost = 1.0e10;
 
+int    max_parallel_workers = 4;
 int    max_parallel_workers_per_gather = 2;
 
 bool   enable_seqscan = true;
diff --git a/src/backend/optimizer/plan/planner.c 

Re: [HACKERS] IPv6 link-local addresses and inet data type

2016-06-28 Thread Markus Wanner
Haribabu,

On 07.06.2016 07:19, Haribabu Kommi wrote:
>> I have not looked at the spec, but I wouldn't be surprised if there
>> were an upper limit on the length of valid scope names.

Yeah, I didn't find any upper limit, either.

> I am not able to find any link that suggests the maximum length of the scope 
> id.
> From [1], I came to know that, how the scope id is formed in different 
> operating
> systems.
> 
> windows - fe80::3%1
> unix systems - fe80::3%eth0

eth0 may well be interface number 1 on some machine, therefore being
equivalent. However, as discussed before, Postgres cannot know, so it
shouldn't bother.

> I added another character array of 256 members into inet_struct as the
> last member to store the zone id.

I haven't looked at the patch in detail, but zeroing or memcpy'ing those
256 bytes seems like overkill to me. I'd recommend to limit this to only
allocate and move around as many bytes as needed for the scope id.

> Currently all the printable characters are treated as zone ids. I will
> restrict this to only letters and numbers.

I fear alphanumeric only is too restrictive. RFC 4007 only specifies
that the zone id "must not conflict with the delimiter character" and
leaves everything beyond that to the implementation (which seems too
loose, printable characters sounds about right to me...).

> I will add the zone support for everything and send the patch.

What's currently missing?

> How about the following case, Do we treat them as same or different?
> 
> select 'fe80::%eth1'::inet = 'fe80::%ETH1'::inet;

Let's be consistent in not interpreting the scope id in any way, meaning
those would be different. (After all, interfaces names seem to be case
sensitive - on my variant of Linux at the very least - i.e. ETH1 cannot
be found, while eth1 can be.)

> fe80::%2/64 is only treated as the valid address but not other way as
> fe80::/64%2.
> Do we need to throw an error in this case or just ignore.

I didn't find any evidence for the second case being invalid; nor for it
being valid.

Note, however, that RFC 4007 only gives recommendations for textual
representations (with a "should" preference for the former).

It explicitly "does not specify how the format for non-global addresses
should be combined with the preferred format for literal IPv6 addresses".

Also note that RFC 2732 (Format for Literal IPv6 Addresses in URL's)
doesn't have the '%' sign in the set of reserved characters (not even in
the "unwise" one).

I'm starting to question if it's really wise to add the scope id to the
INET6 type...

> [2] - 
> http://stackoverflow.com/questions/24932172/what-length-can-a-network-interface-name-have

Note that the scope id doesn't necessarily have to be a network
interface name. Concluding there's at max 256 bytes, just because that's
the network interface name's max, doesn't seem correct. However, I agree
that's a reasonable limit for a scope id of the inet6 data type.

Kind Regards

Markus Wanner



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] seg fault on dsm_create call

2016-06-28 Thread Robert Haas
On Tue, Jun 28, 2016 at 12:45 PM, Max Fomichev  wrote:
> On 28/06/16 19:24, Robert Haas wrote:
> Thanks.
> It works now with CurrentResourceOwner = ResourceOwnerCreate(NULL, "name of
> my extension")
>
> I am a little bit confused about test/modules/test_shm_mq, where
> CurrentResourceOwner is set up before dsm_attach, not dsm_create -

You need a resource owner for either dsm_attach or dsm_create.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] An unkillable connection caused replication delay on my replica

2016-06-28 Thread Alvaro Herrera
Shawn wrote:

> strace of long-running query pid 6819 loops like this:
> 
>  sendto(10, "" NULL, 0) = ? ERESTARTSYS (To be
> restarted if SA_RESTART is set)
>  --- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=8719,
> si_uid=3001} ---
>  rt_sigreturn()   
> 
> Where pid 6819 is a long (running for 2 days) running query.  In
> pg_stat_activity, it was still listed as "active".  The query had a horrible
> execution plan and it was being executed via a python script.  I couldn't
> pg_terminate_backend the connection.  I didn't try to "kill -9" it due to
> all the warnings about that and I felt I had something special here.  I
> attached the debugger.

Did you happen to grab a stack trace of PID 6819?


-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Questionable description in datatype.sgml

2016-06-28 Thread Bruce Momjian
On Fri, Jun 24, 2016 at 07:27:24AM +0900, Tatsuo Ishii wrote:
> > On Sat, Jun 18, 2016 at 11:58:58AM -0400, Tom Lane wrote:
> >> Tatsuo Ishii  writes:
> >> > In "8.13.2. Encoding Handling"
> >> >
> >> > When using binary mode to pass query parameters to the server
> >> > and query results back to the client, no character set conversion
> >> > is performed, so the situation is different.  In this case, an
> >> > encoding declaration in the XML data will be observed, and if it
> >> > is absent, the data will be assumed to be in UTF-8 (as required by
> >> > the XML standard; note that PostgreSQL does not support UTF-16).
> >> > On output, data will have an encoding declaration
> >> > specifying the client encoding, unless the client encoding is
> >> > UTF-8, in which case it will be omitted.
> >> >
> >> 
> >> > In the first sentence shouldn't "no character set conversion" be "no
> >> > encoding conversion"? PostgreSQL is doing client/server encoding
> >> > conversion, rather than character set conversion.
> >> 
> >> I think the text is treating "character set conversion" as meaning
> >> the same thing as "encoding conversion"; certainly I've never seen
> >> any place in our docs that draws a distinction between those terms.
> >> If you think there is a difference, maybe we need to define those
> >> terms somewhere.
> > 
> > Uh, I think Unicode is a character set, and UTF8 is an encoding.  I
> > think Tatsuo is right here.
> 
> Yes, a character set is different from an encoding. I thought it was
> common understanding among people.

Fixed with the attached applied patch.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
new file mode 100644
index 11e246f..9643746
*** a/doc/src/sgml/datatype.sgml
--- b/doc/src/sgml/datatype.sgml
*** SET xmloption TO { DOCUMENT | CONTENT };
*** 4219,4225 
  
 
  When using binary mode to pass query parameters to the server
! and query results back to the client, no character set conversion
  is performed, so the situation is different.  In this case, an
  encoding declaration in the XML data will be observed, and if it
  is absent, the data will be assumed to be in UTF-8 (as required by
--- 4219,4225 
  
 
  When using binary mode to pass query parameters to the server
! and query results back to the client, no encoding conversion
  is performed, so the situation is different.  In this case, an
  encoding declaration in the XML data will be observed, and if it
  is absent, the data will be assumed to be in UTF-8 (as required by

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reference to UT1

2016-06-28 Thread Bruce Momjian
On Wed, Jun 22, 2016 at 04:51:46PM -0400, Bruce Momjian wrote:
> On Mon, Jun  6, 2016 at 03:53:41PM +1200, Thomas Munro wrote:
> > Hi
> > 
> > The manual[1] says "Technically, PostgreSQL uses UT1 rather than UTC
> > because leap seconds are not handled."  I'm certainly no expert on
> > this stuff but it seems to me that we are using POSIX time[2] or Unix
> > time, not UT1.
> 
> Based on this report I have removed mentions of UT1 from our docs in the
> attached patch, which I would like to apply to 9.6.

Patch applied.  Thanks for the report.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Alvaro Herrera
Stephen Frost wrote:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > Michael Paquier  writes:
> > > On Tue, Jun 21, 2016 at 11:29 AM, Tom Lane  wrote:
> > >> What I would want to know is whether this specific change is actually a
> > >> good idea.  In particular, I'm concerned about the possible security
> > >> implications of exposing primary_conninfo --- might it not contain a
> > >> password, for example?
> > 
> > > Yes it could, as a connection string, but we make the information of
> > > this view only visible to superusers. For the others, that's just
> > > NULL.
> > 
> > Well, that's okay for now, but I'm curious to hear Stephen Frost's
> > opinion on this.  He's been on the warpath to decrease our dependence
> > on superuser-ness for protection purposes.  Seems to me that having
> > one column in this view that is a lot more security-sensitive than
> > the others is likely to be an issue someday.
> 
> Ugh.  I would certainly rather not have yet another special, hard-coded,
> bit of logic that magically makes things available to a superuser when
> it's not available to regular users.
> 
> What that results in is the need to have a new default role to control
> access to that column for the non-superuser case.

FWIW we already have a superuser() check for the walsender stats view
since 9.1 -- see commit f88a6381.  To appease this we could create our
second predefined role that controls access to both
pg_stat_get_wal_senders and pg_stat_get_wal_receiver.  I don't think
my commit in 9.6 creates this problem, only exacerbates a pre-existing
one, but I also think it's fair to fix both cases for 9.6.

Not sure what to name the new predefined role though -- pg_wal_stats_reader?
(I don't suppose we want to create it to cover *any* future privileged
stats reads rather than just those WAL related, do we?)

> As for the password showing up, sorry, but we need a solution for *that*
> regardless of the rest- the password shouldn't be exposed to anyone, nor
> should it be sent and kept in the backend's memory for longer than
> necessary.  I'm not suggesting we've got that figured out already, but
> that's where we should be trying to go.

I suppose Michael's proposed patch to copy the conninfo obscuring the
password should be enough for this, but I'll go have a closer look.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] seg fault on dsm_create call

2016-06-28 Thread Max Fomichev

On 28/06/16 19:24, Robert Haas wrote:

On Tue, Jun 28, 2016 at 10:11 AM, Max Fomichev  wrote:

Hello,
sorry for my repost from psql-novice, probably it was not a right place for
my question.

I'm trying to understand how to work with dynamic shared memory, message
queues and workers.
The problem is I can not initialize any dsm segment -

 void _PG_init() {
 ...
 dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
 ...
 BackgroundWorker worker;
 sprintf(worker.bgw_name, "mystem wrapper process");
 worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
 worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
 worker.bgw_restart_time = BGW_NEVER_RESTART;
 worker.bgw_main = mainProc;
 worker.bgw_notify_pid = 0;
 RegisterBackgroundWorker(&worker);
 }

Also I was trying to move dsm_create call to a worker, but with the same
result -

 static void mainProc(Datum) {
 ...
 dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
 ...
 pqsignal(SIGTERM, mystemSigterm);
 BackgroundWorkerUnblockSignals();
 ...

What could be a reason and what am I doing wrong?

I think there are two problems.

1. You need to set up a ResourceOwner before you can attach to a DSM
segment.  Something like: CurrentResourceOwner =
ResourceOwnerCreate(NULL, "name of my extension").

2. You can't do this from inside a _PG_init() block.  That will run in
every process that loads this, which is probably not what you want,
and it will run in the postmaster also, which will not work: the
postmaster cannot use DSM.

Actually, I'd like to change #1 at some point, so that if
CurrentResourceOwner = NULL and you create or attach to a DSM, you
just get a backend-lifespan mapping.  The current setup is annoying
rather than helpful.  But currently that's how it is.
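Putting the two points above together, a hedged sketch of how the fragment from the original mail would need to look — the dsm_create() call moved out of _PG_init() into the worker entry point, with a resource owner set up first. The extension and function names are the poster's; the rest is illustrative and only compiles against the server tree, not standalone:

```c
#include "postgres.h"
#include "storage/dsm.h"
#include "utils/resowner.h"

/* Worker entry point registered via RegisterBackgroundWorker();
 * runs in the worker process, never in the postmaster. */
static void
mainProc(Datum main_arg)
{
    dsm_segment *seg;

    /* dsm_create()/dsm_attach() both require a resource owner. */
    CurrentResourceOwner = ResourceOwnerCreate(NULL, "pg_mystem");

    seg = dsm_create(32768, 0);     /* safe here, not in _PG_init() */

    /* ... set up message queues, then dsm_detach(seg) before exiting ... */
}
```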


Thanks.
It works now with CurrentResourceOwner = ResourceOwnerCreate(NULL, "name 
of my extension")


I am a little bit confused about test/modules/test_shm_mq, where 
CurrentResourceOwner is set up before dsm_attach, not dsm_create -


/*
 * Connect to the dynamic shared memory segment.
 *
 * The backend that registered this worker passed us the ID of a shared
 * memory segment to which we must attach for further instructions.  In
 * order to attach to dynamic shared memory, we need a resource owner.
 * Once we've mapped the segment in our address space, attach to the table
 * of contents so we can locate the various data structures we'll need to
 * find within the segment.
 */
CurrentResourceOwner = ResourceOwnerCreate(NULL, "test_shm_mq worker");
seg = dsm_attach(DatumGetInt32(main_arg));

--
Best regards,
Max Fomichev



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver

2016-06-28 Thread Alvaro Herrera
Noah Misch wrote:
> On Sun, Jun 19, 2016 at 05:56:12PM +0900, Michael Paquier wrote:
> > The new pg_stat_wal_receiver does not include primary_conninfo.
> > Looking at that now, it looks almost stupid not to include it...
> > Adding it now would require a catalog bump, so I am not sure if this
> > is acceptable at this stage for 9.6...
> 
> There is no value in avoiding catversion bumps at this time.

I'm looking at this problem now and will report back by Wed 29th EOB.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] seg fault on dsm_create call

2016-06-28 Thread Max Fomichev

Some debug info related to my previous post -

* thread #1: tid = 0x2601e9, 0x000100313e5e postgres`ResourceOwnerEnlargeDSMs + 10, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x130)

  * frame #0: 0x000100313e5e postgres`ResourceOwnerEnlargeDSMs + 10
    frame #1: 0x000100202a5f postgres`dsm_create_descriptor + 22
    frame #2: 0x000100202853 postgres`dsm_create + 43
    frame #3: 0x000101a17717 pg_mystem.so`_PG_init + 39


Hello,
sorry for my repost from psql-novice; it was probably not the right place
for my question.


I'm trying to understand how to work with dynamic shared memory, message 
queues and workers.

The problem is I can not initialize any dsm segment -

 void _PG_init() {
 ...
 dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
 ...
 BackgroundWorker worker;
 sprintf(worker.bgw_name, "mystem wrapper process");
 worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
 worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
 worker.bgw_restart_time = BGW_NEVER_RESTART;
 worker.bgw_main = mainProc;
 worker.bgw_notify_pid = 0;
 RegisterBackgroundWorker(&worker);
 }

Also, I tried moving the dsm_create call to a worker, but with the same
result -

 static void mainProc(Datum) {
 ...
 dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
 ...
 pqsignal(SIGTERM, mystemSigterm);
 BackgroundWorkerUnblockSignals();
 ...

What could be the reason, and what am I doing wrong?

PS
test/modules/test_shm_mq works fine...
dynamic_shared_memory_type = posix
OSX 10.11.5
PostgreSQL 9.5.3

--
Best regards,
Max Fomichev





Re: [HACKERS] seg fault on dsm_create call

2016-06-28 Thread Robert Haas
On Tue, Jun 28, 2016 at 10:11 AM, Max Fomichev  wrote:
> Hello,
> sorry for my repost from psql-novice, probably it was not a right place for
> my question.
>
> I'm trying to understand how to work with dynamic shared memory, message
> queues and workers.
> The problem is I can not initialize any dsm segment -
>
> void _PG_init() {
> ...
> dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
> ...
> BackgroundWorker worker;
> sprintf(worker.bgw_name, "mystem wrapper process");
> worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
> worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
> worker.bgw_restart_time = BGW_NEVER_RESTART;
> worker.bgw_main = mainProc;
> worker.bgw_notify_pid = 0;
> RegisterBackgroundWorker(&worker);
> }
>
> Also I was trying to move dsm_create call to a worker, but with the same
> result -
>
> static void mainProc(Datum) {
> ...
> dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
> ...
> pqsignal(SIGTERM, mystemSigterm);
> BackgroundWorkerUnblockSignals();
> ...
>
> What could be a reason and what am I doing wrong?

I think there are two problems.

1. You need to set up a ResourceOwner before you can attach to a DSM
segment.  Something like: CurrentResourceOwner =
ResourceOwnerCreate(NULL, "name of my extension").

2. You can't do this from inside a _PG_init() block.  That will run in
every process that loads this, which is probably not what you want,
and it will run in the postmaster also, which will not work: the
postmaster cannot use DSM.

Actually, I'd like to change #1 at some point, so that if
CurrentResourceOwner = NULL and you create or attach to a DSM, you
just get a backend-lifespan mapping.  The current setup is annoying
rather than helpful.  But currently that's how it is.
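
Putting the two points together, the flow from the original report might be
restructured roughly like this (an untested sketch against the 9.5 bgworker
API, not a drop-in fix; `mainProc` and `mystemSigterm` come from the original
post, and the `dsm_create()` call moves into the worker, after a resource
owner exists):

```c
/* _PG_init() only registers the worker; no DSM calls here, because
 * _PG_init() also runs in the postmaster, which cannot use DSM. */
void
_PG_init(void)
{
    BackgroundWorker worker;

    snprintf(worker.bgw_name, BGW_MAXLEN, "mystem wrapper process");
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    worker.bgw_main = mainProc;
    worker.bgw_notify_pid = 0;
    RegisterBackgroundWorker(&worker);   /* no dsm_create() here */
}

static void
mainProc(Datum main_arg)
{
    dsm_segment *seg;

    pqsignal(SIGTERM, mystemSigterm);
    BackgroundWorkerUnblockSignals();

    /* DSM create/attach needs a resource owner in this process */
    CurrentResourceOwner = ResourceOwnerCreate(NULL, "mystem worker");
    seg = dsm_create(32768, 0);          /* now runs in the worker */
    /* ... */
}
```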

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Re: Should phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue') be true ?

2016-06-28 Thread Oleg Bartunov
On Tue, Jun 28, 2016 at 9:32 AM, Noah Misch  wrote:
> On Sun, Jun 26, 2016 at 10:22:26PM -0400, Noah Misch wrote:
>> On Wed, Jun 15, 2016 at 11:08:54AM -0400, Noah Misch wrote:
>> > On Wed, Jun 15, 2016 at 03:02:15PM +0300, Teodor Sigaev wrote:
>> > > On Wed, Jun 15, 2016 at 02:54:33AM -0400, Noah Misch wrote:
>> > > > On Mon, Jun 13, 2016 at 10:44:06PM -0400, Noah Misch wrote:
>> > > > > On Fri, Jun 10, 2016 at 03:10:40AM -0400, Noah Misch wrote:
>> > > > > > [Action required within 72 hours.  This is a generic notification.]
>> > > > > >
>> > > > > > The above-described topic is currently a PostgreSQL 9.6 open item. 
>> > > > > >  Teodor,
>> > > > > > since you committed the patch believed to have created it, you own 
>> > > > > > this open
>> > > > > > item.  If some other commit is more relevant or if this does not 
>> > > > > > belong as a
>> > > > > > 9.6 open item, please let us know.  Otherwise, please observe the 
>> > > > > > policy on
>> > > > > > open item ownership[1] and send a status update within 72 hours of 
>> > > > > > this
>> > > > > > message.  Include a date for your subsequent status update.  
>> > > > > > Testers may
>> > > > > > discover new open items at any time, and I want to plan to get 
>> > > > > > them all fixed
>> > > > > > well in advance of shipping 9.6rc1.  Consequently, I will 
>> > > > > > appreciate your
>> > > > > > efforts toward speedy resolution.  Thanks.
>> > > > > >
>> > > > > > [1] 
>> > > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>> > > > >
>> > > > > This PostgreSQL 9.6 open item is past due for your status update.  
>> > > > > Kindly send
>> > > > > a status update within 24 hours, and include a date for your 
>> > > > > subsequent status
>> > > > > update.  Refer to the policy on open item ownership:
>> > > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>> > > >
>> > > >IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long 
>> > > >past due
>> > > >for your status update.  Please reacquaint yourself with the policy on 
>> > > >open
>> > > >item ownership[1] and then reply immediately.  If I do not hear from 
>> > > >you by
>> > > >2016-06-16 07:00 UTC, I will transfer this item to release management 
>> > > >team
>> > > >ownership without further notice.
>> > > >
>> > > >[1] 
>> > > >http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
>> > >
>> > > I'm working on it right now.
>> >
>> > That is good news, but it is not a valid status update.  In particular, it
>> > does not specify a date for your next update.
>>
>> You still have not delivered the status update due thirteen days ago.  If I 
>> do
>> not hear from you a fully-conforming status update by 2016-06-28 03:00 UTC, 
>> or
>> if this item ever again becomes overdue for a status update, I will transfer
>> the item to release management team ownership.
>
> This PostgreSQL 9.6 open item now needs a permanent owner.  Would any other
> committer like to take ownership?  I see Teodor committed some things relevant
> to this item just today, so the task may be as simple as verifying that those
> commits resolve the item.  If this role interests you, please read this thread
> and the policy linked above, then send an initial status update bearing a date
> for your subsequent status update.  If the item does not have a permanent
> owner by 2016-07-01 07:00 UTC, I will resolve the item by reverting all phrase
> search commits.

Teodor pushed three patches: two of them fix the issues discussed in
this topic (handling duplicates, and disabling the fallback to & for
stripped tsvectors), and one addresses the precedence of the phrase-search
tsquery operator, which was discussed in a separate thread
(https://www.postgresql.org/message-id/flat/576AB63C.7090504%40sigaev.ru#576ab63c.7090...@sigaev.ru)

They all look good, but need a small documentation patch. I will provide it later.



>
> Thanks,
> nm




Re: [HACKERS] [HITB-Announce] HITB2016AMS Videos & GSEC Singapore Voting

2016-06-28 Thread Alvaro Herrera
Robert Haas wrote:
> On Mon, Jun 27, 2016 at 5:29 PM, Alvaro Herrera
>  wrote:
> > Robert Haas wrote:
> >> On Mon, Jun 20, 2016 at 5:41 PM, Hafez Kamal  
> >> wrote:
> >> > See you in Singapore!
> >>
> >> This seems totally off-topic.  Shouldn't a post like this result in a ban?
> >
> > It is off-topic.  Sorry that it got through.  We get dozens of these
> > every week, and the vast majority are rejected; I suppose some moderator
> > slipped up (might have been me).
> 
> Ah, I didn't realize that this was an ongoing issue.  Thanks for
> getting rid of as many of them as you do.
> 
> > I see that this is a repeat of an incident you already complained about
> > in January 2014.  Will look into what happened exactly, and if a ban is
> > warranted, I'll implement that.  Conference spammers have gotten pretty
> > annoying ...
> 
> Thanks!

The logs say I slipped up myself :-(  No need for a specific ban for
this one, since this is the only post in the last 30 days.  The problem
is that there are several different conference scammers.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist

2016-06-28 Thread Tom Lane
Rushabh Lathia  writes:
> SELECT setval('s', max(100)) from tab;
> ERROR:  ORDER/GROUP BY expression not found in targetlist

Fixed, thanks for the report!

regards, tom lane




Re: [HACKERS] Reviewing freeze map code

2016-06-28 Thread Masahiko Sawada
On Tue, Jun 28, 2016 at 8:06 PM, Masahiko Sawada  wrote:
> On Tue, Jun 21, 2016 at 6:59 AM, Andres Freund  wrote:
>> On 2016-06-20 17:55:19 -0400, Robert Haas wrote:
>>> On Mon, Jun 20, 2016 at 4:24 PM, Andres Freund  wrote:
>>> > On 2016-06-20 16:10:23 -0400, Robert Haas wrote:
>>> >> What exactly is the point of all of that already_marked stuff?
>>> >
>>> > Preventing the old tuple from being locked/updated by another backend,
>>> > while unlocking the buffer.
>>> >
>>> >> I
>>> >> mean, suppose we just don't do any of that before we go off to do
>>> >> toast_insert_or_update and RelationGetBufferForTuple.  Eventually,
>>> >> when we reacquire the page lock, we might find that somebody else has
>>> >> already updated the tuple, but couldn't that be handled by
>>> >> (approximately) looping back up to l2 just as we do in several other
>>> >> cases?
>>> >
>>> > We'd potentially have to undo a fair amount more work: the toasted data
>>> > would have to be deleted and such, just to retry. Which isn't going to
>>> > super easy, because all of it will be happening with the current cid (we
>>> > can't just increase CommandCounterIncrement() for correctness reasons).
>>>
>>> Why would we have to delete the TOAST data?  AFAIUI, the tuple points
>>> to the TOAST data, but not the other way around.  So if we change our
>>> mind about where to put the tuple, I don't think that requires
>>> re-TOASTing.
>>
>> Consider what happens if we, after restarting at l2, notice that we
>> can't actually insert, but return in the !HeapTupleMayBeUpdated
>> branch. If the caller doesn't error out - and there's certainly callers
>> doing that - we'd "leak" a toasted datum.
>
> Sorry to interrupt, but I have a question about this case.
> Is there a case where we go back to l2 after we have created the toasted
> datum (i.e., after calling toast_insert_or_update)?
> IIUC, after we have stored the toast datum we just insert the heap tuple
> and log WAL (or error out for some reason).
>

I understand now; sorry for the noise.

Regards,

--
Masahiko Sawada




[HACKERS] seg fault on dsm_create call

2016-06-28 Thread Max Fomichev

Hello,
sorry for my repost from psql-novice; it was probably not the right place
for my question.


I'm trying to understand how to work with dynamic shared memory, message 
queues and workers.

The problem is I cannot initialize any dsm segment -

void _PG_init() {
...
dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
...
BackgroundWorker worker;
sprintf(worker.bgw_name, "mystem wrapper process");
worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
worker.bgw_restart_time = BGW_NEVER_RESTART;
worker.bgw_main = mainProc;
worker.bgw_notify_pid = 0;
RegisterBackgroundWorker(&worker);
}

Also, I tried moving the dsm_create call to a worker, but with the same
result -

static void mainProc(Datum) {
...
dsm_segment *seg = dsm_create(32768, 0); // Segmentation fault here
...
pqsignal(SIGTERM, mystemSigterm);
BackgroundWorkerUnblockSignals();
...

What could be the reason, and what am I doing wrong?

PS
test/modules/test_shm_mq works fine...
dynamic_shared_memory_type = posix
OSX 10.11.5
PostgreSQL 9.5.3

--
Best regards,
Max Fomichev





Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist

2016-06-28 Thread Amit Langote
On Tue, Jun 28, 2016 at 2:52 PM, Rushabh Lathia
 wrote:
> Hi,
>
> Consider the below testcase:
>
> CREATE TABLE tab(
>   c1 INT NOT NULL,
>   c2 INT NOT NULL
> );
> INSERT INTO tab VALUES (1, 2);
> INSERT INTO tab VALUES (2, 1);
> INSERT INTO tab VALUES (1, 2);
>
>
> case 1:
>
> SELECT c.c1, c.c2 from tab C WHERE c.c2 = ANY (
> SELECT 1 FROM tab A WHERE a.c2 IN (
>   SELECT 1 FROM tab B WHERE a.c1 = c.c1
>   GROUP BY rollup(a.c1)
> )
> GROUP BY cube(c.c2)
>   )
>   GROUP BY grouping sets(c.c1, c.c2)
>   ORDER BY 1, 2 DESC;
> ERROR:  ORDER/GROUP BY expression not found in targetlist
>
> case 2:
>
> create sequence s;
> SELECT setval('s', max(100)) from tab;
> ERROR:  ORDER/GROUP BY expression not found in targetlist

The following give the same error:

select max(100) from tab;
select max((select 1)) from tab;

Thanks,
Amit




Re: [HACKERS] How to kill a Background worker and Its metadata

2016-06-28 Thread Akash Agrawal
I am calling proc_exit(1) once the worker encounters a SIGTERM signal. I've
attached my code here.

Here is the link to stackoverflow:
http://stackoverflow.com/questions/38058628/how-to-kill-a-background-worker-including-its-metadata-in-postgres



On Mon, Jun 27, 2016 at 8:41 PM, Craig Ringer  wrote:

> On 28 June 2016 at 02:27, Akash Agrawal  wrote:
>
>> Hi,
>>
>> I've created a background worker and I am using Postgresql-9.4. This
>> bgworker handles the job queue dynamically and goes to sleep if there is no
>> job to process within the next 1 hour.
>>
>> Now, I want to have a mechanism to wake the bgworker up in case if
>> someone adds a new job while the bgworker is in sleep mode. So to do it, I
>> have created a trigger which initially removes the existing background
>> worker and then registers a new one.
>>
>
> Don't do that.
>
> Instead, set the background worker's process latch, which you can find in
> the PGPROC array. There are a variety of examples of how to set latches in
> the sources.
>
>>
>> I am retrieving the pid from pg_Stat_activity. The maximum number of
>> background worker that can run simultaneously is equal to 8.  I think even
>> if I call pg_terminate_backend the metadata of the background worker is not
>> being deleted
>>
>
> Correct. Unless you register it as a dynamic bgworker with no automatic
> restart, it'll get restarted when it exits uncleanly.
>
> Have the worker call proc_exit(0) if you want it not to be restarted.
>
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
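
To illustrate the latch approach with a rough sketch (untested; it assumes
the worker publishes its PGPROC pointer somewhere in shared memory so that
backends can find it — `worker_proc` below stands for that lookup, and
`process_job_queue()` is a hypothetical job handler):

```c
/* In the worker's main loop: sleep on the process latch with the same
 * one-hour timeout, but wake immediately when somebody sets the latch. */
for (;;)
{
    int rc = WaitLatch(&MyProc->procLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                       60 * 60 * 1000L);    /* 1 hour */
    ResetLatch(&MyProc->procLatch);
    if (rc & WL_POSTMASTER_DEATH)
        proc_exit(1);
    process_job_queue();
}

/* In the backend that enqueues a job (e.g. from the trigger): wake the
 * sleeping worker instead of terminating and re-registering it. */
SetLatch(&worker_proc->procLatch);
```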


bgworker.rtf
Description: RTF file



Re: [HACKERS] How to kill a Background worker and Its metadata

2016-06-28 Thread Akash Agrawal
I've handled the SIGTERM signal. pg_terminate_backend sends a signal
(SIGTERM) to the backend process identified by process ID, and after this
call I can see in my logs that the background worker gets terminated.

Yet, I am only able to register the first 8 background workers. I am using
select worker_spi1_launch(1) to launch one every time. This is why I guess
there is some metadata maintained which has to be deleted.

On Mon, Jun 27, 2016 at 7:59 PM, Michael Paquier 
wrote:

> On Tue, Jun 28, 2016 at 3:27 AM, Akash Agrawal  wrote:
> > I've created a background worker and I am using Postgresql-9.4. This
> > bgworker handles the job queue dynamically and goes to sleep if there is
> no
> > job to process within the next 1 hour.
> >
> > Now, I want to have a mechanism to wake the bgworker up in case if
> someone
> > adds a new job while the bgworker is in sleep mode. So to do it, I have
> > created a trigger which initially removes the existing background worker
> and
> > then registers a new one. I am using the following two queries inside it:
>
> Why don't you just register and use a signal in this case? You could
> even do something with SIGHUP...
> --
> Michael
>


Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-28 Thread Ashutosh Bapat
>
> >
> > postgres_fdw resets the search path to pg_catalog while opening
> connection
> > to the server. The reason behind this is explained in deparse.c
> >
> >  * We assume that the remote session's search_path is exactly
> "pg_catalog",
> >  * and thus we need schema-qualify all and only names outside pg_catalog.
>
> Hmm.  OK, should we revert the schema-qualification part of that
> commit, or just leave it alone?
>
>
If we leave that code as is, someone who wants to add similar code later
may get confused or be tempted to create more instances of
schema qualification. I think we should revert the schema qualification.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] fixing subplan/subquery confusion

2016-06-28 Thread Amit Kapila
On Tue, Jun 28, 2016 at 8:25 AM, Tom Lane  wrote:
> Amit Kapila  writes:
>> I had couple of questions [1] related to that patch.  See if you find
>> those as relevant?
>
> I do not think those cases are directly relevant: you're talking about
> appendrels not single, unflattened RTE_SUBQUERY rels.
>

Right, but still I think we shouldn't leave the appendrel case unattended.

> In the subquery case, my view of how it ought to work is that Paths coming
> up from the subquery would be marked as not-parallel-safe if they contain
> references to unsafe functions.  It might be that that doesn't happen for
> free, but my guess is that it would already work that way given a change
> similar to what I proposed.
>

This makes sense to me.

> In the appendrel case, I tend to agree that the easiest solution is to
> scan all the children of the appendrel and just mark the whole thing as
> not consider_parallel if any of them have unsafe functions.
>

That's what I had in mind as well, but I'm not sure which is the best place
to set it.  Shall we do it in set_append_rel_size() after setting the
size of each relation (after the foreach loop), or is it better to do it in
set_append_rel_pathlist()?  Is it better to do it as a separate patch
or to enhance your patch with this change?  If you are okay with it, I can
update the patch or write a new one, based on what is preferred.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Reviewing freeze map code

2016-06-28 Thread Masahiko Sawada
On Tue, Jun 21, 2016 at 6:59 AM, Andres Freund  wrote:
> On 2016-06-20 17:55:19 -0400, Robert Haas wrote:
>> On Mon, Jun 20, 2016 at 4:24 PM, Andres Freund  wrote:
>> > On 2016-06-20 16:10:23 -0400, Robert Haas wrote:
>> >> What exactly is the point of all of that already_marked stuff?
>> >
>> > Preventing the old tuple from being locked/updated by another backend,
>> > while unlocking the buffer.
>> >
>> >> I
>> >> mean, suppose we just don't do any of that before we go off to do
>> >> toast_insert_or_update and RelationGetBufferForTuple.  Eventually,
>> >> when we reacquire the page lock, we might find that somebody else has
>> >> already updated the tuple, but couldn't that be handled by
>> >> (approximately) looping back up to l2 just as we do in several other
>> >> cases?
>> >
>> > We'd potentially have to undo a fair amount more work: the toasted data
>> > would have to be deleted and such, just to retry. Which isn't going to
>> > super easy, because all of it will be happening with the current cid (we
>> > can't just increase CommandCounterIncrement() for correctness reasons).
>>
>> Why would we have to delete the TOAST data?  AFAIUI, the tuple points
>> to the TOAST data, but not the other way around.  So if we change our
>> mind about where to put the tuple, I don't think that requires
>> re-TOASTing.
>
> Consider what happens if we, after restarting at l2, notice that we
> can't actually insert, but return in the !HeapTupleMayBeUpdated
> branch. If the caller doesn't error out - and there's certainly callers
> doing that - we'd "leak" a toasted datum.

Sorry to interrupt, but I have a question about this case.
Is there a case where we go back to l2 after we have created the toasted
datum (i.e., after calling toast_insert_or_update)?
IIUC, after we have stored the toast datum we just insert the heap tuple
and log WAL (or error out for some reason).

Regards,

--
Masahiko Sawada




Re: [HACKERS] Improving executor performance

2016-06-28 Thread Rajeev rastogi
On 25 June 2016 05:00, Andres Freund Wrote:
>To: pgsql-hackers@postgresql.org
>Subject: [HACKERS] Improving executor performance
>
>My observations about the performance bottlenecks I found, and partially
>addressed, are in rough order of importance (there's interdependence
>between most of them):
>
>1) Cache misses cost us a lot, doing more predictable accesses can make
>   a huge difference. Without addressing that, many other bottlenecks
>   don't matter all that much.  I see *significant* performance
>   improvements for large seqscans (when the results are used) simply
>   memcpy()'ing the current target block.
>
>   This partially is an intrinsic problem of analyzing a lot of data,
>   and partially because our access patterns are bad.
>
>
>2) Baring 1) tuple deforming is the biggest bottleneck in nearly all
>   queries I looked at. There's various places we trigger deforming,
>   most ending in either slot_deform_tuple(), heap_getattr(),
>   heap_deform_tuple().

Agreed.
I have also observed similar behavior, specifically (2), while working on
improving executor performance.
I did some prototype work on this to improve performance by using native
compilation. Details are available on my blog:
http://rajeevrastogi.blogspot.in/2016/03/native-compilation-part-2-pgday-asia.html
 


>3) Our 1-by-1 tuple flow in the executor has two major issues:

Agreed. In order to tackle this, IMHO we should:
1. Make the processing data-centric instead of operator-centric.
2. Instead of pulling each tuple from its immediate child operator, an
operator can push tuples to its parent. It can keep pushing until it
reaches an operator that cannot proceed without results from another
operator.
More details in another thread:
https://www.postgresql.org/message-id/bf2827dcce55594c8d7a8f7ffd3ab77159a9b...@szxeml521-mbs.china.huawei.com
 

Thanks and Regards,
Kumar Rajeev Rastogi




Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-06-28 Thread Gilles Darold
Le 07/04/2016 08:30, Karl O. Pinc a écrit :
> On Thu, 7 Apr 2016 01:13:51 -0500
> "Karl O. Pinc"  wrote:
>
>> On Wed, 6 Apr 2016 23:37:09 -0500
>> "Karl O. Pinc"  wrote:
>>
>>> On Wed, 6 Apr 2016 22:26:13 -0500
>>> "Karl O. Pinc"  wrote:  
 On Wed, 23 Mar 2016 23:22:26 +0100
 Gilles Darold  wrote:
 
> Thanks for the reminder, here is the v3 of the patch after a
> deeper review and testing. It is now registered to the next
> commit fest under the System Administration topic.
>> This is what I see at the moment.  I'll wait for replies
>> before looking further and writing more.
> Er, one more thing.  Isn't it true that in logfile_rotate()
> you only need to call store_current_log_filename() when
> logfile_open() is called with a "w" mode, and never need to
> call it when logfile_open() is called with an "a" mode?
>
>
> Karl 
> Free Software:  "You don't pay back, you pay forward."
>  -- Robert A. Heinlein
>
>

Hi Karl,

Thank you very much for the patch review, and my apologies for the overly
long response delay. I have been traveling since the end of April and had
totally forgotten this patch. I have applied all your useful feedback to
the patch and attached a new one (v4) to this email:

- Fix the missing  in doc/src/sgml/func.sgml
- Rewrite the pg_current_logfile description as suggested
- Remove comment in src/backend/postmaster/syslogger.c
- Use pgrename to first write the filename to a temporary file
before overwriting the last log file.
- Rename store_current_log_filename() into logfile_writename() -
logfile_getname is used to retrieve the filename.
- Use logfile_open() to enable CRLF line endings on Windows
- Change log level for when it can't create pg_log_file, from
WARNING to LOG
- Remove the NOTICE message when the syslogger is not enabled; the
function returns null.

On other questions:

> "src/backend/postmaster/syslogger.c expects to see fopen() fail with
ENFILE and EMFILE.  What will you do if you get these?"

- Nothing. If the problem occurs during the log rotation call, then
log file rotation is disabled, so logfile_writename() will not be called.
The case where the problem occurs between the rotation call and the
logfile_writename() call is possible, but I don't think it would be
useful to try again. In that case the log filename will be updated
during the next rotation.

> "Have you given any thought as to when logfile_rotate() is called? 
Since logfile_rotate() itself logs with ereport(), it would _seem_ safe
to ereport() from within your store_current_log_filename(), called from
within logfile_rotate()."

- Other parts of logfile_rotate() use ereport(), so I guess it is safe
to use it.

> "The indentation of the ereport(), in the part that continues over
multiple lines"

- This was my first thought, but seeing what is done in the other
calls to ereport(), I think I have done it the right way.

> "Isn't it true that in logfile_rotate() you only need to call
store_current_log_filename() when logfile_open() is called with a "w"
mode, and never need to call it when logfile_open() is called with an
"a" mode?"

- No, that is not the case: append mode is used when logfile_rotate()
reuses an existing file, but the filename still changes.


Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 3f627dc..7100881 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15293,6 +15293,12 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   current log file used by the logging collector
+  
+
+  
session_user
name
session user name
@@ -15499,6 +15505,17 @@ SET search_path TO schema , schema, ..
 pg_notification_queue_usage

 
+   
+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the
+current log file used by the logging collector, as
+text. Log collection must be active or the
+return value is undefined.
+   
+

 pg_listening_channels returns a set of names of
 asynchronous notification channels that the current session is listening
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 1b812bd..7dae000 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -170,6 +170,13 @@ last started with
   (this file is not present after server shutdown)
 
 
+
+ pg_log_file
+ A file recording the current log file used by the syslogger
+  when log collection is active
+
+
+
 
 
 
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index e7e488a..0e25aa6 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -145,6 +145,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 st

Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-28 Thread Etsuro Fujita

On 2016/06/28 15:23, Ashutosh Bapat wrote:

The wording "column "whole-row reference ..." doesn't look good.
Whole-row reference is not a column. The error context itself should be
"whole row reference for foreign table ft1".


Ah, you are right.  Please find attached an updated version.

Best regards,
Etsuro Fujita


postgres-fdw-conv-error-callback-v4.patch
Description: binary/octet-stream
