Re: Missing "Up" navigation link between parts and doc root?

2020-07-03 Thread Fabien COELHO



Hello Peter,

The original stylesheets explicitly go out of their way to do it that way. 
We can easily fix that by removing that special case.  See attached patch.


That patch only fixes it for the header.  To fix it for the footer as well, 
we'd first need to import the navfooter template to be able to customize it.


Thanks for the patch; it applies cleanly, the doc compiles, and it works for
me with w3m.



Not a big problem though.


Nope, just mildly irritating for quite a long time :-) So I'd go for
back-patching if it applies cleanly.


--
Fabien.




Re: Missing "Up" navigation link between parts and doc root?

2020-07-03 Thread Fabien COELHO




The original stylesheets explicitly go out of their way to do it that
way.


Can we find any evidence of the reasoning?  As you say, that clearly was
an intentional choice.


Given the code, my guess would be the well-intentioned but misplaced 
desire to avoid a redundancy, i.e. two links side-by-side which point to 
the same place.


--
Fabien.




Re: pgbench: option delaying queries till connections establishment?

2020-07-03 Thread Fabien COELHO



* First patch reworks time measurements in pgbench.
* Second patch adds a barrier before starting the bench.

It applies on top of the previous one. The initial imbalance due to 
thread creation times is smoothed.
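
As a minimal standalone illustration of the barrier idea (pgbench has its
own thread abstraction, so the pthreads sketch below is illustrative only,
not the patch's actual code):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t start_barrier;

static void *
thread_main(void *arg)
{
	/* ... per-thread setup, e.g. establishing client connections ... */

	/*
	 * Wait until every thread is ready.  Without this, threads created
	 * early begin benchmarking while later threads are still being
	 * created, which skews the measured interval.
	 */
	pthread_barrier_wait(&start_barrier);

	/* ... the benchmark loop starts here, for all threads at once ... */
	printf("thread %ld starting\n", (long) (intptr_t) arg);
	return NULL;
}

int
main(void)
{
	pthread_t	threads[NTHREADS];

	pthread_barrier_init(&start_barrier, NULL, NTHREADS);
	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, thread_main, (void *) (intptr_t) i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	pthread_barrier_destroy(&start_barrier);
	return 0;
}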


The usecs patch fails to apply to HEAD; can you please submit a rebased version
of this patchset.  Also, when doing so, can you please rename the patches such
that they sort alphabetically in the order in which they are intended to be
applied.  The CFBot patch tester will otherwise try to apply them out of order,
which causes errors.


Ok. Attached.

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 8e728dc094..3c5ef9c27e 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -58,8 +58,10 @@ number of clients: 10
 number of threads: 1
 number of transactions per client: 1000
number of transactions actually processed: 10000/10000
-tps = 85.184871 (including connections establishing)
-tps = 85.296346 (excluding connections establishing)
+latency average = 11.013 ms
+latency stddev = 7.351 ms
+initial connection time = 45.758 ms
+tps = 896.967014 (without initial connection establishing)
 
 
   The first six lines report some of the most important parameter
@@ -2226,7 +2228,6 @@ END;
   
For the default script, the output will look similar to this:
 
-starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
 scaling factor: 1
 query mode: simple
@@ -2234,22 +2235,22 @@ number of clients: 10
 number of threads: 1
 number of transactions per client: 1000
number of transactions actually processed: 10000/10000
-latency average = 15.844 ms
-latency stddev = 2.715 ms
-tps = 618.764555 (including connections establishing)
-tps = 622.977698 (excluding connections establishing)
+latency average = 10.870 ms
+latency stddev = 7.341 ms
+initial connection time = 30.954 ms
+tps = 907.949122 (without initial connection establishing)
 statement latencies in milliseconds:
-0.002  \set aid random(1, 10 * :scale)
-0.005  \set bid random(1, 1 * :scale)
-0.002  \set tid random(1, 10 * :scale)
-0.001  \set delta random(-5000, 5000)
-0.326  BEGIN;
-0.603  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
-0.454  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
-5.528  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
-7.335  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
-0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
-1.212  END;
+0.001  \set aid random(1, 10 * :scale)
+0.001  \set bid random(1, 1 * :scale)
+0.001  \set tid random(1, 10 * :scale)
+0.000  \set delta random(-5000, 5000)
+0.046  BEGIN;
+0.151  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
+0.107  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
+4.241  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
+5.245  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
+0.102  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
+0.974  END;
 
   
 
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 08a5947a9e..46be67adaf 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -291,9 +291,9 @@ typedef struct SimpleStats
  */
 typedef struct StatsData
 {
-	time_t		start_time;		/* interval start time, for aggregates */
-	int64		cnt;			/* number of transactions, including skipped */
-	int64		skipped;		/* number of transactions skipped under --rate
+	pg_time_usec_t	start_time;	/* interval start time, for aggregates */
+	int64			cnt;		/* number of transactions, including skipped */
+	int64			skipped;	/* number of transactions skipped under --rate
  * and --latency-limit */
 	SimpleStats latency;
 	SimpleStats lag;
@@ -416,11 +416,11 @@ typedef struct
 	int			nvariables;		/* number of variables */
 	bool		vars_sorted;	/* are variables sorted by name? */
 
-	/* various times about current transaction */
-	int64		txn_scheduled;	/* scheduled start time of transaction (usec) */
-	int64		sleep_until;	/* scheduled start time of next cmd (usec) */
-	instr_time	txn_begin;		/* used for measuring schedule lag times */
-	instr_time	stmt_begin;		/* used for measuring statement latencies */
+	/* various times about current transaction in microseconds */
+	pg_time_usec_t	txn_scheduled;	/* scheduled start time of transaction */
+	pg_time_usec_t	sleep_until;	/* scheduled start time of next cmd */
+	pg_time_usec_t	txn_begin;		/* used for measuring schedule lag times */
+	pg_time_usec_t	stmt_begin;		/* used for measuring statement latencies */
 
 	bool		prepared[MAX_SCRIPTS];	/* whether client prepared the script */
 
@@ -449,13 +4

Re: min_safe_lsn column in pg_replication_slots view

2020-07-03 Thread Amit Kapila
On Wed, Jul 1, 2020 at 8:44 PM Alvaro Herrera  wrote:
>
> On 2020-Jul-01, Amit Kapila wrote:
>
> > Okay, but do we think it is better to display this in
> > pg_replication_slots or some new view like pg_stat_*_slots as being
> > discussed in [1]?  It should not happen that we later decide to move
> > this or similar stats to that view.
>
> It seems that the main motivation for having some counters in another
> view is the ability to reset them; and resetting this distance value
> makes no sense, so I think it's better to have it in
> pg_replication_slots.
>

Fair enough.  It would be good if we could come up with something better
than 'distance' for this column.  Some ideas: safe_wal_limit,
safe_wal_size?
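
(That is, something along the lines of:

SELECT slot_name, safe_wal_size FROM pg_replication_slots;

where the column, whatever name it ends up with, would report how much WAL
can still be written before the slot is in danger.)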

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: A patch for get origin from commit_ts.

2020-07-03 Thread Movead Li
>Thanks.  Movead, please note that the patch is waiting on author? 

>Could you send an update if you think that those changes make sense? 

Thanks for approving the issue; I will send a patch on Monday.
Regards,

Highgo Software (Canada/China/Pakistan) 

URL : http://www.highgo.ca/ 

EMAIL: mailto:movead(dot)li(at)highgo(dot)ca

Re: proposal - psql - possibility to redirect only tabular output

2020-07-03 Thread Pavel Stehule
pá 3. 7. 2020 v 12:16 odesílatel Artur Zakirov  napsal:

> Hello,
>
> On Sat, Apr 11, 2020 at 6:20 PM Pavel Stehule 
> wrote:
> > Main motivation for this patch is working with psql for writing and
> editing queries, and browsing result in second terminal with pspg or any
> other similar tool (tail, ...). The advantage of this setup is possibility
> to see sql and query result together. I use terminal multiplicator (Tilix),
> but it can be used without it.
> >
> > So example with pspg (should be some fresh release)
> >
> > 1. create fifo used for communication - mkfifo ~/pipe
> >
> > 2. run in one terminal pspg - pspg -f ~/pipe --hold-stream=2
> >
> > 3. run psql in other terminal
>
> The patch looks interesting. As far as I understand the purpose of the
> patch is to hide status messages from result output.
> So maybe it would be enough just to hide status messages at all. There
> is the QUIET variable for that. The main advantage of this variable is
> that it hides a status of "\lo_" commands, for example, as well as a
> status of utility commands. So the QUIET variable covers more use
> cases already.
>

The quiet mode isn't exactly what I want (it can be used as a workaround -
and now pspg https://github.com/okbob/pspg knows the format of the status
line and can work with it).

I would like to see a status row. For me it is a visual check that
statements like INSERT or UPDATE completed successfully. But I would not
send it to the terminal with an active tabular pager.

Pavel




> --
> Artur
>


Re: track_planning causing performance regression

2020-07-03 Thread Pavel Stehule
pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao 
napsal:

>
>
> On 2020/07/03 16:02, Pavel Stehule wrote:
> >
> >
> > pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao  napsal:
> >
> >
> >
> > On 2020/07/03 13:05, Pavel Stehule wrote:
> >  > Hi
> >  >
> >  > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fu...@oss.nttdata.com> napsal:
> >  >
> >  >
> >  >
> >  > On 2020/07/01 7:37, Peter Geoghegan wrote:
> >  >  > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:
> >  >  >> Ants and Andres suggested to replace the spinlock used in
> pgss_store() with
> >  >  >> LWLock. I agreed with them and posted the POC patch doing
> that. But I think
> >  >  >> the patch is an item for v14. The patch may address the
> reported performance
> >  >  >> issue, but may cause other performance issues in other
> workloads. We would
> >  >  >> need to measure how the patch affects the performance in
> various workloads.
> >  >  >> It seems too late to do that at this stage of v13.
> Thought?
> >  >  >
> >  >  > I agree that it's too late for v13.
> >  >
> >  > Thanks for the comment!
> >  >
> >  > So I pushed the patch and changed default of track_planning
> to off.
> >  >
> >  >
>  > Maybe it can be documented that enabling this option can have a
> negative impact on performance.
> >
> > Yes. What about adding either of the followings into the doc?
> >
> >   Enabling this parameter may incur a noticeable performance
> penalty.
> >
> > or
> >
> >   Enabling this parameter may incur a noticeable performance
> penalty,
> >   especially when fewer kinds of queries are executed on many
> >   concurrent connections.
> >
> >
> > This second variant looks perfect for this case.
>
> Ok, so patch attached.
>

+1

Thank you

Pavel


> Regards,
>
> --
> Fujii Masao
> Advanced Computing Technology Center
> Research and Development Headquarters
> NTT DATA CORPORATION
>


Ideas about a better API for postgres_fdw remote estimates

2020-07-03 Thread Tom Lane
In <1116564.1593813...@sss.pgh.pa.us> I wrote:
> I wonder whether someday we ought to invent a new API that's more
> suited to postgres_fdw's needs than EXPLAIN is.  It's not like the
> remote planner doesn't know the number we want; it just fails to
> include it in EXPLAIN.

I've been thinking about this a little more, and I'd like to get some
ideas down on electrons before they vanish.

The current method for postgres_fdw to obtain remote estimates is to
issue an EXPLAIN command to the remote server and then decipher the
result.  This has just one big advantage, which is that it works
against existing, even very old, remote PG versions.  In every other
way it's pretty awful: it involves a lot of cycles on the far end
to create output details we don't really care about, it requires a
fair amount of logic to parse that output, and we can't get some
details that we *do* care about (such as the total size of the foreign
table, as per the other discussion).

We can do better.  I don't propose removing the existing logic, because
being able to work against old remote PG versions seems pretty useful.
But we could probe at connection start for whether the remote server
has support for a better way, and then use that way if available.

What should the better way look like?  I suggest the following:

* Rather than adding a core-server feature, the remote-end part of the new
API should be a function installed by an extension (either postgres_fdw
itself, or a new extension "postgres_fdw_remote" or the like).  One
attraction of this approach is that it'd be conceivable to back-port the
new code into existing PG releases by updating the extension.  Also
there'd be room for multiple versions of the support.  The
connection-start probe could be of the form "does this function exist
in pg_proc?".

* I'm imagining the function being of the form

function pg_catalog.postgres_fdw_support(query text) returns something

where the input is still the text of a query we're considering issuing,
and the output is some structure that contains the items of EXPLAIN-like
data we need, but not the items we don't.  The implementation of the
function would run the query through parse/plan, then pick out the
data we want and return that.
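(See the hypothetical sketch after this list.)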

* We could do a lot worse than to have the "structure" be JSON.
This'd allow structured, labeled data to be returned; it would not be
too difficult to construct, even in PG server versions predating the
addition of JSON logic to the core; and the receiving postgres_fdw
extension could use the core's JSON logic to parse the data.

* The contents of the structure need to be designed with forethought
for extensibility, but this doesn't seem hard if it's all a collection
of labeled fields.  We can just say that the recipient must ignore
fields it doesn't recognize.  Once a given field has been defined, we
can't change its contents, but we can introduce new fields as needed.
Note that I would not be in favor of putting an overall version number
within the structure; that's way too coarse-grained.
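
As a purely hypothetical sketch of the probe-and-call flow above (the
function name, signature, and JSON fields are all invented for
illustration; none of this exists today):

-- Connection-start probe: does the remote server offer the support function?
SELECT EXISTS (SELECT 1
                 FROM pg_proc p
                 JOIN pg_namespace n ON n.oid = p.pronamespace
                WHERE n.nspname = 'pg_catalog'
                  AND p.proname = 'postgres_fdw_support');

-- If so, ask for structured estimates instead of parsing EXPLAIN text:
SELECT postgres_fdw_support('SELECT * FROM remote_tab WHERE x > 42');
-- hypothetical result:
--   {"plan_rows": 1234, "plan_width": 36, "startup_cost": 0.25,
--    "total_cost": 187.50, "rel_tuples": 100000}

Per the extensibility rule above, a recipient would simply ignore any field
it does not recognize.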

I'm not planning to do anything about these ideas myself, at least
not in the short term.  But perhaps somebody else would like to
run with them.

regards, tom lane




avoid bitmapOR-ing indexes with scan condition inconsistent with partition constraint

2020-07-03 Thread Justin Pryzby
(resending to the list)

Hi All

I started looking into Konstantin's 30 month old thread/patch:
|Re: [HACKERS] Secondary index access optimizations
https://www.postgresql.org/message-id/27516421-5afa-203c-e22a-8407e9187327%40postgrespro.ru

..to which David directed me 12 months ago:
|Subject: Re: scans on table fail to be excluded by partition bounds
https://www.postgresql.org/message-id/CAKJS1f_iOmCW11dFzifpDGUgSLAoSTDOjw2tcec%3D7Cgq%2BsR80Q%40mail.gmail.com

My complaint at the time was for a query plan like:

CREATE TABLE p (i int, j int) PARTITION BY RANGE(i);
SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES FROM(%s)TO(%s)', i, 10*(i-1), 10*i) FROM generate_series(1,10)i; \gexec
INSERT INTO p SELECT i%99, i%9 FROM generate_series(1,99999)i;
VACUUM ANALYZE p;
CREATE INDEX ON p(i);
CREATE INDEX ON p(j);

postgres=# explain analyze SELECT * FROM p WHERE (i=10 OR i=20 OR i=30) AND j<2;
Append  (cost=28.51..283.25 rows=546 width=12) (actual time=0.100..1.364 rows=546 loops=1)
  ->  Bitmap Heap Scan on p2  (cost=28.51..93.51 rows=182 width=12) (actual time=0.099..0.452 rows=182 loops=1)
        Recheck Cond: ((i = 10) OR (i = 20) OR (i = 30))
        Filter: (j < 2)
        Rows Removed by Filter: 818
        Heap Blocks: exact=45
        ->  BitmapOr  (cost=28.51..28.51 rows=1000 width=0) (actual time=0.083..0.083 rows=0 loops=1)
              ->  Bitmap Index Scan on p2_i_idx  (cost=0.00..19.79 rows=1000 width=0) (actual time=0.074..0.074 rows=1000 loops=1)
                    Index Cond: (i = 10)
              ->  Bitmap Index Scan on p2_i_idx  (cost=0.00..4.29 rows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1)
                    Index Cond: (i = 20)
              ->  Bitmap Index Scan on p2_i_idx  (cost=0.00..4.29 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)
                    Index Cond: (i = 30)
...

This 2nd and 3rd index scan on p2_i_idx are useless, and benign here, but
harmful if we have a large OR list.

I tried rebasing Konstantin's patch, but it didn't handle the case of
"refuting" inconsistent arms of an "OR" list, so I came up with this.  This
currently depends on the earlier patch only to call RelationGetPartitionQual,
so appears to be mostly a separate issue.

I believe the current behavior of "OR" lists is also causing another issue I
reported, which a customer hit again last week:
https://www.postgresql.org/message-id/20191216184906.ga2...@telsasoft.com
|ERROR: could not resize shared memory segment...No space left on device

When I looked into it, their explain(format text) was 50MB, due to a list of
~500 "OR" conditions, *each* of which was causing an index scan for each of
~500 partitions, where only one index scan per partition was needed or useful,
all the others being inconsistent with the partition constraint.  Thus the
query ultimately errors when it exceeds a resource limit (maybe no surprise
with 8500 index scans).

Here, I was trying to create a test case reproducing that error to see if this
resolves it, but so far hasn't failed. Tomas has a reproducer with a different
(much simpler) plan, though.

CREATE TABLE p (i int, j int) PARTITION BY RANGE(i);
\pset pager off
SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES FROM(%s)TO(%s)', i, 10*(i-1), 10*i) FROM generate_series(1,500)i;
\timing off
\set quiet
\set echo always
\gexec
\timing on
INSERT INTO p SELECT i%5000, i%500 FROM generate_series(1,999)i;
VACUUM ANALYZE p;
CREATE INDEX ON p(i);
CREATE INDEX ON p(j);
SELECT format('explain analyze SELECT * FROM p WHERE %s', array_to_string(array_agg('i='||(i*10)::text),' OR ')) FROM generate_series(1,500)i;

-- 
Justin
>From bd1f9f93f9bec0585a79bcdf932d1a5e2c0001a0 Mon Sep 17 00:00:00 2001
From: Konstantin Knizhnik 
Date: Fri, 12 Oct 2018 15:53:51 +0300
Subject: [PATCH 1/2] Secondary index access optimizations

---
 .../postgres_fdw/expected/postgres_fdw.out|   8 +-
 contrib/postgres_fdw/sql/postgres_fdw.sql |   2 +-
 src/backend/optimizer/path/allpaths.c |   2 +
 src/backend/optimizer/util/plancat.c  |  45 ++
 src/include/optimizer/plancat.h   |   3 +
 src/test/regress/expected/create_table.out|  14 +-
 src/test/regress/expected/inherit.out | 123 ++--
 .../regress/expected/partition_aggregate.out  |  10 +-
 src/test/regress/expected/partition_join.out  |  38 +-
 src/test/regress/expected/partition_prune.out | 571 ++
 src/test/regress/expected/rowsecurity.out |  12 +-
 src/test/regress/expected/update.out  |   4 +-
 12 files changed, 313 insertions(+), 519 deletions(-)

diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 82fc1290ef..7ab8639dc7 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -629,12 +629,12 @@ EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NULL;-- Nu
Remote SQL: SELECT "C 1", c2, c3, c

Re: estimation problems for DISTINCT ON with FDW

2020-07-03 Thread Tom Lane
Etsuro Fujita  writes:
> On Thu, Jul 2, 2020 at 11:46 PM Tom Lane  wrote:
>> We could perhaps try to make our own estimate of the selectivity of the
>> shippable quals and then back into #1 from the value we got for #2 from
>> the remote server.

> Actually, that is what I suggested:
> ... By "the baserestrictinfo clauses", I mean the shippable clauses as
> well as the non-shippable clauses.  Since baserel->rows stores the
> rows estimate returned by estimate_path_cost_size(), which is #3, this
> estimates #1.

Ah.  That isn't a number we compute in this code path at the moment,
but you're right that we could do so.  However ...

>> But that sounds mighty error-prone, so I doubt it'd
>> make for much of an improvement.

> I have to admit the error-proneness.

... that is the crux of the problem.  The entire reason why we're
expending all these cycles to get a remote estimate is that we don't
trust the local estimate of the shippable quals' selectivity to be
any good.  So relying on it anyway doesn't seem very smart, even if
it's for the usually-not-too-important purpose of estimating the
total table size.

I suppose there is one case where this approach could win: if the
local selectivity estimate is just fine, but the remote table size has
changed a lot since we last did an ANALYZE, then this would give us a
decent table size estimate with no additional remote traffic.  But
that doesn't really seem like a great bet --- if the table size has
changed that much, our local stats are probably obsolete too.

I wonder whether someday we ought to invent a new API that's more
suited to postgres_fdw's needs than EXPLAIN is.  It's not like the
remote planner doesn't know the number we want; it just fails to
include it in EXPLAIN.

>> In any case, the proposal I'm making is just to add a sanity-check
>> clamp to prevent the worst effects of not setting rel->tuples sanely.
>> It doesn't foreclose future improvements inside the FDW.

> Agreed.

OK, I'll go ahead and push the patch I proposed yesterday.

regards, tom lane




Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Tom Lane
Magnus Hagander  writes:
> On Fri, Jul 3, 2020 at 7:06 PM Tom Lane  wrote:
>> Shouldn't that be inserting MyDatabaseTableSpace?  I see no other places
>> anywhere that are forcing temp stuff into pg_default like this.

> Yeah, looking at it again, I think it should. I can't see any reason why it
> should enforce pg_default.

OK, pushed with that additional correction.

regards, tom lane




Re: pro pametniky -historie vzniku PL/SQL

2020-07-03 Thread Jonah H. Harris
On Fri, Jul 3, 2020 at 3:20 PM Pavel Stehule 
wrote:

> Hi
>
> I am sorry, wrong mailing list.
>

Thanks for reading/sharing my blog post, regardless of the mailing list :)

-- 
Jonah H. Harris


Re: Missing "Up" navigation link between parts and doc root?

2020-07-03 Thread Alvaro Herrera
On 2020-Jul-03, Tom Lane wrote:

> Peter Eisentraut  writes:
> > On 2020-06-21 09:19, Fabien COELHO wrote:
> >> I've been annoyed that the documentation navigation does not always have an
> >> "Up" link. It has them inside parts, but the link disappears and you have
> >> to go for the "Home" link which is far on the right when on the root page
> >> of a part?
> >> 
> >> Is there a good reason not to have the "Up" link there as well?
> 
> > The original stylesheets explicitly go out of their way to do it that 
> > way.
> 
> Can we find any evidence of the reasoning?  As you say, that clearly was
> an intentional choice.

If it helps, this seems to have been first introduced in commit b8691d838be.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Default setting for enable_hashagg_disk (hash_mem)

2020-07-03 Thread Alvaro Herrera
On 2020-Jul-03, Bruce Momjian wrote:

> Well, the bottom line is that we are designing features during beta.

Well, we're designing a way for users to interact with the new feature.
The feature itself is already in, and it works well in general terms.  I
expect that the new behavior is a win in the majority of cases, and the
problem being discussed here will only manifest as a regression in
corner cases.  (I don't have data to back this up, but if this weren't
the case we would have realized much earlier).

It seems to me we're designing a solution to a problem that was found
during testing, which seems perfectly acceptable to me.  I don't see
grounds for reverting the behavior and I haven't seen anyone suggesting
that it would be an appropriate solution to the issue.

> If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,
> the pre-patch behavior, and the post-patch behavior.  How are people
> supposed to test all of that?

They don't have to.  We tell them that we added some new tweak for a new
pg13 feature in beta3 and that's it.

> Add to that that some don't even feel we
> need a new behavior, which is delaying any patch from being applied.

If we don't need any new behavior, then we would just close the open
item and call the current state good, no?

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Inlining of couple of functions in pl_exec.c improves performance

2020-07-03 Thread Tom Lane
I did some performance testing on 0001+0002 here, and found that
for me, there's basically no change on x86_64 but a win of 2 to 3
percent on aarch64, using Pavel's pi_est_1() as a representative
case for simple plpgsql statements.  That squares with your original
results I believe.  It's not clear to me whether any of the later
tests in this thread measured these changes in isolation, or only
with 0003 added.

Anyway, that's good enough for me, so I pushed 0001+0002 after a
little bit of additional cosmetic tweaking.

I attach your original 0003 here (it still applies, with some line
offsets).  That's just so the cfbot doesn't get confused about what
it's supposed to test now.

regards, tom lane

From 56aac7dff8243ff6dc9b8e72651cb1d9a018f1b3 Mon Sep 17 00:00:00 2001
From: Amit Khandekar 
Date: Sat, 23 May 2020 21:53:24 +0800
Subject: [PATCH 3/3] Inline exec_cast_value().

This function does not do the casting if not required. So move the
code that actually does the cast into a separate function, so as to
reduce the exec_cast_value() code and make it inline.

There are frequent calls of this function, so inlining it has been shown
to improve performance by as much as 14%.
---
 src/pl/plpgsql/src/pl_exec.c | 63 
 1 file changed, 43 insertions(+), 20 deletions(-)

diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index d5377a6dad..4028a3f0f6 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -425,10 +425,14 @@ static void instantiate_empty_record_variable(PLpgSQL_execstate *estate,
 			  PLpgSQL_rec *rec);
 static char *convert_value_to_string(PLpgSQL_execstate *estate,
 	 Datum value, Oid valtype);
-static Datum exec_cast_value(PLpgSQL_execstate *estate,
+static inline Datum exec_cast_value(PLpgSQL_execstate *estate,
 			 Datum value, bool *isnull,
 			 Oid valtype, int32 valtypmod,
 			 Oid reqtype, int32 reqtypmod);
+static Datum do_cast_value(PLpgSQL_execstate *estate,
+Datum value, bool *isnull,
+Oid valtype, int32 valtypmod,
+Oid reqtype, int32 reqtypmod);
 static plpgsql_CastHashEntry *get_cast_hashentry(PLpgSQL_execstate *estate,
  Oid srctype, int32 srctypmod,
  Oid dsttype, int32 dsttypmod);
@@ -7764,9 +7768,11 @@ convert_value_to_string(PLpgSQL_execstate *estate, Datum value, Oid valtype)
  * also contain the result Datum if we have to do a conversion to a pass-
  * by-reference data type.  Be sure to do an exec_eval_cleanup() call when
  * done with the result.
+ * The actual code to cast is kept outside of this function, to keep it short
+ * since it is an inline function, being called frequently.
  * --
  */
-static Datum
+static inline Datum
 exec_cast_value(PLpgSQL_execstate *estate,
 Datum value, bool *isnull,
 Oid valtype, int32 valtypmod,
@@ -7777,31 +7783,48 @@ exec_cast_value(PLpgSQL_execstate *estate,
 	 */
 	if (valtype != reqtype ||
 		(valtypmod != reqtypmod && reqtypmod != -1))
-	{
-		plpgsql_CastHashEntry *cast_entry;
+		value = do_cast_value(estate, value, isnull, valtype, valtypmod,
+			  reqtype, reqtypmod);
 
-		cast_entry = get_cast_hashentry(estate,
-		valtype, valtypmod,
-		reqtype, reqtypmod);
-		if (cast_entry)
-		{
-			ExprContext *econtext = estate->eval_econtext;
-			MemoryContext oldcontext;
+	return value;
+}
 
-			oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate));
+/* --
+ * do_cast_value			cast the input value.
+ *
+ * Returns the cast value.
+ * Check comments in the wrapper function exec_cast_value().
+ * --
+ */
+static Datum
+do_cast_value(PLpgSQL_execstate *estate,
+Datum value, bool *isnull,
+Oid valtype, int32 valtypmod,
+Oid reqtype, int32 reqtypmod)
+{
+	plpgsql_CastHashEntry *cast_entry;
 
-			econtext->caseValue_datum = value;
-			econtext->caseValue_isNull = *isnull;
+	cast_entry = get_cast_hashentry(estate,
+	valtype, valtypmod,
+	reqtype, reqtypmod);
+	if (cast_entry)
+	{
+		ExprContext *econtext = estate->eval_econtext;
+		MemoryContext oldcontext;
 
-			cast_entry->cast_in_use = true;
+		oldcontext = MemoryContextSwitchTo(get_eval_mcontext(estate));
 
-			value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
- isnull);
+		econtext->caseValue_datum = value;
+		econtext->caseValue_isNull = *isnull;
 
-			cast_entry->cast_in_use = false;
+		cast_entry->cast_in_use = true;
 
-			MemoryContextSwitchTo(oldcontext);
-		}
+		value = ExecEvalExpr(cast_entry->cast_exprstate, econtext,
+			 isnull);
+
+		cast_entry->cast_in_use = false;
+
+		MemoryContextSwitchTo(oldcontext);
 	}
 
 	return value;
-- 
2.17.1



Re: pro pametniky -historie vzniku PL/SQL

2020-07-03 Thread Pavel Stehule
Hi

I am sorry, wrong mailing list.

Regards

Pavel

pá 3. 7. 2020 v 21:16 odesílatel Pavel Stehule 
napsal:

> Hi
>
> PL/pgSQL originated as a simple implementation of a language strongly
> inspired by PL/SQL, so it may be interesting to read something about PL/SQL:
>
>
> http://oracle-internals.com/blog/2020/04/29/a-not-so-brief-but-very-accurate-history-of-pl-sql/
>
> Pavel
>


pro pametniky -historie vzniku PL/SQL

2020-07-03 Thread Pavel Stehule
Hi

PL/pgSQL originated as a simple implementation of a language strongly
inspired by PL/SQL, so it may be interesting to read something about PL/SQL:

http://oracle-internals.com/blog/2020/04/29/a-not-so-brief-but-very-accurate-history-of-pl-sql/

Pavel


Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Magnus Hagander
On Fri, Jul 3, 2020 at 7:06 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > On Fri, Jul 3, 2020 at 6:12 PM Tom Lane  wrote:
> >> The lack of documentation seems to be my fault, so I'm willing to pick
> >> this up unless somebody else wants it.
>
> > If the comments I included in that patch are enough, I can just commit
> > those along with it. Otherwise, please do :)
>
> Being once burned, I had something more like the attached in mind.
>

That's a bit more elaborate and yes, I agree, better.


> BTW, looking at this, I'm kind of wondering about the other code path
> in SharedFileSetInit:
>
>     if (fileset->ntablespaces == 0)
>     {
>         fileset->tablespaces[0] = DEFAULTTABLESPACE_OID;
>         fileset->ntablespaces = 1;
>     }
>
> Shouldn't that be inserting MyDatabaseTableSpace?  I see no other places
> anywhere that are forcing temp stuff into pg_default like this.
>

Yeah, looking at it again, I think it should. I can't see any reason why it
should enforce pg_default.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Tom Lane
Magnus Hagander  writes:
> On Fri, Jul 3, 2020 at 6:12 PM Tom Lane  wrote:
>> The lack of documentation seems to be my fault, so I'm willing to pick
>> this up unless somebody else wants it.

> If the comments I included in that patch are enough, I can just commit
> those along with it. Otherwise, please do :)

Being once burned, I had something more like the attached in mind.

BTW, looking at this, I'm kind of wondering about the other code path
in SharedFileSetInit:

    if (fileset->ntablespaces == 0)
    {
        fileset->tablespaces[0] = DEFAULTTABLESPACE_OID;
        fileset->ntablespaces = 1;
    }

Shouldn't that be inserting MyDatabaseTableSpace?  I see no other places
anywhere that are forcing temp stuff into pg_default like this.

regards, tom lane

diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index f887ee9857..2c3b9050b2 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -1183,6 +1183,7 @@ GetDefaultTablespace(char relpersistence, bool partitioned)
 
 typedef struct
 {
+	/* Array of OIDs to be passed to SetTempTablespaces() */
 	int			numSpcs;
 	Oid			tblSpcs[FLEXIBLE_ARRAY_MEMBER];
 } temp_tablespaces_extra;
@@ -1232,6 +1233,7 @@ check_temp_tablespaces(char **newval, void **extra, GucSource source)
 			/* Allow an empty string (signifying database default) */
 			if (curname[0] == '\0')
 			{
+/* InvalidOid signifies database's default tablespace */
 tblSpcs[numSpcs++] = InvalidOid;
 continue;
 			}
@@ -1258,6 +1260,7 @@ check_temp_tablespaces(char **newval, void **extra, GucSource source)
 			 */
 			if (curoid == MyDatabaseTableSpace)
 			{
+/* InvalidOid signifies database's default tablespace */
 tblSpcs[numSpcs++] = InvalidOid;
 continue;
 			}
@@ -1368,6 +1371,7 @@ PrepareTempTablespaces(void)
 		/* Allow an empty string (signifying database default) */
 		if (curname[0] == '\0')
 		{
+			/* InvalidOid signifies database's default tablespace */
 			tblSpcs[numSpcs++] = InvalidOid;
 			continue;
 		}
@@ -1386,7 +1390,8 @@ PrepareTempTablespaces(void)
 		 */
 		if (curoid == MyDatabaseTableSpace)
 		{
-			tblSpcs[numSpcs++] = curoid;
+			/* InvalidOid signifies database's default tablespace */
+			tblSpcs[numSpcs++] = InvalidOid;
 			continue;
 		}
 
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 7dc6dd2f15..5f6420efb2 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -264,8 +264,10 @@ static int	numExternalFDs = 0;
 static long tempFileCounter = 0;
 
 /*
- * Array of OIDs of temp tablespaces.  When numTempTableSpaces is -1,
- * this has not been set in the current transaction.
+ * Array of OIDs of temp tablespaces.  (Some entries may be InvalidOid,
+ * indicating that the current database's default tablespace should be used.)
+ * When numTempTableSpaces is -1, this has not been set in the current
+ * transaction.
  */
 static Oid *tempTableSpaces = NULL;
 static int	numTempTableSpaces = -1;
@@ -2779,6 +2781,9 @@ closeAllVfds(void)
  * unless this function is called again before then.  It is caller's
  * responsibility that the passed-in array has adequate lifespan (typically
  * it'd be allocated in TopTransactionContext).
+ *
+ * Some entries of the array may be InvalidOid, indicating that the current
+ * database's default tablespace should be used.
  */
 void
 SetTempTablespaces(Oid *tableSpaces, int numSpaces)
@@ -2818,7 +2823,10 @@ TempTablespacesAreSet(void)
  * GetTempTablespaces
  *
  * Populate an array with the OIDs of the tablespaces that should be used for
- * temporary files.  Return the number that were copied into the output array.
+ * temporary files.  (Some entries may be InvalidOid, indicating that the
+ * current database's default tablespace should be used.)  At most numSpaces
+ * entries will be filled.
+ * Returns the number of OIDs that were copied into the output array.
  */
 int
 GetTempTablespaces(Oid *tableSpaces, int numSpaces)
diff --git a/src/backend/storage/file/sharedfileset.c b/src/backend/storage/file/sharedfileset.c
index f7206c9175..0f65210476 100644
--- a/src/backend/storage/file/sharedfileset.c
+++ b/src/backend/storage/file/sharedfileset.c
@@ -66,6 +66,21 @@ SharedFileSetInit(SharedFileSet *fileset, dsm_segment *seg)
 		fileset->tablespaces[0] = DEFAULTTABLESPACE_OID;
 		fileset->ntablespaces = 1;
 	}
+	else
+	{
+		int			i;
+
+		/*
+		 * An entry of InvalidOid means use the default tablespace for the
+		 * current database.  Replace that now, to be sure that all users of
+		 * the SharedFileSet agree on what to do.
+		 */
+		for (i = 0; i < fileset->ntablespaces; i++)
+		{
+			if (fileset->tablespaces[i] == InvalidOid)
+fileset->tablespaces[i] = MyDatabaseTableSpace;
+		}
+	}
 
 	/* Register our cleanup callback. */
 	on_dsm_detach(seg, SharedFileSetOnDetach, PointerGetDatum(fileset));


Re: POC: rational number type (fractions)

2020-07-03 Thread Tom Lane
Andrew Dunstan  writes:
> On 7/2/20 10:01 AM, Tom Lane wrote:
>> Yeah.  We *must not* simply give up on extensibility and decide that
>> every interesting feature has to be in core.  I don't have any great
>> ideas about how we grow the wider Postgres development community and
>> infrastructure, but that certainly isn't the path to doing so.

> I've been thinking about this a bit. Right now there isn't anything
> outside of core that seems to work well. PGXN was supposed to be our
> CPAN equivalent, but it doesn't seem to have worked out that way; it
> never really got traction.

Yeah.  Can we analyze why it hasn't done better?  Can we improve it
rather than starting something completely new?

> I'm thinking about something different,
> in effect a curated set of extensions, maintained separately from the
> core. Probably the involvement of one or two committers would be good,
> but the idea is that in general core developers wouldn't need to be
> concerned about these. For want of a better name let's call it
> postgresql-extras. I would undertake to provide buildfarm support, and
> possibly we would provide packages to complement the PGDG yum and apt
> repos. If people think that's a useful idea then those of us who are
> prepared to put in some effort on this can take the discussion offline
> and come back with a firmer proposal.

My only objection to this idea is that competing with PGXN might not
be a great thing.  But then again, maybe it would be.  Or maybe this
is an intermediate tier between PGXN and core.   Anyway, it certainly
seems worth spending more thought on.  I agree that we need to do
*something* proactive rather than just hoping the extension community
gets stronger by itself.

regards, tom lane




Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Magnus Hagander
On Fri, Jul 3, 2020 at 6:12 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > A quick look -- to do things right, we will need to know the database
> > default tablespace in this case right? Which I guess isn't there because
> > the shared fileset isn't tied to a database. But perhaps it's as easy as
> > something like the attached, just overwriting the oid?
>
> Yeah, we just have to pick an appropriate place for making the
> substitution.  I have no objection to doing it in SharedFileSetInit, as
> long as we're sure it will only be consulted for placing temp files and
> not relations.
>

It doesn't *now*, and I'm pretty sure it can't in the future the way it
is now (a parallel worker can't be creating relations). But it is probably
a good idea to add a comment indicating this as well...


>
> The lack of documentation seems to be my fault, so I'm willing to pick
> this up unless somebody else wants it.
>

If the comments I included in that patch are enough, I can just commit
those along with it. Otherwise, please do :)

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Tom Lane
Magnus Hagander  writes:
> A quick look -- to do things right, we will need to know the database
> default tablespace in this case right? Which I guess isn't there because
> the shared fileset isn't tied to a database. But perhaps it's as easy as
> something like the attached, just overwriting the oid?

Yeah, we just have to pick an appropriate place for making the
substitution.  I have no objection to doing it in SharedFileSetInit, as
long as we're sure it will only be consulted for placing temp files and
not relations.

The lack of documentation seems to be my fault, so I'm willing to pick
this up unless somebody else wants it.

regards, tom lane




Re: POC: rational number type (fractions)

2020-07-03 Thread Andrew Dunstan




On 7/2/20 10:01 AM, Tom Lane wrote:
> Andrew Dunstan  writes:
>> On 7/1/20 7:00 PM, Stephen Frost wrote:
>>> ... extensions outside of PG simply don't thrive as independent projects.
>>>
>>> There's various potential reasons for that, from being hard to find, to
>>> being hard to install and work with, to the fact that we don't have a
>>> centralized extension system (PGXN isn't really endorsed at all by
>>> core... and I don't really think it should be), and our general
>>> extension management system isn't particularly great anyway.
>> Then these are things we should fix. But the right fix isn't including
>> every extension in the core code.
> Yeah.  We *must not* simply give up on extensibility and decide that
> every interesting feature has to be in core.  I don't have any great
> ideas about how we grow the wider Postgres development community and
> infrastructure, but that certainly isn't the path to doing so.
>
>   


I've been thinking about this a bit. Right now there isn't anything
outside of core that seems to work well. PGXN was supposed to be our
CPAN equivalent, but it doesn't seem to have worked out that way; it
never really got traction. I'm thinking about something different,
in effect a curated set of extensions, maintained separately from the
core. Probably the involvement of one or two committers would be good,
but the idea is that in general core developers wouldn't need to be
concerned about these. For want of a better name let's call it
postgresql-extras. I would undertake to provide buildfarm support, and
possibly we would provide packages to complement the PGDG yum and apt
repos. If people think that's a useful idea then those of us who are
prepared to put in some effort on this can take the discussion offline
and come back with a firmer proposal.


cheers


andrew

-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: Cache lookup errors with functions manipulation object addresses

2020-07-03 Thread Alvaro Herrera
On 2020-Jul-02, Michael Paquier wrote:

> Attached is an updated patch set, because of conflicts in the docs.
> Daniel, you are still registered as a reviewer of this patch, and it
> is marked as ready for committer?  Do you have more comments?  Would
> anybody have objections if I move on with 0001 and 0002 that extend
> the APIs to get the procedure, operator and type names?

0001 and 0002 look good to me.

I think 0003 could be a little more careful about indentation; some long
lines are going to result after pgindent that might be better to handle
in a different way before commit, e.g., here

> +	{
> +		char *proname = format_procedure_extended(object->objectId,
> +			FORMAT_PROC_FORCE_QUALIFY | FORMAT_PROC_INVALID_AS_NULL);
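
For instance (purely illustrative), assigning in a separate statement keeps
the call within the line-length limit:

	char	   *proname;

	proname = format_procedure_extended(object->objectId,
										FORMAT_PROC_FORCE_QUALIFY |
										FORMAT_PROC_INVALID_AS_NULL);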

I don't have any substantive comments.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Magnus Hagander
On Fri, Jul 3, 2020 at 4:16 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > Thanks. pushed!
>
> Sorry for not having paid more attention earlier, but this patch is
> quite broken.  If it weren't misguided it'd still be wrong, because
> this isn't the only spot in PrepareTempTablespaces that inserts
> InvalidOid into the output list.
>
> But, in fact, it's intentional that we represent the DB's default
> tablespace by InvalidOid in that list.  Some callers of
> GetNextTempTableSpace need that to be the case, either for
> permissions-checking reasons or because they're going to store the
> result into a temp table's pg_class.reltablespace, where that
> representation is *required*.
>

Hmm. I guess I must've been careless in checking other callers.


> I see that this is perhaps underdocumented, since while
> GetNextTempTableSpace's comment mentions the behavior, there's
> no comment about it with the data structure proper.
>

Yeah, it could definitely do with that. It was too many steps of
indirection away for me to pick that up.


> It looks to me like the actual bug here is that whoever added
> GetTempTablespaces() and made sharedfileset.c depend on it
> did not get the memo about what to do with InvalidOid.
> It's possible that we could safely make GetTempTablespaces()
> do the substitution, but that would be making fd.c assume more
> about the usage of GetTempTablespaces() than I think it ought to.
> I feel like we oughta fix sharedfileset.c, instead.
>

This seems to be in dc6c4c9dc2a -- adding Thomas.

A quick look -- to do things right, we will need to know the database
default tablespace in this case right? Which I guess isn't there because
the shared fileset isn't tied to a database. But perhaps it's as easy as
something like the attached, just overwriting the oid?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 
diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index 946777f48b..6d44d0480d 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -1152,6 +1152,10 @@ GetDefaultTablespace(char relpersistence)
 typedef struct
 {
 	int			numSpcs;
+	/*
+	 * InvalidOid stored in the tblSpcs member means use the database default
+	 * tablespace.
+	 */
 	Oid			tblSpcs[FLEXIBLE_ARRAY_MEMBER];
 } temp_tablespaces_extra;
 
@@ -1226,6 +1230,7 @@ check_temp_tablespaces(char **newval, void **extra, GucSource source)
 			 */
 			if (curoid == MyDatabaseTableSpace)
 			{
+/* The default tablespace is indicated by the value InvalidOid */
 tblSpcs[numSpcs++] = InvalidOid;
 continue;
 			}
diff --git a/src/backend/storage/file/sharedfileset.c b/src/backend/storage/file/sharedfileset.c
index d41b39a177..9c412959d8 100644
--- a/src/backend/storage/file/sharedfileset.c
+++ b/src/backend/storage/file/sharedfileset.c
@@ -64,6 +64,20 @@ SharedFileSetInit(SharedFileSet *fileset, dsm_segment *seg)
 		fileset->tablespaces[0] = DEFAULTTABLESPACE_OID;
 		fileset->ntablespaces = 1;
 	}
+	else
+	{
+		int i;
+
+		/*
+		 * Tablespace set to InvalidOid means use the default tablespace for
+		 * the database.
+		 */
+		for (i = 0; i < fileset->ntablespaces; i++)
+		{
+			if (fileset->tablespaces[i] == InvalidOid)
+fileset->tablespaces[i] = MyDatabaseTableSpace;
+		}
+	}
 
 	/* Register our cleanup callback. */
 	on_dsm_detach(seg, SharedFileSetOnDetach, PointerGetDatum(fileset));


Re: Default setting for enable_hashagg_disk (hash_mem)

2020-07-03 Thread Justin Pryzby
On Fri, Jul 03, 2020 at 10:08:08AM -0400, Bruce Momjian wrote:
> On Thu, Jul  2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:
> > But the problem isn't really the hashaggs-that-spill patch itself.
> > Rather, the problem is the way that work_mem is supposed to behave in
> > general, and the impact that that has on hash aggregate now that it
> > has finally been brought into line with every other kind of executor
> > node. There just isn't much reason to think that we should give the
> > same amount of memory to a groupagg + sort as a hash aggregate. The
> > patch more or less broke an existing behavior that is itself
> > officially broken. That is, the problem that we're trying to fix here
> > is only a problem to the extent that the previous scheme isn't really
> > operating as intended (because grouping estimates are inherently very
> > hard). A revert doesn't seem like it helps anyone.
> > 
> > I accept that the idea of inventing hash_mem to fix this problem now
> > is unorthodox. In a certain sense it solves problems beyond the
> > problems that we're theoretically obligated to solve now. But any
> > "more conservative" approach that I can think of seems like a big
> > mess.
> 
> Well, the bottom line is that we are designing features during beta.
> People are supposed to be testing PG 13 behavior during beta, including
> optimizer behavior.  We don't even have a user report yet of a
> regression compared to PG 12, or one that can't be fixed by increasing
> work_mem.
> 
> If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,
> the pre-patch behavior, and the post-patch behavior.  How are people
> supposed to test all of that?  Add to that that some don't even feel we
> need a new behavior, which is delaying any patch from being applied.

If we default hash_mem=-1, the post-patch behavior by default would be the
same as the pre-patch behavior.

Actually, another reason it should be -1 is simply to reduce the minimum,
essential number of GUCs everyone has to change or review on a new install of
a dedicated or nontrivial instance: shared_buffers, max_wal_size,
checkpoint_timeout, eff_cache_size, work_mem.

I don't think anybody said it before, but now it occurs to me that one
advantage of making hash_mem a multiplier (I'm thinking of
hash_mem_scale_factor) rather than an absolute is that one wouldn't need to
remember to increase hash_mem every time they increase work_mem.  Otherwise,
this is kind of a foot-gun: hash_mem would default to 16MB, and people
experiencing poor performance would increase work_mem to 256MB like they've
been doing for decades, and see no effect.  Or someone would increase work_mem
from 4MB to 256MB, which exceeds hash_mem default of 16MB, so then (if Peter
has his way) hash_mem is ignored.
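
To spell that out (the GUC names and values here are the proposals under
discussion, not settings that exist today):

-- Hypothetical absolute form: raising work_mem does nothing for hash nodes.
SET work_mem = '256MB';
SET hash_mem = '16MB';            -- hash aggregate still capped at 16MB

-- Hypothetical multiplier form: the hash limit scales automatically.
SET work_mem = '256MB';
SET hash_mem_scale_factor = 4;    -- hash nodes could use 4 * 256MB = 1GB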

Due to these behaviors, I'll retract my previous preference:
| "I feel it should same as work_mem, as it's written, and not a multiplier."

I think the better ideas are:
 - hash_mem=-1
 - hash_mem_scale_factor=1 ?

Maybe as a separate patch we'd set default hash_mem_scale_factor=4, possibly
only in master and not v13.

-- 
Justin




Re: warnings for invalid function casts

2020-07-03 Thread Tom Lane
Peter Eisentraut  writes:
> On 2020-06-30 21:38, Tom Lane wrote:
>> In any case, I think the issue here is what is the escape hatch for saying
>> that "I know this cast is okay, don't warn about it, thanks".  Treating
>> "void (*) (void)" as special for that purpose is nothing more nor less
>> than a kluge, so another compiler might do it differently.  Given the
>> POSIX restriction, I think we could reasonably use "void *" instead.

> I think gcc had to pick some escape hatch that is valid also outside of 
> POSIX, so they just had to pick something.  If we're disregarding 
> support for these Harvard architecture type things, then we might as 
> well use void * for easier notation.

As long as it's behind a typedef, the code will look the same in any
case ;-).

> Btw., one of the hunks in my patch was in PL/Python.  I have found an 
> equivalent change in the core Python code, which does make use of void 
> (*) (void): https://github.com/python/cpython/commit/62be74290aca

Given that gcc explicitly documents "void (*) (void)" as being what
to use, they're going to have a hard time changing their minds about
that ... and gcc is dominant enough in this space that I suppose
other compilers would have to be compatible with it.  So even though
it's theoretically bogus, I suppose we might as well go along with
it.  The typedef will allow a centralized fix if we ever find a
better answer.
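
For reference, the pattern under discussion looks roughly like this (the
typedef name is illustrative only):

/*
 * Generic function pointer type.  GCC documents "void (*) (void)" as the
 * wildcard that -Wcast-function-type accepts without warning.
 */
typedef void (*GenericFuncPtr) (void);

static int
add_one(int x)
{
	return x + 1;
}

void
example(void)
{
	/* Casting to the generic type draws no -Wcast-function-type warning. */
	GenericFuncPtr fp = (GenericFuncPtr) add_one;

	/*
	 * Cast back to the true type before calling; calling through any
	 * other type would be undefined behavior.
	 */
	int			(*typed) (int) = (int (*) (int)) fp;

	(void) typed(41);
}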

regards, tom lane




Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Tom Lane
Magnus Hagander  writes:
> Thanks. pushed!

Sorry for not having paid more attention earlier, but this patch is
quite broken.  If it weren't misguided it'd still be wrong, because
this isn't the only spot in PrepareTempTablespaces that inserts
InvalidOid into the output list.

But, in fact, it's intentional that we represent the DB's default
tablespace by InvalidOid in that list.  Some callers of
GetNextTempTableSpace need that to be the case, either for
permissions-checking reasons or because they're going to store the
result into a temp table's pg_class.reltablespace, where that
representation is *required*.
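
Illustratively, any consumer that needs a real tablespace for placing files
has to make the substitution itself, along these lines (a sketch assuming
the usual PostgreSQL headers):

	/*
	 * InvalidOid in the temp-tablespace list stands for "the database's
	 * default tablespace"; resolve it before placing files there.
	 */
	static Oid
	resolve_temp_tablespace(Oid spcoid)
	{
		return OidIsValid(spcoid) ? spcoid : MyDatabaseTableSpace;
	}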

I see that this is perhaps underdocumented, since while
GetNextTempTableSpace's comment mentions the behavior, there's
no comment about it with the data structure proper.

It looks to me like the actual bug here is that whoever added
GetTempTablespaces() and made sharedfileset.c depend on it
did not get the memo about what to do with InvalidOid.
It's possible that we could safely make GetTempTablespaces()
do the substitution, but that would be making fd.c assume more
about the usage of GetTempTablespaces() than I think it ought to.
I feel like we oughta fix sharedfileset.c, instead.

regards, tom lane




Re: TAP tests and symlinks on Windows

2020-07-03 Thread Peter Eisentraut

On 2020-06-30 14:13, Michael Paquier wrote:

Attached is an updated patch, where I have tried to use a better
wording in all the code paths involved.


This new patch doesn't work for me on MSYS2 yet.

It fails right now in 010_pg_basebackup.pl at

my $realTsDir  = TestLib::perl2host("$shorter_tempdir/tblspc1");

with "chdir: No such file or directory", because perl2host requires the
parent directory of the argument to exist, but here it doesn't.


If I add

mkdir $shorter_tempdir;

above it, then it proceeds past that point, but then the CREATE 
TABLESPACE command fails with No such file or directory.  I think the call


symlink "$tempdir", $shorter_tempdir;

creates a directory inside $shorter_tempdir, since it now exists, per my 
above change, rather than in place of $shorter_tempdir.


I think all of this is still a bit too fragile; it needs further
consideration.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Default setting for enable_hashagg_disk (hash_mem)

2020-07-03 Thread Bruce Momjian
On Thu, Jul  2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:
> But the problem isn't really the hashaggs-that-spill patch itself.
> Rather, the problem is the way that work_mem is supposed to behave in
> general, and the impact that that has on hash aggregate now that it
> has finally been brought into line with every other kind of executor
> node. There just isn't much reason to think that we should give the
> same amount of memory to a groupagg + sort as a hash aggregate. The
> patch more or less broke an existing behavior that is itself
> officially broken. That is, the problem that we're trying to fix here
> is only a problem to the extent that the previous scheme isn't really
> operating as intended (because grouping estimates are inherently very
> hard). A revert doesn't seem like it helps anyone.
> 
> I accept that the idea of inventing hash_mem to fix this problem now
> is unorthodox. In a certain sense it solves problems beyond the
> problems that we're theoretically obligated to solve now. But any
> "more conservative" approach that I can think of seems like a big
> mess.

Well, the bottom line is that we are designing features during beta.
People are supposed to be testing PG 13 behavior during beta, including
optimizer behavior.  We don't even have a user report yet of a
regression compared to PG 12, or one that can't be fixed by increasing
work_mem.

If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,
the pre-patch behavior, and the post-patch behavior.  How are people
supposed to test all of that?  Add to that that some don't even feel we
need a new behavior, which is delaying any patch from being applied.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: warnings for invalid function casts

2020-07-03 Thread Peter Eisentraut

On 2020-06-30 21:38, Tom Lane wrote:

In any case, I think the issue here is what is the escape hatch for saying
that "I know this cast is okay, don't warn about it, thanks".  Treating
"void (*) (void)" as special for that purpose is nothing more nor less
than a kluge, so another compiler might do it differently.  Given the
POSIX restriction, I think we could reasonably use "void *" instead.


I think gcc had to pick some escape hatch that is valid also outside of 
POSIX, so they just had to pick something.  If we're disregarding 
support for these Harvard architecture type things, then we might as 
well use void * for easier notation.


Btw., one of the hunks in my patch was in PL/Python.  I have found an 
equivalent change in the core Python code, which does make use of void 
(*) (void): https://github.com/python/cpython/commit/62be74290aca


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Persist MVCC forever - retain history

2020-07-03 Thread Adam Brusselback
> But I didn't feel much interest in it from the community.
Just FYI, it is something that I use in my database design now (just hacked
together using ranges / exclusion constraints) and would love a well-supported
solution.
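
A minimal sketch of that kind of hack, with an invented schema: keep every
row version, and let an exclusion constraint guarantee that at most one
version of a row is valid at any instant.

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed for the int part of the constraint

CREATE TABLE account_history (
    account_id  int NOT NULL,
    balance     numeric NOT NULL,
    valid       tstzrange NOT NULL DEFAULT tstzrange(now(), NULL),
    EXCLUDE USING gist (account_id WITH =, valid WITH &&)
);

-- An "update" closes the current version and opens a new one:
UPDATE account_history
   SET valid = tstzrange(lower(valid), now())
 WHERE account_id = 1 AND upper_inf(valid);
INSERT INTO account_history VALUES (1, 42.0, tstzrange(now(), NULL));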

I've chimed in a couple of times as this feature has popped up in discussion
over the years, as I have seen others with similar needs do as well.
It just sometimes feels like spam to chime in saying "I'd find this
feature useful", so I try not to do that too much. I'd rather not step
on the community's toes.

-Adam


Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-03 Thread Tom Lane
Amit Kapila  writes:
> On Fri, Jul 3, 2020 at 2:06 PM Simon Riggs  wrote:
>> Seems like a change with low utility.

> Yeah, all or most of the ReorderBuffer APIs seem to take the
> "ReorderBuffer *" parameter, so not sure if removing from some of them
> is useful or not.  At least in the current form, they look consistent.

Yeah, I agree with that.  This makes things less consistent and it seems
like it's not buying much.  Are any of these code paths sufficiently hot
that saving a couple of instructions would matter?

In the long run, it seems like the fact that any of these functions
are not using these parameters is an implementation artifact that
could change at any time.  So we might just be putting them back
someday, with nothing except code churn and back-patch hazards to
show for our trouble.  Or, if you want to argue that a "ReorderBufferXXX"
function is inherently never going to use the ReorderBuffer, why is it
in that module with that name to begin with?

regards, tom lane




Re: Missing "Up" navigation link between parts and doc root?

2020-07-03 Thread Tom Lane
Peter Eisentraut  writes:
> On 2020-06-21 09:19, Fabien COELHO wrote:
>> I've been annoyed that the documentation navigation does not always have an
>> "Up" link. It has them inside parts, but the link disappears and you have
>> to go for the "Home" link which is far on the right when on the root page
>> of a part?
>> 
>> Is there a good reason not to have the "Up" link there as well?

> The original stylesheets explicitly go out of their way to do it that 
> way.

Can we find any evidence of the reasoning?  As you say, that clearly was
an intentional choice.

regards, tom lane




Re: Parallell hashjoin sometimes ignores temp_tablespaces

2020-07-03 Thread Magnus Hagander
On Mon, Jun 29, 2020 at 10:26 PM Daniel Gustafsson  wrote:

> > On 29 Jun 2020, at 17:02, Magnus Hagander  wrote:
>
> > I think the fix is the attached one (tested on version 11 which is what
> > $customer is using).  To me it looks like this may have been a copy/paste
> > error all the way back in 98e8b480532 which added default_tablespace back
> > in 2004. (And is in itself entirely unrelated to parallel hashjoin, but
> > that's where it got exposed at least in my case)
>
> Running through the repro and patch on HEAD I confirm that the attached
> fixes the issue. +1 for the patch and a backpatch of it.
>
> It would be nice to have a test covering temp_tablespaces, but it seems a
> tad cumbersome to create a stable one.
>

Thanks. pushed!

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-03 Thread Amit Kapila
On Fri, Jul 3, 2020 at 2:06 PM Simon Riggs  wrote:
>
> On Fri, 3 Jul 2020 at 09:07, vignesh C  wrote:
>
>>
>> While checking through the code I found that some of the function
>> parameters in reorderbuffer & vacuumlazy are not used. I felt these
>> could be removed. I'm not sure if they are kept for future use or not.
>> The attached patch contains these changes.
>> Thoughts?
>
>
> Unused? To confirm that, everybody that has a logical decoding plugin needs 
> to check their code so we are certain this is sensible.
>

The changes proposed by Vignesh are in ReorderBuffer APIs and some of
them are static functions, so not sure if decoding plugin comes into
the picture.

> Seems like a change with low utility.
>

Yeah, all or most of the ReorderBuffer APIs seem to take the
"ReorderBuffer *" parameter, so not sure if removing from some of them
is useful or not.  At least in the current form, they look consistent.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: estimation problems for DISTINCT ON with FDW

2020-07-03 Thread Etsuro Fujita
On Fri, Jul 3, 2020 at 5:19 AM Tom Lane  wrote:
> Concretely, I now propose the attached, which seems entirely
> safe to back-patch.

The patch looks good to me.  And +1 for back-patching.

Best regards,
Etsuro Fujita




Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-03 Thread Masahiko Sawada
On Fri, 3 Jul 2020 at 17:07, vignesh C  wrote:
>
> Hi,
>
> While checking through the code I found that some of the function
> parameters in reorderbuffer & vacuumlazy are not used. I felt these
> could be removed. I'm not sure if they are kept for future use or not.
> The attached patch contains these changes.
> Thoughts?
>

For the parallel vacuum part of the change, it looks good to me.

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: track_planning causing performance regression

2020-07-03 Thread Fujii Masao



On 2020/07/03 16:02, Pavel Stehule wrote:



On Fri, Jul 3, 2020 at 8:57 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:



On 2020/07/03 13:05, Pavel Stehule wrote:
 > Hi
 >
 > On Fri, Jul 3, 2020 at 4:39 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:
 >
 >
 >
 >     On 2020/07/01 7:37, Peter Geoghegan wrote:
 >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:
 >      >> Ants and Andres suggested to replace the spinlock used in pgss_store()
 >      >> with LWLock. I agreed with them and posted the POC patch doing that.
 >      >> But I think the patch is an item for v14. The patch may address the
 >      >> reported performance issue, but may cause other performance issues in
 >      >> other workloads. We would need to measure how the patch affects the
 >      >> performance in various workloads. It seems too late to do that at this
 >      >> stage of v13. Thought?
 >      >
 >      > I agree that it's too late for v13.
 >
 >     Thanks for the comment!
 >
 >     So I pushed the patch and changed default of track_planning to off.
 >
 >
 > Maybe this can be documented, so that users know enabling this option
 > can have a negative impact on performance.

Yes. What about adding either of the followings into the doc?

      Enabling this parameter may incur a noticeable performance penalty.

or

      Enabling this parameter may incur a noticeable performance penalty,
      especially when only a few kinds of queries are executed on many
      concurrent connections.


This second variant looks perfect for this case.


Ok, so patch attached.

Regards,

--
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION
diff --git a/doc/src/sgml/pgstatstatements.sgml 
b/doc/src/sgml/pgstatstatements.sgml
index 430d8bf07c..cf2d25b7b2 100644
--- a/doc/src/sgml/pgstatstatements.sgml
+++ b/doc/src/sgml/pgstatstatements.sgml
@@ -607,6 +607,9 @@
  
   pg_stat_statements.track_planning controls whether
   planning operations and duration are tracked by the module.
+  Enabling this parameter may incur a noticeable performance penalty,
+  especially when only a few kinds of queries are executed on many
+  concurrent connections.
   The default value is off.
   Only superusers can change this setting.
  


Re: Cache lookup errors with functions manipulation object addresses

2020-07-03 Thread Daniel Gustafsson
> On 2 Jul 2020, at 07:35, Michael Paquier  wrote:
> 
> On Thu, Mar 19, 2020 at 10:30:11PM +0900, Michael Paquier wrote:
>> The new FORMAT_TYPE_* flag still makes sense to me while reading this
>> code once again, as well as the extensibility of the new APIs for
>> operators and procedures.  One doubt I still have is if we should
>> really change the signature of getObjectDescription() to include the
>> new missing_ok argument or if a new function should be introduced.
>> I'd rather keep only one function as, even if this is called in many
>> places in the backend, I cannot track down an extension using it, but
>> I won't go against Alvaro's will either if he thinks something
>> different as this is his original design and commit as of f8348ea3.
> 
> Attached is an updated patch set, because of conflicts in the docs.
> Daniel, you are still registered as a reviewer of this patch, and it
> is marked as ready for committer?  Do you have more comments?  Would
> anybody have objections if I move on with 0001 and 0002 that extend
> the APIs to get the procedure, operator and type names?

No objections from me after skimming over the updated patchset.  I still
maintain that the format_operator_extended comment should be amended to include
that it can return NULL and not always a palloc'd string, but that's it.
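
For reference, the missing_ok convention in question follows the usual
pattern; a self-contained toy version (not the patch's actual signatures):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const char *descriptions[] = {"table t", "operator +"};

/*
 * Toy analogue of a getObjectDescription(..., missing_ok) style API:
 * with missing_ok the caller gets NULL back for a dangling reference
 * instead of a hard error.
 */
static char *
get_description(int objid, int missing_ok)
{
    if (objid < 0 || objid >= 2)
    {
        if (missing_ok)
            return NULL;        /* caller must be ready for NULL */
        fprintf(stderr, "cache lookup failed for object %d\n", objid);
        exit(1);                /* stand-in for elog(ERROR) */
    }
    return strdup(descriptions[objid]);     /* palloc'd in the real code */
}

int
main(void)
{
    char       *d = get_description(42, 1);

    printf("%s\n", d ? d : "(object is gone)");
    return 0;
}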

cheers ./daniel



Re: Creating a function for exposing memory usage of backend process

2020-07-03 Thread Fujii Masao




On 2020/07/03 11:45, torikoshia wrote:

On Wed, Jul 1, 2020 at 10:15 PM torikoshia  wrote:

I'm going to do some renaming and relocation.

- view name: pg_memory_contexts


I like a more specific name like pg_backend_memory_contexts.
But I'd like to hear more opinions about the name from others.



- function name: pg_get_memory_contexts()
- source file: mainly src/backend/utils/mmgr/mcxt.c


Attached an updated patch.


Thanks for updating the patch!

+   level integer

In catalog.sgml, "int4" and "int8" are used in other catalog tables.
So "integer" in the above should be "int4"?

+   total_bytes bigint

"bigint" should be "int8"?

+   Identification information of the memory context. This field is 
truncated if the identification field is longer than 1024 characters

"characters" should be "bytes"?

It's a bit confusing to have both "This field" and "the identification field"
in one description. What about just "This field is truncated at 1024 bytes"?

+  
+   Total bytes requested from malloc

Isn't it better not to use "malloc" in the description? For example,
what about something like "Total bytes allocated for this memory context"?

+#define PG_STAT_GET_MEMORY_CONTEXT_COLS9

Isn't it better to rename this to PG_GET_MEMORY_CONTEXTS_COLS
for the consistency with the function name?

+   memset(nulls, 0, sizeof(nulls));

"values[]" also should be initialized with zero?

Regards,


--
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION




Re: proposal - psql - possibility to redirect only tabular output

2020-07-03 Thread Artur Zakirov
Hello,

On Sat, Apr 11, 2020 at 6:20 PM Pavel Stehule  wrote:
> Main motivation for this patch is working with psql for writing and editing 
> queries, and browsing result in second terminal with pspg or any other 
> similar tool (tail, ...). The advantage of this setup is possibility to see 
> sql and query result together. I use terminal multiplicator (Tilix), but it 
> can be used without it.
>
> So example with pspg (should be some fresh release)
>
> 1. create fifo used for communication - mkfifo ~/pipe
>
> 2. run in one terminal pspg - pspg -f ~/pipe --hold-stream=2
>
> 3. run psql in other terminal

The patch looks interesting. As far as I understand, the purpose of the
patch is to hide status messages from the result output.
So maybe it would be enough just to hide status messages altogether. There
is the QUIET variable for that. The main advantage of this variable is
that it hides the status of "\lo_" commands, for example, as well as the
status of utility commands. So the QUIET variable covers more use
cases already.

-- 
Artur




Re: pgbench: option delaying queries till connections establishment?

2020-07-03 Thread Daniel Gustafsson
> On 17 May 2020, at 11:55, Fabien COELHO  wrote:

> I have split the patch.
> 
> * First patch reworks time measurements in pgbench.

> * Second patch adds a barrier before starting the bench
> 
> It applies on top of the previous one. The initial imbalance due to thread 
> creation times is smoothed.

The usecs patch fails to apply to HEAD, can you please submit a rebased version
of this patchset.  Also, when doing so, can you please rename the patches such
that they sort alphabetically in the order in which they are intended to be
applied.  The CFBot patch tester will otherwise try to apply them out of order,
which causes errors.

cheers ./daniel



Parallel worker hangs while handling errors.

2020-07-03 Thread vignesh C
Hi,

Parallel worker hangs while handling errors.

Analysis:
When there is an error in the parallel worker process, we call
ereport/elog with the error message. The worker then jumps from
errfinish to the setjmp in the StartBackgroundWorker function which was
set earlier. The worker process then sends the error message through
shared memory to the leader process. The shared memory queue size is
16K; if the error message is smaller than 16K it works fine. If the
error message is bigger, the worker process waits for the leader
process to read the message, free up some space in shared memory, and
set the latch. The worker will be waiting at the below backtrace:
#4  0x0090480c in WaitLatch (latch=0x7f2b39f6b454,
wakeEvents=33, timeout=0, wait_event_info=134217753) at latch.c:368
#5  0x00787c7f in mq_putmessage (msgtype=69 'E', s=0x2f24350
"SERROR", len=230015) at pqmq.c:171
#6  0x0078712e in pq_endmessage (buf=0x7ffe721c4370) at pqformat.c:301
#7  0x00ac1749 in send_message_to_frontend (edata=0xfe91a0
) at elog.c:3327
#8  0x00abdf5b in EmitErrorReport () at elog.c:1460

The leader process then identifies that there are some messages that
need to be processed; it copies the messages and sets the latch, via
shm_mq_inc_bytes_read -> SetLatch(&sender->procLatch), so that the
worker process can copy the remaining message. But the worker is not
able to receive any signal at this point and hangs indefinitely.
The worker hangs because, when the worker is started, the signals are
masked using sigprocmask. Unblocking of the signals is done by calling
BackgroundWorkerUnblockSignals in ParallelWorkerMain. But due to the
error handling, the worker has jumped to the setjmp in the
StartBackgroundWorker function, where the signals are still in a
blocked state, hence the signal is not received by the worker process.

One of the fixes could be to call BackgroundWorkerUnblockSignals just
after sigsetjmp. I'm not sure if this is the best solution.
Robert and I discussed the problem yesterday. We felt this is a genuine
problem with the parallel worker error handling and needs to be fixed.
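
The mechanism is easy to demonstrate outside PostgreSQL. A standalone
sketch (toy code, not the server's): the mask saved by sigsetjmp keeps a
pending signal blocked until the error path unblocks it:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigjmp_buf jmpbuf;

static void
handler(int signo)
{
    write(STDOUT_FILENO, "signal delivered\n", 17);
}

int
main(void)
{
    sigset_t    mask;

    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);
    sigprocmask(SIG_BLOCK, &mask, NULL);    /* fork-time blocking */
    signal(SIGUSR1, handler);

    /* savemask = 1, so siglongjmp restores the blocked mask */
    if (sigsetjmp(jmpbuf, 1) != 0)
    {
        /* error path: without this, the pending signal never arrives */
        sigprocmask(SIG_UNBLOCK, &mask, NULL);
        return 0;
    }

    kill(getpid(), SIGUSR1);    /* pending, but blocked */
    siglongjmp(jmpbuf, 1);      /* like ereport jumping to the setjmp */
}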
I could reproduce this issue when there is an error during a copy of
toast data using parallel copy, which is an in-progress project. I
don't have a test case that reproduces it on HEAD. Any suggestions for
a test case on HEAD?
The attached patch has the fix for this.

Thoughts?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
From a2177fdb0896160f209db4eebc5b4d80eb341e42 Mon Sep 17 00:00:00 2001
From: Vignesh C 
Date: Fri, 3 Jul 2020 12:18:55 +0530
Subject: [PATCH] Fix for Parallel worker hangs while handling errors.

The worker is not able to receive signals while processing the error flow. The
worker hangs in this case because, when the worker is started, the signals are
masked using sigprocmask. Unblocking of the signals is done by calling
BackgroundWorkerUnblockSignals in ParallelWorkerMain. Due to the error
handling, the worker has jumped to the setjmp in the StartBackgroundWorker
function, where the signals are in a blocked state, hence the signal is not
received by the worker process.
---
 src/backend/postmaster/bgworker.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index beb5e85..9663907 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -747,6 +747,11 @@ StartBackgroundWorker(void)
 	 */
 	if (sigsetjmp(local_sigjmp_buf, 1) != 0)
 	{
+		/*
+		 * Unblock signals (they were blocked when the postmaster forked us)
+		 */
+		BackgroundWorkerUnblockSignals();
+
 		/* Since not using PG_TRY, must reset error stack by hand */
 		error_context_stack = NULL;
 
-- 
1.8.3.1



Re: Missing "Up" navigation link between parts and doc root?

2020-07-03 Thread Peter Eisentraut

On 2020-06-21 09:19, Fabien COELHO wrote:

I've been annoyed that the documentation navigation does not always have an
"Up" link. It has them inside parts, but the link disappears and you have
to go for the "Home" link which is far on the right when on the root page
of a part?

Is there a good reason not to have the "Up" link there as well?


The original stylesheets explicitly go out of their way to do it that 
way.  We can easily fix that by removing that special case.  See 
attached patch.


That patch only fixes it for the header.  To fix it for the footer as 
well, we'd first need to import the navfooter template to be able to 
customize it.  Not a big problem though.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From fd256cfd4bc59daa2520083eca7ee3ca83991bfd Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Fri, 3 Jul 2020 10:56:48 +0200
Subject: [PATCH] doc: Don't hide the "Up" link when it is the same as "Home"

Reported-by: Fabien COELHO 
Discussion: 
https://www.postgresql.org/message-id/flat/alpine.DEB.2.22.394.2006210914370.859381%40pseudo
---
 doc/src/sgml/stylesheet.xsl | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/doc/src/sgml/stylesheet.xsl b/doc/src/sgml/stylesheet.xsl
index aeaa1e7c10..af0ac129e9 100644
--- a/doc/src/sgml/stylesheet.xsl
+++ b/doc/src/sgml/stylesheet.xsl
@@ -57,7 +57,6 @@ Customization of header
   
   
 
@@ -95,8 +94,7 @@ Customization of header
   
   
 
-  
+  
 
   
 
@@ -117,7 +115,6 @@ Customization of header
   
 
   
 
   
-- 
2.27.0



Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-03 Thread Simon Riggs
On Fri, 3 Jul 2020 at 09:07, vignesh C  wrote:


> While checking through the code I found that some of the function
> parameters in reorderbuffer & vacuumlazy are not used. I felt these
> could be removed. I'm not sure if they are kept for future use or not.
> The attached patch contains these changes.
> Thoughts?
>

Unused? To confirm that, everybody that has a logical decoding plugin needs
to check their code so we are certain this is sensible.

Seems like a change with low utility.

-- 
Simon Riggs                http://www.2ndQuadrant.com/

Mission Critical Databases


Re: [PATCH] Finally split StdRdOptions into HeapOptions and ToastOptions

2020-07-03 Thread Nikolay Shaplov
On Thursday, 2 July 2020 at 17:15:13 MSK, Daniel Gustafsson wrote:

> > A new version of the patch.
> > Autovacuum options were extended in b07642db
> > 
> > So I added that options to the current patch.
> 
> The heapam.c hunk in this version fails to apply to HEAD, can you please
> submit a rebased version?  
Thanks for reminding about it.

Here goes a rebased version.
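
For anyone skimming the diff below: the new AUTOVACUUM_RELOPTIONS(OFFSET)
macro leans on the fact that offsetof values compose across an embedded
struct. A standalone illustration, with trimmed-down toy versions of the
structs rather than the real definitions:

#include <stddef.h>
#include <stdio.h>

typedef struct AutoVacOpts
{
    int         enabled;
    int         vacuum_threshold;
} AutoVacOpts;

typedef struct HeapOptions
{
    int         fillfactor;
    AutoVacOpts autovacuum;     /* embedded by value, not a pointer */
} HeapOptions;

/* offset of a nested field = offset of the member + offset inside it */
#define AUTOVAC_FIELD_OFFSET(base_off, fld) \
    ((base_off) + offsetof(AutoVacOpts, fld))

int
main(void)
{
    size_t      off = AUTOVAC_FIELD_OFFSET(offsetof(HeapOptions, autovacuum),
                                           vacuum_threshold);

    printf("vacuum_threshold sits %zu bytes into HeapOptions\n", off);
    return 0;
}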


-- 
Software Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da
Body-oriented Therapist: https://vk.com/nataraj_rebalancing  (Russian)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 8ccc228..6118b55 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -46,9 +46,8 @@
  * upper and lower bounds (if applicable); for strings, consider a validation
  * routine.
  * (ii) add a record below (or use add__reloption).
- * (iii) add it to the appropriate options struct (perhaps StdRdOptions)
- * (iv) add it to the appropriate handling routine (perhaps
- * default_reloptions)
+ * (iii) add it to the appropriate options struct
+ * (iv) add it to the appropriate handling routine
  * (v) make sure the lock level is set correctly for that operation
  * (vi) don't forget to document the option
  *
@@ -1375,9 +1374,11 @@ extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,
 	switch (classForm->relkind)
 	{
 		case RELKIND_RELATION:
-		case RELKIND_TOASTVALUE:
 		case RELKIND_MATVIEW:
-			options = heap_reloptions(classForm->relkind, datum, false);
+			options = heap_reloptions(datum, false);
+			break;
+		case RELKIND_TOASTVALUE:
+			options = toast_reloptions(datum, false);
 			break;
 		case RELKIND_PARTITIONED_TABLE:
 			options = partitioned_table_reloptions(datum, false);
@@ -1811,59 +1812,66 @@ fillRelOptions(void *rdopts, Size basesize,
 
 
 /*
- * Option parser for anything that uses StdRdOptions.
+ * Option parsing definition for autovacuum. Used in toast and heap options.
+ */
+
+#define AUTOVACUUM_RELOPTIONS(OFFSET)\
+		{"autovacuum_enabled", RELOPT_TYPE_BOOL, \
+		OFFSET + offsetof(AutoVacOpts, enabled)},\
+		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT, \
+		OFFSET + offsetof(AutoVacOpts, vacuum_threshold)},   \
+		{"autovacuum_vacuum_insert_threshold", RELOPT_TYPE_INT,  \
+		OFFSET + offsetof(AutoVacOpts, vacuum_ins_threshold)},   \
+		{"autovacuum_analyze_threshold", RELOPT_TYPE_INT,\
+		OFFSET + offsetof(AutoVacOpts, analyze_threshold)},  \
+		{"autovacuum_vacuum_cost_delay", RELOPT_TYPE_REAL,   \
+		OFFSET + offsetof(AutoVacOpts, vacuum_cost_delay)},  \
+		{"autovacuum_vacuum_cost_limit", RELOPT_TYPE_INT,\
+		OFFSET + offsetof(AutoVacOpts, vacuum_cost_limit)},  \
+		{"autovacuum_freeze_min_age", RELOPT_TYPE_INT,   \
+		OFFSET + offsetof(AutoVacOpts, freeze_min_age)}, \
+		{"autovacuum_freeze_max_age", RELOPT_TYPE_INT,   \
+		OFFSET + offsetof(AutoVacOpts, freeze_max_age)}, \
+		{"autovacuum_freeze_table_age", RELOPT_TYPE_INT, \
+		OFFSET + offsetof(AutoVacOpts, freeze_table_age)},   \
+		{"autovacuum_multixact_freeze_min_age", RELOPT_TYPE_INT, \
+		OFFSET + offsetof(AutoVacOpts, multixact_freeze_min_age)},   \
+		{"autovacuum_multixact_freeze_max_age", RELOPT_TYPE_INT, \
+		OFFSET + offsetof(AutoVacOpts, multixact_freeze_max_age)},   \
+		{"autovacuum_multixact_freeze_table_age", RELOPT_TYPE_INT,   \
+		OFFSET + offsetof(AutoVacOpts, multixact_freeze_table_age)}, \
+		{"log_autovacuum_min_duration", RELOPT_TYPE_INT, \
+		OFFSET + offsetof(AutoVacOpts, log_min_duration)},   \
+		{"autovacuum_vacuum_scale_factor", RELOPT_TYPE_REAL, \
+		OFFSET + offsetof(AutoVacOpts, vacuum_scale_factor)},\
+		{"autovacuum_vacuum_insert_scale_factor", RELOPT_TYPE_REAL,  \
+		OFFSET  + offsetof(AutoVacOpts, vacuum_ins_scale_factor)},   \
+		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,\
+		OFFSET + offsetof(AutoVacOpts, analyze_scale_factor)}
+/*
+ * Option parser for heap
  */
 bytea *
-default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
+heap_reloptions(Datum reloptions, bool validate)
 {
 	static const relopt_parse_elt tab[] = {
-		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
-		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
-		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
-		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
-		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
-		{"autovacuum_vacuum_insert_threshold", RELOPT_TYPE_INT,
-		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_ins_threshold)},
-		{"autovacuum_analyze_threshold", RELOPT_TYPE_INT,
-		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, analyze_threshold)},
-

Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-03 Thread vignesh C
Hi,

While checking through the code I found that some of the function
parameters in reorderbuffer & vacuumlazy are not used. I felt these
could be removed. I'm not sure if they are kept for future use or not.
The attached patch contains these changes.
Thoughts?

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
From c575e1edda90a529ca24a73d6e99ea6f2af77980 Mon Sep 17 00:00:00 2001
From: Vignesh C 
Date: Fri, 3 Jul 2020 13:29:13 +0530
Subject: [PATCH] Remove unused function parameters in parallel vacuum &
 reorder buffer.

Remove unused function parameters in parallel vacuum & reorder buffer. The
function parameters are not used, so they can be removed.
---
 src/backend/access/heap/vacuumlazy.c|  8 ++---
 src/backend/replication/logical/decode.c|  3 +-
 src/backend/replication/logical/reorderbuffer.c | 42 -
 src/include/replication/reorderbuffer.h |  6 ++--
 4 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 68effca..1bbc459 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -390,7 +390,7 @@ static void update_index_statistics(Relation *Irel, IndexBulkDeleteResult **stat
 static LVParallelState *begin_parallel_vacuum(Oid relid, Relation *Irel,
 			  LVRelStats *vacrelstats, BlockNumber nblocks,
 			  int nindexes, int nrequested);
-static void end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,
+static void end_parallel_vacuum(IndexBulkDeleteResult **stats,
 LVParallelState *lps, int nindexes);
 static LVSharedIndStats *get_indstats(LVShared *lvshared, int n);
 static bool skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared);
@@ -1712,7 +1712,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	 * during parallel mode.
 	 */
 	if (ParallelVacuumIsActive(lps))
-		end_parallel_vacuum(Irel, indstats, lps, nindexes);
+		end_parallel_vacuum(indstats, lps, nindexes);
 
 	/* Update index statistics */
 	update_index_statistics(Irel, indstats, nindexes);
@@ -3361,8 +3361,8 @@ begin_parallel_vacuum(Oid relid, Relation *Irel, LVRelStats *vacrelstats,
  * context, but that won't be safe (see ExitParallelMode).
  */
 static void
-end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,
-	LVParallelState *lps, int nindexes)
+end_parallel_vacuum(IndexBulkDeleteResult **stats, LVParallelState *lps,
+	int nindexes)
 {
 	int			i;
 
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index c2e5e3a..19e1354 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -337,8 +337,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 (xl_invalidations *) XLogRecGetData(r);
 
 if (!ctx->fast_forward)
-	ReorderBufferImmediateInvalidation(ctx->reorder,
-	   invalidations->nmsgs,
+	ReorderBufferImmediateInvalidation(invalidations->nmsgs,
 	   invalidations->msgs);
 			}
 			break;
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 642a1c7..0bc2f15 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -220,7 +220,7 @@ static void ReorderBufferIterTXNInit(ReorderBuffer *rb, ReorderBufferTXN *txn,
 static ReorderBufferChange *ReorderBufferIterTXNNext(ReorderBuffer *rb, ReorderBufferIterTXNState *state);
 static void ReorderBufferIterTXNFinish(ReorderBuffer *rb,
 	   ReorderBufferIterTXNState *state);
-static void ReorderBufferExecuteInvalidations(ReorderBuffer *rb, ReorderBufferTXN *txn);
+static void ReorderBufferExecuteInvalidations(ReorderBufferTXN *txn);
 
 /*
  * ---
@@ -235,12 +235,12 @@ static Size ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN *txn
 		TXNEntryFile *file, XLogSegNo *segno);
 static void ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,
 	   char *change);
-static void ReorderBufferRestoreCleanup(ReorderBuffer *rb, ReorderBufferTXN *txn);
+static void ReorderBufferRestoreCleanup(ReorderBufferTXN *txn);
 static void ReorderBufferCleanupSerializedTXNs(const char *slotname);
 static void ReorderBufferSerializedPath(char *path, ReplicationSlot *slot,
 		TransactionId xid, XLogSegNo segno);
 
-static void ReorderBufferFreeSnap(ReorderBuffer *rb, Snapshot snap);
+static void ReorderBufferFreeSnap(Snapshot snap);
 static Snapshot ReorderBufferCopySnap(ReorderBuffer *rb, Snapshot orig_snap,
 	  ReorderBufferTXN *txn, CommandId cid);
 
@@ -437,13 +437,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change)
 		case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:
 			if (change->data.tp.newtuple)
 			{
-ReorderBufferR

Re: Curious - "logical replication launcher" (PID) existed with exit code 1

2020-07-03 Thread Peter Eisentraut

On 2020-06-23 22:48, David G. Johnston wrote:
In the following log file the presence of "exit code 1" after performing 
a "pg_ctl stop -m smart" shutdown is bugging me.  I take it most people 
would just ignore it as noise but a clean install from source, startup, 
and shutdown would ideally not result in a non-zero exit code being sent 
to the log.  But I've decided to stop trying to track it down on my own 
and at least mention it here.  It seems like f669c09989 probably 
introduced the behavior, and I can see how "restarting" is generally a 
good thing, but we are not going to restart the launcher during a clean 
shutdown and the exit code isn't being set conditionally.


Yeah, this is just a consequence of how background workers work.  One 
would need to adjust the handling of signaling, restart flags, exit 
codes, etc. to be able to do this more elegantly.  It's probably not 
that hard, but it needs some leg work to go through all the cases and 
issues.
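
For readers unfamiliar with the machinery: a background worker's exit
status doubles as a restart flag. A rough sketch of the registration side,
assuming the usual extension boilerplate (this is not the launcher's
actual code):

#include "postgres.h"
#include "fmgr.h"
#include "postmaster/bgworker.h"

PG_MODULE_MAGIC;

void        _PG_init(void);

void
_PG_init(void)
{
    BackgroundWorker worker;

    MemSet(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = 5;    /* seconds; or BGW_NEVER_RESTART */
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "demo");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "demo_main");
    snprintf(worker.bgw_name, BGW_MAXLEN, "demo worker");
    RegisterBackgroundWorker(&worker);

    /*
     * In the worker itself, proc_exit(0) deregisters it for good, while
     * proc_exit(1) asks the postmaster to restart it per bgw_restart_time;
     * that is why even a clean launcher shutdown is logged as exit code 1.
     */
}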


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Cleanup - adjust the code crossing 80-column window limit

2020-07-03 Thread Peter Eisentraut

On 2020-07-01 09:00, Amul Sul wrote:
The attached patch adjusts the ipc.c code to fit within the 80-column
window.


I can see an argument that this makes the code a bit easier to read, but 
making code fit into 80 columns doesn't have to be a goal by itself.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: max_slot_wal_keep_size and wal_keep_segments

2020-07-03 Thread Masahiko Sawada
On Thu, 2 Jul 2020 at 02:18, David Steele  wrote:
>
> On 7/1/20 10:54 AM, Alvaro Herrera wrote:
> > On 2020-Jul-01, Fujii Masao wrote:
> >
> >> On 2020/07/01 12:26, Alvaro Herrera wrote:
> >>> On 2020-Jun-30, Fujii Masao wrote:
> >>>
>  When I talked about max_slot_wal_keep_size as new feature in v13
>  at the conference, I received the question like "Why are the units of
>  setting values in max_slot_wal_keep_size and wal_keep_segments 
>  different?"
>  from audience. That difference looks confusing for users and
>  IMO it's better to use the same unit for them. Thought?
> >>>
> >>> Do we still need wal_keep_segments for anything?
> >>
> >> Yeah, personally I like wal_keep_segments because its setting is very
> >> simple and no extra operations on replication slots are necessary.
> >
> > Okay.  In that case I +1 the idea of renaming to wal_keep_size.
>
> +1 for renaming to wal_keep_size.
>

+1 from me, too.

Regards,


--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Persist MVCC forever - retain history

2020-07-03 Thread Konstantin Knizhnik




On 02.07.2020 21:55, Mitar wrote:

Hi!

(Sorry if this was already discussed, it looks pretty obvious, but I
could not find anything.)

I was thinking and reading about how to design the schema to keep
records of all changes which happen to the table, at row granularity,
when I realized that all this is already done for me by PostgreSQL
MVCC. All rows (tuples) are already stored, with an internal version
field as well.

So I wonder, how could I hack PostgreSQL to disable vacuuming a table,
so that all tuples persist forever, and how could I make those
internal columns visible so that I could make queries asking for
results at a particular historical version of the table state? My
understanding is that indices are already indexing over those internal
columns as well, so those queries over historical versions would be
efficient as well. Am I missing something which would make this not
possible?

Is this something I would have to run a custom version of PostgreSQL
or is this possible through an extension of sort?


Mitar


Did you read this thread:
https://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru
I have proposed a patch for supporting time travel (AS OF) queries.
But I didn't feel much interest in it from the community.
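
The idea, in toy form: every row version carries its inserting and
deleting transaction IDs, and an AS OF query filters on them. A heavily
simplified standalone sketch (nothing like the real visibility rules,
which also track commit status):

#include <stdint.h>
#include <stdio.h>

typedef struct TupleVersion
{
    uint32_t    xmin;           /* inserting transaction */
    uint32_t    xmax;           /* deleting transaction, 0 if still live */
    const char *payload;
} TupleVersion;

/* visible "as of" xid: inserted at or before it, not yet deleted by it */
static int
visible_as_of(const TupleVersion *t, uint32_t xid)
{
    return t->xmin <= xid && (t->xmax == 0 || t->xmax > xid);
}

int
main(void)
{
    TupleVersion history[] = {
        {100, 200, "v1 of the row"},
        {200, 0, "v2 of the row"},
    };

    for (int i = 0; i < 2; i++)
        if (visible_as_of(&history[i], 150))
            printf("as of xid 150: %s\n", history[i].payload);
    return 0;
}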





Re: SEARCH and CYCLE clauses

2020-07-03 Thread Peter Eisentraut
While the larger patch is being considered, I think some simpler and 
separable pieces could be addressed.


Here is a patch that adjusts the existing cycle detection example and 
test queries to put the cycle column before the path column.  The CYCLE 
clause puts them in that order, so if we added that feature, this order 
would make the sequence of examples more consistent and easier to follow.


(And while the order of columns has no semantic meaning, for a human 
left-to-right reader it also makes a bit more sense because the cycle 
flag is computed against the previous path value, so it happens "before" 
the path column.)


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 1042cad84631406eeec8715ab93c0451702b5c0d Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Fri, 3 Jul 2020 08:51:12 +0200
Subject: [PATCH] Adjust cycle detection examples and tests

Adjust the existing cycle detection example and test queries to put
the cycle column before the path column.  This is mainly because the
SQL-standard CYCLE clause puts them in that order, and so if we added
that feature that would make the sequence of examples more consistent
and easier to follow.
---
 doc/src/sgml/queries.sgml  |  26 +++---
 src/test/regress/expected/with.out | 124 ++---
 src/test/regress/sql/with.sql  |  16 ++--
 3 files changed, 83 insertions(+), 83 deletions(-)

diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index 572e968273..ef0ca56c66 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -2112,20 +2112,20 @@ Recursive Query Evaluation
UNION ALL to UNION would not 
eliminate the looping.
Instead we need to recognize whether we have reached the same row again
while following a particular path of links.  We add two columns
-   path and cycle to the 
loop-prone query:
+   is_cycle and path to 
the loop-prone query:
 
 
-WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
+WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
 SELECT g.id, g.link, g.data, 1,
-  ARRAY[g.id],
-  false
+  false,
+  ARRAY[g.id]
 FROM graph g
   UNION ALL
 SELECT g.id, g.link, g.data, sg.depth + 1,
-  path || g.id,
-  g.id = ANY(path)
+  g.id = ANY(path),
+  path || g.id
 FROM graph g, search_graph sg
-WHERE g.id = sg.link AND NOT cycle
+WHERE g.id = sg.link AND NOT is_cycle
 )
 SELECT * FROM search_graph;
 
@@ -2140,17 +2140,17 @@ Recursive Query Evaluation
compare fields f1 and 
f2:
 
 
-WITH RECURSIVE search_graph(id, link, data, depth, path, cycle) AS (
+WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
 SELECT g.id, g.link, g.data, 1,
-  ARRAY[ROW(g.f1, g.f2)],
-  false
+  false,
+  ARRAY[ROW(g.f1, g.f2)]
 FROM graph g
   UNION ALL
 SELECT g.id, g.link, g.data, sg.depth + 1,
-  path || ROW(g.f1, g.f2),
-  ROW(g.f1, g.f2) = ANY(path)
+  ROW(g.f1, g.f2) = ANY(path),
+  path || ROW(g.f1, g.f2)
 FROM graph g, search_graph sg
-WHERE g.id = sg.link AND NOT cycle
+WHERE g.id = sg.link AND NOT is_cycle
 )
 SELECT * FROM search_graph;
 
diff --git a/src/test/regress/expected/with.out 
b/src/test/regress/expected/with.out
index 67eaeb4f3e..457f3bf04f 100644
--- a/src/test/regress/expected/with.out
+++ b/src/test/regress/expected/with.out
@@ -579,79 +579,79 @@ insert into graph values
(1, 4, 'arc 1 -> 4'),
(4, 5, 'arc 4 -> 5'),
(5, 1, 'arc 5 -> 1');
-with recursive search_graph(f, t, label, path, cycle) as (
-   select *, array[row(g.f, g.t)], false from graph g
+with recursive search_graph(f, t, label, is_cycle, path) as (
+   select *, false, array[row(g.f, g.t)] from graph g
union all
-   select g.*, path || row(g.f, g.t), row(g.f, g.t) = any(path)
+   select g.*, row(g.f, g.t) = any(path), path || row(g.f, g.t)
from graph g, search_graph sg
-   where g.f = sg.t and not cycle
+   where g.f = sg.t and not is_cycle
 )
 select * from search_graph;
- f | t |   label|   path| cycle 
+---++---+---
- 1 | 2 | arc 1 -> 2 | {"(1,2)"} | f
- 1 | 3 | arc 1 -> 3 | {"(1,3)"} | f
- 2 | 3 | arc 2 -> 3 | {"(2,3)"} | f
- 1 | 4 | arc 1 -> 4 | {"(1,4)"} | f
- 4 | 5 | arc 4 -> 5 | {"(4,5)"} | f
- 5 | 1 | arc 5 -> 1 | {"(5,1)"} | f
- 1 | 2 | arc 1 -> 2 | {"(5,1)","(1,2)"} | f
- 1 | 3 | arc 1 -> 3 | {"(5,1)","(1,3)"} | f
- 1 | 4 | arc 1 -> 4 | {"(5,1)","(1,4)"} | f
- 2 | 3 | arc 2 -> 3 | {"(1,2)","(2,3)"}  

Re: track_planning causing performance regression

2020-07-03 Thread Pavel Stehule
On Fri, Jul 3, 2020 at 8:57 AM Fujii Masao  wrote:

>
>
> On 2020/07/03 13:05, Pavel Stehule wrote:
> > Hi
> >
> > On Fri, Jul 3, 2020 at 4:39 AM Fujii Masao  wrote:
> >
> >
> >
> > On 2020/07/01 7:37, Peter Geoghegan wrote:
> >  > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fu...@oss.nttdata.com> wrote:
> >  >> Ants and Andres suggested to replace the spinlock used in pgss_store()
> >  >> with LWLock. I agreed with them and posted the POC patch doing that.
> >  >> But I think the patch is an item for v14. The patch may address the
> >  >> reported performance issue, but may cause other performance issues in
> >  >> other workloads. We would need to measure how the patch affects the
> >  >> performance in various workloads. It seems too late to do that at this
> >  >> stage of v13. Thought?
> >  >
> >  > I agree that it's too late for v13.
> >
> > Thanks for the comment!
> >
> > So I pushed the patch and changed default of track_planning to off.
> >
> >
> > Maybe this can be documented, so that users know enabling this option
> > can have a negative impact on performance.
>
> Yes. What about adding either of the followings into the doc?
>
>  Enabling this parameter may incur a noticeable performance penalty.
>
> or
>
>  Enabling this parameter may incur a noticeable performance penalty,
>  especially when only a few kinds of queries are executed on many
>  concurrent connections.
>

This second variant looks perfect for this case.

Thank you

Pavel




> Regards,
>
> --
> Fujii Masao
> Advanced Computing Technology Center
> Research and Development Headquarters
> NTT DATA CORPORATION
>