Re: [HACKERS] [POC] hash partitioning

2017-05-24 Thread amul sul
On Mon, May 22, 2017 at 2:23 PM, amul sul  wrote:
>
> Updated patch attached. Thanks a lot for review.
>
Minor fix in the document, PFA.

Regards,
Amul


0002-hash-partitioning_another_design-v12.patch
Description: Binary data


0001-Cleanup_v4.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TPC-H Q20 from 1 hour to 19 hours!

2017-05-24 Thread Robert Haas
On Thu, Apr 6, 2017 at 4:37 PM, Tomas Vondra wrote:
> Which brings me to the slightly suspicious bit. On 9.5, there's no
> difference between GROUP and GROUP+LIKE cases - the estimates are exactly
> the same in both cases. This is true too, but only without the foreign key
> between "partsupp" and "part", i.e. the two non-grouped relations in the
> join. And what's more, the difference (1737 vs. 16) is pretty much exactly
> 100x, which is the estimate for the LIKE condition.

I don't follow this.  How does the foreign key between partsupp and
part change the selectivity of LIKE?

> So it kinda seems 9.5 does not apply this condition for semi-joins, while
> >=9.6 does that.

If 9.5 and prior are ignoring some of the quals, that's bad, but I
don't think I understand from your analysis why
7fa93eec4e0c9c3e801e3c51aa4bae3a38aaa218 regressed anything.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Commit fests created for PG11 development

2017-05-24 Thread Michael Paquier
Hi all,

Following the conclusions of the developer meeting, whose minutes are here:
https://wiki.postgresql.org/wiki/PgCon_2017_Developer_Meeting
the commit fest schedule has been decided to be the same as for 2016-2017,
with four commit fest entries, planned for one month each:
- 2017-09
- 2017-11
- 2018-01
- 2018-03

Those commit fest entries have been created in
https://commitfest.postgresql.org/. Please note that 2017-07 has been
renamed to 2017-09 and remains open, so all future patches can be
parked there, and all patches added up to now still remain on it.
Thanks,
-- 
Michael




[HACKERS] Server ignores contents of SASLInitialResponse

2017-05-24 Thread Michael Paquier
Hi all,

I have noticed today that the server completely ignores the contents
of SASLInitialResponse. For example, with the attached patch,
scram-trick-server:
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index f4397afc64..8fe1c8edfb 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -540,7 +540,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
conn->sasl_state = pg_fe_scram_init(conn->pguser, password);
if (!conn->sasl_state)
goto oom_error;
-   selected_mechanism = SCRAM_SHA256_NAME;
+   selected_mechanism = "kunfoobar";
}
}

This sends a custom string to the server as the SASL mechanism name,
which the server complains about with a COMMERROR log:
LOG:  client selected an invalid SASL authentication mechanism
However, this error is completely ignored and the server continues
authentication, succeeding if the password is right. It seems to me
that the error returned to the user should be a password mismatch,
with the COMMERROR message kept only for the server logs. Attached is
a patch to fix the problem.
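As a hypothetical sketch of the intended behavior (this is not the attached patch; the mechanism-name constant and the function shape are assumptions for illustration): an unrecognized mechanism must abort the exchange rather than merely log and continue.

```c
#include <stdio.h>
#include <string.h>

#define SCRAM_SHA256_NAME "SCRAM-SHA-256"	/* assumed mechanism name */

/*
 * Simplified stand-in for the server-side check: an unrecognized
 * mechanism in SASLInitialResponse must fail authentication, not just
 * emit a log entry and proceed with the SCRAM exchange anyway.
 * Returns 0 to continue the exchange, -1 to fail authentication.
 */
static int
check_sasl_mechanism(const char *selected_mechanism)
{
	if (strcmp(selected_mechanism, SCRAM_SHA256_NAME) != 0)
	{
		/* detailed message stays in the server log only */
		fprintf(stderr,
				"LOG: client selected an invalid SASL authentication mechanism\n");
		return -1;				/* caller reports an auth/password failure */
	}
	return 0;
}
```

With this shape, the "kunfoobar" mechanism from the trick patch above would fail authentication instead of being silently accepted.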

Open item added as well.

Thanks,
-- 
Michael


scram-trick-server.patch
Description: Binary data


fix-sasl-init.patch
Description: Binary data



Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Noah Misch
On Thu, May 25, 2017 at 11:41:19AM +0900, Michael Paquier wrote:
> On Thu, May 25, 2017 at 11:34 AM, Tsunakawa, Takayuki wrote:
> > From: pgsql-hackers-ow...@postgresql.org
> >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Noah Misch
>> Ten feels low to me.  The value should be low enough so users don't give
> >> up and assume a permanent hang, but there's little advantage to making it
> >> lower.
> >> I'd set it such that we give up in 1-5s on a modern Windows machine, which
> >> I expect implies a retry count of one hundred or more.
> >
> Then, maybe we can measure the time in each iteration and give up after a
> particular number of seconds.

Exact duration is not important.  Giving up after 0.1s is needlessly early,
because a system taking that long to start a backend is still usable.  Giving
up after 50s is quite late.  In between those extremes, lots of durations
would be reasonable.  Thus, measuring time is needless complexity; retry count
is a suitable proxy.

> Indeed, pgrename() does so with a 100ms sleep time between each
> iteration. Perhaps we could do that and limit to 50 iterations?

pgrename() is polling for an asynchronous event, hence the sleep.  To my
knowledge, time doesn't heal shm attach failures; therefore, a sleep is not
appropriate here.
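That reasoning can be sketched as a minimal stand-alone illustration — a bounded retry loop with no sleep between attempts (the retry limit and the stubbed attach call are assumptions, not the proposed patch):

```c
#include <stdbool.h>

#define MAX_ATTACH_RETRIES 100	/* the "one hundred or more" ballpark */

/*
 * Stub standing in for the Windows reattach call, which can fail when
 * ASLR happens to place a DLL over the reserved address range.
 */
static int	collisions_left = 3;

static bool
try_reattach(void)
{
	if (collisions_left > 0)
	{
		collisions_left--;
		return false;
	}
	return true;
}

/*
 * Bounded retry with no sleep: the failure is not an asynchronous event
 * that time will heal, so the retry count itself is a sufficient proxy
 * for "give up eventually".
 */
static bool
reattach_with_retries(void)
{
	for (int attempt = 1; attempt <= MAX_ATTACH_RETRIES; attempt++)
	{
		if (try_reattach())
			return true;
		/* a LOG line with the attempt number would go here */
	}
	return false;				/* treated as a fatal startup error */
}
```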




Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Michael Paquier
On Thu, May 25, 2017 at 11:34 AM, Tsunakawa, Takayuki wrote:
> From: pgsql-hackers-ow...@postgresql.org
>> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Noah Misch
>> Ten feels low to me.  The value should be low enough so users don't give
>> up and assume a permanent hang, but there's little advantage to making it
>> lower.
>> I'd set it such that we give up in 1-5s on a modern Windows machine, which
>> I expect implies a retry count of one hundred or more.
>
> Then, maybe we can measure the time in each iteration and give up after a
> particular number of seconds.

Indeed, pgrename() does so with a 100ms sleep time between each
iteration. Perhaps we could do that and limit to 50 iterations?
-- 
Michael




Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Noah Misch
> Ten feels low to me.  The value should be low enough so users don't give
> up and assume a permanent hang, but there's little advantage to making it
> lower.
> I'd set it such that we give up in 1-5s on a modern Windows machine, which
> I expect implies a retry count of one hundred or more.

Then, maybe we can measure the time in each iteration and give up after a
particular number of seconds.

Regards
Takayuki Tsunakawa





Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Noah Misch
On Wed, May 24, 2017 at 09:29:11AM -0400, Michael Paquier wrote:
> On Tue, May 23, 2017 at 8:14 AM, Amit Kapila  wrote:
> > So it seems both you and Tom are leaning towards some sort of retry
> > mechanism for shm reattach on Windows.  I also think that is a viable
> > option to negate the impact of ASLR.  Attached patch does that.  Note
> > that, as I have mentioned above I think we need to do it for shm
> > reserve operation as well.  I think we need to decide how many retries
> > are sufficient before bailing out.  As of now, I have used 10 to have
> > some similarity with PGSharedMemoryCreate(), but we can choose some
> > different count as well.  One might say that we can have "number of
> > retries" as a guc parameter, but I am not sure about it, so not used.
> 
> New GUCs can be backpatched if necessary, though this does not seem
> necessary. Who is going to set that up anyway if we have a high enough
> limit? 10 looks like a sufficient number to me.

Ten feels low to me.  The value should be low enough so users don't give up
and assume a permanent hang, but there's little advantage to making it lower.
I'd set it such that we give up in 1-5s on a modern Windows machine, which I
expect implies a retry count of one hundred or more.




Re: [HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-24 Thread Masahiko Sawada
On Wed, May 24, 2017 at 9:41 PM, Robert Haas  wrote:
> On Wed, May 24, 2017 at 7:31 PM, Peter Eisentraut wrote:
>> On 5/23/17 02:33, Kuntal Ghosh wrote:
>>>> The command succeeds even if slot_name is invalid. That's because slot_name
>>>> isn't validated. ReplicationSlotValidateName() should be called in
>>>> parse_subscription_options() to avoid a pilot error. IMHO we should prevent
>>>> a future error (use of invalid slot name).

>>> +1. Since, slot_name can be provided even when create_slot is set
>>> false, it should be validated as well while creating the subscription.
>>
>> This came up in a previous thread.  It is up to the publishing end what
>> slot names it accepts.  So running the validation locally is incorrect.
>
> That argument seems pretty tenuous; surely both ends are PostgreSQL,
> and the rules for valid slot names aren't likely to change very often.
>
> But even if we accept it as true, I still don't like the idea that a
> DROP can just fail, especially with no real guidance as to how to fix
> things so it doesn't fail.  Ideas:
>
> 1. If we didn't create the slot (and have never connected to it?),
> don't try to drop it.
>
> 2. Emit some kind of a HINT telling people about ALTER SUBSCRIPTION ..
> SET (slot_name = NONE).
>
> 3. ???
>

+1 to idea #2. We already emit such an errhint when the connection to the
publisher fails. I think we can do the same thing in this case.

subscriptioncmds.c:L928

wrconn = walrcv_connect(conninfo, true, subname, &err);
if (wrconn == NULL)
    ereport(ERROR,
            (errmsg("could not connect to publisher when attempting to "
                    "drop the replication slot \"%s\"", slotname),
             errdetail("The error was: %s", err),
             errhint("Use ALTER SUBSCRIPTION ... SET (slot_name = NONE) "
                     "to disassociate the subscription from the slot.")));

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-24 Thread Masahiko Sawada
On Wed, May 24, 2017 at 3:14 PM, Petr Jelinek wrote:
> Hi,
>
> I finally had time to properly analyze this, and it turns out we've all
> just been trying to fix symptoms rather than the actual problems.

Thank you for the patches!

> All the locking works just fine the way it is in master.
> The issue with
> deadlock with apply comes from the wrong handling of the SIGTERM in the
> apply (we didn't set InterruptPending). I changed the SIGTERM handler in
> patch 0001 just to die which is actually the correct behavior for apply
> workers. I also moved the connection cleanup code to the
> before_shmem_exit callback (similar to walreceiver) and now that part
> works correctly.
>
> The issue with orphaned sync workers is actually two separate issues.
> First, due to thinko we always searched for sync worker in
> wait_for_sync_status_change instead of searching for opposite worker as
> was the intention (i.e. sync worker should search for apply and apply
> should search for sync). Thats fixed by 0002. And second, we didn't
> accept any invalidation messages until the whole sync process finished
> (because it flattens all the remote transactions in the single one) so
> sync worker didn't learn about subscription changes/drop until it has
> finished, which I now fixed in 0003.
>
> There is still outstanding issue that sync worker will keep running
> inside the long COPY because the invalidation messages are also not
> processed until it finishes but all the original issues reported here
> disappear for me with the attached patches applied.
>

The issue reported on this thread seems to be solved with your patches.
But because DROP SUBSCRIPTION is the only DDL command that acquires a
lock on a system catalog with AccessExclusiveLock, and the logical
replication mechanism is complex, I'm concerned that there might be
other potential deadlock issues caused by taking system-catalog locks
at such a strong lock level. It might be a good idea to do
fine-grained locking, reduce the lock level, or both.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 7:31 PM, Peter Eisentraut wrote:
> On 5/23/17 02:33, Kuntal Ghosh wrote:
>>> The command succeeds even if slot_name is invalid. That's because slot_name
>>> isn't validated. ReplicationSlotValidateName() should be called in
>>> parse_subscription_options() to avoid a pilot error. IMHO we should prevent
>>> a future error (use of invalid slot name).
>>>
>> +1. Since, slot_name can be provided even when create_slot is set
>> false, it should be validated as well while creating the subscription.
>
> This came up in a previous thread.  It is up to the publishing end what
> slot names it accepts.  So running the validation locally is incorrect.

That argument seems pretty tenuous; surely both ends are PostgreSQL,
and the rules for valid slot names aren't likely to change very often.

But even if we accept it as true, I still don't like the idea that a
DROP can just fail, especially with no real guidance as to how to fix
things so it doesn't fail.  Ideas:

1. If we didn't create the slot (and have never connected to it?),
don't try to drop it.

2. Emit some kind of a HINT telling people about ALTER SUBSCRIPTION ..
SET (slot_name = NONE).

3. ???

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-24 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
> I didn't look at exactly how you tried to do that, but GUCs whose values
> depend on other GUCs generally don't work well at all.

transaction_read_only and transaction_isolation depend on 
default_transaction_read_only and default_transaction_isolation respectively.  
But I feel you are concerned about something I'm not aware of.  Could you share 
your worries?  I haven't found a problem so far.


> >> * Its value is false during recovery.
> 
> [ scratches head... ]  Surely this should read as "true" during recovery?

Ouch, you are right.


> Also, what happens if the standby server goes live mid-session?

Clients will learn of the change to session_read_only when the session does 
something that calls RecoveryInProgress().  Currently, RecoveryInProgress() 
seems to be the only place where a session notices the promotion, so I set 
session_read_only to the value of default_transaction_read_only there.  I think 
there is room for discussion here.  It would be ideal for the sessions to 
notice the server promotion promptly and notify the clients of the change; I 
have no good idea yet how to do that well.

Regards
Takayuki Tsunakawa







Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 7:16 PM, Peter Eisentraut wrote:
> On 5/22/17 07:42, Kuntal Ghosh wrote:
>> pg_dump ignores anything created under object name "pg_*" or
>> "information_schema".
>
> Publications have a slightly different definition of what tables to
> ignore/prohibit than pg_dump, partly because they have more built-in
> knowledge.  I'm not sure whether it's worth fixing this.

Well, I think if it's not going to work, it should be prohibited,
rather than seeming to work but then not actually working.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Improvement in log message of logical replication worker

2017-05-24 Thread Masahiko Sawada
On Wed, May 24, 2017 at 7:04 PM, Peter Eisentraut wrote:
> On 5/17/17 23:36, Masahiko Sawada wrote:
>> The patch looks good to me.
>> There are some log messages saying just 'logical replication worker
>> for subscription ...' in reread_subscription but should we add 'apply'
>> to these messages in order for user to distinguish between apply
>> worker and table sync worker?
>
> done and done
>

Thank you!

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-24 Thread Tsunakawa, Takayuki
From: Robert Haas [mailto:robertmh...@gmail.com]
> I've already stated my position on this, which is that:
> 
> * target_session_attrs=read-only is a perfectly good new feature, but we're
> past feature freeze, so it's material for v11.
> 
> * I'm not opposed to adding a GUC_REPORT GUC of some kind, but I see no
> urgency about that either.  The feature works fine as it is.  The fact that
> it could possibly be made to work more efficiently is not a critical bug.

I see.  I'll add this in the next CF for PG 11.  I'd be glad if you could 
review the patch in PG 11 development.  Also, I'll update the PostgreSQL 10 
Open Items page that way.

Regards
Takayuki Tsunakawa




Re: [HACKERS] fix for table syncing with different column order

2017-05-24 Thread Peter Eisentraut
On 5/18/17 20:30, Peter Eisentraut wrote:
> The issues with the different column orders in the regression test
> database also revealed that logical replication table syncing was broken
> for that case.  Here is a fix and a test.

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-24 Thread Peter Eisentraut
On 5/23/17 02:33, Kuntal Ghosh wrote:
>> The command succeeds even if slot_name is invalid. That's because slot_name
>> isn't validated. ReplicationSlotValidateName() should be called in
>> parse_subscription_options() to avoid a pilot error. IMHO we should prevent
>> a future error (use of invalid slot name).
>>
> +1. Since, slot_name can be provided even when create_slot is set
> false, it should be validated as well while creating the subscription.

This came up in a previous thread.  It is up to the publishing end what
slot names it accepts.  So running the validation locally is incorrect.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] ALTER PUBLICATION materializing the list of tables

2017-05-24 Thread Peter Eisentraut
On 5/22/17 18:08, Jeff Janes wrote:
> If I create a publication FOR ALL TABLES and then change my mind, the
> only thing I can do is drop the publication and recreate it.  
> 
> Since "ALTER PUBLICATION name SET TABLE" allows you to replace the
> entire table list, shouldn't it also let you change from the dynamic FOR
> ALL TABLES to a static specific list?

I would file this under "anything is possible, but not everything is
worthwhile".

The purpose of the ADD/SET TABLE clauses is to be able to add new
tables that are related to the tables you are already publishing (e.g.,
new partitions).  If you are already publishing all tables
automatically, then there is no reason for replacing that with a list of
specific tables.  You can also just add a new publication and add that
to the subscription.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] ALTER PUBLICATION documentation

2017-05-24 Thread Peter Eisentraut
On 5/22/17 17:50, Jeff Janes wrote:
> "The first variant of this command listed in the synopsis can change all
> of the publication properties specified in CREATE PUBLICATION
> ."
> 
> That referenced first variant no longer exists.  I don't know if that should
> just be removed, or reworked to instead describe "ALTER PUBLICATION name
> SET".

Yeah that got whacked around a bit too much.  I'll see about fixing it up.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-24 Thread Peter Eisentraut
On 5/22/17 07:42, Kuntal Ghosh wrote:
> pg_dump ignores anything created under object name "pg_*" or
> "information_schema".

Publications have a slightly different definition of what tables to
ignore/prohibit than pg_dump, partly because they have more built-in
knowledge.  I'm not sure whether it's worth fixing this.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Improvement in log message of logical replication worker

2017-05-24 Thread Peter Eisentraut
On 5/20/17 00:58, Alvaro Herrera wrote:
> Umm, just skimming here -- this patch shows some LOG messages using
> elog() rather than ereport(), which seems bogus to me.

Fixed that.

> Also:
>   "logical replication table synchronization worker for subscription 
> \"%s\", table \"%s\" has started"
> surely there is a more convenient name than "logical replication table
> synchronization worker" for this process?  I think just getting rid of
> the words "logical replication" there would be sufficient, since we
> don't have the concept of "table synchronization worker" in any other
> context.

We could look into that, but then we'd have to review all the log
messages so they are consistent.

> More generally, the overall wording of this message seems a bit off.
> How about something along the lines of
>   "starting synchronization for table \"%s\" in subscription \"%s\""
> ?

There is of course a difference between "starting" and "has started".
We used to have both of these messages, now we only have the latter by
default.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Improvement in log message of logical replication worker

2017-05-24 Thread Peter Eisentraut
On 5/17/17 23:36, Masahiko Sawada wrote:
> The patch looks good to me.
> There are some log messages saying just 'logical replication worker
> for subscription ...' in reread_subscription but should we add 'apply'
> to these messages in order for user to distinguish between apply
> worker and table sync worker?

done and done

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-24 Thread Tom Lane
Robert Haas  writes:
> On Wed, May 24, 2017 at 3:16 AM, Tsunakawa, Takayuki wrote:
>> For this, I added a GUC_REPORT variable session_read_only which indicates 
>> the session's default read-only status.  The characteristics are:
>> 
>> * It cannot be changed directly by the user (postgresql.conf, SET, etc.)
>> * Its value is the same as default_transaction_read_only when not in 
>> recovery.

I didn't look at exactly how you tried to do that, but GUCs whose values
depend on other GUCs generally don't work well at all.

>> * Its value is false during recovery.

[ scratches head... ]  Surely this should read as "true" during recovery?
Also, what happens if the standby server goes live mid-session?

>> Could you include this in PG 10?

I concur with Robert that this is too late for v10, and the argument to
shove it in anyway is not compelling.

regards, tom lane




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 3:16 AM, Tsunakawa, Takayuki wrote:
> I confirmed that the attached patch successfully provides:
>
> * target_session_attrs=read-only
> * If the server is >= 10, avoid the round-trip for SHOW transaction_read_only.
>
> For this, I added a GUC_REPORT variable session_read_only which indicates the 
> session's default read-only status.  The characteristics are:
>
> * It cannot be changed directly by the user (postgresql.conf, SET, etc.)
> * Its value is the same as default_transaction_read_only when not in recovery.
> * Its value is false during recovery.
>
> Could you include this in PG 10?  I think these are necessary as the bottom 
> line to meet the average expectation of users (please don't ask me what's the 
> average; the main reasons are that PostgreSQL provides hot standby, PgJDBC 
> enables connection to the standby (targetServerType=slave), and PostgreSQL 
> emphasizes performance.)  Ideally, I wanted to add other features of PgJDBC 
> (e.g. targetServerType=preferSlave), but I thought this is the limit not to 
> endanger the quality of the final release.

I've already stated my position on this, which is that:

* target_session_attrs=read-only is a perfectly good new feature, but
we're past feature freeze, so it's material for v11.

* I'm not opposed to adding a GUC_REPORT GUC of some kind, but I see
no urgency about that either.  The feature works fine as it is.  The
fact that it could possibly be made to work more efficiently is not a
critical bug.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unreachable code in partition.c

2017-05-24 Thread Robert Haas
On Tue, May 23, 2017 at 2:06 PM, Jeevan Ladhe wrote:
> Following code in function get_qual_for_list(partition.c) is not reachable.
>
> else
> result = list_make1(opexpr);
>
> Attached is the patch that removes this dead code.

Committed, but I added an Assert() as future-proofing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Amit Kapila
On Wed, May 24, 2017 at 6:59 PM, Michael Paquier wrote:
> On Tue, May 23, 2017 at 8:14 AM, Amit Kapila  wrote:
>> So it seems both you and Tom are leaning towards some sort of retry
>> mechanism for shm reattach on Windows.  I also think that is a viable
>> option to negate the impact of ASLR.  Attached patch does that.  Note
>> that, as I have mentioned above I think we need to do it for shm
>> reserve operation as well.  I think we need to decide how many retries
>> are sufficient before bailing out.  As of now, I have used 10 to have
>> some similarity with PGSharedMemoryCreate(), but we can choose some
>> different count as well.  One might say that we can have "number of
>> retries" as a guc parameter, but I am not sure about it, so not used.
>
> New GUCs can be backpatched if necessary, though this does not seem
> necessary. Who is going to set that up anyway if we have a high enough
> limit? 10 looks like a sufficient number to me.
>

Okay.

>> Another point to consider is that do we want the same retry mechanism
>> for EXEC_BACKEND builds (function
>> PGSharedMemoryReAttach is different on Windows and EXEC_BACKEND
>> builds).  I think it makes sense to have retry mechanism for
>> EXEC_BACKEND builds, so done that way in the patch.  Yet another point
>> which needs some thought is for reattach operation, before retrying do
>> we want to reserve the shm by using VirtualAllocEx?
>
> -   elog(FATAL, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
> +   {
> +   elog(LOG, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
>          UsedShmemSegID, UsedShmemSegAddr, GetLastError());
> +   return false;
> +   }
> This should be a WARNING, with the attempt number reported as well?
>

I think for a retry we just want to log it; why do you want to display it as
a warning?  During startup, other similar places (where we continue
startup even after the call has failed) also use LOG (refer to
PGSharedMemoryDetach), so why do it differently here?  However, I think
adding retry_count should be okay.

> -void
> -PGSharedMemoryReAttach(void)
> +bool
> +PGSharedMemoryReAttach(int retry_count)
> I think that the loop logic should be kept within
> PGSharedMemoryReAttach, this makes the code of postmaster.c cleaner,
>

Sure, we can do that, but then we need to repeat the same looping
logic in both the sysv and win32 cases. Now, if we decide not to do it
for the sysv case, then it might make sense to consider doing it in
PGSharedMemoryReAttach().

> and it seems to me that each step of PGSharedMemoryReAttach() should
> be retried in order. Do we need also to worry about SysV? I agree with
> you that having consistency is better, but I don't recall seeing
> failures or complains related to cygwin for ASLR.
>

I am also not aware of Cygwin failures, but I think keeping the code
the same for the cases where we are not using the fork mechanism seems
advisable.  Also, if someone is testing EXEC_BACKEND on Linux,
randomization is 'on' by default, so one can hit this issue during
tests; that doesn't matter much, but it still seems better to have the
retry logic to prevent the issue.

> I think that you are forgetting PGSharedMemoryCreate in the retry process.
>

No, we don't need a retry for PGSharedMemoryCreate, as we need this only
when we are trying to attach to some pre-reserved shared memory.  Do you
have something else in mind?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_upgrade translation

2017-05-24 Thread Tom Lane
Robert Haas  writes:
> On Mon, May 22, 2017 at 8:28 PM, Alvaro Herrera
>  wrote:
>> Translatability was turned on for pg_upgrade on pg10, but it needs some
>> work.  The choices are we fix it now, or we revert the addition of NLS
>> there and we do it for PG11.
>> Aye or nay?

> Aye.

+1

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_upgrade translation

2017-05-24 Thread Robert Haas
On Mon, May 22, 2017 at 8:28 PM, Alvaro Herrera
 wrote:
> Translatability was turned on for pg_upgrade on pg10, but it needs some
> work.  The choices are we fix it now, or we revert the addition of NLS
> there and we do it for PG11.
>
> I think the ugliest one is to change the messages about "the %s server",
> which are about a dozen -- things like:
> msgid "pg_ctl failed to start the %s server, or connection failed\n"
> where the %s is either "new" or "old".
>
> These need be to be split in two messages, and I think I'd replace the
> "old" with "source" and "new" with "target".
>
> Aye or nay?

Aye.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER SUBSCRIPTION ..SET PUBLICATION refresh is not throwing error.

2017-05-24 Thread Petr Jelinek
On 24/05/17 21:28, Petr Jelinek wrote:
> On 24/05/17 21:21, Petr Jelinek wrote:
>> On 24/05/17 21:07, Euler Taveira wrote:
>>> 2017-05-23 6:00 GMT-03:00 tushar :
>>>
>>>
>>> s=# alter subscription s1 set publication  skip refresh ;
>>> NOTICE:  removed subscription for table public.t
>>> NOTICE:  removed subscription for table public.t1
>>> ALTER SUBSCRIPTION
>>> s=#
>>>
>>>
>>> That's a design flaw. Since SKIP is not a reserved word, the parser considers
>>> it a publication name. Instead of causing an error, it executes
>>> another command (REFRESH) that is the opposite of what someone expects. Also, as
>>> "skip" is not a publication name, it removes all tables in the subscription.
>>>
>>
>> Ah that explains why I originally added the ugly NOREFRESH keyword.
>>
>>> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list SKIP REFRESH
>>> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list REFRESH
>>> opt_definition
>>>
>>> I think the first command was a bad design. Why don't we transform SKIP
>>> REFRESH into a REFRESH option?
>>>
>>> ALTER SUBSCRIPTION sub1 SET PUBLICATION pub1 REFRESH WITH (skip = true);
>>>
>>> skip (boolean): specifies that the command will not try to refresh table
>>> information. The default is false.
>>
>> That's quite confusing IMHO; saying REFRESH but then adding an option to
>> not actually refresh is not a good interface.
>>
>> I wonder if we actually need the SKIP REFRESH syntax; there is
>> "REFRESH [ WITH ... ]" when the user wants to refresh, so if REFRESH is not
>> specified, we can just behave as if SKIP REFRESH was used. It's not like
>> there is a 3rd possible behavior.
>>
> 
> Attached patch does exactly that.
> 

And of course I forgot to update docs...

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


Remove-the-SKIP-REFRESH-syntax-suggar-in-ALTER-SUBSC-v2.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Tightening isspace() tests where behavior should match SQL parser

2017-05-24 Thread Tom Lane
I wrote:
> Heikki Linnakangas  writes:
>> +1 for back-patching. If I understand correctly, it would change the 
>> behavior when you pass a string with non-ASCII leading or trailing 
>> whitespace characters to those functions. I suppose that can happen, but 
>> it's only pure luck if the current code happens to work in that case. 

> Well, it'd work properly for e.g. no-break space in LATINn.

Actually, it's dubious that treating no-break space as whitespace is
correct at all in these use-cases.  The core scanner would think it's
an identifier character, so it's not impossible that somebody would
consider it cute to write one as part of a SQL identifier.  If
the core scanner accepts that, so must these functions.

Hence, applied and back-patched.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER SUBSCRIPTION ..SET PUBLICATION refresh is not throwing error.

2017-05-24 Thread Petr Jelinek
On 24/05/17 21:21, Petr Jelinek wrote:
> On 24/05/17 21:07, Euler Taveira wrote:
>> 2017-05-23 6:00 GMT-03:00 tushar :
>>
>>
>> s=# alter subscription s1 set publication  skip refresh ;
>> NOTICE:  removed subscription for table public.t
>> NOTICE:  removed subscription for table public.t1
>> ALTER SUBSCRIPTION
>> s=#
>>
>>
>> That's a design flaw. Since SKIP is not a reserved word, the parser considers
>> it a publication name. Instead of causing an error, it executes
>> another command (REFRESH) that is the opposite of what someone expects. Also, as
>> "skip" is not a publication name, it removes all tables in the subscription.
>>
> 
> Ah that explains why I originally added the ugly NOREFRESH keyword.
> 
>> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list SKIP REFRESH
>> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list REFRESH
>> opt_definition
>>
>> I think the first command was a bad design. Why don't we transform SKIP
>> REFRESH into a REFRESH option?
>>
>> ALTER SUBSCRIPTION sub1 SET PUBLICATION pub1 REFRESH WITH (skip = true);
>>
>> skip (boolean): specifies that the command will not try to refresh table
>> information. The default is false.
> 
> That's quite confusing IMHO; saying REFRESH but then adding an option to
> not actually refresh is not a good interface.
> 
> I wonder if we actually need the SKIP REFRESH syntax; there is
> "REFRESH [ WITH ... ]" when the user wants to refresh, so if REFRESH is not
> specified, we can just behave as if SKIP REFRESH was used. It's not like
> there is a 3rd possible behavior.
> 

Attached patch does exactly that.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


0001-Remove-the-SKIP-REFRESH-syntax-suggar-in-ALTER-SUBSC.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER SUBSCRIPTION ..SET PUBLICATION refresh is not throwing error.

2017-05-24 Thread Petr Jelinek
On 24/05/17 21:07, Euler Taveira wrote:
> 2017-05-23 6:00 GMT-03:00 tushar :
> 
> 
> s=# alter subscription s1 set publication  skip refresh ;
> NOTICE:  removed subscription for table public.t
> NOTICE:  removed subscription for table public.t1
> ALTER SUBSCRIPTION
> s=#
> 
> 
> That's a design flaw. Since SKIP is not a reserved word, the parser considers
> it a publication name. Instead of causing an error, it executes
> another command (REFRESH) that is the opposite of what someone expects. Also, as
> "skip" is not a publication name, it removes all tables in the subscription.
> 

Ah that explains why I originally added the ugly NOREFRESH keyword.

> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list SKIP REFRESH
> ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list REFRESH
> opt_definition
> 
> I think the first command was a bad design. Why don't we transform SKIP
> REFRESH into a REFRESH option?
> 
> ALTER SUBSCRIPTION sub1 SET PUBLICATION pub1 REFRESH WITH (skip = true);
> 
> skip (boolean): specifies that the command will not try to refresh table
> information. The default is false.

That's quite confusing IMHO; saying REFRESH but then adding an option to
not actually refresh is not a good interface.

I wonder if we actually need the SKIP REFRESH syntax; there is
"REFRESH [ WITH ... ]" when the user wants to refresh, so if REFRESH is not
specified, we can just behave as if SKIP REFRESH was used. It's not like
there is a 3rd possible behavior.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-24 Thread Petr Jelinek
Hi,

I finally had time to properly analyze this, and it turns out we've all
just been trying to fix symptoms rather than the actual problems.

All the locking works just fine the way it is in master. The issue with
deadlock with apply comes from the wrong handling of the SIGTERM in the
apply (we didn't set InterruptPending). I changed the SIGTERM handler in
patch 0001 just to die which is actually the correct behavior for apply
workers. I also moved the connection cleanup code to the
before_shmem_exit callback (similar to walreceiver) and now that part
works correctly.

The issue with orphaned sync workers is actually two separate issues.
First, due to a thinko we always searched for the sync worker in
wait_for_sync_status_change instead of searching for the opposite worker as
was the intention (i.e. the sync worker should search for apply and apply
should search for sync). That's fixed by 0002. And second, we didn't
accept any invalidation messages until the whole sync process finished
(because it flattens all the remote transactions in the single one) so
sync worker didn't learn about subscription changes/drop until it has
finished, which I now fixed in 0003.

There is still outstanding issue that sync worker will keep running
inside the long COPY because the invalidation messages are also not
processed until it finishes but all the original issues reported here
disappear for me with the attached patches applied.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


0003-Receive-invalidation-messages-correctly-in-tablesync.patch
Description: binary/octet-stream


0002-Make-tablesync-worker-exit-when-apply-dies-while-it-.patch
Description: binary/octet-stream


0001-Fix-signal-handling-in-logical-workers.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER SUBSCRIPTION ..SET PUBLICATION refresh is not throwing error.

2017-05-24 Thread Euler Taveira
2017-05-23 6:00 GMT-03:00 tushar :

>
> s=# alter subscription s1 set publication  skip refresh ;
> NOTICE:  removed subscription for table public.t
> NOTICE:  removed subscription for table public.t1
> ALTER SUBSCRIPTION
> s=#


That's a design flaw. Since SKIP is not a reserved word, the parser considers it
a publication name. Instead of causing an error, it executes another
command (REFRESH) that is the opposite of what someone expects. Also, as "skip" is
not a publication name, it removes all tables in the subscription.

ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list SKIP REFRESH
ALTER SUBSCRIPTION name SET PUBLICATION publication_name_list REFRESH
opt_definition

I think the first command was a bad design. Why don't we transform SKIP
REFRESH into a REFRESH option?

ALTER SUBSCRIPTION sub1 SET PUBLICATION pub1 REFRESH WITH (skip = true);

skip (boolean): specifies that the command will not try to refresh table
information. The default is false.


-- 
   Euler Taveira   Timbira -
http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



Re: [HACKERS] generate_series regression 9.6->10

2017-05-24 Thread Paul Ramsey
Thanks Tom. This showed up in a regression test of ours that built the test
data using generate_series, so it's not a critical production issue or
anything, just a surprise change in behaviour.

P.

On Wed, May 24, 2017 at 10:28 AM, Tom Lane  wrote:

> Paul Ramsey  writes:
> > The behaviour of generate_series seems to have changed a little, at least
> > in conjunction w/ CTEs.
>
> What's changed is the behavior of multiple SRFs in a SELECT's targetlist,
> cf
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f4b9c85f168ae006929eec44fc44d569e846b9
>
> specifically this comment:
>
> While moving SRF evaluation to ProjectSet would allow to retain the old
> "least common multiple" behavior when multiple SRFs are present in one
> targetlist (i.e.  continue returning rows until all SRFs are at the
> end of
> their input at the same time), we decided to instead only return rows
> till
> all SRFs are exhausted, returning NULL for already exhausted ones.  We
> deemed the previous behavior to be too confusing, unexpected and
> actually
> not particularly useful.
>
> I see the current v10 release notes have failed miserably at documenting
> this :-(.  Will try to improve that.
>
> regards, tom lane
>


Re: [HACKERS] translatable string fixes

2017-05-24 Thread Tom Lane
Alvaro Herrera  writes:
> It took me a very long time to figure out how to translate these 9.6-new
> strings in the AM validate routines:
> msgid "gin operator family \"%s\" contains support procedure %s with 
> cross-type registration"
> ...
> Maybe we can use this phrasing:
> "gin operator family %s contains support procedure %s registered with 
> different left and right types"

OK with me, or maybe better "support procedure %s with different left and
right input types".  I doubt "registered" adds much.

> The other complaint I have about this one and also other amvalidate
> functions is the hardcoded AM name, so it's actually one string per AM,
> which is annoying (a total of twenty-something messages which are
> actually only four or five different ones).  Ignoring the second part of
> the phrase now, we could use this:
>   "operator family %s of access method %s contains support procedure %s with 
> cross-type registration"

If that seems better to the actual translators, it's OK with me. It's
not really clear where the boundary is between combining near-duplicate
messages and assembling messages at runtime.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generate_series regression 9.6->10

2017-05-24 Thread Tom Lane
Paul Ramsey  writes:
> The behaviour of generate_series seems to have changed a little, at least
> in conjunction w/ CTEs.

What's changed is the behavior of multiple SRFs in a SELECT's targetlist,
cf

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f4b9c85f168ae006929eec44fc44d569e846b9

specifically this comment:

While moving SRF evaluation to ProjectSet would allow to retain the old
"least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e.  continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows till
all SRFs are exhausted, returning NULL for already exhausted ones.  We
deemed the previous behavior to be too confusing, unexpected and actually
not particularly useful.

I see the current v10 release notes have failed miserably at documenting
this :-(.  Will try to improve that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generate_series regression 9.6->10

2017-05-24 Thread Andres Freund
On 2017-05-24 10:09:19 -0700, Paul Ramsey wrote:
> The behaviour of generate_series seems to have changed a little, at least
> in conjunction w/ CTEs. Under 9.6 (and prior) this query returns 2127 rows,
> with no nulls:
> 
> with
> ij as ( select i, j from generate_series(1, 10) i, generate_series(1, 10)
> j),
> iijj as (select generate_series(1, i) as a, generate_series(1, j) b from ij)
> select a, b from iijj;
> 
> Under 10, it returns only 715 rows, with many nulls.

Right, that's expected - we probably need to expand on that in the
release notes.  Before v10, targetlists with multiple SRFs were evaluated
using "least common multiple" logic.  I.e., if you have SELECT
generate_series(1,2), generate_series(1,4); once the first SRF was
exhausted it was restarted.  Evaluation stopped only once all SRFs
stopped returning rows at the same time.  Going forward, once either
SRF stops returning rows, it returns NULL until all SRFs are
exhausted.

Makes sense?  Is that a problem for you? If so, what do you use the LCM
logic for in practical terms?

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] generate_series regression 9.6->10

2017-05-24 Thread Paul Ramsey
The behaviour of generate_series seems to have changed a little, at least
in conjunction w/ CTEs. Under 9.6 (and prior) this query returns 2127 rows,
with no nulls:

with
ij as ( select i, j from generate_series(1, 10) i, generate_series(1, 10)
j),
iijj as (select generate_series(1, i) as a, generate_series(1, j) b from ij)
select a, b from iijj;

Under 10, it returns only 715 rows, with many nulls.


Re: [HACKERS] 10beta1 sequence regression failure on sparc64

2017-05-24 Thread Christoph Berg
Re: To Andres Freund 2017-05-18 <20170518192924.jkrzevlencp3g...@msg.df7cb.de>
> > If we had a typo or something in that code, the build farm should have
> > caught it by now.
> > 
> > I would try compiling with lower -O and see what happens.
> 
> Trying -O0 now.

Sorry for the late reply, I was hoping to catch you on IRC, but then
didn't manage to be around at a time that would fit the timezone
shift.

The -O0 build passed the regression tests. Not sure what that means
for our problem, though.

> Re: Andres Freund 2017-05-18 
> <20170518184811.7c44jcvwjvnzc...@alap3.anarazel.de>
> > Weird.  Christoph, IIRC you can help gaining access to a porter machine
> > to reproduce this?
> 
> I'll let the build run over night (the machine has 32 cores but is
> dead slow) and see what I can arrange tomorrow. (Peter will already
> have access, it's notker.debian.net.)

Will send you a separate mail for that.

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-05-24 Thread Robert Haas
On Mon, May 22, 2017 at 7:51 PM, Mark Rofail  wrote:
> Cloned the git repo found @ https://github.com/postgres/postgres and
> identified the main two files I will be concerned with. (I know I may need
> to edit other files but these seem to where I will spend most of my summer)
>
> src/backend/commands/tablecmds.c
> src/backend/utils/ri_triggers.c
>
> I am yet to identify the files concerned with the GIN opclass. <-- if anyone
> can help with this

There's not only one GIN opclass.  You can get a list like this:

select oid, * from pg_opclass where opcmethod = 2742;

Actually, you probably want to look for GIN opfamilies:

rhaas=# select oid, * from pg_opfamily where opfmethod = 2742;
 oid  | opfmethod |    opfname     | opfnamespace | opfowner
------+-----------+----------------+--------------+----------
 2745 |      2742 | array_ops      |           11 |       10
 3659 |      2742 | tsvector_ops   |           11 |       10
 4036 |      2742 | jsonb_ops      |           11 |       10
 4037 |      2742 | jsonb_path_ops |           11 |       10
(4 rows)

To see which SQL functions are used to implement a particular
opfamily, use the OID from the previous step in a query like this:

rhaas=# select prosrc from pg_amop, pg_operator, pg_proc where
amopfamily = 2745 and amopopr = pg_operator.oid and oprcode =
pg_proc.oid;
     prosrc
----------------
 array_eq
 arrayoverlap
 arraycontains
 arraycontained
(4 rows)

Then, you can look for those in the source tree.  You can also search
for the associated support functions, e.g.:

rhaas=# select distinct amprocnum, prosrc from pg_amproc, pg_proc
where amprocfamily = 2745 and amproc = pg_proc.oid order by 1, 2;
 amprocnum |        prosrc
-----------+-----------------------
 1 | bitcmp
 1 | bpcharcmp
 1 | btabstimecmp
 1 | btboolcmp
 1 | btcharcmp
 1 | btfloat4cmp
 1 | btfloat8cmp
 1 | btint2cmp
 1 | btint4cmp
 1 | btint8cmp
 1 | btnamecmp
 1 | btoidcmp
 1 | btoidvectorcmp
 1 | btreltimecmp
 1 | bttextcmp
 1 | bttintervalcmp
 1 | byteacmp
 1 | cash_cmp
 1 | date_cmp
 1 | interval_cmp
 1 | macaddr_cmp
 1 | network_cmp
 1 | numeric_cmp
 1 | time_cmp
 1 | timestamp_cmp
 1 | timetz_cmp
 2 | ginarrayextract
 3 | ginqueryarrayextract
 4 | ginarrayconsistent
 6 | ginarraytriconsistent
(30 rows)

You might want to read https://www.postgresql.org/docs/devel/static/xindex.html

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 10:32 AM, Andres Freund  wrote:
> Well, but then we should just remove minval/maxval if we can't rely on
> it.

That seems like a drastic overreaction to me.

> I wonder if that's not actually very little new code, and I think we
> might end up regretting having yet another inconsistent set of semantics
> in v10, which we'll then end up changing again in v11.

I'm not exercised enough about it to spend time on it or to demand
that Peter do so, but feel free to propose something.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UPDATE of partition key

2017-05-24 Thread Amit Kapila
On Wed, May 24, 2017 at 8:14 PM, Amit Kapila  wrote:
> On Wed, May 24, 2017 at 2:47 PM, Amit Khandekar  
> wrote:
>>
>> By now, majority of the opinions have shown that they do not favour
>> two triggers getting fired on a single update. Amit, do you consider
>> option 2 as a valid option ?
>>
>
> Sounds sensible to me.
>
>> That is, fire only UPDATE triggers. BR on
>> source partition, and AR on destination partition. Do you agree that
>> firing BR update trigger is essential since it can modify the row and
>> even prevent the update from happening ?
>>
>
> Agreed.
>
> Apart from above, there is one open issue [1]
>

Forgot to mention the link; doing it now.

[1] - 
https://www.postgresql.org/message-id/CAA4eK1KEZQ%2BCyXbBzfn1jFHoEfa_OemDLhLyy7xfD1QUZLo1DQ%40mail.gmail.com


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UPDATE of partition key

2017-05-24 Thread Amit Kapila
On Wed, May 24, 2017 at 2:47 PM, Amit Khandekar  wrote:
> On 18 May 2017 at 16:52, Amit Kapila  wrote:
>> On Wed, May 17, 2017 at 4:05 PM, Dilip Kumar  wrote:
>>> On Wed, May 17, 2017 at 3:15 PM, Amit Kapila  
>>> wrote:
> Earlier I thought that option1 is better but later I think that this
> can complicate the situation as we are firing first BR update then BR
> delete and can change the row multiple time and defining such
> behaviour can be complicated.
>

 If we have to go by this theory, then the option you have preferred
 will still execute BR triggers for both delete and insert, so input
 row can still be changed twice.
>>>
>>> Yeah, right as per my theory above Option3 have the same problem.
>>>
>>> But after putting some more thought I realised that only for "Before
>>> Update" or the "Before Insert" trigger row can be changed.
>>>
>>
>> Before Row Delete triggers can suppress the delete operation itself
>> which is kind of unintended in this case.  I think without the user
>> being aware it doesn't seem advisable to execute multiple BR triggers.
>
> By now, majority of the opinions have shown that they do not favour
> two triggers getting fired on a single update. Amit, do you consider
> option 2 as a valid option ?
>

Sounds sensible to me.

> That is, fire only UPDATE triggers. BR on
> source partition, and AR on destination partition. Do you agree that
> firing BR update trigger is essential since it can modify the row and
> even prevent the update from happening ?
>

Agreed.

Apart from the above, there is one open issue [1] related to generating an
error for a concurrent delete of a row, for which I have mentioned a
possible way of getting it done; do you want to try that option and see if
you face any issues in making progress along those lines?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-24 Thread Andres Freund
On 2017-05-24 10:24:19 -0400, Robert Haas wrote:
> On Wed, May 24, 2017 at 9:04 AM, Andres Freund  wrote:
> > At the very least we'll have to error out. That's still not nice usability 
> > wise, but it sure beats returning flat out wrong values.
> 
> I'm not sure.  That seems like it might often be worse.  Now you need
> manual intervention before anything even has a hope of working.

Well, but then we should just remove minval/maxval if we can't rely on
it.


> > I suspect that the proper fix would be to use a different relfilenode after 
> > ddl, when changing the seq file itself (I.e. setval and restart).  That 
> > seems like it'd be architecturally more appropriate, but also some work.
> 
> I can see some advantages to that, but it seems far too late to think
> about doing that in v10.

I wonder if that's not actually very little new code, and I think we
might end up regretting having yet another inconsistent set of semantics
in v10, which we'll then end up changing again in v11.

- Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Error log for psql (uploading backup) in PostgreSQL 9.3.17

2017-05-24 Thread Nick Dro

Hi,
I should say that I have been having this issue since 9.3.2. I thought it would be resolved when I upgraded to 9.3.17, but it still appears.
This is a very weird issue.
 
I upload backups using pg_restore and
psql -h SERVER -U USERNAME -f backup.sql -q -d databasename -pPORT 2>errors.txt >output.txt
 
When I open the errors.txt file I see:
 
psql:backup.sql:43112523: ERROR:  relation "tablename" does not exist
LINE 3: from tablename
        ^
QUERY:  select tablenameid
from tablename
join tablename_visable v using(statusid)
where id=$1 order by id,
case statusid when 2 then 1 when 1 then 2 else 3 end,
coalesce(rev,'-1') desc
limit 1
 
 
Now, the interesting thing is that tablename exists in the backup.
I can do:
select * from tablename
The table is OK, the data is OK. So I don't understand this error.
 
It looks like the commands are in the wrong order; maybe the table is created after the code that references it?
 
Why does this happen?


Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-24 Thread Michael Paquier
On Tue, May 23, 2017 at 6:26 AM, Heikki Linnakangas  wrote:
> Yeah. The be_tls_read() change looks OK to me.
>
> Can SSL_ERROR_ZERO_RETURN also happen in a write? I suppose it can, but
> returning 0 from secure_write() seems iffy.

It seems to me that it could be the case; the man page of SSL_write
tells me that:
   0   The write operation was not successful. Probably the
underlying connection was closed. Call SSL_get_error() with the return
value ret to find out, whether an error occurred or the connection was
shut down cleanly (SSL_ERROR_ZERO_RETURN).

> My man page for send(2) doesn't
> say that it would return 0 if the connection has been closed by the peer, so
> that would be different from the non-SSL codepath.

errno maps to ECONNRESET in this case. So send() can return 0 only if
the caller has specified 0 as the number of bytes to send?

> Looking at the only
> caller to secure_write(), it does check for 0, and would treat it correctly
> as EOF, so it would work. But it makes me a bit nervous, a comment in
> secure_write() at least would be in order, to point out that it can return 0
> to indicate EOF, unlike the raw write(2) or send(2) system functions.

Yep. Agreed. Hopefully improved in the attached.

> In libpq, do we want to set the "SSL connection has been closed
> unexpectedly" message?

Perhaps not.

> In the non-SSL case, pqsecure_raw_read() doesn't set
> any error message on EOF. Also, even though the comment in pgtls_read() says
> "... we should not report it as a server crash", looking at pqReadData, ISTM
> that returning 0 will in fact lead to the "server closed the connection
> unexpectedly" message. And it seems like a good assumption anyway, that the
> server did in fact terminate abnormally, if that happens. We don't expect
> the server to disconnect the client without notice, even though it's not
> unusual for the client to disconnect without notifying the server.

Yes.

Attached is an updated patch.
-- 
Michael


ssl-read-commerr-v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 9:04 AM, Andres Freund  wrote:
> At the very least we'll have to error out. That's still not nice usability 
> wise, but it sure beats returning flat out wrong values.

I'm not sure.  That seems like it might often be worse.  Now you need
manual intervention before anything even has a hope of working.

> I suspect that the proper fix would be to use a different relfilenode after 
> ddl, when changing the seq file itself (I.e. setval and restart).  That seems 
> like it'd be architecturally more appropriate, but also some work.

I can see some advantages to that, but it seems far too late to think
about doing that in v10.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Index created in BEFORE trigger not updated during INSERT

2017-05-24 Thread Andres Freund
On 2017-05-24 08:26:24 -0400, Robert Haas wrote:
> On Mon, May 22, 2017 at 7:05 AM, Albe Laurenz  wrote:
> > Not that it is a useful use case, but I believe that this is
> > a bug that causes index corruption:
> >
> > CREATE TABLE mytable(
> >id integer PRIMARY KEY,
> >id2 integer NOT NULL
> > );
> >
> > CREATE FUNCTION makeindex() RETURNS trigger
> >LANGUAGE plpgsql AS
> > $$BEGIN
> >CREATE INDEX ON mytable(id2);
> >RETURN NEW;
> > END;$$;
> 
> I'm willing to bet that nobody ever thought about that case very hard.
> It seems like we should either make it work or prohibit it, but I
> can't actually see quite how to do either off-hand.

Hm, strategically sprinkled CheckTableNotInUse() might do the trick?
I've neither tried nor thought this through fully, but I can't think of
a case where pre-existing relcache references to tables are ok...

- Andres




Re: retry shm attach for windows (WAS: Re: [HACKERS] OK, so culicidae is *still* broken)

2017-05-24 Thread Michael Paquier
On Tue, May 23, 2017 at 8:14 AM, Amit Kapila  wrote:
> So it seems both you and Tom are leaning towards some sort of retry
> mechanism for shm reattach on Windows.  I also think that is a viable
> option to negate the impact of ASLR.  Attached patch does that.  Note
> that, as I have mentioned above I think we need to do it for shm
> reserve operation as well.  I think we need to decide how many retries
> are sufficient before bailing out.  As of now, I have used 10 to have
> some similarity with PGSharedMemoryCreate(), but we can choose some
> different count as well.  One might say that we can have "number of
> retries" as a guc parameter, but I am not sure about it, so not used.

New GUCs can be backpatched if necessary, though that does not seem
needed here. Who is going to tune that anyway, if the limit is high
enough? 10 looks like a sufficient number to me.
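For illustration, the bounded-retry shape under discussion could be sketched like this (a generic sketch only, not the actual Windows shared-memory code; the operation, exception type, and retry limit are placeholders):

```python
def with_retries(operation, max_retries=10):
    """Run operation() until it succeeds or max_retries attempts are used.

    Mirrors the idea of retrying the shm reserve/reattach steps a fixed
    number of times before bailing out; returns the operation's result
    or raises after the last attempt fails.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except OSError as e:
            # In the real code this would emit a WARNING that reports
            # the attempt number, as suggested above.
            last_error = e
    raise RuntimeError(f"failed after {max_retries} attempts") from last_error
```

A caller would then wrap the single hard-failing call, e.g. with_retries(lambda: reattach_shmem(addr)), instead of erroring out on the first collision.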

> Another point to consider is that do we want the same retry mechanism
> for EXEC_BACKEND builds (function
> PGSharedMemoryReAttach is different on Windows and EXEC_BACKEND
> builds).  I think it makes sense to have retry mechanism for
> EXEC_BACKEND builds, so done that way in the patch.  Yet another point
> which needs some thought is for reattach operation, before retrying do
> we want to reserve the shm by using VirtualAllocEx?

-   elog(FATAL, "could not reattach to shared memory (key=%p,
addr=%p): error code %lu",
+   {
+   elog(LOG, "could not reattach to shared memory (key=%p,
addr=%p): error code %lu",
 UsedShmemSegID, UsedShmemSegAddr, GetLastError());
+   return false;
+   }
This should be a WARNING, with the attempt number reported as well?

-void
-PGSharedMemoryReAttach(void)
+bool
+PGSharedMemoryReAttach(int retry_count)
I think that the loop logic should be kept within
PGSharedMemoryReAttach, this makes the code of postmaster.c cleaner,
and it seems to me that each step of PGSharedMemoryReAttach() should
be retried in order. Do we also need to worry about SysV? I agree with
you that having consistency is better, but I don't recall seeing
failures or complaints related to cygwin for ASLR.

I think that you are forgetting PGSharedMemoryCreate in the retry process.
-- 
Michael




Re: [pgsql-translators] [HACKERS] translatable string fixes

2017-05-24 Thread Daniele Varrazzo
On Tue, May 23, 2017 at 1:15 AM, Alvaro Herrera
 wrote:
> It took me a very long time to figure out how to translate these 9.6-new
> strings in the AM validate routines:
>
> msgid "gin operator family \"%s\" contains support procedure %s with 
> cross-type registration"
>
> The problem I had was that the term "cross-type registration" is not
> used anywhere else, it's not clear what it means, and (worst from my
> POV) I couldn't think of any decent phrasing in Spanish for it.  After
> staring at the code for a while, I translated them roughly as:
>
> "gin operator family %s contains support procedure %s registered with 
> differing types"
>
> which I think is a tad clearer ... but as a user confronted with such a
> message, I would be at a complete loss on what to do about it.

I did something similar, translating the equivalent of "across
different types". Had to look at the source code to understand the
meaning of the sentence. Maybe cross-type registration is a bit too
cryptic.


> Maybe we can use this phrasing:
> "gin operator family %s contains support procedure %s registered with 
> different left and right types"
>
>
> The other complaint I have about this one and also other amvalidate
> functions is the hardcoded AM name, so it's actually one string per AM,
> which is annoying (a total of twenty-something messages which are
> actually only four or five different ones).  Ignoring the second part of
> the phrase now, we could use this:
>   "operator family %s of access method %s contains support procedure %s with 
> cross-type registration"

Yup, that was boring and error-prone :\


-- Daniele




Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-24 Thread Andres Freund


On May 24, 2017 6:57:08 AM EDT, Robert Haas  wrote:
>On Tue, May 23, 2017 at 11:25 PM, Andres Freund 
>wrote:
>> On 2017-05-23 22:47:07 -0400, Robert Haas wrote:
>>> On Mon, May 22, 2017 at 11:42 AM, Andres Freund 
>wrote:
>>> > Ooops.
>>> >
>>> > Two issues: Firstly, we get a value smaller than seqmin, obviously
>>> > that's not ok. But even if we'd error out, it'd imo still not be
>ok,
>>> > because we have a command that behaves partially transactionally
>>> > (keeping the seqmax/min transactionally), partially not (keeping
>the
>>> > current sequence state at -9).
>>>
>>> I don't really agree that this is broken.
>>
>> Just a quick clarification question: You did notice that nextval() in
>S1
>> after the rollback returned -9, despite seqmin being 0?  I can see
>> erroring out being acceptable, but returning flat out wrong
>values?
>
>I did see that.  I'm unclear what you think it should do instead.  I
>mean, it could notice that cur_val < min_val and return min_val, but
>that doesn't have a very good chance of being correct either.  I
>suppose the aborted transaction could try to restore the old cur_val
>when it rolls back, but that's clearly the wrong thing when there's no
>DDL involved (plus I'm not sure it would even be safe to try to do
>that).  Do you have an idea?

At the very least we'll have to error out. That's still not nice usability 
wise, but it sure beats returning flat out wrong values.

I suspect that the proper fix would be to use a different relfilenode after 
ddl, when changing the seq file itself (I.e. setval and restart).  That seems 
like it'd be architecturally more appropriate, but also some work.

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [HACKERS] Is it possible to get query_string value in an event trigger?

2017-05-24 Thread Robert Haas
On Mon, May 22, 2017 at 2:18 PM, Jeremy Finzel  wrote:
> Hello.  Is it possible to get the currently executing query in an event
> trigger, for example, a create table event trigger function firing after
> ddl_command_end, aside from checking pg_stat_activity for the current
> process?
>
> When I am debugging the source code, after executing a statement I can see
> the variable query_string which has the entire SQL statement, however, I
> would be very interested if it's possible to get this query_string value in
> a c function called during an event trigger, for a given process, or if by
> that time the memory has been freed and/or it's just not possible for some
> other reason to get the query string of a particular process.
>
> Any thoughts much appreciated.

I don't think there's a totally reliable way to do it today.  You
could look at debug_query_string, but that's the toplevel query
string, not necessarily the query string that fired the event trigger.
It looks like ProcessUtilitySlow() has the query_string, but it's not
passed down to EventTriggerDDLCommandStart(),
EventTriggerDDLCommandEnd(), EventTriggerSQLDrop(), or
EventTriggerTableRewrite().  In the first three cases, it seems like
that would be pretty easy to change, but the last one looks harder.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Broken hint bits (freeze)

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 7:27 AM, Dmitriy Sarafannikov
 wrote:
> It seems like the replicas did not replay the corresponding WAL records.
> Any thoughts?

heap_xlog_freeze_page() is a pretty simple function.  It's not
impossible that it could have a bug that causes it to incorrectly skip
records, but it's not clear why that wouldn't affect many other replay
routines equally, since the pattern of using the return value of
XLogReadBufferForRedo() to decide what to do is widespread.

Can you prove that other WAL records generated around the same time as
the freeze record *were* replayed on the master?  If so, that proves
that this isn't just a case of the WAL never reaching the standby.
Can you look at the segment that contains the relevant freeze record
with pg_xlogdump?  Maybe that record is messed up somehow.
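If it helps, the pg_xlogdump step can be narrowed with a small filter along these lines (a rough sketch over pre-captured output; the exact line layout is an assumption, so adjust the match to whatever your pg_xlogdump version actually prints):

```python
def find_freeze_records(xlogdump_lines, relfilenode):
    """Pick out FREEZE_PAGE records touching a given relfilenode from
    pre-captured pg_xlogdump output lines.  The match is deliberately
    loose: it only requires the record description to mention
    FREEZE_PAGE and the block reference to end in /relfilenode."""
    hits = []
    for line in xlogdump_lines:
        if "FREEZE_PAGE" in line and f"/{relfilenode} " in line:
            hits.append(line.strip())
    return hits
```

Fed with the dumped segment's output, this narrows the search down to the few records worth eyeballing by hand.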

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] wal_level > WAL_LEVEL_LOGICAL

2017-05-24 Thread Robert Haas
On Mon, May 22, 2017 at 9:08 AM, Neha Khatri  wrote:
> As per my understanding, the current postgres server supports only three
> values for wal_level, i.e. 'minimal', 'replica' or 'logical'. But the
> following error message brought to notice that there are various code
> spots that test for wal_level >= WAL_LEVEL_LOGICAL:

I suspect that this was intended as future-proofing.  I think it's
actually very reasonable to write the internal tests that way, but it
does seem strange that it's crept into the user-visible error
messages.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Index created in BEFORE trigger not updated during INSERT

2017-05-24 Thread Robert Haas
On Mon, May 22, 2017 at 7:05 AM, Albe Laurenz  wrote:
> Not that it is a useful use case, but I believe that this is
> a bug that causes index corruption:
>
> CREATE TABLE mytable(
>id integer PRIMARY KEY,
>id2 integer NOT NULL
> );
>
> CREATE FUNCTION makeindex() RETURNS trigger
>LANGUAGE plpgsql AS
> $$BEGIN
>CREATE INDEX ON mytable(id2);
>RETURN NEW;
> END;$$;

I'm willing to bet that nobody ever thought about that case very hard.
It seems like we should either make it work or prohibit it, but I
can't actually see quite how to do either off-hand.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-05-24 Thread Robert Haas
On Wed, May 24, 2017 at 6:28 AM, Mithun Cy  wrote:
> On Tue, May 23, 2017 at 7:06 PM, Mithun Cy  wrote:
>> Thanks, Andres,
>>
>> I have tried to fix all of your comments.
>
> There was a typo in the previous patch 07 where I had used != instead
> of ==, and a mistake in the comments. I have corrected both now.

+/*
+ * autoprewarm :
+ *
+ * What is it?
+ * ===
+ * A bgworker which automatically records information about blocks which were
+ * present in buffer pool before server shutdown and then prewarm the buffer
+ * pool upon server restart with those blocks.
+ *
+ * How does it work?
+ * =
+ * When the shared library "pg_prewarm" is preloaded, a
+ * bgworker "autoprewarm" is launched immediately after the server has reached
+ * consistent state. The bgworker will start loading blocks recorded in the
+ * format BlockInfoRecord
+ * <> in
+ * $PGDATA/AUTOPREWARM_FILE, until there is no free buffer left in the buffer
+ * pool. This way we do not replace any new blocks which were loaded either by
+ * the recovery process or the querying clients.
+ *
+ * Once the "autoprewarm" bgworker has completed its prewarm task, it will
+ * start a new task to periodically dump the BlockInfoRecords related to blocks
+ * which are currently in shared buffer pool. Upon next server restart, the
+ * bgworker will prewarm the buffer pool by loading those blocks. The GUC
+ * pg_prewarm.dump_interval will control the dumping activity of the bgworker.
+ */

Make this part of the file header comment.  Also, add an enabling GUC.
The default can be true, but it should be possible to preload the
library so that the SQL functions are available without a dynamic
library load without requiring you to get the auto-prewarm behavior.
I suggest pg_prewarm.autoprewarm = true / false.

Your SigtermHandler and SighupHandler routines are still capitalized
in a way that's not very consistent with what we do elsewhere.  I
think all of our other signal handlers have names_like_this() not
NamesLikeThis().

+ * ==types and variables used by autoprewam  =

Spelling.

+ * Meta-data of each persistent block which is dumped and used to load.

Metadata

+typedef struct BlockInfoRecord
+{
+Oiddatabase;/* database */
+OidspcNode;/* tablespace */
+Oidfilenode;/* relation's filenode. */
+ForkNumberforknum;/* fork number */
+BlockNumber blocknum;/* block number */
+} BlockInfoRecord;

spcNode is capitalized differently from all of the other members.

+LWLock   *lock;/* protects SharedState */

Just declare this as LWLock lock, and initialize it using
LWLockInitialize.  The way you're doing it is more complicated.

+static dsa_area *AutoPrewarmDSA = NULL;

DSA seems like more than you need here.  There's only one allocation
being performed.  I think it would be simpler and less error-prone to
use DSM directly.  I don't even think you need a shm_toc.  You could
just do:

seg = dsm_create(segsize, 0);
blkinfo = dsm_segment_address(seg);

Then pass dsm_segment_handle(seg) to the worker using bgw_main_arg or
bgw_extra, and have it call dsm_attach.  An advantage of this approach
is that you only allocate the memory you actually need, whereas DSA
will allocate more, expecting that you might do further allocations.

+pg_qsort(block_info_array, num_blocks, sizeof(BlockInfoRecord),
+ blockinfo_cmp);

I think it would make more sense to sort the array on the load side,
in the autoprewarm worker, rather than when dumping.  First, many
dumps will never be reloaded, so there's no real reason to waste time
sorting them.  Second, the way you've got the code right now, it
relies heavily on the assumption that the dump file will be sorted,
but that doesn't seem like a necessary assumption.  We don't really
expect users to hand-edit the dump files, but if they do, it'd be nice
if it didn't randomly break things unnecessarily.
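The load-side sort would conceptually amount to the following (a standalone sketch; the record type here is hypothetical, with fields mirroring the BlockInfoRecord quoted above):

```python
from collections import namedtuple

# Mirrors the BlockInfoRecord fields quoted above (hypothetical stand-in).
BlockInfoRecord = namedtuple(
    "BlockInfoRecord", ["database", "spcnode", "filenode", "forknum", "blocknum"]
)

def sort_for_load(records):
    """Sort dumped block records so that blocks of the same relation fork
    come out adjacent, letting the prewarm worker open each relation once
    and read its blocks in order.  Sorting here, at load time, keeps a
    hand-edited (unsorted) dump file working and avoids wasting time
    sorting dumps that are never reloaded."""
    return sorted(records, key=lambda r: (r.database, r.spcnode, r.filenode,
                                          r.forknum, r.blocknum))
```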

+ errmsg("autoprewarm: could not open \"%s\": %m",
+dump_file_path)));

Use standard error message wordings!  Don't create new translatable
messages by adding "autoprewarm" to error messages.  There are
multiple instances of this throughout the patch.

+snprintf(dump_file_path, sizeof(dump_file_path),
+ "%s", AUTOPREWARM_FILE);

This is completely pointless.  You can get rid of the dump_file_path
variable and just use AUTOPREWARM_FILE wherever you would have used
dump_file_path.  It's just making a copy of a string you already have.
Also, this code interrupts the flow of the surrounding logic in a
weird way; even if we needed it, it would make more sense to put it up
higher, where we construct the temporary path, or down lower, where
the value is actually needed.

+SPI_connect();
+

Re: [HACKERS] Broken hint bits (freeze)

2017-05-24 Thread Dmitriy Sarafannikov
We found that this problem also appears on shards with checksums enabled.
This shard is on timeline 1, which means there was no switchover after the
upgrade to 9.6.

xdb11f(master)=# select pg_current_xlog_location(), pg_xlogfile_name(pg_current_xlog_location());
 pg_current_xlog_location |     pg_xlogfile_name
--------------------------+--------------------------
 30BA/5966AD38            | 00000001000030BA00000059
(1 row)

xdb11f(master)=# select * from page_header(get_raw_page('mytable', 1787));
      lsn      | checksum | flags | lower | upper | special | pagesize | version | prune_xid
---------------+----------+-------+-------+-------+---------+----------+---------+-----------
 1F43/8C432C60 |    -3337 |     5 |   256 |   304 |    8192 |     8192 |       4 |         0
(1 row)

xdb11h(replica)=# select * from page_header(get_raw_page('mytable', 1787));
      lsn      | checksum | flags | lower | upper | special | pagesize | version | prune_xid
---------------+----------+-------+-------+-------+---------+----------+---------+-----------
 1B28/45819C28 |   -17617 |     5 |   256 |   304 |    8192 |     8192 |       4 |         0
(1 row)

xdb11e(replica)=# select * from page_header(get_raw_page('mytable', 1787));
      lsn      | checksum | flags | lower | upper | special | pagesize | version | prune_xid
---------------+----------+-------+-------+-------+---------+----------+---------+-----------
 1B28/45819C28 |   -17617 |     5 |   256 |   304 |    8192 |     8192 |       4 |         0
(1 row)

The master has a newer page LSN and has the freeze bits set.

xdb11f(master)=# select t_xmin, t_infomask::bit(32) & X'0300'::int::bit(32) from heap_page_items(get_raw_page('mytable', 1787)) where lp = 42;
  t_xmin   |             ?column?
-----------+----------------------------------
 516651778 | 00000000000000000000001100000000
(1 row)

xdb11h(replica)=# select t_xmin, t_infomask::bit(32) & X'0300'::int::bit(32) from heap_page_items(get_raw_page('mytable', 1787)) where lp = 42;
  t_xmin   |             ?column?
-----------+----------------------------------
 516651778 | 00000000000000000000000000000000
(1 row)

xdb11e(replica)=# select t_xmin, t_infomask::bit(32) & X'0300'::int::bit(32) from heap_page_items(get_raw_page('mytable', 1787)) where lp = 42;
  t_xmin   |             ?column?
-----------+----------------------------------
 516651778 | 00000000000000000000000000000000
(1 row)
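For reference, the X'0300' mask used above selects the two xmin hint bits, and a frozen tuple has both of them set; a quick standalone check of the arithmetic (constants as defined in PostgreSQL's htup_details.h):

```python
# xmin hint bits from src/include/access/htup_details.h
HEAP_XMIN_COMMITTED = 0x0100
HEAP_XMIN_INVALID = 0x0200
# Both bits set together mean "xmin frozen" (since 9.4).
HEAP_XMIN_FROZEN = HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID  # 0x0300

def xmin_frozen(t_infomask):
    """True if both xmin hint bits are set, i.e. the tuple is frozen."""
    return (t_infomask & HEAP_XMIN_FROZEN) == HEAP_XMIN_FROZEN
```

So the master's tuple tests as frozen, while the replicas' copies, with the masked bits all zero, do not.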

It seems like the replicas did not replay the corresponding WAL records.
Any thoughts?

Regards,
Dmitriy Sarafannikov



Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-24 Thread Robert Haas
On Tue, May 23, 2017 at 11:25 PM, Andres Freund  wrote:
> On 2017-05-23 22:47:07 -0400, Robert Haas wrote:
>> On Mon, May 22, 2017 at 11:42 AM, Andres Freund  wrote:
>> > Ooops.
>> >
>> > Two issues: Firstly, we get a value smaller than seqmin, obviously
>> > that's not ok. But even if we'd error out, it'd imo still not be ok,
>> > because we have a command that behaves partially transactionally
>> > (keeping the seqmax/min transactionally), partially not (keeping the
>> > current sequence state at -9).
>>
>> I don't really agree that this is broken.
>
> Just a quick clarification question: You did notice that nextval() in S1
> after the rollback returned -9, despite seqmin being 0?  I can see
> erroring out being acceptable, but returning flat out wrong values?

I did see that.  I'm unclear what you think it should do instead.  I
mean, it could notice that cur_val < min_val and return min_val, but
that doesn't have a very good chance of being correct either.  I
suppose the aborted transaction could try to restore the old cur_val
when it rolls back, but that's clearly the wrong thing when there's no
DDL involved (plus I'm not sure it would even be safe to try to do
that).  Do you have an idea?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] UPDATE of partition key

2017-05-24 Thread Amit Khandekar
On 12 May 2017 at 09:27, Amit Kapila  wrote:
>
> + is_partitioned_table =
> + root_rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE;
> +
> + if (is_partitioned_table)
> + ExecSetupPartitionTupleRouting(
> + root_rel,
> + /* Build WITH CHECK OPTION constraints for leaf partitions */
> + ExecInitPartitionWithCheckOptions(mtstate, root_rel);
> + /* Build a projection for each leaf partition rel. */
> + ExecInitPartitionReturningProjection(mtstate, root_rel);
> ..
> + /* It's not a partitioned table after all; error out. */
> + ExecPartitionCheckEmitError(resultRelInfo, slot, estate);
>
> When we are anyway going to give error if table is not a partitioned
> table, then isn't it better to give it early when we first identify
> that.

Yeah that's right, fixed. Moved the partitioned table check early.
This also showed that there is no need for is_partitioned_table
variable. Accordingly adjusted the code.


> -
> +static void ExecInitPartitionWithCheckOptions(ModifyTableState *mtstate,
> +  Relation root_rel);
> Spurious line delete.

Done.

Also rebased the patch over latest code.

Attached v8 patch.


update-partition-key_v8.patch
Description: Binary data



[HACKERS] server closed the connection message in the log file of standby while performing logical replication synchronization

2017-05-24 Thread tushar

Hi,

I am getting a "server closed the connection" message in the log file of 
the standby after the 'logical replication synchronization worker finished 
processing' LOG message.


2017-05-24 08:58:04.451 BST [28496] LOG:  logical replication sync for 
subscription s_5434, table t1039 started
2017-05-24 08:58:05.447 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:05.453 BST [28498] LOG:  logical replication sync for 
subscription s_5434, table t104 started
2017-05-24 08:58:06.449 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:06.455 BST [28500] LOG:  logical replication sync for 
subscription s_5434, table t1040 started
2017-05-24 08:58:07.451 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:07.458 BST [28502] LOG:  logical replication sync for 
subscription s_5434, table t1041 started
2017-05-24 08:58:08.453 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:08.461 BST [28504] LOG:  logical replication sync for 
subscription s_5434, table t1042 started
2017-05-24 08:58:23.540 BST [28463] LOG:  logical replication 
synchronization worker finished processing
2017-05-24 08:58:23.550 BST [28461] LOG:  logical replication 
synchronization worker finished processing

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:23.622 BST [28465] ERROR:  error reading result of 
streaming command: server closed the connection unexpectedly

This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:23.627 BST [28340] LOG:  worker process: logical 
replication worker for subscription 31385 sync 19463 (PID 28465) exited 
with exit code 1

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:33.624 BST [28467] ERROR:  error reading result of 
streaming command: server closed the connection unexpectedly

This probably means the server terminated abnormally
before or while processing the request.

Steps to reproduce -

X Cluster -
Modify/add the below parameters in the postgresql.conf file:
wal_level=logical
logging_collector=on
max_replication_slots = 100
max_worker_processes = 50
max_logical_replication_workers = 35
max_sync_workers_per_subscription = 25
max_wal_senders=50
log_min_duration_statement = 0
port=5432

Start the server

connect to psql
create 5000 tables
create publication -> create publication pub for all tables;

Y Cluster-

Modify/add the below parameters in the postgresql.conf file:

wal_level=logical
logging_collector=on
max_replication_slots = 100
max_worker_processes = 50
max_logical_replication_workers = 35
max_sync_workers_per_subscription = 25
max_wal_senders=50
log_min_duration_statement = 0
port=5433

Start the server

connect to psql
create 5000 tables
create subscription ->create subscription suv connection 
'dbname=postgres port=5432 user=centos host=localhost' publication pub;


Check the log file on the standby (within 5-10 minutes, the "server 
closed ..." message should appear).


Attaching both log files (master and standby).

Reproducing this issue is not consistent: out of 5 attempts, I am able to 
reproduce it only 2 times.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



files.tar.bz2
Description: application/bzip



Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-05-24 Thread Mithun Cy
On Tue, May 23, 2017 at 7:06 PM, Mithun Cy  wrote:
> Thanks, Andres,
>
> I have tried to fix all of your comments.

There was a typo in the previous patch 07 where I had used != instead
of ==, and a mistake in the comments. I have corrected both now.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


autoprewarm_08.patch
Description: Binary data



[HACKERS] CentOS based openshift ready docker container

2017-05-24 Thread Mohammed Zeeshan
Hi

   I have successfully built PostgreSQL containers that work on
OpenShift[1]; they are available in my git repository[2]. These containers
are currently built on the CentOS Container Pipeline[3] and delivered to
registry.centos.org.

I am proposing that they be added to the postgresql repositories.

[1] https://www.openshift.com/
[2] https://github.com/mohammedzee1000/postgresql_openshift
[3] https://wiki.centos.org/ContainerPipeline

-- 
*Mohammed Zeeshan Ahmed, *
Associate Software Engineer, Red Hat
B.E Computer Science Engineering
Certified IT & Cloud Architect & RHCSA
+919986458839
Bengaluru, India

mzee1000.io
PGP Public key :
https://pgp.mit.edu/pks/lookup?op=get&search=0x147A8569AFC5E9ED


Re: [HACKERS] UPDATE of partition key

2017-05-24 Thread Amit Khandekar
On 18 May 2017 at 16:52, Amit Kapila  wrote:
> On Wed, May 17, 2017 at 4:05 PM, Dilip Kumar  wrote:
>> On Wed, May 17, 2017 at 3:15 PM, Amit Kapila  wrote:
 Earlier I thought that option1 is better but later I think that this
 can complicate the situation as we are firing first BR update then BR
 delete and can change the row multiple time and defining such
 behaviour can be complicated.

>>>
>>> If we have to go by this theory, then the option you have preferred
>>> will still execute BR triggers for both delete and insert, so input
>>> row can still be changed twice.
>>
>> Yeah, right as per my theory above Option3 have the same problem.
>>
>> But after putting some more thought I realised that only for "Before
>> Update" or the "Before Insert" trigger row can be changed.
>>
>
> Before Row Delete triggers can suppress the delete operation itself
> which is kind of unintended in this case.  I think without the user
> being aware it doesn't seem advisable to execute multiple BR triggers.

By now, the majority of opinions are against two triggers getting
fired on a single update. Amit, do you consider option 2 a valid
option? That is, fire only UPDATE triggers: BR on the source
partition, and AR on the destination partition. Do you agree that
firing the BR update trigger is essential, since it can modify the row
and even prevent the update from happening?

Also, since a user has no provision to install a common UPDATE row
trigger, (s)he installs it on each of the leaf partitions. And then,
when an update causes row movement, option 3 would end up firing
update triggers on none of the partitions. So I prefer option 2 over
option 3, i.e. make sure to fire BR and AR update triggers. Option 2
is actually what Robert had proposed in the beginning.
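As a toy illustration of option 2's firing order for a row-moving update (a simulation only, with made-up names; this is not executor code):

```python
def cross_partition_update(row, br_update=None):
    """Simulate option 2: fire the BR UPDATE trigger on the source
    partition, move the row (an internal delete + insert with no extra
    row triggers), then fire the AR UPDATE trigger on the destination.
    br_update may modify the row, or return None to suppress the update,
    just as a real BEFORE ROW trigger can."""
    fired = ["BR UPDATE (source)"]
    if br_update is not None:
        row = br_update(row)
        if row is None:  # trigger suppressed the update entirely
            return fired, None
    fired += ["delete (source, no row triggers)",
              "insert (destination, no row triggers)",
              "AR UPDATE (destination)"]
    return fired, row
```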

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-05-24 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Craig Ringer
> On 13 April 2017 at 14:59, Tsunakawa, Takayuki
>  wrote:
> 
> > 2. Make transaction_read_only GUC_REPORT This is to avoid the added
> > round-trip by SHOW command.  It also benefits client apps that want to
> know when the server gets promoted?  And this may simplify the libpq code.
> > I don't understand yet why we need to provide this feature for older servers
> by using SHOW.  Those who are already using <= 9.6 in production completed
> the system or application, and their business is running.  Why would they
> want to just replace libpq and use this feature?
> 
> I think "transaction_read_only" is a bit confusing for something we're
> expecting to change under us.
> 
> To me, a "read only" xact is one created with
> 
> BEGIN READ ONLY TRANSACTION;
> 
>  which I would not expect to become read/write under me, since I
> explicitly asked for read-only.
> 
> It's more like "session read only" that we're interested in IMO.

I confirmed that the attached patch successfully provides:

* target_session_attrs=read-only
* If the server is >= 10, avoid the round-trip for SHOW transaction_read_only.

For this, I added a GUC_REPORT variable session_read_only which indicates the 
session's default read-only status.  The characteristics are:

* It cannot be changed directly by the user (postgresql.conf, SET, etc.)
* Its value is the same as default_transaction_read_only when not in recovery.
* Its value is false during recovery.
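For what it's worth, the host-selection behaviour this enables can be sketched roughly as below. This is a hedged illustration, not the libpq implementation: FakeConn and its helpers are hypothetical stand-ins for a connection handle. The point is that a >= 10 server has already reported session_read_only via GUC_REPORT, so no extra round trip is needed, while an older server costs one SHOW query.

```c
/* Sketch of target_session_attrs=read-only host selection.
 * FakeConn is a hypothetical stand-in, not a libpq type. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct FakeConn
{
    int     server_version;     /* e.g. 100000 for PG 10 */
    bool    session_read_only;  /* what the server would report */
    int     show_round_trips;   /* counts SHOW queries we had to send */
} FakeConn;

/* For >= 10, use the reported GUC; otherwise pay a SHOW round trip. */
static bool
conn_is_read_only(FakeConn *conn)
{
    if (conn->server_version >= 100000)
        return conn->session_read_only;     /* from GUC_REPORT, free */
    conn->show_round_trips++;               /* SHOW transaction_read_only */
    return conn->session_read_only;
}

/* Try candidate connections in order, as target_session_attrs=read-only
 * would: return the first read-only session, or NULL if none. */
static FakeConn *
pick_read_only_session(FakeConn *conns, int nconns)
{
    for (int i = 0; i < nconns; i++)
        if (conn_is_read_only(&conns[i]))
            return &conns[i];
    return NULL;
}
```

With one PG 10 primary and one 9.6 standby in the list, the primary is skipped without any extra query and only the pre-10 server needs the SHOW fallback.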

Could you include this in PG 10?  I think these features are the bottom 
line needed to meet the average user's expectations (the main reasons 
being that PostgreSQL provides hot standby, PgJDBC already allows 
connecting to a standby (targetServerType=slave), and PostgreSQL 
emphasizes performance).  Ideally, I wanted to add other PgJDBC features 
as well (e.g. targetServerType=preferSlave), but I stopped here so as 
not to endanger the quality of the final release.

Regards
Takayuki Tsunakawa




libpq-fast-connect-read-only.patch
Description: libpq-fast-connect-read-only.patch



Re: [HACKERS] pgbench more operators & functions

2017-05-24 Thread Fabien COELHO


Hello Pavel,


I am watching this patch - it looks like there are no problems.


Great.

I found only one issue in the documentation - the new CASE expression is 
not documented.


Hmmm. Actually there was a rather discreet one in v10:

 +  SQL CASE generic conditional expressions

I've improved it in attached v11:
 - add a link to the CASE full documentation
 - add an example expression with CASE ...

Also, if the "pgbench - improve tap test infrastructure" patch gets 
committed, I'll add coverage for these operators and functions there 
instead of in the external pgbench test scripts I provided.


--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 5735c48..af581f3 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -825,14 +825,31 @@ pgbench  options  dbname
  
   Sets variable varname to a value calculated
   from expression.
-  The expression may contain integer constants such as 5432,
+  The expression may contain the NULL constant,
+  boolean constants TRUE and FALSE,
+  integer constants such as 5432,
   double constants such as 3.14159,
   references to variables :variablename,
-  unary operators (+, -) and binary operators
-  (+, -, *, /,
-  %) with their usual precedence and associativity,
-  function calls, and
-  parentheses.
+  operators
+  with their usual SQL precedence and associativity,
+  function calls,
+  SQL CASE generic conditional
+  expressions and parentheses.
+ 
+
+ 
+  Functions and most operators return NULL on
+  NULL input.
+ 
+
+ 
+  For conditional purposes, non-zero numerical values are TRUE;
+  zero numerical values and NULL are FALSE.
+ 
+
+ 
+  When no final ELSE clause is provided to a CASE,
+  the default value is NULL.
  
 
  
@@ -841,6 +858,7 @@ pgbench  options  dbname
 \set ntellers 10 * :scale
 \set aid (1021 * random(1, 10 * :scale)) % \
(10 * :scale) + 1
+\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END 
 
 

@@ -917,6 +935,177 @@ pgbench  options  dbname
   
  
 
+ 
+  Built-In Operators
+
+  
+   The arithmetic, bitwise, comparison and logical operators listed in
+are built into pgbench
+   and may be used in expressions appearing in
+   \set.
+  
+
+  
+   pgbench Operators by increasing precedence
+   
+
+ 
+  Operator
+  Description
+  Example
+  Result
+ 
+
+
+ 
+  OR
+  logical or
+  5 or 0
+  TRUE
+ 
+ 
+  AND
+  logical and
+  3 and 0
+  FALSE
+ 
+ 
+  NOT
+  logical not
+  not false
+  TRUE
+ 
+ 
+  IS [NOT] (NULL|TRUE|FALSE)
+  value tests
+  1 is null
+  FALSE
+ 
+ 
+  ISNULL|NOTNULL
+  null tests
+  1 notnull
+  TRUE
+ 
+ 
+  =
+  is equal
+  5 = 4
+  FALSE
+ 
+ 
+  <>
+  is not equal
+  5 <> 4
+  TRUE
+ 
+ 
+  !=
+  is not equal
+  5 != 5
+  FALSE
+ 
+ 
+  <
+  lower than
+  5 < 4
+  FALSE
+ 
+ 
+  <=
+  lower or equal
+  5 <= 4
+  FALSE
+ 
+ 
+  >
+  greater than
+  5 > 4
+  TRUE
+ 
+ 
+  >=
+  greater or equal
+  5 >= 4
+  TRUE
+ 
+ 
+  |
+  integer bitwise OR
+  1 | 2
+  3
+ 
+ 
+  #
+  integer bitwise XOR
+  1 # 3
+  2
+ 
+ 
+  &
+  integer bitwise AND
+  1 & 3
+  1
+ 
+ 
+  ~
+  integer bitwise NOT
+  ~ 1
+  -2
+ 
+ 
+  <<
+  integer bitwise shift left
+  1 << 2
+  4
+ 
+ 
+  >>
+  integer bitwise shift right
+  8 >> 2
+  2
+ 
+ 
+  +
+  addition
+  5 + 4
+  9
+ 
+ 
+  -
+  subtraction
+  3 - 2.0
+  1.0
+ 
+ 
+  *
+  multiplication
+  5 * 4
+  20
+ 
+ 
+  /
+  division (integer truncates the results)
+  5 / 3
+  1
+ 
+ 
+  %
+  modulo
+  3 % 2
+  1
+ 
+ 
+  -
+  opposite
+  - 2.0
+  -2.0
+ 
+
+   
+  
+ 
+
  
   Built-In Functions
 
@@ -963,6 +1152,13 @@ pgbench  options  dbname
5432.0
   
   
+   exp(x)
+   double
+   exponential
+   exp(1.0)
+   2.718281828459045
+  
+  
greatest(a [, ... ] )
double if any a is double, else integer
largest value among arguments
@@ -984,6 +1180,20 @@ pgbench  options  dbname
2.1
   
   
+   ln(x)
+   double
+   natural logarithm
+   ln(2.718281828459045)
+   1.0
+  
+  
+   mod(i, j)
+   integer
+   modulo
+   mod(54, 32)
+   22
+  
+  
pi()
double
value of the constant PI
diff --git a/src/bin/pgbench/exprparse.y 
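As a side note, the evaluation rules the new documentation paragraphs describe (NULL propagation through operators, conditional truth, and the implicit ELSE NULL) can be sketched outside SGML. The following is an illustrative model, not the pgbench implementation; PgbenchValue and the helper names are invented for the example:

```c
/* Illustrative model of the documented \set semantics: NULL propagates
 * through operators, non-zero numbers are TRUE for conditional
 * purposes, and a CASE without ELSE yields NULL. */
#include <assert.h>
#include <stdbool.h>

typedef struct PgbenchValue
{
    bool    isnull;
    long    ival;               /* integers only, for brevity */
} PgbenchValue;

static const PgbenchValue VNULL = { true, 0 };

static PgbenchValue
make_int(long v)
{
    PgbenchValue r = { false, v };
    return r;
}

/* Conditional truth: non-zero is TRUE; zero and NULL are FALSE. */
static bool
value_is_true(PgbenchValue v)
{
    return !v.isnull && v.ival != 0;
}

/* Operators return NULL on NULL input. */
static PgbenchValue
value_add(PgbenchValue a, PgbenchValue b)
{
    if (a.isnull || b.isnull)
        return VNULL;
    return make_int(a.ival + b.ival);
}

/* CASE WHEN cond THEN then_val ELSE else_val END; pass VNULL as
 * else_val to model a CASE with no ELSE clause. */
static PgbenchValue
value_case(PgbenchValue cond, PgbenchValue then_val, PgbenchValue else_val)
{
    return value_is_true(cond) ? then_val : else_val;
}
```

Under this model the divx example above comes out NULL whenever :x is zero, which matches the documented default ELSE.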

Re: [HACKERS] Regarding Postgres Dynamic Shared Memory (DSA)

2017-05-24 Thread Mahi Gurram
One solution that strikes me is:
1. I'll create one background worker and will initialise the DSA in it.
2. If there are any callbacks available for client connection open/close,
I'll attach/detach the DSA in those callbacks.

But I'm not sure such callbacks are available. If such callbacks exist
in Postgres and you know of them, please help me out with that.

Thanks & Best Regards,
- Mahi

On Wed, May 24, 2017 at 11:32 AM, Mahi Gurram  wrote:

> Hi,
>
> As Michael said, i'm creating DSA too early. Shared_Preload libraries are
> loading prior to memory related stuff.
>
> But i'm totally clueless how to solve my use case.
>
> Please help me with any work around.
>
> Thanks & Best Regards,
> - Mahi
>
> On Tue, May 23, 2017 at 5:52 PM, Mahi Gurram  wrote:
>
>> Hi Michael,
>>
>> Thanks for your response.
>>
>> All i'm building is In-Memory Index as an extension over Postgres.
>>
>> Postgres Indexes will get Insert calls and Read calls from various
>> processes(typically client/connection process - forked processes to
>> postmaster process). Hence i have to maintain my In-Memory index in shared
>> memory.
>>
>> If i create DynamicSharedArea (DSA) in postmaster/main process, all these
>> Client/Connection processes(In-Memory Index Processes)  need not attach to
>> that DSA using area handle. Because these are forked processes to
>> postmaster/Main process and hence they automatically gets attached.
>>
>> Hence i'm trying to create DSA in _PG_init function as it is called by
>> postmaster/main process.
>>
>> Hope this is clear.
>>
>> Thanks & Best Regards,
>> - Mahi
>>
>>
>> On Tue, May 23, 2017 at 5:30 PM, Michael Paquier <
>> michael.paqu...@gmail.com> wrote:
>>
>>> On Tue, May 23, 2017 at 6:42 AM, Mahi Gurram 
>>> wrote:
>>> > I'm building In-Memory index extension for Postgres, for which i'm
>>> trying to
>>> > use DSA. But ended with some issues, as it is not allowing me to create
>>> > DSA(Dynamic Shared Area) in _PG_init function.
>>> >
>>> > Please refer my_PG_init code below:
>>> >>
>>> >> void
>>> >> _PG_init(void)
>>> >> {
>>> >> area = dsa_create(LWLockNewTrancheId(), "CustomIndex_DSA");
>>> >> area_handle = dsa_get_handle(area);
>>> >> }
>>> >
>>> >
>>> > Because of this code, Postgres is not starting. Not even giving any
>>> error
>>> > messages in pg logs. Hence, i'm totally clue less :(
>>>
>>> It seems to me that you are creating those too early. For example, for
>>> a background worker process, DSA segments would likely be created in
>>> the main process routine. Without understanding what you are trying to
>>> achieve, it is hard to make a good answer. You could always use a
>>> ramdisk, but that would be likely be a waste of memory as Postgres has
>>> its own buffer pool, killing the performance gains of OS caching.
>>> --
>>> Michael
>>>
>>
>>
>
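For the record, the pattern Michael describes usually looks like the sketch below: create the area from the background worker's main routine (shared memory is fully up by then, unlike in _PG_init), publish the handle in a small traditional shmem struct, and have backends attach lazily on first use, which also sidesteps the need for a connection-open callback. This is untested C-style pseudocode against the in-development DSA API (dsa_create()'s signature has varied during development, and MyIndexShared and friends are illustrative names), not a drop-in solution:

```c
/* Untested sketch, illustrative names; not a drop-in solution. */
#include "postgres.h"
#include "utils/dsa.h"

typedef struct MyIndexShared
{
    dsa_handle  handle;         /* published for other backends */
} MyIndexShared;

static MyIndexShared *my_shared;    /* obtained via ShmemInitStruct() */

/* Background worker entry point: runs after postmaster startup, so
 * creating the DSA here is safe, unlike in _PG_init(). */
void
my_index_worker_main(Datum arg)
{
    dsa_area   *area = dsa_create(LWLockNewTrancheId());

    dsa_pin(area);                      /* keep area alive across detach */
    my_shared->handle = dsa_get_handle(area);
    /* ... worker main loop ... */
}

/* Called lazily from any backend on first index access; no
 * connection-open callback is needed with this approach. */
static dsa_area *
my_index_attach(void)
{
    dsa_area   *area = dsa_attach(my_shared->handle);

    dsa_pin_mapping(area);              /* stay attached for backend life */
    return area;
}
```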

