[HACKERS] proposal: session server side variables

2016-10-13 Thread Pavel Stehule
Hi,

I have been working on this topic for a long time. Session server-side
variables are one major missing feature in PLpgSQL. Now I hope I can
summarize the requirements for an implementation in Postgres:

Requirements
============
1. Should be used in any PL (PLpgSQL, PLPython, PLPerl, ..)

2. Should not block an implementation of ANSI/SQL modules - the modules
and the PSM language are a big chapter and should be implemented together,
maybe from scratch - it isn't easy to inject them cleanly into our
environment. Moreover, the modules are partially redundant with schemas and
with our extensions. This is the reason why I don't take the functionality
described in the standard.

3. The usage should be simple, secure, and not limited to PL code only.



I found very good inspiration in PostgreSQL sequences: they can be used
anywhere, access to them is secured, and their interface is stable.

The session variables should be:

1. persistent objects with temporary, unshared, typed content. The
lifetime of the content is limited to the session or the transaction. The
content is initialized to the default (when one is defined) or to NULL when
the variable is first accessed in its time scope (session, transaction).

CREATE VARIABLE [schema.]variable type [DEFAULT default_value]
[TRANSACTION|SESSION SCOPE]
DROP VARIABLE [schema.]variable

2. accessed subject to access rights:

GRANT SELECT|UPDATE|ALL ON VARIABLE variable TO role
REVOKE SELECT|UPDATE|ALL ON VARIABLE variable FROM role

The variable belongs to a schema - access is filtered by the schema too,
like for any other schema object.

3. accessed/updated with special functions "getvar" and "setvar":

FUNCTION getvar(regclass) RETURNS type
FUNCTION setvar(regclass, type) RETURNS void

These functions are supported by the PostgreSQL analyzer - a cast to the
correct variable type is enforced there. These functions are volatile.
Stable variants could exist too. (A usage sketch follows this list.)

4. non-transactional - the metadata are transactional, but the content is
not.
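
To make the intended interface concrete, here is a sketch using the
proposed syntax; nothing below exists in PostgreSQL today, and the schema,
variable, and role names are illustrative:

CREATE VARIABLE myschema.counter integer DEFAULT 0 SESSION SCOPE;
GRANT SELECT, UPDATE ON VARIABLE myschema.counter TO app_role;

SELECT setvar('myschema.counter', 42);  -- write the session-local value
SELECT getvar('myschema.counter');      -- 42, until the session ends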



This concept doesn't introduce any new visibility or accessibility
methods. The session variable is a first-class object like any other, and
special rules are not necessary. Access should be controlled by access
rights only.

This proposal doesn't cover Oracle's package variables and related
behavior. Since we don't have a full Ada environment, a partial
implementation would be too complex, with strange (foreign) behavior in our
environment. But Oracle's package variables could be emulated on top of the
proposed layer, and this emulation would be really secure - no security by
obscurity.

Comments, notes?

Regards

Pavel


Re: [HACKERS] Change of extension name to new name

2016-10-13 Thread Andres Freund


On October 13, 2016 10:12:26 PM PDT, Craig Ringer  wrote:
>On 13 October 2016 at 12:37, Tom Lane  wrote:
>> Haribabu Kommi  writes:
>>> As we are planning to change an extension name from one name to another
>>> name because of additional features that are added into this extension,
>>
>> The usual approach to that is just to increase the version number.
>> Why is it necessary to change the name?
>>
>>> I just thought of adding the support of (ALTER EXTENSION name RENAME To
>>> newname), this can be executed before executing the pg_upgrade to the new
>>> extension name that is available in the newer version.
>>
>> And if the user forgets to do that before upgrading?  Not to mention
>> that the extension is mostly broken the moment its SQL name no longer
>> corresponds to the on-disk control file name.  This seems like
>> a non-solution.
>>
>> In general, once you've shipped something, changing its name is a huge
>> pain both for you and your users.  Just say no.
>
>I've touched on a somewhat related case when I wanted to merge two
>extensions into one. I took a look and quickly punted on it as way too
>messy, but I'm sure there are legitimate use cases for
>splitting/merging extensions. That doesn't mean we want to carry
>little-used infrastructure for it or that anyone's going to care
>enough to implement anything.
>
>Certainly my need wasn't worth doing it for, and it was a simple one.
>Doing things like extracting only some parts of an extension into
>another extension while maintaining correct dependencies sounds
>nightmarish.

Hm. Make pg_upgrade specify cascade (seems like a good idea anyway) and list the
new extension as one.  And/or add an automatically-installed dependency control
file field.
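
For reference, a cascaded install already exists at the SQL level in 9.6,
which is what the first suggestion builds on; a minimal sketch (the
extension name is illustrative):

-- installs any extensions named in new_extension's control-file
-- "requires" list before installing new_extension itself
CREATE EXTENSION new_extension CASCADE;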

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [HACKERS] Change of extension name to new name

2016-10-13 Thread Craig Ringer
On 13 October 2016 at 12:37, Tom Lane  wrote:
> Haribabu Kommi  writes:
>> As we are planning to change an extension name from one name to another
>> name because of additional features that are added into this extension,
>
> The usual approach to that is just to increase the version number.
> Why is it necessary to change the name?
>
>> I just thought of adding the support of (ALTER EXTENSION name RENAME To
>> newname), this can be executed before executing the pg_upgrade to the new
>> extension name that is available in the
>> newer version.
>
> And if the user forgets to do that before upgrading?  Not to mention
> that the extension is mostly broken the moment its SQL name no longer
> corresponds to the on-disk control file name.  This seems like
> a non-solution.
>
> In general, once you've shipped something, changing its name is a huge
> pain both for you and your users.  Just say no.

I've touched on a somewhat related case when I wanted to merge two
extensions into one. I took a look and quickly punted on it as way too
messy, but I'm sure there are legitimate use cases for
splitting/merging extensions. That doesn't mean we want to carry
little-used infrastructure for it or that anyone's going to care
enough to implement anything.

Certainly my need wasn't worth doing it for, and it was a simple one.
Doing things like extracting only some parts of an extension into
another extension while maintaining correct dependencies sounds
nightmarish.

So I'm with you. Just don't rename it.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2016-10-13 Thread Craig Ringer
On 12 October 2016 at 19:51, Shay Rojansky  wrote:
> Hi all. I thought I'd share some experience from Npgsql regarding
> batching/pipelining - hope this isn't off-topic.

Not at all.

> Npgsql has supported batching for quite a while, similar to what this patch
> proposes - with a single Sync message sent at the end.

PgJDBC does too. The benefits of it there are what prompted me to do
this, not only so libpq users can use it directly but so psqlODBC,
psycopg2, etc can benefit from it if they choose to expose it.

> It has recently come to my attention that this implementation is problematic
> because it forces the batch to occur within a transaction; in other words,
> there's no option for a non-transactional batch.

That's not strictly the case. If you explicitly BEGIN and COMMIT,
those operations are honoured within the batch.

What's missing is implicit transactions. Usually if you send a series
of messages they will each get their own implicit transaction. If you
batch them, the whole lot gets an implicit transaction. This is
because the PostgreSQL v3 protocol conflates transaction delineation
with protocol error recovery into a single Sync message type.

It's very similar to the behaviour of multi-statements, where:

psql -c "CREATE TABLE fred(x integer); DROP TABLE notexists;"

doesn't create "fred", but

psql -c "BEGIN; CREATE TABLE fred(x integer); COMMIT; BEGIN; DROP
TABLE notexists; COMMIT;"

... does. And in fact it's for almost the same reason. They're sent as
a single SimpleQuery message by psql and split up client side, but the
effect is the same as two separate Query messages followed by a Sync.

It isn't simple to manage this client-side, because libpq doesn't know
whether a given command string may contain transaction delimiting
statements or not and can't reliably look for them without client-side
parsing of the SQL. So it can't dispatch its own BEGIN/COMMIT around
statements in a batch that it thinks might be intended to be
autocommit, and anyway that'd result in sending 3 queries for every 1
client query, which would suck.

If the mythical v4 protocol ever happens I'd want to split Sync into
two messages, one which is a protocol synchronisation message and
another that is a transaction delimiter. Or give it flags or whatever.

In the mean time:

> This can be a problem for
> several reasons: users may want to send off a batch of inserts, not caring
> whether one of them fails (e.g. because of a unique constraint violation).
> In other words, in some scenarios it may be appropriate for later batched
> statements to be executed when an earlier batched statement raised an error.
> If Sync is only sent at the very end, this isn't possible.

Right, and that remains the case even with explicit transaction
delineation, because the first ERROR causes processing of all
subsequent messages to be skipped.

The design I have in libpq allows for this by allowing the client to
delimit batches without ending batch mode, concurrently consuming a
stream of multiple batches. Each endbatch is a Sync. So a client that
wants autocommit-like behaviour can send a series of 1-query batches.

I think I'll need to document this a bit better since it's more subtle
than I properly explained.

> Another example
> of a problem (which actually happened) is that transactions acquire
> row-level locks, and so may trigger deadlocks if two different batches
> update the same rows in reverse order. Both of these issues wouldn't occur
> if the batch weren't implicitly transactional.

Same solution as above.

> My current plan is to modify the batch implementation based on whether we're
> in an (explicit) transaction or not. If we're in a transaction, then it
> makes perfect sense to send a single Sync at the end as is being proposed
> here - any failure would cause the transaction to fail anyway, so skipping
> all subsequent statements until the batch's end makes sense. However, if
> we're not in an explicit transaction, I plan to insert a Sync message after
> each individual Execute, making non-transactional batched statements more or
> less identical in behavior to non-transactional unbatched statements. Note
> that this means that a batch can generate multiple errors, not just one.

Yes, that's what I suggest, and basically what the libpq batch
interface does, though it expects the client to deal with the
transaction boundaries.

You will need to think hard about transaction boundaries as they
relate to multi-statements unless Npgsql parses out each statement
from multi-statement strings like PgJDBC does. Otherwise a user can
sneak in:

somestatement; BEGIN; someotherstatement;

or

somestatement; CoMMiT; otherstatement;

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Renaming of pg_xlog and pg_clog

2016-10-13 Thread Peter Eisentraut
On 10/12/16 11:22 AM, Tom Lane wrote:
> The main problem we're trying to fix here is people thinking that
> something with "log" in the name contains discardable data.  Just
> relocating the directory without renaming it won't improve that.

I think it would help if we moved it to something like
"internal/pg_xlog" and "internal/pg_clog".  Keep the name but move it
out of sight.

We have a tool called pg_xlogdump in the standard installation.  initdb
has an option --xlogdir, pg_basebackup has --xlog and others.  Renaming
the xlog directory would make this all a bit confusing, unless we're
prepared to rename the programs and options as well.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] patch: function xmltable

2016-10-13 Thread Pavel Stehule
Hi

new update - only doc

+
+ Only the XPath query language is supported; PostgreSQL doesn't support the
+ XQuery language. Therefore the syntax of xmltable doesn't
+ allow the use of XQuery-related functionality - naming an XML expression
+ (the AS clause) is not allowed, and only one XML expression
+ should be passed to the xmltable function as a parameter.
+
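
For illustration, a call shape the patch supports - XPath only, a single
XML document expression, and no name for it - with hypothetical table and
column names:

SELECT t.*
FROM xmldata,
     XMLTABLE('/rows/row' PASSING data
              COLUMNS id int PATH '@id',
                      name text PATH 'name') AS t;

(The AS t here aliases the result table, which is allowed; it is naming
the PASSING expression that the doc text rules out.)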

Regards

Pavel


xmltable-12.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Re: [COMMITTERS] pgsql: Extend framework from commit 53be0b1ad to report latch waits.

2016-10-13 Thread Peter Eisentraut
On 10/12/16 7:38 PM, Andres Freund wrote:
>> We generally don't build test code during make world.
> FWIW, I find that quite annoying. I want to run make world with parallelism so
> I can run make check-world afterwards with as little unnecessary
> unparallelized work as possible. And since the latter takes just about forever,
> any errors visible earlier are good.
> 
> What are we gaining by this behaviour?

Well, the purpose of make all or make world is to build the things that
are to be installed by make install or make install-world, which is the
stuff that users want to use.  The purpose is not necessarily to build
stuff for the amusement of developers.  Remember that we used to have
some dusty corners under src/test/, so "build everything no matter what"
is/was not a clearly better strategy.  Also, some test code used to take
a relatively long time to build, which led to some level of annoyance.

It might be worth reviewing this approach, but that's what it is.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Extend framework from commit 53be0b1ad to report latch waits.

2016-10-13 Thread Peter Eisentraut
On 10/12/16 12:04 PM, Robert Haas wrote:
> But it's annoying to push code and have it break the buildfarm when
> you already did 'make world' and 'make check-world'.  What target
> should I be using?

I don't know why worker_spi is not tested by check-world.  It should
clearly do that.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] make coverage-html on OS X

2016-10-13 Thread Peter Eisentraut
On 10/13/16 4:03 PM, Jim Nasby wrote:
> I've slowly stripped away my normal build environment; I'm now using 
> baseline gcc (no ccache) with the following ./configure... anyone have 
> any ideas how to debug/fix this?

Which gcc are you using?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Minor improvement to delete.sgml

2016-10-13 Thread Etsuro Fujita

Hi,

I think it's better to mention that an alias is needed for the target  
table specified in the USING clause of a DELETE statement, to set up a  
self-join, as the documentation on the from_list parameter of UPDATE  
does.  Please find attached a patch.
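
For context, the self-join case the note is about looks roughly like this
(table and column names are illustrative):

-- the target table reappears in USING; per the docs it must carry an
-- alias there, otherwise it would just denote the DELETE target again
DELETE FROM employees e
USING employees boss
WHERE e.manager_id = boss.id
  AND boss.retired;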


Best regards,
Etsuro Fujita


doc-delete-sgml.patch
Description: binary/octet-stream



Re: [HACKERS] btree vacuum and suspended scans can deadlock

2016-10-13 Thread Amit Kapila
On Fri, Oct 14, 2016 at 2:52 AM, Robert Haas  wrote:
> On Thu, Oct 13, 2016 at 6:33 AM, Amit Kapila  wrote:
>> If we agree that the above is a problematic case, then some of the options
>> to solve it could be: (a) vacuum should not wait for a cleanup lock and
>> instead just give up and start again, which I think is a bad idea; (b)
>> don't allow taking a lock of higher granularity after the scan is
>> suspended, not sure if that is feasible; (c) document the above danger;
>> this sounds okay on the grounds that nobody has reported the problem
>> till now
>
> I don't think any of these sound particularly good.
>

Tom has suggested something similar to approach (b) in his mail [1],
basically rejecting some commands like TRUNCATE, REINDEX, etc. if the
current transaction is already holding the table open. I think if we
want to go that way, it might be better to reject any command that
requires a lock level higher than the one the current transaction
already holds on the table. Tom suggested a few more ways to resolve
such deadlocks in that thread, but I think we never implemented those.

Here, one point to consider is whether we really need to invent ways to
make hash indexes not prone to this problem when it can occur for other
indexes or even for the heap. Even if we want to do something, I think
the solution has to be common.


[1] - https://www.postgresql.org/message-id/21534.1200956...@sss.pgh.pa.us

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Reporting wait events for system processes

2016-10-13 Thread Michael Paquier
On Thu, Oct 13, 2016 at 3:46 PM, Michael Paquier
 wrote:
> I think that it would be fine to name it, say pg_stat_system_activity,
> and to define it as a system view fed by a SRF function that scans the
> arrays of PGPROC entries for background workers, autovacuum workers
> and system processes.
>
> Thoughts about that? I am expecting a couple of -1 for suggesting to
> expose the system PIDs now only local to postmaster.c ;)

Looking more into that, exposing the list of PIDs from the postmaster
may not be mandatory... The best way to do things would be to use the
existing AuxiliaryProcs, removing its NON_EXEC_STATIC declaration and
marking it as an extern variable in proc.h. The number of entries in
AuxiliaryProcs is now capped by NUM_AUXILIARY_PROCS, and each
auxiliary process started uses InitAuxiliaryProcess() to choose its
PGPROC entry via a linear search.

But the problem is that the order of those PGPROC entries is undefined.
So in order to make them ordered, I think that we could replace
NUM_AUXILIARY_PROCS with NUM_AUXPROCTYPES, and save in shared memory one
PGPROC entry per auxiliary process, using AuxProcType as the ordering
system. That will waste shared memory because not all the PGPROC slots
would be used all the time, but this way the user could know via SQL
which system processes are running, which is actually a win in terms of
visibility.

For non-EXEC_BACKEND procs, extending InitAuxiliaryProcess() with an
argument to define AuxProcType is no big deal because the type of the
process to start is known. The problem is for EXEC_BACKEND processes,
in SubPostmasterMain() exactly, where we would need to look for -x%d
in the list of arguments *before* calling AuxiliaryProcessMain() to
choose the appropriate PGPROC slot to set up MyProc for this process.

If all the process PIDs in postmaster.c are exposed though, there is
no need to do anything like that. We could just scan AuxiliaryProcs
one by one and check which system process PID an entry matches, then
save into the system view a process name according to the match.
That's less performant, but there are not many auxiliary processes,
and the user will only know about active processes, namely the ones
filling the PGPROC slots in AuxiliaryProcs.

Extending AuxiliaryProcs with one entry per AuxProcType looks cleaner
to me, even if it involves one hack for the EXEC_BACKEND case.

Thoughts are welcome; each approach has its own cost in terms of
effort. So I am not convinced that it is worth spending time on an
implementation as long as nothing is decided regarding the approach -
assuming, of course, that being able to look into wait events for
system processes is wanted at all.

An extra problem is the syslogger; we could just add an AuxProcType
for it to expose its wait events.
Thanks,
-- 
Michael




Re: [HACKERS] btree vacuum and suspended scans can deadlock

2016-10-13 Thread Amit Kapila
On Fri, Oct 14, 2016 at 3:14 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Thu, Oct 13, 2016 at 6:33 AM, Amit Kapila  wrote:
>>> If we agree that the above is a problematic case, then some of the options
>>> to solve it could be: (a) vacuum should not wait for a cleanup lock and
>>> instead just give up and start again, which I think is a bad idea; (b)
>>> don't allow taking a lock of higher granularity after the scan is
>>> suspended, not sure if that is feasible; (c) document the above danger;
>>> this sounds okay on the grounds that nobody has reported the problem
>>> till now
>
>> I don't think any of these sound particularly good.
>
> Note that it's a mistake to imagine that this is specific to indexes;
> the same type of deadlock risk exists just when considering heap cleanup.
> We've ameliorated the heap case quite a bit by recognizing situations
> where it's okay to skip a page and move on, but it's not gone.
> Unfortunately indexes don't get to decide that deletion is optional.
>
> I was about to suggest that maybe we didn't need cleanup locks in btree
> indexes anymore now that SnapshotNow is gone,
>

Wouldn't it still be a problem for SnapshotAny?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2016-10-13 Thread Jim Nasby

On 10/4/16 11:54 PM, Michael Paquier wrote:

+   
+Much like asynchronous query mode, there is no performance disadvantage to
+using batching and pipelining. It somewhat increased client application
+complexity and extra caution is required to prevent client/server network
+deadlocks, but can offer considerable performance improvements.
+   
I would reword that a bit "it increases client application complexity
and extra caution is required to prevent client/server deadlocks but
offers considerable performance improvements".


Unrelated, but another doc bug, on line 4647:

 + The batch API was introduced in PostgreSQL 9.6, but clients using it can


That should read 10.0 (or just 10?)
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] Typo in parallel.sqml

2016-10-13 Thread Tatsuo Ishii
> Indeed. It looks like a bad copy-pasto of two sentences gathered together.

Thanks! I will commit/push to master and 9.6 stable branches.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] Stats sender and 2pc minor problem

2016-10-13 Thread Tom Lane
Stas Kelvich  writes:
> Statistics sender logic during a usual commit and a two-phase commit does
> not strictly match, and that leads to delta_live_tuples being added to
> n_live_tup in case of a truncate in a two-phase commit.

Yeah, that code says it's supposed to match AtEOXact_PgStat, but it
doesn't.

I pushed this, but without the regression test case, which would have
failed outright in any test run with max_prepared_transactions = 0.
Timing sensitivity is another problem.  In the commit that created this
discrepancy, d42358efb, Alvaro had tried to add regression coverage for
this area, but we ended up backing it out because it failed too often
in the buildfarm.

TBH, now that I look at it, I think that d42358efb was fundamentally
wrong and this patch is just continuing down the same wrong path.
Having the stats collector respond to a TRUNCATE like this cannot
work reliably, because the "it got truncated" flag will arrive at
the stats collector asynchronously, perhaps quite some time later
than the truncate occurred.  When that happens, we may throw away
live/dead tuple count updates from transactions that actually happened
after the truncate but chanced to report first.

I wonder if we could make that better by making the stats collector
track stats by relfilenode rather than table OID.  It'd be a pretty
major logic change, though, to serve a corner case.

regards, tom lane




Re: [HACKERS] Typo in parallel.sqml

2016-10-13 Thread Michael Paquier
On Fri, Oct 14, 2016 at 8:41 AM, Tatsuo Ishii  wrote:
> diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
> index 7f8e902..02173da 100644
> --- a/doc/src/sgml/parallel.sgml
> +++ b/doc/src/sgml/parallel.sgml
> @@ -339,7 +339,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE 
> '%x%';
>
>
>  When executing a parallel plan, you can use EXPLAIN (ANALYZE,
> -VERBOSE) will display per-worker statistics for each plan node.
> +VERBOSE) to display per-worker statistics for each plan node.
>  This may be useful in determining whether the work is being evenly
>  distributed between all plan nodes and more generally in understanding 
> the
>  performance characteristics of the plan.

Indeed. It looks like a bad copy-pasto of two sentences gathered together.
-- 
Michael




[HACKERS] Typo in parallel.sqml

2016-10-13 Thread Tatsuo Ishii
I think this is a typo. Since I am not a native English speaker, I
would appreciate it if someone could confirm this.

diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index 7f8e902..02173da 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -339,7 +339,7 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE 
'%x%';
 
   
 When executing a parallel plan, you can use EXPLAIN (ANALYZE,
-VERBOSE) will display per-worker statistics for each plan node.
+VERBOSE) to display per-worker statistics for each plan node.
 This may be useful in determining whether the work is being evenly
 distributed between all plan nodes and more generally in understanding the
 performance characteristics of the plan.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] Mention to pg_backup in pg_dump.c

2016-10-13 Thread Michael Paquier
On Fri, Oct 14, 2016 at 5:08 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> I just bumped into the following thing in pg_dump.c:
>> /* Set default options based on progname */
>> if (strcmp(progname, "pg_backup") == 0)
>> format = "c";
>> As far as I can see this comes from e8f69be0 dated of 2000. Couldn't
>> this be removed? I have never seen traces of pg_backup in the code.
>
> Looks dead to me too --- deleted.

Thanks.
-- 
Michael




Re: [HACKERS] parallel.sgml

2016-10-13 Thread Tatsuo Ishii
>> Can someone, possibly a native English speaker, please confirm whether
>> this is a typo or not? I cannot parse the sentence as it is, but that may
>> be because I'm not an English speaker.
> 
> Yes, I think removing "between" is correct.

Thanks for the confirmation. Fix pushed to master and 9.6 stable.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] [COMMITTERS] pgsql: Remove spurious word.

2016-10-13 Thread Tatsuo Ishii
Shouldn't this be back patched to REL9_6_STABLE as well?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

From: Robert Haas 
Subject: [COMMITTERS] pgsql: Remove spurious word.
Date: Thu, 13 Oct 2016 00:10:20 +
Message-ID: 

> Remove spurious word.
> 
> Tatsuo Ishii
> 
> Branch
> --
> master
> 
> Details
> ---
> http://git.postgresql.org/pg/commitdiff/248776ea06c240ae4605e77369d66bcd7ae4f9e3
> 
> Modified Files
> --
> doc/src/sgml/parallel.sgml | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> 




Re: [HACKERS] btree vacuum and suspended scans can deadlock

2016-10-13 Thread Tom Lane
Robert Haas  writes:
> On Thu, Oct 13, 2016 at 6:33 AM, Amit Kapila  wrote:
>> If we agree that the above is a problematic case, then some of the options
>> to solve it could be: (a) vacuum should not wait for a cleanup lock and
>> instead just give up and start again, which I think is a bad idea; (b)
>> don't allow taking a lock of higher granularity after the scan is
>> suspended, not sure if that is feasible; (c) document the above danger;
>> this sounds okay on the grounds that nobody has reported the problem
>> till now

> I don't think any of these sound particularly good.

Note that it's a mistake to imagine that this is specific to indexes;
the same type of deadlock risk exists just when considering heap cleanup.
We've ameliorated the heap case quite a bit by recognizing situations
where it's okay to skip a page and move on, but it's not gone.
Unfortunately indexes don't get to decide that deletion is optional.

I was about to suggest that maybe we didn't need cleanup locks in btree
indexes anymore now that SnapshotNow is gone, but I see that somebody
already altered nbtree/README to say this:

: Therefore, a scan using an MVCC snapshot which has no other confounding
: factors will not hold the pin after the page contents are read.  The
: current reasons for exceptions, where a pin is still needed, are if the
: index is not WAL-logged or if the scan is an index-only scan.

This is one of the saddest excuses for documentation I've ever seen,
because it doesn't explain WHY either of those conditions require a VACUUM
interlock, and certainly it's not immediately obvious why they should.
"git blame" pins the blame for this text on Kevin, so I'm going to throw
it up to him to explain himself.

regards, tom lane




Re: [HACKERS] btree vacuum and suspended scans can deadlock

2016-10-13 Thread Robert Haas
On Thu, Oct 13, 2016 at 6:33 AM, Amit Kapila  wrote:
> If we agree that the above is a problematic case, then some of the options
> to solve it could be: (a) vacuum should not wait for a cleanup lock and
> instead just give up and start again, which I think is a bad idea; (b)
> don't allow taking a lock of higher granularity after the scan is
> suspended, not sure if that is feasible; (c) document the above danger;
> this sounds okay on the grounds that nobody has reported the problem
> till now

I don't think any of these sound particularly good.  There have been
some previous discussions of this topic.

https://wiki.postgresql.org/wiki/Todo#Locking - second and third items
https://www.postgresql.org/message-id/CA+TgmoZG01uGL2TV6KOjmax-53eT3J66nk1KSkuOsPysz=d...@mail.gmail.com
https://www.postgresql.org/message-id/flat/CA%2BU5nMJde1%2Bh5bKm48%2B_4U%3D78%2Bw%2BRa4ipfJnAva6QVyDWv0VNQ%40mail.gmail.com
https://www.postgresql.org/message-id/1223.1298392...@sss.pgh.pa.us

I thought Tom had at some point suggested a way of detecting deadlocks
of this type, but I can't find any such email now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] emergency outage requiring database restart

2016-10-13 Thread Tom Lane
Merlin Moncure  writes:
> Today I had an emergency production outage on a server.
> ...
> Adding all this up it smells like processes were getting stuck on a spinlock.

Maybe.  If it happens again, probably the most useful debug data would
be stack traces from some of the busy processes.

regards, tom lane




[HACKERS] emergency outage requiring database restart

2016-10-13 Thread Merlin Moncure
Today I had an emergency production outage on a server.  This
particular server was running 9.5.2.   The symptoms were interesting
so I thought I'd report.  Here is what I saw:

*) User CPU was pegged 100%
*) Queries reading data would block and not respond to cancel or terminate
*) pg_stat_activity reported no waiting queries (but worked fine otherwise).

Adding all this up it smells like processes were getting stuck on a spinlock.

Connections quickly got eaten up and the situation was desperately urgent,
so I punted and did an immediate restart and things came back
normally.   I had a console to the database and did manage to grab
contents of pg_stat_activity and noticed several trivial queries were
running normally (according to pg_stat_activity) but were otherwise
stuck.  Attempting to run one of them myself, I noted query got stuck
and did not cancel.  I was in a terrible rush but am casting around
for stuff to grab out in case that happens again -- 'perf top' would
be a natural choice I guess.

Three autovacuum processes were running.  Obviously going to do bugfix
upgrade but was wondering if anybody has seen anything like this.
This particular server was upgraded to 9.5 somewhat recently but ran
on 9.2 for years with no issues.

merlin




Re: [HACKERS] How to inspect tuples during execution of a plan?

2016-10-13 Thread Jim Nasby

On 10/12/16 2:58 AM, Ernst-Georg Schmid wrote:
>> Either of those would do, if you want to change the executor.
>
> I used the ExecutorRun_hook and it seems to work. The drawback is
> that I have to provide my own implementation of ExecutePlan(), which
> could make it incompatible across versions of PostgreSQL. I don't know
> how stable that function is over time.


There might be another possibility, which would be to use one of the 
earlier hooks to change the query destination, and to implement a custom 
receiver.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] Conflicting constraint being merged

2016-10-13 Thread Tom Lane
Amit Langote  writes:
> Currently, if child table has a non-inherited constraint and a constraint
> with the same name is added to the parent, it will fail with an error as
> illustrated below:
> ...
> If we had allowed it to be merged, any children of child itself won't
> inherit that constraint (because on child it's marked NO INHERIT), which
> would not be good.

Right.  Merging must happen only for inheritable constraints.

> However, it is still possible for a child to override/hide the parent's
> constraint as follows:

Hmm, I thought I'd checked for this when I was fooling with constraint
merging a few days ago.  I must have checked one of the other cases and
missed this path.  Thanks for catching it!

regards, tom lane




Re: [HACKERS] Pluggable storage

2016-10-13 Thread Alvaro Herrera
I have sent the partial patch I have to Hari Babu Kommi.  We expect that
he will be able to further this goal some more.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Mention to pg_backup in pg_dump.c

2016-10-13 Thread Tom Lane
Michael Paquier  writes:
> I just bumped into the following thing in pg_dump.c:
> /* Set default options based on progname */
> if (strcmp(progname, "pg_backup") == 0)
> format = "c";
> As far as I can see this comes from e8f69be0 dated of 2000. Couldn't
> this be removed? I have never seen traces of pg_backup in the code.

Looks dead to me too --- deleted.

regards, tom lane




[HACKERS] make coverage-html on OS X

2016-10-13 Thread Jim Nasby
I've been trying to get $SUBJECT working, but all my lcov.info files are 
empty. I'm also getting warnings about no .da files.


I've slowly stripped away my normal build environment; I'm now using 
baseline gcc (no ccache) with the following ./configure... anyone have 
any ideas how to debug/fix this?


./configure --with-includes=/opt/local/include 
--with-libraries=/opt/local/lib --with-libxml --with-tcl --with-perl 
--with-python --enable-depend -C --enable-dtrace --enable-tap-tests 
--enable-coverage --enable-profiling --enable-cassert --enable-debug 
--prefix='/Users/decibel/pgsql/HEAD/i' --with-pgport=$PGC_PORT


gcov --version
Apple LLVM version 8.0.0 (clang-800.0.38)
  Optimized build.
  Default target: x86_64-apple-darwin15.6.0
  Host CPU: haswell

lcov --version
lcov: LCOV version 1.12

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




[HACKERS] signal handling in plpython

2016-10-13 Thread Mario De Frutos Dieguez
Hello everyone :).

First of all, I want to introduce myself to this list. My name is Mario de
Frutos and I work at CARTO :)

I come here asking for some advice/help because we're facing some
unexpected behavior when we want to interrupt functions doing CPU-intensive
operations in plpython.

Our problem is that we're not able to interrupt them while they're
performing CPU-intensive operations. For example, when calculating Moran's
I using PySAL, the SIGINT handler of Postgres is not able to cancel it.

I want to show you some possible solutions that I've tried without success:

- If we don't add a custom signal handler, we're not able to interrupt the
function while it's performing CPU-intensive operations. When the `SIGINT`
signal is delivered, the system is not able to interrupt it until the
function ends.
- If we add a custom signal handler for `SIGINT`, we are able to
interrupt the CPU-intensive function, but we're not able to interrupt
data-fetching operations like `plpy.execute(query)` because we have
overridden the Postgres handler for that signal.
- As a third option, I've added a Python context manager to wrap, for
testing purposes, the CPU-intensive part (the Moran function from PySAL):
```
import signal
from contextlib import contextmanager

# plpy is provided implicitly by the PL/Python runtime

def _signal_handler(signal_code, frame):
    # abort the surrounding query with an error when SIGINT arrives
    plpy.error('INTERRUPTED BY USER!!')


@contextmanager
def interruptible():
    try:
        # install our handler for the duration of the wrapped block
        signal.signal(signal.SIGINT, _signal_handler)
        yield
    finally:
        # Restore the default behaviour for the signal
        signal.signal(signal.SIGINT, signal.SIG_DFL)
```
  This doesn't work as expected because in the `finally` clause we try to
reset to the default behavior, but in Postgres the behavior for the SIGINT
signal is defined by a [custom handler](
https://github.com/postgres/postgres/blob/master/src/include/tcop/tcopprot.h#L66
).
  If we try to retrieve the old handler using `signal.getsignal`, we get
None.

So, after going back and forth, I came up with two possible solutions:
- custom code in `plpython` to let us reset the default signal handler
after the CPU-intensive function finishes. It seems to work but I'm still
doing some tests. This option lets us call it explicitly and add it to the
`finally` part of a decorator/context manager
- resetting the signal handler at the beginning of `plpy.execute` and
similar functions.

As a bonus, we want to implement the SIGALRM part to mimic the
"statement timeout" behavior too.

I don't know if there is a better way to implement this. I know we're
pushing/doing things beyond the scope of plpython, but any advice is
welcome :)


Re: [HACKERS] parallel.sgml

2016-10-13 Thread Tom Lane
Tatsuo Ishii  writes:
>> While reading parallel.sgml, I met with following sentence.
>> 
>> If this occurs, the leader will execute the portion of the plan
>> between below the Gather node entirely by itself,
>> almost as if the Gather node were not present.
>> 
>> Maybe "the portion of the plan between below" shoud have been
>> "the portion of the plan below"?

> Can someone, possibly a native English speaker, please confirm whether
> this is a typo or not? I cannot parse the sentence as it is, but that may
> be because I'm not an English speaker.

Yes, I think removing "between" is correct.

regards, tom lane




[HACKERS] btree vacuum and suspended scans can deadlock

2016-10-13 Thread Amit Kapila
During vacuum, cleanup in btree needs to acquire a cleanup lock on the
block, and non-MVCC scans or scans on an unlogged index retain the pin
on a buffer while scanning a block.  Now suppose the scan is suspended
(using a cursor and fetching one row) in Session-1 while holding a pin
on a buffer, and in Session-2 we initiate a vacuum, which will wait for
the suspended scan in Session-1 to finish (by this time the vacuum has
a ShareUpdateExclusiveLock on the table).  If the user in Session-1
then tries to acquire a conflicting lock on the same table, it will
create a deadlock.

Simple test to reproduce deadlock:

create unlogged table ul_t1(c1 int);
create index idx_ul_t1 on ul_t1 (c1);
insert into ul_t1 values(1);
insert into ul_t1 values(2);
delete from ul_t1 where c1=1;

Session-1
set enable_seqscan = off;
begin transaction;
declare c1 cursor for select * from ul_t1 where c1=2;
fetch next from c1; -- everything is good till here

Session-2
vacuum ul_t1;  -- This will wait for session-1 to complete the scan

Session-1
lock table ul_t1 in Access Exclusive Mode;  -- This will wait for
session-2 to complete the vacuum
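
At this point a third session can see the heavyweight-lock side of the
picture with a standard catalog query; note that the buffer-pin wait the
vacuum is stuck in does not appear in pg_locks, which is why the deadlock
detector never fires:

select pid, locktype, mode, granted
from pg_locks
where relation = 'ul_t1'::regclass;
-- Session-1's AccessExclusiveLock row shows granted = f, while the
-- vacuum's ShareUpdateExclusiveLock row stays granted = t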

If we agree that the above is a problematic case, then some of the options
to solve it could be: (a) vacuum should not wait for a cleanup lock and
instead just give up and start again, which I think is a bad idea; (b)
don't allow taking a lock of higher granularity after the scan is
suspended, not sure if that is feasible; (c) document the above danger;
this sounds okay on the grounds that nobody has reported the problem
till now

I found this problem while analysing a similar case of hash index
interaction with vacuum, as is being discussed in that thread [1].  We
could think of solving it in the way proposed on that thread, but I
think that, as proposed, it is not safe for non-MVCC scans.

Thoughts?


[1] - 
https://www.postgresql.org/message-id/CAA4eK1%2BJM%2BffHgUfW8wf%2BLgn2eJ1fGjyn6b_L5m0fiTEj2_6Pw%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] munmap() failure due to sloppy handling of hugepage size

2016-10-13 Thread Tom Lane
Andres Freund  writes:
> On 2016-10-12 16:33:38 -0400, Tom Lane wrote:
>> Also, if you look into /sys then you are going to see multiple
>> possible values and it's not clear how to choose the right one.

> That's a fair point. It'd probably be good to use the largest we can,
> bounded by a percentage of max waste or such.  But that's likely
> something for another day.

Yeah.  Merlin pointed out that on versions of Linux newer than my
RHEL6 box, mmap accepts additional flag bits that let you specify
which hugepage size to use.  So we would need to use those if we
wanted to work with anything besides the default size.

Now AFAICT from the documentation I've seen, configuring hugepages
is all still pretty manual, ie the sysadmin has to set up so many huge
pages of each size at or near boot.  So I'm thinking that using a
non-default size should be something that happens only if the user
tells us to, ie we'd need to add a GUC saying "use size X".  That's
pretty ugly but if the admin is intending PG to use pages from a
certain pool, how else would we ensure that the right thing happens?
And it'd provide a way of overriding our default 2MB guess on non-Linux
platforms.

Anyway, anything involving a new GUC is certainly new-feature, HEAD-only
material.  I think though that reading the default hugepage size out of
/proc/meminfo is a back-patchable bug fix.

regards, tom lane




Re: [HACKERS] pg_dump: Simplify internal archive version handling

2016-10-13 Thread Tom Lane
Peter Eisentraut  writes:
> I propose the attached patch to clean up some redundancy and confusion
> in pg_dump.

Looks sane, though I'd suggest coding the access macros with shifts
not multiplies/divides.

Since these numbers are purely internal and not stored in this form in the
file, I wonder whether we shouldn't drop the rightmost zero byte, ie make
it just <maj><min><rev> not <maj><min><rev>00.  That would save at least
one divide/shift when accessing.

regards, tom lane




Re: [HACKERS] Proposal: scan key push down to heap [WIP]

2016-10-13 Thread Tom Lane
Dilip Kumar  writes:
> On Thu, Oct 13, 2016 at 6:05 AM, Robert Haas  wrote:
>> I seriously doubt that this should EVER be supported for anything
>> other than "var op const", and even then only for very simple
>> operators.

> Yes, with the existing key push down infrastructure only "var op const",
> but if we extend that I think we can cover many other simple
> expressions, i.e.

I think it is a mistake to let this idea drive any additional
complication of the ScanKey data structure.  That will have negative
impacts on indexscan performance, not to mention require touching
quite a lot of unrelated code.  And as Robert points out, we do not
want to execute anything expensive while holding the buffer LWLock.

>> Part of the trick if we want to make this work is going to be figuring
>> out how we'll identify which operators are safe.

> Yes, I agree that's the difficult part. Can't we process the full qual
> list and decide whether the operators are safe or not based on their
> datatypes?

Possibly restricting it to builtin, immutable functions on non-toastable
data types would do.  Or for more safety, only allow pass-by-value data
types.  But I have a feeling that there might still be counterexamples.
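
For concreteness, here is the distinction being discussed, with an
illustrative table and quals:

-- plausibly safe to push down: "var op const" with a built-in immutable
-- operator on a pass-by-value type
select * from orders where order_id < 1000;

-- not a candidate: the operator works on a toastable type, so evaluating
-- it could detoast (allocate, even error out) under the buffer LWLock
select * from orders where note ~ 'urgent';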

regards, tom lane




[HACKERS] pg_dump: Simplify internal archive version handling

2016-10-13 Thread Peter Eisentraut
I propose the attached patch to clean up some redundancy and confusion
in pg_dump.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From 41fce2adb8b3051ba1b29f592d4d7b8e52aeb30a Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Thu, 13 Oct 2016 12:00:00 -0400
Subject: [PATCH] pg_dump: Simplify internal archive version handling

The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components.  Simplify
that by just keeping the single field and adding some macros to extract
the components.  Also introduce some macros for composing version
numbers, to eliminate the repeated use of magic formulas.
---
 src/bin/pg_dump/pg_backup_archiver.c | 52 -
 src/bin/pg_dump/pg_backup_archiver.h | 56 ++--
 2 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index e237b4a..0e20985 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -1105,7 +1105,8 @@ PrintTOCSummary(Archive *AHX)
 			fmtName = "UNKNOWN";
 	}
 
-	ahprintf(AH, "; Dump Version: %d.%d-%d\n", AH->vmaj, AH->vmin, AH->vrev);
+	ahprintf(AH, "; Dump Version: %d.%d-%d\n",
+			 ARCHIVE_MAJOR(AH->version), ARCHIVE_MINOR(AH->version), ARCHIVE_REV(AH->version));
 	ahprintf(AH, "; Format: %s\n", fmtName);
 	ahprintf(AH, "; Integer: %d bytes\n", (int) AH->intSize);
 	ahprintf(AH, "; Offset: %d bytes\n", (int) AH->offSize);
@@ -2106,6 +2107,7 @@ _discoverArchiveFormat(ArchiveHandle *AH)
 	if (strncmp(sig, "PGDMP", 5) == 0)
 	{
 		int			byteread;
+		char		vmaj, vmin, vrev;
 
 		/*
 		 * Finish reading (most of) a custom-format header.
@@ -2115,31 +2117,30 @@ _discoverArchiveFormat(ArchiveHandle *AH)
 		if ((byteread = fgetc(fh)) == EOF)
 			READ_ERROR_EXIT(fh);
 
-		AH->vmaj = byteread;
+		vmaj = byteread;
 
 		if ((byteread = fgetc(fh)) == EOF)
 			READ_ERROR_EXIT(fh);
 
-		AH->vmin = byteread;
+		vmin = byteread;
 
 		/* Save these too... */
-		AH->lookahead[AH->lookaheadLen++] = AH->vmaj;
-		AH->lookahead[AH->lookaheadLen++] = AH->vmin;
+		AH->lookahead[AH->lookaheadLen++] = vmaj;
+		AH->lookahead[AH->lookaheadLen++] = vmin;
 
 		/* Check header version; varies from V1.0 */
-		if (AH->vmaj > 1 || ((AH->vmaj == 1) && (AH->vmin > 0)))		/* Version > 1.0 */
+		if (vmaj > 1 || (vmaj == 1 && vmin > 0))		/* Version > 1.0 */
 		{
 			if ((byteread = fgetc(fh)) == EOF)
 READ_ERROR_EXIT(fh);
 
-			AH->vrev = byteread;
-			AH->lookahead[AH->lookaheadLen++] = AH->vrev;
+			vrev = byteread;
+			AH->lookahead[AH->lookaheadLen++] = vrev;
 		}
 		else
-			AH->vrev = 0;
+			vrev = 0;
 
-		/* Make a convenient integer <maj><min><rev>00 */
-		AH->version = ((AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev) * 256 + 0;
+		AH->version = MAKE_ARCHIVE_VERSION(vmaj, vmin, vrev);
 
 		if ((AH->intSize = fgetc(fh)) == EOF)
 			READ_ERROR_EXIT(fh);
@@ -2234,12 +2235,7 @@ _allocAH(const char *FileSpec, const ArchiveFormat fmt,
 
 	/* AH->debugLevel = 100; */
 
-	AH->vmaj = K_VERS_MAJOR;
-	AH->vmin = K_VERS_MINOR;
-	AH->vrev = K_VERS_REV;
-
-	/* Make a convenient integer <maj><min><rev>00 */
-	AH->version = ((AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev) * 256 + 0;
+	AH->version = K_VERS_SELF;
 
 	/* initialize for backwards compatible string processing */
 	AH->public.encoding = 0;	/* PG_SQL_ASCII */
@@ -3528,9 +3524,9 @@ WriteHead(ArchiveHandle *AH)
 	struct tm	crtm;
 
 	(*AH->WriteBufPtr) (AH, "PGDMP", 5);		/* Magic code */
-	(*AH->WriteBytePtr) (AH, AH->vmaj);
-	(*AH->WriteBytePtr) (AH, AH->vmin);
-	(*AH->WriteBytePtr) (AH, AH->vrev);
+	(*AH->WriteBytePtr) (AH, ARCHIVE_MAJOR(AH->version));
+	(*AH->WriteBytePtr) (AH, ARCHIVE_MINOR(AH->version));
+	(*AH->WriteBytePtr) (AH, ARCHIVE_REV(AH->version));
 	(*AH->WriteBytePtr) (AH, AH->intSize);
 	(*AH->WriteBytePtr) (AH, AH->offSize);
 	(*AH->WriteBytePtr) (AH, AH->format);
@@ -3563,24 +3559,26 @@ ReadHead(ArchiveHandle *AH)
 	 */
 	if (!AH->readHeader)
 	{
+		char		vmaj, vmin, vrev;
+
 		(*AH->ReadBufPtr) (AH, tmpMag, 5);
 
 		if (strncmp(tmpMag, "PGDMP", 5) != 0)
 			exit_horribly(modulename, "did not find magic string in file header\n");
 
-		AH->vmaj = (*AH->ReadBytePtr) (AH);
-		AH->vmin = (*AH->ReadBytePtr) (AH);
+		vmaj = (*AH->ReadBytePtr) (AH);
+		vmin = (*AH->ReadBytePtr) (AH);
 
-		if (AH->vmaj > 1 || ((AH->vmaj == 1) && (AH->vmin > 0)))		/* Version > 1.0 */
-			AH->vrev = (*AH->ReadBytePtr) (AH);
+		if (vmaj > 1 || (vmaj == 1 && vmin > 0))		/* Version > 1.0 */
+			vrev = (*AH->ReadBytePtr) (AH);
 		else
-			AH->vrev = 0;
+			vrev = 0;
 
-		AH->version = ((AH->vmaj * 256 + AH->vmin) * 256 + AH->vrev) * 256 + 0;
+		AH->version = MAKE_ARCHIVE_VERSION(vmaj, vmin, vrev);
 
 		if (AH->version < K_VERS_1_0 || AH->version > K_VERS_MAX)
 			exit_horribly(modulename, "unsupported version (%d.%d) 

[HACKERS] Stats sender and 2pc minor problem

2016-10-13 Thread Stas Kelvich
Hello.

Statistics sender logic during a usual commit and a two-phase commit does
not strictly match, and that leads to delta_live_tuples being added to
n_live_tup in case of a truncate in a two-phase commit.

That can be seen in the following example:

CREATE TABLE trunc_stats_test5(id serial);
INSERT INTO trunc_stats_test5 DEFAULT VALUES;
INSERT INTO trunc_stats_test5 DEFAULT VALUES;
INSERT INTO trunc_stats_test5 DEFAULT VALUES;
BEGIN;
TRUNCATE trunc_stats_test5;
PREPARE TRANSACTION 'twophase_stats';
COMMIT PREPARED 'twophase_stats';

After that pg_stat_user_tables will have n_live_tup = 3 instead of 0.
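
The discrepancy can then be observed with a query like the one below (the
stats collector is fed asynchronously, so a short wait may be needed before
the numbers show up):

SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'trunc_stats_test5';
-- with the bug: n_live_tup = 3; expected after the truncate: 0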

Fix along with test is attached.



2pc-stats.patch
Description: Binary data

-- 
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] Add PGDLLEXPORT to PG_FUNCTION_INFO_V1

2016-10-13 Thread Craig Ringer
On 13 Oct. 2016 05:28, "Tom Lane"  wrote:
>
> I wrote:
> > Albe Laurenz  writes:
> >> Tom Lane wrote:
> >>> I'm okay with adding PGDLLEXPORT to the extern, but we should update
> >>> that comment to note that it's not necessary with any of our standard
> >>> Windows build processes.  (For that matter, the comment fails to
explain
> >>> why this macro is providing an extern for the base function at all...)
>
> >> Here is a patch for that, including an attempt to improve the comment.
>
> > Pushed with some further twiddling of the comment.
>
> Well, the buildfarm doesn't like that even a little bit.  It seems that
> the MSVC compiler does not like seeing both "extern Datum foo(...)" and
> "extern PGDLLEXPORT Datum foo(...)", so anything that had an extern in
> a .h file is failing.  There is also quite a bit of phase-of-the-moon
> behavior in here, because in some cases some functions are raising errors
> and others that look entirely the same are not.

Pretty sure we discussed and did exactly this before around 9.4. Will check
archives...

Yeah. Here's the thread.

https://www.postgresql.org/message-id/flat/27019.1397571477%40sss.pgh.pa.us#27019.1397571...@sss.pgh.pa.us

I think the discussion last time came down to you and me disagreeing about
Microsoft droppings too ;)


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-10-13 Thread Amit Langote
On 2016/10/13 19:37, Ashutosh Bapat wrote:
>> In case we cannot reach a foreign server during the post-commit phase,
>> basically the transaction and the following transactions should stop until
>> the crashed server is revived.
> 
> I have repeatedly given reasons why this is not correct. You and Amit
> seem to repeat this statement again and again in turns without giving
> any concrete reasons why this is so.

As mentioned in the description of the "Commit" or "Completion" phase in the
Wikipedia article [1]:

* Success

If the coordinator received an agreement message from all cohorts during
the commit-request phase:

1. The coordinator sends a commit message to all the cohorts.

2. Each cohort completes the operation, and releases all the locks and
   resources held during the transaction.

3. Each cohort sends an acknowledgment to the coordinator.

4. The coordinator completes the transaction when all acknowledgments
   have been received.

* Failure

If any cohort votes No during the commit-request phase (or the
coordinator's timeout expires):

1. The coordinator sends a rollback message to all the cohorts.

2. Each cohort undoes the transaction using the undo log, and releases
   the resources and locks held during the transaction.

3. Each cohort sends an acknowledgement to the coordinator.

4. The coordinator undoes the transaction when all acknowledgements have
   been received.

In point 4 of both the commit and abort cases above, it says: "when *all*
acknowledgements have been received."


However, when I briefly read the description in "Transaction Management in
the R* Distributed Database Management System (C. Mohan et al)" [2], it
seems that what Ashutosh is saying might be a correct way to proceed after
all:

"""
2. THE TWO-PHASE COMMIT PROTOCOL

...

After the coordinator receives the votes from all its subordinates, it
initiates the second phase of the protocol. If all the votes were YES
VOTES, then the coordinator moves to the committing state by force-writing
a commit record and sending COMMIT messages to all the subordinates. The
completion of the force-write takes the transaction to its commit point.
Once this point is passed the user can be told that the transaction has
been committed.
...

"""

Sorry about the noise.

Thanks,
Amit

[1] https://en.wikipedia.org/wiki/Two-phase_commit_protocol#Commit_phase

[2] http://www.cs.cmu.edu/~natassa/courses/15-823/F02/papers/p378-mohan.pdf







Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-10-13 Thread Ashutosh Bapat
>>
>> If we are successful in COMMITTING foreign transactions during the
>> post-commit phase, the COMMIT message will be returned after we have
>> committed all foreign transactions. But in case we cannot reach a
>> foreign server and the request times out, we cannot revert our
>> decision that we are going to commit the transaction. That's my answer
>> to the timeout-based heuristic.
>
> IIUC, 2PC is a protocol that assumes that all of the foreign servers are live.

Do you have any references? Take a look at [1]. The first paragraph
itself mentions that 2PC can achieve its goals despite temporary
failures.

> In case we cannot reach a foreign server during the post-commit phase,
> basically the transaction and the following transactions should stop until
> the crashed server is revived.

I have repeatedly given reasons why this is not correct. You and Amit
seem to repeat this statement again and again in turns without giving
any concrete reasons why this is so.

> This is the first step in implementing 2PC
> for FDW, I think. The heuristic determination approach I mentioned
> is one optimization idea to avoid holding up transactions in
> case a foreign server crashes.
>
>> I don't see much point in holding up post-commit processing for a
>> non-responsive foreign server, which may not respond for days
>> on end. Can you please elaborate on a use case? Which commercial
>> transaction manager does that?
>
> For example, the client updates data on a foreign server and then
> commits. And the next transaction from the same client selects the
> data which was updated in the previous transaction. In this case, because
> the first transaction is committed, the second transaction should be
> able to see the updated data, but under your idea it can see the old
> data. Since there is obviously an order between the first transaction
> and the second transaction, I think that it's not a problem of
> providing a consistent view.

2PC doesn't guarantee this. For that you need other methods and
protocols. We have discussed this before. [2]


[1] https://en.wikipedia.org/wiki/Two-phase_commit_protocol
[2] 
https://www.postgresql.org/message-id/CAD21AoCTe1CFfA9g1uqETvLaJZfFH6QoPSDf-L3KZQ-CDZ7q8g%40mail.gmail.com
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-10-13 Thread Masahiko Sawada
On Fri, Oct 7, 2016 at 4:25 PM, Ashutosh Bapat
 wrote:
> On Thu, Oct 6, 2016 at 2:52 PM, Amit Langote
>  wrote:
>> On 2016/10/06 17:45, Ashutosh Bapat wrote:
>>> On Thu, Oct 6, 2016 at 1:34 PM, Masahiko Sawada  
>>> wrote:
>>>> On Thu, Oct 6, 2016 at 1:41 PM, Ashutosh Bapat
>>>>  wrote:
>>>>>> My understanding is that basically the local server cannot return
>>>>>> COMMIT to the client until the 2nd phase is completed.
>>>>>
>>>>> If we do that, the local server may not return to the client at all,
>>>>> if the foreign server crashes and never comes up. Practically, it may
>>>>> take much longer to finish a COMMIT, depending upon how long it takes
>>>>> for the foreign server to reply to a COMMIT message.
>>>>
>>>> Yes, I think 2PC behaves so; please refer to [1].
>>>> To prevent the local server from stalling forever due to a communication
>>>> failure, we could provide a timeout on the coordinator side or on the
>>>> participant side.
>>>
>>> This too, looks like a heuristic and shouldn't be the default
>>> behaviour and hence not part of the first version of this feature.
>>
>> At any rate, the coordinator should not return to the client until after
>> the 2nd phase is completed, which was the original point.  If COMMIT
>> taking longer is an issue, then it could be handled with one of the
>> approaches mentioned so far (even if not in the first version), but no
>> version of this feature should really return COMMIT to the client after
>> finishing only the first phase.  Am I missing something?
>
> There is a small time window between the actual COMMIT and the commit
> message being returned. An actual commit happens when we insert a WAL
> record saying transaction X committed, and then we return to the client
> saying a COMMIT happened. Note that a transaction may be committed but we
> may never return to the client with a commit message, because the
> connection was lost or the server crashed. I hope we agree on this.

Agree.

> COMMITTING the foreign prepared transactions happens after we COMMIT
> the local transaction. If we do it before COMMITTING the local transaction
> and the local server crashes, we will roll back the local transaction
> during subsequent recovery while the foreign segments have committed,
> resulting in an inconsistent state.
>
> If we are successful in COMMITTING foreign transactions during the
> post-commit phase, the COMMIT message will be returned after we have
> committed all foreign transactions. But in case we cannot reach a
> foreign server and the request times out, we cannot revert our
> decision that we are going to commit the transaction. That's my answer
> to the timeout-based heuristic.

IIUC, 2PC is a protocol that assumes that all of the foreign servers are live.
In case we cannot reach a foreign server during the post-commit phase,
basically the transaction and the following transactions should stop until
the crashed server is revived. This is the first step in implementing 2PC
for FDW, I think. The heuristic determination approach I mentioned
is one optimization idea to avoid holding up transactions in
case a foreign server crashes.

> I don't see much point in holding up post-commit processing for a
> non-responsive foreign server, which may not respond for days
> on end. Can you please elaborate on a use case? Which commercial
> transaction manager does that?

For example, the client updates data on a foreign server and then
commits. And the next transaction from the same client selects the
data which was updated in the previous transaction. In this case, because
the first transaction is committed, the second transaction should be
able to see the updated data, but under your idea it can see the old
data. Since there is obviously an order between the first transaction
and the second transaction, I think that it's not a problem of
providing a consistent view.

I guess the transaction manager of Postgres-XC behaves this way, no?

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-10-13 Thread Victor Wagner
On Thu, 13 Oct 2016 12:30:59 +0530
Mithun Cy  wrote:

> On Fri, Sep 30, 2016 at 2:14 PM, Victor Wagner 
> wrote:

> Okay, but for me consistency is also important. Since we agree to
> disagree on some of the comments and others have not expressed any
> problem, I am moving it to committer.


Thank you for your efforts in improving my patch.





Re: [HACKERS] FTS Configuration option

2016-10-13 Thread Artur Zakirov

On 13.10.2016 11:54, Emre Hasegeli wrote:



>> Maybe it would also be better to use -> instead of AND? AND would have
>> another behaviour. I could create the following configuration:
>>
>> => ALTER TEXT SEARCH CONFIGURATION multi_conf
>> ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
>> word, hword, hword_part
>> WITH (german_ispell AND english_ispell) OR simple;
>>
>> which will return both german_ispell and english_ispell results. But
>> I'm not sure that this is a good solution.
>
> I see your use case for AND.  It might indeed be useful.  AND suits it well.
>
> Maybe THEN can be the keyword instead of -> for passing the results to
> subsequent dictionaries.  They are all reserved keywords.  I guess it
> wouldn't be a problem to use them.

I agree with THEN. It is better than using ->, I think. I suppose it
wouldn't be a problem either. I think it is necessary to fix gram.y and
implement the logic with OR, AND and THEN.

>> Of course, if this syntax is implemented, the old syntax with commas
>> should also be maintained.
>
> Yes, we definitely should.  The comma can be interpreted as either one of
> the keywords depending on the left-hand-side dictionary.
>
> I would be glad to review, if you develop this feature.

Then I will develop it :). But I suppose I can do it a few days or weeks
later, because I have other tasks with higher priority.

BTW, I have already implemented the USING option a few weeks ago:
https://github.com/select-artur/postgres/tree/join_tsconfig . But of
course it is not useful now.


--
Artur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] FTS Configuration option

2016-10-13 Thread Emre Hasegeli
> With such syntax we also don't need the TSL_FILTER flag for lexemes. At
> the current time the unaccent extension sets this flag to pass a lexeme to
> the next dictionary. This flag is used by the text-search parser. It
> looks like a hard-coded solution. The user can't change this behaviour.

Exactly.

> Maybe it would also be better to use -> instead of AND? AND would have
> another behaviour. I could create the following configuration:
>
> => ALTER TEXT SEARCH CONFIGURATION multi_conf
> ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
> word, hword, hword_part
> WITH (german_ispell AND english_ispell) OR simple;
>
> which will return both german_ispell and english_ispell results. But
> I'm not sure that this is a good solution.

I see your use case for AND.  It might indeed be useful.  AND suits it well.

Maybe THEN can be the keyword instead of -> for passing the results to
subsequent dictionaries.  They are all reserved keywords.  I guess it
wouldn't be a problem to use them.

> Of course, if this syntax is implemented, the old syntax with commas
> should also be maintained.

Yes, we definitely should.  The comma can be interpreted as either one of
the keywords depending on the left-hand-side dictionary.

I would be glad to review, if you develop this feature.




Re: [HACKERS] pg_dump, pg_dumpall and data durability

2016-10-13 Thread Michael Paquier
On Thu, Oct 13, 2016 at 2:49 PM, Michael Paquier
 wrote:
> - This is effective for all modes, when the user specifies an output
> file. In short that's when fileSpec is not NULL.

I sent that email too early here... In this case the sync is a
no-op; that paragraph is missing a sentence saying so.
-- 
Michael




[HACKERS] Improve OOM handling in pg_locale.c

2016-10-13 Thread Michael Paquier
Hi all,

This is a follow-up of
https://www.postgresql.org/message-id/11202.1472597262%40sss.pgh.pa.us
where we are looking at improving OOM handling in the code. In short,
report an ERROR appropriately instead of crashing. As mentioned in
this message, pg_locale.c is trickier to handle because we had better
not call elog() in a code path where the backend's locales are not set
up appropriately. Attached is a patch aimed at fixing that, doing the
following:
- Copy the results of the localeconv() call into a temporary struct lconv,
as those can be overwritten when restoring the locales with setlocale().
- Use db_encoding_strdup to encode that correctly.
- Switch back to the backend locales.
- Check for any strdup calls that returned NULL and elog() if so.
- If there is no error, fill in CurrentLocaleConv and return to the caller.

I am attaching that to the next CF.
Thanks,
-- 
Michael
diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c
index a818023..e4fd0f3 100644
--- a/src/backend/utils/adt/pg_locale.c
+++ b/src/backend/utils/adt/pg_locale.c
@@ -440,6 +440,7 @@ PGLC_localeconv(void)
 	static struct lconv CurrentLocaleConv;
 	static bool CurrentLocaleConvAllocated = false;
 	struct lconv *extlconv;
+	struct lconv tmplconv;
 	char	   *save_lc_monetary;
 	char	   *save_lc_numeric;
 	char	   *decimal_point;
@@ -523,23 +524,22 @@ PGLC_localeconv(void)
 	encoding = pg_get_encoding_from_locale(locale_monetary, true);
 
 	/*
-	 * Must copy all values since restoring internal settings may overwrite
-	 * localeconv()'s results.  Note that if we were to fail within this
-	 * sequence before reaching "CurrentLocaleConvAllocated = true", we could
-	 * leak some memory --- but not much, so it's not worth agonizing over.
+	 * First copy all values into a temporary variable since restoring internal
+	 * settings may overwrite localeconv()'s results.  Note that elog() cannot
+	 * be called as the backend's locale settings prevail and they are not
+	 * restored yet.
 	 */
-	CurrentLocaleConv = *extlconv;
-	CurrentLocaleConv.decimal_point = decimal_point;
-	CurrentLocaleConv.grouping = grouping;
-	CurrentLocaleConv.thousands_sep = thousands_sep;
-	CurrentLocaleConv.int_curr_symbol = db_encoding_strdup(encoding, extlconv->int_curr_symbol);
-	CurrentLocaleConv.currency_symbol = db_encoding_strdup(encoding, extlconv->currency_symbol);
-	CurrentLocaleConv.mon_decimal_point = db_encoding_strdup(encoding, extlconv->mon_decimal_point);
-	CurrentLocaleConv.mon_grouping = strdup(extlconv->mon_grouping);
-	CurrentLocaleConv.mon_thousands_sep = db_encoding_strdup(encoding, extlconv->mon_thousands_sep);
-	CurrentLocaleConv.negative_sign = db_encoding_strdup(encoding, extlconv->negative_sign);
-	CurrentLocaleConv.positive_sign = db_encoding_strdup(encoding, extlconv->positive_sign);
-	CurrentLocaleConvAllocated = true;
+	tmplconv = *extlconv;
+	tmplconv.decimal_point = decimal_point;
+	tmplconv.grouping = grouping;
+	tmplconv.thousands_sep = thousands_sep;
+	tmplconv.int_curr_symbol = db_encoding_strdup(encoding, extlconv->int_curr_symbol);
+	tmplconv.currency_symbol = db_encoding_strdup(encoding, extlconv->currency_symbol);
+	tmplconv.mon_decimal_point = db_encoding_strdup(encoding, extlconv->mon_decimal_point);
+	tmplconv.mon_grouping = strdup(extlconv->mon_grouping);
+	tmplconv.mon_thousands_sep = db_encoding_strdup(encoding, extlconv->mon_thousands_sep);
+	tmplconv.negative_sign = db_encoding_strdup(encoding, extlconv->negative_sign);
+	tmplconv.positive_sign = db_encoding_strdup(encoding, extlconv->positive_sign);
 
 	/* Try to restore internal settings */
 	if (save_lc_monetary)
@@ -566,6 +566,31 @@ PGLC_localeconv(void)
 	}
 #endif
 
+	/*
+	 * Now that locales are set back to normal check for any NULL fields and
+	 * report an out-of-memory error in consequence if any.
+	 */
+	if (tmplconv.decimal_point == NULL ||
+		tmplconv.grouping == NULL ||
+		tmplconv.thousands_sep == NULL ||
+		tmplconv.int_curr_symbol == NULL ||
+		tmplconv.currency_symbol == NULL ||
+		tmplconv.mon_decimal_point == NULL ||
+		tmplconv.mon_grouping == NULL ||
+		tmplconv.mon_thousands_sep == NULL ||
+		tmplconv.negative_sign == NULL ||
+		tmplconv.positive_sign == NULL)
+	{
+		free_struct_lconv();
+		elog(ERROR, "out of memory");
+	}
+
+	/*
+	 * Everything is good, so save the result.
+	 */
+	CurrentLocaleConv = tmplconv;
+
+	CurrentLocaleConvAllocated = true;
 	CurrentLocaleConvValid = true;
 	return &CurrentLocaleConv;
 }


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-10-13 Thread Mithun Cy
On Fri, Sep 30, 2016 at 2:14 PM, Victor Wagner  wrote:
>Ok, some trailing whitespace and mixing of tabs and spaces,
>which git apply doesn't like, are really present in the patch.
>I've attached here a version with these issues resolved.

There were some more trailing spaces and spaces used for tabs in patch 09.
I have fixed those as well, along with formatting issues related to comments
and the { of controlled blocks.

>But backward compatibility is the more important thing, so I now assume that
>the user tries to connect to just one node, and if this node is read-only,
>the user knows what he is doing.

Okay, but for me consistency is also important. Since we agree to disagree
on some of the comments and others have not expressed any problem, I am
moving it to committer.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


libpq-failover-10.patch
Description: Binary data



[HACKERS] Reporting wait events for system processes

2016-10-13 Thread Michael Paquier
Hi all,

I have been looking at the possibility of supporting wait events for
system processes, and here are the things that I think need to be done:
done:
- System processes do not have PGPROC entries in PROC_HDR. I think
that we should add an array for system processes, putting
walwriterLatch, checkpointerLatch and startupProc in it at the same
time, because we'd store a PGPROC entry directly. Another possibility
would be to declare them individually in PROC_HDR, but I'd like to have
a routine like BackendPidGetProc() but for system processes, and that
is more portable in the long run (see the sketch after this list).
- System process PIDs are not reported as global variables, so we
would need to move them from postmaster.c to globals.c and declare
them as global variables; they are currently just static variables in
postmaster.c. This is useful to match those PIDs with the entries in
PROC_HDR to get the name of each process.
- The catalog associated with system processes has this shape:
-- name - name of the process: walreceiver, autovacuum, bgworker,
autovacuum_worker, startup, etc.
-- PID
-- wait_event
-- wait_event_type
I think that it would be fine to name it, say, pg_stat_system_activity,
and to define it as a system view fed by an SRF that scans the
arrays of PGPROC entries for background workers, autovacuum workers
and system processes.
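
To make the first point concrete, here is a rough, self-contained sketch of
the direction (all of the names below are hypothetical, not actual PostgreSQL
declarations or a worked-out patch):

#include <stddef.h>
#include <sys/types.h>

/* Hypothetical, simplified stand-ins for the real structs. */
typedef struct PGPROC
{
	pid_t		pid;			/* plus latch, wait-event fields, ... */
} PGPROC;

#define NUM_SYSTEM_PROCS 8		/* walwriter, checkpointer, startup, ... */

typedef struct PROC_HDR
{
	/* ... existing members elided ... */
	PGPROC	   *systemProcs[NUM_SYSTEM_PROCS];
} PROC_HDR;

static PROC_HDR *ProcGlobal;

/* Analogous to BackendPidGetProc(), but for system processes */
PGPROC *
SystemPidGetProc(pid_t pid)
{
	for (int i = 0; i < NUM_SYSTEM_PROCS; i++)
	{
		PGPROC	   *proc = ProcGlobal->systemProcs[i];

		if (proc != NULL && proc->pid == pid)
			return proc;
	}
	return NULL;
}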

Thoughts about that? I am expecting a couple of -1s for suggesting to
expose the system PIDs that are currently local to postmaster.c ;)

Thanks,
-- 
Michael




Re: [HACKERS] Aggregate Push Down - Performing aggregation on foreign server

2016-10-13 Thread Prabhat Sahu
On Fri, Sep 30, 2016 at 5:23 PM, Jeevan Chalke <
jeevan.cha...@enterprisedb.com> wrote:


> In the attached patch I have fixed all other review comments you have
> posted. All the comments were excellent and helped me a lot to improve
> in various areas.
>


Hi,

I have tested it and created a few extra test cases which we can consider
adding to "postgres_fdw.sql".
Please find the attached patch.

Note: This patch is an addition on top of Jeevan Chalke's patch dated 30th
Sept 2016.


Thanks & Regards,
Prabhat Kumar Sahu
Mob: 7758988455
Skype ID: prabhat.sahu1984
www.enterprisedb.com



pg_agg_push_down_extra_test.patch
Description: binary/octet-stream
