Re: [HACKERS] Disabling START TRANSACTION for a SuperUser

2015-10-28 Thread Kevin Grittner
On Tuesday, October 27, 2015 10:57 PM, Muthiah Rajan  wrote:
> On 27-Oct-2015 7:37 PM, "Kevin Grittner"  wrote:

>> It is more problematic where a shop wants to use serializable
>> transactions to ensure data integrity.

> This may be a trivial thing, but what do you mean by shops? I
> actually can't get it :-)

http://www.merriam-webster.com/dictionary/shop

I was using "shop" in the sense of the second of the short noun
definitions ("the place where a specified kind of worker works : a
worker's place of business") or number 5a under the long noun
definitions ("a business establishment: office").  When used in
that sense the type of business is usually used ("an I.T. shop"),
but where it is implied or obvious it is often dropped.  The
dictionary doesn't list it as a colloquialism, but it is rather
informal -- approaching the colloquial.  As I used it I was
intending to convey a group of I.T. professionals under the same
management with a common set of policies, working on the same set
of hardware and/or software.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Tom Lane
Valery Popov  writes:
> Recursive queries are typically used to deal with hierarchical or
> tree-structured data.
> In some conditions, when the data contain relationships with cycles,
> a recursive query will loop without limit and significantly slow the
> client's session.

The standard way of dealing with that is to include logic in the query to
limit the recursion depth, for example

WITH RECURSIVE t(n) AS (
SELECT 1
  UNION ALL
SELECT n+1 FROM t WHERE n < 10
)
SELECT n FROM t;

I don't see an example of this technique in the documentation, which maybe
is a documentation improvement opportunity.

> To prevent an "infinite" loop I suggest the max_recursion_depth parameter,
> which defines the maximum recursion level during the execution of a
> recursive query.

Controlling this via a GUC is a seriously awful idea.  We learned a long
time ago to avoid GUCs that have a direct impact on query semantics; the
scope of their effects is just about never what you want.

Also, there are already ways to constrain queries-gone-crazy; particularly
statement_timeout, which has the advantage that it works for other types
of badly-written queries, not just this one.
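For illustration, the two existing guards Tom mentions look like this (the timeout value and the depth cap are arbitrary, not from the proposal):

```sql
-- Sketch only: statement_timeout cancels any query, recursive or not,
-- that runs longer than the limit:
SET statement_timeout = '5s';

-- And a depth counter carried through the recursive term bounds the
-- recursion in the query itself, even if the data contain cycles:
WITH RECURSIVE t(n, depth) AS (
    SELECT 1, 0
  UNION ALL
    SELECT n + 1, depth + 1 FROM t WHERE depth < 100
)
SELECT n FROM t;
```

The depth column is the SQL-level equivalent of the proposed GUC, with the advantage that each query picks its own limit.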

regards, tom lane




Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Robert Haas
On Wed, Oct 28, 2015 at 2:17 PM, Andres Freund  wrote:
>> I use COPT for this purpose.
>
> Unless I miss something you can't just pass that to configure though,
> right? I.e. it has to be passed to each make invocation?

What I do is:

echo COPT=-Wall -Werror > src/Makefile.custom

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Nathan Wagner
On Wed, Oct 28, 2015 at 02:42:19PM +0100, Robert Haas wrote:
> On Wed, Oct 28, 2015 at 2:17 PM, Andres Freund  wrote:
> >> I use COPT for this purpose.
> >
> > Unless I miss something you can't just pass that to configure though,
> > right? I.e. it has to be passed to each make invocation?
> 
> What I do is:
> 
> echo COPT=-Wall -Werror > src/Makefile.custom

Make will pick up variables from the environment.  So unless the
makefile goes out of its way to circumvent it, you can just do

COPT=-Werror
export COPT

and then run your usual configure/compile cycle.  There's no
specific need to modify the makefiles or pass extra arguments
into make.
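As a quick illustration of Nathan's point (a throwaway toy Makefile, not PostgreSQL's build system), make imports environment variables as macros unless the makefile assigns them itself:

```shell
# Hypothetical demo: a makefile that merely references $(COPT)
# will pick the value up from the environment.
demo=$(mktemp -d)
printf 'all:\n\t@echo COPT is $(COPT)\n' > "$demo/Makefile"
COPT=-Werror
export COPT
make -C "$demo" -s   # prints: COPT is -Werror
```

PostgreSQL's src/Makefile.global appends $(COPT) to CFLAGS in the same spirit, which is why the exported variable takes effect without editing any makefile.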

-- 
nw




Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Robert Haas
On Wed, Oct 28, 2015 at 11:55 AM, Andres Freund  wrote:
> I rather regularly want to pass extra flags to configure without
> overwriting CFLAGS. There's two basic reasons:
>
> 1) ./configure CFLAGS=... essentially breaks --enable-debug and related
>options, overwrites -O2 as the default and such. That's imo pretty
>confusing.
> 2) I like to be able to pass -Werror or something to configure without
>breaking a bunch of configure tests that won't work with Werror.
>
> A good way to do that seems to be to add a separate variable for that
> purpose? Unless somebody has a better idea?

I use COPT for this purpose.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Andres Freund
On 2015-10-28 12:20:22 +0100, Robert Haas wrote:
> On Wed, Oct 28, 2015 at 11:55 AM, Andres Freund  wrote:
> > I rather regularly want to pass extra flags to configure without
> > overwriting CFLAGS. There's two basic reasons:
> >
> > 1) ./configure CFLAGS=... essentially breaks --enable-debug and related
> >options, overwrites -O2 as the default and such. That's imo pretty
> >confusing.
> > 2) I like to be able to pass -Werror or something to configure without
> >breaking a bunch of configure tests that won't work with Werror.
> >
> > A good way to do that seems to be to add a separate variable for that
> > purpose? Unless somebody has a better idea?
>
> I use COPT for this purpose.

Unless I miss something you can't just pass that to configure though,
right? I.e. it has to be passed to each make invocation?

Andres




Re: [HACKERS] ATT_FOREIGN_TABLE and ATWrongRelkindError()

2015-10-28 Thread Robert Haas
On Fri, Oct 23, 2015 at 11:51 AM, Etsuro Fujita
 wrote:
> BTW, I found an incorrect error message in ATWrongRelkindError. Attached is
> a patch for fixing the message.

Committed and back-patched to 9.3.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgbench gaussian/exponential docs improvements

2015-10-28 Thread Robert Haas
On Sun, Oct 25, 2015 at 7:12 PM, Tomas Vondra
 wrote:
>> By default, or when uniform is specified, all values in the range are
>> drawn with equal probability. Specifying gaussian or exponential
>> options modifies this behavior; each requires a mandatory threshold
>> which determines the precise shape of the distribution.
>
> I find the 'threshold' name to be rather unfortunate, as none of the
> probability distribution functions that I know use this term. And even if
> there's one probability function that uses 'threshold' it has very little
> meaning in the others. For example the exponential distribution uses 'rate'
> (lambda). I'd prefer a neutral name (e.g. 'parameter').

+1 for this change.

(I have no particular opinion on your other suggestions.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Tom Lane
Andres Freund  writes:
> I rather regularly want to pass extra flags to configure without
> overwriting CFLAGS. There's two basic reasons:

> 1) ./configure CFLAGS=... essentially breaks --enable-debug and related
>options, overwrites -O2 as the default and such. That's imo pretty
>confusing.
> 2) I like to be able to pass -Werror or something to configure without
>breaking a bunch of configure tests that won't work with Werror.

I would be rather surprised if such a switch didn't affect the flags used
by configure itself, so your point (2) seems like it would require nasty
inconsistency.

Why don't you just edit Makefile.global after configure finishes?
Or use COPT or PROFILE?

regards, tom lane




Re: [HACKERS] ATT_FOREIGN_TABLE and ATWrongRelkindError()

2015-10-28 Thread Robert Haas
On Mon, Oct 26, 2015 at 3:12 AM, Amit Langote
 wrote:
> On 2015/10/23 19:02, Amit Langote wrote:
>> On 2015/10/23 18:51, Etsuro Fujita wrote:
>>>
>>> This is really really nitpicking, but I noticed that there is an implicit
>>> rule concerning the message format in ATWrongRelkindError; if more than
>>> two objects are present, the message is "\"%s\" is not a foo, bar, or
>>> baz". ("or" is preceded by a comma!)  So, would it be better that the
>>> former is "\"%s\" is not a table, view, or foreign table"?
>>
>> Oops! Yeah, I missed the comma there. That seems like a generally
>> preferred punctuation rule (the comma before conjunction(s) I mean).
>
> Here is a patch rectifying that mistake.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Tom Lane
Pavel Stehule  writes:
> 2015-10-28 14:33 GMT+01:00 Tom Lane :
>> Also, there are already ways to constrain queries-gone-crazy; particularly
>> statement_timeout, which has the advantage that it works for other types
>> of badly-written queries not only this one.

> aren't the recursion limits much more a resource limit, like work_mem etc.?

Exceeding work_mem isn't generally supposed to result in an error --- it
causes, or should cause, the system to shift execution strategy so that
you get the same answer with less memory and more time consumption.

In any case, the question is what purpose this would serve that isn't
already covered perfectly well by existing features like
statement_timeout.

regards, tom lane




Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Pavel Stehule
2015-10-28 14:33 GMT+01:00 Tom Lane :

> Valery Popov  writes:
> > Recursive queries are typically used to deal with hierarchical or
> > tree-structured data.
> > In some conditions, when the data contain relationships with cycles,
> > a recursive query will loop without limit and significantly slow the
> > client's session.
>
> The standard way of dealing with that is to include logic in the query to
> limit the recursion depth, for example
>
> WITH RECURSIVE t(n) AS (
> SELECT 1
>   UNION ALL
> SELECT n+1 FROM t WHERE n < 10
> )
> SELECT n FROM t;
>
> I don't see an example of this technique in the documentation, which maybe
> is a documentation improvement opportunity.
>
> > To prevent an "infinite" loop I suggest the max_recursion_depth parameter,
> > which defines the maximum recursion level during the execution of a
> > recursive query.
>
> Controlling this via a GUC is a seriously awful idea.  We learned a long
> time ago to avoid GUCs that have a direct impact on query semantics; the
> scope of their effects is just about never what you want.
>
> Also, there are already ways to constrain queries-gone-crazy; particularly
> statement_timeout, which has the advantage that it works for other types
> of badly-written queries not only this one.
>

aren't the recursion limits much more a resource limit, like work_mem etc.?

Regards

Pavel


>
> regards, tom lane
>


Re: [HACKERS] quieting DEBUG3

2015-10-28 Thread Robert Haas
On Wed, Oct 28, 2015 at 1:51 AM, Tom Lane  wrote:
> Craig Ringer  writes:
>> I think it'd be helpful to define some level of policy about what the
>> debug levels are intended for, so there's some guidance on what level
>> to emit messages on rather than playing "pick a number".
>
> +1 ... I doubt anyone has ever looked at that in a holistic way.

Well, I don't know that I could give really specific guidance on each
individual level, but what I think is pretty clear is that messages
which tell you about something that's likely to happen very frequently
should only appear if you have really cranked the logging up to the
maximum.  So a message that fires every time you touch a data block
had better be DEBUG5, but a message that fires at most once per
checkpoint cycle can afford to be DEBUG1.  Qualitatively, I think that
the higher debug levels (DEBUG1, DEBUG2) should focus on telling you
about things that are potentially interesting and maybe a little bit
unusual, while tracing messages that report things which are entirely
routine should use the lower debug levels (DEBUG4, DEBUG5).

I agree with Craig that the transaction system is interesting and
important, but it's not the *only* interesting and important subsystem
we have ... and any log level below DEBUG2 is basically useless right
now; the stuff you want is almost guaranteed to be lost in the noise.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Valery Popov



28.10.2015 16:33, Tom Lane writes:
> Valery Popov  writes:
>> Recursive queries are typically used to deal with hierarchical or
>> tree-structured data.
>> In some conditions, when the data contain relationships with cycles,
>> a recursive query will loop without limit and significantly slow the
>> client's session.
>
> The standard way of dealing with that is to include logic in the query to
> limit the recursion depth, for example
>
> WITH RECURSIVE t(n) AS (
>     SELECT 1
>   UNION ALL
>     SELECT n+1 FROM t WHERE n < 10
> )
> SELECT n FROM t;

Yes, I agree with this thesis.  But I think in some cases it would be
better to receive an error message and stop execution than to get
incomplete results.


--
Regards,
Valery Popov
Postgres Professional http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Tom Lane
Valery Popov  writes:
> 28.10.2015 16:33, Tom Lane writes:
>> The standard way of dealing with that is to include logic in the query to
>> limit the recursion depth, for example ...

> Yes, I agree with this thesis.  But I think in some cases it would be
> better to receive an error message and stop execution than to get
> incomplete results.

Sure, but you can do that at the SQL level if you have a mind to, as well.

In practice, I think people tend to use recursive queries mainly for data
layouts where the maximum recursion depth isn't terribly clear, so that
setting this GUC to a useful value would be a difficult task anyway.
If you end up setting it to 100X or 1000X more than you think your queries
could possibly recurse, you might as well use some other approach like
statement_timeout, which has got a closer relationship to what you care
about, ie how long you want to wait.

regards, tom lane




Re: [HACKERS] a raft of parallelism-related bug fixes

2015-10-28 Thread Robert Haas
On Sun, Oct 18, 2015 at 12:17 AM, Robert Haas  wrote:
>> So reviewing patch 13 isn't possible without prior knowledge.
>
> The basic question for patch 13 is whether ephemeral record types can
> occur in executor tuples in any contexts that I haven't identified.  I
> know that a tuple table slot can have a column that is of type
> record or record[], and those records can themselves contain
> attributes of type record or record[], and so on as far down as you
> like.  I *think* that's the only case.  For example, I don't believe
> that a TupleTableSlot can contain a *named* record type that has an
> anonymous record buried down inside of it somehow.  But I'm not
> positive I'm right about that.

I have done some more testing and investigation and determined that
this optimism was unwarranted.  It turns out that the type information
for composite and record types gets stored in two different places.
First, the TupleTableSlot has a type OID, indicating the sort of the
value it expects to be stored for that slot attribute.  Second, the
value itself contains a type OID and typmod.  And these don't have to
match.  For example, consider this query:

select row_to_json(i) from int8_tbl i(x,y);

Without i(x,y), the HeapTuple passed to row_to_json is labeled with
the pg_type OID of int8_tbl.  But with the query as written, it's
labeled as an anonymous record type.  If I jigger things by hacking
the code so that this is planned as Gather (single-copy) -> SeqScan,
with row_to_json evaluated at the Gather node, then the sequential
scan kicks out a tuple with a transient record type and stores it into
a slot whose type OID is still that of int8_tbl.  My previous patch
failed to deal with that; the attached one does.

The previous patch was also defective in a few other respects.  The
most significant of those, maybe, is that it somehow thought it was OK
to assume that transient typmods from all workers could be treated
interchangeably rather than individually.  To fix this, I've changed
the TupleQueueFunnel implemented by tqueue.c to be merely a
TupleQueueReader which handles reading from a single worker only.
nodeGather.c therefore creates one TupleQueueReader per worker instead
of a single TupleQueueFunnel for all workers; accordingly, the logic
for multiplexing multiple queues now lives in nodeGather.c.  This is
probably how I should have done it originally - someone, I think Jeff
Davis, complained previously that tqueue.c had no business embedding
the round-robin policy decision, and he was right.  So this addresses
that complaint as well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From db5b2a90ec35adf3f5fac72483679ebcefdb29af Mon Sep 17 00:00:00 2001
From: Robert Haas 
Date: Wed, 7 Oct 2015 12:43:22 -0400
Subject: [PATCH 7/8] Modify tqueue infrastructure to support transient record
 types.

Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this
mechanism, failed to account for the fact that the RECORD pseudo-type
uses transient typmods that are only meaningful within a single
backend.  Transferring such tuples without modification between two
cooperating backends does not work.  This commit installs a system
for passing the tuple descriptors over the same shm_mq being used to
send the tuples themselves.  The two sides might not assign the same
transient typmod to any given tuple descriptor, so we must also
substitute the appropriate receiver-side typmod for the one used by
the sender.  That adds some CPU overhead, but still seems better than
being unable to pass records between cooperating parallel processes.

Along the way, move the logic for handling multiple tuple queues from
tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader,
which reads from a single queue, rather than a TupleQueueFunnel, which
potentially reads from multiple queues.  This change was suggested
previously as a way to make sure that nodeGather.c rather than tqueue.c
had policy control over the order in which to read from queues, but
it wasn't clear to me until now how good an idea it was.  typmod
mapping needs to be performed separately for each queue, and it is
much simpler if the tqueue.c code handles that and leaves multiplexing
multiple queues to higher layers of the stack.
---
 src/backend/executor/nodeGather.c | 139 --
 src/backend/executor/tqueue.c | 977 +-
 src/include/executor/tqueue.h |  12 +-
 src/include/nodes/execnodes.h |   4 +-
 src/tools/pgindent/typedefs.list  |   2 +-
 5 files changed, 980 insertions(+), 154 deletions(-)

diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index 9c1533e..312302a 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -36,11 +36,13 @@
 #include "executor/nodeGather.h"
 #include "executor/nodeSubplan.h"
 #include "executor/tqueue.h"
+#include 

Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Andres Freund
On 2015-10-28 09:36:39 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > I rather regularly want to pass extra flags to configure without
> > overwriting CFLAGS. There's two basic reasons:
> 
> > 1) ./configure CFLAGS=... essentially breaks --enable-debug and related
> >options, overwrites -O2 as the default and such. That's imo pretty
> >confusing.
> > 2) I like to be able to pass -Werror or something to configure without
> >breaking a bunch of configure tests that won't work with Werror.
> 
> I would be rather surprised if such a switch didn't affect the flags used
> by configure itself, so your point (2) seems like it would require nasty
> inconsistency.

Hm, I don't find it that inconsistent to say that configure itself uses
CFLAGS but not EXTRA_CFLAGS (or whatever).

> Why don't you just edit Makefile.global after configure finishes?

I personally do, and it's not a problem for me because I call configure
and everything from scripts anyway. But it's harder to tell people
that. It's e.g. much easier to tell people that they should add one
argument to configure to specify -fno-omit-frame-pointer rather than
having to create Makefile.custom or something like that.

Makefile.custom, afaik, won't forward the cflags to PGXS-built
extensions, which then is annoying for flags like
-fno-omit-frame-pointer.

> Or use COPT or PROFILE?

If we'd persist COPT or PROFILE when specified for configure that'd work
for me as well.

Greetings,

Andres Freund




[HACKERS] Is there any ordering to the values in guc.c?

2015-10-28 Thread Bill Moran

See subject. Aside from them being divvied up by datatype, they seem
to be ordered randomly. Since I'm putting together a patch that will
add some GUCs, do I just add them to the end of the list?

-- 
Bill Moran




Re: [HACKERS] [DESIGN] ParallelAppend

2015-10-28 Thread Kouhei Kaigai
At PGconf.EU, I had a talk with Robert about this topic, and it
became clear we have the same idea.

> ++
> |sub-plan |   * Sub-Plan 1 ... Index Scan on p1
> |index on *-> * Sub-Plan 2 ... PartialSeqScan on p2
> |shared   |   * Sub-Plan 2 ... PartialSeqScan on p2
> |memory   |   * Sub-Plan 2 ... PartialSeqScan on p2
> +-+   * Sub-Plan 3 ... Index Scan on p3
>
In the above example, I put each non-parallel sub-plan into only one
slot of the array, even though a PartialSeqScan takes 3 slots.
It is a strict rule; a non-parallel-aware sub-plan can be picked up
only once.
The index into the sub-plan array is initialized to 0, then advanced
up to 5 as the workers process the parallel-aware Append.
So, once a worker takes a non-parallel sub-plan, no other worker can
ever take the same slot again; thus, no duplicated rows will be
produced by a non-parallel sub-plan in the parallel-aware Append.
Also, this array structure prevents too large a number of workers
from picking up a particular parallel-aware sub-plan, because the
PartialSeqScan occupies 3 slots; that means at most three workers
can pick up this sub-plan. If the 1st worker took the IndexScan on
p1, and the 2nd-4th workers took the PartialSeqScan on p2, then the
5th worker (if any) will pick up the IndexScan on p3 even if the
PartialSeqScan on p2 has not completed.



One other thought experiment: what happens if a parallel-aware
Append underlies another parallel-aware Append?
Being parallel-aware itself, such an Append can occupy multiple
slots in the array of sub-plans, like:

subplans[0] ... SeqScan on p1
subplans[1] ... Parallel Append on p2+p3+p4
subplans[2] ... Parallel Append on p2+p3+p4
subplans[3] ... Parallel Append on p2+p3+p4
subplans[4] ... Parallel Append on p2+p3+p4
subplans[5] ... IndexScan on p5

Also, assume the child parallel-aware Append has the following
array:
subplans[0] ... SeqScan on p2
subplans[1] ... PartialSeqScan on p3
subplans[2] ... PartialSeqScan on p3
subplans[3] ... SeqScan on p4

The Gather node located on top of the upper Append node will
launch (at most) 6 workers according to the requirement by
the upper Append node.
Each worker picks up a particular sub-plan in the array of
upper Append node, so some of them (4 workers at most) will
execute the child Append node.
Then, these 4 workers will also pick up a particular sub-plan
in the array of child Append node.
It will work even if the number of workers is less than optimal,
so I believe the overall design is reasonable.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 

> -Original Message-
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Kouhei Kaigai
> Sent: Tuesday, October 27, 2015 6:46 AM
> To: Robert Haas
> Cc: pgsql-hackers@postgresql.org; Amit Kapila; Kyotaro HORIGUCHI
> Subject: Re: [HACKERS] [DESIGN] ParallelAppend
> 
> > -Original Message-
> > From: pgsql-hackers-ow...@postgresql.org
> > [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
> > Sent: Monday, October 26, 2015 8:53 PM
> > To: Kaigai Kouhei(海外 浩平)
> > Cc: pgsql-hackers@postgresql.org; Amit Kapila; Kyotaro HORIGUCHI
> > Subject: Re: [HACKERS] [DESIGN] ParallelAppend
> >
> > On Sun, Oct 25, 2015 at 9:23 PM, Kouhei Kaigai  wrote:
> > > I entirely agree with your suggestion.
> > >
> > > We may be able to use an analogy between PartialSeqScan and the
> > > parallel- aware Append node.
> > > PartialSeqScan fetches blocks pointed by the index on shared memory
> > > segment, thus multiple workers eventually co-operate to scan a table
> > > using round-robin manner even though individual worker fetches comb-
> > > shaped blocks.
> > > If we assume individual blocks are individual sub-plans on the parallel
> > > aware Append, it performs very similar. A certain number of workers
> > > (more than zero) is launched by Gather node, then the parallel aware
> > > Append node fetches one of the sub-plans if any.
> >
> > Exactly, except for the part about the blocks being "comb-shaped",
> > which doesn't seem to make sense.
> >
> > > I think, this design also gives additional flexibility according to
> > > the required parallelism by the underlying sub-plans.
> > > Please assume the "PartialSeqScan on p2" in the above example wants
> > > 3 workers to process the scan, we can expand the virtual array of
> > > the sub-plans as follows. Then, if Gather node kicks 5 workers,
> > > individual workers are assigned on some of plans. If Gather node
> > > could kick less than 5 workers, the first exit worker picks the
> > > second sub-plan, then it eventually provides the best parallelism.
> > >
> > > ++
> > > |sub-plan |   * Sub-Plan 1 ... Index Scan on p1
> > > |index on *-> * Sub-Plan 2 ... PartialSeqScan on p2
> > > |shared   |   * Sub-Plan 2 ... PartialSeqScan on p2
> > > |memory   |   * Sub-Plan 2 ... PartialSeqScan on p2

Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-28 Thread Victor Wagner
On Mon, 26 Oct 2015 16:25:57 -0400
Peter Eisentraut  wrote:

> Also, this assumes that all the components other than host and port
> are the same.  Earlier there was a discussion about why the ports
> would ever need to be different.  Well, why can't the database names
> be different? I could have use for that.

Because of the way PostgreSQL replication is implemented.

This multihost feature is for clusters: either for an automatic switch
from one cluster node to another when the master node fails and one of
the standbys gets promoted to master, or for load balancing between hot
standby nodes.

PostgreSQL allows different replicas to listen on different ports,
because each replica has its own postgresql.conf, but replication works
on the node level, not on the database level, so all the databases from
the master node are replicated with their original names.





Re: [HACKERS] Getting sorted data from foreign server

2015-10-28 Thread Ashutosh Bapat
On Tue, Oct 27, 2015 at 6:44 PM, Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:

>
>
> On Tue, Oct 27, 2015 at 5:26 AM, Ashutosh Bapat <
> ashutosh.ba...@enterprisedb.com> wrote:
> >
> >
> >
> > On Fri, Oct 23, 2015 at 2:43 AM, Robert Haas 
> wrote:
> >>
> >> On Wed, Oct 21, 2015 at 5:23 AM, Ashutosh Bapat
> >>  wrote:
> >> > Increasing the sorting cost factor (when use_remote_estimates =
> false) from
> >> > 1.1 to 1.2 makes the difference disappear.
> >> >
> >> > Since the startup costs for postgres_fdw are large portion of total
> cost,
> >> > extra 10% of rest of the cost is comparable to 1% fuzzy limit. IMO, we
> >> > shouldn't bother too much about it as the path costs are not much
> different.
> >>
> >> My feeling is that cranking the sorting cost factor up to 20-25% would
> >> be a good idea, just so we have less unnecessary plan churn.  I dunno
> >> if sorting always costs that much, but if a 10% cost overhead is
> >> really 1% because it only applies to a fraction of the cost, I don't
> >> think that's good.  The whole point was to pick something large enough
> >> that we wouldn't take the sorted path unless we will benefit from the
> >> sort, and clearly that's not what happened here.
> >>
> >
> > PFA patch with the default multiplication factor for sort bumped up to
> 1.2.
> >
>
> +/* If no remote estimates, assume a sort costs 10% extra */
> +#define DEFAULT_FDW_SORT_MULTIPLIER 1.2
>
> Shouldn't the above comment say 20%?
>
> Ah! Here's the patch with the comment fixed.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 697de60..cb5c3ae 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -186,23 +186,26 @@ is_foreign_expr(PlannerInfo *root,
 	 * Check that the expression consists of nodes that are safe to execute
 	 * remotely.
 	 */
 	glob_cxt.root = root;
 	glob_cxt.foreignrel = baserel;
 	loc_cxt.collation = InvalidOid;
 	loc_cxt.state = FDW_COLLATE_NONE;
	if (!foreign_expr_walker((Node *) expr, &glob_cxt, &loc_cxt))
 		return false;
 
-	/* Expressions examined here should be boolean, ie noncollatable */
-	Assert(loc_cxt.collation == InvalidOid);
-	Assert(loc_cxt.state == FDW_COLLATE_NONE);
+	/*
+	 * The collation of the expression should be none or originate from a
+	 * foreign var.
+	 */
+	Assert(loc_cxt.state == FDW_COLLATE_NONE ||
+			loc_cxt.state == FDW_COLLATE_SAFE);
 
 	/*
 	 * An expression which includes any mutable functions can't be sent over
 	 * because its result is not stable.  For example, sending now() remote
 	 * side could cause confusion from clock offsets.  Future versions might
 	 * be able to make this choice with more granularity.  (We check this last
 	 * because it requires a lot of expensive catalog lookups.)
 	 */
 	if (contain_mutable_functions((Node *) expr))
 		return false;
@@ -1870,10 +1873,57 @@ printRemoteParam(int paramindex, Oid paramtype, int32 paramtypmod,
  */
 static void
 printRemotePlaceholder(Oid paramtype, int32 paramtypmod,
 	   deparse_expr_cxt *context)
 {
 	StringInfo	buf = context->buf;
 	char	   *ptypename = format_type_with_typemod(paramtype, paramtypmod);
 
 	appendStringInfo(buf, "((SELECT null::%s)::%s)", ptypename, ptypename);
 }
+
+/*
+ * Deparse ORDER BY clause according to the given pathkeys for given base
+ * relation. From given pathkeys expressions belonging entirely to the given
+ * base relation are obtained and deparsed.
+ */
+void
+appendOrderByClause(StringInfo buf, PlannerInfo *root, RelOptInfo *baserel,
+	List *pathkeys)
+{
+	ListCell			*lcell;
+	deparse_expr_cxt	context;
+	int	nestlevel;
+	char	   *delim = " ";
+
+	/* Set up context struct for recursion */
+	context.root = root;
+	context.foreignrel = baserel;
+	context.buf = buf;
+	context.params_list = NULL;
+
+	/* Make sure any constants in the exprs are printed portably */
+	nestlevel = set_transmission_modes();
+
+	appendStringInfo(buf, " ORDER BY");
+	foreach(lcell, pathkeys)
+	{
+		PathKey*pathkey = lfirst(lcell);
+		Expr*em_expr;
+
+		em_expr = find_em_expr_for_rel(pathkey->pk_eclass, baserel);
+		Assert(em_expr != NULL);
+
+		appendStringInfoString(buf, delim);
		deparseExpr(em_expr, &context);
+		if (pathkey->pk_strategy == BTLessStrategyNumber)
+			appendStringInfoString(buf, " ASC");
+		else
+			appendStringInfoString(buf, " DESC");
+
+		if (pathkey->pk_nulls_first)
+			appendStringInfoString(buf, " NULLS FIRST");
+
+		delim = ", ";
+	}
+	reset_transmission_modes(nestlevel);
+}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 65ea6e8..58bf76c 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -127,86 +127,82 @@ ALTER FOREIGN TABLE ft2 ALTER COLUMN c1 OPTIONS (column_name 'C 1');
 (2 rows)
 
 -- Now we should be able 

Re: [HACKERS] Disabling START TRANSACTION for a SuperUser

2015-10-28 Thread Amit Langote
On 2015/10/28 12:57, Muthiah Rajan wrote:
> Hello Kevin,
> 
> This may be a trivial thing But what do you mean by shops? I actually
> can't get it :-)

I think it's casually used to mean a company or a production environment.

Thanks,
Amit





Re: [HACKERS] Disabling START TRANSACTION for a SuperUser

2015-10-28 Thread rajan
Thanks Amit... :-)



--
View this message in context: 
http://postgresql.nabble.com/Disabling-START-TRANSACTION-for-a-SuperUser-tp5871630p5871739.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] proposal: PL/Pythonu - function ereport

2015-10-28 Thread Catalin Iacob
On Tue, Oct 27, 2015 at 9:34 AM, Pavel Stehule  wrote:
> Hi
>
> 2015-10-23 7:34 GMT+02:00 Catalin Iacob :
>> The current code doesn't build on Python3 because the 3rd argument of
>> PyMethod_New, the troubled one you need set to NULL has been removed.
>> This has to do with the distinction between bound and unbound methods
>> which is gone in Python3.
>
>
> this part of is well isolated, and we can do switch for Python2 and Python3
> without significant problems.

I had a quick look at this and at least 2 things are needed for Python3:

* using PyInstanceMethod_New instead of PyMethod_New; as
http://bugs.python.org/issue1587 and
https://docs.python.org/3/c-api/method.html explain that's the new way
of binding a PyCFunction to a class, PyMethod_New(func, NULL) fails

* in the PLy_spi_error_methods definition, __init__ has to take
METH_VARARGS | METH_KEYWORDS not just METH_KEYWORDS; in Python2
METH_KEYWORDS implied METH_VARARGS so it's not needed (it also doesn't
hurt) but if I don't add it, in Python3 I get:
ERROR:  SystemError: Bad call flags in PyCFunction_Call. METH_OLDARGS
is no longer supported!
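As background (not part of the patch): the bound/unbound distinction that Python 3 removed can be seen from plain Python, which is why `PyMethod_New(func, NULL)` no longer has an equivalent there. A quick illustrative sketch:

```python
class C:
    def m(self):
        return 42

# Python 3: accessing through the class gives the plain function
# (Python 2 would give an "unbound method" here instead).
print(type(C.m).__name__)    # function
# Accessing through an instance still gives a bound method.
print(type(C().m).__name__)  # method
print(C().m())               # 42
```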

>> Still, the above link shows a (more verbose but maybe better)
>> alternative: don't use PyErr_NewException and instead define an
>> SPIError type with each slot spelled out explicitly. This will remove
>> the "is it safe to set the third argument to NULL" question.
>
> Should be there some problems, if we lost dependency on basic exception
> class? Some compatibility issues? Or is possible to create full type with
> inheritance support?

It's possible to give it a base type, see at
https://hg.python.org/cpython/rev/429cadbc5b10/ this line before
calling PyType_Ready:
PyComError_Type.tp_base = (PyTypeObject*)PyExc_Exception;

PyErr_NewException is a shortcut for defining simple Exception
deriving types, usually one defines a type with the full PyTypeObject
definition.

In the meantime, I had a deeper look at the 2.7.10 code and I trust
that PyMethod_New with the last argument set to NULL is ok. Setting
that to NULL will lead to the PyMethod representing __init__ im_class
being NULL. But that PyMethod object is not held onto by C code, it's
added to the SPIError class' dict. From there, it is always retrieved
from Python via an instance or via the class (so SPIError().__init__
or SPIError.__init__) which will lead to instancemethod_descr_get
being called because it's the tp_descr_get slot of PyMethod_Type and
that code knows the class you're retrieving the attribute from
(SPIError in this case), checks if the PyMethod already has a not NULL
im_class (which SPIError.__init__ doesn't) and, if not, calls
PyMethod_New again and passes the class (SPIError) as the 3rd
argument.

Given this, I think it's ok to keep using PyErr_NewException rather
than spelling out the type explicitly.




[HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Valery Popov

Hi, Hackers

Recursive queries are typically used to deal with hierarchical or
tree-structured data.
When the data contain relationships with cycles, a recursive query can loop
indefinitely and significantly slow down the client's session.
To prevent such "infinite" loops I suggest a max_recursion_depth parameter,
which defines the maximum recursion level during the execution of a recursive
query.
When max_recursion_depth > 0 and the recursion level of a query exceeds the
specified value, execution of the query is interrupted with an error message.
MS SQL Server has a MAXRECURSION hint for the same purpose.
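The proposed check is easiest to see outside the executor; here is a small Python model of the recursive-union loop with a depth cap (the names and error text are illustrative, not taken from the patch):

```python
def recursive_union(seed, step, max_depth=0):
    """Model of a recursive CTE: repeatedly apply `step` to the previous
    working set until it comes back empty.  max_depth > 0 caps the number
    of recursion levels, like the proposed max_recursion_depth GUC."""
    result, working = list(seed), list(seed)
    depth = 0
    while working:
        if max_depth and depth >= max_depth:
            raise RuntimeError(
                "maximum recursion depth %d exhausted" % max_depth)
        working = step(working)
        result.extend(working)
        depth += 1
    return result

# A two-node cycle 1 -> 2 -> 1: without a limit this never terminates.
edges = [(1, 2), (2, 1)]
step = lambda rows: [d for (s, d) in edges for n in rows if s == n]
try:
    recursive_union([1], step, max_depth=5)
except RuntimeError as e:
    print(e)  # maximum recursion depth 5 exhausted
```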


Thanks!
--

Regards,
Valery Popov
Postgres Professional http://www.postgrespro.com
The Russian Postgres Company


diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 1da7dfb..33a6009 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -6048,6 +6048,23 @@ SET XML OPTION { DOCUMENT | CONTENT };
   
  
 
+ 
+  max_recursion_depth (integer)
+  
+   max_recursion_depth configuration parameter
+  
+  
+  
+   
+Sets the maximum recursion depth in WITH Queries (Common Table Expressions).
+The default value is 0, which means no limit on recursion depth (an infinite loop is possible).
+When max_recursion_depth > 0 and the recursion level of a query exceeds the
+specified value, execution of the query is interrupted with an error message.
+See  for more information.
+   
+  
+ 
+
  
 
  
diff --git a/doc/src/sgml/queries.sgml b/doc/src/sgml/queries.sgml
index ab49bd7..80a63c8 100644
--- a/doc/src/sgml/queries.sgml
+++ b/doc/src/sgml/queries.sgml
@@ -2216,6 +2216,46 @@ SELECT n FROM t LIMIT 100;
In each case it effectively provides temporary table(s) that can
be referred to in the main command.
   
+  
+  
+   
+Also it is possible to limit the number of returned rows by setting the parameter
+max_recursion_depth in the postgresql.conf file.
+See  for more information.
+   
+  
+  
+  
+  Another way to set max_recursion_depth is the SET max_recursion_depth command in psql.
+  
+
+postgres=# show max_recursion_depth;
+LOG:  statement: show max_recursion_depth;
+ max_recursion_depth 
+-
+ 0
+(1 row)
+
+postgres=# set max_recursion_depth = 5;
+LOG:  statement: set max_recursion_depth = 5;
+SET
+postgres=# show max_recursion_depth;
+LOG:  statement: show max_recursion_depth;
+ max_recursion_depth 
+-
+ 5
+(1 row)
+  
+   When max_recursion_depth > 0 and the recursion level of a query exceeds the
+   specified value, execution of the query is interrupted with an error message like this:
+
+
+   ERROR:  The statement terminated. The maximum recursion depth 5 has been exhausted before statement completion.
+
+   
+ 
+   
+  
  
 
  
diff --git a/src/backend/executor/nodeRecursiveunion.c b/src/backend/executor/nodeRecursiveunion.c
index 8df1639..c616250 100644
--- a/src/backend/executor/nodeRecursiveunion.c
+++ b/src/backend/executor/nodeRecursiveunion.c
@@ -110,6 +110,13 @@ ExecRecursiveUnion(RecursiveUnionState *node)
/* 2. Execute recursive term */
for (;;)
{
+   if ((with_recursive_limit > 0) && (node->ps.recursion_cnt >= with_recursive_limit))
+   {
+   ereport(ERROR,
+   (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
+   errmsg("The statement terminated. The maximum recursion depth %d has been exhausted before statement completion.", with_recursive_limit)));
+   break;
+   }
slot = ExecProcNode(innerPlan);
if (TupIsNull(slot))
{
@@ -132,6 +139,9 @@ ExecRecursiveUnion(RecursiveUnionState *node)
innerPlan->chgParam = bms_add_member(innerPlan->chgParam, plan->wtParam);
 
+   /* go to the next recursion level */
+   node->ps.recursion_cnt++;
+
/* and continue fetching from recursive term */
continue;
}
@@ -261,6 +271,11 @@ ExecInitRecursiveUnion(RecursiveUnion *node, EState *estate, int eflags)
build_hash_table(rustate);
}
 
+   /*
+* Init recursion depth counter.
+*/
+   rustate->ps.recursion_cnt=0;
+
return rustate;
 }
 
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 230c5cc..c39e8ca 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -112,6 +112,9 @@ extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
 
+/* Parameters for controlling recursive depth */
+int		with_recursive_limit;
+
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
 

[HACKERS] pg_dump

2015-10-28 Thread Dmitry Voronin
Hello, guys.

I have a database testdb with the comment 'test comment' and the security label
'classified'. I create a dump with pg_dump:

pg_dump -h 127.0.0.1 -d testdb -U postgres --format c dump

So, I want to restore the dump, including the comment and security label, to testdb_restore.
I run:

pg_restore -h 127.0.0.1 -d testdb_restore dump

The result is that the SECURITY LABEL and COMMENT are applied to database testdb
but not to testdb_restore. I don't think that's good.

-- 
Best regards, Dmitry Voronin




Re: [HACKERS] [PROPOSAL] Max recursion depth in WITH Queries (Common Table Expressions)

2015-10-28 Thread Pavel Stehule
2015-10-28 8:33 GMT+01:00 Valery Popov :

> Hi, Hackers
>
> Recursive queries are typically used to deal with hierarchical or
> tree-structured data.
> When the data contain relationships with cycles, a recursive query can loop
> indefinitely and significantly slow down the client's session.
> To prevent such "infinite" loops I suggest a max_recursion_depth parameter,
> which defines the maximum recursion level during the execution of a recursive
> query.
> When max_recursion_depth > 0 and the recursion level of a query exceeds the
> specified value, execution of the query is interrupted with an error message.
> MS SQL Server has a MAXRECURSION hint for the same purpose.
>
>
+1

good idea

Regards

Pavel


>
> Thanks!
> --
>
> Regards,
> Valery Popov
> Postgres Professional http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
>


Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Tom Lane
Andres Freund  writes:
> On 2015-10-28 11:42:28 -0400, Tom Lane wrote:
>> 1. Invent a "--with-werror" configure switch that causes -Werror to be
>> added to the CFLAGS, but not while running tests that it'd break.
>> 
>> 2. Explicitly filter -Werror out of the user-provided CFLAGS while running
>> tests that it'd break.

> I think either of these is fine - I've a slight preference for 2)
> because we already filter *FLAGS in a bunch of places and it seems a bit
> nicer to extend, should another similar case come up.

> Do you have a preference?

I'd go with #2 also, if it's not hard to implement.  Single-purpose
configure switches are a wart.

regards, tom lane




Re: [HACKERS] quieting DEBUG3

2015-10-28 Thread Christopher Browne
On 27 October 2015 at 20:51, Tom Lane  wrote:
>
> Craig Ringer  writes:
> > I think it'd be helpful to define some level of policy about what the
> > debug levels are intended for, so there's some guidance on what level
> > to emit messages on rather than playing "pick a number".
>
> +1 ... I doubt anyone has ever looked at that in a holistic way.
>
> regards, tom lane

A few years ago, I went through Slony's logging with a view to this very
thing.

It was pretty fruitful in changing log levels for a lot of things, and
actually
led to more things being logged 'always', as I noticed useful places to do
CONFIG level logging in the process.

But we were able to get things to the point where INFO logging provided
enough output most of the time.

Looking through the Postgres code base to this end seems likely to be
pretty useful, and I agree, it would need to be done holistically.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: [HACKERS] pageinspect patch, for showing tuple data

2015-10-28 Thread Michael Paquier
On Sat, Oct 17, 2015 at 1:48 AM, Michael Paquier wrote:
> On Sat, Oct 17, 2015 at 5:15 AM, Nikolay Shaplov wrote:
>> Or it's ready to commit, and just not marked this way?
>
> No, I don't think we have reached this state yet.
>
>> I am going to make report based on this patch in Vienna. It would be
>> nice to have it committed until then :)
>
> This is registered in this November's CF and the current one is not
> completely wrapped up, so I haven't rushed into looking at it.

So, I have finally been able to look at this patch in more details,
resulting in the attached. I noticed a couple of things that should be
addressed, mainly:
- addition of a new routine text_to_bits to perform the reverse
operation of bits_to_text. This was previously part of
tuple_data_split, I think that it deserves its own function.
- split_tuple_data should be static
- t_bits_str should not be a text, rather a char* fetched using
text_to_cstring(PG_GETARG_TEXT_PP(4)). This way there is no need to
perform extra calculations with VARSIZE and VARHDRSZ
- split_tuple_data can directly use the relation OID instead of the
tuple descriptor of the relation
- t_bits was leaking memory. For correctness I think that it is better
to free it after calling split_tuple_data.
- PG_DETOAST_DATUM_COPY allocates some memory, this was leaking as
well in raw_attr actually. I refactored the code such as a bytea* is
used and always freed when allocated to avoid leaks. Removing raw_attr
actually simplified the code a bit.
- I simplified the docs, that was largely too verbose in my opinion.
- Instead of using VARATT_IS_1B_E and VARTAG_EXTERNAL, using
VARATT_IS_EXTERNAL and VARATT_IS_EXTERNAL_ONDISK seems more adapted to
me, those other ones are much more low-level and not really spread in
the backend code.
- Found some typos in the code, the docs and some comments. I reworked
the error messages as well.
- The code format was not really in line with the project guidelines.
I fixed that as well.
- I removed heap_page_item_attrs for now to get this patch in a
bare-bone state. Though I would not mind if this is re-added (I
personally don't think that's much necessary in the module
actually...).
- The calculation of the length of t_bits using HEAP_NATTS_MASK is
more correct as you mentioned earlier, so I let it in its state.
That's actually more accurate for error handling as well.
That's everything I recall I have. How does this look?
-- 
Michael
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
index aec5258..91ab119 100644
--- a/contrib/pageinspect/Makefile
+++ b/contrib/pageinspect/Makefile
@@ -5,9 +5,9 @@ OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o \
 		  brinfuncs.o ginfuncs.o $(WIN32RES)
 
 EXTENSION = pageinspect
-DATA = pageinspect--1.3.sql pageinspect--1.2--1.3.sql \
-	pageinspect--1.1--1.2.sql pageinspect--1.0--1.1.sql \
-	pageinspect--unpackaged--1.0.sql
+DATA = pageinspect--1.4.sql pageinspect--1.3--1.4.sql \
+	pageinspect--1.2--1.3.sql pageinspect--1.1--1.2.sql \
+	pageinspect--1.0--1.1.sql pageinspect--unpackaged--1.0.sql
 PGFILEDESC = "pageinspect - functions to inspect contents of database pages"
 
 ifdef USE_PGXS
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 8d1666c..15fd7f1 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -27,8 +27,11 @@
 
 #include "access/htup_details.h"
 #include "funcapi.h"
-#include "utils/builtins.h"
+#include "catalog/pg_type.h"
 #include "miscadmin.h"
+#include "utils/array.h"
+#include "utils/builtins.h"
+#include "utils/rel.h"
 
 
 /*
@@ -55,6 +58,42 @@ bits_to_text(bits8 *bits, int len)
 
 
 /*
+ * text_to_bits
+ *
+ * Converts a c-string representation of bits into a bits8-array. This is
+ * the reverse operation of previous routine.
+ */
+static bits8 *
+text_to_bits(char *str, int len)
+{
+	bits8	   *bits;
+	int			off = 0;
+	char		byte = 0;
+
+	bits = palloc(len + 1);
+
+	while (off < len)
+	{
+		if (off % 8 == 0)
+			byte = 0;
+
+		if ((str[off] == '0') || (str[off] == '1'))
+			byte = byte | ((str[off] - '0') << off % 8);
+		else
+			ereport(ERROR,
+	(errcode(ERRCODE_DATA_CORRUPTED),
+	 errmsg("illegal character '%c' in t_bits string", str[off])));
+
+		if (off % 8 == 7)
+			bits[off / 8] = byte;
+
+		off++;
+	}
+
+	return bits;
+}
+
+/*
  * heap_page_items
  *
  * Allows inspection of line pointers and tuple headers of a heap page.
@@ -122,8 +161,8 @@ heap_page_items(PG_FUNCTION_ARGS)
 		HeapTuple	resultTuple;
 		Datum		result;
 		ItemId		id;
-		Datum		values[13];
-		bool		nulls[13];
+		Datum		values[14];
+		bool		nulls[14];
 		uint16		lp_offset;
 		uint16		lp_flags;
 		uint16		lp_len;
@@ -154,7 +193,8 @@ heap_page_items(PG_FUNCTION_ARGS)
 			lp_offset + lp_len <= raw_page_size)
 		{
 			HeapTupleHeader tuphdr;
-			int			bits_len;
+			bytea		*tuple_data_bytea;
+			int			tuple_data_len;
 
 			/* Extract information from the tuple header */
 
@@ -168,6 +208,14 @@ 

Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Andres Freund
On 2015-10-28 11:42:28 -0400, Tom Lane wrote:
> In view of your point (1), I'd be okay with inventing an EXTRA_CFLAGS
> argument that is added to, rather than replacing, the automatically
> computed flags.  But I think that configure must include such flags
> for its own compile runs, else it is not testing the true build
> environment and might get wrong answers.

Ok.

> Is -Werror the only practical case where we need configure to *not* see
> a flag that should otherwise be applied?  If so, maybe we should just
> attack that problem directly and narrowly.  I can think of at least
> two ways:

I can't really think of any that aren't of the form -Werror or
-Werror=specific-warning. I'm not sure that the latter is particularly
interesting, using -Werror and then -Wno-error=xxx seems like it'd
usually be better.

> 1. Invent a "--with-werror" configure switch that causes -Werror to be
> added to the CFLAGS, but not while running tests that it'd break.
>
> 2. Explicitly filter -Werror out of the user-provided CFLAGS while running
> tests that it'd break.

I think either of these is fine - I've a slight preference for 2)
because we already filter *FLAGS in a bunch of places and it seems a bit
nicer to extend, should another similar case come up.

Do you have a preference?

Andres




Re: [HACKERS] [PATCH v2] GSSAPI encryption support

2015-10-28 Thread Jeff Janes
On Tue, Sep 29, 2015 at 7:53 AM, Robbie Harwood  wrote:
> Robbie Harwood  writes:
>
> Michael Paquier  writes:
>
>> Well, the issue is still here: login through gssapi fails with
>> your patch, not with HEAD. This patch is next on my review list by
>> the way so I'll see what I can do about it soon even if I am in
>> the US for Postgres Open next week. Still, how did you test it? I
>> am just creating by myself a KDC, setting up a valid credential
>> with kinit, and after setting up Postgres for this purpose the
>> protocol communication just fails.
>>
>> I have no issues, no sync loss; nothing is amiss as far as I can see.
>> If there is actually a problem here, I need more information from you.
>> At the very least, as previously mentioned, I need to know what
>> messages went over the wire to/from the server before it occurred, and
>> what command (if it it made it to command processing) it was in the
>> midst of sending.
>
> Any follow-up on this?  I'd really like my code to be bug-free.

I don't know if this is worth posting as the patch is currently
returned with feedback and you are redoing it in a different way, but
with your patch I get this error when connecting:

lost synchronization with server: got message type "T", length 27
The connection to the server was lost. Attempting reset: Failed.

I only get the error when connection to a patched server from a
patched libpq.  If either is unpatched, then there is no problem.

Let me know if this is worth looking into.

Cheers,

Jeff




Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Pavel Stehule
2015-10-28 17:50 GMT+01:00 Marko Tiikkaja :

> Hi,
>
> Here's a patch for the aggregate function outlined by Corey Huinker in
> CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I
> called it "onlyvalue", which is a horrible name, but I have nothing better
> to offer.  (Corey called it "only", but that doesn't really work since ONLY
> is a fully reserved keyword.)
>
> I'll add this to September's commit fest, but if you want to bash me or
> the patch in the meanwhile, go ahead.
>

Hi

what is use case for this function and why it should be in core?

Regards

Pavel


>
>
> .m
>
>


Re: [HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Tom Lane
Andres Freund  writes:
> On 2015-10-28 09:36:39 -0400, Tom Lane wrote:
>> Andres Freund  writes:
>>> 1) ./configure CFLAGS=... essentially breaks --enable-debug and related
>>> options, overwrites -O2 as the default and such. That's imo pretty
>>> confusing.
>>> 2) I like to be able to pass -Werror or something to configure without
>>> breaking a bunch of configure tests that won't work with Werror.

>> I would be rather surprised if such a switch didn't affect the flags used
>> by configure itself, so your point (2) seems like it would require nasty
>> inconsistency.

> Hm, I don't find it that inconsistent to say that configure itself uses
> CFLAGS but not EXTRA_CFLAGS (or whatever).

Well, it is.  In particular, you could easily shoot yourself in the foot
this way, for example by passing some semantically relevant switch like
"-m64" in the wrong set of flags.

In view of your point (1), I'd be okay with inventing an EXTRA_CFLAGS
argument that is added to, rather than replacing, the automatically
computed flags.  But I think that configure must include such flags
for its own compile runs, else it is not testing the true build
environment and might get wrong answers.

Is -Werror the only practical case where we need configure to *not* see
a flag that should otherwise be applied?  If so, maybe we should just
attack that problem directly and narrowly.  I can think of at least
two ways:

1. Invent a "--with-werror" configure switch that causes -Werror to be
added to the CFLAGS, but not while running tests that it'd break.

2. Explicitly filter -Werror out of the user-provided CFLAGS while running
tests that it'd break.
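Option 2 might look roughly like this in the configure script — an illustrative sketch only, not the actual configure.in change: save the user-provided CFLAGS, strip -Werror while running the fragile test, then restore it.

```shell
# Hypothetical sketch: filter -Werror out of user-provided CFLAGS
# around a configure test that -Werror would break.
CFLAGS="-O2 -Werror -g"

save_CFLAGS=$CFLAGS
# Pad with spaces so a bare "-Werror" token is matched at either end too.
CFLAGS=$(printf ' %s ' "$CFLAGS" | sed 's/ -Werror / /g')
echo "during fragile test: CFLAGS=$CFLAGS"
# ... run the configure test that -Werror would break here ...
CFLAGS=$save_CFLAGS
echo "after test: CFLAGS=$CFLAGS"
```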

regards, tom lane




Re: [HACKERS] quieting DEBUG3

2015-10-28 Thread Jeff Janes
On Wed, Oct 28, 2015 at 6:57 AM, Robert Haas  wrote:
> On Wed, Oct 28, 2015 at 1:51 AM, Tom Lane  wrote:
>> Craig Ringer  writes:
>>> I think it'd be helpful to define some level of policy about what the
>>> debug levels are intended for, so there's some guidance on what level
>>> to emit messages on rather than playing "pick a number".
>>
>> +1 ... I doubt anyone has ever looked at that in a holistic way.
>
> Well, I don't know that I could give really specific guidance on each
> individual level, but what I think is pretty clear is that messages
> which tell you about something that's likely to happen very frequently
> should only appear if you have really cranked the logging up to the
> maximum.   So a message that fires every time you touch a data block
> had better be DEBUG5, but a message that fires at most once per
> checkpoint cycle can afford to be DEBUG1.  Qualitatively, I think that
> higher debug level (DEBUG1, DEBUG2) should focus on telling you about
> things that are potentially interesting and maybe a little bit
> unusual, while tracing messages that report things which are entirely
> routine should use lower debug levels (DEBUG4, DEBUG5).

I figured it would go something like this:

DEBUG1 once or a few times per statement/autovac/checkpoint
DEBUG2 several times per statement/autovac/checkpoint (like once per
joined relation in the planner)
DEBUG3 once or a few times per row/block
DEBUG4 several times per row/block
DEBUG5 memory allocation.

>
> I agree with Craig that the transaction system is interesting and
> important, but it's not the *only* interesting and important subsystem
> we have ... and any log level below DEBUG2 is basically useless right
> now; the stuff you want is almost guaranteed to be lost in the noise.

The main use I have for any of the DEBUG levels above 1 is to step
through them with vim in a specific source file to see what someone
felt was important to log when they were debugging that file in the
past.  Then I change the ones I like to LOG or NOTICE and recompile to
get a targeted verbose logging build.  This is not to disparage their
usefulness; it is hard to imagine me getting into a new part of the
code without the ability to use these archeological markers.  They are
often more useful than the /* comments */ .



Cheers,

Jeff




[HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Marko Tiikkaja

Hi,

Here's a patch for the aggregate function outlined by Corey Huinker in 
CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I 
called it "onlyvalue", which is a horrible name, but I have nothing 
better to offer.  (Corey called it "only", but that doesn't really work 
since ONLY is a fully reserved keyword.)


I'll add this to September's commit fest, but if you want to bash me or 
the patch in the meanwhile, go ahead.
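To illustrate the intended semantics, here is a sketch based on the documentation and error messages in the patch (hypothetical usage, not tested output):

```sql
-- Each group has exactly one distinct non-NULL value of i/10,
-- so the aggregate returns it (requires the patch to be installed):
SELECT i / 10 AS grp, onlyvalue(i / 10) AS val
FROM generate_series(0, 29) AS g(i)
GROUP BY i / 10
ORDER BY grp;

-- With more than one distinct input value it raises an error:
SELECT onlyvalue(i) FROM generate_series(1, 2) AS g(i);
-- ERROR:  more than one distinct value passed to onlyvalue
```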



.m
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2946122..6edc220 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12631,6 +12631,26 @@ NULL baz(3 rows)
  
   

+onlyvalue
+   
+   
+ onlyvalue(expression)
+   
+  
+  
+   any type for which the equality operator has been defined
+  
+  
+   same as argument type
+  
+  returns the single distinct non-NULL value from the input
+  values; if any of the input values is NULL or more than one distinct
+  value exists, an exception is raised
+ 
+
+ 
+  
+   
 string_agg


diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index c0495d9..72bb55c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -40,6 +40,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/timestamp.h"
+#include "utils/typcache.h"
 
 #define atooid(x)  ((Oid) strtoul((x), NULL, 10))
 
@@ -598,3 +599,93 @@ pg_column_is_updatable(PG_FUNCTION_ARGS)
 
 	PG_RETURN_BOOL((events & REQ_EVENTS) == REQ_EVENTS);
 }
+
+struct onlyvalue_agg_stype
+{
+	FunctionCallInfoData fcinfo;
+	Datum datum;
+};
+
+Datum
+onlyvalue_agg_transfn(PG_FUNCTION_ARGS)
+{
+	Oid		 arg1_typeid = get_fn_expr_argtype(fcinfo->flinfo, 1);
+	MemoryContext aggcontext;
+	struct onlyvalue_agg_stype *state;
+
+	if (arg1_typeid == InvalidOid)
+		ereport(ERROR,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not determine input data type")));
+
+	if (!AggCheckCallContext(fcinfo, &aggcontext))
+	{
+		/* cannot be called directly because of internal-type argument */
+		elog(ERROR, "onlyvalue_agg_transfn called in non-aggregate context");
+	}
+
+	if (PG_ARGISNULL(1))
+	{
+		ereport(ERROR,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+		errmsg("NULL value passed to onlyvalue")));
+	}
+
+	if (PG_ARGISNULL(0))
+	{
+		TypeCacheEntry *typentry;
+
+		state = (struct onlyvalue_agg_stype *) MemoryContextAlloc(aggcontext, sizeof(struct onlyvalue_agg_stype));
+		state->datum = PG_GETARG_DATUM(1);
+
+		typentry = lookup_type_cache(arg1_typeid,
+	 TYPECACHE_EQ_OPR_FINFO);
+		if (!OidIsValid(typentry->eq_opr_finfo.fn_oid))
+			ereport(ERROR,
+	(errcode(ERRCODE_UNDEFINED_FUNCTION),
+			errmsg("could not identify an equality operator for type %s",
+   format_type_be(arg1_typeid))));
+
+		InitFunctionCallInfoData(state->fcinfo, &typentry->eq_opr_finfo, 2,
+ InvalidOid, NULL, NULL);
+	}
+	else
+	{
+		bool oprresult;
+
+		state = (struct onlyvalue_agg_stype *) PG_GETARG_POINTER(0);
+
+		state->fcinfo.argnull[0] = false;
+		state->fcinfo.argnull[1] = false;
+		state->fcinfo.arg[0] = state->datum;
+		state->fcinfo.arg[1] = PG_GETARG_DATUM(1);
+		state->fcinfo.isnull = false;
+		oprresult = DatumGetBool(FunctionCallInvoke(&state->fcinfo));
+		if (!oprresult)
+			ereport(ERROR,
+	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("more than one distinct value passed to onlyvalue")));
+	}
+
+	/*
+	 * The transition type for onlyvalue() is declared to be "internal", which
+	 * is a pass-by-value type the same size as a pointer.  So we can safely
+	 * pass the pointer through nodeAgg.c's machinations.
+	 */
+	PG_RETURN_POINTER(state);
+}
+
+Datum
+onlyvalue_agg_finalfn(PG_FUNCTION_ARGS)
+{
+	struct onlyvalue_agg_stype *state;
+
+	if (PG_ARGISNULL(0))
+		PG_RETURN_NULL();
+
+	/* cannot be called directly because of internal-type argument */
+	Assert(AggCheckCallContext(fcinfo, NULL));
+
+	state = (struct onlyvalue_agg_stype *) PG_GETARG_POINTER(0);
+	PG_RETURN_DATUM(state->datum);
+}
diff --git a/src/include/catalog/pg_aggregate.h b/src/include/catalog/pg_aggregate.h
index dd6079f..9d6c44a 100644
--- a/src/include/catalog/pg_aggregate.h
+++ b/src/include/catalog/pg_aggregate.h
 DATA(insert ( 3197	n 0 json_object_agg_transfn json_object_agg_finalfn	-	-	-	f f 0	2281	0	0	0	_null_ _null_ ));
 DATA(insert ( 3267	n 0 jsonb_agg_transfn	jsonb_agg_finalfn	-	-	-	f f 0	2281	0	0	0	_null_ _null_ ));
 DATA(insert ( 3270	n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn	-	-	-	f f 0	2281	0	0	0	_null_ _null_ ));
 
+/* onlyvalue */
+DATA(insert ( 4202	n 0 onlyvalue_agg_transfn	onlyvalue_agg_finalfn	-	-	-	t f 0	2281	0	0	0	_null_ _null_ ));
+
 /* ordered-set and hypothetical-set aggregates */
 DATA(insert ( 3972	o 1 ordered_set_transition			percentile_disc_final	-		-		-		t f 0	2281	0	0		0	_null_ _null_ ));
 

Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Tom Lane
Marko Tiikkaja  writes:
> Here's a patch for the aggregate function outlined by Corey Huinker in 
> CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I 
> called it "onlyvalue", which is a horrible name, but I have nothing 
> better to offer.  (Corey called it "only", but that doesn't really work 
> since ONLY is a fully reserved keyword.)

On the name front, maybe think "single" rather than "only"?  This might
lead to "single()" or "single_value()", or "singleton()" if you want to
sound highbrow.

On the semantics front, I'm not sure that I like excluding nulls from the
input domain.  I'd rather that it acted like IS NOT DISTINCT, ie, nulls
are fine as long as all the input values are nulls.  The implementation
would need some work for that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread David G. Johnston
On Wed, Oct 28, 2015 at 1:38 PM, Tom Lane  wrote:

> Marko Tiikkaja  writes:
> > Here's a patch for the aggregate function outlined by Corey Huinker in
> > CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I
> > called it "onlyvalue", which is a horrible name, but I have nothing
> > better to offer.  (Corey called it "only", but that doesn't really work
> > since ONLY is a fully reserved keyword.)
>
> On the name front, maybe think "single" rather than "only"?  This might
> lead to "single()" or "single_value()", or "singleton()" if you want to
> sound highbrow.
>
> On the semantics front, I'm not sure that I like excluding nulls from the
> input domain.  I'd rather that it acted like IS NOT DISTINCT, ie, nulls
> are fine as long as all the input values are nulls.  The implementation
> would need some work for that.
>

homogeneous() ?

It is basically an assertion function since in most cases where you would
use this you needn't use a function at all but instead simply place the
column in the GROUP BY clause.

I have at various times desired assertion functions that return the input
value if the assertion is met but cause a query error if it is not.  That
such functions can be both scalar and aggregate oriented makes sense.
Adding just this one function seems like it adds a partial feature to
-core.  I'd rather it bake more in PGXN before considering putting it into
core.  There doesn't seem to be any hard need or benefit to doing so at
this time.

I would probably stick to the concept of assertion and call it something
like "assert_nongrouping()" to denote that the input does not cause
multiple groups to be created due to there being multiple values.

David J.


Re: [HACKERS] quieting DEBUG3

2015-10-28 Thread Robert Haas
On Wed, Oct 28, 2015 at 5:26 PM, Jeff Janes  wrote:
> I figured it would go something like this:
>
> DEBUG1 once or a few times per statement/autovac/checkpoint
> DEBUG2 several times per statement/autovac/checkpoint (like once per
> joined relation in the planner)
> DEBUG3 once or a few times per row/block
> DEBUG4 several times per row/block
> DEBUG5 memory allocation.

I feel like this fails to take account of things that often don't
occur at all.  On this scale, the messages I'm complaining about could
justifiably be DEBUG1 rather than DEBUG3, which would render all of
the DEBUG levels essentially useless.  How about this: imagine a
reasonably busy system processing a thousand queries or so per second,
and consider log volume:

A given DEBUG1 message shouldn't fire more than ~1/sec.
A given DEBUG2 message shouldn't fire more than ~10/sec
A given DEBUG3 message shouldn't fire more than ~100/sec
A given DEBUG4 message shouldn't fire more than ~1000/sec
Anything which fires more than ~1000/s is DEBUG5

On this scale, most of the debug messages that the system actually
generates today are fine at their current levels - some of the DEBUG4
stuff could probably be bumped up to higher levels.  But the messages
I'm complaining about are at least one level too high.

Another point I want to reiterate - because nobody seems to be
addressing it - is that some of these messages are totally useless.  I
grant that printing the transaction state (XIDs, CIDs, etc.) is
useful.  But does anybody really think that it's useful for every
statement to *additionally* generate DEBUG:  CommitTransactionCommand?
 Who looks at that?  What value does it have?  We do not print a
message when any other function that is called for every query is
entered - why that one?  Whether we adjust the log levels of the
messages we have or not, we surely ought to get rid of the ones that
are useless clutter.  Can anyone think of a single instance in which
that particular message has been useful in debugging ... anything?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] ExclusiveLock on PostgreSQL - Fabio Mendonça

2015-10-28 Thread Fabio Oliveira De Mendonca
Hi friends,


I'm Fabio Mendonça (Brazil) and I have started working with PostgreSQL. I work with
Oracle, DB2, Sybase and Informix, but PostgreSQL is a sweet experience.


At the moment I'm looking for any help, because the articles I've found do not
solve the problem.

The scenario:


I have a process that inserts 600,000 rows into table "A" (130 columns), and
I am receiving the "ExclusiveLock" error message, losing some rows during the
transaction.  The insert transaction occurs every 2 min., and every 1 min. a
second process reads table "A" (joined with table "C" using the PK) to insert
into a table "B".  Well, I thought about creating partitions on table "A",
but I don't believe that would correct the problem ("ExclusiveLock").


I'm reading your conversation about WAL performance, and if I understood
correctly, your suggestion was to separate the WAL files onto physical disks.

Could you please confirm this solution, or is there another way to resolve the
"ExclusiveLock"?


I appreciate any suggestion.


Thanks .

(Sorry for my English; I'm a student of the language.)


Fabio Mendonça.

Systems Analyst

BRQ (CAIXA FEDERAL )


Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Marko Tiikkaja

On 10/28/15 5:53 PM, Pavel Stehule wrote:

what is use case for this function and why it should be in core?


Corey had one example in his email, but I can offer another one which 
came up this week at $work.  The query looked something like this:


SELECT a, sum(amount), onlyvalue(rolling_count)
FROM
(
SELECT a, amount, count(*) OVER (ORDER BY a) AS rolling_count
FROM tbl
) ss
GROUP BY a;

We know that all the values for the column are going to be the same 
value for every "a", so we could use min() or max().  But the advantage 
of "onlyvalue" is that it actually checks that, so if someone went and 
changed the window frame to do something slightly different, the query 
would blow up instead of silently returning the (now likely incorrect) 
minimum or maximum value.  It's also self-documenting for the reader of 
such queries.


In my experience this problem comes up often enough that it would make 
sense to have this aggregate in core.



.m




Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Pavel Stehule
2015-10-28 18:03 GMT+01:00 Marko Tiikkaja :

> On 10/28/15 5:53 PM, Pavel Stehule wrote:
>
>> what is use case for this function and why it should be in core?
>>
>
> Corey had one example in his email, but I can offer another one which came
> up this week at $work.  The query looked something like this:
>
> SELECT a, sum(amount), onlyvalue(rolling_count)
> FROM
> (
> SELECT a, amount, count(*) OVER (ORDER BY a) AS rolling_count
> FROM tbl
> ) ss
> GROUP BY a;
>
> We know that all the values for the column are going to be the same value
> for every "a", so we could use min() or max().  But the advantage of
> "onlyvalue" is that it actually checks that, so if someone went and changed
> the window frame to do something slightly different, the query would blow
> up instead of silently returning the (now likely incorrect) minimum or
> maximum value.  It's also self-documenting for the reader of such queries.
>
> In my experience this problem comes up often enough that it would be make
> sense to have this aggregate in core.
>

This function is pretty inconsistent with every other builtin aggregate
function, which is one argument against pushing it to core. Next, this
function can be implemented pretty simply in PL/pgSQL. And my last argument:
I don't recall frequent requests for this functionality. It looks much more
like a module for PGXN than PG core functionality.
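
As an aside, the user-level route can be sketched roughly as follows (an illustrative sketch only, not the proposed core implementation; the names are made up, and because the transition state starts out NULL, this version cannot tell a NULL input apart from "no rows seen yet" — the C patch handles that case explicitly):

```sql
-- Illustrative sketch: a user-level "onlyvalue" built with
-- CREATE AGGREGATE, no core changes required.
CREATE FUNCTION onlyvalue_tr(state anyelement, val anyelement)
RETURNS anyelement LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    IF state IS NULL THEN
        RETURN val;                          -- first value seen
    ELSIF state IS DISTINCT FROM val THEN
        RAISE EXCEPTION 'more than one distinct value passed to onlyvalue';
    END IF;
    RETURN state;
END;
$$;

CREATE AGGREGATE onlyvalue(anyelement) (
    SFUNC = onlyvalue_tr,
    STYPE = anyelement
);

-- SELECT onlyvalue(x) FROM (VALUES (1), (1)) v(x);   -- returns 1
-- SELECT onlyvalue(x) FROM (VALUES (1), (2)) v(x);   -- raises an error
```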

Regards

Pavel


>
>
> .m
>


Re: [HACKERS] [PATCH v2] GSSAPI encryption support

2015-10-28 Thread Robbie Harwood
Jeff Janes  writes:

> On Tue, Sep 29, 2015 at 7:53 AM, Robbie Harwood  wrote:
>> Robbie Harwood  writes:
>>
>> Michael Paquier  writes:
>>
>>> Well, the issue is still here: login through gssapi fails with
>>> your patch, not with HEAD. This patch is next on my review list by
>>> the way so I'll see what I can do about it soon even if I am in
>>> the US for Postgres Open next week. Still, how did you test it? I
>>> am just creating by myself a KDC, setting up a valid credential
>>> with kinit, and after setting up Postgres for this purpose the
>>> protocol communication just fails.
>>>
>>> I have no issues, no sync loss; nothing is amiss as far as I can see.
>>> If there is actually a problem here, I need more information from you.
>>> At the very least, as previously mentioned, I need to know what
>>> messages went over the wire to/from the server before it occurred, and
>>> what command (if it it made it to command processing) it was in the
>>> midst of sending.
>>
>> Any follow-up on this?  I'd really like my code to be bug-free.
>
> I don't know if this is worth posting as the patch is currently
> returned with feedback and you are redoing it in a different way, but
> with your patch I get this error when connecting:
>
> lost synchronization with server: got message type "T", length 27
> The connection to the server was lost. Attempting reset: Failed.
>
> I only get the error when connection to a patched server from a
> patched libpq.  If either is unpatched, then there is no problem.
>
> Let me know if this is worth looking into.

Definitely good to know, and I appreciate your testing.  It's probably
not worth looking into right now, but please do test the next version of
the code as well.

Thanks!
--Robbie


signature.asc
Description: PGP signature


Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Pavel Stehule
2015-10-28 18:38 GMT+01:00 Tom Lane :

> Marko Tiikkaja  writes:
> > Here's a patch for the aggregate function outlined by Corey Huinker in
> > CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I
> > called it "onlyvalue", which is a horrible name, but I have nothing
> > better to offer.  (Corey called it "only", but that doesn't really work
> > since ONLY is a fully reserved keyword.)
>
> On the name front, maybe think "single" rather than "only"?  This might
> lead to "single()" or "single_value()", or "singleton()" if you want to
> sound highbrow.
>

this function should have a name that distinguishes it from other
aggregates, because the important work of this function is not a
calculation but a constraint check.


> On the semantics front, I'm not sure that I like excluding nulls from the
> input domain.  I'd rather that it acted like IS NOT DISTINCT, ie, nulls
> are fine as long as all the input values are nulls.  The implementation
> would need some work for that.
>
> regards, tom lane
>


Re: [HACKERS] Disabling START TRANSACTION for a SuperUser

2015-10-28 Thread Muthiah Rajan
Thanks for clarifying my doubt...


--
Muthiah Rajan

On Wed, Oct 28, 2015 at 6:19 PM, Kevin Grittner  wrote:

> On Tuesday, October 27, 2015 10:57 PM, Muthiah Rajan 
> wrote:
> > On 27-Oct-2015 7:37 PM, "Kevin Grittner"  wrote:
>
> >> It is more problematic where a shop wants to use serializable
> >> transactions to ensure data integrity.
>
> > This may be a trivial thing But what do you mean by shops? I
> > actually can't get it :-)
>
> http://www.merriam-webster.com/dictionary/shop
>
> I was using "shop" in the sense of the second of the short noun
> definitions ("the place where a specified kind of worker works : a
> worker's place of business") or number 5a under the long noun
> definitions ("a business establishment: office").  When used in
> that sense the type of business is usually used ("an I.T. shop"),
> but where it is implied or obvious it is often dropped.  The
> dictionary doesn't list it as a colloquialism, but it is rather
> informal -- approaching the colloquial.  As I used it I was
> intending to convey a group of I.T. professionals under the same
> management with a common set of policies, working on the same set
> of hardware and/or software.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] quieting DEBUG3

2015-10-28 Thread Alvaro Herrera
Robert Haas wrote:

> Another point I want to reiterate - because nobody seems to be
> addressing it - is that some of these messages are totally useless.  I
> grant that printing the transaction state (XIDs, CIDs, etc.) is
> useful.  But does anybody really think that it's useful for every
> statement to *additionally* generate DEBUG:  CommitTransactionCommand?
>  Who looks at that?  What value does it have?  We do not print a
> message when any other function that is called for every query is
> entered - why that one?

No, it is useless, let's get rid of it.  Maybe it was a useful debugging
tool when postgres.c was being developed, but it's not useful now and
instead very bothersome.

> Whether we adjust the log levels of the messages we have or not, we
> surely ought to get rid of the ones that are useless clutter.

Agreed.  I liked your proposal for reduction of transaction state
printout to a single, denser line.

> Can anyone think of a single instance in which that particular message
> has been useful in debugging ... anything?

Not I.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [BUGS] BUG #13611: test_postmaster_connection failed (Windows, listen_addresses = '0.0.0.0' or '::')

2015-10-28 Thread Noah Misch
On Tue, Oct 27, 2015 at 05:31:25PM +0900, Tatsuo Ishii wrote:
> > No, PQping("host='127.0.0.1'") fails to reach a listen_addresses='::' server
> > on many systems.  Here's what I thought Kondo was proposing:
> > 
> > --- a/src/bin/pg_ctl/pg_ctl.c
> > +++ b/src/bin/pg_ctl/pg_ctl.c
> > @@ -649,5 +649,9 @@ test_postmaster_connection(pgpid_t pm_pid, bool do_checkpoint)
> >  
> > -				/* If postmaster is listening on "*", use localhost */
> > +				/* explanation here */
> > 				if (strcmp(host_str, "*") == 0)
> > 					strcpy(host_str, "localhost");
> > +				else if (strcmp(host_str, "0.0.0.0") == 0)
> > +					strcpy(host_str, "127.0.0.1");
> > +				else if (strcmp(host_str, "::") == 0)
> > +					strcpy(host_str, "::1");
> >  
> 
> I see. Would you like to commit this?

I am happy to finish it, but I am no less happy if you finish it.  Which do
you prefer?

Should the back-branch commits mirror the master branch?  A more-cautious
alternative would be to, in back branches, wrap the change in #ifdefs so it
takes effect only on Windows, OpenBSD and NetBSD.  It could break setups with
local firewall rules that block connections to "127.0.0.1" or "::1" without
blocking "0.0.0.0" or "::".  Such firewall rules sound outlandish enough that
I would be fairly comfortable not worrying about this and making the change
unconditional in all branches.  It's a judgment call, though.




Re: [HACKERS] Personal note: changing employers

2015-10-28 Thread Jim Nasby

On 10/28/15 5:14 PM, Tom Lane wrote:

As of the end of this month, I will be departing Salesforce.com and
joining Crunchy Data Solutions (http://crunchydatasolutions.com),
whom you might recognize as being already the employers of Stephen
Frost, Joe Conway, and Greg Smith, as well as a few other folk who
hang out on pghackers.

While I enjoyed working at Salesforce, it was sucking an awfully large
percentage of my time away from the PG community.  With this change,
I expect to return to spending more-or-less full time on community work.


So I guess this means that the Fedora-wearing Elephant that ended up on 
a Cloud[1] is suddenly turning into a Hippo. :P


[1] 
https://secure2.sfdcstatic.com/common/assets/images/home/sfdc-home-footer-logo.png

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Small improvement to tlist_matches_tupdesc()

2015-10-28 Thread Tom Lane
David Rowley  writes:
> I just found myself in execScan.c. I noticed that tlist_matches_tupdesc()
> is a bit wasteful in some cases as it goes to the trouble of matching the
> tlist to the TupleDesc item by item until it runs out of tlist items
> or finds a non-match. Once the loop completes it ensures that all tlist
> items were consumed, so basically in other words, it checks the lengths
> match.

> I think likely this was just not made optimal originally in d0b4399 when
> the length property etc were added to List

No, it's modeled after other places that allow for ignoring dropped
columns in one or both lists.  While this code doesn't currently allow
that, the patch you suggest would prevent it from ever doing so (at least
not without reverting the patch first).  I'd be okay with that if there
were evidence of a meaningful performance gain from it ... but you've
not provided any.

regards, tom lane




[HACKERS] security_barrier view option type mistake in create view document

2015-10-28 Thread Haribabu Kommi
The security_barrier view option is documented as a string in the CREATE
VIEW documentation, but it is actually a boolean.  The type is stated
correctly in ALTER VIEW.  I have attached a patch with the correction.

-security_barrier (string)
+security_barrier (boolean)
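
For reference, the option takes a boolean value in both commands, e.g. (view and column names here are illustrative):

```sql
-- security_barrier is a boolean option, settable at creation time
-- or afterwards:
CREATE VIEW phone_numbers WITH (security_barrier = true) AS
    SELECT person, phone FROM phone_data WHERE phone NOT LIKE '6%';

ALTER VIEW phone_numbers SET (security_barrier = false);
```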

Regards,
Hari Babu
Fujitsu Australia


security_barrier_type.patch
Description: Binary data



Re: [HACKERS] pg_dump

2015-10-28 Thread Noah Misch
On Wed, Oct 28, 2015 at 10:44:37AM +0300, Dmitry Voronin wrote:
> I have a database testdb with commet 'test comment' and security label 
> 'classified'. I create dump by pg_dump:
> 
> pg_dump -h 127.0.0.1 -d testdb -U postgres --format c dump
> 
> So, I want to restore a dump with comment and security label to 
> testdb_restore. I run:
> 
> pg_restore -h 127.0.0.1 -d testdb_restore dump
> 
> So, we have, that SECURITY LABEL and COMMENT will be applied to database 
> testdb but not testdb_restore. I think, that it's not good.

It's a problem.  See this recent discussion:
http://www.postgresql.org/message-id/flat/20150710115735.gh26...@alap3.anarazel.de




[HACKERS] Cross-check recent documentation changes

2015-10-28 Thread Robins
Hi,

Was reviewing recent commits, and it seems the following commit adds an
extra line to some comments. Just wanted to cross-check if that was
intentional.

Commit: http://goo.gl/zxA00l
Pre-Commit: http://goo.gl/2DpLxi
Post-Commit: http://goo.gl/eKcNm3

Apologies for the noise, if this was intentional.
--
Robins Tharakan


[HACKERS] Small improvement to tlist_matches_tupdesc()

2015-10-28 Thread David Rowley
I just found myself in execScan.c. I noticed that tlist_matches_tupdesc()
is a bit wasteful in some cases as it goes to the trouble of matching the
tlist to the TupleDesc item by item until it runs out of tlist items
or finds a non-match. Once the loop completes it ensures that all tlist
items were consumed, so basically in other words, it checks the lengths
match.

I think likely this was just not made optimal originally in d0b4399 when
the length property etc were added to List

The attached patch adds a fast path so that no list items are compared if
the lengths don't match.

Is this worth fixing?

--
 David Rowley   http://www.2ndQuadrant.com/

 PostgreSQL Development, 24x7 Support, Training & Services


tlist_matches_tupdesc_fastpath.patch
Description: Binary data



Re: [HACKERS] Personal note: changing employers

2015-10-28 Thread Tom Lane
Jim Nasby  writes:
> On 10/28/15 5:14 PM, Tom Lane wrote:
>> As of the end of this month, I will be departing Salesforce.com and
>> joining Crunchy Data Solutions (http://crunchydatasolutions.com),

> So I guess this means that the Fedora-wearing Elephant that ended up on 
> a Cloud[1] is suddenly turning into a Hippo. :P

Yeah ... I'm not sure how we got from Proboscidae all the way over to
Theria, but whatever.  At least it's still under Mammalia.

regards, tom lane




Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-10-28 Thread Kouhei Kaigai
> On Tue, Oct 20, 2015 at 12:39 PM, Etsuro Fujita
>  wrote:
> > Sorry, my explanation was not correct.  (Needed to take in caffeine.) What
> > I'm concerned about is the following:
> >
> > SELECT * FROM localtab JOIN (ft1 LEFT JOIN ft2 ON ft1.x = ft2.x) ON
> > localtab.id = ft1.id FOR UPDATE OF ft1
> >
> > LockRows
> > -> Nested Loop
> >  Join Filter: (localtab.id = ft1.id)
> >  -> Seq Scan on localtab
> >  -> Foreign Scan on 
> >   Remote SQL: SELECT * FROM ft1 LEFT JOIN ft2 WHERE ft1.x = ft2.x
> > FOR UPDATE OF ft1
> >
> > Assume that ft1 performs late row locking.
> 
> If the SQL includes "FOR UPDATE of ft1", then it clearly performs
> early row locking.  I assume you meant to omit that.
> 
> > If an EPQ recheck was invoked
> > due to a concurrent transaction on the remote server that changed only the
> > value x of the ft1 tuple previously retrieved, then we would have to
> > generate a fake ft1/ft2-join tuple with nulls for ft2. (Assume that the ft2
> > tuple previously retrieved was not a null tuple.) However, I'm not sure how
> > we can do that in ForeignRecheck; we can't know for example, which one is
> > outer and which one is inner, without an alternative local join execution
> > plan.  Maybe I'm missing something, though.
> 
> I would expect it to issue a new query like: SELECT * FROM ft1 LEFT
> JOIN ft2 WHERE ft1.x = ft2.x AND ft1.tid = $0 AND ft2.tid = $1.
> 
> This should be significantly more efficient than fetching the base
> rows from each of two tables with two separate queries.
>
In this case, the EPQ slot in which to store the joined tuple is still
a challenge to be solved.

Is it possible to use one of the EPQ slots that are set up for
base relations but represented by a ForeignScan/CustomScan?
When a ForeignScan runs a remote join that involves three
base foreign tables (relid=2, 3, 5 for example),
no other code touches those slots. So it is safe even if we put
a joined tuple in the EPQ slots of the underlying base relations.

In this case, EPQ slots are initialized as below:

  es_epqTuple[0] ... EPQ tuple of base relation (relid=1)
  es_epqTuple[1] ... EPQ of the joined tuple (for relid=2, 3, 5)
  es_epqTuple[2] ... EPQ of the joined tuple (for relid=2, 3, 5), copy of above
  es_epqTuple[3] ... EPQ tuple of base relation (relid=4)
  es_epqTuple[4] ... EPQ of the joined tuple (for relid=2, 3, 5), copy of above
  es_epqTuple[5] ... EPQ tuple of base relation (relid=6)

Also, the FDW/CSP shall be responsible for returning a joined tuple
as the result for a whole-row reference of an underlying base relation.
(One other challenge is how to handle the case when the user explicitly
requested a whole-row reference...Hmm...)

Then, if the FDW/CSP is designed to utilize the preliminary joined
tuples rather than a local join, it can just raise the tuple kept
in one of the EPQ slots for the underlying base relations.
If the FDW/CSP prefers a local join, it can do what a local join
does: check the join condition and construct a joined tuple by itself
or by an alternative plan.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 




Re: [HACKERS] WIP: lookbehind constraints for our regexp engine

2015-10-28 Thread Tom Lane
I wrote:
> I've occasionally heard complaints that our regex engine only has
> lookahead constraints not lookbehind constraints, while Perl's for example
> does have those.  While I was fooling around with the other issues in that
> code, I learned enough to realize that it would not be that hard to
> implement such things, and the attached proof-of-concept patch does so
> (using Perl-compatible syntax, in case you're wondering).

> However, I don't feel like this is done to the point of being committable,
> because its performance leaves something to be desired in a lot of cases.
> The difficulty is that because the engine can only match in a forward
> direction, in the worst case you have to check a lookbehind constraint by
> trying to match starting from the string start to see if a match exists
> ending at the current-location where you're testing the constraint.  That
> makes it, worst case, O(N^2) to search a string of length N.

Attached is an updated patch that eliminates the O(N^2) performance issue
by creating a third version of the core text-matching functions (previously
just longest() and shortest()).  This new version is able to suspend and
resume a scan without significant overhead, and it reports whether a match
exists ending at the current stop point.  So we effectively run a
lookbehind constraint's NFA over the string just once, no matter how many
places we actually need to check the constraint's satisfaction at.

This is still not fast in an absolute sense: the limiting scan rate for a
pattern with leading lookbehind constraint (which means the constraint
must be checked at each character) seems to be roughly 10X slower than
for simple constraint-free patterns.  For example

regression=# select repeat('xyzzy', 100) ~ '(?<=xy)y';
 ?column? 
--
 f
(1 row)

Time: 601.079 ms
regression=# select repeat('xyzzy', 100) ~ 'xyy';
 ?column? 
--
 f
(1 row)

Time: 64.213 ms

(As a comparison point, perl 5.10.1 does the same searches in about 250ms
and 30ms respectively; though I'm not comparing apples to oranges exactly
since this is a debug build of PG versus production perl.)

This is not entirely the fault of the lookbehind code, though; most of the
cycles are being spent in miss() in the outer DFA scan, not in checking
the LACON itself.  That's a consequence of Henry Spencer's decision that
any state transition involving a LACON check should be treated as a cache
miss.  It's not terribly hard to imagine having a code path like "check
this LACON and go to one of two cached states depending on whether it
succeeds".  However, I couldn't figure a way to shoehorn that into the
code without adding both cycles to the basic text search loops and bloat
to the stateset cache data structure, costs that would be paid whether or
not you're actually using any LACONs.  So maybe Henry made the right
optimization tradeoff.

Also, I put in a compile-time optimization for lookbehind (and lookahead)
constraints that consist of just a single character or bracket expression,
eg these run at about the speed of a simple pattern:

regression=# select repeat('xyzzy', 100) ~ '(?<=[ab])y';
 ?column? 
--
 f
(1 row)

Time: 51.936 ms
regression=# select repeat('xyzzy', 100) ~ '(?
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2946122..4d482ec 100644
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** SELECT foo FROM regexp_split_to_table('t
*** 4477,4489 
 where no substring matching re begins
 (AREs only) 
 

   
  
  
 
! Lookahead constraints cannot contain back references
! (see ),
  and all parentheses within them are considered non-capturing.
 
 
--- 4477,4503 
 where no substring matching re begins
 (AREs only) 
 
+ 
+
+ (?<=re) 
+ positive lookbehind matches at any point
+where a substring matching re ends
+(AREs 

Re: [HACKERS] Is there any ordering to the values in guc.c?

2015-10-28 Thread Peter Eisentraut
On 10/28/15 10:27 AM, Bill Moran wrote:
> See subject. Aside from them being divvied up by datatype, they seem
> to be ordered randomly. Since I'm putting together a patch that will
> add some GUCs, do I just add them to the end of the list?

The initial commit grouped them logically, and it went downhill from
there. :)

But "at the end" is almost never a good answer in these situations, I think.




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Is there any ordering to the values in guc.c?

2015-10-28 Thread Josh Berkus
On 10/28/2015 01:58 PM, Peter Eisentraut wrote:
> On 10/28/15 10:27 AM, Bill Moran wrote:
>> See subject. Aside from them being divvied up by datatype, they seem
>> to be ordered randomly. Since I'm putting together a patch that will
>> add some GUCs, do I just add them to the end of the list?
> 
> The initial commit grouped them logically, and it went downhill from
> there. :)

Yeah, we're overdue for another overhaul of GUC ordering.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] Observed checkpointer crashes in 9.4.4 on Fedora 22

2015-10-28 Thread Tom Lane
Can anyone identify a likely cause of the stack traces reported here:

https://retrace.fedoraproject.org/faf/reports/853674/

Apparently, what's happening is that PGSemaphoreUnlock is hitting

elog(FATAL, "semop(id=%d) failed: %m", sema->semId);

and then that's being promoted to a PANIC, presumably because we're
inside a critical section.

I have no more data than what's on that webpage --- I just got an
auto-notification that the number of similar reports had reached 10.
Those might all be from the same Fedora user, for all I can tell;
in which case it might have some pedestrian explanation.

regards, tom lane




Re: [HACKERS] Is there any ordering to the values in guc.c?

2015-10-28 Thread Bill Moran
On Wed, 28 Oct 2015 16:58:30 -0400
Peter Eisentraut  wrote:

> On 10/28/15 10:27 AM, Bill Moran wrote:
> > See subject. Aside from them being divvied up by datatype, they seem
> > to be ordered randomly. Since I'm putting together a patch that will
> > add some GUCs, do I just add them to the end of the list?
> 
> The initial commit grouped them logically, and it went downhill from
> there. :)
> 
> But "at the end" is almost never a good answer in these situations, I think.

Hrm ...

That begs a larger question ... as I'm working to add new config options that
don't really belong in an existing category, I guess I should create a new
category? I.e. the two config values are target_tuples_per_page and
target_compression_ratio ... and I'm not seeing an existing category that
they fall into. More description here:
https://github.com/williammoran/postgres/blob/master/README

Thoughts/opinions?

-- 
Bill Moran




[HACKERS] Personal note: changing employers

2015-10-28 Thread Tom Lane
As of the end of this month, I will be departing Salesforce.com and
joining Crunchy Data Solutions (http://crunchydatasolutions.com),
whom you might recognize as being already the employers of Stephen
Frost, Joe Conway, and Greg Smith, as well as a few other folk who
hang out on pghackers.

While I enjoyed working at Salesforce, it was sucking an awfully large
percentage of my time away from the PG community.  With this change,
I expect to return to spending more-or-less full time on community work.

regards, tom lane




Re: [HACKERS] psql: add \pset true/false

2015-10-28 Thread Robert Haas
On Wed, Oct 28, 2015 at 10:03 AM, Marko Tiikkaja  wrote:
> Hello hello,
>
> Since the default t/f output for booleans is not very user friendly,
> attached is a patch which enables you to do for example the following:
>
> =# \pset true TRUE
> Boolean TRUE display is "TRUE".
> =# \pset false FALSE
> Boolean FALSE display is "FALSE".
> =# select true, false;
>  bool | bool  
> ------+-------
>  TRUE | FALSE
> (1 row)
>
>
> (For anyone reviewing: I didn't touch the parts of describe.c which do this
> for nullPrint:
>
>   myopt.nullPrint = NULL;
>
> since I thought it was dubious in the first place, and I don't think we
> output booleans in the describe commands.)

-0 on this concept from me.  I'm not going to vigorously oppose it, but:

1. You can always do it in the query if you really want it.
2. If you're the sort of person liable to be confused by t/f, you
probably aren't in the target audience for psql anyway.
3. I really don't want to end up with a bunch of features of this type
for a variety of different data types.

But I just work here, and if others feel differently, so be it.
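Robert's first point can be done in standard SQL with a CASE expression (e.g. `SELECT CASE WHEN flag THEN 'TRUE' ELSE 'FALSE' END`). The client-side alternative the patch implements amounts to remapping the textual 't'/'f' output just before display; a minimal Python sketch of that idea follows (`render_bool` is a hypothetical helper, not psql's actual code):

```python
# Sketch of what a \pset true/false setting would do on the client
# side: substitute display strings for PostgreSQL's textual booleans.
# Hypothetical helper for illustration only.

def render_bool(value, true_print="TRUE", false_print="FALSE"):
    if value == "t":
        return true_print
    if value == "f":
        return false_print
    return value  # non-boolean text passes through unchanged

# NULLs are handled separately (compare \pset null), here as ""
row = ["t", "f", None]
rendered = [render_bool(v) if v is not None else "" for v in row]
```

The design question in this thread is whether such per-type display knobs belong in the client at all, or whether the query-side CASE is enough.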

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Personal note: changing employers

2015-10-28 Thread Thom Brown
On 28 October 2015 at 22:14, Tom Lane  wrote:
> As of the end of this month, I will be departing Salesforce.com and
> joining Crunchy Data Solutions (http://crunchydatasolutions.com),
> whom you might recognize as being already the employers of Stephen
> Frost, Joe Conway, and Greg Smith, as well as a few other folk who
> hang out on pghackers.
>
> While I enjoyed working at Salesforce, it was sucking an awfully large
> percentage of my time away from the PG community.  With this change,
> I expect to return to spending more-or-less full time on community work.

Sounds like you'll be in good company, and us community folk will
benefit more from your *mad skills*.

So,... congrats?

Thom




Re: [HACKERS] psql: add \pset true/false

2015-10-28 Thread Greg Stark
On Wed, Oct 28, 2015 at 10:52 PM, Robert Haas  wrote:
> 3. I really don't want to end up with a bunch of features of this type
> for a variety of different data types.

We already have \pset null, which feels very similar. It's not like 'f'
and 't' are terribly common, and they're probably different from how
booleans render in whatever driver the user is coding to.

On the other hand if their driver isn't mapping booleans to a native
data type and just providing the Postgres text output then then this
is only risking more confusion by presenting the user with a rendering
that's different from what they need to be expecting. IIRC this is the
case for the PHP driver for example.

-- 
greg




Re: [HACKERS] psql: add \pset true/false

2015-10-28 Thread Andres Freund
On 2015-10-28 23:38:45 +, Greg Stark wrote:
> On Wed, Oct 28, 2015 at 10:52 PM, Robert Haas  wrote:
> > 3. I really don't want to end up with a bunch of features of this type
> > for a variety of different data types.
> 
> We already have \pset null which feels very similar.

It's data type neutral, and there's no representation of NULL that's
definitely discernible from the corresponding string. So I don't see
those being the same.


This seems like opening a can of worms to me. Why not add formatting
options in psql for other data types?

- Andres




[HACKERS] psql: add \pset true/false

2015-10-28 Thread Marko Tiikkaja

Hello hello,

Since the default t/f output for booleans is not very user friendly, 
attached is a patch which enables you to do for example the following:


=# \pset true TRUE
Boolean TRUE display is "TRUE".
=# \pset false FALSE
Boolean FALSE display is "FALSE".
=# select true, false;
 bool | bool  
------+-------
 TRUE | FALSE
(1 row)


(For anyone reviewing: I didn't touch the parts of describe.c which do 
this for nullPrint:


  myopt.nullPrint = NULL;

since I thought it was dubious in the first place, and I don't think we 
output booleans in the describe commands.)




.m
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 212dbfa..2048ba3 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2229,6 +2229,26 @@ lo_import 152801
   
 
   
+  true
+  
+  
+  Sets the string to be printed in place of a boolean TRUE value.
+  The default is to print a 't' character.
+  
+  
+  
+
+  
+  false
+  
+  
+  Sets the string to be printed in place of a boolean FALSE value.
+  The default is to print an 'f' character.
+  
+  
+  
+
+  
   null
   
   
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 5163c76..7fadb0a 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1151,7 +1151,7 @@ exec_command(const char *cmd,
 			int			i;
 			static const char *const my_list[] = {
 "border", "columns", "expanded", "fieldsep", "fieldsep_zero",
-"footer", "format", "linestyle", "null",
+"footer", "format", "linestyle", "true", "false", "null",
 "numericlocale", "pager", "pager_min_lines",
 "recordsep", "recordsep_zero",
 "tableattr", "title", "tuples_only",
@@ -2591,6 +2591,26 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 		}
 	}
 
+	/* boolean TRUE display */
+	else if (strcmp(param, "true") == 0)
+	{
+		if (value)
+		{
+			free(popt->truePrint);
+			popt->truePrint = pg_strdup(value);
+		}
+	}
+
+	/* boolean FALSE display */
+	else if (strcmp(param, "false") == 0)
+	{
+		if (value)
+		{
+			free(popt->falsePrint);
+			popt->falsePrint = pg_strdup(value);
+		}
+	}
+
 	/* field separator for unaligned text */
 	else if (strcmp(param, "fieldsep") == 0)
 	{
@@ -2782,6 +2802,20 @@ printPsetInfo(const char *param, struct printQueryOpt *popt)
 			   popt->nullPrint ? popt->nullPrint : "");
 	}
 
+	/* show true display */
+	else if (strcmp(param, "true") == 0)
+	{
+		printf(_("Boolean TRUE display is \"%s\".\n"),
+			   popt->truePrint ? popt->truePrint : "t");
+	}
+
+	/* show false display */
+	else if (strcmp(param, "false") == 0)
+	{
+		printf(_("Boolean FALSE display is \"%s\".\n"),
+			   popt->falsePrint ? popt->falsePrint : "f");
+	}
+
 	/* show locale-aware numeric output */
 	else if (strcmp(param, "numericlocale") == 0)
 	{
@@ -2953,6 +2987,14 @@ pset_value_string(const char *param, struct printQueryOpt *popt)
 		return psprintf("%s", _align2string(popt->topt.format));
 	else if (strcmp(param, "linestyle") == 0)
		return psprintf("%s", get_line_style(&popt->topt)->name);
+	else if (strcmp(param, "true") == 0)
+		return pset_quoted_string(popt->truePrint
+  ? popt->truePrint
+  : "t");
+	else if (strcmp(param, "false") == 0)
+		return pset_quoted_string(popt->falsePrint
+  ? popt->falsePrint
+  : "f");
 	else if (strcmp(param, "null") == 0)
 		return pset_quoted_string(popt->nullPrint
   ? popt->nullPrint
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 5b63e76..d0bf418 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -258,7 +258,7 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\H toggle HTML output mode (currently %s)\n"),
 			ON(pset.popt.topt.format == PRINT_HTML));
 	fprintf(output, _("  \\pset [NAME [VALUE]]   set table output option\n"
-	  " (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|null|\n"
+	  " (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|true|false|null|\n"
 	  " numericlocale|recordsep|recordsep_zero|tuples_only|title|tableattr|pager|\n"
 	  " unicode_border_linestyle|unicode_column_linestyle|unicode_header_linestyle})\n"));
 	fprintf(output, _("  \\t [on|off]show only rows (currently %s)\n"),
@@ -371,6 +371,8 @@ helpVariables(unsigned short int pager)
 	fprintf(output, _("  format set output format [unaligned, aligned, wrapped, html, asciidoc, ...]\n"));
 	fprintf(output, _("  footer enable or disable display of the table footer [on, off]\n"));
 	fprintf(output, _("  linestyle  set the border line drawing style [ascii, old-ascii, unicode]\n"));
+	fprintf(output, _("  true   

[HACKERS] Add EXTRA_CFLAGS to configure

2015-10-28 Thread Andres Freund
Hi,

I rather regularly want to pass extra flags to configure without
overwriting CFLAGS. There's two basic reasons:

1) ./configure CFLAGS=... essentially breaks --enable-debug and related
   options, overwrites -O2 as the default and such. That's imo pretty
   confusing.
2) I like to be able to pass -Werror or something to configure without
   breaking a bunch of configure tests that won't work with Werror.

A good way to do that seems to be to add a separate variable for that
purpose? Unless somebody has a better idea?

Greetings,

Andres Freund
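The two behaviors Andres contrasts can be modeled abstractly: `./configure CFLAGS=...` replaces the defaults configure would otherwise pick, while the proposed separate variable would be appended to them. This Python sketch only mirrors those semantics (the names and `effective_flags` helper are illustrative, not autoconf code):

```python
# Toy model: "user CFLAGS replace defaults" vs. "extra flags append".
# Illustrative only; real configure logic lives in autoconf macros.

DEFAULTS = ["-O2", "-g"]  # e.g. defaults plus --enable-debug

def effective_flags(user_cflags=None, extra_cflags=None):
    if user_cflags is not None:
        flags = list(user_cflags)   # user value wins outright,
    else:                           # silently dropping -O2/-g
        flags = list(DEFAULTS)
    if extra_cflags:
        flags += list(extra_cflags) # proposed: append, keep defaults
    return flags
```

The append form is what lets something like -Werror be added without disabling optimization or debug symbols, which is exactly the confusion described above.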




Re: [HACKERS] psql: add \pset true/false

2015-10-28 Thread Marko Tiikkaja

On 10/28/15 11:52 PM, Robert Haas wrote:

-0 on this concept from me.  I'm not going to vigorously oppose it, but:

1. You can always do it in the query if you really want it.


True, but not always practical.


2. If you're the sort of person liable to be confused by t/f, you
probably aren't in the target audience for psql anyway.


Really?  The difference between t/f is that the vertical squiggle is 
flipped, THAT'S IT.  Consider:


t t f f f
f t f t f

Saying that I'm not the target audience for not being able to see WTF is 
going on above I find offensive.



3. I really don't want to end up with a bunch of features of this type
for a variety of different data types.


Fine.  Then let's not add it for a different variety of data types.  But 
boolean is quite essential and it has a really small number of valid 
output values.



.m

