Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Pavel Stehule
2015-08-30 4:57 GMT+02:00 Andrew Dunstan :

>
>
> On 08/29/2015 04:27 PM, Tom Lane wrote:
>
>> Jim Nasby  writes:
>>
>>> On 8/29/15 12:29 PM, Pavel Stehule wrote:
>>>
 what is correct from JSON perspective? All fields with NULL

>>> ISTM that the whole purpose of to_json is to properly jsonify something,
>>> and the proper json form for "undefined" is 'null', is it not?
>>>
>> What's not entirely clear is what we should do with cases like
>>
>> regression=#  select array_to_json(null::int[]);
>>   array_to_json
>> ---------------
>>
>>   (1 row)
>>
>> regression=#  select row_to_json(null::record);
>>   row_to_json
>> -------------
>>
>>   (1 row)
>>
>> If we leave those alone (and in the latter case, in particular, there is
>> not enough information available to do much else) then it's not so clear
>> that changing to_json() is really improving consistency overall.
>> For instance, do we really want row_to_json(null::record) and
>> to_json(null::record) giving different results?  Or if we make them
>> both return "null", that breaks the previous invariant that row_to_json
>> always yields a JSON object.
>>
>> An advantage of leaving these things as strict is that the user can easily
>> substitute whatever specific behavior she wants for NULLs via coalesce(),
>> as was shown upthread.  If we put in a different behavior, then the
>> only way to override it would be with a CASE, which is tedious and creates
>> multiple-evaluation issues.
>>
>> I'm not necessarily against changing it --- but it doesn't seem entirely
>> black-and-white to me, and we do now have a couple of versions worth
>> of precedent we'd be breaking with.
>>
>> If we do vote to change it, I'd want to do so now (ie in 9.5) rather than
>> create yet another year's worth of precedent.
>>
>>
>>
>
> I agree with pretty much all of this. My fairly strong inclination is to
> leave it as it is and document the behaviour more clearly. Changing it
> seems likely to introduce a different inconsistency which is harder to
> understand.
>

I agree that there is no clear solution - both possible solutions have a
real basis. On the other hand, the fix with COALESCE, NULLIF, etc. is not
hard and has no performance impact - so better documentation can be a good
enough fix. A custom solution is the ugly-named but simple function
to_json2:

CREATE OR REPLACE FUNCTION to_json2(anyelement)
RETURNS json AS $$
SELECT COALESCE(to_json($1), json 'null')
$$ LANGUAGE sql;
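
For example, with this function installed one would expect (sketched, not
actual psql output):

postgres=# SELECT to_json2(NULL::int);
 to_json2
----------
 null
(1 row)

postgres=# SELECT to_json2(42);
 to_json2
----------
 42
(1 row)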

Regards

Pavel


>
> cheers
>
> andrew
>


Re: [HACKERS] Minor code improvements to create_foreignscan_plan/ExecInitForeignScan

2015-08-29 Thread David Rowley
On 28 August 2015 at 22:20, Etsuro Fujita 
wrote:

> On 2015/07/22 15:25, Etsuro Fujita wrote:
>
>> On 2015/07/10 21:59, David Rowley wrote:
>
> I just glanced at this and noticed that the method for determining if
>>> there's any system columns could be made a bit nicer.
>>>
>>
> /* Now, are any system columns requested from rel? */
>>> scan_plan->fsSystemCol = false;
>>> for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
>>> {
>>> if (bms_is_member(i - FirstLowInvalidHeapAttributeNumber, attrs_used))
>>> {
>>> scan_plan->fsSystemCol = true;
>>> break;
>>> }
>>> }
>>>
>>
> I think this could just be written as:
>>> /* Now, are any system columns requested from rel? */
>>> if (!bms_is_empty(attrs_used) &&
>>> bms_next_member(attrs_used, -1) < -FirstLowInvalidHeapAttributeNumber)
>>> scan_plan->fsSystemCol = true;
>>> else
>>> scan_plan->fsSystemCol = false;
>>>
>>
> I know you didn't change this, but just thought I'd mention it while
>>> there's an opportunity to fix it.
>>>
>>
> On second thought, I noticed that there is a case where that fix doesn't
> work well; bms_next_member wouldn't be efficient when only the rearmost
> user columns are requested from a foreign table that has a large number of
> user columns.  So, I'm inclined to leave that as-is.
>
>
You might be right here. I'd failed to think about that.

It's likely not worth changing if there are cases where it'll be slower, but
curiosity got the better of me and I wondered how extreme a case it would
take to actually see a slowdown; per my benchmark results, the first used
column would have to be at about attnum 500.

I used the attached to benchmark. has_system_columns is the current method,
has_system_columns2 has my changes. Lines are prefixed by the position
where the first (and only) attnum appears in the bitmap set.

1 has_system_columns complete in 1.196000
1 has_system_columns2 complete in 0.170000
2 has_system_columns complete in 1.198000
2 has_system_columns2 complete in 0.167000
4 has_system_columns complete in 1.197000
4 has_system_columns2 complete in 0.170000
8 has_system_columns complete in 1.206000
8 has_system_columns2 complete in 0.203000
16 has_system_columns complete in 1.202000
16 has_system_columns2 complete in 0.237000
32 has_system_columns complete in 1.206000
32 has_system_columns2 complete in 0.232000
64 has_system_columns complete in 1.207000
64 has_system_columns2 complete in 0.268000
128 has_system_columns complete in 1.205000
128 has_system_columns2 complete in 0.368000
256 has_system_columns complete in 1.203000
256 has_system_columns2 complete in 0.780000
512 has_system_columns complete in 1.202000
512 has_system_columns2 complete in 1.302000
1024 has_system_columns complete in 1.199000
1024 has_system_columns2 complete in 3.539000

So, for what it's worth, it could be 6 times faster for an "average" sized
table - but hey, we're talking nanoseconds anyway...

Regards

David Rowley
--
 David Rowley   http://www.2ndQuadrant.com/

 PostgreSQL Development, 24x7 Support, Training & Services
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

#define FirstLowInvalidHeapAttributeNumber		(-8)
#define FLEXIBLE_ARRAY_MEMBER 1

#define elog(elevel, s)  \
	do { \
		printf("%s\n", s); \
		exit(EXIT_FAILURE); \
	} while(0)


#define BITS_PER_BITMAPWORD 32
#define WORDNUM(x)	((x) / BITS_PER_BITMAPWORD)
#define BITNUM(x)	((x) % BITS_PER_BITMAPWORD)

#define BITMAPSET_SIZE(nwords)	\
	(offsetof(Bitmapset, words) + (nwords) * sizeof(bitmapword))

typedef unsigned int bitmapword;		/* must be an unsigned type */
typedef int signedbitmapword; /* must be the matching signed type */
typedef char bool;

#define true 1
#define false 0

typedef struct Bitmapset
{
	int			nwords;			/* number of words in array */
	bitmapword	words[FLEXIBLE_ARRAY_MEMBER];	/* really [nwords] */
} Bitmapset;

void *
palloc0(size_t size)
{
	void *p = malloc(size);
	if (!p)
		elog(ERROR, "out of memory");
	memset(p, 0, size);
	return p;
}
/*
 * bms_is_member - is X a member of A?
 */
bool
bms_is_member(int x, const Bitmapset *a)
{
	int			wordnum,
				bitnum;

	/* XXX better to just return false for x<0 ? */
	if (x < 0)
		elog(ERROR, "negative bitmapset member not allowed");
	if (a == NULL)
		return false;
	wordnum = WORDNUM(x);
	bitnum = BITNUM(x);
	if (wordnum >= a->nwords)
		return false;
	if ((a->words[wordnum] & ((bitmapword) 1 << bitnum)) != 0)
		return true;
	return false;
}

static const unsigned char rightmost_one_pos[256] = {
	0, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	6, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	7, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0,
	5, 0, 1, 0, 2, 0, 1, 

Re: [HACKERS] Information of pg_stat_ssl visible to all users

2015-08-29 Thread Michael Paquier
On Sun, Aug 30, 2015 at 5:27 AM, Bruce Momjian wrote:

> I know I am coming in late here, but I know Heroku uses random user
> names to allow a cluster to have per-user databases without showing
> external user name details:
> [...]
> I can see them having problems with a user being able to see the SSL
> remote user names of all connected users.
>

Yep, and I can imagine that this is the case for any company managing cloud
nodes with Postgres embedded, and at least to me that's a real concern.
-- 
Michael


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Andrew Dunstan



On 08/29/2015 04:27 PM, Tom Lane wrote:

Jim Nasby  writes:

On 8/29/15 12:29 PM, Pavel Stehule wrote:

what is correct from JSON perspective? All fields with NULL

ISTM that the whole purpose of to_json is to properly jsonify something,
and the proper json form for "undefined" is 'null', is it not?

What's not entirely clear is what we should do with cases like

regression=#  select array_to_json(null::int[]);
  array_to_json
---
  
(1 row)


regression=#  select row_to_json(null::record);
  row_to_json
-
  
(1 row)


If we leave those alone (and in the latter case, in particular, there is
not enough information available to do much else) then it's not so clear
that changing to_json() is really improving consistency overall.
For instance, do we really want row_to_json(null::record) and
to_json(null::record) giving different results?  Or if we make them
both return "null", that breaks the previous invariant that row_to_json
always yields a JSON object.

An advantage of leaving these things as strict is that the user can easily
substitute whatever specific behavior she wants for NULLs via coalesce(),
as was shown upthread.  If we put in a different behavior, then the
only way to override it would be with a CASE, which is tedious and creates
multiple-evaluation issues.

I'm not necessarily against changing it --- but it doesn't seem entirely
black-and-white to me, and we do now have a couple of versions worth
of precedent we'd be breaking with.

If we do vote to change it, I'd want to do so now (ie in 9.5) rather than
create yet another year's worth of precedent.




I agree with pretty much all of this. My fairly strong inclination is to 
leave it as it is and document the behaviour more clearly. Changing it 
seems likely to introduce a different inconsistency which is harder to 
understand.


cheers

andrew




[HACKERS] Horizontal scalability/sharding

2015-08-29 Thread Bruce Momjian
I have recently increased my public statements about the idea of adding
horizontal scaling/sharding to Postgres. I wanted to share with hackers
a timeline of how we got here, and where I think we are going in the
short term:

2012-2013:  As part of writing my scaling talk
(http://momjian.us/main/presentations/overview.html#scaling), studying
Oracle RAC, and talking to users, it became clear that an XC-like
architecture (sharding) was the only architecture that was going to allow
for write scaling.

Users and conference attendees I talked to were increasingly concerned
about the ability of Postgres to scale for high write volumes.  They didn't
necessarily need that scale now, but they needed to know they could get
it if they wanted it, and wouldn't need to switch to a new database in
the future.  This is similar to wanting a car that can get you on a highway
on-ramp fast --- even if you don't need it, you want to know it is there.

2014:  I started to shop around the idea that we could use FDWs,
parallelism, and a transaction/snapshot manager to get XC features
as built-in to Postgres.  (I don't remember where the original idea
came from.)  It was clear that having separate forks of the source code
in XC and XL was never going to achieve critical mass --- there just
aren't enough people who need high write scale right now, and the fork
maintenance overhead is a huge burden.

I realized that we would never get community acceptance to dump the XC
(or XL) code needed for sharding into community Postgres, but with FDWs,
we could add the features as _part_ of improving FDWs, which would benefit
FDWs _and_ would be useful for sharding.  (We already see some of those
FDW features in 9.5.)

October, 2014:  EDB and NTT started working together in the community
to start improving FDWs as a basis for an FDW-based sharding solution.
Many of the 9.5 FDW improvements that also benefit sharding were developed
by a combined EDB/NTT team.  The features improved FDWs independent of
sharding, so they didn't need community buy-in on sharding to get them
accepted.

June, 2015:  I attended the PGCon sharding unconference session and
there was a huge discussion about where we should go with sharding.
I think the big take-away was that most people liked the FDW approach,
but had business/customer reasons for wanting to work on XC or XL because
those would be production-ready faster.

July, 2015:  Oleg Bartunov and his new company Postgres Professional (PP)
started to think about joining the FDW approach, rather than working on
XL, as they had stated at PGCon in June.  A joint NTT/EDB/PP phone-in
meeting is scheduled for September 1.

August, 2015:  While speaking at SFPUG, Citus Data approached me about
joining the FDW sharding team.  They have been invited to the September
1 meeting, as have the XC and XL people.

October, 2015:  EDB is sponsoring a free 3-hour summit about FDW sharding
at the PG-EU conference in Vienna.   Everyone is invited, but it is hoped
most of the September 1 folks can attend.

February, 2016:  Oleg is planning a similar meeting at their February
Moscow conference.

Anyway, I wanted to explain the work that has been happening around
sharding.  As things move forward, I am increasingly convinced that write
scaling will be needed soon, that the XC approach is the only reasonable
way to do it, and that FDWs are the cleanest way to get it into community
Postgres.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread Peter Eisentraut

On 8/28/15 10:14 AM, YUriy Zhuravlev wrote:

Hello Hackers

How would you react if I provided a patch which introduces a CMake build
system?


I would say, "Yay!".

I had that on my todo list, but I won't mind if someone else does it. 
I've certainly "had it" with the current build system. ;-)






Re: [HACKERS] [PATCH] SQL function to report log message

2015-08-29 Thread Pavel Stehule
Hi

I am starting to work on a review of this patch.

2015-07-13 9:54 GMT+02:00 dinesh kumar :

> Hi All,
>
> Greetings for the day.
>
> Would like to discuss on below feature here.
>
> Feature:
> Having an SQL function, to write messages to log destination.
>
> Justification:
> As of now, we don't have an SQL function to write custom/application
> messages to the log destination. We have the "RAISE" clause, which is
> controlled by log_ parameters. If we had an SQL function which works
> irrespective of log settings, that would be good for many log parsers. What
> I mean is, from a DBA point of view, if we route all our native OS stats to
> log files in a proper format, then our log reporting tools can give the
> most effective reports. Also, applications can log their own messages to
> postgres log files, which can be monitored by DBAs too.
>
> Implementation:
> Implemented a new function "pg_report_log" which takes one argument as
> text, and returns void. I took the "LOG" prefix for all the reporting
> messages. I wasn't sure whether to go with a new prefix for this, since
> these are normal LOG messages. Let me know if I am wrong here.
>
> Here is the attached patch.
>

This patch is not complex, but the implementation doesn't cover "ereport"
well.

Although this functionality could be replaced by a custom function in any PL
(now or in the near future), I am not against having this function in core.
There are lots of companies with strong resistance to stored procedures -
and sometimes this functionality can help with SQL debugging.

Issues:

1. It supports only the MESSAGE field of an exception - I expect support for
all fields: HINT, DETAIL, ...
2. Missing regression tests
3. the parsing of the ereport level should be a public function shared with
PL/pgSQL and the other PLs
4. should hidestmt be a mandatory parameter?
5. the function declaration is strange

postgres=# \sf pg_report_log (text, anyelement, boolean)
CREATE OR REPLACE FUNCTION pg_catalog.pg_report_log(text, anyelement,
boolean)
 RETURNS void
 LANGUAGE sql
 STABLE STRICT COST 1
AS $function$SELECT pg_report_log($1::pg_catalog.text, $2::pg_catalog.text,
$3::boolean)$function$

Why polymorphic? It is useless on any modern release.

postgres=# \sf pg_report_log (text, text, boolean)
CREATE OR REPLACE FUNCTION pg_catalog.pg_report_log(text, text, boolean)
 RETURNS void
 LANGUAGE internal
 IMMUTABLE STRICT
AS $function$pg_report_log$function$

Why stable, why immutable? This function should be volatile.

6. using the elog level enum as an errcode is the wrong idea - errcodes are
defined in the table at
http://www.postgresql.org/docs/9.4/static/errcodes-appendix.html
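
For illustration, a rough PL/pgSQL equivalent of the basic behaviour (message
text only, at LOG level; the function name here is illustrative, not from the
patch):

CREATE OR REPLACE FUNCTION my_report_log(message text)
RETURNS void
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
    -- RAISE LOG reaches the server log under the default log_min_messages
    RAISE LOG '%', message;
END;
$$;

The point of the patch is to provide this (plus HINT, DETAIL, ...) without
requiring a procedural language.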

Regards

Pavel


>
> Regards,
> Dinesh
> manojadinesh.blogspot.com
>


Re: [HACKERS] buffer README is out of date

2015-08-29 Thread Jeff Janes
On Sat, Aug 29, 2015 at 1:27 PM, Jim Nasby  wrote:

> On 8/29/15 2:21 PM, Jeff Janes wrote:
>
>> The buffer/README section on buffer clean up locks never got updated
>> for the creation of Heap Only Tuples and their associated compaction
>> logic.
>>
>> I've attached a patch to change the explanation.  I'm sure someone
>> can word it better than I have.
>>
>
> ! Obtaining the necessary lock is done by the bufmgr routines
>> ! LockBufferForCleanup() or ConditionalLockBufferForCleanup().
>> ! They first get an exclusive lock and then check to see if the shared pin
>> ! count is currently 1.  If not, ConditionalLockBufferForCleanup()
>> releases
>> ! the exclusive lock and then returns false, while LockBufferForCleanup()
>> ! releases the exclusive lock (but not the caller's pin) and waits until
>> ! signaled by another backend, whereupon it tries again.  The signal will
>> ! occur when UnpinBuffer decrements the shared pin count to 1.  As
>>
>
> I don't think that's true. If 2 other backends have a pin then AFAIK you'd
> wake up twice. There's also this comment in LockBufferForCleanup:
>
> /*
>  * Remove flag marking us as waiter. Normally this will not be set
>  * anymore, but ProcWaitForSignal() can return for other signals as
>  * well.  We take care to only reset the flag if we're the waiter, as
>  * theoretically another backend could have started waiting. That's
>  * impossible with the current usages due to table level locking, but
>  * better be safe.
>  */
>

If 2 other backends have a pin, only the last one to drop it should do the
waking; the interlock on the buffer header seems to prevent them both trying
to do it.  But if that did happen, it would just be another way you could get
a spurious wake-up, which can already happen for other reasons.  I don't
think spurious wake-ups need to be documented in the higher-level README
file.


>
> ! indicated above, this operation might have to wait a good while before
>> ! it acquires lock, but that shouldn't matter much for concurrent VACUUM.
>> ! The current implementation only supports a single waiter for pin-count-1
>> ! on any particular shared buffer.  This is enough for VACUUM's use, since
>> ! we don't allow multiple VACUUMs concurrently on a single relation
>> anyway.
>> ! Anyone wishing to obtain a clean up lock outside of a VACUUM must use
>> ! the conditional variant of the function.
>>
>
> That last statement should possibly be worded more strongly or just
> removed. If someone thinks they want to use this mechanism for something
> other than VACUUM there's probably a lot of other things to consider beyond
> just buffer locking.


But the point of this is that HOT updates are **already** doing this
outside of VACUUM.  That is why the README is out of date.

Cheers,

Jeff


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread David G. Johnston
On Sat, Aug 29, 2015 at 4:27 PM, Tom Lane  wrote:

> Jim Nasby  writes:
> > On 8/29/15 12:29 PM, Pavel Stehule wrote:
> >> what is correct from JSON perspective? All fields with NULL
>
> > ISTM that the whole purpose of to_json is to properly jsonify something,
> > and the proper json form for "undefined" is 'null', is it not?
>
> What's not entirely clear is what we should do with cases like
>
> regression=#  select array_to_json(null::int[]);
>  array_to_json
> ---
>
> (1 row)
>
> regression=#  select row_to_json(null::record);
>  row_to_json
> -
>
> (1 row)
>
> If we leave those alone (and in the latter case, in particular, there is
> not enough information available to do much else) then it's not so clear
> that changing to_json() is really improving consistency overall.
> For instance, do we really want row_to_json(null::record) and
> to_json(null::record) giving different results?  Or if we make them
> both return "null", that breaks the previous invariant that row_to_json
> always yields a JSON object.
>
> An advantage of leaving these things as strict is that the user can easily
> substitute whatever specific behavior she wants for NULLs via coalesce(),
> as was shown upthread.  If we put in a different behavior, then the
> only way to override it would be with a CASE, which is tedious and creates
> multiple-evaluation issues.
>

Wouldn't NULLIF(any, any) be another means of overriding?


> I'm not necessarily against changing it --- but it doesn't seem entirely
> black-and-white to me, and we do now have a couple of versions worth
> of precedent we'd be breaking with.
>
> If we do vote to change it, I'd want to do so now (ie in 9.5) rather than
> create yet another year's worth of precedent.
>
>
I don't see COALESCE as being particularly problematic, since we are going
to JSON, which means that in (nearly?) all situations all of the potential
input values will be homogeneous and a single coalesce can apply the
relevant logic at the point of conversion.  Since row_to_json and
record_to_json effectively deal with heterogeneous input types, the same
opportunity is not available to them, and simply incorporating null into the
output for that position is the only reasonable thing to do.

I'm tending to favor removing the strict modifier and having NULL => json
'null' with the advice that the old behavior can be obtained by writing
"NULLIF(to_json(...), json 'null')".

If we go this route we should also consider returning json 'null' for
(record|array)_to_json(NULL) at the same time.

My experience with the json feature has strictly been in the json_to_record
direction... but it seems like a wasted opportunity not to be useful in the
default case, when JSON provides a null concept that is semantically similar
to the concept in SQL.  While I'm hesitant to change this without
representative use cases to work from, the theory argument holds enough
weight to strongly consider making the change.
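
One caveat on the NULLIF route, as a sketch assuming the non-strict behaviour
were adopted: the json type has no equality operator, so NULLIF needs a cast
to jsonb, which does have one:

SELECT NULLIF(to_json(val)::jsonb, jsonb 'null')
FROM (VALUES (1), (NULL::int)) AS t(val);

The NULL input row would then come back as SQL NULL again.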

David J.


Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread Alvaro Herrera
Tom Lane wrote:
> Andres Freund  writes:
> > On 2015-08-29 17:53:26 -0300, Alvaro Herrera wrote:
> >> Therefore, either we will not find any portability problems, or fixing
> >> upstream those we do find will not be terribly difficult.
> 
> > Well, the difference to now is that we can't resolve that relatively
> > quickly ourselves, but that it'd rather require $whoever installing a
> > new version of cmake...
> 
> Yup.  So one of the things we'd want to establish is what's the minimum
> version of cmake we're going to require, and which (hopefully old)
> platforms will we be leaving out in the cold with that?

Just for the record, in Debian land the versions shipped are thus:

oldoldstable (squeeze): 2.8.2 (has 2.8.9 available in squeeze-backports)
oldstable (wheezy): 2.8.9 (has 2.8.11 available in wheezy-backports)
stable (jessie): 3.0.2

VAX support was added in 2.8.11 (bug fixed in Oct. 2012).

I imagine that esoteric platforms are not going to have cmake at all and
are going to need their own installation anyway.  Not sure if that's
going to be more onerous than the requirement to install GNU make.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread Tom Lane
Andres Freund  writes:
> On 2015-08-29 17:53:26 -0300, Alvaro Herrera wrote:
>> Therefore, either we will not find any portability problems, or fixing
>> upstream those we do find will not be terribly difficult.

> Well, the difference to now is that we can't resolve that relatively
> quickly ourselves, but that it'd rather require $whoever installing a
> new version of cmake...

Yup.  So one of the things we'd want to establish is what's the minimum
version of cmake we're going to require, and which (hopefully old)
platforms will we be leaving out in the cold with that?

I suspect it's premature to try to decide that now, but it's certainly
a question to keep in mind.

regards, tom lane




Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread Andres Freund
On 2015-08-29 17:53:26 -0300, Alvaro Herrera wrote:
> Therefore, either we will not find any portability problems, or fixing
> upstream those we do find will not be terribly difficult.

Well, the difference to now is that we can't resolve that relatively
quickly ourselves, but that it'd rather require $whoever installing a
new version of cmake...




Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread Alvaro Herrera
YUriy Zhuravlev wrote:
> On Friday 28 August 2015 13:51:30 you wrote:
> > It's broadly interesting, but since it bakes in a build dependency on
> > CMake, there is some risk that the dependencies become an insurmountable
> > problem.
> > 
> > (Does CMake run on a VAX 11/780??  :-))
> 
> http://public.kitware.com/Bug/view.php?id=13605 - do you mean this?

Hmm, what this closed bug report says to me is that 1) cmake developers
take portability seriously, 2) all common and even some not-so-common
platforms are likely to be in hand already.  Therefore, either we will
not find any portability problems, or fixing upstream those we do find
will not be terribly difficult.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] 9.4 broken on alpha

2015-08-29 Thread Tom Lane
Christoph Berg  writes:
> It'd be nice if the patch could get applied to 9.4 and earlier.

I've pushed that patch into 9.4.  Barring somebody stepping forward
with an offer of a buildfarm member and any other necessary developer
support, I do not think there will be any further consideration of
reversing our decision to drop Alpha support as of 9.5.

regards, tom lane




Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Jim Nasby

On 8/29/15 3:27 PM, Tom Lane wrote:

Jim Nasby  writes:

On 8/29/15 12:29 PM, Pavel Stehule wrote:

what is correct from JSON perspective? All fields with NULL



ISTM that the whole purpose of to_json is to properly jsonify something,
and the proper json form for "undefined" is 'null', is it not?


What's not entirely clear is what we should do with cases like

regression=#  select array_to_json(null::int[]);
  array_to_json
---

(1 row)

regression=#  select row_to_json(null::record);
  row_to_json
-

(1 row)

If we leave those alone (and in the latter case, in particular, there is
not enough information available to do much else) then it's not so clear
that changing to_json() is really improving consistency overall.
For instance, do we really want row_to_json(null::record) and
to_json(null::record) giving different results?  Or if we make them
both return "null", that breaks the previous invariant that row_to_json
always yields a JSON object.


The tricky part is that if you're calling any of those functions to find 
the value to stick inside a JSON array or object then NULL is definitely 
incorrect. IE: if you were trying to turn the results of this


create table t(a int, b text);
insert into t(a) values(1);

Into a JSON object, you'd want '{"a":1,"b":null}', not NULL. Of course 
you'd just use row_to_json() for something that simple, but if you were 
doing something more complex you might have to code a transform yourself.
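
For reference, row_to_json() on that table does already produce the
null-containing object (output sketched from the behaviour shown upthread):

SELECT row_to_json(t) FROM t;
   row_to_json
------------------
 {"a":1,"b":null}
(1 row)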



An advantage of leaving these things as strict is that the user can easily
substitute whatever specific behavior she wants for NULLs via coalesce(),
as was shown upthread.  If we put in a different behavior, then the
only way to override it would be with a CASE, which is tedious and creates
multiple-evaluation issues.


Certainly true.

The downside to leaving it alone is this will probably be hard to debug 
if you're using it to build a complex JSON object. One NULL ends up in 
the right place and suddenly your whole output becomes NULL. I think 
this is why %s works the way it does in format as well.
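
For comparison, a sketch of that format() behaviour: %s renders a NULL
argument as an empty string instead of nullifying the whole result.

SELECT format('a=%s, b=%s', 1, NULL);
  format
----------
 a=1, b=
(1 row)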


If we do change it I think it best to add an argument to control what it 
does with a NULL so you can get whichever you need. It might be worth 
adding to (array|record)_to_json as well, though I think the use case 
for those is markedly different than for plain to_json, so maybe not.



I'm not necessarily against changing it --- but it doesn't seem entirely
black-and-white to me, and we do now have a couple of versions worth
of precedent we'd be breaking with.

If we do vote to change it, I'd want to do so now (ie in 9.5) rather than
create yet another year's worth of precedent.


I wonder how many people are actually using to_json(). I've done some
amount of JSON massaging and don't recall needing it yet.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Information of pg_stat_ssl visible to all users

2015-08-29 Thread Bruce Momjian
On Tue, Jul  7, 2015 at 12:57:58PM -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2015-07-07 12:03:36 -0400, Peter Eisentraut wrote:
> >> I think the DN is analogous to the remote user name, which we don't
> >> expose for any of the other authentication methods.
> 
> > Huh?
> 
> Peter's exactly right: there is no other case where you can tell what
> some other connection's actual OS username is.  You might *guess* that
> it's the same as their database username, but you don't know that,
> assuming you don't know how they authenticated.
> 
> I'm not sure how security-critical this info really is, though.

I know I am coming in late here, but I know Heroku uses random user
names to allow a cluster to have per-user databases without showing
external user name details:

=> \du
                   List of roles
   Role name    |   Attributes   | Member of
----------------+----------------+-----------
 aafgrwewediiqz | 20 connections | {}
 aaszwkfnholarh | 20 connections | {}
 aatbelxbaeriwy | 20 connections | {}
 aaxiwolkcxmbxo | 20 connections | {}
 abbyljzgqaonjb | 20 connections | {}

I can see them having problems with a user being able to see the SSL
remote user names of all connected users.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] buffer README is out of date

2015-08-29 Thread Jim Nasby

On 8/29/15 2:21 PM, Jeff Janes wrote:

The buffer/README section on buffer clean up locks never got updated
for the creation of Heap Only Tuples and their associated compaction
logic.

I've attached a patch to change the explanation.  I'm sure someone
can word it better than I have.



! Obtaining the necessary lock is done by the bufmgr routines
! LockBufferForCleanup() or ConditionalLockBufferForCleanup().
! They first get an exclusive lock and then check to see if the shared pin
! count is currently 1.  If not, ConditionalLockBufferForCleanup() releases
! the exclusive lock and then returns false, while LockBufferForCleanup()
! releases the exclusive lock (but not the caller's pin) and waits until
! signaled by another backend, whereupon it tries again.  The signal will
! occur when UnpinBuffer decrements the shared pin count to 1.  As


I don't think that's true. If 2 other backends have a pin then AFAIK 
you'd wake up twice. There's also this comment in LockBufferForCleanup:


/*
 * Remove flag marking us as waiter. Normally this will not be set
 * anymore, but ProcWaitForSignal() can return for other signals as
 * well.  We take care to only reset the flag if we're the waiter, as
 * theoretically another backend could have started waiting. That's
 * impossible with the current usages due to table level locking, but
 * better be safe.
 */


! indicated above, this operation might have to wait a good while before
! it acquires lock, but that shouldn't matter much for concurrent VACUUM.
! The current implementation only supports a single waiter for pin-count-1
! on any particular shared buffer.  This is enough for VACUUM's use, since
! we don't allow multiple VACUUMs concurrently on a single relation anyway.
! Anyone wishing to obtain a clean up lock outside of a VACUUM must use
! the conditional variant of the function.


That last statement should possibly be worded more strongly or just 
removed. If someone thinks they want to use this mechanism for something 
other than VACUUM there's probably a lot of other things to consider 
beyond just buffer locking.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Tom Lane
Jim Nasby  writes:
> On 8/29/15 12:29 PM, Pavel Stehule wrote:
>> what is correct from JSON perspective? All fields with NULL

> ISTM that the whole purpose of to_json is to properly jsonify something, 
> and the proper json form for "undefined" is 'null', is it not?

What's not entirely clear is what we should do with cases like

regression=#  select array_to_json(null::int[]);
 array_to_json 
---
 
(1 row)

regression=#  select row_to_json(null::record);
 row_to_json 
-
 
(1 row)

If we leave those alone (and in the latter case, in particular, there is
not enough information available to do much else) then it's not so clear
that changing to_json() is really improving consistency overall.
For instance, do we really want row_to_json(null::record) and
to_json(null::record) giving different results?  Or if we make them
both return "null", that breaks the previous invariant that row_to_json
always yields a JSON object.

An advantage of leaving these things as strict is that the user can easily
substitute whatever specific behavior she wants for NULLs via coalesce(),
as was shown upthread.  If we put in a different behavior, then the
only way to override it would be with a CASE, which is tedious and creates
multiple-evaluation issues.

I'm not necessarily against changing it --- but it doesn't seem entirely
black-and-white to me, and we do now have a couple of versions worth
of precedent we'd be breaking with.

If we do vote to change it, I'd want to do so now (ie in 9.5) rather than
create yet another year's worth of precedent.

regards, tom lane




Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Pavel Stehule
2015-08-29 22:06 GMT+02:00 Jim Nasby :

> On 8/29/15 12:29 PM, Pavel Stehule wrote:
>
>> Note that all the to_json functions are strict. In this sense it's
>> quite consistent. If we change it to being called on null input,
>> what should we return if a null non-scalar is passed in?
>>
>>
>> what is correct from JSON perspective? All fields with NULL
>>
>
> ISTM that the whole purpose of to_json is to properly jsonify something,
> and the proper json form for "undefined" is 'null', is it not?


it is probably equivalent

Pavel

>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Jim Nasby

On 8/29/15 12:29 PM, Pavel Stehule wrote:

Note that all the to_json functions are strict. In this sense it's
quite consistent. If we change it to being called on null input,
what should we return if a null non-scalar is passed in?


what is correct from JSON perspective? All fields with NULL


ISTM that the whole purpose of to_json is to properly jsonify something, 
and the proper json form for "undefined" is 'null', is it not?

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2015-08-29 Thread Jim Nasby

On 8/29/15 11:31 AM, Andres Freund wrote:

Hi,

On 2015-08-29 11:06:05 -0500, Jim Nasby wrote:

Stack trace below. Relevant assert:


What exact revision is this on? Either there's some aggressive inlining
going on, or these lines don't entirely match up with HEAD.


Oops, that was 9.4.1. Trace from just pulled HEAD below, compiled with -O0.


That's compiled with optimization, isn't it? Could you compile with -O0?

Can you go up to

frame #19: 0x000116295f1f 
plpgsql.so`exec_stmt_block(estate=0x7fff532c0090, block=0x7fa3f53a21d0) 
+ 2095 at pl_exec.c:1300


and do 'disass/m' to see where we actually are?


That's now 20 instead of 19. disass/m gives an error (note my gdb is 
tweaked for apple and everything else I've posted here is from lldb). 
Output from 'disassemble' in frame 20 at bottom.



The area around #19 likely is the PG_CATCH inside exec_stmt_block. So
what's happening is that there's an exception raised, while handling the
previous exception. And apparently we're not doing that entirely
right. Besides the line numbers "during exception cleanup" hints at
that.


If the line numbers and the 'gui' mode of lldb are to be believed, frame 
20 is NOT in the try block, but #9 is:



│ 1242 │        PG_CATCH();
│ 1243 │        {
│ 1244 │            ErrorData  *edata;
│ 1245 │            ListCell   *e;
│ 1246 │
│ 1247 │            estate->err_text = gettext_noop("during exception cleanup");
│ 1248 │
│ 1249 │            /* Save error info */
│ 1250 │            MemoryContextSwitchTo(oldcontext);
│ 1251 │            edata = CopyErrorData();
│ 1252 │            FlushErrorState();
│ 1253 │
│ 1254 │            /* Abort the inner transaction */
│ 1255 │◆           RollbackAndReleaseCurrentSubTransaction();


Looking at frame 10

   432   */
   433  estate.err_text = NULL;
   434  estate.err_stmt = (PLpgSQL_stmt *) (func->action);
-> 435   rc = exec_stmt_block(&estate, func->action);
   436  if (rc != PLPGSQL_RC_RETURN)
   437  {
   438  estate.err_stmt = NULL;

func->fn_signature is results_eq(refcursor,refcursor,text), which is a 
pgtap function that has an exception handler, and I know the code that 
it's calling also has an exception handler.



(lldb) bt
* thread #1: tid = 0x3c8fb4, 0x7fff92a5a866 
libsystem_kernel.dylib`__pthread_kill + 10, queue = 'com.apple.main-thread', 
stop reason = signal SIGABRT
  * frame #0: 0x7fff92a5a866 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x7fff9001b35c libsystem_pthread.dylib`pthread_kill + 92
frame #2: 0x7fff8cf89b1a libsystem_c.dylib`abort + 125
frame #3: 0x00010fcf3b99 
postgres`ExceptionalCondition(conditionName=0x00010fe157e9, 
errorType=0x00010fd60053, fileName=0x00010fe14a7f, lineNumber=2055) + 
137 at assert.c:54
frame #4: 0x00010fcda0ac 
postgres`RelationClearRelation(relation=0x00011954f388, rebuild='\0') + 140 
at relcache.c:2053
frame #5: 0x00010

Re: [HACKERS] buffer README is out of date

2015-08-29 Thread Jeff Janes
On Sat, Aug 29, 2015 at 11:45 AM, Jeff Janes  wrote:

> The buffer/README section on buffer clean up locks never got updated for
> the creation of Heap Only Tuples and their associated compaction logic.
>
> I've attached a patch to change the explanation.  I'm sure someone can
> word it better than I have.
>

...particularly if I won't let you see my wording.

Actually attached this time.


buffer_readme.patch
Description: Binary data



[HACKERS] buffer README is out of date

2015-08-29 Thread Jeff Janes
The buffer/README section on buffer clean up locks never got updated for
the creation of Heap Only Tuples and their associated compaction logic.

I've attached a patch to change the explanation.  I'm sure someone can word
it better than I have.

Cheers,

Jeff


Re: [HACKERS] Pg_upgrade remote copy

2015-08-29 Thread Bruce Momjian
On Fri, Aug 28, 2015 at 10:34:38PM -0700, AI Rumman wrote:
> Hi,
> 
> In pg_upgrade, how about adding a feature to copy data directory over network.
> That is, we can run pg_upgrade from our new host, where old host will be a
> remote machine.
> We can add two parameters - oldhost and if it is used, pg_upgrade will 
> identify
> the old host as remote and instead of local copy, it will use remote copy.
> 
> What do you think about it?

Well, you could NFS-mount the remote directory and have it work that
way.  One complexity is that you need to start/stop the old and new
servers, so doing something without NFS is going to be very hard.

I think it is much simpler to just copy the old cluster to the remote
server and run pg_upgrade in --link mode on the remote server.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Pavel Stehule
2015-08-29 18:36 GMT+02:00 Andres Freund :

> On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
> > 2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr <
> oleksandr.shul...@zalando.de>
> > > Good point.  There's still hope to set a flag and process it later on.
> > > Will have to check if it's possible to stay in the scope of a loaded
> module
> > > though.
>
> > I had a workable prototype - and it was implemented very similarly to
> > the handling of CANCEL
>
> Where did you put the handling of that kind of interrupt? Directly into
> ProcessInterrupts()?
>

Probably. I don't remember it well, but it needed hacking core code - it
cannot be done from an extension.

Pavel


>
> Andres
>


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Pavel Stehule
2015-08-29 19:11 GMT+02:00 Andrew Dunstan :

>
>
> On 08/29/2015 12:02 PM, Pavel Stehule wrote:
>
>>
>>
>> 2015-08-29 15:43 GMT+02:00 Shulgin, Oleksandr <oleksandr.shul...@zalando.de>:
>>
>> On Sat, Aug 29, 2015 at 3:39 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>> Andrew Dunstan <and...@dunslane.net> writes:
>> > On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
>> >> Given there were no loud complaints about this, the current
>> behavior
>> >> is appropriate for most users, the rest can still work
>> around using
>> >> coalesce(to_json(...), json 'null').
>>
>> > I don't think it's necessarily more correct. But I do agree
>> that it's
>> > not a good idea to change the behaviour unless there is major
>> > unhappiness with it.
>>
>> I'm not entirely convinced that JSON NULL and SQL NULL should
>> be treated
>> as the same concept, so I would say that the current behavior
>> is fine ---
>> at least when you think about it in isolation.  However,
>> haven't we
>> already bought into that equivalence in these examples?
>>
>> regression=# select row_to_json(row(1,null,2));
>> row_to_json
>> ---
>>  {"f1":1,"f2":null,"f3":2}
>> (1 row)
>>
>> regression=# select array_to_json(array[1,null,2]);
>>  array_to_json
>> ---
>>  [1,null,2]
>> (1 row)
>>
>> or even in to_json itself:
>>
>> regression=# select to_json(array[1,null,2]);
>>   to_json
>> 
>>  [1,null,2]
>> (1 row)
>>
>> The scalar case is definitely failing to be consistent with these.
>>
>>
>> Yes, that's my argument for correctness also: to_json() on a
>> composite object should behave like distribution of to_json()
>> calls over object/array elements.
>>
>> Is consistency a sufficient reason to change it?
>>
>>
>> Not for me.
>>
>>
>> It is a bug - and it should be fixed. I agree that this change is too
>> strong for fixing in a minor version - but we can change it in unreleased
>> major versions - 9.5 and master.
>>
>
>
>
>
> No, frankly that's being far too free with the word bug. It's not even
> unambiguously incorrect.
>
> Note that all the to_json functions are strict. In this sense it's quite
> consistent. If we change it to being called on null input, what should we
> return if a null non-scalar is passed in?
>

what is correct from JSON perspective? All fields with NULL

Pavel


>
> cheers
>
> andrew
>
>


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Andrew Dunstan



On 08/29/2015 12:02 PM, Pavel Stehule wrote:



2015-08-29 15:43 GMT+02:00 Shulgin, Oleksandr <oleksandr.shul...@zalando.de>:


On Sat, Aug 29, 2015 at 3:39 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Andrew Dunstan <and...@dunslane.net> writes:
> On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
>> Given there were no loud complaints about this, the current behavior
>> is appropriate for most users, the rest can still work
around using
>> coalesce(to_json(...), json 'null').

> I don't think it's necessarily more correct. But I do agree
that it's
> not a good idea to change the behaviour unless there is major
> unhappiness with it.

I'm not entirely convinced that JSON NULL and SQL NULL should
be treated
as the same concept, so I would say that the current behavior
is fine ---
at least when you think about it in isolation.  However,
haven't we
already bought into that equivalence in these examples?

regression=# select row_to_json(row(1,null,2));
row_to_json
---
 {"f1":1,"f2":null,"f3":2}
(1 row)

regression=# select array_to_json(array[1,null,2]);
 array_to_json
---
 [1,null,2]
(1 row)

or even in to_json itself:

regression=# select to_json(array[1,null,2]);
  to_json

 [1,null,2]
(1 row)

The scalar case is definitely failing to be consistent with these.


Yes, that's my argument for correctness also: to_json() on a
composite object should behave like distribution of to_json()
calls over object/array elements.

Is consistency a sufficient reason to change it?


Not for me.


It is a bug - and it should be fixed. I agree that this change is too
strong for fixing in a minor version - but we can change it in
unreleased major versions - 9.5 and master.





No, frankly that's being far too free with the word bug. It's not even 
unambiguously incorrect.


Note that all the to_json functions are strict. In this sense it's quite 
consistent. If we change it to being called on null input, what should 
we return if a null non-scalar is passed in?


cheers

andrew





Re: [HACKERS] WIP: About CMake v2

2015-08-29 Thread YUriy Zhuravlev
On Friday 28 August 2015 13:51:30 you wrote:
> It's broadly interesting, but since it bakes in a build dependency on
> CMake, there is some risk that the dependencies become an insurmountable
> problem.
> 
> (Does CMake run on a VAX 11/780??  :-))

http://public.kitware.com/Bug/view.php?id=13605 - do you mean this?


> 
> It is probably worth a try, to see what improvements arise, albeit with the
> need to accept some risk of refusal of the change.
> 
> The experiment is most likely necessary: we won't know the benefits without
> trying.

You're right.

> 
> If the results represent little improvement, there will be little or no
> appetite to jump through the dependency hoops needed to get the change
> accepted.
> 
> On the other hand, if there are big gains, that encourages pushing thru the
> dependency issues.
> 
> On Aug 28, 2015 10:45, "YUriy Zhuravlev"  wrote:
> > Hello Hackers
> > 
> > How would you react if I provided a patch which introduces a CMake build
> > system?
> > 
> > Old thread:
> > http://www.postgresql.org/message-id/200812291325.13354.pete...@gmx.net
> > 
> > The main argument against is "it's too hard". Am I right?
> > 
> > Thanks!
> > --
> > YUriy Zhuravlev
> > Postgres Professional: http://www.postgrespro.com
> > The Russian Postgres Company
> > 
> > 
> > --
> > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-hackers

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Andres Freund
On 2015-08-29 18:27:59 +0200, Pavel Stehule wrote:
> 2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr 
> > Good point.  There's still hope to set a flag and process it later on.
> > Will have to check if it's possible to stay in the scope of a loaded module
> > though.

> I had a workable prototype - and it was implemented very similarly to
> the handling of CANCEL

Where did you put the handling of that kind of interrupt? Directly into
ProcessInterrupts()?

Andres




Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2015-08-29 Thread Andres Freund
Hi,

On 2015-08-29 11:06:05 -0500, Jim Nasby wrote:
> Stack trace below. Relevant assert:

What exact revision is this on? Either there's some aggressive inlining
going on, or these lines don't entirely match up with HEAD.

That's compiled with optimization, isn't it? Could you compile with -O0?

Can you go up to
> >frame #19: 0x000116295f1f 
> > plpgsql.so`exec_stmt_block(estate=0x7fff532c0090, 
> > block=0x7fa3f53a21d0) + 2095 at pl_exec.c:1300

and do 'disass/m' to see where we actually are?


The area around #19 likely is the PG_CATCH inside exec_stmt_block. So
what's happening is that there's an exception raised, while handling the
previous exception. And apparently we're not doing that entirely
right. Besides the line numbers "during exception cleanup" hints at
that.

Greetings,

Andres Freund




Re: [HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Pavel Stehule
2015-08-29 18:25 GMT+02:00 Shulgin, Oleksandr 
:

> On Sat, Aug 29, 2015 at 5:44 PM, Andres Freund  wrote:
>
>> On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
>> > Probably using SIGUSR2 would be more appropriate, but I'm not sure if
>> there
>> > are other extensions out there that might be already using it for some
>> > other reason (well, I do not know that for SIGUSR1 either).  Looking at
>> the
>> > current state of affairs in procsignal_sigusr1_handler() makes me
>> believe
>> > it should be pretty safe to catch the signal like I do.  Or is that not
>> the
>> > case?
>>
>> You can catch signals, but you're not allowed to do a lot from
>> them. Anything allocating memory, acquiring locks, etc. is out - these
>> functions aren't reentrant.  If you can guarantee that you're not
>> interrupting any relevant code you can bend those rules, but that's
>> obviously not the case here.
>>
>> Check out the list of async-signal-safe functions at
>> http://man7.org/linux/man-pages/man7/signal.7.html
>
>
> Good point.  There's still hope to set a flag and process it later on.
> Will have to check if it's possible to stay in the scope of a loaded module
> though.
>

I had a workable prototype - and it was implemented very similarly to the
handling of CANCEL.

Pavel


Re: [HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Shulgin, Oleksandr
On Sat, Aug 29, 2015 at 5:44 PM, Andres Freund  wrote:

> On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
> > Probably using SIGUSR2 would be more appropriate, but I'm not sure if
> there
> > are other extensions out there that might be already using it for some
> > other reason (well, I do not know that for SIGUSR1 either).  Looking at
> the
> > current state of affairs in procsignal_sigusr1_handler() makes me believe
> > it should be pretty safe to catch the signal like I do.  Or is that not
> the
> > case?
>
> You can catch signals, but you're not allowed to do a lot from
> them. Anything allocating memory, acquiring locks, etc. is out - these
> functions aren't reentrant.  If you can guarantee that you're not
> interrupting any relevant code you can bend those rules, but that's
> obviously not the case here.
>
> Check out the list of async-signal-safe functions at
> http://man7.org/linux/man-pages/man7/signal.7.html


Good point.  There's still hope to set a flag and process it later on.
Will have to check if it's possible to stay in the scope of a loaded module
though.


Re: [HACKERS] to_json(NULL) should return JSON null instead of NULL

2015-08-29 Thread Jim Nasby

On 8/29/15 11:02 AM, Pavel Stehule wrote:

It is a bug - and it should be fixed. I agree that this change is too
strong for fixing in a minor version - but we can change it in unreleased
major versions - 9.5 and master.


+1
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] psql - better support pipe line

2015-08-29 Thread Jim Nasby

On 8/29/15 8:10 AM, Shulgin, Oleksandr wrote:

The other issue is there's no way to capture \conninfo inside of
psql and do something with it. If instead this was exposed as a
variable, you could handle it in SQL if you wanted to.


Yeah, I forgot about the variable proposal, that would be a more useful
way to expose it for sure.


Right. My only other point is it would be nice if what we exposed there 
could be easily parsed in SQL. But as I said, having *anything* would be 
an improvement.
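
For comparison, psql does already set a few connection details as variables
(DBNAME, USER, HOST, PORT), which can be echoed or spliced into SQL; a small
sketch:

\echo connected to :DBNAME as :USER on :HOST port :PORT
SELECT :'DBNAME' AS dbname, :'PORT' AS port;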

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2015-08-29 Thread Jim Nasby

On 8/29/15 8:04 AM, Tom Lane wrote:

Michael Paquier  writes:

Ah, OK, you meant this file... Yes I was able to receive it as well in your
original email. I'll try to investigate further later, but Tom may beat me
first. He usually does.


Jim had indicated the bug wasn't reproducible on the strength of that
info, so I was waiting for him to provide a more reproducible case.


It was reproducible, just not very self-contained. [1] is a bit better,
but it still involves pgTap as well as test_factory.


Steps to reproduce:
Download https://github.com/BlueTreble/test_factory/archive/crash.zip
Unzip, cd into directory
pgxn install pgtap (or just make test)
make install if you didn't do make test
psql -f crash.sql

Stack trace below. Relevant assert:

   1967		 * As per notes above, a rel to be rebuilt MUST have refcnt > 0; while of
   1968		 * course it would be a bad idea to blow away one with nonzero refcnt.
   1969		 */
-> 1970		Assert(rebuild ?
   1971			   !RelationHasReferenceCountZero(relation) :
   1972			   RelationHasReferenceCountZero(relation));

Relevant bits of relation:

(Relation) relation = 0x0001165832a8
    (RelFileNode) rd_node
    (SMgrRelationData *) rd_smgr = 0x
    (int) rd_refcnt = 1
    (BackendId) rd_backend = 2
    (bool) rd_islocaltemp = '\x01'
    (bool) rd_isnailed = '\0'
    (bool) rd_isvalid = '\x01'
    (char) rd_indexvalid = '\0'
    (SubTransactionId) rd_createSubid = 16
    (SubTransactionId) rd_newRelfilenodeSubid = 0
    (Form_pg_class) rd_rel = 0x0001165838d8
        (NameData) relname
            (char [64]) data = "invoice_03"


rebuild is 0.

[1]https://github.com/BlueTreble/test_factory/blob/crash/crash.sql


(lldb) bt
* thread #1: tid = 0x3b03a0, 0x7fff92a5a866 
libsystem_kernel.dylib`__pthread_kill + 10, queue = 'com.apple.main-thread', 
stop reason = signal SIGABRT
  * frame #0: 0x7fff92a5a866 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x7fff9001b35c libsystem_pthread.dylib`pthread_kill + 92
frame #2: 0x7fff8cf89b1a libsystem_c.dylib`abort + 125
frame #3: 0x00010cdb4039 
postgres`ExceptionalCondition(conditionName=0x00010cf59cfd, 
errorType=0x00010cec392d, fileName=0x00010cf59045, lineNumber=1972) + 
137 at assert.c:54
frame #4: 0x00010cd9b332 
postgres`RelationClearRelation(relation=0x000116594cd8, rebuild='\0') + 162 
at relcache.c:1970
frame #5: 0x00010cd9cc0f 
postgres`AtEOSubXact_cleanup(relation=0x000116594cd8, isCommit='\0', 
mySubid=15, parentSubid=14) + 79 at relcache.c:2665
frame #6: 0x00010cd9cb92 
postgres`AtEOSubXact_RelationCache(isCommit='\0', mySubid=15, parentSubid=14) + 
242 at relcache.c:2633
frame #7: 0x00010c9b6e88 postgres`AbortSubTransaction + 440 at 
xact.c:4373
frame #8: 0x00010c9b8ef2 
postgres`RollbackAndReleaseCurrentSubTransaction + 178 at xact.c:3948
frame #9: 0x000116295c93 
plpgsql.so`exec_stmt_block(estate=0x000

Re: [HACKERS] to_json(NULL) should to return JSON null instead NULL

2015-08-29 Thread Pavel Stehule
2015-08-29 15:43 GMT+02:00 Shulgin, Oleksandr 
:

> On Sat, Aug 29, 2015 at 3:39 PM, Tom Lane  wrote:
>
>> Andrew Dunstan  writes:
>> > On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
>> >> Given there were no loud complaints about this, the current behavior
>> >> is appropriate for most users, the rest can still work around using
>> >> coalesce(to_json(...), json 'null').
>>
>> > I don't think it's necessarily more correct. But I do agree that it's
>> > not a good idea to change the behaviour unless there is major
>> > unhappiness with it.
>>
>> I'm not entirely convinced that JSON NULL and SQL NULL should be treated
>> as the same concept, so I would say that the current behavior is fine ---
>> at least when you think about it in isolation.  However, haven't we
>> already bought into that equivalence in these examples?
>>
>> regression=# select row_to_json(row(1,null,2));
>> row_to_json
>> ---
>>  {"f1":1,"f2":null,"f3":2}
>> (1 row)
>>
>> regression=# select array_to_json(array[1,null,2]);
>>  array_to_json
>> ---
>>  [1,null,2]
>> (1 row)
>>
>> or even in to_json itself:
>>
>> regression=# select to_json(array[1,null,2]);
>>   to_json
>> 
>>  [1,null,2]
>> (1 row)
>>
>> The scalar case is definitely failing to be consistent with these.
>>
>
> Yes, that's my argument for correctness also: to_json() on a composite
> object should behave like distribution of to_json() calls over object/array
> elements.
>
>
>> Is consistency a sufficient reason to change it?
>>
>
> Not for me.
>

It is a bug - and it should be fixed. I agree that this change is too strong
for fixing in a minor version - but we can change it in the unreleased major
versions - 9.5 and master.

Regards

Pavel


>
> --
> Alex
>
>


Re: [HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Andres Freund
On 2015-08-29 17:33:22 +0200, Shulgin, Oleksandr wrote:
> Probably using SIGUSR2 would be more appropriate, but I'm not sure if there
> are other extensions out there that might be already using it for some
> other reason (well, I do not know that for SIGUSR1 either).  Looking at the
> current state of affairs in procsignal_sigusr1_handler() makes me believe
> it should be pretty safe to catch the signal like I do.  Or is that not the
> case?

You can catch signals, but you're not allowed to do a lot from
them. Anything allocating memory, acquiring locks, etc. is out - these
functions aren't reentrant.  If you can guarantee that you're not
interrupting any relevant code you can bend those rules, but that's
obviously not the case here.

Check out the list of async-signal-safe functions at 
http://man7.org/linux/man-pages/man7/signal.7.html

Andres




Re: [HACKERS] 9.4 broken on alpha

2015-08-29 Thread Andres Freund
On 2015-08-29 08:32:29 -0700, David Fetter wrote:
> and I could make a pretty good
> case for the idea that we can relegate 32-bit platforms to the ash
> heap of history, at least on the server side.

Don't see the point, it doesn't cost us very much.

> Then, there's the question of rotating media.  Given the givens, we
> ought to be drawing up plans for the cases where we might consider
> supporting them, but those would need to be zero-based plans, i.e. the
> starting point would be that we don't support them, and arguments
> would have to be made affirmatively to support them for some specific,
> demonstrable use case.

We don't particularly care either way, so I don't see why we'd add/drop
support for anything here.




[HACKERS] On-demand running query plans using auto_explain and signals

2015-08-29 Thread Shulgin, Oleksandr
Hello,

The other day I've rediscovered the exciting idea of using signals to dump
a backend's currently executed query plan, which, it turns out, was already
proposed by Pavel and Simon in these threads:

http://www.postgresql.org/message-id/flat/cafj8pra-duzkmdtu52ciugb0p7tvri_b8ltjmjfwcnr1lpt...@mail.gmail.com

http://www.postgresql.org/message-id/flat/CAFj8pRDEo24joEg4UFRDYeFADFTw-jw_=t=kPwOyDW=v=g1...@mail.gmail.com#

Unfortunately, the latest one is missing an actual patch (or the attachment
was scrubbed) and I'm really surprised that the idea didn't take off then.


While evaluating it myself I've decided to simply patch auto_explain module
which is now in contrib, so presumably should be available to a broad
audience.  Proof-of-concept patch against master is attached (a patch for
an older branch like 9.0 requires trivial manual effort to adapt).

While I can see the value of in-core support for this, realistically this
is unlikely to be included in 9.5, but an ad hoc patch could be useful long
before that happens.

What this patch does is simply dump the plan of the query being run by the
executor to the log when signaled with SIGUSR1.  The auto_explain module
must be loaded into the backend beforehand, of course;
session_preload_libraries seems like the place to do that cluster-wide.

Probably using SIGUSR2 would be more appropriate, but I'm not sure if there
are other extensions out there that might be already using it for some
other reason (well, I do not know that for SIGUSR1 either).  Looking at the
current state of affairs in procsignal_sigusr1_handler() makes me believe
it should be pretty safe to catch the signal like I do.  Or is that not the
case?

The current_query_desc probably needs to be a stack-like structure in order
to keep track of the nested queries correctly, but it works in the simplest
cases.
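
A rough, untested sketch of such a stack, the idea being to replace the
single current_query_desc pointer (MAX_NESTED_QUERIES and the helper names
are invented for this example; the ExecutorStart hook would push and the
ExecutorEnd hook would pop):

#define MAX_NESTED_QUERIES 64

static QueryDesc *query_desc_stack[MAX_NESTED_QUERIES];
static int	query_desc_depth = 0;

/* Called from the ExecutorStart hook. */
static void
push_query_desc(QueryDesc *queryDesc)
{
	/* Depth beyond the array is counted but not stored, so very deep
	 * nesting degrades gracefully instead of overflowing. */
	if (query_desc_depth < MAX_NESTED_QUERIES)
		query_desc_stack[query_desc_depth] = queryDesc;
	query_desc_depth++;
}

/* Called from the ExecutorEnd hook. */
static void
pop_query_desc(void)
{
	if (query_desc_depth > 0)
		query_desc_depth--;
}

/* The innermost query being executed, or NULL when idle. */
static QueryDesc *
current_query_desc(void)
{
	if (query_desc_depth > 0 && query_desc_depth <= MAX_NESTED_QUERIES)
		return query_desc_stack[query_desc_depth - 1];
	return NULL;
}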


What would be even more useful is including stats from the running query in
the explain output, if you're a willing to pay for a (hopefully small)
overhead.  Unfortunately, that doesn't work out of the box: if you enable
the auto_explain.log_analyze and friends in the .conf-file, you either get
all zero counts, or if you're really unlucky, an error from InstrEndLoop():

ERROR:  InstrEndLoop called on running node

The latest patch in this area I could found is this one:
http://www.postgresql.org/message-id/87wsn82lda@oxford.xeocode.com

From what I can see, there's no way around this problem short of hacking
InstrEndLoop...  Did anything change in this area since 2008 possibly?  I
would really love to have a way to make this work with existing un-patched
servers.

Cheers!
--
Alex
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
index 2a184ed..a64b3fe 100644
--- a/contrib/auto_explain/auto_explain.c
+++ b/contrib/auto_explain/auto_explain.c
@@ -13,6 +13,7 @@
 #include "postgres.h"
 
 #include <limits.h>
+#include <signal.h>
 
 #include "commands/explain.h"
 #include "executor/instrument.h"
@@ -46,6 +47,9 @@ static ExecutorStart_hook_type prev_ExecutorStart = NULL;
 static ExecutorRun_hook_type prev_ExecutorRun = NULL;
 static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
 static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+static pqsigfunc prev_sigusr1 = NULL;
+
+static QueryDesc *current_query_desc = NULL;
 
 #define auto_explain_enabled() \
 	(auto_explain_log_min_duration >= 0 && \
@@ -54,6 +58,8 @@ static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
 void		_PG_init(void);
 void		_PG_fini(void);
 
+static void sigusr1_handler(int signum);
+
 static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
 static void explain_ExecutorRun(QueryDesc *queryDesc,
 	ScanDirection direction,
@@ -61,6 +67,7 @@ static void explain_ExecutorRun(QueryDesc *queryDesc,
 static void explain_ExecutorFinish(QueryDesc *queryDesc);
 static void explain_ExecutorEnd(QueryDesc *queryDesc);
 
+static void explain_output_plain(QueryDesc *queryDesc, double time_msec, int format);
 
 /*
  * Module load callback
@@ -170,6 +177,8 @@ _PG_init(void)
 	ExecutorFinish_hook = explain_ExecutorFinish;
 	prev_ExecutorEnd = ExecutorEnd_hook;
 	ExecutorEnd_hook = explain_ExecutorEnd;
+
+	prev_sigusr1 = pqsignal(SIGUSR1, sigusr1_handler);
 }
 
 /*
@@ -185,12 +194,34 @@ _PG_fini(void)
 	ExecutorEnd_hook = prev_ExecutorEnd;
 }
 
+static void
+sigusr1_handler(int signum)
+{
+	int			save_errno = errno;
+
+	if (current_query_desc != NULL)
+	{
+		explain_output_plain(current_query_desc, 0.0, EXPLAIN_FORMAT_TEXT);
+	}
+
+	if (prev_sigusr1 != NULL &&
+		prev_sigusr1 != SIG_IGN &&
+		prev_sigusr1 != SIG_DFL)
+	{
+		prev_sigusr1(signum);
+	}
+
+	errno = save_errno;
+}
+
 /*
  * ExecutorStart hook: start up logging if needed
  */
 static void
 explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
 {
+	current_query_desc = queryDesc;
+
 	if (auto_explain_enabled())
 	{
 		/* Enable per-node instrumentation iff log_analyze is required. */
@@ -294,45 +325,7 @@ explain_ExecutorEnd(QueryDesc *qu

Re: [HACKERS] 9.4 broken on alpha

2015-08-29 Thread David Fetter
On Wed, Aug 26, 2015 at 09:19:09PM -0400, Noah Misch wrote:
> As it is, we've implicitly prepared to ship Alpha-supporting
> PostgreSQL 9.4 until 2019, by which time the newest Alpha hardware
> will be 15 years old.  Computer museums would be our only audience
> for continued support.  I do have a sentimental weakness for
> computer museums, but not at the price of drag on important
> performance work.

+1000

I think we need to take realistic stock of what we're doing and what
we should require.

At a minimum, we should de-support every platform on which literally
no new deployments will ever happen.

I'm looking specifically at you, HPUX, and I could make a pretty good
case for the idea that we can relegate 32-bit platforms to the ash
heap of history, at least on the server side.

Then, there's the question of rotating media.  Given the givens, we
ought to be drawing up plans for the cases where we might consider
supporting them, but those would need to be zero-based plans, i.e. the
starting point would be that we don't support them, and arguments
would have to be made affirmatively to support them for some specific,
demonstrable use case.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] to_json(NULL) should to return JSON null instead NULL

2015-08-29 Thread Shulgin, Oleksandr
On Sat, Aug 29, 2015 at 3:39 PM, Tom Lane  wrote:

> Andrew Dunstan  writes:
> > On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
> >> Given there were no loud complaints about this, the current behavior
> >> is appropriate for most users, the rest can still work around using
> >> coalesce(to_json(...), json 'null').
>
> > I don't think it's necessarily more correct. But I do agree that it's
> > not a good idea to change the behaviour unless there is major
> > unhappiness with it.
>
> I'm not entirely convinced that JSON NULL and SQL NULL should be treated
> as the same concept, so I would say that the current behavior is fine ---
> at least when you think about it in isolation.  However, haven't we
> already bought into that equivalence in these examples?
>
> regression=# select row_to_json(row(1,null,2));
> row_to_json
> ---
>  {"f1":1,"f2":null,"f3":2}
> (1 row)
>
> regression=# select array_to_json(array[1,null,2]);
>  array_to_json
> ---
>  [1,null,2]
> (1 row)
>
> or even in to_json itself:
>
> regression=# select to_json(array[1,null,2]);
>   to_json
> 
>  [1,null,2]
> (1 row)
>
> The scalar case is definitely failing to be consistent with these.
>

Yes, that's my argument for correctness also: to_json() on a composite
object should behave like distribution of to_json() calls over object/array
elements.


> Is consistency a sufficient reason to change it?
>

Not for me.

--
Alex


Re: [HACKERS] to_json(NULL) should to return JSON null instead NULL

2015-08-29 Thread Tom Lane
Andrew Dunstan  writes:
> On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
>> Given there were no loud complaints about this, the current behavior 
>> is appropriate for most users, the rest can still work around using 
>> coalesce(to_json(...), json 'null').

> I don't think it's necessarily more correct. But I do agree that it's 
> not a good idea to change the behaviour unless there is major 
> unhappiness with it.

I'm not entirely convinced that JSON NULL and SQL NULL should be treated
as the same concept, so I would say that the current behavior is fine ---
at least when you think about it in isolation.  However, haven't we
already bought into that equivalence in these examples?

regression=# select row_to_json(row(1,null,2));
row_to_json
---
 {"f1":1,"f2":null,"f3":2}
(1 row)

regression=# select array_to_json(array[1,null,2]);
 array_to_json 
---
 [1,null,2]
(1 row)

or even in to_json itself:

regression=# select to_json(array[1,null,2]);
  to_json   

 [1,null,2]
(1 row)

The scalar case is definitely failing to be consistent with these.
Is consistency a sufficient reason to change it?

regards, tom lane




Re: [HACKERS] 9.4 broken on alpha

2015-08-29 Thread Christoph Berg
Re: Andrew Dunstan 2015-08-25 <55dc5f9e.60...@dunslane.net>
> >gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
> >-Wendif-labels -Wmissing-format-attribute -Wformat-security 
> >-fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -g -O2 -Wformat 
> >-Werror=format-security -I/usr/include/mit-krb5 -fPIC -pie 
> >-DLINUX_OOM_SCORE_ADJ=0 -I../../../src/include 
> >-I/«PKGBUILDDIR»/build/../src/include -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE 
> >-I/usr/include/libxml2  -I/usr/include/tcl8.6  -c -o bgworker.o 
> >/«PKGBUILDDIR»/build/../src/backend/postmaster/bgworker.c
> >/tmp/cc4j88on.s: Assembler messages:
> >/tmp/cc4j88on.s:952: Error: unknown opcode `rmb'
> >as: BFD (GNU Binutils for Debian) 2.25 internal error, aborting at 
> >../../gas/write.c line 603 in size_seg
> 
> 
> needs a buildfarm animal. If we had one we'd presumably have caught this
> much earlier.

In the meantime, I've added this patch to the 9.4 Debian package, and
the build including check-world succeeds:

https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=alpha&ver=9.4.4-2&stamp=1440797195

It'd be nice if the patch could get applied to 9.4 and earlier.

Thanks,
Christoph
-- 
Senior Berater, Tel.: +49 (0)21 61 / 46 43-187
credativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209
Hohenzollernstr. 133, 41061 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
pgp fingerprint: 5C48 FE61 57F4 9179 5970  87C6 4C5A 6BAB 12D2 A7AE




Re: [HACKERS] to_json(NULL) should to return JSON null instead NULL

2015-08-29 Thread Andrew Dunstan



On 08/29/2015 08:47 AM, Shulgin, Oleksandr wrote:
On Sat, Aug 29, 2015 at 8:39 AM, Pavel Stehule
<pavel.steh...@gmail.com> wrote:


Hi

currently JSON supports a NULL value - but we don't convert NULL to
JSON correctly

postgres=# select to_json(NULL::text) IS NULL;
┌──────────┐
│ ?column? │
╞══════════╡
│ t        │
└──────────┘
(1 row)

probably should be json "null";

like

postgres=# select json 'null' is null;
┌──────────┐
│ ?column? │
╞══════════╡
│ f        │
└──────────┘
(1 row)


I agree that for correctness reasons it should, but I just don't think
it would be OK to change this behavior -- the function has been there since
9.3...


Given there were no loud complaints about this, the current behavior
is appropriate for most users; the rest can still work around it using
coalesce(to_json(...), json 'null').






I don't think it's necessarily more correct. But I do agree that it's 
not a good idea to change the behaviour unless there is major 
unhappiness with it.


cheers

andrew





Re: [HACKERS] Proposal: Implement failover on libpq connect level.

2015-08-29 Thread Shulgin, Oleksandr
On Fri, Aug 28, 2015 at 6:10 PM, Teodor Sigaev  wrote:

> +1 for bringing the jdbc driver URI syntax into libpq, so that all
>> interfaces
>> can be optionally specified this way. This doesn't preclude the use of
>> ipfailover, in fact it might be work well together. If you don't like it,
>> don't
>> use it.
>>
>
> +1
>
> Another thought: multiple hosts in a URI could be used in a simple
> configuration for read-only clients. I have faced customers who manage
> two connections in a process - one to the master and one to one of
> several slaves.


Hm, but do they suffer any trouble while doing that *outside* of libpq?
What is the benefit in adding this to libpq itself while it already
provides very rich and finely grained connection control functions?

--
Alex
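
For reference, the failover such clients implement outside of libpq can be
as small as a loop over candidate conninfo strings - a rough sketch using
only documented libpq calls (the connection parameters below are made up):

#include <stdio.h>
#include <libpq-fe.h>

/* Try each conninfo string in order; return the first live connection. */
static PGconn *
connect_with_failover(const char *const *conninfos, int n)
{
	int			i;

	for (i = 0; i < n; i++)
	{
		PGconn	   *conn = PQconnectdb(conninfos[i]);

		if (conn == NULL)
			continue;			/* out of memory inside libpq */
		if (PQstatus(conn) == CONNECTION_OK)
			return conn;

		/* PQerrorMessage() already ends with a newline. */
		fprintf(stderr, "could not connect: %s", PQerrorMessage(conn));
		PQfinish(conn);
	}
	return NULL;
}

/* Usage with a hypothetical master/standby pair:
 *
 *	const char *candidates[] = { "host=db1 dbname=app",
 *								 "host=db2 dbname=app" };
 *	PGconn	   *conn = connect_with_failover(candidates, 2);
 */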


Re: [HACKERS] psql - better support pipe line

2015-08-29 Thread Shulgin, Oleksandr
On Fri, Aug 28, 2015 at 9:52 PM, Jim Nasby  wrote:

> On 8/28/15 3:58 AM, Shulgin, Oleksandr wrote:
>
>> It occurs to me the most flexible thing that could be done here
>> would be providing a libpq function that spits out JSON connection
>> parameters and have psql turn that into a variable. It would be easy
>> to feed that to a SQL statement and do whatever you want with it at
>> that point, including format it to a connection URI.
>>
>>
>> Hm... but that would mean that suddenly psql would need JSON parsing
>> capabilities and URI escaping code would have to be moved there too?  So
>> every client that links to libpq and wants to use this feature going as
>> far as reconstructing an URI would need both of the capabilities.
>>
>
> Anything that's doing this presumably has connected to the database, which
> on any recent version means you have plenty of ability to process JSON at
> the SQL layer.


*Cough*...  Well, the fact that it's technically not impossible doesn't
mean it's the right way to do it.  By the same reasoning we could also ask
the server to calculate 1+1 for us in SQL. :-)

And that will work even with a 9.0 server, while parsing JSON -- not
really.  Another point is that you don't need a *live* connection to be
able to extract its URI/conninfo string, but when offloading the JSON
parsing to the server you suddenly do.  Bottom line for me: while still
possible, this can't be portable.

>> Why not spit out the conninfo format instead of JSON, with proper escaping?
>> That could be a separate library call, e.g. PGgetConnectionString() and
>> a separate backslash command: \conninfo
>>
>
> Do you mean as a URI? The downside to that it's it's more difficult to
> parse than JSON. Another option might be an array.
>

Hm... actually why not just use the existing call:

PQconninfoOption *PQconninfo(PGconn *conn);

and move whatever code is needed to form an URI or conninfo string to psql
itself?
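
A rough sketch of that, using only the documented PQconninfo() and
PQconninfoFree() calls - escaping per the conninfo quoting rules, skipping
unset options and (to be safe) the password:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Print a conninfo-style string for an open connection.  Every value is
 * single-quoted, with embedded ' and \ escaped by a backslash. */
static void
print_conninfo(PGconn *conn)
{
	PQconninfoOption *opts = PQconninfo(conn);
	PQconninfoOption *opt;

	if (opts == NULL)
		return;

	for (opt = opts; opt->keyword != NULL; opt++)
	{
		const char *val = opt->val;

		if (val == NULL || val[0] == '\0')
			continue;			/* skip unset options */
		if (strcmp(opt->keyword, "password") == 0)
			continue;			/* don't leak secrets */

		printf("%s='", opt->keyword);
		for (; *val; val++)
		{
			if (*val == '\'' || *val == '\\')
				putchar('\\');
			putchar(*val);
		}
		printf("' ");
	}
	putchar('\n');

	PQconninfoFree(opts);
}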

The other issue is there's no way to capture \conninfo inside of psql and
> do something with it. If instead this was exposed as a variable, you could
> handle it in SQL if you wanted to.
>

Yeah, I forgot about the variable proposal, that would be a more useful way
to expose it for sure.

--
Alex


Re: [HACKERS] [BUGS] Compile fails on AIX 6.1

2015-08-29 Thread Tom Lane
Noah Misch  writes:
> I tested a gcc 64-bit build.  Consistent with your followup, "b .+12" doesn't
> build, but "b $+12" builds and passes "make check".  I am attaching the exact
> diff I tested.
> On GNU/Linux ppc, I get the same opcodes before and after the change.

Thanks for checking!  I have some other business to attend to today, but
I'll get this committed soon.

Please consider spinning up a gcc buildfarm critter on the machine running
hornet & mandrill.

regards, tom lane




Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2015-08-29 Thread Tom Lane
Michael Paquier  writes:
> Ah, OK, you meant this file... Yes I was able to receive it as well in your
> original email. I'll try to investigate further later, but Tom may beat me
> first. He usually does.

Jim had indicated the bug wasn't reproducible on the strength of that
info, so I was waiting for him to provide a more reproducible case.

regards, tom lane




Re: [HACKERS] to_json(NULL) should to return JSON null instead NULL

2015-08-29 Thread Shulgin, Oleksandr
On Sat, Aug 29, 2015 at 8:39 AM, Pavel Stehule 
wrote:

> Hi
>
> currently JSON supports a NULL value - but we don't convert NULL to JSON
> correctly
>
> postgres=# select to_json(NULL::text) IS NULL;
> ┌──────────┐
> │ ?column? │
> ╞══════════╡
> │ t        │
> └──────────┘
> (1 row)
>
> probably should be json "null";
>
> like
>
> postgres=# select json 'null' is null;
> ┌──────────┐
> │ ?column? │
> ╞══════════╡
> │ f        │
> └──────────┘
> (1 row)
>

I agree that for correctness reasons it should, but I just don't think it
would be OK to change this behavior -- the function has been there since 9.3...

Given there were no loud complaints about this, the current behavior is
appropriate for most users; the rest can still work around it using
coalesce(to_json(...), json 'null').

--
Alex