Re: [HACKERS] Logging WAL when updating hintbit

2013-11-17 Thread Sawada Masahiko
On Fri, Nov 15, 2013 at 11:33 PM, Peter Eisentraut pete...@gmx.net wrote:
> On 11/14/13, 1:02 AM, Sawada Masahiko wrote:
>> I attached a patch that adds new wal_level 'all'.
>
> Compiler warning:
>
> pg_controldata.c: In function ‘wal_level_str’:
> pg_controldata.c:72:2: warning: enumeration value ‘WAL_LEVEL_ALL’ not
> handled in switch [-Wswitch]


Thank you for the report!
I have fixed it.


-- 
Regards,

---
Sawada Masahiko


log_hint_bit_wal_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pre-commit triggers

2013-11-17 Thread Hannu Krosing
On 11/17/2013 01:42 AM, Tom Lane wrote:
> Hannu Krosing ha...@2ndquadrant.com writes:
>> I have not looked at the patch, but does it also run pre-rollback ?
> error in trigger - instant infinite loop.

That means this needs some kind of recursion depth limit, like Python's:

>>> def x():
...     return x()
...
>>> x()
... (a few thousand messages like the following) ...
  File "<stdin>", line 2, in x
RuntimeError: maximum recursion depth exceeded


> Besides, exactly what would you do in such a trigger?

The use case would be telling another system about the rollback.

Basically sending an "ignore what I told you to do" message.

So it would send a network message, send a signal, or write something to
an external file.

> Not modify
> the database, for certain, because we're about to roll back.
>
> 			regards, tom lane

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Review: pset autocomplete add missing options

2013-11-17 Thread Ian Lawrence Barwick
Review for Pavel Stehule's patch in CF 2013-11:

  https://commitfest.postgresql.org/action/patch_view?id=1253

Patch applies cleanly and works as intended; it's a very straightforward
patch so no surprises there.

The patch expands the range of completable items for \pset, puts
them in alphabetical order, and syncs them with the list in command.c
introduced by Gilles Darold's earlier patch for \pset without any
options ( https://commitfest.postgresql.org/action/patch_view?id=1202 ).

However, double-checking the options available to \pset, I see there
are also 'fieldsep_zero' and 'recordsep_zero', which are special cases
of 'fieldsep' and 'recordsep' respectively; they are therefore not
displayed separately by \pset without any options, but should nevertheless
be tab-completable. A modified patch is attached that includes these.

Regards

Ian Barwick

PS I will endeavour to review a more complex patch
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
new file mode 100644
index 021b6c5..24a5c69
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*** psql_completion(char *text, int start, i
*** 3306,3314 ****
  	else if (strcmp(prev_wd, "\\pset") == 0)
  	{
  		static const char *const my_list[] =
! 		{"format", "border", "expanded",
! 			"null", "fieldsep", "tuples_only", "title", "tableattr",
! 		"linestyle", "pager", "recordsep", NULL};
  
  		COMPLETE_WITH_LIST_CS(my_list);
  	}
--- 3306,3315 ----
  	else if (strcmp(prev_wd, "\\pset") == 0)
  	{
  		static const char *const my_list[] =
! 		{"border", "columns", "expanded", "fieldsep", "fieldsep_zero",
! 		 "footer", "format", "linestyle", "null", "numericlocale",
! 		 "pager", "recordsep", "recordsep_zero", "tableattr", "title",
! 		 "tuples_only", NULL};
  
  		COMPLETE_WITH_LIST_CS(my_list);
  	}

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] appendPQExpBufferVA vs appendStringInfoVA

2013-11-17 Thread Marko Kreen
On Thu, Nov 14, 2013 at 09:33:59PM +1300, David Rowley wrote:
> On Sun, Nov 3, 2013 at 3:18 AM, David Rowley dgrowle...@gmail.com wrote:
>> I'm low on ideas on how to improve things much around here for now, but
>> for what it's worth, I did create a patch which changes unnecessary calls
>> to appendPQExpBuffer() into calls to appendPQExpBufferStr(), similar to the
>> recent one for appendStringInfo and appendStringInfoString.
>
> Attached is a re-based version of this.

It does not apply anymore; could you resend it?

I am a bit suspicious of the performance impact of this patch, but I
think it's still worthwhile, as it reduces use of the code style where a
single string argument is given to a printf-style function without %s.

-- 
marko



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pre-commit triggers

2013-11-17 Thread Alvaro Herrera
Hannu Krosing wrote:

> So it would send a network message, a signal or writing something to
> external file.

If you're OK with a C function, you could try registering a callback;
see RegisterXactCallback().
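A compile-only sketch of what such an extension might look like (hypothetical module and function names; RegisterXactCallback() and XACT_EVENT_ABORT come from access/xact.h). Note the callback fires after abort processing, so it must not execute SQL:

```c
/* Hypothetical extension sketch: notify an external system on rollback.
 * Builds as a PostgreSQL C extension; not runnable standalone. */
#include "postgres.h"
#include "access/xact.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

static void
rollback_notify_callback(XactEvent event, void *arg)
{
	if (event == XACT_EVENT_ABORT)
	{
		/* e.g. write a marker file or send a datagram; no SQL from here,
		 * since the transaction is already being torn down */
		elog(LOG, "transaction aborted; notifying external system");
	}
}

void
_PG_init(void)
{
	RegisterXactCallback(rollback_notify_callback, NULL);
}
```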

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pre-commit triggers

2013-11-17 Thread Hannu Krosing
On 11/17/2013 04:20 PM, Alvaro Herrera wrote:
> Hannu Krosing wrote:
>
>> So it would send a network message, a signal or writing something to
>> external file.
> If you're OK with a C function, you could try registering a callback;
> see RegisterXactCallback().

I already have an implementation doing just that, though having a
trigger would perhaps be clearer :)

And I suspect that calling a pl/* function after the ROLLBACK has
actually happened due to an error is a no-go anyway, so it has to be C.

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] Using indices for UNION.

2013-11-17 Thread Tom Lane
Kyotaro HORIGUCHI horiguchi.kyot...@lab.ntt.co.jp writes:
> [ union_uses_idx_v2_20131113.patch ]

I'm aware that you said you were going to refactor this, but I took a
quick look through it anyway.  I don't think mere refactoring is going
to make me happy with it :-(.

The basic problem here, as well as with some of the hackery that's
been proposed recently in planner.c, is that we can't really get much
further with improving this layer of the planner until we bite the
bullet and convert this layer to work with Paths not finished Plans.
Look at what you're proposing here: do a complete planning cycle on
the subquery (underneath which both ordered and unordered Paths will
be considered, and one or the other will get thrown away).  Then do
*another* complete planning cycle with ordered output forced.  Then
compare costs of the results.  This is hugely inefficient, and it
produces far-from-ideal results too, because you're forced to make a
decision right there on which subquery plan to use; you can't make the
globally optimal choice after considering costs of all the subqueries.
What we need to do is fix things so we can get multiple Paths out of
the subquery planning step, some ordered and some not.  Then we could
construct Paths for both the brute force and merge-append styles of
computing the UNION result, and finally choose the cheapest Path at the
top level.

The same goes for hacking around in grouping_planner.  That function
is well past the point of collapsing of its own weight; we need
to invest some effort in refactoring before we can afford to add
much more complexity there.  And I think the logical way to refactor
is to rewrite the code in a Path-generation-and-comparison style.
(Actually, grouping_planner would need to be fixed first, since
that's what would have to produce the Paths we're hoping to compare
in prepunion.)

I had hoped to make some progress on that rewrite this past summer,
but ended up with no time to work on it :-(.  There's going to be
a lot of boring infrastructure work before we see much in the way
of user-visible improvement, I'm afraid, so it's kind of hard to
make time for it.

regards, tom lane




[HACKERS] Compile and test in netbeans

2013-11-17 Thread Rohit Goyal
Hi All,

I want to modify the PostgreSQL code for the first time.
I tried to use NetBeans to compile the code.

Now, I want to print something to the console when I create a table or an
index. I am printing just for testing purposes, to get a feel for the code.

Please tell me the procedure to run and print using printf().

-- 
Regards,
Rohit Goyal


Re: [HACKERS] pre-commit triggers

2013-11-17 Thread Andres Freund
On 2013-11-17 09:39:26 +0100, Hannu Krosing wrote:
>> Besides, exactly what would you do in such a trigger?
> The use case would be telling another system about the rollback.
>
> Basically sending an "ignore what I told you to do" message

But you can't rely on it - if e.g. the server restarted/crashed, there
won't be any messages about it. In that light, I really don't see what
you could do with it.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Wildcard usage enhancements in .pgpass

2013-11-17 Thread Martijn van Oosterhout
On Sat, Nov 16, 2013 at 09:26:33PM +0100, Alexey Klyukin wrote:
> Hi,
>
> Attached is the patch that improves usage of the '*' wildcard in .pgpass,
> particularly in the host part. The use case is below.

Looks interesting, though I wonder if you could use fnmatch(3) here. Or
would that match more than you expect?  For example, it would allow
'foo*bar' to match 'foo.bar', which your code doesn't.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [HACKERS] pre-commit triggers

2013-11-17 Thread Hannu Krosing
On 11/17/2013 07:31 PM, Andres Freund wrote:
> On 2013-11-17 09:39:26 +0100, Hannu Krosing wrote:
>>> Besides, exactly what would you do in such a trigger?
>> The use case would be telling another system about the rollback.
>>
>> Basically sending an "ignore what I told you to do" message
> But you can't rely on it - if e.g. the server restarted/crashed, there
> won't be any messages about it. In that light, I really don't see what
> you could do with it.

I can get the info about the non-commit earlier :)

At some point I can call back into the database and see if the
transaction is still running.

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] additional json functionality

2013-11-17 Thread David E. Wheeler
On Nov 16, 2013, at 2:04 PM, Hannu Krosing ha...@2ndquadrant.com wrote:

>> It’s still input and output as JSON, though.
> Yes, because JavaScript Object Notation *is* a serialization format
> (aka Notation) for converting JavaScript Objects to text format
> and back :)
>> I still like JSONB best.
> To me it feels redundant, like binary->text->binary
>
> the binary representation of JSON is a JavaScript(-like) Object, not
> "binary json"
>
> So my vote would be either jsobj or jsdoc (as document databases tend
> to call the structured types "documents")

You know that both types support scalar values, right? 'a'::JSON works
now, and 'a'::hstore works with the WIP patch. For that reason I would
not think that "doc" or "obj" would be good choices.

I like JSONB because:

1. The B means binary
2. The B means second
3. It's short
4. See also BYTEA.

Best,

David






Re: [HACKERS] CREATE TABLE IF NOT EXISTS AS

2013-11-17 Thread David E. Wheeler
On Nov 16, 2013, at 4:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:

>> Co-worker asked a question I could not answer: Why is IF NOT EXISTS not
>> supported by CREATE TABLE AS?
>
> That's an even worse idea than plain CREATE IF NOT EXISTS (which was
> put in over vocal objections from me and some other people).  Not only
> would you not have the faintest clue as to the properties of the table
> afterwards, but no clue as to its contents either.

You mean that, after running it, one cannot tell whether a new table was
created without looking at it? I guess that makes sense, though
sometimes I like to tell the system to assume that I know what I’m doing --
e.g., that either outcome works for me.

Not essential as a core feature, mind you; I can use DO to accomplish the
same thing. It’s just a bit more work that way. And perhaps that’s for the
best.

Best,

David





Re: [HACKERS] proposal: simple date constructor from numeric values

2013-11-17 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
> Thanks.  I wasn't sure about the error message returned when times are
> outside range; how about this instead?  I'm not wedded to this approach
> -- I can return to yours if this one isn't liked -- but I think the
> more specific messages are better.  I realize this is inconsistent with
> the make_date case which always displays the full date instead of
> specific fields, but I think it's more likely that someone is doing
> arithmetic to enter time fields than date.  (Anyway maybe this is not an
> important enough issue to create more work for translators.)

I thought that last point was the most important one: doing it like that
would create more work for translators than it's worth.  There's no reason
to think that people can't figure out which field it's unhappy about.
And what if more than one field is wrong?  You'd be exposing an
implementation detail about the order in which the tests are made.

Another issue with the patch as submitted was that make_date with a
negative year value behaved unreasonably.  I made it throw error, but
you could also argue that say -44 ought to mean 44 BC.  (Year zero
should be disallowed in any case, of course.)  It would take a few
extra lines of code to do that.

Committed with those changes and some other cosmetic adjustments.

This doesn't really finish the TODO item, as that contemplated a
make_timestamp() function as well; but I don't see a reason not
to commit what we've got.

regards, tom lane




Re: [HACKERS] additional json functionality

2013-11-17 Thread David Johnston
David E. Wheeler wrote:
> I like JSONB because:
>
> 1. The B means binary
> 2. The B means second
> 3. It's short
> 4. See also BYTEA.

json_strict:

Not sure about the bytea reference off-hand...

I was pondering "jsons", which meets the short property just fine; the
trailing "s" would stand for "strict", which is the user-visible semantic
that this type exhibits, rather than some less-visible binary attribute
which most users would not really care about.  I dislike the implication
of plural-ness that the "s" imparts, though.

The implication of second doesn't seem that important, since both types
provide useful semantics.

I can imagine that the short aspect will lead people to accidentally type
json where they mean to use jsonb, and having just a single extra
character will increase the likelihood they will not notice.  Knowing
about and having used json_strict previously, it will be more probable
that such users will noticeably feel something is missing if they drop
the whole _strict suffix.

So, I'll toss out json_strict for my bikeshed contribution.

David J.
 



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/additional-json-functionality-tp5777975p5778770.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] information schema parameter_default implementation

2013-11-17 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
> [ 0001-Implement-information_schema.parameters.parameter_de.patch ]

I'm a bit confused as to where this column is coming from?  There's
no such thing in SQL:2008 as far as I can see.  If it's coming from
some not-yet-ratified draft, maybe we should wait for ratification.
It's impossible for a bystander to tell if this implementation conforms
to what the spec is expecting.

BTW, although SQL:2008 lacks this column in the parameters view, there
are about six columns it has that we don't: see the from_sql_xxx and
to_sql_xxx columns.  Surely we should put those in (at least as dummy
columns) before trying to claim adherence to some even-newer spec draft.

As far as the code goes, I have no particular objections, modulo the
question about whether this patch is implementing spec-compatible
behavior.  A small stylistic idea is that maybe the computation of
nth_inputarg should be moved down nearer where it's used.  Really
that's just part of the calculation of nth_default, and it wouldn't
be unreasonable to stick it under the comment explaining why we're
doing that calculation like that.

regards, tom lane




Re: [HACKERS] additional json functionality

2013-11-17 Thread Dimitri Fontaine
David E. Wheeler da...@justatheory.com writes:
 You know that both types support scalar values right? 'a'::JSON works now,
 and 'a'::hstore works with the WIP patch. For that reason I would not think
 that doc or obj would be good choices.

I'm wondering about just pushing hstore in core (even if technically
still an extension, install it by default, like we do for PLpgSQL), and
calling it a day.

If you need pre-9.4 JSON-is-text compatibility, use the json datatype,
if you want something with general index support, use hstore.

For bikeshedding purposes, what about calling it jstore, as in “we
actually know how to store your json documents”?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] additional json functionality

2013-11-17 Thread Hannu Krosing
On 11/17/2013 09:02 PM, David E. Wheeler wrote:
> On Nov 16, 2013, at 2:04 PM, Hannu Krosing ha...@2ndquadrant.com wrote:
>
>>> It’s still input and output as JSON, though.
>> Yes, because JavaScript Object Notation *is* a serialization format
>> (aka Notation) for converting JavaScript Objects to text format
>> and back :)
>>> I still like JSONB best.
>> To me it feels redundant, like binary->text->binary
>>
>> the binary representation of JSON is a JavaScript(-like) Object, not
>> "binary json"
>>
>> So my vote would be either jsobj or jsdoc (as document databases tend
>> to call the structured types "documents")
> You know that both types support scalar values right?
> 'a'::JSON works now,

Yeah, and I remember all the bikeshedding about how scalars should not
be supported as they are not really JSON by the standard ...

At that time I was also quite vocal about not painting ourselves into a
corner by not normalising json on input and thus generating a backwards
compatibility problem in case we would ever get proper json support.

> and 'a'::hstore works with the WIP patch. For that reason I would not think
> that doc or obj would be good choices.

This is like claiming that text should not be called "text" because you
can store a single character there as well.

I feel that both "doc" and "obj" convey the meaning that it is a
structured type meant for fast component lookup, as opposed to the
jsoN(otation) type, which is text.

Also jsdoc/jsobj would be a natural bridge to pgdoc/pgobj, which would
be similar to the new json but would allow any type supported by
PostgreSQL as a value.

(... and in several languages even scalars really are objects)

> I like JSONB because:
>
> 1. The B means binary

Binary has really little to do with the fact that we normalise on
input, which is the real significant feature of the new json type.

> 2. The B means second

Why not just json2? (you know, like varchar2 in a certain other database ;)

> 3. It's short

jsobj and jsdoc are exactly as short as jsonb.

> 4. See also BYTEA.

BYTEA is "byte array", so not really relevant.

(unless you try to rhyme a byte-a, json-b sequence ;) )

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] Assertions in PL/PgSQL

2013-11-17 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
> [ rebased patch for RAISE WHEN ]

I have to say I do not see the point of this.  It does nothing you
can't do already with IF condition THEN RAISE ...  And frankly
the RAISE statement has got too darn many options already.  We don't
need yet more cruft on it that we'll have to maintain forevermore.

If this were improving standards compliance somehow, I'd be okay
with it; but what other implementation has got this?

regards, tom lane




Re: [HACKERS] additional json functionality

2013-11-17 Thread Hannu Krosing
On 11/17/2013 10:51 PM, Dimitri Fontaine wrote:
> David E. Wheeler da...@justatheory.com writes:
>> You know that both types support scalar values right? 'a'::JSON works now,
>> and 'a'::hstore works with the WIP patch. For that reason I would not think
>> that doc or obj would be good choices.
> I'm wondering about just pushing hstore in core (even if technically
> still an extension, install it by default, like we do for PLpgSQL), and
> calling it a day.
>
> If you need pre-9.4 JSON-is-text compatibility, use the json datatype;
> if you want something with general index support, use hstore.

+1 for getting hstore in as well.

I think hstore needs to keep its text format compatible with the older
hstore (in this discussion let's call this text format hson, short for
HStore Object Notation, for added confusion :)

> For bikeshedding purposes, what about calling it jstore,

+1 for jstore as well. I am happy with jstore, jsdoc, or jsobj.

jstore/jsobj/jsdoc really is *not* JSON, but a bona-fide freeform
structured datatype that happens to have JSON as a convenient
I/O format.

You may want to use jstore even if you have never needed
JSON as a serialisation/transport format before.

I do not like jsonB (sounds too much like json2, i.e. like we were
trying to cover up a design accident), nor json_strict (as this is not
really strict, since it does accept scalars, not just Arrays/Lists
and Objects/Dictionaries, as per the JSON standard).

> as in “we actually know how to store your json documents”?
>
> Regards,

Cheers

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] additional json functionality

2013-11-17 Thread David E. Wheeler
On Nov 17, 2013, at 1:51 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

> I'm wondering about just pushing hstore in core (even if technically
> still an extension, install it by default, like we do for PLpgSQL), and
> calling it a day.

Its syntax is different from JSON's, so one would need to convert to and
from JSON all the time to parse and serialize. PITA.

> For bikeshedding purposes, what about calling it jstore, as in “we
> actually know how to store your json documents”?

-1. Sounds like a Java storage API.

David

David



Re: [HACKERS] additional json functionality

2013-11-17 Thread Dimitri Fontaine
David E. Wheeler da...@justatheory.com writes:
> On Nov 17, 2013, at 1:51 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
>> I'm wondering about just pushing hstore in core (even if technically
>> still an extension, install it by default, like we do for PLpgSQL), and
>> calling it a day.
>
> Its syntax is different from JSON's, so one would need to convert to
> and from JSON all the time to parse and serialize. PITA.

Oh, I misremembered that: I thought it would take JSON as input as-is
and could be made to output JSON. And IIRC the community input at
pgconf.eu was to just always output json text and get rid of the
formatting GUCs.

Now, if it turns out that the new hstore is not dealing with json input
and output, we could have json, jstore, and hstore.

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] additional json functionality

2013-11-17 Thread David E. Wheeler
On Nov 17, 2013, at 2:26 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

>> Its syntax is different from JSON's, so one would need to convert to
>> and from JSON all the time to parse and serialize. PITA.
>
> Oh, I misremembered that: I thought it would take JSON as input as-is
> and could be made to output JSON. And IIRC the community input at
> pgconf.eu was to just always output json text and get rid of the
> formatting GUCs.

Yeah, but for backward compatibility it has to use => instead of : to
separate keys from values, and cannot use braces for a root-level
object. :-(

> Now, if it turns out that the new hstore is not dealing with json input
> and output, we could have json, jstore, and hstore.

That's where this is headed, yes.

David





Re: [HACKERS] additional json functionality

2013-11-17 Thread Andrew Dunstan


On 11/17/2013 04:51 PM, Dimitri Fontaine wrote:
> David E. Wheeler da...@justatheory.com writes:
>> You know that both types support scalar values right? 'a'::JSON works now,
>> and 'a'::hstore works with the WIP patch. For that reason I would not think
>> that doc or obj would be good choices.
>
> I'm wondering about just pushing hstore in core (even if technically
> still an extension, install it by default, like we do for PLpgSQL), and
> calling it a day.

That would be one of the silliest and most short-sighted decisions we
have made in many years, IMNSHO. The demand for strong JSON support is
enormous. I don't think I have ever received as many positive comments
on any other feature I have worked on in the last 9 years. What these
people want is not something jsonish; they want json, pure and simple.
And they want it fast and featured and efficient.

Much as I love hstore, it isn't json, and so it won't satisfy that demand.


cheers

andrew




Re: [HACKERS] additional json functionality

2013-11-17 Thread Dimitri Fontaine
Andrew Dunstan and...@dunslane.net writes:
> That would be one of the silliest and most short-sighted decisions we have
> made in many years, IMNSHO. The demand for strong JSON support is enormous.

One of the silliest and most short-sighted decisions we made recently
might have been to actually ship that json variant in 9.2, after all.

The most popular PostgreSQL commands in 9.4 are going to be:

  $ sudo apt-get install postgresql-contrib-9.4

  # create extension jstore;
  # alter table foo alter column documents type jstore;
  # create index on foo using gist(documents);

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] additional json functionality

2013-11-17 Thread Gavin Flower

On 18/11/13 09:02, David E. Wheeler wrote:

> On Nov 16, 2013, at 2:04 PM, Hannu Krosing ha...@2ndquadrant.com wrote:
>
>>> It’s still input and output as JSON, though.
>> Yes, because JavaScript Object Notation *is* a serialization format
>> (aka Notation) for converting JavaScript Objects to text format
>> and back :)
>>> I still like JSONB best.
>> To me it feels redundant, like binary->text->binary
>>
>> the binary representation of JSON is a JavaScript(-like) Object, not
>> "binary json"
>>
>> So my vote would be either jsobj or jsdoc (as document databases tend
>> to call the structured types "documents")
> You know that both types support scalar values right? 'a'::JSON works
> now, and 'a'::hstore works with the WIP patch. For that reason I would
> not think that doc or obj would be good choices.
>
> I like JSONB because:
>
> 1. The B means binary
> 2. The B means second
> 3. It's short
> 4. See also BYTEA.
>
> Best,
>
> David

Whatever, I think the first 4 characters have to be 'JSON' - for easy
identification.



Cheers,
Gavin




Re: [HACKERS] additional json functionality

2013-11-17 Thread Gavin Flower

On 18/11/13 09:45, David Johnston wrote:

> David E. Wheeler wrote:
>> I like JSONB because:
>>
>> 1. The B means binary
>> 2. The B means second
>> 3. It's short
>> 4. See also BYTEA.
>
> json_strict:
>
> Not sure about the bytea reference off-hand...
>
> I was pondering "jsons", which meets the short property just fine; the
> trailing "s" would stand for "strict", which is the user-visible semantic
> that this type exhibits, rather than some less-visible binary attribute
> which most users would not really care about.  I dislike the implication
> of plural-ness that the "s" imparts, though.
>
> The implication of second doesn't seem that important, since both types
> provide useful semantics.
>
> I can imagine that the short aspect will lead people to accidentally
> type json where they mean to use jsonb, and having just a single extra
> character will increase the likelihood they will not notice.  Knowing
> about and having used json_strict previously, it will be more probable
> that such users will noticeably feel something is missing if they drop
> the whole _strict suffix.
>
> So, I'll toss out json_strict for my bikeshed contribution.
>
> David J.



+1




Re: [HACKERS] additional json functionality

2013-11-17 Thread Andrew Dunstan


On 11/17/2013 05:44 PM, Dimitri Fontaine wrote:

Andrew Dunstan and...@dunslane.net writes:

That would be one of the silliest and most short-sighted decisions we have
made in many years, IMNSHO. The demand for strong JSON support is enormous.

One of the silliest and most short-sighted decisions we made recently
might have been to actually ship that json variant in 9.2, after all.



The fact is that we had been going round and round on Json for a while, 
and Robert Haas rightly made a move to break the Gordian knot. We would 
not have done ourselves any service by not accepting it.


Hindsight is always 20-20, but even with what we have today people get 
excited, and more people move to use Postgres every day because we have 
that support.




The most popular PostgreSQL commands in 9.4 are going to be:

   $ sudo apt-get install postgresql-contrib-9.4

   # create extension jstore;
   # alter table foo alter column documents type jstore;
   # create index on foo using gist(documents);




Umm, not if I have anything to do with it.


cheers

andrew





Re: [HACKERS] additional json functionality

2013-11-17 Thread Josh Berkus

 Now, if it turns out that the new hstore is not dealing with json input
 and output, we could have json, jstore and hstore.

Jstore isn't the worst name suggestion I've heard on this thread.  The
reason I prefer JSONB though, is that a new user looking for a place to
put JSON data will clearly realize that JSON and JSONB are alternatives
and related in some way.  They won't necessarily expect that jstore
has anything to do with JSON, especially when there is another type
called JSON.  Quite a few people are liable to think it's something to
do with Java.

Besides, we might get sued by these people: http://www.jstor.org/  ;-)

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-17 Thread David E. Wheeler

On Nov 17, 2013, at 5:49 PM, Josh Berkus j...@agliodbs.com wrote:

 Jstore isn't the worst name suggestion I've heard on this thread.  The
 reason I prefer JSONB though, is that a new user looking for a place to
 put JSON data will clearly realize that JSON and JSONB are alternatives
 and related in some way.  They won't necessarily expect that jstore
 has anything to do with JSON, especially when there is another type
 called JSON.  Quite a few people are liable to think it's something to
 do with Java.
 
 Besides, we might get sued by these people: http://www.jstor.org/  ;-)

Okay, how about JDATE? ;-P

David





Re: [HACKERS] Optimize kernel readahead using buffer access strategy

2013-11-17 Thread KONDO Mitsumasa

(2013/11/15 13:48), Claudio Freire wrote:

On Thu, Nov 14, 2013 at 11:13 PM, KONDO Mitsumasa

I use CentOS 6.4, whose kernel version is 2.6.32-358.23.2.el6.x86_64, in this
test.


That's close to the kernel version I was using, so you should see the
same effect.
OK. You proposed a patch to maximize readahead; I think it can benefit
performance, and that part of your argument is certainly true.



Your patch switches to maximum readahead when a SQL query is executed as an index range
scan. Is that right?


Ehm... sorta.


I think that your patch assumes that pages are ordered by
index-data.


No. It just knows which pages will be needed, and fadvises them. No
guessing involved, except the guess that the scan will not be aborted.
There's a heuristic to stop limited scans from attempting to fadvise,
and that's that prefetch strategy is applied only from the Nth+ page
walk.

We may completely optimize kernel readahead in PostgreSQL in the future,
but that is very difficult and will take a long time to achieve from the start.
So I propose a GUC switch that users can set in their transactions (I
will create this patch in this CF). If someone wants to turn readahead off to use
the file cache more efficiently in his transactions, he can run SET readahead = off.
PostgreSQL is open source, and I think it will become clear which cases it is
effective for once many people use it.



It improves index-only scans the most, but I also attempted to handle
heap prefetches. That's where the kernel started conspiring against
me, because I used many naturally-clustered indexes, and THERE
performance was adversely affected because of that kernel bug.
I am also creating a Gaussian-distributed pgbench now and will submit it to this CF.
It can at least partially clarify which situations are effective.



You may want to try your patch with more
real workloads, and maybe you'll confirm what I found out last time I
messed with posix_fadvise. If my experience is still relevant, those
patterns will have suffered a severe performance penalty with this
patch, because it will disable kernel read-ahead on sequential index
access. It may still work for sequential heap scans, because the
access strategy will tell the kernel to do read-ahead, but many other
access methods will suffer.


The decisive difference from your patch is that my patch uses the buffer hint
control architecture, so it can control readahead more smartly in some cases.


Indeed, but it's not enough. See my above comment about naturally
clustered indexes. The planner expects that, and plans accordingly. It
will notice correlation between a PK and physical location, and will
treat an index scan over PK to be almost sequential. With your patch,
that assumption will be broken I believe.

~

However, my patch is still a work in progress and needs more improvement. I am going
to add a method of controlling readahead via a GUC, so users can freely select
the readahead behavior in their transactions.


Rather, I'd try to avoid fadvising consecutive or almost-consecutive
blocks. Detecting that is hard at the block level, but maybe you can
tie that detection into the planner, and specify a sequential strategy
when the planner expects index-heap correlation?
I think we had better develop these patches step by step, because it is
difficult for readahead optimization to be complete from the first patch.
We need the framework in these patches first.



Try OLAP-style queries.


I have the DBT-3 (TPC-H) benchmark tools. If you don't like TPC-H, could you tell
me about good OLAP benchmark tools?


I don't really know. Skimming the specs, I'm not sure if those queries
generate large index range queries. You could try, maybe with
autoexplain?

OK, I will. And I will try simple large index range queries with the EXPLAIN command.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center




Re: [HACKERS] additional json functionality

2013-11-17 Thread Gavin Flower

On 18/11/13 14:51, David E. Wheeler wrote:

On Nov 17, 2013, at 5:49 PM, Josh Berkus j...@agliodbs.com wrote:


Jstore isn't the worst name suggestion I've heard on this thread.  The
reason I prefer JSONB though, is that a new user looking for a place to
put JSON data will clearly realize that JSON and JSONB are alternatives
and related in some way.  They won't necessarily expect that jstore
has anything to do with JSON, especially when there is another type
called JSON.  Quite a few people are liable to think it's something to
do with Java.

Besides, we might get sued by these people: http://www.jstor.org/  ;-)

Okay, how about JDATE? ;-P

David




I don't want a Japanese Date - it would cause complications with my wife!  :-)


Cheers,
Gavin




[HACKERS] Parse more than bind and execute when connect to database by jdbc

2013-11-17 Thread wangshuo

Hi hackers,

I used JDBC to connect to PostgreSQL, and I found that parse is called
nearly three times as often as bind and execute. Is this normal? What factors
could cause this result?


The OS is CentOS 5. The PG is 9.0.9. The JDBC is 9.2.

 Yours,
 Wang Shuo
 HighGo Software Co.,Ltd.
 November 18, 2013




Re: [HACKERS] Optimize kernel readahead using buffer access strategy

2013-11-17 Thread Claudio Freire
On Sun, Nov 17, 2013 at 11:02 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
 However, my patch is still a work in progress and needs more improvement.
 I am going to add a method of controlling readahead via a GUC, so users can
 freely select the readahead behavior in their transactions.


 Rather, I'd try to avoid fadvising consecutive or almost-consecutive
 blocks. Detecting that is hard at the block level, but maybe you can
 tie that detection into the planner, and specify a sequential strategy
 when the planner expects index-heap correlation?

 I think we had better develop these patches step by step, because it is
 difficult for readahead optimization to be complete from the first patch.
 We need the framework in these patches first.

Well, the problem is that without those smarts, I don't think this patch
can be enabled by default. It will considerably hurt common use cases
for postgres.

But I guess we'll have a better idea about that when we see how much
of a performance impact it makes when you run those tests, so no need
to guess in the dark.




Re: [HACKERS] information schema parameter_default implementation

2013-11-17 Thread Peter Eisentraut
On Sun, 2013-11-17 at 16:38 -0500, Tom Lane wrote:
 I'm a bit confused as to where this column is coming from?  There's
 no such thing in SQL:2008 as far as I can see.

SQL:2011





Re: [HACKERS] Parse more than bind and execute when connect to database by jdbc

2013-11-17 Thread Amit Kapila
On Mon, Nov 18, 2013 at 7:32 AM,  wangs...@highgo.com.cn wrote:
 Hi hackers,

  I used JDBC to connect to PostgreSQL, and I found that parse is called
  nearly three times as often as bind and execute. Is this normal?
Actually, bind and execute should be more frequent than parse. Have you
confirmed this by enabling log/debug messages on the server, or are you
referring to the JDBC log?

 What factors may cause this result?
I am really not sure about JDBC, but from a libpq or general protocol
standpoint, it can happen only if someone calls parse multiple times but then
executes only a few of the statements. I think you might need to narrow
down the problem by reducing the number of queries in your JDBC
application. If possible, start with only one or two queries and then
see how many times parse, bind, and execute get called.


 The OS is CentOS 5. The PG is 9.0.9. The JDBC is 9.2.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] WITHIN GROUP patch

2013-11-17 Thread Peter Eisentraut
On Fri, 2013-11-15 at 00:05 +0530, Atri Sharma wrote:
 Please find the latest version of the patch. This version fixes the
 issues pointed out by the reviewer and the divide by zero bug in
 percent_rank function. This version also adds a regression test for
 the divide by zero case in percent_rank.

This patch doesn't apply.





Re: [HACKERS] Optimize kernel readahead using buffer access strategy

2013-11-17 Thread KONDO Mitsumasa

(2013/11/18 11:25), Claudio Freire wrote:

On Sun, Nov 17, 2013 at 11:02 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:

However, my patch is still a work in progress and needs more improvement.
I am going to add a method of controlling readahead via a GUC, so users can
freely select the readahead behavior in their transactions.



Rather, I'd try to avoid fadvising consecutive or almost-consecutive
blocks. Detecting that is hard at the block level, but maybe you can
tie that detection into the planner, and specify a sequential strategy
when the planner expects index-heap correlation?


I think we had better develop these patches step by step, because it is
difficult for readahead optimization to be complete from the first patch.
We need the framework in these patches first.


Well, the problem is that without those smarts, I don't think this patch
can be enabled by default. It will considerably hurt common use cases
for postgres.

Yes. Like you, I have been thinking that the default setting should be off
(use normal readahead, as before). The next version of my patch will do this.


But I guess we'll have a better idea about that when we see how much
of a performance impact it makes when you run those tests, so no need
to guess in the dark.

Yes, sure.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center




Re: [HACKERS] additional json functionality

2013-11-17 Thread Andrew Dunstan


On 11/17/2013 08:49 PM, Josh Berkus wrote:

Now, if it turns out that the new hstore is not dealing with json input
and output, we could have json, jstore and hstore.

Jstore isn't the worst name suggestion I've heard on this thread.  The
reason I prefer JSONB though, is that a new user looking for a place to
put JSON data will clearly realize that JSON and JSONB are alternatives
and related in some way.  They won't necessarily expect that jstore
has anything to do with JSON, especially when there is another type
called JSON.  Quite a few people are liable to think it's something to
do with Java.

Besides, we might get sued by these people: http://www.jstor.org/  ;-)



I don't think any name that doesn't begin with json is acceptable. I 
could live with jsonb. It has the merit of brevity, but maybe it's a 
tad too close to json to be the right answer.


cheers

andrew






Re: [HACKERS] additional json functionality

2013-11-17 Thread David E. Wheeler
On Nov 17, 2013, at 8:19 PM, Andrew Dunstan and...@dunslane.net wrote:

 I don't think any name that doesn't begin with json is acceptable. I could 
 live with jsonb. It has the merit of brevity, but maybe it's a tad too 
 close to json to be the right answer.

JSONFTW.

David





Re: [HACKERS] appendPQExpBufferVA vs appendStringInfoVA

2013-11-17 Thread David Rowley
On Mon, Nov 18, 2013 at 1:01 AM, Marko Kreen mark...@gmail.com wrote:

 On Thu, Nov 14, 2013 at 09:33:59PM +1300, David Rowley wrote:
  On Sun, Nov 3, 2013 at 3:18 AM, David Rowley dgrowle...@gmail.com
 wrote:
   I'm low on ideas on how to improve things much around here for now, but
   for what it's worth, I did create a patch which changes unnecessary
 calls
   to appendPQExpBuffer() into calls to appendPQExpBufferStr() similar to
 the
   recent one for appendStringInfo and appendStringInfoString.
  
  Attached is a re-based version of this.

 It does not apply anymore, could you resend it?


I've attached a re-based version.


 I am a bit suspicious of the performance impact of this patch, but think
 that it's still worthwhile, as it removes the code style where a single
 string argument is given to a printf-style function without %s.


This thread probably did not explain very well the point of this patch.
All this kicked off from an earlier patch which added support for alignment
in the log_line_prefix GUC. After some benchmarks were done on the proposed
patch for that, it was discovered that replacing appendStringInfoString with
appendStringInfo gave a big enough slowdown to matter in high-volume
logging scenarios. That patch was revised so that appendStringInfo() was
only used when really needed, and performance increased again.

I then ran a few benchmarks seen here:
http://www.postgresql.org/message-id/caaphdvp2ulkpuhjnckonbgg2vxpvxolopzhrgxbs-m0r0v4...@mail.gmail.com

To compare appendStringInfo(si, "%s", str); with appendStringInfoString(si,
str); and appendStringInfo(si, str);

The conclusion from those benchmarks was that appendStringInfoString() was
the best function to use when no formatting was required, so I submitted a
patch which replaced appendStringInfo() with appendStringInfoString() where
possible, and that was accepted.

appendPQExpBuffer() and appendPQExpBufferStr() are the frontend versions of
appendStringInfo(), so I spent an hour or so replacing these calls too, as it
should show a similar speedup. In this case the performance is likely less
critical; my thinking was more along the lines that it increases
performance a little bit with zero increase in code complexity.

The findings in the benchmarks in the link above also showed that we might
want to look into turning appendStringInfoString into a macro
around appendBinaryStringInfo() to allow us to skip the strlen() call for
string constants... It's unclear at the moment if this would be a good idea
or much of an improvement, so it was left for something to think about for
the future.
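The strlen()-skipping idea relies on sizeof applied to a string literal being a compile-time constant. A toy illustration follows, using stand-in types rather than the real StringInfo API:

```c
#include <string.h>

/* Toy stand-ins for StringInfo / appendBinaryStringInfo, just to show the
 * compile-time-length idea; the real PostgreSQL API differs. */
typedef struct
{
    char data[256];
    int  len;
} MiniStringInfo;

static void
appendBinaryMini(MiniStringInfo *si, const char *s, int n)
{
    memcpy(si->data + si->len, s, n);
    si->len += n;
    si->data[si->len] = '\0';
}

/* For a string literal, sizeof(s) - 1 is its length at compile time, so no
 * strlen() call is emitted at all. Only safe for literals, not char *! */
#define appendMiniLiteral(si, s) appendBinaryMini((si), (s), (int) sizeof(s) - 1)
```

Passing a char * variable to such a macro would be a bug (sizeof would yield the pointer size), which is why it could only replace calls whose argument is a literal.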


Regards

David Rowley

--
 marko




appendPQExpBufferStr_v0.3.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Parse more than bind and execute when connect to database by jdbc

2013-11-17 Thread wangshuo

On 2013-11-18 11:39, Amit Kapila wrote:

On Mon, Nov 18, 2013 at 7:32 AM,  wangs...@highgo.com.cn wrote:

Hi hackers,

I used JDBC to connect to PostgreSQL, and I found that parse is called
nearly three times as often as bind and execute. Is this normal?

Actually bind and execute should be more than parse. Have you
confirmed that by enabling Log/Debug messages on server or you are
referring to JDBC log?



Thanks for your reply.

This is the Log/Debug messages.
Some configuration options are as follows:

 log_min_duration_statement = 0'
 log_statement = 'all'
 log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u [%x]'
 logging_collector = on



What factors may cause this result?

I am really not sure about JDBC, but from a libpq or general protocol
standpoint, it can happen only if someone calls parse multiple times but then
executes only a few of the statements.


I do agree with you. But I don't know what kind of query would call parse
multiple times.

Could you give me more information or an example of this case?


 Wang Shuo
 HighGo Software Co.,Ltd.
 November 18, 2013




Re: [HACKERS] Assertions in PL/PgSQL

2013-11-17 Thread Pavel Stehule
2013/11/17 Tom Lane t...@sss.pgh.pa.us

 Pavel Stehule pavel.steh...@gmail.com writes:
  [ rebased patch for RAISE WHEN ]

 I have to say I do not see the point of this.  It does nothing you
 can't do already with IF condition THEN RAISE   And frankly
 the RAISE statement has got too darn many options already.  We don't
 need yet more cruft on it that we'll have to maintain forevermore.

 If this were improving standards compliance somehow, I'd be okay
 with it; but what other implementation has got this?


The RAISE statement was never ANSI compliant, and it now has only a thin
similarity to Oracle's PL/SQL RAISE statement - it is significantly enhanced
relative to the original Ada.

Usually I am not happy when PL/pgSQL moves far from the original Ada, but I
think this use case is very practical: the current usual pattern is less
readable than a conditional RAISE. It is similar to the CONTINUE and EXIT
statements. We actually need some functionality that allows writing
assertions simply (without uglifying the source code). RAISE WHEN is good
for this purpose.

Regards

Pavel






 regards, tom lane



Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-11-17 Thread Haribabu kommi
On 17 November 2013 00:55 Fujii Masao wrote:
 On Sat, Nov 16, 2013 at 2:27 PM, Haribabu kommi
 haribabu.ko...@huawei.com wrote:
  on 15 November 2013 17:26 Magnus Hagander wrote:
 
 On Fri, Nov 15, 2013 at 12:10 PM, Haribabu kommi
 haribabu.ko...@huawei.com wrote:
 
 On 14 November 2013 23:59 Fujii Masao wrote:
  On Thu, Nov 14, 2013 at 9:08 PM, Haribabu kommi
  haribabu.ko...@huawei.com wrote:
   Please find attached the patch, for adding a new option for
   pg_basebackup, to specify a different directory for pg_xlog.
 
  Sounds good! Here are the review comments:
  Don't we need to prevent users from specifying the same directory
  in both --pgdata and --xlogdir?
 
  I feel no need to prevent this; even if the user specifies --pgdata and
  --xlogdir as the same directory, all the transaction log files will be
  created in the base directory instead of the pg_xlog directory.
 
 
 
  Given how easy it would be to prevent that, I think we should. It
  would be an easy misunderstanding to specify that when you actually
  want it in wherever/pg_xlog. Specifying that would be redundant in
  the first place, but people can do that; it would also be very easy
  to do it by mistake, and you'd end up with something that's really
  bad, including a recursive symlink.
 
 
 
   Presently, with initdb too, the user can specify the data and xlog
   directories as the same.

   To prevent the data directory and xlog directory from being the same,
   there is a way on Windows (the _fullpath API) to get an absolute path
   from a relative path, but I did not find any such possibility on Linux.

   I didn't find any other way to check it; if anyone has an idea
   regarding this, please let me know.
 
 What about make_absolute_path() in miscinit.c?

The make_absolute_path() function gets the current working directory and
appends the relative path to the CWD; this does not give a properly
canonical absolute path.

I have added a new function verify_data_and_xlog_dir_same() which changes the
current working directory to the data directory and gets the CWD, and does the
same for the transaction log directory. It then compares the data and xlog
directories and throws an error if they are the same. Please check it once.

Is there any other way to identify that the data and xlog directories are
pointing to the same location, instead of comparing both absolute paths?

Updated patch attached in the mail.

Regards,
Hari babu.


UserSpecifiedxlogDir_v3.patch
Description: UserSpecifiedxlogDir_v3.patch



Re: [HACKERS] [PATCH] pg_basebackup: progress report max once per second

2013-11-17 Thread Sawada Masahiko
On Thu, Nov 14, 2013 at 6:27 PM, Mika Eloranta m...@ohmu.fi wrote:
 On 13 Nov 2013, at 20:51, Mika Eloranta m...@ohmu.fi wrote:

 Prevent excessive progress reporting that can grow to gigabytes
 of output with large databases.

 Same patch as an attachment.

 --
 Mika Eloranta
 Ohmu Ltd.  http://www.ohmu.fi/


I got an error with the following scenario.

$ initdb -D data -E UTF8 --no-locale
/* setting the replication parameters */
$ pg_basebackup -D 2data
Floating point exception
LOG:  could not send data to client: Broken pipe
ERROR:  base backup could not send data, aborting backup
FATAL:  connection to client lost


Regards,

---
Sawada Masahiko




Re: [HACKERS] Extra functionality to createuser

2013-11-17 Thread Amit Kapila
On Sat, Nov 16, 2013 at 4:57 AM, Christopher Browne cbbro...@gmail.com wrote:
 On Fri, Nov 15, 2013 at 3:14 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 11/14/13, 4:35 PM, Christopher Browne wrote: On Thu, Nov 14, 2013 at
 5:41 AM, Sameer Thakur samthaku...@gmail.com wrote:
 So i think -g option is failing

 Right you are.


This patch adds a useful option '-g' to the createuser utility, which will
allow the user to make new roles members of existing roles; the same
is already possible with the CREATE ROLE/USER syntax.

Few comments:

1.
+  <term><option>-g</option></term>
+  <term><option>--roles</option></term>

All other options which require an argument are of the form:
 <term><option>-c <replaceable class="parameter">number</replaceable></option></term>
  <term><option>--connection-limit=<replaceable class="parameter">number</replaceable></option></term>

So I think it is better to have this new option, which requires an argument,
in a similar form.

2.
+Indicates roles to associate with this role.

I think the word "associate" is not very clear; wouldn't it be better to
mention that this new role will be a member of the specified roles?
For example:
Indicates roles to which the new role will be immediately added as a new member.

3.
+ case 'g':
+ roles = pg_strdup(optarg);
+ break;

If we see most of other options in case handling are ordered as per
their order in long_options array. For example

static struct option long_options[] = {
    {"host", required_argument, NULL, 'h'},
    {"port", required_argument, NULL, 'p'},
    ...

Now the order of handling for both is the same in the switch case as in the
getopt_long() function call. I think this makes the code easier to understand
and modify. However, there is no functional issue here, so you can keep the
code as in your existing patch as well; this is just a suggestion.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Wildcard usage enhancements in .pgpass

2013-11-17 Thread Alexey Klyukin
Hi Martijn,

On Sun, Nov 17, 2013 at 7:56 PM, Martijn van Oosterhout
klep...@svana.orgwrote:

 On Sat, Nov 16, 2013 at 09:26:33PM +0100, Alexey Klyukin wrote:
  Hi,
 
  Attached is the patch that improves usage of '*' wildcard in .pgpass,
  particularly in the host part. The use case is below.

 Looks interesting, though I wonder if you could use fnmatch(3) here. Or
 woud that match more than you expect?  For example, it would allow
 'foo*bar' to match 'foo.bar' which your code doesn't.


fnmatch(3) looks like a good deal, and I'd certainly consider it if we go
down the road of matching more general patterns, although for simpler use
cases it is overkill, since it forces us to do an extra pass over the string
to be matched and introduces some of the performance penalties of a
regexp-style matching engine.
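For reference, fnmatch(3) glob semantics behave as described above: with no flags, '*' crosses any characters, including dots, so 'foo*bar' matches 'foo.bar'. A small demonstration (the wrapper name is invented):

```c
#include <fnmatch.h>

/* With flags = 0, '*' in fnmatch(3) matches any run of characters,
 * including '.', so 'foo*bar' matches 'foo.bar' as noted in the thread.
 * FNM_PATHNAME and friends would restrict this for path-like strings. */
static int
pgpass_host_matches(const char *pattern, const char *host)
{
    return fnmatch(pattern, host, 0) == 0;
}
```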

-- 
Regards,
Alexey Klyukin


Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-11-17 Thread Abhijit Menon-Sen
At 2013-11-15 15:17:32 +0200, hlinnakan...@vmware.com wrote:

 I spent some time whacking this around, new patch version attached.

Thanks.

 But I'm not wedded to the idea if someone objects; a log message might
 also be reasonable: LOG: huge TLB pages are not supported on this
 platform, but huge_tlb_pages was 'on'

Put that way, I have to wonder if the right thing to do is just to have
a try_huge_pages=on|off setting, and log a warning if the attempt did
not succeed. It would be easier to document, and I don't think there's
much point in making it an error if the allocation fails.

-- Abhijit

P.S. I'd be happy to do the followup work for this patch (updating
documentation, etc.), but it'll have to wait until I recover from
this !#$@! stomach bug.

