On Nov 9, 2010, at 2:42 PM, Cédric Villemain wrote:
Are you thinking of a contrib module 'pgtap' that we can use to
accomplish the optional regression tests?
Oh, if the project wants it in contrib, sure. Otherwise I'd probably just have
the test stuff include it somehow.
David
--
Sent
On Nov 8, 2010, at 10:36 AM, Charles Pritchard wrote:
Because of a lack of interested implementers, the spec does not put forward
a standard dialect/subset. It simply uses Sqlite
As de-facto standards go, you could do *much* worse.
David
On Nov 7, 2010, at 5:24 AM, Roberto Mello wrote:
Yes, but I am wondering whether you should just stick to what would
come out of a normal explain, for consistency sake. Maybe provide
another function, or parameter that would cast the results to
intervals?
I think it's more convenient to have
On Nov 8, 2010, at 3:12 PM, David E. Wheeler wrote:
It could output a table like the above. FWIW, The function I've written works
like this:
SELECT plan('SELECT * FROM bar');
Sorry, that's
SELECT * FROM plan('SELECT * FROM bar');
Best,
David
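[A hypothetical sketch of the kind of set-returning plan() wrapper under discussion, which is why it must be called in the FROM clause; it assumes a modern server with EXPLAIN (FORMAT JSON) and the json operators, neither of which the function in the thread necessarily uses.]

```sql
-- Sketch only: a set-returning wrapper over EXPLAIN.
-- Assumes PostgreSQL 9.2+ for EXPLAIN (FORMAT JSON) and json operators.
CREATE OR REPLACE FUNCTION plan(query text)
RETURNS TABLE(node_type text, startup_cost numeric, total_cost numeric)
LANGUAGE plpgsql AS $$
DECLARE
    j json;
BEGIN
    -- Capture the single JSON-document row EXPLAIN emits.
    EXECUTE 'EXPLAIN (FORMAT JSON) ' || query INTO j;
    RETURN QUERY SELECT
        j -> 0 -> 'Plan' ->> 'Node Type',
        (j -> 0 -> 'Plan' ->> 'Startup Cost')::numeric,
        (j -> 0 -> 'Plan' ->> 'Total Cost')::numeric;
END;
$$;

-- Set-returning functions go in FROM, hence:
SELECT * FROM plan('SELECT * FROM bar');
```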
On Nov 5, 2010, at 1:42 PM, David E. Wheeler wrote:
http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/commands/explain.c;h=f494ec98e510c23120e072bd5ee8821ea12738a4;hb=HEAD#l617
Ah, great, thanks.
So based on this, I've come up with:
Node Type TEXT
On Nov 6, 2010, at 11:44 AM, David E. Wheeler wrote:
On Nov 5, 2010, at 1:42 PM, David E. Wheeler wrote:
http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/commands/explain.c;h=f494ec98e510c23120e072bd5ee8821ea12738a4;hb=HEAD#l617
Ah, great, thanks.
So based
On Nov 5, 2010, at 12:36 PM, Alvaro Herrera wrote:
Hi,
A customer of ours has the need for temporary functions. The use case
is writing test cases for their databases: the idea being that their
code creates a temp function which then goes away automatically at
session end, just like a
Fellow Hackers,
I'm writing a function to turn an EXPLAIN plan into a table with columns. As
such, I need to have a complete list of the various bits of each plan node and
their types for the table. Here's what I've got so far:
Node Type TEXT,
Strategy TEXT,
On Nov 5, 2010, at 1:36 PM, Andrew Dunstan wrote:
Of course, there are containers too, which are not in your list at all. How
do you intend to represent the tree-ish structure in a flat table?
So far I see only two containers: Subplans and Sort Keys. The latter is
represented as an array. The
On Nov 5, 2010, at 1:38 PM, Dimitri Fontaine wrote:
It seems that you need to read through ExplainNode in
src/backend/commands/explain.c:
http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/commands/explain.c;h=f494ec98e510c23120e072bd5ee8821ea12738a4;hb=HEAD#l617
Ah,
On Nov 5, 2010, at 1:42 PM, David E. Wheeler wrote:
On Nov 5, 2010, at 1:38 PM, Dimitri Fontaine wrote:
It seems that you need to read through ExplainNode in
src/backend/commands/explain.c:
http://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/commands/explain.c;h
On Nov 4, 2010, at 4:20 AM, Peter Eisentraut wrote:
On ons, 2010-11-03 at 14:15 -0700, David E. Wheeler wrote:
/me wants a global $dbh that mimics the DBI interface but just uses
SPI under the hood. Not volunteering, either…
Already exists: DBD::PgSPI. Probably needs lots of updating
On Nov 3, 2010, at 2:06 PM, Alex Hunsaker wrote:
try:
    plpy.execute("insert into foo values(1)")
except plpy.UniqueViolation, e:
    plpy.notice("Ooops, you got yourself a SQLSTATE %d", e.sqlstate)
Ouuu googly eyes.
[ now that eval { }, thanks to Tim Bunce, works with plperl it should
be
On Oct 27, 2010, at 9:08 PM, Andrew Dunstan wrote:
Well, it turns out that the hashref required exactly one more line to
achieve. We already have all the infrastructure on the composite handling
code, and all it requires it to enable it for the RECORDOID case.
I don't suppose that it would
On Oct 28, 2010, at 9:31 AM, Andrew Dunstan wrote:
Of course it's possible, but it's a different feature. As for just as easy,
no, it's much more work. I agree it should be done, though.
I bet we could raise some money to fund its development. How much work are we
talking about here?
Best,
On Oct 26, 2010, at 7:15 AM, Robert Haas wrote:
Notwithstanding the above, I don't think ELEMENT would be a very bad choice.
I still think we should just go for LABEL and be done with it. But
y'all can ignore me if you want...
+1
David
On Oct 25, 2010, at 10:08 AM, Tom Lane wrote:
I can see the point of that, but I don't find LABEL to be a particularly
great name for the elements of an enum type, and so I'm not in favor of
institutionalizing that name in the syntax. How about ADD VALUE?
From the fine manual:
The second
On Oct 25, 2010, at 4:12 PM, Tom Lane wrote:
However, that objection doesn't hold for plperl or pltcl (and likely
not plpython, though I don't know that language enough to be sure).
So it would be a reasonable feature request to teach those PLs to
accept record parameters. I think the fact
On Oct 21, 2010, at 12:33 AM, Dimitri Fontaine wrote:
I don't see what it buys us in this very context. The main thing here to
realize is that I wrote about no code to parse the control file. I don't
think the extension patch should depend on the JSON-in-core patch.
Now, once we have JSON
On Oct 21, 2010, at 8:12 AM, Dimitri Fontaine wrote:
That's a good idea, but my guess is that the implementation cost of
supporting the control format in your perl infrastructure is at least an
order of magnitude lower than the cost for me to support your current
JSON file format, so I lean
On Oct 21, 2010, at 2:17 PM, Tom Lane wrote:
The oversight here is that we don't use appendrel planning for
a top-level UNION ALL construct. That didn't use to matter,
because you always got the same stupid Append plan either way.
Now it seems like we ought to have some more intelligence for
example:
{
    "name": "pair",
    "abstract": "A key/value pair data type",
    "version": "0.1.0",
    "maintainer": "David E. Wheeler <da...@justatheory.com>",
    "license": "postgresql"
}
They can have a lot more information, too. Here's the one I actually shipped
with pair:
http://github.com/theory/kv-pair/blob
On Oct 20, 2010, at 9:58 PM, Alvaro Herrera wrote:
What's wrong with sticking to Makefile syntax? Are we intending to
build a JSON parser in GNU make perchance?
That metadata isn't *for* make, is it?
D
On Oct 19, 2010, at 12:17 PM, Robert Haas wrote:
I think we should take a few steps back and ask why we think that
binary encoding is the way to go. We store XML as text, for example,
and I can't remember any complaints about that on -bugs or
-performance, so why do we think JSON will be
On Oct 17, 2010, at 9:56 AM, Jeff Davis wrote:
3. Somehow deprecate floating point timestamps or make them unusable in
conjunction with Range Types. I'm not sure if there is demand to keep
them alive or not.
+1
David
On Oct 8, 2010, at 1:47 PM, Tom Lane wrote:
How so? In a typical application, there would not likely be very many
such rows --- we're talking about cases like documents containing zero
indexable words. In any case, the problem right now is that GIN has
significant functional limitations
On Sep 1, 2010, at 11:52 AM, Pavel Stehule wrote:
regression=# create or replace function array_agg_transfn_strict(internal,
anyelement) returns internal as 'array_agg_transfn' language internal
immutable;
CREATE FUNCTION
regression=# create aggregate array_agg_strict(anyelement) (stype =
On Sep 28, 2010, at 7:41 AM, Robert Haas wrote:
I don't have any opinion about whether the functionality proposed here
is worth the trouble, but I do have an opinion about that syntax: it's
an awful choice.
I agree, on both points.
It's nice to try to reduce the excess verbosity that is
On Sep 27, 2010, at 5:05 AM, Peter Eisentraut wrote:
Um, no.
In the meantime, I have arrived at the conclusion that doing this isn't
worth it because it will break all regression test output. We can fix
the stuff in our tree, but pg_regress is also used externally, and those
guys would
On Sep 23, 2010, at 1:02 PM, Robert Haas wrote:
On Thu, Sep 23, 2010 at 3:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Apparently somebody's confused between local and GMT time somewhere in
there.
Ouch. That rather sucks.
Obviously, all committers must now relocate to the UK.
Best,
David
On Sep 22, 2010, at 2:08 PM, David Fetter wrote:
It's not about naming platforms for exclusion. It's about requiring
functionalities for *in*clusion.
Passes all tests.
David
On Sep 21, 2010, at 6:20 PM, Tom Lane wrote:
While this isn't much worse than what I was used to with CVS, it's
definitely not better. I think that I could simplify transferring the
patch back to older branches if I could use git cherry-pick. However,
that only works on already-committed
On Sep 21, 2010, at 8:01 PM, Bruce Momjian wrote:
Then they'd all be patched and staged.
If I understand correctly, that 'git reset' will mark all branch changes
as staged but not committed, and then you can commit all branches at
once and push it. Is that right?
Right.
David
On Sep 21, 2010, at 8:19 PM, Tom Lane wrote:
You sure about the staged part?
Yes, I do it all the time (I make a lot of mistakes).
Offhand I think I like Andrew's recommendation of a shortlived branch
better. In essence your idea is using the tip of master itself as a
shortlived branch,
On Sep 14, 2010, at 7:32 PM, Itagaki Takahiro wrote:
Here is a patch for basic JSON support. It adds only those features:
* Add json data type, that is binary-compatible with text.
* Syntax checking on text to JSON conversion.
* json_pretty() -- print JSON tree with indentation.
We have
On Sep 15, 2010, at 5:23 AM, Simon Riggs wrote:
Fast, efficient, no extra code.
I love that sentence. Even if it has no verb.
Best,
David
On Sep 14, 2010, at 10:48 AM, Simon Riggs wrote:
I will post my patch on this thread when it is available.
Sounds awesome Simon, I look forward to seeing the discussion!
Best,
David
On Sep 9, 2010, at 12:12 AM, Pavel Stehule wrote:
about 2 months for full time and 2 months for partial time - is my tip
Two months full or two months partial? I'll take the latter, please!
David
On Sep 8, 2010, at 3:57 PM, Darren Duncan wrote:
While I don't agree with the idea of providing extra names that are
probably mostly going to increase the confusion of someone trying to
understand such a system, I think this use case would be well covered by
synonyms. But these would be
Howdy,
Anyone ever thought to try to add $subject to PL/pgSQL? Someone left a
[comment][] on the PGXN blog about how this is a supported syntax for using
named parameters on Oracle. The context is to avoid conflicts between variable
names and column names by function-qualifying the former and
On Sep 7, 2010, at 9:35 AM, Tom Lane wrote:
How does $subject differ from what we already do? See
http://www.postgresql.org/docs/9.0/static/plpgsql-structure.html
particularly this:
Note: There is actually a hidden outer block surrounding the
body of any PL/pgSQL function.
I think so. Try it!
David
On Sep 7, 2010, at 11:39 AM, Sergey Konoplev wrote:
Hi,
On 7 September 2010 20:35, Tom Lane t...@sss.pgh.pa.us wrote:
How does $subject differ from what we already do? See
http://www.postgresql.org/docs/9.0/static/plpgsql-structure.html
So will it be possible
On Sep 6, 2010, at 12:07 AM, Pavel Stehule wrote:
The work on PostgreSQL is adventure, and very good experience, very
good school for me. It's job only for people who like programming, who
like hacking, it isn't job for people, who go to office on 8 hours.
Next I use PostgreSQL for my job -
On Sep 3, 2010, at 7:31 AM, Tom Lane wrote:
I don't think the cast should act that way, but I could see providing a
separate conversion function that returns 0 ... or perhaps better NULL
... if no match.
+1 I could use this in pgTAP.
David
On Aug 31, 2010, at 11:56 PM, Thom Brown wrote:
The first form of aggregate expression invokes the aggregate across all
input rows for which the given expression(s) yield non-null values.
(Actually, it is up to the aggregate function whether to ignore null values
or not — but all the
On Sep 1, 2010, at 12:30 AM, Pavel Stehule wrote:
Docs is wrong :) I like current implementation. You can remove a NULLs
from aggregation very simply, but different direction isn't possible
Would appreciate the recipe for removing the NULLs.
Best,
David
On Sep 1, 2010, at 1:06 AM, Thom Brown wrote:
I think it might be both. array_agg doesn't return NULL, it returns
an array which contains NULL.
The second I wrote that, I realised it was b*ll%$ks, as I was still in
the process of waking up.
I know that feeling.
/me sips his coffee
Best,
On Sep 1, 2010, at 12:30 AM, Pavel Stehule wrote:
So are the docs right, or is array_agg() right?
Docs is wrong :) I like current implementation. You can remove a NULLs
from aggregation very simply, but different direction isn't possible
Patch:
diff --git a/doc/src/sgml/syntax.sgml
On Sep 1, 2010, at 10:12 AM, Tom Lane wrote:
I think when that text was written, it was meant to imply all the
aggregates defined in SQL92. There seems to be a lot of confusion
in this thread about whether standard means defined by SQL spec
or built-in in Postgres. Should we try to refine
On Sep 1, 2010, at 10:52 AM, Thom Brown wrote:
Would appreciate the recipe for removing the NULLs.
WHERE clause :P
There may be cases where that's undesirable, such as there being more
than one aggregate in the SELECT list, or the column being grouped on
needing to return rows regardless
On Sep 1, 2010, at 10:30 AM, Tom Lane wrote:
Hm, actually the whole para needs work. It was designed at a time when
DISTINCT automatically discarded nulls, which isn't true anymore, and
that fact was patched-in in a very awkward way too. Perhaps something
like
The first form of
On Sep 1, 2010, at 11:09 AM, Pavel Stehule wrote:
Then you can eliminate NULLs with simple function
CREATE OR REPLACE FUNCTION remove_null(anyarray)
RETURNS anyarray AS $$
SELECT ARRAY(SELECT x FROM unnest($1) g(x) WHERE x IS NOT NULL)
$$ LANGUAGE sql;
Kind of defeats the purpose of the
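[For reference, a self-contained run of Pavel's helper above, against a hypothetical table t(c int) invented here for illustration:]

```sql
-- Hypothetical usage of the remove_null() helper quoted above:
CREATE TABLE t (c int);
INSERT INTO t VALUES (1), (NULL), (2);

-- Aggregate keeps the NULL; the helper strips it afterward.
SELECT remove_null(array_agg(c)) FROM t;  -- e.g. {1,2}
```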
The aggregate docs say:
The first form of aggregate expression invokes the aggregate across all input
rows for which the given expression(s) yield non-null values. (Actually, it
is up to the aggregate function whether to ignore null values or not — but
all the standard ones do.)
On Aug 26, 2010, at 9:05 AM, Tom Lane wrote:
On reflection, I think that the current system design for this is
predicated on the theory that RECORDs really are all the same type, and
the executor had better be prepared to cope with a series of RECORDs
that have different tupdescs, or throw
On Aug 23, 2010, at 11:24 PM, Joe Conway wrote:
Maybe something like this?
select cmp_ok(a,b,c)
from
(
values('1.2.2'::varchar, '='::text, '1.2.2'::varchar),
('1.2.23', '=', '1.2.23'),
('1.2.42', '=', '1.2.32')
) as ss(a, b, c);
cmp_ok
t
t
f
(3 rows)
On Aug 24, 2010, at 7:05 AM, Tom Lane wrote:
You could do it like this:
SELECT cmp_ok(lv, op, rv) FROM unnest(ARRAY[
ROW('1.2.2', '=', '1.2.2'),
ROW('1.2.23', '=', '1.2.23')
]::vcmp[]);
Oh, duh. :-)
psql:t/types.pg:205: ERROR: invalid memory alloc request size
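[Tom's pattern, sketched self-contained; text stands in for the semver type used in the thread, and the column names are the ones from the vcmp type defined earlier in it:]

```sql
-- Iterate a list of test fixtures without a scratch table:
-- an array of composites, unnested in FROM.
CREATE TYPE vcmp AS (lv text, op text, rv text);

SELECT (v).lv, (v).op, (v).rv
FROM unnest(ARRAY[
    ROW('1.2.2',  '=', '1.2.2'),
    ROW('1.2.23', '=', '1.2.23')
]::vcmp[]) AS v;
```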
On Aug 24, 2010, at 10:21 AM, Tom Lane wrote:
This may be the ultimate bike-shed but Wouldn't this be clearer the
other way around? I generally think input comes first and then output.
The order was bothering me a bit too, but there's a generic decision
in there that the tlist is shown
On Aug 23, 2010, at 2:35 AM, Andrew Dunstan wrote:
I'm not wedded to the syntax. Let the bikeshedding begin.
Seems pretty good to me as-is.
David
On Aug 23, 2010, at 12:20 PM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
I really don't see the value in making a command substantially less
intuitive in order to avoid a single keyword, unless it affects areas of
Postgres outside of this particular command.
It's the three
Hackers,
I've been trying to come up with a simpler way to iterate over a series of
values in pgTAP tests than by creating a table, inserting rows, and then
selecting from the table. The best I've come up with so far is:
CREATE TYPE vcmp AS ( lv semver, op text, rv semver);
SELECT
On Aug 21, 2010, at 1:45 AM, Stefan Kaltenbrunner wrote:
hmm FWIW I would interpret something like 9.0.1B4 as the fourth beta
release for the first point release of the major release 9.0 bis seems
stupid and is not anything we have done before.
It doesn't make sense for PostgreSQL, no.
You
Hackers,
A while ago, I asked if .0 releases could be versioned with three digits
instead of two. That is, it would be 8.4.0 instead of 8.4. This is to make
the format consistent with maintenance releases (8.4.1, etc.). I thought this
was generally agreed upon, but maybe not, because I just
On Aug 20, 2010, at 11:34 AM, David Fetter wrote:
+1 for three-number versions...well, until we really see the light and
go to two-number versions. 8.3 and 8.4 are different enough that they
shouldn't even mildly appear the same, for example.
No idea what you mean by that, but generally it's
On Aug 20, 2010, at 11:40 AM, Tom Lane wrote:
David E. Wheeler da...@kineticode.com writes:
A while ago, I asked if .0 releases could be versioned with three
digits instead of two. That is, it would be 8.4.0 instead of 8.4.
We've been doing that for some time, no? A quick look at the CVS
On Aug 20, 2010, at 11:47 AM, David Fetter wrote:
No idea what you mean by that, but generally it's a bad idea to
switch from dotted-integer version numbers and numeric version
numbers. See Perl (Quel désastre!).
I'm thinking that after 9.0, the first release of the next major
version
On Aug 20, 2010, at 12:02 PM, Greg Stark wrote:
Again, it means the format would be consistent. Always three integers. Nice
thing about Semantic Versions is that if you append any ASCII string to the
third integer, it automatically means less than that integer.
So I count three integers
On Aug 20, 2010, at 12:15 PM, Tom Lane wrote:
No, I mean 9.0.0beta4. If we were to adopt the Semantic Versioning spec, one
would *always* use X.Y.Z, with optional ASCII characters appended to Z to
add meaning (including less than unadorned Z).
Well, I for one will fiercely resist adopting
On Aug 20, 2010, at 12:21 PM, Devrim GÜNDÜZ wrote:
+1 for Tom's post.
On 20 Aug 2010, at 21:40, Tom Lane t...@sss.pgh.pa.us wrote:
.0 is for releases, not betas. I see no need for an extra number in
beta versions.
Yes, well, it's still implicit, isn't it?
David
On Aug 20, 2010, at 2:10 PM, Tom Lane wrote:
9.0.0 is less than 9.0.0anything. Unless you wire some specific
knowledge of semantics of particular letter-strings into the comparison
algorithm, it's difficult to come to another decision, IMO.
That's what Semantic versions do. From the spec's
On Aug 20, 2010, at 5:38 PM, Greg Sabino Mullane wrote:
Then why are we discussing it on -hackers?
Because you will need buy in from the hackers if you
ever want to do something as radical as change to
a two-number, one dot system (or some the slightly
less radical earlier suggestions).
On Aug 20, 2010, at 7:49 PM, Robert Haas wrote:
I think the semantic versioning approach makes sense for libraries,
but it is not too clear to me that it makes sense for other kinds of
applications. YMMV, of course.
Yeah, I'm more concerned about determining dependencies in extensions and
On Aug 19, 2010, at 8:08 AM, Robert Haas wrote:
Another possibility is for EXECUTE USING to coerce any unknowns to TEXT
before it calls the parser at all. This would square with the typical
default assumption for unknown literals, and it would avoid having to
have any semantics changes below
On Aug 11, 2010, at 7:41 AM, Tom Lane wrote:
I had forgotten that discussion. It looks like we trailed off without
any real consensus: there was about equal sentiment for an array with
zero elements and an array with one empty-string element. We ended
up leaving it alone because (a) that
On Aug 11, 2010, at 9:36 AM, Tom Lane wrote:
I believe those are all , rather than '' + undef + ''.
If you believe my previous opinion that the design center for these
functions is arrays of numbers, then a zero-entry text[] array is what
you want, because you can successfully cast it to
On Aug 11, 2010, at 9:40 AM, Robert Haas wrote:
Yeah, I think David's examples are talking about the behavior of join,
but we're trying to decide what split should do.
Right, sorry about that.
I think the main
argument for making it return NULL is that you can then fairly easily
use
On Aug 11, 2010, at 10:53 AM, Robert Haas wrote:
Iterating through an array with plpgsql, for example, is more clunky
than it should be.
Really?
FOR var IN SELECT UNNEST(arr) LOOP ... END LOOP
I mean, doing everything is sort of clunky in PL/pgsql, but this
doesn't seem particularly
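[Robert's idiom above as a complete, runnable block (a sketch, not code from the thread); note that PL/pgSQL later grew FOREACH ... IN ARRAY in 9.1, which postdates this discussion:]

```sql
-- Iterating an array by unnesting it in a query loop:
DO $$
DECLARE
    elem int;
BEGIN
    FOR elem IN SELECT unnest(ARRAY[1, 2, 3]) LOOP
        RAISE NOTICE 'element: %', elem;
    END LOOP;
END;
$$;
```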
On Aug 11, 2010, at 10:58 AM, Andrew Dunstan wrote:
for i in array_lower(myarray,1) .. array_upper(myarray,1) loop ...
works well
for i in select array_subscripts(myarray, 1) loop ...
Best,
David
On Aug 11, 2010, at 11:35 AM, Andrew Dunstan wrote:
for i in select array_subscripts(myarray, 1) loop ...
That's not a built-in function AFAIK.
Pavel pointed out to me only yesterday that it is:
http://www.postgresql.org/docs/current/static/functions-srf.html#FUNCTIONS-SRF-SUBSCRIPTS
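[The built-in the linked docs describe is generate_subscripts(); the name array_subscripts used above does not match a built-in. For example:]

```sql
-- generate_subscripts() yields the valid subscripts of one array dimension,
-- which is safer than array_lower()..array_upper() for odd lower bounds:
SELECT i, ('{a,b,c}'::text[])[i] AS elem
FROM generate_subscripts('{a,b,c}'::text[], 1) AS i;
-- 1|a, 2|b, 3|c
```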
On Aug 10, 2010, at 11:46 AM, Thom Brown wrote:
I, personally, would expect an empty array output given an empty
input, and a null output for a null input.
+1
David
On Aug 8, 2010, at 8:38 PM, Tom Lane wrote:
Um, but \sf *doesn't* give you anything that's usefully copy and
pasteable. And if that were the goal, why doesn't it have an option to
write to a file?
But it's really the line numbers shoved in front that I'm on about here.
I can't see *any*
On Aug 9, 2010, at 1:10 PM, Robert Haas wrote:
My first thought is that we should go back to the string_to_array and
array_to_string names. The key reason not to use those names was the
conflict with the old functions if you didn't specify a third argument,
but where is the advantage of not
On Aug 9, 2010, at 5:45 PM, Bruce Momjian wrote:
I figured it out; done:
http://wiki.postgresql.org/wiki/TodoDone90
Jeepers. That's a long list!
David
On Aug 7, 2010, at 11:05 PM, Pavel Stehule wrote:
COLLECTION?
yes, sorry - simply - class where fields can be accessed via specified
index - unique or not unique.
Like in Oracle? From:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/collections.htm
A collection is an
On Aug 8, 2010, at 9:10 AM, Pavel Stehule wrote:
There are no keys.
ok - I didn't use a correct name - so indexed set is better.
Hash?
David
On Aug 6, 2010, at 10:49 PM, Pavel Stehule wrote:
Huh? You can select into an array:
and pg doesn't handle 2D arrays well - can't to use ARRAY(subselect)
constructor for 2D arrays
Right.
try=# select ARRAY(SELECT ARRAY[k,v] FROM foo);
ERROR: could not find array type for datatype text[]
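[A workaround consistent with the error shown, assuming the hypothetical foo(k text, v text) from the thread; note array_agg over array inputs requires PostgreSQL 9.5+, well after this exchange:]

```sql
-- array_agg can build the two-dimensional array that the
-- ARRAY(subselect) constructor rejects (9.5+ only):
CREATE TABLE foo (k text, v text);
INSERT INTO foo VALUES ('a', '1'), ('b', '2');

SELECT array_agg(ARRAY[k, v]) FROM foo;  -- e.g. {{a,1},{b,2}}
```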
On Aug 7, 2010, at 12:24 AM, Pavel Stehule wrote:
try=# create or replace function try(bool) returns text language plperl AS
'shift';
CREATE FUNCTION
Time: 121.403 ms
try=# select try(true);
 try
-----
 t
(1 row)
I wish this wasn't so.
It must not be - it depends on PL handler
On Aug 6, 2010, at 11:13 AM, Tom Lane wrote:
That would work too, although I think it might be a bit harder to use
than one alternating-name-and-value array, at least in some scenarios.
You'd have to be careful that you got the values in the same order in
both arrays, which'd be easy to
On Aug 6, 2010, at 1:49 PM, Pavel Stehule wrote:
yes it is one a possibility and probably best. The nice of this
variant can be two forms like current variadic does - foo(.., a :=
10, b := 10) or foo(.., variadic ARRAY[(a,10),(b,10)])
I started fiddling and got as far as this:
CREATE TYPE
Hackers,
I noticed that the hstore docs still document the = operator instead of %.
This patch changes that. It also updates the first examples to use full SQL
statements, because otherwise the use of = without surrounding single quotes
was confusing.
Best,
David
hstoredoc.patch
On Aug 6, 2010, at 2:12 PM, Pavel Stehule wrote:
SELECT foo('this' ~ 'that', 1 ~ 4);
Not bad, I think. I kind of like it. It reminds me how much I hate the %
hstore construction operator, though (the new name for =).
so there is only small step to proposed feature
SELECT foo(this :=
On Aug 6, 2010, at 3:16 PM, Tom Lane wrote:
I noticed that the hstore docs still document the = operator instead
of %. This patch changes that.
It looks to me like you are changing the examples of the I/O
representation ... which did NOT change.
Hrm? The first few examples at the top? I
On Aug 6, 2010, at 3:38 PM, Tom Lane wrote:
We definitely need to document the `text % text` constructor
BTW, there isn't any % constructor anymore --- we agreed to provide
only the hstore(text, text) constructor.
Oh, I must've been looking at an older checkout, then. Never mind.
Best,
On Aug 6, 2010, at 8:49 PM, Pavel Stehule wrote:
Sorry, not following you here
I would to difference a key and value in notation.
That's exactly what my solution does. The array solution doesn't. Whether it's
appropriate to use a custom composite type, however, is an open question.
Pavel
On Aug 6, 2010, at 9:48 PM, Pavel Stehule wrote:
That's exactly what my solution does. The array solution doesn't. Whether
it's appropriate to use a custom composite type, however, is an open
question.
no it doesn't - in your design there are no different notation for key
and for value.
On Aug 6, 2010, at 9:59 PM, Tom Lane wrote:
It's not immediately clear to me what an ordered-pair type would get you
that you don't get with 2-element arrays.
Just syntactic sugar, really. And control over how many items you have (a
bounded pair rather than an unlimited element array).
A
On Aug 6, 2010, at 10:15 PM, Pavel Stehule wrote:
This is not exactly without precedent, either: our built-in xpath()
function appears to use precisely this approach for its namespace-list
argument.
it's one variant, but isn't perfect
a) it expects so key and value are literals
Huh? You
On Aug 5, 2010, at 11:25 AM, Tom Lane wrote:
Applied to HEAD and 9.0. The mistaken case will now yield this:
regression=# select string_agg(f1 order by f1, ',') from text_tbl;
ERROR: function string_agg(text) does not exist
LINE 1: select string_agg(f1 order by f1, ',') from text_tbl;
On Aug 5, 2010, at 11:42 AM, Thom Brown wrote:
LINE 1: select string_agg(f1 order by f1, ',') from text_tbl;
^
I'm confused: that looks like the two-argument form to me. Have I missed
something?
HINT: No function matches the given name and argument types. You might
On Aug 5, 2010, at 11:45 AM, Tom Lane wrote:
I'm confused: that looks like the two-argument form to me. Have I missed
something?
Yeah, the whole point of the thread: that's not a call of a two-argument
aggregate. It's a call of a one-argument aggregate, using a two-column
sort key to
On Aug 5, 2010, at 12:16 PM, Tom Lane wrote:
HINT: No aggregate function matches the given name and argument
types. Perhaps you misplaced ORDER BY; ORDER BY must appear after all
regular arguments of the aggregate.
+1
David
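[The corrected call for the mistake discussed in this thread places ORDER BY after all regular aggregate arguments, so the delimiter is an argument rather than a second sort key:]

```sql
-- Intended: two-argument string_agg with an ORDER BY clause.
-- The mistaken original, string_agg(f1 ORDER BY f1, ','), parses as
-- the one-argument aggregate with a two-column sort key.
SELECT string_agg(f1, ',' ORDER BY f1) FROM text_tbl;
```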