Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Jim Nasby

On 4/28/15 1:31 PM, Andrew Dunstan wrote:


On 04/28/2015 01:44 PM, Jim Nasby wrote:

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and  powerful.


Sure, and if I had some pointers on what was necessary there I'd take
a look at it. But I'm not very familiar with plpgsql (let alone what
we'd need to do this in SQL), so I'd just be fumbling around. As a
reminder, one of the big issues there seems to be that while plSQL
knows what the underlying type is, plpgsql has no idea, which
seriously limits the use of passing it a record.

In the meantime I've got a patch that definitely works for plSQL and
allows you to handle a record and pass it on to other functions (such
as json_from_record()). Since that's my original motivation for
looking at this, I'd like that patch to be considered unless there's a
big drawback to it that I'm missing. (For 9.6, of course.)



If you look at composite_to_json() it gives you almost all that you'd
need to construct an array of field names for an arbitrary record, and a
lot of what you'd need to extract a value for an arbitrary field.
populate_record_worker() has a good deal of what you'd need to set a
value of an arbitrary field. None of that means that there isn't a good
deal of work to do, but if you want pointers there are some.


Thanks, those are helpful. BTW, I think this is a memory leak in 
populate_record_worker():


my_extra = (RecordIOData *) fcinfo->flinfo->fn_extra;
if (my_extra == NULL ||
    my_extra->ncolumns != ncolumns)
{
    fcinfo->flinfo->fn_extra =
        MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,


The part that I'm still concerned about in plpgsql is how to handle the 
case of having a record that we should be able to associate with a 
specific composite type (such as a table type). That's not currently 
working in my patch, but I'm not sure why. Maybe I need to test for that 
and in that case set the variable up as a PLPGSQL_TTYPE_ROW instead of 
PLPGSQL_TTYPE_REC?

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Merlin Moncure
On Tue, Apr 28, 2015 at 12:44 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 On 4/27/15 10:06 PM, Andrew Dunstan wrote:

 My point remains that we really need methods of a) getting the field
 names from generic records and b) using text values to access fields of
 generic records, both as lvalues and rvalues. Without those this feature
 will be of comparatively little value, IMNSHO. With them it will be much
 more useful and  powerful.


 Sure, and if I had some pointers on what was necessary there I'd take a look
 at it. But I'm not very familiar with plpgsql (let alone what we'd need to
 do this in SQL), so I'd just be fumbling around. As a reminder, one of the
 big issues there seems to be that while plSQL knows what the underlying type
 is, plpgsql has no idea, which seriously limits the use of passing it a
 record.

 In the meantime I've got a patch that definitely works for plSQL and allows
 you to handle a record and pass it on to other functions (such as
 json_from_record()). Since that's my original motivation for looking at
 this, I'd like that patch to be considered unless there's a big drawback to
 it that I'm missing. (For 9.6, of course.)

I think it's pretty useful actually.

merlin




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Jim Nasby

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and  powerful.


Sure, and if I had some pointers on what was necessary there I'd take a 
look at it. But I'm not very familiar with plpgsql (let alone what we'd 
need to do this in SQL), so I'd just be fumbling around. As a reminder, 
one of the big issues there seems to be that while plSQL knows what the 
underlying type is, plpgsql has no idea, which seriously limits the use 
of passing it a record.


In the meantime I've got a patch that definitely works for plSQL and 
allows you to handle a record and pass it on to other functions (such as 
json_from_record()). Since that's my original motivation for looking at 
this, I'd like that patch to be considered unless there's a big drawback 
to it that I'm missing. (For 9.6, of course.)

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Andrew Dunstan


On 04/28/2015 01:44 PM, Jim Nasby wrote:

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and  powerful.


Sure, and if I had some pointers on what was necessary there I'd take 
a look at it. But I'm not very familiar with plpgsql (let alone what 
we'd need to do this in SQL), so I'd just be fumbling around. As a 
reminder, one of the big issues there seems to be that while plSQL 
knows what the underlying type is, plpgsql has no idea, which 
seriously limits the use of passing it a record.


In the meantime I've got a patch that definitely works for plSQL and 
allows you to handle a record and pass it on to other functions (such 
as json_from_record()). Since that's my original motivation for 
looking at this, I'd like that patch to be considered unless there's a 
big drawback to it that I'm missing. (For 9.6, of course.)



If you look at composite_to_json() it gives you almost all that you'd 
need to construct an array of field names for an arbitrary record, and a 
lot of what you'd need to extract a value for an arbitrary field. 
populate_record_worker() has a good deal of what you'd need to set a 
value of an arbitrary field. None of that means that there isn't a good 
deal of work to do, but if you want pointers there are some.


cheers

andrew





Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-27 Thread Jim Nasby

On 4/25/15 4:50 PM, Tom Lane wrote:

Well, we already support local variables of type RECORD in plpgsql, so
it's not immediately clear to me that function arguments would be much
worse.  There are a lot of deficiencies with the RECORD-local-variable
implementation: if you try to change the actual RECORD type from one call
to the next you'll probably have problems.  But it seems like we could
avoid that for function arguments by treating RECORD as a polymorphic
argument type, and thereby generating a separate set of plan trees for
each actual record type passed to the function within a given session.
So in principle it ought to work better than the local-variable case does
today.

In short I suspect that Jim is right and this has more to do with a
shortage of round tuits than any fundamental problem.


I took a stab at plpgsql and it seems to work ok... but I'm not sure 
it's terribly valuable because you end up with an anonymous record 
instead of something that points back to what you handed it. The 'good' 
news is it doesn't seem to blow up on successive calls with different 
arguments...
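For illustration only, here is a sketch of the kind of plpgsql function the 
WIP patch is aiming to allow (hypothetical; null_count is a made-up name, and 
the anonymous-record limitation described above still applies):

create function null_count(r record) returns bigint language plpgsql as $$
begin
    -- row_to_json() is used because plpgsql itself still has no way to
    -- enumerate the fields of an anonymous record
    return (select count(*)
              from json_each_text( row_to_json(r) ) a
             where value is null);
end $$;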



Not sure about the SQL-function case.  That might be even easier because
functions.c doesn't try to cache plans across queries; or maybe not.


This on the other hand was rather easy. It's not horribly useful due to 
built-in restrictions on dealing with record, but that's certainly not 
plsql's fault, and this satisfies my initial use case of


create function cn(record) returns bigint language sql as $$
SELECT count(*)
  FROM json_each_text( row_to_json($1) ) a
  WHERE value IS NULL $$;

Attached patches both pass make check. The plpgsql is WIP, but I think 
the SQL one is OK.
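For what it's worth, the intended call pattern with the SQL patch applied 
would look roughly like this (a sketch; the table t is made up for 
illustration):

create table t (a int, b text);
insert into t values (1, NULL), (NULL, NULL);
select cn(t.*) from t;  -- expected: 1 for the first row, 2 for the second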

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From cc1776d4963b0ae837f61320ba36a6ff3ad7a7cb Mon Sep 17 00:00:00 2001
From: Jim Nasby jim.na...@bluetreble.com
Date: Mon, 27 Apr 2015 18:54:51 -0500
Subject: [PATCH] Allow SQL functions to accept a record

---
 src/backend/catalog/pg_proc.c   | 10 +++---
 src/test/regress/expected/create_function_3.out | 17 ++---
 src/test/regress/sql/create_function_3.sql  |  6 +-
 3 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1229829..daf3297 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -870,14 +870,18 @@ fmgr_sql_validator(PG_FUNCTION_ARGS)
 				 errmsg("SQL functions cannot return type %s",
 						format_type_be(proc->prorettype))));
 
-	/* Disallow pseudotypes in arguments */
-	/* except for polymorphic */
+	/*
+	 * Disallow pseudotypes in arguments except for polymorphic and record. In
+	 * the context of validating a function, record may as well be polymorphic,
+	 * so treat it as such.
+	 */
 	haspolyarg = false;
 	for (i = 0; i < proc->pronargs; i++)
 	{
 		if (get_typtype(proc->proargtypes.values[i]) == TYPTYPE_PSEUDO)
 		{
-			if (IsPolymorphicType(proc->proargtypes.values[i]))
+			if (IsPolymorphicType(proc->proargtypes.values[i]) ||
+				proc->proargtypes.values[i] == RECORDOID)
 				haspolyarg = true;
 			else
 				ereport(ERROR,
diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out
index 6a4352c..cd370e3 100644
--- a/src/test/regress/expected/create_function_3.out
+++ b/src/test/regress/expected/create_function_3.out
@@ -16,16 +16,26 @@ CREATE FUNCTION functest_A_2(text[]) RETURNS int LANGUAGE 'sql'
AS 'SELECT $1[0]::int';
 CREATE FUNCTION functest_A_3() RETURNS bool LANGUAGE 'sql'
AS 'SELECT false';
+CREATE FUNCTION functest_A_4(record) RETURNS regtype LANGUAGE 'sql'
+   AS 'SELECT pg_catalog.pg_typeof($1)';
+SELECT functest_A_4(NULL::pg_catalog.pg_class);
+ functest_a_4 
+--
+ pg_class
+(1 row)
+
 SELECT proname, prorettype::regtype, proargtypes::regtype[] FROM pg_proc
WHERE oid in ('functest_A_1'::regproc,
  'functest_A_2'::regproc,
- 'functest_A_3'::regproc) ORDER BY proname;
+ 'functest_A_3'::regproc,
+ 'functest_A_4'::regproc) ORDER BY proname;
proname| prorettype |proargtypes
 --++---
  functest_a_1 | boolean| [0:1]={text,date}
  functest_a_2 | integer| [0:0]={text[]}
  functest_a_3 | boolean| {}
-(3 rows)
+ functest_a_4 | regtype| [0:0]={record}
+(4 rows)
 
 --
 -- IMMUTABLE | STABLE | VOLATILE
@@ -219,10 +229,11 @@ SELECT routine_name, ordinal_position, parameter_name, parameter_default
 
 -- Cleanups
 DROP SCHEMA temp_func_test 

Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-27 Thread Andrew Dunstan


On 04/27/2015 10:35 PM, Jim Nasby wrote:

On 4/25/15 4:50 PM, Tom Lane wrote:

Well, we already support local variables of type RECORD in plpgsql, so
it's not immediately clear to me that function arguments would be much
worse.  There are a lot of deficiencies with the RECORD-local-variable
implementation: if you try to change the actual RECORD type from one 
call

to the next you'll probably have problems.  But it seems like we could
avoid that for function arguments by treating RECORD as a polymorphic
argument type, and thereby generating a separate set of plan trees for
each actual record type passed to the function within a given session.
So in principle it ought to work better than the local-variable case 
does

today.

In short I suspect that Jim is right and this has more to do with a
shortage of round tuits than any fundamental problem.


I took a stab at plpgsql and it seems to work ok... but I'm not sure 
it's terribly valuable because you end up with an anonymous record 
instead of something that points back to what you handed it. The 
'good' news is it doesn't seem to blow up on successive calls with 
different arguments...



Not sure about the SQL-function case. That might be even easier because
functions.c doesn't try to cache plans across queries; or maybe not.


This on the other hand was rather easy. It's not horribly useful due 
to built-in restrictions on dealing with record, but that's certainly 
not plsql's fault, and this satisfies my initial use case of


create function cn(record) returns bigint language sql as $$
SELECT count(*)
  FROM json_each_text( row_to_json($1) ) a
  WHERE value IS NULL $$;

Attached patches both pass make check. The plpgsql is WIP, but I think 
the SQL one is OK.



My point remains that we really need methods of a) getting the field 
names from generic records and b) using text values to access fields of 
generic records, both as lvalues and rvalues. Without those this feature 
will be of comparatively little value, IMNSHO. With them it will be much 
more useful and  powerful.


cheers

andrew




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-25 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 As for allowing SQL and plpgsql functions to accept a record, I think our
 JSON functionality just provided plenty of reason we should allow accepting
 them, even if you can't do much with it: you *can* hand it to row_to_json(),
 which does allow you to do something useful with it. So it seems reasonable
 to me that we should at least accept it as a function argument.

 I agree that that would be useful.  I think the problem with an
 expression like rowvar.something is that PL/pgsql cannot infer the
 type of the result, and nothing else works without that.  I doubt that
 it's practical to lift that restriction.

Well, we already support local variables of type RECORD in plpgsql, so
it's not immediately clear to me that function arguments would be much
worse.  There are a lot of deficiencies with the RECORD-local-variable
implementation: if you try to change the actual RECORD type from one call
to the next you'll probably have problems.  But it seems like we could
avoid that for function arguments by treating RECORD as a polymorphic
argument type, and thereby generating a separate set of plan trees for
each actual record type passed to the function within a given session.
So in principle it ought to work better than the local-variable case does
today.
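For comparison, a sketch of how the existing polymorphic machinery already 
behaves (not part of any patch; first_elem is a made-up example) - each 
distinct argument type is resolved separately at call time:

create function first_elem(anyarray) returns anyelement
    language sql as 'select $1[1]';
select first_elem(array[1,2,3]);    -- $1 resolves to integer[], result is integer
select first_elem(array['a','b']);  -- $1 resolves to text[], result is text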

In short I suspect that Jim is right and this has more to do with a
shortage of round tuits than any fundamental problem.

Not sure about the SQL-function case.  That might be even easier because
functions.c doesn't try to cache plans across queries; or maybe not.

regards, tom lane




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-23 Thread Jim Nasby

On 4/23/15 8:33 AM, Pavel Stehule wrote:

I agree that that would be useful.  I think the problem with an
expression like rowvar.something is that PL/pgsql cannot infer the
type of the result, and nothing else works without that.  I doubt that
it's practical to lift that restriction.  PL/pgsql could introduce
dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
colname, resulttype) or something, but that's going to be clunky at
best.  Whether we eventually do that or not, if we can allow rows to
be passed in and then let people use json or hstore operators on them,
that would be a helpful step forward, IMHO.  I'm not sure if that's
practical either, but maybe...


This needs significant changes in plpgsql - it can force expressions inside a 
loop to work with different types - so we either have to stop basing 
expressions on cached plans, or we have to introduce alternative plans for 
the different input types.


Is this fundamentally an issue of not knowing what we're being handed 
when we compile the function? Perhaps a way around this limitation would 
be to recompile during execution if any record arguments get a different 
base type. In reality, I suspect that won't happen during a single query.


I'll take a look at at least allowing passing a record in so you can 
hand it to some other function. Any pointers on how to do that would be 
welcome; I've never hacked on plpgsql or SQL function code before.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-23 Thread Pavel Stehule
2015-04-23 15:19 GMT+02:00 Robert Haas robertmh...@gmail.com:

 On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby jim.na...@bluetreble.com
 wrote:
  On 4/22/15 2:12 PM, Merlin Moncure wrote:
  That being said, I think json types with their associated API, given
  that they are core types, will ultimately handle these types of
  problems.  That way, at least, we can avoid adding syntax and
  functionality that will basically do the same thing.  This reminds me
  a little bit of the json_build() vs enhanced row() syntax we discussed
  some time back.  I didn't say so at the time, but for posterity, I
  think you were right...json_build() is working fine for building
  arbitrary record types and moving a record to json and deconstructing
  it should work just as well.
 
  The one part I don't care for in that is it seems rather inefficient to
 cast
  something to JSON just so we can do things we really should be able to do
  with a record. But perhaps it's not all that costly.
 
  As for allowing SQL and plpgsql functions to accept a record, I think our
  JSON functionality just provided plenty of reason we should allow
 accepting
  them, even if you can't do much with it: you *can* hand it to
 row_to_json(),
  which does allow you to do something useful with it. So it seems
 reasonable
  to me that we should at least accept it as a function argument.

 I agree that that would be useful.  I think the problem with an
 expression like rowvar.something is that PL/pgsql cannot infer the
 type of the result, and nothing else works without that.  I doubt that
 it's practical to lift that restriction.  PL/pgsql could introduce
 dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
 colname, resulttype) or something, but that's going to be clunky at
 best.  Whether we eventually do that or not, if we can allow rows to
 be passed in and then let people use json or hstore operators on them,
 that would be a helpful step forward, IMHO.  I'm not sure if that's
 practical either, but maybe...


This needs significant changes in plpgsql - it can force expressions inside a
loop to work with different types - so we either have to stop basing
expressions on cached plans, or we have to introduce alternative plans for the
different input types.

Pavel





 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company





Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-23 Thread Robert Haas
On Wed, Apr 22, 2015 at 6:12 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 On 4/22/15 2:12 PM, Merlin Moncure wrote:
 That being said, I think json types with their associated API, given
 that they are core types, will ultimately handle these types of
 problems.  That way, at least, we can avoid adding syntax and
 functionality that will basically do the same thing.  This reminds me
 a little bit of the json_build() vs enhanced row() syntax we discussed
 some time back.  I didn't say so at the time, but for posterity, I
 think you were right...json_build() is working fine for building
 arbitrary record types and moving a record to json and deconstructing
 it should work just as well.

 The one part I don't care for in that is it seems rather inefficient to cast
 something to JSON just so we can do things we really should be able to do
 with a record. But perhaps it's not all that costly.

 As for allowing SQL and plpgsql functions to accept a record, I think our
 JSON functionality just provided plenty of reason we should allow accepting
 them, even if you can't do much with it: you *can* hand it to row_to_json(),
 which does allow you to do something useful with it. So it seems reasonable
 to me that we should at least accept it as a function argument.

I agree that that would be useful.  I think the problem with an
expression like rowvar.something is that PL/pgsql cannot infer the
type of the result, and nothing else works without that.  I doubt that
it's practical to lift that restriction.  PL/pgsql could introduce
dedicated syntax for this operation, like DYNAMIC_EXTRACT(rowvar,
colname, resulttype) or something, but that's going to be clunky at
best.  Whether we eventually do that or not, if we can allow rows to
be passed in and then let people use json or hstore operators on them,
that would be a helpful step forward, IMHO.  I'm not sure if that's
practical either, but maybe...
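As a sketch of the json route (hypothetical; get_field is a made-up name, and
it assumes record arguments are accepted at all), a text key can already be
used for read access via row_to_json() and the ->> operator:

create function get_field(r record, fname text) returns text
    language sql as $$ select row_to_json(r)->>fname $$;

That only covers the rvalue side, of course; assigning through a text key
would still need something more.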

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-22 Thread Andrew Dunstan


On 04/22/2015 11:29 AM, Jim Nasby wrote:

On 4/20/15 2:04 PM, David G. Johnston wrote:


SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
ERROR: record type has not been registered

While it may not be necessary to solve both problems I suspect they have
the same underlying root cause - specifically the separation of concerns
between the planner and the executor.


I don't think they're related at all. C functions have the ability to 
accept a record, so the executor must be able to support it. It's just 
that SQL and plpgsql functions don't have that support. I suspect 
that's just because no one has gotten around to it.



Well, that's assuming everyone else thinks it would be a good idea. 
Maybe they do, but I wouldn't assume it.


The answer in the past has been to use more dynamically typed languages 
such as perl for which this problem is well suited.


There are actually several problems: first, given an arbitrary record 
plpgsql has no easy and efficient way of finding out what field names it 
has. Second, even if it has such knowledge it has no way of using it - 
it's not like JavaScript where you can use a text value as a field name. 
And third, it has no way of creating variables of the right type to hold 
extracted values.


All of these could possibly be overcome, but it would not be a small 
piece of work, I suspect. Given that plperl already buys you all of that 
(just try this, for example), people might think it not worth the extra 
trouble.


   create function rkeys(record) returns text[] language plperl as $$
   my $rec = shift; return [ keys %$rec ]; $$;
   select unnest(rkeys(r)) from (select * from pg_class limit 1) r;


cheers

andrew




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-22 Thread Jim Nasby

On 4/20/15 2:04 PM, David G. Johnston wrote:


SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
ERROR: record type has not been registered

While it may not be necessary to solve both problems I suspect they have
the same underlying root cause - specifically the separation of concerns
between the planner and the executor.


I don't think they're related at all. C functions have the ability to 
accept a record, so the executor must be able to support it. It's just 
that SQL and plpgsql functions don't have that support. I suspect that's 
just because no one has gotten around to it.
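For example, an existing C function such as row_to_json() already copes with 
an anonymous record at execution time (a quick illustration):

select row_to_json(row(1,2,3));
--  {"f1":1,"f2":2,"f3":3}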

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-22 Thread Merlin Moncure
On Wed, Apr 22, 2015 at 11:20 AM, Andrew Dunstan and...@dunslane.net wrote:

 On 04/22/2015 11:29 AM, Jim Nasby wrote:

 On 4/20/15 2:04 PM, David G. Johnston wrote:


 SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
 ERROR: record type has not been registered

 While it may not be necessary to solve both problems I suspect they have
 the same underlying root cause - specifically the separation of concerns
 between the planner and the executor.

 I don't think they're related at all. C functions have the ability to
 accept a record, so the executor must be able to support it. It's just that
 SQL and plpgsql functions don't have that support. I suspect that's just
 because no one has gotten around to it.

 Well, that's assuming everyone else thinks it would be a good idea. Maybe
 they do, but I wouldn't assume it.

 The answer in the past has been to use more dynamically typed languages such
 as perl for which this problem is well suited.

I've never really been satisfied with this answer.  The two languages
with really good core support are perl and python, neither of which
are my cup of tea.   Also, there is no chance of inlining any of the
dynamic languages which has serious performance ramifications.  In a
perfect world, pl/v8 would be a good choice for a general purpose
database support language as javascript has a number of properties
that make it attractive for integration.  Even if we had that though
(and it's unlikely), a large percentage of postgres devs, including
myself, dislike coding in any language except sql plus extensions.

That being said, I think json types with their associated API, given
that they are core types, will ultimately handle these types of
problems.  That way, at least, we can avoid adding syntax and
functionality that will basically do the same thing.  This reminds me
a little bit of the json_build() vs enhanced row() syntax we discussed
some time back.  I didn't say so at the time, but for posterity, I
think you were right...json_build() is working fine for building
arbitrary record types and moving a record to json and deconstructing
it should work just as well.
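A sketch of what that looks like with the core json API (json_build_object()
to build, json_each_text() to pull apart again):

select json_build_object('a', 1, 'b', 'x');   -- build an arbitrary record-as-json
select * from json_each_text(json_build_object('a', 1, 'b', 'x'));  -- deconstruct it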

merlin





Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-22 Thread Jim Nasby

On 4/22/15 2:12 PM, Merlin Moncure wrote:

That being said, I think json types with their associated API, given
that they are core types, will ultimately handle these types of
problems.  That way, at least, we can avoid adding syntax and
functionality that will basically do the same thing.  This reminds me
a little bit of the json_build() vs enhanced row() syntax we discussed
some time back.  I didn't say so at the time, but for posterity, I
think you were right...json_build() is working fine for building
arbitrary record types and moving a record to json and deconstructing
it should work just as well.


The one part I don't care for in that is it seems rather inefficient to 
cast something to JSON just so we can do things we really should be able 
to do with a record. But perhaps it's not all that costly.


As for allowing SQL and plpgsql functions to accept a record, I think 
our JSON functionality just provided plenty of reason we should allow 
accepting them, even if you can't do much with it: you *can* hand it to 
row_to_json(), which does allow you to do something useful with it. So 
it seems reasonable to me that we should at least accept it as a 
function argument.
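Until then, the workaround is to push the row_to_json() call onto the caller 
(a sketch; t stands for whatever table is being queried):

create function cn(j json) returns bigint language sql as $$
    select count(*) from json_each_text(j) a where value is null $$;
select cn(row_to_json(t.*)) from t;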

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-20 Thread David G. Johnston
On Sun, Apr 19, 2015 at 3:02 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 Is there a fundamental reason SQL/plpgsql functions won't accept record as
 an input type? If not, can someone point me at a patch that might show how
 much work would be involved in adding support?

 My particular use case is a generic function that will count how many
 fields in a record are NULL. I can do it in pure SQL (below), but was
 hoping to wrap the entire thing in a function. Right now, I have to add a
 call to row_to_json() to the function call.

 SELECT count(*)
   FROM json_each_text( row_to_json($1) ) a
   WHERE value IS NULL


See also:

SELECT (src.v).* FROM ( VALUES (ROW(1,2,3)) ) src (v);
ERROR: record type has not been registered

While it may not be necessary to solve both problems I suspect they have
the same underlying root cause - specifically the separation of concerns
between the planner and the executor.

ISTM that the planner needs to be able to create arbitrarily named
composite types and leave them registered in the session somewhere for
the executor to find.  Session because:

PREPARE prep_rec AS SELECT record_input_func(v) FROM ( VALUES
(ROW($1::integer,$2::boolean,$3::text)) ) src (v);
EXECUTE prep_rec(1, true, 'hi!');

If it requires additional smarts in the executor to make this all work I
suspect the cost-benefit equations end up supporting the somewhat more
verbose but workable status-quo.

I'm not sure how { row_to_json(record) } works but SQL (including pl/pgsql)
needs to have some source of definition for what the record type should be
in reality - and that source currently is the catalogs whose rows are
locked by the planner and injected, I think, into a session cache.  The
source query in pl/pgsql defines the type for fully embedded use of the
record placeholder while the caller's function alias provides that
information for RETURNS record.  The calling query needs to provide the
same information for CREATE FUNCTION func( arg1 record ) since the body
of the pl/pgsql function needs to instantiate arg1 with a known type as
soon as the function is entered.  It is theoretically possible to impute
the needed anonymous type from the query definition - the problem is how
and where to register that information for execution.

At least for pl/pgsql, something like func( arg1 packed_record_bytes ), with
pl/pgsql understanding how to unpack those bytes into an anonymous but
structured record (like it already does with SELECT ... INTO record_var),
seems plausible. I would not expect pl/SQL to allow anything of the sort, as
it doesn't seem compatible with the idea of inline-ability.
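For reference, the SELECT ... INTO record_var behaviour mentioned above - 
plpgsql resolves the anonymous record's shape at run time:

do $$
declare r record;
begin
    select 1 as a, 'x' as b into r;
    raise notice 'a = %, b = %', r.a, r.b;
end $$;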

Maybe the C code for row_to_json (or libpq in general) can provide
inspiration (particularly for the pack/unpack bytes) but as I do not know
C I'm going to have to leave that to others.

David J.