On Thu, Oct 23, 2014 at 8:39 AM, Andrew Dunstan <and...@dunslane.net> wrote:
>
> On 10/23/2014 09:27 AM, Merlin Moncure wrote:
>>
>> On Thu, Oct 23, 2014 at 4:34 AM, Pavel Stehule <pavel.steh...@gmail.com>
>> wrote:
>>>
>>> Hi
>>>
>>> here is a prototype
>>>
>>> postgres=# select row_to_json(row(10 as A, row(30 as c, 20 AS B) as x));
>>>           row_to_json
>>> ------------------------------
>>>   {"a":10,"x":{"c":30,"b":20}}
>>> (1 row)
>>>
>>> postgres=# select row_to_json(row(10, row(30, 20)));
>>>             row_to_json
>>> ----------------------------------
>>>   {"f1":10,"f2":{"f1":30,"f2":20}}
>>> (1 row)
>>
>> wow -- this is great.   I'll take a look.
>>
>
> Already in  9.4:
>
> andrew=# select
> json_build_object('a',10,'x',json_build_object('c',30,'b',20));
>            json_build_object
> ----------------------------------------
>  {"a" : 10, "x" : {"c" : 30, "b" : 20}}
> (1 row)
>
>
> So I'm not sure why we want another mechanism unless it's needed in some
> other context.

json_build_object is super useful for sure, but what about
performance?  An increasing share of the application data exchange in
all the work that I do goes through json, and performance is very
important.

I tested at one million rows and:
A. select to_json(array(select json_build_object('a',a,'b',b) from foo f));
takes about twice as long as either:
B. select to_json(array(select row(a,b) from foo f));
or
C. select to_json(array(select f from foo f));
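
For anyone who wants to reproduce: the table I tested against was
roughly the following (two integer columns, a million rows; schema is
from memory, so treat it as a sketch):

create table foo (a int, b int);
insert into foo select i, i * 2 from generate_series(1, 1000000) i;

then, with \timing on in psql, run A/B/C above.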

Note the results aren't quite the same: "B" anonymizes the columns to
'f1' etc., and "A" adds 5 extra spaces per array element (aside: the
json serialization functions are not consistently spaced -- shouldn't
they generally be as spartan as possible?).  Maybe the performance
difference is a reflection of that spurious space consumption,
though... looking at the code, json_build_object just does basic
StringInfo processing, so I don't see any reason for it to be greatly
slower.
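
To illustrate the spacing point, using the outputs from earlier in
this thread:

row_to_json:       {"a":10,"x":{"c":30,"b":20}}
json_build_object: {"a" : 10, "x" : {"c" : 30, "b" : 20}}

A space on each side of the two colons plus one after the comma is
where the 5 extra bytes per two-key object come from.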

With a nested construction
(json_build_object('a',a,'b',json_build_object('a', a, 'b', b)) vs
row(a,b,row(a,b))) the results are closer; json_build_object takes
about 1.5x as long.  Not close enough to call it a wash, but not
damning either, at least for this one case.
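
Spelled out, the nested queries I compared were (against the same foo
table as the first test):

select to_json(array(select json_build_object('a',a,'b',json_build_object('a',a,'b',b)) from foo f));
vs
select to_json(array(select row(a,b,row(a,b)) from foo f));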

As for row() construction, there aren't many use cases today,
precisely because row() destroys column names unless you have a
composite type handy to cast to (and that cast is cpu-cycle-sucking
overhead), so I've learned to code around it.  In some cases a row()
that preserved names would remove the need for the composite.  It
doesn't happen *that* often -- usually it comes up when stashing
aggregated rows through a CTE.  At least some of *those* cases are
workarounds for the lack of LATERAL; my production systems are still
on 9.2.
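
For example, the composite cast workaround I mean goes something like
this (hypothetical type name):

create type foo_t as (a int, b int);
select row_to_json(row(10, 20)::foo_t);
   row_to_json
-----------------
 {"a":10,"b":20}

You get the names back, but now you're maintaining a type and paying
for the cast on every row.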

All that being said, row() seems to me to have a lot of style points,
and I don't think nested row constructions should have a dependency on
json/jsonb.  It's just something you do; json processing is deferred
to the last stage of processing before the data goes out the
door... that's where we would presumably apply formatting decisions on
top of it.
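
(If Pavel's prototype syntax composes the way I'd expect -- untested,
just extrapolating from his example upthread -- that last stage would
reduce to something like:

select to_json(array(select row(a as a, b as b) from foo f));

with no composite type needed.)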

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers