I've just been running some timings of my JSON-producing functions, in particular array_to_json, and comparing them with the current XML-producing functions. Here's a typical result:

   andrew=# explain analyse select array_to_json(array_agg(q),true)
   from (select * from pg_attribute) q;
                                                         QUERY PLAN
   ---------------------------------------------------------------------------------------------------------------------
    Aggregate  (cost=70.77..70.78 rows=1 width=203) (actual time=38.919..38.920 rows=1 loops=1)
      ->  Seq Scan on pg_attribute  (cost=0.00..65.01 rows=2301 width=203) (actual time=0.007..1.454 rows=2253 loops=1)
    Total runtime: 39.300 ms
   (3 rows)

   Time: 62.753 ms
   andrew=# explain analyse select table_to_xml('pg_attribute',
   true,false,'');
                                           QUERY PLAN
   ----------------------------------------------------------------------------------------
    Result  (cost=0.00..0.26 rows=1 width=0) (actual time=519.170..526.737 rows=1 loops=1)
    Total runtime: 526.780 ms
   (2 rows)


As you can see, producing the JSON is a heck of a lot faster than producing the equivalent XML — roughly 13x in this test (39 ms vs. 527 ms). I had thought that good performance might require caching the type output info in the FunctionCallInfo structure, rather than fetching it for each Datum we output, but that doesn't seem to be so. For now I'm inclined not to proceed with that, and to leave it as an optimization to be considered later if necessary. Thoughts?

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers