So, here's a destruction test case:

200,000 JSON values (plus 2 key columns)
Average width 4K (+/- 1K)
183 keys per JSON value
        keys 10 to 30 characters
        105 float values
        70 integer values
        8 text and date values
        no nesting

The "jsonic" table is JSON
The "jsonbish" table is JSONB

(I can't share this data set, but it makes a good test case)
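Since the original data can't be shared, here is a rough sketch of how a comparable flat-JSON data set could be generated. All table and column names are made up, and for brevity every value is a float rather than the real mix of floats, integers, text and dates:

```sql
-- Two identical tables, differing only in the JSON column type.
CREATE TABLE jsonic   (id serial PRIMARY KEY, tag text, j json);
CREATE TABLE jsonbish (id serial PRIMARY KEY, tag text, j jsonb);

-- 200,000 rows, each a flat object with 183 keys.
-- json_object_agg() requires 9.4+; random() is volatile, so the
-- subquery is re-evaluated for every outer row.
INSERT INTO jsonic (tag, j)
SELECT 'row' || g,
       (SELECT json_object_agg('some_key_name_' || k,
                               (random() * 1000)::numeric(10,4))
        FROM generate_series(1, 183) k)
FROM generate_series(1, 200000) g;

-- Copy the same documents into the JSONB table.
INSERT INTO jsonbish (tag, j)
SELECT tag, j::jsonb FROM jsonic;
```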

And, we see the effect:

postgres=# select pg_size_pretty(pg_total_relation_size('jsonic'));
 pg_size_pretty
----------------
 394 MB
(1 row)

postgres=# select pg_size_pretty(pg_total_relation_size('jsonbish'));
 pg_size_pretty
----------------
 1147 MB
(1 row)

So, pretty bad; JSONB is nearly three times the size of JSON (about 190% larger).
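One quick way to see where the bloat lives is to compare the average on-disk size of a single value in each table. pg_column_size() reports the size of the datum as stored, after any compression or out-of-line storage (the column name "j" is assumed here):

```sql
-- Average stored size per JSON value vs. per JSONB value.
SELECT avg(pg_column_size(j)) FROM jsonic;
SELECT avg(pg_column_size(j)) FROM jsonbish;
```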

I don't think having 183 top-level keys is all that unreasonable of a
use case.  Some folks will be migrating from Mongo, Redis or Couch to
PostgreSQL, and might have a whole denormalized schema in JSON.

BTW, I find this peculiar:

postgres=# select pg_size_pretty(pg_relation_size('jsonic'));
 pg_size_pretty
----------------
 383 MB
(1 row)

postgres=# select pg_size_pretty(pg_relation_size('jsonbish'));
 pg_size_pretty
----------------
 11 MB
(1 row)
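The likely explanation is that pg_relation_size() counts only the table's main fork, while pg_total_relation_size() also includes the TOAST table and indexes. The JSONB values are presumably failing to compress well enough to stay inline, so they get pushed out of line into TOAST, leaving a tiny 11 MB heap. A sketch to check (assuming the table name as above):

```sql
-- Size of the TOAST relation backing jsonbish; if the theory holds,
-- this is where the ~1.1 GB actually lives.
SELECT pg_size_pretty(pg_total_relation_size(reltoastrelid))
FROM pg_class
WHERE relname = 'jsonbish';
```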

Next up: Tom's patch and indexing!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

