On Thu, Aug 14, 2014 at 11:52 AM, Bruce Momjian <br...@momjian.us> wrote:
> On Thu, Aug 14, 2014 at 12:22:46PM -0400, Tom Lane wrote:
>> Bruce Momjian <br...@momjian.us> writes:
>> > Uh, can we get compression for actual documents, rather than duplicate
>> > strings?
>>
>> [ shrug... ] What's your proposed set of "actual documents"?
>> I don't think we have any corpus of JSON docs that are all large
>> enough to need compression.
>>
>> This gets back to the problem of what test case are we going to consider
>> while debating what solution to adopt.
>
> Uh, we just need one 12k JSON document from somewhere.  Clearly this
> is something we can easily get.
it's trivial to make a large json[b] document:

select length(to_json(array(select row(a.*) from pg_attribute a))::TEXT);
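For experimenting with compressibility outside the database, a similar test document can be built in any language by serializing structured records and running a general-purpose compressor over the result. The sketch below is a minimal illustration in Python (the field names are made up, loosely modeled on pg_attribute columns; they are not the actual catalog output): it produces a JSON document of roughly the 12k size discussed and compares its raw size against its zlib-compressed size.

```python
import json
import zlib

# Build a JSON document of roughly 12 kB by serializing a list of
# attribute-like records.  Field names here are illustrative only,
# not the real pg_attribute schema.
rows = [
    {"attrelid": i, "attname": f"col_{i}", "atttypid": 23,
     "attlen": 4, "attnum": i % 32, "attnotnull": i % 2 == 0}
    for i in range(120)
]
doc = json.dumps(rows)

raw_size = len(doc.encode("utf-8"))
compressed_size = len(zlib.compress(doc.encode("utf-8")))

# A document like this is highly repetitive (the same keys on every
# row), so a general-purpose compressor shrinks it substantially.
print(raw_size, compressed_size)
```

This is only a stand-in for the in-database case: PostgreSQL's TOAST compression (pglz at the time of this thread) behaves differently from zlib, so the numbers are indicative rather than directly comparable.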