On 09/12/2014 01:30 PM, Heikki Linnakangas wrote:
> 
> Performance was one argument for sure. It's not hard to come up with a
> case where the all-lengths approach is much slower: take a huge array
> with, say, a million elements, and fetch the last element in a tight loop.
> And do that in a PL/pgSQL function without storing the datum to disk, so
> that it doesn't get toasted. Not a very common thing to do in real life,
> although something like that might come up if you do a lot of json
> processing in PL/pgSQL. But storing offsets makes that faster.
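
(For concreteness, the scenario described above would look roughly like
the sketch below; the function name and the exact counts are just
illustrative, not taken from an actual benchmark:)

create or replace function jsonb_last_elem_bench(iters int) returns jsonb
language plpgsql as $$
declare
  j jsonb;
  x jsonb;
begin
  -- build a million-element array in a local variable, so the datum is
  -- never stored to disk and never gets toasted
  select jsonb_agg(i) into j from generate_series(1, 1000000) i;

  -- tight loop fetching the last element each iteration
  for n in 1 .. iters loop
    x := j -> 999999;
  end loop;
  return x;
end;
$$;

-- run under \timing, e.g.: select jsonb_last_elem_bench(10000);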

While I didn't post the results (because they were uninteresting), I did
specifically test the "last element" in a set of 200 elements for
all-lengths vs. original offsets for JSONB, and the results were not
statistically different.
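
(A minimal version of that kind of comparison looks something like the
following; the table and array here are only illustrative, not the actual
test data:)

create temp table t200 as
  select jsonb_agg(i) as j from generate_series(0, 199) i;

-- fetch the last element vs. the first, repeatedly, under \timing;
-- with all-lengths, reaching element 199 means walking the preceding
-- length words, while with offsets it is a direct jump
select j -> 199 from t200;
select j -> 0   from t200;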

I did not test against your patch; is there some reason why your patch
would be faster for the "last element" case than the original offsets
version?

If not, I think the corner case is so obscure as to be not worth
optimizing for.  I can't imagine that more than a tiny minority of our
users are going to have thousands of keys per datum.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

