Daniel Farina <drfar...@acm.org> writes:
> Generally I think the delimited untoasting of metadata from arrays
> separately from the payload is Not A Bad Idea.
I looked at this patch a bit. I agree that it could be a big win for large external arrays, but ...

1. As-is, it's a significant *pessimization* for small arrays, because the heap_tuple_untoast_attr_slice code does a palloc/copy even when one is not needed because the data is already not toasted. I think there needs to be a code path that avoids that.

2. Arrays that are large enough to be pushed out to toast storage are almost certainly going to get compressed first. The potential win in this case is very limited because heap_tuple_untoast_attr_slice will fetch and decompress the whole thing. Admittedly this is a limitation of the existing code and not a fault of the patch proper, but still, if you want to make something that's generically useful, you need to do something about that. Perhaps pglz_decompress() could be extended with an argument to say "decompress no more than this much" --- although that would mean adding another test to its inner loop, so we'd need to check for performance degradation. I'm also unsure how to predict how much compressed data needs to be read in to get at least N bytes of decompressed data.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers