> On Feb 20, 2019, at 10:37 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> 
> -1, I think this is blowing up the complexity of an already useful patch,
> even though there's no increase in complexity due to the patch proposed
> here.  I totally get wanting incremental decompression for jsonb, but I
> don't see why Paul should be held hostage for that.
> 
> Not sure I agree with your emotive language. Review comments != holding 
> hostages.
> 
> If we add one set of code now and need to add another different one later, we 
> will have 2 sets of code that do similar things.

So, the current state is: when asked for a datum slice, we can now decompress just 
the parts we need to produce that slice. This lets us speed up anything that knows 
in advance how big a slice it is going to want. So far the only callers I’ve found 
are left() and substr() for the start-at-front case.
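
To make that concrete, here is a minimal standalone sketch (not the patch itself, 
and using a toy run-length format rather than pglz, with made-up names): the 
decompressor stops as soon as it has produced the requested number of output 
bytes, instead of inflating the whole datum.

#include <stddef.h>
#include <string.h>

/*
 * Toy illustration only: input is pairs of (count, byte).  Decode at most
 * slice_len bytes of output, then stop -- there is no need to walk the rest
 * of the compressed stream.  The real patch does the analogous thing inside
 * pglz decompression; all names here are invented for the sketch.
 */
static size_t
rle_decompress_slice(const unsigned char *src, size_t srclen,
                     unsigned char *dst, size_t slice_len)
{
    size_t out = 0;

    for (size_t i = 0; i + 1 < srclen && out < slice_len; i += 2)
    {
        size_t        count = src[i];
        unsigned char byte = src[i + 1];

        if (count > slice_len - out)
            count = slice_len - out;        /* clamp at the slice boundary */

        memset(dst + out, byte, count);
        out += count;
    }
    return out;                             /* bytes actually produced */
}

That is why a start-at-front call like left() or substr() benefits: it knows how 
many bytes it wants before decompression starts, so it only pays for the prefix 
it asked for.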

What this does not support: any function that probably wants less-than-everything 
but doesn’t know in advance how big a slice to ask for. Stephen thinks I should 
put an iterator on decompression, which would be an interesting piece of work. 
Having looked at the json code a little, doing partial searches there would 
require a lot of rework that is above my pay grade, but if there were an iterator 
in place, at least that next stop would then be open.

Note that adding an iterator isn’t adding two ways to do the same thing, since 
the iterator would slot nicely underneath the existing slicing API, and just 
iterate to the requested slice size. So this is easily just “another step” 
along the train line to providing streaming access to compressed and TOASTed 
data.
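
For what it’s worth, here is the sort of shape I have in mind, again as a 
standalone sketch over the same toy format with invented names, not the real 
pglz code: the iterator hands back decompressed bytes in chunks, the existing 
slice behaviour just pulls chunks until it has the requested length, and a 
search-style caller could stop whenever it has seen enough.

#include <stddef.h>
#include <string.h>

/* Hypothetical iterator state for the toy (count, byte) format. */
typedef struct DecompressIterator
{
    const unsigned char *src;       /* compressed input */
    size_t        srclen;
    size_t        srcpos;           /* read position in src */
    size_t        run_left;         /* bytes left in the current run */
    unsigned char run_byte;         /* value of the current run */
} DecompressIterator;

/*
 * Fill buf with up to buflen decompressed bytes; return the number written,
 * or 0 once the stream is exhausted.
 */
static size_t
decompress_next(DecompressIterator *it, unsigned char *buf, size_t buflen)
{
    size_t out = 0;

    while (out < buflen)
    {
        if (it->run_left == 0)
        {
            if (it->srcpos + 1 >= it->srclen)
                break;                      /* no more input */
            it->run_left = it->src[it->srcpos];
            it->run_byte = it->src[it->srcpos + 1];
            it->srcpos += 2;
        }

        size_t n = (it->run_left < buflen - out) ? it->run_left
                                                 : buflen - out;

        memset(buf + out, it->run_byte, n);
        out += n;
        it->run_left -= n;
    }
    return out;
}

/*
 * The existing slice behaviour falls out of the iterator for free: pull
 * chunks until we have the requested slice (or the stream ends).
 */
static size_t
decompress_slice(const unsigned char *src, size_t srclen,
                 unsigned char *dst, size_t slice_len)
{
    DecompressIterator it = {src, srclen, 0, 0, 0};
    size_t out = 0;
    size_t n;

    while (out < slice_len &&
           (n = decompress_next(&it, dst + out, slice_len - out)) > 0)
        out += n;
    return out;
}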

I’d hate for the simple slice ability to get stuck behind the other work, since 
it (a) is useful and (b) exists. If you are concerned the iterator will
never get done, I can only offer my word that, since it seems important to 
multiple people on this list, I will do it. (Just not, maybe, very well :)

P.
